Pods and Containers

Chethana Dhavaji
4 min read · May 6, 2021

Consider two different web sites deployed in pods as follows:

The first web site is responsible for serving static files (content that doesn't change often), e.g., HTML files, images, JavaScript libraries, etc. This site is deployed in a single pod consisting of two containers. The first container runs the web server, and the second container is responsible for retrieving a copy of the static files from a central master server and then periodically checking for updates to those files.

The second web site is responsible for generating complex dynamic responses based on data stored in a database, e.g., an online shopping web site, a social media site, etc. This site is also deployed in a single pod consisting of two containers. The first container contains the web server, and the second container contains the database (or a replica of the database).

Interestingly, although these two scenarios sound very similar, the pod for the first web site would be considered a reasonable use of multiple containers in a pod, whilst the second would not be recommended and two separate pods should be used instead.

Based on the above two scenarios:

Summarize and discuss the similarities between the two pods.

Discuss the differences between the two pods.

Some issues you may wish to consider include (you can also discuss other issues you have identified):

Differences in the work required for the pod to be created and initialized.

Differences in the behavior and workloads of the web server containers.

Differences in the behavior and workloads of the non-web server containers.

How differences in these pods might impact replication/scaling out.

Conclude your discussion by explaining why the first approach is considered reasonable but the second is not recommended.

Similarities between the two pods
Both pods exhibit the qualities expected of a multi-container pod: a co-located, co-managed helper process supports the main function. The sidecar pattern is an example of this, in which the sidecar acts as a utility container performing auxiliary tasks such as shipping logs or running monitoring agents. Placing the two containers in one pod also simplifies communication between them; for instance, a shared volume can be used to pass data from one container to the other.

Both the static-site and dynamic-site scenarios share these qualities.
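
To make the first scenario concrete, a minimal sketch of such a pod is shown below. The image names (nginx and a hypothetical file-sync image), the volume name, and the sync arguments are assumptions made for illustration; the point is the shared emptyDir volume through which the sidecar hands the downloaded files to the web server.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-site
spec:
  volumes:
    # Shared scratch volume: the sidecar writes the files here, nginx reads them.
    - name: static-content
      emptyDir: {}
  containers:
    # Main container: serves the static files as-is.
    - name: web-server
      image: nginx:1.21
      ports:
        - containerPort: 80
      volumeMounts:
        - name: static-content
          mountPath: /usr/share/nginx/html
          readOnly: true
    # Sidecar: retrieves a copy of the static files from the central
    # master server and periodically checks for updates (hypothetical image).
    - name: content-sync
      image: example.com/file-sync:latest
      args: ["--source=https://master.example.com/static", "--interval=300s"]
      volumeMounts:
        - name: static-content
          mountPath: /content
```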

Differences in the work required for the pod to be created and initialized
The second scenario packages a complex dynamic web application and its database in the same pod, in separate containers. The static site is comparatively simple to bring up: the pod is ready once the web server has started and the sidecar has pulled an initial copy of the files. The dynamic site behaves like a multi-tier application with more than one logical layer, so creating and initializing the pod also involves starting the database (or restoring its replica) before the web server can produce useful responses.
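
For comparison, the second scenario's pod might look roughly like the sketch below (the application image, database image, and credentials are placeholders). Both containers must come up, and the database must finish initializing its data, before the pod can do useful work.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shop-site
spec:
  containers:
    # Main container: the dynamic application server (server-side code).
    - name: web-app
      image: example.com/shop-app:latest   # hypothetical application image
      ports:
        - containerPort: 8080
      env:
        # Containers in a pod share a network namespace, so the app
        # reaches the database on localhost.
        - name: DB_HOST
          value: "127.0.0.1"
    # Second container: the database (or a replica of the database).
    - name: database
      image: postgres:13
      env:
        - name: POSTGRES_PASSWORD
          value: "example-only"            # placeholder, not for real use
      ports:
        - containerPort: 5432
```
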
Differences in the behavior and workloads of the web server containers
The static site's web server simply serves its content as-is. The dynamic site's web server is far more functional: it runs server-side code and, for each user interaction, must query the database in the other container before it can build a response. The dynamic web server therefore carries a heavier and more variable workload than the static one.

Differences in the behavior and workloads of the non-web server containers
In the static scenario the non-web-server container retrieves a copy of the files and periodically checks for updates to them; under the sidecar pattern it is clearly the sidecar, a helper that exists only to support the web server. In the dynamic scenario the non-web-server container holds the database (or a replica of it), which is updated frequently and supplies the data the pages need; it therefore does far more work and is a primary component in its own right rather than a helper.

How differences in these pods might impact replication/scaling out
In the second scenario the pod contains both a front-end component (the dynamic web application) and a back-end component (the database), and these have different scaling requirements, so the front end and back end should be scaled separately. Because Kubernetes replicates whole pods, scaling this pod out creates new instances of both components at once. Databases are also not easy to scale horizontally, since every new copy must be kept consistent with the others, whereas stateless web servers can be replicated freely.
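
The impact is easiest to see with a Deployment. In the illustrative sketch below (names and images are assumptions), only the stateless web tier is scaled to three replicas, and the database is assumed to live in its own pod behind a Service; if the database container were part of the same pod template, replicas: 3 would also create three independent copies of the database that would then have to be kept consistent.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-web
spec:
  replicas: 3                  # the stateless web tier can be scaled freely
  selector:
    matchLabels:
      app: shop-web
  template:
    metadata:
      labels:
        app: shop-web
    spec:
      containers:
        - name: web-app
          image: example.com/shop-app:latest   # hypothetical application image
          ports:
            - containerPort: 8080
          env:
            # The database now runs in its own pod and is reached
            # through a Service instead of localhost.
            - name: DB_HOST
              value: "shop-db"                 # hypothetical Service name
```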

Conclusion
According to the 'one process per container' principle, giving each process its own container is the recommended approach. But a multi-tier application such as a web server backed by a database is distributed across more than one layer; in a microservice architecture, the dependencies between layers are reduced to a clean interface, which allows each layer to be deployed and scaled separately from the others. If the database and the web server sit in the same pod, they always run on the same cluster node. On a two-node cluster hosting only this one pod, the other node's CPU, memory, disk and bandwidth are wasted, so splitting the containers into two pods also improves hardware utilization. The static-site pod, by contrast, is a genuine main-container-plus-sidecar pairing in which the file-sync container exists only to support the web server. That is why the first approach is considered reasonable while the second is not recommended.

References

· Sidecar pattern https://kubernetes-csi.github.io/docs/sidecar-containers.html

· Multi-Container Pod Design Patterns in Kubernetes https://matthewpalmer.net/kubernetes-app-developer/articles/multi-container-pod-design-patterns.html

· Kubernetes in Action by Marko Lukša

Chapter 1. Introducing Kubernetes

Chapter 3. Pods: running containers in Kubernetes

· Sidecar container lifecycle changes in Kubernetes 1.18 https://banzaicloud.com/blog/k8s-sidecars/

· Relevance of service mesh architecture https://istio.io/latest/docs/concepts/what-is-istio/
