What are Containers and Kubernetes?

Eyal Estrin ☁️
6 min read · Jul 8, 2019

Containers are considered an evolution in the way we consume infrastructure and in the way we build new applications.

In fact, looking at the way new applications are developed these days, Containers are an underlying component of what we call “Cloud-Native applications”.

Looking into the past

Traditionally, data center infrastructure consisted of physical servers connected to storage and network equipment.

In the early 2000s the evolution of virtualization began, and organizations started to virtualize servers, installing operating systems and the entire application stack on top of a hypervisor, as you can see in the diagram below:

Thus, in time, we were able to better utilize the hardware, run multiple separate VMs in parallel, and even run a multitude of business applications side by side on the same platform.

With all of those benefits, we still had to install an entire operating system (guest OS) for each VM, which wasted expensive compute resources that could have been utilized otherwise.

Sometime around 2013 (not long after the Docker container technology was launched), we began to see more and more companies using Container technology: first for development and experimentation, and sometime later even for running production workloads.

Containers are different from VMs. Whereas a hypervisor requires us to launch a whole guest OS per VM, Containers share the same host OS, the same kernel and even the same libraries. We can still deploy multiple, separate (even though not 100% isolated) Container instances of different applications, leading to better utilization of the host OS resources and enabling us to scale as needed with much more ease and flexibility.

Here is a diagram of how Containers look in the infrastructure stack:

Containers are typically tens of MBs in size (compared to tens of GBs for VMs), and they usually take a few seconds to start (for Linux-based Containers).
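You can see this kernel sharing for yourself. Here is a minimal sketch using the Docker SDK for Python (pip install docker), assuming a local Docker daemon is running; the Alpine image is just an arbitrary small example:

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Containers share the host kernel: running `uname -r` inside an Alpine
# container prints the host's kernel release, not a separate guest kernel.
output = client.containers.run("alpine:3.10", "uname -r", remove=True)
print(output.decode().strip())
```

The container starts, runs a single command and is removed, typically within a couple of seconds.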

Trends keep shifting, and the technology that first made its way into tech startups is steadily becoming the de facto standard for developing modern applications.

Containers Pros and Cons

Containers fit into modern application development architectures such as microservices, which enable us to break a complex application into small pieces or components.

Using this method we are able to achieve the following capabilities:

· Scalability — the ability to add more Containers to a specific component according to load or demand

· Ease of deployment — the ability to deploy a new build of a specific component separately from the rest of the application components (there is no need to upgrade the entire application every time we develop a new feature); see the sketch after this list.

· Technology freedom — the flexibility to develop each component in a different language, since each one runs inside its own Container.
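As a rough illustration of deploying components independently, here is a sketch using the Docker SDK for Python; the two images and the container names are arbitrary examples, not a prescribed setup:

```python
import docker

client = docker.from_env()

# Two independent components of a hypothetical application:
cache = client.containers.run("redis:5", detach=True, name="demo-cache")
web = client.containers.run("nginx:1.16", detach=True, name="demo-web",
                            ports={"80/tcp": 8080})

# Roll out a new build of the web component only; the cache component
# keeps running untouched.
web.stop()
web.remove()
web = client.containers.run("nginx:1.17", detach=True, name="demo-web",
                            ports={"80/tcp": 8080})
```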

Other pros for using Containers:

· Fast boot time — a Linux container will usually load within a few seconds

· Multi-platform host OS — Containers can run on Linux (RHEL, CentOS, Ubuntu, SUSE, etc.), Windows (Windows 10, Windows Server 2016/2019) and even macOS

· Ease of portability — we can begin developing code inside Containers on our laptop, then port the same Docker image into production systems, public clouds (such as AWS, Azure, GCP), etc.; a build-and-push sketch follows this list

· Upgrade and patch management — to deploy security patches or code upgrades, we update the Container image and gracefully roll out new builds of the Containers
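To make the portability point concrete, here is a minimal build-and-push sketch with the Docker SDK for Python; the registry address is a placeholder, and a Dockerfile is assumed to sit in the current directory:

```python
import docker

client = docker.from_env()

# Build an image from the local Dockerfile, then push the exact same
# artifact to a registry so production (or a public cloud) pulls an
# identical image. "registry.example.com/myapp" is a placeholder.
image, build_logs = client.images.build(path=".", tag="registry.example.com/myapp:1.0")
client.images.push("registry.example.com/myapp", tag="1.0")
```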

Containers do have Cons (and here is a partial list):

· Persistency — in most cases, Containers are meant for stateless applications. Developers should be aware of this when re-architecting existing applications: if you want to persist data, it must be stored outside the Container (see the first sketch after this list).

· Monitoring — to trace and understand what is going on inside a cluster of Containers, we need to deploy third-party solutions and gather all logs into a central repository (see the second sketch after this list)

· Security — to support authentication and transport encryption between Containers, we need to deploy third-party solutions and fine-tune many settings; misconfiguration may lead to breaches.

· Open source libraries and container images — when using a Container image from a public repository (such as Docker Hub), we can never know what is included inside the image or when its open source libraries and dependencies were last updated
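To illustrate the persistency point, here is a minimal sketch with the Docker SDK for Python: the database files live in a named volume, outside the container's own writable layer, so they survive when the container is replaced (the postgres image and all names are arbitrary examples):

```python
import docker

client = docker.from_env()

# Data is written to the named volume "demo-pgdata", which lives outside
# the container and survives when the container is replaced or upgraded.
client.containers.run(
    "postgres:11",
    detach=True,
    name="demo-db",
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"demo-pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```

And for the monitoring point, a naive log-collection loop; a real deployment would ship these lines to a central repository instead of printing them:

```python
import docker

client = docker.from_env()

# Pull the last lines of stdout/stderr from every running container.
for container in client.containers.list():
    for line in container.logs(tail=5).splitlines():
        print(container.name, line.decode())
```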

So, where exactly does Kubernetes fit into this story?

When we need to run a large number of Containers (hundreds or even thousands), we need something to orchestrate the entire fleet: monitor which Containers are not responding, load new Containers to replace them, and stop directing traffic to the non-responding ones.

Kubernetes is usually in charge of tasks such as:

· Container deployment — tell Kubernetes where the image repository is and how many Containers to deploy (see the sketch after this list)

· Container and application stack scaling — tell Kubernetes in which scenarios to scale up or down (such as CPU load, scheduled intervals, etc.)

· Container availability — Kubernetes monitors the health of the Containers and knows when to load a new Container to replace a non-responding one

· Container load sharing — Kubernetes spreads the Containers across physical or virtual hosts
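Here is a hedged sketch of those tasks using the official Kubernetes Python client (pip install kubernetes); the deployment name, image and replica counts are arbitrary examples. It creates a three-replica Deployment with a liveness probe (so Kubernetes replaces non-responding Containers), then scales it to five replicas:

```python
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

# A container spec with a liveness probe: Kubernetes restarts any
# instance whose HTTP health check stops responding.
container = client.V1Container(
    name="web",
    image="nginx:1.16",
    ports=[client.V1ContainerPort(container_port=80)],
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/", port=80),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # how many Containers to deploy
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling the same component later: ask for five replicas instead of three.
apps.patch_namespaced_deployment_scale(
    name="web", namespace="default", body={"spec": {"replicas": 5}}
)
```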

Because deploying and maintaining Kubernetes clusters is a complex task, it is highly recommended to use the well-known managed Kubernetes services on the public clouds:

· Amazon EKS

· Google GKE

· Azure AKS

The benefits of using managed services:

· Fully managed — we don’t need to maintain the underlying infrastructure; Kubernetes used to be notoriously hard to set up properly.

· Patch management and upgrades — these are all part of the cloud vendor’s responsibility

· Monitoring the Kubernetes cluster’s health — this is part of the cloud vendor’s responsibility

· Cost — we pay for what we actually use and nothing else.

A few words about the future of Containers and Kubernetes

The Cloud Native Computing Foundation (CNCF) is working to standardize the Container runtime so Containers can run and scale on any Kubernetes system; one such effort is an open source project called CRI-O, a lightweight runtime that implements the Kubernetes Container Runtime Interface for OCI-compatible images.

Currently (July 2019), CRI-O is not yet supported by the public cloud vendors’ managed Kubernetes services, but eventually they are expected to support it.

In terms of multi-cloud, Google presented its Anthos platform, which gives customers a central place to manage Kubernetes clusters running on Google GKE, Amazon EKS, Azure AKS and even on premises (GKE On-Prem).

Google also recently presented its Anthos Migrate (V2K) service (soon to be released), which allows customers to migrate VMs from their on-premises VMware infrastructure into Kubernetes on Google Cloud.

Summary

To sum up, now is the perfect time for any infrastructure engineer, developer or DevOps practitioner to invest their time in Containers and Kubernetes.

It is possible to take an existing monolithic application and wrap it inside a Docker container. It is not always very cost effective, but it will allow you to begin porting existing applications into the cloud: create an image and run it at large scale.

In parallel, it is a good idea to learn how to build and wrap new applications inside Containers, run them locally on your desktop, and then migrate them from the on-premises environment to a public cloud.

About The Author

Eyal Estrin is a cloud and information security architect, the owner of the blog Security & Cloud 24/7, with more than 20 years in the IT industry.
You can connect with him on Twitter, LinkedIn and Instagram.
