Clear the deck for – Kubernetes!

Posted by: Rahul Sahasrabuddhe , on 4/13/2018, in Category Microsoft Azure
Views: 12877
Abstract: In this tutorial, we will learn about what containerization is, how it is rapidly changing the cloud-based deployment landscape and where Kubernetes fits in. We will also touch upon how various cloud providers like Azure and AWS are dealing with this change.

When you hear the term Kubernetes, is it all Greek to you?

Well, Kubernetes is indeed coined from a Greek word. In this article, although we will not learn the Greek language, we will learn about what containerization is, how it is rapidly changing the cloud-based deployment landscape and where Kubernetes fits in. We will also touch upon how various cloud providers are dealing with this change.


Kubernetes - Introduction

TL;DR: Kubernetes is containers + orchestration.



Confused? Ok; let’s get more technical and understand it better.

Kubernetes (also known as k8s) is an open source system (or platform) used to orchestrate application deployments based on Linux's container-based approach, across a variety of cloud offerings.

Quite a loaded definition isn’t it?

Kubernetes essentially allows you to manage an application's deployment lifecycle effectively: handling dependencies, packaging, automatically scaling up or down, rolling versions of a deployment in and out, managing high availability, and so on.

Ok so why is it called Kubernetes?

It literally means pilot or helmsman in Greek. So if you want to ship your app, you need the right pilot or captain – and that, my fellow passengers, is Kubernetes for you! If you look closely at the Kubernetes logo, it actually resembles a ship's wheel with spokes.


Figure 1: Kubernetes Logo

Kubernetes was created at Google by Joe Beda, Brendan Burns and Craig McLuckie, building on Google's internal Linux container-management system known as Borg. Google eventually made it an open source project and donated it to the Cloud Native Computing Foundation (CNCF).

Let’s look at what Kubernetes is and what it is not:


Kubernetes is..

1) It provides deployment and rollout configurations that let you specify how your apps should be deployed initially and updated later on, which also makes rollbacks easier to manage. These deployment configurations are called manifests and are written in YAML or JSON.

2) It supports automatic bin-packing. Kubernetes ensures that containers run optimally in terms of resource requirements such as memory and CPU. You can define minimum and maximum resource requirements for your containers, and Kubernetes will fit them onto the given infrastructure to optimize the workload, making the best use of the underlying infrastructure without compromising your application's performance.

3) It has built-in service discovery. Kubernetes exposes your containers to other containers across physical hardware boundaries (i.e. over the internet as well). This allows your app to consume other services, and its own services to be consumed by other apps. This is a truly microservices-based model in the making.

4) It provides auto-scaling: your apps can be scaled up when demand increases and scaled down when it drops, and this can be controlled at the configuration level.

5) Nodes are self-healing. Kubernetes owns the high-availability aspect: it restarts failed containers, reschedules containers when a node goes down or stops responding to user requests, and does not make restarted containers available to users until they are in a state fit to serve requests.

6) It has a built-in mechanism to manage secret information securely.

7) Storage orchestration allows Kubernetes to provide persistent storage out of the box. Under the hood it can connect to any cloud provider’s storage service.

8) Batch execution is available out of the box, which is useful for managing batch and CI workloads.
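To make the manifest idea from point #1 concrete, here is a minimal sketch of a Deployment manifest in YAML. The app name, image, and resource figures are hypothetical, chosen only for illustration; they also hint at points #2 (resource requests for bin-packing) and #5 (readiness gating):

```yaml
# deployment.yaml - a hypothetical app called "web-app"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                        # Kubernetes keeps 3 pods running (self-healing)
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app                 # label used by services/controllers to find the pods
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # illustrative image name
        resources:
          requests:                  # minimums used for bin-packing decisions
            cpu: "250m"
            memory: "128Mi"
          limits:                    # maximums the container may consume
            cpu: "500m"
            memory: "256Mi"
        readinessProbe:              # pod receives traffic only when this succeeds
          httpGet:
            path: /healthz
            port: 80
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to the cluster; rolling an update out (or back) then becomes a matter of changing the manifest and re-applying it.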

Kubernetes is not..

1) It is not just some open source science project. It is much more serious than that, as is evident from the fact that it was initially developed by Google and is now owned and maintained by the CNCF (Cloud Native Computing Foundation).

2) It is not limited to running your own applications. It can also run off-the-shelf “applications” like Spark, RabbitMQ, Hadoop, MongoDB, Redis and so on. Basically, if an app can run in a container, Kubernetes can manage it.

3) It does not own CI/CD aspects for your app. You will still need to own and take care of those.

4) Using Kubernetes does not absolve you from worrying about application architecture and multi-tenancy. It takes no responsibility for what is inside a container; Kubernetes only ensures that the infrastructure used by your apps is container-based. Kubernetes is not a silver bullet.

5) Lastly, Kubernetes is not some scary, complicated technology. Anyone who invests some time in understanding the concepts behind containerization can work with Kubernetes on any cloud platform confidently and efficiently.

In order to understand the value proposition Kubernetes brings, we first need to take a step back and understand how application deployment in general has evolved over the last few years, and why containerization has caught everyone's attention of late.

It's show time folks – an app release!

Whichever role you are in – developer, tester, project manager or product manager – the real acid test of all your day's (or sprint's) work is a successful deployment of your application.

You can pretty much compare this experience with a movie release.

After days and months of efforts of conceiving, scripting, acting, recording, editing and doing whatever it takes to make a movie, the anxiety of a movie release can be well felt by all of us who work hard in building software.

Not so many years ago, an application or product release used to be an elaborately-planned, once-in-three-to-six-months kind of phenomenon. Features and corresponding regressions were carefully verified by thorough testing across environments like Dev > Test > Staging > Prod.

Nowadays, however, companies sometimes release applications directly to production systems, every hour (or even more frequently). Hence Continuous Integration (CI) and Continuous Deployment (CD) have become critical aspects of product (application) development and deployment methodology.

So what has caused this change?

Following are some key factors that have a role to play in this shift of deployment strategy:

1) Move to cloud: These days, most applications are deployed to private or public clouds; on-premise deployment is now considered a special case. In fact, until containerization arrived on the scene, cloud-based deployment had been nearly synonymous with virtualized, hypervisor-based deployment: you would end up installing or deploying your applications on VMs allocated to you as part of a private/public/hybrid cloud infrastructure.

2) Time-to-market: Technology advances and WYSIWYG, JavaScript-based UIs for web and mobile have reduced app development time significantly. Additionally, fierce competition in the product space has pushed deadlines from months to weeks and from weeks to days; product releases have to be fast enough to maintain user stickiness. CI/CD has raised the bar to deploying apps as you develop them.

3) Microservices: This is an architecture-level change forced by points #1 and #2 together. Applications have evolved into loosely coupled, service- or function-based components that are mashed up to get the desired result. Applications no longer need to be tightly coupled, monolithic components, and to support that model, application deployment also has to be modular and flexible.

So application deployment has now become a fast-paced process that needs to run reliably and optimally, irrespective of the application development methodology.

Making a “K”ase for Kubernetes

Let’s look at current and past application deployment options with an analogy to understand containerization, and thereby Kubernetes.

Consider your application as an individual who wants to live in a specific community or area (i.e. the infrastructure in which the app gets deployed) and, while doing so, consumes resources like water, electricity and so on (i.e. the services the app uses, like storage, CPU, memory etc.).

The following figure depicts three living scenarios: a house/bungalow colony, an apartment complex, and finally a hotel.



Figure 2 : App Deployment Analogy

Here are some salient points:

1) This is used as an analogy to build a case and illustrate the point. Please do not stretch it too far!

2) In the context of living in a bungalow (on premise):

a. When you live in a bungalow, you get the desired freedom, but at the same time you are responsible for managing and maintaining various services yourself (electrical wiring within the bungalow, repairs to the water line etc.). From an IT perspective, this maps to OS upgrades, app upgrades and so on.

b. Each house may have its services built and used differently, so the plumbing layout of one house is not the same as another's. Regulatory authorities do not have much control over such a situation. This mirrors the challenges IT faces in managing various on-premise applications.

3) For the apartment complex (virtualization):

a. Some things are clubbed together as common services: the drainage and water lines are shared. However, some things still have to be taken care of by the owner (internal electrical wiring, plumbing or bath fittings). This is similar to a guest OS installed on a VM that runs on a hypervisor on the host OS.

b. The apartment owner would still incur some cost for maintaining various services.

c. Any internal change or update needed for an apartment has to be handled by accessing that apartment individually. Similarly, app updates are not really seamless in a virtualized environment.

4) For the hotel (containerization):

a. Even more services are clubbed together in this model. Consequently, you control only a few services in your hotel room (like the hot/cold water temperature control). This model allows the operator to optimize various parameters.

b. You are allowed to bring in only specific things as per hotel policies. So in a container-based ecosystem, the app can only have specific privileges.

c. You can rely on specific services that are available for common use. For example laundry service. So a containerized app can rely on external services and can reuse certain libraries and frameworks.

5) From bungalow to apartment to hotel, a trade-off is made: you worry less about managing services, but the individual (i.e. the app or app provider) gives up “control” over them. While individuals can do whatever they want in their own bungalow, the same certainly will not be acceptable in a hotel room. You know what I mean!

So from the perspective of an infrastructure service provider, bungalow to hotel is a great transition to have, since the resources are optimized across consumers and more control is gained ensuring a reliable service.

From the consumer perspective, it works out well too because the consumer does not have to worry about the nitty-gritties of managing services.

In a nutshell, containerization is the way to go when it comes to reliable, scalable and cost-effective application deployment.

Let’s now look at virtualization vs. containerization and understand the nuances better.

Virtualization vs. Containerization


Figure 3 : Virtualization vs. Containerization - how do they stack up?

Following are some key aspects:

1) The basic premise of virtualization is optimization of hardware, whereas containerization focuses on the application that needs to be hosted within the scope of a container. VMs abstract the hardware, whereas containers abstract services at the OS level.

2) In a virtualized setup, the hypervisor runs on one OS (and machine) and each VM runs its own guest OS. In containerization, containers share base OS services like the kernel, hardware and so on. This implies immediate savings on OS (and in some cases framework/package) licenses: the same OS license can serve multiple containers running multiple applications, and denser packing results in further hardware cost savings.

3) VMs usually take significant time to boot up, whereas containers can be brought up very quickly. Apps therefore start faster under containerization than in a virtualized environment, since the unit being handled is an app rather than a whole VM.

4) The unit of operation in virtualization, the VM, limits the scalability of infrastructure across private/public/hybrid clouds: to scale out, you have to bring up a whole new VM (this applies to hardware-based, horizontal scaling).

5) For very fast and very frequent deployments (which is a business requirement now), the containerized model is far more efficient, scalable and reliable than the virtualized model.

6) It can be argued that containers are less secure than VMs because they share the OS kernel (whereas a VM is well insulated, so to speak). But container solutions have addressed this by deploying signed containers and by scanning container contents before/during deployment.

Let’s Talk Greek – the Kubernetes Language

Following is a schematic representation of the core components of Kubernetes. It does not include ALL the components, but it will nonetheless give you a good idea of those involved.



Figure 4: Kubernetes Architecture Schematic

Kubernetes works on the basic principle of grouping or clustering the resources that are needed by applications to be deployed. The unit of work or measure is the application. So the application that we want to deploy is the focal point of all the operations or actions.

Kubernetes architecture consists of following key components:

1) Cluster: A group of nodes, which can be physical or virtual servers, that houses a Kubernetes setup. A cluster has a master node.

2) Node: A node (formerly called a minion) is assigned tasks by, and controlled by, the Master. It is the single unit of operation that handles containers: a worker machine that may be a VM or a physical machine, depending on the underlying cluster. Each node runs the following components:

a. Kubelet: Each node runs an agent called the “kubelet”. It watches the master for the pods assigned to its node, performs operations on the node, and continuously reports back to the Master. This is how the master knows the current status of containers (and hence apps) and can take corresponding action.

b. kube-proxy (or proxy): It handles various networking aspects for the node and plays a vital role in network load-balancing. It is used by Services.

c. A container runtime (such as the Docker runtime), which is the binding glue between the nodes and the Master when it comes to executing tasks.

d. An elaborate logging mechanism to record all activities, typically via tools like Fluentd, Stackdriver or even Elasticsearch.

3) Pod: A pod is simply a group of containers deployed to a single node. Pods provide the abstraction that lets containers work across nodes, and they play an important role in scaling: you scale components of an app up or down by adding or removing pods. Pods are created and destroyed by replication controllers based on the template provided to them, and they carry labels: pod-specific key-value pairs.

4) Master: This is the controller that holds together all the services of Kubernetes and hence controls all activities of the nodes in a cluster. It is also called the Control Plane of Kubernetes. The master is hosted on one of the nodes and has the following sub-components:

a. API Server (kube-apiserver): It provides access to various RESTful APIs that handle various aspects like manipulation of states of objects and persisting their state in data store. The API server allows the UI and CLI to access the APIs through the API layer shown in Figure 4.

b. Controller Manager (kube-controller-manager): It manages various controllers like node controller, endpoint controller, replication controller and token controllers.

c. Scheduler (kube-scheduler): It watches for newly created pods and assigns them to nodes.

d. Data Store (etcd): It serves as the single source of truth for all data used by the various Kubernetes components. The data is stored as key-value pairs.

5) Service: This is an abstraction on top of pods that provides a “virtual” IP through which pods are reached. It defines a logical set of pods and the policy for accessing them; services often map one-to-one to microservices.

6) kubectl: This is the command-line tool used to run various operations against the cluster through the API server.

7) Volumes: These are directories that carry data used by pods for their operations. A volume abstracts the underlying data and data source being accessed; for example, an Azure file or Azure directory can be accessed through a volume.

8) Deployment: A declarative way of managing app deployments, used by the deployment controller.

9) Secrets: A secret object is used to hold sensitive information like passwords, OAuth tokens etc.
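To tie a few of these components together, here is a hedged sketch of a Service that selects pods by label, alongside a Secret; all names and values below are made up for illustration:

```yaml
# service.yaml - fronts all pods labelled app=web-app with one virtual IP
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  selector:
    app: web-app          # the logical set of pods this service exposes
  ports:
  - port: 80              # port exposed on the service's virtual IP
    targetPort: 8080      # port the containers actually listen on
---
# secret.yaml - sensitive values held by the cluster, not baked into images
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:               # plain-text convenience field; stored base64-encoded
  username: admin         # illustrative values only
  password: s3cr3t
```

A pod can then consume `db-credentials` as environment variables or as a mounted volume, while other apps reach the pods only through `web-app-svc`, never through individual pod IPs.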

The list can go on forever and will have more updates after every release of Kubernetes. This link has a detailed glossary of terms used in context of Kubernetes.

There is a Kubernetes for that!

Since Kubernetes is based on the containerization principle, it is not tied to any specific IT infrastructure model or cloud provider. That is why it is termed a “platform” that can run on various configurations.

The following link gives a complete run-down of all possible scenarios and corresponding Kubernetes offerings. Please do go through this link to understand the versatility of the platform.

Having gone through what all is on offer, let us now focus only on specific key cloud platforms (not necessarily in any order of popularity or preference – before you begin to take sides!)

All the major cloud providers already offer a container service (CaaS, or Containers as a Service), and they have now come up with Kubernetes-based container services (i.e. managed Kubernetes services, wherein some basic aspects of Kubernetes are owned/managed by the underlying cloud provider).

Following is a short summary:


Now that we know so much about Kubernetes, it would be interesting to know about real products that are deployed using Kubernetes. Head over to the case studies page and you will find a lot of real-world products using Kubernetes.

And an interesting fact for game lovers – the application logic for Pokemon Go runs on GKE and the auto-scaling abilities of Kubernetes were put to real test (and use) when the popularity of the game sky-rocketed in certain timeframes.

The Competition

Loving Kubernetes? Well, let’s see what the competition looks like.

Kubernetes usually gets compared with Docker. However, the key difference is that Docker is a way of defining and running containers for an application, whereas Kubernetes is a container orchestration engine (COE) or platform. You create a Docker image of your application and then use Kubernetes to deploy it onto one of the cloud platforms; a Kubernetes pod would contain a Docker image of an app.

So it is not Kubernetes vs. Docker; it is Kubernetes and Docker.
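As a small sketch of that “Kubernetes and Docker” relationship: the Docker image is built and pushed once, and the pod manifest simply references it (the registry and image name below are hypothetical):

```yaml
# pod.yaml - a pod wrapping a Docker image built earlier with `docker build`
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: my-app
    image: registry.example.com/my-app:1.0   # the Docker image Kubernetes orchestrates
    ports:
    - containerPort: 8080                    # port the containerized app listens on
```

Docker defines what runs inside the container; Kubernetes decides where and how many copies of it run.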

Docker has come up with a service called Docker Swarm, which is essentially comparable to Kubernetes in the functionality it offers.

Apache Mesos is another name that usually comes up when comparing COEs. It underpins DC/OS (the Datacenter Operating System), and Marathon is the COE based on Mesos.

Following is a short and sweet comparison of all three offerings:



* Dates might vary based on sources.


Containerization, rather than virtualization, is where the application deployment world has been focusing of late, and various cloud providers are giving this shift due attention as well.

As far as container-based application deployment space is concerned, Kubernetes has made quite a splash and is following that up with good action in terms of rolling out new features and versions.

Does it all make virtualization irrelevant? Will Kubernetes be the clear winner?

Well, time will tell.

You must have realized that Kubernetes is no child’s play! However, here is an excellent video tutorial that pretty much discusses all the concepts we talked about in a much simpler kindergarten story-telling way. Have a look.

Φιλοσοφία Βίου Κυβερνήτης – ok that’s actually in Greek and it means – philosophy (or wisdom) is the governor of one’s life!

This article was technically reviewed by Subodh Sohoni and Abhijit Zanak.

Was this article worth reading? Share it with fellow developers too. Thanks!
Rahul Sahasrabuddhe has been working on Microsoft technologies for the last 17 years and leads the Microsoft Technology Practice at a leading software company in Pune. He has been instrumental in setting up competencies around Azure, SharePoint and various other Microsoft technologies. An avid reader, he likes to keep himself abreast of cutting-edge technology changes and advances.






