As the adoption of containerization and orchestration technologies continues to grow, understanding how to leverage platforms like Microsoft’s Azure Kubernetes Service (AKS) is becoming increasingly important for developers. Microsoft Azure offers a variety of options for running containers and Kubernetes, including Azure Kubernetes Service (AKS), Azure Container Instances (ACI), and Azure Container Apps (ACA). In this article, we will focus on AKS, a favourite among developers for running Kubernetes workloads in Azure.
AKS offers a managed environment for deploying, managing, and scaling applications using Kubernetes, an open-source container orchestration system. This article provides a deep dive into AKS, focusing on key aspects such as cluster creation, scaling, connecting to other APIs, and managing updates.
If you are new to Kubernetes, start with this must-read primer: Understanding Kubernetes: A Developer's Guide to Containerized Applications.
Understanding Azure Kubernetes Service (AKS)
AKS is designed to simplify the complexities of Kubernetes management. It absolves users from the need to manage the Control Plane or API Server, allowing them to focus on deploying applications, scaling, and managing cloud infrastructure. While the control plane is managed by Azure, users are responsible for the upkeep of worker nodes and other aspects such as scaling workloads, monitoring, and observability.
Note: A new service named Azure Container Apps (ACA), introduced at Microsoft Build 2022, offers serverless Kubernetes. It has significant differences from AKS. Those planning to use ACA should thoroughly learn about its nuances.
Creating an AKS Cluster Manually
Even in a world leaning towards automation, it’s crucial to understand the manual process of creating an AKS cluster. Here’s how to do it:
1. Log in to the Azure portal.
2. Search for ‘Azure Kubernetes Services’. Select ‘Kubernetes services’.
3. Click on ‘Create’ and choose ‘Create a Kubernetes cluster’.
4. Fill in the details for your Kubernetes cluster, including the cluster name and your Azure resource group.
5. Under ‘Primary node pool’, choose your desired Virtual Machine (VM) size for your Kubernetes worker nodes, the number of nodes, and if you wish to autoscale.
Autoscaling, while powerful, does come at a cost due to the provisioning of additional VMs.
6. After selecting your options, click the ‘Review + create’ button to create your AKS cluster.
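For those who prefer the command line, the same cluster can be provisioned with the Azure CLI. A minimal sketch follows; the resource group, cluster name, region, and VM size are placeholders to adjust for your environment:

```shell
# Create a resource group to hold the cluster (name and region are placeholders)
az group create --name myResourceGroup --location westus

# Create a 3-node AKS cluster; --generate-ssh-keys creates SSH keys if none exist
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --node-vm-size Standard_D2_v2 \
  --generate-ssh-keys

# Merge the cluster credentials into your kubeconfig so kubectl can connect
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```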
Automating AKS Cluster Creation with Terraform
For production-level scenarios, automating AKS cluster creation is essential for ensuring repeatability and standardization. Terraform can be utilized for this purpose. For this demo, we’ll use Terraform locally, although for a production system, CI/CD pipelines are the way to go.
We’ll use two files: main.tf and variables.tf.

The main.tf file serves as the primary hub for your module’s configuration. You have the flexibility to create additional configuration files and arrange them in a way that aligns with your project’s needs.

On the other hand, variables.tf is where you define the variables for your module. When others use your module, these variables can be set as arguments within the module block. Terraform requires all values to be defined, so any variables without default values will necessitate user-provided arguments. However, even for variables with default values, users can provide their own values as module arguments, which would override the defaults.
A typical Terraform configuration for creating an AKS cluster includes:
1. Declaring the Azure provider:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}
2. Specifying the AKS cluster resource block:
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks1"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks1"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}
3. Defining variables for cluster parameters:
variable "name" {
  type    = string
  default = "akszone01"
}

variable "group_name" {
  type    = string
  default = "prodgroupservice"
}

variable "location" {
  type    = string
  default = "westus"
}

variable "node_count" {
  type    = number
  default = 5
}
Please remember to replace the placeholders with your actual Azure subscription and other necessary details. Also, note that Terraform uses a specific syntax and structure, so any changes to variable names or values must also be reflected in the rest of your configuration.
Storing the main.tf and variables.tf configuration files together in one directory forms a Terraform module, which can be used to set up an AKS cluster. Doing it this way makes the module versatile and can be employed in a variety of environments.
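As a sketch, consuming such a module from a root configuration might look like the following; the ./modules/aks path and the argument values are assumptions for illustration:

```
module "aks" {
  source = "./modules/aks"

  name       = "akszone01"
  group_name = "prodgroupservice"
  location   = "westus"
  node_count = 3
}
```

Run terraform init to download the provider and resolve the module, then terraform plan and terraform apply as usual.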
Scaling AKS Clusters: Kubernetes Cluster Autoscaler
AKS clusters can be scaled using the Kubernetes Cluster Autoscaler. The autoscaler adjusts the number of nodes in a cluster, increasing or decreasing the count based on workload requirements. This feature is useful when the load on your cluster fluctuates, requiring more or fewer resources. The autoscaler ensures that your cluster is operating efficiently, optimizing resource usage, and reducing costs.
To illustrate, consider an online retail application. During a sale event, the demand on this application may spike significantly. With the Cluster Autoscaler, the AKS cluster can automatically scale up, adding more nodes to handle the increased load. Once the event is over, the autoscaler can scale down, removing unnecessary nodes.
The Cluster Autoscaler makes decisions based on the load of the worker nodes, which are Azure VMs running in the background. It is typically deployed to the Kubernetes cluster via the cluster-autoscaler container image.
To scale an AKS cluster, you can follow these steps:
- Log in to the Azure portal and navigate to the AKS service.
- Go to Settings and select Node pools.
- Click on the three dots on the right hand side and choose the ‘Scale node pool’ option.
- Choose to either automatically scale the node pool or manually scale it. If manually scaling, specify how many nodes you want to make available.
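The same operations are available from the Azure CLI. A sketch of both the manual and the autoscaled path (cluster, group, and pool names are placeholders):

```shell
# Manually scale a node pool to 5 nodes
az aks nodepool scale \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --node-count 5

# Or enable the cluster autoscaler with lower and upper bounds
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```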
From an automation standpoint, you can perform the same scaling operation using Terraform. By setting the enable_auto_scaling parameter to true in the azurerm_kubernetes_cluster_node_pool resource, you can enable autoscaling for your AKS cluster:
resource "azurerm_kubernetes_cluster_node_pool" "example" {
  name                  = "internal"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  enable_auto_scaling   = true
  min_count             = 1
  max_count             = 5

  tags = {
    Environment = "Production"
  }
}
While autoscaling is a powerful feature, it’s important to remember that it comes with a cost. Each time your cluster scales out, additional VMs are provisioned, and these VMs incur charges. Therefore, it’s crucial to understand your application’s resource needs and set appropriate limits on the number of worker nodes.
Azure Virtual Kubelet: Connecting Kubernetes to Other APIs
Another interesting aspect of Azure Kubernetes Service is the Virtual Kubelet. This is not AKS-specific but is a valuable feature within the AKS offering.
Virtual Kubelet allows you to connect Kubernetes to other APIs. In simpler terms, a kubelet is a node agent that runs on each node in a Kubernetes cluster. This agent is responsible for registering the node with the Kubernetes control plane. The Azure Virtual Kubelet goes a step further, enabling the registration of serverless container platforms.
In Azure, one such platform is Azure Container Instances (ACI). ACI allows you to run containers without having to manage a Kubernetes cluster. This is particularly useful when you need to scale your applications but want to avoid the overhead of managing a large AKS cluster.
Here’s where the concept of ACI bursting comes into play, using the Virtual Kubelet. Instead of scaling the Kubernetes cluster by adding more worker nodes, you can offload some of your workloads to ACI. When Kubernetes needs to schedule a Deployment, Pod, or other workloads, it can choose to run them on ACI rather than on your local Kubernetes cluster.
However, with the recent general availability of Azure Container Apps (ACA), which provides a serverless Kubernetes experience, the use of Virtual Kubelet for ACI bursting may decrease.
Despite this, the concept of ACI bursting remains a powerful strategy to manage fluctuating workloads without having to scale a full AKS cluster. It provides a cost-effective solution, as you only pay for the additional compute resources when they’re being used. The benefit of this strategy is increased efficiency and cost savings, particularly for workloads that experience significant changes in demand.
To implement ACI bursting, you need to install the Virtual Kubelet in your AKS cluster and then create a Kubernetes deployment that targets the Virtual Kubelet. This deployment is then run on ACI instead of your AKS cluster. It’s worth noting that while ACI bursting can be a cost-effective solution, it may not be suitable for all use cases. For example, applications that require persistent storage or specific networking configurations may not be compatible with ACI.
Here are the steps to implement Virtual Kubelet with ACI:
1. Install the Virtual Kubelet in your AKS cluster: You can use the Azure CLI or Helm to install the Virtual Kubelet. The Virtual Kubelet is installed as a Helm chart, which is a pre-packaged Kubernetes deployment.
2. Configure the Virtual Kubelet: You need to provide the Virtual Kubelet with the necessary configuration to connect to ACI. This includes the Azure subscription ID, resource group, and region.
3. Create a Kubernetes deployment that targets the Virtual Kubelet: You can use a Kubernetes deployment manifest file to specify that the deployment should be run on the Virtual Kubelet. The key is to use the correct nodeSelector in your deployment manifest.
4. Monitor the deployment: Once your deployment is running, you can use the Kubernetes command-line tool (kubectl) to monitor the status of your pods. Pods running on ACI will be marked as running on the Virtual Kubelet.
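Step 3 above hinges on the right nodeSelector and tolerations. A hypothetical manifest, applied via kubectl, might look like the following; the node labels and toleration key are the ones conventionally registered by Virtual Kubelet providers, and the image name is only illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-burst-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: aci-burst-demo
  template:
    metadata:
      labels:
        app: aci-burst-demo
    spec:
      containers:
      - name: web
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
      # Schedule onto the virtual node rather than a regular VM-backed node
      nodeSelector:
        kubernetes.io/role: agent
        type: virtual-kubelet
      # Virtual nodes are tainted; tolerate the provider taint to land there
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
EOF

# Pods scheduled to ACI will show the virtual node in the NODE column
kubectl get pods -o wide
```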
Updating Your Azure Kubernetes Service
Azure Kubernetes Service (AKS) offers a suite of update management features to ensure your Kubernetes clusters remain current and secure. This section walks you through the ins and outs of managing these updates.
Understanding Kubernetes Versions
Kubernetes adheres to the Semantic Versioning scheme, featuring major, minor, and patch versions. For instance, in version 1.17.7:
- 1 is the major version.
- 17 is the minor version.
- 7 is the patch version.
Major versions change with incompatible API updates or potential backwards compatibility disruptions. Minor versions alter with backwards-compatible functionality updates. Patch versions adjust when backwards-compatible bug fixes are made. For example, if 1.17.7 is your current version, then 1.17.8 is the latest patch version available for the 1.17 series. You should upgrade to 1.17.8 promptly to keep your cluster fully patched and supported.
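The comparison rules above are mechanical enough to sketch in a few lines of Python. parse_version and is_newer_patch below are toy helpers for illustration, not part of any Azure SDK:

```python
def parse_version(v):
    """Split a Kubernetes version string like '1.17.7' into (major, minor, patch)."""
    major, minor, patch = (int(part) for part in v.split("."))
    return (major, minor, patch)

def is_newer_patch(current, candidate):
    """True when candidate is a later patch release of the same major.minor series."""
    cur, cand = parse_version(current), parse_version(candidate)
    return cur[:2] == cand[:2] and cand[2] > cur[2]

print(is_newer_patch("1.17.7", "1.17.8"))  # True: a later patch in the 1.17 series
print(is_newer_patch("1.17.7", "1.18.0"))  # False: a minor upgrade, not a patch
```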
The Kubernetes community issues minor versions approximately every three months. They recently extended the support window for each version from nine months to a year, starting with version 1.19. Minor versions incorporate new features and improvements, while patch releases, which can be weekly, address critical bug fixes.
Using AKS for Cluster Creation and Upgrades
AKS allows cluster creation without specifying the exact patch version. If you don’t designate a patch when creating a cluster, it will run the latest GA patch of the minor version. For example, if you form a cluster with 1.21, it will run 1.21.7, the latest GA patch version of 1.21. To find out your patch version, use the command az aks show --resource-group myResourceGroup --name myAKSCluster. The property currentKubernetesVersion shows the full Kubernetes version.
When upgrading by alias minor version, only a higher minor version is supported. For instance, upgrading from 1.14.x to 1.14 won’t trigger an upgrade to the latest GA 1.14 patch, but upgrading to 1.15 will trigger an upgrade to the latest GA 1.15 patch.
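On the CLI, the check-then-upgrade workflow looks roughly like this; the version number is illustrative, and az aks get-upgrades lists what your cluster can actually move to:

```shell
# List the Kubernetes versions this cluster can upgrade to
az aks get-upgrades \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --output table

# Upgrade the control plane and node pools to a specific version
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.15.0
```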
AKS Node Image Updates
AKS regularly releases new node images. Upgrading your node images frequently allows you to access the latest AKS features. Linux node images update weekly, and Windows node images monthly. AKS includes image upgrade announcements in its release notes, and it may take up to a week for these updates to be distributed across all regions. You can also automate and schedule node image upgrades using planned maintenance.
AKS Kubernetes Version Support
AKS provides 12 months of support for a generally available (GA) Kubernetes version. A GA version in AKS refers to a version available in all regions and incorporated into all SLO or SLA measurements. At any given time, AKS supports:
- The latest GA minor version released in AKS (referred to as N).
- The two preceding minor versions. Each supported minor version also supports a maximum of two stable patches.
AKS also has a platform support policy for certain unsupported Kubernetes versions. During platform support, Microsoft only assists with AKS/Azure platform-related issues; Kubernetes functionality and component issues are not supported. The platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster falls to n-4. For example, when v1.28 is the latest GA version, Kubernetes v1.25 falls under platform support. However, during the v1.29 GA release, v1.25 will be auto-upgraded to v1.26.
When AKS introduces a new minor version, the oldest supported minor version and patch releases are deprecated and removed. For example, when AKS releases 1.18, all the 1.15 versions are phased out 30 days later. If you’re operating an unsupported Kubernetes version, you’ll be prompted to upgrade when seeking support for the cluster. Clusters running unsupported Kubernetes versions do not fall under AKS support policies.
AKS supports a maximum of two patch releases of a given minor version. For instance, if the current supported versions are 1.17.8, 1.17.7, 1.16.10, 1.16.9, and AKS issues 1.17.9 and 1.16.11, the oldest patch versions (1.17.7 and 1.16.9) are deprecated and removed.
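This two-patch window can be modelled in a few lines of Python. supported_after_release below is a simplified toy model of the policy, not an AKS API:

```python
def supported_after_release(supported, new_patches):
    """Model AKS's two-patches-per-minor policy: after new patch releases
    arrive, keep only the two newest patches of each minor series."""
    numeric = lambda v: tuple(int(p) for p in v.split("."))
    versions = sorted(set(supported) | set(new_patches), key=numeric)
    by_minor = {}
    for v in versions:
        major, minor, _ = v.split(".")
        by_minor.setdefault((major, minor), []).append(v)
    # patches are already sorted oldest-to-newest; keep the last two per minor
    kept = [v for patches in by_minor.values() for v in patches[-2:]]
    return sorted(kept, key=numeric)

current = ["1.17.8", "1.17.7", "1.16.10", "1.16.9"]
print(supported_after_release(current, ["1.17.9", "1.16.11"]))
# 1.17.7 and 1.16.9 drop out, matching the example above
```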
Future Kubernetes Releases in AKS
As of May 2023, the latest GA version in AKS is 1.27.0, and the most recent upstream Kubernetes release is 1.28.0-alpha.34. AKS employs gradual region deployment for safe deployment practices. Therefore, a new release or version may take up to 10 business days to become available in all regions.
Here’s a snapshot of the upcoming Kubernetes versions and their estimated GA dates in AKS:
- 1.26: GA in AKS in April 2023
- 1.27: GA in AKS in June 2023
- 1.28: Estimated GA in AKS in August 2023
Conclusion:
Utilizing Azure Kubernetes Service (AKS) effectively requires a solid understanding of various features and functionalities it offers.
From manually creating a cluster to automating the process using Terraform, AKS provides diverse options for managing and deploying your applications. The ability to scale AKS clusters using Kubernetes Cluster Autoscaler or Azure Virtual Kubelet can be quite useful in optimizing resource usage and managing costs. Furthermore, staying current with Kubernetes versions and AKS updates is crucial to maintain the security and performance of your Kubernetes clusters.
This guide provided the necessary foundation for harnessing the power of AKS, but remember, real proficiency comes with hands-on experience and continuous learning!
This article has been editorially reviewed by Suprotim Agarwal.
Brian Martel, an experienced Azure and DevOps developer, has spent the last decade mastering technologies such as Kubernetes, Docker, Ansible, and Terraform. Armed with a Bachelor's degree in Computer Science and certifications like Cloud DevOps Engineer Expert (AWS and Azure) and Certified Kubernetes Administrator (CKA), Brian has a proven track record of guiding organizations through successful transitions to cloud-based infrastructures and implementing efficient DevOps pipelines.
He generously shares his wealth of knowledge as a mentor and an active participant in the developer community, contributing articles, speaking at user groups, and engaging with others on social media. All the while, Brian remains dedicated to staying current with the latest trends in his field.