Creating a cross-zone Azure Kubernetes Service (AKS) cluster

Overview: Azure Kubernetes Service (AKS) Cluster

Microsoft's Azure Kubernetes Service (AKS) is one of the better-known names in the world of cloud-based Kubernetes service providers. At the time of writing, AKS is available in more than 50 regions, 11 of which offer multiple 'Availability Zones': physically separate data centers within the same region. Kubernetes cluster nodes placed in different availability zones are therefore geographically set apart, so in the event of a power outage, or any other form of failure in one of the data centers, a cross-zone cluster remains live with negligible downtime.

Creating a cross-zone Azure Kubernetes Service (AKS) cluster

Multi-zone clusters are the next step in achieving high availability in the face of such failures, rare as they are. Here are some of the benefits of using one:

  • With pod-replication practices in place, a node failure causes minimal downtime: your service stays live, served from a node outside the failure zone (see the sketch after this list).
  • A node failure always carries a risk of data loss for stateful applications, databases, and so on. This can be mitigated if your application supports data replication. Container Attached Storage (CAS) solutions like OpenEBS keep live replica containers spread across zones for your high-availability storage needs.
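To make the first point concrete, here is a minimal sketch of spreading three replicas across zones with a topologySpreadConstraints stanza, applied via a heredoc piped to kubectl. The deployment name and image are placeholders, not part of the steps below:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-spread-demo        # hypothetical name, for illustration only
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zone-spread-demo
  template:
    metadata:
      labels:
        app: zone-spread-demo
    spec:
      # Ask the scheduler to spread the replicas evenly across zones,
      # keyed on the standard zone label carried by every node.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: zone-spread-demo
      containers:
      - name: web
        image: nginx:1.19
EOF

With one replica per zone, a single zone failure takes down at most one of the three replicas.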

These steps are best done through the console at portal.azure.com. Alternatively, they can also be performed through shell.azure.com.

You can check whether the preview feature required for placing nodes in separate zones is enabled by executing the following command:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AvailabilityZonePreview')].{Name:name,State:properties.state}"

It should say ‘Registered.’

If it says ‘NotRegistered,’ execute the following commands to enable the feature:

az extension add --name aks-preview
az extension update --name aks-preview
az feature register --name AvailabilityZonePreview --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.ContainerService

To verify, execute:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AvailabilityZonePreview')].{Name:name,State:properties.state}"
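Registration can take several minutes. If you would rather wait on it from the shell, a small polling loop like this (an optional convenience, not one of the original steps) does the job; once the state flips, it is worth re-running az provider register to propagate the change:

while [ "$(az feature show --name AvailabilityZonePreview \
    --namespace Microsoft.ContainerService \
    --query properties.state -o tsv)" != "Registered" ]; do
  sleep 30
done
az provider register --namespace Microsoft.ContainerService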

All Azure resources are grouped together into ‘Resource Groups.’ So, the first step towards creating our cluster is to create a Resource Group. AKS clusters can currently be created using availability zones in the following regions:

Australia East (australiaeast)
Central US (centralus)
East US 2 (eastus2)
East US (eastus)
France Central (francecentral)
Japan East (japaneast)
North Europe (northeurope)
Southeast Asia (southeastasia)
UK South (uksouth)
West Europe (westeurope)
West US 2 (westus2)

Let’s go with the ‘East US’ region.

az group create --name myResourceGroup --location eastus
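If you want to confirm the group exists before moving on (optional), az group show echoes it back:

az group show --name myResourceGroup -o table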

Next, we’ll create the cluster with the az aks create command. To use availability zones, the cluster’s load balancer must be of SKU type ‘Standard.’ We are using Standard_B2ms sized VMs; you can check which VM sizes are available in your region as shown below.
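For instance, one way to list the VM sizes offered in a region is az vm list-sizes (shown here for eastus):

az vm list-sizes --location eastus --output table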

You can get the list of available Kubernetes versions using the following command. Substitute the region name for <region-name>.

az aks get-versions --output table --location <region-name>

Then create the cluster, picking one of the returned versions (1.18.6 below is simply what was current at the time of writing):

az aks create \
	--resource-group myResourceGroup \
	--name myAKSCluster \
	--kubernetes-version 1.18.6 \
	--load-balancer-sku standard \
	--node-count 3 \
	--node-zones 1 2 3 \
	--node-vm-size Standard_B2ms \
	--node-osdisk-size 40 \
	--vm-set-type VirtualMachineScaleSets

Note: newer releases of the Azure CLI renamed the preview --node-zones flag to --zones; if the command above complains about an unrecognized argument, try --zones 1 2 3 instead.

Execute the az aks get-credentials command to get the kubeconfig for the cluster.

az aks get-credentials --name myAKSCluster --resource-group myResourceGroup

After doing that, we should be able to access the cluster.

Run kubectl get nodes to check. The custom-columns output below pulls each node’s region and zone from its topology labels:

c8542d96-6801-47f2-bbfb-8c8116c9@Azure:~$ kubectl get nodes -o custom-columns=NAME:'{.metadata.name}',REGION:'{.metadata.labels.topology\.kubernetes\.io/region}',ZONE:'{.metadata.labels.topology\.kubernetes\.io/zone}'
NAME                                REGION   ZONE
aks-nodepool1-12791571-vmss000000   eastus   eastus-1
aks-nodepool1-12791571-vmss000001   eastus   eastus-2
aks-nodepool1-12791571-vmss000002   eastus   eastus-3
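If you deployed the zone-spread sketch from earlier, you can also confirm that its replicas landed on nodes in different zones (the label selector matches that hypothetical deployment, and -o wide shows the node each pod runs on):

kubectl get pods -l app=zone-spread-demo -o wide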

Cross-zone HA is the first thing you’ll need in your quest for redundancy. Microsoft’s managed Kubernetes offering has come a long way, and a handful of shell commands is all it takes to set up a cross-zone cluster.
