Achieving cross-zone HA in GKE

Why a multi-zone cluster?

Each GCP region has multiple zones, and when we set up a GKE cluster, by default all the nodes are deployed in a single zone. Though rare, the data centres of a zone may face an outage, which can cause a business disruption lasting from a few minutes to several hours. To increase resiliency, we can spread the nodes of a cluster across different zones of a region. To learn how to set up a multi-zone cluster, you can follow the Google Cloud Docs.
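
For example, a regional (multi-zone) cluster can be created from the command line. This is a minimal sketch; the cluster name, region, and zones are placeholders:

gcloud container clusters create demo-cluster \
  --region us-central1 \
  --node-locations us-central1-a,us-central1-b,us-central1-c \
  --num-nodes 1

With --region, the --num-nodes value applies per zone, so the command above results in three nodes in total.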

However, a GKE multi-zone cluster has certain limitations as well:

  • When a node goes down, a new node will come up automatically (if autoscaling is enabled), but the problem lies with data availability: if the attached disk is ephemeral, the data associated with the failed node is lost.
  • Secondly, even if your application is capable of replicating data on its own, doing so takes more time and has a negative impact on performance.

Hence, in either case, it is advisable to provision OpenEBS volumes.

How is replication done with OpenEBS?

OpenEBS needs a minimum of three replicas to run with high availability. If a node fails, OpenEBS rebuilds the data on a new disk. In the meantime, your workload can access the data from one of the remaining replicas, ensuring no downtime.

Getting started with the OpenEBS Enterprise Platform is simple; all you need to do is follow the steps below.

Workflow:

STEP 1

Getting Started with OpenEBS Enterprise Platform

Install OpenEBS on your cluster using the following command:

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

To verify that the openebs namespace has been created, run the following command:

kubectl get ns

The openebs namespace should be in the Active state:

NAME         STATUS        AGE
default      Active        8m
kube-public  Active        8m
kube-system  Active        8m
openebs      Active        11s

Run the following command to check that all OpenEBS pods are in the Running state (by default, OpenEBS pods are created in the openebs namespace):

kubectl get pod -n openebs
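
If you prefer to block until everything is up, kubectl can also wait on the pods (an optional step, with a timeout chosen here for illustration):

kubectl wait --for=condition=Ready pod --all -n openebs --timeout=300s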

An easier way to monitor the pods and other components used in the following steps is with Director Online or Director onPrem.

STEP 2

Creating and Attaching Disks 

Next, you need to create disks and attach them to the desired VMs. This can be done either from the console or from the command line.

For detailed steps to create and attach disks in GCP, click the corresponding link; a gcloud sketch follows the list.

  1. Creation of disk
  2. Attaching the disk
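
For instance, a disk can be created and attached with gcloud. This is a minimal sketch; the disk name, instance name, size, and zone are placeholders:

gcloud compute disks create openebs-disk-1 --size=100GB --zone=us-central1-a

gcloud compute instances attach-disk gke-node-instance-name --disk=openebs-disk-1 --zone=us-central1-a

Repeat this for each node that should contribute a disk to the pool.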


STEP 3

Provisioning OpenEBS volumes

First, create a cStorPool:

To create a cStorPool you need to specify the block devices. To view the disks attached in Step 2, execute the following command:

kubectl get blockdevice -n openebs

Now, copy the snippet below into a file, say cstor-pool-config.yaml, with your own block devices specified:

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 2Gi
      - name: PoolResourceLimits
        value: |-
            memory: 4Gi
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
  ## Replace the following with actual blockDevice CRs from your cluster, obtained using "kubectl get bd -n openebs".
    - blockdevice-66a74896b61c60dcdaf7c7a76fde0ebb
    - blockdevice-b34b3f97840872da9aa0bac1edc9578a
    - blockdevice-ce41f8f5fa22acb79ec56292441dc207
---

Now, apply the above YAML file using the following command:

kubectl apply -f cstor-pool-config.yaml

To verify, execute:

kubectl get spc

The output should show the cStorPool that was just created:

NAME              AGE
cstor-disk-pool   1m

Also, verify the CSP (cStorPool) status:

kubectl get csp

The status of the CSPs in the output must be Healthy.
For further confirmation that things are working as expected, execute the following command:

kubectl get pods -n openebs | grep <pool-name>

Here, the pool name is cstor-disk-pool, so the command to execute is:

kubectl get pods -n openebs | grep cstor-disk-pool

The output should show the pool pods in the Running state.

cstor-disk-pool-c4qj-664bd98889-pclpb          3/3     Running   0          42m
cstor-disk-pool-c4ql-5fcb7646d6-jlk6w          3/3     Running   0          42m
cstor-disk-pool-gke5-76d97dc449-8px9c          3/3     Running   0          42m

Next, you have to create a StorageClass that references the StoragePoolClaim and sets ReplicaCount to the number of cStor volume replicas to be created.

Copy the YAML given below into a file, say cstor-sc.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-statefulset
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"    
provisioner: openebs.io/provisioner-iscsi

To apply the above YAML, execute:

kubectl apply -f cstor-sc.yaml

To verify, execute:

kubectl get sc

The output must contain the StorageClass name specified in the yaml file.

NAME                            PROVISIONER                         AGE
openebs-sc-statefulset          openebs.io/provisioner-iscsi        1m

Next, you need a PVC spec or volumeClaimTemplate that uses the StorageClass pointing to a pool backed by real disks. For example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cstor-pvc-mysql-large
spec:
  storageClassName: openebs-sc-statefulset
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
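
Assuming the PVC spec above is saved in a file named cstor-pvc.yaml (an illustrative name), it can be applied and checked with:

kubectl apply -f cstor-pvc.yaml

kubectl get pvc cstor-pvc-mysql-large

The PVC should reach the Bound state once the cStor volume and its replicas have been provisioned.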

Now, everything is set to deploy an application and get started.
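
As an illustration, here is a minimal StatefulSet sketch that consumes the StorageClass through a volumeClaimTemplate; the application image, names, and storage size are placeholder choices:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-app
spec:
  serviceName: demo-app
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: busybox
        ## Writes a timestamp to the OpenEBS-backed volume every 5 seconds.
        command: ["sh", "-c", "while true; do date >> /data/out.txt; sleep 5; done"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: openebs-sc-statefulset
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

With ReplicaCount set to 3 in the StorageClass, each volume created this way is replicated across the cStor pools on three nodes, so the data stays available even if a node or an entire zone goes down.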
