Storage Scheduling goes mainstream in Kubernetes 1.12


With every new release of Kubernetes, I find myself in awe and at ease with the choices we made early on to marry OpenEBS with Kubernetes.

There are several storage management capabilities being built into Kubernetes, such as PV/PVC metrics, PV resize, PV Quota, Pod Priority Classes, and the Mount Propagation features that greatly enhance OpenEBS. However, I am especially excited about a couple of features that were recently introduced in Kubernetes 1.12:

  • Taint Nodes based on Conditions
  • Topology Aware Storage Provisioning

Taint Nodes based on Conditions (#382)

OpenEBS Volume services are composed of a Target Pod and a set of Replicas. When the node running the Target Pod is unable to serve Pods, the Target Pod needs to be evicted and rescheduled immediately. If you are using OpenEBS 0.6 or higher, the Target Pods have the following eviction tolerations specified:

- effect: NoExecute
  key: node.kubernetes.io/not-ready
  operator: Exists
  tolerationSeconds: 0
- effect: NoExecute
  key: node.kubernetes.io/unreachable
  operator: Exists
  tolerationSeconds: 0

Until now, the above tolerations took effect only when the Kubernetes TaintNodesByCondition feature was enabled via its alpha feature gate. With K8s 1.12, this feature has moved to beta and is enabled by default. Along with this change, the performance improvements made in scheduling allow for faster rescheduling of the OpenEBS Target Pod and should help keep the data served by OpenEBS highly available.
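With TaintNodesByCondition, the node lifecycle controller translates node conditions into taints that the above tolerations match. As a sketch for illustration, when a node becomes unreachable, a taint along these lines appears on the Node object:

```yaml
# Sketch: taint added automatically by the node lifecycle controller when
# a node's Ready condition becomes Unknown (node unreachable). Pods whose
# tolerations do not match are evicted after their tolerationSeconds elapse,
# which is what lets the OpenEBS Target Pod reschedule quickly.
spec:
  taints:
  - key: node.kubernetes.io/unreachable
    effect: NoExecute
```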

Topology Aware Dynamic Provisioning (#561)

This feature primarily benefits Persistent Volumes that have connectivity or access limitations, such as Local PVs that cannot be accessed by Pods outside of their node, or cloud PVs like EBS and GPD that cannot be accessed outside of the zone in which they were provisioned. OpenEBS never had this limitation, so this particular connectivity or access benefit is not something the OpenEBS community requires.

However, I am excited about some of the new capabilities that have now been added to the StorageClass and PVC that can benefit OpenEBS volumes as well.

For instance, OpenEBS storage classes can also be set with a volumeBindingMode of WaitForFirstConsumer as follows:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-standard
provisioner: openebs.io/iscsi
volumeBindingMode: WaitForFirstConsumer

PVCs provisioned with the above StorageClass will carry the Node selected by the scheduler for the associated Pod in the following PVC annotation:

volume.kubernetes.io/selected-node

OpenEBS can then use the above annotation to determine the preferred node where the Target Pod can be scheduled. This provides a simpler way to schedule the Target Pods on the same Node as the Application Pod.
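For illustration, here is a sketch of what such a PVC might look like once the scheduler has picked a node for the consuming Pod (the claim name and node name are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim            # hypothetical claim name
  annotations:
    # Set by the scheduler once the consuming Pod is placed;
    # OpenEBS can read this to co-locate the Target Pod on the same node.
    volume.kubernetes.io/selected-node: node-1
spec:
  storageClassName: openebs-standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```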

This feature is decidedly an important step towards making Storage PVs first-class citizens in scheduling. It helps with the initial provisioning of volumes, and I am excited about the enhancements planned in this area, such as the ability for Volume Plugins to specify the preferred location where Application Pods can be scheduled.

While this release made progress in making Storage a first-class citizen of the Kubernetes scheduler, a lot of work is underway to make the Storage Lifecycle easier to manage, with upcoming support in CSI for Snapshot, Clone, Backup, and Recovery.

It feels great to be associated with Kubernetes, OpenEBS, and the incredible team that helps DevOps teams sleep better.

By the way, it is Hacktoberfest! This is a great time to become part of the Open Source community. OpenEBS is participating in this year's Hacktoberfest with a friendly team that is available to help you get started with your contributions to OpenEBS and other projects.

As always, feel free to reach out to us on Slack (https://slack.openebs.io/) or add comments below.

This article was first published on Oct 2, 2018 on OpenEBS's Medium Account

Uma Mukkara
Umasankar Mukkara (Uma) has over 20 years of experience as a hands-on developer, architecting scalable products and building innovative teams. Uma led product development in the early days of MayaData (CloudByte). He has led multiple innovations in multi-tenant storage and has contributed to more than 10 patents. Prior to CloudByte, he contributed significantly to the development of Access Management solutions and has a sound understanding of cloud storage and security architecture. Uma also spends significant time building chaos engineering practices in the OpenEBS development ecosystem, community engagement, and partner ecosystem development. Uma holds a Master's degree in Telecommunications and Software Engineering from Illinois Institute of Technology, Chicago, and a bachelor's degree in Communications from S.V. University, Tirupati, India. Contributor at openebs.io, Co-founder & COO @MayaData
Harshita Sharma
Software Engineer at MayaData Inc
Shashank Ranjan
Software Engineer at MayaData