Storage Scheduling goes mainstream in Kubernetes 1.12

With every new release of Kubernetes, I find myself in awe and at ease with the choices we made early on to marry OpenEBS with Kubernetes.
There are several storage management capabilities being built into Kubernetes, such as PV/PVC metrics, PV resize, PV Quota, Pod Priority Classes, and the Mount Propagation features that greatly enhance OpenEBS. However, I am especially excited about a couple of features that were recently introduced with Kubernetes 1.12:

  • Taint Nodes based on Conditions
  • Topology Aware Storage Provisioning

Taint Nodes based on Conditions (#382)

OpenEBS Volume services are composed of a Target Pod and a set of Replica Pods. When the node running the Target Pod can no longer serve it, the Target Pod needs to be evicted and rescheduled immediately. If you are using OpenEBS 0.6 or higher, the Target Pods have the following eviction tolerations specified:

tolerations:
- effect: NoExecute
  key: node.alpha.kubernetes.io/notReady
  operator: Exists
  tolerationSeconds: 0
- effect: NoExecute
  key: node.alpha.kubernetes.io/unreachable
  operator: Exists
  tolerationSeconds: 0

Until now, the above tolerations took effect only when the Kubernetes TaintNodesByCondition feature was enabled via its alpha feature gate. With Kubernetes 1.12, this feature has moved to beta and is enabled by default. Along with this feature, the performance improvements made in scheduling allow for faster rescheduling of the OpenEBS Target Pod and should help keep the data stored by OpenEBS highly available.
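To sketch what the tolerations above are reacting to: when a node stops responding, the node lifecycle controller taints it automatically, roughly as follows (a minimal illustration, not an exhaustive list of condition taints):

```yaml
# Taint placed on an unresponsive node by the node controller.
# The Target Pod's matching toleration with tolerationSeconds: 0
# means it is evicted from such a node immediately.
taints:
- key: node.kubernetes.io/unreachable
  effect: NoExecute
```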

Topology Aware Dynamic Provisioning (#561)

This feature primarily benefits Persistent Volumes that have connectivity or access limitations: Local PVs that cannot be accessed by Pods on other nodes, and cloud PVs such as EBS and GPD (Google Persistent Disk) that cannot be accessed outside of the zone in which they were provisioned. OpenEBS never had this limitation, so the OpenEBS community does not need this particular benefit.

However, I am excited about some of the new capabilities that have now been added to the StorageClass and PVC, which can benefit OpenEBS volumes as well.

For instance, OpenEBS storage classes can also be set with a volumeBindingMode of WaitForFirstConsumer as follows:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-standard
provisioner: openebs.io/provisioner-iscsi
volumeBindingMode: WaitForFirstConsumer
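A claim against such a StorageClass might look like the sketch below (claim name and size are hypothetical). With WaitForFirstConsumer, binding and provisioning are delayed until a Pod that uses the claim is actually scheduled:

```yaml
# Hedged example: provisioning for this claim waits until a consuming
# Pod is scheduled, so the scheduler's node choice can be taken into account.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim        # hypothetical name
spec:
  storageClassName: openebs-standard
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```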

The PVCs provisioned with the above StorageClass will carry the name of the node selected by the scheduler to launch the associated Pod in the volume.kubernetes.io/selected-node PVC annotation.


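For example, after the consuming Pod is scheduled, the PVC metadata looks roughly like this (node name is hypothetical):

```yaml
# Annotation written by the scheduler on the PVC under
# topology-aware (delayed) binding:
metadata:
  annotations:
    volume.kubernetes.io/selected-node: worker-node-1
```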
OpenEBS can then use the above annotation to determine the preferred node where the Target Pod can be scheduled. This provides a simpler way to schedule the Target Pods on the same Node as the Application Pod.
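One way this could be done — a sketch, not necessarily how OpenEBS implements it — is to translate the selected node into scheduling intent on the Target Pod, for example via nodeAffinity (node name hypothetical):

```yaml
# Hedged sketch: prefer scheduling the Target Pod on the node the
# scheduler already picked for the Application Pod.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["worker-node-1"]
```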

This feature is decidedly an important step towards making storage PVs first-class citizens in scheduling. It helps with the initial provisioning of volumes, and I am excited about the enhancements planned in this area, such as the ability for Volume Plugins to specify the preferred location where the Application Pods can be scheduled.

While this release made progress in making storage a first-class citizen of Kubernetes schedulers, a lot of work is underway to make the storage lifecycle easy to manage with the upcoming CSI support for Snapshot, Clone, Backup, and Recovery.

It feels great to be associated with Kubernetes and OpenEBS and the incredible team that helps DevOps teams sleep better.

Btw, it is Hacktoberfest! This is a great time to become part of the open source community. OpenEBS is participating this year with a friendly team that is available to help you get started with your contributions to OpenEBS and other projects.

As always, feel free to reach out to us on Slack or add comments below.

This article was first published on Oct 2, 2018 on OpenEBS's Medium Account

Uma Mukkara
Uma Mukkara is the co-founder and COO at MayaData, where he co-created two open source projects, OpenEBS and LitmusChaos. Uma is also a maintainer of the LitmusChaos project. His interests include research and contributions in the areas of cloud-native data management and cloud-native chaos engineering. Uma holds a Master's degree in Telecommunications and Software Engineering from Illinois Institute of Technology, Chicago, and a bachelor's degree in Communications from S.V. University, Tirupati, India.
Prateek Pandey
Software Engineer at MayaData
Murat Karslioglu
VP @OpenEBS & @MayaData_Inc. Murat Karslioglu is a serial entrepreneur, technologist, and startup advisor with over 15 years of experience in storage, distributed systems, and enterprise hardware development. Prior to joining MayaData, Murat worked at Hewlett Packard Enterprise / 3PAR Storage on various advanced development projects, including storage file stack performance optimization and the storage management stack for HPE's hyper-converged solution. Before joining HPE, Murat led virtualization and OpenStack integration projects within the Nexenta CTO Office. Murat holds a Bachelor's degree in Industrial Engineering from Sakarya University, Turkey, as well as a number of IT certifications. When he is not in his lab, he loves to travel, advise startups, and spend time with his family. Lives to innovate! Opinions my own!