cStor Pool Provisioning in OpenEBS 0.7

Greetings, OpenEBS users!
The OpenEBS team is happy to announce the release of 0.7, which comes with a new storage engine for creating storage pools, known as the cStor engine.
To find out more details on this specific release, please go through the following links:
https://github.com/openebs/openebs/releases
https://blog.openebs.io/openebs-0-7-release-pushes-cstor-storage-engine-to-field-trials-1c41e6ad8c91

To keep the story short and concise, I will jump directly to how you can provision a storage pool in 0.7 using the cStor engine. For your information, a storage pool can also be provisioned using the Jiva engine, which we were using in previous versions of OpenEBS.

Let's get started!

Currently, there are two ways of provisioning a storage pool using the cStor engine in OpenEBS 0.7. For this tutorial, I will assume that you have a Kubernetes cluster set up. I will use a 3-node Kubernetes cluster on GKE with one physical disk attached to each node.
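
Before we begin, it is worth confirming that all three nodes are up and report a Ready status (this is a generic kubectl check, not specific to OpenEBS):

ashutosh@miracle:~$ kubectl get nodes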

Manual Pool Provisioning

To provision a storage pool manually, first list the available disks by running the following command wherever your Kubernetes cluster is configured:

ashutosh@miracle:~$ kubectl get disk
NAME                                         CREATED AT
disk-26ac8d634b31ba497a9fa72ae57d6a24         1d
disk-2709a1cba9cea9407b92bc1f7d1a1bde         1d
disk-427145375f85e8a488eeb8bbfae45118         1d
sparse-4b488677f76c94d681870379168a677a       1d
sparse-c3ddc8f0de2eb17c50d145cf6713588c       1d
sparse-e09fe4b5170a7b8fd6b8aabf8c828072       1d

The entries prefixed with disk represent your physical disks, and the ones prefixed with sparse represent sparse disks. We will come to the sparse disk concept in a later blog post; for now, let us concentrate on physical disks!
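
If you want to inspect a particular disk, for example to see which node it is attached to, you can describe its custom resource (the exact fields shown depend on your node-disk-manager version):

ashutosh@miracle:~$ kubectl describe disk disk-26ac8d634b31ba497a9fa72ae57d6a24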

What we need to do is just list the physical disks over which the pool should be created in the diskList field of the StoragePoolClaim (SPC) YAML. The SPC YAML will look like the following:

 apiVersion: openebs.io/v1alpha1
 kind: StoragePoolClaim
 metadata:
   name: cstor-disk
 spec:
   name: cstor-disk
   type: disk
   poolSpec:
     poolType: striped
   disks:
     diskList:
     - disk-26ac8d634b31ba497a9fa72ae57d6a24
     - disk-2709a1cba9cea9407b92bc1f7d1a1bde
     - disk-427145375f85e8a488eeb8bbfae45118

Now apply the YAML that we formed. That’s it. Done!

ashutosh@miracle:~$ kubectl apply -f spc.yaml
storagepoolclaim.openebs.io/cstor-disk created

ashutosh@miracle:~$ kubectl get sp
NAME                 CREATED AT
cstor-disk-5wsi      4s
cstor-disk-7pgs      4s
cstor-disk-8fhi      4s
default              14d

Each disk that we entered in the SPC YAML is attached to a specific node. So 3 cStor pools were created, one on top of each of the 3 nodes, and the sp resources shown in the output above belong to the applied SPC. If all 3 disks had been attached to a single node, we would have got only one sp.
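
To dig into any one of these pools, for example to see which disk and node it maps to, you can describe the corresponding sp resource (pool name taken from the output above; the fields shown vary with the OpenEBS version):

ashutosh@miracle:~$ kubectl describe sp cstor-disk-7pgs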

Dynamic Pool Provisioning

The above involved a somewhat manual process, but it helps users configure storage pools exactly as they choose. Didn’t like the manual process? No worries, let's do some magic with something known as dynamic pool provisioning. Apply the following SPC YAML:

 apiVersion: openebs.io/v1alpha1
 kind: StoragePoolClaim
 metadata:
   name: cstor-disk-dynamic
 spec:
   name: cstor-disk-dynamic
   type: disk
   # required in case of dynamic provisioning
   maxPools: 3
   # If not provided, defaults to 1 (recommended but not required)
   minPools: 3
   poolSpec:
     poolType: striped

ashutosh@miracle:~$ kubectl apply -f dynamic_spc.yaml
storagepoolclaim.openebs.io/cstor-disk-dynamic created

ashutosh@miracle:~$ kubectl get sp
NAME                          CREATED AT
cstor-disk-dynamic-jwc5       6s
cstor-disk-dynamic-qot0       6s
cstor-disk-dynamic-s8va       6s
default                       14d

That’s it. Done.
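
Both claims now exist side by side; you can list them to confirm (the ages shown here are illustrative):

ashutosh@miracle:~$ kubectl get storagepoolclaim
NAME                  CREATED AT
cstor-disk            10m
cstor-disk-dynamic    1m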

Let us dig a little deeper into how this works:

1. The dynamic way of pool provisioning supports reconciliation, i.e. the OpenEBS control plane will always try to reach the maxPools number of pools specified on the StoragePoolClaim. Say the SPC YAML is applied while a node is down or no disks are currently available; when the resources come up, the pools will be provisioned automatically without any intervention.

2. The manual way of provisioning does not have any such reconciliation.

3. At least minPools pools will be created, or no pools will be provisioned at all. For example, with maxPools=10 and minPools=6, the control plane will always try to reach a pool count of 10, but any single shot of provisioning in one pass of the reconciliation loop must provision at least 6 pools. Once the minPools count is reached, the control plane will keep adding pools even if it can only increase the count by 1 (see the sketch below).

NOTE:
1. In the above tutorial we provisioned a striped pool in both cases. A mirrored pool can also be provisioned, but that requires at least 2 disks to be attached to the node. Just change the poolType field in the SPC YAML to mirrored, and the rest follows the same (see the sketch after these notes).

2. The number of sp resources created is equal to the number of cStor pools created, one on top of each node, but they all belong to the single SPC that spawned them. To applications the pool appears as virtually one when we create volumes.
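
As a reference for note 1, here is a minimal manual SPC sketch for a mirrored pool (the claim name and disk IDs are placeholders; the two disks must be attached to the same node):

 apiVersion: openebs.io/v1alpha1
 kind: StoragePoolClaim
 metadata:
   name: cstor-mirror
 spec:
   name: cstor-mirror
   type: disk
   poolSpec:
     # the only change from the striped example
     poolType: mirrored
   disks:
     diskList:
     # a mirror needs at least 2 disks on the same node
     - disk-<id-1>
     - disk-<id-2>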

Hope this helps! Feel free to ask any questions or raise concerns, and if you have any feedback, please share it. You can also reach out on the OpenEBS Slack channel (https://slack.openebs.io/).

This article was first published on Sep 19, 2018, on OpenEBS's Medium account.
