MayaData Blog

Jiva to cStor CSPC Migration

Written by Chandan Sagar Pradhan | Oct 21, 2020 6:30:00 PM

This blog will explain how to migrate an application from Jiva to CSPC cStor.

CSPC (CStorPoolCluster) based cStor provisioning uses the CSI driver implementation for the OpenEBS cStor storage engine.

The current implementation supports the following for CStor Volumes:

  1. Provisioning and De-provisioning with ext4, xfs filesystems
  2. Snapshots and clones
  3. Volume Expansion
  4. Volume Metrics

This blog uses a cluster with 1 master and 3 worker nodes. We will install OpenEBS, deploy a MySQL application on Jiva, and then migrate the MySQL application to CSPC cStor.

Note: We will need some downtime to perform this migration. Once the application’s backup is taken, there should not be any new writes to the application, so it is advised to stop writes before taking the backup. The downtime can vary depending on the size of the volume, network bandwidth, and resource limits.

Prerequisites:

  1. Kubernetes version 1.17 or higher 
  2. 2 external disks attached to each node. One disk will be used for Jiva and the other for CSPC cStor.
  3. iSCSI initiator utilities installed on all the worker nodes (a sample install is shown below)
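
If the open-iscsi package is not already installed, it can typically be installed and enabled on Ubuntu/Debian worker nodes with commands along these lines (package and service names may differ on other distributions):

sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid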

Use the below command to verify iSCSI is running on all the nodes.

root@demo-2:~# systemctl status iscsid
● iscsid.service - iSCSI initiator daemon (iscsid)
   Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-10-14 20:10:17 IST; 19s ago
     Docs: man:iscsid(8)
  Process: 9288 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
  Process: 9285 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
 Main PID: 9290 (iscsid)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/iscsid.service
           ├─9289 /sbin/iscsid
           └─9290 /sbin/iscsid

Installing OpenEBS:

This blog uses the operator yaml to install OpenEBS. Refer to the OpenEBS documentation if you prefer to install OpenEBS using Helm.
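
For reference, a Helm-based install looks roughly like the following (chart repository as published on the OpenEBS charts site; verify the exact chart name and values against the current OpenEBS documentation before using):

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace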

First, get the operator yaml using the below command.

wget https://openebs.github.io/charts/openebs-operator.yaml

As mentioned in the prerequisites, we have 2 disks on each node (/dev/sdb, /dev/sdc). We will add an exclude filter for /dev/sdb, since it will be used for Jiva and NDM should not detect it as a blockdevice. You can skip this step if you want to manage the blockdevices yourself.

Edit the operator yaml and add an exclude path filter in the openebs-ndm-config, as shown below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
  labels:
    openebs.io/component-name: ndm-config
data:
  # udev-probe is default or primary probe which should be enabled to run ndm
  # filterconfigs contains configs of filters - in the form of include
  # and exclude comma separated strings
  node-disk-manager.config: |
    probeconfigs:
      - key: udev-probe
        name: udev probe
        state: true
      - key: seachest-probe
        name: seachest probe
        state: false
      - key: smart-probe
        name: smart probe
        state: true
    filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: true
        exclude: "/,/etc/hosts,/boot"
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/rbd,/dev/sdb"

Now apply the operator yaml.

root@demo-1:~# kubectl apply -f openebs-operator.yaml
namespace/openebs created
serviceaccount/openebs-maya-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
deployment.apps/maya-apiserver created
service/maya-apiserver-service created
deployment.apps/openebs-provisioner created
deployment.apps/openebs-snapshot-operator created
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-admission-server created
deployment.apps/openebs-localpv-provisioner created

Verify all the OpenEBS pods are running.

root@demo-1:~# kubectl get pods -n openebs
NAME                                          READY   STATUS    RESTARTS   AGE
maya-apiserver-559484b978-f6tsv               1/1     Running   2          107s
openebs-admission-server-68b67858cb-q6rsz     1/1     Running   0          107s
openebs-localpv-provisioner-84d94fc75-42r2c   1/1     Running   0          107s
openebs-ndm-c4clw                             1/1     Running   0          107s
openebs-ndm-operator-75b957bf74-9zdt6         1/1     Running   0          107s
openebs-ndm-qhjwn                             1/1     Running   0          107s
openebs-ndm-zj2pc                             1/1     Running   0          107s
openebs-provisioner-664d994494-w4h6b          1/1     Running   0          107s
openebs-snapshot-operator-59c97c6cfc-dsw2q    2/2     Running   0          107s

List the blockdevices as shown below. Since we excluded /dev/sdb, only 3 blockdevices are listed; without the exclude filter, 6 blockdevices (2 per node) would appear.

root@demo-1:~# kubectl get bd -n openebs
NAME                                           NODENAME   SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-c5214e4ed934156c492e8d2a52922cbc   demo-2     42948607488   Unclaimed    Active   2m45s
blockdevice-ca60348232f4fdb6392fbf6b86f0dba1   demo-4     42948607488   Unclaimed    Active   2m42s
blockdevice-d50d6f154651d96a5cb4389ec34b10b6   demo-3     42948607488   Unclaimed    Active   2m45s
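
To confirm which physical disk a blockdevice maps to (and that /dev/sdb was indeed excluded), you can describe one of them; the disk path appears in the output (blockdevice name taken from the listing above):

kubectl describe bd blockdevice-c5214e4ed934156c492e8d2a52922cbc -n openebs | grep -i path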

Now let’s create a Jiva storage pool.

These steps (Step-1 and Step-2) are to be performed on all the worker/storage nodes.

Step-1: Identify the disk using lsblk

root@demo-2:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0      2:0    1     4K  0 disk
loop0    7:0    0  44.9M  1 loop /snap/gtk-common-themes/1440
loop1    7:1    0  89.1M  1 loop /snap/core/8268
loop2    7:2    0  62.1M  1 loop /snap/gtk-common-themes/1506
loop3    7:3    0  55.3M  1 loop /snap/core18/1885
loop4    7:4    0   3.7M  1 loop /snap/gnome-system-monitor/127
loop5    7:5    0  14.8M  1 loop /snap/gnome-characters/399
loop6    7:6    0 162.9M  1 loop /snap/gnome-3-28-1804/145
loop7    7:7    0   956K  1 loop /snap/gnome-logs/100
loop8    7:8    0   2.2M  1 loop /snap/gnome-system-monitor/148
loop9    7:9    0   956K  1 loop /snap/gnome-logs/81
loop10   7:10   0  54.7M  1 loop /snap/core18/1668
loop11   7:11   0   2.5M  1 loop /snap/gnome-calculator/826
loop12   7:12   0   4.2M  1 loop /snap/gnome-calculator/544
loop13   7:13   0   276K  1 loop /snap/gnome-characters/570
loop14   7:14   0 160.2M  1 loop /snap/gnome-3-28-1804/116
loop15   7:15   0 217.9M  1 loop /snap/gnome-3-34-1804/60
loop16   7:16   0  97.7M  1 loop /snap/core/10126
sda      8:0    0   100G  0 disk
└─sda1   8:1    0   100G  0 part /
sdb      8:16   0    20G  0 disk
sdc      8:32   0    40G  0 disk
sr0     11:0    1  1024M  0 rom

Step-2: Format the disk and mount it as shown below.

root@demo-2:~# mkfs.ext4 /dev/sdb
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 90a938bc-c056-4bfb-8a5e-29fe2c23266f
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@demo-2:~# mkdir /home/openebs-jiva
root@demo-2:~# mount /dev/sdb  /home/openebs-jiva
root@demo-2:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0      2:0    1     4K  0 disk
loop0    7:0    0   3.7M  1 loop /snap/gnome-system-monitor/127
loop1    7:1    0   4.2M  1 loop /snap/gnome-calculator/544
loop2    7:2    0  14.8M  1 loop /snap/gnome-characters/399
loop3    7:3    0   2.5M  1 loop /snap/gnome-calculator/826
loop4    7:4    0   956K  1 loop /snap/gnome-logs/81
loop5    7:5    0   2.2M  1 loop /snap/gnome-system-monitor/148
loop6    7:6    0 160.2M  1 loop /snap/gnome-3-28-1804/116
loop7    7:7    0  62.1M  1 loop /snap/gtk-common-themes/1506
loop8    7:8    0   956K  1 loop /snap/gnome-logs/100
loop9    7:9    0  44.9M  1 loop /snap/gtk-common-themes/1440
loop10   7:10   0  89.1M  1 loop /snap/core/8268
loop11   7:11   0 217.9M  1 loop /snap/gnome-3-34-1804/60
loop12   7:12   0   276K  1 loop /snap/gnome-characters/570
loop13   7:13   0  55.3M  1 loop /snap/core18/1885
loop14   7:14   0  54.7M  1 loop /snap/core18/1668
loop15   7:15   0 162.9M  1 loop /snap/gnome-3-28-1804/145
loop16   7:16   0  97.7M  1 loop /snap/core/10126
sda      8:0    0   100G  0 disk
└─sda1   8:1    0   100G  0 part /
sdb      8:16   0    20G  0 disk /home/openebs-jiva
sdc      8:32   0    40G  0 disk
sr0     11:0    1  1024M  0 rom
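
Note that a mount created this way does not survive a reboot. To make it persistent, you would typically also add an /etc/fstab entry on each node along these lines (device and mount point as used above; adjust to your setup):

echo '/dev/sdb /home/openebs-jiva ext4 defaults 0 0' >> /etc/fstab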

Now create a jiva_storagepool.yaml with the below content. In the path field, specify the directory where we mounted the disk in the previous step (/home/openebs-jiva).

apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: jiva-pool
  type: hostdir
spec:
  path: "/home/openebs-jiva"

Apply the jiva_storagepool.yaml and verify the pool has been created.

root@demo-1:~# kubectl apply -f jiva_storagepool.yaml
storagepool.openebs.io/jiva-pool created
root@demo-1:~# kubectl get storagepool
NAME        AGE
default     4m22s
jiva-pool   13s


Create a storageclass for Jiva

Create a jiva_storageclass.yaml with the below content. Mention the Jiva storagepool we created earlier as the StoragePool value.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jiva-sc
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
      - name: StoragePool
        value: jiva-pool
provisioner: openebs.io/provisioner-iscsi

Apply the jiva_storageclass.yaml and verify the storage class is created.

root@demo-1:~# kubectl apply -f jiva_storageclass.yaml
storageclass.storage.k8s.io/jiva-sc created
root@demo-1:~# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
jiva-sc                     openebs.io/provisioner-iscsi                               Delete          Immediate              false                  6s
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  10m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  10m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  10m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  10m


Deploy a MySQL Application on Jiva

Create a secret using the following command; it will be used by the MySQL application. Replace YOUR_PASSWORD with the password you want to use.

kubectl create secret generic mysql-pass --from-literal=username=root --from-literal=password=YOUR_PASSWORD
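
The secret created by this command has no labels. Since the Velero backup later in this blog selects resources by the common label app=mysql, it helps to label the secret as well so it gets picked up by that selector:

kubectl label secret mysql-pass app=mysql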

Create a mysql.yaml with the below details. This yaml creates a service and PVC for MySQL and a MySQL deployment. Mention the Jiva storage class created earlier in spec.storageClassName.

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: jiva-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

Now apply mysql.yaml and verify the MySQL pod is running.

root@demo-1:~# kubectl apply -f mysql.yaml
service/mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/mysql created
root@demo-1:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mysql-6c85844c87-rpqb5   1/1     Running   0          38s
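
Optionally, confirm that the PVC is bound to a Jiva-backed volume before writing any data:

kubectl get pvc mysql-pv-claim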

Now exec into the MySQL pod and write some data into it, so we can verify the data is intact after the migration.

root@demo-1:~# kubectl exec -it mysql-6c85844c87-rpqb5 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mysql-6c85844c87-rpqb5:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.49 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select * from KUBERA;
+--------+------------+-----------+------+---------+--------------+
| name   | city       | infra     | vpc  | storage | region       |
+--------+------------+-----------+------+---------+--------------+
| ABN    | SAN JOSE   | AWS       | YES  | 50G     | US-EAST-1    |
| IGL    | SANTACLARA | AZURE     | YES  | 40G     | DOWN-SOUTH-2 |
| LGI    | KENTUCKY   | AWS       | NO   | 20G     | US-WEST-2    |
| BOEING | BRUSSELS   | OPENSTACK | NO   | 80G     | ASIA-PACIFIC |
| NIFTY  | SANTACRUZ  | AWS       | YES  | 67G     | EU-WEST-1    |
| ABN    | SAN JOSE   | AWS       | YES  | 50G     | US-EAST-1    |
| IGL    | SANTACLARA | AZURE     | YES  | 40G     | DOWN-SOUTH-2 |
| LGI    | KENTUCKY   | AWS       | NO   | 20G     | US-WEST-2    |
| BOEING | BRUSSELS   | OPENSTACK | NO   | 80G     | ASIA-PACIFIC |
| NIFTY  | SANTACRUZ  | AWS       | YES  | 67G     | EU-WEST-1    |
+--------+------------+-----------+------+---------+--------------+
10 rows in set (0.00 sec)
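
For reference, the sample data shown above was created with statements along these lines inside the same mysql session (the column definitions here are an assumption; only the database name mayadata and the table name KUBERA are visible in the session output):

CREATE DATABASE mayadata;
USE mayadata;
CREATE TABLE KUBERA (name VARCHAR(16), city VARCHAR(32), infra VARCHAR(16), vpc VARCHAR(8), storage VARCHAR(8), region VARCHAR(32));
INSERT INTO KUBERA VALUES ('ABN', 'SAN JOSE', 'AWS', 'YES', '50G', 'US-EAST-1');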


Now let’s create a CSPC-based cStor pool and storage class

Apply the latest cstor-operator.yaml by using the below command.

kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/cstor-operator.yaml

The user will see the below output once cstor-operator.yaml is applied. Verify the CSI pods are running, as shown in the below output.

root@demo-1:~# kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/cstor-operator.yaml
namespace/openebs unchanged
serviceaccount/openebs-maya-operator unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/openebs-cstor-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-operator created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-migration created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-migration created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumeattachments.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
Warning: storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
csidriver.storage.k8s.io/cstor.csi.openebs.io created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-csi-snapshotter-binding created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-csi-snapshotter-role created
serviceaccount/openebs-cstor-csi-controller-sa created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-csi-provisioner-binding created
statefulset.apps/openebs-cstor-csi-controller created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-csi-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-csi-attacher-binding created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-csi-cluster-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-csi-cluster-registrar-binding created
serviceaccount/openebs-cstor-csi-node-sa created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-csi-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-csi-registrar-binding created
configmap/openebs-cstor-csi-iscsiadm created
daemonset.apps/openebs-cstor-csi-node created
customresourcedefinition.apiextensions.k8s.io/cstorpoolclusters.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorpoolinstances.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumeconfigs.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumepolicies.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumereplicas.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumes.cstor.openebs.io created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/cstorbackups.openebs.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/cstorcompletedbackups.openebs.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/cstorrestores.openebs.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/upgradetasks.openebs.io configured
deployment.apps/cspc-operator created
deployment.apps/cvc-operator created
service/cvc-operator-service created
deployment.apps/openebs-cstor-admission-server created
root@demo-1:~# kubectl get pods -n openebs -l role=openebs-cstor-csi
NAME                             READY   STATUS    RESTARTS   AGE
openebs-cstor-csi-controller-0   7/7     Running   0          51s
openebs-cstor-csi-node-5qwh5     2/2     Running   0          51s
openebs-cstor-csi-node-mhgrd     2/2     Running   0          51s
openebs-cstor-csi-node-mljvw     2/2     Running   0          51s


CSPC cStor Pool Provisioning

Users need to specify the cStor pool intent in a CSPC YAML to provision cStor pools on nodes. In this blog, we will provision 3 stripe cStor pools. Let us prepare the CSPC YAML now.

The following command lists all block devices, which represent the attached disks mentioned earlier.

kubectl get bd -n openebs

Sample Output

root@demo-1:~# kubectl get bd -n openebs
NAME                                           NODENAME   SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-c5214e4ed934156c492e8d2a52922cbc   demo-2     42948607488   Unclaimed    Active   125m
blockdevice-ca60348232f4fdb6392fbf6b86f0dba1   demo-4     42948607488   Unclaimed    Active   125m
blockdevice-d50d6f154651d96a5cb4389ec34b10b6   demo-3     42948607488   Unclaimed    Active   125m

Now pick 1 block device from each node to form the CSPC YAML (you can also pick multiple block devices from one node). Take care to pair each hostname with its respective blockDeviceName.

Sample CSPC yaml:

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cspc-stripe
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "demo-2"
      dataRaidGroups:
      - blockDevices:
          - blockDeviceName: "blockdevice-c5214e4ed934156c492e8d2a52922cbc"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "demo-3"
      dataRaidGroups:
      - blockDevices:
          - blockDeviceName: "blockdevice-d50d6f154651d96a5cb4389ec34b10b6"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "demo-4"
      dataRaidGroups:
      - blockDevices:
          - blockDeviceName: "blockdevice-ca60348232f4fdb6392fbf6b86f0dba1"
      poolConfig:
        dataRaidGroupType: "stripe"

Now apply the CSPC yaml (saved as cspc.yaml) and verify the CSPC components are created and running, as shown below.

root@demo-1:~# kubectl apply -f cspc.yaml
cstorpoolcluster.cstor.openebs.io/cspc-stripe created
root@demo-1:~# kubectl get cspc -n openebs
NAME          HEALTHYINSTANCES   PROVISIONEDINSTANCES   DESIREDINSTANCES   AGE
cspc-stripe                      3                      3                  11s
root@demo-1:~# kubectl get cspi -n openebs
NAME               HOSTNAME   FREE     CAPACITY    READONLY   PROVISIONEDREPLICAS   HEALTHYREPLICAS   STATUS   AGE
cspc-stripe-6564   demo-2     38500M   38500056k   false      0                     0                 ONLINE   31s
cspc-stripe-dkb6   demo-4     38500M   38500053k   false      0                     0                 ONLINE   30s
cspc-stripe-rwd4   demo-3     38500M   38500053k   false      0                     0                 ONLINE   30s


Create a storageclass for cStor

Now create cstor_sc.yaml using the below contents. Mention the CSPC name we created earlier in parameters.cstorPoolCluster.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-cstor-sc
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cspc-stripe
  replicaCount: "1"

Now apply the cstor_sc.yaml and verify the storage class is created.

root@demo-1:~# kubectl create -f cstor_sc.yaml
storageclass.storage.k8s.io/csi-cstor-sc created
root@demo-1:~# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
csi-cstor-sc                cstor.csi.openebs.io                                       Delete          Immediate              true                   6s
jiva-sc                     openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5h55m
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  6h5m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  6h5m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  6h5m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  6h5m


Migrating MySQL application from Jiva to CSPC cStor

In this blog, we will use Velero with Restic to take a complete backup of the MySQL application, including its data and all dependencies. We also need a Minio instance running locally to store the backup.

Let’s deploy a Minio application in the velero namespace.

First, create the velero namespace using the below command.

kubectl create ns velero

Now create a minio.yaml using the below contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio
  labels:
    app: minio
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage1
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim1
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio
        args:
        - server
        - /storage1
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage1 # must match the volume name, above
          mountPath: "/storage1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim1
  labels:
    app: minio
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  ports:
    - port: 9000
      nodePort: 32701
      protocol: TCP
  selector:
    app: minio
  sessionAffinity: None
  type: NodePort

Apply the minio.yaml. In this yaml, we have chosen to run Minio on the openebs-hostpath storageclass; you can also choose Jiva or cStor instead. Once the yaml is applied, verify the minio pod is running, as shown below.

root@demo-1:~# kubectl apply -f minio.yaml -n velero
deployment.apps/minio created
persistentvolumeclaim/minio-pv-claim1 created
service/minio created
root@demo-1:~# kubectl get pods -n velero
NAME                    READY   STATUS    RESTARTS   AGE
minio-c45648fb4-tbmrn   1/1     Running   0          58s
root@demo-1:~# kubectl get pvc -n velero
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim1   Bound    pvc-8ff5c6f4-66af-459a-bb84-0bf7cd04a0e7   10Gi       RWO            jiva-sc        62s
root@demo-1:~# kubectl get svc -n velero
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          24h
minio        NodePort    10.103.229.48   <none>        9000:32701/TCP   67s

Users can access Minio UI from the browser using the below pattern.
http://<worker node IP>:<NodePort>
Example:
http://10.41.107.21:32701 

Log in to Minio and create a bucket named velero.
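
If you prefer the command line over the UI, the bucket can also be created with the MinIO client (mc), assuming mc is installed on your workstation; the alias uses the access/secret keys and NodePort from the Minio yaml above:

mc alias set minio http://<worker node IP>:32701 minio minio123
mc mb minio/velero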


Installing Velero with Restic plugin

Create a Velero-specific credentials file (credentials-velero) in your local directory:

[default]
aws_access_key_id = minio
aws_secret_access_key = minio123

Install the velero binary using the below commands.

wget https://github.com/vmware-tanzu/velero/releases/download/v1.5.1/velero-v1.5.1-linux-amd64.tar.gz
tar -zxvf velero-v1.5.1-linux-amd64.tar.gz
cd velero-v1.5.1-linux-amd64/
root@demo-1:~/velero-v1.5.1-linux-amd64# cp velero /usr/local/bin/
root@demo-1:~/velero-v1.5.1-linux-amd64# velero version
Client:
        Version: v1.5.1
        Git commit: 87d86a45a6ca66c6c942c7c7f08352e26809426c

Install the velero components using the below command.

velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=true --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000 --snapshot-location-config region="default" --use-restic

Once the above command is applied, the user will observe the below output. Verify the velero and restic pods are running in the velero namespace, as shown below.

root@demo-1:~# velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=true --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000 --snapshot-location-config region="default" --use-restic
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource
CustomResourceDefinition/resticrepositories.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: created
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: created
VolumeSnapshotLocation/default: attempting to create resource
VolumeSnapshotLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: created
DaemonSet/restic: attempting to create resource
DaemonSet/restic: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
root@demo-1:~# kubectl get pods -n velero
NAME                     READY   STATUS    RESTARTS   AGE
restic-n2h8v             1/1     Running   0          40s
restic-n587c             1/1     Running   0          40s
restic-qcscn             1/1     Running   0          40s
velero-b54b7f5b8-zn9b4   1/1     Running   0          40s

Now we will create a configmap using the below contents. The configmap is needed to restore the MySQL application onto a different storage class. In this blog, the MySQL application runs on the Jiva storageclass jiva-sc, and we will migrate it to csi-cstor-sc.

Mention this in the data section of the configmap yaml in the <source storageclass>: <destination storageclass> pattern, as shown below.

apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e., the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-storage-class: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # storage class name and the value is the new storage
  # class name.
  jiva-sc: csi-cstor-sc

Using the above content, create velero_cm.yaml and apply it. Verify the configmap is created in the velero namespace.

root@demo-1:~# kubectl apply -f velero_cm.yaml
configmap/change-storage-class-config created
root@demo-1:~# kubectl get cm -n velero
NAME                          DATA   AGE
change-storage-class-config   1      13s

Now make sure that all the components of MySQL have a common label; we will use that label to take the backup. Here, the MySQL components (pod, deployment, PVC, service, and secret) all carry the common label app=mysql.

root@demo-1:~# kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   LABELS
mysql-6c85844c87-rpqb5   1/1     Running   0          31m   app=mysql,pod-template-hash=6c85844c87
root@demo-1:~# kubectl get deploy --show-labels
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
mysql   1/1     1            1           31m   app=mysql
root@demo-1:~# kubectl get pvc --show-labels
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE   LABELS
mysql-pv-claim    Bound    pvc-b2fcadc1-b588-44cd-9914-926de41dde2c   10Gi       RWO            jiva-sc            31m   app=mysql
root@demo-1:~# kubectl get svc --show-labels
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   LABELS
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          32h   component=apiserver,provider=kubernetes
mysql        ClusterIP   None            <none>        3306/TCP         31m   app=mysql
root@demo-1:~# kubectl get secret --show-labels
NAME                  TYPE                                  DATA   AGE   LABELS
default-token-vg85s   kubernetes.io/service-account-token   3      32h   <none>
mysql-pass            Opaque                                2      35m   app=mysql

Now we have to add an annotation to the MySQL pod. This annotation makes sure that Velero also takes a backup of the volume attached to the pod. The value of the annotation is the pod's volume mount name, here mysql-persistent-storage.

kubectl annotate pod mysql-6c85844c87-rpqb5 backup.velero.io/backup-volumes=mysql-persistent-storage
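
You can confirm the annotation was applied with a quick check such as:

kubectl describe pod mysql-6c85844c87-rpqb5 | grep backup-volumes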


Taking complete Backup

Now that everything is in place, take a complete backup of the MySQL application using the below command.

Note: Once a backup of the application is taken, there should not be any new write to the application. So, it is advised to stop any new writes to the application before taking backup.

velero backup create <backup-name> --selector <label-selector>

Example:

root@demo-1:~# velero backup create mysql-backup --selector app=mysql
Backup request "mysql-backup" submitted successfully.
Run `velero backup describe mysql-backup` or `velero backup logs mysql-backup` for more details.

Now check the backup status, as shown below. It should complete without any errors.

root@demo-1:~# velero backup get
NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
mysql-backup   Completed   0        0          2020-10-15 05:00:14 +0530 IST   29d       default            app=mysql
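
For more detail on the backup, including the per-volume restic backups, you can also run:

velero backup describe mysql-backup --details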


Restore Application

Now we will restore the application from the backup we took in the previous step. Since the MySQL application is still running in the default namespace, we will use namespace mapping to restore it to another namespace. You could also delete the application in its current namespace and restore it to the same namespace it was running in. Here, we will restore the application from the default namespace to the openebs namespace. Use the below command to restore the application.

velero restore create <restore-name> --from-backup <backup-name> --namespace-mappings <source-namespace>:<destination-namespace>

Example:

root@demo-1:~# velero restore create mysql-restore --from-backup mysql-backup --namespace-mappings default:openebs
Restore request "mysql-restore" submitted successfully.
Run `velero restore describe mysql-restore` or `velero restore logs mysql-restore` for more details.

Now verify that the restore completed without any errors.

root@demo-1:~# velero restore get
NAME            BACKUP         STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
mysql-restore   mysql-backup   Completed   2020-10-15 05:01:42 +0530 IST   2020-10-15 05:02:52 +0530 IST   0        0          2020-10-15 05:01:42 +0530 IST   

Now we can see the MySQL pod running in the openebs namespace, with its PVC bound to the csi-cstor-sc storage class.

root@demo-1:~# kubectl get pvc -n openebs
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-ffc10bec-c033-4387-b16e-f7dd36e1f5eb   10Gi       RWO            csi-cstor-sc   86s
root@demo-1:~# kubectl get pods -n openebs
NAME                                                              READY   STATUS      RESTARTS   AGE
cspc-operator-5b755bcb85-h6zjb                                    1/1     Running     0          6h1m
cspc-stripe-6564-6fb46bb8d9-h65r5                                 3/3     Running     0          119m
cspc-stripe-dkb6-68c7ff54fc-vr9q6                                 3/3     Running     0          119m
cspc-stripe-rwd4-5499687b74-wfk4m                                 3/3     Running     0          119m
cvc-operator-5c6ff4f749-nhr24                                     1/1     Running     0          6h1m
maya-apiserver-559484b978-f6tsv                                   1/1     Running     2          7h56m
mysql-6c85844c87-rpqb5                                            1/1     Running     0          94s
openebs-admission-server-68b67858cb-q6rsz                         1/1     Running     0          7h56m
openebs-cstor-admission-server-65549c6c8c-v2s4g                   1/1     Running     0          6h1m
openebs-cstor-csi-controller-0                                    7/7     Running     0          6h1m
openebs-cstor-csi-node-5qwh5                                      2/2     Running     0          6h1m
openebs-cstor-csi-node-mhgrd                                      2/2     Running     0          6h1m
openebs-cstor-csi-node-mljvw                                      2/2     Running     0          6h1m
openebs-localpv-provisioner-84d94fc75-42r2c                       1/1     Running     0          7h56m
openebs-ndm-c4clw                                                 1/1     Running     0          7h56m
openebs-ndm-operator-75b957bf74-9zdt6                             1/1     Running     0          7h56m
openebs-ndm-qhjwn                                                 1/1     Running     0          7h56m
openebs-ndm-zj2pc                                                 1/1     Running     0          7h56m
openebs-provisioner-664d994494-w4h6b                              1/1     Running     0          7h56m
openebs-snapshot-operator-59c97c6cfc-dsw2q                        2/2     Running     0          7h56m
pvc-b2fcadc1-b588-44cd-9914-926de41dde2c-ctrl-6f86c8b84c-bjf7f    2/2     Running     0          51m
pvc-b2fcadc1-b588-44cd-9914-926de41dde2c-rep-1-5d7bc57d44-dw65r   1/1     Running     0          51m
pvc-b2fcadc1-b588-44cd-9914-926de41dde2c-rep-2-7567494c45-8brwf   1/1     Running     0          51m
pvc-b2fcadc1-b588-44cd-9914-926de41dde2c-rep-3-c7bfc4b-g4bgg      1/1     Running     0          51m
pvc-ffc10bec-c033-4387-b16e-f7dd36e1f5eb-target-76cbdcdfb8wp2f2   3/3     Running     0          94s
sjr-pvc-1713f365-464a-4a6e-8f46-50586f9c4a1e-ky9a-jplr5           0/1     Completed   0          51m
sjr-pvc-1713f365-464a-4a6e-8f46-50586f9c4a1e-o3tw-tgtc5           0/1     Completed   0          51m
sjr-pvc-1713f365-464a-4a6e-8f46-50586f9c4a1e-trj6-vf6t6           0/1     Completed   0          51m
sjr-pvc-8ff5c6f4-66af-459a-bb84-0bf7cd04a0e7-6yio-crjqn           0/1     Completed   0          83m
sjr-pvc-8ff5c6f4-66af-459a-bb84-0bf7cd04a0e7-o8y4-5dx2s           0/1     Completed   0          83m
sjr-pvc-8ff5c6f4-66af-459a-bb84-0bf7cd04a0e7-x6dr-tj5qq           0/1     Completed   0          83m
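
You can also confirm that the restored volume is now served by the cStor pools by listing the cStor volume and replica resources created by the CSI provisioner (using the fully qualified resource names from the CRDs installed earlier):

kubectl get cstorvolumes.cstor.openebs.io -n openebs
kubectl get cstorvolumereplicas.cstor.openebs.io -n openebs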

Now, we can exec into the pod and verify the data is intact after the restoration.

Example:

root@demo-1:~# kubectl exec -it mysql-6c85844c87-rpqb5 -n openebs bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mysql-6c85844c87-rpqb5:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.49 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mayadata            |
| mysql               |
| performance_schema  |
+---------------------+
5 rows in set (0.00 sec)

mysql> use mayadata;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------+
| Tables_in_mayadata |
+--------------------+
| KUBERA             |
+--------------------+
1 row in set (0.00 sec)

mysql> select * from KUBERA;
+--------+------------+-----------+------+---------+--------------+
| name   | city       | infra     | vpc  | storage | region       |
+--------+------------+-----------+------+---------+--------------+
| ABN    | SAN JOSE   | AWS       | YES  | 50G     | US-EAST-1    |
| IGL    | SANTACLARA | AZURE     | YES  | 40G     | DOWN-SOUTH-2 |
| LGI    | KENTUCKY   | AWS       | NO   | 20G     | US-WEST-2    |
| BOEING | BRUSSELS   | OPENSTACK | NO   | 80G     | ASIA-PACIFIC |
| NIFTY  | SANTACRUZ  | AWS       | YES  | 67G     | EU-WEST-1    |
| ABN    | SAN JOSE   | AWS       | YES  | 50G     | US-EAST-1    |
| IGL    | SANTACLARA | AZURE     | YES  | 40G     | DOWN-SOUTH-2 |
| LGI    | KENTUCKY   | AWS       | NO   | 20G     | US-WEST-2    |
| BOEING | BRUSSELS   | OPENSTACK | NO   | 80G     | ASIA-PACIFIC |
| NIFTY  | SANTACRUZ  | AWS       | YES  | 67G     | EU-WEST-1    |
+--------+------------+-----------+------+---------+--------------+
10 rows in set (0.00 sec)


Conclusion:

I hope this blog helps you migrate applications from Jiva to CSPC cStor. Thank you for reading, and please provide any feedback below or on Twitter. For more details on OpenEBS installation and troubleshooting, visit https://docs.openebs.io/. You can also reach out to us on the OpenEBS Slack channel.