How do I run a Litmus test to compare storage performance on Kubernetes?

This article is part of a #HowDoI series on Kubernetes and Litmus.

Developers and DevOps engineers who build or manage stateful applications on Kubernetes are often on the lookout for storage options suited to their application’s specific needs. Depending on the situation, the emphasis could be on high availability, ease of provisioning, performance, and so on. Litmus (as detailed in this article) is an attempt to arm them with the information needed to make the right choice. One of the most important storage tests is to simulate application workloads, or amplify their effect, using synthetic workload generators such as fio. In this article, we detail the steps required to run a fio-based benchmark test using Litmus.
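
For context, fio drives configurable I/O patterns against files or devices and reports bandwidth, IOPS, and latency. As a standalone illustration of the kind of workload Litmus wraps (the parameter values here are illustrative; the actual profiles used by Litmus live as templates in the litmus repository), a basic mixed read/write run might look like:

# create a target directory on the volume under test, then run fio against it
mkdir -p /tmp/fio-test
fio --name=basic-readwrite --rw=randrw --ioengine=libaio --direct=1 \
    --size=128m --runtime=60 --time_based --directory=/tmp/fio-test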

[Image: Evaluating Storage Performance with Litmus]

PRE-REQUISITES

  • At least a single-node Kubernetes cluster with the necessary disk resources mounted on the node. (Note: certain storage solutions require a minimum Kubernetes version. For example, Local PVs are beta from 1.10, and OpenEBS needs 1.7.5+.)
  • A storage operator installed (this typically includes control-plane elements such as static/dynamic provisioners, storage classes, and other elements) with appropriate references to the node and disk resources. (For example, this may involve storage pool creation, or updating disk and node details in the static provisioner.) A quick sanity check is shown below.
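
Before proceeding, it is worth confirming that the cluster is reachable and that the expected storage class is registered. A minimal check (the storage class listed is whatever your operator installs, e.g. openebs-standard as used later in this article) could be:

kubectl get nodes
kubectl get storageclass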

STEP 1: Set up Litmus essentials on the Kubernetes cluster.

  • Obtain the Litmus Git repository via a git clone operation on the Kubernetes master/control machine used to manage the cluster. Set up the Litmus namespace, service account, and cluster role binding by applying rbac.yaml:
karthik_s@cloudshell:~ (strong-eon-153112)$ git clone https://github.com/openebs/litmus.git
Cloning into 'litmus'...
remote: Counting objects: 2627, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 2627 (delta 2), reused 9 (delta 2), pack-reused 2609
Receiving objects: 100% (2627/2627), 10.50 MiB | 4.23 MiB/s, done.
Resolving deltas: 100% (740/740), done.
karthik_s@cloudshell:~ (strong-eon-153112)$ cd litmus/
karthik_s@cloudshell:~/litmus (strong-eon-153112)$ kubectl apply -f hack/rbac.yaml
namespace "litmus" created
serviceaccount "litmus" created
clusterrole "litmus" created
clusterrolebinding "litmus" created
  • Create a ConfigMap resource from the cluster’s config file, typically found at ~/.kube/config, /etc/kubernetes/admin.conf, or elsewhere depending on the type of cluster or setup method. (Note: copy the config file to admin.conf before creating the ConfigMap, as the Litmus job expects this filename.)
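
For instance, assuming the default kubectl config location (the source path will differ on clusters set up with kubeadm or other tools):

cp ~/.kube/config admin.conf
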
karthik_s@cloudshell:~ (strong-eon-153112)$ kubectl create configmap kubeconfig --from-file=admin.conf -n litmus
configmap "kubeconfig" created


STEP 2: Update the Litmus test job per your requirements.

The Litmus fio test job allows the developer to specify certain test parameters via environment variables, such as the following:

  • The storage provider (PROVIDER_STORAGE_CLASS) and the node on which to schedule the application (APP_NODE_SELECTOR).
  • The desired fio profile (FIO_TEST_PROFILE). Currently, Litmus supports simple test templates and is expected to grow to include multiple standard profiles.
  • Simple test parameters such as the size of the test file (FIO_SAMPLE_SIZE) and the duration of the I/O run (FIO_TESTRUN_PERIOD), while the core I/O parameters continue to be housed in the templates.
  • A comma-separated list of pods whose logs need to be collected for analysis of results, as well as the logs’ location on the host, specified in the spec for the logger sidecar.
karthik_s@cloudshell:~ (strong-eon-153112)$ cd litmus/tests/fio/
karthik_s@cloudshell:~/litmus/tests/fio (strong-eon-153112)$ cat run_litmus_test.yaml

---
apiVersion: batch/v1
kind: Job
metadata:
  name: litmus
  namespace: litmus
spec:
  template:
    metadata:
      name: litmus
    spec:
      serviceAccountName: litmus
      restartPolicy: Never
      containers:
      - name: ansibletest
        image: openebs/ansible-runner
        env:
        - name: ANSIBLE_STDOUT_CALLBACK
          value: log_plays

        - name: PROVIDER_STORAGE_CLASS
          value: openebs-standard

        - name: APP_NODE_SELECTOR
          value: kubeminion-01

        - name: FIO_TEST_PROFILE
          value: standard-ssd

        - name: FIO_SAMPLE_SIZE
          value: "128m"

        - name: FIO_TESTRUN_PERIOD
          value: "60"

        command: ["/bin/bash"]
        args: ["-c", "ansible-playbook ./fio/test.yaml -i /etc/ansible/hosts -v; exit 0"]
        volumeMounts:
        - name: logs
          mountPath: /var/log/ansible
        tty: true
      - name: logger
        image: openebs/logger
        command: ["/bin/bash"]
        args: ["-c", "./logger.sh -d 10 -r fio,openebs; exit 0"]
        volumeMounts:
        - name: kubeconfig
          mountPath: /root/admin.conf
          subPath: admin.conf
        - name: logs
          mountPath: /mnt
        tty: true
      volumes:
      - name: kubeconfig
        configMap:
          name: kubeconfig
      - name: logs
        hostPath:
          path: /mnt
          type: Directory
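
To customize these parameters, edit run_litmus_test.yaml directly, or script a quick substitution. For example (the replacement storage class name here is illustrative; use the class your operator provides):

sed -i 's/value: openebs-standard/value: local-storage/' run_litmus_test.yaml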

STEP 3: Run the Litmus fio test job.

The job creates the Litmus test pod, which contains both the test runner as well as the (stern-based) logger sidecar. The test runner then launches an fio test job that uses a persistent volume (PV) based on the specified storage class.

karthik_s@cloudshell:~/litmus/tests/fio (strong-eon-153112)$ kubectl apply -f run_litmus_test.yaml
job "litmus" created

STEP 4: View the fio run results.

The results can be obtained from the log directory on the node on which the Litmus pod executed (by default, /mnt). The fio and other specified pod logs are available in a tar file (Logstash_<timestamp>.tar).

root@gke-oebs-staging-default-pool-7cc7e313-bf16:/mnt# ls
Logstash_07_07_2018_04_10_AM.tar hosts systemd_logs
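
The archive can be unpacked in place to inspect the individual pod logs; the file name below is the one produced by this run:

tar -xf Logstash_07_07_2018_04_10_AM.tar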

The fio results are captured in JSON format with job-specific result sections. Below is a truncated snippet reproduced from the log for a sample basic rw run:

{
  "jobname" : "basic-readwrite",
  "groupid" : 0,
  "error" : 0,
  "eta" : 0,
  "elapsed" : 61,
  "read" : {
    "io_bytes" : 28399748,
    "bw" : 473321,
    "iops" : 118330.31,
    "runtime" : 60001,
    "total_ios" : 7099937,
    "short_ios" : 0,
    "drop_ios" : 0,
    "slat" : {
      "min" : 0,
      "max" : 0,
      "mean" : 0.00,
      "stddev" : 0.00
    },
    ...
  },
  "write" : {
    "io_bytes" : 28400004,
    "bw" : 473325,
    "iops" : 118331.38,
    "runtime" : 60001,
    "total_ios" : 7100001,
    "short_ios" : 0,
    "drop_ios" : 0,
    "slat" : {
      "min" : 0,
      "max" : 0,
      "mean" : 0.00,
      "stddev" : 0.00
    },
    ...
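
Since the results are JSON, headline numbers can be extracted programmatically. As a sketch, assuming the fio output has been saved to a file (fio-result.json is an illustrative name) and follows fio's standard layout with a top-level "jobs" array:

jq '.jobs[] | {jobname: .jobname, read_iops: .read.iops, write_iops: .write.iops}' fio-result.json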

CONCLUSION

How is this different from installing the fio package on the Kubernetes nodes and running tests?

  • Running fio as a Kubernetes job offers better control over simulating actual application loads when used with resource limits (a sketch follows this list).
  • Litmus fio jobs with various profiles can be included as part of a larger suite using the executor framework, thereby obtaining comparative results across profiles.
  • Litmus (as it continues to mature) will provide jobs that perform chaos tests against storage while running different types of workloads; running fio as a Kubernetes job lends itself to that model.
  • Finally, it is a more “Kubernetes” way of doing things!
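
For illustration, constraining the fio workload might look like the following container-spec fragment; the image name and limit values are hypothetical placeholders, not part of the Litmus job above:

containers:
- name: fio
  image: example/fio:latest   # hypothetical image name
  resources:
    limits:
      cpu: "1"        # cap fio at one CPU to mimic a constrained app
      memory: 512Mi   # arbitrary example limit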

Let us know your experience with using fio-based performance tests with Litmus. Any feedback is greatly appreciated!

This article was first published on Jul 16, 2018 on OpenEBS's Medium Account
