OpenEBS Node Device Management (NDM) — Troubleshooting tips

OpenEBS Node Device Management (aka NDM) helps in discovering the block devices attached to Kubernetes nodes. For many clusters, the default configuration of NDM suffices; however, there are some cases where further customizations are required.

In this blog, I will walk through some of the scenarios I have seen working with users on the OpenEBS Slack Channel.

. . . 

NDM Quick Overview

For setting up NDM in secure mode, please see my previous blog, and you can learn how NDM works here. Here is a quick snapshot of the key components of NDM.

  • NDM components are installed in the OpenEBS namespace. Ensure that the NDM DaemonSet pods are running on all the storage nodes. The NDM Operator, which allocates BlockDevices to BlockDeviceClaims, should also be running.
  • The NDM DaemonSet pod discovers all the block devices attached to its node and creates a BlockDevice custom resource for each device. Note that NDM filters out some devices, such as loopback devices, as configured in the NDM ConfigMap. You can list them with `kubectl get bd -n openebs`.
  • NDM can create a special type of device called sparse devices, depending on the `SPARSE_FILE_COUNT` and `SPARSE_FILE_SIZE` values passed to the NDM daemon. These are used when nodes do not have any additional devices attached and users want to run their applications by carving out some space from the OS disk. The creation of sparse devices is disabled by default starting with OpenEBS 1.3.
  • Users or operators such as the cStor operator and the Local PV provisioner interact with NDM by creating a BlockDeviceClaim CR. The BlockDeviceClaim carries properties like nodeName, required capacity, etc. The NDM Operator matches these properties against the available BlockDevices and binds the one that satisfies all the requirements to the BlockDeviceClaim (see the sketch below).
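For illustration, here is a minimal BlockDeviceClaim manifest of the kind such an operator might create; the name, node name, and capacity are placeholder values, and field names can vary slightly across OpenEBS versions:

apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: example-bdc
  namespace: openebs
spec:
  blockDeviceNodeAttributes:
    nodeName: worker-node-1
  resources:
    requests:
      storage: 10Gi

Once this is applied, the NDM Operator looks for an unclaimed, active BlockDevice on worker-node-1 with at least 10Gi of capacity and binds it to the claim; `kubectl get bdc -n openebs` shows the claim status.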

. . . 

NDM Known Issues / Future Development Items

  • BlockDevices are not created for partitions and LVM devices. If you need to use them, you have to create the BlockDevice CR manually. The steps are mentioned in this blog, and a skeleton manifest is sketched below.
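If you go the manual route, the BlockDevice CR looks roughly like the sketch below; treat the names, paths, and sizes as illustrative placeholders and verify the fields against the blog referenced above (note the `ndm.io/managed: "false"` label, which marks the resource as not managed by NDM):

apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  name: blockdevice-manual-example
  namespace: openebs
  labels:
    kubernetes.io/hostname: worker-node-1
    ndm.io/managed: "false"
    ndm.io/blockdevice-type: blockdevice
spec:
  capacity:
    storage: 53687091200
  details:
    deviceType: partition
  nodeAttributes:
    nodeName: worker-node-1
  path: /dev/sdb1
status:
  claimState: Unclaimed
  state: Active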

OK. Let us get started with some common issues reported and how to troubleshoot them.


. . . 

Scenario #1

BlockDevice CR is not created for a device available on my node.

Symptom: I have some disks attached to the node. I installed OpenEBS, but blockdevice resources are not created for the devices.

Troubleshooting:

  1. Check the `lsblk` output on the node.
  2. Get the NDM ConfigMap.
  3. Check whether the mount point of the disk is excluded by the filter configurations in the ConfigMap.
  4. From the `lsblk` output, check whether the blockdevice you want to use is an LVM, software RAID, partition, or LUKS device. NDM currently does not support these types.
  5. If none of the above helps, check the logs of the NDM DaemonSet pod. They record when a disk is detected and at what point it was excluded from blockdevice creation (e.g., `excluded by path-filter`).
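The checks above roughly translate to the commands below; the ConfigMap name, pod label, and device names are defaults/placeholders, so adjust them to your install:

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT,SIZE                      # on the node: is the device visible, and where is it mounted?
kubectl get configmap openebs-ndm-config -n openebs -o yaml    # review the filterconfigs section for excludes
kubectl get pods -n openebs -l name=openebs-ndm -o wide        # find the NDM DaemonSet pod running on that node
kubectl logs -n openebs <ndm-pod-name> | grep -i sdb           # look for messages like "excluded by path-filter"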

Resolution: Update the filter configuration in the ConfigMap and restart the NDM DaemonSet pods. This will create the blockdevices.
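As a sketch of what that looks like (the exact keys in the ConfigMap can differ between OpenEBS versions):

kubectl edit configmap openebs-ndm-config -n openebs
# under filterconfigs, adjust the exclude list, for example:
#   - key: path-filter
#     name: path filter
#     state: true
#     include: ""
#     exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
kubectl delete pods -n openebs -l name=openebs-ndm       # the DaemonSet recreates the pods, which rescan the devices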

. . . 

Scenario #2

After a node reboot, one blockdevice became inactive, and another blockdevice was created.

Symptom: When a node in the cluster rebooted, a blockdevice resource on that node was marked as inactive, and a new resource was created. The new blockdevice has the same details as the old one.

Troubleshooting:

  1. Check the `lsblk` output on the node.
  2. Get the YAML of both blockdevices and compare them.
  3. Check whether `spec.path` is different in the two outputs.
  4. If yes, then the new blockdevice resource was created because the device path changed.
  5. Check whether the `kubernetes.io/hostname` label is different. If yes, then the new blockdevice was created because the hostname of the node changed.
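A quick way to run this comparison (blockdevice names are placeholders):

kubectl get bd -n openebs
kubectl get bd <old-bd-name> -n openebs -o yaml > old-bd.yaml
kubectl get bd <new-bd-name> -n openebs -o yaml > new-bd.yaml
diff old-bd.yaml new-bd.yaml      # focus on spec.path and the kubernetes.io/hostname label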

Resolution: If using cStor, the newly generated BD can be added to both the SPC and the CSP in place of the old BD resource. The storage engine will then claim the new BD resource and start using it.
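A rough sketch of that swap with the non-CSI cStor operators; the resource names are placeholders and the exact field layout depends on your OpenEBS version, so check it against your own SPC/CSP YAML:

kubectl get bd -n openebs                 # note the name of the newly created blockdevice
kubectl edit spc <spc-name>               # replace the old BD name in the block device list with the new one
kubectl edit csp <csp-name>               # update the corresponding blockDeviceName entry for that node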

Root Cause: Whenever the NDM DaemonSet pod on a node shuts down, all the devices on that node are moved to an unknown state. When the pod comes back up, all the devices on that node are first marked as inactive, and then each device is processed again to determine its current status.

NDM uses an md5 sum of the WWN + model + serial of the disk to create its unique name. If none of these fields are available, then NDM uses the device path and the hostname to create the blockdevice. There is a chance that the device path or hostname changes after a reboot. If the path/hostname changes, a new blockdevice resource is created, and the old one remains in the inactive state.
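You can inspect these identifiers directly on the node; the snippet below only illustrates the idea of deriving a stable ID from WWN/model/serial, not NDM's exact implementation:

lsblk -o NAME,WWN,MODEL,SERIAL /dev/sdb
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_WWN|ID_MODEL|ID_SERIAL'
# conceptually, the unique name is something like:
echo -n "${WWN}${MODEL}${SERIAL}" | md5sum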

. . . 

Scenario #3

BlockDevices are created for disks that are already in use and have the OS installed

Symptom: NDM created blockdevice resources for disks that are already used for OS partitions. By default, NDM excludes blockdevices that are mounted at `/`, `/boot`, and `/etc/hosts`. If these mount points are on an LVM or software RAID device, NDM is not able to identify that.
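To confirm whether this is the case on a node, check what the root filesystem actually sits on:

lsblk -o NAME,TYPE,MOUNTPOINT
# if "/" appears under a device of TYPE lvm or raid* rather than part,
# NDM cannot tie the mount point back to the underlying disk, so that disk is not excluded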

Resolution: Support for LVM and software RAID is in the design phase. Once it is supported, the issue will be resolved.

. . . 

Scenario #4

Only one blockdevice is created when devices are connected in a multipath configuration

Symptom: A disk is attached to a node in a multipath configuration, i.e., both `sdb` and `sdc` refer to the same device, but a blockdevice resource is created only for `sdc`.

Resolution: Support for detecting disks in a multipath configuration, and for attaching the same disk to multiple nodes, will be available in a future version of NDM.

Root Cause: NDM generates the UID for disk identification using disk details such as the WWN, serial, etc., fetched from the disk. For a disk attached in a multipath configuration, these details are the same for both `sdb` and `sdc`. NDM therefore first creates a blockdevice for `sdb` and then moves on to create one for `sdc`; at this stage, it finds that a blockdevice with that UID already exists and updates the blockdevice information with the new path `sdc`. The result is a single blockdevice that points to `sdc`.
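You can verify that both paths point at the same physical disk by comparing their udev identifiers (device names are placeholders):

udevadm info --query=property --name=/dev/sdb | grep -E 'ID_WWN|ID_SERIAL'
udevadm info --query=property --name=/dev/sdc | grep -E 'ID_WWN|ID_SERIAL'
# identical WWN/serial values mean NDM computes the same UID for both paths,
# which is why only one blockdevice resource remains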

. . . 

Scenario #5

Only a single BlockDevice resource is created in a multi-node Kubernetes cluster on GKE.

Symptom: On a multi-node Kubernetes cluster in GKE, with an external GPD attached to each node, NDM creates only one blockdevice resource instead of one blockdevice resource per node.

Troubleshooting:

  1. Was the GPD added using the gcloud CLI or the Google Cloud Console web UI?
  2. If the disk was added using the gcloud CLI, check whether the `--device-name` flag was specified while attaching the disk.
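One way to check which device names Google assigned to the disks already attached to a node (the node name is a placeholder; add --zone/--project flags as needed):

gcloud compute instances describe <node-name> --format="value(disks[].deviceName)"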

Resolution: The command to add a disk using the gcloud CLI should be:

gcloud compute instances attach-disk <node-name> --disk=disk-1 --device-name=disk-1

Root Cause: The gcloud CLI uses the value provided in the `--device-name` flag as the serial number of the GPD when it is attached to the node. If the flag is left blank, Google assigns a default serial number that is unique only within that node. When multiple nodes are present and NDM generates the UID for the blockdevice, the disks on the different nodes have the same serial number and thus the same UID.

The NDM pod on one node creates the blockdevice resource, and when the NDM daemon on another node tries to create its resource, it finds that a resource with that UID already exists and simply updates it.
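From inside the node, the assigned device name is also visible under /dev/disk/by-id/, which is a quick way to confirm that each node's disk now carries a distinct identifier:

ls -l /dev/disk/by-id/ | grep google-
# e.g. google-disk-1 -> ../../sdb  (the suffix after "google-" is the --device-name value)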

Thanks for reading. Please let me know how useful you found this article, and leave your feedback or questions in the comments!
