KubeCon EU 2020 - Virtual, Storage chatter round-up

KubeCon EU 2020 - Virtual may have set a great precedent for showcasing how much the technology for conducting virtual conferences has evolved, and how far it has yet to go. The euphoric and uplifting vibe I have enjoyed in past keynotes, one of the few sessions I always attend live, was somewhat muted this time. On the plus side, I could finally bring my family along to the keynote sessions.


One aspect of KubeCon EU 2020 - Virtual simulated the hallway conversations via the CNCF Slack channels. Another aspect of the KubeCon sessions, reaching out to the speakers and asking them questions, also moved into the Slack conversations.

What I didn't expect, though, and what was quite shocking, is that after the virtual event the 500+ people in the Slack channel stopped discussing anything further. It is as if everyone just walked away!

First things first! There is an ebook from The New Stack, based on the 2019 CNCF survey on cloud native technology adoption, showing that more and more folks are using Kubernetes for running stateful applications, and that over the years they are finding it easier to run stateful workloads in Kubernetes.

[Chart: Kubernetes users who face challenges using/deploying containers]

The digital dust generated in the slack channel lingers on and is filled with some pretty interesting tidbits of information.

The following information is not organized by time or thread, but is a slightly abridged version of the discussions that took place in the #2-kubecon-storage Slack channel.

  • As has been the case in previous KubeCons, most storage-related discussions are led by the folks building storage products or projects, or by that person from a sponsor company whose primary job is to lead the end user from the hallway into their sponsor booth. One has to see past those annoyances; honestly, sponsor ads are to be expected, as they are instrumental in making sure these events happen. 
  • Storage is at the core of maintaining high availability for any service. There is no such thing as a stateless service; it is just that state is assumed to be better managed outside of the Kubernetes cluster. This perception is changing, and more and more users now realize that Kubernetes is great for running stateful workloads as well, thanks to many initiatives, most notably CSI. 
  • Even with CSI, a war is being waged within organizations between DevOps and traditional IT over agility and control. While Kubernetes has become a boon for DevOps, it is turning out to be a bane for traditional IT, especially storage administrators who want to control how LUNs / volumes are created on the storage boxes, the most expensive pieces of hardware in their datacenter. DevOps is winning the war! Organizations are being forced to move towards Kubernetes, just as they had to overcome their resistance to moving towards the cloud a decade or two ago. 
  • For those who have adopted solutions that connect to storage appliances via CSI, the question remains: who manages the StorageClasses? Is it DevOps, with the developers' knowledge, or the storage IT teams that are reluctant to move towards Kubernetes? Applications' workload characteristics are changing. The declarative API has caught on, enabling GitOps and driving up agility. The compute and network stacks have been, and are being, rewritten, and storage tends to become the bottleneck. Traditional storage vendors are generating a lot of buzz around using CSI to brand their storage as container-ready, while working on projects (or rewriting their storage stacks) to hit the market in a few years. Traditional IT and storage systems are catching up with the Kubernetes game. 
  • Meanwhile, DIY storage has become a "thing." There is no need to call the salespeople and get into vendor politics; developers can pick their favorite storage and stateful application. There is no dependency on cloud services that come with a fixed version. This DIY storage is what we call "Container Attached Storage," built from the ground up to meet the demands of stateful workloads running within Kubernetes. 
  • The question of scale and upgrades (both hardware and software), and how to protect against complete disasters, seems to be the next level of barriers that users have to learn to overcome. As you sift through, you will notice everyone from those who swear to only ever run DIY storage to those apprehensive about taking complete ownership of their data. Who can get past the fat fingers of fate! The discussions around scale put this point to rest; it was succinctly summarized that a typical user can easily get by with 8 to 10 nodes doing both storage and compute. There are some specialized cases where dedicated storage starts to feel like the better choice. However, as the scale crosses a certain threshold, it makes better economic and operational sense to have all the worker nodes with local storage and run the stateful and stateless workloads together. 
  • Local storage is "the" thing. Three out of five talks focused on stateful applications, and the best practices included recommendations around using local storage. The community's request is clearly to further enhance Kubernetes and CSI to treat storage constraints as first-class when scheduling workloads: for example, expressing volume accessibility as a preference rather than a hard constraint so workloads are scheduled nearer to the storage for better performance, having the scheduler consider the available capacity on a per-node level, and adding hooks for performing backup and recovery operations. 
  • Any storage discussion would remain unfinished without bringing in data protection. I am usually wary of unified storage that claims to offer every feature imaginable. It may appeal to a specific audience, but the new DevOps personas tend to pick the best of the solutions and customize them further. Avoid lock-in at all costs. Much of the power of Kubernetes adoption comes from its open source nature; it surprises me that people would add closed-source infrastructure components, especially when opting for the DIY option. 
  • Kubernetes - everywhere - at home and at the office, i.e., your data center. Now that most of us work from home, becoming self-reliant with open source technology and building smarter homes is becoming more of a reality. "Why pay for streaming when I can host my own streaming server," as one user described it, may be the push required to get more and more users interested in running Kubernetes and stateful workloads on ARM/home clusters. When you trust the technology to power your home, you end up trusting it to run your business as well. An interesting blooper in the Slack chatter around the home use case was a commercial (freemium) product advocate using an open source technology to power his home cluster. 
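The StorageClass and local-storage points above can be made concrete with a minimal sketch. The manifest below (all names and paths are hypothetical, for illustration only) shows the pattern several talks recommended: a StorageClass with delayed binding, so the scheduler can account for where a local volume actually lives before placing the pod.

```yaml
# Hypothetical example: local storage with topology-aware scheduling.
# WaitForFirstConsumer delays volume binding until a pod is scheduled,
# letting the scheduler consider the volume's node when placing the pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd                            # hypothetical name
provisioner: kubernetes.io/no-provisioner    # static local PVs, no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer
---
# A statically created local PersistentVolume, pinned to a single node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1                       # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd0                    # hypothetical device path
  nodeAffinity:                              # ties the PV to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]             # hypothetical node name
```

Whoever owns this StorageClass definition, DevOps or storage IT, effectively answers the "who manages the storage classes" question for their organization.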

Closing this summary in KubeCon EU 2020 - Virtual style, where keynote speakers urged everyone to introduce themselves in as many ways, and as many times, as possible: here is something about myself. I am a co-founder of MayaData, focusing on open source projects that help folks run data on Kubernetes confidently. I helped found OpenEBS and Litmus, which have now become CNCF projects. 

I am shy of public speaking, and this virtual format is probably just what I needed to make my speaking debut at three virtual conferences this season, something that would have been physically impossible with all of them happening at different locations and time zones within such a short period. The event organizing team really made the speakers feel like rock stars, with personalized follow-ups and private 1:1 training sessions. 

I was lucky enough to present the origins of OpenEBS, the driving factors, and how users have deployed OpenEBS in production. Thanks to the community's vibrant support, OpenEBS is the most popular open source Container Attached Storage solution as per the CNCF survey. You can always check out the slides from the talk here, or reach out to me on the Kubernetes Slack #openebs channel to continue the conversation.

Don Williams
Don is the CEO of MayaData and has been leading the company for the last year. He has an exceptional record of accomplishments leading technology teams for organizations ranging from private equity-backed start-ups to large, global corporations. He has deep experience in engineering, operations, and product development in highly technical and competitive marketplaces. His extensive professional network across several industries, large corporations, and government agencies is a significant asset to early-stage businesses, often essential to achieving product placement, growth, and position for potential exit strategies.
Kiran Mova
Kiran evangelizes open culture and open-source execution models, and is a lead maintainer of and contributor to the OpenEBS project. He is passionate about Kubernetes and storage orchestration, and is a co-founder and Chief Architect at MayaData Inc.
Murat Karslioglu
VP @OpenEBS & @MayaData_Inc. Murat Karslioglu is a serial entrepreneur, technologist, and startup advisor with over 15 years of experience in storage, distributed systems, and enterprise hardware development. Prior to joining MayaData, Murat worked at Hewlett Packard Enterprise / 3PAR Storage on various advanced development projects, including storage file stack performance optimization and the storage management stack for HPE's hyper-converged solution. Before joining HPE, Murat led virtualization and OpenStack integration projects within the Nexenta CTO Office. Murat holds a Bachelor's Degree in Industrial Engineering from Sakarya University, Turkey, as well as a number of IT certifications. When he is not in his lab, he loves to travel, advise startups, and spend time with his family. Lives to innovate! Opinions my own!