About a month ago, D2iQ (known as Mesosphere at that time) announced the upcoming availability of two new products for the Kubernetes community. More on that announcement here.
The first is Konvoy, a managed Kubernetes platform. Konvoy provides a standalone production-ready Kubernetes cluster with best-in-class components (CNCF projects) for operation and lifecycle management. The solution enables developer agility and faster time to market for new Kubernetes deployments.
The second solution is Kommander, which will enable better visibility and control across multiple Kubernetes clusters. Both solutions are expected to become generally available around late Q3 or early Q4 2019.
Both Konvoy and Kommander are complementary to KUDO, an open-source project created by D2iQ in December that provides a declarative approach to building production-grade Kubernetes operators covering the entire application lifecycle.
In this blog post, I provide step-by-step instructions on how to quickly configure a stateful-workload-ready Kubernetes cluster using Konvoy and OpenEBS.
Now, let’s take a look at the requirements.
Minimum requirements for a multi-node cluster:
Hardware
No bare metal nodes were harmed in the making of this blog.
I will install Konvoy on AWS. The default settings create 3x t3.large master and 4x t3.xlarge worker instances (7 in total).
Note: Separately, you can also spin up a demo environment on your laptop using Docker by running konvoy provision --provisioner=docker, but that is not covered in this post.
Software
As I mentioned above, installation requires an AWS account and an AWS user with a policy that has permission to use the related services.
You will also need awscli and kubectl installed before we begin.
I will be using Ubuntu 18.04 LTS as my workstation host. If you already have the prerequisites ready, you can jump to the Installing Konvoy section.
$ sudo apt-get update && sudo apt-get install awscli jq -y
$ aws configure
$ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x kops-linux-amd64
$ sudo mv kops-linux-amd64 /usr/local/bin/kops
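The steps above don't install kubectl itself. If you don't have it yet, one way to get it on Linux (a sketch using the upstream release binaries; the stable.txt pointer resolves to the current release) is:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl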
$ kubectl version --short
Client Version: v1.15.0
Server Version: v1.15.0
Next, extract the Konvoy package you downloaded and move the binaries into your path:
$ tar -xf konvoy_v0.6.0_linux.tar.bz2
$ sudo mv ./konvoy_v0.6.0/* /usr/local/bin/
$ konvoy --version
{
  "Version": "v0.6.0",
  "BuildDate": "Thu Jul 25 04:55:40 UTC 2019"
}
Let’s first bring up the stateless components, the ones that don’t require persistent storage. Generate the default configuration and open it for editing:
$ konvoy init
$ nano cluster.yaml
---
kind: ClusterProvisioner
apiVersion: konvoy.mesosphere.io/v1alpha1
metadata:
  name: ubuntu
  creationTimestamp: "2019-07-31T23:04:29.030542867Z"
spec:
  provider: aws
  aws:
    region: us-west-2
    availabilityZones:
    - us-west-2c
    tags:
      owner: ubuntu
  adminCIDRBlocks:
  - 0.0.0.0/0
  nodePools:
  - name: worker
    count: 4
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 160
      imagefsVolumeType: gp2
      type: t3.xlarge
  - name: control-plane
    controlPlane: true
    count: 3
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 160
      imagefsVolumeType: gp2
      type: t3.large
  - name: bastion
    bastion: true
    count: 0
    machine:
      rootVolumeSize: 10
      rootVolumeType: gp2
      type: t3.small
  sshCredentials:
    user: centos
    publicKeyFile: ubuntu-ssh.pub
    privateKeyFile: ubuntu-ssh.pem
  version: v0.6.0
...
In the same file, note that the AWS EBS provisioners and the storage-dependent add-ons (Elasticsearch, fluentbit, Kibana, Prometheus, Velero) are disabled for now; we will enable them once OpenEBS is in place:
addons:
  configVersion: v0.0.47
  addonsList:
  - name: awsebscsiprovisioner
    enabled: false
  - name: awsebsprovisioner
    enabled: false
  - name: dashboard
    enabled: true
  - name: dex
    enabled: true
  - name: dex-k8s-authenticator
    enabled: true
  - name: elasticsearch
    enabled: false
  - name: elasticsearchexporter
    enabled: false
  - name: fluentbit
    enabled: false
  - name: helm
    enabled: true
  - name: kibana
    enabled: false
  - name: kommander
    enabled: true
  - name: konvoy-ui
    enabled: true
  - name: localvolumeprovisioner
    enabled: false
  - name: opsportal
    enabled: true
  - name: prometheus
    enabled: false
  - name: prometheusadapter
    enabled: false
  - name: traefik
    enabled: true
  - name: traefik-forward-auth
    enabled: true
  - name: velero
    enabled: false
version: v0.6.0
$ konvoy up
Kubernetes cluster and addons deployed successfully!
Run `konvoy apply kubeconfig` to update kubectl credentials. Navigate to the URL below to access various services running in the cluster.
https://a67190d830fd64957884d49fd036ea78-841155633.us-west-2.elb.amazonaws.com/ops/landing
And login using the credentials below.
Username: dazzling_swirles
Password: DGwwyTFNCt6h2JSxTfqzD8PUWSLo1qPLCZJ82kQmMmaNctodWa3gd8W0O2SPFC
If the cluster was recently created, the dashboard and services may take a few minutes to be accessible.
$ konvoy apply kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-129-203.us-west-2.compute.internal Ready <none> 4m49s v1.15.0
ip-10-0-129-216.us-west-2.compute.internal Ready <none> 4m49s v1.15.0
ip-10-0-130-203.us-west-2.compute.internal Ready <none> 4m49s v1.15.0
ip-10-0-131-148.us-west-2.compute.internal Ready <none> 4m49s v1.15.0
ip-10-0-194-238.us-west-2.compute.internal Ready master 7m8s v1.15.0
ip-10-0-194-76.us-west-2.compute.internal Ready master 5m27s v1.15.0
ip-10-0-195-6.us-west-2.compute.internal Ready master 6m31s v1.15.0
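Konvoy can also verify cluster health for you (konvoy check is part of the CLI; see the command reference at the end of this post):
$ konvoy check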
Now, let’s bring up the stateful components on OpenEBS persistent storage, using the cStor engine. First, set a few variables describing the cluster, then install and start the iSCSI initiator on every worker node (cStor serves its volumes over iSCSI):
export CLUSTER=ubuntu-bb0c       # name of your cluster
export REGION=us-west-2          # AWS region name
export KEY_FILE=ubuntu-ssh.pem   # path to the private key file
IPS=$(aws --region=$REGION ec2 describe-instances | jq --raw-output ".Reservations[].Instances[] | select((.Tags | length) > 0) | select(.Tags[].Value | test(\"$CLUSTER-worker\")) | select(.State.Name | test(\"running\")) | [.PublicIpAddress] | join(\" \")")
for ip in $IPS; do
  echo $ip
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo yum install iscsi-initiator-utils -y
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo systemctl enable iscsid
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo systemctl start iscsid
done
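To spot-check that the initiator is actually up, you can loop over the same hosts and query the service state (each node should print active):
for ip in $IPS; do
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo systemctl is-active iscsid
done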
Next, create one EBS volume per worker and attach it; these disks will back the cStor pool:
export DISK_SIZE=200
aws --region=$REGION ec2 describe-instances | jq --raw-output ".Reservations[].Instances[] | select((.Tags | length) > 0) | select(.Tags[].Value | test(\"$CLUSTER-worker\")) | select(.State.Name | test(\"running\")) | [.InstanceId, .Placement.AvailabilityZone] | \"\(.[0]) \(.[1])\"" | while read instance zone; do
  echo $instance $zone
  volume=$(aws --region=$REGION ec2 create-volume --size=$DISK_SIZE --volume-type gp2 --availability-zone=$zone --tag-specifications="ResourceType=volume,Tags=[{Key=string,Value=$CLUSTER},{Key=owner,Value=michaelbeisiegel}]" | jq --raw-output .VolumeId)
  sleep 10
  aws --region=$REGION ec2 attach-volume --device=/dev/xvdc --instance-id=$instance --volume-id=$volume
done
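Each worker should now have an extra 200 GB disk. On t3 (Nitro-based) instances the attached EBS volume surfaces as an NVMe device rather than /dev/xvdc, which is why the path filter below excludes only the first two NVMe devices (the root and imagefs volumes). A quick way to confirm the attachment on one of the workers (substitute one of the public IPs printed earlier):
ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@<worker-public-ip> lsblk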
Now install OpenEBS. Download the operator manifest:
$ wget https://openebs.github.io/charts/openebs-operator-1.1.0.yaml
Before applying it, edit the Node Disk Manager (NDM) filter configuration in the manifest so that the workers’ root and imagefs devices (/dev/nvme0n1 and /dev/nvme1n1) are excluded, leaving only the EBS data volumes we just attached:
...
filterconfigs:
  - key: vendor-filter
    name: vendor filter
    state: true
    include: ""
    exclude: "CLOUDBYT,OpenEBS"
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/nvme0n1,/dev/nvme1n1"
...
$ kubectl apply -f openebs-operator-1.1.0.yaml
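Give the OpenEBS control plane a minute to come up; you can watch the pods in the openebs namespace:
$ kubectl get pods -n openebs
Once the node-disk-manager pods are running, NDM should have discovered the four attached EBS disks as block devices: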
$ kubectl get blockdevices -n openebs
NAME SIZE CLAIMSTATE STATUS AGE
blockdevice-16b54eaf38720b5448005fbb3b2e803e 107374182400 Unclaimed Active 23s
blockdevice-2403405be0d70c38b08ea6e29c5b0a2f 107374182400 Unclaimed Active 14s
blockdevice-6d9b524c340bcb4dcecb69358054ec31 107374182400 Unclaimed Active 22s
blockdevice-ecafd0c075fb4776f78bcde26ee2295f 107374182400 Unclaimed Active 23s
Next, create a StoragePoolClaim that builds a striped cStor pool across the four block devices (update the blockDeviceList with the names from your own cluster):
cat <<EOF | kubectl apply -f -
kind: StoragePoolClaim
apiVersion: openebs.io/v1alpha1
metadata:
  name: cstor-ebs-disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
          memory: 2Gi
      - name: PoolResourceLimits
        value: |-
          memory: 4Gi
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-16b54eaf38720b5448005fbb3b2e803e
    - blockdevice-2403405be0d70c38b08ea6e29c5b0a2f
    - blockdevice-6d9b524c340bcb4dcecb69358054ec31
    - blockdevice-ecafd0c075fb4776f78bcde26ee2295f
EOF
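To verify the claim and the resulting pools, you can list them by their OpenEBS short resource names (spc for the StoragePoolClaim, csp for the per-node cStor pools); the pools should report a Healthy status once initialized:
$ kubectl get spc
$ kubectl get csp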
Then define a StorageClass that provisions cStor volumes from that pool with three replicas, and mark it as the cluster default:
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-cstor-default
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-ebs-disk-pool"
      - name: ReplicaCount
        value: "3"
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openebs.io/provisioner-iscsi
EOF
$ kubectl get sc
NAME PROVISIONER AGE
openebs-cstor-default (default) openebs.io/provisioner-iscsi 4s
openebs-device openebs.io/local 26m
openebs-hostpath openebs.io/local 26m
openebs-jiva-default openebs.io/provisioner-iscsi 26m
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 26m
With a default StorageClass in place, edit cluster.yaml again and enable the add-ons that need persistent storage (Elasticsearch and its exporter, fluentbit, Kibana, Prometheus and its adapter, Velero):
addons:
  configVersion: v0.0.47
  addonsList:
  - name: awsebscsiprovisioner
    enabled: false
  - name: awsebsprovisioner
    enabled: false
  - name: dashboard
    enabled: true
  - name: dex
    enabled: true
  - name: dex-k8s-authenticator
    enabled: true
  - name: elasticsearch
    enabled: true
  - name: elasticsearchexporter
    enabled: true
  - name: fluentbit
    enabled: true
  - name: helm
    enabled: true
  - name: kibana
    enabled: true
  - name: kommander
    enabled: true
  - name: konvoy-ui
    enabled: true
  - name: localvolumeprovisioner
    enabled: false
  - name: opsportal
    enabled: true
  - name: prometheus
    enabled: true
  - name: prometheusadapter
    enabled: true
  - name: traefik
    enabled: true
  - name: traefik-forward-auth
    enabled: true
  - name: velero
    enabled: true
version: v0.6.0
Then run konvoy up once more to deploy them:
$ konvoy up
STAGE [Deploying Enabled Addons]
helm [OK]
dashboard [OK]
opsportal [OK]
traefik [OK]
fluentbit [OK]
kommander [OK]
prometheus [OK]
elasticsearch [OK]
traefik-forward-auth [OK]
konvoy-ui [OK]
dex [OK]
dex-k8s-authenticator [OK]
elasticsearchexporter [OK]
kibana [OK]
prometheusadapter [OK]
velero [OK]
STAGE [Removing Disabled Addons]
awsebscsiprovisioner [OK]
Kubernetes cluster and addons deployed successfully!
Run `konvoy apply kubeconfig` to update kubectl credentials.
Navigate to the URL below to access various services running in the cluster.
https://a67190d830fd64957884d49fd036ea78-841155633.us-west-2.elb.amazonaws.com/ops/landing
And login using the credentials below.
Username: dazzling_swirles
Password: DGwwyTFNCt6h2JSxTfqzD8PUWSLo1qPLCZJ82kQmMmaNctodWa3gd8W0O2SPFCIc
If the cluster was recently created, the dashboard and services may take a few minutes to be accessible.
All the persistent volumes for Elasticsearch, Prometheus, and Velero’s Minio are now provisioned by the openebs-cstor-default StorageClass:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0b6031e2-38c7-49e0-98b6-94ba9decda19 4Gi RWO Delete Bound kubeaddons/data-elasticsearch-kubeaddons-master-2 openebs-cstor-default 18m
pvc-0fa74e6d-978a-47e6-961d-9ff6601702e6 10Gi RWO Delete Bound velero/data-minio-3 openebs-cstor-default 15m
pvc-18cfe51f-0a2c-409a-ba89-dbd89ad1c9f5 10Gi RWO Delete Bound velero/data-minio-2 openebs-cstor-default 15m
pvc-442dad69-56e5-44f1-8c43-cc90028c1d6d 10Gi RWO Delete Bound velero/data-minio-1 openebs-cstor-default 16m
pvc-69b30e82-6e47-4061-9360-54591a3cb593 30Gi RWO Delete Bound kubeaddons/data-elasticsearch-kubeaddons-data-0 openebs-cstor-default 21m
pvc-6d837a95-8607-4956-b3f5-bce4e42f5165 10Gi RWO Delete Bound velero/data-minio-0 openebs-cstor-default 16m
pvc-83498bcb-6f7e-45b8-8d25-d6f7af46af9a 4Gi RWO Delete Bound kubeaddons/data-elasticsearch-kubeaddons-master-1 openebs-cstor-default 19m
pvc-9df994f7-38af-4cb2-b0fc-2d314245a965 50Gi RWO Delete Bound kubeaddons/db-prometheus-prometheus-kubeaddons-prom-prometheus-0 openebs-cstor-default 21m
pvc-f1c0e0ad-f03e-4212-81b5-f4b9e4b70f10 30Gi RWO Delete Bound kubeaddons/data-elasticsearch-kubeaddons-data-1 openebs-cstor-default 19m
pvc-f92ca2d2-b808-4dc5-b148-362b93ef6d20 4Gi RWO Delete Bound kubeaddons/data-elasticsearch-kubeaddons-master-0 openebs-cstor-default 21m
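The add-ons got their volumes automatically. If you want to exercise the default StorageClass yourself, a minimal test claim (the name demo-cstor-claim is just an example) looks like this:
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-cstor-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
Because openebs-cstor-default is the default class, the claim needs no explicit storageClassName; kubectl get pvc should show it Bound shortly after creation.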
Finally, here is a quick reference of what else the Konvoy CLI can do:
Available Commands:
apply Updates certain configuration in the existing kubeconfig file
check Run checks on the health of the cluster
completion Output shell completion code for the specified shell (bash or zsh)
deploy Deploy a fully functioning Kubernetes cluster and addons
diagnose Creates a diagnostics bundle of the cluster. Logs are included for the last 48h.
down Destroy the Kubernetes cluster
get Get cluster related information
help Help about any command
init Create the provision and deploy configuration with default values
provision Provision the nodes according to the provided Terraform variables file
reset Remove any modifications to the nodes made by the installer, and cleanup file artifacts
up Run provision, and deploy (kubernetes, container-networking, and addons) to create or update a Kubernetes cluster reflecting the provided configuration and inventory files
version Version for konvoy
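When you are done experimenting, the same CLI tears the whole cluster down:
$ konvoy down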
If you have any questions, feedback, or a topic that is not covered here, feel free to comment on our blog or contact me on Twitter @muratkarslioglu.
To be continued…