KubeVirt: Running Kubernetes Clusters Inside Kubernetes

What is KubeVirt?

At LiveWyer we’re thrilled to see the velocity of new open source projects that contribute to the Kubernetes / Cloud Native ecosystem. In a new series of blog posts we’re going to be taking a deep dive into the CNCF Sandbox. Each month we’ll take on a new project from the sandbox - installing it, having a look around and seeing if we can’t manage to do something cool with it - and posting our results. We’re starting with KubeVirt, a tool that can be used to create and manage Virtual Machines (VMs) within a Kubernetes cluster.

KubeVirt is intended to be used for VM-based workloads that are difficult to containerise. However, I want to demonstrate using KubeVirt to create a Kubernetes cluster within a Kubernetes cluster.

This demonstration will walk through the manual process. In practice we’d want to automate it, so that we can automatically create disposable, customisable Kubernetes clusters to run automated tests against. In particular, we’d be able to automatically test potential changes at the Kubernetes level/layer (i.e. cluster-wide changes) before they are applied to a live Kubernetes cluster.

KubeVirt Setup

For the demonstration I have:

  1. Deployed KubeVirt v0.38.1 onto my Kubernetes cluster
  2. Deployed KubeVirt’s Containerized-Data-Importer (CDI) v1.30.0
  3. Installed the kubectl virt plugin (virtctl) to allow me to perform operations on the VMs powered by KubeVirt
  4. Set up a repository containing all resources used in this demonstration
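
For reference, installing these components follows the upstream quickstarts. Below is a sketch of the commands involved (the release manifests come from the KubeVirt and CDI GitHub release pages, and the kubectl plugin is installed via krew):

export KUBEVIRT_VERSION=v0.38.1
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

export CDI_VERSION=v1.30.0
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml

kubectl krew install virt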

With the above setup, I’ll first demonstrate how to create a VM with two Kubernetes custom resources: VirtualMachine and DataVolume. Then, I’ll combine both resources with other tools to deploy a “nested” Kubernetes cluster.

Container Disk Images & DataVolumes

Before we can create a VM, we need a disk image for our Operating System (OS) of choice. Therefore, I first need to create a DataVolume; deploying this object will have CDI import our chosen disk image into our Kubernetes cluster.

My OS of choice is Ubuntu 20.04, so I’ve found the URL for the official Ubuntu 20.04 cloud image. That URL is the source we need to include in the manifest file for my DataVolume.
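
As a rough sketch, a DataVolume manifest using that cloud image as its source could look like the following (the name ubuntu-dv and the 10Gi request are illustrative values, not taken from the repository):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-dv
spec:
  source:
    http:
      # Official Ubuntu 20.04 (Focal) cloud image
      url: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

Once applied, kubectl get dv ubuntu-dv reports the import progress, moving through phases such as ImportInProgress until it reaches Succeeded.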

The video below shows what you’ll see once you deploy a DataVolume.

Once the disk image has been imported successfully, we can use the DataVolume to create a single VM and, with it, a single node Kubernetes cluster.

A Single-Node Kubernetes Cluster Running in a Kubernetes Cluster

With a DataVolume available we can create a manifest file for a VirtualMachine that references it. Below is a snippet that shows how to reference a created DataVolume.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntuvm
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
      volumes:
      - name: containerdisk
        dataVolume:
          name: DATA_VOLUME_NAME # matches the metadata.name of the DataVolume created earlier

Here is an example of an actual manifest file for the VirtualMachine. Note, there is a placeholder value in the manifest file for a public SSH key; you’ll need to replace that value if you want to SSH into the VM. The last video in this blog will showcase a method to SSH into a KubeVirt VM in a Kubernetes cluster.
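
The SSH key is typically injected via a cloud-init volume in the VirtualMachine manifest. A minimal sketch of what that section might look like (the linked manifest is the authoritative version; YOUR_PUBLIC_SSH_KEY is the placeholder to replace):

      volumes:
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #cloud-config
            ssh_authorized_keys:
            - YOUR_PUBLIC_SSH_KEY # placeholder: replace with your public SSH key

Like containerdisk earlier, the cloudinitdisk volume also needs a matching entry under domain.devices.disks.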

The video below demonstrates the deployment of the VM and connecting to the console of the newly created VM.
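
Assuming the manifest is saved as ubuntuvm.yaml (a hypothetical filename), the steps in the video boil down to something like:

kubectl apply -f ubuntuvm.yaml  # create the VirtualMachine object
kubectl virt start ubuntuvm     # start the VM (unnecessary if spec.running is true)
kubectl virt console ubuntuvm   # attach to the VM's serial console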

Once we have a VM, we can use it to create a single node Kubernetes cluster using k3s.
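
k3s keeps this step simple: once connected to the VM, a single node cluster is one command away.

curl -sfL https://get.k3s.io | sh -  # install and start k3s as a single node server
sudo k3s kubectl get nodes           # verify the node registers and becomes Ready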

Dynamic DataVolumes & VirtualMachine “ReplicaSets”

Now that we’re able to create a single node Kubernetes cluster within a Kubernetes cluster, the next step is to create a multi-node Kubernetes cluster within a Kubernetes cluster.

Unfortunately, we cannot use a single DataVolume for multiple VMs, so we’re going to dynamically create a DataVolume whenever a new VM is created. We can do this by adding a DataVolumeTemplate to the manifest file for the VirtualMachine, as shown below.
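
As a sketch, the relevant addition to the VirtualMachine spec looks roughly like this (names are illustrative; the DataVolume is created, and owned, by the VM itself):

spec:
  dataVolumeTemplates:
  - metadata:
      name: ubuntu-dv
    spec:
      source:
        http:
          url: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

The volumes section of the VM template then references ubuntu-dv by name, just as in the earlier snippet.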

Ideally, we would use a replica set to create multiple VMs. However, as of v0.38.1 (and at the time of writing), a VirtualMachineReplicaSet custom resource does not exist (here you can find the API specification for KubeVirt v0.38.1). There is a VirtualMachineInstanceReplicaSet, but it does not currently support DataVolumeTemplates.

As a workaround, I’ve created a helm chart that can deploy multiple identical VMs, with each one using a DataVolumeTemplate to dynamically create DataVolumes.
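
The chart lives in the repository linked above; the core idea is a template that loops over a replica count, stamping out one VirtualMachine per iteration, each with its own uniquely named DataVolumeTemplate. A simplified sketch (value names such as replicaCount and namePrefix are illustrative, not necessarily those used in the chart):

{{- range $i := until (int $.Values.replicaCount) }}
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: {{ $.Values.namePrefix }}-{{ $i }}
spec:
  dataVolumeTemplates:
  - metadata:
      name: {{ $.Values.namePrefix }}-{{ $i }}-dv
    # ...DataVolume spec as shown earlier...
  # ...rest of the VirtualMachine spec...
{{- end }}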

The video below shows the deployment of this helm chart. Note, it may take roughly 15 minutes for all the data volumes to finish importing the image.

A Multi-Node Kubernetes Cluster Running in a Kubernetes Cluster

Now that we have multiple VMs running inside our Kubernetes cluster, we can use them to create a multi-node Kubernetes Cluster. To do so, we’ll be using the k3sup tool, which requires us to be able to SSH into every VM.

To SSH into these VMs, I’ll be using a pod running the linuxserver/openssh-server image. This pod needs to be scheduled on a node with no VM pods running on it; otherwise, you won’t be able to SSH into all of the VMs from within the openssh pod.
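
A minimal sketch of such a pod (openssh-client is a hypothetical name, and NODE_NAME is the placeholder for a node with no VM pods on it):

apiVersion: v1
kind: Pod
metadata:
  name: openssh-client
spec:
  nodeName: NODE_NAME # placeholder: pin the pod to a node with no VM pods
  containers:
  - name: openssh
    image: linuxserver/openssh-server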

You can find an example manifest file for the pod I used (you’ll need to replace the placeholder value for the node name), and the process for using these VMs to create the Kubernetes cluster is shown in the video below. Before the recording, I exec’d into the openssh pod, installed k3sup and added the required SSH key.
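
From inside the pod, bootstrapping the cluster with k3sup comes down to an install against the first VM and a join for each additional one. A sketch, assuming the VM IPs are known, the ubuntu user is configured via cloud-init, and the SSH key sits at the default path:

k3sup install --ip SERVER_VM_IP --user ubuntu --ssh-key ~/.ssh/id_rsa  # first VM becomes the k3s server
k3sup join --ip AGENT_VM_IP --server-ip SERVER_VM_IP --user ubuntu --ssh-key ~/.ssh/id_rsa  # repeat per remaining VM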

As showcased in the video above, we were able to successfully create a three node Kubernetes cluster within a Kubernetes cluster. Nesting clusters like this may seem like little more than a novelty, but we believe it can be used effectively to test potential changes at the Kubernetes level/layer.

Unfortunately, changes with a cluster-wide scope have the potential to negatively impact all (or most) workloads running in the cluster; a single misconfiguration at that level can, for example, break a component that every workload depends on.

With the automatic creation of Kubernetes clusters and the ability to perform automated tests on them, we’ll be able to effectively test changes at the Kubernetes level/layer and significantly reduce the risk of applying a change that breaks the functionality of a live cluster.

A DevOps Engineer’s Final Thoughts on KubeVirt

This is as far as we’re going to go with KubeVirt today, but we’ll be trying to make use of KubeVirt for our internal projects and keenly following the project to see how it matures. If you try this out yourself, or there’s anything you feel we should have done differently, then please contact us.