What is Cluster API?
Cluster API (CAPI) is a Kubernetes sub-project that brings declarative APIs to cluster creation, configuration, and management. It programmatically configures and deploys Kubernetes clusters on a variety of infrastructures, letting you provision clusters on demand. It also integrates well with many tools, one of which is FluxCD, a continuous delivery tool for Kubernetes.
In this blog post we will cover these main concepts:
- Overview of CAPI
- Use cases (including how we use this internally)
- Integration with other tools to make your life easier
We also use this tool in one of our projects to build both long-running production clusters and ephemeral clusters, created on demand for testing our new software in different environments.
I will also show a little demo on how to continuously deliver workload clusters with FluxCD, so keep reading!
Overview of Cluster API (CAPI)
Cluster API is a multi-cluster management tool for Kubernetes that helps us create, configure and manage our clusters declaratively. It works through a management cluster: the main cluster from which you deploy workload clusters.
So you just need to install the tool's components (in other words, turn your cluster into a management cluster) and have some YAML files to deliver a workload cluster on your chosen provider. This becomes even more useful when you use a CD tool that supports the GitOps approach, so you can create workload clusters through Git PRs, commits, etc. A workload cluster is simply a Kubernetes cluster that is managed by the management one.
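For example, turning an existing Kubernetes cluster into a management cluster comes down to a single clusterctl command. A minimal sketch, assuming the Docker infrastructure provider (swap in whichever provider you plan to use):

# Sketch: install the CAPI components, turning the current cluster
# into a management cluster ("docker" is just an example provider)
clusterctl init --infrastructure docker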
So what are the benefits of using Cluster API?
- It supports a wide range of providers, and the speed of deployment depends only on your chosen provider
- You are able to upgrade the management cluster as well as workload clusters
- You can use different versions of the Kubeadm bootstrap provider
- You can integrate it with many tools (e.g. CI/CD tools)
Cluster API Providers
CAPI supports many providers for provisioning clusters, from local solutions such as Docker to the most popular cloud providers (AWS, GCP, Azure). Here’s the full list of supported CAPI common providers.
I would recommend using local providers for short-lived workload clusters, since they can be provisioned within a few minutes without spending extra money, while long-lived production clusters can be deployed on cloud platforms.
Cluster API General use cases
Basically, we have two general use cases for workload clusters:
- Long-running cluster for production variants
- Workload clusters with short TTL for testing purposes
Cluster API fully satisfies these needs, because it can provision both long- and short-running clusters for specific purposes. It can configure and manage workload clusters throughout their whole life-cycle, so you can use CAPI as the provisioning tool for your production / staging clusters in long-term projects. Or, if you need to test new software, an ephemeral workload cluster is for you.
Moreover, you can integrate CAPI with Flux CD so you are able to deliver these clusters without accessing the management cluster – just the GitOps way!
Why do we need a management cluster?
The management cluster plays the role of a centralized cluster from which we deliver workload clusters. Basically, it’s just a Kubernetes cluster with the CAPI components installed, which gives you the following:
- You can install many providers on the management cluster, so you are not limited to one provider
- When you initialize the management cluster you can choose different versions of Kubeadm for your control plane and worker nodes
- You can access all the workload clusters from the management cluster via the generated kubeconfig
- You can upgrade workload clusters to a new Kubernetes version
So the main aim of a management cluster is to deliver new workload clusters to our ecosystem.
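To illustrate the kubeconfig point above, here is a rough sketch of pulling a workload cluster's kubeconfig from the management cluster and using it (the cluster name is a placeholder):

# Sketch: fetch the kubeconfig that Cluster API generated for a workload cluster
clusterctl get kubeconfig my-workload-cluster > my-workload-cluster.kubeconfig

# Use it to talk to the workload cluster directly
kubectl --kubeconfig=my-workload-cluster.kubeconfig get nodes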
Our case: why we decided to use it
At LiveWyer, we’ve long needed a solution which allows us to just spin up clusters quickly for various uses – whether that be a short term cluster to test something out or a long term cluster for internal use. So we had two main reasons to try CAPI:
- We had a wider need to create clusters for other uses too, e.g. for training we wanted to be able to quickly spin up short term clusters for trainees to use which would be destroyed the next day.
- We wanted a solution where we could easily spin up clusters across multiple cloud providers, to stay vendor neutral.
Upgrading management and workload clusters
From time to time we have to upgrade our clusters to a new Kubernetes version, so what about upgrading the management and workload clusters? No worries: Cluster API supports upgrades.
It’s possible to upgrade the management cluster as well as workload clusters. I won’t cover how to do it here, because it’s a pretty large task, but you can review the high-level steps for fully upgrading a cluster.
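To give a rough idea of what is involved, upgrading a workload cluster usually comes down to bumping the version fields in its manifests and letting CAPI roll the machines. A minimal, illustrative sketch (resource names and versions are assumptions, and depending on the provider you may also need to point at a new machine image or template):

# Control plane: bump spec.version and CAPI rolls the control plane machines
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-workload-cluster-control-plane  # placeholder name
spec:
  version: v1.28.0  # previously e.g. v1.27.x
---
# Workers: bump the version on the MachineDeployment template
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-workload-cluster-md-0  # placeholder name
spec:
  template:
    spec:
      version: v1.28.0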
Integrating CAPI with other tools
Cluster API can be integrated with many tools, but in this blog post I’ll show you how to use CAPI with FluxCD.
It helps us automate many scenarios – for example, you can create ephemeral workload clusters on demand with Flux CD via Git commits.
Flux CD
Basically, Flux integrates with Cluster API, so whenever a new cluster is generated Flux has access to it via the kubeconfig stored as a secret in your management cluster.
When you have Flux installed on your management cluster, you can deploy resources to any workload cluster via the kubeconfig you specify in the Kustomization files.
kubeConfig:
  secretRef:
    name: ${CLUSTER_NAME}-kubeconfig # Cluster API creates this for the matching Cluster
Note: the Flux Kustomization that deploys onto the workload cluster should be in the same namespace as the generated cluster.
If the CAPI provider deletes the kubeconfig when the workload cluster is deleted, then Flux will fail that Kustomization from then on, because the secret with the kubeconfig is missing.
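Putting it together, a Flux Kustomization that targets a workload cluster could look roughly like this (name, namespace, interval and path are assumptions for illustration):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: workload-cluster-apps       # placeholder
  namespace: default                # same namespace as the Cluster object
spec:
  interval: 5m
  path: ./apps                      # manifests to apply on the workload cluster
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  kubeConfig:
    secretRef:
      name: my-workload-cluster-kubeconfig  # Cluster API creates <cluster-name>-kubeconfig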
Cluster API Demo - How to deploy workload clusters with FluxCD
Let’s play around with Cluster API in combination with FluxCD and walk through a demo of deploying workload clusters with FluxCD.
It’s very convenient to deploy workload clusters with CD tools such as FluxCD or ArgoCD, but I prefer FluxCD because of its built-in integration with Cluster API.
Steps
In order to reproduce this demo you’ll have to:
- Create a helm chart that describes the workload cluster
- Initialize Cluster API with the provider you’ll use on your K8s cluster, so it’s made into a management cluster
- Prepare Flux files in your repository to be deployed
Note: To create a helm chart for the workload cluster you’ll need to generate a workload cluster template with the help of clusterctl generate cluster .... CAPI uses different templates for different providers.
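As an illustration, generating such a template for the Docker provider might look like this (cluster name, versions, node counts and the output path are just examples):

# Sketch: render a workload cluster manifest that can seed the helm chart
clusterctl generate cluster my-workload-cluster \
  --infrastructure docker \
  --kubernetes-version v1.28.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 2 \
  > charts/cluster/templates/cluster.yaml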
Showcase
I have the following terminal setup:
- In the top terminal, flux bootstrap is being run to trigger the Flux pipeline
- In the bottom-left terminal, you’ll see a new workload cluster being provisioned
- In the bottom-right terminal, you’ll see Flux updating
With the above, I can produce the demo below:
In the demo, we created a workload cluster with FluxCD, so let’s describe each step to get a clear picture of what’s going on.
- I used flux bootstrap to initiate the Flux installation as well as the deployment of the resources I placed in the specified repo.
- Flux was successfully installed on my cluster and started to reconcile the latest changes in my repo.
- When it finished the reconciliation, it deployed a helm chart that contains our cluster (you can watch the provisioning with the commands sketched below).
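If you want to watch the provisioning from the management cluster yourself, a couple of commands (sketched here, the cluster name is a placeholder) do the job:

# Watch the Cluster API objects reconcile on the management cluster
kubectl get clusters -A --watch

# Check the machines as they come up
kubectl get machines -A

# Or get a condensed status tree for one cluster
clusterctl describe cluster my-workload-cluster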
Also, I think it would be good to take a look at the main concepts I used in the demo:
First, I’d like to start with the structure of the Git repository.
It’s not required to store your charts in the same repository; they can be stored in a cloud bucket or another repository as well – so it’s up to you to decide what’s better in your scenario.
The best practice says: only store Flux Kustomizations in folders which Flux monitors / will monitor. I agree with that, because it helps us structure all the resources we’d like to deploy. If you wonder what it should look like, or what other ways of structuring there are, you are welcome to check the Flux docs on ways of structuring your repositories.
Here I am using the structure of a monorepo model.
├── charts
│ └── cluster
│ ├── Chart.yaml
│ ├── templates
│ │ └── ...
│ └── values.yaml
├── clusters
│ └── my-cluster
│ ├── flux-system
│ │ └── ...
│ └── workload-cluster.yaml
└── workload-cluster
├── kustomization.yaml
└── release.yaml
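For context, the workload-cluster/ directory is just a small kustomize entry point that pulls in the HelmRelease. A sketch of what its kustomization.yaml might contain (assumed from the structure above):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - release.yaml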
Moreover, the peak moment in the demo was running the flux bootstrap CLI command, so if you want to get to know Flux better, I recommend taking a look at the FluxCD user guide.
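For reference, a flux bootstrap invocation for this kind of setup could look like the following (the GitHub owner, repository and path are placeholders; use the flags that match your Git provider):

# Sketch: install Flux on the management cluster and point it at the repo
flux bootstrap github \
  --owner=my-github-user \
  --repository=my-fleet-repo \
  --branch=main \
  --path=clusters/my-cluster \
  --personal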
Finally, I showed how I described the Flux Kustomization workload-cluster.yaml and the workload-cluster/ directory.
Pay attention to the spec.path field: it is the path from the root of your repository.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
spec:
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./workload-cluster
You can use the same repository to store your helm charts, which is very convenient, don’t you think?
kind: HelmRelease
spec:
  chart:
    spec:
      chart: ./charts/cluster
      sourceRef:
        kind: GitRepository
        name: flux-system
A Platform Engineer's Final Thoughts on Cluster API
I’ve really enjoyed learning how to use Cluster API; it’s a really powerful tool for creating K8s clusters via declarative APIs.
What do you think about this tool? What other interesting demos / PoCs could you do with it? Book a meeting with our Cloud Platform Engineers if you would like to know more.