2020 Cloud Native Focus
Like everyone else in tech and around the world, we’ve had to adapt quickly to the unique challenges 2020 has dealt us, while continuing to keep our heads down and do what we do best. With the end of the year approaching, and with everything seemingly at a standstill until the “old normal” can start to return, it’s especially important to hit the pause button, reflect, and take stock. Of course, in the Kubernetes world, the one waypoint for that is KubeCon + CloudNativeCon.
At LiveWyer, we’ve been focusing quite a bit lately on supporting technologies intended to uplift the core Kubernetes and Cloud Native ecosystem (such as hybrid Linux / Windows Kubernetes clusters, service meshes with Istio, Open Policy Agent, and Falco). So in the course of building my own virtual schedule for the event, I thought it would be an interesting exercise to document the current trajectory and focus of present and future development within the core Kubernetes ecosystem. I have signposted some sessions that introduce some really interesting use-cases for Kubernetes workloads, as well as work being undertaken to further push the viability of Kubernetes as a mature platform.
Storage Integrated with Kubernetes
It has been a while since Kubernetes’ implementation of the Container Storage Interface (CSI) was promoted to GA in version 1.13. This has led to an explosion in the number of file and block storage systems that can be integrated directly with Kubernetes, giving teams a multitude of options for supporting production use-cases. There are numerous sessions which provide deep dives into some of the up-and-coming work that will allow for interesting new storage options.
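To make that concrete, here is a minimal sketch of how a CSI-backed StorageClass and PersistentVolumeClaim fit together; “csi.example.com” is a placeholder provisioner name standing in for whichever CSI driver your storage vendor ships:

# Illustrative only: "csi.example.com" is a placeholder, not a real driver name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example.com            # the CSI driver that provisions volumes
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-block
  resources:
    requests:
      storage: 10Gi

Any workload can then mount the claim as an ordinary volume, with the driver taking care of provisioning and attachment behind the scenes.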
PMEM-CSI Driver
Persistent Memory in Kubernetes | Patrick Ohly @ Intel
Session schedule link
Thursday November 19th: 8:45pm - 9:20pm (GMT)
Companies running workloads that are particularly sensitive to latency, such as financial trading, will be keen to find new and innovative implementations to gain a competitive advantage or increase their output. Local persistent memory (PMEM) allows data to be stored in non-volatile RAM, and is becoming more affordable. This would offer organisations a faster alternative to storing workload state on traditional file or block storage.
This talk introduces Intel’s PMEM-CSI driver, which integrates Intel Optane Persistent Memory hardware with Kubernetes and makes that memory available to workloads as filesystem volumes.
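From a workload’s point of view, PMEM-backed storage should then look much like any other volume. The sketch below assumes a StorageClass (here the placeholder name “pmem-storage”) backed by the PMEM-CSI driver; the real class names come from the driver’s own deployment manifests:

# Sketch only: "pmem-storage" is an assumed StorageClass backed by PMEM-CSI.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: pmem-storage
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: pmem
          mountPath: /data        # data written here lands in persistent memory
  volumes:
    - name: pmem
      persistentVolumeClaim:
        claimName: pmem-data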
Container Object Storage Interface (COSI)
Beyond File and Block Storage in Kubernetes | Sidhartha Mani @ MinIO
Session schedule link
Wednesday November 18th: 5:45pm - 6:20pm (GMT)
CSI has provided a consistent standard for implementing drivers that integrate file and block storage with container-based workloads. Integrations for object storage systems have so far been available in Kubernetes via flexvolume, but a new specification, the Container Object Storage Interface (COSI), is being proposed to bring the same consistency to object storage; this will be one to keep an eye on.
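To give a flavour of the claim-style workflow the proposal describes, here is a purely hypothetical sketch; the API group, kinds, and fields were still being designed at the time of writing and will almost certainly differ in the final specification:

# Hypothetical illustration only: not the real COSI API.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: analytics-bucket
  namespace: analytics
spec:
  bucketClassName: standard-object-storage   # analogous to a StorageClass

The idea is that a workload asks for a bucket in much the same way it asks for a PersistentVolumeClaim today, and a COSI driver provisions it on the backing object store.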
Multi-Tenancy in Kubernetes
Another interesting angle of focus is how we manage multi-tenancy in Kubernetes. While running multiple applications or environments on a single Kubernetes cluster isn’t new (the scoping provided by namespaces makes it really easy to set up and segment “environments” on a single cluster), part of proving the maturity and validity of Kubernetes as a platform is being able to enforce policies that ensure the proper isolation of data and resources where required. Ensuring that all tenants are able to live harmoniously on a single cluster will be an important step in that direction.
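As a minimal sketch of what that typically involves, the manifests below (using a hypothetical “tenant-a” namespace) cap the resources a tenant can consume and deny ingress traffic to its pods by default:

# Sketch of per-tenant guard rails in a shared cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "10"        # total CPU the namespace may request
    requests.memory: 20Gi     # total memory the namespace may request
    pods: "50"                # maximum number of pods
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress

The empty podSelector makes the NetworkPolicy a default deny for the whole namespace, which tenants can then selectively relax for the traffic they actually need.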
API Server Priority and Fairness
API Priority and Fairness: Kube-APIServer Flow-control Protection | Min Jin @ Ant Group
Session schedule link
Friday November 20th: 5:55pm - 6:30pm (GMT)
Introduced as an alpha feature in Kubernetes 1.18, the API Priority and Fairness feature is an excellent step towards mitigating the “noisy neighbour” effect inherent in running clusters with multiple tenants. As well as preventing tenant-specific controllers from overloading the shared Kubernetes API server, it also increases the overall reliability of Kubernetes clusters by ensuring that the controllers that are vital to maintaining the health of workloads are more likely to be able to keep writing changes to the cluster state as necessary.
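The feature is driven by FlowSchema and PriorityLevelConfiguration objects. The sketch below is illustrative only, assuming a hypothetical tenant controller running under a “tenant-a-controller” service account; the API group and fields reflect the 1.18 alpha and may change as the feature matures:

# Illustrative only: confines a tenant controller to its own priority level.
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: PriorityLevelConfiguration
metadata:
  name: tenant-controllers
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 10    # share of API server concurrency
    limitResponse:
      type: Queue
      queuing:
        queues: 16
        queueLengthLimit: 50
        handSize: 4
---
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: FlowSchema
metadata:
  name: tenant-a-controller
spec:
  priorityLevelConfiguration:
    name: tenant-controllers
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: tenant-a-controller
            namespace: tenant-a
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
          namespaces: ["*"]

Requests matched by the FlowSchema are queued within their own priority level, so a misbehaving tenant controller exhausts its own concurrency shares rather than the API server’s capacity as a whole.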
Multi-Tenancy Working Group
Kubernetes Working Group for Multi-Tenancy Project Overview | Tasha Drew @ VMware, Adrian Ludwin @ Google, Fei Guo @ Alibaba, Jim Bugwadia @ Nirmata
Session schedule link
Friday November 20th: 10:05pm - 10:40pm (GMT)
Although not necessarily “new”, a look into the projects being worked on by the multi-tenancy working group should be worth a shout for those looking to optimise the production-readiness of their Kubernetes clusters. These projects include the Hierarchical Namespaces controller (which was highlighted on the Kubernetes blog a couple of months ago), and Multi-Tenancy Benchmarks.
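As a taster of the first of those projects, a hierarchical “subnamespace” is requested by creating an anchor object in the parent namespace. A minimal sketch, with placeholder names and an API version that depends on the HNC release you install, looks like this:

# Sketch only: "team-a" / "team-a-dev" are placeholder namespace names.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a-dev      # the child namespace HNC will create
  namespace: team-a     # the parent namespace the anchor lives in

The kubectl-hns plugin wraps the same operation in a friendlier command (for example, kubectl hns create team-a-dev -n team-a), and HNC then propagates RBAC and other policy objects from parent to child.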
But what do you think? How are you going to spend your time during the next virtual KubeCon? Let us know!