K8s Unboxing Part 1

Amartya Mandal
6 min read · Nov 14, 2022

Update: release 1.2.1 supports the following:

  • provider: only libvirt is tested and supported for this release
  • runtime: runc | crun | kata | gvisor
  • cni: "default" (note: "calico" & "cilium" are not tested for this release)
  • kubernetes version: 1.25 (1.24–1.25)
  • cri tool version: 1.25
  • runc version: 1.1 | crun version: 1.7 (note: the kata & gvisor runtime versions have no effect)
  • containerd: 1.6
  • cni plugin version: 1.1
  • os: Ubuntu 18.04 | Ubuntu 20.04 | Ubuntu 22.04

The previous update, release 1.1, already supports k8s 1.25 with crun:

  • runtime support for both runc and crun
  • cni: "default" | "calico" | "cilium"
  • kubernetes version: 1.25 (1.24–1.25)
  • cri tool version: 1.25
  • runc version: 1.1 | crun version: 1.7
  • containerd: 1.6
  • cni plugin version: 1.1
  • os: Ubuntu 18.04 | Ubuntu 20.04 | Ubuntu 22.04

The Kubernetes ecosystem has always amused me. Once the initial surprise is over, and if you are still curious, you will find a wonderful world of a few passionate people working very hard to keep the magic alive.

Managed Kubernetes distributions and bootstrappers are the main reason few people dig deep into this world: managed distributions from vendors satisfy most day-to-day requirements, and bootstrappers make it very easy to build a new k8s cluster in a minute.

I am fortunate that people ask me questions about k8s, and answering them gives me the opportunity to learn more. Some questions are best answered by digging into the Kubernetes source code, and that seems like an interesting journey to me.

If you, like me, love to browse source code in your free time, this series of posts may interest you.

I would like to unbox kubernetes to its source and try to find a few answers along the way.

At the application level, any k8s cluster is built from a handful of core components, which are enough for a vanilla cluster: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and kubelet (plus the kubectl client), together with a container runtime such as containerd (one of many options), the CNI plugins and etcd.

We can get a very clear picture of this ecosystem by examining the configuration and integration of these components.

But we need a very transparent way to observe how these components are working together.
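
Since the repo follows the kubernetes-the-hard-way pattern, every component runs as a plain systemd service on the nodes, which keeps the observation completely transparent. A minimal sketch of what "observing" looks like (the node name cp-node-0 is a made-up example, and the unit names assume the hard-way-style setup):

# on a controller node
vagrant ssh cp-node-0
sudo systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler --no-pager

# on a worker node
sudo systemctl status containerd kubelet kube-proxy --no-pager

# the cluster's own view of itself (assuming kubectl and an admin kubeconfig on the node)
kubectl get nodes -o wide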

I find kubernetes-the-hard-way the best place to start; its 30K+ stars certainly speak for themselves.

Generally I use KVM in my home lab, the poor man's hypervisor. You need some sort of automation so that you can quickly change, build, destroy and redo a cluster whenever you want, in the cheapest way possible.

The purpose of this very first post in the series is to introduce a collection of scripts that lets you create a k8s cluster, fronted by an API load balancer, on both libvirt (KVM) and VirtualBox.

You can find the source here: K8s-unboxing.

Once downloaded, the first thing you will want to do is update k8s-config.yaml:


## global definitions
# k8s:
#   provider: 'libvirt'        ## two options: 'libvirt' or 'virtualbox'
#   domain: 'k8s.local'
#   ip_start: 192.168.121.128  ## required by the libvirt provider to create a subnet
#   ip_end: 192.168.121.254    ## for virtualbox the default vboxnet0 is used
#   ncpnd: 1                   ## number of master nodes; the load balancer spreads traffic to the kube-api
#   nwrknd: 2                  ## number of worker nodes
#   cni: "default"             ## 3 options: 'default' (simple routing, no 3rd-party CNI), 'calico', 'cilium'
#   V: 1.22                    ## k8s version
#   CRI_CTL_V: 1.25            ## CRI tool version
#   runtime: runc | crun | kata | gvisor
#   runtime_v:                 ## low-level runtime versions: runc = 1.1; crun = 1.7; kata = 2.4.2;
#                              ## gvisor = 20221128.0. At present the snap install version for kata is 2.4.2,
#                              ## let's keep it that way! For kata & gvisor the runtime version has no effect,
#                              ## because the latest source is always fetched while provisioning the nodes.
#                              ## That is not ideal, but at this moment neither of these special runtimes is
#                              ## stable and their documentation is not clear, so it is better to build and
#                              ## configure the runtime inside the node. Remember, this is a test bench for kubernetes.
#   CONTD_V: 1.6               ## containerd version
#   CNI_PLUGIN_V: 1.1          ## cni plugin version
#   build_directory: ""        ## path to the directory where you downloaded & built all k8s-related source
#   node:                      ## any node attributes can be configured here
#     private_key_name: ""     ## ssh key name used to ssh into the nodes; the key is expected in the default ~/.ssh path
#     os: "generic/ubuntu2204" ## ubuntu is the only flavour that has been tested


k8s:
  provider: "libvirt"
  domain: ""
  ip_start: 192.168.121.128
  ip_end: 192.168.121.254
  ncpnd: 1
  nwrknd: 2
  cni: "default"
  V: 1.25
  CRI_CTL_V: 1.25
  runtime: "kata"
  runtime_v: 2.4.2
  CONTD_V: 1.6
  CNI_PLUGIN_V: 1.1
  build_directory: ""
  node:
    private_key_name: "ssh_key"
    os: "generic/ubuntu2204"

The GitHub repo documentation is still a work in progress; it will grow along with this series and bring more clarity. It is only bash scripts (with one exception of Ansible, kept as an entry point for future enhancement). Using only bash is intentional: the code is very straightforward, easy to understand and change, and that is the primary objective.


The following is the right sequence of commands for first-time users:

./setup.sh make
./setup.sh build
./setup.sh all

FYI: "build" will take some time the first time, mostly to build the k8s binaries. Check build-k8s.sh under the scripts directory for the build command; you are free to change it to suit your environment.
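
For reference, building the Kubernetes binaries from source generally comes down to something like the following; the exact invocation inside build-k8s.sh may differ, so treat this as a sketch:

# assumes the kubernetes source tree lives under the build_directory from k8s-config.yaml
cd "$build_directory/kubernetes"
git checkout v1.25.0   # match the V value in k8s-config.yaml
make WHAT="cmd/kube-apiserver cmd/kube-controller-manager cmd/kube-scheduler cmd/kube-proxy cmd/kubelet cmd/kubectl"
# the resulting binaries land under _output/ (typically _output/local/bin/linux/amd64)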

Building containerd may complain about libseccomp; in that case you can download, build and install libseccomp from source.
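
A minimal sketch of that build, assuming libseccomp 2.5.4 (pick whichever release is current):

# build dependencies, then libseccomp itself
sudo apt-get install -y gperf
wget https://github.com/seccomp/libseccomp/releases/download/v2.5.4/libseccomp-2.5.4.tar.gz
tar -xzf libseccomp-2.5.4.tar.gz && cd libseccomp-2.5.4
./configure
make && sudo make install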

Switching virtualization providers from VirtualBox to libvirt sometimes causes trouble; simply restart libvirtd and remove stale images.
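
A typical cleanup looks something like this (standard libvirt and vagrant commands; adjust domain, pool and volume names to whatever your run created):

sudo systemctl restart libvirtd
virsh list --all                 # any half-created domains left over?
virsh vol-list default           # stale volumes in the default storage pool
# remove whatever is stale, e.g.:
# virsh destroy <domain> && virsh undefine <domain>
# virsh vol-delete --pool default <volume>
vagrant global-status --prune    # drop stale vagrant entries as well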

A few things to remember before using this repo:

  1. It is inspired by "kubernetes the hard way"; it is just an enhancement that uses cheaper infrastructure provisioning platforms or tools such as VirtualBox or KVM.
  2. This is not a tool and has no intention of becoming one; quite the opposite, its whole purpose is to unwrap the installation and configuration of a cluster in its full glory.
  3. It is in no way optimized for time (how long it takes to build a cluster) or efficiency (I purposely avoided Ansible or any other configuration management); it is expressive and fragile.
  4. There is an Ansible provisioner with Vagrant, but its use has been kept very limited; the repo is mostly a collection of bash scripts, and that is intentional.

Pre-requisites

  1. I use Ubuntu for my development machine; it should also work on a Debian distribution.
  2. Go
  3. KVM or VirtualBox. I should warn that KVM is much, much faster; the reason is obvious, as KVM is a type 1 hypervisor.
  4. Vagrant, plus the Vagrant VirtualBox and libvirt providers (a host setup sketch follows this list).
  5. The virsh and VBoxManage command-line tools, which should be available once libvirt and VirtualBox are configured.
  6. The Cilium CLI if you are using Cilium, but my suggestion would be to start with the default CNI.
  7. If anything else is missing, rest assured the deployment will certainly break and let you know what is wrong :-)
  8. It will ask for sudo; you are welcome to check the code before you go with it, nothing harmful though.
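
A rough host setup sketch for Ubuntu/Debian; package names are the usual ones, and the versions in your distribution's repositories may lag, so adjust as needed:

sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients virtualbox vagrant
sudo usermod -aG libvirt "$USER"        # log out and back in afterwards
vagrant plugin install vagrant-libvirt  # the virtualbox provider ships with vagrant
# Go: install from https://go.dev/dl/ (a recent release is enough to build k8s 1.25)
# Cilium CLI only if cni is set to "cilium": https://github.com/cilium/cilium-cli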

In the next couple of articles I will focus on each of these five core components one by one, go through the source code and try to find some answers!

Notes on kata

Kata is going through some major changes and its documentation is hard to follow. Ideally the Kata runtime should be built separately and copied to the specific node; all that is required on the node is a check that it is capable of creating a Kata Container. Ideally these checks should not be part of node provisioning, but for clarity of understanding I am building the Kata runtime from source on the node itself. This will change later to a more standard approach.
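
For reference, the capability check is just the Kata CLI; a minimal sketch, assuming Kata 2.x is installed on the node and containerd is configured with a runtime handler named kata:

# can this node actually create Kata Containers?
kata-runtime check
kata-runtime env | head

# make the handler usable from Kubernetes via a RuntimeClass
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF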

Notes on gvisor

gVisor has a major limitation with Ubuntu. The new systemd 247.2-2 switched to the "unified" cgroup hierarchy (i.e. cgroup v2), which is not supported by gVisor; Ubuntu 21.10 and above are affected. The workaround is to switch back to cgroup v1, which is why a node created with the gvisor runtime will reboot to apply the downgrade.
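
The repo's provisioning script may do this differently, but a typical manual way to pin a node back to cgroup v1 looks like this:

sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
sudo update-grub
sudo reboot
# verify after the reboot: this should print "tmpfs", not "cgroup2fs"
stat -fc %T /sys/fs/cgroup/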

Originally published at https://www.blogs.k101.io.
