Have you ever wondered what’s happening under the hood of the Kubernetes bootstrap process?
In this blog post I will take you on a journey through the steps of provisioning a Kubernetes cluster in general, without focusing on any particular cloud provider or hardware virtualization.
It will give you an overview of how the various components fit and work together.
This section assumes that you are familiar with Kubernetes and know the basics.
Before we start digging into the actual bootstrap process, let’s take a look at the Kubernetes components in order to understand what we’re going to provision.
As you can see, we have three nodes: two workers and one master.
The API server runs on the master node and exposes the Kubernetes API. An authenticated and authorized user or service can view or modify the Kubernetes objects described in manifests.
The scheduler and controller manager also run on the master node and are responsible for managing the cluster state based on Kubernetes objects stored in etcd (a highly available key-value datastore).
The kubelet is the “primary” node agent and is focused on running containers; it does not manage containers that were not created by Kubernetes.
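Once such a cluster is up, a quick sanity check of these components (assuming kubectl is already configured against the cluster) looks like this:

# Nodes registered by their kubelets
kubectl get nodes -o wide
# Control-plane components running as pods in the kube-system namespace
kubectl get pods -n kube-system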
As the operating system we’ll use CoreOS Container Linux, which is a
“Minimal container operating system with basic userland utilities. Runs on nearly any platform whether physical, virtual or private/public cloud.”
It supports container runtimes (Docker and rkt) out of the box.
Ignition is a provisioning utility for Container Linux. It runs during early boot, inside the initramfs, and is able to partition disks, configure the network, write systemd units, and more. Its configuration is a JSON document, usually written as a YAML Container Linux Config and transpiled to JSON (for example by ct or Matchbox).
GRUB2: loads the kernel image and initramfs ← Ignition runs here
KERNEL: loads modules and starts the system’s first process (PID 1)
SYSTEMD: reads unit files from /etc/systemd/
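As a small sketch, a Container Linux Config in YAML (the unit and file below are placeholders) that gets transpiled to Ignition JSON could look like this:

# Hypothetical Container Linux Config; transpiled to Ignition JSON before boot
systemd:
  units:
    - name: etcd-member.service
      enabled: true
storage:
  files:
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline: node1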
Matchbox
Matchbox comes from CoreOS and is essentially a web and gRPC server that takes care of mapping bare-metal machines, based on labels such as MAC address, to PXE boot profiles.
Matchbox is configured through files and a directory structure; by default it uses /var/lib/matchbox. It can also be configured using a Terraform module.
The directory consists of:
assets: static files served over HTTP (e.g. cached kernel/initrd images)
groups: mappings between bare-metal machines and boot profiles (see the example below)
profiles: boot settings (kernel/initrd images, kernel arguments, and the Ignition config to apply)
ignition: OS configuration such as systemd units and network settings
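For example, a group file under groups/ is a small JSON document; in this sketch the MAC address, profile name, and metadata are placeholders:

{
  "id": "node1",
  "name": "Worker node 1",
  "profile": "worker",
  "selector": {
    "mac": "52:54:00:a1:9c:ae"
  },
  "metadata": {
    "domain_name": "node1.example.com"
  }
}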
It can be run as a container:
sudo docker run --rm quay.io/coreos/matchbox:latest \
  -address=0.0.0.0:8080 \
  -log-level=debug
(In practice you would also publish port 8080 or use host networking, and mount /var/lib/matchbox into the container.)
We’ve seen before that kube-proxy is responsible for TCP and UDP forwarding using iptables or IPVS rules.
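On a node you can peek at those rules yourself; assuming kube-proxy runs in the default iptables mode, the NAT chain it maintains for Services can be listed like this:

# NAT chain populated by kube-proxy with entries per Service
sudo iptables -t nat -L KUBE-SERVICES -n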
That’s pretty much all that is needed to bootstrap a minimal Kubernetes cluster. We could dig deeper into kube-apiserver, kube-dns, the scheduler, or even more configuration files, but that is beyond the scope of this blog post :)
Bootstrapping in the cloud is not so different from bare metal; instead of providing physical machines, we have to take care of provisioning the nodes using different tools that operate at a higher level of abstraction.
For example, on AWS we can take advantage of CloudFormation templates to create and provision EC2 instances.
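A minimal sketch of such a template (the resource name, AMI ID, and instance type are placeholders) could hand the Ignition config to the instance through user data:

Resources:
  MasterNode:
    Type: AWS::EC2::Instance
    Properties:
      # Placeholder: a Container Linux AMI for your region
      ImageId: ami-0123456789abcdef0
      InstanceType: t3.medium
      UserData:
        Fn::Base64: |
          {"ignition": {"version": "2.2.0"}}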