k3OS, a lightweight Kubernetes OS, has quickly been gaining popularity in the cloud-native community as a compact, edge-focused Linux distribution that strips away everything a traditional K8s node doesn't need. While k3OS is picking up steam, it is still on the bleeding edge, and there is still a shortage of learning material out there for it.
In this blog post, I'm going to walk you through a demo of how I helped solve a puzzling issue with Powerflex's edge k3OS deployment.
What exactly is k3OS?
For those who aren't familiar with it, k3OS is a lightweight Linux distribution developed by Rancher Labs. It is designed to abstract away as much of the OS maintenance of a Kubernetes cluster as possible, shipping with only what is needed to run k3s.
Key Features of k3OS include:
- Fast install: You can boot up with k3s available in under 10 seconds, with fast cluster scaling.
- Simple management: You can control the OS from within Kubernetes without needing to log into remote nodes.
- Easy configuration: You can declare a cloud-init script at the initial boot to configure your system to your desired specs.
- Multi-architecture support: x86_64 support is available now, with Arm support coming soon.
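To make the "easy configuration" point concrete, here is a minimal sketch of a k3OS `config.yaml`. The field names follow the k3OS config format, but the hostname, SSH key, and k3s arguments are illustrative placeholders, not values from any real deployment:

```yaml
# Minimal k3OS config sketch (illustrative values only)
hostname: edge-node-01
ssh_authorized_keys:
  - ssh-ed25519 AAAA... you@workstation   # replace with your own public key
k3os:
  password: rancher        # console password for the rancher user
  k3s_args:                # extra flags passed to k3s on boot
    - server
    - "--disable=traefik"
```

Point k3OS at a file like this during install and it will apply the settings on first boot, so every node comes up already configured.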
By leveraging a lightweight Kubernetes operating system like k3OS, developers can more easily deploy applications to resource-constrained environments such as those at the edge of the network.
Now, before we jump into our demo, I want to give a brief overview of what Rancher is and how it relates to what we are trying to solve.
What is Rancher?
Rancher is an open-source managed Kubernetes platform that can deploy and manage multiple Kubernetes clusters running anywhere, on any provider. It can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or inherit existing Kubernetes clusters running anywhere.
Rancher adds significant value on top of Kubernetes, first by centralizing role-based access control (RBAC) for all of the clusters and giving global admins the ability to control cluster access from one location. It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog.
Using cloud-init to self-register k3OS clusters to Rancher
Let’s explore the use case of Powerflex a bit before we dive into the more technical parts. Powerflex maintains a large edge deployment built on top of k3OS while using GCP for the management plane. The architecture looks something like this:
Powerflex has sites scattered across the country, and the installation process for a node has to be fairly plug-and-play. When a site is being installed, Powerflex will ship a preconfigured box with k3OS out to the site, the engineer plugs it in and the box will run an init script that does the following:
- Creates a K8s cluster
- Calls the Rancher API and registers the cluster
- Installs Argo CD and pulls down the edge-site manifest
- Has Skupper create a connection to the cloud
This plug-and-play model allows field engineers to rapidly deploy k3OS nodes with little effort and setup. The optimization I’m going to explore today is using the cloud-init scripts to register nodes to Rancher via the API.
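The steps above can be sketched as a short init script. Everything here is illustrative: the Argo CD manifest URL, the Skupper token path, and the guard variable `RUN_SITE_INIT` are my own assumptions, not Powerflex's actual setup, and the Rancher registration step is the one we'll dig into with cloud-init.

```shell
#!/bin/sh
# Hypothetical sketch of a plug-and-play site-init script for a k3OS box.
set -eu

site_init() {
  # 1. k3s comes up automatically on k3OS boot, so "create a K8s cluster"
  #    mostly means waiting for the node to be Ready.
  kubectl wait --for=condition=Ready node --all --timeout=300s

  # 2. Register the cluster with Rancher via its API
  #    (this is the cloud-init piece covered later in the post).

  # 3. Install Argo CD and let it pull down the edge-site manifests.
  kubectl create namespace argocd
  kubectl apply -n argocd \
    -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

  # 4. Have Skupper connect the site to the cloud management plane,
  #    using a link token shipped on the box (path is an assumption).
  skupper init
  skupper link create /etc/edge/cloud-link-token.yaml
}

# Guarded so the script is a no-op unless explicitly enabled on a real node.
if [ "${RUN_SITE_INIT:-0}" = "1" ]; then
  site_init
else
  echo "dry run: set RUN_SITE_INIT=1 on a real k3OS node"
fi
```

The guard at the bottom is just a safety net for testing the script off-box; on a freshly shipped node it would run unconditionally.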
Setting up a demo environment with Virtualbox
Let's dive into some of the technical setup. You're going to want to install VirtualBox and download the k3OS ISO here so you can boot it up in a bit. You'll also need a Rancher instance running on some type of cloud platform, which is outside the scope of this demo.
Just to reiterate, the things you'll need are:
- VirtualBox (local)
- The k3OS ISO (local)
- Rancher instance (remote)
Once you have VirtualBox installed, start it up and go through the initial process of setting up the VM and attaching the k3OS ISO. Once you've done that, start up the machine and you should be greeted with a nice little intro screen:
From here, you'll want to log in with the rancher user, and then you'll be asked whether you want to configure the system or install. Choose to install and move through the steps until it asks whether or not you want to use a cloud-init file.
Before we move any further, I’m going to break down what the cloud-init file is doing here. Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization. It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.
Cloud instances are initialized from a disk image and instance data:
- Cloud metadata
- User data (optional)
- Vendor data (optional)
The user data is where our cloud-init file comes in. On first boot, the script will:
- Grab an auth token from a remote Rancher server
- Create an API key
- Extract and store the auth token
- Configure the server URL
- Create the cluster via the Rancher API
- Extract the Cluster ID
- Generate the Registration token
- Output the yaml to a /manifests folder
It is a pretty simple process, and I would provide the raw code here but Medium’s formatting makes it a bit difficult.
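In lieu of the raw file, here is a rough sketch of those steps as a shell script. The endpoint paths follow the Rancher v3 API as I understand it, but treat everything as an assumption: the server URL, credentials, and cluster name are placeholders, and for brevity this version reuses the login token directly instead of minting a separate API key.

```shell
#!/bin/sh
# Sketch of registering a cluster with Rancher via its v3 API.
# RANCHER_URL, RANCHER_USER, and RANCHER_PASS are assumed environment
# variables; nothing here runs unless RANCHER_URL is set.
set -eu

cluster_payload() {
  # JSON body for creating a cluster object in Rancher
  printf '{"type":"cluster","name":"%s"}' "$1"
}

registration_payload() {
  # JSON body for generating a registration token for that cluster
  printf '{"type":"clusterRegistrationToken","clusterId":"%s"}' "$1"
}

if [ -n "${RANCHER_URL:-}" ]; then
  # Grab and extract an auth token (local auth provider assumed)
  TOKEN=$(curl -sk "$RANCHER_URL/v3-public/localProviders/local?action=login" \
    -H 'Content-Type: application/json' \
    -d "{\"username\":\"$RANCHER_USER\",\"password\":\"$RANCHER_PASS\",\"ttl\":0}" \
    | jq -r .token)

  # Create the cluster via the Rancher API and extract the cluster ID
  CLUSTER_ID=$(curl -sk "$RANCHER_URL/v3/clusters" \
    -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
    -d "$(cluster_payload edge-site-1)" | jq -r .id)

  # Generate the registration token and pull out the agent manifest URL
  MANIFEST_URL=$(curl -sk "$RANCHER_URL/v3/clusterregistrationtokens" \
    -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
    -d "$(registration_payload "$CLUSTER_ID")" | jq -r .manifestUrl)

  # Output the YAML to k3s's auto-apply manifests folder so the
  # Rancher agent gets deployed on the next sync
  curl -sk "$MANIFEST_URL" > /var/lib/rancher/k3s/server/manifests/rancher-agent.yaml
fi
```

Dropping the manifest into k3s's manifests directory is what makes this hands-off: k3s applies anything in that folder automatically, so the Rancher agent comes up without anyone running `kubectl` on the box.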
So let's head back over to VirtualBox and give k3OS the raw Gist URL of the cloud-init file we just talked about.
Once you feed the OS the URL of your init file, k3OS will use that to make an authentication call to your Rancher server and get back the YAML to spin up a cluster. If everything goes correctly, you should have a cluster show up in your Rancher server on the execution of the cloud-init file.
That's really about all there is to it. Cloud-init enables a lot of great automation, and I'm curious to hear how you all implement it in your workflows, especially with k3OS.
Thanks for reading and let me know what you think. Also, I want to throw a huge shoutout to Nick Kampe for helping me brainstorm this project!