Will it cluster? k3s on your Raspberry Pi

In this post we'll test-drive k3s, a stripped-down Kubernetes distribution from Rancher Labs. With a single binary and a one-line bootstrap process, it's even easier than before to create a lightweight cluster. So grab your Raspberry Pi and get ready to deploy the smallest Kubernetes distribution ever.

You may have seen my previous work with Kubernetes and Docker on Raspberry Pi such as Build your own bare-metal ARM cluster. I'm hoping that this post will be a lot simpler to follow, with fewer workarounds and even more resources left over for your projects to consume.

Featured: Raspberry Pi x5 Compute Module (COM) holder with Gigabit ethernet, from mininodes.com

Why k3s?

Darren Shepherd, Chief Architect at Rancher Labs, is known for building simple solutions and accessible user experiences for distributed systems. k3s is one of his latest experiments: reducing the footprint and bootstrap process of Kubernetes to a single binary.

The k3s binary available on GitHub comes in at around 40MB and bundles all the low-level components required, such as containerd, runc and even kubectl. k3s can take the place of kubeadm, which started as part of a response from the Kubernetes community to up their game for the user experience of bootstrapping clusters.

kubeadm is now able to create production-ready multi-master clusters, but it is not well-suited to the Raspberry Pi because it assumes hosts have ample CPU and memory along with low-latency networking. When I ran through the installation of k3s for the first time it booted several times faster than kubeadm, but the important part was that it worked first time, every time, without any manual hacks or troubleshooting.

Note: k3s, just like Kubernetes, works on armhf (Raspberry Pi), ARM64 (Packet/AWS/Scaleway) and x86_64 (regular PCs/VMs).


I'll list the pre-requisites and add some affiliate links to Amazon US.

  • At least 2 of: Raspberry Pi 2B/3B/3B+ (ARMv7)

The Raspberry Pi Zero and first-gen RPi (armv6l) are not compatible with k3s. The main reason is that these devices are very low-powered, and so the Kubernetes project does not publish Docker images for this CPU architecture.

I say that you need two nodes, but one node can work if that is all you can spare.

Element14 Raspberry Pi 3 B+ Motherboard

Whatever you do, don't buy a fake SD card. My personal recommendation:

SanDisk 32GB ULTRA microSDHC Card Class 10 (SDSDQUA-032G-A11A)

For something like Kubernetes that depends on low latency, I would strongly advise against using WiFi. A 5-port Ethernet switch is very cheap and widely available.

NETGEAR 5-Port Gigabit Ethernet Unmanaged Switch, Desktop, Internet Splitter, Sturdy Metal, Fanless, Plug-and-Play (GS305)

You don't want to be using a random adapter or mobile phone charger. Get the official adapter so you're sure the RPi has enough power.

Official Raspberry Pi Foundation 5V 2.5A Power Supply WHITE

Clustering parts

If you're running with more than one RPi then buying multiple cases or multiple power adapters can be a false economy.

  • Power supply for multiple RPis

6-way USB power supply from Anker

I would recommend getting "ribbon" / "flat" cables because regular cables create tension and can pull over your stack of RPis.

Ethernet Cable 3 ft Shielded (STP), High Speed Flat RJ45 cable

iUniker Raspberry Pi Cluster Case, Raspberry Pi Case with Cooling Fan

GeauxRobot Raspberry Pi 3 Model B 7-layer Dog Bone Stack

CLOUDLET CASE: For Raspberry Pi and Other Single Board Computers

Prepare the RPi

Let's start the tutorial.

Flash the OS to the SD card

Let's not make things complicated by messing about with bespoke operating systems. The Raspberry Pi team have done a great job with Raspbian, and for a headless system Raspbian Lite is easy to use and quick to flash. After flashing the SD card, enable SSH by creating an empty file named ssh in the boot partition.

On MacOS you can usually type in: sudo touch /Volumes/boot/ssh for this step.

Etcher.io in action

Power-up the device & customise it

Now power-up your device. It will be accessible on your network over ssh using the following command:

$ ssh pi@raspberrypi.local

Log in with the password raspberry and then type in sudo raspi-config.

Update the following:

  • Set the GPU memory split to 16MB
  • Set the hostname to whatever you want (write it down?)
  • Change the password for the pi user

I also highly recommend setting a static IP for each Raspberry Pi in your cluster.
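On Raspbian, one way to do that is a static entry in /etc/dhcpcd.conf; the interface name and addresses below are placeholders for your own network:

```conf
# /etc/dhcpcd.conf — example static lease for the wired interface
# (adjust the addresses to match your LAN)
interface eth0
static ip_address=192.168.0.101/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
```

Alternatively, a DHCP reservation on your router achieves the same thing without touching the Pi.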

Do you have an ssh key?

$ ls -l ~/.ssh/id_rsa.pub

If that says file not found, then let's generate a key-pair for use with SSH. This means you can set a complicated password, or disable password login completely and rely on your public key to log into each RPi without typing a password in.

Hit enter at every prompt:

$ ssh-keygen

Finally run: ssh-copy-id pi@raspberrypi.local
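Once your key works, you can optionally disable password logins altogether. This is a sketch assuming the stock Raspbian sshd_config; make sure key-based login succeeds before you do this, or you'll lock yourself out:

```shell
# On the Pi: turn off password authentication in sshd's config
# (matches both the commented-out and active forms of the directive)
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
# Reload the SSH service to apply the change
sudo systemctl reload ssh
```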

Enable container features

We need to enable container features in the kernel. Edit /boot/cmdline.txt and add the following to the end of the line:

 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Now reboot the device.
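After the reboot you can sanity-check that the memory cgroup is active. This is just a quick verification step, not part of the original bootstrap:

```shell
# The memory controller should be listed in /proc/cgroups, with the
# "enabled" column reading 1 once the boot flags have taken effect
grep memory /proc/cgroups
```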

Create the k3s cluster

Bootstrap the k3s server

The current version of k3s is v0.2.0, but you can always visit the releases page to check for a newer version.

On one of the nodes log in and do the following:

$ wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf && \
  chmod +x k3s-armhf && \
  sudo mv k3s-armhf /usr/local/bin/k3s

Now the best way to run k3s is through a systemd unit file and Darren provides that in the GitHub repo. For our testing we'll just run the server in a tmux window.

$ tmux
$ sudo k3s server
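If you'd rather have the server start on boot than live in a tmux window, a minimal systemd unit along these lines works. This is a sketch; the official unit file in the k3s repo is more complete:

```ini
# /etc/systemd/system/k3s.service — minimal example unit
[Unit]
Description=k3s server
After=network-online.target

[Service]
ExecStart=/usr/local/bin/k3s server
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now k3s.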

Wait for k3s to start and to download the required images from the Kubernetes registry. This may take a few minutes.

Grab the join key from this node with:

$ sudo cat /var/lib/rancher/k3s/server/node-token

Join a worker

Now log into another node and download the binary as before, moving it to /usr/local/bin/.

Now join the worker to the server with the following. Set SERVER_IP to your server's address, e.g. https://192.168.0.100:6443 (the k3s API listens on port 6443):

$ export NODE_TOKEN="K1089729d4ab5e51a44b1871768c7c04ad80bc6319d7bef5d94c7caaf9b0bd29efc::node:1fcdc14840494f3ebdcad635c7b7a9b7"
$ export SERVER_IP=""
$ sudo -E k3s agent -s ${SERVER_IP} -t ${NODE_TOKEN}

List your nodes

$ sudo k3s kubectl get node -o wide

cm3 Ready <none> 9m45s v1.13.4-k3s.1 <none> Raspbian GNU/Linux 9 (stretch) 4.14.79-v7+ containerd://1.2.4+unknown
cm4 Ready <none> 13m v1.13.4-k3s.1 <none> Raspbian GNU/Linux 9 (stretch) 4.14.79-v7+ containerd://1.2.4+unknown

We can see our nodes and that they are using containerd rather than full Docker. This is part of how Darren was able to reduce the footprint.

Deploy a microservice

We can now log into the k3s server and deploy a microservice. We'll deploy figlet which will take a body over HTTP on port 8080 and return an ASCII-formatted string.

  • Create a service (with a NodePort):

Save: openfaas-figlet-svc.yaml.

apiVersion: v1
kind: Service
metadata:
  name: openfaas-figlet
  labels:
    app: openfaas-figlet
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31111
  selector:
    app: openfaas-figlet

The deployment will be used to schedule a Pod using a Docker image published in the OpenFaaS Function Store.

Save: openfaas-figlet-dep.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: openfaas-figlet
  labels:
    app: openfaas-figlet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openfaas-figlet
  template:
    metadata:
      labels:
        app: openfaas-figlet
    spec:
      containers:
        - name: openfaas-figlet
          image: functions/figlet:latest-armhf
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              protocol: TCP
  • Now apply the configuration:

Notice that we prefix kubectl with sudo k3s? You can set a bash alias for this in your ~/.bash_profile.
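For example, appending an alias to ~/.bash_profile lets you type plain kubectl from then on:

```shell
# Map `kubectl` to the client bundled inside the k3s binary
echo "alias kubectl='sudo k3s kubectl'" >> ~/.bash_profile
# Load the alias into the current shell session
source ~/.bash_profile
```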

$ sudo k3s kubectl apply -f openfaas-figlet-dep.yaml,openfaas-figlet-svc.yaml
deployment.apps/openfaas-figlet created
service/openfaas-figlet created

Wait for the figlet microservice to come up:

$ sudo k3s kubectl rollout status deploy/openfaas-figlet
deployment "openfaas-figlet" successfully rolled out

Now invoke the function:

$ echo -n "I like $(uname -m)" | curl --data-binary @- http://127.0.0.1:31111

figlet replies with your input rendered as large ASCII art, e.g. "I like armv7l". The URL targets the NodePort (31111) we defined in the service above, so it works from any node in the cluster.

You can use an Open Source tool like inlets.dev to create a tunnel to the public Internet for your Raspberry Pi k3s cluster. All you need to do is to create a cheap VPS or EC2 node to get a public IP address that connects back to your cluster.

However, if you only need your tunnel for 7 hours or are happy to pay for a SaaS service, then ngrok is also very easy to use and well-known amongst developers.

$ wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-arm.zip
$ unzip ngrok-stable-linux-arm.zip
$ sudo mv ngrok /usr/local/bin/
  • Start a HTTP tunnel to the microservice
$ ngrok http 31111

You'll get a web address appear such as:

Forwarding https://16f7c980.ngrok.io ->

Now you can share that URL with your friends:

$ curl -SLs --data $(whoami) https://16f7c980.ngrok.io

(your username comes back rendered as ASCII art)

You can check the logs to see if they tried it:

$ sudo k3s kubectl logs deploy/openfaas-figlet -f

You can even scale up the microservice:

$ sudo k3s kubectl scale deploy/openfaas-figlet --replicas=4

Then find out which nodes the Pods were created on:

$ sudo k3s kubectl get pods -l app=openfaas-figlet -o wide
openfaas-figlet-8486c9f585-4ks2f 1/1 Running 0 26s cm4
openfaas-figlet-8486c9f585-d7kpk 1/1 Running 0 26s cm3
openfaas-figlet-8486c9f585-l7x89 1/1 Running 0 10m cm3
openfaas-figlet-8486c9f585-nhqj6 1/1 Running 0 25s cm3

We're only scratching the surface here. You can see Darren demo k3s and OpenFaaS in a CNCF Webinar below:

Tear down

Kill k3s on each node and remove the data directory.

$ sudo killall k3s
$ sudo rm -rf /var/lib/rancher


It is very early for k3s on ARM, but at this stage it's certainly more usable than the alternatives. If you're considering building a cluster for tinkering and for learning more about Kubernetes then you can't go wrong with trying k3s.

Continue learning more about Kubernetes:

k3s is a compliant Kubernetes distribution, which means that if you learn k3s, you're learning Kubernetes. And as I tweeted earlier last week, it's never too late to start learning Kubernetes, and nobody ever got fired for that.

You may also like

For even more - Follow me on Twitter @alexellisuk.