Running a Kubernetes cluster on Raspberry Pis
For a while I had been considering building my own Raspberry Pi cluster; however, I wasn't quite sure what exactly to use for it. There were two distinct options I was considering: Docker Swarm or Kubernetes. I decided to go for Kubernetes, because why not!
k8s + arm
I quickly became familiar with Lucas Käldström's work, as he put together an easy guide on how to set up a k8s cluster on ARM devices back in September 2015. Since then a lot has changed. The guide is deprecated: since the beginning of 2016 Lucas has been contributing to the k8s core, which means that k8s is now supported out of the box on ARM devices. As I own a couple of Raspberry Pi 3s and a single Raspberry Pi Zero, I thought it would be awesome to have the Zero as the cluster master, but apparently support for ARMv6 devices has been dropped from k8s since version 1.6; you can read more about this in Kubernetes issue #38067.
prepare sd cards
I found Hypriot - Docker Pirates ARMed with explosive stuff. They have put together HypriotOS, which comes with Docker pre-baked into the image, so I decided to use the latest version at the time of writing, 1.7.1. For flashing the OS image I used Etcher by resin.io; even though Hypriot advertises its own flasher, after trying it I can't say I'm a big fan of it. After flashing all the microSD cards with the HypriotOS image, I also updated the user-data
file to enable a WiFi connection, as at this point I was planning to run the k8s cluster wirelessly because I didn't have any spare routers lying around.
This is the user-data
file I copied over to the microSD card root directory to enable the wireless connection on OS boot.
#cloud-config
hostname: YOUR_HOST_NAME
manage_etc_hosts: true

users:
  - name: YOUR_USERNAME
    gecos: ""
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    groups: users,docker,video
    plain_text_passwd: YOUR_PASSWORD
    lock_passwd: false
    ssh_pwauth: true
    chpasswd: { expire: false }

package_upgrade: false

write_files:
  - content: |
      allow-hotplug wlan0
      iface wlan0 inet dhcp
      wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
      iface default inet dhcp
    path: /etc/network/interfaces.d/wlan0
  - content: |
      ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
      update_config=1
      network={
        ssid="YOUR_WIFI_SSID"
        psk="YOUR_WIFI_PASSWORD"
        proto=RSN
        key_mgmt=WPA-PSK
        pairwise=CCMP
        auth_alg=OPEN
      }
    path: /etc/wpa_supplicant/wpa_supplicant.conf

runcmd:
  - 'systemctl restart avahi-daemon'
  - 'ifup wlan0'
I decided to use m01
as the hostname for my master node, and w0*
for my worker nodes. Once I had the updated user-data
file on all the microSD cards, it was time to start up all of the nodes.
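Rather than hand-editing the hostname on every card, the per-node user-data files can be generated from a single template. This is just a sketch: the template file name `user-data.tmpl` and the one-line stand-in template are my own assumptions; in practice the template would be the full cloud-config above with `YOUR_HOST_NAME` left as a placeholder.

```shell
# Sketch: stamp each node's hostname into a copy of a user-data template.
# A one-line stand-in template is created here so the example runs on its own.
printf 'hostname: YOUR_HOST_NAME\n' > user-data.tmpl

for host in m01 w01 w02 w03; do
  # Replace the placeholder with the node's hostname (hypothetical file names).
  sed "s/YOUR_HOST_NAME/$host/" user-data.tmpl > "user-data-$host"
done
```

Each generated file can then be copied onto the matching card as user-data.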
start k8s cluster
First of all, we need to SSH into each node and install Kubernetes v1.9.3 (the latest version at the time of writing).
sudo su -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubeadm
Initialise the k8s cluster on your master node by running:
kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.0.2
where 192.168.0.2
is my master node's IP address; replace it with your own master node's IP address. You can easily find out the node's IP address by executing:
ifconfig wlan0 | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'
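To see what that pipeline actually extracts, you can feed it a canned ifconfig line instead of the live interface (the sample address below is purely illustrative; on a node, `ifconfig wlan0` takes the place of the printf):

```shell
# Demonstrate the IP-extraction pipeline on canned ifconfig output.
printf 'wlan0     Link encap:Ethernet\n          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0\n' \
  | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' \
  | grep -Eo '([0-9]*\.){3}[0-9]*' \
  | grep -v '127.0.0.1'
# prints 192.168.0.2
```

The first grep keeps only the `inet` address (in either the old `inet addr:` or new `inet` format), the second strips it down to the bare dotted quad, and the last filters out loopback.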
The k8s cluster initialisation process might take a while, so feel free to make a cup of coffee while you wait. Once the k8s cluster initialisation is complete, you need to add the cluster admin config in order to communicate with your cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
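Alternatively, if you plan to stay logged in as root anyway, you can skip copying the file and point kubectl at the admin kubeconfig via an environment variable instead:

```shell
# Alternative to copying admin.conf: point kubectl at it directly
# (root only, since /etc/kubernetes/admin.conf is root-readable).
export KUBECONFIG=/etc/kubernetes/admin.conf
```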
When you have initialised your k8s cluster, you should get output similar to the following:
kubeadm join --token 5a2de9.fad7339a191cedac 192.168.0.2:6443 --discovery-token-ca-cert-hash sha256:744ab435b304cd3c83bbe4d65a35d28e46369f398c5640c806fe327fc9526b44
This command is used to join worker nodes to the cluster; we just have to execute it on each of the worker nodes.
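Once the workers have joined, a quick way to spot stragglers is to filter the `kubectl get nodes` output for anything not Ready. Below the filter runs on canned sample output so it can be tried anywhere (the node names, ages, and the NotReady worker are made up); on the master you would pipe the real command into the same awk instead:

```shell
# Print the names of nodes whose STATUS column is not "Ready".
sample='NAME   STATUS     ROLES     AGE   VERSION
m01    Ready      master    5m    v1.9.3
w01    Ready      <none>    2m    v1.9.3
w02    NotReady   <none>    1m    v1.9.3'

echo "$sample" | awk 'NR > 1 && $2 != "Ready" {print $1}'
# prints w02
```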
Lastly, you have to set up the pod network add-on on your master node. I had trouble setting up Flannel; however, when I tried Weave it just worked, so I went with it.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
After running kubectl get nodes
you should see all of your k8s nodes. By running kubectl get pods --all-namespaces
you should see all of the pods.
tips
kubectl alias
Add an alias for kubectl
to make your life easier.
echo "alias k='kubectl'" >> ~/.bashrc ; . ~/.bashrc
By adding this alias you won't need to type the full kubectl
command, just k
, so the previous commands become simpler: k get nodes
and k get pods --all-namespaces
.
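If you also use kubectl's bash completion, the alias can be wired into it as well. A sketch for ~/.bashrc, assuming the completion script has been sourced first (it defines the `__start_kubectl` completion function):

```shell
# In ~/.bashrc, after the alias: load kubectl's completion and attach it to k.
source <(kubectl completion bash)
complete -o default -F __start_kubectl k
```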