Setting up an RKE1 Kubernetes cluster on Vagrant machines

Purushotham Reddy
6 min read · Jun 26, 2024


Introduction

We all know that Kubernetes (aka k8s) is a popular container orchestration platform. There are many managed cloud offerings such as AWS EKS, Azure AKS, and Google GKE. However, when it comes to on-premises environments, a popular choice is Rancher Kubernetes Engine (RKE).

RKE comes in 2 variants:

  • RKE1
  • RKE2

RKE1 uses Docker as the container runtime and runs the control plane components as Docker containers.

RKE2 uses containerd as the container runtime and runs the control plane components as static pods (managed by the kubelet). It is referred to as the next generation of RKE1.

In this blog we are going to discuss setting up an RKE1 Kubernetes cluster on Vagrant machines.

Prerequisites:

  • Base machine with any OS (you can use your laptop as well)

In my case I am using my laptop as the base machine, which runs Ubuntu.

Required software

  • Virtualbox
  • Vagrant cli
  • rke cli

Let's proceed with installing each of them.

Virtualbox

Download VirtualBox from the official VirtualBox downloads page.

Based on your host OS, choose the respective installer. Since I am using Ubuntu 20.04 as the base machine (laptop), I am downloading the .deb package and installing it. Execute the commands below.


# Downloading virtualbox package

wget https://download.virtualbox.org/virtualbox/7.0.18/virtualbox-7.0_7.0.18-162988~Ubuntu~focal_amd64.deb

# Installing virtualbox package

sudo dpkg -i virtualbox-7.0_7.0.18-162988~Ubuntu~focal_amd64.deb
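If dpkg reports missing dependencies, they can usually be pulled in with apt afterwards (a common extra step on a fresh machine):


# Resolve any missing dependencies reported by dpkg
sudo apt-get install -f -y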

Once the installation is complete, you should see VirtualBox among the installed applications on your laptop.
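You can also confirm the installation from the terminal using VBoxManage, which ships with VirtualBox:


# Print the installed VirtualBox version
VBoxManage --version

# List existing VMs (empty for now)
VBoxManage list vms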

Vagrant cli installation

Vagrant can be downloaded from the official Vagrant downloads page. Choose the respective OS and architecture when downloading the installer.

Since I am on an Ubuntu machine, I will be executing the commands below.


wget https://releases.hashicorp.com/vagrant/2.4.1/vagrant_2.4.1_linux_amd64.zip

unzip vagrant_2.4.1_linux_amd64.zip

chmod +x vagrant

sudo mv vagrant /usr/bin

You can verify the Vagrant installation by executing the commands below.


vagrant --version

which vagrant

rke cli installation

Execute the commands below.


wget https://github.com/rancher/rke/releases/download/v1.5.10/rke_linux-amd64

chmod +x rke_linux-amd64

sudo mv rke_linux-amd64 /usr/bin/rke

which rke

To install the rke CLI for other OSes and architectures, refer to the RKE releases page on GitHub (https://github.com/rancher/rke/releases).
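You can also confirm the binary works by printing its version; it should report the release you downloaded (v1.5.10 in this case):


rke --version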

Installing Vagrant machines

We will be creating 2 Vagrant machines; the first will be used as the control plane node and the second as a worker node.

Execute the commands below to clone the Vagrant configuration.


git clone https://github.com/purushothamkdr453/rke1-learning.git

cd rke1-learning/RKE1-setup/

ls -lrt

Open the Vagrantfile and replace the <Replace this> placeholder with your bridge network interface.

Bridge network in this case means the interface (the one with an IP address) that is connected to your Wi-Fi/Internet.

For example, in my case, I run the ip a command to list the interfaces.

My laptop is connected to my home Wi-Fi network through wlp3s0 (shown in the ip a output).
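If you have several interfaces and are not sure which one carries your Internet traffic, checking the default route is a quick way to find out. This is just a convenience; the interface name and addresses below are from my setup and will differ on yours:


# Show which interface is used to reach the Internet (the "dev" field)
ip route get 8.8.8.8
# Example output: 8.8.8.8 via 192.168.31.1 dev wlp3s0 src 192.168.31.10 ...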

So <Replace this> should be changed to wlp3s0 inside the Vagrantfile. After the replacement, execute the command below.


vagrant up

The above command will take a while (depending on network bandwidth) as it has to download the Vagrant boxes and execute the bootstrap shell script.

Wait until the command completes successfully. Once it is done, you should see the Vagrant machines listed in VirtualBox.
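You can also check the machines from the terminal with vagrant status; both kmaster and kworker1 should show as running:


vagrant status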

Each Vagrant machine runs Ubuntu 20.04 with Docker installed.

  • kmaster uses 2 CPUs and 4 GB RAM
  • kworker1 uses 2 CPUs and 1 GB RAM

You can adjust these values inside the Vagrantfile (it is self-explanatory).

Installing RKE1 cluster

Now we have the nodes (Vagrant machines) on which we are going to install the RKE1 k8s cluster.

In the current directory you should see a file named "cluster.yml". This is the configuration file RKE1 uses to provision the k8s cluster. However, there are placeholders which need to be changed before applying it.

Let's go over each placeholder attribute.

<MASTER IPADDRESS>: You can get this IP address by SSHing into the kmaster node. Execute the commands below.


vagrant ssh kmaster

ip a show enp0s8

The IP address shown on the enp0s8 interface is the value of <MASTER IPADDRESS>. In my case it is 192.168.31.50.

Replace <MASTER IPADDRESS> with 192.168.31.50 inside cluster.yml.
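If you prefer a one-liner instead of scanning the full ip a output, something like this prints just the IPv4 address of enp0s8 (run it inside the kmaster VM):


# Print the IPv4 address (with prefix length) assigned to enp0s8
ip -o -4 addr show enp0s8 | awk '{print $4}'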

<WORKERNODE1 IPADDRESS>

Now let's repeat the same process for <WORKERNODE1 IPADDRESS>.


vagrant ssh kworker1

ip a show enp0s8

The IP address shown on enp0s8 here is the value of <WORKERNODE1 IPADDRESS>. In my case it is 192.168.31.94.

Replace <WORKERNODE1 IPADDRESS> with 192.168.31.94 inside cluster.yml.

<MASTERNODE KEY>

We need to get the SSH key for the master node Vagrant machine. For this, execute the command below.


vagrant ssh-config kmaster

Copy the value of IdentityFile from the output and use it to replace <MASTERNODE KEY> inside cluster.yml.
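If you want just the key path without reading through the whole output, a small filter works too (purely optional):


# Print only the private key path for kmaster
vagrant ssh-config kmaster | awk '/IdentityFile/ {print $2}'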

<WORKERNODE1 KEY>

We need to get the SSH key for the worker node Vagrant machine. For this, execute the command below.


vagrant ssh-config kworker1

Copy the value of IdentityFile from the output and use it to replace <WORKERNODE1 KEY> inside cluster.yml.

With this, we have replaced all the placeholder attributes. The final cluster.yml file should look something like this.


cluster_name: rkelearning
network:
  plugin: canal
  options:
    canal_iface: enp0s8
nodes:
  - address: 192.168.31.50
    user: vagrant
    role:
      - controlplane
      - etcd
      - worker
    ssh_key_path: /home/purushotham/purushotham/learning/RKE1/blog/rke1-learning/RKE1-setup/.vagrant/machines/kmaster/virtualbox/private_key
    hostname_override: kmaster
  - address: 192.168.31.94
    user: vagrant
    role:
      - worker
    ssh_key_path: /home/purushotham/purushotham/learning/RKE1/blog/rke1-learning/RKE1-setup/.vagrant/machines/kworker1/virtualbox/private_key
    hostname_override: kworker1
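Before running rke up, it can be worth checking that the nodes are reachable the same way rke will reach them: over SSH as the vagrant user, with permission to run Docker. A quick manual check (assuming you are still in the RKE1-setup directory, so the .vagrant key paths below resolve; accept the host key prompt if asked) looks like this. If either command fails, rke up will fail as well:


# Master node: should print the Docker version without asking for a password
ssh -i .vagrant/machines/kmaster/virtualbox/private_key vagrant@192.168.31.50 docker version

# Worker node
ssh -i .vagrant/machines/kworker1/virtualbox/private_key vagrant@192.168.31.94 docker version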

Now let's create the RKE1 k8s cluster by executing the command below.


rke up

The above command takes some time as it downloads container images and runs the k8s cluster components as Docker containers on each node (Vagrant machine).


The above command generates 2 files.

  • cluster.rkestate: the file in which RKE1 maintains the cluster state
  • kube_config_cluster.yml: the kubeconfig file of the created cluster

These 2 files are generated in the current directory.

Now let's connect to the created k8s cluster using the generated kubeconfig.


export KUBECONFIG=./kube_config_cluster.yml

kubectl get nodes

kubectl get pods -n kube-system
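Since RKE1 runs the control plane components as Docker containers (as mentioned earlier), you can also SSH into the master node and see them directly; the exact container names may differ slightly between RKE versions:


vagrant ssh kmaster

# List the Kubernetes-related containers started by rke
docker ps --format '{{.Names}}' | grep kube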

RKE uses Canal as the network plugin (a combination of Calico and Flannel).
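You can see the Canal pods themselves in the kube-system namespace; expect roughly one canal pod per node:


kubectl get pods -n kube-system | grep canal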

With this, we can conclude that the RKE cluster setup is successful.

Feel free to update the Vagrantfile with more nodes and more CPU/memory as per your needs. Similarly, adjust the node configuration in cluster.yml as well, as shown in the sketch below.
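As a rough sketch, adding a second worker would mean defining another VM in the Vagrantfile and then appending one more entry under nodes in cluster.yml. The name kworker2 and the placeholders below are hypothetical and need to be filled in the same way as before:


  - address: <WORKERNODE2 IPADDRESS>      # enp0s8 IP of the new VM
    user: vagrant
    role:
      - worker
    ssh_key_path: <WORKERNODE2 KEY>       # from: vagrant ssh-config kworker2
    hostname_override: kworker2


After updating the file, run rke up again and RKE will reconcile the cluster and bring the new node in.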

Hope you liked this blog. Feel free to comment if you have any queries/questions.
