In this post we are going to see how to set up a multi-node Kubernetes cluster on LXD/LXC.

 

Architecture

We will set up a Kubernetes cluster on CentOS 7 with one master and three worker nodes, made up of the machines below:

Hostname     Role     Version   OS
k8s-master   Master   v1.14.1   CentOS Linux 7 (Core)
k8s-node-1   Node1    v1.14.1   CentOS Linux 7 (Core)
k8s-node-2   Node2    v1.14.1   CentOS Linux 7 (Core)
k8s-node-3   Node3    v1.14.1   CentOS Linux 7 (Core)

 

Prerequisite

Hardware

Minimum Recommended Hardware Requirements

Processor   CPU      RAM    Storage   Internet
64-bit      4-Core   8 GB   20 GB     3 Mbps

Operating System:

  • Linux

Software

  • Ansible
  • kubectl
  • LXD

 

Installation

If you’re in a hurry, you can copy and paste the following commands into your terminal to install all of the prerequisite software:

 

Install Ansible

sudo apt-get update && \
sudo apt-get install -y software-properties-common && \
sudo apt-add-repository --yes ppa:ansible/ansible && \
sudo apt-get update && \
sudo apt-get install -y ansible
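
You can confirm the installation succeeded before moving on:

$ ansible --version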

 

Install kubectl with snap

If you are on Ubuntu or another Linux distribution that supports the snap package manager, kubectl is available as a snap application.

sudo snap install kubectl --classic

Test to ensure the version you installed is sufficiently up-to-date (the cluster doesn't exist yet, so check the client version only):

kubectl version --client
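
The playbook in the next section provisions LXC containers, so LXD itself must be installed and initialized on the host. A minimal sketch, assuming your distribution supports snaps and the default storage and network settings are acceptable:

sudo snap install lxd
sudo lxd init --auto

lxd init --auto accepts all defaults; run sudo lxd init without the flag for an interactive walk-through.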

 

Getting Started

Clone this GitHub repo:

$ git clone https://github.com/sayems/lxc.resources.git

Spin up Kubernetes cluster

Navigate to the lxc.resources directory and then run the following command:

$ ansible-playbook playbook.yml

Now, wait for the Kubernetes cluster to be ready. This might take a while to complete.
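
If you'd like to watch the containers come up while the playbook runs, you can poll LXD from another terminal (an optional convenience, assuming lxc is on your PATH):

$ watch lxc list

All four containers should eventually show a RUNNING state.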

 

Login to k8s-master

You can now log in to the k8s-master by running the following command:

$ lxc exec k8s-master bash

You can verify the cluster by checking the nodes. Run the following command to list all the connected nodes:

[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE               
k8s-master   Ready    master   15h   v1.14.1   10.179.186.150   <none>        CentOS Linux 7 (Core)  
k8s-node-1   Ready    <none>   15h   v1.14.1   10.179.186.179   <none>        CentOS Linux 7 (Core)  
k8s-node-2   Ready    <none>   15h   v1.14.1   10.179.186.144   <none>        CentOS Linux 7 (Core)  
k8s-node-3   Ready    <none>   15h   v1.14.1   10.179.186.123   <none>        CentOS Linux 7 (Core)  
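
You can also ask the API server where the control-plane endpoints live (kubectl cluster-info is a standard read-only check):

[root@k8s-master ~]# kubectl cluster-info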

 

Check that the kube-system pods are running:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-529h8                 1/1     Running   1          15h
coredns-fb8b8dccf-hxxhw                 1/1     Running   1          15h
etcd-k8s-master                         1/1     Running   1          15h
kube-apiserver-k8s-master               1/1     Running   1          15h
kube-controller-manager-k8s-master      1/1     Running   1          15h
kube-flannel-ds-amd64-ddln7             1/1     Running   1          15h
kube-flannel-ds-amd64-g5l7c             1/1     Running   1          15h
kube-flannel-ds-amd64-jlblc             1/1     Running   1          15h
kube-flannel-ds-amd64-qwpnh             1/1     Running   1          15h
kube-proxy-f2wcx                        1/1     Running   1          15h
kube-proxy-gzx5j                        1/1     Running   1          15h
kube-proxy-pqpnx                        1/1     Running   1          15h
kube-proxy-qfmr7                        1/1     Running   1          15h
kube-scheduler-k8s-master               1/1     Running   1          15h
kubernetes-dashboard-8657bd768c-xbbkg   1/1     Running   1          13h
tiller-deploy-8458f6c667-7zz74          1/1     Running   1          13h

 

Deployment to Kubernetes with Helm

Let's deploy Selenium Grid on the Kubernetes cluster using Helm. Helm is a package manager for Kubernetes that allows developers to deploy applications on Kubernetes clusters.

 

Installing Helm

Installing Helm is actually pretty straightforward. Follow these steps:

[root@k8s-master ~]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
[root@k8s-master ~]# chmod 700 get_helm.sh
[root@k8s-master ~]# ./get_helm.sh
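
At this point only the Helm client is installed; Tiller, the server side, comes next. You can confirm the client works:

[root@k8s-master ~]# helm version --client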

 

Installing Tiller

[root@k8s-master ~]# kubectl create serviceaccount --namespace kube-system tiller

serviceaccount/tiller created
[root@k8s-master ~]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[root@k8s-master ~]# helm init --service-account tiller --upgrade

$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Now, run the following command to confirm the Tiller pod is up and running:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                                   READY   STATUS    RESTARTS   AGE
tiller-deploy-57c574bfb8-2f7zw                         1/1     Running   0          2m
[root@k8s-master ~]# kubectl get pods -n kube-system | grep tiller --color
tiller-deploy-57c574bfb8-2f7zw                         1/1     Running   0          3m
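
Once the Tiller pod is Running, the Helm client should be able to reach it; helm version reports both the client and server versions when the connection works:

[root@k8s-master ~]# helm version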

 

Selenium Grid installation

[root@k8s-master ~]# helm install --name selenium-grid stable/selenium \
--set chromeDebug.enabled=true \
--set firefoxDebug.enabled=true

Give the pods a minute to start, then list what the chart created (the listing below comes from kubectl get all, which is why the cluster's default kubernetes service appears alongside the chart's resources):

[root@k8s-master ~]# kubectl get all
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/selenium-grid-selenium-chrome-debug-77c567d586-ln2wr    1/1     Running   0          3m56s
pod/selenium-grid-selenium-firefox-debug-54d5f8c674-qr4fs   1/1     Running   0          3m56s
pod/selenium-grid-selenium-hub-77667c5c7c-5thrs             1/1     Running   0          3m56s

NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes                   ClusterIP      10.96.0.1       <none>        443/TCP          30m
service/selenium-grid-selenium-hub   LoadBalancer   10.106.174.57   <pending>     4444:30171/TCP   3m56s

NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/selenium-grid-selenium-chrome-debug    1/1     1            1           3m56s
deployment.apps/selenium-grid-selenium-firefox-debug   1/1     1            1           3m56s
deployment.apps/selenium-grid-selenium-hub             1/1     1            1           3m56s

NAME                                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/selenium-grid-selenium-chrome-debug-77c567d586    1         1         1       3m56s
replicaset.apps/selenium-grid-selenium-firefox-debug-54d5f8c674   1         1         1       3m56s
replicaset.apps/selenium-grid-selenium-hub-77667c5c7c             1         1         1       3m56s

The Selenium hub starts up on port 4444 by default; here the service exposes it externally as NodePort 30171. You can view the status of the hub by opening a browser window and navigating to: http://k8s-master-ip:30171/grid/console
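
The NodePort (30171 above) is assigned by Kubernetes at install time and may differ on your cluster. You can look it up with kubectl; the jsonpath expression below assumes the hub service exposes a single port:

[root@k8s-master ~]# kubectl get service selenium-grid-selenium-hub -o jsonpath='{.spec.ports[0].nodePort}'
30171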

For more information on how to run tests on Selenium Grid, please visit this link.

 

Remote Kubernetes Cluster

To connect to a remote Kubernetes cluster from your host computer, you'll need to copy the Kubernetes admin.conf credentials to your host's home directory, as shown below.

$ lxc file pull k8s-master/etc/kubernetes/admin.conf ~/.kube/config

That’s all you have to do. Your local kubectl should now be able to connect to the remote Kubernetes cluster.
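
A quick sanity check from the host (assuming kubectl reads ~/.kube/config, its default location, and KUBECONFIG is not set):

$ kubectl get nodes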