K8s Lab: Creating the Master

Summary

In this lab exercise we will learn, step by step, how to create a k8s cluster.

Prerequisites

  1. A compatible Linux host (a Linux distribution based on Ubuntu, Debian, or Red Hat)

  2. 2 GB or more of RAM per machine (any less will leave little room for your apps).

  3. 2 CPUs or more.

  4. Network connectivity between all Nodes in the cluster (Static or DHCP).

  5. Unique hostname, MAC address, and product_uuid for every node (see the Kubernetes installation docs for details).

  6. The required ports (e.g. 6443 for the API server) open on every machine.
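The uniqueness requirements in item 5 can be checked with a few standard commands; a quick sketch (run on each node and compare the outputs):

```shell
# Values that must be unique on every node in the cluster
hostname                                                  # node hostname
ip link show | awk '/link\/ether/ {print $2}' | sort -u   # MAC addresses
sudo cat /sys/class/dmi/id/product_uuid                   # hardware product UUID
```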

Preparation

1. Disable Swap

 sudo swapoff -a 

 sudo nano /etc/fstab 

Comment out the /swapfile (or swap partition) entry with a leading "#" so swap stays disabled after a reboot
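A quick way to confirm that swap is really off (a sanity check added here, not part of the original steps):

```shell
# Verify that no swap device or file is still active
swapon --show                          # prints nothing when swap is off
free -h | awk '/^Swap:/ {print $2}'    # total swap; should read 0B
```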

2. Disable SELinux (Security-Enhanced Linux)

 sudo setenforce 0 

 sudo sed -i 's/enforcing/disabled/g' /etc/selinux/config 

 grep disabled /etc/selinux/config | grep -v '#' 

3. Configure the Linux node's iptables to correctly see bridged traffic by ensuring net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config:

 cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf 

 net.bridge.bridge-nf-call-ip6tables = 1 

 net.bridge.bridge-nf-call-iptables = 1 

 EOF 

 sudo sysctl --system 
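These bridge sysctls only exist while the br_netfilter kernel module is loaded, so the upstream kubeadm docs also load it explicitly; a hedged sketch of that companion step:

```shell
# Load br_netfilter now, and register it to load on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

# Confirm the setting took effect
sysctl net.bridge.bridge-nf-call-iptables    # expect "... = 1"
```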

Step-by-Step Process

Initializing your control-plane

The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).

1. Download latest packages and install

Add Google’s apt repository gpg key

 sudo apt-get update && sudo apt-get install -y apt-transport-https curl 

 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - 

Add the Kubernetes apt repository

 cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list 

 deb https://apt.kubernetes.io/ kubernetes-xenial main 

 EOF 

Update the package list

 sudo apt-get update 

2. Install the following packages: Docker, kubelet, kubeadm, and kubectl

 sudo apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni 

Check the status of the kubelet and the container runtime (Docker)

 sudo systemctl status kubelet.service 

 sudo systemctl status docker.service 

Ensure both are enabled to start at boot

 sudo systemctl enable kubelet.service 

 sudo systemctl enable docker.service 

Note: We no longer want 'apt' to upgrade these packages; Kubernetes tooling (kubeadm) will manage its own upgrades instead

 sudo apt-mark hold kubelet kubeadm kubectl 

Note: The kubelet config file lives at /var/lib/kubelet/config.yaml
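You can confirm that the hold took effect and record the installed versions (a quick verification, not in the original lab):

```shell
# Held packages are skipped by apt-get upgrade
apt-mark showhold              # expect: kubeadm, kubectl, kubelet
kubeadm version -o short       # e.g. v1.18.x
kubectl version --client
```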

3. Reload the systemd config and restart Docker

 sudo systemctl daemon-reload 

 sudo systemctl restart docker 

Install a single control-plane Kubernetes cluster

4. To initialize the control-plane node, run the following. Note that the --pod-network-cidr value must match the Network value in Flannel's net-conf.json (10.244.0.0/16 by default):

 sudo kubeadm init --pod-network-cidr=10.244.0.0/16 

Sample output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.20:6443 --token ko9j83.f2gdekajw4z280st \
    --discovery-token-ca-cert-hash sha256:fc3f3bad62a8f4ea9333bca850dc6bd253d57250e6e39a8436d9428dc4be1884
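Bootstrap tokens like the one above expire (after 24 hours by default), so if you join workers later you can generate a fresh join command on the control-plane node; this is standard kubeadm behaviour, sketched here:

```shell
# Print a new 'kubeadm join ... --token ... --discovery-token-ca-cert-hash ...' line
sudo kubeadm token create --print-join-command

# List existing bootstrap tokens and their remaining TTL
sudo kubeadm token list
```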

5. To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

 mkdir -p $HOME/.kube 

 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 

 sudo chown $(id -u):$(id -g) $HOME/.kube/config 

Alternatively, if you are the root user, you can run:

 export KUBECONFIG=/etc/kubernetes/admin.conf 
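With the kubeconfig in place, a quick sanity check confirms kubectl can reach the API server (a sketch; the node name will match your host):

```shell
# Both commands should answer without connection or certificate errors
kubectl cluster-info
kubectl get nodes    # the master shows NotReady until a pod network is installed
```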

Install a Pod network on the cluster so that your Pods can talk to each other

6. Download & configure Flannel

 sudo wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

7. Modify net-conf.json inside kube-flannel.yml so that the network backend Type is set to "host-gw" and the Network matches the cluster CIDR (e.g. 10.244.0.0/16)

This is what net-conf.json should look like:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
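The manifest downloads with "vxlan" as the backend Type, so one way to make the edit described above without opening an editor is a sed one-liner (a sketch; double-check the result before applying):

```shell
# Switch Flannel's backend from the default vxlan to host-gw, in place
sed -i 's/"Type": "vxlan"/"Type": "host-gw"/' kube-flannel.yml

# Confirm the change landed in net-conf.json
grep '"Type"' kube-flannel.yml    # expect: "Type": "host-gw"
```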

8. Apply the Flannel YAML and validate

 kubectl apply -f kube-flannel.yml 

Validate Flannel Pod Network

 kubectl get pods --all-namespaces 

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE

kube-system   coredns-66bff467f8-hvft2              1/1     Running   0          6m

kube-system   coredns-66bff467f8-xmc27              1/1     Running   0          6m

kube-system   etcd-linuxmaster                      1/1     Running   0          6m14s

kube-system   kube-apiserver-linuxmaster            1/1     Running   0          6m14s

kube-system   kube-controller-manager-linuxmaster   1/1     Running   0          6m14s

kube-system   kube-flannel-ds-amd64-jl5v5           1/1     Running   0          46s

kube-system   kube-proxy-gh4qr                      1/1     Running   0          6m

kube-system   kube-scheduler-linuxmaster            1/1     Running   0          6m13s

All of the pods will be in the 'Running' state once the Flannel pod network has been deployed successfully.

9. Verify that the Flannel DaemonSets (one per architecture) have been created alongside kube-proxy.

 kubectl get ds -n kube-system 

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE

kube-flannel-ds-amd64     1         1         1       1            1           <none>                   54s

kube-flannel-ds-arm       0         0         0       0            0           <none>                   54s

kube-flannel-ds-arm64     0         0         0       0            0           <none>                   54s

kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   54s

kube-flannel-ds-s390x     0         0         0       0            0           <none>                   54s

kube-proxy                1         1         1       1            1           kubernetes.io/os=linux   6m23s

10. Verify and validate Master status

 kubectl get nodes -o wide 

NAME          STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME

linuxmaster   Ready    master   6m41s   v1.18.5   10.0.0.x      <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-azure   docker://19.3.6


Happy Learning!
Sumeet Kumar

I am a Windows Core Engineer with 7+ years of experience in Hyper-V, Failover Clustering, Windows Storage, Volume Shadow Copy (VSS), Docker & Containers on Windows Server, Backup & Recovery, and VMware vSphere ESXi & vCenter Server.
