Summary
In this article, we will learn the basic concepts of the Kubernetes Master and Nodes (Minions).
Prerequisites
A Linux physical or virtual machine for the Master, and Linux/Windows physical or virtual machines as Nodes (Minions)
Kubernetes: Introduction
- Kubernetes was designed and developed by Google. In 2014, Google donated Kubernetes to the CNCF (Cloud Native Computing Foundation)
- Kubernetes lives at https://github.com/kubernetes/kubernetes. Kubernetes borrows some of its DNA from Borg and Omega
- Kubernetes is written in Go (Golang)
- The first version, v1.0, was released in July 2015. Kubernetes is often abbreviated as K8s
- Kubernetes is a framework for building distributed platforms. It is a cluster manager, or orchestrator, that lets us manage containerized apps (microservices) at scale
A microservices app comprises many small, independent services. These services come together to create a meaningful application
Master and Nodes (Minions)
In a K8s cluster, we package our app and hand it to the Master, which decides on which Node the application will run.
Minions run the actual workload. They report the status of the application to the Master and listen for any changes to the application definition on the Master.
Master
The Master can be broken into 4 components:
- API Server
- Cluster Store
- Scheduler
- Controller Manager
kube-apiserver
- It is the front end of the Master (control plane). It is the only Master component with an externally facing interface
- Exposes a RESTful API (default port: 443; many clusters use 6443)
- Consumes JSON (via manifest files)
- We create a manifest file with the application's desired-state definition (in JSON or YAML) and submit it to the Master. The Master validates the manifest and deploys the workload to one of the Nodes
- Commands and queries are issued to the API Server component of the Master (through the kubectl command-line utility, formatted as JSON). Commands and action items make their way to the Nodes
- We issue work to the cluster through the API Server (using a manifest); all the components on the Master work together to decide where to run the work. Work is distributed across the worker Nodes
- While running the work, the worker Nodes continuously report the status of their Pods to the Master
- The Master runs only system Pods: Cluster Store, Scheduler, Controller Manager, API Server. All user workloads run on the Minions. However, you can run user Pods on the Master (requires configuration changes)
Recommendation: run applications on the Nodes and keep the Master free of application workloads
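As a sketch, a minimal manifest describing a Pod's desired state might look like the following (the names and image are illustrative; kubectl serializes the manifest to JSON before sending it to the API Server):

```yaml
# Minimal Pod manifest (illustrative). kubectl converts this to JSON
# and submits it to the API Server on the Master.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25      # any container image
    ports:
    - containerPort: 80
```

Submitted with `kubectl apply -f pod.yaml`, the Master validates the manifest and the Scheduler picks a Node to run the Pod on.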
Cluster Store: Persistent Storage
- The Cluster Store persistently stores the configuration and state of the cluster
- Uses etcd, a distributed key-value store originally from CoreOS. It is distributed, consistent, and watchable
- Kubernetes uses etcd as the “source of truth” for the cluster
- It is the only stateful part of the control plane (every other component is stateless)
Kube-controller-manager
- Controller of controllers (a mini monolith)
- The Controller Manager implements a number of features and functions:
  - Node Controller
  - Endpoints Controller
  - Namespace Controller
  - ……
- Watches the current state of the system via the API Server to ensure the desired state is maintained
- Helps implement the lifecycle functions of Pods and maintains the desired state of the system
- These controllers monitor for changes, with the end goal of making sure the current state of the cluster matches the desired state
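As an illustration of desired state, a Deployment manifest declares how many replicas should exist, and the Controller Manager's control loops keep the current state converged toward it (names here are hypothetical):

```yaml
# The Deployment/ReplicaSet controllers continuously compare the
# observed Pod count against spec.replicas and create or delete
# Pods until the two match.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy         # hypothetical name
spec:
  replicas: 3              # desired state: 3 Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If a Node running one of the three Pods fails, the controllers notice the mismatch and create a replacement Pod elsewhere.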
Kube-scheduler
- Watches the API Server for new (unscheduled) Pods and schedules these Pods onto Nodes
- Assigns work to Nodes. It considers the following before assigning Pods to a Node:
  - Affinity/anti-affinity
  - Constraints
  - Resource management
  - ….
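The scheduling hints above appear in the Pod spec itself. A sketch, with illustrative labels and values:

```yaml
# Illustrative Pod spec fragments the scheduler takes into account.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-task        # hypothetical name
spec:
  nodeSelector:            # constraint: only Nodes carrying this label
    disktype: ssd
  affinity:
    nodeAffinity:          # affinity rule: Linux Nodes only
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: ["linux"]
  containers:
  - name: task
    image: busybox:1.36
    resources:
      requests:            # resource management input for scheduling
        cpu: "500m"
        memory: "256Mi"
```

The scheduler only places the Pod on a Node that satisfies the selector and affinity rules and has enough unreserved CPU and memory to cover the requests.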
Nodes (a.k.a. Minions)
Minions run the actual workload. They report the status of the application to the Master and listen for any changes to the application definition on the Master.
Minions comprise 3 components:
- Kubelet
- Container Runtime (Docker Engine)
- Kube-proxy
Kubelet
The kubelet is the main Kubernetes agent on the Node. It has the following functions:
- Registers the Node with the cluster
- Watches the API Server
- Instantiates Pods
- Reports the health status of Pods, the Node, and the application (containers) back to the Master
- Exposes an endpoint on port 10255
When you install the kubelet on a Linux host, it registers the host as a Node in the Kubernetes cluster. It watches the API Server on the Master for work packages (basically Pods). Any time the kubelet sees work, it carries out the task and maintains a reporting channel back to the Master.
Example: if the kubelet cannot run the work, it reports back to the Master, and the control plane on the Master decides on the action items. The kubelet does not restart Pods or replace Pods on other active Nodes; it only reports status back to the Master.
The kubelet exposes an endpoint on localhost on port 10255, where we can inspect the kubelet's specs.
Container Engine
Performs container management:
- Pulling images
- Starting/stopping containers
- ….
The container engine coordinates with the kubelet for container management.
Kube-proxy
Kubernetes networking:
- Pod IP address: every Pod gets its own IP; all containers in a Pod share that single IP
- Load balancing across all Pods in a Service: kube-proxy implements Services by routing and load-balancing traffic to the backing Pods
- No NAT between Pods
Kube-proxy is the networking brains of the Node. It makes sure that every Pod gets its own unique IP; however, all containers in a Pod share that same IP.
- Containers in a single Pod communicate over localhost, since they share the Pod's network namespace and IP address
- Pod-to-Pod on the same Node: communication goes over a Layer 2 software bridge using the Pods' individual IPs
- Pod-to-Pod across Nodes: communication goes over an overlay network
- External access to a Service: handled by kube-proxy
The kubelet and kube-proxy communicate with the API Server, monitoring it for changes to the environment.
Note: the Master also runs a kubelet, a container runtime (Docker Engine), and kube-proxy.
Additional Information
Ports used by k8s Cluster Components/services
| Component | Port (TCP) | Used By |
|---|---|---|
| API Server | 6443 | All |
| etcd | 2379-2380 | API Server/etcd |
| Scheduler | 10251 | Self (localhost) |
| Controller Manager | 10252 | Self (localhost) |
| Kubelet | 10250 | Control plane |
| NodePort Services | 30000-32767 (port range) | All |
NodePort Service: a type of Kubernetes Service that exposes the Service's port on each individual Node in the cluster, making it reachable from outside.
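As a sketch, a NodePort Service manifest might look like this (the name, labels, and port values are illustrative):

```yaml
# Illustrative NodePort Service: exposes port 80 of the matching Pods
# on every Node at a port from the 30000-32767 range.
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # hypothetical name
spec:
  type: NodePort
  selector:
    app: web               # routes traffic to Pods with this label
  ports:
  - port: 80               # cluster-internal Service port
    targetPort: 80         # container port inside the Pod
    nodePort: 30080        # must fall within 30000-32767
```

With this in place, the Service is reachable from outside the cluster at `<any-node-ip>:30080`, and kube-proxy load-balances the traffic across the matching Pods.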