Getting Started with Kubeadm Cluster

Before jumping into the practical demonstration, let's understand why a kubeadm cluster is a better fit for production-like setups than a minikube cluster.

Kubeadm and minikube both allow you to create Kubernetes clusters yourself. However, some key differences make kubeadm more suitable for production use cases:

  • Scale: Kubeadm can create clusters with multiple nodes, allowing you to test and run applications at a larger scale. Minikube is primarily a single-node cluster.

  • Performance: Since minikube typically runs inside a VM, its performance can be limited. Kubeadm clusters run directly on hardware or cloud instances and offer better performance.

  • High availability: Kubeadm can be configured for high availability with multiple control-plane (master) nodes behind a load balancer. Minikube runs a single node and does not offer high availability.

  • Configuration: Kubeadm provides more configuration options and flexibility to customize the cluster. Minikube has limited configuration options as it targets simplicity.

  • Add-ons: Kubeadm makes it easier to install and manage various add-ons like the dashboard, metrics server, etc. Minikube add-ons are more limited.

  • Production readiness: Since kubeadm clusters are more feature-rich, they provide a closer simulation of a real production cluster. Minikube is targeted more for testing and development.

In summary, kubeadm clusters offer:

  • Better scale

  • Higher performance

  • High availability options

  • More flexibility

  • Support for add-ons

  • Simulation of production environments

While minikube clusters are ideal for:

  • Getting started with Kubernetes

  • Learning and experimenting

  • Testing applications

So for testing purposes during development, minikube is a good choice. But for running applications in a simulated production environment, a kubeadm cluster will be a better option due to its advantages listed above.

Installation

Creating EC2 Instances

First, we need to create two EC2 instances: one for the master node and one for the worker node.

Log in to your AWS account.

  1. Navigate to the EC2 dashboard and click "Launch Instance".

  2. Name: k8s-demo

  3. Number of Instances: 2 (master node and worker node)

  4. Application and OS image: Ubuntu

  5. Instance type: t2.medium

  6. Key pair: create a new one or use the existing one

  7. Keep the rest of the things as default and click on "Launch Instance"

Now, rename your instances as master and worker so that it is easy to identify the master node and the worker node.

SSH into both the master node and the worker node using the command:

ssh -i {path of pem file} ubuntu@{Public IPv4 address}

After SSHing into both instances, we will install kubeadm on both the master node and the worker node.

Installing Kubeadm

First, we will install the Docker engine on both nodes using the commands:

sudo apt update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo apt install docker.io -y

Enable and start the Docker engine:

sudo systemctl enable --now docker
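
As a quick sanity check (optional, not a required step), you can confirm Docker is running before moving on:

sudo systemctl is-active docker
docker --version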

Add the Kubernetes GPG key and apt repository:

curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
# Add the Kubernetes repository to the apt sources list
echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Now finally, we will install kubeadm using the command:

sudo apt update 
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
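
To keep apt from upgrading these pinned versions underneath a running cluster later, you can also hold the packages:

sudo apt-mark hold kubeadm kubectl kubelet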

Setting up Master Node

Now we will initialize the Kubernetes master node so that all of the control-plane services get started.

sudo kubeadm init

Once the command completes, your Kubernetes control plane will be initialized.

Set up the local kubeconfig so kubectl can talk to the cluster (the commands below are for a normal user; the root user can use /etc/kubernetes/admin.conf directly):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
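
For the root user, you can alternatively point kubectl straight at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf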

Apply Weave network:

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
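
You can confirm the Weave pods (and the rest of the system pods) come up with:

kubectl get pods -n kube-system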

Generate a token for worker nodes to join:

sudo kubeadm token create --print-join-command
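
The printed join command will look roughly like this (the IP, token, and hash below are placeholders; use the exact command printed on your master node):

kubeadm join 172.31.x.x:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:&lt;hash&gt;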

Setting Security Groups

By default, the Kubernetes API server will not be accessible from the worker node due to the inbound traffic restrictions in AWS. Open port 6443 in the inbound traffic rules as shown below.

  • EC2 > Instances > Click on the master instance

  • In the bottom tabs -> Click on Security

  • Security groups

  • Add inbound traffic rules (allowing just TCP 6443 is enough; in my case, I allowed All traffic).

Setting up Worker Node

Run the following commands on the worker node

sudo kubeadm reset

This clears any previous kubeadm state and re-runs the pre-flight checks. Then paste the join command you got from the master node and append --v=5 at the end for verbose output. Make sure you either run it as the root user or prefix the command with sudo.

After a successful join, the worker node becomes part of the cluster.

Verifying Cluster Connections

On Master Node

kubectl get nodes
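
If everything worked, both nodes should show up in the Ready state. The output will look roughly like this (node names and ages below are illustrative):

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   10m   v1.20.0
worker   Ready    &lt;none&gt;                 2m    v1.20.0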

Now, to test a demo pod you can refer to my Pods blog:

https://ketangrover.hashnode.dev/kubernetes-deploy-your-first-pod

That's a wrap!