Setting Up a Kubernetes Multi-Node Cluster on AWS: A Step-by-Step Guide🚀
INTRODUCTION
In today’s rapidly evolving landscape of cloud-native technologies, Kubernetes has emerged as the de facto standard for container orchestration, empowering organizations to efficiently deploy, manage, and scale containerized applications. In this comprehensive guide, we’ll walk through the process of setting up a Kubernetes multi-node cluster from scratch on AWS (Amazon Web Services), enabling you to harness the power of Kubernetes for your own infrastructure needs.
Let’s Begin The Journey🏃
Step 1: Provisioning AWS Instances
Begin by logging into your AWS Management Console and navigating to the EC2 (Elastic Compute Cloud) dashboard. Here, launch the EC2 instances that will serve as the nodes in your Kubernetes cluster. Select instance types and sizes based on your workload requirements, and allocate sufficient resources for each node, including CPU, memory, and storage. Choose the RHEL 9.2 AMI.
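If you prefer the command line, the same instances can be launched with the AWS CLI. The sketch below builds the command as a string first so it can be reviewed before running; the AMI ID and key pair name are placeholders you must replace with your own values.

```shell
# Hypothetical values - replace with your own RHEL 9.2 AMI ID and key pair.
AMI_ID="ami-0123456789abcdef0"
KEY_NAME="my-key"
# One master plus two workers; t2.medium provides the 2 vCPU / 2 GiB
# minimum that kubeadm requires on the control plane.
CMD="aws ec2 run-instances --image-id $AMI_ID --instance-type t2.medium --count 3 --key-name $KEY_NAME"
# Print the command for review instead of executing it directly.
echo "$CMD"
```

Run the printed command once you have confirmed the values (your shell must have AWS credentials configured).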
Step 2: Configuring Security Groups
Next, configure security groups to control inbound and outbound traffic to your EC2 instances. Define rules to allow communication between the nodes within the cluster, as well as access to necessary services such as SSH (Secure Shell) for remote administration. For this walkthrough we allow All traffic in the security group so that there are no connectivity issues between the nodes; in a production setup you should restrict the rules to only the required ports.
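The same security group can be prepared with the AWS CLI. The sketch below builds the commands as strings for review rather than executing them; the VPC ID is a placeholder, and the ingress rule mirrors the permissive all-traffic setup used in this walkthrough.

```shell
# Hypothetical VPC ID - substitute your own.
VPC_ID="vpc-0123456789abcdef0"
SG_NAME="k8s-cluster-sg"
# Build the commands for review; the ingress rule opens all protocols
# from anywhere, matching the permissive setup used here (not for production).
CREATE_CMD="aws ec2 create-security-group --group-name $SG_NAME --description k8s-cluster --vpc-id $VPC_ID"
AUTHORIZE_CMD="aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol all --cidr 0.0.0.0/0"
echo "$CREATE_CMD"
echo "$AUTHORIZE_CMD"
```

Execute the printed commands with configured AWS credentials, then attach the group to your instances.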
Step 3: Setting Up the Master Node
1. For best performance, Kubernetes requires that swap be disabled on the master and worker nodes. Disable it, and edit the /etc/fstab file (comment out the swap entry) to make the change persistent across reboots.
swapoff -a
2. Install the traffic control utility package. ip controls IPv4 and IPv6 configuration, and tc stands for traffic control. Both tools print detailed usage messages and are accompanied by a set of man pages.
dnf install -y iproute-tc
3. Load the kernel modules required for overlay networking, which enables inter-pod communication between nodes.
$ modprobe overlay
$ modprobe br_netfilter
To make this setting permanent, add the modules to a file named k8s.conf:
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
4. Enable IP forwarding for Kubernetes.
IP forwarding is a kernel setting that allows traffic coming in on one interface to be routed out through another. It is necessary for the Linux kernel to route traffic from containers to the outside world.
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the changes and confirm they took effect:
sysctl --system
5. Set SELinux to permissive mode to allow smooth communication between the nodes and the pods.
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
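To see exactly what the sed command does, here is the same substitution run against a scratch copy of a typical config file (so nothing on your system is touched):

```shell
# Create a scratch file mimicking the default /etc/selinux/config entries.
TMP=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$TMP"
# The same substitution as above, applied to the scratch copy.
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$TMP"
grep '^SELINUX=' "$TMP"
```

On the real node, setenforce 0 changes the mode immediately, while the edit to /etc/selinux/config makes it survive a reboot; getenforce should then report Permissive.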
6. A container runtime is the application that runs containers at the lowest level. For this we will install CRI-O; pick a CRI-O version close to the Kubernetes version you plan to install.
$ export VERSION=1.26
$ curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8/devel:kubic:libcontainers:stable.repo
$ curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/CentOS_8/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
$ dnf install -y cri-o
$ systemctl enable crio
$ systemctl start crio
7. Configure the yum repository for downloading kubelet, kubectl, and kubeadm. This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
Install the packages:
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
8. Initialize the Kubernetes cluster with the kubeadm command below. This bootstraps the control plane on the master node; the --pod-network-cidr flag sets the address range from which pod IPs will be allocated.
$ kubeadm init --pod-network-cidr=192.168.0.0/16
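The pod CIDR must not overlap with the network the EC2 instances themselves use (a default AWS VPC uses 172.31.0.0/16). A quick sanity check, shown here with both ranges as assumed example values:

```shell
# Assumed example ranges: the pod CIDR passed to kubeadm, and a default VPC CIDR.
POD_CIDR="192.168.0.0/16"
VPC_CIDR="172.31.0.0/16"
# Use Python's ipaddress module to test for overlap between the two ranges.
RESULT=$(python3 - "$POD_CIDR" "$VPC_CIDR" <<'PY'
import ipaddress, sys
pod, vpc = (ipaddress.ip_network(a) for a in sys.argv[1:3])
print("overlap" if pod.overlaps(vpc) else "ok")
PY
)
echo "$RESULT"
```

"ok" means the two ranges are safe to use together; if it prints "overlap", choose a different pod CIDR.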
9. To start using your cluster, you need to run the following as a regular user:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
10. By default, application pods are not scheduled on the master node. If you want to use the master node for scheduling apps as well, remove the control-plane taint:
$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
11. Installing Calico
Up to this point everything is fine, but if you launch a deployment now (after configuring the worker nodes) you will find that pods cannot reach pods on other nodes: inter-pod communication is not yet enabled. For this we need a CNI plugin to set up the overlay networking, and this is where Calico comes in.
After installing the Calico CNI, the nodes will change to the Ready state, the DNS service inside the cluster will become functional, and containers will be able to communicate with each other.
curl -O https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
12. Open the calico.yaml file and make the following changes:
vi calico.yaml
# Find the line in the file where Auto-detect is written and make the following changes
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
- name: IP_AUTODETECTION_METHOD
value: "interface=eth0"
And then apply the file :
kubectl apply -f calico.yaml
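The manual edit can also be scripted. The sketch below demonstrates the insertion with awk on a minimal excerpt standing in for calico.yaml (the full manifest has the same structure around the IP variable); the output file name is just an illustration.

```shell
# Minimal excerpt standing in for the relevant part of calico.yaml.
cat > /tmp/calico-excerpt.yaml <<'EOF'
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
EOF
# Insert the IP_AUTODETECTION_METHOD entry right after the autodetect value,
# preserving the YAML indentation of the surrounding env list.
awk '{print} /value: "autodetect"/{
  print "            - name: IP_AUTODETECTION_METHOD"
  print "              value: \"interface=eth0\""
}' /tmp/calico-excerpt.yaml > /tmp/calico-patched.yaml
grep -c 'IP_AUTODETECTION_METHOD' /tmp/calico-patched.yaml
```

On a real cluster you would run the same awk over the downloaded calico.yaml and apply the patched file with kubectl.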
13. Verify by listing the pods in the kube-system namespace:
kubectl get pods -n kube-system
14. Verify that all the worker nodes are fully registered (after joining them to the cluster, as described below):
kubectl get nodes
15. Add a label to each worker node:
kubectl label node <host-name> node-role.kubernetes.io/worker=wo
16. Verify the health status of all cluster components using the following command:
$ kubectl get --raw='/readyz?verbose'
→ To get the cluster info:
$ kubectl cluster-info
17. If you missed copying the join command, run the following on the master node to recreate the token along with the join command.
Then paste the printed command into each worker node to add it to the cluster.
$ kubeadm token create --print-join-command
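The printed join command looks roughly like the line below (the address, token, and hash here are made-up placeholders). If you are scripting the join, the token can be extracted from it:

```shell
# Hypothetical output of 'kubeadm token create --print-join-command'.
JOIN='kubeadm join 172.31.5.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234abcd'
# Pull out the token value that follows the --token flag.
TOKEN=$(echo "$JOIN" | awk '{for (i = 1; i < NF; i++) if ($i == "--token") print $(i + 1)}')
echo "$TOKEN"
```

Tokens expire after 24 hours by default, so recreate one if the workers join later.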
18. Create your Deployment and expose it to the outside world:
$ kubectl create deployment lwdeploy1 --image=vimal13/apache-webserver-php --replicas=5
$ kubectl expose deployment lwdeploy1 --type=NodePort --port=80
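A NodePort service becomes reachable on every node's public IP at a port Kubernetes assigns from the 30000-32767 range. That port can be read with kubectl get svc lwdeploy1 -o jsonpath='{.spec.ports[0].nodePort}'; the parsing it performs is illustrated below on a canned response with a made-up port number:

```shell
# Canned service JSON with a hypothetical assigned NodePort.
SVC_JSON='{"spec":{"ports":[{"port":80,"nodePort":31234}]}}'
# Extract the nodePort field, mirroring what the jsonpath query returns.
NODE_PORT=$(printf '%s' "$SVC_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["spec"]["ports"][0]["nodePort"])')
echo "$NODE_PORT"
```

With the real port in hand, browse to http://<node-public-ip>:<node-port> to reach the web server (remember the security group must allow that port).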
Step 4: Setting Up the Worker/Slave Nodes
While launching the worker nodes in AWS, go to the User data option and paste the script below, which contains the setup commands common to both the master and worker nodes.
These commands will run automatically while each instance boots.
Add the script to the User data field and launch the instances that will act as your worker nodes.
#!/bin/bash
swapoff -a
dnf install -y iproute-tc
modprobe overlay
modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
KUBERNETES_VERSION=v1.29
PROJECT_PATH=prerelease:/main
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
EOF
cat <<EOF | tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/$PROJECT_PATH/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/$PROJECT_PATH/rpm/repodata/repomd.xml.key
EOF
dnf install -y cri-o kubelet kubeadm kubectl
systemctl enable --now crio
systemctl enable --now kubelet
That’s it. Now launch the instances and log in to them to run the join command that will make them part of the cluster.
Now Finally Your Kubernetes Multi Node Cluster is Ready🚀
Congratulations🎉
You’ve successfully set up a Kubernetes multi-node cluster on AWS🥳, empowering you to harness the full potential of Kubernetes for orchestrating containerized workloads in your environment. Explore additional features and capabilities of Kubernetes to optimize and streamline your application deployment and management workflows.
Stay tuned for more in-depth tutorials and best practices on leveraging Kubernetes and other cloud-native technologies for modern application development and deployment.