Docker In Docker (DinD): Building Scalable Swarm Clusters with HAProxy

@Harsh
9 min read · Jan 19, 2024


Introduction:

In the dynamic realm of containerization, Docker in Docker (DinD) emerges as a powerful tool, offering a deeper understanding of container orchestration. This journey unfolds as we dive into the intricacies of building a Docker Swarm cluster, a scalable and resilient environment for managing containers. To enhance accessibility and streamline communication, we introduce HAProxy, turning this exploration into a comprehensive guide for enthusiasts and practitioners alike.

Our architecture will look like this:

Part 1: Understanding DinD — Docker in Docker:

DinD allows running Docker containers within Docker containers, enabling scenarios like isolated testing and dynamic environment setups. It can be beneficial in such cases, but it is essential to use it judiciously and be mindful of the security and performance implications. DinD also can’t be used on platforms that don’t support privileged mode, such as those that run containers on Windows.

Delve into the architecture, setup, and scenarios where DinD shines. By grasping DinD’s nuances, we lay the foundation for our Swarm cluster.

Part 2: Building a Docker Swarm Cluster:

Step into the world of container orchestration as we construct a Docker Swarm cluster. Understand the roles of manager and worker nodes as we configure three nodes as managers and three as workers. Witness the synchronization that occurs as containers seamlessly communicate across the cluster, creating a distributed environment for scalable and efficient container management.

Follow these Steps to Create the same Architecture:

STEP 1

Launching 3 Manager Nodes:

docker run -dit --name Docker-manager1 -p 8081:8080 --privileged docker:dind 

We run this command three times, once for each manager node, each with a different container name and host port.
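The three launches can be scripted in one loop; the container names and host ports below mirror the pattern above:

```shell
# Launch three DinD containers to act as Swarm managers.
# Host ports 8081, 8082, 8083 each map to container port 8080.
for i in 1 2 3; do
  docker run -dit --name "Docker-manager$i" -p "808$i:8080" --privileged docker:dind
done
```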

STEP 2

Launching 3 Worker Nodes:

docker run -dit --name Docker-worker1 --privileged docker:dind 

Now we run this command three times to launch three worker containers.
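Likewise, the three workers can be launched in a loop; no ports are published here, since traffic will reach them through the managers:

```shell
# Launch three DinD containers to act as Swarm workers.
for i in 1 2 3; do
  docker run -dit --name "Docker-worker$i" --privileged docker:dind
done
```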

STEP 3

Now we enter the Docker-manager1 container with the exec command and initiate the swarm in it.

#To enter into Docker-manager1
docker exec -it Docker-manager1 sh

#To initialise the container as a swarm manager
docker swarm init --advertise-addr <ip-of-that-node>

After following these steps you will get output like this; copy the highlighted `docker swarm join` line and run it in every worker node to configure it as a worker.

Now you have successfully configured all the worker nodes.
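For reference, the join command printed by `docker swarm init` has roughly this shape; the token and address are placeholders, so use the ones from your own output:

```shell
# Run inside each worker container (docker exec -it Docker-worker1 sh).
# The token and address below are placeholders printed by `docker swarm init`.
docker swarm join --token <worker-join-token> <manager1-ip>:2377
```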

STEP 4

After configuring the worker nodes, our next task is to configure the remaining manager nodes as managers as well. For this we need a similar kind of token.

To get the token for configuring a manager node:

docker swarm join-token manager

Now we run this join command on all the remaining manager nodes:

Finally, we have completed all the steps and formed our cluster, with 3 manager nodes and 3 worker nodes.
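At this point the membership can be verified from any manager; `docker node ls` lists every node with its role and status:

```shell
# Run inside any manager container; workers cannot list nodes.
# MANAGER STATUS shows "Leader" for one manager and "Reachable" for the rest.
docker node ls
```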

Part 3: Launching Services in the Swarm

With our Swarm cluster in place, we delve into the deployment of services. Learn the Swarm service concepts and how to launch applications in a distributed manner. Optimize resource utilization, ensure high availability, and witness the dynamic nature of Swarm services as they adapt to changes in the cluster.

STEP 1

We will launch the service on one of the manager nodes with 5 replicas.

The image we are going to use is vimal13/apache-webserver-php for testing purposes, as it returns the IP of the instance where it is currently running. Later we will launch a well-designed index page service in the cluster.

But before this, we will change the availability of our manager nodes from active to drain so that they can’t be used as worker nodes.

#Inside docker-manager1 we are going to run this
docker node update --availability drain <id-of-node>
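To drain all three managers at once, the filter flag on `docker node ls` can feed the update command (a sketch):

```shell
# Drain every manager so replicas schedule only on the workers.
for node in $(docker node ls --filter role=manager -q); do
  docker node update --availability drain "$node"
done
```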

Now we will launch the service:

#In Docker-manager1 node
docker pull vimal13/apache-webserver-php
docker service create --name IPweb --publish 8080:80 --replicas=5 vimal13/apache-webserver-php

Checking the running status of the services:

docker service ls
docker service ps IPweb

Hence we can see that our service is successfully deployed across different worker nodes.

Part 4: Introducing HAProxy for Seamless Communication

Enhance accessibility and simplify communication with the introduction of HAProxy. Configure it to act as a load balancer, providing a single point of access to our Swarm cluster. Witness the efficiency as HAProxy distributes requests among the manager nodes, ensuring a balanced and responsive environment for containerized applications.

STEP 1

We will first launch another Docker container on our base OS, where we will install HAProxy and configure it by adding the addresses of our manager nodes.

To launch the container:

docker run -dit --name LoadBalancer -p 123:5000 centos:7

We map host port 123 to container port 5000 because the default haproxy.cfg shipped with CentOS 7 binds its frontend to port 5000.

After getting a shell in the container, we will first install the HAProxy software and then configure it.

STEP 2

Installing the HAProxy software:

#Run this inside LoadBalancer container
yum install haproxy -y

STEP 3

Configure HAProxy by adding the addresses of our manager nodes.

vi /etc/haproxy/haproxy.cfg

Inside the file, edit the `backend app` section and add the addresses of our manager nodes with the port number we exposed them on, here 8080.
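After editing, the relevant sections of /etc/haproxy/haproxy.cfg look roughly like this; the 172.17.0.x addresses are placeholders for your manager containers’ actual IPs:

```
frontend main
    bind *:5000
    default_backend app

backend app
    balance roundrobin
    server manager1 172.17.0.2:8080 check
    server manager2 172.17.0.3:8080 check
    server manager3 172.17.0.4:8080 check
```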

Save the file and start the service. Since systemctl does not work inside a container, we start haproxy directly:

#Run this inside container
haproxy -f /etc/haproxy/haproxy.cfg

Now we will connect to this container via a web browser.

# Type the URL in the format 
http://{public-IP}:port

Here we use the public IP of our AWS instance and the host port we published for the LoadBalancer (HAProxy) container, i.e. 123.
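You can also hit the endpoint repeatedly with curl to watch HAProxy rotate across replicas (the public IP is a placeholder):

```shell
# Each response should show a different container IP, as HAProxy
# round-robins requests across the manager nodes.
for i in 1 2 3 4 5; do
  curl -s http://<public-ip>:123/
done
```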

You will see the below output on hitting this URL:

On refreshing repeatedly, you will see a different IP each time.

This shows that our site is running on different worker nodes, managed by the manager nodes.

Now if our leader node fails or stops for any reason, the cluster won’t stop, because we have high availability.

Let’s try this by manually stopping our leader node.

We can see from this command that our leader node is 3fe4493e4929; next we will stop this container manually to demonstrate high availability.

Now we will again attach to one of the manager nodes and check the state of the cluster.

On the failure of the current leader, the cluster automatically elects another manager as the new leader, and our site remains accessible without any hurdles.

Again we hit the same URL:

Our load balancer, HAProxy, detects that one of its backends is unavailable, so it routes requests to another manager node, from where we can still access the site.

All of this happens in the background, so the user experience is uninterrupted.

Our service runs our application on different worker nodes, and if any of these containers stops or fails, the leader immediately launches another container, either on the same worker node or on another. Swarm keeps an eye on the service and makes sure the desired number of containers is always present.

Launching our Index Page with our Custom Image

We will first delete the previous service, IPweb.
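Removing the old service frees the published port 8080 for the new one:

```shell
# Remove the IPweb service; Swarm stops all 5 replicas.
docker service rm IPweb
```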

Now we will create our index.html page.

Below used html template is available on github account : https://github.com/harsh2478/basic-html-for-testing

Creating Dockerfile for our custom image:
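A minimal Dockerfile along these lines would work; the base image and paths here are assumptions for illustration, not the author’s exact file:

```Dockerfile
# Serve the cloned template with Apache httpd (assumed base image).
FROM httpd:2.4
COPY index.html /usr/local/apache2/htdocs/index.html
EXPOSE 80
```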

We have our custom image “myweb” ready:

Now we will push this image to our Docker Hub account. But for this, we first have to log in to our Hub account from the command line.

Now we will push it to the Hub.
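The tag-and-push sequence looks like this; `<hub-username>` is a placeholder for your Docker Hub account name:

```shell
docker login
# Images must be tagged <hub-username>/<repo> before pushing.
docker tag myweb <hub-username>/myweb:latest
docker push <hub-username>/myweb:latest
```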

LAUNCHING SERVICE “INDEXPAGE” ON OUR LEADER NODE
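A sketch of the create command, mirroring the earlier IPweb service; the image reference is a placeholder for the image pushed above:

```shell
# Run on the leader; 5 replicas spread across the worker nodes.
docker service create --name indexpage --publish 8080:80 \
  --replicas=5 <hub-username>/myweb:latest
```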

Our service launches successfully, and if we go to the same URL again, we see a different site there.

This entire demo, from achieving high availability to optimizing fault tolerance, coupled with a robust Swarm cluster and smart HAProxy integration, is a testament to the transformative potential of containerization. Simplifying scalability, ensuring seamless communication, and adapting dynamically to changes, this showcase encapsulates the power and versatility of our container orchestration journey.

Part 5: Conclusion

In this comprehensive exploration, we’ve navigated through the realms of DinD, built a robust Docker Swarm cluster, and optimized accessibility with HAProxy. This guide serves as a beacon for containerization enthusiasts, offering insights and practical knowledge for orchestrating containers at scale. Embrace the power of Docker in Docker, unlock the potential of Swarm clusters, and streamline communication with HAProxy for a future-ready containerized environment.

Thanks for Reading,

If you find this helpful, don’t forget to hit the 👏 button and give it a follow. Your support means a lot!! 💜🙌

By Harsh Gupta


Written by @Harsh, a DevOps engineer from India
