How to Create Docker Swarms on AWS (Multi-container, Multi-Machine Apps) 🐋x🐋

1. Overview / Introduction

A Docker swarm is a group of machines that run the Docker Engine and are joined into a cluster.

In this tutorial we will create multiple machines on AWS running the Docker Engine, using docker-machine.


Then we will join those machines (nodes) into a Docker swarm (a dockerized cluster) and run a multi-container, multi-machine web server on it.

Prerequisites:

  • AWS account with IAM credentials (access key ID and secret access key)
  • Docker
  • SSH Client

2. Installing docker-machine

In order to control our swarm, we use the tool docker-machine. It ships with Docker's Windows and macOS versions, but on Linux we install it from GitHub:

base=https://github.com/docker/machine/releases/download/v0.14.0
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Let’s check if docker-machine is ready for use:

docker-machine version

docker-machine version 0.14.0, build 89b8332

As we saw when we created multiple instances of our container (scaling), we use Docker services for this.

This time we will use services as well. The first step is to switch the machine that will run as our swarm manager into swarm mode.

3. Creating our docker machines on AWS

Docker-machine makes it really easy to create machines on AWS thanks to a custom AWS driver. Since we do not want to enter our credentials manually, we put them into a file named credentials under ~/.aws:

cd
mkdir .aws
cd .aws/
nano credentials

[default]
aws_access_key_id = ENTERYOURACCESSKEYIDHERE
aws_secret_access_key = ENTERYOURSECRETACCESSKEYHERE
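If we prefer not to open an editor, the same file can be written straight from the shell. A minimal sketch (the key values are placeholders and must be replaced with your own):

```shell
# Write the AWS credentials file non-interactively.
# The two key values below are placeholders, not real credentials.
mkdir -p "$HOME/.aws"
cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = ENTERYOURACCESSKEYIDHERE
aws_secret_access_key = ENTERYOURSECRETACCESSKEYHERE
EOF
chmod 600 "$HOME/.aws/credentials"   # keep the secrets readable only by us
```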

Now we can run docker-machine create to create our EC2 machines. In total we want to end up with four instances:

  1. Swarm leader
  2. Node
  3. Node
  4. Node

We can specify a different deployment region to our liking, but it is important to open up port 8000, since we want to access our server on it later.

Let's set up our four machines, starting with the swarm leader:
docker-machine create --driver amazonec2 --amazonec2-open-port 8000 --amazonec2-region us-west-1 myswarmleader

Running pre-create checks…
Creating machine…
(myswarmleader) Launching instance…
Waiting for machine to be running, this may take a few minutes…
Detecting operating system of created instance…
Waiting for SSH to be available…
Detecting the provisioner…
Provisioning with ubuntu(systemd)…
Installing Docker…
Copying certs to the local machine directory…
Copying certs to the remote machine…
Setting Docker configuration on the remote daemon…
Checking connection to Docker…
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env myswarmleader
… and the same for the remaining nodes:

docker-machine create --driver amazonec2 --amazonec2-open-port 8000 --amazonec2-region us-west-1 myswarmnode1
docker-machine create --driver amazonec2 --amazonec2-open-port 8000 --amazonec2-region us-west-1 myswarmnode2
docker-machine create --driver amazonec2 --amazonec2-open-port 8000 --amazonec2-region us-west-1 myswarmnode3
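Since the three node commands differ only in the machine name, they can also be written as a loop (same driver, port and region as above):

```shell
# Create the three worker machines in one loop; names and region
# match the individual commands above.
NODES="myswarmnode1 myswarmnode2 myswarmnode3"
for node in $NODES; do
  docker-machine create \
    --driver amazonec2 \
    --amazonec2-open-port 8000 \
    --amazonec2-region us-west-1 \
    "$node"
done
```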

Our swarm machines are now being initialized.

After everything has finished, we check the result:
docker-machine ls

The following output is expected:

NAME            ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER        ERRORS
myswarmleader   -        amazonec2   Running   tcp://XX.XXX.XXX.XXX:2376           v18.05.0-ce
myswarmnode1    -        amazonec2   Running   tcp://XX.XXX.XXX.XXX:2376           v18.05.0-ce
myswarmnode2    -        amazonec2   Running   tcp://XX.XXX.XXX.XXX:2376           v18.05.0-ce
myswarmnode3    -        amazonec2   Running   tcp://XX.XXX.XXX.XXX:2376           v18.05.0-ce

If we use Docker visualizer at this point, we see just empty Docker engines:

4. Turning on Docker swarm mode and wiring everything together

The next step is to make myswarmleader the swarm leader by running docker swarm init on it. We use docker-machine ssh for that.

docker-machine ssh myswarmleader "sudo docker swarm init"

The output is important for us, since it contains the command we have to run on our node machines to join the swarm.

Swarm initialized: current node (XXXXXXXXXXXXXXXXXXXX) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXX.XXX.XXX.XXX:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

We just copy the command, add a sudo in front of it, and run it on all three nodes (with X replaced by 1, 2 and 3):

docker-machine ssh myswarmnodeX "sudo docker swarm join --token SWMTKN-1-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXX.XXX.XXX.XXX:2377"
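Instead of copying the token by hand, we can also fetch it from the leader with docker swarm join-token -q worker and script the whole join. A sketch, using the machine names from this tutorial:

```shell
# Fetch the worker join token and the leader's IP from the manager,
# then let every worker node join the swarm.
LEADER=myswarmleader
NODES="myswarmnode1 myswarmnode2 myswarmnode3"
TOKEN=$(docker-machine ssh "$LEADER" "sudo docker swarm join-token -q worker")
LEADER_IP=$(docker-machine ip "$LEADER")
for node in $NODES; do
  docker-machine ssh "$node" \
    "sudo docker swarm join --token $TOKEN $LEADER_IP:2377"
done
```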

The result we are looking for:

This node joined a swarm as a worker.

As a final check we run docker node ls on our swarm leader.

docker-machine ssh myswarmleader "docker node ls"

ID                          HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nn105thskwfzlh9xbhm9n0624 * myswarmleader   Ready    Active         Leader           18.05.0-ce
8h7ap1edyllbnasdxy0xdz6bp   myswarmnode1    Ready    Active                          18.05.0-ce
p4ukpa4vmnfh4sqy3be5vvgql   myswarmnode2    Ready    Active                          18.05.0-ce
p8yjv6e04myt210kj0fea6ayx   myswarmnode3    Ready    Active                          18.05.0-ce

5. Distributing our application (and how to take it down)

The only thing left to do now is to distribute our demo web server (a modified version of a previous example on scaling) to our Docker nodes. Let's assume we have a docker-compose.yml available locally on the machine we are working from:

version: "3"
services:
  server:
    image: pushcommit/myveryfirstimageserver
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "8000:80"

We have to run docker stack deploy on our swarm leader. Since we don't want to copy the file over, we point our local Docker client at the leader's engine by taking over the remote Docker environment variables with docker-machine env.

eval $(docker-machine env myswarmleader)

…and can now finally deploy our app:

docker stack deploy -c docker-compose.yml myswarmapp

This command will run on our swarm leader.

Creating network myswarmapp_default
Creating service myswarmapp_server
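We can also verify from the shell that the published port answers. A sketch using docker-machine ip; any node's address works, because the swarm routing mesh publishes the port on every node:

```shell
# Query the published service port on the leader's public IP.
# Any node's IP would work thanks to the swarm routing mesh.
NODE=myswarmleader
PORT=8000
curl -s "http://$(docker-machine ip "$NODE"):$PORT"
```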

Docker visualizer (which by default needs port 8080 to be open!) confirms that our five desired instances have been evenly distributed (with the first node running two containers).

Let’s double-check our services:
docker service ls # List running services

ID               NAME                MODE         REPLICAS   IMAGE                                      PORTS
XYXYXYXYXYXYXY   myswarmapp_server   replicated   5/5        pushcommit/myveryfirstimageserver:latest   *:8000->80/tcp
docker service ps XYXYXYXYXYXYXY
ID             NAME                  IMAGE                                      NODE            DESIRED STATE   CURRENT STATE
XCXCXCXCXCXC   myswarmapp_server.1   pushcommit/myveryfirstimageserver:latest   myswarmnode2    Running         Running 6 minutes ago
XCXCXCXCXCXC   myswarmapp_server.2   pushcommit/myveryfirstimageserver:latest   myswarmnode3    Running         Running 6 minutes ago
XCXCXCXCXCXC   myswarmapp_server.3   pushcommit/myveryfirstimageserver:latest   myswarmleader   Running         Running 6 minutes ago
XCXCXCXCXCXC   myswarmapp_server.4   pushcommit/myveryfirstimageserver:latest   myswarmnode1    Running         Running 6 minutes ago
XCXCXCXCXCXC   myswarmapp_server.5   pushcommit/myveryfirstimageserver:latest   myswarmnode1    Running         Running 6 minutes ago
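If we later need a different replica count, we do not have to edit the compose file; the running service can be rescaled directly (service name taken from the docker service ls output above, the target count here is just an example):

```shell
# Scale the running service to eight replicas; the swarm scheduler
# distributes the additional containers across the nodes.
SERVICE=myswarmapp_server
REPLICAS=8
docker service scale "$SERVICE=$REPLICAS"
```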

Looks great. If we look up the public address of one of our nodes in the AWS console, we can access our swarm app from the internet:

That's it! If we want to take everything down again, we first remove the stack, then leave the swarm, and finally remove the EC2 machines:

docker stack rm myswarmapp

Removing service myswarmapp_server
Removing network myswarmapp_default

docker swarm leave --force

Node left the swarm.

docker-machine rm myswarmnode1
docker-machine rm myswarmnode2
docker-machine rm myswarmnode3
docker-machine rm myswarmleader

For each machine we confirm the removal, after which the EC2 instance is shut down and deleted:

About to remove myswarmnode1
WARNING: This action will delete both local reference and remote instance.
Are you sure? (y/n): y
Successfully removed myswarmnode1
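The four rm commands above can also be combined into one loop; the -y flag answers the confirmation prompt automatically (use with care, this terminates the instances):

```shell
# Remove all four machines without the interactive confirmation.
# -y auto-confirms, so this immediately terminates the EC2 instances.
MACHINES="myswarmnode1 myswarmnode2 myswarmnode3 myswarmleader"
for machine in $MACHINES; do
  docker-machine rm -y "$machine"
done
```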

6. Conclusion

Docker-machine has proven to be a pretty powerful tool for setting up EC2 Docker machines and deploying swarm applications.

It is also interesting to take a look at a higher abstraction level, namely docker stacks.
