The Docker Advanced Tutorial – AWS and Networks (Part 3/3) πŸ‹

1. Overview

In this third part of the tutorial we will deploy a docker image to the AWS cloud, create a docker network that lets containers talk to each other, and finally look at what else there is in the docker ecosystem, like docker compose and swarms.

Note that this third part of our three-part tutorial builds on the previous two. Check out the first and the second part to understand what is going on.

2. Docker Essentials

2.1 Cloud Hosting Containers (with AWS Elastic Beanstalk)

Containers are well suited to being hosted on a cloud service like AWS. Amazon Web Services has developed an easy and elastic way to deploy containers.

Let’s upload our nodejs http server container to the AWS Cloud and test it by making a request from the WWW.

The basic steps to get it up and running are:

  • Build a docker container (our nodejs server)
  • Push the container to DockerHub
  • Write an AWS Elastic Beanstalk Docker manifest file
  • Create an AWS Elastic Beanstalk application
  • Run the manifest file on AWS

Let’s get started with our container. It will establish an http nodejs server listening on port 80, which in turn will be exposed.

    Our nodejs file looks like this:

    
    const http = require('http')
    const port = 80
    
    const requestHandler = (request, response) => {
      console.log(request.url)
      response.end('\nHello World!\n')
    }
    
    const server = http.createServer(requestHandler)
    
    server.listen(port, (err) => {
      if (err) {
        return console.log('Something went wrong', err)
      }
    
      console.log(`Server listening on ${port}`)
    })
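
Since requestHandler is just a function of (request, response), we can sanity-check its logic without opening any socket by passing minimal mock objects — a quick sketch (the mocks are ad-hoc stand-ins for illustration, not Node's real classes):

```javascript
// The same handler as in the server above.
const requestHandler = (request, response) => {
  console.log(request.url)
  response.end('\nHello World!\n')
}

// Ad-hoc stand-ins for IncomingMessage / ServerResponse.
const request = { url: '/test' }
let body = ''
const response = { end: (chunk) => { body = chunk } }

requestHandler(request, response)
console.log(body === '\nHello World!\n')  // true
```

This is handy when you want to tweak the handler before rebuilding the image.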
    

    We also have a corresponding Dockerfile

     
    ARG  CODE_VERSION=latest
    FROM ubuntu:${CODE_VERSION}
    
    ADD *.js /data/
    
    RUN apt-get -qq update
    RUN apt-get install -yqq nodejs curl nano
    
    CMD node /data/AWSServer.js
    

    Let’s build and push our container to Dockerhub so AWS can pull it later:

     
    docker build -t pushcommit/awsserver:v1 .
    docker login -u YOURUSERNAME -p YOURPASSWORD
    docker push pushcommit/awsserver:v1
    

    Next we take care of the AWS part. First, of course, we need an AWS account. Once we’re ready, let’s navigate to Elastic Beanstalk (in a region of our choice, like Ireland for me).

    For AWS Elastic Beanstalk we create a manifest file (Dockerrun.aws.json) to automate the pulling and port exposure process.

     
    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "pushcommit/awsserver:v1",
        "Update": "true"
      },
      "Ports": [
        {
          "ContainerPort": "80"
        }
      ],
      "Logging": "/var/log/nginx"
    }
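
The manifest is plain JSON, and a stray comma is an easy way to fail a deploy. We can sanity-check the file before uploading — a minimal sketch with the document inlined as a string (in practice you would read the file, e.g. with fs.readFileSync):

```javascript
// Sanity-check a Beanstalk Docker manifest before uploading.
const raw = `{
  "AWSEBDockerrunVersion": "1",
  "Image": { "Name": "pushcommit/awsserver:v1", "Update": "true" },
  "Ports": [ { "ContainerPort": "80" } ]
}`

const manifest = JSON.parse(raw)  // throws on malformed JSON

// Keys a single-container deployment needs.
for (const key of ['AWSEBDockerrunVersion', 'Image', 'Ports']) {
  if (!(key in manifest)) throw new Error('missing key: ' + key)
}
console.log('manifest ok:', manifest.Image.Name)
```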
    

    We create a new Elastic Beanstalk application with the Docker platform and upload our manifest file as the application to be run.

    When the environment is up and our container is running, we can click on the environment’s URL and enjoy our result.

    There is also a dedicated AWS Elastic Container Service (ECS) with many more configuration options and a CLI tool. If you just want to run a few tests, Elastic Beanstalk certainly requires less overhead.

2.2 Networks

    Networks allow multiple running containers to talk to each other. They resemble common TCP/IP subnets: every endpoint has an IP address and there is a gateway.

    In order to demonstrate how docker networks work, we create two containers, a server and a client. The server will be our nodejs server from the previous tutorial. For simplicity, we will create a nodejs http client that uses the same Dockerfile setup as the server.

    First we set up a directory structure holding the files below. We can create the files with touch, the folders with mkdir, and do the editing with nano or vim.

    Our server will simply respond to requests on port 80 over http.

    
    const http = require('http')
    const port = 80
    
    const requestHandler = (request, response) => {
      console.log(request.url)
      response.end('\nHello World!\n')
    }
    
    const server = http.createServer(requestHandler)
    
    server.listen(port, (err) => {
      if (err) {
        return console.log('Something went wrong', err)
      }
    
      console.log(`Server listening on ${port}`)
    })
    

    Our Dockerfile copies server.js and executes it when we run the docker container later. The installation of curl and nano is not necessary, but useful for debugging.

     
    ARG  CODE_VERSION=latest
    FROM ubuntu:${CODE_VERSION}
    
    ADD *.js /data/
    
    RUN apt-get -qq update
    RUN apt-get install -yqq nodejs curl nano
    
    CMD node /data/server.js
    

    Our client is also based on nodejs and makes an http request to http://server:80 every 1.5 seconds. Some host name resolution has to be in place for our client to resolve the name server. Fortunately, docker will take care of this.

    const http = require('http');
    
    function myRequest() {
      http.get('http://server:80', (resp) => {
        let data = '';
    
        // A chunk of data has been received.
        resp.on('data', (chunk) => {
          data += chunk;
        });
    
        // The whole response has been received. Print out the result.
        resp.on('end', () => {
          console.log(data);
        });
      }).on("error", (err) => {
        console.log("Error: " + err.message);
      });
    }
    
    setInterval(myRequest, 1500);
    

    The Dockerfile of our client also copies the client.js and runs it when the container starts.

     
    ARG  CODE_VERSION=latest
    FROM ubuntu:${CODE_VERSION}
    
    ADD *.js /data/
    
    RUN apt-get -qq update
    RUN apt-get install -yqq nodejs curl nano
    
    CMD node /data/client.js
    

    We build the server docker image with a custom tag.

     
    docker build -t pushcommit/myveryfirstimageserver .
    

    Likewise, we build the client docker image with its own tag.

     
    docker build -t pushcommit/myveryfirstimageclient .
    

    Finally, we create our network. This is pretty simple.

     
    docker network create myveryfirstnetwork
    

    As with images, we can list all available networks with the following command:

     
    docker network ls
    

    With the inspect command we can also pick out our network and see more information about it.

     
    docker network inspect myveryfirstnetwork
    

    For now, there are no containers associated with our network. Let’s change that.

    We can now run our containers attached to our network via the --net flag. We also set a custom name for the server and the client, respectively, and add -d so they run detached in the background.

     
    docker run -d --net myveryfirstnetwork --name server pushcommit/myveryfirstimageserver
    docker run -d --net myveryfirstnetwork --name client pushcommit/myveryfirstimageclient
    

    When we inspect our network again, we can see two containers associated with our network.

     
    docker network inspect myveryfirstnetwork
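
The output is worth a closer look: the Containers section now lists both endpoints, each with its own IP address on the network's subnet, next to the gateway mentioned earlier. Abridged, hypothetical output (IDs, subnet and addresses will differ on your machine):

```json
[
    {
        "Name": "myveryfirstnetwork",
        "Driver": "bridge",
        "IPAM": {
            "Config": [
                { "Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1" }
            ]
        },
        "Containers": {
            "…": { "Name": "server", "IPv4Address": "172.18.0.2/16" },
            "…": { "Name": "client", "IPv4Address": "172.18.0.3/16" }
        }
    }
]
```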
    

    In order to see what is happening in our server and client containers, we use docker attach in separate terminals on our host system (screen is also great for that).

     
    docker attach client
    
     
    docker attach server
    

    We can see how our client makes regular requests and how our server responds to them. Congratulations, we have successfully set up our first docker network that is isolated from our host system and from other docker containers!

    You can download all the files for our network example here: MyDockerImages

3. Conclusion & what’s left?

    This post concludes our three-part tutorial. You are now not only ready to build useful docker images, but also know how to deploy them to the cloud and how to scale them.

    Of course, there is more to learn about the Docker ecosystem.

    There are tools like docker-compose that help with the setup of docker containers and configuration. We will talk about it in a separate article.

    Since docker containers are relatively inexpensive in terms of resources, a common redundancy tactic is to deploy multiple identical container instances across several (isolated) docker hosts. This is where docker swarm mode or tools like Kubernetes come in.
