In the early days of the Docker platform, more emphasis was placed on the Dev side of the DevOps equation: Docker provided a good experience for developing software applications, but a sub-optimal one for running those applications in production. Nowhere was this more apparent than in the native networking capabilities, which limited inter-container communication to a single Docker host (unless you employed some creative glue and sticky tape).

That all changed with Docker's acquisition of SocketPlane, and the subsequent release of Docker 1.9 in November 2015. The team from SocketPlane helped to completely overhaul the platform's networking capabilities with the introduction of a networking library called libnetwork. Libnetwork implements Docker's Container Network Model (CNM), and via its API, specific networking drivers provide container networking capabilities based on the CNM abstraction. Docker has in-built drivers, but also supports third-party plugin drivers, such as Weave Net and Calico.

One of the in-built drivers is the overlay driver, which provides a hitherto much sought-after feature: cross-host Docker networking for containers. It's based on VXLAN, which encapsulates Layer 2 Ethernet frames in Layer 4 (UDP) packets to enable overlay networking. Let's see how to set this up in Docker.
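
As an aside, if you'd like to see what this encapsulation looks like on the wire once an overlay network is carrying traffic, Docker's overlay driver uses the standard VXLAN UDP port, 4789. Assuming tcpdump is available on one of the participating hosts, and that eth1 is the interface carrying inter-host traffic (as it is on the VirtualBox VMs we create below), something like the following will show the encapsulated frames:

$ sudo tcpdump -ni eth1 udp port 4789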

To demonstrate the use of overlay networks in Docker, I'll use a variation of Dj Walker-Morgan's goredchat application, a simple chat application that uses the Redis database engine to register chat users and to route chat messages. We'll create a Redis instance and two client sessions using goredchat (all running in containers), but on different Docker hosts and connected via an overlay network.

Establish a Key/Value Store

The first thing we need to do is establish the key/value store that Docker's overlay networking requires - it's used to hold network state. We will use HashiCorp's Consul key/value store (other choices are CoreOS' etcd and the Apache Software Foundation's ZooKeeper), and the easiest way to do this is to run it in a container on a dedicated VM using VirtualBox. To simplify things, we'll use Docker Machine to create the VM.

Create the VM:

$ docker-machine create -d virtualbox kv-store

Next, we need to point our Docker client at the Docker daemon running on the kv-store VM:

$ eval $(docker-machine env kv-store)

Now we need to start a container running Consul, and we'll use the popular progrium/consul Docker image from the Docker Hub. The container needs some ports forwarded to its VM host, and can be started with the following command:

$ docker run -it -d --restart unless-stopped -p 8400:8400 -p 8500:8500 \
> -p 8600:53/udp -h consul progrium/consul -server -bootstrap

Consul will run in the background, and will be available for storing key/value pairs relating to the state of Docker overlay networks for Docker hosts using the store. When one or more Docker hosts are configured to make use of a key/value store, they often do so as part of a cluster arrangement, but being part of a formal cluster (e.g. Docker Swarm) is not a requirement for participating in overlay networks.
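
As an optional sanity check, Consul's HTTP API can be queried from the local machine. A request for the current leader should return a quoted IP:port string, confirming that the server has bootstrapped itself successfully:

$ curl http://$(docker-machine ip kv-store):8500/v1/status/leader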

Create Three Additional Docker Hosts

Next, we'll create three more VMs, each running a Docker daemon, which we'll use to host containers that will be connected to the overlay network. Each of the Docker daemons running on these machines needs to be made aware of the KV store, and of each other. To achieve this, we need to configure each Docker daemon with the --cluster-store and --cluster-advertise configuration options, which need to be supplied via the Docker Machine --engine-opt configuration option:

$ for i in {1..3}; do docker-machine create -d virtualbox \
> --engine-opt="cluster-store=consul://$(docker-machine ip kv-store):8500" \
> --engine-opt="cluster-advertise=eth1:2376" \
> host0${i}; done
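
Once the machines are up, it's worth confirming that the engine options have taken effect. On each host, the output of docker info should include the cluster store and cluster advertise settings (the exact wording varies between Docker versions, so treat this output as indicative):

$ docker $(docker-machine config host01) info | grep -i cluster
Cluster store: consul://192.168.99.100:8500
Cluster advertise: 192.168.99.104:2376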

To see if all the VMs are running as expected:

$ docker-machine ls -f "table {{.Name}}  \t{{.State}}\t{{.URL}}\t{{.DockerVersion}}"
NAME         STATE     URL                         DOCKER  
host01       Running   tcp://192.168.99.104:2376   v1.10.2  
host02       Running   tcp://192.168.99.105:2376   v1.10.2  
host03       Running   tcp://192.168.99.106:2376   v1.10.2  
kv-store     Running   tcp://192.168.99.100:2376   v1.10.2  

Create an Overlay Network

We now need to run some Docker CLI commands on each of the Docker hosts we have created. There are numerous ways of doing this:

  1. Establish an ssh session on the VM in question, using the docker-machine ssh command
  2. Point the local Docker client at the Docker host in question, using the docker-machine env command
  3. Run one-time commands against the particular Docker host using the docker-machine config command, as shown in the example below
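
The third approach works because docker-machine config simply prints the connection flags the Docker client needs in order to talk to the named machine, so its output can be embedded directly in a docker invocation. The output will look something like this (the certificate paths will vary according to your environment):

$ docker-machine config host01
--tlsverify
--tlscacert="/home/<user>/.docker/machine/machines/host01/ca.pem"
--tlscert="/home/<user>/.docker/machine/machines/host01/cert.pem"
--tlskey="/home/<user>/.docker/machine/machines/host01/key.pem"
-H=tcp://192.168.99.104:2376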

The overlay network can be created using any of the Docker hosts:

$ docker $(docker-machine config host01) network create -d overlay my_overlay
0223fc182bd363b95fce819bd33640c7afff8f5b306f0a93a7093b9a254cd9cc  

We can check that the overlay network my_overlay can be seen from each of the Docker hosts (substituting each host's name in place of host01):

$ docker $(docker-machine config host01) network ls -f name=my_overlay
NETWORK ID          NAME                DRIVER  
0223fc182bd3        my_overlay          overlay  
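
We can also inspect the network from any of the hosts. Among other things, docker network inspect reports the network's scope, driver, and the subnet allocated to it by IPAM (overlay networks are typically given a subnet such as 10.0.0.0/24 by default, though the exact value may differ in your environment):

$ docker $(docker-machine config host02) network inspect my_overlay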

Create a Container Running the Redis KV Database Engine

Having created the overlay network, we now need to start a Redis server running in a container on Docker host host01, using the library image found on the Docker Hub registry.

$ docker $(docker-machine config host01) run -d --restart unless-stopped \
> --net-alias redis_svr --net my_overlay redis:alpine redis-server \
> --appendonly yes

The library Redis image will be pulled from the Docker Hub registry, and the Docker CLI will return the container ID, e.g.

4d96bb2bd436bb6a19d5408de026904adc91892db40cfa25a8d71bae0e73bb29  

In order to connect the container to the my_overlay network, the docker run client command is given the --net my_overlay configuration option, along with --net-alias redis_svr, which provides a network-specific alias for the container, redis_svr. The alias can be used by other containers on the network to look up the container. The Redis library image exposes port 6379, which will be accessible to containers connected to the my_overlay network. To test whether the redis_svr container is listening for connections, we can run the Redis client in an ephemeral container, also on host01:

$ docker $(docker-machine config host01) run --rm --net my_overlay \
> redis:alpine redis-cli -h redis_svr ping
PONG  

Notice that Docker's embedded DNS server resolves the redis_svr name, and the Redis server responds to the Redis client's ping with a PONG.
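
To see the name resolution step in isolation, we can take advantage of the fact that the redis:alpine image is based on Alpine Linux, whose busybox userland includes a basic nslookup applet (this assumes the applet is present in the image you pull). The response should come from Docker's embedded DNS server at 127.0.0.11, and should contain the redis_svr container's IP address on the overlay network:

$ docker $(docker-machine config host01) run --rm --net my_overlay \
> redis:alpine nslookup redis_svr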

Create a Container Running goredchat on host02

Now that we have established that a Redis server container is connected to the my_overlay network and listening on port 6379, we can attempt to consume its service from a container running on a different Docker host.

The goredchat image can be found on the Docker Hub registry, and can be run like a binary, with command line options. To find out how it functions, I can run the following command:

$ docker $(docker-machine config host02) run --rm --net my_overlay nbrown/goredchat --help
Usage: /goredchat [-r URL] username  
  e.g. /goredchat -r redis://redis_svr:6379 antirez

  If -r URL is not used, the REDIS_URL env must be set instead

Now let's run a container using the -r configuration option to address the Redis server. If you're re-creating these steps in your own environment, don't forget to add the -it configuration options to enable you to interact with the container:

$ docker $(docker-machine config host02) run -it --rm --net my_overlay \
> nbrown/goredchat -r redis://redis_svr:6379 bill

Welcome to goredchat bill! Type /who to see who's online, /exit to exit.  

I can find out who's online by typing /who:

/who
bill  

The goredchat client has created a TCP socket connection from the container on host02 to the Redis server container on host01 over the my_overlay network, and queried the Redis database engine.
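
We can also observe this from the Redis server's perspective, using Redis' CLIENT LIST command, which reports each connected client along with its socket address - in this case, addresses on the my_overlay network:

$ docker $(docker-machine config host01) run --rm --net my_overlay \
> redis:alpine redis-cli -h redis_svr client list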

Create a Container Running goredchat on host03

In another bash command shell, we can start another instance of goredchat, this time on host03.

$ docker $(docker-machine config host03) run -it --rm --net my_overlay \
> nbrown/goredchat -r redis://redis_svr:6379 brian

Welcome to goredchat brian! Type /who to see who's online, /exit to exit.

/who
brian  
bill  

Brian and Bill can chat through the goredchat client, which uses a Redis server for subscribing and publishing to a message channel. All of the components of the chat service run in containers on different Docker hosts, connected to the same overlay network.

Docker's new networking capabilities have some detractors, but libnetwork is a significant step forward: it is evolving, and is designed to support multiple use cases whilst maintaining a consistent and familiar user experience.