Secrets Come to Docker

The provision of secure, authenticated access to sensitive data is an integral component of systems design. The secrets that users or peer IT services employ to access sensitive data published by an IT service come in a variety of guises: passwords, X.509 certificates, SSL/TLS keys, GPG keys, SSH keys and so on. Managing and controlling these secrets in service-oriented environments is non-trivial. The continued adoption of the microservices architecture pattern, and its common implementation as distributed, immutable containers, has exacerbated this challenge. How do you de-couple the secret from the template (image) of the container? How do you provide the container with the secret without compromising it? Where will the container be running, so that the secret can be delivered to it? How do you change the secret without interrupting consumption of the service?

Docker Engine 1.13.0, released recently, introduced a new primary object: the secret. In conjunction with new API endpoints and CLI commands, the secret object is designed for handling secrets in a multi-container, multi-node environment - a 'swarm mode' cluster. It is not intended or available for use outside of a swarm mode cluster. Whilst the management of secrets is an oft-requested feature for Docker (particularly in the context of building Docker images), it's unclear if or when a secrets solution will be implemented for the standalone Docker host context. For now, people are encouraged to use the 'service' abstraction in place of deploying individual containers. This requires bootstrapping a swarm mode cluster, even if it contains only a single node and the service you deploy comprises a single task. It's a good job it's as simple as:

$ docker swarm init

How are secrets created?

Creating a secret with the Docker client is a straightforward exercise,

$ < /dev/urandom tr -dc 'a-z0-9' | head -c 32 | docker secret create db_pw -
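The pipeline feeding docker secret create here is plain shell, and can be tried on its own without a Docker daemon. As a minimal sketch (the pw variable name is purely illustrative), it produces a 32-character lowercase alphanumeric value:

```shell
# Same generation pipeline as above, captured into a variable instead
# of being piped to `docker secret create`
pw=$(< /dev/urandom tr -dc 'a-z0-9' | head -c 32)

# tr strips /dev/urandom down to the 'a-z0-9' character set, and
# head -c 32 truncates the stream to exactly 32 bytes
printf '%s' "$pw" | wc -c    # 32
```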

In the docker secret create example above, the Docker CLI reads the content of the secret from STDIN, but it could equally well be a file. The content of the secret can be anything, provided its size is no more than the 500 KB limit. As with all Docker objects, there are API endpoints and CLI commands for inspecting, listing and removing secrets: docker secret inspect, docker secret ls, docker secret rm. Inspecting the secret provides the following:

$ docker secret inspect db_pw
[
    {
        "ID": "joptoh9y7x8galitn4ztnk86r",
        "Version": {
            "Index": 44
        },
        "CreatedAt": "2017-01-23T13:52:35.810853263Z",
        "UpdatedAt": "2017-01-23T13:52:35.810853263Z",
        "Spec": {
            "Name": "db_pw"
        }
    }
]

Inspecting the secret doesn't (obvs) show you its content. It shows the creation time of the secret, and whilst the output displays an UpdatedAt key, secrets cannot be updated via the CLI at present. There is, however, an API endpoint for updating secrets.

The Spec key provides some detail about the secret, just the name in the above example. Like most objects in Docker, it is possible to associate labels with secrets when they are created, and labels appear as part of the value of the Spec key.

How are secrets consumed?

Secrets are consumed by services through explicit association. Services are implemented as tasks (individual containers), which can be scheduled on any node within the swarm cluster. If a service comprises multiple tasks, an associated secret is accessible to all of them, whichever nodes they are running on.

A service can be created with access to a secret, using the --secret flag:

$ docker service create --name app --secret db_pw my_app:1.0

In addition, a previously created service can be granted access to an additional secret, or have secrets revoked, using the --secret-add and --secret-rm flags in conjunction with docker service update.

Where are secrets kept?

A swarm mode cluster uses the Raft consensus algorithm to ensure that the nodes participating in the management of the cluster agree on its state. Part of this process involves replicating the state to all manager nodes in the form of a log.

The implementation of secrets in Docker swarm mode takes advantage of the highly consistent, distributed nature of Raft by writing secrets to the Raft log, which means they are replicated to each of the manager nodes. The Raft log on each manager node is loaded into memory whilst the cluster is operating, and as of Docker 1.13.0 it is encrypted at rest.

How does a container access a secret?

A container that implements a task for a service with access to a secret has the secret mounted onto its filesystem under /run/secrets, a tmpfs filesystem residing in memory. For example, if the secret is called db_pw, it's available inside the container at /var/run/secrets/db_pw for as long as the container is running (/var/run is a symlink to /run). If the container is halted for any reason, /run/secrets is no longer a component of the container's filesystem, and the secret is flushed from the hosting node's memory.
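
Inside the container, an application typically just reads the secret as a file. As a minimal sketch of what an entrypoint script might do - the read_secret helper and the SECRETS_DIR override are illustrative assumptions, not part of Docker itself:

```shell
# Illustrative helper: read a secret mounted under the tmpfs location
# described above. SECRETS_DIR is an override assumed here for testing
# outside a container; inside a task it would simply be /run/secrets.
read_secret() {
  local dir="${SECRETS_DIR:-/run/secrets}"
  cat "$dir/$1"
}

# Typical usage in an entrypoint script:
#   DB_PW=$(read_secret db_pw)
```

Reading the secret on demand, rather than copying it into an environment variable at startup, keeps it out of docker inspect output and child process environments.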

The secrets user interface provides some flexibility regarding a service's consumption of a secret. The secret can be mounted with a different name to the one provided during its creation, and it's possible to set the UID, GID and mode for the secret. For example, the db_pw secret could be made available inside the service's tasks with the following attributes:

$ docker service create --name app --secret source=db_pw,target=password,uid=2000,gid=3000,mode=0400 my_app:1.0

Inside the container, this would yield:

root@a61281217232:~# ls -l /var/run/secrets
total 4
-r-------- 1 2000 3000 32 Jan 23 11:49 password

How are secrets updated?

By design, secrets in Docker swarm mode are immutable. If a secret needs to be rotated, it must first be removed from the service before being replaced with a new secret. The replacement secret can be mounted at the same location. Let's take a look at an example. First we'll create a secret, using a version number in the secret name, before adding it to a service as password:

$ < /dev/urandom tr -dc 'a-z0-9' | head -c 32 | docker secret create my_secret_v1.0 -
$ docker service create --name nginx --secret source=my_secret_v1.0,target=password nginx

Once the task is running, the secret will be available in the container at /var/run/secrets/password. If the secret is changed, the service can be updated to reflect this:

$ < /dev/urandom tr -dc 'a-z0-9' | head -c 32 | docker secret create my_secret_v1.1 -
$ docker service update --secret-rm my_secret_v1.0 --secret-add source=my_secret_v1.1,target=password nginx

Each service update results in the replacement of existing tasks, based on the update policy defined by the --update-parallelism and --update-delay flags (1 and 0s by default, respectively). If the service comprises multiple tasks, and the update is configured to be applied over a period of time, then some tasks will be using the old secret whilst the updated tasks will be using the new one. Clearly, some co-ordination needs to take place between service providers and consumers when secrets are changed!

After the update, the new secret is available to all of the tasks that make up the service, and the old secret can be removed (if desired). A secret can't be removed whilst a service is still using it:

$ docker secret rm my_secret_v1.0

Introduced in Docker 1.13.0:

  • A new secret object, along with API endpoints and CLI commands
  • Available in swarm mode only
  • Secrets are stored in the Raft log associated with the swarm cluster
  • Mounted in tmpfs inside a container

Container Mania

Nearly 60 years after Malcolm McLean developed the intermodal shipping container for the transportation of goods, it seems its computing equivalent has arrived, amid considerable interest, much comment, and occasional controversy.

Containers are a form of virtualisation, but unlike the traditional virtual machine they are very light in terms of footprint and resource usage. Applications running in containers don't need a full blown guest operating system to function; they just need the bare minimum in terms of OS binaries and libraries, and share the host's kernel with other containers and the host itself. The light nature of containers, and the speed with which container workloads can be provisioned, must however be weighed against a reduced level of isolation compared to a traditional virtual machine, and they are currently only available on the Linux OS. Equivalent capabilities exist in other *nix operating systems, such as Solaris (Zones) and FreeBSD (Jails), but not on the Windows platform ... yet. When it comes to choosing whether to plump for containers or virtual machines, it's a matter of horses for courses.

So, why now? What has provoked the current interest and activity? The sudden popularity of containers has much to do with technological maturity, inspired innovation, and an evolving need.

Whilst some aspects of the technology that provides Linux containers have been around for a number of years, it's true to say that the 'total package' has only recently matured to a level where it's inherent in the kernels shipped with most Linux distributions.

Containers are an abstraction built on Linux kernel namespaces and control groups (or cgroups), and as such some effort and knowledge is required on the part of the user to create and invoke a container. This has inhibited their take-up as a means of isolating workloads. Enter stage left, the likes of Docker (libcontainer library), Rocket, LXC and lmctfy, all of which serve to commoditise the container. Docker, in particular, has captured the hearts and minds of the DevOps community, with its platform for delivering distributed applications in containers.
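
The raw material is visible on any Linux machine without any container tooling at all; every process already belongs to a set of namespaces and cgroups, which container runtimes simply create afresh for their workloads:

```shell
# Each entry here is a namespace (pid, net, mnt, uts, ...) that the
# current shell process belongs to
ls /proc/self/ns

# And this shows where the current process sits in the cgroup hierarchy
cat /proc/self/cgroup
```

This assumes a Linux host; on other platforms the /proc entries above don't exist.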

Containers are a perfect fit for a recent trend in architecting software applications as small, discrete, independent microservices. Whilst there is no formal definition of a microservice, it is generally considered that a microservice is a highly de-coupled, independent process with a specific function, which often communicates via a RESTful HTTP API. It's entirely possible to use containers to run multiple processes (as is the case with LXC), but the approach taken by Docker and Rocket is to encourage a single process per container, fitting neatly with the microservice aspiration.

The fact that all major cloud and operating system vendors, including Microsoft, are busy developing their capabilities regarding containers, is evidence enough that containers will have a big part to play in workload deployment in the coming years. This means the stakes are high for the organisations behind the different technologies, which has led to some differences of opinion. On the whole, however, the majority of the technologies are being developed in the open, using a community-based model, which should significantly aid continued innovation, maturity, and adoption.

This article serves as an introduction to a series of articles that examine the fundamental building blocks for containers; namespaces and cgroups.