Jarred Kenny

Single Node Docker Swarms are Awesome

If you've been working in software development or devops for any amount of time, you've probably used Docker. At this point, I feel Docker has become as ubiquitous in the software development workflow as a text editor. Applications are getting larger and harder to manage, but we've also developed excellent tooling to manage that growth. Gone are the days of running web and database servers directly on your workstation, only to find out after you've pushed your code to master that the production server is running a different version of PHP or a very different web server configuration. Docker and Compose have solved this problem for us.

Compose allows you to define a stack of services (Docker containers) and their configuration in a single docker-compose.yml file, and it provides tooling for orchestrating those services. This allows for reproducible and portable environments which can be used for local development, automated testing, or rapid deployment to staging or production environments.
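
For a concrete picture, here is a minimal docker-compose.yml of the kind this post has in mind; the service names, images, and ports are placeholders rather than anything specific to this article:

version: "3.8"

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: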

If you have not used docker-compose, I suggest giving the official documentation a read before continuing.

What about production?

It is not insane to use Compose in production. After all, Compose is more of an automation tool than anything else and if it does what you need it to do then by all means use it. That being said, it wasn't built with production use cases in mind. You'll find no options for high availability, health checks, or scaling of container instances when using docker-compose directly. For these features you need to graduate to docker swarm.

Docker swarm provides a means to manage a stack of containers defined with Compose across many hosts. It includes features such as:

  • Health checks and automatic restarts of unhealthy containers
  • Parallel execution of containers across nodes for high availability, with built-in load balancing
  • Rolling updates when you are ready to deploy an updated stack
  • Support for the same docker-compose.yml file you use locally with Compose (the swarm-specific options are sketched below)
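
Most of these behaviors are configured under a service's deploy: key (plus a healthcheck: section) in the same compose file. The values below are only an illustrative sketch, assuming an image that ships wget for the health probe; they are not recommended settings:

services:
  web:
    image: nginx:alpine
    healthcheck:
      # assumes wget is available inside the image
      test: ["CMD", "wget", "-qO-", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      update_config:
        parallelism: 1
        delay: 10s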

"But I don't need a cluster!"

Then don't run one! Most people steer away from docker swarm because they think it requires a multi-host cluster. If you are only serving containers from a single host, you can still use docker swarm. This is known as a single node swarm, and it gives you many of the same benefits on a single machine.

How do I do it?

There is one initialization step required to turn a regular system running docker into a member of a docker swarm.

docker swarm init

That's it. The host is now the leader of your single node swarm. Now you can use docker stack to deploy your docker-compose.yml file into the swarm.

docker stack deploy -c docker-compose.yml name_of_stack
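
Once the stack is deployed, you can check that its services and tasks came up as expected. Both of these are standard Docker CLI commands; name_of_stack matches whatever name you passed to docker stack deploy:

docker stack services name_of_stack
docker stack ps name_of_stack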

If you make changes to your docker-compose.yml and want to update your stack, simply run the same command again to update the running containers.
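
While the update rolls out, swarm replaces tasks according to your update_config (if you defined one). You can watch an individual service being updated with docker service ps; here web is a hypothetical service name from your compose file, prefixed with the stack name:

docker service ps name_of_stack_web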

Pretty easy, right?

If and when you no longer need your compose stack running in the swarm, simply remove it with:

docker stack rm name_of_stack

Further Reading

You can control many aspects of how containers are deployed within a swarm using your docker-compose.yml file. It is possible to define how many replicas of each container should be running in the stack and under what conditions containers should be restarted if found to be unhealthy. You can limit the memory and CPU resources available to a container, define multiple public or private networks, and specify which containers in your stack are publicly accessible and which are only reachable by other containers within the stack.
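
As a rough sketch of what those options look like in a compose file (the service names, images, and limits here are hypothetical), resource limits live under deploy.resources, while exposure is controlled by published ports and by which networks a service joins:

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - frontend
      - backend
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
  api:
    # internal-only service, reachable by web but not published publicly
    image: example/api:latest
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true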

The official docker documentation is by far the best resource for these topics.

I've also found these blog posts incredibly informative; they delve further into secret management and task scheduling with docker swarm.

I use this same technique to deploy and manage a variety of tools and services on a single Digital Ocean droplet. I find this approach particularly great because when the time comes I can simply add another host to my swarm and take advantage of a full cluster.
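
When that day comes, growing the swarm is just a matter of asking the manager for a join token and running the command it prints on the new host; the token and manager IP below are placeholders:

docker swarm join-token worker
docker swarm join --token <worker-token> <manager-ip>:2377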