Running an Akka Cluster with Docker Containers
Mar 23 2014

Update! You can now use sbt-docker with sbt-native-packager for a better sbt/docker experience. Here’s the new approach with an updated GitHub repo.
We recently upgraded our vagrant environments to use docker. One of our projects relies on akka’s cluster functionality, and I wanted an easy way to run an akka cluster locally with docker, as doing it through sbt can be somewhat tedious. The example project is on github and the solution is described below.
The solution relies on:
- Sbt Native Packager to package dependencies and create a startup file.
- Typesafe’s Config library for configuring the app’s ip address and seed nodes. We set up cascading configurations that look for docker link environment variables when present.
- A simple bash script to package the app and build the docker container.
I’m a big fan of Typesafe’s Config library and the environment variable overrides come in handy for providing sensible defaults with optional overrides. It’s the preferred way we configure our applications in upper environments.
The tricky part of running an akka cluster with docker is knowing the ip address each remote node needs to listen on. An akka cluster relies on each node listening on a specific port and hostname or ip. It also needs to know the port and hostname/ip of a seed node in the cluster. As there’s no catch-all binding, we need specific ip settings for our cluster.
A simple bash script within the container will figure out the current IP for our cluster configuration and docker links pass seed node information to newly launched nodes.
First Step: Setup Application Configuration
The configuration is the same as that of a normal cluster, but I’m using substitution to configure the ip address, port and seed nodes for the application. For simplicity I set up a `clustering` block with defaults for running normally and environment variable overrides:
```
clustering {
  ip = "127.0.0.1"
  ip = ${?CLUSTER_IP}
  port = 1600
  port = ${?CLUSTER_PORT}

  seed-ip = "127.0.0.1"
  seed-ip = ${?CLUSTER_IP}
  seed-ip = ${?SEED_PORT_1600_TCP_ADDR}
  seed-port = 1600
  seed-port = ${?SEED_PORT_1600_TCP_PORT}
  cluster.name = clustering-cluster
}

akka {
  remote {
    log-remote-lifecycle-events = on
    netty.tcp {
      hostname = ${clustering.ip}
      port = ${clustering.port}
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}
    ]
    auto-down-unreachable-after = 10s
  }
}
```
As an example, the `clustering.seed-ip` setting will use 127.0.0.1 as the default. If it can find a CLUSTER_IP or a SEED_PORT_1600_TCP_ADDR override it will use that instead. You’ll notice the latter override uses docker’s environment variable pattern for linking: that’s how we set the cluster’s seed node when using docker. You don’t need CLUSTER_IP in this example, but that’s the environment variable we use in upper environments and I didn’t want to change our infrastructure to conform to docker’s pattern. The cascading settings are helpful if you’re forced to follow one pattern in one environment and another elsewhere. We do the same thing for the ip and port of the current node when launched.
With this override in place we can use substitution to set the seed nodes in the akka cluster configuration block. The expression `"akka.tcp://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}` builds the proper akka URI so the current node can find the seed node in the cluster. Seed nodes avoid potential split-brain issues during network partitions. You’ll want to run more than one in production, but for local testing one is fine. On a final note, the `cluster.name` setting is arbitrary; because the name of the actor system and the uri must match, I prefer not to hard-code values in multiple places.
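That resolution can be sketched in plain shell (the variable names mirror docker’s link variables and the defaults above; this is an illustration of the cascade, not code from the project):

```shell
#!/bin/sh
# Fall back to the reference.conf defaults when the docker link
# variables are absent, just like the ${?...} optional substitutions.
SEED_IP=${SEED_PORT_1600_TCP_ADDR:-127.0.0.1}
SEED_PORT=${SEED_PORT_1600_TCP_PORT:-1600}
CLUSTER_NAME=clustering-cluster

# The URI the seed-nodes expression produces:
echo "akka.tcp://${CLUSTER_NAME}@${SEED_IP}:${SEED_PORT}"
```

With neither link variable set, this prints `akka.tcp://clustering-cluster@127.0.0.1:1600`, the same default URI the config resolves to.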
I put these settings in resources/reference.conf. We could have named this file application.conf, but I prefer bundling configurations as reference.conf and reserving application.conf for external configuration files. A setting in application.conf will override a corresponding reference.conf setting and you probably want to manage application.conf files outside of the project’s jar file.
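As a sketch of that override flow (the path and addresses here are illustrative, not from the project), an external application.conf only needs the keys it changes; everything else falls through to the bundled reference.conf:

```
# /etc/clustering/application.conf (hypothetical path)
# Only override what differs from reference.conf; other keys fall through.
clustering {
  ip = "10.0.1.12"       # this node's address (illustrative)
  seed-ip = "10.0.1.10"  # a known seed node (illustrative)
}
```

Typesafe Config’s `config.file` system property points the app at it, e.g. `-Dconfig.file=/etc/clustering/application.conf`.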
Second: SBT Native Packager
We use the native packager plugin to build a runnable script for our applications. For docker we just need to run `universal:stage`, which creates a folder with all dependencies in the `target/` folder of our project. We’ll move this into a staging directory for uploading to the docker container.
Third: The Dockerfile and Start script
The dockerfile is pretty simple:
```
FROM dockerfile/java
MAINTAINER Michael Hamrah [email protected]

ADD tmp/ /opt/app/
ADD start /opt/start
RUN chmod +x /opt/start

EXPOSE 1600

ENTRYPOINT [ "/opt/start" ]
```
We start with Dockerfile’s java base image. We then upload our staging `tmp/` folder, which holds our application from sbt native packager’s output, plus a corresponding executable start script described below. I opted for `ENTRYPOINT` instead of `CMD` so the container is treated like an executable. This makes it easier to pass command line arguments into the sbt native packager script in case you want to set java system properties or override configuration settings from the command line.
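As a rough sketch of that forwarding behavior (using a local stand-in script rather than docker itself, so the paths here are illustrative): with `ENTRYPOINT [ "/opt/start" ]`, anything after the image name in `docker run ... clustering ...` is appended to the entrypoint and arrives in the start script as `$@`:

```shell
#!/bin/sh
# Stand-in for /opt/start: with ENTRYPOINT, extra `docker run` arguments
# are appended to this script's argument list.
cat > /tmp/start <<'EOF'
#!/bin/sh
echo "launching app with args: $@"
EOF
chmod +x /tmp/start

# Simulates `docker run -i -t clustering -Dclustering.port=1601`
/tmp/start -Dclustering.port=1601
# prints: launching app with args: -Dclustering.port=1601
```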
The start script is how we tee up the container’s IP address for our cluster application:
```bash
#!/bin/bash
CLUSTER_IP=$(/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1 }') /opt/app/bin/clustering $@
```
The script sets an inline environment variable by parsing `ifconfig` output to get the container’s ip, then runs the clustering start script produced by sbt native packager. The `$@` lets us pass along any command line settings set when launching the container into the sbt native packager script.
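The inline-assignment form is plain POSIX shell: `VAR=value command` puts the variable into that one command’s environment only, which is exactly what the `${?CLUSTER_IP}` lookup in the config sees. A minimal sketch (the address is made up):

```shell
#!/bin/sh
# The assignment applies only to the child process, not the calling shell.
CLUSTER_IP=10.0.0.5 sh -c 'echo "child sees: $CLUSTER_IP"'
echo "parent sees: ${CLUSTER_IP:-<unset>}"
# prints:
#   child sees: 10.0.0.5
#   parent sees: <unset>   (assuming CLUSTER_IP wasn't already set)
```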
Fourth: Putting It Together
The last part is a simple bash script named `dockerize` that orchestrates each step. Running it invokes sbt native packager, moves files to a staging directory, and builds the container:
```bash
#!/bin/bash
echo "Build docker container"

# run sbt native packager
sbt universal:stage

# clean up the stage directory
rm -rf docker/tmp/

# copy output into the staging area
cp -r target/universal/stage/ docker/tmp/

# build the container, removing intermediate containers
docker build -rm -t clustering docker/

# remove staged files
rm -rf docker/tmp/
```
With this in place we simply run `bin/dockerize` to create our docker container named clustering.
Running the Application within Docker
With our clustering container built we fire up our first instance. This will be our seed node for other containers:
```
$ docker run -i -t -name seed clustering
2014-03-23 00:20:39,918 INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2014-03-23 00:20:40,392 DEBUG com.mlh.clustering.ClusterListener - starting up cluster listener...
2014-03-23 00:20:40,403 DEBUG com.mlh.clustering.ClusterListener - Current members:
2014-03-23 00:20:40,418 INFO  com.mlh.clustering.ClusterListener - Leader changed: Some(akka.tcp://[email protected]:1600)
2014-03-23 00:20:41,404 DEBUG com.mlh.clustering.ClusterListener - Member is Up: akka.tcp://[email protected]:1600
```
Next we fire up a second node. Because of our reference.conf defaults all we need to do is link this container with the name seed. Docker will set the environment variables we are looking for in the bundled reference.conf:
```
$ docker run -name c1 -link seed:seed -i -t clustering
2014-03-23 00:22:49,332 INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2014-03-23 00:22:49,788 DEBUG com.mlh.clustering.ClusterListener - starting up cluster listener...
2014-03-23 00:22:49,797 DEBUG com.mlh.clustering.ClusterListener - Current members:
2014-03-23 00:22:50,238 DEBUG com.mlh.clustering.ClusterListener - Member is Up: akka.tcp://[email protected]:1600
2014-03-23 00:22:50,249 INFO  com.mlh.clustering.ClusterListener - Leader changed: Some(akka.tcp://[email protected]:1600)
2014-03-23 00:22:50,803 DEBUG com.mlh.clustering.ClusterListener - Member is Up: akka.tcp://[email protected]:1600
```
You’ll see the current leader discovering new nodes and the appropriate broadcast messages sent out. We can even do this a third time and all nodes will react:
```
$ docker run -name c2 -link seed:seed -i -t clustering
2014-03-23 00:24:52,768 INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2014-03-23 00:24:53,224 DEBUG com.mlh.clustering.ClusterListener - starting up cluster listener...
2014-03-23 00:24:53,235 DEBUG com.mlh.clustering.ClusterListener - Current members:
2014-03-23 00:24:53,470 DEBUG com.mlh.clustering.ClusterListener - Member is Up: akka.tcp://[email protected]:1600
2014-03-23 00:24:53,472 DEBUG com.mlh.clustering.ClusterListener - Member is Up: akka.tcp://[email protected]:1600
2014-03-23 00:24:53,478 INFO  com.mlh.clustering.ClusterListener - Leader changed: Some(akka.tcp://[email protected]:1600)
2014-03-23 00:24:55,401 DEBUG com.mlh.clustering.ClusterListener - Member is Up: akka.tcp://[email protected]:1600
```
Try killing a node and see what happens!
Modifying the Docker Start Script
There’s another reason for the docker start script: it opens the door for different seed discovery options. Container linking works well when everything is running on the same host, but not across multiple hosts. Also, setting multiple seed nodes via docker links gets tedious; it’s doable, but we’re getting into coding-cruft territory. It would be better to discover seed nodes and set that configuration via command line parameters when launching the app.
The start script gives us control over how we discover information. We could use etcd, serf or even zookeeper to manage how seed nodes are set and discovered, passing this to our application via environment variables or additional command line parameters. Seed nodes can easily be set via system properties on the command line:
```
-Dakka.cluster.seed-nodes.0=akka.tcp://ClusterSystem@host1:2552
-Dakka.cluster.seed-nodes.1=akka.tcp://ClusterSystem@host2:2552
```
The start script could probably be configured via sbt native packager, but I haven’t looked into that option. Regardless, this approach is a (relatively) straightforward way to run akka clusters with docker. The full project is on github. If there’s a better approach I’d love to know!