Adventures in HttpContext: All the stuff after 'Hello, World'

Deploying Docker Containers on CoreOS with the Fleet API

I’ve been spending a lot more time with CoreOS in search of my docker-filled utopian PaaS dream. I haven’t found quite what I’m looking for, but with some bash scripting, a little http, good tools and a lotta love I’m coming close. There is no shortage of solutions to this problem, and honestly, nobody’s really figured this out yet in an easy, fluid, turn-key way. You’ve probably read about CoreOS, Mesos, Marathon, Kubernetes… maybe even dug into Deis, Flynn, Shipyard. You’ve spun up a cluster, and are like… This is great, now what?

What I want is to go from an app on my laptop to running in a production environment with minimal fuss. I don’t want to re-invent the wheel; there are too many people solving this problem in a similar way. I like CoreOS because it provides a bare-bones docker runtime with a solid set of low-level tools. Plus, a lot of people I’m close with have been using it, so the cross-pollination of ideas helps overcome some hurdles.

One of these hurdles is how you launch containers on a cluster. I really like Marathon’s http api for Mesos, but I also like the simplicity of CoreOS as a platform. CoreOS’s distributed init system is Fleet, which leverages systemd for running a process on a CoreOS node (it doesn’t have to be a container). It has some nice features, but having to constantly write similar systemd files and run fleetctl to manage containers is somewhat annoying.

Turns out, Fleet has an http API. It’s not quite as nice as Marathon’s; you can’t easily scale to N instances, but it does come close. There are a few examples of using the API to launch containers, but I wanted a more end-to-end solution that eliminated boilerplate.

Activate the Fleet API

The Fleet API isn’t enabled out-of-the-box. That makes sense, as the API is currently unsecured, so you shouldn’t enable it unless you have the proper VPC set up. CoreOS has good documentation on getting the API running. For a quick start you can drop the following yaml snippet into your cloud-config’s units section:

- name: fleet.socket
  drop-ins:
    - name: 30-ListenStream.conf
      content: |
        [Socket]
        ListenStream=8080
        Service=fleet.service
        [Install]
        WantedBy=sockets.target

Exploring the API

With the API enabled, it’s time to get to work. The API has some simple documentation but offers enough to get started. I personally like the minimal approach, although I wish it were more feature-rich (it is v1, and better than nothing).

You can do a lot with curl, bash and jq. First, let’s see what’s running. All these examples assume you have a FLEET_ENDPOINT environment variable set with the host and port:

On a side note, environment variables are key to reusing the same functionality across environments. In my opinion, they aren’t used nearly enough. Check out the twelve-factor app’s config section to understand the importance of environment variables.
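For the examples below, FLEET_ENDPOINT might be set like this (the address here is a placeholder; substitute one of your own CoreOS nodes):

```shell
# Point FLEET_ENDPOINT at any node with the Fleet API enabled.
# 10.0.0.10 is a placeholder address; the port matches the
# ListenStream drop-in from the cloud-config snippet above.
export FLEET_ENDPOINT="http://10.0.0.10:8080"
```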

curl -s $FLEET_ENDPOINT/fleet/v1/units | jq '.units[] | { name: .name, currentState: .currentState}'

Sure, you can get the same data by running fleetctl list-units, but the http command doesn’t involve ssh, which can be a plus if you have a protected network or are running from an application or CI server.

Creating Containers

Instead of crafting a fleet template and running fleetctl start sometemplate, we want to launch new units via http. This involves PUTting a resource to the /units/ endpoint under the name of your unit (it’s actually /fleet/v1/units; it took me forever to find the path prefix). The Fleet API will build a corresponding systemd unit from the json payload, and the content closely corresponds to what you can do with a fleet unit file.

The schema takes in a desiredState and an array of options which specify the section, name, and value for each line. Most Fleet templates follow a similar pattern, as exemplified in the Launching Containers with Fleet guide:

  1. Cleanup potentially running containers
  2. Pull the container
  3. Run the container
  4. Define X-Fleet parameters, like conflicts.

Again we’ll use curl, but writing json on the command line is really annoying. So let’s create a unit.json for our payload defining the tasks for CoreOS’s apache container:

{
  "desiredState": "launched",
  "options": [
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/usr/bin/docker kill %p-%i"
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/usr/bin/docker rm %p-%i"
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "/usr/bin/docker pull coreos/%p"
    },
    {
      "section": "Service",
      "name": "ExecStart",
      "value": "/usr/bin/docker run --rm --name %p-%i -p 80 coreos/%p /usr/sbin/apache2ctl -D FOREGROUND"
    },
    {
      "section": "Service",
      "name": "ExecStop",
      "value": "/usr/bin/docker stop %p-%i"
    },
    {
      "section": "X-Fleet",
      "name": "Conflicts",
      "value": "%p@*.service"
    }
  ]
}

There are a couple of things of note in this snippet:

  • We’re adding a “-” in front of the docker kill and docker rm commands of the ExecStartPre tasks. This tells systemd to continue if there’s an error; these tasks are precautionary, removing an existing phantom container that would conflict with the newly launched one.
  • We’re using systemd’s placeholders %p and %i, which are replaced with values derived from the unit name. This provides a level of agnosticism in our template; we can easily reuse it to launch different containers just by changing the name. Unfortunately this doesn’t quite work in our example because it’s apache-specific, but if you were running a container with an entrypoint command specified, it would work fine. You’ll also want to manage containers under your own namespace, in either a private or public registry.

We can launch this file with curl:

curl -X PUT -d @unit.json -w "%{http_code}" -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/apache@1.service

If all goes well you’ll get back a 201 Created response. Try running the list units curl command to see your container task.

We can run fleetctl cat apache@1 to view the generated systemd unit:

[Service]
ExecStartPre=-/usr/bin/docker kill %p-%i
ExecStartPre=-/usr/bin/docker rm %p-%i
ExecStartPre=/usr/bin/docker pull coreos/%p
ExecStart=/usr/bin/docker run --rm --name %p-%i -p 80 coreos/%p /usr/sbin/apache2ctl -D FOREGROUND
ExecStop=/usr/bin/docker stop %p-%i

[X-Fleet]
Conflicts=%p@*.service

Want to launch a second task? Just post again, but change the instance number from 1 to 2:

curl -X PUT -d @unit.json -w "%{http_code}" -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/apache@2.service

When you’re done with your container, you can simply issue a delete command to tear it down:

curl -X DELETE -w "%{http_code}" $FLEET_ENDPOINT/fleet/v1/units/apache@1.service

Deploying New Versions

Launching individual containers is great, but for continuous delivery, you need to deploy new versions with no downtime. The example above used systemd’s placeholders for providing the name of the container, but left the apache commands in place. Let’s use another CoreOS example container from the zero downtime frontend deploys blog post. This coreos/example container uses an entrypoint and tagged docker versions to go from a v1 to a v2 version of the app. Instead of creating multiple, similar fleet unit files like that blog post, can we make an agnostic http call that works across versions? Yes we can.

Let’s conceptually figure out how this would work. We don’t want to change the json payload across versions, so the body must be static. We could use some form of templating or find-and-replace, but let’s try to avoid that complexity for now. Can we make do with the options provided to us? We know that the %p parameter lets us pass in the template name to our body. So if we can specify the name and version of the container we want to launch in the name of the unit file we PUT, we’re good to go.
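To make that concrete, here’s what systemd’s expansion of a unit name like example:1.0.0@1.service gives us, reproduced in plain bash (the parsing below is just an illustration of what %p and %i will contain):

```shell
# For a template instance named example:1.0.0@1.service, systemd expands:
#   %p -> the prefix before the '@'
#   %i -> the instance between the '@' and the .service suffix
unit="example:1.0.0@1.service"
prefix="${unit%%@*}"              # what %p will hold
instance="${unit#*@}"
instance="${instance%.service}"   # what %i will hold
echo "$prefix $instance"          # -> example:1.0.0 1
```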

So we want to:

curl -X PUT -d @unit.json -w "%{http_code}" -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/example:1.0.0@1.service

I tried this with the above snippet, but replaced the run and stop commands with the following:

{
      "section": "Service",
      "name": "ExecStart",
      "value": "/usr/bin/docker run --rm --name %p-%i -p 80 coreos/%p"
    },
    {
      "section": "Service",
      "name": "ExecStop",
      "value": "/usr/bin/docker stop %p-%i"
    },

Unfortunately, this didn’t work because the colon (:) in example:1.0.0 makes the name invalid for a container. I could forgo the name, but then I wouldn’t be able to easily stop, kill or rm the container. So we need to massage the %p parameter a little bit. Luckily, bash to the rescue.
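The massage itself is a one-liner. In isolation, the transformation we need looks like this (hardcoding a sample value where the unit file will use %p):

```shell
# Turn the colon in the unit prefix into a hyphen so it becomes a
# legal docker container name. 'example:1.0.0' stands in for what
# %p expands to inside the unit.
APP=$(echo 'example:1.0.0' | sed 's/:/-/')
echo "$APP-1"   # the container name we'll hand to docker: example-1.0.0-1
```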

Unfortunately, systemd is a little wonky when it comes to scripting in a unit file. It’s relatively hard to create and access environment variables, you need fully-qualified paths, and multi-line arbitrary scripts are discouraged. After googling how exactly to do bash scripting in a systemd file, or why an environment variable wasn’t being set, I began to understand the community’s frustration with popular distros switching to systemd. But we can still make do with what we have by launching a /bin/bash command instead of the vanilla /usr/bin/docker:

{
  "desiredState": "launched",
  "options": [
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker kill $APP-%i\""
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker rm $APP-%i\""
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "/usr/bin/docker pull coreos/%p"
    },
    {
      "section": "Service",
      "name": "ExecStart",
      "value": "/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker run --name $APP-%i -h $APP-%i -p 80 --rm coreos/%p\""
    },
    {
      "section": "Service",
      "name": "ExecStop",
      "value": "/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker stop $APP-%i\""
    },
    {
      "section": "X-Fleet",
      "name": "Conflicts",
      "value": "%p@*.service"
    }
  ]
}

and we can submit with:

curl -X PUT -d @unit.json -H 'Content-Type: application/json'  $FLEET_ENDPOINT/fleet/v1/units/example:1.0.0@1.service

More importantly, we can easily launch multiple containers of version two simultaneously:

curl -X PUT -d @unit.json -H 'Content-Type: application/json'  $FLEET_ENDPOINT/fleet/v1/units/example:2.0.0@1.service
curl -X PUT -d @unit.json -H 'Content-Type: application/json'  $FLEET_ENDPOINT/fleet/v1/units/example:2.0.0@2.service

and then destroy version one:

curl -X DELETE -w "%{http_code}" $FLEET_ENDPOINT/fleet/v1/units/example:1.0.0@1.service

More jq and bash fun

Let’s say you do start multiple containers, and you want to cycle them out and delete them. In our above example, we’ve started two containers. How will we easily go from one version to the next, and remove the old nodes? The Marathon API has a simple “scale” button which does just that. Can we do the same for CoreOS? Yes we can.

Conceptually, let’s think about what we want. We want to select all containers running a specific version, grab the full unit file name, and then curl a DELETE operation to that endpoint. We can use the Fleet API to get our information, jq to parse the response, and the bash pipe operator with xargs to call our curl command.

Stringing this together like so:

curl -s $FLEET_ENDPOINT/fleet/v1/units | jq '.units[] | .name | select(startswith("example:1.0.0"))' | xargs -t -I{} curl -s -X DELETE $FLEET_ENDPOINT/fleet/v1/units/{}

jq provides some very powerful json processing. We pull out the name field, select only the elements which start with our specific app and version, and pipe the result to xargs. The -I{} flag for xargs is a handy substitution trick: it lets you place the incoming string anywhere in the command rather than appending it as a trailing argument.

Conclusion

I can pretty much guarantee that no matter what you pick to run your Docker PaaS, it won’t do exactly what you want. I can also guarantee that there will be a lot to learn: new APIs, new commands, new tools. It’s going to feel like pushing a round peg into a square hole. But that’s okay; part of the experience is formulating opinions on how you want things to work. It’s a blend of learning the patterns and practices of a tool versus configuring it to work the way you want. Always remember a few things:

  • Keep It Simple
  • Think about how it should work conceptually
  • You can do a lot with the command line.

With an API-enabled CoreOS cluster, you can easily plug container deployment into whatever build flow you use: your laptop, a github web hook, jenkins, or anything else. Because all the above commands are bash, you can replace any part with a bash variable and execute appropriately. This makes parameterizing these commands into functions easy.
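As a sketch of that parameterization (the function names here are my own convenience wrappers, not part of Fleet or fleetctl):

```shell
# Build the unit name Fleet expects for an app/version/instance triple.
unit_name() {
  echo "$1:$2@$3.service"
}

# Hypothetical wrappers around the curl calls used throughout this post;
# both assume FLEET_ENDPOINT is set and unit.json is in the working dir.
deploy() {   # deploy <app> <version> <instance>
  curl -s -X PUT -d @unit.json -H 'Content-Type: application/json' \
    "$FLEET_ENDPOINT/fleet/v1/units/$(unit_name "$1" "$2" "$3")"
}

destroy() {  # destroy <app> <version> <instance>
  curl -s -X DELETE \
    "$FLEET_ENDPOINT/fleet/v1/units/$(unit_name "$1" "$2" "$3")"
}
```

A rolling upgrade then reads as `deploy example 2.0.0 1 && destroy example 1.0.0 1`, which is trivial to call from a CI job.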