Adventures in HttpContext: All the stuff after 'Hello, World'

Fleet Unit Files for Kubernetes on CoreOS

As I’ve been leveraging CoreOS more and more for running docker containers, the limitations of Fleet have become apparent.

Despite the benefits of dynamic unit files via the Fleet API, there is still a need for fine-grained scheduling, discovery, and more complex dependencies across containers. Thus I’ve been exploring Kubernetes. Although young, it shows promise for practical usage. The Kubernetes repository has a plethora of “getting started” examples across a variety of environments. There are a few CoreOS-related examples already, but they embed the Kubernetes units in a cloud-config file, which may not be what you want.

My preference is to separate the CoreOS cluster setup from the Kubernetes installation. Keeping your CoreOS cloud-config minimal has many benefits, especially on AWS where you can’t easily update your cloud-config. Plus, you may already have a CoreOS cluster and you just want to deploy Kubernetes on top of it.

The Fleet Unit Files for Kubernetes on CoreOS are on GitHub. The unit files make a few assumptions, mainly that you are running a production setup using central services. For AWS you can use CloudFormation to manage multiple sets of CoreOS roles as distinct stacks and join them together via input parameters.
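Assuming you’ve cloned the unit files onto a machine with fleetctl access to your cluster, submitting them only takes a couple of commands; the unit names below are illustrative, so use the ones from the repository:

# Submit the unit files to the cluster, start them, and verify
fleetctl submit *.service
fleetctl start kube-apiserver.service kube-scheduler.service
fleetctl list-units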

Networking Basics: Understanding CIDR notation and Subnets: what’s up with /16 and /24?

An IP address (specifically, an IPv4 address), like 192.168.1.51, is really just 32 bits of data. 32 ordinary bits, like a 32 bit integer, but represented a little differently than a normal int32. These 32 bits are split up into 4 8-bit blocks, with each block represented as a number with a dot in between. This is called dotted-decimal notation: 192.168.1.51. Curious why an IP address never has a number in it greater than 255? That’s the maximum value for 8 bits of data: you can only represent 0 to 255 with 8 bits.

The IP address 192.168.1.51 in binary is 11000000 10101000 00000001 00110011, or just 11000000101010000000000100110011. If this were an int32 it would be 3232235827. Why not just use this number as an address? By breaking up the address into blocks we can logically group related ip addresses together. This is helpful for routing and subnets (as in sub-networks) which is where we usually see CIDR notation. It lets us specify a common range of IP addresses by saying some bits in the IP address are fixed, while others can change. When we add a slash and a number between 0 and 32 after an IP, like 192.168.1.1/24, we specify which bits of the address are fixed and which can be changed.
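You can check that conversion yourself with a little shell arithmetic:

# Convert dotted-decimal to the underlying 32-bit integer
IFS=. read -r a b c d <<< "192.168.1.51"
echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
# 3232235827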

Let’s take a look at Docker networking. Docker creates a subnet for all containers on a machine. It uses CIDR notation to specify a range of IP addresses for containers, usually 172.17.42.1/16. The /16 tells us the first sixteen bits in the dotted-decimal ip address are fixed. So any ip address, in binary, must begin with 1010110000010001. We are “masking” the first 16 bits of the address. Docker could assign a container 172.17.23.1 but could not assign it 172.18.23.1: the first sixteen bits of 172.17.23.1 are the fixed 172.17, while 172.18.23.1 differs in the second octet (18 is 00010010 in binary, not 00010001).

Another way to specify a range of IP addresses is with a “subnet mask”. Instead of simply saying 172.17.42.1/16, we could give the ip address of 172.17.42.1 and a subnet mask of 255.255.0.0. We often see this with tools like ifconfig.

255.255.0.0 in binary is 11111111 11111111 00000000 00000000. The first sixteen bits are 1, telling us which bits in the corresponding IP address are fixed. Sometimes you see a subnet mask of 255.240.0.0: that’s the same as an ip range of /12. The number 240 is 11110000 in binary, so the mask 255.240.0.0 fixes the first 12 bits (8 1’s for 255, and 4 1’s for 240). You could never have 239 in a subnet mask, because there is a 0 in the middle of the binary number 239 (11101111), defeating the purpose of a mask.
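A quick way to see which bits a mask fixes is to print each octet in binary (assuming bc is available):

# Print each octet of 255.240.0.0 in binary: twelve leading 1s, i.e. a /12
for octet in 255 240 0 0; do
  printf '%08d ' "$(echo "obase=2; $octet" | bc)"
done; echo
# 11111111 11110000 00000000 00000000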

A /16 range gives us 2 8-bit blocks to play with, or 65536 combinations: 2^16, or 256*256. But the 0 and 255 numbers are reserved in the IP address space, so we really only get 254 * 254 combinations, or 64516 addresses. This is important when setting up networks: you want to make sure you have enough IP addresses for all the things in your network. A software defined network like Flannel, which allows you to create a private subnet for docker containers across hosts, uses a /16 subnet for a cluster and a /24 range per node. So an entire cluster can only have 64516 ip addresses; more specifically, it can only have 254 nodes each running 254 containers. Large networks, like those at most companies, use the reserved address space 10.0.0.0/8. The first 8 bits are fixed, giving us 24 bits to play with: 16,387,064 possible addresses (because we can’t use 0 or 255). These are usually broken up into several subnetworks, like 10.252.0.0/16 and 10.12.0.0/16, carving up smaller subnets from the larger address space.
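Put another way, a /N prefix leaves 32 - N bits for hosts, so the raw counts are just powers of two:

# Raw address counts for a few prefix lengths (before reserving the 0 and 255 addresses)
for prefix in 8 12 16 24; do
  echo "/$prefix -> $(( 2 ** (32 - prefix) )) addresses"
done
# /8 -> 16777216, /12 -> 1048576, /16 -> 65536, /24 -> 256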

Subnet masks and CIDR notation play prominent roles in a variety of areas beyond specifying subnets. They are heavily used in routing tables to specify where traffic for a particular IP should go. They are used extensively in AWS and other cloud providers to specify firewall rules for security groups as well as their VPC product. Understanding CIDR notation and subnet masks makes other aspects of networking (interfaces, gateways, routing tables) much easier to understand.
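For example, AWS security group rules take a CIDR block directly; a hypothetical rule allowing ssh from one of those /16 subnets looks like this (the group id is a placeholder):

# Allow ssh from anywhere in 10.252.0.0/16
aws ec2 authorize-security-group-ingress \
  --group-id sg-12345678 \
  --protocol tcp \
  --port 22 \
  --cidr 10.252.0.0/16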

Easy Scaling with Fleet and CoreOS

One element of a successful production deployment is the ability to easily scale the number of instances your process is running. Many cloud providers, both on the PaaS and IaaS front, offer such functionality: AWS Auto Scaling Groups, Heroku’s process size, Marathon’s instance count. I was hoping for something similar in the CoreOS world. Deis, the PaaS-on-CoreOS service, offers Heroku-like scaling, but I don’t want to commit to the Deis layer nor its build pack approach (for no other reason than personal preference). Fleet, CoreOS’s distributed systemd service, offers service templating, but you cannot say “now run three instances of service x”. Being programmers we can do whatever we want, and luckily, we’re only a little bash script away from replicating the “scale to x instances” functionality of popular providers.

You’ll want to enable the Fleet HTTP API for this script to work. You can easily port this to the Fleet CLI, but I much prefer the http api because it doesn’t involve ssh, and provides more versatility in how and where you run the script.

Conceptually the flow is straightforward:

  • Given a process we want to set the number of running instances to some desired_count.
  • If desired_count is less than current_count, scale down.
  • If desired_count is more than current_count, scale up.
  • If they are the same, do nothing.

Fleet offers service templating so you can have a service unit named [email protected] with specific copies named my_awesome_app@1, my_awesome_app@2, my_awesome_app@N representing specific running instances. Currently Fleet doesn’t offer a way to group these related services together but we can easily pattern match on the service name to control specific running instances. The steps are straightforward:

  • Query the Fleet API for all instances
  • Filter by all services matching the specified name
  • See how many instances we have running for the given service
  • Destroy or create instances using specific service names until we match the desired_size.

All of these steps are easily achievable with Fleet’s HTTP Api (or fleetctl) and a little bash. To give our script some context, let’s start with how we want to use the script. Ideally it will look like this:

./scale-fleet my_awesome_app 5

First, let’s set up our script scale-fleet and set the command line arguments:

#!/bin/bash

FLEET_HOST=<YOUR FLEET API HOST>

# You may want to consider cli flags 
SERVICE=$1
DESIRED_SIZE=$2

Next we want to query the Fleet API and filter on all units with a prefix of $SERVICE which have a process number. This will give us an array of units matching [email protected], [email protected], and so on, not the base template of [email protected]. These are the units we will either add to or destroy as appropriate. The latest 1.5 version of jq supports regex expressions, but as of this writing 1.4 is the common release version, so we’ll parse the json response with jq, and then filter with grep. Finally some bash trickery will parse the result into an array we can loop through later.

# Curls the API and filter on a specific pattern, storing results in an array
INSTANCES=($(curl -s $FLEET_HOST/fleet/v1/units | jq ".units[].name | select(startswith(\"$SERVICE@\"))" | grep -E '@[0-9]+\.service'))

# A bash trick to get size of array
CURRENT_SIZE=${#INSTANCES[@]}
echo "Current instance count for $SERVICE is: $CURRENT_SIZE"

Next let’s scaffold the various scenarios for matching CURRENT_SIZE with DESIRED_SIZE, which boils down to some if statements.

if [[ $DESIRED_SIZE -eq $CURRENT_SIZE ]]; then
  echo "doing nothing, current size is equal to desired size"
elif [[ $DESIRED_SIZE -lt $CURRENT_SIZE ]]; then
  echo "going to scale down instance $CURRENT_SIZE"
  # More stuff here
else 
  echo "going to scale up to $DESIRED_SIZE"
  # More stuff here
fi

When the desired size equals the current size we don’t need to do anything. Scaling down is easy, we simply loop, deleting the specific instance, until the desired and current states match. You can drop in the following snippet for scaling down:

until [[ $DESIRED_SIZE -eq $CURRENT_SIZE ]]; do
  curl -X DELETE $FLEET_HOST/fleet/v1/units/${SERVICE}@${CURRENT_SIZE}.service

  let CURRENT_SIZE=CURRENT_SIZE-1
done
echo "new instance count is $CURRENT_SIZE"

Scaling up is a bit trickier. Unfortunately you can’t simply create a new unit from a template like you can with the fleetctl CLI. But you can do exactly what fleetctl does: copy the body from the base template and create a new one with the specific full unit name. With the body we can loop, creating instances, until our current size matches the desired size. Let’s walk it through step-by-step:

echo "going to scale up to $desired_size"
 # Get payload by parsing the options field from the base template
 # And build our new payload for PUTing later
 payload=`curl -s $FLEET_HOST/fleet/v1/units/${SERVICE}@.service | jq '. | { "desiredState":"launched", "options": .options }'`

 #Loop, PUTing our new template with the appropriate name
 until [[ $DESIRED_SIZE = $CURRENT_SIZE ]]; do
   let current_size=current_size+1

   curl -X PUT -d "${payload}" -H 'Content-Type: application/json' $FLEET_HOST/fleet/v1/units/${SERVICE}@${CURRENT_SIZE}.service 
 done
 echo "new instance count is $CURRENT_SIZE"

With our script in place we can scale away:

# Scale up to 5 instances
$ ./scale-fleet my_awesome_app 5

# Scale down
$ ./scale-fleet my_awesome_app 3

Because this all comes down to a simple bash script you can easily run it from a variety of places. It can be part of a parameterized Jenkins job to scale manually with a UI, part of an envconsul setup with a key set in Consul, or it can fit into a larger script that reads performance characteristics from some monitoring tool and reacts accordingly. You can also combine this with AWS Cloudformation or another cloud provider: if your CPUs hit a certain threshold, you can scale the specific worker role running your instances, and have your desired_size be some factor of that number.

I’ve been on a bash kick lately. It’s a versatile scripting language that’s easily portable. The syntax can be somewhat cryptic, but as long as you have a shell, you have all you need to run your script.

The final, complete script brings all of these pieces together.
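Assembled from the snippets above, it looks roughly like this:

#!/bin/bash
# scale-fleet: scale a templated Fleet service up or down to a desired instance count

FLEET_HOST=<YOUR FLEET API HOST>

SERVICE=$1
DESIRED_SIZE=$2

# Find the running instances of this service ([email protected], [email protected], ...)
INSTANCES=($(curl -s $FLEET_HOST/fleet/v1/units | jq ".units[].name | select(startswith(\"$SERVICE@\"))" | grep -E '@[0-9]+\.service'))
CURRENT_SIZE=${#INSTANCES[@]}
echo "Current instance count for $SERVICE is: $CURRENT_SIZE"

if [[ $DESIRED_SIZE -eq $CURRENT_SIZE ]]; then
  echo "doing nothing, current size is equal to desired size"
elif [[ $DESIRED_SIZE -lt $CURRENT_SIZE ]]; then
  echo "going to scale down to $DESIRED_SIZE"
  until [[ $DESIRED_SIZE -eq $CURRENT_SIZE ]]; do
    curl -X DELETE $FLEET_HOST/fleet/v1/units/${SERVICE}@${CURRENT_SIZE}.service
    let CURRENT_SIZE=CURRENT_SIZE-1
  done
else
  echo "going to scale up to $DESIRED_SIZE"
  # Build the new unit payload from the base template's options
  payload=`curl -s $FLEET_HOST/fleet/v1/units/${SERVICE}@.service | jq '. | { "desiredState":"launched", "options": .options }'`
  until [[ $DESIRED_SIZE -eq $CURRENT_SIZE ]]; do
    let CURRENT_SIZE=CURRENT_SIZE+1
    curl -X PUT -d "${payload}" -H 'Content-Type: application/json' $FLEET_HOST/fleet/v1/units/${SERVICE}@${CURRENT_SIZE}.service
  done
fi

echo "new instance count is $CURRENT_SIZE"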

Managing CoreOS Clusters on AWS with CloudFormation

Personally, I find CloudFormation a somewhat annoying tool, yet I haven’t replaced it with anything else. Those json files can get so ugly and unwieldy. Alternatives exist; you can try an abstraction like troposphere or jclouds, or ditch cfn completely with something like terraform. These are interesting tools but somehow I find myself sticking with the straight-up json approach, the aws cli, and some bash scripting: the pieces are already there, they just need to be strung together. In the end it’s not that bad, and there are some tools and techniques I’ve picked up which really help out. I recently applied these to managing CoreOS clusters with CFN, and wanted to share a simplified version of the approach.

CoreOS provides a default CloudFormation template which is a great start for cluster experimentation. But scaling out, where nodes are coming and going, can be disastrous for etcd’s quorum consensus if you’re not careful. You just don’t want to remove nodes from a formed etcd cluster. CoreOS’s cluster documentation has a section on production configuration: you want a core set of nodes for running central services, with various worker nodes for specific purposes. We can elaborate with a short list of requirements:

You want to tag sets of instances with specific roles so you can group dependencies and isolate apps when needed. Although possible, it’s unrealistic to actually run any app on any node. More likely you want to group apps into front-facing and back-facing and treat those nodes differently. For instance, you could map the IPs of front-facing nodes to a Route53 endpoint.

You want a cluster of heterogeneous instances for different workloads. Certain apps require certain characteristics. Even though you’re running everything in docker containers, you still want to have c4’s for compute-intensive loads, r3’s for memory-intensive loads, etc. Look at your applications and map them to a system topology. You can also scale these groups of instances differently, but you want to see your entire system as a whole: not as independent, discrete parts.

At some point, you’ll need to update the configuration of your instances. You want to do this surgically, without accidentally destroying your cluster. You may be one bad cfn update away from relaunching an auto scaling group or misconfiguring an instance, causing a replacement. Just like normal instances you want to apply updates and reconfiguration of nodes in a sane, logical way. If you only had one cfn template for your entire cluster, it would be all or nothing. That’s not a choice we want to make.

CoreOS won’t let you forget about the underlying nodes; it just adds a little abstraction so you don’t need to deal with specific nodes as much.

I’m assuming you’re familiar with CloudFormation and the basics of a template. For our setup we’ll start with the us-east-1 hvm CoreOS template and modify it along the way. This template creates a straight-up CoreOS cluster launched in an Auto Scaling Group, and uses the LaunchConfig’s UserData to set some cloud-config settings. Like most templates you need a few parameters to launch. The non-default ones are your keypair and the etcd DiscoveryUrl for forming the cluster. We are going to launch this stack with the CLI (who needs user interfaces?).

Let’s create a bash script, coreos-cfn.sh, to call create-stack (don’t forget to chmod +x). We need a DiscoveryUrl so we’ll get a new one in our script and pass it as a parameter to CFN.

#!/bin/bash 

DISCOVERY_URL=`curl -s -w "\n" https://discovery.etcd.io/new`
#Check to make sure the above command worked, or exit
[[ $? -ne 0 ]] && echo "Could not generate discovery url." && exit 1

if [ -z "$COREOS_KEYPAIR" ]; then
  KEYPAIR=yourkey.pem
fi

# Create the CloudFormation stack
aws cloudformation create-stack \
    --stack-name coreos-test \
    --template-body file://coreos-stable-hvm.template \
    --capabilities CAPABILITY_IAM \
    --tags Key=Name,Value=CoreOS \
    --parameters \
        ParameterKey=DiscoveryURL,ParameterValue=${DISCOVERY_URL} \
        ParameterKey=KeyPair,ParameterValue=${KEYPAIR}

The -z "$KEYPAIR" test checks whether a keypair is already set as an environment variable; if not, it falls back to the specified default. If you run coreos-cfn.sh you should see the CLI spit out the ARN for the stack. Before we do that, though, let’s make a minor tweak.

There are two key pieces of information we want to remember from this cluster: the DiscoveryUrl, so we can access cluster state, and the AutoScalingGroup, so we can easily inspect instances in the future. Because the DiscoveryUrl is a parameter the aws cli will remember it for you. We need to add the auto scaling group as an output:

"Outputs": {
    "AutoScalingGroup" : {
      "Value": { "Ref": "CoreOSServerAutoScale" }
    }
  }

After launching the cluster we can use the CLI and some jq to get back these parameters. It’s a simple built-in storage mechanism of AWS, and all you need is the original stack name:

# Get back the DiscoveryURL: Describe the stack, select the parameter list
DISCOVERY_URL=`aws cloudformation describe-stacks --stack-name coreos-test | \
  jq -r '[.Stacks[].Parameters[]][] | select (.ParameterKey == "DiscoveryURL") | .ParameterValue'`

# Get back the auto-scaling-group-id
LEADER_ASG=`aws cloudformation describe-stacks --stack-name coreos-test | \
  jq -r '[.Stacks[].Outputs[]][] | select (.OutputKey == "AutoScalingGroup") | .OutputValue'`

echo "Discovery Url is $DISCOVERY_URL and Leader ASG is $LEADER_ASG"

Why is this important? Because now we can either inspect the state of the cluster via the discovery url service, or query the ASG to inspect running nodes directly:

# Query AWS for Leader Nodes
$ aws ec2 describe-instances --filters Name=tag-value,Values=$LEADER_ASG | \
  jq '.Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddress'

# Inspect the Discovery Url for nodes, trimming port. 
$ curl -s $DISCOVERY_URL | jq '.node.nodes[].value[0:-5]'

# Taking the latter one step further, we can build an Etcd Peers string using Jq, xargs and tr
$ ETCD_PEERS=`curl -s $DISCOVERY_URL | jq '.node.nodes[].value[0:-5]' | xargs -I{}  echo "{}:4001" | tr "\\n" ","`
# Drop the last ,
$ ETCD_PEERS=${ETCD_PEERS%?}

Armed with this information we are now able to spin up new CoreOS nodes and have them use our CoreOS leader cluster for management. The CoreOS Cluster Architecture page has the specific cloud-config settings, which amount to:

  • Disable etcd, we don’t need it
  • Set etcd peer settings to a comma delimited list of nodes for Fleet, Locksmith
  • Set environment variables for fleet and etcd in start scripts

We’ll make the etcd peer list a parameter for our template. We can duplicate our leader template, replace the UserData portion of the LaunchConfig with the updated settings from the link above, and add { Ref: } parameters where appropriate. Let’s add a metadata parameter as well:

"Parameters": {
    "EtcdPeers" : {
      "Description" : "A comma delimited list of etcd endpoints to use for state management.",
      "Type" : "String"
    },
    "FleetMetadata" : {
      "Description" : "A comma delimited list of key=value attributes to apply for fleet",
      "Type" : "String"
    }
  }

We can use the Ref functionality to pass these to our UserData script of the LaunchConfig:

//other config above
  "UserData" : { "Fn::Base64":
          { "Fn::Join": [ "", [
            "#cloud-config\n\n",
            "coreos:\n",
            "  fleet:\n",
            "    metadata: ", { "Ref": "FleetMetadata" }, "\n",
            "    etcd_servers: $", { "Ref": "EtcdPeers" }, "\n",
            "  locksmith:\n",
            "    endpoint: ", { "Ref": "EtcdPeers" }, "\n"
            ] ]
          }

// Other config below

Finally we need a bash script which inspects the existing stack for the information to pass as parameters to this new template. I also appreciate a CLI tool with a sane set of explicit flags. When I launch a secondary set of CoreOS nodes, I’d like something simple to set the name, type, metadata, and which cluster to join:

$ launch-worker-group.sh -n r3-workers -t r3.large -j coreos-test -m "instancetype=r3,role=worker"

Bash has flag-parsing abilities via its getopts builtin, which we’ll simply use to set variables:

#!/bin/bash

while getopts n:j:m:t: FLAG; do
  case $FLAG in
    n)  STACK_NAME=${OPTARG};;
    j)  JOIN=${OPTARG};;
    m)  METADATA=${OPTARG};;
    t)  INSTANCE_TYPE=${OPTARG};;
    [?])
      echo >&2 "Usage: $0 [ -n stack-name ] [ -j join-to-leader ] [ -m fleet-metadata ] [ -t instance-type ]"
      exit 1;;
  esac
done

shift $((OPTIND-1))

# You can set defaults, too:
if [ -z "$INSTANCE_TYPE" ]; then
  INSTANCE_TYPE="m3.medium"
fi

With this in place it’s just a matter of calling the AWS CLI with our new template and updated parameters. The only thing we’re doing differently than the original script is using CloudFormation’s json parameter functionality. This allows for more structured data in variables. Otherwise the comma-delimited list for etcd peers will throw off the CLI call.

DISCOVERY_URL=`aws cloudformation describe-stacks --stack-name $JOIN | \
  jq -r '[.Stacks[].Parameters[]][] | select (.ParameterKey == "DiscoveryURL") | .ParameterValue'`
# Taking the latter one step further, we can build an Etcd 
# Peers string using jq, xargs and tr to flatten
ETCD_PEERS=`curl -s $DISCOVERY_URL | jq '.node.nodes[].value[0:-5]' | \
  xargs -I{}  echo "{}:4001" | tr "\\n" ","`

# Drop the last ,
ETCD_PEERS=${ETCD_PEERS%?}

 # Create the CloudFormation stack
 aws cloudformation create-stack \
    --stack-name $STACK_NAME \
    --template-body file://coreos-worker-hvm.template \
    --capabilities CAPABILITY_IAM \
    --tags Key=Name,Value=CoreOS Key=Role,Value=Worker \
    --parameters "[
      { \"ParameterKey\":\"FleetMetadata\",\"ParameterValue\":\"${METADATA}\" },
      { \"ParameterKey\":\"InstanceType\",\"ParameterValue\":\"${INSTANCE_TYPE}\" },
      { \"ParameterKey\":\"EtcdPeers\",\"ParameterValue\":\"${ETCD_PEERS%?}\" },
      { \"ParameterKey\":\"KeyPair\",\"ParameterValue\":\"${KEYPAIR}\" }
    ]"

And launch it! This will create a new stack for your worker nodes, with whatever metadata and instance type you want.

There are a few ways to extend this. For one, we haven’t dealt with updating or destroying the stack. You can create separate shell scripts or combine them together with flags for determining which action to take. I prefer the latter as it keeps all related scripts in one file, but you can break out accordingly. You can use the AWS CLI and the stack name to query for private IPs and update Route 53 accordingly, bypassing the need for an ELB.
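As a hypothetical sketch (the hosted zone id and record name are placeholders), you could upsert a round-robin A record from the ASG’s private IPs, letting jq build the change batch:

# Build a Route 53 change batch from the leader ASG's private IPs
aws ec2 describe-instances --filters Name=tag-value,Values=$LEADER_ASG | \
  jq '[.Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddress] |
      { Changes: [ { Action: "UPSERT", ResourceRecordSet: {
          Name: "front.internal.example.com.", Type: "A", TTL: 60,
          ResourceRecords: map({ Value: . }) } } ] }' > change-batch.json

# Upsert the record set
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch file://change-batch.json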

You can do a lot with bash and other CLI tools like jq. You don’t need to scour GitHub for open source tools, or frameworks that have bells and whistles. The core components are there, you just need to glue them together. Yes, your scripts may get out of hand, but at that point it’s worth looking for alternatives because there’s probably a specific problem you need to solve. Remember, be opinionated and let those choices guide you. At some point in the future I may be raving about Terraform; friends say it’s a great tool, but it’s just not one that I need, or particularly want, to use right now.

Slimming down Dockerfiles: Decrease the size of Gitlab’s CI runner from 900 to 420 mb

I’ve been leveraging Gitlab CI for our continuous integration needs, running both the CI site and CI runners on our CoreOS cluster in docker containers. It’s working well. On the runner side, after cloning the ci-runner repository and running docker build -t base-runner ., I was a little disappointed with the size of the runner. It weighed in at 900MB, a fairly hefty size for something that should be a lightweight process. I’ve built the ci-runner dockerfile with the name “base-runner”:

base-runner      latest      aaf8a1c6a6b8    2 weeks ago    901.1 MB

The dockerfile is well documented and organized, but I immediately noticed some things which cause dockerfile bloat. There are some great resources on slimming down docker files, including optimizing docker images and the docker-alpine project. The advice comes down to:

  • Use the smallest possible base layer (usually Ubuntu is not needed)
  • Eliminate, or at least reduce, layers
  • Avoid extraneous cruft, usually due to excessive packages

Let’s make some minor changes to see if we can slim down this image. At the top of the dockerfile, we see the usual apt-get commands:

# Update your packages and install the ones that are needed to compile Ruby
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get install -y curl libxml2-dev libxslt-dev libcurl4-openssl-dev libreadline6-dev libssl-dev patch build-essential zlib1g-dev openssh-server libyaml-dev libicu-dev

# Download Ruby and compile it
RUN mkdir /tmp/ruby
RUN cd /tmp/ruby && curl --silent ftp://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p481.tar.gz | tar xz
RUN cd /tmp/ruby/ruby-2.0.0-p481 && ./configure --disable-install-rdoc && make install

Each RUN command creates a separate layer, and nothing is cleaned up. These artifacts will stay with the container unnecessarily. Running another RUN rm -rf /tmp won’t help, because the history is still there. We need things gone and without a trace. We can “flatten” these commands and add some cleanup commands while preserving readability:

# Update your packages and install the ones that are needed to compile Ruby
# Download Ruby and compile it
RUN apt-get update -y && 
    apt-get upgrade -y && 
    apt-get install -y curl libxml2-dev libxslt-dev libcurl4-openssl-dev libreadline6-dev libssl-dev patch build-essential zlib1g-dev openssh-server libyaml-dev libicu-dev && 
    mkdir /tmp/ruby && 
    cd /tmp/ruby && curl --silent ftp://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p481.tar.gz | tar xz && 
    cd /tmp/ruby/ruby-2.0.0-p481 && ./configure --disable-install-rdoc && make install && 
    apt-get clean && 
    rm -rf /var/lib/apt/lists/* /tmp/*

There’s only one run command, and the last two lines clean up the apt-get downloads and tmp space. Let’s see how well we did:

$ docker images | grep base-runner
base-runner      latest      2a454f84e4e8      About a minute ago   566.9 MB

Not bad; with one simple modification we went from 902mb to 566mb. This change comes at the cost of build speed. Because there’s no previously cached layer, we always start from the beginning. When creating docker files, I usually start with multiple run commands so history is preserved while I’m working on the file, but then concatenate everything at the end to minimize cruft.

566mb is a good start, but can we do better? The goal of this build is to install the ci-runner. This requires Ruby and some dependencies, all documented on the ci-runner’s readme. As long as we’re meeting those requirements, we’re good to go. Let’s switch to debian:wheezy. We’ll also need to tweak the locale setting for debian. Our updated dockerfile starts with this:

# gitlab-ci-runner

FROM debian:wheezy
MAINTAINER Michael Hamrah <[email protected]>

# Get rid of the debconf messages
ENV DEBIAN_FRONTEND noninteractive

# Update your packages and install the ones that are needed to compile Ruby
# Download Ruby and compile it
RUN apt-get update -y && 
    apt-get upgrade -y && 
    apt-get install -y locales curl libxml2-dev libxslt-dev libcurl4-openssl-dev libreadline6-dev libssl-dev patch build-essential zlib1g-dev openssh-server libyaml-dev libicu-dev && 
    mkdir /tmp/ruby && 
    cd /tmp/ruby && curl --silent ftp://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p481.tar.gz | tar xz && 
    cd /tmp/ruby/ruby-2.0.0-p481 && ./configure --disable-install-rdoc && make install && 
    apt-get clean && 
    rm -rf /var/lib/apt/lists/* /tmp/*

Let’s check this switch:

base-runner      latest      40a1465ebaed      3 minutes ago      490.3 MB

Better. A slight modification can slim this down some more; the dockerfile builds ruby from source. Not only does this take longer, it’s not needed: we can just include the ruby and ruby-dev packages; on debian:wheezy these are good enough for running the ci-runner. By removing the install-from-source commands we can get the image down to:

base-runner      latest      bb4e6306811d      About a minute ago   423.6 MB

This is now more than 50% smaller than the original, with a minimal amount of tweaking.

Pushing Even Further

Normally I’m not looking for an absolute minimal container. I just want to avoid excessive bloat, and some simple commands can usually go a long way. I also find it best to avoid packages in favor of pre-built binaries. As an example I do a lot of work with Scala, and have an sbt container for builds. If I were to install the SBT package from debian I’d get a container weighing in at a few hundred megabytes. That’s because the SBT package pulls in a lot of dependencies: java, for one. But if I already have a jre, all I really need is the sbt jar file and a bash script to launch. That considerably shrinks down the dockerfile size.

When selecting a base image, it’s important to realize what you’re getting. A linux distribution is simply the linux kernel and an opinionated configuration of packages, tools and binaries. Ubuntu uses apt for package management, Fedora uses yum. Centos 6 uses a specific version of the kernel, while version 7 uses another. You get one set of packages with Debian, another with Ubuntu. That’s the power of specific communities: how frequently things are updated, how well they’re maintained, and what you get out-of-box. A docker container jails a process to only see a specific part of the filesystem; specifically, the container’s file system. Using a distribution ensures that the required libraries and support binaries are there, in place, where they should be. But major distributions aren’t designed to run specific applications; they’re general-purpose servers designed to run a variety of apps and processes. If you want to run a single process there’s a lot that comes with an OS you don’t need.

There’s a re-emergence in the docker world of lightweight linux distributions first popularized with embedded systems. You’re probably familiar with busybox, useful for running one-off bash commands. Because of busybox’s embedded roots, it’s not quite intended for packages. Alpine Linux is another alternative which features its own official registry. It’s still very small, based on busybox, and has its own package system. I tried getting gitlab’s ci-runner working with alpine, but unfortunately some of the ruby gems used by ci-runner require GNU packages which aren’t available, and I didn’t want to compile them manually. In terms of time/benefit, I can live with 400mb and move on to something else. For most things you can probably do a lot with Alpine and keep your containers really small: great for doing continuous deploys to a bunch of servers.

The bottom line is know what you need. If you want a minimal container, build up, rather than slim down. You usually need the runtime for your application (if any), your app, and supporting dependencies. Know those dependencies, and avoid cruft when you can.

Book Review: Go Programming Blueprints, and the beauty of a language.

Just over two years ago my wife and I traveled around Asia for several months (thegreatbigadventure.tumblr.com). I didn’t do any programming while I was gone but I did a great deal of reading, gaining a new-found appreciation for programming and technology. I became deeply interested in Scala and Go for their respective approaches to statically typed languages: Scala for its functional programming aspects, and Go for its refreshing and intentionally succinct approach to interfaces, types, and its anti-inheritance. The criticism I most often hear of Scala, that it’s too open, too free-for-all in its paradigms, is in stark contrast to the main criticism I hear of Go: it’s too limiting, too constrained.

Since returning, a majority of my time has been focused on Scala, yet I still keep a hand in the Go cookie jar. Both languages are incredibly productive, and I appreciate FP the more I use it and understand it. Scala’s criticism is legitimate; it can be a chaotic language. However, my personal opinion is the language shouldn’t constrain you: it’s the discipline of the programmer to write code well, not the language. A bad programmer is going to destroy any language; a good programmer can make any code beautiful. More importantly, no language is magical. A language is a tool, and it’s up to the programmer to use it effectively.

Learning a language is more than just knowing how to write a class or function. Learning a language is about composing these together effectively and using the ecosystem around the language. Scala’s benefit is the ecosystem around the JVM; idiomatic Scala is a contentious debate, as you have the functional programmers on one side and the more lenient anti-javaists on the other (Martin Odersky’s talk Scala: The Simple Parts is a great overview of where Scala shines). Go, on the other hand, is truly effective when you embrace its opinions and leverage its ecosystem: understanding imports and go get, writing small, independent modules, reusing these modules, embracing interfaces, and understanding the power of goroutines.

Last summer I had the great pleasure of being a technical reviewer for Mat Ryer’s Go Programming Blueprints. I’ve read a great many programming books in my career and appreciated Mat’s approach to showcasing the power and simplicity of Go. It’s not for beginner programmers, but if you have some experience, not even with Go, you can kick-start a working knowledge easily with Mat’s book. My favorite aspect is it explains how to write idiomatic Go to build applications. One example application composes discrete services and links them with bitly’s NSQ library, another uses a routing library on top of Go’s HTTP request handlers. The book isn’t just isolated to web programs; there’s a section on writing CLI apps which link together with standard in and standard out. For those criticizing Go’s terseness, Mat’s book exemplifies what you can do with those terse systems: write scalable, composable apps that are also maintainable and readable. The book shows why so many exciting new tools are written in Go: you can do a lot with little, and they compile to statically linked, minimal binaries.

As you develop your craft of writing code, you develop certain opinions on the way code should work. When your language is in line with your opinions, or you develop opinions based on the language, you are effectively using that language. If you are learning a new language, like Go, but still applying your existing opinions on how to develop applications (say, by wishing the language had generics), you struggle. Worse, you are attempting to shape a new language to the one you know, effectively programming in the old language. You should embrace what the language offers, and honor its design decisions. Mat’s book shows how to apply Go’s design decisions effectively. The language itself will evolve and grow, but it will do it in a way that enhances and honors its design decisions. And if you still don’t like it, or Scala, well there’s always Rust.

Deploying Docker Containers on CoreOS with the Fleet API

I’ve been spending a lot more time with CoreOS in search of docker-filled utopian PaaS dreams. I haven’t found quite what I’m looking for, but with some bash scripting, a little http, good tools and a lotta love I’m coming close. There is no shortage of solutions to this problem, and honestly, nobody’s really figured this out yet in an easy, fluid, turn-key type of way. You’ve probably read about CoreOS, Mesos, Marathon, Kubernetes… maybe even dug into Deis, Flynn, Shipyard. You’ve spun up a cluster, and are like… This is great, now what.

What I want is to go from an app on my laptop to running in a production environment with minimal fuss. I don’t want to re-invent the wheel; there are too many people solving this problem in a similar way. I like CoreOS because it provides a bare-bones docker runtime with a solid set of low-level tools. Plus, a lot of people I’m close with have been using it, so the cross-pollination of ideas helps overcome some hurdles.

One of these hurdles is how you launch containers on a cluster. I really like Marathon’s http api for Mesos, but I also like the simplicity of CoreOS as a platform. CoreOS’s distributed init system is Fleet, which leverages systemd for running a process on a CoreOS node (it doesn’t have to be a container). It has some nice features, but having to constantly write similar systemd files and run fleetctl to manage containers is somewhat annoying.

Turns out, Fleet has an http API. It’s not quite as nice as Marathon’s; you can’t easily scale to N number of instances, but it does come close. There are a few examples of using the API to launch containers, but I wanted a more end-to-end solution that eliminated boilerplate.

Activate the Fleet API

The Fleet API isn’t enabled out-of-the-box. That makes sense as the API is currently unsecured, so you shouldn’t enable it unless you have the proper VPC set up. CoreOS has good documentation on getting the API running. For a quick start you can drop the following yaml snippet into your cloud-config’s units section:

- name: fleet.socket
  drop-ins:
    - name: 30-ListenStream.conf
      content: |
        [Socket]
        ListenStream=8080
        Service=fleet.service
        [Install]
        WantedBy=sockets.target

Exploring the API

With the API enabled, it’s time to get to work. The API has some simple documentation but offers enough to get started. I personally like the minimal approach, although I wish it was more feature-rich (it is v1, and better than nothing).

You can do a lot with curl, bash and jq. First, let’s see what’s running. All these examples assume you have a FLEET_ENDPOINT environment variable set with the host and port:

On a side note, environment variables are key to reusing the same functionality across environments. In my opinion, they aren’t used nearly enough. Check out the twelve-factor app’s config section to understand the importance of environment variables.
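For example (the address is a placeholder for one of your CoreOS hosts):

export FLEET_ENDPOINT=http://10.0.0.101:8080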

curl -s $FLEET_ENDPOINT/fleet/v1/units | jq '.units[] | { name: .name, currentState: .currentState}'

Sure, you can get the same data by running fleetctl list-units, but the http command doesn’t involve ssh, which can be a plus if you have a protected network, or are running from an application or CI server.

Creating Containers

Instead of crafting a fleet template and running fleetctl start sometemplate, we want to launch new units via http. This involves PUTting a resource to the /units/ endpoint under the name of your unit (it’s actually /fleet/v1/units; it took me forever to find the path prefix). The Fleet API will build a corresponding systemd unit from the json payload, and the content closely corresponds to what you can do with a fleet unit file.

The schema takes in a desiredState and an array of options which specify the section, name, and value for each line. Most Fleet templates follow a similar pattern, as exemplified by the launching containers with Fleet guide:

  1. Cleanup potentially running containers
  2. Pull the container
  3. Run the container
  4. Define X-Fleet parameters, like conflicts.

Again we’ll use curl, but writing json on the command line is really annoying. So let’s create a unit.json for our payload defining the tasks for CoreOS’s apache container:

{
  "desiredState": "launched",
  "options": [
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/usr/bin/docker kill %p-i%"
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/usr/bin/docker rm %p-%i"
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "/usr/bin/docker pull coreos/%p"
    },
    {
      "section": "Service",
      "name": "ExecStart",
      "value": "/usr/bin/docker run --rm --name %pi-%i -p 80 coreos/%p /usr/sbin/apache2ctl -D FOREGROUND"
    },
    {
      "section": "Service",
      "name": "ExecStop",
      "value": "/usr/bin/docker stop %p-%i"
    },
    {
      "section": "X-Fleet",
      "name": "Conflicts",
      "value": "%p@*.service"
    }
  ]
}

There are a couple of things of note in this snippet:

  • We’re adding a “-” in front of the docker kill and docker rm commands of the ExecStartPre tasks. This tells Fleet to continue if there’s an error; these tasks are precautionary, removing an existing phantom container that would conflict with the newly launched one.
  • We’re using Fleet’s systemd placeholders %p and %i to replace placeholders in our template with values from the unit name. This provides a level of agnosticism in our template; we can easily reuse this template to launch different containers by changing the name. Unfortunately this doesn’t quite work in our example because it’s apache specific, but if you were running a container with an entry point command specified, it would work fine. You’ll also want to manage containers under your own namespace, either in a private or public registry.

We can launch this file with curl:

curl -d @unit.json -w "%{http_code}" -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/[email protected]

If all goes well you’ll get back a 201 Created response. Try running the list units curl command to see your container task.
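For example, a quick check on just the apache units:

curl -s $FLEET_ENDPOINT/fleet/v1/units | \
  jq '.units[] | select(.name | startswith("apache@")) | { name: .name, currentState: .currentState }'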

We can run fleetctl cat apache@1 to view the generated systemd unit:

[Service]
ExecStartPre=-/usr/bin/docker kill %p-%i
ExecStartPre=-/usr/bin/docker rm %p-%i
ExecStartPre=/usr/bin/docker pull coreos/%p
ExecStart=/usr/bin/docker run --rm --name %p-%i -p 80 coreos/%p /usr/sbin/apache2ctl -D FOREGROUND
ExecStop=/usr/bin/docker stop %p-%i

[X-Fleet]
Conflicts=%p@*.service

Want to launch a second task? Just post again, but change the instance number from 1 to 2:

curl -d @unit.json -w "%{http_code}" -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/[email protected]

When you’re done with your container, you can simply issue a delete command to tear it down:

curl -X DELETE -w "%{http_code}" $FLEET_ENDPOINT/fleet/v1/units/[email protected]

Deploying New Versions

Launching individual containers is great, but for continuous delivery, you need to deploy new versions with no downtime. The example above used systemd’s placeholders for providing the name of the container, but left the apache commands in place. Let’s use another CoreOS example container from the zero downtime frontend deploys blog post. This coreos/example container uses an entrypoint and tagged docker versions to go from a v1 to a v2 version of the app. Instead of creating multiple, similar fleet unit files like that blog post, can we make an agnostic http call that works across versions? Yes we can.

Let’s conceptually figure out how this would work. We don’t want to change the json payload across versions, so the body must be static. We could use some form of templating or find-and-replace, but let’s try and avoid that complexity for now. Can we make do with the options provided to us? We know that the %p parameter lets us pass in the template name to our body. So if we can specify the name and version of the container we want to launch in the name of the unit file we PUT, we’re good to go.

So we want to:

curl -d @unit.json -w "%{http_code}" -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/example:[email protected]

I tried this with the above snippet, but replaced the run and stop commands with the following:

{
      "section": "Service",
      "name": "ExecStart",
      "value": "/usr/bin/docker run --rm --name %p-%i -p 80 coreos/%p"
    },
    {
      "section": "Service",
      "name": "ExecStop",
      "value": "/usr/bin/docker stop %p-%i"
    },

Unfortunately, this didn’t work because the colon, :, in example:1.0.0 makes the name invalid for a container. I could forego the name, but then I wouldn’t be able to easily stop, kill or rm the container. So we need to massage the %p parameter a little bit. Luckily, bash to the rescue.

Unfortunately, systemd is a little wonky when it comes to scripting in a unit file. It’s relatively hard to create and access environment variables, you need fully-qualified paths, and multiple lines for arbitrary scripts are discouraged. After googling how exactly to do bash scripting in a systemd file, or why an environment variable wasn’t being set, I began to understand the frustration in the community over popular distros switching to systemd. But we can still make do with what we have by launching a /bin/bash command instead of the vanilla /usr/bin/docker:

{
  "desiredState": "launched",
  "options": [
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker kill $APP-%i\""
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "-/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker rm $APP-%i\""
    },
    {
      "section": "Service",
      "name": "ExecStartPre",
      "value": "/usr/bin/docker pull coreos/%p"
    },
    {
      "section": "Service",
      "name": "ExecStart",
      "value": "/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker run --name $APP-%i -h $APP-%i -p 80 --rm coreos/%p\""
    },
    {
      "section": "Service",
      "name": "ExecStop",
      "value": "/bin/bash -c \"APP=`/bin/echo %p | sed 's/:/-/'`; /usr/bin/docker stop $APP-%i"
    },
    {
      "section": "X-Fleet",
      "name": "Conflicts",
      "value": "%p@*.service"
    }
  ]
}

and we can submit with:

curl -X PUT -d @unit.json -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/example:[email protected]

More importantly, we can easily launch multiple containers of version two simultaneously:

curl -X PUT -d @unit.json -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/example:[email protected]
curl -X PUT -d @unit.json -H 'Content-Type: application/json' $FLEET_ENDPOINT/fleet/v1/units/example:[email protected]

and then destroy version one:

curl -X DELETE -w "%{http_code}" $FLEET_ENDPOINT/fleet/v1/units/example:[email protected]

More jq and bash fun

Let’s say you do start multiple containers, and you want to cycle them out and delete them. In our above example, we’ve started two containers. How will we easily move to the next version and remove all of the old nodes at once? The Marathon API has a simple “scale” button which does just that. Can we do the same for CoreOS? Yes we can.

Conceptually, let’s think about what we want. We want to select all containers running a specific version, grab the full unit file name, and then curl a DELETE operation to that endpoint. We can use the Fleet API to get our information, jq to parse the response, and the bash pipe operator with xargs to call our curl command.

Stringing this together like so:

curl -s $FLEET_ENDPOINT/fleet/v1/units | jq '.units[] | .name | select(startswith("example:1.0.0"))' | xargs -t -I{} curl -s -X DELETE $FLEET_ENDPOINT/fleet/v1/units/{}

jq provides some very powerful json processing. We are pulling out the name field, only selecting elements which start with our specific app and version, and then piping that to xargs. The -I{} flag for xargs is a substitution trick I learned. It allows you to do string placement rather than pass the field as a trailing argument.
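A tiny illustration of the difference:

# Without -I, xargs appends input as trailing arguments; with -I{} it substitutes each line in place
$ echo -e "one\ntwo" | xargs -I{} echo "item: {}"
item: one
item: two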

Conclusion

I can pretty much guarantee no matter what you pick to run your Docker PaaS, it won’t do exactly what you want. I can also guarantee that there will be a lot to learn: new apis, new commands, new tools. It’s going to feel like pushing a round peg in a square hole. But that’s okay; part of the experience is formulating opinions on how you want things to work. It’s a blend of learning the patterns and practices of a tool versus configuring it to work the way you want. Always remember a few things:

  • Keep It Simple
  • Think about how it should work conceptually
  • You can do a lot with the command line.

With an API-enabled CoreOS cluster, you can easily plug container deployment into whatever build flow you use: your laptop, a GitHub web hook, Jenkins, or anything else. Because all the above commands are bash, you can replace any part with a bash variable and execute appropriately. This makes parameterizing these commands into functions easy.
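For example, wrapping the PUT from earlier in a small function (unit.json and FLEET_ENDPOINT as above) makes deploys one-liners:

# Reusable deploy function; pass the full unit prefix, e.g. example:2.0.0@1
deploy_unit() {
  local unit=$1
  curl -s -X PUT -d @unit.json -H 'Content-Type: application/json' \
    "$FLEET_ENDPOINT/fleet/v1/units/${unit}.service"
}

deploy_unit "example:2.0.0@1"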

Adding Http Server-Side Events to Akka-Streams

In my last blog post we pushed messages from RabbitMq to the console using Akka-Streams. We used the reactive-rabbit library to create an Akka-Streams Source for our streams-playground queue and mapped the stream to a println statement before dropping it into an empty Sink. All Akka-Streams need both a Source and Sink to be runnable; we created a complete stream blueprint to be run later.

Printing to the console is somewhat boring, so let’s take it up a notch. The excellent Spray web service library is being merged into Akka as Akka-Http. It’s essentially Spray built with Akka-Streams in mind. The routing dsl, immutable request/response model, and high-performance http server are all there; think of it as Spray vNext. Check out Mathias Doenitz’s excellent slide deck on akka-http from Scala Days to learn more on this evolution of Spray; it also highlights the back-pressure functionality Akka-Streams will give you for Http.

Everyone’s familiar with the Request/Response model of Http, but to show the power of Akka-Streams we’ll add Heiko Seeberger’s Akka-SSE library which brings Server-Side Events to Akka-Http. Server-Side Events are a more efficient form of long-polling: a lighter-weight protocol than the bi-directional WebSocket API. It allows the client to easily register a handler which the server can then push events to. Akka-SSE adds an SSE-enabled completion marshaller to Akka-Http so your response can be SSE-aware. Instead of printing messages to the console, we’ll push those messages to the browser with SSE. This shows one of my favorite features of stream-based programming: we simply connect the specific pipes to create more complex flows, without worrying about the how; the framework handles that for us.

Changing the Original Example

If you’re interested in the code, simply clone the original repo with git clone https://github.com/mhamrah/streams-playground.git and then git checkout adding-sse to get to this step in the repo.

To modify the original example we’re going to remove the println and Sink calls from RabbitMqConsumer so we can plug our enhanced Source into the Akka-Http sink.

def consume() = {
    Source(connection.consume("streams-playground"))
      .map(_.message.body.utf8String)
  }

This is now a partial flow: we build up the original RabbitMq Source with our map function to get the message body. Now the “other end” of the stream needs to be connected, which we defer until later. This is the essence of stream composition. There are multiple ways we can cut this up: our map call could be the only thing in this function, with our RabbitMq source defined elsewhere.

Adding Akka-Http

If you’re familiar with Spray, Akka-Http won’t look that much different. We want to create an Actor for our http service. There are just a few different traits we extend our original Actor from, and a different way to plug our routing functions into the Akka-Streams pipeline.

class HttpService
  extends Actor
  with Directives
  with ImplicitFlowMaterializer
  with SseMarshalling {
  // implementation
  // ...
}

Directives gives us the routing dsl, similar to spray-routing (the functions are pretty much the same). Because Akka-Http uses Akka-Streams, we need an implicit FlowMaterializer in scope to run the stream. ImplicitFlowMaterializer provides a default. Finally, the SseMarshalling trait from Heiko Seeberger’s library provides the SSE functionality we want for our app. If you’re interested in a robust Akka-Streams sample, Heiko’s Reactive-Flows is worth checking out.

Binding to Http

Within our actor body we’ll create our http stream by binding a routing function to an http port. This is a little different than Spray; there’s just some syntactical sugar so we can plug our routing function directly into the http pipeline:

//need an ExecutionContext for Futures
    import context.dispatcher

    //There's no receive needed, this is implicit
    //by our routing dsl.
    override def receive: Receive = Actor.emptyBehavior

    //We bind to an interface and create a 
    //Flow with our routing function
    Http()(context.system)
      .bind(Config.interface, Config.port)
      .startHandlingWith(route)

    //Simple composition of basic routes
    private def route: Route = sse ~ assets

    //Defined later
    private def sse: Route = ???
    private def assets: Route = ???

If we weren’t using the Routing DSL we’d need to explicitly handle HttpRequest messages in our receive partial function. But the startHandlingWith call will do this for us; like spray-routing it takes in a routing function, and will call the appropriate route handler. New http requests will be pumped into the route handler and completed with the completion function at the end of the route.

Adding SSE

The last piece of the puzzle is adding a specific route for SSE. We need two pieces for SSE support: first, an implicit function which converts the type produced from our Source to an Sse.Message; in this case, we need to go from a String to an Sse.Message. Second, we need a route where a client can subscribe to the stream of server-side events.

//Convert a String (our RabbitMq output) to an SSE Message
 implicit def stringToSseMessage(event: String): Sse.Message = {
      Sse.Message(event, Some("published"))
    }

 //add a route for our sse endpoint.
 private def sse: Route = {
      path("messages") {
        get {
          complete {
            RabbitMqConsumer.consume
          }
        }
      }
    }

In order for SSE to work in the browser we need to produce a stream of SSE messages with a specific content-type: Content-Type: text/event-stream. That’s what Akka-SSE provides: the SSE Message case classes and serialization to text/event-stream. Our implicit function stringToSseMessage allows the Scala types to align so the “stream pipes” can be attached together. In our case, we produce a stream of Strings, our RabbitMq message body. We need to produce a stream of SSE.Messages so we add a simple conversion function. When a new client connects, they’ll attach themselves to the consuming RabbitMq Source. Akka-Http lets you natively complete a route with a Flow; Akka-Sse simply completes that Flow with the proper Http response for SSE.

Trying It Out

Fire up SBT and run ~reStart, ensuring you have RabbitMq running and set up a queue named streams-playground (see the README). In your console, try a simple curl command:

$ curl http://localhost:8080/messages

The curl command won’t return. Start sending messages via the RabbitMq Admin console and you’ll see the SSE output in action:

$ curl localhost:8080/messages
event:published
data:woot!

event:published
data:another message!

Close the curl command, and fire up your browser at http://localhost:8080; you’ll see a simple web page (served from the assets route). Continue sending messages via RabbitMq, and those messages will be added to the dom. Most modern browsers natively support SSE with the EventSource object. The following gist creates an event listener on the 'published' event, which is produced from our implicit string => sse function above:

There are also handlers for opening the initial sse connection and any errors produced. You could also add more events; our simple conversion only goes from a String to one specific SSE event of type published. You could map a set of case classes (preferably an algebraic data type) to a set of events for the client. Most modern browsers support EventSource; there’s no need for an additional framework or library. The gist above includes a test I copied from the html5 rocks page on SSE.

A Naive Implementation

If you open up multiple browsers to localhost, or curl http://localhost:8080/messages a few times, you’ll notice that a published message only goes to one client. This is because our initial RabbitMq Source only consumes one message from a queue, and passes that down the stream pipeline. That single message will only go to one of the connected clients; there’s no fanout or broadcasting. You can add that with either RabbitMq or Akka-Streams; try experimenting for yourself!

A Gentle Introduction To Akka Streams

I’m happy to see stream-based programming emerge as a paradigm in many languages. Streams have been around for a while: take a look at the good ol’ | operator in Unix. Streams offer an interesting conceptual model to processing pipelines that is very functional: you have an input, you produce an output. You string these little functions together to build bigger, more complex pipelines. Most of the time you can make these functions asynchronous and parallelize them over input data to maximize throughput and scale. With a Stream, handling data is almost hidden behind the scenes: it just flows through functions, producing a new output from some input. In the case of an Http server, the Request-Response model across all clients is a Stream-based process: you map a Request to a Response, passing it through various functions which act on an input. Forget about MVC, it’s all middleware. No need to set variables, iterate over collections, orchestrate function calls. Just concatenate stream-enabled functions together, and run your code. Streams offer a succinct programming model for a process. The fact it also scales is a nice bonus.

Stream-based programming is possible in a variety of languages, and I encourage you to explore this space. There’s an excellent stream handbook for Node, an exploratory stream language from Yukihiro “Matz” Matsumoto of Ruby fame, Spark Streaming, and of course Akka-Streams, which joins the existing scalaz-stream library for Scala. Even Go’s http.HandlerFunc is stream-esque: you can easily wrap one handler function around another, building up a flow, and manipulate the response stream accordingly.

Why Akka-Streams?

Akka-Streams provides a higher-level abstraction over Akka’s existing actor model. The Actor model provides an excellent primitive for writing concurrent, scalable software, but it is still a primitive; it’s not hard to find a few critiques of the model. So is it possible to have your cake and eat it too? Can we abstract the functionality we want to achieve with Actors into a set of function calls? Can we treat Actor messages as inputs and outputs to functions, with type safety? Hello, Akka-Streams.

There’s an excellent activator template for Akka-Streams offering an in-depth tutorial on several aspects of the library. For a gentler introduction, read on.

The Recipe

To cook up a reasonable dish, we are going to consume messages from RabbitMq with the reactive-rabbit library and output them to the console. The code is on GitHub. If you’d like to follow along, git clone the repo and then git checkout intro; hopefully I’ll build up more functionality in later posts, so the master branch may differ.

Let’s start with a code snippet:

object RabbitMqConsumer {
  def consume(implicit flowMaterializer: FlowMaterializer) = {
    Source(connection.consume("streams-playground"))
      .map(_.message.body.utf8String)
      .foreach(println(_))
  }
}
  • We use a RabbitMq connection to consume messages off of a queue named streams-playground.
  • For each message, we pull out the message body and decode the bytes as a UTF-8 string.
  • We print the result to the console.

The Ingredients

  • A Source is something which produces exactly one output. If you need something that generates data, you need a Source. Our source above is produced from the connection.consume function.
  • A Sink is something with exactly one input. A Sink is the final stage of a Stream process. The .foreach call is a Sink which writes the input (_) to the console via println.
  • A Flow is something with exactly one input and one output. It allows data to flow through a function, like calling map on a collection: an element goes in, a new element comes out. The map call above is a Flow: it consumes a Delivery message and outputs a String.

In order to actually run something using Akka-Streams you must have both a Source and Sink attached to the same pipeline. This allows you to create a RunnableFlow and begin processing the stream. Just as you can compose functions and classes, you can compose streams to build up richer functionality. It’s a powerful abstraction allowing you to build your processing logic independently of its execution. Think of stream libraries where you “plug in” parts of streams together and customize accordingly.
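
To make the three ingredients concrete, here’s a small self-contained sketch of the same shape of pipeline, written against a recent Akka Streams release (where RunnableFlow has since been renamed RunnableGraph and stages carry an extra materialized-value type parameter); the composition itself is the idea described above:

import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, RunnableGraph, Sink, Source}
import akka.{Done, NotUsed}
import scala.concurrent.Future

object PipelineSketch extends App {
  implicit val system: ActorSystem = ActorSystem("pipeline")

  val source: Source[Array[Byte], NotUsed] =          // produces elements
    Source(List("hello", "streams").map(_.getBytes("UTF-8")))

  val decode: Flow[Array[Byte], String, NotUsed] =    // one input, one output
    Flow[Array[Byte]].map(bytes => new String(bytes, "UTF-8"))

  val print: Sink[String, Future[Done]] =             // consumes elements
    Sink.foreach[String](println)

  // Composing the pieces yields a blueprint; nothing runs until run() is called
  val blueprint: RunnableGraph[NotUsed] = source.via(decode).to(print)
  blueprint.run()
}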

A Simple Flow

You’ll notice the above snippet requires an implicit flowMaterializer: FlowMaterializer. A FlowMaterializer is required to actually run a Flow. In the snippet above, foreach acts as both a Sink and a run() call to run the flow. If you look at the Main.scala file you’ll see I start the stream easily in one call:

implicit val flowMaterializer = FlowMaterializer()
RabbitMqConsumer.consume

Create a queue named streams-playground via the RabbitMq Admin UI and run the application. You can publish messages in the RabbitMq Admin UI and they will appear in the console. Try some UTF-8 characters, like åßç∂!

A Variation

The original snippet is nice, but it does require the implicit FlowMaterializer to build and run the stream in consume. If you remove it, you’ll get a compile error. Is there a way to separate the definition of the stream from the running of the stream? Yes: simply remove the foreach call. foreach is just syntactic sugar for a map with a run() call. By explicitly attaching a Sink without calling run(), we construct our stream blueprint, producing a new object of type RunnableFlow. Intuitively, it’s a Flow which can be run().

Here’s the variation:

def consume() = {
  Source(connection.consume("streams-playground"))
    .map(_.message.body.utf8String)
    .map(println(_))
    .to(Sink.ignore) // won't start consuming until run() is called!
}

We got rid of the flowMaterializer implicit by terminating our stream with a to() call and a simple Sink.ignore, which discards whatever it receives. This stream will not run when consume() is called. Instead we must run it explicitly in Main.scala:

implicit val flowMaterializer = FlowMaterializer()
RabbitMqConsumer.consume().run()

We’ve separated out the entire pipeline into two stages: the build stage, via the consume call, and the run stage, with run(). Ideally you’d want to compose your stream processing as you wire up the app, with each component, like RabbitMqConsumer, providing part of the overall stream process.

A Counter Example

As an alternative, explore the RabbitMq tutorials for Java examples. Here’s a snippet from the site:

QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(QUEUE_NAME, true, consumer);

while (true) {
  QueueingConsumer.Delivery delivery = consumer.nextDelivery();
  String message = new String(delivery.getBody());
  System.out.println(" [x] Received '" + message + "'");
}

This is typical of an imperative style. The while loop controls our flow, we have to explicitly manage variables, and there’s no flow control (backpressure) on the consumer. We could separate out the body from the while loop, but we’d end up with an awkward function signature. Alternatively, on the Akka side there’s the solid amqp-client library, which provides an Actor-based model over RabbitMq:

// create an actor that will receive AMQP deliveries
val listener = system.actorOf(Props(new Actor {
  def receive = {
    case Delivery(consumerTag, envelope, properties, body) => {
      println("got a message: " + new String(body))
      sender ! Ack(envelope.getDeliveryTag)
    }
  }
}))

// create a consumer that will route incoming AMQP messages to our listener
// it starts with an empty list of queues to consume from
val consumer = ConnectionOwner.createChildActor(conn, Consumer.props(listener, channelParams = None, autoack = false))

You get the concurrency primitives via configuration over the actor system, but we still enter imperative-programming land in the Actor’s receive block (sure, this can be refactored to some degree). In general, if we can model our process as a set of streams, we achieve the same benefits we get with functional programming: a clear composition of what is happening, not how it’s happening.

Streams can be applied in a variety of contexts. I’m happy to see that the amazing and powerful spray.io library for RESTful web services will be merged into Akka as a stream-enabled HTTP toolkit. It’s also not hard to find out what’s been done with scalaz-stream or the plethora of tooling already available in other languages.

Don’t Sweat Choice in Tech: Be Opinionated

Recently I jumped back into some front-end development. My focus is primarily on backend systems and APIs, so I welcomed the opportunity to hack on a UI. I keep tabs on the front-end world, and a new project is a good opportunity for a test drive or to level up on an existing toolkit. The caveat, however, is the dreaded developaralysis. We have so many choices that discerning the differences, picking the “right one”, and learning it becomes an overwhelming endeavor. Should I try out gulp? What about test-driving react? Or should I go with the usual bootstrap/angular combo I’ve come to know well?

It’s hard to balance the cost of time in the present against the potential (and I emphasize potential) benefit of speed and simplicity later when choosing something new. Time is limited; do I need an exploration of Browserify, AMD, and UMD when all I really need is a simple script tag? Browserify looks cool, but what’s the return on that investment? The flood of options occurs at every level of experience; it’s endearing to overhear a debate amongst new developers on whether to learn Rails or Node first. It’s definitely not helpful when sites offer laundry lists of languages you should learn. C# and Java, really? I’m surprised assembly wasn’t on the list. Judging the nuances of NoSQL options is just as entertaining.

My programming career, now inching toward the 15-year mark, has seen its fair share of languages and frameworks. Happily I no longer think about ASP.NET view state or the server-control lifecycle. These were instrumental at one time, and even though they are long gone, those experiences helped shape my current opinions on how I want to develop (or, in this particular case, not to develop) software. I didn’t realize back then that I had a choice in how I develop: ASP.NET seemed a given. That horrible windows-on-web paradigm pushed me to an intense focus on MVC, now a staple of many web frameworks. In turn, with the advent of APIs and more complex, task-focused UX, I am keenly interested in the stream-based programming emerging in Scala and Node. What I’ve come to realize in the tumultuous world of programming, and with constantly needing to level up, is that frameworks and languages are only part of the equation. The most important part is me, and you: the developer. Steve Ballmer got it right: it’s about developers. Languages and frameworks help us do things, but our opinions on how we want to do them are what move us forward. When a framework matches your opinions, getting stuff done is simple and intuitive. When you feel like you’re jumping through hoops, it’s time to try something different.

My small foray back into the front-end world was met with the usual whirlwind of information, not to mention the usual upgrade of tools, so any answers I find on Google will be outdated:

I just want to throw together a web site. Grunt vs. Gulp? Wait, there’s this thing called Broccoli? Is there something different than Bootstrap that’s less bootstrappy? Are people still using h5bp?

It seemed even the simple go-to of yo angular was fraught with peril: what are all these questions I have to answer? A massive number of files were generated. Yes, they’re all important and, I know, all required for various things, but it’s information overload. Why is unresolved added to the body tag? What happens if I hack the meta-viewport settings? What is build:js doing? Should I put on the blinders and ignore it all? Maybe I should try the possible simplicity of just using npm. I was in it: developaralysis.

Stop. Relax. Breathe. I already knew the simple answer to navigating the awesome amount of choice: opinion. Forget about existing, pre-conceived notions of software. You need to do something: how would you do it? Chances are someone’s had a similar idea and written some software. Don’t even know what you need? Then start with something that’s easy to learn. If it doesn’t work out, you’ll have formed an opinion on what you wanted to happen. This is learning by fire.

Opinions have given birth to some of the most widely used software in the world. Yukihiro Matsumoto created Ruby out of his dissatisfaction with other OOP languages. Nginx was spawned by dissatisfaction with threaded web servers. You may not be ready to write a new language, framework, or web server, but your opinions can still shape what you learn and where you invest your time.

Nobody asked me a decade ago how I wanted to write web software. If they had, I doubt I would have come up with anything similar to ASP.NET webforms, if I could even have put together a semi-coherent answer. Yet I was a full-time ASP.NET webforms developer, and that’s how I was writing software. Eventually my teammates and I realized this way of programming was utter crap. We asked ourselves that simple question, which we should have asked a lot earlier: how do we want to do this?

At any level it’s important to develop opinions on how you want to achieve goals. Beginners may seem to be in a difficult spot because there’s so little grounding from which to formulate opinions: any answer may appear “too simple”. But the spectrum is the same for experienced developers as well. There’s always “something else” to know and factor in behind the curtain. You’re constantly peeling layers off the onion.

Don’t sweat the plethora of choices that exist. Nothing is perfect, and stagnation is the worst option. Take a moment and develop an opinion on how you want to solve a particular problem. Poke holes in your solution. See if somebody else has a similar idea, or a similar experience. Try something out: don’t like how it went or the result? Did you leverage the tool correctly? Okay, great: now you have the basis for something better. Develop your suite of go-to tooling. You can keep tabs on the ecosystem and cross-pollinate ideas across similar veins. Choice is a good thing: like a breadth-first search, it lets you explore broadly while still letting you run forward when you want.