Adventures in HttpContext: All the stuff after 'Hello, World'

Scalability comparison of WordPress with NGINX/PHP-FPM and Apache on an ec2-micro instance.

For the past few years this blog ran apache + mod_php on an ec2-micro instance. It was time for a change: I’ve enjoyed using nginx in other projects and thought I could get more out of my micro server, so I went with a php-fpm/nginx combo and was very surprised by the results. The performance charts are below. For php, response times varied little under minimal load, but nginx handled heavy load far better than apache, and overall throughput with nginx was phenomenal for this tiny server. The results for static content were even more impressive: apache effectively died after ~2,000 concurrent connections and 35k total pages, killing the server, while nginx handled the ramp up to 10,000 concurrent connections very well and delivered 160k successful responses.

Here are the results for static content, comparing apache with nginx. I suggest clicking through and exploring the charts:


Apache only handled 33.5k successful responses, reaching about 1,300 concurrent connections before it died pretty quickly. Nginx did far better:


160k successful responses with a 22% error rate and an average response time of 142ms. Not too shabby. The apache run effectively killed the server and required a full reboot, as ssh was unresponsive. Nginx barely hiccuped.

The results of my wordpress/php performance test are also interesting. I only ran 1,000 concurrent users hitting the site. Here’s the apache result:


There was a 21% error rate with 13.7k requests served and a 237ms average response time (I believe the lower average is due to errors). Overall not too bad for an ec2-micro instance, but the error rate was quite high, and nginx again did far better:


A total of 19k successes with a 0% error rate. The average response time was a little higher than apache’s, but nginx served far more responses. I also get a kick out of the response-time line between the two charts: apache is fairly choppy as it scales up, while nginx increases smoothly and evens out when the concurrent connections plateau. That’s what scalability should look like!

There are plenty of guides online showing how to set up nginx/php-fpm. The nginx guide on the WordPress Codex is the most thorough, but there’s a straightforward nginx/php guide from Tod Sul. I also relied on an nginx tuning guide from Dakini and this nginx/wordpress tuning guide from perfplanet; both have excellent information. You should also check out the html5 boilerplate nginx conf files, which contain great bits of information.

If you’re setting this up yourself, start simple and work your way up. The guides above have varying degrees of information and various configuration options, some of which conflict with each other. Here are some tips:

  1. Decide whether you’re going with a socket or a tcp/ip connection between nginx and php-fpm. A socket connection is slightly faster and local to the system; tcp/ip is (marginally) easier to set up and useful if you’re spanning multiple nodes (you could create a php app farm to complement an nginx front-facing web farm).

    I chose the socket approach between nginx/php-fpm. It was relatively painless, but I did hit a snag passing nginx requests to php: I kept getting a “no input file specified” error. It turned out to be a simple permissions issue: the default php-fpm user was different from the user the nginx webserver runs under. Which leads me to:

  2. Plan your users. Security issues are annoying, so make sure file and app permissions are all in sync.

  3. Check your settings! Read through the default configuration options so you know what’s going on. For instance, you may end up running more worker processes in your nginx instance than available cpus, killing performance. Well-documented configuration files are essential to tuning.

  4. Plan for access and error logging. If things go wrong during setup, you’ll want to know what’s going on and whether your server is getting requests. You can turn access logs off later.

  5. Get your app running, then test and tune. If you change too many configuration settings at once you’ll most likely hit a snag. I only did a moderate amount of tuning; nginx configuration files vary considerably, so again it’s a good idea to read through the options and make your own call. Ditto for php-fpm.
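To make the socket, permissions, and logging tips concrete, here’s a minimal sketch of the nginx side of the wiring. The domain, web root, log paths, and socket path are all assumptions; match them to your own distro’s defaults:

```nginx
# minimal nginx server block for WordPress over a php-fpm socket (sketch)
# worker_processes lives in nginx.conf; keep it at or below your cpu count.
server {
    listen 80;
    server_name example.com;        # assumption: your domain
    root /var/www/blog;             # assumption: your web root
    index index.php;

    # keep logging on while setting up so you can see incoming requests
    access_log /var/log/nginx/blog.access.log;
    error_log  /var/log/nginx/blog.error.log;

    location / {
        # standard WordPress permalink rewrite
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        # SCRIPT_FILENAME must resolve to a file the php-fpm user can read,
        # or you'll hit the "no input file specified" error mentioned above.
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;   # socket, not tcp/ip
    }
}
```

On the php-fpm side, the pool’s `listen`, `user`, and `group` settings need to line up with this: the pool should listen on the same socket path, and its user should match (or be readable by) the user nginx runs as, which is exactly the permissions sync from tip 2.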

I am really happy with the idea of running php as a separate process. Running php as a daemon has many benefits: you have a dedicated process you can monitor and recycle without affecting your web server. Pooling apps allows you to tune them individually. You’re also not tying yourself to a particular web server; php-fpm runs fine with apache. In TCP mode you can even offload php to a separate node. At the very least, you can distinguish php usage from web server usage.
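As a sketch of what that separation buys you, here’s a hypothetical php-fpm pool definition; the pool name, addresses, and numbers are illustrative, not recommendations. In TCP mode, any web server, local or on another node, can hand requests to this pool, and each pool gets its own worker tuning and recycling policy:

```ini
; hypothetical php-fpm pool (values are illustrative)
[blog]
user = www-data
group = www-data

; TCP mode: a web server on another node can fastcgi_pass to this address
listen = 0.0.0.0:9000
listen.allowed_clients = 10.0.0.5   ; assumption: your front-end server's IP

; per-pool worker tuning, independent of the web server
pm = dynamic
pm.max_children = 6
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 500               ; recycle each worker after 500 requests
```

Because the pool is its own daemon, you can restart or resize it without touching nginx, and watch its memory and cpu usage separately from the web server’s.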

So my only question is why would anyone still use apache?