Matt Wilcox

Web Development

Tutorials Mar 06th 2015

Speed up website response times with nginx

The why, when, and how of using nginx to cache a CMS's output.

Prior to worrying about nginx...

Nginx can't do much to help make a slow design and inefficient front-end code feel fast.

A lot of what makes a web page fast or slow is down to design considerations and front end techniques. To that end I've so far implemented the following with my new website:

  • Used a performance conscious design.
  • Kept the core CSS relatively lean (32KB before gzip).
  • Minified the JS and CSS.
  • Used Gzip to compress all appropriate files over the wire.
  • Set appropriate cache headers for all content types.
  • Used SPDY/3 instead of HTTP/1.x.
  • Created image assets optimised via ImageAlpha and ImageOptim.
  • Ensured that JS, CSS, and fonts are loaded asynchronously.

The goal is to minimise the amount of 'stuff' needed on a page, and to stop any of that stuff from blocking page render. I do have a couple of things that are not quite optimal:

  • I load three font files, which is a bit excessive - but I'm willing to pay that price for the design.
  • I load jQuery because I'm not good enough with pure JS yet to ditch it.

I'm not worrying about the number of HTTP requests because I'm using SPDY and will soon switch to HTTP2. I've written about why that makes a difference in another article: HTTP2 for front end developers.

That left only one problem area...

Time To First Byte

Having to run any request through a CMS is inevitably slower than serving a static file.

TTFB is a measure of how long it takes for the server to begin responding to a request by sending data to the client. For static files on this website that number is typically in the 20 to 40 millisecond range, which is essentially imperceptible.

However, when requesting a page that's routed through the CMS, such as the homepage, the Time To First Byte is much larger - and becomes noticeable.

This is because the CMS must generate the HTML of the page being requested. For my homepage it's gathering all the articles that I've written, sorting and filtering them into groups, extracting certain fields, creating pagination, and then spitting it all out as HTML. All of that can take a second or so, depending on the amount of content being manipulated and complexity of the relations between the content types.
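If you want to put numbers against this yourself, curl's timing variables give a quick TTFB reading from the command line (the URLs below are placeholders - swap in one of your own static assets and a CMS-generated page):

# time_starttransfer is effectively the TTFB: the time until the first byte of the response arrives
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/css/main.css
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/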

Using Craft's cache feature

Craft is a great CMS, and it has techniques to help mitigate the performance impact of complex queries - specifically, cache tags. These can be wrapped around expensive bits of template code, and Craft will then store the result of that code in the database; the next time the page is requested, Craft uses the previous result instead of doing all the work again. Using these tags I was able to get the homepage TTFB down to about 0.3 seconds - just using that tag lopped about a second off the TTFB.

That's great, but still not on the same order as fetching a static file. Although the cache tag removes a lot of the computation, Craft still has to run a bunch of database calls to fetch the cached content of the tag.

To be clear: I'm being very fussy by worrying about a 0.3s TTFB, but I want to see how far I can push things on my site...

Using nginx's fastcgi_cache

With this technique, we can essentially skip the CMS entirely for front-end page requests.

Nginx has a built-in way to store the result of a PHP request, so the next time it's needed nginx can serve the stored response directly, rather than have PHP do the work again. This is a bit like Craft's cache tag, only even more efficient.

Things to be aware of

Nginx doesn't provide a way to clear its cache when something in the CMS changes.

That ability is kept for the commercial Nginx Plus product. However, there are two options available to those of us not wanting to pay $1,350 per year for this feature.

Option one is to manually delete the cache when we change something. As the cache is stored as files in a location you specify, you can use SSH or SFTP to delete those files when you make a change in the CMS. That works but is a bit clunky, so you could write a little script that listens on a particular URL and executes a bash script to do that for you.
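For reference, the manual version of option one is just a case of emptying the cache directory over SSH (the path here matches the one we'll create below):

# Delete every cached entry; nginx simply regenerates pages on the next request
sudo rm -rf /etc/nginx-cache/*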

Option two is to not worry about it. Instead set the cache period to something small but useful, like half an hour. That means when you make a change in your CMS it might take up to half an hour to be reflected on the front end of your site. No big deal for my use case, and likely not for most people's blogs either.

Secret option number three is to use a third-party nginx module to manage cache invalidation. I've chosen not to do this: I'm wary of third-party modules, especially ones with little documentation, and given my lack of knowledge in this area I'd rather not go down that route yet.

I'll be going with option two - letting my cache age out over a short period of time.

Setting up fastcgi_cache

The first thing we need to do is decide where we're going to store the cache in the filesystem. That folder also needs to be owned by whichever user is running nginx - typically that's www-data. Create a folder wherever you want it, for example:

mkdir /etc/nginx-cache
chown www-data /etc/nginx-cache
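If you're not sure which user nginx runs as on your server, you can check before setting ownership (this assumes a stock install with the user directive in nginx.conf):

# The 'user' directive names the worker-process user - usually www-data on Debian
grep -E '^\s*user' /etc/nginx/nginx.conf
# Or look at the running worker processes directly
ps aux | grep '[n]ginx: worker'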

Now we need to define a cache key-zone in nginx. This is done inside the http { ... } block, because it needs to be accessible to any of the servers defined later inside server { ... } blocks.

Open your /etc/nginx/nginx.conf file and inside of the http { ... } block add the following:

Setting up a cache key-zone called 'phpcache'

# 'phpcache' zone: cache files live under /etc/nginx-cache, with 100MB of shared memory for keys; entries unused for 60 minutes are dropped
fastcgi_cache_path /etc/nginx-cache levels=1:2 keys_zone=phpcache:100m inactive=60m;
# Build each cache key from the scheme, request method, host, and URI
fastcgi_cache_key "$scheme$request_method$host$request_uri";

Now all we need to do is configure the domain we're interested in to use it. You should have an entry in your /etc/nginx/sites-available/ folder which defines your website, such as mysite.conf. Open that, and inside the server { ... } block add:

set $no_cache 0;

# Don't cache the CMS admin area
location /admin {
  set $no_cache 1;
}

Next, you need to modify the block you have for handling PHP files so that it looks like this:

location ~ [^/]\.php(/|$) {
  fastcgi_cache phpcache; # The name of the cache key-zone to use
  fastcgi_cache_valid 200 30m; # What to cache: 'code 200' responses, for half an hour
  fastcgi_cache_methods GET HEAD; # What to cache: only GET and HEAD requests (not POST)
  add_header X-Fastcgi-Cache $upstream_cache_status; # Lets us see whether the cache was HIT, MISS, or BYPASSED in a browser's Inspector panel
  fastcgi_cache_bypass $no_cache; # Don't pull from the cache if true
  fastcgi_no_cache $no_cache; # Don't save to the cache if true

  # the rest of your existing stuff to handle PHP files here
}

That's it, done. You just need to reload the configuration in nginx (on Debian that's a case of running /etc/init.d/nginx reload).
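Before reloading, it's worth checking that the configuration parses cleanly - nginx has a built-in test flag for that:

# Test the configuration, and only reload if it's OK
sudo nginx -t && sudo /etc/init.d/nginx reload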

My TTFB is now down in the 0.04 second range on any page which has been cached. That's pretty much instant.
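You can also confirm whether a given page came out of the cache without opening a browser, by checking the X-Fastcgi-Cache header we added above (example.com stands in for your own domain):

# Dump just the response headers of a normal GET request
# The first request after the cache empties should report MISS; repeat it and you should see HIT
curl -s -D - -o /dev/null https://example.com/ | grep -i x-fastcgi-cache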

You can learn a lot more about what the various options and parts do at the official documentation. This should be enough to get things working for you though.

Nginx as a reverse proxy

This was the first thing I tried, before realising it wasn't actually right for what I needed - my site just doesn't have enough traffic to warrant a reverse proxy approach.

A reverse proxy can do a number of things, but I was interested in using one just for caching. This is where you put a caching proxy server in front of the web server: the proxy stores cached versions of the whole web server's output, so most requests to your website never reach the web server at all - they get served by the proxy instead, which doesn't have to do any CMS processing. This setup is a lot like fastcgi_cache - except it covers entire sites and all their files, not just the PHP pages.

I realised this wasn't what I needed, but if you're running a larger site with a lot of traffic, an nginx reverse proxy could be perfect for you - and the setup is almost the same as for fastcgi_cache.
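For completeness, here's a minimal sketch of what that looks like - the directives are standard nginx, but the paths, zone name, port, and domain are placeholders you'd adapt to your own setup:

# Inside the http { ... } block: declare a cache zone for proxied responses
proxy_cache_path /etc/nginx-proxy-cache levels=1:2 keys_zone=proxycache:100m inactive=60m;

server {
  listen 80;
  server_name example.com;

  location / {
    proxy_pass http://127.0.0.1:8080; # the backend web server actually running the CMS
    proxy_cache proxycache; # the cache zone declared above
    proxy_cache_valid 200 30m; # cache '200 OK' responses for half an hour
    add_header X-Proxy-Cache $upstream_cache_status; # HIT / MISS / BYPASS, as before
  }
}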