securityheaders.io was under a period of really high load earlier and, whilst I looked into it and sorted out the root cause, I wanted to throw some more cloud behind the site to bolster it. That introduced an interesting problem that I wanted to solve quickly.


DNS Round-Robin

I wanted to spin up another server to split the load off the current, single instance that securityheaders.io runs on. The plan was simple: create a server, add its IP address to DNS. DNS round-robin allows you to have 2 (or more) IP addresses for the same domain, and they are returned in a rotating order so that clients spread themselves across them. The idea is to end up with fairly basic load balancing and ~50% of the traffic on each server, which solves my load issue. The problem is that it potentially introduced another issue for me: renewing certificates.
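To make that concrete, the zone ends up with two A records for the same name. As a rough sketch, using the two server IPs that appear in the Nginx configs later in this post (the TTL is just illustrative):

    ; two A records for one name - resolvers hand them back in rotating order
    securityheaders.io.    300    IN    A    162.243.159.108
    securityheaders.io.    300    IN    A    192.241.216.219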


Let's Encrypt

I use Let's Encrypt for my certificates and they do their DV challenge as a simple text file you host at a special path. This works fine when you only have a single server, but now I have two. What if one of them requests a certificate and hosts the challenge file, and then Let's Encrypt resolves my domain to the IP of the other server, which doesn't have the challenge response file on it? The certificate request would fail, so we need to solve this. Fortunately it was quite easy to do with just Nginx.
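For context, the challenge boils down to Let's Encrypt fetching a token file over plain HTTP and checking its contents. Roughly, with placeholders rather than real values:

    GET http://securityheaders.io/.well-known/acme-challenge/<token>
    -> expected response body: <token>.<account-key-thumbprint>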


Nginx Named Locations

In Nginx we typically define a location block with a path to match. Here is my location block for serving the ACME challenge normally:

    location /.well-known/acme-challenge/ {
        alias /home/acme/challenges/;
        try_files $uri =404;
    }

The path will match /.well-known/acme-challenge/, I've aliased it onto the file system folder where the challenge file is written, and try_files will check for the existence of the file, returning a 404 if it isn't there. This all works perfectly well until the file might be on another server, and that's where named locations come in. The try_files directive can take multiple arguments and Nginx will work through them in order, using the last one as a fallback. We can now swap that final fallback for a named location:

    location /.well-known/acme-challenge/ {
        alias /home/acme/challenges/;
        try_files $uri @proxy;
    }

This tells Nginx to look locally for the file and, if it isn't found, pass the request to @proxy instead. We now need to define that named location:

    location @proxy {
        proxy_pass http://162.243.159.108:8080;
    }
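Both servers need to be able to renew their certificates, so I'd assume the other box carries the mirror image of this block, pointing back at its peer. Only one direction is shown in this post, so treat this as a sketch:

    location @proxy {
        # hypothetical mirror on the other server, pointing back the other way
        proxy_pass http://192.241.216.219:8080;
    }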

In this location I'm now using the proxy_pass directive to pass the request over to the other server. Each server listens on its own IP on port 8080 specifically for ACME challenges that have been passed over like this:

    server {
        listen 192.241.216.219:8080;
        server_name 192.241.216.219;

        location / {
            return 301 https://securityheaders.io$request_uri;
        }

        location /.well-known/acme-challenge/ {
            alias /home/acme/challenges/;
            try_files $uri =404;
        }
    }

The second server is configured to listen on its IP and either answer ACME challenges or redirect the traffic back to the securityheaders.io domain. I also added a rule to UFW on each server to only allow the other server to connect on port 8080, based on their IP addresses.
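For the firewall side, a UFW rule along these lines on each box does the trick; shown here as it would look on the server at 192.241.216.219, letting only its peer in on 8080:

    # allow only the other server to reach the challenge listener on 8080
    sudo ufw allow from 162.243.159.108 to any port 8080 proto tcp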


Better ways

The best way to have solved this would probably be to use the DNS challenge (DNS-01) instead of HTTP. Let's Encrypt added support for that a while ago now...
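The DNS challenge proves control of the domain by publishing a TXT record instead of hosting a file, so it doesn't matter which server Let's Encrypt happens to hit. Roughly, with a placeholder for the digest value the client gives you:

    _acme-challenge.securityheaders.io.    300    IN    TXT    "<token-digest>"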


But my chosen client, acme_tiny, doesn't support that yet and I just wanted a quick and simple solution. This works perfectly well and is fine when you only have a couple of servers, but of course it won't scale very well at all. Still, if you need a quick and hacky solution, here it is.