Before I started this blog, I had started a few others at my other domain (now moribund). Despite repeated attempts, I never could resign myself to doing systems administration for a web server that executed dynamic code, like that which powers WordPress or Drupal; I’d install such a framework, begin locking the site down, realize that I’d spent a lot of time reassuring myself that the site was secure without believing it for a second, then delete the framework and revert the frontpage to an
index.html rather like what’s present there now. Particularly ambitious iterations would get a post or two published, now long vanished, before this cycle completed.
I wanted my site either to be accessible for the sole purpose of reading my content, or not to respond to incoming requests at all; there should be no possibility of it serving my content while also mining Bitcoin for someone else, for example, or serving my content alongside a whole bunch of malware. Best of all would be to get all of that without a maintenance-hungry layer of helper applications and kernel code, most of which is irrelevant to my server’s only goal - delivering my dumb thoughts via HTML and CSS to hapless browsers.
This may sound paranoid (not to mention self-important) to readers who haven’t ever read the access or error logs for a publicly accessible webserver, and are under the impression that all attacks are launched by humans and targeted at a specific site. In fact, the vast majority of attacks are automated from stem to stern - a crawler automatically discovers your site, scans it for vulnerabilities, finds an automatically exploitable problem, exploits it, deposits a payload, and moves to the next target. Blog frameworks get a lot of attention from people writing tools for finding new targets. Here’s an excerpt from my access logs at the old domain from this month, reflecting a few automated attempts to discover exploitable software running there.
```
184.108.40.206 - - [04/Aug/2014:17:02:48 -0400] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 - "-" "-"
220.127.116.11 - - [06/Aug/2014:04:20:02 -0400] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 - "-" "-"
18.104.22.168 - - [07/Aug/2014:01:08:42 -0400] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 - "-" "-"
22.214.171.124 - - [07/Aug/2014:06:44:50 -0400] "GET /CFIDE/administrator/ HTTP/1.1" 404 - "-" "-"
126.96.36.199 - - [07/Aug/2014:11:23:52 -0400] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 - "-" "-"
188.8.131.52 - - [07/Aug/2014:20:02:51 -0400] "GET /CFIDE/administrator/ HTTP/1.1" 404 - "-" "-"
```
In short, it doesn’t matter who you are; whether or not anyone’s reading your blog, your blog is a target.
In late 2013, I decided to do something with the impulse-purchased
somerandomidiot.com, and rather than going on the dynamic blog framework roller-coaster ride again, I used a static site generator. This site is represented completely by content that doesn’t need to be executed by the webserver when a page is loaded. Moreover, there is no element of the site that needs to read, parse, or act according to the semantics of user-generated input. Only the webserver itself, which has a much more clearly-defined and constrained set of potential inputs, is exposed to outside requests.
This site’s use of a static site generator, rather than a dynamic framework like WordPress, is entirely responsible for this site’s existence through March of this year. I was still responsible for something that opened a potential attack vector to the entire Internet, but at least it wasn’t the most glaringly obvious target around; reducing the attack surface to that of a static site let me sleep at night, albeit fitfully.
Before April of 2014, this blog was served from a fairly conventional Ubuntu Linux remotely-hosted virtual private server running
nginx, a common webserver more minimal than the venerable
apache. In order to submit new content to the blog and apply security updates to its software components, I had to be able to access it over some control interface;
ssh fulfills both requirements, but not without a price.
ssh is one of the most frequently-attacked services on the Internet. Having a server with a publicly-accessible
ssh port guarantees that bots will be attempting to guess a valid username and password at least once a day. The risk of such an attack succeeding is fairly easy to mitigate with the most common security advice on the planet or by having your computer do something smarter, but the
ssh server I was using is not itself invulnerable. I thought about this risk a lot when logging into my server and took a few ameliorative measures, but largely I accepted the risk because no alternatives seemed sufficiently superior.
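(The "something smarter" above amounts to refusing password guesses entirely and authenticating with keys. A minimal sketch of the relevant OpenSSH server directives - assuming a key is already installed, or you will lock yourself out:)

```
# /etc/ssh/sshd_config - make password-guessing bots irrelevant
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```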
In summary, in order to serve content from my blog, here’s what I needed to configure and manage, and why.
- compilation: an environment that can build static pages representing this blog
- ssh: updates and configuration
- nginx: serving web pages
- Ubuntu LTS: Linux kernel and system dependencies
- VPS provider: physical hardware and network connections
Compilation can be done locally (i.e. not on the server that hosts the blog).
ssh’s security risk can be mitigated, but not removed.
nginx, Ubuntu, and the VPS are required and must run on the blog’s server.
When I discovered Mirage in April of 2014, I was immediately excited about the idea of running a web server without having to maintain or configure an underlying OS. Moreover, I wouldn’t have to leave any kind of control facility open to the Internet - my workflow for deploying a new unikernel was the control facility. Even if my site was hacked and got defaced – and in the worst case, was hacked and defaced in some way that I didn’t notice, but harmed my readers – it wouldn’t stay hacked; each time I updated the blog, I would be deploying an entirely freshly-generated unikernel, so post-install changes would be overwritten.
In essence, running my site from a unikernel replaces the
ssh control service with something much more difficult to exploit, which is invisible to automated scanners - one might surmise that the machine that
somerandomidiot.com points to is an Amazon EC2 instance from its public IP address, but that doesn’t give an attacker any useful information on how to go about attacking it. In order to subvert the control I have over the web server at
somerandomidiot.com, an attacker would have to first gain my Amazon credentials, and at that point I can’t imagine any sane attacker doing anything besides creating a whole bunch of mining instances on my dime and not worrying about my silly website.
I also don’t have to worry about attackers who’ve managed to compromise my blog using it to launch attacks on other sites. Since the unikernel never needs software updates – a new unikernel completely supplants the old one – it never needs to initiate outgoing connections. Ever. I can define network firewall rules that prevent my unikernel from initiating connections to the outside world, so that even if the box is completely owned and tries to start an outgoing connection, the firewall will disallow it. (This would’ve been possible with a Linux box, but not without manually allowing outbound access for system updates.)
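On a Linux host, such an egress policy could be sketched with iptables rules like the following (a hypothetical ruleset, assuming the site is served on port 80; an EC2 security group can express the same stateful policy):

```shell
# default-deny for any connection this host tries to initiate
iptables -P OUTPUT DROP
# but allow replies to connections that arrived from outside
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# accept inbound HTTP
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```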
For my Mirage blog, here’s what I need to configure and manage:
- compilation: a local environment that can build the octopress blog and Mirage unikernels
- deployment: a process to deploy a unikernel to a public cloud and keep somerandomidiot.com pointing to the right place
- VPS provider: physical hardware and network connections
Crucially, compilation and deployment are separable from the server that runs my blog. I can (and do) assemble a unikernel on my own laptop and send it to a non-publicly-accessible host for deployment. The only thing the unikernel version of
somerandomidiot.com knows how to do is serve static blog content.
I won’t go so far as to say that this site is unhackable now. Just a month ago, I found a denial of service that would crash this blog with a single packet. Mirage is still under active development - there are surely errors we have yet to find (and for that matter, errors we have yet to make). But once an error is found and patched, users need only do what they are already used to doing - build a new unikernel, test it, and use it to replace the old unikernel. There are no harrowing, high-stakes in-place upgrades of the webserver that can render a site both inoperable and unrecoverable. If a vulnerability is discovered tomorrow and a patch is available on Sunday, my steps to ameliorate the problem are just
opam upgrade, then the same steps I take whenever I publish a new post.
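Concretely, that publish-a-post workflow looks something like this sketch (the mirage tool’s subcommands and targets have varied across releases, and the upload step depends on the cloud provider, so take the exact invocations as illustrative):

```shell
opam upgrade            # pick up the patched libraries
mirage configure --xen  # regenerate the build for the Xen/EC2 target
make                    # build a fresh unikernel image
# ...then upload the new image and point the instance/DNS at it
```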
This blog is brought to you, in more ways than one, by Mirage.