Dan Bernstein, the famous security expert and author of qmail, just published a paper called "Some thoughts on security after ten years of qmail 1.0". The paper reflects on ten years of qmail, which has a very impressive security record, and explains Dan's philosophy behind writing secure software.

Most of the content is what you would expect, but I found one of Dan’s conclusions quite surprising. He argues against the principle of least privilege, going so far as to call it “fundamentally wrong.”

I have become convinced that this “principle of least privilege” is fundamentally wrong. Minimizing privilege might reduce the damage done by some security holes but almost never fixes the holes. Minimizing privilege is not the same as minimizing the amount of trusted code, does not have the same benefits as minimizing the amount of trusted code, and does not move us any closer to a secure computer system.

To understand DJB’s argument here, you have to back up a bit and understand what he means by “trusted code.” In his view, code falls into only two buckets: trusted code and untrusted code. It is a binary thing – there is nothing in between. And here is his definition of untrusted code:

We can architect computer systems to place most of the code into untrusted prisons. “Untrusted” means that code in these prisons – no matter what the code does, no matter how badly it behaves, no matter how many bugs it has – cannot violate the user’s security requirements.

This is an incredibly restrictive definition. For code to qualify as “untrusted” in this view, a bug that gives an attacker full control over the compromised code must have no security impact on the system as a whole. In practical terms, this means that the code must satisfy two criteria:

  • It must not have direct access to any resources that could possibly be sensitive from a security perspective. For example, it must not have any filesystem access or network access. Essentially, it must be nothing more than a data-processing node that takes input, performs some transformation, and produces output.
  • Its output must not be any more trusted than its input. For example, you can’t count on untrusted code to sanitize your input in any way, because bugs in the code could prevent it from being sanitized properly.

So what does fit into this category? DJB’s example is the address-extracting code in email software – the part of the code that takes an email header as input and spits out an email address as output. If you run this code inside an interpreter that properly sandboxes it from accessing anything other than its input, then it qualifies as untrusted code, according to DJB. Even if you find a way to exploit a bug in the program by sending it malformed data, all you can do is change the output email address. But you already had the means of controlling that, since you supplied the input. Ergo, the code is untrusted.
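To make this concrete, here is a minimal sketch in Python of what such a data-processing node might look like. The function and the simplistic regex are my own illustration, not qmail’s actual parser; the point is only that the code touches nothing but its input string.

```python
import re

# A naive address extractor: pure data transformation, no I/O.
# It sees nothing but its input and produces nothing but its output.
ANGLE_ADDR = re.compile(r"<([^<>]+)>")

def extract_address(header_value: str) -> str:
    """Pull an email address out of a From:-style header value.

    Worst case, a bug here yields a wrong address, but the sender
    already controlled that, since the sender supplied the header.
    """
    match = ANGLE_ADDR.search(header_value)
    if match:
        return match.group(1).strip()
    # No angle-bracket form: treat the whole value as the address.
    return header_value.strip()

print(extract_address('"D. J. Bernstein" <djb@cr.yp.to>'))  # djb@cr.yp.to
```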

The paper claims that “we can architect computer systems to place most of the code into untrusted prisons” (emphasis mine). I find this claim hard to believe. How much real-world code can you really strip of all I/O? How much of a web application can really be written in a sandbox that has no access to your database (or whatever tier sits between you and the database)? How much of a desktop application can run in a context where no filesystem access is allowed? How much of an operating system can run without any access to hardware? Even more significantly, for how many of the components in these systems is it acceptable to say that we don’t trust their output?

Setting that aside, is it really the case that the principle of least privilege is useless? Is a binary trusted/untrusted designation really the only distinction that matters? A surprising conclusion of that argument is that having non-privileged (e.g. non-root) user accounts offers no significant advantage over running everything as root. We all know that non-root accounts aren’t a panacea, but taking the argument this far seems absurd. An exploit that gives an attacker control over a single user account is not as bad as a full root exploit, end of story. It is damage control; it limits the scope of the impact.
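For a sense of what that damage control looks like in code, here is a rough sketch of the classic Unix privilege-dropping pattern (in Python; the “maild” account name is hypothetical): do the one operation that genuinely needs root, then permanently shed root before touching any untrusted input.

```python
import os
import pwd
import socket

def drop_privileges(username: str) -> None:
    """Irreversibly switch from root to an unprivileged account."""
    entry = pwd.getpwnam(username)
    os.setgroups([])           # shed root's supplementary groups
    os.setgid(entry.pw_gid)    # group first: after setuid() it's too late
    os.setuid(entry.pw_uid)    # from here on, root is gone for good

# Binding a port below 1024 is the only step that needs root.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 25))
sock.listen(5)

drop_privileges("maild")
# An exploit in everything that follows yields the "maild" account,
# not root: not a fixed hole, but a much smaller blast radius.
```

This is, incidentally, how qmail itself is structured: its components run under several dedicated user accounts, so compromising one of them does not hand over the rest of the system.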

DJB is brilliant and has security credentials far above my own, but in this case I think he is overreacting. It is true that the principle of least privilege isn’t a way to prevent security holes from happening, but that doesn’t mean there isn’t merit to having multiple layers of defense. Not all code can be written to satisfy his definition of untrusted code, but that doesn’t mean that all of the rest of the code needs to be completely trusted with root-level access. There is a middle ground.