The Trusted Computing Group, formed in 2003, set out to solve this problem. Through a hardware chip on the motherboard, it effectively allows a record company (for example) to send your computer some music and be guaranteed that only "trusted" software can have access to that data. "Trusted" in this context means "trusted by the record company," and for this to work, every bit of software from the bootloader up through the drivers and music-playing application has to be "trusted." The "untrusted" part of your software stack is denied access to the memory and sound hardware that is used by the trusted signal path. The same mechanisms can also result in lots of other unsavory outcomes like remote censorship.
Thankfully this vision of computer media distribution did not catch on in PCs. The GNU project called it "Treacherous Computing" (which might not be the most brilliant marketing, but certainly was on the right track). Modern PCs do have Trusted Computing chips in them, which is what Microsoft's BitLocker encryption scheme uses to verify that the boot path has not been tampered with (i.e. for non-evil purposes). But we're thankfully not in a world where your only option for buying music and movies requires an RIAA-approved kernel and sound-card driver.
I was thinking about Trusted Computing recently because of Ken Thompson's famous talk Reflections on Trusting Trust, whose central thesis is: "You can't trust code that you did not totally create yourself." Thompson was talking primarily about compilers, but the same applies to the environment (OS, shell, etc.) that you use to compile and/or run programs. A trojan in any one of those layers can compromise your entire system: even if you trust the source code you are running, a trojan in the OS or environment can propagate itself and, worse, hide its very existence. This is the essence of what a rootkit does.
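To see why this is so insidious, here is a minimal sketch of Thompson's trick in Python pseudocode (his original concerned a C compiler, and every name here is illustrative, not from the talk). The trojaned compiler recognizes two special inputs -- the login program and the compiler itself -- and injects the backdoor into both, so the malicious logic never has to appear in any source you can read:

```python
# A minimal, runnable sketch of Thompson's self-propagating compiler
# trojan. "Compilation" is modeled as a source-to-source transform so
# the trick is visible; a real attack hides in the binary code generator.

BACKDOOR = '    if password == "letmein": return True  # injected\n'

def evil_compile(source: str) -> str:
    # Trojan 1: backdoor the login program whenever it is compiled.
    if "def check_password" in source:
        source = source.replace(
            "def check_password(password):\n",
            "def check_password(password):\n" + BACKDOOR)
    # Trojan 2: when a clean compiler is being rebuilt, splice this
    # whole trojan into its output, so the backdoor survives
    # recompilation even though it appears in no source tree.
    if "def compile(" in source:
        source += "\n# (trojan re-inserts itself into the new compiler)\n"
    return source  # stand-in for actual code generation

login_src = "def check_password(password):\n    return password == stored\n"
print(evil_compile(login_src))  # the compiled login now accepts "letmein"
```

Inspecting login's source, or even the compiler's source, turns up nothing; only the compiler binary is dirty, and it keeps itself that way.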
The solution to this problem is presented in David A. Wheeler's very interesting thesis Fully Countering Trusting Trust through Diverse Double-Compiling. To vastly over-simplify, this thesis shows that if you have one trusted environment/compiler, you can use it to extend trust to another environment/compiler. The newly-trusted environment can be larger or more complex than the already-trusted environment, so you can use this to extend trust to more components until you have a Trusted Computing Base that provides the services you actually need.
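To make the mechanism concrete, here is a rough sketch of the diverse double-compiling check, assuming deterministic, self-hosting compilers that are invoked as simple commands; the paths and command-line conventions are hypothetical:

```python
# Sketch of Wheeler's diverse double-compiling (DDC) check. Assumes
# deterministic, self-hosting compilers invoked as "compiler src -o out";
# all paths here are hypothetical.
import filecmp
import subprocess

def compile_with(compiler: str, source: str, output: str) -> str:
    subprocess.run([compiler, source, "-o", output], check=True)
    return output

SRC_A = "compiler_a_source"     # claimed source of the compiler under test
BIN_A = "./compiler_a"          # the binary we want to trust
TRUSTED = "./trusted_compiler"  # independent, already-trusted compiler

# Stage 1: build A's source with the trusted compiler.
stage1 = compile_with(TRUSTED, SRC_A, "./stage1")
# Stage 2: use that result to build A's source again.
stage2 = compile_with(stage1, SRC_A, "./stage2")
# Control: let the suspect binary rebuild itself from the same source.
self_built = compile_with(BIN_A, SRC_A, "./self_built")

# With deterministic compilation, a clean BIN_A must produce output
# bit-identical to stage2; a trusting-trust trojan in BIN_A cannot,
# because stage2 was never touched by the suspect binary.
if filecmp.cmp(stage2, self_built, shallow=False):
    print("BIN_A matches its claimed source: trust extended.")
else:
    print("Mismatch: BIN_A does not correspond to its claimed source.")
```

The "diverse" part is what makes this work: the trusted compiler doesn't have to be fast or modern, only honest, because its output is used just once to bootstrap the comparison.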
This idea sounded awfully similar to the evil version of Trusted Computing (which verifies all software up from the bootloader), so I set out to find out what made one good and the other evil. I haven't analyzed it exhaustively (and probably don't have the time to do so), but my intuition is that a non-evil form of Trusted Computing is completely possible, and something the industry should pursue.
Palladium-style trusted computing is evil because it is primarily designed to give a third party (namely a record/movie company) more control over a computer than that computer's owner. But what if this were flipped around: what if it were designed to give the computer's owner more control over that system than any of the software running on it?
Imagine if a TPM (Trusted Platform Module) could guarantee that you're not running a rootkit. Imagine if it could tell you exactly which processes on your system were trusted -- not by some third party, but by you as the system's owner. The user interface for this could be something like:
- When you buy a new computer, the BIOS has a special boot mode that will reset the TPM's private key. It will also generate a new private key for the system's owner and write it to a USB device that can sign random numbers without the key leaving the device. So now the computer's TPM and this USB key mutually trust each other and can set up secure communication channels between them. Unless you mistrust your computer system's hardware, this trust path is secure.
- Next you install your OS from a CD that you trust, without being connected to the network. None of the software is trusted when first installed (it runs fine, just doesn't have the "trusted" bit set according to the TPM). Once it's all installed you put your USB key in and tell the TPM to trust all of the software you just installed.
- Once you reboot, your TPM and Trusted Computing-enabled OS cooperate so that your bootloader, OS, and programs all carry a "trusted" bit that you can check. You can configure your OS to not load any privileged code (like drivers or kernel modules) unless you trust it. If you want to upgrade any software on your system, you have to re-insert your USB device. But the upside is that a rootkit cannot load itself unless the USB key is in and you authorize the rootkit to be trusted. (A toy sketch of this flow follows the list.)
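None of the above maps onto a real TPM or smart-card API, but a toy sketch makes the trust-bit flow concrete. Here SHA-256 hashes stand in for TPM measurements, and an Ed25519 key (via Python's third-party cryptography package) stands in for the owner key on the USB device; every function name is made up:

```python
# Toy sketch of the owner-controlled "trusted bit". SHA-256 digests
# stand in for TPM measurements; the Ed25519 key stands in for the
# owner key that lives on the USB device and never leaves it.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Step 1 (special BIOS boot mode): generate the owner key; the TPM
# keeps only the public half.
owner_key = ed25519.Ed25519PrivateKey.generate()
owner_pub = owner_key.public_key()

# The TPM's table of owner-approved software: measurement -> signature.
trusted: dict[bytes, bytes] = {}

def owner_approves(binary: bytes) -> None:
    """Runs only while the USB key is plugged in: the owner signs the
    measurement of software installed from media they trust."""
    digest = hashlib.sha256(binary).digest()
    trusted[digest] = owner_key.sign(digest)

def may_load_privileged(binary: bytes) -> bool:
    """The OS/TPM check before loading a driver or kernel module:
    privileged code loads only with a valid owner signature, so a
    rootkit cannot mark itself trusted over the network."""
    digest = hashlib.sha256(binary).digest()
    sig = trusted.get(digest)
    if sig is None:
        return False
    try:
        owner_pub.verify(sig, digest)
        return True
    except InvalidSignature:
        return False

driver = b"...driver image bytes..."
owner_approves(driver)                       # once, with the USB key in
assert may_load_privileged(driver)           # later boots: no key needed
assert not may_load_privileged(b"rootkit")   # unsigned code is refused
```

The asymmetry is the whole point: checking the trusted bit needs only the public key baked into the TPM, but granting it needs a physical token that malware can't reach remotely.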
Right now I have a nagging worry that whoever hacked my WordPress blog could possibly have gotten their hands on a private key that would have compromised other systems too (it was encrypted with a good password, but could they have had a keylogger installed on my DreamHost account?). I'll never know for sure that I'm safe unless I completely reinstall the other systems from scratch, but that would be a colossal amount of effort for a worry that is almost certainly baseless. I wish hardware could help me out here and give me confidence that my OS has not been tampered with.