Write-protection at hardware level for security

Quis custodiet ipsos custodes?

Before I begin, I'd like to explain a bit about the term trust as it is used in an information security context. In infosec, trust often has the opposite meaning of what would seem logical: you want less trust. An untrusted component of a system is good, and the more untrusted components, the better. Trust means reliance. If you trust a component, you rely on it to be honest, correctly implemented, and secure. If you do not trust a component, then even its compromise or malice cannot harm you. In a secure computer, the Trusted Computing Base (TCB) is the sum of all trusted code on the system. The smaller the TCB, the better.

This brings us to the question: who watches the watchmen? In infosec, the highest watchmen are the TCB. The fewer there are, the safer the system. While it is impossible to solve this issue completely, one common and effective mitigation is to root trust in hardware, which is much harder to tamper with than software. The hardware verifies a small amount of software, that small piece of software verifies larger software, and so on. This is explained later in this post.
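To make the chain concrete, here is a toy model (the stage names and images are made up, and real chains typically verify signatures or compare against trusted hashes anchored in hardware) in which each stage only hands over control after verifying the next one:

    import hashlib

    # Purely illustrative: each boot stage knows the expected hash of the next
    # stage and halts if it does not match. The root of the chain stands in for
    # something immutable (mask ROM, fused key hash, ...).

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Hypothetical stage images (in reality: BIOS, bootloader, kernel, ...)
    stages = {
        "bios":       b"...BIOS image...",
        "bootloader": b"...bootloader image...",
        "kernel":     b"...kernel image...",
    }

    # Expected hashes, anchored in the (assumed immutable) previous stage.
    expected = {name: sha256(image) for name, image in stages.items()}

    def boot(order):
        for name in order:
            if sha256(stages[name]) != expected[name]:
                raise RuntimeError(f"{name} failed verification, halting boot")
            print(f"{name} verified, handing over control")

    boot(["bios", "bootloader", "kernel"])

    # Tampering with any stage breaks the chain:
    stages["bootloader"] = b"...evil bootloader..."
    try:
        boot(["bios", "bootloader", "kernel"])
    except RuntimeError as e:
        print(e)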

Answering your questions

Is it possible to write-protect BIOS chips at the hardware level, e.g. with a device having a similar form-factor to a BIOS Savior but instead possessing a hardware switch that physically prevents current from reaching the circuitry capable of overwriting the BIOS?

Some older systems required jumpers to be set in order to write to the BIOS. Nowadays, write access is almost always controlled in software. To mitigate this, you would want to use a modern computer that supports BootGuard. BootGuard runs on the CPU before the BIOS even loads and verifies a digital signature over the BIOS to ensure it has not been tampered with. Boot only resumes if the signature is valid.
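Conceptually, BootGuard pins a hash of the OEM's public signing key in one-time-programmable fuses, so even rewriting the flash chip cannot substitute a different signing key. The sketch below models only that idea, not the real ACM code; it assumes the third-party cryptography Python package and uses Ed25519 purely to keep it short (the real scheme uses RSA):

    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # At manufacturing time, the hash of the OEM's public key is burned into
    # one-time-programmable fuses and can never change afterwards.
    oem_key = ed25519.Ed25519PrivateKey.generate()
    oem_pub = oem_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    fused_pubkey_hash = hashlib.sha256(oem_pub).digest()   # immutable

    # The (rewritable) flash chip holds the BIOS image, the public key, and a
    # signature over the image.
    bios_image = b"...BIOS image..."
    flash = {"bios": bios_image, "pubkey": oem_pub, "sig": oem_key.sign(bios_image)}

    def verify_bios(flash: dict) -> None:
        # The public key comes from untrusted flash, so check it against the
        # fused hash before using it.
        if hashlib.sha256(flash["pubkey"]).digest() != fused_pubkey_hash:
            raise RuntimeError("public key does not match fused hash, halting")
        pub = ed25519.Ed25519PublicKey.from_public_bytes(flash["pubkey"])
        try:
            pub.verify(flash["sig"], flash["bios"])
        except InvalidSignature:
            raise RuntimeError("BIOS signature invalid, halting")
        print("BIOS verified, handing control to it")

    verify_bios(flash)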

Similarly, is it possible to write-protect processors at the hardware level, e.g. with a device having a similar form-factor, mutatis mutandis, to that described above for the BIOS, i.e. sitting physically between the CPU socket on the motherboard and the CPU itself, and again possessing a hardware switch that physically prevents current from reaching the circuitry capable of overwriting the CPU's firmware?

CPUs do not have firmware as such. In fact, they have essentially no permanent storage at all. Intel even includes a "statement of volatility" in many of its documents, essentially saying that a second-hand CPU will not contain any personal information or permanent changes from its previous owner. For example, section 1.1.4 of the Xeon E5 v4 datasheet contains the following:

1.1.4   Statement of Volatility (SOV)

        The Intel® Xeon® Processor E5 v4 Product Family does not retain any
        end-user data when powered down and / or the processor is physically
        removed from the socket.

Technically, CPUs do have a small amount of permanent storage in the form of OTP (one-time-programmable) fuses, but these are used for permanently changing a few basic settings, such as whether or not BootGuard is active. They do not contain anything executable.

You are probably thinking of microcode. Microcode is a way for CPUs to modify the behavior of certain instructions by routing them through a microcode table. This is used to fix buggy instructions or even disable them completely. However, microcode updates are not stored persistently in the CPU and can only be loaded at runtime. Once the CPU resets, any loaded update is lost. There are two main ways a CPU can load microcode:

  • The BIOS, which often contains microcode from the time of manufacture.
  • Software (typically the operating system), which loads a newer microcode update at runtime.

Microcode is verified using a signing key known only to Intel. To insert malicious microcode, one would have to be in a position both to get you to load it and to obtain Intel's signing key.
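As an aside, on Linux the second path usually works like this: the distribution ships microcode files under /lib/firmware, and the kernel exposes the currently loaded revision plus a late-reload trigger through sysfs. The paths below are the usual ones on recent kernels but may differ on yours, and early loading via the initramfs is preferred for anything security-relevant:

    # Hedged example for Linux: show the running microcode revision and ask the
    # kernel to late-load whatever is installed under /lib/firmware. Needs root.

    def current_microcode_revision(cpu: int = 0) -> str:
        with open(f"/sys/devices/system/cpu/cpu{cpu}/microcode/version") as f:
            return f.read().strip()

    def reload_microcode() -> None:
        # The CPU itself checks the vendor signature when the update is applied;
        # a tampered blob is simply rejected.
        with open("/sys/devices/system/cpu/microcode/reload", "w") as f:
            f.write("1")

    print("before:", current_microcode_revision())
    reload_microcode()
    print("after: ", current_microcode_revision())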

Are there any server OSes/distros designed to support this sort of configuration out of the box?

Tails is specifically designed to do this. It is an amnesic live system that keeps data only in memory. It uses a union filesystem to fuse two other filesystems: the read-only squashfs present on the USB stick or DVD, and an in-memory tmpfs. Any changes to the union are written to the tmpfs. This provides the illusion of one large, writable filesystem, when in reality one layer never changes and the other exists only in memory.
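The union idea is easy to model: reads fall through to the read-only layer unless the file has been shadowed in the writable layer, and all writes land in the writable layer. A toy sketch (not the actual overlay code Tails uses):

    # Toy model of a union of a read-only lower layer (the squashfs) and a
    # writable upper layer kept in memory (the tmpfs). Not real overlayfs code.

    class UnionFS:
        def __init__(self, lower: dict):
            self.lower = lower       # read-only layer (squashfs on USB/DVD)
            self.upper = {}          # writable layer (tmpfs, lost on shutdown)

        def read(self, path: str) -> bytes:
            if path in self.upper:   # changed files shadow the originals
                return self.upper[path]
            return self.lower[path]

        def write(self, path: str, data: bytes) -> None:
            self.upper[path] = data  # writes never touch the lower layer

    fs = UnionFS({"/etc/hostname": b"amnesia\n"})
    fs.write("/etc/hostname", b"changed\n")
    print(fs.read("/etc/hostname"))  # b'changed\n' -- but only until reboot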

Are there any information resources (books, websites) dedicated to deploying and maintaining servers in this fashion?

You may want to look into diskless servers. These are common in clustering and boot over the network. Any data they hold, unless saved back over the network, is lost on reboot.

Other hardware that can be modified

The list of hardware you provided is not exhaustive. There is plenty more on a modern motherboard that can be modified, for example:

  • Both HDDs and SSDs have writable firmware powering their microcontrollers.
  • Option ROMs are present in nearly every PCI device and are executed by the BIOS.
  • GPU firmware usually cannot be rewritten from ordinary software, but it is not write-protected either.

There is more, and unless you have a way to verify your boot process, trying to write-protect everything will be a cat-and-mouse game.

Measured boot

With some effort, using software alone (on modern hardware), you may still be able to verify the integrity of your hardware, sharply reducing the number of components you have to trust. It is even possible to do this remotely for a computer you do not have physical access to! This is called remote attestation. Usually, this is done using the TPM.

The TPM is a small hardware component (though often emulated in firmware on newer processors) which is designed to be tamper-resistant and secure. It has minimal functionality and can only do a few things. Primarily, it is used to combine hashes of various components of the system and to release (unseal) secrets only when those hashes match known values. Various pieces of software send measurements of parts of the system to the TPM to be hashed, so the TPM's hashes will change if any component of the system changes. This all starts with the CRTM (Core Root of Trust for Measurement), a usually read-only component of the BIOS that sends a hash of the BIOS itself to the TPM. If the CRTM and TPM are trusted, the rest of the system does not need to be.
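The "combining" is done by extending: a PCR can never be written directly, only folded together with a new measurement, so its final value depends on every measurement and on the order in which they arrived. A minimal sketch of the extend operation (shown with SHA-256; real TPMs support several hash banks):

    import hashlib

    # A PCR starts at all zeroes and can only be *extended*:
    #     PCR_new = H(PCR_old || H(measured data))
    # There is no operation to set or roll back a PCR, so the final value is a
    # fingerprint of everything measured and of the order it was measured in.

    def extend(pcr: bytes, data: bytes) -> bytes:
        return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

    pcr0 = bytes(32)
    for component in (b"CRTM", b"BIOS", b"option ROMs"):
        pcr0 = extend(pcr0, component)
    print(pcr0.hex())

    # Change any single component and the resulting value differs completely.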

A TPM measures several different components of the system and stores the results in PCRs (Platform Configuration Registers). Each PCR has a different purpose and covers a different part of the system (taken from another post):

PCR 0 to 3 for the BIOS, ROMs...
PCR 4 - MBR information and stage1
PCR 8 - bootloader information stage2 part1
PCR 9 - bootloader information stage2 part2
PCR 12 - all command-line arguments from menu.lst and those entered in the shell
PCR 13 - all files checked via the checkfile-routine
PCR 14 - all files which are actually loaded (e.g., Linux kernel, initramfs, modules...)
PCR 15 to 23 are not used for SRTM
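If you want to inspect these values yourself, recent Linux kernels expose them through sysfs: one file per PCR for a TPM 2.0, a single pcrs file for a TPM 1.2. The paths below are the common ones but may vary with your kernel version (tpm2_pcrread from tpm2-tools shows the same information):

    # Hedged example: dump PCR values on Linux; paths may differ by kernel.
    from pathlib import Path

    tpm = Path("/sys/class/tpm/tpm0")

    def read_pcrs() -> None:
        bank = tpm / "pcr-sha256"
        if bank.is_dir():                                   # TPM 2.0
            for f in sorted(bank.iterdir(), key=lambda p: int(p.name)):
                print(f"PCR {f.name:>2}: {f.read_text().strip()}")
        else:                                               # TPM 1.2
            print((tpm / "pcrs").read_text())

    read_pcrs()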

An important thing to remember is that the TPM cannot act on any detected tampering. In fact, it is fundamentally unable to, sitting passively on the LPC bus. All it can do is verify that the system is in a known-good state, and refuse to unseal its secrets otherwise. The sealed data could include a disk encryption key (ensuring the system will not boot if it has been tampered with), or a secret known only to you and not guessable (and thus not spoofable) by attackers, such as a string (which is how Anti-Evil Maid from ITL works).
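Sealing itself happens entirely inside the TPM and the secret never leaves the chip; the toy below only mimics the policy being enforced, namely that a secret bound to a set of expected PCR values is released only when the live PCRs match:

    import hashlib, hmac, os

    # Toy model of PCR-bound sealing. A real TPM keeps the secret inside the
    # chip and enforces the PCR policy itself; this only illustrates the policy.

    def pcr_digest(pcrs: dict) -> bytes:
        h = hashlib.sha256()
        for index in sorted(pcrs):
            h.update(bytes([index]) + pcrs[index])
        return h.digest()

    def seal(secret: bytes, expected_pcrs: dict) -> dict:
        return {"secret": secret, "policy": pcr_digest(expected_pcrs)}

    def unseal(blob: dict, current_pcrs: dict) -> bytes:
        if not hmac.compare_digest(blob["policy"], pcr_digest(current_pcrs)):
            raise PermissionError("PCRs do not match -- refusing to unseal")
        return blob["secret"]

    good_state = {0: os.urandom(32), 4: os.urandom(32)}  # stand-ins for PCRs
    blob = seal(b"disk encryption key", good_state)

    print(unseal(blob, good_state))          # hardware unmodified: key released
    tampered = {**good_state, 4: os.urandom(32)}
    try:
        unseal(blob, tampered)               # bootloader changed: refused
    except PermissionError as e:
        print(e)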

The end result of this process is that you have reduced the TCB for your entire system down to the small, read-only CRTM and the secure TPM chip itself.

Resources / pointers

Since you asked for some resources, I would look into the following topics for improving the security (reducing the trusted computing base) of a COTS workstation:

  • Measured boot
  • Remote attestation
  • Trusted Platform Module (TPM)
  • Static Root-of-Trust for Measurement (SRTM)
  • Dynamic Root-of-Trust for Measurement (DRTM)

Some other questions that may be relevant and could help you understand this process:

  • Anti-Evil Maid - Invisible Things Lab
  • Trusted Platform Module - Wikipedia
  • How does the TPM perform integrity measurements on a system?
  • Security of TPM 1.2 for providing tamper-evidence against firmware modification
  • Right way to use the TPM for full disk encryption
  • What prevents the Intel TXT boot loader from being maliciously altered?
  • For remotely unlocking LUKS volumes via SSH, how can I verify integrity before sending passphrase?

TL;DR

Write protection is a game of cat-and-mouse. The only practical solution is to use measured boot.


BIOS

Most memory chips I've worked with have a write-enable (WE) or R/W pin which selects the write mode. Physically tying that pin to the appropriate logic level should do the trick.

Write-protected USB drives

I'm a bit suspicious about this one. I've implemented a microcontroller<->SD card interface, and the "write-protect" bit is handled completely in software, so you have to trust some part of your computer not to write there. I do not know whether USB flash drives are the same in this regard, but it is something to keep in mind: a hardware switch might still be enforced only in software.