How is being able to break into any Linux machine through grub2 secure?

How is the ability of a person who gained physical access to your computer to get root [using Grub/Bash] in any way secure?

Because if Linux decided to start doing that, hackers would just exploit other security holes. The first rule of security is that if I have physical access to your system, it's game over. I've won.

Plus, imagine your X server broke and you don't have a GUI anymore. You need to boot into a recovery console to fix things, but you can't because that's insecure. In this case, you're left with a totally broken system but hey, at least it's "secure!"

But Kaz, how is this possible? I set a password on my Grub so that you can't change my init to Bash!

Oh, you did, did you? Interesting, because this looks like your photo album. GRUB doesn't have any inherent security factor at all. It's just a bootloader, not a step in some secure boot and authentication chain. The "password" you've set up is, in fact, pretty darn easy to bypass.

That, and what sysadmin doesn't carry a boot drive on them for emergencies?

But how?! You don't know my password (which is totally not P@ssw0rd, btw).

Yeah, but that doesn't stop me from opening your computer and pulling out your hard drive. From there, it's a couple of simple steps to mount your drive on my computer, giving me full access to everything on your system. This also has the awesome benefit of bypassing your BIOS password. That, or I could have just reset your CMOS. Either/or.
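The pulled-drive step really is that short. From my machine, with your (unencrypted) disk attached, it's roughly the following. The device and partition names here are assumptions; yours will vary:

```shell
# Find your disk among my attached devices (names below are placeholders)
lsblk

# Mount your root partition on my system; no password needed anywhere
sudo mount /dev/sdb2 /mnt

# Browse everything
ls /mnt/home
sudo cat /mnt/etc/shadow   # your password hashes, ready for offline cracking
```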

So... how do I not let you get access to my data?

Simple. Keep your computer away from me. If I can touch it, access a keyboard, insert my own flash drives, or take it apart, I can win.

So, can I just like put my computer in a datacenter or something? Those are pretty secure, right?

Yeah, they are. But, you're forgetting that humans are hackable too, and given enough time and preparation, I could probably get into that datacenter and siphon all of that sweet, sweet data off your computer. But I digress. We're dealing with real solutions here.

Okay, so you called my bluff. I can't put it in a datacenter. Can I just encrypt my home folder or something?

Sure, you can! It's your computer! Will it help stop me? Not in the slightest. I can just replace something important, like /usr/bin/firefox with my own malicious program. Next time you open Firefox, all of your secret data is siphoned off to some secret server somewhere secret and you won't even know. Or, if I have frequent access to your machine, I can just set up your home folder to be copied to /usr/share/nonsecrets/home/ or any similar (non-encrypted) location.
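Home-folder encryption does nothing for the unencrypted system binaries around it. One partial mitigation (it only detects tampering after the fact, and only if the baseline lives somewhere I can't touch, like offline media) is keeping checksums of critical binaries. A minimal, self-contained sketch, where temporary directories stand in for your system and a trusted USB stick:

```shell
#!/bin/sh
# Sketch: detect tampering of a binary by checking it against a trusted baseline.
# $work stands in for your system; $trusted stands in for an offline USB stick.

work=$(mktemp -d)
trusted=$(mktemp -d)

echo 'original program' > "$work/firefox"

# 1. Record a baseline checksum on the trusted media while the system is clean
( cd "$work" && sha256sum firefox ) > "$trusted/baseline.sha256"

# 2. Routine check: passes while the binary is untouched
( cd "$work" && sha256sum -c "$trusted/baseline.sha256" )   # prints "firefox: OK"

# 3. An attacker with physical access swaps the binary...
echo 'malicious program' > "$work/firefox"

# 4. ...and the next check catches it
if ( cd "$work" && sha256sum -c "$trusted/baseline.sha256" >/dev/null 2>&1 ); then
    echo "no tampering detected"
else
    echo "TAMPERED: firefox does not match the baseline"
fi
```

Of course, if I can also reach the baseline or the checker itself, this buys you nothing, which is why "keep it offline" is the whole point.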

Okay, what about full disk encryption?

That's... actually pretty good. However, it's not perfect yet! I can always perform a Cold Boot Attack using my trusty can of compressed air. Or, I can just plug a hardware keylogger into your computer. One's obviously easier than the other, but the method doesn't really matter.

In the vast majority of cases, this is a good stopping place. Maybe pair it with TPM (discussed below), and you're golden. Unless you've angered a three-letter agency or a very motivated hacker, nobody's going to go through the effort required past this stage.
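Full-disk encryption on Linux usually means LUKS, managed with cryptsetup. A minimal sketch of encrypting a spare partition follows; /dev/sdX2 is a placeholder, and luksFormat destroys whatever is on the target, so don't point it at a real disk:

```shell
# WARNING: luksFormat wipes the target. /dev/sdX2 is a placeholder device.
sudo cryptsetup luksFormat /dev/sdX2        # prompts for your passphrase
sudo cryptsetup open /dev/sdX2 securedata   # unlock as /dev/mapper/securedata
sudo mkfs.ext4 /dev/mapper/securedata       # create a filesystem inside
sudo mount /dev/mapper/securedata /mnt      # use it like any other disk

# When done:
sudo umount /mnt
sudo cryptsetup close securedata
```

For whole-system encryption, the installer's "encrypt the entire disk" option does the equivalent for your root partition and adds the boot-time passphrase prompt.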

Of course, I can still get you to install some malware/backdoors by offering you a PPA or similar, but this gets into the very murky area of user trust.

So... how are iPhones so secure? Even with physical access, there's not much you can do.

Well, yes and no. I mean, if I was motivated enough, I could read the flash chip and get everything I need. But, iPhones are fundamentally different inasmuch as they're a fully locked down platform from the very start of the bootup process. This, however, causes you to sacrifice usability and the ability to recover from catastrophic failures. GRUB (except when very specifically designed) is not meant to be a chain in a security system. In fact, most Linux systems have their security chains start post-boot, so after GRUB has finished doing its thing.

iPhones additionally have cryptographic signature enforcement (also discussed below), which makes it very hard for malware to sneak onto your phone.

But what about TPM/SmartCards/[insert crypto tech here]?

Well, now that you're bringing physical security into the equation, things become more complicated still. But this isn't really a solution, because most TPMs are relatively weak and the encryption doesn't take place on-chip. If your TPM is (somehow) strong enough that it does the encryption on the chip itself (some very fancy hard drives have something like this), the key won't ever be revealed, and things like cold-boot attacks become impossible. However, the keys (or the raw data) may still travel over the system bus, meaning they can be intercepted.

Even so, my hardware keylogger can still get your password, and I can easily load some malware onto your machine a la the Firefox exploit I mentioned earlier. All I need is for you to leave your house/computer for maybe an hour.

Now, if you take your TPM/smartcard/whatever with you, and all the encryption is actually done on the chip (meaning your key isn't stored in RAM at all), then it's practically impossible for me to get in at all. This assumes, of course, that there isn't any known security vulnerability I can exploit and that you follow good security practices.

But what if I have some form of cryptographic/digital signature enforcement on all of my programs to make sure they're legit?

As demonstrated by various smartphone companies, this is a very good way of dealing with security. You've now nullified my ability to inject some code onto your machine to do nefarious things. Effectively, you've disabled my ability to retain persistent access to your machine remotely.

However, this still isn't a perfect method! Digital signature enforcement won't stop a hardware keylogger, for one. It also needs to be completely bug-free, meaning there's no way I can find an exploit that allows me to load my own certificate into your machine's certificate store. Furthermore, this means every executable on your system needs to be signed. Unless you want to manually go through and do all of that, it's going to be very hard to find Apt packages and the like that have digital signatures on everything. In a similar vein, this blocks legitimate uses for unsigned executables, namely recovery. What if you break something important, and you don't have the (signed) executable to fix it?

Either way, the effort to do this on Linux has been all but abandoned and no longer works for new kernels, so you'd need to create your own.

So, it's impossible to keep you out of my computer?

Effectively, yes, sorry. If I have physical access and enough motivation, it is always possible to get into a system. No exceptions.

In reality, though, most evil people won't try to go this far just for some cat pictures. Typically, just full-disk encryption (or even just running Linux!) is enough to deter most script kiddies from having their two seconds of fame.

TL;DR: Just don't let people you don't trust near your computer and use LUKS. That's typically good enough.


If you want it tied down, use a password. From the link:

GRUB 2 Password Protection Notes

GRUB 2 can establish password requirements on:

  • All menuentries
  • Specific menuentries
  • Specific users: For example, user "Jane" can access Ubuntu but not the Windows recovery mode, which is only accessible by "John", the superuser.

  • The administrator must enable password protection manually by editing the GRUB 2 system files.

  • Users and passwords should be identified in the /etc/grub.d/00_header or another GRUB 2 script file.

  • Unless universal protection of all menuentries is desired, the specific entries must be identified:

    • Manually by editing the GRUB 2 /etc/grub.d/ scripts such as 10_linux and 30_os-prober
    • Manually by editing a custom configuration file created by the user.

    • Either of the above methods enables GRUB 2 to automatically add the password requirement to the configuration file (grub.cfg) whenever update-grub is executed.

    • Manually by editing /boot/grub/grub.cfg. Edits to this file will be removed when update-grub is run and password protection will be lost.

  • If any form of GRUB 2 password protection is enabled, the superuser's name and password are required to gain access to the GRUB 2 command line and menu-editing modes.

  • The username and/or password do not have to be the same as the Ubuntu logon name/password.
  • Unless GRUB 2's password encryption feature is used, the password is stored as plain text in a readable file. See the Password Encryption section for guidance on using this feature.
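As a concrete sketch of the encrypted-password variant on a stock Ubuntu install: generate a PBKDF2 hash, declare the superuser in a GRUB script, and regenerate grub.cfg. The username "john" and the choice of 40_custom are just examples:

```shell
# 1. Generate a PBKDF2 hash of your chosen password (interactive prompt)
grub-mkpasswd-pbkdf2
# -> Your PBKDF2 is grub.pbkdf2.sha512.10000.<long hash>

# 2. Declare the superuser in a GRUB script, e.g. /etc/grub.d/40_custom:
#      set superusers="john"
#      password_pbkdf2 john grub.pbkdf2.sha512.10000.<long hash>

# 3. Regenerate grub.cfg so the protection takes effect
sudo update-grub
```

With this in place, the hash (not the plain-text password) is what ends up in grub.cfg.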

By default(!), usability trumps security in this case. If you cannot trust the people around you, keep the machine with you at all times. People who need more security tend to encrypt their whole system, which makes a boot-time password mandatory anyway.


Your intentional hack starts with this:

  1. When the GRUB 2 menu opens, press 'e' to edit the Linux start options
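For context, the edit being protected against is the classic init=/bin/bash trick. The paths shown are typical defaults and will differ per machine:

```shell
# In the GRUB editor, append init=/bin/bash to the line starting with 'linux':
#   linux /boot/vmlinuz-... root=/dev/sda1 ro quiet splash init=/bin/bash
# Press Ctrl+X to boot. You land in a root shell before any login, then:
mount -o remount,rw /   # the root fs comes up read-only; make it writable
passwd root             # set a new root password, no old one required
```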

But you can password protect the e option as discussed here: How to add the GRUB password protection to the OS load process instead of when editing boot options
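The usual approach (covered in that link) is GRUB's --unrestricted flag: entries carrying it can be booted by anyone, but editing them with e still demands the superuser password. One common way, assuming your distro's script builds entries from the CLASS variable, is tweaking /etc/grub.d/10_linux:

```shell
# In /etc/grub.d/10_linux, append --unrestricted to the generated entries:
CLASS="--class gnu-linux --class gnu --class os --unrestricted"

# Then rebuild grub.cfg:
sudo update-grub
```

This keeps normal boots password-free while locking down the edit and command-line modes.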

You can take the extra step of encrypting the GRUB password, as discussed in the link. Indeed, with perhaps 3% of the population (wild guess) using Linux/Ubuntu at home, it's a good idea for system administrators to protect against the e function on production systems at work. I imagine that if Ubuntu is used at work, 30 to 40% of those users run it at home too, and maybe 10% of those will learn how to use e on their home systems.

Thanks to your question they have just learned more. With the link above though, System Administrators have another task on their to-do list to protect production environments.