Why is drive/partition number still used?

Strictly speaking, UUID is not addressing at all.

Addressing is very, very simple: read drive X sector Y - or else. Read memory address Z - or else. Addressing is simple, fast, leaves little room for interpretation, and it's everywhere.

UUID is not addressing. Instead it's searching, finding, sometimes waiting for devices to appear, and also understanding filesystems(★). And depending on how many devices there are, it may take a very long time. And once found, it's back to regular addressing.

In GRUB, this is called search(★★), and it's only available once GRUB has already grown wings (search is a module, as is every filesystem it supports, thus only available after loading core). In Linux, the equivalent is (for example) findfs: "findfs will search the block devices in the system looking for a filesystem or partition."
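For illustration, a findfs call looks like this (the UUID is just a made-up example value):

findfs UUID=74686973-6973-616e-6578-616d706c650a   # prints e.g. /dev/sda2; exit status is nonzero if nothing matches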

It goes through all block devices, wakes them from standby, reads data, and the result may still be random if the UUID is not as unique as it should be (after a dd accident or the like), or you get no result if the UUID changed - UUIDs are prone to configuration errors, too.
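A sketch of how such a dd accident comes about, and how you would repair it on ext4 (device names are illustrative):

dd if=/dev/sda2 of=/dev/sdb2 bs=1M    # byte-for-byte clone: the UUID comes along for the ride
blkid /dev/sda2 /dev/sdb2             # both now report the same UUID
tune2fs -U random /dev/sdb2           # give the ext4 clone a fresh, random UUID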

In general, UUIDs are great, and of course you should use them everywhere if available, especially when traditional addressing is bound to fail because drive order is random in Linux; but understand that the complexity is above and beyond what simple addressing is meant to do. And especially in the very early stages of bootloaders, it simply might not be an option yet. Addressing comes first, growing wings comes later.

For the bootloader, it might simply not be necessary to make the effort (not every bootloader supports a wide range of filesystems like GRUB does). If hd0 is guaranteed to be "the disk we booted off of" by circumstance (the BIOS provides), and you can thus rule out random drive order issues, there may be no need to go through a potentially enormous list of other partitions in search of UUIDs.

If you're confident enough in your configuration to say that hd0,gpt2 is the one you want, and it has to be, and it can't be otherwise, then there is nothing wrong with using it like that. Sometimes, plain and simple addressing works just fine.
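In GRUB terms, that plain and simple addressing is just this (paths illustrative):

set root=(hd0,gpt2)              # no search, no extra modules: drive 0, partition 2
linux /vmlinuz root=/dev/sda2
initrd /initrd.img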


(★) I previously explained this for LABELs here...

There is no generic standard for labels; it's all hand-knitted, see for example this implementation of superblock formats in util-linux. If you invent a new filesystem tomorrow, even if it has a label, it won't show up until support is added.

...and it's much the same for UUIDs.


(★★) Actually, GRUB's search has a --hint option, and... now, I haven't checked the source code, and it's not even documented in their manual, but such an option would make sense to give you the best of both worlds: the hint should tell search to check that partition first. If the UUID matches as expected, the device was identified with minimal effort; if it doesn't match, search still falls back to the full-blown scan to keep things working somehow.
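For what it's worth, the grub.cfg that grub-mkconfig generates does look roughly like this (hint and UUID illustrative), which fits that best-of-both-worlds reading:

if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='hd0,gpt2' 74686973-6973-616e-6578-616d706c650a
else
  search --no-floppy --fs-uuid --set=root 74686973-6973-616e-6578-616d706c650a
fi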

In addition to that, previously found UUIDs tend to be cached, so the search doesn't have to go through all devices over and over again - and this too works great, provided the UUID you're looking for actually exists somewhere to make it into the cache in the first place.


The plain numbering scheme is not actually used in recent systems (with "recent" meaning Ubuntu 9 and later; other distributions may have adapted in that era, too).
You are correct in observing that the root partition is set with the plain numbering scheme. But this is only a default or fall-back setting, which is usually overridden by the very next command, such as:

search --no-floppy --fs-uuid --set=root 74686973-6973-616e-6578-616d706c650a

This selects the root partition based on the file-system's UUID.
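To find the UUID to put in such a line for your own partition, either of these will do (the device name is an example):

lsblk -f          # lists all filesystems with their UUIDs and labels
blkid /dev/sda2   # prints the UUID of one specific partition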

In practice, the plain numbering scheme is usually stable (as long as there are no hardware changes). The only instance where I observed non-predictable numbering was a system with many USB drives, which were enumerated on a first-come, first-served basis and then emulated as IDE drives. None of these processes is inherently chaotic, so I assume a problem in that particular system's BIOS implementation.

Note: "root partition" in this context means the partition to boot from, it may be different from the partition containing the "root aka. / file system".


Also do not forget labels. They aren't as unique as UUIDs, but they are much more informative, and they make your fstab human-readable. If it's your desktop, or a small company--in other words, you are managing a few to a few dozen drives--you may prefer labels to UUIDs.
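A sketch of what that looks like in fstab, with made-up label names:

# /etc/fstab
LABEL=bubba-root   /       ext4   defaults   0 1
LABEL=bubba-boot   /boot   ext4   defaults   0 2

e2label /dev/sda2 bubba-root   # sets the label on an existing ext4 filesystem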

Musing over @frostschutz's excellent answer to your question, one scenario where you would likely prefer the "classic" device link addressing is the VM setup, especially in the VM-for-hire (abbreviated, confusingly, “IaaS”) clouds. Suppose you want to customize an Ubunzima 04.18 image. You create a (throwaway) VM with 2 disks: one will be the (throwaway) system drive, and the second the one you mount and customize. Presumably, you also mount its UEFI boot partition, if you want to grub a newer grub onto your new disk. Assuming you've chosen mount points for the target partitions under /mnt, your desired mount table looks like

/dev/sda1    /
/dev/sda9    /boot/efi
/dev/sdb1    /mnt/root
/dev/sdb9    /mnt/efi

So you make 2 identical drives from the existing, provider-provided, cloud-ready image, connect them to a new VM and boot it. Naturally,

  • All modern OS distros, our imaginary Ubunzima 04.18 not being an exception, rely on UUID-named mounts.
  • All hard drives rolled out from the same image have the same UUID. UUIDs are unique, so what could go wrong?

You already see whither this is all going.

The first time this frankencontraption booted, it picked sda9 as the EFI boot partition, but Linux decided to mount sdb1 as the root FS:

/dev/sda1    /mnt/root
/dev/sdb1    /
/dev/sda9    /boot/efi
/dev/sdb9    /mnt/efi

And since my roll-out script was quite unprepared for that, I got an unbootable dud image in the end, without a single tool complaining in the log during the frankenbuild!

Of course I printed the mount table in the logs. And of course the mess-up is very hard to spot, since mount(8) prints mounts in an order halfway between random and the order in which the devices were mounted, so it was not surprising that I did not spot it right away. And imagine: the same script (but with disks from different images) had previously worked as smooth as 15-year-old Glenfiddich. Guess how many hours I spent pulling my hair¹ over the log trying to figure out the problem?
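In hindsight, one line in the roll-out script would have caught it (a sketch, assuming util-linux's blkid is available):

blkid -s UUID -o value | sort | uniq -d    # prints any filesystem UUID that occurs more than once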


There are no hard and fast rules good for every situation, from a desktop PC to a Linux embedded in a router to your Android phone to a cloud data center. An SO answer is supposed to be objective, and my experiences or preferences are, of course, not. So I would rather show examples of the logical reasoning that goes into selecting among different methods of identifying partitions:

  • Leave it alone if you have no reason not to. UUIDs are the default for most modern distros. If it comes to adding a second drive, try then and decide. Chances are you won't ever need to even know. If your system still boots and you can see and partition the new device, format it and add it to fstab (by UUID, by LABEL or by a /dev link; the same considerations apply). It's only when your system refuses to boot after plugging in the extra drive that you have a problem (and maybe changing the boot order in the UEFI BIOS is the quickest way out).

    Pragmatically, labeling which SATA connector goes to which drive in your own desktop may be the fastest and easiest solution, while changing the way the system boots and recovering from a quite likely boot failure is, arguably, the worst time-gobbler. But if you manage it for 50 programmers who think that throwing in an extra drive is not a problem worth bothering you with, at the very least do not test the limits of your luck and make sure their initial boot drives are all seen by grub as hd0 and by the system as sda.

  • Labels to manage your own drives and partitions in your desktop or three, or a small milieu (a sitting room of a house packed with software engineers who funnily call the place their “startup office”). If you pull a physical drive from someone's machine, you know where it came from if you use labels consistently.

    If lsblk(8) says LABEL=bubba-boot, you know it has been pulled from the machine called bubba; besides, bubba-boot rolls off my tongue much easier than 6864c4ea-f9b9-46db-b875-4d7fc2981007, which, to my spoiled taste, is a downright jawbreaker. Ensuring that labels are unique now falls on you, but what you get in return is the label's meaningfulness.

  • /dev-link based naming when commanding a battalion of relatively short-lived, low-maintenance VMs which are the spawn of the same image, and you would not bet your weekly wage that all their UUIDs live up to the UU promise. Any sane VM service, be it Vyper-H on your own physical server or Kugel Cloud or anything else, shall never call your boot drive sde and the second, and only other, one sdc². In a physical machine, on the other hand, you can easily get that same arrangement by creatively connecting SATA cables. (The listing after this list shows the naming schemes side by side.)

    I digress now, but in this scenario I go the same route with the so-called “consistent” Ethernet interface naming: disable it in VMs. Don't get me wrong, the naming is really consistent, as long as the NIC you put into PCI slot 4 won't suddenly jump to slot 5 on its own whim while you are not looking (or maybe even while you are; NICs have no shame whatsoever). Unfortunately, in the “battalion of VMs” milieu they in fact do. In this case, counter-intuitively, eth0 is more consistent than enp0s4f6. The VM provider did not promise to always put their virtual NIC number 1 into slot 4 on PCI bus 0 (and none of the 3 mentioned entities is physically real), nor that it will always be function 6. But you can pretty much rely on the first interface coming up before the second, considering they normally use the same driver module, commonly from the virtio family (and if the first NIC is not always eth0, the same note² still applies).
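As for the listing promised above: udev maintains all of these naming schemes side by side under /dev/disk, so you can always inspect what each scheme would call your drives:

ls -l /dev/disk/by-label/ /dev/disk/by-uuid/ /dev/disk/by-id/ /dev/disk/by-path/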


¹ Figuratively, of course. I've been in this business for far too long to have any left.
² If they did, I'd seriously consider running away screaming from them, that is, changing the provider or the VM hypervisor software.