Why do companies not give root access to employees on their desktop machines?

Security administrators are responsible for your machine and for what happens on it. That responsibility inverts the basic security model of a single-user Unix machine: the admin (an absent party) is root on your machine, and you are not. Unix isn't really set up for this model.

Admins need to be able to install security controls on your machine in order to protect the company as a whole, not just its data, its network, and its other nodes. If the local user had root access, the admins would no longer be in control of those controls. That's the basic premise.

Yes, there are tons of bad things that require root, including turning the machine into a malicious node, and all of those are good reasons not to provide root access. And yes, there are lots of ways around those limitations, and lots of bad things the local user can do without root. But ultimately, the local user and the Risk Owner cannot be competing for control of, or responsibility for, the machine.


A few reasons off the top of my head:

  • ARP poisoning or network flooding attacks on the network would generally require root access to a machine on the network.

  • Being able to install unauthorised programs might open the company up to legal liability if those programs are themselves illegal (e.g. because they're pirated, or not licensed for commercial use).

  • If the company has any sort of remote monitoring of employees (or wants the ability to have such monitoring even if it's not in place yet), giving users root access would allow them to bypass that.

  • Not having root access means you can't rm -rf /bin (or do any number of other destructive things), and neither can anyone who gains access to your login details, so your company will never need to help you recover from that situation.

  • If your company might redeploy your machine if you leave, they might feel more comfortable doing so without doing a complete wipe-and-reinstall if you've never had root access to it.

  • Giving people root access is easy if it becomes necessary; taking root access away comprehensively is difficult if that becomes necessary.

  • The general principle of least privilege is that you shouldn't give anyone/anything access they don't actively need.

  • Simple inertia: the policy dates from the days of shared servers, it has worked so far, and nothing has forced a change (the hypothetical monkeys-and-ladders problem).
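On the least-privilege point: the usual compromise is a targeted sudo rule rather than blanket root, so a user gets exactly the one privileged action their job needs. A sketch of such a rule (the user name, file name, and service are hypothetical; always edit sudoers files via visudo so the syntax is validated):

```
# /etc/sudoers.d/deploy -- hypothetical: alice may restart one
# service as root, and nothing else.
alice ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service
```

With a rule like this, alice can run that single command under sudo, but every other privileged operation still requires the admins, so they keep control of the security controls discussed above.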
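The first bullet is easy to demonstrate: forging ARP replies means writing raw link-layer frames, and on Linux the kernel only lets root (or a process holding CAP_NET_RAW) open the packet socket that requires. A minimal sketch (Linux-specific AF_PACKET; the function name is mine):

```python
import socket

ETH_P_ARP = 0x0806  # EtherType for ARP frames


def can_open_raw_socket():
    """Return True if this process may open a raw packet socket,
    which is the capability an ARP-spoofing tool would need."""
    if not hasattr(socket, "AF_PACKET"):
        return False  # non-Linux platform: no AF_PACKET sockets at all
    try:
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                          socket.htons(ETH_P_ARP))
    except PermissionError:
        return False  # unprivileged user: kernel refuses the socket
    s.close()
    return True


print("raw ARP socket allowed:", can_open_raw_socket())
```

Run as an ordinary user this prints False; run as root (or with CAP_NET_RAW granted) it prints True, which is exactly why withholding root also withholds this class of attack.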


This answer is not meant to contradict the existing answers, but rather to supplement them; it's posted as an answer only because it's too long for a comment.

Part of the reason is (as others have alluded to) that users can't be trusted not to do foolish or malicious things. But another part is the question of whose responsibility it is to fix things when that happens.

I'm a full-stack developer and part-time devops with root access not only to my own development machines but to a number of our production servers, and at least some level of access to the hypervisor they're deployed on. But if I mess up, I am the party (or at least a party) with the skills, expertise, and responsibility to fix it. Not so the typical end user: if Bobby the user borks his/her Windows install, and that install happened to hold mission-critical data or be used for mission-critical work, then Bobby isn't the one who has to come in on his/her day off or work unpaid overtime to fix it. Not to mention answer to the brass for how Bobby managed to almost single-handedly sink the ship.

So part of the reason IT departments limit end user privileges is that it reduces their own risk exposure.