Company claims hardwire connections are a security issue

Warning: Conjecture, because none of us know their actual setup.

It is very likely that the organization has its own network, which is hard-wired, as well as a guest network, which is wireless-only. The two are separate networks. This is a common layout: laying wire to desks is expensive but worth it for your own employees, while broadcasting wireless is cheap and perfectly adequate for your guests.

When you asked about a hard-wired connection, they answered the question of which network you'd be on rather than how you'd physically connect. And since the two are intertwined in their minds ("hard-wired is our network, wireless is the guest network"), they answered very simply.

From their point of view, they don't want non-organization machines on their network, only on the guest network - because of viruses and the like. We can all understand that we wouldn't want random visitors on our internal networks, right? So that's a context in which their answer makes sense.

I would suggest explaining your concern to them and seeing if they can come up with a solution, instead of asking them about the solution you would expect to work. It may be that they only expect guests to need enough connectivity for email and light web browsing. If you explain that Jane needs more bandwidth for her study needs, and can convince them that it's a reasonable request, they're likely to find some way to help - even if it's just moving Jane to a room closer to the Wireless AP.


It really depends on how they have set up their network, so we can only speculate. But I can provide a similar anecdote.

My local library has wifi that you can log into using your library card. Several rooms have ethernet ports in the wall, but when I asked if I could plug in, I was told that the ethernet goes straight to the back-end network with access to the library's databases, printers, etc. - not intended for customers.

It's common practice to keep one network for "trusted" machines that run corporate-supplied anti-virus and so on, and a separate network for the public to use. I guess wifi vs. ethernet is as good a way as any to split that.


I'm going to come at this from a network-engineering point-of-view (full disclosure: CCNA / N+, I work on enterprise-level network systems which include complex topics that we'll discuss here, as well as having done network-engineering for a private university).

Every network is different, and every network-device is different, but there are some commonalities:

  • Many enterprise-level devices (switches) offer some sort of "VLAN" ("Virtual LAN"). For those unfamiliar, think of it as a way of saying "this switchport is in LAN X, whereas this other switchport is in LAN Y." This lets us separate devices logically, so that you and I can be plugged into the same switch but not even see each other through MAC targeting;
  • Many enterprise-level devices (switches) offer SNMP targeting / triggering / "trapping" to switch ports between different VLANs based on things like MAC addresses and the like.
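The VLAN idea above can be sketched as a toy model. The port names and VLAN numbers here are invented for illustration; real switches do this in hardware per 802.1Q, not in software like this:

```python
# Toy model of VLAN segregation on one switch.
# Port names and VLAN assignments are hypothetical examples.
port_vlan = {
    "Gi0/1": 10,   # staff desk
    "Gi0/2": 10,   # staff desk
    "Gi0/3": 99,   # guest drop in the lobby
}

def can_see(port_a: str, port_b: str) -> bool:
    """Two ports can exchange Layer-2 frames only if they share a VLAN."""
    return port_vlan[port_a] == port_vlan[port_b]

print(can_see("Gi0/1", "Gi0/2"))  # True  - both on staff VLAN 10
print(can_see("Gi0/1", "Gi0/3"))  # False - staff vs. guest
```

The point is that physical adjacency (same switch, same wire type) says nothing about logical reachability once VLANs are in play.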

Here's the thing about Ethernet / RJ-45 / 100M/1000M connections: we typically use lower-end devices for this, because we often "just" need a basic connection back to the router. They're often less advanced and don't offer the higher-quality features above. (You'll find VLAN segregation on just about every switch nowadays, but the SNMP triggering and targeting is substantially harder to find at a good price point.)

When I worked for the university we used software that would look at a switchport and the MAC address (a unique hardware identifier for your Ethernet port) and decide which VLAN you were on: Guest, Staff, Faculty, Student, Lab, etc. This was extraordinarily expensive, both in licensing and in implementation, and this kind of software is notoriously unreliable. While there are good, free tools out there that do the same thing, they're still difficult to set up, and may not be worth it depending on the company's goals. Another problem is that, with sufficient work, a MAC address can be spoofed, which makes it about as secure as identifying someone by their full name.
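A stripped-down sketch of the decision such software makes internally (the MAC registrations and VLAN numbers are made up; real NAC products also lean on 802.1X, certificates, etc. rather than MACs alone):

```python
GUEST_VLAN = 99  # hypothetical fallback VLAN for unknown devices

# Hypothetical registration database: known MACs and their roles.
registered = {
    "aa:bb:cc:dd:ee:01": "Staff",
    "aa:bb:cc:dd:ee:02": "Student",
}

role_to_vlan = {"Staff": 10, "Student": 20}

def vlan_for(mac: str) -> int:
    """Assign a VLAN based on the device's MAC address.

    Caveat (the spoofing problem above): the MAC is self-reported
    by the client, so this is identification, not authentication.
    Unknown MACs fall through to the guest VLAN."""
    role = registered.get(mac.lower())
    return role_to_vlan.get(role, GUEST_VLAN)
```

Because the lookup key is client-supplied, anyone who learns a staff MAC can present it and land on VLAN 10 - which is exactly why this scheme alone isn't considered secure.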

So we have to make a decision: support hard-wired connections that may be unstable, insecure, and leak access to privileged resources, or not?

No network is perfectly secure; even if we have all the resources on the "protected" network locked down, there's still a risk in connecting a foreign device to it. Therefore, we often make decisions like "any BYOD connects to this wireless network." We can split the wireless side into "Guest" and "Secured" networks via different SSIDs and authentication mechanisms. This means guests and employees can connect through the same wireless access points: infrastructure cost is lower, and we get the same security benefit.
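The dual-SSID arrangement can be sketched like so. The SSID names, auth methods, and VLAN numbers are invented; the point is that one AP maps each SSID to a different auth mechanism and a different network:

```python
# One physical AP broadcasting two SSIDs, each mapped to its own
# authentication method and VLAN. All names/numbers are hypothetical.
ssid_profiles = {
    "CorpNet":   {"auth": "WPA2-Enterprise", "vlan": 10},  # employees, 802.1X
    "CorpGuest": {"auth": "WPA2-PSK",        "vlan": 99},  # visitors, shared key
}

def vlan_for_ssid(ssid: str) -> int:
    """Traffic lands on the VLAN tied to the SSID the client joined."""
    return ssid_profiles[ssid]["vlan"]

def auth_for_ssid(ssid: str) -> str:
    return ssid_profiles[ssid]["auth"]
```

So a guest on "CorpGuest" ends up on the same isolated VLAN as before, while an employee on "CorpNet" (authenticated per-user) lands on the internal network - one set of APs, two security postures.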

Like the other answers, this is conjecture, but from my (professional) experience this is the likely explanation: the infrastructure cost to support hard-wired connections securely is too high to be justified. (And since almost all devices people use have wireless capability these days, it's tough to justify.) Considering even Apple has dropped Ethernet ports from the MacBook Pro by default, we get into an "is it even worth it?" situation.


TL;DR: Ethernet is too expensive to deploy across the board and secure properly, whereas wireless is becoming much more commonplace and secure, and makes it easier to distribute access.