Why does just splitting an Ethernet cable not work?

In 10BASE-T and 100BASE-TX, one pair of wires is used for transmitting and one for receiving. That is, one pair is the pair the Ethernet host transmits on and the hub or switch receives on, and the other pair is the pair the hub/switch transmits on and the Ethernet host receives on.
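
For reference, here's a minimal sketch (Python; the dict names and loop are mine, only the pin numbers come from the standard) of how those pairs map onto an RJ-45 connector:

```python
# Which RJ-45 pins carry which direction on 10BASE-T / 100BASE-TX.
# Pin numbers are per the standard; the names are just for illustration.

MDI = {          # host / NIC side
    "transmit": (1, 2),   # TX+ / TX-
    "receive":  (3, 6),   # RX+ / RX-
}

MDI_X = {        # hub / switch port side: the pairs are swapped
    "transmit": (3, 6),
    "receive":  (1, 2),
}

# A straight-through cable connects pin N to pin N, so the host's transmit
# pair (1,2) lands on the switch's receive pair (1,2) -- which is exactly
# why one end of the link is wired MDI and the other MDI-X.
for direction, pins in MDI.items():
    other = "receive" if direction == "transmit" else "transmit"
    print(f"Host {direction} pair on pins {pins} feeds the switch's {other} pair")
```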

If you split the cable with a simple passive splitter, you're hooking up those two Ethernet hosts transmitter-to-transmitter and receiver-to-receiver. That's like holding the phone handset upside down and trying to speak into the speaker and listen to the microphone: it just doesn't work. So even if both were in half-duplex mode (as if they were hooked to a hub, not a switch), neither of the Ethernet hosts would be able to sense when the other was transmitting, because neither one's receiver is hooked up to the other one's transmitter. So they would have undetectable collisions. Not to mention that they'd both be connected to the same port of the hub, probably confusing the hub's autonegotiation, because hubs don't expect to autonegotiate with two separate hosts on the same port.
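
To make the carrier-sense failure concrete, here's a toy model (Python; the classes and wiring lists are invented for illustration, not real PHY code) of why neither host can hear the other through a passive Y-split:

```python
# Toy model: a device senses carrier by listening on its RX pair.
# With a passive Y-split, both hosts' TX pairs join onto the switch's RX
# pair, and the switch's TX pair fans out to both hosts' RX pairs.

class Device:
    def __init__(self, name):
        self.name = name
        self.transmitting = False
        self.rx_feed = []          # devices whose TX pair is wired into our RX pair

    def carrier_sensed(self):
        return any(d.transmitting for d in self.rx_feed)

a, b, switch = Device("host A"), Device("host B"), Device("switch port")
a.rx_feed = [switch]               # A's RX pair only hears the switch
b.rx_feed = [switch]               # B's RX pair only hears the switch
switch.rx_feed = [a, b]            # both hosts drive the switch's RX pair

b.transmitting = True
print(a.carrier_sensed())          # False: A never hears B, so A may transmit too
print(switch.carrier_sensed())     # True: the collision lands on the switch's RX pair

# Because A and B can't sense each other's carrier, their transmissions
# collide on the shared pair without either host ever detecting it.
```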

In many ways, things are even worse in your case of hooking them both up to a switch, because they could both end up thinking they can do full duplex, which means even more undetectable collisions on what's supposed to be a collision-free link (properly wired full-duplex links can't have collisions at all).

With 1000BASE-T (Gigabit Ethernet over Cat5 or better UTP copper cabling), the situation is even worse, because all 4 pairs of wires are used for both transmit and receive simultaneously (full duplex on every pair), and the transceivers are sophisticated enough to make that work: each end subtracts its own transmission (echo cancellation) from what it sees on the wire to recover what the other end sent. But if you suddenly have a third party on the line, transmitting and receiving at the same time, it completely breaks that simultaneous bidirectional signaling scheme. With three devices all transmitting at once, even after you subtract out your own transmission, you can't separate the other two devices' transmissions in the signal you're receiving.
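
Here's that last point as a toy arithmetic sketch (Python, arbitrary integer units; a real PHY does adaptive DSP, this is only the idea of subtracting your own signal):

```python
# Arbitrary integer "voltages" on one 1000BASE-T pair, just to show the idea.

my_tx      = 7            # what I'm driving onto the pair
partner_tx = -3           # what my single link partner is driving

line = my_tx + partner_tx          # both signals superimpose on the wire
recovered = line - my_tx           # cancel my own contribution ("echo")
print(recovered == partner_tx)     # True: with exactly one partner this works

# Now splice a third device onto the same pair:
intruder_tx = 5
line = my_tx + partner_tx + intruder_tx
leftover = line - my_tx            # = partner_tx + intruder_tx = 2
print(leftover)                    # 2 -- an unresolvable mix of the other two signals
```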

Some early flavors of Ethernet, such as 10BASE-2, a.k.a. "thinnet", a.k.a. "cheapernet", featured a bus topology where all the hosts on the LAN literally shared the same wire (the same coaxial cable). Because the same wire was used for both Tx and Rx and there could be any number of hosts on the bus, it had to be half-duplex. But a 10BASE-2 transceiver expected it to be that way. And since all the transmitters and receivers were hooked up to the same wire, everyone could hear everyone else (unlike your split 10/100/1000BASE-T example), so CSMA/CD could actually do its job.
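
For flavor, here's a rough sketch of the half-duplex CSMA/CD transmit loop that made that shared-wire setup workable (Python; `medium_busy`, `send_frame`, `collision_detected`, and `wait_slots` are placeholder callables, not a real API -- the backoff rule is the standard truncated binary exponential backoff):

```python
import random

def csma_cd_transmit(frame, medium_busy, send_frame,
                     collision_detected, wait_slots):
    """Classic half-duplex CSMA/CD: sense, send, detect, back off."""
    for attempt in range(16):                # give up after 16 attempts
        while medium_busy():                 # carrier sense: defer while the wire is busy
            pass
        send_frame(frame)
        if not collision_detected():         # keep listening while transmitting
            return True                      # the frame got through
        # Collision: back off a random number of slot times,
        # doubling the range each time (capped at 2**10 slots).
        wait_slots(random.randint(0, 2 ** min(attempt + 1, 10) - 1))
    return False                             # excessive collisions, give up

# Smoke test with stub callables: wire always idle, no collisions.
ok = csma_cd_transmit(b"frame",
                      medium_busy=lambda: False,
                      send_frame=lambda f: None,
                      collision_detected=lambda: False,
                      wait_slots=lambda n: None)
print(ok)    # True
```

Every step of that loop only works because each transceiver on the coax can hear every other one; on the mis-split twisted-pair cable above, the "sense" and "detect" steps have nothing to listen to.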


The original Ethernet spec called for coaxial cable that was tapped (split) to each workstation (hence the "ether" in Ethernet). But we're talking ancient history here. Technically it's still possible with RJ-45 cabling, since the Ethernet protocol still supports the collision-detection mechanisms, but why in God's name would you want to set it up that way? Especially since your router has 4 ports to work with in the first place.


I'm surprised I must disagree with Spiff -- in a sense it does work. We were hunting down the cause of excessive packet errors in the factory. Among other things, we found a spot where some electrician had simply spliced a Y into a 100BASE-T network cable.

The two computers involved sometimes had network errors, but since this setup persisted for a long time while the users ran a program that lived on the network, with all its data (except stuff written to the temp directory) also on the network, I can conclusively say it's possible.

The switches are the traffic lights of the network -- without them, packets run into each other badly. Normally the network protocol makes up for the lost packets, though.