When/why to start subnetting a network?

Solution 1:

Interesting Question.

Historically, prior to the advent of fully switched networks, the main consideration in breaking a network into subnets was limiting the number of nodes in a single collision domain. That is, if you had too many nodes, your network performance would reach a peak and eventually collapse under heavy load due to excessive collisions. The exact number of nodes that could be deployed depended on lots of factors, but generally speaking you could not regularly load the collision domain much beyond 50% of the total bandwidth available and still have the network be stable all the time. 50 nodes on the network was a lot of nodes in those days. With heavy users, you might have topped out at 20 or 30 nodes before needing to start subnetting things.

Of course, with fully switched full-duplex subnets, collisions are not a concern anymore, and assuming typical desktop-type users, you can typically deploy hundreds of nodes in a single subnet without any issues at all. Having lots of broadcast traffic, as other answers have alluded to, might be a concern depending on what protocols/applications you are running on the network. However, understand that subnetting a network does not necessarily help with your broadcast traffic concerns. Many protocols use broadcasting for a reason - that is, when all the nodes on the network actually need to see such traffic to implement the application-level feature(s) desired. Simply subnetting the network doesn't actually buy you anything if the broadcast packet is also going to need to be forwarded over to the other subnet and broadcast out again. In fact, that actually adds extra traffic (and latency) to both subnets if you think this through.
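To make the broadcast-domain point concrete, here is a small sketch using Python's standard `ipaddress` module (the 192.168.0.0/23 block is an arbitrary example): splitting one /23 into two /24s halves the number of hosts that must process each broadcast, but only if the broadcasts don't need to be relayed across the boundary.

```python
import ipaddress

# One flat /23: every host shares a single broadcast domain.
flat = ipaddress.ip_network("192.168.0.0/23")

# Split into two /24s: two smaller broadcast domains.
halves = list(flat.subnets(new_prefix=24))

print(flat.num_addresses - 2, "hosts hear every broadcast on the flat network")
for net in halves:
    # Each half has its own broadcast address and roughly half the hosts.
    print(net, "broadcast:", net.broadcast_address,
          "hosts:", net.num_addresses - 2)
```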

Generally speaking, today, the main reasons for subnetting networks have much more to do with organizational, administrative, and security boundary considerations than anything else.

The original question asks for measurable metrics that trigger subnetting considerations. I am not sure there are any in terms of specific numbers. This is going to depend dramatically on the 'applications' involved, and I don't think there are really any trigger points that would generally apply.

Relative to rules of thumb in planning out subnets:

  • Consider subnets for each organizational department/division, especially as they get to be non-trivial (50+ nodes!?) in size.
  • Consider subnets for groups of nodes/users using a common application set that is distinct from other users or node types (Developers, VoIP Devices, manufacturing floor)
  • Consider subnets for groups of users that have differing security requirements (securing the accounting department, securing Wi-Fi)
  • Consider subnets from a virus outbreak, security breach and damage containment perspective. How many nodes get exposed/breached - what is an acceptable exposure level for your organization? This consideration assumes restrictive routing (firewall) rules between subnets.

With all that said, adding subnets adds some level of administrative overhead and can cause problems such as running out of node addresses in one subnet while having too many left over in another pool. The routing and firewall setups and the placement of common servers in the network also get more involved. Certainly, each subnet should have a reason for existing that outweighs the overhead of maintaining the more sophisticated logical topology.

Solution 2:

If it's a single site, don't bother unless you've got more than several dozen systems, and even then it's probably unnecessary.

These days, with everyone using at least 100 Mbps switches and more often 1 Gbps, the only performance-related reason to segment your network is if you're suffering excess broadcast traffic (i.e. > 2%, off the top of my head).
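The "2%" rule of thumb is easy to check against interface counters. A minimal sketch, assuming you've pulled broadcast and total frame counts from somewhere like SNMP's ifTable or `ethtool -S` (the counter values below are made up for illustration):

```python
# Hypothetical counters sampled over the same interval (illustrative values).
total_frames = 1_250_000
broadcast_frames = 31_000

BROADCAST_THRESHOLD = 0.02  # the rough 2% rule of thumb from this answer

share = broadcast_frames / total_frames
print(f"broadcast share: {share:.1%}")
if share > BROADCAST_THRESHOLD:
    print("broadcast traffic exceeds the rule of thumb; consider segmenting")
```

In practice you'd want to sample the counters twice and take the delta, since most interface counters are cumulative since boot.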

The other main reason is security, e.g. a DMZ for public-facing servers, another subnet for finance, or a separate VLAN/subnet for VoIP systems.


Solution 3:

Limiting scope for any compliance requirements you may have (e.g. PCI) is a pretty good catalyst to segment off some portions of your network. Segmenting off your payment acceptance/processing and finance systems can save money. But in general, subnetting a small network will not gain you much in the way of performance.


Solution 4:

Another reason would be Quality of Service (QoS). We run voice and data VLANs separately so that we can easily apply QoS to the VoIP traffic.
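The VLAN separation itself happens on the switches, but QoS also depends on traffic being marked so the network can identify it. As a hedged sketch of the host side, this marks a UDP socket's packets with DSCP EF (Expedited Forwarding, the conventional class for voice bearer traffic) via the standard `IP_TOS` socket option:

```python
import socket

# DSCP EF is decimal 46; it occupies the top 6 bits of the IP TOS byte,
# so shift left 2 past the ECN bits.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Datagrams sent on this socket now carry DSCP 46, so switches/routers
# configured to trust or match EF can queue them ahead of bulk data.
sock.close()
```

Whether the marking is honored depends on the switch/router trust configuration; many deployments instead re-mark traffic at the access port based on the voice VLAN.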

You know, I've been thinking about this question more. There are a ton of good reasons to design a new network using distinct networks (performance, security, QoS, limiting DHCP scopes, limiting broadcast traffic (which can be both security and performance related)).

But when thinking of a metric for redesigning just to subnet, and thinking of networks I've had to handle in the past, all I can think of is "wow, that'd have to be one really messed-up network to make me completely redesign it for subnetting". There are lots of other reasons - bandwidth, CPU utilization of the devices installed, etc. But subnetting by itself on a pure data network wouldn't usually buy a ton of performance.


Solution 5:

Security and quality of service, mostly (as long as the network segment can support the nodes in question, of course). A separate network for printer traffic, voice/phone, isolated departments like IT Ops, and of course server segments and internet-facing segments (one per internet-facing service is popular today, not just "one DMZ will do"), and so on.