How does Linux allocate bandwidth between processes?

Like most performance problems, it is complicated. How much bandwidth each task gets is determined by a complex interaction between many things at different layers of the network stack, even without shaping. An incomplete list:

  • CPU scheduler for when tasks (and driver interrupt handlers) can get on CPU
  • How fast the tasks get their data, possibly limited by bottlenecks or contention
  • Which queueing discipline is in use, essentially a packet scheduler (inspected in the sketch after this list)
  • Driver details, such as the number of hardware TX queues and how they select flows
  • TCP protocol behavior: if one flow happens to hit congestion control, it may stay slow while bandwidth is limited
  • All of the above considerations for the remote system(s) receiving the flows
    • If your connections are not all going to the same recipient, the other end may matter more than your end
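
As a concrete way to look at two of these factors, here is a rough Python sketch that reads the system-wide default qdisc and counts a NIC's TX queues from procfs/sysfs. The interface name "eth0" is only an example; substitute your own, and note that per-interface qdiscs configured with tc will not show up in the default-qdisc file.

    #!/usr/bin/env python3
    """Rough sketch: inspect the default qdisc and a NIC's TX queue count (Linux)."""
    from pathlib import Path

    def default_qdisc() -> str:
        # System-wide default queueing discipline, e.g. fq_codel, fq, pfifo_fast.
        return Path("/proc/sys/net/core/default_qdisc").read_text().strip()

    def tx_queue_count(iface: str) -> int:
        # Every TX queue on the interface appears as a tx-<n> directory in sysfs.
        queues = Path(f"/sys/class/net/{iface}/queues")
        return sum(1 for q in queues.iterdir() if q.name.startswith("tx-"))

    if __name__ == "__main__":
        print("default qdisc:", default_qdisc())
        print("eth0 TX queues:", tx_queue_count("eth0"))  # "eth0" is an assumed name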

Many of these are not optimizing for equal-bandwidth "fairness" but for other criteria. TCP congestion control, for example, would rather settle for a little goodput than suffer congestive collapse.
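
On the congestion-control point, a hedged sketch: on Linux, Python can read which algorithms the kernel has loaded and pin one per socket via TCP_CONGESTION. Choosing "cubic" below is just an example; it is the usual default anyway.

    #!/usr/bin/env python3
    """Sketch: list loaded TCP congestion control algorithms and pick one per socket."""
    import socket
    from pathlib import Path

    # Algorithms currently loaded in the kernel (the system default is in tcp_congestion_control).
    loaded = Path("/proc/sys/net/ipv4/tcp_available_congestion_control").read_text().split()
    print("loaded:", loaded)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_CONGESTION changes the algorithm for this one socket only; "cubic" is an example.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    print("in use:", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16).rstrip(b"\x00"))
    s.close()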

And don't forget, too, that you're probably not the only one on the network at any given time - so you also need to factor in the router(s), switch(es), etc. between "here" and "there".


Should this be more than a curiosity, a solution to "as fast as possible" is to get more bandwidth.

Or, QoS, shaping, or application throttling can set quotas for better overall behavior, whatever "better" means. But that is no longer as fast as possible: you are picking the winners and losers by policy.
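
To illustrate what application-level throttling can look like (purely a sketch; the address, port, and 1 MB/s quota below are made-up values), one simple approach is to pace sends against a wall-clock budget:

    #!/usr/bin/env python3
    """Sketch of application throttling: cap a sender at a fixed byte rate."""
    import socket
    import time

    def send_throttled(sock: socket.socket, data: bytes, rate_bps: int, chunk: int = 16384) -> None:
        """Send data at no more than rate_bps bytes per second."""
        start = time.monotonic()
        sent = 0
        while sent < len(data):
            block = data[sent:sent + chunk]
            sock.sendall(block)
            sent += len(block)
            # If we are ahead of the allowed schedule, sleep off the difference.
            expected = sent / rate_bps
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)

    if __name__ == "__main__":
        # 192.0.2.10:9000 is an example address; ~1 MB/s is an example quota.
        with socket.create_connection(("192.0.2.10", 9000)) as s:
            send_throttled(s, b"x" * (10 * 1024 * 1024), rate_bps=1_000_000)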