What is the difference between latency, bandwidth and throughput?

Water Analogy:

  • Latency is the amount of time it takes a drop of water to travel through the tube.
  • Bandwidth is how wide the tube is.
  • Throughput is the rate at which water flows out of the tube.

Vehicle Analogy:

  • Vehicle travel time from source to destination is the latency.
  • The width of the roadway (how many lanes it has) is the bandwidth.
  • The number of vehicles traveling per unit time is the throughput.

Here is my take, in terms I can understand:

When you go to buy a water pipe, there are two completely independent parameters that you look at: the diameter of the pipe and its length. The diameter determines the throughput of the pipe and the length determines the latency, i.e., the time it will take for a water droplet to travel across the pipe. The key point to note is that the length and diameter are independent, and thus, so are the latency and throughput of a communication channel.

More formally, throughput is defined as the amount of water entering or leaving the pipe every second, and latency is the average time required for a droplet to travel from one end of the pipe to the other.

Let’s do some math:

For simplicity, assume that our pipe is a 4 inch x 4 inch square and its length is 12 inches. Now assume that each water droplet is a 0.1 inch x 0.1 inch x 0.1 inch cube. Thus, in one cross-section of the pipe, I will be able to fit 40 x 40 = 1600 water droplets. Now assume that water droplets travel at a rate of 1 inch/second.

Throughput: A new set of droplets moves into the pipe every 0.1 seconds. Thus, 10 sets will move in 1 second, i.e., 16000 droplets will enter the pipe per second. Note that this is independent of the length of the pipe.

Latency: At 1 inch/second, it will take 12 seconds for a droplet to get from one end of the pipe to the other, regardless of the pipe's diameter. Hence the latency will be 12 seconds.
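
To make the arithmetic concrete, here is a minimal Python sketch that reproduces the numbers above (the pipe dimensions, droplet size, and flow rate are the assumptions from this example):

```python
# Worked example: the 4in x 4in x 12in pipe described above.
PIPE_SIDE_IN = 4.0         # cross-section is 4 in x 4 in
PIPE_LENGTH_IN = 12.0      # pipe length: 12 in
DROPLET_SIDE_IN = 0.1      # each droplet is a 0.1 in cube
FLOW_IN_PER_S = 1.0        # droplets move at 1 inch/second

# Droplets per cross-section: (4 / 0.1)^2 = 40 * 40 = 1600
per_slice = (PIPE_SIDE_IN / DROPLET_SIDE_IN) ** 2

# A fresh slice of droplets enters every 0.1 s, i.e. 10 slices/second.
slices_per_s = FLOW_IN_PER_S / DROPLET_SIDE_IN

# Throughput depends only on the cross-section, not the length.
throughput = per_slice * slices_per_s        # 16000 droplets/second

# Latency depends only on the length, not the cross-section.
latency = PIPE_LENGTH_IN / FLOW_IN_PER_S     # 12 seconds

print(f"throughput = {throughput:.0f} droplets/second")
print(f"latency    = {latency:.0f} seconds")
```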


When a SYN packet is sent over TCP, the sender waits for a SYN+ACK response; the time between sending and receiving is the latency. It's a function of a single variable: time.
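
As a rough illustration, here is a small Python sketch that approximates this kind of latency by timing a TCP handshake. socket.create_connection returns once the three-way handshake (SYN, SYN+ACK, ACK) completes, so the elapsed time is roughly one round trip; the host and port below are just placeholder choices.

```python
import socket
import time

HOST, PORT = "example.com", 80  # placeholder target

# Time the TCP three-way handshake as a proxy for round-trip latency.
start = time.perf_counter()
conn = socket.create_connection((HOST, PORT), timeout=5)
latency_ms = (time.perf_counter() - start) * 1000
conn.close()

print(f"Handshake latency to {HOST}:{PORT}: {latency_ms:.1f} ms")
```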

If we're doing this on a 100 Mbit connection, that is the theoretical bandwidth we have, i.e., how many bits per second we can send.

If I compress a 1000 Mbit file down to 100 Mbit and send it over the 100 Mbit line, then my effective throughput could be considered 1 Gbit per second. The theoretical throughput and theoretical bandwidth are the same on this network, so why am I saying the throughput is 1 Gbit per second?

When talking about throughput, I hear it most in relation to an application, e.g. the 1 Gbit throughput example I gave assumed compression at some layer in the stack, and we measured throughput there. The throughput of the actual network did not change, but the application throughput did. Sometimes throughput refers to actual throughput, e.g. a 100 Mbit connection is the theoretical bandwidth and also the theoretical throughput in bps, but that is highly unlikely to be what you'll actually get.
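
A back-of-the-envelope sketch of that compression example, in Python (the link speed and the 10:1 compression ratio are taken from the numbers above):

```python
# How compression raises *application* throughput without changing
# the network's throughput. Numbers mirror the example above.
LINK_BPS = 100e6            # theoretical bandwidth: 100 Mbit/s
FILE_BITS = 1000e6          # 1000 Mbit of application data
COMPRESSION_RATIO = 10      # compresses to 100 Mbit on the wire

wire_bits = FILE_BITS / COMPRESSION_RATIO
transfer_time = wire_bits / LINK_BPS            # 1 second on the wire

network_throughput = wire_bits / transfer_time  # 100 Mbit/s (unchanged)
app_throughput = FILE_BITS / transfer_time      # 1000 Mbit/s = 1 Gbit/s

print(f"Network throughput:     {network_throughput / 1e6:.0f} Mbit/s")
print(f"Application throughput: {app_throughput / 1e6:.0f} Mbit/s")
```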

Throughput is also used in terms of whole systems, e.g. the number of dogs washed per day or the number of bottles filled per hour. You don't often use bandwidth in this way.

Note that bandwidth in particular has other common meanings. I've assumed networking because this is Stack Overflow, but if this were a maths or amateur radio forum I might be talking about something else entirely.

https://en.wikipedia.org/wiki/Bandwidth

https://en.wikipedia.org/wiki/Latency

This is worth reading on throughput.

https://en.wikipedia.org/wiki/Throughput