How much network latency is "typical" between the east and west coasts of the USA?

Solution 1:

Speed of Light:
You are not going to beat the speed of light, but as an interesting academic point it sets a floor on latency. This link works out Stanford to Boston at ~40ms best possible round-trip time. When the author did the calculation, he concluded that the internet operates at "within a factor of two of the speed of light", so in practice the round trip is about ~85ms.
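
As a rough sanity check of those figures, here is a back-of-the-envelope sketch. The distance and the "light moves at roughly 2/3 of c in fiber" speed are common approximations I am assuming for illustration, not figures taken from the linked article:

DISTANCE_KM = 4320        # assumed: approximate Stanford -> Boston great-circle distance
FIBER_KM_PER_S = 200_000  # assumed: light in fiber travels at roughly 2/3 of c

one_way_ms = DISTANCE_KM / FIBER_KM_PER_S * 1000
rtt_ms = 2 * one_way_ms

print(f"one-way in fiber:       {one_way_ms:.1f} ms")  # ~21.6 ms
print(f"best-case RTT:          {rtt_ms:.1f} ms")      # ~43 ms, the '~40ms' figure
print(f"factor-of-two estimate: {rtt_ms * 2:.1f} ms")  # ~86 ms, close to the ~85ms figure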

TCP Window Size:
If you are having transfer speed issues, you may need to increase the TCP receive window size. You might also need to enable window scaling if this is a high-bandwidth connection with high latency (called a "Long Fat Pipe"). So if you are transferring a large file, the receive window needs to be big enough to fill the pipe without waiting for window updates; see the sketch below. I went into some detail on how to calculate that in my answer Tuning an Elephant.
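
To make the "fill the pipe" point concrete, here is a minimal sketch of the bandwidth-delay product calculation. The link speed and RTT are made-up example numbers:

bandwidth_bps = 100_000_000  # assumed example: a 100 Mbit/s path
rtt_s = 0.085                # assumed example: ~85 ms coast-to-coast RTT

# The receive window must hold a full round trip's worth of data in flight,
# or the sender stalls waiting for window updates.
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"window needed to fill the pipe: {bdp_bytes / 1024:.0f} KiB")  # ~1038 KiB

# The unscaled TCP window field maxes out at 64 KiB, which is why window
# scaling (RFC 1323) matters on a long fat pipe.
max_unscaled_mbps = 64 * 1024 * 8 / rtt_s / 1e6
print(f"throughput cap with a 64 KiB window: {max_unscaled_mbps:.1f} Mbit/s")  # ~6.2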

Geography and Latency:
A failing point of some CDNs (Content Distribution Networks) is that they equate latency with geography. Google did a lot of research on their network and found flaws in this assumption; they published the results in the white paper Moving Beyond End-to-End Path Information to Optimize CDN Performance:

First, even though most clients are served by a geographically nearby CDN node, a sizeable fraction of clients experience latencies several tens of milliseconds higher than other clients in the same region. Second, we find that queueing delays often override the benefits of a client interacting with a nearby server.

BGP Peerings:
Also, if you start to study BGP (the core internet routing protocol) and how ISPs choose peerings, you will find that it is often more about finances and politics, so you might not always get the 'best' route to certain geographic locations depending on your ISP. You can look at how your IP is connected to other ISPs (Autonomous Systems) using a looking glass router. You can also use a special whois service:

whois -h ""
PEER_AS | IP               | AS Name
25899   |    | LSNET - LS Networks
32869   |    | SILVERSTAR-NET - Silver Star Telecom, LLC

It is also fun to explore these peerings with a GUI tool like LinkRank; it gives you a picture of the internet around you.

Solution 2:

This site would suggest that around 70-80ms latency between the US East and West coasts is typical (San Francisco to New York, for example).

Trans-Atlantic Path
  NY      78ms    London
  Wash    87ms    Frankfurt
Trans-Pacific Path
  SF     147ms    Hong Kong
Trans-USA Path
  SF      72ms    NY

network latency by world city pairs

Here are my timings (I'm in London, England, so my West coast times are higher than my East coast ones). I get a 74ms latency difference, which seems to support the value from that site.

NY - 108ms latency, 61ms transfer, 169ms total
OR - 182ms latency, 71ms transfer, 253ms total

These were measured using Google Chrome dev tools.
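
If you want to reproduce this kind of latency/transfer split without a browser, here is a rough Python sketch. The URL is a placeholder; substitute the site you are testing, and run it several times, since single samples are noisy:

import time
import urllib.request

URL = "http://example.com/"  # placeholder: substitute the site under test

start = time.monotonic()
resp = urllib.request.urlopen(URL)  # returns once the response headers arrive
first_byte = time.monotonic()
body = resp.read()                  # pull down the whole body
done = time.monotonic()

print(f"latency (to first response): {(first_byte - start) * 1000:.0f} ms")
print(f"transfer ({len(body)} bytes): {(done - first_byte) * 1000:.0f} ms")
print(f"total: {(done - start) * 1000:.0f} ms")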

Solution 3:

Measure with ICMP first if at all possible. ICMP tests typically use a very small payload by default, do not use a three-way handshake, and do not have to interact with another application up the stack the way HTTP does. Whatever the case, it is of the utmost importance that HTTP results not get mixed up with ICMP results. They are apples and oranges.
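
As a sketch of collecting that clean ICMP baseline, something like the following works. It shells out to the system ping with Linux/macOS-style flags, and the host is a placeholder:

import re
import subprocess

HOST = "example.com"  # placeholder: substitute the far-end host

result = subprocess.run(["ping", "-c", "10", HOST],  # 10 probes; Windows uses -n
                        capture_output=True, text=True)

# Linux prints "rtt min/avg/max/mdev = a/b/c/d ms"; macOS prints
# "round-trip min/avg/max/stddev = ...". Both match this pattern.
m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)", result.stdout)
if m:
    print(f"ICMP min/avg/max RTT: {m.group(1)}/{m.group(2)}/{m.group(3)} ms")
else:
    print("no RTT summary; raw output:", result.stderr or result.stdout)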

Going by the answer of Rich Adams and using the site that he recommended, you can see that on AT&T's backbone, it takes 72 ms for ICMP traffic to move between their SF and NY endpoints. That is a fair number to go by, but you must keep in mind that this is on a network that is completely controlled by AT&T. It does not take into account the transition to your home or office network.

If you do a ping against that endpoint from your source network, you should see something not too far off 72 ms (maybe +/- 20 ms). If so, you can probably assume that the network path between the two of you is okay and running within normal ranges. If not, don't panic; measure from a few other places. It could be your ISP.

Assuming that passed, your next step is to tackle the application layer and determine whether there is anything wrong with the additional overhead you are seeing in your HTTP requests. This can vary from app to app due to hardware, OS, and application stack, but since you have roughly identical equipment on both the East and West coasts, you could have East coast users hit the West coast servers and West coast users hit the East coast servers. If both sites are configured properly, I would expect all of the numbers to be more or less equal, which would demonstrate that what you are seeing is pretty much par for the course.

If those HTTP times have a wide variance, I would not be surprised if there was a configuration issue on the slower performing site.

Now, once you are at this point, you can attempt some more aggressive optimization on the app side to see whether those numbers can be reduced at all. For example, if you are using IIS 7, are you taking advantage of its caching capabilities, etc.? Maybe you could win something there, maybe not. When it comes to tweaking low-level items such as TCP windows, I am very skeptical that it would have much of an impact for something like Stack Overflow. But hey - you won't know until you try it and measure.

Solution 4:

Several of the answers here are using ping and traceroute for their explanations. These tools have their place, but they are not reliable for network performance measurement.

In particular, (at least some) Juniper routers send processing of ICMP events to the control plane of the router. This is MUCH slower than the forwarding plane, especially in a backbone router.

There are other circumstances where the ICMP response can be much slower than a router's actual forwarding performance. For instance, imagine an all-software router (no specialized forwarding hardware) that is at 99% of CPU capacity but is still moving traffic fine. Do you want it to spend a lot of cycles processing traceroute responses, or forwarding traffic? Processing the response gets a very low priority.

As a result, ping/traceroute give you reasonable upper bounds - things are going at least that fast - but they don't really tell you how fast real traffic is going.

In any event -

Here's an example traceroute from the University of Michigan (central US) to Stanford (west coast US). (It happens to go by way of Washington, DC (east coast US), which is 500 miles in the "wrong" direction.)

% traceroute -w 2
traceroute to (, 64 hops max, 52 byte packets
 1  * * *
 2  * * *
 3 (  3.808 ms  4.225 ms  2.223 ms
 4 (  1.372 ms  1.281 ms  1.485 ms
 5 (  1.784 ms  0.874 ms  0.900 ms
 6 (  2.443 ms  2.412 ms  2.957 ms
 7 (  107.269 ms  61.849 ms  47.859 ms
 8 (  28.267 ms  28.756 ms  28.938 ms
 9 (  52.075 ms  52.156 ms  88.596 ms
10  * * (  496.838 ms
11 (  76.537 ms  78.948 ms  75.010 ms
12 (  82.151 ms  82.304 ms  82.208 ms
13 (  82.504 ms  82.295 ms  82.884 ms
14 (  82.859 ms  82.888 ms  82.930 ms
15  * * *
16  * * *
17 (  83.136 ms  83.288 ms  83.089 ms

In particular, note the time difference between the traceroute results from the wash router and the atla router (hops 7 and 8). The network path goes first to wash and then to atla; wash takes 50-100ms to respond, while atla takes about 28ms. Clearly atla is farther away, but its traceroute results suggest it's closer.
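
As a rough way to spot this pattern, here is a small sketch (assuming the common "N  host (ip)  a ms  b ms  c ms" traceroute output format) that averages each hop's probe times and flags hops answering slower than a later hop, which usually indicates slow ICMP handling rather than a slow path. Feed it traceroute output on stdin, e.g. traceroute host | python3 check_hops.py (the script name is hypothetical):

import re
import sys

# Average the per-probe times on each hop line.
hops = []
for line in sys.stdin:
    times = [float(t) for t in re.findall(r"([\d.]+) ms", line)]
    m = re.match(r"\s*(\d+)", line)
    if m and times:
        hops.append((int(m.group(1)), sum(times) / len(times)))

# A hop that averages slower than some later hop is answering ICMP slowly;
# real traffic cannot take longer to reach hop N than hop N+1.
for i, (hop, avg) in enumerate(hops):
    later = [a for _, a in hops[i + 1:]]
    if later and avg > min(later):
        print(f"hop {hop}: avg {avg:.1f} ms, but a later hop averages "
              f"{min(later):.1f} ms -- slow ICMP responder, not a slow path")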

See for lots of info on network measurement. (Disclaimer: I used to work for Internet2.) Also see:

To add some specific relevance to the original question: as you can see, I had an 83 ms round-trip ping time to Stanford, so we know the network can go at least that fast.

Note that the research & education network path that I took on this traceroute is likely to be faster than a commodity internet path. R&E networks generally overprovision their connections, which makes buffering in each router unlikely. Also, note the long physical path, longer than coast-to-coast, although clearly representative of real traffic.

Michigan -> Washington, DC -> Atlanta -> Houston -> Los Angeles -> Stanford

Solution 5:

I'm seeing consistent differences, and I'm sitting in Norway:

serverfault       careers
  509ms            282ms
  511ms            304ms
  488ms            295ms
  480ms            274ms
  498ms            278ms

This was measured with the scientifically accurate and proven method of using the resources view of Google Chrome and just repeatedly refreshing each link.

Traceroute to serverfault

Tracing route to []
over a maximum of 30 hops:

  1    <1 ms     1 ms    <1 ms
  2     2 ms     1 ms     1 ms []
  3     1 ms     1 ms     1 ms
  4     1 ms     2 ms     1 ms []
  5    14 ms    14 ms    14 ms
  6    13 ms    13 ms    14 ms []
  7    22 ms    21 ms    21 ms []
  8    21 ms    20 ms    20 ms []
  9    21 ms    21 ms    20 ms []
 10   107 ms   107 ms   107 ms
 11   107 ms   106 ms   105 ms []
 12   106 ms   106 ms   107 ms []
 13   129 ms   135 ms   134 ms []
 14   183 ms   183 ms   184 ms []
 15   189 ms   189 ms   189 ms []
 16   193 ms   189 ms   189 ms
 17   181 ms   181 ms   180 ms []
 18   182 ms   182 ms   182 ms []
 19   195 ms   195 ms   194 ms []
 20   197 ms   197 ms   197 ms []
 21   188 ms   187 ms   189 ms []
 22   198 ms   198 ms   198 ms []
 23   198 ms   197 ms   197 ms []

Trace complete.

Traceroute to careers

Tracing route to []
over a maximum of 30 hops:

  1     1 ms     1 ms     1 ms
  2     2 ms     1 ms    <1 ms []
  3     1 ms     1 ms     1 ms
  4     1 ms     1 ms     2 ms []
  5    12 ms    13 ms    13 ms
  6    13 ms    14 ms    14 ms []
  7    21 ms    21 ms    21 ms []
  8    21 ms    20 ms    20 ms []
  9   116 ms   117 ms   122 ms []
 10   121 ms   122 ms   121 ms []
 11     *        *        *     Request timed out.

Unfortunately, it then starts going into a loop or some such, and continues giving stars and timeouts until the 30-hop limit, then finishes.

Note: the traceroutes are from a different host than the timings at the start; I had to RDP to my hosted server to execute them.