Why does the first network call take more time than subsequent ones?

After a few experiments, I found out that the Content Download phase (one of the browser's request phases) speeds up 1.5-2 times on subsequent requests. This looks like an effect of the TCP Slow Start algorithm.
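To check this yourself, here is a minimal sketch using the browser's Resource Timing API (the URL is a placeholder; for cross-origin resources the detailed timings are only exposed when the server sends a Timing-Allow-Origin header):

```typescript
// Minimal sketch: measure the "Content Download" phase (responseStart → responseEnd)
// of the same resource fetched twice. The HTTP cache is bypassed so the body is
// downloaded both times; the second fetch still reuses the warm TCP connection.
async function measureContentDownload(url: string): Promise<number> {
  const res = await fetch(url, { cache: "no-store" });
  await res.arrayBuffer(); // drain the body so the timing entry is complete
  const entries = performance.getEntriesByName(url) as PerformanceResourceTiming[];
  const last = entries[entries.length - 1];
  return last.responseEnd - last.responseStart; // Content Download, in milliseconds
}

(async () => {
  const url = "https://example.com/big-file.js"; // placeholder asset
  const first = await measureContentDownload(url);
  const second = await measureContentDownload(url);
  console.log(`first: ${first.toFixed(1)} ms, second: ${second.toFixed(1)} ms`);
})();
```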

As it states:

modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server

That might be the reason the first request is slower than the others.

Also, @Vishal Vijay made a good addition:

Making the initial connection handshake to the server takes time (DNS Lookup + Initial connection + SSL). Browsers create persistent connections for HTTP requests and keep them open for some time. If any request comes in for the same domain within that time, the browser will try to reuse the same connection for a faster response.


In some cases a server-side caching mechanism might be what makes subsequent requests faster, but let's focus on the browser-side behavior here.
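You can also verify connection reuse from the browser side: with the Resource Timing API, a request that rides on an already-open persistent connection reports zero-length DNS and connect phases. A rough sketch, assuming a same-origin URL (or one whose server sends Timing-Allow-Origin):

```typescript
// Rough sketch: report DNS and connect time for every timing entry of a given URL.
// A reused persistent connection shows up as 0 ms for both phases.
function logConnectionStats(url: string): void {
  const entries = performance.getEntriesByName(url) as PerformanceResourceTiming[];
  for (const e of entries) {
    const dns = e.domainLookupEnd - e.domainLookupStart;
    const connect = e.connectEnd - e.connectStart; // includes TLS when the URL is HTTPS
    console.log(
      `${url}: DNS ${dns.toFixed(1)} ms, connect ${connect.toFixed(1)} ms ` +
        `(${connect === 0 ? "connection reused" : "new connection"})`
    );
  }
}
```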

When you hover over the waterfall 'blocks' in the Chrome DevTools Network panel, you will see the timing details of each phase.

Here is a quick reference for each of the phases (from Google Developers); a sketch mapping them onto the Resource Timing API follows the list:

  • Queueing. The browser queues requests when:
    • There are higher priority requests.
    • There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
    • The browser is briefly allocating space in the disk cache.
  • Stalled. The request could be stalled for any of the reasons described in Queueing.
  • DNS Lookup. The browser is resolving the request's IP address.
  • Proxy negotiation. The browser is negotiating the request with a proxy server.
  • Request sent. The request is being sent.
  • ServiceWorker Preparation. The browser is starting up the service worker.
  • Request to ServiceWorker. The request is being sent to the service worker.
  • Waiting (TTFB). The browser is waiting for the first byte of a response. TTFB stands for Time To First Byte. This timing includes 1 round trip of latency and the time the server took to prepare the response.
  • Content Download. The browser is receiving the response.
  • Receiving Push. The browser is receiving data for this response via HTTP/2 Server Push.
  • Reading Push. The browser is reading the local data previously received.
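The same phases can be read programmatically. The sketch below is an approximate mapping onto the standard PerformanceResourceTiming fields; DevTools derives its waterfall from richer internal data, so the numbers line up closely but not exactly, and the detailed fields are again only exposed for same-origin resources or when Timing-Allow-Origin is set:

```typescript
// Approximate mapping of the DevTools waterfall phases onto PerformanceResourceTiming.
interface WaterfallPhases {
  queueingAndStalled: number; // time before the browser starts resolving/connecting
  dnsLookup: number;
  initialConnection: number;  // TCP handshake (plus TLS when HTTPS)
  ssl: number;                // TLS portion of the connection setup
  waitingTTFB: number;        // request sent until the first byte of the response
  contentDownload: number;
}

function waterfallPhases(e: PerformanceResourceTiming): WaterfallPhases {
  return {
    queueingAndStalled: e.domainLookupStart - e.startTime,
    dnsLookup: e.domainLookupEnd - e.domainLookupStart,
    initialConnection: e.connectEnd - e.connectStart,
    ssl: e.secureConnectionStart > 0 ? e.connectEnd - e.secureConnectionStart : 0,
    waitingTTFB: e.responseStart - e.requestStart,
    contentDownload: e.responseEnd - e.responseStart,
  };
}

// Example: log the phases for every resource loaded so far.
for (const entry of performance.getEntriesByType("resource") as PerformanceResourceTiming[]) {
  console.log(entry.name, waterfallPhases(entry));
}
```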

So what's the difference between the first and subsequent requests in a traditional HTTP/1.1 scenario?

  • DNS Lookup: It might take more time to resolve DNS for the first request. Subsequent requests resolve much faster thanks to the browser's DNS cache.
  • Waiting (TTFB): The first request has to establish a TCP connection to the server. Thanks to the HTTP keep-alive mechanism, subsequent requests to the same server reuse the existing connection, skipping the TCP handshake (and TLS negotiation for HTTPS) and saving those round trips compared to the first request (see the sketch after this list).
  • Content Download: Due to TCP slow start, the first request needs more time to download content. Subsequent requests reuse the TCP connection after its congestion window has already scaled up, so the content downloads much faster than on the first request.
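The same effect is easy to reproduce outside the browser. Below is a hypothetical Node.js sketch using a keep-alive https.Agent: the first request pays for DNS + TCP + TLS and starts with a cold congestion window, while the second reuses the warm socket (example.com is a placeholder):

```typescript
import https from "node:https";

// Keep-alive agent: sockets stay open after a response and are reused for the
// next request to the same host, skipping DNS + TCP + TLS setup.
const agent = new https.Agent({ keepAlive: true });

function timedGet(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    https
      .get(url, { agent }, (res) => {
        res.on("data", () => {}); // drain the body
        res.on("end", () => resolve(Number(process.hrtime.bigint() - start) / 1e6));
      })
      .on("error", reject);
  });
}

(async () => {
  const url = "https://example.com/"; // placeholder
  console.log(`first:  ${(await timedGet(url)).toFixed(1)} ms`); // new connection
  console.log(`second: ${(await timedGet(url)).toFixed(1)} ms`); // reused socket
  agent.destroy();
})();
```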

Thus, in general, subsequent requests should be much faster than the first one. This actually leads to a common network optimization strategy: use as few domains as possible for your website, so that more requests can reuse the same warmed-up connections.

HTTP/2 goes further and introduces multiplexing, so a single TCP connection can be reused even more effectively. That's why HTTP/2 gives a performance boost in the modern front-end world, where we deploy tons of small assets to CDN servers.
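For illustration, here is a rough Node.js sketch of that multiplexing: three requests travel as independent streams over a single TCP/TLS connection (the host and paths are placeholders, and the server must actually support HTTP/2):

```typescript
import http2 from "node:http2";

// One session = one TCP/TLS connection; each request() call opens a new stream on it.
const session = http2.connect("https://example.com"); // placeholder host

function get(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const stream = session.request({ ":path": path });
    let body = "";
    stream.setEncoding("utf8");
    stream.on("data", (chunk) => (body += chunk));
    stream.on("end", () => resolve(body));
    stream.on("error", reject);
  });
}

(async () => {
  // The three fetches run concurrently as separate streams on the same connection.
  const [js, css, svg] = await Promise.all([get("/app.js"), get("/app.css"), get("/logo.svg")]);
  console.log(js.length, css.length, svg.length);
  session.close();
})();
```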