Is there some way to guess the speed of a client's connection?

I think you should not measure the speed or throughput.

A first guess could be the client's browser. There are many different desktop browsers, but they are generally not the same as the browsers used on mobile devices, so the user agent gives a rough hint about the kind of device (and connection) on the other end.

It is easy to check which browser your users are using, for example by inspecting the user-agent string.
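As a minimal client-side sketch (the regex is just a common heuristic and an assumption on my part; user-agent strings vary a lot, so treat a match as a hint, not a guarantee):

    // Rough check: does the user agent look like a mobile device?
    // The pattern is a common heuristic, not an exhaustive list.
    function looksLikeMobile() {
      return /Mobi|Android|iPhone|iPad|IEMobile|Opera Mini/i.test(navigator.userAgent);
    }

    if (looksLikeMobile()) {
      // e.g. flag the page so CSS/scripts can load the lightweight variant
      document.documentElement.classList.add("lightweight");
    }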

Still, you should provide an option to switch between the lightweight and the full content, because your guess could be wrong.


First thing: running over SSL (HTTPS) will avoid a lot of proxy nonsense. It'll also stop intermediaries from doing things like transparent compression (which may make HTML, CSS, etc. load faster, but won't help for already-compressed data and would skew size-based measurements).

The time to load a page is latency + size ÷ bandwidth. Even if latency is unknown, measuring a small file and a large file can give you the bandwidth:

Let L be latency and B the time to transfer one kilobyte (the reciprocal of bandwidth), both unknown.
Let t₁ and t₂ be the measured download times.
In this example, the two sizes are 128 KB and 256 KB.

    t₁ = L + B × 128                             // sure would be nice if SO had Τεχ
    t₂ = L + B × 256
    t₂ − t₁ = (L + B × 256) − (L + B × 128)
            = L + B × 256 − L − B × 128
            = 256B − 128B
            = 128B

So, you can see that if you divide the difference in times by the difference in page sizes, you get B, and the bandwidth is its reciprocal. For example, if t₂ − t₁ = 0.4 s for the extra 128 KB, then B = 0.4/128 s per KB, i.e. a bandwidth of 128/0.4 = 320 KB/s. Taking a single measurement may yield weird results due to latency and bandwidth not being constant. Repeating a few times (and throwing out outliers and absurd [e.g., negative] values) will converge on the true average bandwidth.

You can do these measurements in JavaScript easily, in the background, using any AJAX framework: get the current time, send off the request, then note the clock time when the response is received. The requests themselves should be the same size, so that the overhead of sending them is just part of the latency. You'll probably want to use different hosts, though, to defeat persistent connections. Either that, or configure your server to refuse persistent connections, but only for your test files.
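As a sketch of that measurement with fetch (the test-file URLs and sizes are assumptions; serve any two static files of known size, and note this omits the separate-hosts trick mentioned above):

    // Time the download of a file of known size; returns seconds elapsed.
    async function timeDownload(url) {
      const start = performance.now();
      // Cache-busting query string + no-store so we measure the network, not the cache.
      const resp = await fetch(url + "?nocache=" + Math.random(), { cache: "no-store" });
      await resp.arrayBuffer();                             // wait for the full body
      return (performance.now() - start) / 1000;
    }

    async function estimateBandwidth() {
      const t1 = await timeDownload("/speedtest/128k.bin"); // 128 KB file (hypothetical)
      const t2 = await timeDownload("/speedtest/256k.bin"); // 256 KB file (hypothetical)
      // Latency cancels out: (t2 - t1) is the time for the extra 128 KB.
      return 128 / (t2 - t1);                               // bandwidth in KB/s
    }

Repeat the estimate a few times and discard outliers and negative values, as described above.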

I suppose I'm actually abusing the word latency a little: as used here it includes all of the constant overhead (e.g., the time to send the request). It's the latency from wanting the payload to receiving its first byte.


This is a great question. I haven't run across any techniques for estimating client speed from a browser before, but I do have an idea. I haven't put more than a couple of minutes of thought into this, but hopefully it'll give you some ideas. Also, please forgive my verbosity:

First, there are two things to consider when dealing with client-server performance: throughput and latency. Generally, a mobile client is going to have low bandwidth (and therefore low throughput) compared to a desktop client. Additionally, the mobile client's connection may be more error-prone and therefore have higher latency. However, in my limited experience, high latency does not mean low throughput, and likewise, low latency does not mean high throughput.

Thus, you may need to distinguish between latency and throughput. Suppose the client sends a timestamp (let's call it "A") with each HTTP request and the server simply echoes it back. The client can then subtract this returned timestamp from its current time to estimate how long the request took to make the round trip. This time includes almost everything: network latency, plus the time it took the server to fully receive your request.

Now, suppose the server sends back the timestamp "A" first, in the response headers, before sending the response body. Also assume you can read the server's response incrementally (e.g., via non-blocking I/O; there are a variety of ways to do this). This means you can get your echoed timestamp before reading the body of the response. At this point, the client time "B" minus the request timestamp "A" is an approximation of your latency. Save this, along with the client time "B".

Once you've finished reading the response, the amount of data in the response body divided by the new client time "C" minus the previous client time "B" is an approximation of your throughput. For example, suppose C − B = 100 ms and you've read 100 KB of data; then your throughput is 1,000 KB/s.
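Here's a rough sketch of that flow in browser JavaScript, assuming a hypothetical endpoint that echoes the "ts" query parameter back in an "X-Echo-Ts" response header (with fetch, the promise resolves as soon as the headers arrive, which stands in for the incremental read described above):

    async function measure(url) {
      const a = performance.now();                 // timestamp "A"
      const resp = await fetch(url + "?ts=" + a);  // resolves when headers arrive
      const b = performance.now();                 // timestamp "B"
      const latencyMs = b - a;                     // request + time to first header
      // The echoed header isn't strictly needed here, since "a" is still in
      // scope, but it's available for correlating stateless requests:
      // const echoed = Number(resp.headers.get("X-Echo-Ts"));
      const bytes = (await resp.arrayBuffer()).byteLength;
      const c = performance.now();                 // timestamp "C"
      const throughputKBs = (bytes / 1024) / ((c - b) / 1000);
      return { latencyMs, throughputKBs };
    }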

Once again, mobile client connections are error-prone and tend to change in strength over time. Thus, you probably don't want to test the throughput just once. In fact, you might as well measure the throughput of every response and keep a moving average of the client's throughput. This will reduce the likelihood that one unusually bad throughput measurement causes the client's quality to be downgraded, or vice versa.
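An exponential moving average is one cheap way to keep that running estimate (the smoothing factor is a made-up starting point to tune):

    const ALPHA = 0.3;          // smoothing factor (assumed); smaller = smoother
    let avgThroughput = null;

    function recordThroughputSample(sample) {
      if (!(sample > 0)) return avgThroughput;   // throw out absurd values
      avgThroughput = (avgThroughput === null)
        ? sample
        : ALPHA * sample + (1 - ALPHA) * avgThroughput;
      return avgThroughput;
    }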

Provided this method works, all you need to do is decide on a policy for which content the client gets. For example, you could start in "low quality" mode, and if the client sustains good enough throughput for some period, upgrade them to high-quality content. Then, if their throughput drops back down, downgrade them to low quality again.
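As a sketch of such a policy (the thresholds are made-up numbers), with a gap between the upgrade and downgrade cut-offs so the client doesn't flap between modes:

    let mode = "low";

    // Hypothetical thresholds: upgrade above 200 KB/s, downgrade below 100 KB/s.
    function updateMode(avgThroughputKBs) {
      if (mode === "low" && avgThroughputKBs > 200) mode = "high";
      else if (mode === "high" && avgThroughputKBs < 100) mode = "low";
      return mode;
    }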

EDIT: clarified some things and added throughput example.


Yes, I know the response is DON'T.

The reason I'm reviving this ancient discussion is twofold:

Firstly, technology has changed; what wasn't easy 9 years ago might be now.

Secondly, I have a client with a website dating back over 20 years, virtually unchanged. He declined the offer of a (very inexpensive) rewrite because it works and it's very fast. It's only a few pages and the content is still relevant (he did ask me to delete the FAX number!); his view was "if it ain't broke, don't fix it". It was written in pure HTML for a 640px-wide screen in the days of dial-up modem connections, and some people still use those. The fixed screen width means it's usable on mobile/tablet, especially in landscape mode, and it doesn't look too bad on a big screen as there's a tiled background.

I ran the Google PageSpeed checker and it scored only 99%, so I tweaked the .htaccess file and it now gets 100%. Most of us are spoilt with fast broadband, but some rural users get very disappointing speeds, and those users can't be happy when they reach a multi-megabyte page. I thought I might try an experiment on another site: if I could detect that a user was on a dial-up connection, I could see what happens when I serve those users a simple, lightweight alternative.