What's the meaning of defining "burst" with "nodelay" option?

Solution 1:

I find the limit_req documentation clear enough.

burst is documented that way:

Excessive requests are delayed until their number exceeds the maximum burst size [...]

nodelay is documented that way:

If delaying of excessive requests while requests are being limited is not desired, the parameter nodelay should be used

Requests are limited to fit the defined rate. If requests come in at a higher rate, no more than the defined number of requests per time unit will be served. You then need to decide what to do with the excess requests:

  • By default (no burst, no nodelay), excess requests are denied with an HTTP 503 error.
  • With burst, up to the defined number of excess requests are placed in a waiting queue, but they are not processed faster than the defined rate.
  • With burst and nodelay, there is no waiting queue: a burst of requests, up to the burst size, is processed immediately (see the configuration sketch after this list).
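
As a rough sketch (the zone name, zone size, and rate below are arbitrary placeholders, not taken from the question), the three cases map onto configurations like these:

    # In the http {} block: track clients by address, allow 10 requests/second.
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    server {
        location /default/ {
            # No burst, no nodelay: excess requests are rejected immediately
            # (503 by default, configurable via limit_req_status).
            limit_req zone=one;
        }

        location /burst/ {
            # Up to 20 excess requests are queued and released at 10 r/s;
            # anything beyond that is rejected.
            limit_req zone=one burst=20;
        }

        location /burst_nodelay/ {
            # Up to 20 excess requests are served immediately instead of queued;
            # further requests are rejected until slots free up at 10 r/s.
            limit_req zone=one burst=20 nodelay;
        }
    }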

Solution 2:

The comments on the original answer seem wrong.

The question at hand is what the difference is between, say, rate=6r/s burst=0 and rate=1r/s burst=5 nodelay.

The answers do a great job of explaining the difference when the nodelay option is NOT present -- in that case, requests queue up with burst, and are 503'd without burst.

The original answer seems spot on -- with nodelay, the burst requests get processed immediately. Therefore, the only implication is that there is NO difference between specifying burst + nodelay vs. just specifying the higher rate with burst=0 in the first place.

Therefore, to answer the OP's question more succinctly: the meaning of burst when nodelay is specified is the same as just specifying a larger rate without burst.
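
To make that comparison concrete, the two variants under discussion would look roughly like this (the zone names and sizes are placeholders):

    # Variant A: higher rate, no burst.
    limit_req_zone $binary_remote_addr zone=variant_a:10m rate=6r/s;

    # Variant B: low rate, burst served immediately.
    limit_req_zone $binary_remote_addr zone=variant_b:10m rate=1r/s;

    server {
        location /a/ {
            limit_req zone=variant_a;                  # rate=6r/s, burst=0
        }
        location /b/ {
            limit_req zone=variant_b burst=5 nodelay;  # rate=1r/s, burst=5, nodelay
        }
    }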


Solution 3:

With burst and nodelay specified, I find it easier to understand the mechanism like this (the reverse of how it is usually explained):

  1. You allow a maximum of burst requests. With $binary_remote_addr as the key, that is the maximum number of requests accepted from a given address. Every request increments an internal counter. When the counter reaches burst, all additional requests are denied (and the counter is not increased beyond the burst value).
  2. This counter is continuously decremented at the rate specified with rate.

This logic suggests that it makes perfect sense to specify a high burst value (e.g. 100 or more) and a low rate value (even something like 2r/s). This handles normal browsing (a batch of parallel requests followed by a quiet period) better, while still protecting against a sustained stream of bot requests.
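
A configuration along those lines might look like this (the zone name, zone size, and exact numbers are only illustrative):

    # In the http {} block: track clients by address, allow 2 requests/second sustained.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=2r/s;

    server {
        location / {
            # Accept up to 100 requests at once from a single client (a page load
            # with many parallel asset requests), then free up slots at 2 r/s.
            limit_req zone=per_ip burst=100 nodelay;
        }
    }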