What is "backlog" in TCP connections?

When a TCP connection is being established, the so-called three-way handshake is performed. Both sides exchange a few packets, and once that is done the connection is considered complete and ready to be used by the application.

However, this three-way handshake takes some time, and during that time the connection sits in a queue; that queue is the backlog. You can set the maximum number of such incomplete parallel connections via the .listen(n) call (note that according to the POSIX standard the value is only a hint and may be ignored entirely). If someone tries to establish a connection beyond the backlog limit, the other side will refuse it.

So the backlog limit is about pending connections, not established ones.

A higher backlog limit will be better in most cases. Note that the maximum limit is OS dependent; for example, cat /proc/sys/net/core/somaxconn gives 128 on my Ubuntu machine.
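To see that cap from Python itself, here's a quick sketch; socket.SOMAXCONN mirrors the C constant, and the /proc path is Linux-specific, so treat the output as platform-dependent.

import socket

# Compile-time hint exposed by the socket module (mirrors the C SOMAXCONN).
print("socket.SOMAXCONN:", socket.SOMAXCONN)

# Runtime cap on Linux; other platforms won't have this /proc entry.
try:
    with open("/proc/sys/net/core/somaxconn") as f:
        print("net.core.somaxconn:", f.read().strip())
except FileNotFoundError:
    print("no /proc interface on this platform")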


The function of the parameter appears to be to limit the number of incoming connection requests a server will retain in a queue, assuming it can serve the current request plus the small number of queued pending requests in a reasonable amount of time while under high load. Here's a good paragraph I came across that lends a little context to this argument...

Finally, the argument to listen tells the socket library that we want it to queue up as many as 5 connect requests (the normal max) before refusing outside connections. If the rest of the code is written properly, that should be plenty.

https://docs.python.org/3/howto/sockets.html#creating-a-socket

There's text earlier in the document suggesting that clients should dip in and out of a server so you don't build up a long queue of requests in the first place...

When the connect completes, the socket s can be used to send in a request for the text of the page. The same socket will read the reply, and then be destroyed. That’s right, destroyed. Client sockets are normally only used for one exchange (or a small set of sequential exchanges).

The linked HOWTO guide is a must-read when getting up to speed on network programming with sockets; it really brings some big-picture themes into focus. How the server socket manages this queue at the implementation level is another story, and probably an interesting one. I suppose the motivation for the design is more telling: without it, the barrier to inflicting a denial-of-service attack would be very low.

As for the reason the minimum value is 0 rather than 1, keep in mind that 0 is still a valid value, meaning queue up nothing. That is essentially saying: let there be no request queue at all, and reject connections outright if the server socket is currently serving a connection. The fact that a connection is actively being served should always be kept in mind in this context; it's the only reason a queue would be of interest in the first place.

This brings us to the next question, regarding a preferred value. This is a design decision: do you want to queue up requests or not? If so, you might pick a value you feel is warranted based on expected traffic and known hardware resources. I doubt there's anything formulaic about picking a value. It also makes me wonder how lightweight a request is in the first place, and whether you'd really face a penalty in queuing anything up on the server.
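If you want a feel for the behaviour empirically, the little sketch below listens with a given backlog and deliberately never calls accept(), then fires off more client connections than the queue can hold. The BACKLOG value, addresses and timeout are all arbitrary choices for the experiment, and the exact outcome is OS-dependent: Linux typically lets roughly backlog + 1 handshakes complete and silently drops the rest (so the excess clients time out), while other stacks may refuse them outright.

import socket

BACKLOG = 1  # try different values (0, 1, 5, ...) and compare

# A listener that never accepts, so every completed handshake
# stays parked in the kernel's queue for this socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # let the OS pick a free port
server.listen(BACKLOG)
addr = server.getsockname()

clients = []
for i in range(BACKLOG + 5):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(1.0)              # don't wait forever on a full queue
    try:
        c.connect(addr)
        print(f"connection {i}: queued by the kernel")
    except (ConnectionRefusedError, socket.timeout) as exc:
        print(f"connection {i}: {exc!r}")
    clients.append(c)

for c in clients:
    c.close()
server.close()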


UPDATE

I wanted to substantiate the comments from user207421 and went to look up the Python source. Unfortunately this level of detail is not to be found in socket.py, but rather in socketmodule.c#L3351-L3382 as of hash 530f506.

The comments are very illuminating. I'll copy the source verbatim below and single out the clarifying comments here...

We try to choose a default backlog high enough to avoid connection drops for common workloads, yet not too high to limit resource usage.

and

If backlog is specified, it must be at least 0 (if it is lower, it is set to 0); it specifies the number of unaccepted connections that the system will allow before refusing new connections. If not specified, a default reasonable value is chosen.

/* s.listen(n) method */

static PyObject *
sock_listen(PySocketSockObject *s, PyObject *args)
{
    /* We try to choose a default backlog high enough to avoid connection drops
     * for common workloads, yet not too high to limit resource usage. */
    int backlog = Py_MIN(SOMAXCONN, 128);
    int res;

    if (!PyArg_ParseTuple(args, "|i:listen", &backlog))
        return NULL;

    Py_BEGIN_ALLOW_THREADS
    /* To avoid problems on systems that don't allow a negative backlog
     * (which doesn't make sense anyway) we force a minimum value of 0. */
    if (backlog < 0)
        backlog = 0;
    res = listen(s->sock_fd, backlog);
    Py_END_ALLOW_THREADS
    if (res < 0)
        return s->errorhandler();
    Py_RETURN_NONE;
}

PyDoc_STRVAR(listen_doc,
"listen([backlog])\n\
\n\
Enable a server to accept connections.  If backlog is specified, it must be\n\
at least 0 (if it is lower, it is set to 0); it specifies the number of\n\
unaccepted connections that the system will allow before refusing new\n\
connections. If not specified, a default reasonable value is chosen.");
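In practice this means that since Python 3.5 the argument can simply be omitted; a minimal usage sketch (the address and port are arbitrary):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8080))

# No argument: the C code above picks min(SOMAXCONN, 128) for us.
srv.listen()

# A negative value would be clamped to 0 rather than raising an error,
# i.e. srv.listen(-1) behaves like srv.listen(0).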

Going further down the rabbit hole into the external sources, I traced the following call from socketmodule...

 res = listen(s->sock_fd, backlog);

That call leads to socket.h and socket.c, using Linux as a concrete platform backdrop for discussion purposes.

/* Maximum queue length specifiable by listen.  */
#define SOMAXCONN   128
extern int __sys_listen(int fd, int backlog);

There's more info to be found in the man page

http://man7.org/linux/man-pages/man2/listen.2.html

int listen(int sockfd, int backlog);

And the corresponding description

listen() marks the socket referred to by sockfd as a passive socket, that is, as a socket that will be used to accept incoming connection requests using accept(2).

The sockfd argument is a file descriptor that refers to a socket of type SOCK_STREAM or SOCK_SEQPACKET.

The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.
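Translated to Python, those two failure modes surface as different exceptions. Here's a small sketch (try_connect is just a name made up for this example, and the address is arbitrary) showing what a client might see when the queue is full:

import socket

def try_connect(addr, timeout=3.0):
    # Depending on the platform, a full backlog shows up either as an
    # immediate ECONNREFUSED or (commonly on Linux) as a dropped SYN,
    # which the client only notices as a timeout after retransmissions.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(addr)
        return "connected"
    except ConnectionRefusedError:
        return "refused (ECONNREFUSED)"
    except socket.timeout:
        return "timed out (SYN likely dropped, retried, then gave up)"
    finally:
        s.close()

print(try_connect(("127.0.0.1", 8080)))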

One additional source identifies the kernel as being responsible for the backlog queue.

The second argument backlog to this function specifies the maximum number of connections the kernel should queue for this socket.

They briefly go on to relate how the unaccepted / queued connections are partitioned in the backlog (a useful figure is included on the linked source).

To understand the backlog argument, we must realize that for a given listening socket, the kernel maintains two queues:

An incomplete connection queue, which contains an entry for each SYN that has arrived from a client for which the server is awaiting completion of the TCP three-way handshake. These sockets are in the SYN_RCVD state (Figure 2.4).

A completed connection queue, which contains an entry for each client with whom the TCP three-way handshake has completed. These sockets are in the ESTABLISHED state (Figure 2.4).

When an entry is created on the incomplete queue, the parameters from the listen socket are copied over to the newly created connection. The connection creation mechanism is completely automatic; the server process is not involved.
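You can observe that automatism directly: in the sketch below the client's connect() returns before the server ever calls accept(), because the kernel completes the handshake and parks the connection in the completed queue on its own (the addresses and the sleep are arbitrary).

import socket
import time

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # returns once the handshake completes
print("client connected before the server called accept()")

time.sleep(1)                          # server is "busy"; the connection waits in the completed queue
conn, peer = server.accept()           # merely pulls the already-established connection off the queue
print("accepted connection from", peer)

conn.close()
client.close()
server.close()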


NOTE: This answer is framed without any background in Python, but the questions are language-agnostic, so they can be answered in general terms.

What are these queued connections?

In simple words, the backlog parameter specifies the number of pending connections the queue will hold.

When multiple clients connect to the server, the server holds the incoming requests in a queue. The clients are arranged in the queue, and the server processes their requests one by one as the queue advances. Connections of this kind are called queued connections.

Does it make any difference for client requests? (I mean is the server that is running with socket.listen(5) different from the server that is running with socket.listen(1) in accepting connection requests or in receiving data?)

Yes, the two cases are different. The first case would allow only 5 clients to be queued; with backlog=1, only 1 connection can be held in the queue, so further connection requests get dropped!

Why is the minimum value zero? Shouldn't it be at least 1?

I have no idea about Python, but as per this source, in C a backlog argument of 0 may allow the socket to accept connections, in which case the length of the listen queue may be set to an implementation-defined minimum value.

Is there a preferred value?

This question has no well-defined answer. I'd say it depends on the nature of your application, as well as on the hardware and software configuration. Again, as per the source, the backlog is silently limited to between 1 and 5, inclusive (again, in C).

Is this backlog defined for TCP connections only or does it apply for UDP and other protocols too?

No. Please note that there's no need to listen() or accept() for unconnected datagram sockets (UDP). This is one of the perks of using unconnected datagram sockets!
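For illustration, a minimal UDP exchange; note there is no listen() or accept() anywhere, the receiving socket just binds and reads (the addresses are arbitrary):

import socket

# "Server" side: bind and read datagrams; no listen()/accept() involved.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))

# "Client" side: just send a datagram to the bound address.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", server.getsockname())

data, sender = server.recvfrom(1024)
print(data, "from", sender)

client.close()
server.close()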

But do keep in mind that there are also TCP-based datagram socket implementations (called TCPDatagramSocket) which do take a backlog parameter.