`EINTR`: is there a rationale behind it?

It is difficult to do nontrivial work in a signal handler, since the rest of the program is in an unknown state and only async-signal-safe operations may be performed. Most signal handlers therefore just set a flag, which is later checked and acted on elsewhere in the program.
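
A minimal sketch of that pattern in C (the names `got_sigint` and `on_sigint` are illustrative, not from any particular codebase):

```c
#include <signal.h>

/* A volatile sig_atomic_t is the one object type the C standard
 * guarantees can be safely written from a signal handler. */
static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;   /* set the flag; everything else happens in the main loop */
}
```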

Reason for not restarting the system call automatically:

Imagine an application that receives data from a socket via the blocking recv() system call. In our scenario, data arrives very slowly, so the program spends a long time blocked in that call. The program has a signal handler for SIGINT that sets a flag (which is evaluated elsewhere), and SA_RESTART is set so that the system call restarts automatically. Suppose the program is blocked in recv(), waiting for data, but no data arrives. The user now presses Ctrl-C. The system call is interrupted, and the signal handler, which just sets the flag, is executed. Then recv() is restarted, still waiting for data. The event loop is stuck in recv() and never gets an opportunity to evaluate the flag and exit the program gracefully.
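
Concretely, the stuck scenario corresponds to installing the handler with the SA_RESTART flag, roughly like this sketch (reusing the hypothetical `on_sigint` from above):

```c
#include <signal.h>
#include <string.h>

/* Install on_sigint with SA_RESTART, so that blocking calls such as
 * recv() are transparently restarted after the handler returns:
 * exactly the behaviour that gets the event loop stuck. */
static void install_restarting_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGINT, &sa, NULL);
}
```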

With SA_RESTART not set:

In the above scenario, when SA_RESTART is not set, recv() fails with EINTR instead of being restarted. The system call returns, so the program can continue. Of course, the program should then check the flag (set by the signal handler) as early as possible and clean up or do whatever else it needs to do.
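
Put together, the EINTR-handling version might look like the following sketch (error handling trimmed; `sockfd` is assumed to be a connected socket, which is elided here):

```c
#include <errno.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;
static void on_sigint(int signo) { (void)signo; got_sigint = 1; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                /* no SA_RESTART: recv() fails with EINTR */
    sigaction(SIGINT, &sa, NULL);

    int sockfd = -1;                /* assumption: a connected socket, elided */
    char buf[4096];

    for (;;) {
        ssize_t n = recv(sockfd, buf, sizeof buf, 0);
        if (n < 0 && errno == EINTR) {
            if (got_sigint)         /* interrupted: evaluate the flag ... */
                break;              /* ... and leave the loop to clean up */
            continue;               /* some other signal: just retry recv() */
        }
        if (n <= 0)
            break;                  /* real error, or peer closed the socket */
        /* ... process n bytes of received data ... */
    }

    close(sockfd);                  /* graceful cleanup */
    return 0;
}
```

Note the retry loop: this is exactly the "check the error code and try the system routine again" that the next section's quote describes as the Unix tradeoff.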


Richard Gabriel wrote a paper, *The Rise of 'Worse is Better'*, which discusses this very design choice in Unix:

> Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called PC loser-ing because the PC is being coerced into loser mode, where 'loser' is the affectionate name for 'user' at MIT.
>
> The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.
>
> The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix; namely, implementation simplicity was more important than interface simplicity.