How to terminate Linux tee command without killing application it is receiving from

When tee terminates, the command feeding it will continue to run until it next attempts to write output. At that point it receives SIGPIPE (signal 13 on most systems) for writing to a pipe with no readers, and the default disposition for SIGPIPE is to terminate the process.

If you modify your script to trap SIGPIPE and take some appropriate action (such as ceasing to write output), then it can continue running after tee is terminated.
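As a minimal sketch of that approach (the heartbeat message and the loop body are placeholders, not anything from the original question):

#!/usr/bin/env bash
# Sketch: a producer that survives the death of its stdout reader.
logging=1
trap 'logging=0' PIPE                 # on SIGPIPE, just stop writing output

while sleep 1; do
  # ... the script's real work would happen here ...
  if (( logging )); then
    # With SIGPIPE trapped, a write to a dead pipe fails with a nonzero
    # status instead of killing the script; turn logging off on failure.
    echo "$(date): heartbeat" 2>/dev/null || logging=0
  fi
done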


Better yet, rather than killing tee at all, use logrotate with the copytruncate option for simplicity.

To quote logrotate(8):

copytruncate

    Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
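A sketch of what such a configuration might look like, borrowing the path and size from the example further down (both are assumptions, not logrotate defaults). Note that for copytruncate to cooperate with tee you'd want tee -a, so the file is opened in append mode and writes land at the new end of the file after truncation rather than at the old offset:

# Hypothetical /etc/logrotate.d/debug_log entry
/tmp/nginx/debug_log {
    size 100k        # rotate once the file exceeds ~100KB
    rotate 5         # keep five rotated copies
    copytruncate     # copy, then truncate in place; no need to signal tee
    missingok        # don't complain if the log doesn't exist yet
}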


Explaining The "Why"

In short: if failed writes didn't (by default) cause a program to exit, we'd have a mess. Consider find . | head -n 10 -- you don't want find to keep running, scanning the rest of your hard drive, after head has already taken the 10 lines it needed and exited.
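You can observe this default behavior directly: bash reports a process killed by a signal as 128 plus the signal number, so a writer killed by SIGPIPE (13) shows exit status 141:

# 'yes' writes forever; 'head' exits after one line, closing the pipe.
yes | head -n 1 >/dev/null
echo "${PIPESTATUS[0]}"   # prints 141 = 128 + 13: yes died of SIGPIPE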

Doing It Better: Rotate Inside Your Logger

Consider the following demonstrative example, which doesn't use tee at all:

#!/usr/bin/env bash

file=${1:-debug.log}                     # filename as 1st argument
max_size=${2:-100000}                    # max size as 2nd argument
size=$(stat --format=%s -- "$file" 2>/dev/null) || size=0  # GNU stat for current size; 0 if the file doesn't exist yet
exec >>"$file"                           # Open file for append

while IFS= read -r line; do              # read a line from stdin
  size=$(( size + ${#line} + 1 ))        # add line's length + 1 for the newline
  if (( size > max_size )); then         # and if it exceeds our maximum...
    mv -- "$file" "$file.old"            # ...rename the file away...
    exec >"$file"                        # ...and reopen a new file as stdout
    size=0                               # ...resetting our size counter
  fi
  printf '%s\n' "$line"                  # regardless, append to our current stdout
done

If run as:

/mnt/apps/start.sh 2>&1 | above-script /tmp/nginx/debug_log

...this will start out by appending to /tmp/nginx/debug_log, renaming it to /tmp/nginx/debug_log.old once more than 100KB of content has accumulated. Because the logger itself is doing the rotation, there's no broken pipe, no error, and no data-loss window when rotation takes place -- every line will be written to one file or the other.

Of course, implementing this in native bash (which reads and writes a single line at a time) is inefficient, but the above is an illustrative example. There are numerous programs available which will implement the above logic for you; an example invocation follows the list. Consider:

  • svlogd, the service logger from the Runit suite.
  • s6-log, an actively-maintained alternative from the skarnet suite.
  • multilog from DJB's daemontools, the granddaddy of this family of process supervision and monitoring tooling.
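For instance, a hypothetical s6-log invocation covering the same case as the script above (the size, archive count, and directory are assumptions; the log directory must already exist):

# s6-log reads stdin and maintains a rotated log directory itself:
#   s100000 -> rotate once the current file exceeds 100000 bytes
#   n5      -> keep at most 5 archived log files
/mnt/apps/start.sh 2>&1 | s6-log s100000 n5 /tmp/nginx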