Chain of piped commands, each outputting status to standard error

Try this:

  1. Remove carriage returns from each process's output. Sometimes you may need to substitute a newline for each carriage return instead. If color is not important, you can just pipe it through cat -v.
  2. Force line buffering. (Strictly speaking this is probably only needed for the last program in the pipe, but it helps me when debugging.)

{ stdbuf -oL prog1 | stdbuf -oL prog2 | stdbuf -oL prog3 | stdbuf -oL tr -d '\r' ;} 2> >(stdbuf -oL tr -d '\r'>&2)

When dealing with multiple programs, I usually add a tag/prefix to each one's output, so I know which line came from which program:

stdbuf -oL prog1 2> >(sed 's/\r//g; s/^/prog1: /' >&2) |
stdbuf -oL prog2 2> >(stdbuf -oL tr '\r' '\n' | sed 's/^/prog2: /' >&2) |
stdbuf -oL prog3 2> >(sed 's/\r//g; s/^/prog3: /' >&2) |
stdbuf -oL sed 's/\r//g; s/^/out: /'

For anything more complicated, where you really need to share the screen between multiple processes (and you are running commands interactively), use screen, tmux, or similar to split the screen between the processes, or write your own application that manages the terminal:

tmpd=$(mktemp -d)
mkfifo "$tmpd"/1 "$tmpd"/2
trap 'rm -r "$tmpd"' EXIT
# prog1 = seq 5
# prog2 = grep -v 3
# prog3 = cat
tmux new-session \; \
  send-keys "seq 5 > $tmpd/1" C-m \; \
  split-window -v \; \
  send-keys "grep -v 3 < $tmpd/1 > $tmpd/2" C-m \; \
  split-window -v \; \
  send-keys "cat < $tmpd/2" C-m \; \
  select-layout even-vertical \;

If, however, you aim to run the program non-interactively and still want to preserve a (large) amount of logging information in a non-volatile manner, I suggest using the system logger, which is designed for exactly this case. From the shell, use logger.

$ runlog() { stdbuf -oL "$@" 2> >(logger -p local3.info -t "$1") | stdbuf -oL tee >(logger -p local3.info -t "$1"); }; 
$ runlog seq 3 | runlog grep -v 3 | runlog cat
1
2
$ sudo journalctl -p info -b0 -tseq
-- Logs begin at Fri 2018-11-02 02:06:41 CET, end at Fri 2020-05-08 14:40:24 CEST. --
maj 08 14:39:41 leonidas seq[255641]: 1
maj 08 14:39:41 leonidas seq[255641]: 2
maj 08 14:39:41 leonidas seq[255641]: 3
$ sudo journalctl -p info -b0 -tgrep
-- Logs begin at Fri 2018-11-02 02:06:41 CET, end at Fri 2020-05-08 14:40:14 CEST. --
maj 08 14:39:41 leonidas grep[255647]: 1
maj 08 14:39:41 leonidas grep[255647]: 2

A more advanced version could use FIFOs and systemd drop-in units, which would let you really fine-tune the execution of each executable.
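As a rough, untested sketch of that idea, a drop-in for one pipeline stage might look like the following; the unit name, FIFO paths, and identifier are all hypothetical, but the `[Service]` directives are standard systemd ones (`file:` redirection of stdin requires a fairly recent systemd):

```ini
# Hypothetical drop-in: /etc/systemd/system/prog2.service.d/override.conf
[Service]
# Read this stage's input from one FIFO and write its output to the next
StandardInput=file:/run/pipeline/stage1.fifo
StandardOutput=file:/run/pipeline/stage2.fifo
# Keep stderr in the journal, tagged so `journalctl -t prog2` finds it
StandardError=journal
SyslogIdentifier=prog2
```

After a `systemctl daemon-reload`, each stage's stderr would end up in the journal much like with the logger approach above, but with per-unit resource and dependency control.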


Interesting ideas have been given here for this challenging question, but I haven't seen a complete solution so far, so I will try to give one. To achieve this, I first wrote three scripts corresponding to the pipeline prog1 | prog2 | prog3 the OP was asking about.

prog1, producing messages separated by \n on the error stream and generating numbers on the standard output stream:

#!/bin/bash

cmd=$(basename "$0")

seq 8 |
while ((i++ < 10)); do
  read line || break
  echo -e "$cmd: message $i to stderr" >&2 
  echo $line
  sleep 1
done

echo -e "$cmd: has no more input" >&2

prog2, producing messages separated by \r (overwriting its own output) on the error stream and transferring numbers from the standard input stream to the standard output stream:

#!/bin/bash

cmd=$(basename "$0")
el=$(tput el)

while ((i++ < 10)); do
  read line || break
  echo -en "$cmd: message $i to stderr${el}\r" >&2 
  echo $line
  sleep 2
done

echo -en "$cmd: has no more input${el}\r" >&2

and finally prog3, which reads from the standard input stream and writes messages to the error stream in the same way as prog2:

#!/bin/bash

cmd=$(basename "$0")
el=$(tput el)

while ((i++ < 10)); do
  read line || break
  echo -en "$cmd: message $i to stderr${el}\r" >&2 
  sleep 3
done

echo -en "$cmd: has no more input${el}\r" >&2

Instead of invoking these three scripts as

prog1 | prog2 | prog3

we will need a script that invokes the three programs with their error streams redirected to three FIFO special files (named pipes). But before launching that command, we first have to create the three special files and launch background processes that listen on them: every time a full line arrives, these processes print it in a dedicated area of the screen that I will call a taskbar.

The three taskbars sit at the bottom of the screen: the top one contains the messages prog1 writes to its error stream, the next one corresponds to prog2, and the bottom one contains the messages from prog3.

At the end, the FIFO files will have to be removed.

Now the tricky parts:

  1. I found no utility that reads, without buffering, a line ending in \r, so I had to change each \r into a \n before printing the message lines to the screen;
  2. some of the programs I was connecting with pipes were buffering their input or output, causing the messages not to be printed until the very end, which is obviously not the intended behaviour; to fix this, I had to use the stdbuf command together with the tr utility.

Putting it all together, I implemented the following script, which works as intended:

#!/bin/bash

echo -n "Test with clean output"
echo;echo;echo        # open three blank lines in the bottom of the screen
tput sc               # save the cursor position (bottom of taskbars)
l3=$(tput rc)                       # move cursor to the last line of the screen
l2=$(tput rc; tput cuu1)            # move cursor to the second line from the bottom
l1=$(tput rc; tput cuu1; tput cuu1) # move cursor to the third line from the bottom
el=$(tput el)         # clear to end of line
c3=$(tput setaf 1)    # set color to red
c2=$(tput setaf 2)    # set color to green
c1=$(tput setaf 3)    # set color to yellow
r0=$(tput sgr0)       # reset color

mkfifo error{1..3}    # create named pipes error1, error2 and error3

(cat error1 | stdbuf -o0 tr '\r' '\n' | 
  while read line1; do echo -en "$l1$c1$line1$el$r0"; done &)
(cat error2 | stdbuf -o0 tr '\r' '\n' | 
  while read line2; do echo -en "$l2$c2$line2$el$r0"; done &)
(cat error3 | stdbuf -o0 tr '\r' '\n' | 
  while read line3; do echo -en "$l3$c3$line3$el$r0"; done &)

./prog1 2>error1 | ./prog2  2>error2 | ./prog3 2>error3

wait

rm error{1..3}      # remove named pipes

tput rc             # put cursor below taskbars to finish gracefully
echo
echo "Test finished"

We added colors, a different one for each taskbar line, using strings produced by tput.

Enjoy.


The line-overwriting behavior is likely caused by \r (carriage return) characters being written to stderr by one or more of these programs. Here's a simple example you can try:

$ progress() {
  for i in {1..10}; do
    printf "$1\r" "$i" >&2; sleep 1
  done
  echo >&2
}
$ progress 'Num: %s'
# Should display a single line, `Num: N`, with `N` incrementing from 1-10

There are other ways to control the cursor, such as certain ANSI escape sequences, but \r is the simplest to implement. Unfortunately, as you've found, the behavior isn't very helpful when multiple programs are competing for that one line, or when \n characters are written at the same time:

$ ({ sleep $(( 1+(RANDOM%8) )); echo 'Interrupt!'; } & ) &&
  progress 'Num %s' | progress '%s Something Else'
# Should see "flickering" between the two progress tasks, and eventually an "interruption"
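For reference, the ANSI-escape variant mentioned above can be sketched like this; `ansi_progress` is a hypothetical stand-in for `progress`, using the widely supported `ESC[2K` (erase line) and `ESC[G` (cursor to column 1) sequences instead of a bare `\r`:

```shell
# A minimal sketch, not from the original programs: redraw a progress
# line in place using ANSI escapes instead of a carriage return.
ansi_progress() {
  for i in {1..5}; do
    # \033[2K erases the whole line, \033[G moves the cursor to column 1
    printf '\033[2K\033[G%s %s' "$1" "$i" >&2
    sleep 0.1
  done
  echo >&2
}

# ansi_progress 'Num:'   # redraws "Num: N" in place, like the \r version
```

The end result on a terminal looks the same as the \r trick, and it suffers from exactly the same clobbering problem when several programs write to the line at once.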

Unfortunately, there's no general-purpose way to disable this behavior, since each program independently prints \r characters and they aren't aware of one another. For exactly this reason, many programs have some mechanism for disabling this progress-style output, so the first thing to look for is a flag or setting that turns it off, like a --no_progress flag.

If these are programs you wrote or can change, you can check whether the program is attached to a TTY and only print progress output when it is. In Bash this can be done with the -t test, which might look something like this:

$ progress() {
  for i in {1..10}; do
    # Only print progress to stderr if stdout *and* stderr are attached to TTYs
    if [[ -t 1 ]] && [[ -t 2 ]]; then
      printf "$1\r" "$i" >&2; sleep 1
    fi
  done
  echo >&2
}

If neither of these approaches is feasible, one last option is to wrap the program(s) and preprocess their output (or simply suppress stderr with 2>/dev/null). Since you want to preserve both stdout and stderr it's a bit fiddly, but it can be done. The helper swaps stdout and stderr, cleans up the (now-swapped) stderr, for example by removing \r characters, and then swaps the streams back. Here's an example:

# Wraps a given command, replacing CR characters on stderr with newlines
$ no_CRs() {
  { "$@" 3>&1 1>&2 2>&3 | tr '\r' '\n'; } 3>&1 1>&2 2>&3
}

$ no_CRs progress 'Num %s' | no_CRs progress '%s Something Else'
# Should print both program's stderr on separate lines, as \r is no longer being emitted