How do I join two named pipes into a single input stream in Linux?

Solution 1:

Personally, my favorite (it requires bash and other things that are standard on most Linux distributions):

The details depend a lot on what the two commands output and how you want to merge them.

Contents of command1 and command2 one after the other in the output:

cat <(command1) <(command2) > outputfile

Or if both commands output alternate versions of the same data that you want to see side by side (I've used this with snmpwalk: numbers on one side and MIB names on the other):

paste <(command1) <(command2) > outputfile

Or if you want to compare the output of two similar commands (say, a find on two different directories):

diff <(command1) <(command2) > outputfile

Or if they're ordered outputs of some sort, merge them:

sort -m <(command1) <(command2) > outputfile

Or run both commands at once (could scramble things a bit, though):

cat <(command1 & command2) > outputfile

The <() operator sets up a named pipe (or /dev/fd file-handle reference) for each command, pipes that command's output into it, and passes the name on the command line. There's an equivalent with >(). For example, you could do:

command0 | tee >(command1) >(command2) >(command3) | command4

to simultaneously send the output of one command to four other commands.
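To see the tee fan-out in action, here is a minimal sketch; the awk commands and temp-file names are illustrative stand-ins, not part of the original answer:

```shell
#!/usr/bin/env bash
# One stream fanned out to two consumers via >(), results collected afterwards.
tmp=$(mktemp -d)
seq 1 3 | tee >(awk '{print "sq", $1*$1}' > "$tmp/sq") \
             >(awk '{print "db", $1*2}'  > "$tmp/db") > /dev/null
sleep 0.2   # crude sync: bash does not wait for >() subshells to finish
cat "$tmp/sq" "$tmp/db"   # squared stream, then doubled stream
rm -rf "$tmp"
```

Note the sleep: bash moves on as soon as tee exits, so without some synchronization the >() subshells may still be flushing their output.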

Solution 2:

You can append two streams to one another with cat, as gorilla shows.

You can also create a FIFO, direct the output of the commands to that, then read from the FIFO with whatever other program:

mkfifo ~/my_fifo
command1 > ~/my_fifo &
command2 > ~/my_fifo &
command3 < ~/my_fifo

This is particularly useful for programs that will only write to or read from a file, or for mixing a program that only writes to stdout with one that only reads from a file.
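A minimal sketch of that file-only case, with wc -l standing in for any program that insists on being given a filename rather than reading stdin:

```shell
#!/usr/bin/env bash
fifo=$(mktemp -u)      # generate a free path; mkfifo creates the actual pipe
mkfifo "$fifo"
seq 1 5 > "$fifo" &    # writer blocks until a reader opens the FIFO
wc -l "$fifo"          # "file-only" reader consumes the stream through the FIFO path
wait
rm -f "$fifo"
```

The reader never knows it was handed a pipe instead of a regular file.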


Solution 3:

(tail -f /tmp/p1 & tail -f /tmp/p2 ) | cat > /tmp/output

/tmp/p1 and /tmp/p2 are your input pipes, while /tmp/output is the output.
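An end-to-end sketch of the one-liner above, using throwaway pipe names and timeout (GNU coreutils) so the tails exit on their own; the paths and timeout trick are illustrative additions:

```shell
#!/usr/bin/env bash
d=$(mktemp -d)
mkfifo "$d/p1" "$d/p2"
( timeout 1 tail -f "$d/p1" & timeout 1 tail -f "$d/p2" ) | cat > "$d/output" &
echo hello > "$d/p1"   # each write blocks until the matching tail has opened its pipe
echo world > "$d/p2"
wait                   # the pipeline ends once both timeouts fire and cat sees EOF
cat "$d/output"        # both lines, in whatever order they arrived
rm -rf "$d"
```

Without the timeouts (as in the original one-liner), the tails keep running and the merged stream stays open indefinitely, which is usually what you want for live pipes.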


Solution 4:

I have created a special program for this: fdlinecombine

It reads multiple pipes (usually program outputs) and writes them to stdout line-wise (you can also override the separator).


Solution 5:

Be careful here; just catting them will end up mixing the results in ways you may not want: for instance, if they're log files, you probably don't want a line from one inserted halfway through a line from the other. If that's okay, then

tail -f /tmp/p1 /tmp/p2 > /tmp/output

will work. If that's not okay, then you're going to have to find something that does line buffering and only outputs complete lines. Syslog does this, but I'm not sure what else might.

EDIT: optimization for unbuffered reading from named pipes:

Consider /tmp/p1, /tmp/p2, and /tmp/p3 as named pipes, created with "mkfifo /tmp/pN":

tail -q -f /tmp/p1 /tmp/p2 | awk '{print $0 > "/tmp/p3"; close("/tmp/p3"); fflush();}' &

Now we can read the output named pipe /tmp/p3 unbuffered with:

tail -f /tmp/p3

There is a small quirk: you need to "initialize" the first input pipe, /tmp/p1, with:

echo -n > /tmp/p1

so that tail accepts input from the second pipe, /tmp/p2, first rather than waiting until something arrives on /tmp/p1. This may not be necessary if you are sure /tmp/p1 will receive input first.

The -q option is also needed so that tail does not print headers with the filenames.

Tags:

Linux

Shell