How do I output text to both the screen and a file from inside a shell script?

This works:

command | tee -a "$log_file"

tee saves input to a file (use -a to append rather than overwrite), and copies the input to standard output as well.

Because the command can detect that its output is now going to a pipe rather than a terminal, it may change its behaviour; the most common side effect is that it disables colour output. If this happens (and you want ANSI colour-coded output) you have to check the command's documentation for a way to force the interactive behaviour, such as grep --color=always. Beware that the log file will then also contain these escape codes, and you'll need something like less --RAW-CONTROL-CHARS "$log_file" to read it without distracting escape-code literals. Also beware that with the command above there is no way to make the log file contents differ from what is printed to the screen, so you can't have colour-coded output on screen and plain output in the log file.
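For example, with GNU grep (the pattern and file name here are just placeholders):

# force colour even though stdout is a pipe, and log a copy
grep --color=always 'pattern' ./some.file | tee -a "$log_file"
# later, render the stored colour codes instead of showing them literally
less --RAW-CONTROL-CHARS "$log_file"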


You can use a here-document and . (dot) source it for an efficient, reasonably POSIX-friendly way to collect the output of a whole block of commands in one place.

. 8<<-\IOHERE /proc/self/fd/8

command
… 
fn() { declaration ; } <<WHATEVER
# though a nested heredoc might be finicky
# about stdin depending on shell
WHATEVER
cat -u ./stdout | ./works.as >> expect.ed
IOHERE

When you open the heredoc, the IOHERE token tells the shell to redirect its input to the file descriptor you specify until it encounters the matching end delimiter. I've looked around but I haven't seen many examples that combine an explicit redirection fd number with the heredoc operator as I've done above, though its usage is clearly specified in the POSIX shell command language. Most people just point it at stdin and shoot, but I find that sourcing scriptlets this way keeps stdin free and stops the constituent apps from complaining about blocked I/O paths.
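A minimal sketch of the numbered-fd form, assuming a Linux-style /dev/fd (the fd number and the text are arbitrary):

# attach the heredoc to fd 4, then hand cat that descriptor by path
cat 4<<\EOF /dev/fd/4
hello from file descriptor 4
EOF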

The heredoc's contents are streamed to the file descriptor you specify, and that descriptor is then read back, interpreted as shell code, and executed by the . builtin, but only because we give . an explicit path to read from. If the /proc/self path gives you trouble, try /dev/fd/n or /proc/$$/fd/n. This same method works on pipes, by the way:

cat ./*.sh | . /dev/stdin

That is probably at least as unwise as it looks. You can do the same with sh, of course, but .'s purpose is to execute in the current shell environment, which is probably what you want, and, depending on your shell, it is a lot more likely to work with a heredoc than with a standard anonymous pipe.
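A quick sketch of why that distinction matters, again assuming a /dev/fd-style path (the variable name is made up):

# run by sh: the assignment happens in a child shell and is lost
printf 'msg=set-in-a-child\n' | sh
echo "${msg:-unset}"    # msg is still unset in this shell

# sourced from a heredoc on its own fd: the assignment lands in this shell
. 4<<\EOF /dev/fd/4
msg=set-in-the-current-shell
EOF
echo "$msg"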

Anyway, as you've probably noticed, I still haven't answered your question. But if you think about it, in the same way the heredoc streams all of your code to .'s input, it also provides you a single, simple output point:

. 5<<EOIN /dev/fd/5 |
    tee -a ./log.file | cat -u >$(tty)
script
… 
more script
EOIN

So all of the terminal stdout from any of the code executed in your heredoc is piped out of . as a matter of course and can easily be tee'd off a single pipe. I included the unbuffered cat call because I'm unclear about the current stdout direction, but it's probably redundant (almost certainly it is as written, anyway) and the pipeline can probably end right at tee.
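A stripped-down version of the same idea, with a quoted delimiter, a hypothetical log name, and the pipeline ending at tee:

. 5<<\EOIN /dev/fd/5 | tee -a ./log.file
echo "this line goes to the terminal and into ./log.file"
date
EOIN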

You might also question the missing backslash quote in the second example. This part is important to understand before you jump in, and might give you a few ideas about how it can be used. A quoted heredoc limiter (so far we've used IOHERE and EOIN, and the first I quoted with a backslash, though 'single' or "double" quotes would serve the same purpose) bars the shell from performing any expansion (parameter, command, or arithmetic) on the contents, but an unquoted limiter leaves its contents open to expansion. The consequences of this when your heredoc is . sourced are dramatic:

. 3<<HD ${fdpath}/3
: \${vars=$(printf '${var%s=$((%s*2))},' `seq 1 100`)} 
HD
echo $vars
> 4,8,12… 
echo $var1 $var51
> 4 104

Because I didn't quote the heredoc limiter, the shell expanded the contents as it read them in, before serving the resulting file descriptor to . to execute. This essentially resulted in the commands being parsed twice - the expandable ones, anyway. Because I backslash-quoted the ${vars=...} parameter expansion, the shell ignored it on the first pass and only stripped the backslash, so the whole printf-expanded contents could be evaluated by the null command (:) when . sourced the script on the second pass.
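A smaller illustration of the same two-pass behaviour (made-up variable names, unquoted limiter on purpose):

greeting=hello
. 3<<HD /dev/fd/3
msg="$greeting \${USER:-somebody}"
HD
echo "$msg"
# the first pass expands $greeting and strips the backslash;
# the second pass, when . runs the text, expands ${USER:-somebody}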

This is essentially what the dangerous eval shell builtin provides; the quoting is much easier to handle in a heredoc than with eval, but it can be every bit as dangerous. Unless you plan it carefully, it is probably best to quote the "EOF" limiter as a matter of habit. Just saying.
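For comparison, a fully quoted limiter is inert (placeholder variable again):

name=world
cat <<'SAFE'
nothing here is expanded: $name $(date) $((1+1))
SAFE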

EDIT: Eh, I'm looking back at this and thinking it's a little too much of a stretch. If ALL you need to do is concatenate several outputs into one pipe, then the simplest method is just a command list:

{ command list ; } | tee -a ea.sy >> pea.sy
( command list ) | tee -a ea.sy >> pea.sy

The curly braces run their contents in the current shell, whereas the parentheses automatically fork a subshell. Still, anyone can tell you that and, at least in my opinion, the . heredoc solution is much more valuable information, especially if you'd like to understand how the shell actually works.
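A quick sketch of that difference (the variable name is arbitrary):

x=before
{ x=set-in-braces; }
echo "$x"      # set-in-braces: the group ran in the current shell
( x=set-in-parens )
echo "$x"      # still set-in-braces: the subshell's copy is gone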

Have fun!


Here is how to do it if you don't want to use tee:

#!/bin/bash
# A Shell subroutine to echo to screen and a log file

log_file_name="/some/dir/log_file.log"

echolog()
(
    echo "$@"
    echo "$@" >> $log_file_name
)


echo "no need to log this"
echolog "some important text that needs logging"

So now in my original script, I can just change 'echo' to 'echolog' where I want the output in a log file.
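If you also want a command's output to go through it, command substitution works (df here is just an example):

echolog "Disk usage at $(date):"
echolog "$(df -h /)"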