Why is there such a difference in execution time of echo and cat?

There are several things to consider here.

i=`cat input`

can be expensive and there's a lot of variation between shells.

That's a feature called command substitution. The idea is to store the whole output of the command minus the trailing newline characters into the i variable in memory.
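
As a small illustration of that trailing-newline stripping (the file name here is just an example):

printf 'foo\n\n\n' > file
i=`cat file`
printf %s "$i" | od -c   # only f o o left: the trailing newlines are gone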

To do that, shells fork the command in a subshell and read its output through a pipe or socketpair. You see a lot of variation here. On a 50MiB file here, I can see for instance bash being 6 times as slow as ksh93 but slightly faster than zsh and twice as fast as yash.

The main reason for bash being slow is that it reads from the pipe 128 bytes at a time (while other shells read 4KiB or 8KiB at a time) and is penalised by the system call overhead.
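
You can observe that on Linux with strace, for instance (input being the test file; only the parent shell is traced, which is the process reading from the pipe):

strace -e trace=read bash -c 'i=$(cat input)' 2>&1 | tail -n 5   # bash: reads of 128 bytes; try ksh93 or zsh for comparison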

zsh needs to do some post-processing to escape NUL bytes (other shells break on NUL bytes), and yash does even more heavy-duty processing by parsing multi-byte characters.

All shells need to strip the trailing newline characters which they may be doing more or less efficiently.

Some may want to handle NUL bytes more gracefully than others and check for their presence.

Then once you have that big variable in memory, any manipulation of it generally involves allocating more memory and copying data across.

Here, you're passing (were intending to pass) the content of the variable to echo.

Luckily, echo is built-in in your shell, otherwise the execution would have likely failed with an arg list too long error. Even then, building the argument list array will possibly involve copying the content of the variable.
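
If you want to see that limit for yourself, compare the builtin with an external echo (the /bin/echo path is just the usual location and may differ on your system):

i=`cat input`
echo "$i" > /dev/null        # builtin: no execve(), so no argument size limit applies
/bin/echo "$i" > /dev/null   # external: typically fails with "Argument list too long"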

The other main problem in your command substitution approach is that you're invoking the split+glob operator (by forgetting to quote the variable).

For that, shells need to treat the string as a string of characters (though some shells don't and are buggy in that regard), so in UTF-8 locales, that means parsing UTF-8 sequences (if not done already, as in yash) and looking for $IFS characters in the string. If $IFS contains space, tab or newline (which is the case by default), the algorithm is even more complex and expensive. Then, the words resulting from that splitting need to be allocated and copied.
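
Here is split+glob at work on a small scale (what the unquoted version prints for the * depends on the files in the current directory):

var='foo bar   *'
printf '<%s>\n' $var     # unquoted: split on $IFS, then * is globbed
printf '<%s>\n' "$var"   # quoted: one argument, passed through untouched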

The glob part will be even more expensive. If any of those words contain glob characters (*, ?, [), then the shell will have to read the content of some directories and do some expensive pattern matching (bash's implementation for instance is notoriously very bad at that).

If the input contains something like /*/*/*/../../../*/*/*/../../../*/*/*, that will be extremely expensive as that means listing thousands of directories and that can expand to several hundred MiB.

Then echo will typically do some extra processing. Some implementations expand \x sequences in the arguments they receive, which means parsing the content and probably another allocation and copy of the data.
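
For example (whether the sequence is expanded depends on the echo implementation and its options):

echo 'a\tb'            # dash's echo, or bash with xpg_echo, turns \t into a tab
printf '%s\n' 'a\tb'   # printf %s leaves the data alone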

On the other hand, in most shells cat is not built-in, so that means forking a process and executing it (so loading the code and the libraries), but after the first invocation, that code and the content of the input file will be cached in memory. And there is no intermediary: cat will read large amounts at a time and write them straight away without processing, and it doesn't need to allocate huge amounts of memory, just that one buffer that it reuses.
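
The same strace check as above shows the contrast: cat reads and writes in large chunks, reusing one buffer (the exact size varies between implementations):

strace -e trace=read,write cat input 2>&1 > /dev/null | tail -n 5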

It also means that it's a lot more reliable, as it doesn't choke on NUL bytes and doesn't trim trailing newline characters (and doesn't do split+glob, though you can avoid that by quoting the variable, and doesn't expand escape sequences, though you can avoid that by using printf instead of echo).

If you want to optimise it further, instead of invoking cat several times, just pass input several times to cat.

yes input | head -n 100 | xargs cat

That will run 3 commands instead of 100.

To make the variable version more reliable, you'd need to use zsh (other shells can't cope with NUL bytes) and do it like this:

zmodload zsh/mapfile
var=$mapfile[input]
repeat 10 print -rn -- "$var"

If you know the input doesn't contain NUL bytes, then you can reliably do it POSIXly (though it may not work where printf is not builtin) with:

i=$(cat input && echo .) || exit # add an extra .\n to avoid trimming newlines
i=${i%.} # remove that trailing dot (the \n was removed by cmdsubst)
n=10
while [ "$n" -gt 0 ]; do
  printf %s "$i"
  n=$((n - 1))
done

But that is never going to be more efficient than using cat in the loop (unless the input is very small).
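
If you want to check that on your own system, a rough comparison along these lines should do (the repetition count and the redirection are just for illustration):

time sh -c 'n=10; while [ "$n" -gt 0 ]; do cat input; n=$((n - 1)); done > /dev/null'
time sh -c 'i=$(cat input && echo .) && i=${i%.}; n=10
  while [ "$n" -gt 0 ]; do printf %s "$i"; n=$((n - 1)); done > /dev/null'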


The problem is not about cat and echo, it's about the missing quotes around the variable $i.

In Bourne-like shells (except zsh), leaving a variable unquoted causes the glob+split operators to be applied to it.

$var

is actually:

glob(split($var))

So with each loop iteration, the whole content of input (excluding trailing newlines) will be expanded, split and globbed. The whole process requires the shell to allocate memory and parse the string again and again. That's the reason you got the bad performance.

You can quote the variable to prevent glob+split, but it won't help you much, since the shell still needs to build the big string argument and scan its content for echo. (Replacing the builtin echo with the external /bin/echo will give you an argument list too long or out of memory error, depending on the size of $i.) Most echo implementations aren't POSIX compliant; they expand backslash \x sequences in the arguments they receive.
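
To get a feel for what glob+split alone costs, time the expansion with and without the quotes (bash shown here, since its time keyword can time the builtin echo; input is the test file):

i=$(cat input)
time { echo $i > /dev/null; }     # unquoted: glob+split over the whole content
time { echo "$i" > /dev/null; }   # quoted: one big argument, no glob+split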

With cat, the shell only needs to spawn a process in each loop iteration, and cat does the copying I/O. The system can also cache the file content to make cat faster.


If you call

i=`cat input`

this lets your shell process grow by 50MB up to 200MB (depending on the internal wide character implementation). This may make your shell slow but this is not the main problem.

The main problem is that the command above needs to read the whole file into shell memory and that echo $i needs to do field splitting on that file content in $i. In order to do field splitting, all text from the file needs to be converted into wide characters, and this is where most of the time is spent.
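
One way to check how much of that wide-character conversion costs on a given system is to rerun the command in the C locale, where no multi-byte parsing is needed (bash is used here only as an example):

time bash -c 'i=$(cat input); echo $i > /dev/null'
time env LC_ALL=C bash -c 'i=$(cat input); echo $i > /dev/null'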

I did some tests with the slow case and got these results:

  • Fastest is ksh93
  • Next is my Bourne Shell (2x slower than ksh93)
  • Next is bash (3x slower than ksh93)
  • Last is ksh88 (7x slower than ksh93)

The reason why ksh93 is the fastest seems to be that ksh93 does not use mbtowc() from libc but rather its own implementation.

BTW: Stephane is mistaken in claiming that the read size has some influence; I compiled the Bourne Shell to read in 4096-byte chunks instead of 128 bytes and got the same performance in both cases.