How to Iterate Over Null-Separated Results in a Non-Bash Shell

See How can I find and safely handle file names containing newlines, spaces or both?.

You can, for example, use find -exec:

find [...] -exec <command> {} \;
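For instance, here is a minimal sketch (the demo directory and the printf payload are made up for illustration): -exec hands each pathname to the command as a single argument, so names containing spaces or newlines arrive intact.

```shell
# Each found file becomes exactly one argument to printf; no word
# splitting or glob expansion ever touches the name.
mkdir -p demo
: > 'demo/has space.txt'
find demo -type f -exec printf 'got: [%s]\n' {} \;
# → got: [demo/has space.txt]
```

With \; the command runs once per file; with + (shown further below) find batches as many names as fit on one command line.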

or xargs -0:

find [...] -print0 | xargs -r0 <command>
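A hedged sketch of the xargs variant (the *.tmp pattern and the rm payload are made up): -print0 terminates each name with a NUL byte, -0 tells xargs to split only on NUL, and -r (a GNU extension) skips running the command when there is no input at all.

```shell
# Delete every *.tmp file, even ones whose names contain spaces or
# newlines; NUL is the only byte that cannot appear in a pathname,
# so splitting on it is always safe.
find . -name '*.tmp' -print0 | xargs -r0 rm --
```

Without -r, an empty result would still run rm once with no file arguments, which would fail.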

Note that in your example above you still need to clear IFS, or leading/trailing whitespace will be trimmed from the filenames:

while IFS= read -rd '' file; do
    do_something_with "${file}"
done
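Wired up to a NUL-delimited producer, the loop might look like this (bash; the find arguments are placeholders):

```shell
# read -d '' stops at each NUL written by -print0, so every byte of
# the name, including embedded newlines, ends up in $file.
find . -type f -print0 |
while IFS= read -rd '' file; do
    printf 'found: %s\n' "$file"
done
```

Because the loop sits at the end of a pipeline it runs in a subshell in bash; feed it with done < <(find ... -print0) instead if variables set inside the loop must survive.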

You are right, it's a real bummer that this read only works properly in bash. I usually don't give a damn about possible newlines in filenames; I just make sure that otherwise portable code doesn't break if they occur (as opposed to ignoring the problem and having your script explode), which I believe suffices for most scenarios, e.g.

while IFS= read -r file; do
    [ -e "${file}" ] || continue # skip over truncated filenames due to newlines
    do_something_file "${file}"
done < <(find [...])

or use globbing (when possible), which behaves correctly:

for file in *.foo; do
    [ -e "${file}" ] || continue # or use nullglob
    do_something_file "${file}"
done
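With bash's nullglob option the existence guard can go away, because an unmatched pattern expands to an empty list instead of to itself (bash-specific; the .foo suffix and the do_something_file stub are just examples):

```shell
shopt -s nullglob           # unmatched globs expand to nothing

do_something_file() {       # stand-in for the real per-file work
    printf 'processing %s\n' "$1"
}

# The loop body is simply never entered when nothing matches *.foo.
for file in *.foo; do
    do_something_file "${file}"
done
```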

Adding to @Adrian Frühwirth's excellent answer:

Here is a strictly POSIX-compliant solution, both in terms of the shell code and the utilities and their options used:

find . -exec sh -c 'for f in "$@"; do echo "$f"; done' - {} +

This avoids both find's -print0 and read -d.
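The inline script can of course do more than echo. For example, a strictly POSIX batch rename might look like this (the .bak-stripping payload is hypothetical):

```shell
# Strip a .bak suffix from every matching file; "$f" is always a
# single argument, so unusual characters in names are harmless.
# The lone "-" becomes $0 of the inline script, and "{} +" appends
# as many pathnames as fit per invocation.
find . -type f -name '*.bak' -exec sh -c '
    for f in "$@"; do
        mv -- "$f" "${f%.bak}"
    done' - {} +
```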

(There's a chance, usually hypothetical, that your shell code will be invoked more than once, namely when there are so many input filenames that they don't fit on a single command line.
getconf ARG_MAX reports your platform's maximum command-line length for invoking external utilities, but note that in practice the usable limit is lower; see http://www.in-ulm.de/~mascheck/various/argmax/)
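To see the nominal limit on your system (the exact number is platform-dependent, and the environment also counts against it, so the usable space is smaller):

```shell
# Prints the maximum number of bytes available for the combined
# argv + environ of an exec'd process.
getconf ARG_MAX
```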

Tags:

Shell

Bash