Fast elimination of duplicate lines across multiple files

I'm not sure I understand your question, but your code can be optimised to:

awk '!x{a[$0];next}; !($0 in a)' foo/file x=1 bar/file > tmp

(I think yours had issues with empty lines, or with lines whose stored value evaluates to "0")
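To see why the `$0 in a` membership test matters, here is a quick demo with throwaway files of my own (names are hypothetical): a naive `!a[$0]` test would wrongly pass for keys whose value is empty or "0", while `in` only checks whether the key exists:

```shell
# Hypothetical demo files; seen.txt deliberately contains an empty
# line and a literal "0", the cases a naive value test mishandles.
dir=$(mktemp -d)
printf '%s\n' apple '' 0 cherry > "$dir/seen.txt"
printf '%s\n' apple banana '' 0 date > "$dir/new.txt"
# While reading the first file (x still unset), store each line as an
# array key and skip to the next line; after x=1, print only lines
# that are not keys of a.
awk '!x{a[$0];next}; !($0 in a)' "$dir/seen.txt" x=1 "$dir/new.txt"
# prints: banana, date  (the empty line and "0" are correctly removed)
rm -r "$dir"
```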

If the files are sorted, you could do:

comm -13 foo/file bar/file > tmp

If they're not sorted (ksh93, zsh or bash syntax):

comm -13 <(sort foo/file) <(sort bar/file) > tmp

(not necessarily faster than the awk solution)
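To illustrate with throwaway files of my own: `comm -13` suppresses column 1 (lines only in the first file) and column 3 (lines in both), leaving the lines unique to the second file. This sketch sorts to temporary files instead of using process substitution, so it also runs under a plain POSIX sh:

```shell
dir=$(mktemp -d)
printf '%s\n' banana apple date > "$dir/foo"
printf '%s\n' cherry apple banana > "$dir/bar"
# comm requires both inputs to be sorted
sort "$dir/foo" > "$dir/foo.sorted"
sort "$dir/bar" > "$dir/bar.sorted"
comm -13 "$dir/foo.sorted" "$dir/bar.sorted"
# prints: cherry  (the only line of bar not also in foo)
rm -r "$dir"
```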

Also, especially with GNU awk, you may get better performance by setting the locale to C/POSIX:

LC_ALL=C awk ...