Copying a large directory tree locally? cp or rsync?

Solution 1:

I would use rsync: if the copy is interrupted for any reason, you can restart it easily at very little cost, and rsync can even resume part way through a large file. As others mention, it can also exclude files easily. The simplest way to preserve most things is to use the -a ('archive') flag. So:

rsync -a source dest
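
For example, the exclusions mentioned above are just extra --exclude patterns (the patterns here are purely illustrative):

rsync -a --exclude='*.tmp' --exclude='cache/' source dest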

Although UID/GID and symlinks are preserved by -a (see -lpgo), your question implies you might want a full copy of the filesystem information, and -a doesn't include hard links, extended attributes, or ACLs (on Linux), nor any of those plus resource forks (on OS X). Thus, for a robust copy of a filesystem, you'll need to include those flags:

rsync -aHAX source dest # Linux
rsync -aHE source dest  # OS X
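
If resuming part way through a large file matters to you, flags such as --partial (keep partially transferred files) and --append-verify (resume by appending, then verify the result) are worth a look; a minimal sketch, with --progress added only for visibility:

rsync -a --partial --append-verify --progress source dest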

By default, cp will start over from scratch if you restart it, though the -u flag will "copy only when the SOURCE file is newer than the destination file or when the destination file is missing". The -a (archive) flag makes the copy recursive and preserves permissions, and combined with -u a restart won't recopy files that are already in place. So:

cp -au source dest
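
A minimal illustration of that restart behaviour (paths are hypothetical; the /src/. form copies the contents of /src, so a rerun targets the same destination instead of nesting a second copy):

cp -au /src/. /dst/   # first run: copies everything
cp -au /src/. /dst/   # rerun after an interruption: only missing or newer files are copied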

Solution 2:

When copying to the local file system I tend to use rsync with the following options:

# rsync -avhW --no-compress --progress /src/ /dst/

Here's my reasoning:

  • -a is for archive, which preserves ownership, permissions, etc.
  • -v is for verbose, so I can see what's happening (optional)
  • -h is for human-readable, so the transfer rate and file sizes are easier to read (optional)
  • -W is for copying whole files only, without the delta-xfer algorithm, which should reduce CPU load
  • --no-compress, as there's no shortage of bandwidth between local devices
  • --progress so I can see the progress of large files (optional)

I've seen 17% faster transfers using the above rsync settings compared to the following tar command, suggested in another answer:

# (cd /src; tar cf - .) | (cd /dst; tar xpf -)
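
If you want to check a comparison like this on your own data, a rough sketch (the destination paths are just examples):

time rsync -avhW --no-compress --progress /src/ /dst-rsync/
time sh -c '(cd /src; tar cf - .) | (cd /dst-tar; tar xpf -)'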

Solution 3:

When I have to copy a large amount of data, I usually use a combination of tar and rsync. The first pass is to tar it, something like this:

# (cd /src; tar cf - .) | (cd /dst; tar xpf -)

Usually with a large number of files, there will be some that tar can't handle for whatever reason. Or the process might get interrupted, or if it is a filesystem migration, you might want to do the initial copy before the actual migration step. At any rate, after the initial copy, I do an rsync step to sync it all up:

# cd /dst; rsync -avPHSx --delete /src/ .

Note that the trailing slash on /src/ is important.
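
To make that concrete (paths hypothetical):

rsync -av /src/ /dst/   # trailing slash: the contents of /src land directly in /dst
rsync -av /src /dst/    # no trailing slash: rsync creates /dst/src and copies into it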


Solution 4:

rsync

Here is the rsync invocation I use; for simple copies I prefer cp rather than this.

$ rsync -ahSD --ignore-errors --force --delete --stats $SRC/ $DIR/
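
$SRC and $DIR are placeholders; a concrete run might look like this (paths hypothetical):

$ SRC=/data/projects
$ DIR=/mnt/backup/projects
$ rsync -ahSD --ignore-errors --force --delete --stats "$SRC"/ "$DIR"/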

cpio

Here is a way that is even safer: cpio in pass-through mode, fed a null-delimited file list by find. It's about as fast as tar, maybe a little quicker.

$ cd $SRC && find . -mount -depth -print0 2>/dev/null | cpio -0admp $DEST &>/dev/null
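
For reference, the cpio flags used here are: -0 to read the NUL-delimited names produced by find -print0, -a to reset access times on the source files, -d to create leading directories as needed, -m to preserve modification times, and -p for pass-through (copy) mode.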

tar

This is also good, and it continues on read failures.

$ tar --ignore-failed-read -C $SRC -cf - . | tar --ignore-failed-read -C $DEST -xf -

Note those are all just for local copies.


Solution 5:

This thread was very useful, and because there were so many options to achieve the result, I decided to benchmark a few of them. I believe my results can help others get a sense of what worked faster.

To move 532 GB of data distributed among 1,753,200 files, we had these times:

  • rsync took 232 minutes
  • tar took 206 minutes
  • cpio took 225 minutes
  • rsync + parallel took 209 minutes

In my case I preferred to use rsync + parallel. I hope this information helps more people decide among these alternatives.
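
One common way to combine rsync with GNU parallel (not necessarily exactly what was benchmarked) is to split the file list across several rsync workers; a sketch with an illustrative worker count and paths, noting that a plain find -type f skips empty directories and special files:

cd /src && find . -type f -print0 | parallel -0 -j 8 -X rsync -a --relative {} /dst/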

The complete benchmark is published here.