Is it better to use cat, dd, pv or another procedure to copy a CD/DVD?

All of the following commands are equivalent. They read the bytes of the CD /dev/sr0 and write them to a file called image.iso.

cat /dev/sr0 >image.iso
cat </dev/sr0 >image.iso
tee </dev/sr0 >image.iso
dd </dev/sr0 >image.iso
dd if=/dev/cdrom of=image.iso
pv </dev/sr0 >image.iso
cp /dev/sr0 image.iso
tail -c +1 /dev/sr0 >image.iso

Why would you use one over the other?

  • Simplicity. For example, if you already know cat or cp, you don't need to learn yet another command.

  • Robustness. This one is a bit of a variant of simplicity. How much risk is there that a small mistake in the command will drastically change what it does? Let's see a few examples:

    • Anything with redirection: you might accidentally put a redirection the wrong way round, or forget it. Since the destination is supposed to be a non-existing file, set -o noclobber should ensure that you don't overwrite anything; however you might overwrite a device if you accidentally write >/dev/sda (for a CD, which is read-only, there's no risk, of course). This speaks in favor of cat /dev/sr0 >image.iso (hard to get wrong in a damaging way) over alternatives such as tee </dev/sr0 >image.iso (if you invert the redirections or forget the input one, tee will write to /dev/sr0).
    • cat: you might accidentally concatenate two files. That leaves the data easily salvageable.
    • dd: i and o are close on the keyboard, and the if=/of= syntax is somewhat unusual. There's no equivalent of noclobber: of= will happily overwrite anything. The redirection syntax is less error-prone.
    • cp: if you accidentally swap the source and the target, the device will be overwritten (again, assuming a non read-only device). If cp is invoked with some options such as -R or -a which some people add via an alias, it will copy the device node rather than the device content.
  • Additional functionality. The one tool here that has useful additional functionality is pv, with its powerful reporting options (see the sketch after this list).
    But here you can check how much has been copied by looking at the size of the output file anyway.

  • Performance. This is an I/O-bound process; the main influence on performance is the buffer size: the tool reads a chunk from the source, writes the chunk to the destination, repeats. If the chunk is too small, the computer spends its time switching between tasks. If the chunk is too large, the read and write operations can't be parallelized. The optimal chunk size on a PC is typically around a few megabytes, but this is obviously very dependent on the OS, on the hardware, and on what else the computer is doing. I made benchmarks for hard-disk-to-hard-disk copies a while ago, on Linux, which showed that for copies within the same disk, dd with a large buffer size has the advantage, but for cross-disk copies, cat won over any dd buffer size.
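
As a hedged sketch of pv's reporting (assuming Linux, where blockdev --getsize64 prints the device size in bytes; passing that size with -s lets pv display a percentage and an ETA, and set -o noclobber additionally guards the output redirection):

set -o noclobber   # make > refuse to overwrite an existing regular file
pv -s "$(blockdev --getsize64 /dev/sr0)" </dev/sr0 >image.iso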

There are a few reasons why you find dd mentioned so often. Apart from performance, they aren't particularly good reasons.

  • In very old Unix systems, some text processing tools couldn't cope with binary data (they used null-terminated strings internally, so they tended to have problems with null bytes; some tools also assumed that characters used only 7 bits and didn't process 8-bit character sets properly). I'm not sure if this ever was a problem with cat (it was with more line-oriented tools such as head, sed, etc.), but people tended to avoid it on binary data because of its association with text processing. This is not a problem on modern systems such as Linux, OSX, *BSD, or anything that's POSIX-compliant.
  • There's a sort of myth that dd is somewhat “lower level” than other tools such as cat and accesses devices directly. This is completely false: dd and cat and tee and the others all read bytes from their input and write the bytes to their output. The real magic is in /dev/sr0.
  • dd has an unusual command line syntax, so explaining how it works gives more of an opportunity to shine than just writing cat /dev/sr0 does.
  • Using dd with a large buffer size can give better performance, but this is not always the case (see some benchmarks on Linux, and the sketch below).
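
A hedged sketch of the large-buffer variant (status=progress needs a reasonably recent GNU dd; the 4M figure is just an assumption in the "few megabytes" range discussed above):

dd if=/dev/sr0 of=image.iso bs=4M status=progress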

A major risk with dd is that it can silently skip some data: a read can return fewer bytes than the requested block size (this happens routinely with pipes, and can happen with some devices), and dd still counts the short read as one full block, so combined with count= you can end up with less data than you asked for. I think dd is safe as long as skip or count are not passed, but I'm not sure whether this is the case on all platforms. In any case, it has no advantage except for performance.
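
A hedged demonstration of that pitfall, assuming GNU dd (iflag=fullblock is a GNU extension; 65536 is the typical Linux pipe buffer size):

yes | dd bs=1M count=1 2>/dev/null | wc -c                   # often prints 65536: a short read from the pipe counted as a full block
yes | dd bs=1M count=1 iflag=fullblock 2>/dev/null | wc -c   # prints 1048576 as expected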

So just use pv if you want its fancy progress report, or cat if you don't.


Instead of using generic tools like cat or dd, one should prefer tools that are more robust in the face of read errors, such as

  • ddrescue
  • readcd (which has error-correction/retry mechanisms for CD/DVD drives built in)

In addition, their default settings are more suitable than e.g. dd's.
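
A hedged sketch of both (assuming GNU ddrescue and readcd from cdrtools are installed; -b 2048 matches the 2048-byte CD-ROM sector size, -r3 asks for three retry passes, and rescue.map is an arbitrary name for ddrescue's map file):

ddrescue -b 2048 -r3 /dev/sr0 image.iso rescue.map
readcd dev=/dev/sr0 f=image.iso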


There are some interesting facts in this case, especially these:

  • I've just checked the output I got and provided (I used another disc this time, specifically the Xubuntu 15.04 x64 setup disc), and with both procedures (dd and pv) the checksums are identical.
  • After doing the dd procedure, I had the idea to open the drive and close it again with the same disc, and then finish the test with the pv procedure. Doing just that, I got identical copies with both procedures.
  • I think I got different checksums the first time because, for some reason, the data read from the CD/DVD drive seems to be kept around for some time (most likely in the kernel's page cache) -- which is why other operations like checksums ran a lot faster than the transfer itself. Please comment if you know the exact cause for this.
  • Another fact is that dd without the count=X parameter stops correctly at the end of the disc and gives the same disc image as pv (the checksums are identical), so it's better for me to use dd without parameters, or just pv.

So, for now, it seems pv and dd can accomplish a CD/DVD copy with the same results.
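
For reference, a hedged sketch of such a comparison (the image names are hypothetical; the drop_caches line is Linux-specific and only there to rule out the page-cache effect mentioned above):

dd if=/dev/sr0 of=image-dd.iso
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop cached reads between runs
pv </dev/sr0 >image-pv.iso
md5sum image-dd.iso image-pv.iso                   # identical sums mean identical images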