Is there any way to speed up ddrescue?

I observed that using the -n (no-split) option together with -r 1 (retry once) and setting -c (cluster size) to a smaller value can help.

My impression is that the splitting step is very slow because ddrescue splits the damaged areas again and again, trying to restore very small portions of data. So I prefer to use -n (no-split) together with -c 64, then -c 32, -c 16, and so on.
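A minimal sketch of that workflow (the device name, image path, and logfile name are placeholders; adjust them to your setup):

```shell
# Pass 1: no-split (-n), one retry (-r 1), large cluster size; grabs the
# easy data quickly. /dev/sdb, disk.img, and disk.log are example names.
ddrescue -n -r 1 -c 64 /dev/sdb disk.img disk.log

# Follow-up passes with smaller clusters; thanks to the logfile, each run
# skips everything already rescued and only works on the remaining areas.
ddrescue -n -r 1 -c 32 /dev/sdb disk.img disk.log
ddrescue -n -r 1 -c 16 /dev/sdb disk.img disk.log
```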

Probably the -n (no-split) option should always be used for a first pass in the forward and reverse directions. It seems that the more the data are split, the slower the cloning, although I'm not sure about this. I assume that the larger the untried areas, the better when running ddrescue again, because more contiguous sectors are left to clone.

As I'm using a logfile, I don't hesitate to cancel the command with Ctrl+C when the read speed becomes too low.
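Because all progress is recorded in the logfile, a cancelled run can be resumed simply by repeating the same command (placeholder paths):

```shell
# Interrupted with Ctrl+C? Re-run with the same logfile and ddrescue
# continues from the recorded state instead of starting over.
ddrescue -n /dev/sdb disk.img disk.log
```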

I also use the -R (reverse) mode, and after a first pass it often gives me higher read speeds backwards than forwards.
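A reverse pass over the same image and logfile might look like this (paths are placeholders):

```shell
# Same image and logfile as the forward pass, but reading backwards (-R);
# sectors already marked as rescued in disk.log are not read again.
ddrescue -R -n /dev/sdb disk.img disk.log
```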

It's not clear to me how already-retried sectors (-r N) are handled when running the ddrescue command again, especially when alternating forward (default) and reverse (-R) cloning passes. I'm not sure whether the number of times they were tried is stored in the logfile; if not, the work is probably done again uselessly.

Probably the -i (input position) option can help speed things up too.
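For example, -i can be used to jump straight to a region of interest instead of starting from the beginning (the offset here is an arbitrary example, and paths are placeholders):

```shell
# Start reading at byte offset 30 GiB into the input device (example value),
# e.g. to work on a known-bad region first.
ddrescue -i 30GiB -n /dev/sdb disk.img disk.log
```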


It can be very hard to see the progress of ddrescue, but another command called ddrescuelog is included.

A simple command like ddrescuelog -t YourLog.txt will output this nice info:

current pos:     2016 GB,  current status: trimming
domain size:     3000 GB,  in    1 area(s)
rescued:     2998 GB,  in 12802 area(s)  ( 99.91%)
non-tried:         0 B,  in    0 area(s)  (  0%)

errsize:     2452 MB,  errors:   12801  (  0.08%)
non-trimmed:   178896 kB,  in 3395 area(s)  (  0.00%)
non-split:     2262 MB,  in 9803 area(s)  (  0.07%)
bad-sector:    10451 kB,  in 19613 area(s)  (  0.00%)

You can even use it while ddrescue is running...
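To watch the progress live, you can combine it with watch (the logfile name is a placeholder):

```shell
# Refresh the statistics every 10 seconds while ddrescue runs in another terminal.
watch -n 10 ddrescuelog -t YourLog.txt
```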


I have found that playing with the -K parameter can speed things up. From what I've seen, if ddrescue finds an error when running with the -n option, it tries to skip a fixed number of sectors; if it still can't read, it skips double that size. If you have large damaged areas, you can specify a big -K value (for example, 100M), so the first skip after an error will be larger and it will be easier to get past problematic areas quickly in the first pass.
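For instance (placeholder paths; 100M is the example value from above):

```shell
# Start with a 100 MB skip after each read error (-K 100M) so the first
# pass leaps over large damaged regions instead of grinding through them.
ddrescue -n -K 100M /dev/sdb disk.img disk.log
```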

By the way, there is a wonderful graphical application for analyzing the logfile:

http://sourceforge.net/projects/ddrescueview/