Persistently retrying and resuming downloads with curl

After some googling and reading of man pages, I have figured out a solution that works for me:

curl ftp://server/dir/file[01-30].ext --user user:pass -O --retry 999 --retry-max-time 0 -C -
  • [01-30] will make it download 30 files named file01.ext, file02.ext and so on
  • --user user:pass should be obvious
  • -O to write the output to local files named like the remote files
  • --retry 999 to retry 999 times
  • --retry-max-time 0 to keep curl from timing out the retries (0 means no time limit on retrying). If you don't specify a fixed --retry-delay, the default is to sleep one second before the first retry and then double the delay after each attempt, up to a maximum of 10 minutes between retries
  • -C - to make it continue where it left off if you run the command again. The trailing dash tells curl to figure out by itself where to resume from, based on the file already on disk (see the example right after this list)
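
So if curl itself eventually gives up, or the connection dies completely, you can simply re-run the exact same command and each file picks up where it left off. A crude way to automate that is to wrap it in a shell loop that keeps re-running curl until it exits successfully. This is just a sketch: the 60-second pause is an arbitrary choice, the quotes only keep the shell from trying to expand the [01-30] range itself, and it will loop forever if something is permanently broken:

until curl "ftp://server/dir/file[01-30].ext" --user user:pass -O --retry 999 --retry-max-time 0 -C -
do
    sleep 60
done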

If someone knows how to get the filenames from a file instead, please let me know.


The man page says the following:

--url <URL>
Specify a URL to fetch. This option is mostly handy when you want to specify URL(s) in a config file.

Seems like that could be what I need, but I don't quite understand how it would be used...


You can use the -K option to curl to specify a config file. In that case, you use the syntax:

optionname=<value>
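
For example, most of the command at the top of this thread could go into a config file like this (download.cfg is just a made-up name, and the server, paths and credentials are the same placeholders as before):

# download.cfg
user = "user:pass"
retry = 999
retry-max-time = 0
-O
url = "ftp://server/dir/file01.ext"
-O
url = "ftp://server/dir/file02.ext"

and then be run with:

curl -K download.cfg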

If you have a file containing a list of URLs, you can use curl like this:

sed 's/\(.*\)/-O\nurl=\1/g' url_list.txt | curl -K -
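
Side note: the \n in the replacement relies on GNU sed. If your sed does not support it, an awk one-liner along these lines should produce the same output from the same url_list.txt:

awk '{print "-O"; print "url=" $0}' url_list.txt | curl -K -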

The sed command transforms a list of URLs, like:

http://host1.com/foo.html
http://host2.com/bar.html

into a format like this:

-O
url=http://host1.com/foo.html
-O
url=http://host2.com/bar.html

Curl reads that and interprets each of those lines as options.
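
Since -K just adds options on top of whatever else you pass on the command line, you should also be able to combine this with the retry options from the first post, along the lines of (untested sketch, same hypothetical url_list.txt as above):

sed 's/\(.*\)/-O\nurl=\1/g' url_list.txt | curl --retry 999 --retry-max-time 0 -C - -K -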

HTH,

Adam