Download big file over bad connection

lftp (Wikipedia) is good for that. It supports a number of protocols, can download files over several parallel connections (useful when there's a lot of packet loss that isn't caused by congestion), and can automatically resume downloads. It's also scriptable.

Here it is, including the fine-tuning you came up with (credit to you):

lftp -c 'set net:idle 10
         set net:max-retries 0
         set net:reconnect-interval-base 3
         set net:reconnect-interval-max 3
         pget -n 10 -c "https://host/file.tar.gz"'
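
Since it's scriptable, you can also keep the commands in a file and run it unattended. A minimal sketch, reusing the placeholder URL from above (save it as, say, download.lftp and run it with lftp -f download.lftp):

# download.lftp -- hypothetical script file with the same tuning as the one-liner
set net:idle 10
set net:max-retries 0
set net:reconnect-interval-base 3
set net:reconnect-interval-max 3
pget -n 10 -c "https://host/file.tar.gz"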

I can't test this in your situation, but you shouldn't be using --range together with -C -. Here's what the curl man page has to say on the subject:

Use -C - to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.

Try this instead:

curl -s --retry 9999 --retry-delay 3 --speed-limit 2048 --speed-time 10 \
    --retry-max-time 0 -C - -o "${FILENAME}.part${i}" "${URL}" &
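
Note that --retry only covers failures curl itself classifies as transient, so a hard connection drop can still make it exit non-zero. A sketch of an outer loop that keeps resuming until the transfer completes (FILENAME, i and URL come from your own script; background the whole loop with & if you still run the parts in parallel):

until curl -s --retry 9999 --retry-delay 3 --speed-limit 2048 --speed-time 10 \
        --retry-max-time 0 -C - -o "${FILENAME}.part${i}" "${URL}"; do
    # curl gave up; on the next attempt -C - picks up from the bytes already written
    sleep 3
done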

I'd also strongly recommend that you always double-quote your variables so that the shell doesn't try to interpret their contents. (Consider a URL like https://example.net/param1=one&param2=two: typed unquoted on the command line, the shell treats the & as a command separator and drops everything from it onwards, and an unquoted variable expansion is likewise exposed to word splitting and glob expansion.)
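
To see the difference, compare how the shell handles that URL unquoted versus quoted (the URL itself is the made-up one from above):

# unquoted on the command line: the shell backgrounds `curl ...param1=one`
# and treats `param2=two` as a separate command (here, a variable assignment)
curl https://example.net/param1=one&param2=two

# quoted, whether literally or via a variable, the URL stays in one piece
URL='https://example.net/param1=one&param2=two'
curl "${URL}"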

Incidentally, 120 KB/s is roughly 1 Mb/s, which is a typical xDSL upload speed in many parts of the world. That's roughly 10 seconds per MB, so a little under one hour for the entire file. Not so slow, although I do appreciate that you're more concerned with reliability than with speed.
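
For reference, the back-of-the-envelope arithmetic as a shell one-liner (the 400 MB size is only a guess to make the numbers concrete; substitute your actual file size):

SIZE_MB=400   # hypothetical file size
echo "$(( SIZE_MB * 1024 / 120 / 60 )) minutes"   # prints 56, i.e. a little under an hour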


Maybe you'll have more luck with wget --continue:

wget --continue "${URL}"
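
wget also has a few knobs that help on a flaky line; a sketch with retry options added (the timeout and wait values are suggestions, not from the original):

# retry indefinitely, wait up to 3 s between attempts, and abort a stalled
# read after 10 s so the next attempt can resume via --continue
wget --continue --tries=0 --waitretry=3 --retry-connrefused \
     --read-timeout=10 "${URL}"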

See also https://www.cyberciti.biz/tips/wget-resume-broken-download.html