Transferring large (8 GB) files over ssh
Rsync is well suited for transferring large files over ssh because it can resume transfers that were interrupted for any reason. Since it uses rolling checksums to detect identical file blocks, the resume feature is quite robust.
It is somewhat surprising that your scp version does not seem to support large files; even with 32-bit binaries, large-file support (LFS) should be standard nowadays.
I'm not sure about the file size limits of SCP and SFTP, but you might try working around the problem with split:
split -b 1G matlab.iso
This will create 1 GiB chunks which, by default, are named xaa, xab, xac, and so on. You could then use scp to transfer the chunks:
scp xa* [email protected]:
Then, on the remote system, recreate the original file with cat:
cat xa* > matlab.iso
Of course, the penalties for this workaround are the time spent in the split and cat operations and the extra disk space needed on both the local and remote systems.
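The whole split/reassemble round trip can be sketched locally (file names and sizes are illustrative, and the scp step is stubbed out as a comment):

```shell
# Create a small stand-in for the large ISO (names/sizes are illustrative).
mkdir -p /tmp/split-demo
cd /tmp/split-demo
head -c 3145728 /dev/urandom > matlab.iso   # 3 MiB stand-in, not 8 GB

# Split into 1 MiB chunks named xaa, xab, xac (use -b 1G in practice).
split -b 1M matlab.iso

# ... here you would run: scp xa* [email protected]: ...

# Reassemble and verify the result matches the original byte for byte.
cat xa* > rebuilt.iso
cmp matlab.iso rebuilt.iso && echo "files match"
```

Running sha256sum (or md5sum) on the original and the reassembled file on both ends is a cheap extra check that nothing was corrupted in transit.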
The original problem (based on reading all comments to the OP question) was that the scp executable on the 64-bit system was a 32-bit application. A 32-bit application that isn't compiled with large-file support uses a 32-bit off_t, so seek offsets are limited to 2^31 - 1 bytes (just under 2 GiB; even an unsigned 32-bit offset would top out at 2^32 = 4 GiB).
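Those limits are easy to confirm with shell arithmetic:

```shell
# Largest offset representable in a signed 32-bit off_t: 2^31 - 1.
echo $(( (1 << 31) - 1 ))   # 2147483647, just under 2 GiB

# An unsigned 32-bit offset tops out at 2^32 bytes.
echo $(( 1 << 32 ))         # 4294967296, exactly 4 GiB
```

Either way, an 8 GB file is far beyond what such a binary can address.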
You can tell whether scp is 32-bit by running:
file `which scp`
On most modern systems it will be 64-bit, so no file truncation occurs:
$ file `which scp`
/usr/bin/scp: ELF 64-bit LSB shared object, x86-64 ...
A 32-bit application can still support "large files", but it has to be compiled from source with large-file support, which in this case it apparently wasn't.
The simplest fix is to use a standard 64-bit distribution, where applications are compiled as 64-bit by default.