Why is scp so slow and how to make it faster?

You could use rsync (over ssh), which uses a single connection to transfer all the source files.

rsync -avP cap_* user@host:dir

If you don't have rsync (and why not!?) you can use tar with ssh like this, which avoids creating a temporary archive file on disk (the two variants below are equivalent):

tar czf - cap_* | ssh user@host tar xvzfC - dir
tar cf - cap_* | gzip | ssh user@host 'cd dir && gzip -d | tar xvf -'
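To see what the tar pipeline is doing without involving a remote host, you can run the same pattern locally, with a subshell `cd` standing in for the ssh hop (a sketch; the /tmp paths and file contents are made up for illustration):

```shell
# Writer tars matching files to stdout; reader untars from stdin
# inside the target directory. No temporary archive file is created.
mkdir -p /tmp/src /tmp/dest
printf 'hello\n' > /tmp/src/cap_1
printf 'world\n' > /tmp/src/cap_2

# Over a network you would put "| ssh user@host" between the two halves.
(cd /tmp/src && tar cf - cap_*) | (cd /tmp/dest && tar xf -)
```

The single stream means one connection carries all the files, which is exactly what makes this faster than per-file scp negotiation.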

rsync is to be preferred, all other things being equal, because it's restartable in the event of an interruption.


@wurtel's comment is probably correct: there's a lot of overhead establishing each connection. If you can fix that you'll get faster transfers (and if you can't, just use @roaima's rsync workaround). I did an experiment transferring similar-sized files (head -c 417K /dev/urandom > foo.1, then made some copies of that file) to a host that takes a while to connect (HOST4) and one that responds very quickly (HOST1):

$ time ssh $HOST1 echo


real    0m0.146s
user    0m0.016s
sys     0m0.008s
$ time scp * $HOST1:
foo.1                                         100%  417KB 417.0KB/s   00:00    
foo.2                                         100%  417KB 417.0KB/s   00:00    
foo.3                                         100%  417KB 417.0KB/s   00:00    
foo.4                                         100%  417KB 417.0KB/s   00:00    
foo.5                                         100%  417KB 417.0KB/s   00:00    

real    0m0.337s
user    0m0.032s
sys     0m0.016s
$ time ssh $HOST4 echo


real    0m1.369s
user    0m0.020s
sys     0m0.016s
$ time scp * $HOST4:
foo.1                                         100%  417KB 417.0KB/s   00:00    
foo.2                                         100%  417KB 417.0KB/s   00:00    
foo.3                                         100%  417KB 417.0KB/s   00:00    
foo.4                                         100%  417KB 417.0KB/s   00:00    
foo.5                                         100%  417KB 417.0KB/s   00:00    

real    0m6.489s
user    0m0.052s
sys     0m0.020s
$ 

It's the negotiation of each transfer that takes time. In general, operations on n files of b bytes each take much, much longer than a single operation on one file of n * b bytes; the same holds for e.g. disk I/O. The timings above bear this out: five files to the slow host took roughly five times the single connection time (5 × ~1.3 s ≈ 6.5 s).

If you look carefully you'll also see that the transfer rate reported for each file is simply size_of_the_file/secs, with that per-file overhead included.
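If the per-connection setup cost is the bottleneck and you run many separate ssh/scp invocations against the same host, OpenSSH connection multiplexing can amortize it: the first connection stays open as a master and later sessions piggyback on it instead of negotiating from scratch. A sketch for ~/.ssh/config (the ControlPath pattern and the 10-minute persistence are arbitrary choices, not requirements):

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

This doesn't speed up the data transfer itself, only the repeated connection establishment measured with `time ssh $HOST4 echo` above.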

To transfer files more efficiently, bundle them together with tar, then transfer the tarball:

tar cvf myarchive.tar cap_20151023T*.png

or, if you also want to compress the archive,

tar cvzf myarchive.tar.gz myfile*

Whether to compress or not depends on the file contents: e.g. if they're already-compressed formats such as JPEG or PNG, gzip won't shrink them further and just burns CPU time.