Downloading a file from the Internet into an S3 bucket

Download the data with curl and pipe the output straight to S3. The data is streamed directly to S3 without being written to local storage, so you don't need enough local disk space (or memory) to hold the whole file.

curl "https://download-link-address/" | aws s3 cp - s3://aws-bucket/data-file

If the download is too slow from your local computer, launch an EC2 instance in the same region as the bucket, SSH in, and run the same curl command there.


For anyone (like me) less experienced, here is a more detailed description of the process via EC2:

  1. Launch an Amazon EC2 instance in the same region as the target S3 bucket. The smallest available (default Amazon Linux) instance should be fine, but be sure to give it enough storage space to hold your file(s). If you need transfer speeds above ~20 MB/s, choose an instance type with more network bandwidth. (Command sketches for these steps follow the list.)

  2. Open an SSH connection to the new EC2 instance, then download the file(s), for instance using wget. (For example, to download an entire directory via FTP, you might use wget -r ftp://username:password@hostname/somedir/.)

  3. Using the AWS CLI (see Amazon's documentation), upload the file(s) to your S3 bucket. For example, aws s3 cp myfolder s3://mybucket/myfolder --recursive (for an entire directory). Before this command will work, you need to configure your AWS credentials, e.g. by running aws configure, as described in Amazon's documentation.

  4. Terminate your EC2 instance so you aren't billed for it after the transfer is done.
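
For reference, step 1 can also be scripted with the AWS CLI. This is a rough sketch; the AMI ID, key pair name, region, and 100 GB volume size are placeholders you'd replace with your own:

aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":100}}]'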
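Steps 2 and 3 then look roughly like this, reusing the placeholder URL and bucket from above (the key file and instance address are likewise placeholders):

ssh -i my-key-pair.pem ec2-user@<instance-public-ip>   # on your machine

# then, on the instance:
wget "https://download-link-address/" -O data-file
aws configure                                          # prompts for access key, secret key, and region
aws s3 cp data-file s3://aws-bucket/data-file

Alternatively, if you attach an IAM role with S3 write permissions to the instance when you launch it, you can skip aws configure entirely.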
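Finally, step 4 from the command line, using the instance ID reported by run-instances (the ID shown is a placeholder):

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0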