"head" command for aws s3 to view file contents

You can specify a byte range when retrieving data from S3 to get the first N bytes, the last N bytes, or anything in between. (This is also helpful because it lets you download a file in parallel: just start multiple threads or processes, each of which retrieves one part of the total file.)
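As a sketch of the parallel idea (the bucket, key, and 10 MiB size below are hypothetical; in practice you would get the real size from aws s3api head-object):

```shell
#!/bin/sh
# Compute byte ranges that split one object into equal chunks.
SIZE=10485760   # hypothetical 10 MiB object; in practice query it with head-object
PARTS=4
CHUNK=$(( (SIZE + PARTS - 1) / PARTS ))

i=0
while [ "$i" -lt "$PARTS" ]; do
  START=$(( i * CHUNK ))
  END=$(( (i + 1) * CHUNK - 1 ))
  [ "$END" -ge "$SIZE" ] && END=$(( SIZE - 1 ))
  echo "bytes=${START}-${END}"
  # Each part could then be fetched concurrently, e.g.:
  # aws s3api get-object --bucket mybucket --key big.log \
  #     --range "bytes=${START}-${END}" "part-$i" &
  i=$(( i + 1 ))
done
# wait; cat part-0 part-1 part-2 part-3 > big.log
```

This prints the four range strings (bytes=0-2621439 through bytes=7864320-10485759); the commented-out get-object calls show where the actual parallel fetches would go.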

I don't know which of the various CLI tools support this directly, but a range retrieval does what you want.

The AWS CLI ("aws s3 cp", to be precise) does not allow you to do range retrievals, but s3curl (http://aws.amazon.com/code/128) should do the trick. (So does plain curl, e.g., using the --range parameter, but then you would have to do the request signing on your own.)
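If you would rather not do the signing yourself, one workaround (assuming the AWS CLI is installed and configured) is to generate a presigned URL with aws s3 presign and hand that to plain curl. A sketch, with a made-up helper name:

```shell
#!/bin/sh
# s3head_curl: hypothetical helper, not part of any existing tool.
# Presigns a GET for the given s3:// URL, then fetches only the first bytes.
s3head_curl() {
  # $1 = s3:// URL, $2 = number of bytes to fetch (default 10000)
  url=$(aws s3 presign "$1" --expires-in 300) || return 1
  curl -s --range "0-$(( ${2:-10000} - 1 ))" "$url"
}
# usage: s3head_curl s3://mybucket_name/path/to/the/file.log 2000 | head
```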


You can use the range switch of the lower-level s3api get-object command to bring back the first bytes of an S3 object. (AFAICT the higher-level aws s3 commands don't support that switch.)

The pipe /dev/stdout can be passed as the target filename if you simply want to view the S3 object by piping to head. Here's an example:

aws s3api get-object --bucket mybucket_name --key path/to/the/file.log --range bytes=0-10000 /dev/stdout | head

Finally, if, like me, you're dealing with compressed .gz files, the above technique also works with zless, letting you view the head of the decompressed file:

aws s3api get-object --bucket mybucket_name --key path/to/the/file.log.gz --range bytes=0-10000 /dev/stdout | zless

One tip with zless: if it isn't working, try increasing the size of the range.
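The reason a partial range works at all is that gzip decompresses from the front of the stream; if the range is too small, there may simply not be enough compressed data yet to recover a full screen of text. A local demonstration of this, no S3 needed:

```shell
#!/bin/sh
# Build a gzip file, keep only its first 10,000 compressed bytes, and show
# that the start of the stream still decompresses fine (gzip's complaint
# about the truncated tail goes to stderr, which is suppressed here).
seq 1 100000 | gzip > /tmp/sample.gz
head -c 10000 /tmp/sample.gz > /tmp/partial.gz
FIRST=$(gzip -dc /tmp/partial.gz 2>/dev/null | head -n 1)
echo "$FIRST"   # prints: 1
```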


One thing you could do is cp the object to stdout and then pipe it to head:

aws s3 cp s3://path/to/my/object - | head

You get a broken pipe error at the end, but it works.
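If you use this often, you could wrap it in a small shell function (the name s3head is made up here; it assumes the AWS CLI is configured) and discard stderr so the broken-pipe complaint stays quiet:

```shell
#!/bin/sh
# s3head: hypothetical convenience wrapper around "aws s3 cp ... - | head".
s3head() {
  # $1 = s3:// URL, $2 = number of lines to show (default 10)
  aws s3 cp "$1" - 2>/dev/null | head -n "${2:-10}"
}
# usage: s3head s3://path/to/my/object 20
```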