S3: make a public folder private again?

The accepted answer works well; it also seems to set ACLs recursively on a given S3 path. However, this can be done more easily with a third-party tool called s3cmd, which we use heavily at my company and which seems to be fairly popular within the AWS community.

For example, suppose you had this kind of S3 bucket and directory structure: s3://mybucket.com/topleveldir/scripts/bootstrap/tmp/. Now suppose you had marked the entire scripts "directory" as public using the Amazon S3 console.

Now to make the entire scripts "directory-tree" recursively (i.e. including subdirectories and their files) private again:

s3cmd setacl --acl-private --recursive s3://mybucket.com/topleveldir/scripts/

It's also easy to make the scripts "directory-tree" recursively public again if you want:

s3cmd setacl --acl-public --recursive s3://mybucket.com/topleveldir/scripts/

You can also choose to set the permission/ACL only on a given s3 "directory" (i.e. non-recursively) by simply omitting --recursive in the above commands.
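For example, to set the ACL only on the scripts "directory" itself, non-recursively:

s3cmd setacl --acl-private s3://mybucket.com/topleveldir/scripts/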

For s3cmd to work, you first have to provide your AWS access key and secret key to s3cmd via s3cmd --configure (see http://s3tools.org/s3cmd for more details).
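s3cmd --configure writes your settings to ~/.s3cfg; a minimal sketch of the relevant lines (the key values here are placeholders):

[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY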


From what I understand, the 'Make public' option in the management console recursively adds a public grant for every object 'in' the directory. You can see this by right-clicking on one file, clicking 'Properties' and then 'Permissions'; there should be a line:

 Grantee:  Everyone  [x] open/download  [] view permissions   [] edit permission.
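You can also inspect an object's grants from the command line with the AWS CLI; a minimal example (bucket and key are placeholders). A public object shows a grant for the AllUsers group:

aws s3api get-object-acl --bucket mybucket --key topleveldir/scripts/somefile.sh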

If you upload a new file within this directory, it won't have this public access set and will therefore be private.

You need to remove public read permission one by one, either manually if you only have a few keys or by using a script.

I wrote a small script in Python with the boto3 module to recursively remove the 'public read' attribute of all keys in an S3 folder:

#!/usr/bin/env python
# Remove the public read grant from every key under a given prefix.
#
# Usage: remove_public.py bucketName folderName

import sys

import boto3

BUCKET = sys.argv[1]
PATH = sys.argv[2]

s3client = boto3.client("s3")

# list_objects_v2 returns at most 1000 keys per call, so paginate
paginator = s3client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=BUCKET, Prefix=PATH)

for page in page_iterator:
    # 'Contents' is absent from pages with no matching keys
    for k in page.get('Contents', []):
        s3client.put_object_acl(
            ACL='private',
            Bucket=BUCKET,
            Key=k['Key'],
        )

I tested it in a folder with (only) 2 objects and it worked. If you have lots of keys, it may take some time to complete, and a parallel approach might be necessary; see the sketch below.
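As a starting point for parallelizing, here is a minimal sketch using a thread pool (boto3 clients are documented as thread-safe; the worker count of 16 is an arbitrary choice, and worker errors are not surfaced in this sketch):

#!/usr/bin/env python
# Parallel variant: same listing logic, but the ACL updates are
# dispatched to a thread pool instead of running one at a time.

import sys
from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = sys.argv[1]
PATH = sys.argv[2]

s3client = boto3.client("s3")

def make_private(key):
    s3client.put_object_acl(ACL='private', Bucket=BUCKET, Key=key)
    print(key)

paginator = s3client.get_paginator('list_objects_v2')
with ThreadPoolExecutor(max_workers=16) as pool:
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PATH):
        for obj in page.get('Contents', []):
            # exceptions raised in make_private are swallowed here;
            # collect the returned futures if you need error handling
            pool.submit(make_private, obj['Key'])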


For the AWS CLI, it is fairly straightforward.

If the object is: s3://<bucket-name>/file.txt

For a single object:

aws s3api put-object-acl --acl private --bucket <bucket-name> --key file.txt

For all objects in the bucket (bash loop):

aws s3 ls --recursive s3://<bucket-name> | awk '{print $4}' | while IFS= read -r line; do
    echo "$line"
    aws s3api put-object-acl --acl private --bucket <bucket-name> --key "$line"
done
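Note that parsing the ls output like this assumes keys contain no whitespace. A somewhat more robust sketch (assuming your keys contain no tabs or newlines) lists keys via list-objects-v2 instead:

aws s3api list-objects-v2 --bucket <bucket-name> --query 'Contents[].Key' --output text | tr '\t' '\n' | while IFS= read -r key; do
    aws s3api put-object-acl --acl private --bucket <bucket-name> --key "$key"
done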