How to increase the maximum size of the AWS lambda deployment package (RequestEntityTooLargeException)?

You cannot increase the deployment package size for Lambda. AWS Lambda limits are described in the AWS Lambda developer guide. More information on how those limits work can be seen here. In essence, your unzipped package size has to be less than 250 MB (262144000 bytes).

PS: Using layers doesn't solve the sizing problem, though it helps with management and may give a faster cold start. The package size includes the layers - Lambda layers.

A function can use up to 5 layers at a time. The total unzipped size of the function and all layers can't exceed the unzipped deployment package size limit of 250 MB.
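Since the function and all its layers share the one 250 MB unzipped limit, it is worth checking package sizes before uploading. A minimal sketch of such a check (the file names `demo.zip`, `unzipped_size`, and `within_lambda_limit` are my own, not part of any AWS tooling):

```python
import zipfile

LIMIT_BYTES = 250 * 1024 * 1024  # 262144000 bytes, the unzipped limit

def unzipped_size(zip_path):
    # Sum the uncompressed sizes of all members in the archive
    with zipfile.ZipFile(zip_path) as zf:
        return sum(info.file_size for info in zf.infolist())

def within_lambda_limit(function_zip, layer_zips=()):
    # The function package plus every layer must fit under the shared limit
    total = unzipped_size(function_zip) + sum(unzipped_size(z) for z in layer_zips)
    return total <= LIMIT_BYTES

# Demo with a tiny archive standing in for a real deployment package
with zipfile.ZipFile('demo.zip', 'w') as zf:
    zf.writestr('handler.py', 'def handler(event, context): return "ok"')
print(within_lambda_limit('demo.zip'))  # True
```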

I have not tried this myself, but the folks at Zappa describe a trick that might help. Quoting from:

Zappa zips up the large application and sends the project zip file up to S3. Second, Zappa creates a very minimal slim handler that just contains Zappa and its dependencies and sends that to Lambda.

When the slim handler is called on a cold start, it downloads the large project zip from S3 and unzips it in Lambda’s shared /tmp space. All subsequent calls to that warm Lambda share the /tmp space and have access to the project files; so it is possible for the file to only download once if the Lambda stays warm.

This way you should get the full 512 MB of /tmp ephemeral storage to work with.


I have used the following code in the Lambdas of a couple of projects. It is based on the method Zappa uses, but can be used directly.

import io
import os
import sys
import zipfile

import boto3

# Based on the code in
# We need to load the layer from an s3 bucket into tmp, bypassing the normal
# AWS layer mechanism, since it is too large, AWS unzipped lambda function size
# including layers is 250MB.
def load_remote_project_archive(remote_bucket, remote_file, layer_name):
    # Puts the project files from S3 in /tmp and adds them to the path
    project_folder = '/tmp/{0!s}'.format(layer_name)
    if not os.path.isdir(project_folder):
        # The project folder doesn't exist in this cold lambda, get it from S3
        boto_session = boto3.Session()

        # Download zip file from S3
        s3 = boto_session.resource('s3')
        archive_on_s3 = s3.Object(remote_bucket, remote_file).get()

        # Unzip from the in-memory stream
        with io.BytesIO(archive_on_s3["Body"].read()) as zf:

            # Rewind the stream
            zf.seek(0)

            # Read the stream as a zipfile and extract the members
            with zipfile.ZipFile(zf, mode='r') as zipf:
                zipf.extractall(project_folder)

    # Add to project path
    sys.path.insert(0, project_folder)

    return True

This can then be called as follows (I pass the bucket with the layer to the Lambda function via an environment variable):

load_remote_project_archive(os.environ['MY_ADDITIONAL_LAYERS_BUCKET'], '', 'lambda_my_extra_layer')
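To show where the call sits inside a handler, here is a minimal sketch in which the S3 download above is replaced by a stub, so the cold-start/warm-start caching behaviour can be seen in isolation; the folder name `/tmp/lambda_my_extra_layer` follows the example above, and `load_remote_project_archive_stub` is my own stand-in, not part of the original code:

```python
import os
import sys

def load_remote_project_archive_stub(project_folder):
    # Stand-in for the S3 download + unzip: on a cold start the folder is
    # missing, so the "download" happens exactly once per container
    if not os.path.isdir(project_folder):
        os.makedirs(project_folder)
        with open(os.path.join(project_folder, 'marker.txt'), 'w') as f:
            f.write('downloaded')
    if project_folder not in sys.path:
        sys.path.insert(0, project_folder)
    return True

def handler(event, context):
    # Warm invocations find the folder already in /tmp and skip the download
    load_remote_project_archive_stub('/tmp/lambda_my_extra_layer')
    return 'ok'
```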

At the time when I wrote this code, tmp was also capped, I think to 250MB. The call to zipf.extractall(project_folder) above can be replaced with extracting directly to memory: unzipped_in_memory = {name: zipf.read(name) for name in zipf.namelist()}. I did this for some machine learning models, though I guess the answer of @rahul is more versatile.
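The in-memory variant can be sketched as a self-contained function; the helper name `unzip_to_memory` and the demo archive are my own illustration:

```python
import io
import zipfile

def unzip_to_memory(zip_bytes):
    # Read every member of a zip archive into a dict of name -> bytes,
    # avoiding any writes to /tmp
    with io.BytesIO(zip_bytes) as zf:
        with zipfile.ZipFile(zf, mode='r') as zipf:
            return {name: zipf.read(name) for name in zipf.namelist()}

# Build a small zip in memory to demonstrate
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w') as zipf:
    zipf.writestr('model/weights.txt', 'dummy weights')

contents = unzip_to_memory(buf.getvalue())
print(contents['model/weights.txt'])  # b'dummy weights'
```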

AWS Lambda functions can mount EFS. Using EFS, you can load libraries or packages that are larger than the 250 MB deployment package size limit of AWS Lambda.

Detailed steps on how to set it up are here:

At a high level, the changes include:

  1. Create and set up an EFS file system
  2. Use EFS with the Lambda function
  3. Install the pip dependencies inside the EFS access point
  4. Set the PYTHONPATH environment variable to tell Python where to look for the dependencies