DynamoDB causing Lambda timeout

If you are launching your Lambda in a VPC, try launching it in a private subnet instead of a public subnet. I had the same problem, and moving the Lambda to a private subnet fixed it for me.


Ran into random Lambda timeouts while putting data from Lambda to DynamoDB. The Lambda resides in a VPC (per organization policy).

Issue: Some (random) Lambda containers would consistently fail while putting data and time out (timeout set to 30 sec), while other containers finished the put in a few milliseconds.

Root cause: Two subnets were configured (as AWS suggests): one private and one public. When a new Lambda container is spun up, it randomly selects one of the subnets. If it picks the public subnet, it consistently fails, because a Lambda ENI never gets a public IP and therefore cannot reach the DynamoDB endpoint through the internet gateway. If it picks the private subnet, the put completes in a few milliseconds.

Solution: Remove the public subnet and configure two private subnets instead.
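If you script the fix rather than clicking through the console, the function's VPC configuration can be repointed at two private subnets via the SDK's updateFunctionConfiguration call. The sketch below only builds the request parameters (the function name, subnet IDs, and security group ID are placeholders); the actual call is left in a comment so the snippet runs without AWS credentials:

```javascript
// Placeholder IDs -- substitute your own function and private subnets.
const params = {
  FunctionName: 'my-function',
  VpcConfig: {
    SubnetIds: ['subnet-aaaa1111', 'subnet-bbbb2222'], // both private
    SecurityGroupIds: ['sg-cccc3333']
  }
};

// With aws-sdk installed and credentials configured:
//   const AWS = require('aws-sdk');
//   new AWS.Lambda().updateFunctionConfiguration(params, (err, data) => {
//     if (err) console.error(err);
//   });
```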


After significantly increasing the timeout, I found that a network error is eventually thrown:

{
    "errorMessage": "write EPROTO",
    "errorType": "NetworkingError",
    "stackTrace": [
        "Object.exports._errnoException (util.js:870:11)",
        "exports._exceptionWithHostPort (util.js:893:20)",
        "WriteWrap.afterWrite (net.js:763:14)"
    ]
}

According to this thread, the error appears to be caused by an incompatibility between Node.js and OpenSSL. It sounds like the issue affects Node.js 4.x and up but not 0.10, so you can either downgrade the Lambda runtime to Node.js 0.10 or create the DynamoDB client with the following options when using aws-sdk:

const AWS = require('aws-sdk');
const https = require('https');

const dynamodb = new AWS.DynamoDB({
  httpOptions: {
    agent: new https.Agent({
      rejectUnauthorized: true,
      secureProtocol: "TLSv1_method", // force TLS 1.0 for the handshake
      ciphers: "ALL"
    })
  }
});