Terraform - Upload file to S3 on every apply

Terraform only makes changes to a remote object when it detects a difference between the configuration and the remote object's attributes. As you've written it so far, the configuration includes only the filename; it says nothing about the content of the file, so Terraform can't react to the file changing.

To make subsequent changes, there are a few options:

  • You could use a different local filename for each new version.
  • You could use a different remote object path for each new version (sketched just below this list).
  • You could use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
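
For example, the second option can be as simple as building the object key from a version string, so each release lands at a new path. A minimal sketch, assuming a release_version variable that you define yourself (the name is just an illustration):

variable "release_version" {
  type = string
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "releases/${var.release_version}/my_files.zip" # new key per version creates a new object
  source = "${path.module}/my_files.zip"
}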

The last of these seems closest to what you want in this case. To do that, add the etag argument and set it to an MD5 hash of the file:

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "${path.module}/my_files.zip"
  etag   = filemd5("${path.module}/my_files.zip")
}

With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk differs from the hash stored remotely in S3 and will plan to update the object accordingly.


(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
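
For completeness: version_id is only populated when versioning is enabled on the bucket. If you manage the bucket in the same configuration and are on AWS provider v4 or later, that can be done with a separate aws_s3_bucket_versioning resource. A minimal sketch, assuming the bucket name matches the one above:

resource "aws_s3_bucket_versioning" "my_bucket" {
  bucket = "my_bucket"

  versioning_configuration {
    status = "Enabled"
  }
}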


The preferred solution is now to use the source_hash argument. Note that aws_s3_bucket_object has been deprecated in favor of aws_s3_object (AWS provider v4+).

locals {
  object_source = "${path.module}/my_files.zip"
}

resource "aws_s3_object" "file_upload" {
  bucket      = "my_bucket"
  key         = "my_bucket_key"
  source      = local.object_source
  source_hash = filemd5(local.object_source)
}
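
If you also need the version_id discussed above, aws_s3_object exports it as an attribute once versioning is enabled on the bucket, so you can surface it as an output. For example:

output "file_upload_version_id" {
  value = aws_s3_object.file_upload.version_id
}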

Note that etag has issues when server-side encryption with KMS is used: the object's ETag is then no longer an MD5 of the file content, so Terraform sees a diff on every plan. source_hash avoids this because the hash is tracked in the Terraform state rather than compared against the remote ETag.