Can I use Terragrunt on Terraform Cloud?

Terragrunt expects you to run terragrunt commands, and under the hood, it runs terraform commands, passing along TF_VAR_* environment variables. Terraform Cloud (TFC) also runs terraform commands directly. Therefore, you cannot run Terragrunt within TFC - it will only execute the terraform binary, never the terragrunt binary.

However, you can use Terragrunt from the CLI to execute Terraform runs on TFC using a remote backend. In this mode, you run Terragrunt commands as usual, and when Terraform is called, it actually executes within TFC. You can review the runs in the TFC UI, the state is stored in TFC, etc. However, due to the limitation described above, you cannot actually run Terragrunt from the UI.

To set this up, first you need to get an API token and configure the CLI with a credentials block in .terraformrc.
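For example, the credentials block in ~/.terraformrc (or %APPDATA%\terraform.rc on Windows) looks something like this, where the token value is a placeholder for the API token you generated:

```hcl
credentials "app.terraform.io" {
  # Replace with the API token generated in the TFC UI
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}
```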

Next, you'll need to generate a backend block:

generate "remote_state" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = <<EOF
terraform {
  backend "remote" {
    hostname = "app.terraform.io" # Change this to your hostname for TFE
    organization = "your-tfc-organization"
    workspaces {
      name = "your-workspace"
    }
  }
}
EOF
}

This code generates a file called backend.tf alongside your Terraform module, which instructs Terraform to use TFC as a remote backend. It will use a workspace called your-workspace. If this workspace doesn't exist, TFC will create it automatically via implicit workspace creation. You'll end up with one workspace for each module you call from Terragrunt.

TFC remote runs do not pick up the TF_VAR_* environment variables that "stock" Terraform supports, since the local environment isn't passed to the remote execution. Therefore the Terragrunt inputs block, which is the standard way that Terragrunt passes variables to Terraform, does not work.

Instead, you can create a *.auto.tfvars.json file. You can generate this file in Terragrunt as well:

generate "tfvars" {
  path      = "terragrunt.auto.tfvars.json"
  if_exists = "overwrite"
  disable_signature = true
  contents = jsonencode({name = "your-name"})
}

All the variables required for a module should be passed as JSON to the contents attribute above. A more flexible pattern is to use a locals block to set up the variables, then pass those into contents. JSON is preferred over HCL tfvars to avoid type-conversion issues.
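The locals pattern might be sketched like this (the variable names here are illustrative, not from any particular module):

```hcl
locals {
  # Illustrative inputs; replace with your module's actual variables
  name        = "your-name"
  environment = "dev"
}

generate "tfvars" {
  path              = "terragrunt.auto.tfvars.json"
  if_exists         = "overwrite"
  disable_signature = true
  contents = jsonencode({
    name        = local.name
    environment = local.environment
  })
}
```

This keeps the inputs in one place at the top of the file, and jsonencode preserves the HCL types (strings, lists, maps) when they are serialized.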

A final wrinkle is that when the workspace is created automatically, it won't have the credentials (e.g. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) needed to authenticate with the cloud provider. You could hard-code these in the provider configuration by generating a provider.tf file, but then the credentials are static and stored in plain text. Not good.

Instead, you can either set the environment variables in each workspace manually, or you can use the tfe_workspace and tfe_variable resources to create them with Terraform in advance. The latter method is recommended since it's programmatic, making it much easier to update if you need to rotate your credentials.
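A sketch of the programmatic approach using the hashicorp/tfe provider, run from a separate Terraform configuration (the workspace and variable names are illustrative):

```hcl
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

# The tfe provider reads the API token from the CLI config
# (~/.terraformrc) or the TFE_TOKEN environment variable.
provider "tfe" {}

variable "aws_access_key_id" {}
variable "aws_secret_access_key" {
  sensitive = true
}

# Pre-create the workspace Terragrunt's backend block will point at
resource "tfe_workspace" "example" {
  name         = "your-workspace"
  organization = "your-tfc-organization"
}

resource "tfe_variable" "aws_access_key_id" {
  key          = "AWS_ACCESS_KEY_ID"
  value        = var.aws_access_key_id
  category     = "env" # environment variable, not a Terraform variable
  workspace_id = tfe_workspace.example.id
}

resource "tfe_variable" "aws_secret_access_key" {
  key          = "AWS_SECRET_ACCESS_KEY"
  value        = var.aws_secret_access_key
  category     = "env"
  sensitive    = true # write-only in the TFC UI and API
  workspace_id = tfe_workspace.example.id
}
```

Rotating credentials then becomes a matter of re-applying this configuration with new values.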

In both cases you'll need to have a workspace for each module called by Terragrunt.

See also: this blog post on the topic and this content on integration with Terragrunt.


I have received a direct response from josh-padnick at Gruntwork that as of now there are no concrete plans to make this work, and he confirmed that it cannot currently work to the best of his knowledge. I appreciate all the answers!