Capture Terraform provisioner output?

When I asked myself the same question, "Can I use output from a provisioner to feed into another resource's variables?", I went to the source for answers.

At the time of writing, provisioner results are simply streamed to Terraform's standard output and never captured.

Given that you are running remote provisioners on both nodes and you are trying to access values from S3 (I agree with this approach, by the way; I would do the same), what you probably need to do is handle the race condition in your script with a sleep, or by scheduling a script to run later with at, cron, or a similar scheduler.
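
For example, a minimal sketch of that kind of retry loop in a remote-exec provisioner might look like the following (the instance reference, bucket, and object key are hypothetical placeholders):

resource "null_resource" "wait_for_s3_value" {
  # Hypothetical connection details; adjust to your own instance resource and SSH setup
  connection {
    type        = "ssh"
    host        = aws_instance.node.public_ip
    user        = "ubuntu"
    private_key = file(pathexpand("~/.ssh/id_rsa"))
  }

  provisioner "remote-exec" {
    inline = [
      # Poll S3 until the other node has published its value, instead of assuming it is already there
      "until aws s3 cp s3://example-bucket/swarm-token /tmp/swarm-token; do sleep 10; done",
    ]
  }
}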

In general, Terraform wants to access all variables either up front or as the result of a provider. Provisioners are not treated as first-class in Terraform. I'm not on the core team so I can't say why, but my speculation is that ignoring provisioner results beyond success or failure reduces complexity, since provisioners are just scripts and their output is generally unstructured.

If you need more advanced capabilities for setting up your instances, I suggest a dedicated tool for that purpose such as Ansible, Chef, or Puppet. Terraform's focus is really on infrastructure rather than software components.


You can use the external data source:

data "external" "docker_token" {
  program = ["/bin/bash", "-c" "echo \"{\\\"token\\\":\\\"$(docker swarm init...)\\\"}\""]
}

Then the token will be available as data.external.docker_token.result.token. If you need to pass arguments in, you can use a script (e.g. relative to path.module). See https://www.terraform.io/docs/providers/external/data_source.html for details.
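
If you do go the script route, a rough sketch might look like this (the script name and query key are hypothetical; the script follows the external program protocol of reading a JSON query on stdin and printing a JSON object on stdout):

data "external" "docker_token" {
  program = ["/bin/bash", "${path.module}/get-token.sh"]

  query = {
    node_type = "worker"
  }
}

with get-token.sh along these lines:

#!/bin/bash
set -e
# Parse the query JSON passed on stdin by the external data source
eval "$(jq -r '@sh "NODE_TYPE=\(.node_type)"')"
# Emit the result as a single JSON object on stdout
token="$(docker swarm join-token -q "$NODE_TYPE")"
jq -n --arg token "$token" '{"token": $token}'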


You can redirect the outputs to files:

resource "null_resource" "shell" {

  provisioner "local-exec" {
    command = "uptime 2>stderr >stdout; echo $? >exitstatus"
  }
}

and then read the stdout, stderr, and exitstatus files with the local_file data source.
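
For example, a minimal sketch for one of the files (the relative path assumes terraform is run from the same directory the command wrote to):

data "local_file" "stdout" {
  filename = "stdout"
}

output "uptime_stdout" {
  value = "${chomp(data.local_file.stdout.content)}"
}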

The problem is that if the files disappear, then terraform apply will fail.

In Terraform 0.11 I worked around this by reading the files with an external data source and storing the results in null_resource triggers (!):

resource "null_resource" "contents" {
  triggers = {
    stdout     = "${data.external.read.result["stdout"]}"
    stderr     = "${data.external.read.result["stderr"]}"
    exitstatus = "${data.external.read.result["exitstatus"]}"
  }

  lifecycle {
    # Keep the values captured on the first run; later changes to the files
    # will not force this resource to be replaced
    ignore_changes = [
      "triggers",
    ]
  }
}
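
The data "external" "read" source referenced by those triggers is not shown here; a rough sketch of it (assuming jq is available and the files sit in the working directory) could be:

data "external" "read" {
  program = ["/bin/bash", "-c", "jq -n --arg stdout \"$(cat stdout)\" --arg stderr \"$(cat stderr)\" --arg exitstatus \"$(cat exitstatus)\" '{stdout: $stdout, stderr: $stderr, exitstatus: $exitstatus}'"]
}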

But in Terraform 0.12 this workaround can be replaced with the file() function.
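
A sketch of the 0.12 variant, keeping the same relative file names as above:

resource "null_resource" "contents" {
  triggers = {
    stdout     = file("stdout")
    stderr     = file("stderr")
    exitstatus = file("exitstatus")
  }

  lifecycle {
    # As before, keep the first captured values instead of re-reading on every run
    ignore_changes = [triggers]
  }
}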

and then, finally, I can use/output those with:

output "stdout" {
  value = "${chomp(null_resource.contents.triggers["stdout"])}"
}

See the module at https://github.com/matti/terraform-shell-resource for a full implementation.


A simpler solution would be to provide the token yourself.

When creating the ACL token, simply pass in the ID value and Consul will use it instead of generating one at random.
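
A minimal sketch, assuming the legacy Consul ACL API (whose /v1/acl/create endpoint accepts an explicit ID field); the Consul address and token name are placeholders, and a management token is assumed to be set in CONSUL_HTTP_TOKEN:

resource "random_uuid" "acl_token" {}

resource "null_resource" "create_acl_token" {
  provisioner "local-exec" {
    # Supplying "ID" makes Consul use this exact token value instead of a random one
    command = "curl -s -X PUT -H \"X-Consul-Token: $CONSUL_HTTP_TOKEN\" -d '{\"ID\": \"${random_uuid.acl_token.result}\", \"Name\": \"example\", \"Type\": \"client\"}' http://127.0.0.1:8500/v1/acl/create"
  }
}

# The token value is now known to Terraform and can feed other resources
output "acl_token" {
  value = "${random_uuid.acl_token.result}"
}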
