#terragrunt (2021-04)
Terragrunt discussions
Archive: https://archive.sweetops.com/terragrunt/
2021-04-09
Hi. I’m trying to figure out how to use variables merged from tfvar files in my ‘root’ terragrunt.hcl file. Given this region.tfvar file:
region_parameters = {
  region = "us-west-1"
}
How do I pass region from the file above to this terragrunt.hcl (in the same directory)?
terraform {
  extra_arguments "common_vars" {
    commands = get_terraform_commands_that_need_vars()
    required_var_files = [
      find_in_parent_folders("common.tfvars"),
    ]
    optional_var_files = [
      find_in_parent_folders("account.tfvars"),
      find_in_parent_folders("regional.tfvars"),
      find_in_parent_folders("env.tfvars")
    ]
  }
  extra_arguments "disable_input" {
    commands  = get_terraform_commands_that_need_input()
    arguments = ["-input=false"]
  }
}
remote_state {
  backend = "s3"
  config = {
    bucket = "compeat-dev-terraform"
    key    = "us-west-1/${path_relative_to_include()}/terraform.tfstate"
    # for example, I want to use it here:
    region = "us-west-1"
  }
}
Our root terragrunt.hcl looks roughly like this:
locals {
  account     = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  globals     = read_terragrunt_config(find_in_parent_folders("globals.hcl"))
  environment = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  region      = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  vars        = merge(merge(merge(local.globals.locals, local.account.locals), local.environment.locals), local.region.locals)
}
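(For illustration, a rough sketch of how a merged local.vars map like this is typically consumed later in the same root terragrunt.hcl; the bucket name is a placeholder and the aws_region key is assumed to come from a region.hcl like the one shown below:)
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"    # placeholder
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = local.vars.aws_region   # resolved from the merged locals hierarchy
  }
}
inputs = local.vars   # pass the merged map through to Terraform as input variables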
I don’t think that’s compatible with what we’re doing now … we’re not using hcl for locals / there’s no locals block in terragrunt.hcl …
Where region.hcl looks like:
locals {
  aws_region = "eu-central-1"
}
(We could put a locals block in terragrunt.hcl, but it would need to not override what was imported via tfvars.)
I don’t think you can get the value from a tfvars file to feed back into where you want it.
locals are useful as you can a) use interpolation, and b) pull them in from other places and extract values.
My only concern with them was that I thought there was a limited number of layers we could pull in for locals. We have these layers: account, region, env.
If I can add an hcl file with just what I’m trying to import into terragrunt.hcl, without it causing issues with the layers of tfvars, I’m good with that.
You can pull in what you want; as you can see, we are pulling in a hierarchy of hcl files with locals blocks in them.
Ok, is there a way to pull in multiple layers of tfvars into a locals block in a single hcl file?
Not the last time I checked (if I’m understanding correctly)
Ok. Thanks for your time!
if you switch from tfvars in hcl to tfvars.json in json, then you can read the file into terragrunt with jsondecode(file("path/to/file.tfvars.json"))
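(A minimal sketch of that suggestion, assuming the JSON file sits next to the terragrunt.hcl and the local is named vars; the file name is hypothetical:)
locals {
  # decode the JSON var file into a map that can be referenced anywhere in this file
  vars = jsondecode(file("region.tfvars.json"))
}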
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
Terraform also automatically loads a number of variable definitions files if they are present:
- Files named exactly terraform.tfvars or terraform.tfvars.json.
- Any files with names ending in .auto.tfvars or .auto.tfvars.json.
Thanks for the response! That sounds like it could be worth exploring. I need to access a variable in the TF with var.regional_parameters.var_name .. how would that json look? ( and how would I then access it from the terragrunt.hcl file after I read it in there? )
Hi, I’ve been using Terraform for a while now. I recently started looking into terragrunt a bit more and have been looking at this example. I have a terraform.tfvars.json file with aws_region defined. Is there a way to reference this variable in the root terragrunt.hcl so that it can be used as part of the provider?
I might be able to figure it out with that. Thanks again!
{
  "region": "us-east-1"
}
And that will get me local.vars.region in the terragrunt.hcl file, but that would be var.region in the TF, right?
Presuming you load the file into the local named vars, yes.
If so, the thing left for me to figure out is how to get the extra var.regional_parameters.region piece in. It’s JSON, so maybe:
{
  "regional_parameters": { "region": "us-east-1" }
}
and then local.vars.regional_parameters.region in the HCL?
correct
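(Pulling the thread together, a hedged sketch of how that nested JSON could be read and used in the root terragrunt.hcl; the file name regional.tfvars.json and the local name vars are assumptions:)
locals {
  # adjust the path, or use find_in_parent_folders(), if the file lives further up the tree
  vars = jsondecode(file("regional.tfvars.json"))
}
remote_state {
  backend = "s3"
  config = {
    bucket = "compeat-dev-terraform"
    key    = "${local.vars.regional_parameters.region}/${path_relative_to_include()}/terraform.tfstate"
    region = local.vars.regional_parameters.region
  }
}
Inside the Terraform modules the same value is still var.regional_parameters.region, provided the file is also passed to Terraform (via the extra_arguments var files above, or by naming it something.auto.tfvars.json so Terraform auto-loads it).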
2021-04-12
I’m running into the following error when running terragrunt with the Azure provider. Has anyone come across this? Seems like a possible bug I may have encountered.
I’m running tf version 0.14.9 and tg version 0.28.20:
azurerm_role_definition.default: Creating...
Error: rpc error: code = Unavailable desc = transport is closing....
2021/04/12 19:38:12 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/azurerm\"] (close)" errored, so skipping
2021/04/12 19:38:12 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-04-12T19:38:12.459-0700 [DEBUG] plugin: plugin exited
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.
[1]: https://github.com/hashicorp/terraform/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERRO[0079] Hit multiple errors:
Hit multiple errors:
exit status 1
2021-04-15
I avoided terragrunt for a long time and stuck with Terraform Cloud at my last role, not for flexibility but for simplicity in team collaboration.
I’m now at a new role where that’s not as much of a concern, and instead I want flexibility to not require lots of repeated var configurations for multiple environments. I tend to avoid workspaces (terraform native ones) and prefer folder/var driven instead.
What I’ve Tried
- I tried the yaml config approach that cloudposse wrote, which is very creative, but in the end I felt it was a bit hard to debug for my use case. I also don’t need to support k8s, just terraform + Go apps. This has made me tempted to use pulumi :wink:
- I started writing my own Taskflow (Go automation wrapper) and realized I’m probably implementing what Terragrunt does with much less elegance. :joy:
taskflow -v tidy job-tfact "action=plan" "env=dev.staging" "stack=ecs_task_name"
is basically a less featured version of what terragrunt does
My Environment
- I have to use Azure Repos (no more github access)
- I need to deploy multiple clones of the environments to help with staging and so on.
- I’m familiar with Go, and working with Go developers.
- Considering for Azure Repos: Atlantis + terragrunt over building azure pipelines to handle the workflow.
- Also considering astro, which seemed pretty cool as well: https://github.com/uber/astro Is there any caution or advice against terragrunt being a good solution to simplify this?
Astro is a tool for managing multiple Terraform executions as a single command - uber/astro
@joshmyers ah, if that’s for astro, I see it’s not very active, so I agree.
That was an answer to the only question I could see in your statement
Agree with Astro, nice idea, wouldn’t use.
Whoops, sorry for not being clear.
@managedkaos I did create a multistage YAML pipeline with PR code commenting and an approval gate for the desired environment.
So I did solve that, but I don’t think I want to keep that long-term if I can leverage something more developed and worked on than my custom implementation, such as Atlantis.
The variables/backend stuff is a pain still. I think terragrunt does this automatically
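(For reference, a rough sketch of the DRY backend piece in terragrunt, using the azurerm backend since that’s the environment in question; all names and values below are placeholders:)
# root terragrunt.hcl, inherited by every stack via an include block
remote_state {
  backend = "azurerm"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "stterraformstate"
    container_name       = "tfstate"
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}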
https://sweetops.slack.com/archives/CDMJ4BBR8/p1618502121036200
@managedkaos responding to your post in this thread so the history/archive keeps the context.
Yes, I agree it was nice. I did this with PR commenting in yaml pipeline, but it’s the backend and variable stuff that’s painful and the PR commenting/workflow is only 10-20% of my main concern. I guess that’s why terragrunt seemed promising to simplify that part of it in my scenario.
Prior company was better with Terraform Cloud for sure, but looking for more flexibility now
I know this is the #terragrunt channel, so apologies that this is not terragrunt specific but i followed the conversation here from #terraform.
Also, yes, this is a UI implementation of a pipeline vs a YAML based pipeline. However, I export the pipeline as JSON for backup.
Anyway, it works pretty good!
got it
Same story with taskfile, I have my own terraform stack management, with tighter control than Terragrunt or terraspace and just simpler, and it works with vanilla terraform.
I love that tool. it’s great!
Only problem I have with it is a silly one: for small ad hoc tasks, the bash scripting is to me far worse than the PowerShell 7 I’m used to, as it’s not even a fully featured sh. For all else it’s great!
Would love to see your terraform task runner file if you could remove anything sensitive for my education. I’ve done pretty extensive ones for docker/go, but not used with terraform.
It’s a different focus for sure, as task is just running vanilla terraform, right? I want the dynamic backend + PR commenting workflow, which terragrunt approaches differently. Taskfile makes it easier to run, but doesn’t solve the DRY stuff.
My current approach was similar; this is what it looks like. It’s basically building the tfvars/backend based on parameters. I’m still not overall happy with it versus something designed for this from the beginning.
When you say that you had a go with cloudposse’s yaml configuration, does that mean you tried atmos?
Both manually and with atmos. It’s just not the right fit for my use case, despite my love for all things cloudposse.
Taskflow looks more advanced than taskfile. I like the idea of creating these task runners without having any embedded secrets or sensitive data. Can you share your taskflow? I’m interested.
Here’s a simple version of my tf taskfile: https://github.com/mhmdio/iac-taskfile-framework/blob/main/terraform/Taskfile.yml
Taskfile that contains needed daily operations tasks and commands. - mhmdio/iac-taskfile-framework
Why not use Task?
While Task is simpler and easier to use than Make, it still has some problems:
- Requires learning Task’s YAML structure and the minimalistic, cross-platform interpreter which it uses.
- Debugging and testing tasks is not fun.
- Harder to make some reusable tasks.
- Requires “installing” the tool. taskflow leverages go run and Go Modules so that you can be sure that everyone uses the same version of taskflow.
A task runner / simpler Make alternative written in Go
I think, after giving this a shot, this is a more advanced use case for me now; otherwise I would end up reinventing Terragrunt, which I don’t want to do. YAML declarations for my tasks seem valid to me for the time being.
Yeah, so it’s a different focus. For make style stuff it works really fast for exploration to use go-task. I still use it.
For building automation with native Go, taskflow will be better. You could interact with your business APIs and do tons of things that would otherwise be clunky. So basically go-task = alternative to save typing for cli commands. Taskflow -> Go-powered tooling.
Since I’m doing FT Go development I’m working with taskflow to improve my Go chops and have more flexibility, but for “make build” style quick tools go-task all the way
Here’s a sample of a func to validate terraform path and log success or failure.
func getTerraformPath(tf *taskflow.TF) (terraformPath string) {
    // resolve the expected location of the terraform binary under toolsDir
    terraformPath, err := filepath.Abs(path.Join(toolsDir, "terraform"))
    if err != nil {
        tf.Logf(":arrows_counterclockwise: skip: cannot resolve realpath of terraformPath at: [%s] -> [%v]\n", terraformPath, err)
        tf.SkipNow()
    }
    if _, err := os.Stat(terraformPath); os.IsNotExist(err) {
        tf.Logf(":exclamation: cannot find terraform at: [%s] -> [%v]\n", terraformPath, err)
        tf.SkipNow()
    }
    pterm.Success.Printf(":white_check_mark: found terraform at: [%s]\n", terraformPath)
    // run `terraform --version -json` and capture stdout for parsing
    sb := &strings.Builder{}
    tfVersion := tf.Cmd(terraformPath, "--version", "-json")
    tfVersion.Stdout = io.MultiWriter(sb)
    if err := tfVersion.Run(); err != nil {
        pterm.Error.Printf(":exclamation: terraform --version: [%s]\n", err)
        tf.FailNow()
    }
    // decode the JSON version output into the TerraformVersion struct
    ver := TerraformVersion{}
    err = json.Unmarshal([]byte(sb.String()), &ver)
    if err != nil {
        pterm.Error.Println(":exclamation: was not able to parse terraform version")
        tf.Fail()
    }
    pterm.Success.Printf(":white_check_mark: terraform version: [%s]\n", ver.TerraformVersion)
    return terraformPath
}
disclaimer: still new to go, so lots of messy code
Here’s a screenshot of my “fun” version where I was just playing around with Pterm to try and make things more visually appealing
Go task is great for simple things though! Kinda forces me to keep it simple.
I prefer powershell over bash for this type of task automation as I like structured objects over parsing text typically, so my personal favorite tool is InvokeBuild for more robust local scripted task running. Go task is kinda a test to force myself to keep it “less” powerful and limit what I put in there
Cool, thanks @sheldonh, I learned a lot today.
@sheldonh I took the next step, and it looks like Terragrunt is using https://github.com/urfave/cli, which has 15,000 stars on GitHub vs Taskflow, and it seems nice to use with PTerm. I think I like this path.
A simple, fast, and fun package for building command line apps in Go - urfave/cli
nice! for the record, taskflow is a different concept. cobra and cli both are focused on building command line apps.
I think taskflow is designed to be more of a go run ./build.go tool, which basically means zero dependencies to install.
A cli tool is probably a great fit for established standard workflows, but probably not as good for easy adhoc tasks like taskflow. YMMV. Let me know what you find out as well
@sheldonh are you using Azure DevOps pipelines? I have had success using a release pipeline with workspaces to stamp out multiple copies of an environment with no code changes. The state is in Azure Storage and variables in the pipeline set up the workspaces for me.
2021-04-16
Would love any help I could get on this, as I’m running into a few issues that are maybe easier to deal with if you know terragrunt well.
https://github.com/antonbabenko/terragrunt-reference-architecture/issues/8
This file uses commands to load common.tfvars and regional.tfvars However, it seems to have a problem loading this common.tfvars in when it runs, saying it can't find common.tfvars. Call to fun…
It differs for different versions of terragrunt. The most recent declaration that I use is:
inputs = merge(
  # Configure Terragrunt to use common vars encoded as yaml to help you keep often-repeated variables (e.g., account ID)
  # DRY. We use yamldecode to merge the maps into the inputs, as opposed to using varfiles, due to a restriction in
  # Terraform >=0.12 that all vars must be defined as variable blocks in modules. Terragrunt inputs are not affected by
  # this restriction.
  yamldecode(
    file("${find_in_parent_folders("region.yaml", "empty.yaml")}"),
  ),
  yamldecode(
    file("${find_in_parent_folders("env.yaml", "empty.yaml")}"),
  ),
)
You prevent errors when no such file matches by creating an empty empty.yaml.
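(A minimal sketch of that fallback arrangement, assuming empty.yaml sits next to the root terragrunt.hcl and contains just an empty map, {}:)
inputs = merge(
  # falls back to empty.yaml (which decodes to an empty map) when no region.yaml exists in a parent folder,
  # so that layer is simply skipped instead of failing the run
  yamldecode(file(find_in_parent_folders("region.yaml", "empty.yaml"))),
  {
    # stack-specific inputs go last so they win the merge
    environment = "dev"   # placeholder
  },
)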
ARGGH. Got you. That’s really useful. Might add that to the issue as well so no one else repeats the work. I’m going to try that now
@Marcin Brański that’s what you put in the child file right?