#terragrunt (2020-01)
Terragrunt discussions
Archive: https://archive.sweetops.com/terragrunt/
2020-01-01
2020-01-09
guys, how do you safely migrate state from one S3 bucket to another with terragrunt?
First of all, make a backup of your state by copying it elsewhere; that way you don’t get sweaty hands. Secondly, I think that the moment you change the backend in terragrunt, terragrunt/terraform will take care of the move.
Thanks Maarten, well I synced s3 buckets and changed backend config after that. It looks ok now :)
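A minimal sketch of that flow, with hypothetical bucket names: copy the state objects across first, then point the `remote_state` block at the new bucket and re-init.

```hcl
# Back up and copy the state objects first, e.g. with the AWS CLI:
#   aws s3 sync s3://old-tf-state s3://old-tf-state-backup
#   aws s3 sync s3://old-tf-state s3://new-tf-state
remote_state {
  backend = "s3"
  config = {
    bucket         = "new-tf-state"   # was "old-tf-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "tf-lock-table"
  }
}
# After changing the config, run `terragrunt plan` in each module and
# confirm it reports no changes before deleting the old bucket.
```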
I’ve got a terraform module that creates resources in two different AWS accounts. I handle this by doing the following:
provider "aws" {
  region  = "us-west-2"
  profile = "profile1"
}

provider "aws" {
  region  = "us-west-2"
  profile = "profile2"
  alias   = "digi"
}
And in the module itself, the different provider is picked up like this:
resource "aws_route53_record" "dig_ns" {
  provider = aws.digi
I’m trying to utilize terragrunt to deploy many modules. This becomes difficult since the above method no longer works. Has anyone encountered this? If so, how have you gotten around it? I don’t think Terragrunt supports multiple providers like this
I don’t quite understand how TG can use different providers/regions (if at all)
@Brij S
A few comments for you. First, Route53 is not region-dependent, so if that’s the only use case you don’t have to worry about it.
-
I try to make modules which are used for a single region; if I need them to be applied in a different region, it will be a different apply
-
Using a structure like the following can help you with that:
├── envs
│ ├── aws
│ │ ├── dev
│ │ │ ├── eu-central-1
│ │ │ │ ├── applications
│ │ │ │ ├── infra
│ │ │ ├── eu-west-1
@maarten, yes r53 is not region specific, but it is account specific. I have a unique setup where I need to create a new zone and then take its NS records and insert them into a different zone in a different account. Hence the multiple providers
all of my modules are quite generic already, like you mentioned. With TG it seems like I can only use one provider, so if I need to create edge ACM certs in us-east-1 and all other resources in us-west-2, it becomes a mystery
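For reference, the edge-cert case above is usually handled with a provider alias inside the module rather than a separate apply. A sketch, with hypothetical names:

```hcl
provider "aws" {
  region = "us-west-2"
}

# CloudFront edge certs must be issued in us-east-1
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "edge" {
  provider          = aws.use1
  domain_name       = "example.com"
  validation_method = "DNS"
}
```

Since Terragrunt just wraps Terraform, this works the same whether the module is applied with `terraform` or via a `terragrunt.hcl` that points at it.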
@Brij S could you delegate the zone to the other account instead ?
unfortunately, no
new zone in new account, with NS records of said zone in old account
yes r53 is not region specific, but it is account specific
Adding to the use case list for this is AWS govcloud[1].
govcloud’s Route53 service only allows creating private zones. Account-level delegations across the govcloud/commercial partition line are also not possible.
[1] https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-r53.html
Lists the differences for using Amazon Route 53 in the AWS GovCloud (US-West) Region compared to other AWS regions.
I don’t see why that wouldn’t work, unless maybe you’re on an older version?
well, in the .hcl file you don’t specify providers of any sort?
or any profiles
True, but terragrunt is just applying terraform. So you should be able to leave your tf files more or less the same
hmm, im confused
the terragrunt docs mention only leaving .hcl files in the ‘live’ repo
I have only done some light testing around multiple providers in my TF but TG didn’t seem to mind
so if my tg file looked like this
include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::[email protected]:foo/modules.git//app"
}

inputs = {
  zone_name = "domain.com"
  comment   = "Managed by Terraform"
}
where do you plug in multiple providers?
You put the multiple providers in your .tf files like you would normally
Terragrunt is just a wrapper for terraform, it doesn’t need all that info
So if you had module.tf with multiple providers before, just point TG at it and it should (probably) just work
@slaughtr here’s my structure
├── global
│ ├── main.tf
│ └── terragrunt.hcl
├── terragrunt.hcl
└── us-east-1
├── main.tf
├── terragrunt.hcl
└── variables.tf
this issue arose when the variables file in the us-east-1 folder wasn’t enough to pick up the inputs from the global folder, as you helped me with earlier
so I decided to do all .hcl, just like the examples in their docs
the main.tf in global has an output called zone_id (in the module itself). I set up the hcl file, as you assisted me with, as follows in the us-east-1 folder:
include {
  path = find_in_parent_folders()
}

dependency "global" {
  config_path = "../global"

  mock_outputs = {
    zone_id = "Z3P5QSUBK4POTI"
  }
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
}

inputs = {
  zone_id = dependency.global.outputs.zone_id
}
then in the variables.tf file I created a var called zone_id
but it just picks up the mock value instead of the real created one
The format I follow (and I guess I didn’t read / should have read more) is something like this:
├── module
│   ├── main.tf
│   ├── somecode.js
│   └── variables.tf
└── us-east-1
    ├── terragrunt.hcl
    └── module_tg
        └── terragrunt.hcl
Just a side point, so you know you can do it
Oh if you have a mock value it will use that, you can remove it in production
There might be a way to say “if not present use mock” but I’m not sure
yeah it has the
mock_outputs_allowed_terraform_commands = ["plan", "validate"]
but that doesn’t work
it still picked up the mock value when I ran apply-all
Hmm, I think the -all commands might follow a slightly different set of rules; don’t quote me on that. I never use them because I find they super conflate everything, and really it’s pointless until you’ve applied each thing individually, because TG kinda sucks at dependency resolution
so how do you go about applying all the modules in separate folders?
I also don’t use mock outputs so I’m not sure what’s going on there. I’d recommend commenting it out for the moment
I apply each module individually
Generally I try not to change more than one module at a time anyway
Back to your original query, though: you have terraform config that has multiple providers that worked in the past with just TF?
yes
I was trying to convert to using just .hcl, as per the docs
So if you point at that in your source
and get your inputs and dependencies figured out it should just work
how does tg know to pick up credentials for a different provider?
Can you point me to where in the docs it says that? I’ve not seen that rec before
TG is just wrapping around terraform, so if terraform knows it you’re good
my module has a resource that explicitly has a provider set for it, like this
resource "aws_route53_record" "digital_ns" {
  provider = aws.digi
in my main.tf file I have two providers, one with an alias of digi
however, if there is no tf file and only a .hcl file, how will tg know to use that profile and that alias, etc.?
Ah ok I think you may have misunderstood.
In a separate repo, called, for example, live, you define the code for all of your environments, which now consists of just one terragrunt.hcl file per component
my modules are in a different repo
i am creating a live repo
So you have your .tf files like normal, you point at them in the .hcl files.
like this
terraform {
  # Deploy version v0.0.3 in stage
  source = "git::[email protected]:foo/modules.git//app?ref=v0.0.3"
}
?
Yup!
no, that’s not what I’m getting at
I have a module, like the ones you can find on the terraform registry
All TG is doing here is doing inputs/outputs for TF. It doesn’t control your providers or anything.
my root .hcl file
remote_state {
  backend = "s3"
  config = {
    bucket         = "test-terraform-state-us-west-2"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "test-lock-table-us-west-2"
    profile        = "profile1"
  }
}
tg won’t automatically assume profile1 for everything?
since all the child .hcl files inherit from this
It will, yes. But using the provider block in your tf will work as it used to. Just like tf uses whichever profile you’re using and then follows the directive of the provider block
If you have a child .hcl that shouldn’t use profile1 at all, you can leave out the path = find_in_parent_folders() bit
but then it won’t use the remote state bucket?
also, let me try this out now with the changes we’ve discussed. I think I’ll run into a problem with passing the output though - let’s see
Here’s my root terragrunt.hcl
using two accounts, if that helps
Though I assume you’re not looking at dev vs prod necessarily, so that might not be the most helpful
But it gives you an idea of conditionally changing profiles at that level
ok let me see
To be clear, I’m just doing dev and prod and applying the same resources to them conditionally based on an env var (which is sent in via an alias; I have tg and tgprod to be safe)
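The conditional-profile setup described here can be sketched roughly as follows. The env var, profile names, and bucket names are all hypothetical, and this assumes a terragrunt version with `locals` support:

```hcl
# root terragrunt.hcl: pick the AWS profile from an env var
# set by the `tg` / `tgprod` shell aliases
locals {
  env         = get_env("TG_ENV", "dev")
  aws_profile = local.env == "prod" ? "prod-profile" : "dev-profile"
}

remote_state {
  backend = "s3"
  config = {
    bucket  = "tf-state-${local.env}"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "us-west-2"
    profile = local.aws_profile
    encrypt = true
  }
}
```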
TG can be a bit to wrap your head around, especially since 0.19 introduced the .hcl files
yeah, it’s a bit confusing
even going from TF -> TG
Pre 0.19 it was much less confusing. Though now that I’ve got it all setup I much prefer the current method.
true, let me give this a go - I’ll message back on this thread tomorrow if I need any more assistance
thank you for all the help by the way!! much appreciated
No problem, I know what a nightmare it can be trying to migrate. Hopefully it does the thing and you can focus on more fun stuff
Feel free to throw a message at me if you still need help!
didn’t work
it’s complaining about multiple providers with the same name
Error: Duplicate provider configuration
on provider.tf line 1:
1: provider "aws" {
A provider configuration for "aws" with alias "digital" was already given at
main.tf:6,1-15. Each configuration for the same provider must have a distinct
alias.
even though it’s only defined once..
.
├── global
│ ├── main.tf
│ └── terragrunt.hcl
├── terragrunt.hcl
└── us-east-1
├── main.tf
├── terragrunt.hcl
└── variables.tf
main.tf inside global is
provider "aws" {
  region  = "us-west-2"
  profile = "profile1"
}

provider "aws" {
  region  = "us-west-2"
  profile = "profile2"
  alias   = "digital"
}

terraform {
  backend "s3" {}
}
Hmm that’s weird, that should work afaik.
One thing to look out for with TG is that the terragrunt get -update command doesn’t seem to work very well in 0.19, so when you make changes you often need to run a command to delete cached files:
alias tg-cache-list='find . -type d -name ".terragrunt-cache"'
alias tg-cache-del='find . -type d -name ".terragrunt-cache" -prune -exec rm -rf {} \;'
That’s from the tg docs somewhere. Catches me on occasion. There’s a chance something is cached
Oh, uh, on provider.tf line 1: in your error… do you have a provider.tf file? That would cause problems if it’s also got a non-aliased provider in it
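The diagnosis above points at the fix: each aliased configuration of a provider may be declared only once across all .tf files in the module. If both provider.tf and main.tf declare `aws` with alias "digital", delete one so a single file holds the full set, e.g.:

```hcl
# providers.tf: the only place "aws" configurations are declared
provider "aws" {
  region  = "us-west-2"
  profile = "profile1"
}

provider "aws" {
  alias   = "digital"
  region  = "us-west-2"
  profile = "profile2"
}
```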
sorry about that
2020-01-10
2020-01-11
2020-01-17
Hi guys, do you know if I can use a terragrunt command in before hooks?
before_hook "test" {
  commands = ["init"]
  execute  = ["terragrunt", "apply", "-target=null_resource.rule", "-auto-approve"]
}
Isn’t that going to cause recursion issues?
depends on when the state is locked… I think it would work. Does seem fragile though, but it could be an interesting approach to dealing with the “count” cannot be computed style of error
would at least need to disable auto-init in the hook to avoid recursion: --terragrunt-no-auto-init
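Putting the two suggestions together, the hook could look roughly like this. This is a sketch only (untested, and still fragile for the reasons above):

```hcl
before_hook "apply_rule_first" {
  commands = ["init"]
  execute  = [
    "terragrunt", "apply",
    "-target=null_resource.rule",
    "-auto-approve",
    "--terragrunt-no-auto-init",  # keep the nested run from re-triggering init
  ]
}
```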
Thanks guys, eventually I dropped the approach with the null resource and used an External Data Source instead
2020-01-22
@Chase Ward has joined the channel
2020-01-23
I’m having an issue where terragrunt ignores a provider’s assume_role/IAM role block when I assume a role and use the creds from that as env vars; has anyone dealt with this?
2020-01-24
@Brij S post what you have / what your problem is
I assume a role using aws-okta cli and use the env vars from that. I have a provider as follows
provider "aws" {
  region = "us-west-2"
  alias  = "dig"

  assume_role {
    role_arn = "arn:aws:iam::xxxxxxxxxx:role/tf"
  }
}
but this provider gets ignored and I get the following error in the cli:
Error: AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/acount-name-ReadWrite/user is not authorized to access this resource
At what point in the workflow is that happening? Are you using remote tfstate in s3? How is that configured?
Have had similar but need exact info.
2020-01-25
I have some extra var files defined and want to use some variables from these files in inputs, for example the region in a container task definition. Is this possible?
It is! What have you tried so far?
I found this thread https://github.com/gruntwork-io/terragrunt/issues/752
I've been reading documentation for terraform 0.12 in regards to handling env vars, and https://www.terraform.io/docs/configuration/variables.html states that: Some special rules apply to the -…
this is what i need
# stage/frontend-app/terragrunt.hcl
terraform {
  source = "..."
}

# Include all settings from the root terragrunt.hcl file
include {
  path = "${find_in_parent_folders()}"
}

inputs = {
  aws_region          = get_input("aws_region")
  remote_state_bucket = get_input("remote_state_bucket")
  instance_type       = "t2.micro"
  instance_count      = 10
}
aws_region is defined in tfvars which is included in optional_var_files
how can I achieve this?
aws_region = get_input("aws_region")
2020-01-26
get_input isn’t a valid function.
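Since get_input doesn’t exist, a common workaround is to skip inputs entirely and let terraform itself read the var files, via extra_arguments. A sketch, with a hypothetical file name:

```hcl
terraform {
  source = "..."

  extra_arguments "common_vars" {
    # only pass -var-file on commands that accept variables
    commands = get_terraform_commands_that_need_vars()

    # "ignore" is the fallback if no region.tfvars is found;
    # optional_var_files silently skips paths that don't exist
    optional_var_files = [
      find_in_parent_folders("region.tfvars", "ignore"),
    ]
  }
}
```

With this, a variable like aws_region declared in the module’s variables.tf is populated directly from the tfvars file, no inputs entry needed.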