#terraform (2023-10)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-10-01
Hi All, I was hoping to create some validation for some variables in a module. I have a list of strings that the incoming variable needs to be one of. What I would like is to have a single list and then use that list in the validation and the error message. Like so
locals {
components = ["django", "rabbitmq"]
}
variable "asset_component" {
type = string
validation {
condition = contains(local.components, var.asset_component)
error_message = "Invalid input, options: ${join(", ", local.components)}."
}
}
error_message = "Invalid value for var.asset_component does not satisfy allowable list of: X, Y, Z."
My understanding of validation (and the error msg provided) is that the validation error_message cannot reference outside sources such as locals, other vars, data, etc. The message has to be static text without logic. Mark’s idea could work if you are willing to use lifecycle to validate a resource configuration.
But when I do that I get the following error
│ Error: Invalid reference in variable validation
│
│ on ../modules/switchdin-tags/variables.tf line 44, in variable "asset_component":
│ 44: error_message = "Invalid input, options: ${join(", ", local.components)}."
│
│ The error message for variable "asset_component" can only refer to the variable itself, using var.asset_component.
Any tips on how to wrangle this better?
Consider using a lifecycle precondition on a resource block which uses var.asset_component and catching the issue there instead.
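A minimal sketch of that approach, using null_resource as a stand-in for whichever resource actually consumes the variable — unlike variable validation (at the time), precondition error messages can interpolate locals:
locals {
  components = ["django", "rabbitmq"]
}
resource "null_resource" "asset_component_check" {
  lifecycle {
    precondition {
      condition     = contains(local.components, var.asset_component)
      error_message = "Invalid input, options: ${join(", ", local.components)}."
    }
  }
}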
2023-10-02
2023-10-03
Reference: https://github.com/cloudposse/terraform-opsgenie-incident-management - does anyone have any guidance on how best to integrate this across multiple environments and subsequently link it up to the end services they want to alert from? Would you deploy the module three times, once for each of dev, uat and prod, or just deploy it once and handle the alert config within that module?
Terraform module to provision Opsgenie resources from YAML configurations using the Opsgenie provider, complete with automated tests
@Max Lobur (Cloud Posse)
deploying once globally, config handling everything
datadog monitors which we use as an alert source have different levels set for different envs
so in Opsgenie you can filter on alert level and decide where to route
hello everyone, quick question. What is the recommended way to upgrade a redis cluster using https://github.com/cloudposse/terraform-aws-elasticache-redis. Updating just engine version and family is not possible because of the following error:
╷
│ Error: deleting ElastiCache Parameter Group (poller): InvalidCacheParameterGroupState: One or more cache clusters are still members of this parameter group poller, so the group cannot be deleted.
│ status code: 400, request id: 00e2d0be-e52c-435d-9731-a1d8a382feb5
I suggest you do it with ClickOps and later reconcile the Terraform state
thank you @Alex Jurkiewicz this is what I did eventually
Hi All.. I am trying to create multiple lambda functions using https://github.com/cloudposse/terraform-aws-lambda-function in the same tf file. Any hints on how to achieve this?
A module for launching Lambda Functions
check out for_each for modules, you can then define a map in a variable or local variable with each lambda’s unique attributes (eg name) configured. Then pass that to the module
module "app_name" {
source = "registry/lambda-function"
for_each = local.lambda_functions
function_name = each.value.function_name
}
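For completeness, the map referenced above might look something like this (the keys and function names are illustrative):
locals {
  lambda_functions = {
    ingest = { function_name = "data-ingest" }
    notify = { function_name = "user-notify" }
  }
}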
before for_each in modules, people just copy/pasted additional module blocks, which I think is more readable, but less efficient since terraform interprets this as another module to download (in some cases)
When I try to add additional module blocks for each function… surprisingly the first one is getting destroyed
it’s probably because you provisioned a function without the for_each, then added it. I would recommend destroying that resource prior to adding for_each. If you can’t, you’ll have to remove those resources from the state file and then import them again after adding that lambda’s definition into the lambda functions variable
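If you do reach for state surgery, terraform state mv can usually rename the old un-keyed module instance straight into its new for_each address instead of a full remove/import cycle (a sketch, assuming the map key is "ingest"):
terraform state mv 'module.app_name' 'module.app_name["ingest"]'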
2023-10-04
open-source Terraform registry: https://github.com/terrariumcloud/terrarium-lite
I haven’t used terrarium before, but I’ve been on the search for a self-hosted module registry. Looking forward to the docs on how to get started
I’m looking at this one because it has SSO auth, some just have individual or simple auth, which I don’t expect to fly with security teams https://github.com/MatthewJohn/terrareg
Terraform module registry
cool, ALB is a temporary workaround for SSO
interesting, how? cognito?
yeah, ALB supports OIDC -> Cognito -> LDAP server (or AD server)
very cool
I was just checking out this article https://medium.com/@acgs771126/aws-alb-integrate-with-google-oauth-by-aws-cognito-db2dc745fc59
Introduction: This story covers integrating an ALB with AWS Cognito so that Google OAuth can be used to allow clients to access an application
v1.6.0 1.6.0 (October 4, 2023) UPGRADE NOTES:
On macOS, Terraform now requires macOS 10.15 Catalina or later; support for previous versions has been discontinued. On Windows, Terraform now requires at least Windows 10 or Windows Server 2016; support for previous versions has been discontinued. The S3 backend has a number of significant changes to its configuration format in this release, intended to match with recent changes in the hashicorp/aws provider:
Configuration settings related to assuming…
with the hashicorp open source fiasco, do all of the supporters / contributors plan to support ONLY opentofu?
I can only speak for Terrateam. We’ll be supporting MPL Terraform versions along with OpenTofu.
(and not terraform post version xyz)
All our GitHub pipelines are suddenly failing with this error across all accounts:
Error while initializing Terraform:
Error: Invalid KMS Key ARN
on backend.tf line 2, in terraform:
2: backend "s3" {}
Value must be a valid KMS Key ARN, got
"arn:aws:kms:eu-west-1:account_id:alias/aws/s3"
Nothing I’m aware of has changed; the role we use has the same policy as it has had for months, same for the S3 bucket and KMS key. If I check out the repo and run the same code locally, everything works as expected
are you pinning your tf version, or just using latest? because 1.6.0 was just released, and there were significant changes to the s3 backend implementation….
I don’t think they intended to break backwards compatibility, so I’d highly suggest opening a new issue….
Good idea
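For reference, pinning the CLI version so a release like 1.6.0 can’t sneak into CI looks roughly like this (the constraint value is illustrative):
terraform {
  required_version = "~> 1.5.0"
}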
Ok, so I’m trying to use the default_tags feature to automatically tag my AWS resources. But I get a perpetual diff on the tags_all attribute like
# aws_iam_instance_profile.sep2app will be updated in-place
~ resource "aws_iam_instance_profile" "sep2app" {
id = "AppInstance"
name = "AppInstance"
tags = {}
~ tags_all = {} -> (known after apply)
# (5 unchanged attributes hidden)
}
Mark, could you please try a recent provider version, tf apply once, and then run tf plan again
No matter how many times I apply this it is still a diff. I ran across this issue -> https://github.com/hashicorp/terraform-provider-aws/issues/18311 which seems to suggest that it’s a case of default_tags and tags interacting with each other in a not-so-great way. This particular resource doesn’t have tags attached to it. And I think I have the right provider versions
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.6.0"
}
}
Description
I have been looking forward to the default tagging support and tested it on a project yesterday which uses https://github.com/terraform-aws-modules/terraform-aws-vpc/ — this immediately showed some tags on aws_vpc and aws_subnet resources as having been changed, but only if another unrelated change was also present.
Terraform CLI and Terraform AWS Provider Version
Terraform v0.14.8
+ provider registry.terraform.io/hashicorp/aws v3.33.0
+ provider registry.terraform.io/hashicorp/dns v3.1.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
Affected Resource(s)
• aws_subnet • aws_vpc
Terraform Configuration Files
provider "aws" {
region = var.region
default_tags {
tags = local.tags
}
}
Debug Output
Expected Behavior
No changes would be displayed
Actual Behavior
If an unrelated resource triggers a diff, all of the subnet and VPC resources will show an update in-place diff showing the tags which are already present. Curiously, in my project it lists 5 tags which are present as having been changed but then displays a “1 unchanged element hidden”
# module.vpc.aws_subnet.public[0] will be updated in-place
~ resource "aws_subnet" "public" {
id = "subnet-07620b925b0c70066"
~ tags = {
+ "Environment" = "Development"
+ "Project" = "…"
+ "ResponsibleParty" = "…"
+ "Terraform" = "true"
+ "TerraformWorkspace" = "…"
# (1 unchanged element hidden)
}
# (10 unchanged attributes hidden)
}
Steps to Reproduce
terraform apply
References
• #7926
Any tips?
yeah, this has been a pain since provider tags were introduced in 3.x. If a tag is set to an empty string, this might cause a problem. If there are overlapping tags between the tags set in the provider and directly on the resource, that can be a problem. You can be strict on where tagging occurs and/or create a map of tags with conditional additions for the ones that aren’t empty strings. I’ve also seen people use null as an alternative to tags with empty strings
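That conditional-map idea might look roughly like this (the tag names are made up):
locals {
  raw_tags = {
    Environment = "dev"
    CostCenter  = "" # empty values are dropped below
  }
  # keep only tags with non-empty values
  tags = { for k, v in local.raw_tags : k => v if v != "" }
}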
So the actual problem seemed to be that some IAM stuff doesn’t accept the default_tags, because their Tags are not the resource tags that everybody else uses. So in the plan terraform says “I don’t have tags for this resource so I need to apply them” but then it can’t apply them, because that AWS functionality doesn’t exist. The way I worked around this was to create two providers, one with tags, one without, and use the non-tag version for certain resources.
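A sketch of that two-provider workaround (the no_tags alias name, region, and tag values are arbitrary):
provider "aws" {
  region = "eu-west-1"
  default_tags {
    tags = { ManagedBy = "terraform" }
  }
}
provider "aws" {
  alias  = "no_tags"
  region = "eu-west-1"
}
# point the resources that misbehave with default_tags at the untagged provider
resource "aws_iam_instance_profile" "sep2app" {
  provider = aws.no_tags
  name     = "AppInstance"
}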
damn, I get why you did two providers, but that’s a pain just for tags
Well the weird thing was that I wasn’t setting tags on the resources at all. I also tried to set the same tags on the IAM resources, and it just ended up with a diff on tags as well as tags_all.
It makes me think default_tags still doesn’t really work the way that it needs to, if it can’t handle IAM resources like IAM Roles and things.
yeah, it was supposedly fixed in AWS Provider 5.0
it’s a great idea to have them at the provider level, but then you have to rely on humans to remember/set tags
and getting it right/consistent is super important
Yeah totally. I can see the value, just annoying that it doesn’t work on IAM stuff (apparently).
so much so that I wrote a blog on it https://www.taccoform.com/posts/tfg_p2/
Overview What are tags? Why should I care about tags? How can I create tags? How do I query tags? Lesson What are tags? In AWS, tags are metadata that you can attach to AWS resources like EC2 instances, RDS databases, and S3 buckets. The tags are created as key/value pairs and can be queried by tools like awscli and Terraform. Why should I care about tags? When done right, tags can make automation easier and more efficient.
yeah, it’s always a game of figuring out which resources have tags and which don’t
2023-10-05
Hi there, I’m trying to add actions to a module with source = "cloudposse/s3-bucket/aws" via allowed_bucket_actions, but after changing this and applying the configuration, it doesn’t see any changes
@Max Lobur (Cloud Posse)
what version of s3-bucket module do you use? could you share the code?
@Michael Baldry
Hi
I don’t specify a version anywhere
it’s latest then, checking now
Do you have user_enabled set to true? Otherwise this module will not create an underlying user and thus the allowed_bucket_actions value will be ignored
Hi @Michael Baldry do you still face this issue or is it solved?
Sorry, I don’t think there was an issue. I think I was adding something to allowed_bucket_actions that was already part of the default value, so it was just showing as no change
When running TF in CI, what do people do with the TF output to be able to browse it later or alert on errors? It’s not formatted like logs. For the moment we rely on a wrapper to capture errors, but that’s a bit ugly
Do you mean the command terraform output? If so, you can use -json to make it more readable - https://opentofu.org/docs/cli/commands/output/#usage
The tofu output command is used to extract the value of an output variable from the state file.
I mean what TF prints to stdout when you run terraform apply
good hint nevertheless, thanks. I can see the JSON format resembles logs, has a level field and so on: https://developer.hashicorp.com/terraform/internals/machine-readable-ui#sample-json-output
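For example, the machine-readable apply output can be filtered on that level field (the jq expression is illustrative):
terraform apply -auto-approve -json | jq 'select(."@level" == "error")'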
Has anyone done a comparison of what you gain from the new terraform testing approach vs terratest?
it’s in HCL which is nice
I actually prefer Go, so it’s more the functionality or limitations I was curious about. I’d guess native testing would be limited but would get momentum and eventually be the best option, but that’s kinda why I’m asking
So we have a monorepo of terraform with a bunch of different folders containing different sets of terraform, each with its own provider and statefile. I understand that you can’t use variables in the provider or terraform configuration blocks, which makes centralizing the versions of all these different things difficult. How do folks do this?
Cloudposse maintains atmos, with similar capabilities
There’s also terramate
Take 20 minutes to learn the most important atmos concepts.
can atmos support crossplane please?
I just took a look at crossplane. It has a Tf provider https://github.com/upbound/provider-aws
Official AWS Provider for Crossplane by Upbound.
Atmos orchestrates and manages Terraform configurations (separates Terraform code from configs for diff environments)
so if you add terraform components to provision the Crossplane resources using the crossplane provider, then you can use Atmos to manage the configurations for those resources. It’s the same as using the aws provider to provision AWS resources in TF
i don’t see why not (although I just looked at it for a few minutes, so can’t say anything for sure)
we are using Atmos to manage configurations for components provisioned using diff TF providers like aws, helm, kubernetes, datadog, crowdstrike etc. Using the crossplane TF provider should be exactly the same (no changes to Atmos are required as I see it)
awesome
yeah, they got TF provider
I suggest not bothering to centralise the provider version. You don’t want upgrades to be all or nothing in future when v6 lands and it has a critical fix for one stack and bugs in other ones
I agree with @Alex Jurkiewicz - pin providers per root module (what we call components). Don’t try to DRY this up. Use something like dependabot or renovatebot to manage the updates.
Also, as others have alluded to, one of atmos’s many features is to manage this type of architecture in a monorepo, and we have a lot of conventions built up around it.
Centralisation is overrated.
That’s why version constraints are an intersection.
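In concrete terms: when several modules constrain the same provider, Terraform resolves a version satisfying all of them, e.g. (constraint values illustrative):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0, < 6.0" # root module constraint
    }
  }
}
# a child module declaring ">= 5.6" narrows the effective
# constraint to ">= 5.6, < 6.0" - the intersection of both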
2023-10-06
2023-10-07
2023-10-09
2023-10-10
v1.6.1 1.6.1 (October 10, 2023) ENHANCEMENTS:
backend/s3: The skip_requesting_account_id argument supports AWS API implementations that do not have the IAM, STS, or metadata API. (#34002)
BUG FIXES:
config: Using sensitive values as one or both of the results of a conditional expression will no longer crash.
As recent (1.6) changes to the s3 backend are breaking for anyone using an s3 api without the STS API (like minio), this PR introduces a skip_requesting_account_id param analogous to the aws provider…
2023-10-11
2023-10-12
does anyone ship terraform output to log analytics tools? I have TF running in CI that ships raw (not JSON) logs to a SaaS tool. The problem is these are not logs, so there’s no way to filter for errors (no severity information other than the color coding). There must be a better way, right?
What do you want this log shipping to do for you?
I want to be able to filter for and optionally alert on errors
I think you just need to set TF_LOG
Terraform operations really are pass/fail scenarios though, as in exit code zero/non zero.
Remember It’s a CLI tool not a service
set TF_LOG to what?
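TF_LOG takes a severity level rather than a destination; something like the following (the log path is illustrative):
export TF_LOG=ERROR         # TRACE, DEBUG, INFO, WARN or ERROR
export TF_LOG_PATH=./tf.log # write logs to a file instead of stderr
terraform plan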
2023-10-13
If I have a data source, how am I supposed to debug to verify exactly what that returned data looks like, considering console won’t let you look at it
if you create a new dir, add the data source, apply the no-change, then terraform console will allow you to see it
you can also output it to see it in the temp dir
to test, yes
it just tells me known after apply, but can’t apply because it’s not what I’m expecting
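A sketch of the output trick, using an assumed aws_ami data source (the filter values are illustrative):
data "aws_ami" "debug" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*"]
  }
}
# dump the whole object so apply prints every returned attribute
output "debug_ami" {
  value = data.aws_ami.debug
}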
2023-10-15
Hello .. This looks like a pretty simple PR to fix EFS for ECS .. Can we please get it merged as I’m blocked on it. Thanks! https://github.com/cloudposse/terraform-aws-ecs-web-app/pull/235/files
Hi @Ahmed Kamal Please see comments
Thanks .. I replied on GH. I guess I’ll wait a few days if the original PR author wants to handle it, otherwise I can refactor the variables of the other module
2023-10-16
https://medium.com/@xpf6677/writing-terraform-plan-polices-with-kcl-programming-language-ce94a6236798 Hi folks! I just published a blog on Medium about writing Terraform plan policies with the KCL programming language. Welcome to read it and provide feedback.
2023-10-17
hey, is there an option for configuring a pidMode in the terraform-aws-ecs-container-definition module?
The details of a task definition which describes the container and volume definitions of an Amazon Elastic Container Service task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task.
@Igor Rodionov
@Sebastian Mank you should check this module
https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
pidMode is a setting related to the task definition, not to the container definition.
https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/main/variables.tf#L528
the support was added 3 months ago in this PR https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/206
@Igor Rodionov thank you!
How do people manage secrets when using terraform and kubernetes? We hand off some code, customer deploys product in their environment, and we want to prevent them from tampering or seeing those secrets (in backend state or aws secrets manager). Any thoughts or suggestions would be much appreciated!
Vault has a k8s auth method that uses the service account JWT to authn/authz. This all assumes they have no access to the service account themselves. So you can’t let them use that service account for anything or they’ll have access to the secrets. There’s even an operator for easier operations.
If customer can deploy the code which can read the secret, they can modify the code to print secrets. It seems like you could put in place guard rails to discourage inappropriate use, but not prevent it outright
Something smells off here. This is just like running code in the client’s browser. You have to just assume it’s out of your control.
Sounds like you have long live shared secrets, not credentials which are used for the client to authenticate to whatever you need their system to do.
From further investigation these might end up just being short term secrets to RDS, Cognito, etc. Our code just deploys all of it for a customer, along with artifacts we share via ECR.
I’d just leverage iam trust as much as possible
Yeah we’re still going through what secrets and things are being shared within the application, let alone the infrastructure.
2023-10-18
How do people manage resources that should exist as a singleton, i.e. outside the usual dev/stag/prod structure? For example a route53 registered domain, or an IAM user. Would you have something like a general environment, or take a different approach?
Someone before me used gbl as a standard for resources that don’t fit into our environmental buckets. IAM and 3rd party configurations like GitHub are env:gbl customer:root, which gives us a gbl-root-iam stack to manage, for example.
We use tooling
I don’t like gbl for several reasons:
- It’s a bad abbreviation; it’s not apparent to the average person what it means.
- Global is really a geographical term, not an environment term, like a global load balancer vs a regional load balancer.
Also, environments are more like attributes, not buckets. And they’re not universal. As in, a dev database for a product team is still production to the data platform team.
My point being that don’t get too hung up on ‘dev/staging/prod’ or whatever.
If you can’t delete it without impacting someone, it’s production. Which would be most of the situations you described
If you dedicate a branch to a single environment or tenant, rebased against a common ancestor (e.g. “main”), that common ancestor can be that base layer.
v1.6.2 1.6.2 (October 18, 2023) BUG FIXES
terraform test: Fix performance issues when using provisioners within configs being tested. (#34026) terraform test: Only process and parse relevant variables for each run block. (#34072)
Currently, the validate resource node merges the base resource connection with any connection defined within the provisioner block. It then applies that merged result back into the original config….
This PR adds in the concept of “relevant variables” to the testing framework. Previously, each run block would process every available variable and attempt to parse even those they had no need for….
2023-10-19
2023-10-20
I’m curious to understand what others are doing regarding Terraform vs OpenTofu. Migrating a production env to an OpenTofu alpha release sounds scary. Should I keep my Terraform version fixed at v1.5.7 to be sure my state file and infra are compliant with OpenTofu when we at some stage in the future decide to make the change? Or is the general consensus here to keep using Terraform at the latest version and worry about the future another time?
I don’t need Terraform 1.6 features yet, so I’m holding at 1.5.x to wait and see
Okay, that’s exactly where I’m at the moment.
Honestly I just carry on as if OT doesn’t exist. Infrastructure is not the domain to mess around with experimental or new stuff.
What will likely happen is, in 3-5 years if OpenTofu becomes the de facto standard, then you can just start using it for new projects and migrate critical projects in the future. Stuff that doesn’t change much or is near abandoned can just stay on TF.
And if OpenTofu doesn’t take off, well you’ve just saved yourself a massive headache having to migrate everything to OT and then back to TF, since you can’t leave anything still used on abandonware
I’m more curious what’s driving you to use OpenTofu in the first place. Terraform will always be free for end users to use.
For me it was not just about being free for end users. The list of points is something like this, in no particular order: Strong preference for other TACOS, open source, accepting community contributions, avoiding licensing battles with the legal department over BUSL, not worrying about Hashicorp changing their mind about our business use/competition
What specifically about “open source” are you concerned with?
I’m not concerned about open source
Sorry, was a continuation of “Strong preference for … Open source”
That’s the piece I’m asking about, what’s driving the strong preference for open source, versus source available? The ability to do whatever you want (including building competing software) or the ability to know what’s in the code, or something else?
I understand a preference for TACOS other than Terraform Cloud/Enterprise.
A lot for me is that I feel compelled to contribute when I can, to compensate for using it for free and not having the option of paying. It’s an important part of the community aspect of open source
I say compelled, but I actually value that quite highly
I see, so you’re someone who actually contributes to the core Terraform language.
Me and my team, yes. Well, when/if contributions were accepted anyway. Not just Terraform core, but all of the open source projects that we use (and their dependencies). Wherever our use cases take us
To me that’s just part of the contract/responsibility being an open source user and builder
OK, as I understand it so far, the reasons folks are moving to OpenTofu are one or more of the following reasons:
- Want to use an alternative TACOS to Terraform Cloud/Enterprise
- Want the potential to contribute to the core OpenTofu engine
- Want to build a TACOS alternative using the OpenTofu engine.
I see, so you’re someone who actually contributes to the core Terraform language.
I think your framing here is disingenuous. You can contribute in many ways and still be affected by the relicensing. I contributed several patches to the AWS and other providers. I also provide support in this channel. I don’t really do either any more, because I want to contribute my efforts to open ecosystems.
More seeking clarification and what contributions folks are looking to make. I can see why folks would not want to contribute to Terraform Core for many various reasons. The providers are still open source and the vast majority of them are maintained by folks outside HashiCorp. The ecosystem is still open.
Also, I respect everyone’s preferences. I’m trying to understand what is making folks shift.
@Jake Lundberg (HashiCorp) The preferred capitalization is “OpenTofu”
@Jake Lundberg (HashiCorp) Thank you for fixing the capitalization.
I am trying to find the right language, but am I correct to assume any version prior to 1.6 is still “free” to use without license restrictions?
The BSL licensed Terraform is still free for the vast majority of folks. And the source is still available. Versions become open sourced after 4 years.
The FAQ has a bunch of useful information. https://www.hashicorp.com/license-faq#what-is-the-bsl
I usually alias terraform to t so this switch shouldn’t be too hard
A quick guide to installation and importing Terraform state files.
2023-10-21
2023-10-22
2023-10-23
any Spacelift users seeing an issue with tracking/head commits not syncing on their stacks?
I encountered this a few days ago and raised the question to another team member; he noted that he had inquired with Spacelift about it a few months ago but got no follow-up.
didn’t notice it during initial usage, but it’s impacted some runs lately
Check the sample history for your push policy, see what the event action was, or if the event was missing
If the former, you can fix that yourself. If the latter, speak to Spacelift support
Thanks everyone for your help!
2023-10-24
2023-10-25
v1.7.0-alpha20231025 1.7.0-alpha20231025 (October 25, 2023) UPGRADE NOTES:
Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:
Users…
1.7.0-alpha20231025 (October 25, 2023) UPGRADE NOTES:
Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlie…
Terraform Version Terraform v1.4.6 on darwin_amd64 Terraform Configuration Files https://github.com/evan-cleary/tf1_5-state-compatibility Debug Output https://gist.github.com/evan-cleary/d036479be1…
Are there any plans to add OpenTofu releases to this channel?
I think that would be a good idea. They should probably be kept separate.
2023-10-26
Im hitting a wall with “Provider configuration not present” errors…details in the thread…
I inherited a terraform stack of code that is severely out of date. I’m trying to upgrade it and then get it moved over to a different repo where the rest of our TF code lives. Currently this code is on TF 1.1.9 (this doesn’t super matter because I have the same issue when I upgrade to 1.3.10). When I run a plan I get the following error:
│ Error: Provider configuration not present
│
│ To work with module.cops_term.aws_route53_record.private_dns[0] (orphan) its original provider configuration at module.cops_term.provider["registry.terraform.io/hashicorp/aws"].dns_private is required, but it has been removed. This occurs when a provider configuration
│ is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy module.cops_term.aws_route53_record.private_dns[0] (orphan), after which you can remove the provider configuration again.
In my root module I have the following:
provider "aws" {
alias = "dns_public"
region = var.dns.region
assume_role {
role_arn = var.dns.public.role_arn
}
}
provider "aws" {
alias = "dns_private"
region = var.dns.region
assume_role {
role_arn = var.dns.private.role_arn
}
}
In the module that is called module.cops_term there is also this in the providers.tf…
# This is the same as what's in the root module...
provider "aws" {
alias = "dns_private"
region = var.dns.region
assume_role {
role_arn = var.dns.private.role_arn
}
}
So the error is a little misleading. The resource it’s talking about doesn’t need to be destroyed; that resource actually exists in the state file also. So the error says the provider needs to be added but it’s already in there. Obviously having it in both is not correct. I’ve tried having it in the root only and then in the module only, but I get the same error either way.
Any ideas about how to proceed?
I’ve seen this issue. IIRC, I solved it by removing the .terraform directory
That’s not going to fix this… the issue is in the state file.
What’s the output of terraform providers?
$ terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] 4.14.0
├── provider[registry.terraform.io/hashicorp/template]
├── module.cops_did
│ ├── provider[registry.terraform.io/hashicorp/aws]
│ └── provider[registry.terraform.io/hashicorp/template]
Providers required by state:
provider[registry.terraform.io/hashicorp/template]
provider[registry.terraform.io/hashicorp/aws]
Looks like the module cops_did was renamed to cops_term. You could try either to add a moved block (if your terraform is recent enough) or to revert the renaming of your module. HTH
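A sketch of the moved block for that rename (supported since Terraform 1.1):
moved {
  from = module.cops_did
  to   = module.cops_term
}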
Just published a blog post on Cloud Posse’s terraform-null-label that we all know and love: https://masterpoint.io/updates/terraform-null-label/
A post highlighting one of our favorite terraform modules: terraform-null-label. We dive into what it is, why it’s great, and some potential use cases in …
Thanks! Now I have a link to point to
I posted this issue the other day and your blog post fixes it
https://github.com/cloudposse/terraform-null-label/issues/146
Yeah, I was describing to a client how we do this and I thought it would be a great blog post to share going forward. Hopefully for lots of people!
Plus all the terraform projects that we run into never use a label module… which is just a damn shame. The more usage of this pattern in the ecosystem, the better.
Chatting with some clients, I get questions like “Why do we need the <context input> in every name” too
Stage: This indicates the stage of the application, such as dev, stage, prod. While this is what we would normally call an environment in most settings, for legacy reasons the naming within the module has made this a bit funky. It may help to think of the term “stage” by its general English usage, referring to a distinct phase or period within a series of events or a larger process.
This doesn’t seem right. The stage is the account name.
Currently, environment and stage are, I believe, planned to be renamed in the next context/null-label overhaul
Eh the usage of stage / environment is blurry as hell. People can use them for what they want and we probably should have stated that. You’re definitely right. But we’ve also used them to signify dev vs stage resources when a client wants a non-prod AWS account and those resources are co-mingled.
so this is what I’ve done to help with the overloaded terms for stage/env/account-name/region
I’ve stuck with stage == account-name and env == short (or fixed) region
but for actual environments, such as brownfield accounts that have multiple environments, I leverage the attributes.
e.g.
org-ue1-monolith-titan-dev
org-ue1-monolith-titan-prod
This way we can keep with the same null label intention (as the way cloudposse currently uses it) and still be able to codify the real environments (dev/prod/etc) in those older accounts
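For illustration, that scheme maps onto terraform-null-label roughly like this (values taken from the example above; the version pin is illustrative):
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "org"
  environment = "ue1"      # short region
  stage       = "monolith" # account name
  name        = "titan"
  attributes  = ["dev"]    # real environment in the brownfield account
}
# module.label.id => "org-ue1-monolith-titan-dev"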
Ah, that’s not a bad way to do it. We’ve done it differently, but I like that.
Hi all, We are running Terragrunt OSS and were planning to update both terraform and terragrunt to the latest version at some point. Back in August, Terragrunt released a statement saying that ‘commercial products’ would not be supported beyond 1.5.5: https://blog.gruntwork.io/the-impact-of-the-hashicorp-license-change-on-gruntwork-customers-5fcd096ba86a At that point I thought, fine, that does not affect us because we were not using ‘commercial products’. Now we are finally ready to update both Terragrunt and Terraform, but looking at the ‘supported versions’ page it looks like only OpenTofu is supported with Terraform 1.6.x: https://terragrunt.gruntwork.io/docs/getting-started/supported-versions/
This is confusing to me. Does it mean “Terraform 1.6.x is not supported yet”? Or does it mean “Terraform 1.6.x will not be supported, even for OSS Terragrunt”?
[Update, Aug 15, 2023] Please see our new blog post for more detailed thoughts and our plan for the future: The future of Terraform must be…
Learn which Terraform and OpenTofu versions are compatible with which versions of Terragrunt.
Terraform 1.6 works in some cases, but not officially yet. Still a work in progress… https://github.com/gruntwork-io/terragrunt/issues/2747
Describe the solution you’d like
Validate support for Terraform 1.6 in Terragrunt
Describe alternatives you’ve considered
N/A
Additional context
https://github.com/hashicorp/terraform/releases/tag/v1.6.0
https://www.hashicorp.com/blog/terraform-1-6-adds-a-test-framework-for-enhanced-code-validation
https://developer.hashicorp.com/terraform/language/upgrade-guides
Hi folks, the terragrunt August blog post says “As long as you use Terraform 1.5.5 or older, you can keep using all our commercial and open source products”. However the compatibility page now says TF 1.6.x is officially supported. As far as I know 1.6 is still on BSL. Any chance anyone here knows why/how?
Learn which Terraform and OpenTofu versions are compatible with which versions of Terragrunt.
Currently have a project with remote state. Doing some refactoring, going to have to be doing some state mv’ing. Is there a ‘best practice’ for testing state changes locally before pushing the refactor and doing the mv’s on the remote state?
Not really. Take a backup of the state file locally if you want to be safe. Worst case, you screw up the state file, your backup is non-existent, and you have to rebuild the state file from scratch with imports. It’s painful, but ultimately has no risk to your product availability
Figured as much. I was thinking to export the state from cloud, then just drop the remote backend config section and drop the state file locally. That way I can do all the mangling I want, remote won’t change, and I can plan to my heart’s content
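One low-risk pattern along those lines, using the standard state subcommands (the addresses are placeholders):
# snapshot the remote state first
terraform state pull > backup.tfstate
# preview each move without touching anything
terraform state mv -dry-run 'module.old_name' 'module.new_name'
# then run the real moves once they look right
terraform state mv 'module.old_name' 'module.new_name'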
Would a Terraform moved block (https://developer.hashicorp.com/terraform/language/modules/develop/refactoring) help here?
How to make backward-compatible changes to modules already in use.
Yeah, but to me that’s just kicking the can down the road and not really getting rid of the tech debt.
2023-10-27
2023-10-28
Created this so that users can compare the following TACOs head to head:
Atlantis, Digger, Spacelift, Env0, Scalr and Terraform Cloud.
All data was taken either from the repos, AWS Marketplace or websites of the above TACOs.
https://www.tacosheadtohead.com/
PS: if anyone wants to add their TACO, happy to add it, feel free to DM me the information along with publicly available links.
It doesn’t seem to work on Chrome mobile
Tapping compare does nothing
Can you check now? Shipped a fix
Personally, I would rather see a big table on page load and filter out as needed. Might also be better for SEO if it’s the default. Regardless, thanks for putting this together!
@venkata.mutyala thanks for the suggestion. Turned the website into an article here: https://medium.com/p/4dbe77f0883d
TACOs or Terraform/OpenTofu automation and collaboration tools, are tools that help you use IaC at scale.
I gave you all my claps. One small bit of feedback: the table view you had like this makes it a lot easier to consume (in a past life I used to do a bunch of BI dashboards). If you are able to edit/update the article to include a markdown table, I think it could go a long way for the reader. Also, you didn’t link to your site in that article; I’d recommend throwing it in there, especially if adding a matrix/table to the Medium post isn’t possible.
And thanks again for putting this together. We are actively looking at spacelift and chatting with their sales team. So this will definitely help me and my team with those discussions
Thanks Venkata!
Noted - will update.
Sorry I see the table at the bottom
Yes but it’s not in markdown
2023-10-30
hi All, I am trying to create a GCP service account using terraform. I could create it, but I wanted to download the private_key JSON file as well. I tried to create it using an output in terraform, but it seems like the format is different; it is not the same as the one we get when creating it in the web console. Can someone help me with the procedure to get the key in JSON format, the same as when creating it manually?
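A minimal sketch of one way to do this: the private_key attribute of google_service_account_key is base64-encoded JSON, so decoding it yields the same document the console download produces (resource names are illustrative):
resource "google_service_account" "sa" {
  account_id   = "example-sa"
  display_name = "Example service account"
}
resource "google_service_account_key" "key" {
  service_account_id = google_service_account.sa.name
}
# decode to get the familiar JSON key file; marked sensitive so it
# isn't printed in plan output
output "sa_key_json" {
  value     = base64decode(google_service_account_key.key.private_key)
  sensitive = true
}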