#terraform (2023-12)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-12-01
2023-12-06
v1.7.0-beta1 1.7.0-beta1 (December 6, 2023) UPGRADE NOTES:
Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:
Users of…
Hi all, I’m using the cloudposse/elasticache-redis/aws terraform module, and every time I update the security group rules with a new rule, I run into the error from terraform saying that there is a conflicting rule.
Upon investigation I noticed that this module is using cloudposse/security-group/aws version 1.0.1, and the latest version of that module is 2.2.0. Is there a particular reason this module hasn’t been bumped? I’ve tested it locally in a fork, and it appears to be working and resolved my issue
the redis module needs to be updated to use the latest version of the SG module. PRs are welcome, thanks
what
The security group module dependency has been upgraded to include a major fix to how it manages rules
why
Using the old version made changes to security group rules hard to deal with.
references
@Andriy Knysh (Cloud Posse)
@Andrew den Hertog thanks for all the changes, please address the last comment
I’m used to having a pre-commit hook run terraform format for me. I’ve done so and pushed the changes
@Andrew den Hertog the PR is approved and merged, thanks
Thanks
Does anyone have an example of doing a data lookup of subnet CIDRs in a VPC and then adding those CIDRs into a cidr_block in a security group? I’m having to refactor some very old code and I’m running into a roadblock here. Code in thread.
Here’s the code I inherited… I should also note that I already know there should be 5 CIDRs that show up here, one for each subnet we have.
data "aws_subnet" "cidr" {
for_each = toset(data.aws_subnets.public.ids)
id = each.value
}
data "aws_subnets" "public" {
filter {
name = "vpc-id"
values = [var.vpc_id]
}
filter {
name = "tag:Name"
values = ["${var.environment}-public-*"]
}
tags = {
Type = "Public"
}
}
resource "aws_security_group" "security_group" {
name = var.instance_role
description = "${var.instance_role} ${var.environment}"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
ingress {
description = ""
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [data.aws_subnet.cidr.*.cidr_block]. <-- NOT SURE WHAT I NEED TO PUT HERE
}
}
There is also this in the code but im not sure how it fits in with any of this really…
locals {
  subnet = {
    ids   = [for s in data.aws_subnet.cidr : s.id]
    cidrs = [for s in data.aws_subnet.cidr : s.cidr_block]
  }
}
Figured it out finally… this got me what I was looking for: `cidr_blocks = [for subnet in data.aws_subnet.cidr : subnet.cidr_block]`
@setheryops do you still need help here?
Nah… I figured it out with a hacky workaround. After the fact I found out that the devs didn’t need one of the subnets/CIDRs, so that made it easier. Thanks though.
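For reference, a minimal sketch of the working pattern, slotting that for expression back into the inherited resource above (same data sources and port as in the original code):

# inside the aws_security_group.security_group resource shown earlier in the thread
ingress {
  description = ""
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = [for subnet in data.aws_subnet.cidr : subnet.cidr_block]
}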
2023-12-07
Since I’m updating a security group, does this mean the actual instance will be replaced and get a new instance ID, OR does this mean that the instance will stay as is and terraform will just replace the SG that’s attached to it?
  # module.thing.aws_instance.instance[0] will be updated in-place
  ~ resource "aws_instance" "instance" {
        id   = "i-1234567890"
        tags = {
            "Environment" = "prod"
            "Name"        = "thing-api1"
            "Role"        = "thing-api"
            "Service"     = "thing"
            "Team"        = "thing"
        }
      ~ vpc_security_group_ids = [
          - "sg-0987654321",
            # (1 unchanged element hidden)
        ]
        # (34 unchanged attributes hidden)
        # (7 unchanged blocks hidden)
    }
“Update in-place” means the resource (the instance in this case) will simply be updated. It will not be replaced.
Also indicated by the ~ prefixing the resource label
That’s what I was thinking, but wanted to make double sure
A replacing update would be noted by -/+
yep
Hi, I’m looking for an example of how to implement AWS Backup with RDS in terraform. The docs are very confusing and mostly don’t give complete examples for RDS. Any help would be greatly appreciated!
Is this what you are looking for? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#backup_retention_period
You also have read replica settings further down on that page too.
That’s the fun part about AWS Backup. The capitalized version is a (little known) service in AWS, but many of the other services have their own redundant (lower-case) backup features. Makes it near impossible to search for!
I’m actually interested in figuring out how to implement AWS Backup in terraform for an RDS instance.
AWS Backup lets you centrally manage and automate backups across AWS services and third-party applications.
Ahhh
There’s this… not sure what you have looked at so far… https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/backup_framework
I’ve never used this service, so that’s all I got
Yeah, I’m trying to navigate all the options under the AWS Backup terraform pages, but there are many and I’m kind of a noob.
You will wind up making what they call a backup plan resource. AWS Backup drives many of those per-service backup features, but in a more global and uniform way, and goes above and beyond the service-specific features in many cases. The backup plan resource will typically match on a tag, e.g. tag resources with BackupPlan=my-cool-plan
Backup is a neat service for sure
A backup selection selects resources and assigns them to a backup plan.
examples of TF modules and components that use AWS backup:
https://github.com/cloudposse/terraform-aws-backup/tree/main
https://github.com/cloudposse/terraform-aws-backup/tree/main/examples/complete
https://github.com/cloudposse/terraform-aws-components/tree/main/modules/aws-backup
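A minimal sketch of the tag-based plan/selection pattern described above, using plain AWS provider resources rather than the Cloud Posse module (the names, schedule, retention, and the referenced IAM role are assumptions, not from the thread):

resource "aws_backup_vault" "this" {
  name = "rds-backup-vault"
}

resource "aws_backup_plan" "this" {
  name = "rds-daily"

  rule {
    rule_name         = "daily"
    target_vault_name = aws_backup_vault.this.name
    schedule          = "cron(0 5 * * ? *)"

    lifecycle {
      delete_after = 35
    }
  }
}

# Selects everything tagged BackupPlan=my-cool-plan (RDS instances included) and
# assigns it to the plan. aws_iam_role.backup is assumed to exist with the
# AWSBackupServiceRolePolicyForBackup managed policy attached.
resource "aws_backup_selection" "by_tag" {
  name         = "by-tag"
  plan_id      = aws_backup_plan.this.id
  iam_role_arn = aws_iam_role.backup.arn

  selection_tag {
    type  = "STRINGEQUALS"
    key   = "BackupPlan"
    value = "my-cool-plan"
  }
}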
Hello, I’m seeking your advice on the best tools to integrate Terraform with GitOps. Options I’m considering include Atlantis, env0, and HashiCorp Cloud (is there more to consider?). My primary criteria are ease of use, a comprehensive set of features, and reliability. Budget constraints are not a concern in this scenario. Which solution would you recommend based on these requirements?
spacelift, Scalr
DiggerHq
@jose.amengual @loren Do they offer something better than HashiCorp Cloud when we’re talking about the most suitable option based on those criteria?
Hashicorp cloud is VERY expensive compared to all the other services mentioned
I don’t mind the prices currently
if you use Cloudposse modules I would recommend going with Spacelift
I know you do not mind the prices, but the owner of the company does. Maybe you have money now, but what about later? What if you do not have the same budget down the road, you are asked to cut costs, and you have to work overtime to switch to another provider? That will be far more costly later
and do not get me wrong, I have stock in HashiCorp and I’m a maintainer of Atlantis
You’re absolutely right, but that’s something we will consider only after ranking the solutions
I didn’t come here to ask about pricing since it’s not the place to do it, so I want to know the most suitable solution for managing cloud infra, and then we will check costs later
We’ll also take the maintainability costs into consideration, but that’s another topic
they all get the job done. you’ll need to read the docs and think about your use cases and desired workflow to figure out which will be “best” for you…
Check out Terrateam https://terrateam.io/ – disclaimer: I’m the co-founder. But we fit your use case.
Forgot to mention that we are using Terragrunt to simplify the IaC, but it’s not mandatory if you know some equivalent/better solution
spacelift, env0, terrateam, etc. support terragrunt
@Josh Pollara thanks
@kfirfer Where are you located? While the alternate TACOS may be viable options for you, it’d be a good idea for one of HashiCorp’s technical teams to at least chat with you about your requirements and where we can help. Feel free to DM me and I can put you in touch with the proper folks. I’m also happy to have an initial conversation with you if you’d like (I’m a Field CTO for HashiCorp and have been here just short of 6 years).
2023-12-08
Hello, I am looking for a baseline (basically what you would build in a greenfield AWS project if you had the chance) that aligns well with the AWS Well-Architected Framework and exists in Terraform, something similar to GCP’s Foundational Toolkit. Articles or blog posts are welcome, but ideally I’d love to just look at some code.
Have you thought about triggering the Well-Architected Tool on milestones or a schedule instead, to track your implementation over time (and worry less about your starting point)? I like this idea of managing WAT resources from terraform-provider-aws: Support for Well Architected Tool resources #29755
Description
Hi all,
It would be great if the AWS provider could support the creation of Well Architected Tool resources like workloads, lenses, and potentially milestones. I’ve checked the collaboration guidelines and can see the service is supported and listed under names/names_data.csv, which is a good start.
To keep this feature request simple, for now I would like to request managing workloads via Terraform, since these are the first things you create whenever performing a Well-Architected Review using the tool. Custom lenses would be a nice follow-up.
Requested Resource(s) and/or Data Source(s)
• aws_wellarchitected_workload
Potential Terraform Configuration
resource "aws_wellarchitected_workload" "this" {
account_ids = [
"123456789012",
"123456789011",
]
architectural_design = var.architectural_design
aws_regions = [
"eu-west-1",
"eu-west-2",
]
description = var.description
environment = var.environment
industry = var.industry
industryType = var.industry_type
lenses = [
"wellarchitected",
"serverless"
]
pillar_priorities = [
"operationalExcellence",
"performance",
"security",
"reliability",
"costOptimization",
"sustainability",
]
review_owner = var.review_owner
workload_name = var.workload_name
}
data "aws_wellarchitected_workload" "this" {
workload_id = aws_wellarchitected_workload.this.id
}
References
AWS CLI documentation for create-workload: https://docs.aws.amazon.com/cli/latest/reference/wellarchitected/create-workload.html
AWS CLI documentation for get-workload, which I presume would be the same API used for the Terraform data source: https://docs.aws.amazon.com/cli/latest/reference/wellarchitected/get-workload.html
Would you like to implement a fix?
None
There’s also aws-samples/aws-startup-landing-zone-terraform-example, fwiw (looks quiet).
2023-12-10
Hello friends, can any of you help me deploy a website to S3 with CodePipeline using terraform?
Hello all, I am trying to create a service account in GCP using terraform, and I also want to generate a private key for that SA in JSON format. I have followed the terraform documentation and added the block below, but I am still unable to get the output in JSON format: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account_key#private_key_type
resource "google_service_account" "user1" {
account_id = "dev_google_service_account"
display_name = "dev_google_service_account"
}
resource "google_service_account_key" "sa_private_key" {
service_account_id = google_service_account.user1.name
private_key_type = "TYPE_GOOGLE_CREDENTIALS_FILE"
}
output "service_account_key" {
value = google_service_account_key.sa_private_key.private_key
sensitive = true
}
But I am unable to download the key in JSON format. Can someone help me with this?
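For reference, the private_key attribute of google_service_account_key is base64-encoded, so decoding it yields the JSON credentials file. A sketch of an extra output (the output name is an assumption, not from the thread):

output "service_account_key_json" {
  # base64-decoded contents of the JSON credentials file
  value     = base64decode(google_service_account_key.sa_private_key.private_key)
  sensitive = true
}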
2023-12-11
I am trying to build an automation where the user provides a list of principals, i.e. IAM roles, which have to be assigned to an IAM policy. I often run into an issue if one of the principals doesn’t exist in the account or there is a typo in the value. How could I ignore the errors and apply only the principals that are valid?
it’s probably not possible since the error is from AWS, not from terraform
unless you use a wildcard, but that’s less secure
You could perhaps use a data source to look up each principal and validate that it exists? It will at least error on plan that way, instead of on apply
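A minimal sketch of that suggestion, assuming the principals are passed in as role names (the variable and local names are illustrative, not from the thread):

variable "principal_role_names" {
  type = list(string)
}

# Fails at plan time if any of the named roles does not exist in the account
data "aws_iam_role" "principal" {
  for_each = toset(var.principal_role_names)
  name     = each.value
}

locals {
  # ARNs of the validated principals, ready to use in the policy document
  principal_arns = [for r in data.aws_iam_role.principal : r.arn]
}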
Has anyone been able to import an aws_organizations_account into an existing org? I’m getting Import successful but terraform state list does not show the index for the resource at all
I tried refresh, deleting the .terraform dir, moving the account out of the child org to the root before importing, disabling the SCPs, and updating to the latest provider version, and nothing
and I’m a root user on the Management account
still does not work
Unusual… Haven’t tried myself… suggest opening a bug report?
it is interesting, I see it in the state on another resource, but I believe that’s from a data lookup since it’s a list instead of the indexed map where it’s supposed to be
I just moved all these accounts under the root just to see if that would make a difference
and nothing
is it possible that the index needs to exist before importing?
terraform import -var-file=pepe-global-root-account.terraform.tfvars.json 'aws_organizations_account.organizational_units_accounts["NEWINDEX"]' 44444444444
if NEWINDEX does not exist it should create it anyways, no?
Yeah, if the index exists, it’s not really an import…
in my case it does not exist
do indexes have name length limits?
the import works even if I pass the wrong id
That definitely sounds wrong
I even switched to TF 1.6, nothing
this must be a bug
This is not cool, the last thing I needed today
Maybe try different AWS provider versions
I tried, 4 and 5 versions
interesting….
I created another resource, no for_each
I tried the import and it worked
Huh
aws_organizations_account.non-prod
aws_organizations_account.organizational_units_accounts["audit"]
aws_organizations_account.organizational_units_accounts["dns"]
aws_organizations_account.organizational_units_accounts["identity"]
aws_organizations_account.organizational_units_accounts["ipam"]
aws_organizations_account.organizational_units_accounts["log-archive"]
aws_organizations_account.organizational_units_accounts["sec-tooling"]
aws_organizations_account.organizational_units_accounts["transit"]
aws_organizations_account.prod
aws_organizations_account.shared
that is the state list after the import
then a state mv, and now I have it in the right place
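In other words, a sketch of the workaround (the plain address and account ID are hypothetical placeholders, not the exact commands from the thread):

# import into a plain (non-for_each) address first, then move it to the indexed one
terraform import 'aws_organizations_account.workaround' 111111111111
terraform state mv 'aws_organizations_account.workaround' 'aws_organizations_account.organizational_units_accounts["NEWINDEX"]'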
OMG what a pain
Very strange
that worked
@RB
Did the emails have to align with the convention or did they trigger a recreation due to the emails not aligning with the cp convention?
the emails did not match the new convention, which is similar to the cp convention
2023-12-12
2023-12-13
v1.7.0-beta2 1.7.0-beta2 (December 13, 2023) UPGRADE NOTES:
Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:
Users of…
v1.6.6 1.6.6 (December 13, 2023) BUG FIXES:
terraform test: Stop attempting to destroy run blocks that have no actual infrastructure to destroy. This fixes an issue where attempts to destroy “verification” run blocks that load only data sources would fail if the underlying infrastructure referenced by the run blocks had already been destroyed. (#34331)
2023-12-14
After more than 11 years, I’ve decided to move on from HashiCorp. HashiCorp achieved more than my wildest dreams and I’m proud of the role I played. While this has been long planned, its still an emotional day. Here is the letter I shared with employees: https://t.co/lOX6fvVwtk
Is he going to contribute to OpenTofu?
Hashimoto left HashiCorp
I just deployed Config and Security Hub using the CloudPosse components but then I saw this:
that is the new stuff they announced
anyone tried this and know if it is available in TF yet?
@Matt Calhoun @bradj
anyone know if this step needs SuperAdmin? https://github.com/cloudposse/terraform-aws-components/tree/main/modules/guardduty#deploy-organization-settings-in-delegated-administrator-account
and another one, anyone have seen this:
module.cloudtrail.aws_cloudtrail.default[0]: Creating...
╷
│ Error: creating CloudTrail Trail (pepe-global-audit): InvalidParameter: 2 validation error(s) found.
│ - minimum field size of 1, CreateTrailInput.TagsList[4].Value.
│ - minimum field size of 1, CreateTrailInput.TagsList[6].Value.
│
│
│ with module.cloudtrail.aws_cloudtrail.default[0],
│ on .terraform/modules/cloudtrail/main.tf line 1, in resource "aws_cloudtrail" "default":
│ 1: resource "aws_cloudtrail" "default" {
using the cloudtrail component; the cloudtrail-bucket is deployed and the cloudtrail component can find it
mmm I had null tags
2023-12-15
Hey all - we’re trying to build out the AWS SSO Application module & component, however we’re pretty limited in that custom SAML applications currently do not work with the initial release of ssoadmin_application (terraform-provider-aws#34813). If you have a chance, give it a
Hello team,
we are facing an issue while using the https://github.com/cloudposse/terraform-aws-dynamic-subnets module
the error we are getting is: with the retirement of EC2-Classic no new non-VPC EC2 EIPs can be created
maybe these links will help:
Recently I tried to deploy the same code that was running for months and ran into a weird error. So basically what the Terraform code does…
I have a snippet of a script that deployed EC2’s with a few EIP’s. See below: resource “aws_eip” “management” { count = length(var.palos) vpc = true network_interface = aws_network_interface.management[count.index].id tags = { Name = “${var.palos[count.index].hostname}-management” } } Create eth1 elastic IPs resource “aws_eip” “eth1” { count = length(var.palos) vpc = true network_interface = aws_network_interface.eth1[count.index].id tags = { Name = “${var.palos[count.index].hostn…
the https://github.com/cloudposse/terraform-aws-dynamic-subnets module definitely has vpc = true set on resource "aws_eip", so this is not an issue in the module (it gets deployed all the time in many infras).
maybe you still have some EC2-Classic resources:
https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/
Update (August 23, 2023) – The retirement announced in this blog post is now complete. There are no more EC2 instances running with EC2-Classic networking. Update (July 29, 2021) – Added link to the Support Automation Workflow document & clarified link to AWS MGN pricing. Also updated the list of recommended instance types to favor […]
This script enables the identification of resources running in Amazon EC2 Classic
2023-12-16
2023-12-17
2023-12-19
Hey folks, what is the best practice around handling public DNS zones in a multi-account setup? Currently my domain ownership is in my root/management account. I am considering adding another account. Ideally I want to set up public records directly in this account as well, but for the same zone. I’m curious about the various approaches to solving this.
you mean you want the same zone in two different accounts? i don’t think that works…
Hm
@Dan Miller (Cloud Posse)
the top-level zone in one account, and a sub-zone in another is pretty easy to setup. but i don’t think you can have the same top-level zone in two different accounts…
So in order to provision a record, I would need access to both the management account and child account. Right?
in order to provision a subzone, you need access to both the management and child accounts. once the subzone exists, no access is needed to the management account to create records.
basically you create the subzone in the child account, and the ns delegation records in the management account, and it’ll work
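A minimal sketch of that delegation pattern with plain Route53 resources (zone names, provider aliases, and credentials wiring are assumptions, not from the thread):

# Credentials for each account are assumed to be configured on these aliases
provider "aws" {
  alias = "child"
}

provider "aws" {
  alias = "management"
}

# Sub-zone lives in the child account
resource "aws_route53_zone" "dev" {
  provider = aws.child
  name     = "dev.example.com"
}

# Apex zone already exists in the management account
data "aws_route53_zone" "apex" {
  provider = aws.management
  name     = "example.com."
}

# NS delegation records for the sub-zone go into the apex zone
resource "aws_route53_record" "delegation" {
  provider = aws.management
  zone_id  = data.aws_route53_zone.apex.zone_id
  name     = "dev.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.dev.name_servers
}

After this, records in dev.example.com can be managed entirely from the child account.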
we’ve done it for years now with this module, https://github.com/plus3it/terraform-aws-tardigrade-route53-zone
Terraform module to create a Route53 zone
we do something similar. we have a primary zone in a dns account and then delegated zones in each child account. for example, example.com in a dns account and dev.example.com or prod.example.com in the child accounts
the apex zone can really be anywhere, but we choose to put it in an isolated account to manage access easily. Plus we usually purchase domains from AWS in that same account
seems like a good practice to have a single account for DNS.
Or a shared services account?
we have a single account for each type of service, but that’s up-to-you of course. for example 1 account for DNS, 1 account for network (TGW, VPN, etc), 1 account for automation (github runners, spacelift, etc). We have more than a dozen accounts because of this
Damn
Do you have a catch all mailbox for aws-* lol
yup! we use plus addressing. for example
with plus addressing you can have 1 actual email address, but all accounts connected to the same mailbox
Also note that a given zone and its records will have to be in one account. But you can separate the management of a zone and the records in that zone into different terraform configurations.
For example, one configuration and automation that has the ability to create zones, and another configuration and automation with lower privileges, managed by the service owners of that domain, to manage their records.
Regarding the github-actions-runner component: how come so many default IAM actions are granted to the self-hosted runner role? Doesn’t the role for the ARC runner only need access to assume other roles?
do you mean the eks/action-runner-controller component or the eks/github-actions-runner component? these are 2 different components. eks/arc is the previous implementation of the mumoshu chart, whereas eks/github-actions-runner is the brand new implementation using the GitHub-supported solution
The new implementation
that comment above is for the old implementation, and the new implementation doesn’t have a default policy added
oh! let me check out the new one
Has anyone managed GitHub Enterprise Cloud using Terraform? I’m not sure if I’m just misreading the provider docs or if this provider currently doesn’t support Enterprise cloud at all. If it doesn’t, what do you guys currently do instead of using Terraform?
For example, I’m trying to setup the Audit Log settings via Terraform, but can’t even find an audit log resource in the provider.
I didn’t set up Cloud, but I set up on-prem with Ansible before. After a quick look, did you set up `base_url`?
I was going to use base_url with the one provided in Enterprise Cloud, but stopped when I couldn’t find a terraform resource for the audit logs.
base_url seems like it’s required for on-prem, so it makes sense that I would have to use it for cloud enterprise as well.
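For reference, a minimal provider block with base_url might look like this (the owner, token variable, and endpoint URL are placeholder assumptions, not from the thread):

provider "github" {
  owner    = "my-org"          # hypothetical organization
  token    = var.github_token  # assumed variable holding a PAT or app token
  base_url = "https://github.example.com/" # only needed for non-github.com endpoints
}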
does on-prem have audit logs as well?
it should have, a basic requirement
I believe the official provider has a lot of issues. I know Mineiros forked it, but it looks unmaintained. https://github.com/mineiros-io/terraform-github-repository
Thanks @Erik Osterman (Cloud Posse) that is still interesting to look at regardless.
2023-12-20
v1.7.0-rc1 1.7.0-rc1 (December 20, 2023) UPGRADE NOTES:
Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:
Users of…
2023-12-21
Hello, I am using the ALB module for my current project but am unable to put my use case together. Can anyone suggest an approach?
My use case is: I have 5 API ECS services and 5 frontend ECS services, and I want to have 2 ALBs, 1 for the API services and 1 for the frontend services. These ALBs will have multiple listeners and target groups. How can I use the Cloud Posse modules to achieve this? I want it to be dynamic, so if I need to add a new target group for a specific API, terraform should add a listener with that new port to the specific ALB accordingly.
seems like you need a k8s ingress controller
ECS cannot do this dynamically
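Outside the Cloud Posse modules, a plain-Terraform sketch of the “map in, listeners out” pattern the question describes might look like this (the variable shape, var.vpc_id, and the aws_lb.api resource are assumptions, not from the thread):

variable "api_services" {
  # e.g. { orders = { port = 8080 }, users = { port = 8081 } }
  type = map(object({ port = number }))
}

# One target group per API service
resource "aws_lb_target_group" "api" {
  for_each    = var.api_services
  name        = "api-${each.key}"
  port        = each.value.port
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"
}

# One listener per API service on the API ALB; adding a map entry adds both
resource "aws_lb_listener" "api" {
  for_each          = var.api_services
  load_balancer_arn = aws_lb.api.arn
  port              = each.value.port
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api[each.key].arn
  }
}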
2023-12-22
2023-12-27
Hello
Has anyone used the mongodbatlas provider and managed to work with the SecretsManager authentication without providing a role to assume? I’m also using the AWS provider in the same execution and my role has permissions to SecretsManager
@Dan Miller (Cloud Posse)
sorry, no, I haven’t used that provider myself
If you’re talking about this https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs that documentation looks rather dubious
I’d think twice about following any instructions from that documentation