#terraform (2024-09)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2024-09-02
Hi, I have a question about the cloudposse/backup/aws module.
I would like to create a separate plan under the same vault, and that plan would use different rules. Is that possible? Or is it good practice to have S3 and RDS backups under the same vault but with different sets of rules and plans? Thank you
Hey everyone, has anyone found any good no-code alternatives to terraform? Could be as simple as just supporting resource version control.
hey @Abhigya Wangoo it’s not no-code; it’s visual, you can import stuff, and write code to edit resources if needed. but give it a try and let me know what you think
if you’re looking for something pure no-code try Brainboard
A Hybrid Intelligence that helps you create production-ready infrastructure, using knowledge contributed by you and other DevOps experts all over the world. A Hive Mind for DevOps and cloud-native infrastructure design.
Brainboard is an AI driven platform to visually design, generate terraform code and manage cloud infrastructure, collaboratively.
2024-09-03
hi folks, i'm working with the cloudposse/efs/aws module. we did some console tweaks that we're trying to reflect in the TF but i'm struggling.
tf plan shows three diffs like this, because the TF doesn't have the security group that was added in the console.
~ resource "aws_efs_mount_target" "default" {
id = "fsmt-092000dbd046024f2"
~ security_groups = [
- "sg-0473cc37c73272716",
# (1 unchanged element hidden)
]
# (10 unchanged attributes hidden)
}
i thought that adding this might help but it doesn’t seem to have any effect:
allowed_security_group_ids = [
  "sg-0473cc37c73272716"
]
i used the wrong argument. never mind.
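For what it's worth, a rough sketch of the distinction in recent versions of the module (input names assumed from Cloud Posse's security-group convention, so verify them against the module version in use): associated_security_group_ids attaches existing security groups to the mount targets, while allowed_security_group_ids only creates ingress rules allowing them.
module "efs" {
  source = "cloudposse/efs/aws"
  # ...existing inputs unchanged...

  # Attach the console-created SG to the mount targets (input name assumed)
  associated_security_group_ids = [
    "sg-0473cc37c73272716",
  ]
}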
same module, different question. what’s the right way to incorporate this console rule addition?
# module.efs.module.security_group.aws_security_group.default[0] has changed
  ~ resource "aws_security_group" "default" {
        id      = "sg-06fa9a39d9a9c6f77"
      ~ ingress = [
          + {
              + cidr_blocks = [
                  + "10.103.21.119/32",
                ]
              + description      = "Allow jenkins-data-sync to access EFS"
              + from_port        = 2049
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 2049
            },
            # (1 unchanged element hidden)
        ]
        name = "terraform-20240722194649647100000001"
        tags = {}
        # (8 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
adding this:
additional_security_group_rules = [
  ...
  {
    type      = "ingress"
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    cidr_blocks = [
      "10.103.21.119/32",
    ]
  }
]
yields this:
# module.efs.module.security_group.aws_security_group_rule.dbc["_list_[1]"] will be created
  + resource "aws_security_group_rule" "dbc" {
      + cidr_blocks = [
          + "10.103.21.119/32",
        ]
      + description              = "Allow jenkins-data-sync to access EFS"
      + from_port                = 2049
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = "sg-06fa9a39d9a9c6f77"
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 2049
      + type                     = "ingress"
    }
which i’m not sure is expected or not.
never mind this one, too.
2024-09-04
2024-09-05
Hi, I'm having an error with the cloudposse/ec2-client-vpn/aws module. When I try to use it, I get an error from the awsutils dependency:
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/cloudposse/awsutils" requires explicit
│ configuration. Add a provider block to the root module and configure the
│ provider's required arguments as described in the provider documentation.
This is not a dependency in the module and the examples have no reference to this provider. I have configured the provider and it still fails with the same error. Any hints?
@Juan Pablo Lorier did you add this to the versions.tf file
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
    awsutils = {
      source  = "cloudposse/awsutils"
      version = ">= 0.15.0"
    }
  }
}
and this to the providers.tf file
provider "aws" {
  region = var.region

  assume_role {
    role_arn = ...
  }
}

provider "awsutils" {
  region = var.region

  assume_role {
    role_arn = ...
  }
}
the provider configuration is empty as I use OIDC to authenticate
in this case, you just need to add (assuming the role is not related to how to configure the provider itself)
provider "awsutils" {
region = var.region
}
Thanks! I might open an issue to add that to the documentation. (both to the provider and to the vpn client as a dependency)
you need to do this only if you are using some old version of the awsutils provider
the latest version does not need the region config
I'm using the latest version of the module. But it's not the module you pointed to.
https://registry.terraform.io/modules/cloudposse/ec2-client-vpn/aws/latest
let me know if you are using the latest version and it still requires the region config
i pointed to the component that wraps the module
and yes, the module https://github.com/clouddrove/terraform-aws-client-vpn does not have the provider config (this needs to be fixed)
in the component (root-module), the provider config is added
awsutils = {
  source  = "cloudposse/awsutils"
  version = ">= 0.15.0"
}
that’s why the component works
I'm using the latest version, 1.0.0, and I'm facing the issue there
in your root-module (that uses https://github.com/clouddrove/terraform-aws-client-vpn) try to add
awsutils = {
  source  = "cloudposse/awsutils"
  version = ">= 0.15.0"
}
about the clouddrove link, I pasted it by mistake, I was looking for alternatives to cloudposse as I was not able to get it to work
source  = "cloudposse/ec2-client-vpn/aws"
version = "1.0.0"
we provision the component https://github.com/cloudposse/terraform-aws-components/tree/main/modules/ec2-client-vpn all the time, it works
in your module, just add the awsutils config to versions.tf
it failed with only versions.tf. It worked when I added the provider config
awsutils = {
  source  = "cloudposse/awsutils"
  version = ">= 0.19.1"
}
nice
(we have the config in versions.tf: https://github.com/cloudposse/terraform-aws-components/blob/main/modules/ec2-client-vpn/versions.tf)
I see, but somehow, terraform was complaining. Maybe it’s related to using OIDC in terraform cloud?
should not be related to OIDC, but I’m glad you made it work
@Andriy Knysh (Cloud Posse) Sorry to bug you with this non terraform question. The module creates the certs and stores them in parameter store. But it creates only CA, root and server certs. Do I need to create the client certs manually?
2024-09-07
I'm on a team looking for help with a YAML pipeline in Azure DevOps to deploy a Next.js application to the Azure Static Web Apps service.
2024-09-11
v1.10.0-alpha20240911 (September 11, 2024) NEW FEATURES:
Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.
terraform output -json now displays ephemeral outputs. The value of an ephemeral output is always null unless a plan or apply is being run. Note that terraform output (without the -json) flag does not yet display ephemeral…
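A minimal sketch of the new syntax described above (assuming the 1.10 alpha semantics; the variable and output names here are made up):
variable "db_password" {
  type      = string
  ephemeral = true # not persisted to the plan or state files
}

output "db_password" {
  value     = var.db_password
  ephemeral = true # shows as null in `terraform output` outside of a plan/apply
}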
Is there a script that converts json into the formatting expected here? https://github.com/cloudposse/terraform-aws-iam-policy/blob/main/examples/complete/fixtures.us-east-2.tfvars
I tried this online one but it's different from what the module expects https://flosell.github.io/iam-policy-json-to-terraform/
This tool converts standard IAM policies in JSON format (like what you’d find in the AWS docs) into more terraform native aws_iam_policy_document data source code
I don't think jsondecode() will give you exactly what you want, so within terraform this isn't easy to solve. You can open a PR for a new variable accepting JSON input and I will review it!
As you see here the module eventually converts inputs to json, so if you’d rather just provide the json yourself that’s a good feature-add
i was looking for something like this but with the syntax that the module expects. it's slightly different from what terraform's resource expects by default. i was able to use that converter with a couple of string replaces https://iampolicyconverter.com/
Effortlessly convert AWS IAM Policy JSON to Terraform AWS policy documents with our simple and effective tool. Simplify your infrastructure management process today.
Hello everyone. Has anyone had to rotate an AWS access key in the cloudposse/terraform-aws-iam-system-user module? I ran into a problem when trying to rotate a key. To do this, I tried the following:
• Manually create a new key
• Manually update the new key and new key_secret in SSM Parameter Store
• Delete the old key from the state
terraform state rm 'module.system_user.aws_iam_access_key.default[0]'
• Import the new key
terraform import 'module.system_user.aws_iam_access_key.default[0]' <new_key_id>
All of the above was successfully completed
But then when I try to do plan or apply I get this error
│ Error: Invalid combination of arguments
│
│ with module.system_user.module.store_write[0].aws_ssm_parameter.default["/<ssm-path>/secret_access_key"],
│ on .terraform/modules/system_user.store_write/main.tf line 13, in resource "aws_ssm_parameter" "default":
│ 13: resource "aws_ssm_parameter" "default" {
│
│ "insecure_value": one of `insecure_value,value` must be specified
╵
╷
│ Error: Invalid combination of arguments
│
│ with module.system_user.module.store_write[0].aws_ssm_parameter.default["/<ssm-path>/secret_access_key"],
│ on .terraform/modules/system_user.store_write/main.tf line 21, in resource "aws_ssm_parameter" "default":
│ 21: value = each.value.value
│
│ "value": one of `insecure_value,value` must be specified
And I don't understand how to fix it. As far as I can see, there are still records about the old access key in the state file. How can I update them correctly?
The user is created with the following parameters:
module "system_user" {
source = "git::<https://github.com/cloudposse/terraform-aws-iam-system-user.git?ref=tags/1.2.0>"
context = module.label.context
ssm_base_path = "/${local.ssm_params_prefix}"
}
PS: it's not possible to remove the old key and then create a new one, because the key is used by a running application that can't take downtime
I would be grateful for any help
@Jeremy G (Cloud Posse)
The aws_iam_access_key docs say that secret and ses_smtp_password_v4 are not available for imported resources.
I think the procedure for rotating the key would simply be:
• terraform state rm 'module.system_user.aws_iam_access_key.default[0]' to remove the current key from Terraform control
• terraform apply to create the new key
• Later, delete the old key manually via aws
2024-09-12
Hi Team, my company is deploying infra-as-code pipelines into AWS using GitLab. We are reading lots of platform engineering blogs, and there are lots of different choices to make. What is the guidance on the latest and greatest to support multiple accounts? We are currently thinking of self-hosted runners, using OIDC to authenticate to AWS accounts, with a simple .gitlab-ci.yml to run terraform plan and apply once the MR is approved. Any big issues here? We are also considering Atlantis (but are unsure about a public webhook into our build account), and have been pointed to Atmos as well. Any tips here would be great!
Hey Mike!
I used a similar setup in a previous role, and it worked well enough, but it depends on how much you plan to scale your infrastructure repository. As our monorepo grew, we ran into limitations with the pipelines, and GitLab’s child pipelines could only take us so far. Our biggest regret was relying on GitLab’s managed state, which became a bottleneck when we moved to a polyrepo pattern for our microservices. This also led to dependency hell as we tried to maintain consistency across all repositories.
I’ve always been a fan of how Atmos offers straightforward design patterns and inheritance, making monorepo management much easier. However, not much work has been done with Atmos in the GitLab ecosystem, so documentation might be a bit scarce. But in the long run, I believe it will make scaling much smoother.
Just my two cents. Feel free to DM me if you want to chat more!
How can I override the name variable from the concatenation of provided variables to a specific string I choose?
For example, the module concatenates osprey-lb-policy-aws-load-balancer-controller@all
I want to just call it "MyPolicy".
https://github.com/cloudposse/terraform-aws-eks-iam-role/tree/main
In that module, check line 191 in context.tf. The label_order variable lets you define what gets included in the id attribute
It uses a different null label for it.
You could technically override the order to remove the attributes; however, you'd be impacting both the policy name and the IAM role name
I'm trying to keep the tagging that context.tf provides but override the name entirely. I want to provide a totally custom string irrelevant to the label order.
Understood. This is why you’d have to contribute a change to the specific null label module to get the intended effect.
See these code blocks
module "service_account_label" {
source = "cloudposse/label/null"
version = "0.25.0"
# To remain consistent with our other modules, the service account name goes after
# user-supplied attributes, not before.
attributes = [local.service_account_id]
# The standard module does not allow @ but we want it
regex_replace_chars = "/[^-a-zA-Z0-9@_]/"
id_length_limit = 64
context = module.this.context
}
resource "aws_iam_policy" "service_account" {
count = local.iam_policy_enabled ? 1 : 0
name = module.service_account_label.id
You could expose a new input for the policy name, perhaps.
Or you can avoid passing in the aws_iam_policy_document to disable the creation of the policy. Then you can create the policy and attach it from outside of the module, roughly as sketched below.
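A rough sketch of that approach (the module's output name and exact inputs are assumptions here; check the module docs for the real ones):
# Let the module create only the role (no aws_iam_policy_document passed in),
# then create and attach a policy with a fully custom name.
module "eks_iam_role" {
  source = "cloudposse/eks-iam-role/aws"
  # ...usual context/service account inputs, but no aws_iam_policy_document...
}

data "aws_iam_policy_document" "custom" {
  statement {
    actions   = ["s3:ListBucket"] # placeholder statement
    resources = ["*"]
  }
}

resource "aws_iam_policy" "custom" {
  name   = "MyPolicy"
  policy = data.aws_iam_policy_document.custom.json
}

resource "aws_iam_role_policy_attachment" "custom" {
  role       = module.eks_iam_role.service_account_role_name # output name assumed
  policy_arn = aws_iam_policy.custom.arn
}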
2024-09-13
2024-09-17
https://medium.com/thousandeyes-engineering/scaling-terraform-at-thousandeyes-b2a581b8b0b0 — I don’t think this has been discussed here yet (I checked the archives)… this reads like an opinionated implementation of the terraform preprocessor pattern. Interested in comments/discussion/comparison vs other solutions.
by Ricard Bejarano, Lead Site Reliability Engineer, Infrastructure at Cisco ThousandEyes
A few other designs if you’re interested in the topic
• https://github.com/gruntwork-io/terragrunt/pull/2403
• https://blog.fensak.io/terraform-compiler-pattern-79c7629e317e
In this post, I will introduce a Terraform design pattern that I have been using successfully. This pattern has certain advantages …
It would be great to hear about experiences from someone who tried for example Terramate and Cisco Stacks
2024-09-18
v1.9.6 (September 18, 2024) BUG FIXES:
plan renderer: Render complete changes within unknown nested blocks. (#35644) plan renderer: Fix crash when attempting to render unknown nested blocks that contain attributes forcing resource replacement. (#35644)
This PR updates the rendering logic so that it properly renders the contents of a nested block that is becoming unknown. Previously, we'd only render a single item for the whole nested block. N…
hey there. have a question around eks managed node groups and launch templates / bootstrapping (user data)
• is it better to use a custom launch template or the eks default one?
• how do i omit the second block device mapping when using bottlerocket, and use instance store volumes instead? (local NVME SSDs) bottlerocket just released support for local NVMEs, and i’d like to avoid that second EBS for the data vol. https://github.com/bottlerocket-os/bottlerocket/releases/tag/v1.22.0
i think i might just need to use virtual_name or no_device for the second block mapping? apparently NVME instances are auto configured… but im not too sure
@Andriy Knysh (Cloud Posse)
v1.10.0-alpha20240918 (September 18, 2024) NEW FEATURES:
Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.
terraform output -json now displays ephemeral outputs. The value of an ephemeral output is always null unless a plan or apply is being run. Note that terraform output (without the -json) flag does not yet display ephemeral…
Ooh, “ephemeral values” sounds like a potential shift in paradigm with regards to how secrets handling can be done via Terraform without exposure in the state file.
Yea, pretty awesome!
2024-09-19
I have made a list of api_passwords that I loop over, creating PostgreSQL roles for each of our APIs. The problem is that when I deploy, I get write conflicts in the database. Does anyone know how I can keep my loop but have the resources created sequentially instead of in parallel?
# Create unique PostgreSQL role for each API
resource "postgresql_role" "api_read_write_roles" {
  for_each = toset(var.api_list)

  name               = "${each.key}_read_write_role"
  password           = local.api_passwords[each.key]
  encrypted_password = true
  login              = true
  create_database    = false
  superuser          = false

  depends_on = [data.terraform_remote_state.rds_postgresql]
}

# Grant database-level privileges (CONNECT) for each API
resource "postgresql_grant" "database_grants" {
  for_each = toset(var.api_list)

  database    = "postgres"
  role        = postgresql_role.api_read_write_roles[each.key].name
  object_type = "database"
  privileges  = ["CONNECT"]

  depends_on = [postgresql_role.api_read_write_roles]
}

# Grant schema-level privileges (USAGE) for each API
resource "postgresql_grant" "schema_grants" {
  for_each = toset(var.api_list)

  database    = "postgres"
  schema      = "my_schema"
  role        = postgresql_role.api_read_write_roles[each.key].name
  object_type = "schema"
  privileges  = ["USAGE"]

  depends_on = [postgresql_grant.database_grants]
}

# Grant table-level privileges (SELECT, INSERT, UPDATE) for each API
resource "postgresql_grant" "table_grants" {
  for_each = toset(var.api_list)

  database    = "postgres"
  schema      = "my_schema"
  role        = postgresql_role.api_read_write_roles[each.key].name
  object_type = "table"
  privileges  = ["SELECT", "INSERT", "UPDATE"]

  depends_on = [postgresql_grant.schema_grants]
}
I get the following type of error for the schema and table grants for a random subset of APIs; rerunning the deployment a few times eventually deploys all the resources, but it would be great to have a smoother pipeline.
│ Error: could not execute revoke query: pq: tuple concurrently updated
│
│ with postgresql_grant.table_grants["ingestion_api"],
│ on postgres_roles.tf line 35, in resource "postgresql_grant" "table_grants":
│ 35: resource "postgresql_grant" "table_grants" {
@Igor Rodionov @Ben Smith (Cloud Posse) @Jeremy White (Cloud Posse)
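One thing worth trying (just a sketch: the cyrilgdn/postgresql provider exposes a max_connections setting that caps concurrent connections, and terraform apply -parallelism=1 serializes the whole run; verify both against the provider and Terraform versions in use):
provider "postgresql" {
  host     = var.db_host
  username = var.db_username
  password = var.db_password

  # Limit the provider to a single open connection so the grant statements run
  # one at a time, avoiding "tuple concurrently updated" (setting name assumed).
  max_connections = 1
}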