#terraform (2023-05)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2023-05-01

2023-05-02

Afolabi Omotoso avatar
Afolabi Omotoso

Hi, I am looking at how to dynamically create resources in multiple regions, but as far as I can see it is not supported yet by Terraform. Has anyone tried any workaround? I have over 3000 resources to create across multiple regions.

#16967 Dynamically-generated provider configurations based on a collection value

Terraform Version

v0.11.1

Terraform Configuration Files

variable "regions" {
  default = [
    "us-east-1",
    "us-east-2",
    "us-west-1",
    "us-west-2",
    "ca-central-1",
    "eu-central-1",
    "eu-west-1",
    "eu-west-2",
    "eu-west-3",
    "ap-northeast-1",
    "ap-northeast-2",
    "ap-southeast-1",
    "ap-southeast-2",
    "ap-south-1",
    "sa-east-1"
  ]
}

provider "aws" {
  count = "${length(var.regions)}"
  alias = "${element(var.regions, count.index)}"
  region = "${element(var.regions, count.index)}"
  profile = "defualt"
}

resource "aws_security_group" "http-https" {
  count = "${length(var.regions)}"
  provider = "aws.${element(var.regions, count.index)}"
  name = "http-https"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Expected Behavior

Creating a security group in each AWS region.

Actual Behavior

Planning/Applying fails with

Error: Error asking for user input: 1 error(s) occurred:

* aws_security_group.http-https: configuration for aws.${element(var.regions, count.index)} is not present; a provider configuration block is required for all operations

Steps to Reproduce

  1. terraform init
  2. terraform apply
loren avatar

best option i can think of would be something like cdktf to generate the .tf configs for you

Alex Jurkiewicz avatar
Alex Jurkiewicz

are you creating the same set of resources in many regions? Or put another way, do you have one set of resources which are created repeatedly with only minor differences?

The best approach would be to split up your resources into many stacks. 3000 resources is too many for a single stack, and you will suffer a lot of operational pain keeping them together.

Afolabi Omotoso avatar
Afolabi Omotoso

Apologies, I meant 300. Yes, I am creating the same resources in many regions. A typical example of what I am trying to do is shown below. I know that we cannot use count or for_each in providers; is there any way I can work around this?

Afolabi Omotoso avatar
Afolabi Omotoso
variable "regions" {
  default = [
    "us-east-1",
    "us-east-2",
    "us-west-1",
    "us-west-2",
    "ca-central-1",
    "eu-central-1",
    "eu-west-1",
    "eu-west-2",
    "eu-west-3",
    "ap-northeast-1",
    "ap-northeast-2",
    "ap-southeast-1",
    "ap-southeast-2",
    "ap-south-1",
    "sa-east-1"
  ]
}

provider "aws" {
  count = "${length(var.regions)}"
  alias = "${element(var.regions, count.index)}"
  region = "${element(var.regions, count.index)}"
  profile = "defualt"
}

resource "aws_security_group" "http-https" {
  count = "${length(var.regions)}"
  provider = "aws.${element(var.regions, count.index)}"
  name = "http-https"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
loren avatar

here is a module that is set up for multi-region. give it a good study to see how to do multi-region in pure terraform… https://github.com/nozaq/terraform-aws-secure-baseline

nozaq/terraform-aws-secure-baseline

Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations and AWS Foundational Security Best Practices.

Afolabi Omotoso avatar
Afolabi Omotoso

Thank you. I will check this out

Fizz avatar

In your example, the only thing that is changing is the region, so you could run the same code multiple times and have the provider accept a variable for the region, with a different value on each run.

Fizz avatar

You’ll need to take care with the state file, as you’ll end up overwriting it if all you change is the region; but if you use Terraform workspaces, you can have a state file per region.

Fizz avatar

You also would not need a provider alias in this case, as each state file would only contain the resources of one account/region, all created by one provider.
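
A minimal sketch of that workspace-per-region approach (bucket and key names are placeholders):

variable "region" {}

provider "aws" {
  region = var.region
}

terraform {
  backend "s3" {
    bucket               = "my-tf-state"
    key                  = "security-groups/terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "regions"
  }
}

Then create one workspace per region (terraform workspace new us-east-1) and pass the matching region on each run: terraform apply -var region=us-east-1.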

Afolabi Omotoso avatar
Afolabi Omotoso

Thank you for this

2023-05-03

ohad avatar

New Podcast - theiacpodcast.com Hi all, my name is Ohad Maislish and I am the CEO and co-founder of www.env0.com. Yesterday we launched our new podcast about IaC, and 3 episodes are already live, with amazing guests such as the CEO of Infracost.io and the CTO of aquasec.com (tfsec+trivy OSS)

Shahar Glazner avatar
Shahar Glazner

Hey Ohad

jeffchao avatar
jeffchao

Hey Ohad. Cool, I’ll check it out. Super relevant to what we’re working on (in fact, been looking into env0 as well)


2023-05-04

Soren Jensen avatar
Soren Jensen

@Erik Osterman (Cloud Posse) I see there are already 2 open PRs on the S3 bucket module. The issue is blocking new deployments. It will be much appreciated if one of the 2 solutions is merged into main. https://github.com/cloudposse/terraform-aws-s3-bucket/pulls │ Error: error creating S3 bucket ACL for bucket: AccessControlListNotSupported: The bucket does not allow ACLs

José avatar

What you can do is upgrade the s3-logs version in the module's main.tf yourself, from 0.26.0 to 1.1.0, until the PR is merged.

Or the hard way

• terraform init

• sed -i "s/0.26.0/1.1.0/" …

• terraform init

• terraform apply

Soren Jensen avatar
Soren Jensen

Thanks @José

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Max Lobur (Cloud Posse) can we prioritize rolling out the release branch manager to this repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(and any S3 repos)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) let’s discuss on ARB today

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Modules terraform-aws-s3-bucket and terraform-aws-s3-log-storage have been updated to work with the new AWS S3 defaults. Other modules dependent on them should be updated soon.

cloudposse/terraform-aws-s3-bucket
cloudposse/terraform-aws-s3-log-storage
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Jeremy G (Cloud Posse)

Soren Jensen avatar
Soren Jensen

Thanks for the quick response and fix

Michael Dizon avatar
Michael Dizon

Running into a weird error when creating a bucket.

module "s3_bucket" {
  source                 = "cloudposse/s3-bucket/aws"
  version                = "3.1.0"
  acl                    = "private"
  context            = module.this.context
  kms_master_key_arn = module.kms_key.alias_arn
  sse_algorithm      = var.sse_algorithm
}
│ Error: error creating S3 bucket (xxxx) accelerate configuration: UnsupportedArgument: The request contained an unsupported argument.
│       status code: 400, request id: XZMWPBJNR1DEBXJQ, host id: CViaAfJ5ZhqVM2t6XViRKzlz+SATKo38dDxSISOQ3nihJM3K6qyWoBVizpP+ywZPrugDBbii/wQ=
│ 
│   with module.s3_bucket.aws_s3_bucket_accelerate_configuration.default[0],
│   on .terraform/modules/s3_bucket/main.tf line 48, in resource "aws_s3_bucket_accelerate_configuration" "default":
│   48: resource "aws_s3_bucket_accelerate_configuration" "default" {
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, that is weird, and I cannot reproduce it, so I will need more information if you want me to investigate further.

Michael Dizon avatar
Michael Dizon

i don’t see govcloud listed

Michael Dizon avatar
Michael Dizon

I guess that means it’s not supported?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) Please review and approve s3-bucket PR 180 to fix @Michael Dizon’s problem above.

#180 Revert change to Transfer Acceleration from #178

what

• Revert change to Transfer Acceleration from #178

why

• Transfer Acceleration is not available in every region, and the change in #178 (meant to detect and correct drift) does not work (throws API errors) in regions where Transfer Acceleration is not supported

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Michael Dizon I hope this is fixed in v3.1.1. Please try it out and report back.

Michael Dizon avatar
Michael Dizon

@Jeremy G (Cloud Posse) yeah that worked!! thank you! can terraform-aws-s3-log-storage get that bump also?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The PR is approved, but right now cannot be merged and a new release cut because of another GitHub outage

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

terraform-aws-s3-log-storage v1.3.1 released

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

terraform-aws-vpc-flow-logs-s3-bucket v1.0.1 released

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

terraform-aws-lb-s3-bucket v0.16.4 released

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#30380 [New Service]: VPC Lattice

Description

Support for recently announced VPC Lattice

https://aws.amazon.com/blogs/aws/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/
https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonvpclatticeservices.html
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/vpc-lattice/index.html?highlight=lattice

Requested Resource(s) and/or Data Source(s)

☑︎ aws_vpclattice_service
☑︎ aws_vpclattice_service_network
☑︎ aws_vpclattice_service_network_service_association
☑︎ aws_vpclattice_service_network_vpc_association
☑︎ aws_vpclattice_listener
☑︎ aws_vpclattice_listener_rule
☑︎ aws_vpclattice_target_group
☑︎ aws_vpclattice_access_log_subscription
☑︎ aws_vpclattice_auth_policy
☑︎ aws_vpclattice_resource_policy
☑︎ aws_vpclattice_target_group_attachment

Potential Terraform Configuration

TBD

References

https://aws.amazon.com/blogs/aws/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/
https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonvpclatticeservices.html
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/vpc-lattice/index.html?highlight=lattice

Would you like to implement a fix?

None

Release notes from terraform avatar
Release notes from terraform
05:33:34 PM

v1.5.0-alpha20230504 1.5.0-alpha20230504 (May 4, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression.

Release v1.5.0-alpha20230504 · hashicorp/terraform

1.5.0-alpha20230504 (May 4, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to v…
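
A sketch of the new check block syntax described in these notes (the URL and names are illustrative; the http data source comes from the hashicorp/http provider):

check "health" {
  data "http" "site" {
    url = "https://example.com/healthz"
  }

  assert {
    condition     = data.http.site.status_code == 200
    error_message = "The site did not return HTTP 200."
  }
}

A failed assertion is reported as a warning; it does not stop the plan or apply.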

2023-05-05

Release notes from terraform avatar
Release notes from terraform
03:33:32 PM

v1.5.0-alpha20230504 1.5.0-alpha20230504 (May 4, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression.

Release v1.5.0-alpha20230504 · hashicorp/terraform

1.5.0-alpha20230504 (May 4, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to v…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure.


Alex Jurkiewicz avatar
Alex Jurkiewicz

i find it funny they are described this way (which is pretty accurate), but the truth is that if one of the checks fails, execution continues

Nitin avatar

https://github.com/cloudposse/terraform-aws-elasticache-memcached

curious to know why we can’t modify the security group created by this module. (everything should be known at plan time)

cloudposse/terraform-aws-elasticache-memcached

Terraform Module for ElastiCache Memcached Cluster

2023-05-06

2023-05-08

mike avatar

Anyone familiar with the Confluent Terraform provider? I am unable to get confluent_kafka_cluster_config resources working. I always get this error:

error creating Kafka Config: 400 Bad Request: Altering resources of type BROKER is not permitted
mike avatar

Some information if anyone else runs into this issue: https://github.com/confluentinc/terraform-provider-confluent/issues/251#issuecomment-1541025164. I am not sure how to set settings on a cluster that is not of type dedicated. I suppose there might be a way to call the REST API to do so.

managedkaos avatar
managedkaos

Oddball question: what are all the possible resource actions for a plan?

When a terraform plan is displayed, it includes what will happen to each resource, i.e.:

  # aws_security_group_rule.ec2-http will be created
  # azurerm_container_group.basics will be destroyed
  # azurerm_container_group.basics will be replaced

With emphasis on created, destroyed, and replaced.

Are there any other options?

Would you happen to know the source file in the repo that contains the options? (I’ll be digging into the repo in a sec)

Alex Jurkiewicz avatar
Alex Jurkiewicz

I believe that’s it, although there are interesting sub-types, like how dangling resources get deposed

managedkaos avatar
managedkaos

I asked my good friend Chat, Chat GPT, and this is what he came back with. The response sounds reasonable but I have yet to validate…
In Terraform, there are several resource actions that can be reported in a plan. The most common ones are:
• Created: A new resource will be created.
• Updated: An existing resource will be updated.
• Replaced: An existing resource will be replaced with a new one.
• Destroyed: An existing resource will be destroyed.
• No changes: The resource has not changed since the last Terraform run.
Additionally, there are a few less common actions that may appear in a plan:
• Tainted: A resource has been marked as tainted and will be recreated on the next Terraform run.
• Imported: An existing resource has been imported into Terraform state.
• Ignored: A resource has been ignored due to the ignore_changes setting in the configuration.
• Moved: A resource has been moved to a different location within the infrastructure.
It’s worth noting that some of these actions, such as “tainted” and “ignored”, are specific to Terraform and not used in other infrastructure-as-code tools.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@managedkaos just curious, what are you building?

managedkaos avatar
managedkaos

A plan parser for GitHub actions. I came across one and didn’t like it so I thought I’d put one together. Doesn’t have to be API complete but wanted to cover as much ground as possible.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(That was going to be my guess!)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you publish it, share it!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I came across one and didn’t like it
I have seen quite a few of them, though…

managedkaos avatar
managedkaos

I will admit I didn’t look too far.

But yeah, I’ll definitely share when it comes together.

managedkaos avatar
managedkaos

It’ll be a while before i have a working “action” (i’m just scripting in a workflow at the moment) but here’s a preview:

managedkaos avatar
managedkaos

clearly my summaries are off

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sharing some of my related links:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
dflook/terraform-github-actions

GitHub actions for terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Silence "Refreshing state…" & Highlight Changes in GitHub Actions Terraform Plan Outputattachment image

This article was originally published in November 2022 on my GitHub io blog here

Purpose After working with GitHub Actions as my Terraform CI pipeline over the past year, I started looking for potential methods to clean up the Plan outputs displaye…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, @loren shared this one before: https://suzuki-shunsuke.github.io/tfcmt/

tfcmt | tfcmt

Build Status

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
mercari/tfnotify

A CLI command to parse Terraform execution result and notify it to GitHub

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Supports Slack and GitHub comments out of the box

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@managedkaos did you end up building anything? we’re looking into this right now.

managedkaos avatar
managedkaos

I did not build a complete, deployable action…. only some awk code that parses the plan and translates it to Markdown for display in a GitHub Actions job summary. Will see if I can find the code for reference….

managedkaos avatar
managedkaos

I’m doing all the plan analysis in plain text but i know it could be better using a JSON plan. Just had to move on to another project.

underplank avatar
underplank

Hi All. I’m trying to put tags on an autoscaling group. I would like them to propagate to the underlying ec2 instances. So I put this block in

  tag = {
    key                 = "service"
    value               = "prometheus_server"
    propagate_at_launch = true
  }
Alex Jurkiewicz avatar
Alex Jurkiewicz

you’re getting confused between attributes and blocks, one of Terraform’s warts (IMO)

Alex Jurkiewicz avatar
Alex Jurkiewicz

attributes are set in a resource like:

key = val

blocks are set like:

block {
  # attribs...
}
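
So for the autoscaling group case above, the fix is to write tag as a repeatable block rather than a map attribute. A minimal sketch (other arguments omitted):

resource "aws_autoscaling_group" "prometheus-server" {
  # ... name, size, launch template, etc. ...

  tag {
    key                 = "service"
    value               = "prometheus_server"
    propagate_at_launch = true
  }
}
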
underplank avatar
underplank

uuughh.. thanks… I figured it was something I was doing. Just didn't know what.

underplank avatar
underplank

On the aws_autoscaling_group resource.

underplank avatar
underplank

It complained that

╷
│ Error: Unsupported argument
│
│   on prometheus.tf line 93, in resource "aws_autoscaling_group" "prometheus-server":
│   93:   tag = {
│
│ An argument named "tag" is not expected here. Did you mean "tags"?

So I used tags instead, but then I got

│ Warning: Argument is deprecated
│
│   with aws_autoscaling_group.prometheus-server,
│   on prometheus.tf line 93, in resource "aws_autoscaling_group" "prometheus-server":
│   93:   tags = [{
│   94:     key                 = "service"
│   95:     value               = "prometheus_server"
│   96:     propagate_at_launch = true
│   97:   }]
│
│ Use tag instead
underplank avatar
underplank

Versions I'm using

Terraform v1.4.6
on darwin_arm64
+ provider registry.terraform.io/grafana/grafana v1.36.1
+ provider registry.terraform.io/hashicorp/aws v4.66.1
+ provider registry.terraform.io/hashicorp/helm v2.9.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.20.0
+ provider registry.terraform.io/hashicorp/tls v4.0.4
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

does

`which terraform` version

Return 1.4.6

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe something we can bring up on #office-hours today

2023-05-09

2023-05-10

Michael Dizon avatar
Michael Dizon

wondering if anyone had some thoughts or design references for creating 2 s3 buckets: one source bucket and one replication bucket, in a single module using the terraform-aws-s3-bucket module.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Generally you want the target bucket to be in a separate region, which means a separate AWS provider, so you have to decide how you want to organize that. You can have a root module with 2 providers, pass the 2nd one into s3-bucket to create the replication destination, then pass the output of that into another s3-bucket instantiation using the default AWS provider to create the primary bucket.
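
A rough sketch of that layout (the replication inputs and bucket_arn output are assumptions based on the cloudposse/terraform-aws-s3-bucket docs of that era; regions and versions are placeholders):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "replication"
  region = "us-west-2"
}

# Destination bucket in the replica region
module "destination_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "3.1.1"

  providers = {
    aws = aws.replication
  }

  context = module.this.context
}

# Primary bucket, replicating into the destination
module "source_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "3.1.1"

  s3_replication_enabled = true
  s3_replica_bucket_arn  = module.destination_bucket.bucket_arn

  s3_replication_rules = [
    {
      id     = "replicate-all"
      status = "Enabled"
    }
  ]

  context = module.this.context
}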

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse)

Michael Dizon avatar
Michael Dizon

since the target bucket accepts the output replication_role_arn from the source, and the s3_replication_rules reference the target bucket, which doesn’t yet exist

Utpal Nadiger avatar
Utpal Nadiger

Folks who use Atlantis for Terraform Self Service - what pains you the most?

We are building an Open Source GitOps tool for Terraform (https://github.com/diggerhq/digger) and are looking for what’s missing. We also read & asked around. We found the following pain points already, curious for more:

  1. In Atlantis, anyone who can run a plan can exfiltrate your root credentials. This is talked about by others and was highlighted at the DEF CON 2021 conference. (CloudPosse)
  2. “Atlantis shows plan output, if it’s too long it splits it to different comments in the PR which is not horrible, just need to get used to it.” (User feedback)
  3. Anyone that stumbles upon your Atlantis instance can disable apply commands, i.e. stopping production infrastructure changes. This isn’t obvious at all, and it would be a real head scratcher to work out why Atlantis suddenly stopped working! (Loveholidays blog)
  4. “Atlantis does not have Drift Detection.” (Multiple users)
  5. “The OPA support in atlantis is very basic.” (Multiple users)

As CloudPosse themselves explain - “Atlantis was the first project to define a GitOps workflow for Terraform, but it’s been left in the dust compared to newer alternatives.” The problem though is that none of the newer alternatives are Open Source, and this is what we want to change. Would be super grateful for any thoughts/insights and pain points you have faced.
Terraform Plan RCE

Running a Terraform plan on untrusted code can lead to RCE and credential exfiltration.

Enforcing best practice on self-serve infrastructure with Terraform, Atlantis and Policy As Code

Here at loveholidays we are heavily dependent on Terraform. All of our Google Cloud infrastructure is managed using Terraform, along with a…

diggerhq/digger

Digger is an open source Terraform Cloud alternative. Digger allows you to run Terraform plan / apply in your CI. No need for separate CI tool, comes with all batteries included

loren avatar

Support for CodeCommit would be nice


jose.amengual avatar
jose.amengual

1.- anyone that can run terraform can exfiltrate your root credentials, not an atlantis problem per se

loren avatar

i think it’s a CI problem. if i’m running locally, i already have credentials… but if i’m getting the CI credentials, that’s not great

loren avatar

also not sure what might be done to stop that, outside of config parsing and policy enforcement tools

jose.amengual avatar
jose.amengual

3.- put the UI under OIDC etc and give access only to the people that need it

jose.amengual avatar
jose.amengual

4.- it is being developed, and there is a github action you could test https://github.com/cresta/atlantis-drift-detection

cresta/atlantis-drift-detection

Detect terraform drift in atlantis

jose.amengual avatar
jose.amengual

5.- you can run conftest, what is basic about it?

jose.amengual avatar
jose.amengual

Ask CloudPosse again about the status of Atlantis, I’m pretty sure they have changed their minds. Check the releases as well, we have been updating the code pretty often

jose.amengual avatar
jose.amengual

by the way I’m one of the Atlantis maintainers

kunalsingthakur avatar
kunalsingthakur

When I am reading the documentation of Digger, they have a comparison with open source and enterprise tools

kunalsingthakur avatar
kunalsingthakur

That’s the best thing

kunalsingthakur avatar
kunalsingthakur

If they work on solving the issues of both, soon they will attract people to use it

kunalsingthakur avatar
kunalsingthakur

Does Digger have authentication controls for VCS providers?

kunalsingthakur avatar
kunalsingthakur

Like Bitbucket user authentication, which right now is not available in Atlantis

kunalsingthakur avatar
kunalsingthakur

They only support github

kunalsingthakur avatar
kunalsingthakur

Terragrunt integration

kunalsingthakur avatar
kunalsingthakur

Does Digger provide PR automation the same as Atlantis?

curious deviant avatar
curious deviant

Hello, I have set up provisioned concurrency with scheduled scaling for my lambda. However, successive terraform runs cause the error: Error updating lambda alias ResourceConflictException: Alias can’t be used for provisioned concurrency configuration on an already provisioned version. Is this something anyone else has run into?

Joe Perez avatar
Joe Perez

Hello all, I was wondering if anyone has had success with delivering developer permissions within AWS SSO and the proper guardrails for permissions in a build system that runs terraform. I also acknowledge it is slower to iterate on Terraform changes when you have to check in a change and run a build each time. Maybe others have found success in the balance between security and speed

Michael Galey avatar
Michael Galey

I like my aws SSO setup and permissions, but as a small company we just have permissions like DataTeam/Engineer/Admin for different aws orgs that are our environments, where engineer is read/write to most things we use.

Michael Galey avatar
Michael Galey

no experience running terraform in a build system as I’m the only one that does it

Joe Perez avatar
Joe Perez

@Michael Galey thank you for the input, I’m definitely of the mind to keep things simple as possible

Joe Perez avatar
Joe Perez

as for running/not running in a build system, I get it; for your use case as the only one doing ops-y things, it might not be a priority until the team grows

Joe Perez avatar
Joe Perez

there’s also compliance/security stuff, but that’s another thing that needs to be prioritized

Michael Galey avatar
Michael Galey

for my own auditing/backup info, i also log all runs and send diffs as events into datadog, the diffs sometimes are beyond the char limit tho

Joe Perez avatar
Joe Perez

oh, that’s very cool, just a homegrown wrapper around terraform?

Michael Galey avatar
Michael Galey

yea, i also wrap other terraform stuff to use SSM parameters/aws sso

Michael Galey avatar
Michael Galey

from my ‘tfa’ (terraform apply)

Michael Galey avatar
Michael Galey

so it also catches/warns of destructive stuff, and asks for an extra confirm later on

Joe Perez avatar
Joe Perez

the beauty of bash, very cool stuff, I have a similar alias, but for aws sso+aws-vault+terraform

Michael Galey avatar
Michael Galey

what do you need aws-vault for? sso means it doesn’t have to vault anything right? or you store even the temp creds in the vault?

Joe Perez avatar
Joe Perez

I think it’s more habit than anything now to keep aws-vault around. Used it prior to AWS SSO

Joe Perez avatar
Joe Perez

I have to fix my blog, but I created a post about terraform wrappers https://github.com/jperez3/taccoform-blog/blob/master/hugo/content/posts/TF_WRAPPER_P1.md


+++
title = “Terraform Wrappers - Simplify Your Workflow”
tags = [“terraform”, “tutorial”, “terraform1.x”, “wrapper”, “bash”, “envsubst”]
date = “2022-11-09”
+++

Tortillas

Overview

Cloud providers are complex. You’ll often ask yourself three questions: “Is it me?”, “Is it Terraform?”, and “Is it AWS?” The answer will be yes to at least one of those questions. Fighting complexity can happen at many different levels. It could be standardizing the tagging of cloud resources, creating and tuning the right abstraction points (Terraform modules) to help engineers build new services, or streamlining the IaC development process with wrappers. Deeper understanding of the technology through experimentation can lead to amazing breakthroughs for your team and business.

Lesson

• Terraform Challenges
• Terraform Wrappers
• Creating Your Own Wrapper
• Wrapper Example

Terraform Challenges

As you experiment more with Terraform, you start to see where things can break down. Terraform variables can’t be used in the backend configuration, inconsistencies grow as more people contribute to the Terraform codebase, dependency differences between provisioners, etc.

Terraform Wrappers

You may or may not be familiar with the fact that you can create a wrapper around the terraform binary to add functionality, or that several open source terraform wrappers have existed for several years already. The most well-known terraform wrapper is terragrunt, which was ahead of its time by filling in gaps in Terraform’s features and provided things like provisioning entire environments. I tried using terragrunt around 2016 and found the documentation to be confusing and incomplete. I encountered terragrunt again in 2019 and found it to be confusing and frustrating to work on. I didn’t see the utility in using a wrapper and decided to steer away from wrappers, favoring “vanilla” terraform. I created separate service workspaces/modules and leaned heavily into tagging/data-sources to scale our infrastructure codebase. In 2022, we’ve started to support developers writing/maintaining their own IaC with our guidance. In any shared repo, you will notice that people naturally have different development techniques and/or styles. We’re all about individuality when it comes to interacting with people, but cloud providers are less forgiving. Inconsistencies across environments slow down teams, destroy deployment confidence, and make it very difficult to debug problems when they arise.

Creating Your Own Wrapper

It may be difficult to figure out at first, but only you and your team know the daily pain points when dealing with terraform. You also know how organized (or disorganized) your IaC can be. At the very least, the following requirements should be met:

  1. You have a well-defined folder structure for your terraform workspaces. This will allow you to cherry-pick information from the folder path and predictably source scripts or other files.
  2. Your modules and workspaces have a 1:1 mapping, which means for every folder with terraform files, you’re only deploying one terraform module. No individual resource definitions are created. This helps with keeping consistency across environments.

Once you’ve gotten the prereqs out of the way, you can start thinking about what you want the wrapper to do that isn’t already built into the terraform binary. Start by picking one or two features and your programming language of choice. You can jump right into using something like python or go, but I would actually recommend starting with bash. It will work on most computers, so you don’t have to worry about specific dependencies if you want a teammate to kick the tires on your terraform wrapper. If and when your terraform wrapper blows up with functionality, then you can decide to move it to a proper programming language and think about shoving it into a container image.

Wrapper Example

Organization and Requirements Gathering

I’ve created a repo called terraform-wrapper-demo and inside I’ve created a service called burrito. The burrito service has a well-organized folder structure:

burrito
├── modules
│   └── base
│       └── workspace-templates
├── scripts
└── workspaces
    ├── dev
    │   └── base
    └── prod
        └── base

I also have a 1:1 mapping between my base workspaces and modules. The burrito module is very basic and for demonstration purposes only includes an s3 bucket. If this were a real service, it would have more specifics on compute, networking, and database resources.

Ok, this setup is great, but even with the 1:1 mapping of workspaces to modules, we’re still seeing inconsistencies across environments. Some inconsistencies are small, like misspelled tags, and others are big, like a security group misconfiguration. As a member of the team who contributes to the burrito service’s codebase, I want things to be consistent across environments. Advancing changes across nearly identical environments gives a developer confidence that the intended change will be uneventful once it reaches production.

It sounds like templates can help mitigate fears of inconsistency across environments. Let’s put together some requirements:

  1. The wrapper should be similar to the existing terraform workflow to make it easy to use
  2. Workspace templates should be pulled from a centralized location, injected with environment specific variables, and placed into their respective workspaces.

Starting The Wrapper Script

We want the wrapper script to act similar to the terraform command. So the script will start with a command (the script name) and we’ll call it tee-eff.sh. We’ll also expect it to take a subcommand. If you’re familiar with Terraform, this is stuff like init, plan, apply. Using the script would look something like tee-eff.sh plan.

  1. Ok, now to start the script, let's begin with the input:

tee-eff.sh

#!/bin/bash

SUBCOMMAND=$1

• Now the first argument supplied to the script will be set as the SUBCOMMAND variable.

  2. Now we can focus on the variables we need to interpolate by looking at the provider.tf file:
terraform {
    backend "s3" {
        bucket = "$TF_STATE_BUCKET_NAME-$ENV"
        key    = "$REPO_NAME/$SERVICE_PATH/terraform.tfstate"
        region = "$BUCKET_REGION"
    }

}

provider "aws" {
    region = "$AWS_REGION"

    default_tags {
        tags = {
            Terraform_Workspace = "$REPO_NAME/$SERVICE_PATH"
            Environment         = "$ENV"
        }
    }
}

terraform {
    required_providers {
        aws = {
            source  = "hashicorp/aws"
            version = "~> 4.0"
        }
    }

    required_version = "~> 1.0"
}

• We’ll want to replace any variables denoted with a $ at the beginning with values from our tee-eff.sh script. The same goes for variables in the burrito_base.tf file, which can be found below:

burrito_base.tf

module "$SERVICE_$MODULE_NAME" {
    source = "../../../modules/$MODULE_NAME"

    env = "$ENV"
}

Note: Things like the backend values and module name cannot rely on terraform variables because those variables are loaded too late in the terraform execution process to be used.

  3. After we’ve tallied up the required variables, we can come back to the tee-eff.sh script to set those variables as environment variables:

tee-eff.sh

#!/bin/bash

SUBCOMMAND=$1

echo "SETTING VARIABLES"

# Terraform Backend S3 Bucket
export TF_STATE_BUCKET_NAME='taccoform-tf-backend'

# current working directory matches …

Joe Perez avatar
Joe Perez

I like wrappers, but also want to be aware of how tangled they can become

Michael Galey avatar
Michael Galey

thanks for sharing! agreed, my setup would be unfortunate to teach a second person, even though it’s technically cool when understood

Joe Perez avatar
Joe Perez

I think your setup is good and teachable to another engineer. I think the bigger hurdle is organizing the terraform in a way where you two can actively work in the same environment without stepping on each other's toes

Michael Galey avatar
Michael Galey

yea for sure

2023-05-11

Elad Levi avatar
Elad Levi

Hey all, I’m trying to use the terraform module cloudposse/firewall-manager/aws on version 0.3.0

I can’t find the right way to add the logging_configuration block in order to use an S3 bucket as the waf_v2_policies direct log destination.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Dan Miller (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Elad Levi I guess the logging_configuration configuration is part of this variable https://github.com/cloudposse/terraform-aws-firewall-manager/blob/master/variables.tf#L233

      logging_configuration:
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
      loggingConfiguration              = lookup(each.value.policy_data, "logging_configuration", local.logging_configuration)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
SecurityServicePolicyData - AWS Firewall Manager

Details about the security service that is being used to protect the resources.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

search for loggingConfiguration in the wall of JSON there

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you will need to provide a similar JSON string to logging_configuration in variable "waf_v2_policies"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, the module was not updated in 1.5 years and prob needs some love

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Example: WAFV2 - Logging configurations

"{\"type\":\"WAFV2\",\"preProcessRuleGroups\":[{\"ruleGroupArn\":null, \"overrideAction\":{\"type\":\"NONE\"},\"managedRuleGroupIdentifier\": {\"versionEnabled\":null,\"version\":null,\"vendorName\":\"AWS\", \"managedRuleGroupName\":\"AWSManagedRulesAdminProtectionRuleSet\"} ,\"ruleGroupType\":\"ManagedRuleGroup\",\"excludeRules\":[], \"sampledRequestsEnabled\":true}],\"postProcessRuleGroups\":[], \"defaultAction\":{\"type\":\"ALLOW\"},\"customRequestHandling\" :null,\"customResponse\":null,\"overrideCustomerWebACLAssociation\" :false,\"loggingConfiguration\":{\"logDestinationConfigs\": [\"arn:aws:s3:::aws-waf-logs-example-bucket\"] ,\"redactedFields\":[],\"loggingFilterConfigs\":{\"defaultBehavior\":\"KEEP\", \"filters\":[{\"behavior\":\"KEEP\",\"requirement\":\"MEETS_ALL\", \"conditions\":[{\"actionCondition\":\"CAPTCHA\"},{\"actionCondition\": \"CHALLENGE\"}, {\"actionCondition\":\"EXCLUDED_AS_COUNT\"}]}]}},\"sampledRequestsEnabledForDefaultActions\":true}"
Elad Levi avatar
Elad Levi

Thanks @Andriy Knysh (Cloud Posse) The problem for me was that I used this block:

        loggingConfiguration = ({
          "logDestinationConfigs" = ["arn:aws:s3:::aws-waf-logs-bucket-name-01"]
          "redactedFields" = []
          "loggingFilterConfigs" = null
        })

And I should have used this:

        logging_configuration = ({
          "logDestinationConfigs" = ["arn:aws:s3:::aws-waf-logs-bucket-name-01"]
          "redactedFields" = []
          "loggingFilterConfigs" = null
        })

As far as I can see there is no need to write it as a JSON string, because the loggingConfiguration will be inside jsonencode anyway, as you see here:

    managed_service_data = jsonencode({
      type                  = "WAFV2"
      preProcessRuleGroups  = lookup(each.value.policy_data, "pre_process_rule_groups", [])
      postProcessRuleGroups = lookup(each.value.policy_data, "post_process_rule_groups", [])

      defaultAction = {
        type = upper(each.value.policy_data.default_action)
      }

      overrideCustomerWebACLAssociation = lookup(each.value.policy_data, "override_customer_web_acl_association", false)
      loggingConfiguration              = lookup(each.value.policy_data, "logging_configuration", local.logging_configuration)
    })

2023-05-12

shamwow avatar
shamwow

just had a question regarding custom terraform modules, is it generally considered best practice to “pin” things like the terraform version and provider versions in the module? I feel like that's where it should be done but just looking for some advice

José avatar

As the module usage in the README.md file and the examples suggest, yes. You should have pinned versions and a systematic way to control their upgrades, to avoid disruptions due to breaking changes in the latest releases.
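
A typical pinning sketch (the version numbers here are illustrative):

terraform {
  required_version = ">= 1.3.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # allow minor/patch upgrades, block major ones
    }
  }
}

module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "3.1.1" # exact pin for modules, bumped deliberately
  # ...
}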

shamwow avatar
shamwow

ok cool, ty @José

shamwow avatar
shamwow

there was talk of doing this from terragrunt, which we are using to implement said modules, and I feel like that's very dangerous

Kyle Stevenson avatar
Kyle Stevenson

Hi, does anyone know how to implement something like this where the user calling the module can say if they want to install the EKS addon and optionally provide configuration for it?

Hao Wang avatar
Hao Wang

try function?

Kyle Stevenson avatar
Kyle Stevenson

The issue is you can’t use dynamic in module input params @Hao Wang. Not sure if it’s possible to do differently, but that gives an idea of what I’m trying to accomplish.

Hao Wang avatar
Hao Wang

Yeah, it is possible, you can pass cluster_addons as a variable with dynamic

Hao Wang avatar
Hao Wang
cluster_addons
Hao Wang avatar
Hao Wang

may need to use local variables to compose it

Kyle Stevenson avatar
Kyle Stevenson

So compose the value in a local using dynamic then pass that local var to it

Kyle Stevenson avatar
Kyle Stevenson

I’ll have to give it a go, new to Terraform/hcl

Hao Wang avatar
Hao Wang

got it

Kyle Stevenson avatar
Kyle Stevenson

Awesome, thanks for the suggestion

Hao Wang avatar
Hao Wang

you are welcome

1
Kyle Stevenson avatar
Kyle Stevenson

I am not having any luck with this at all

Kyle Stevenson avatar
Kyle Stevenson

Looking to dynamically generate cluster_addons, if anyone has any ideas please let me know
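
One workaround, since dynamic blocks can't be used on module arguments, is to compose the map with merge() and a conditional (a sketch; assumes an EKS module that takes a cluster_addons map input):

variable "enable_coredns" {
  type    = bool
  default = false
}

locals {
  cluster_addons = merge(
    {
      kube-proxy = {}
      vpc-cni    = {}
    },
    var.enable_coredns ? {
      coredns = {
        # optional per-addon configuration, serialized for the addon API
        configuration_values = jsonencode({ replicaCount = 2 })
      }
    } : {}
  )
}

Then pass local.cluster_addons to the module's cluster_addons input.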

2023-05-13

2023-05-14

2023-05-15

Release notes from terraform avatar
Release notes from terraform
03:43:33 PM

v1.5.0-beta1 1.5.0-beta1 (May 15, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression.

Release v1.5.0-beta1 · hashicorp/terraform

1.5.0-beta1 (May 15, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate…

Hussam avatar

Hey guys, I am using the EMR Cluster module and it is creating all my master and core instances with the same name. Is there a way to give them unique names to easily identify them instead?

Hao Wang avatar
Hao Wang

I set up EMR before but didn't use different names

Hao Wang avatar
Hao Wang

After reviewing the code, core/master nodes use different labels

Hao Wang avatar
Hao Wang

and they use different attributes already in the module

Hao Wang avatar
Hao Wang

May need to pass a Name tag
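
If the module forwards tags to the instance groups, something along these lines might work (a sketch; the tags input follows the usual Cloud Posse conventions, and whether master/core groups can get distinct names depends on the module version):

module "emr_cluster" {
  source = "cloudposse/emr-cluster/aws"
  # version = "x.y.z"

  # ... existing inputs ...

  tags = {
    Name = "my-emr-cluster"
  }
}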

Hussam avatar

I will try it out. Thank you.

Hao Wang avatar
Hao Wang

np

2023-05-16

managedkaos avatar
managedkaos
Terraform Cloud updates plans with an enhanced Free tier and more flexibility

Terraform Cloud’s Free tier now offers new features — including SSO, policy as code, and cloud agents — while new paid offerings update scaling concurrency and more.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Terraform Cloud’s Free tier now offers new features — including SSO, policy as code, and cloud agents


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow! Includes cloud agents. Amazing. I’ve been lobbying for this as the free tier was unusable as implemented because it required hardcoding AWS admin credentials as environment variables.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Smart move! cc @Jake Lundberg (HashiCorp)

managedkaos avatar
managedkaos

yeah pretty sweet. might make me think about taking another look at TFE. Even with GitHub Actions et al. you have to hard code creds. Self-hosted runners and dedicated build agents with IAM roles get you around that though. I guess this is Hashi getting on board.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
aws-actions/configure-aws-credentials

Configure AWS credential environment variables for use in other GitHub Actions.

managedkaos avatar
managedkaos

OK ok i will concede to the openid configuration. I guess i am thinking about armchair devs/ops

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Yes, the dynamic credentials in TFE/C look quite a bit like GitHub cloud credentials.

managedkaos avatar
managedkaos

also, when i say “hard code” i mean repo secrets… which leads to the configure-aws-credentials action

managedkaos avatar
managedkaos

i guess my trade offs for the casual dev/ops people are getting them up and running with an AWS key in secrets or going through another hour or so of training to explain and set up the openid configuration.

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

But overall the pricing model is moving more towards scale of operations which is why we’ll see more traditional paid features be free for smaller teams. And no SSO tax…that’s table stakes these days.

Alex Jurkiewicz avatar
Alex Jurkiewicz

still no list pricing for larger org plans, I guess it’s hard to give up the spice of “how much do we think you can afford”

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Would you pay the list price as a larger org?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, with more private workers / concurrency

Stef avatar

Just my luck, signed up for a paid TFC plan a few months ago because we needed SSO and Teams management.

2023-05-17

james.knott avatar
james.knott

Good Morning! I’m new here so I’m looking for a place where I can jump in and get started. Is this still the best place to start? https://github.com/cloudposse/reference-architectures

cloudposse/reference-architectures

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.

Michael Galey avatar
Michael Galey

Not sure, looks a little old, and some providers had decent sized changes in the last 2 years. If you know terraform already, I found it helpful to read some of their modules and compare to my own that did similar things, understanding how they use the ‘this’ object/context object and pass that around etc.


James Knott avatar
James Knott

@Michael Galey Thank you so much!

Utpal Nadiger avatar
Utpal Nadiger

We launched Digger v4.0 on Product Hunt today! It has been quite a journey from Digger Classic (v1.0), to AXE (v2.0), to Trowel (v3.0) and finally to our current version.

Read more about our iterative journey in the blog and please share your feedback (good and bad) on Product Hunt

Digger - Open Source GitOps tool for Terraform | Product Hunt

Digger is an open source tool that helps you run Terraform in the CI system you already have, such as GitHub Actions.

Hao Wang avatar
Hao Wang

great, will give it a try, can I use gmail to join slack?


Utpal Nadiger avatar
Utpal Nadiger

Absolutely! This is the link

Hao Wang avatar
Hao Wang

Cool, joined

kunalsingthakur avatar
kunalsingthakur

What I feel is that your pro features should be open source first, because there are already tools available which support these

kunalsingthakur avatar
kunalsingthakur

Like terrakube has already implemented it

kunalsingthakur avatar
kunalsingthakur

Drift detection, OPA integration

kunalsingthakur avatar
kunalsingthakur

Atlantis with custom workflows can add these features

kunalsingthakur avatar
kunalsingthakur

Scalr is also providing all features, unlimited, for small-business purposes with a limited number of runs, but they are not restricting you from using their features before buying

kunalsingthakur avatar
kunalsingthakur

@Utpal Nadiger let me know if I missed anything; I need something I can pick up without pro, but use your tool as a competitor to all the open source tools

kunalsingthakur avatar
kunalsingthakur

Which are already fighting to be the alternative to Terraform Cloud

kunalsingthakur avatar
kunalsingthakur

Even Terraform Cloud has enabled features in the free tier, with unlimited user accounts, RBAC, and agents

kunalsingthakur avatar
kunalsingthakur

And many more

Josh Pollara avatar
Josh Pollara

I’m really excited to release Terrateam Self-Hosted today. Full feature parity with our Cloud version. This is our first step to making Terrateam open source. Looking forward to community feedback, feature requests, etc.

https://github.com/terrateamio/terrateam https://terrateam.io/blog/terrateam-self-hosted

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great move!

Josh Pollara avatar
Josh Pollara

Thanks!

kunalsingthakur avatar
kunalsingthakur

Terrateam should support bitbucket

kunalsingthakur avatar
kunalsingthakur

In the first place, don't depend on GitHub in terms of Git providers

kunalsingthakur avatar
kunalsingthakur

Congratulations terrateam

2023-05-18

Josh Pollara avatar
Josh Pollara

Thanks @kunalsingthakur – We have seriously thought about supporting BitBucket (also GitLab) but our journey hasn’t taken us there yet. I’m curious though. Are you doing anything now with Terraform + BitBucket?

managedkaos avatar
managedkaos

Just chiming into the conversation.

i support one customer running a workload with Bitbucket + TF + AWS.

The Bitbucket pipeline tooling is working pretty well for triggers, branch specification (for build targets), and environments for deployment tracking.

managedkaos avatar
managedkaos

i do like the cloudspend feature of Terrateam though!

James Knott avatar
James Knott

Hello, I’m trying to get started with CloudPosse and SweetOps and want to make sure I’m starting at square one. This is the old reference architecture https://github.com/cloudposse/reference-architectures. Has anything replaced it and if not where is a good place to start? Thank you

cloudposse/reference-architectures

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So our reference architecture is the one thing we hold back. We have some simple examples though


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, check out our refarch and atmos channels

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Tutorials | atmos

Here are some lessons on how to implement Atmos in a project.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

https://cloudposse.com/services/ ← our paid reference architecture

Cloud Posse Professional Services

We have solutions that fit your scale and budget.

Nikola Niksa avatar
Nikola Niksa

hey guys, I have an issue with the Cloud Posse Elasticsearch module not being able to enable view_index_metadata. Is there any way that someone can help me out, perhaps?

Nikola Niksa avatar
Nikola Niksa

or is it a version block, since I am at version 0.35.1 of the module

Nikola Niksa avatar
Nikola Niksa

Managed to enable it, but the way it was done is just dumb, since nowhere did I manage to find a correlation between es:HTTPHead and view_index_metadata

2023-05-22

Paula avatar

Hi! I’m looking for advice as I’m new to Terraform projects. My team and I are starting to migrate all our infrastructure to Infrastructure as Code (IaC) using Terraform. The Minimum Viable Product (MVP) consists of approximately 60 microservices. This is because these services have dependencies on certain databases. Currently, we have deployed some of this infrastructure in a development environment. These services are built using different technologies, have different environment variables, and require different permissions. We are using preexisting modules, but we have a separate folder for each service for the particular configurations of each one. We have started a monorepo with a folder for each environment. However, the apply process is taking around 15 minutes, and our project organization is structured as follows:

• Staging
  ◦ VPC
    ▪︎ VPC-1
      • ECS
        ◦ Service-1
        ◦ Service-2
        ◦ Service-3
        ◦ Service-N
      • RDS
        ◦ RDS-1
        ◦ RDS-2
        ◦ RDS-3
        ◦ RDS-N
    ▪︎ VPC-2
      • ECS
        ◦ Service-1
        ◦ Service-2
        ◦ Service-3
        ◦ Service-4
        ◦ Service-N
      • RDS
        ◦ RDS-1
    ▪︎ VPC-3
      • ECS
        ◦ Service-1
        ◦ Service-2
        ◦ Service-3

Can you give me your advice?

Hao Wang avatar
Hao Wang

I’m very familiar with ECS, but feel it is better to migrate ECS to k8s

Paula avatar

one step at a time

jose.amengual avatar
jose.amengual

k8s is not the solution for a lot of companies, it all depends on workload/knowledge level etc

jose.amengual avatar
jose.amengual

Paula, what advice are you looking for?

jose.amengual avatar
jose.amengual

if you go monorepo, I will advise making your RDS, ECS, ALB, etc. modules very flexible so you can stand up any of your services

jose.amengual avatar
jose.amengual

you could look into using Atmos from cloudposse to make it easier to manage the inputs between all your services/modules

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think the question is “the apply process is taking around 15 minutes”

That’s a very slow apply. Why is it slow? There are generally two reasons:

  1. Too many resources in the Terraform configuration
  2. Some resources are very slow to change
     a. And perhaps you have several of these with dependencies, causing them to apply linearly

The way to improve things depends on the specifics of this answer.
Paula avatar

Definitely option 1. There are many resources and I feel like I’m accidentally making a monolith. Inside each service folder I have the creation of the ECR, the task exec role, the container definition, the load balancing ingress, the service, the task, the pipeline, CodeBuild, and the autoscaling (most of it using Cloud Posse’s modules) with the specific configuration of each service

Paula avatar

Answering about k8s: we don’t have enough knowledge to migrate (startup mode activated). I’m gonna check out Atmos

Alex Jurkiewicz avatar
Alex Jurkiewicz

gotcha. One way to think about how to split things up is by lifecycle. You may want to separate the Terraform which sets up the environment from the Terraform which deploys the application. It sounds like you’ve mostly done that, though. Perhaps you might want to split out the pipeline/codebuild to a second configuration, but that might be pointless complexity.

If these stacks are mostly concerned with things that don’t change often, taking 15mins to run is probably fine. How often do you change ingress rules or autoscaling thresholds? It sounds like a stack you spin up once per service and then change a few times a year

jose.amengual avatar
jose.amengual

I will say, from my experience, RDS will take the longest, so maybe doing what Alex is suggesting with RDS and having a different deployment lifecycle could be a good option. In my company projects we do not put RDS/Aurora deployments as part of the app because some actions can take a long time

tommy avatar

We have encountered the same issue. Our team mitigates it by splitting the monolith into separate, independently deployed modules. You can use Atmos by Cloud Posse or Terragrunt to manage the dependency graph.

Then you don’t need to apply all resources, just apply the related ones when updating.

1
this1
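
A minimal sketch of that split (bucket, key, and module paths here are hypothetical): each service becomes its own root module that reads shared VPC outputs via terraform_remote_state instead of living in one monolithic configuration.

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"              # hypothetical bucket
    key    = "staging/vpc/terraform.tfstate" # hypothetical key
    region = "us-east-1"
  }
}

module "service" {
  source     = "./modules/ecs-service" # hypothetical local module
  vpc_id     = data.terraform_remote_state.vpc.outputs.vpc_id
  subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnet_ids
}
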
tommy avatar

You can check out the infrastructure reference architectures of Cloud Posse or Gruntwork, which offer you insights to mitigate this issue.

kallan.gerard avatar
kallan.gerard

@Paula I’m a bit late to the party, but I think the first issue to address is the configuration/folder hierarchy

kallan.gerard avatar
kallan.gerard

The fundamental problem is you’ve got the relationship inside out. Instead of thinking of a terraform monorepo that contains configuration for many environments > networks > infrastructure > services…

kallan.gerard avatar
kallan.gerard

You should think of services containing their own private infrastructure and deployment configuration (whether that’s terraform, kubernetes, ecs service and task definitions, ci/cd config etc) in a service monorepo. Personally I’d put it with the service source code repo, but at least a repo that is privately owned and maintained by that service.

kallan.gerard avatar
kallan.gerard

The problem you’ve got is that you’ve sliced things horizontally by what they are, not the service. This is equivalent to a neighbourhood where each house contains one type of thing for everyone. For example one house contains everybody’s chairs. One house contains everybody’s beds. One house contains everybody’s tables etc.

But of course in reality each house is independent and private to everyone else, as dictated by the owners/occupants of that house. And within each house, things are arranged by use case, not furniture type. For example a living room could contain chairs, tables, TVs etc.

The same way, a service should contain and control its own infrastructure arranged by use case. That’s the whole point of service-oriented architecture: maintainers of a service can perform the things they need to do end to end without having to involve other people.

2023-05-23

venkata.mutyala avatar
venkata.mutyala

How do you folks manage terraform provider updates? For example we have a lot of terraform and we prefer to bring everything to the same version across our many repos/state files. We have used lock files and/or manually pinned each provider but this has significant overhead as we need to go to each repo or module and make updates. Would love to know if anyone has found a more optimal solution to this.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think there are two good options:

  1. Just don’t bother. Provider updates almost never fix bugs, and if you write code for a new feature it’s quite painless to update to the latest version on-demand.
  2. Renovate with autoMerge
Renovate Docs

Renovate documentation.

2
venkata.mutyala avatar
venkata.mutyala

Thanks

Hao Wang avatar
Hao Wang

wow Renovate is magic

Alex Jurkiewicz avatar
Alex Jurkiewicz

we’re rolling it out now. But I’d caution that it might just be adding pointless busywork. Does it really matter if you use AWS provider 3.68 instead of 3.72? Is it a waste of time to review PRs upgrading such providers?

And if you enable autoMerge, how do you ensure that problematic upgrades are not merged? With Terraform, your CI probably runs a plan and reports if there are no errors. But “no errors” doesn’t mean “no changes”. A provider upgrade might make significant changes to your resources and it’s very difficult to detect this automatically.

Renovate with Terraform is a bit of a rabbit hole, I’m finding.

2
kallan.gerard avatar
kallan.gerard

Agreed. If you’re concerned about compatibility with shared modules, I’d argue to maybe consider the pros and cons of org-wide modules.

venkata.mutyala avatar
venkata.mutyala

So since I brought this question up: we had built our own little registry, thinking we could do a proxy that sat in front of the official Terraform registry. The way it worked is that our proxy allowed us to define what versions are actually available from the official Terraform registry. However, where it doesn’t work is within modules and anything else that is nested: all the modules we use (ex. Cloud Posse modules) would need to also be configured to pull from our custom provider registry. So we came up with another idea today: https://github.com/GlueOps/terraform-module-provider-versions/blob/main/README.md. We wrote a module that only defines our provider versions, and we now reference that module across all our Terraform directories.

It’s been 2 hours since we rolled it out and it’s working well so far…

terraform-module-provider-versions Overview

To use this repo as your source of truth for provider versions just add a file like this to your terraform directory:

provider_versions.tf:

module "provider_versions" {
  source = "git::<https://github.com/GlueOps/terraform-module-provider-versions.git>"
}

Note:

• GlueOps uses main for production across all repositories. So please test compatibility as needed on a feature branch.
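
Presumably (the repo itself isn’t shown here) such a versions-only module contains little more than a terraform block, along the lines of this hypothetical sketch. Terraform combines required_providers constraints from every module in the tree, so a shared module like this effectively pins versions for all of its callers:

# versions.tf in the shared module (pins below are hypothetical)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.67"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}
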

venkata.mutyala avatar
venkata.mutyala
GlueOps/terraform-registry-proxy
venkata.mutyala avatar
venkata.mutyala

^ we will probably be archiving this repo

Alex Jurkiewicz avatar
Alex Jurkiewicz

Can you talk a bit about what benefit this brings you? What’s the motivation here

venkata.mutyala avatar
venkata.mutyala

Definitely. We calculated we were spending about 3-6 hours a month just updating all of our Terraform directories/repos to use the same/latest Terraform providers. So speeding up this repetitive process was the primary motivation here.

1
kallan.gerard avatar
kallan.gerard

It’s an interesting pattern; I wouldn’t have thought about using a module purely for required versions.

I guess to take a step back, what are you solving by making all your configs use the same provider versions?

Plus I assume you’ll still need to run tf init -upgrade and commit all your .terraform.lock.hcl files

1
venkata.mutyala avatar
venkata.mutyala

That’s a really good point. We will need to revisit if and how we want to handle the .terraform.lock.hcl

RE: standardized/updated configs.

So we actually update all our apps each month. We don’t do every last dependency/package but for all the layers we manage, we try to bring it up to the latest release/patch. Historically, we have worked on teams where we usually ignored updates and have found that falling behind on software updates (not specifically terraform) ends up being a huge nightmare at the most inconvenient times. So for a little over the past year we have been doing updates once a month. It takes about a full day for one of us to get it done.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Any hashicorp ambassadors able to promote this small, simple PR: https://github.com/hashicorp/terraform/pull/30121/files

• Enables the ability to create IAM policies that give roles access to state files based on tags in S3 for fine-grained access permissions.

• It’s a tiny/simple PR

2
1
loren avatar

fyi…
Hey @lorengordon :wave: I’m the community manager for the AWS provider. Someone from the team has assigned this to themselves for review, however, they’re currently focused on finishing up the last bit of work needed for the next major AWS provider release. Unfortunately I can’t provide an ETA on when this will be reviewed/merged due to the potential of shifting priorities. We prioritize by count of reactions and a few other things (more information on our prioritization guide if you’re interested).

How We Prioritize - Terraform AWS Provider - Contributor Guide

Contributor documentation and reference for the Terraform AWS Provider

Loren Gordon
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @loren - it was worth a shot. The problem I have with rankings is it’s not weighted. Some of the issues are level-10 effort, while this is level-0.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Jeremy G (Cloud Posse) @matt

loren avatar

yeah agree

loren avatar

maybe it’ll help to react to the linked issue also…? https://github.com/hashicorp/terraform/issues/30054

#30054 Add tags to s3 backend resources

Current Terraform Version

1.0.11

Use-cases

In my company we’d like to simplify Terraform backend management and use a single S3 bucket for all projects.
We already have a custom role per project, but to secure access by folder we’ve reached a limit in AWS bucket policies.
The best solution would be to tag the S3 objects to better handle access.

Proposal

Looking at the code for remote-state/s3, the tags would work a lot like the current acl work. It would be another option in the s3 backend config with small impact in the client.
https://github.com/hashicorp/terraform/blob/main/internal/backend/remote-state/s3/client.go#L175

loren avatar

I wonder if “participants” can be tracked as a metric in addition to reactions. A lot of folks will comment without posting a reaction

loren avatar

could possibly try to frame it as a security enhancement, at least for advanced users, with fine-grained ABAC policies…

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I was a lot more excited about this feature before I found out that you cannot use tags to control write-access. You still need a path-based restriction to prevent people who can write to some part of the S3 bucket from writing to (overwriting, deleting) another part of the bucket.

1
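
To make that concrete, a minimal sketch of the path-based statement you’d still need for writes (the bucket name and prefix are hypothetical):

# Tag-based (ABAC) conditions can gate reads, but write access to state
# still needs a path (prefix) restriction like this one
data "aws_iam_policy_document" "project_state_writer" {
  statement {
    sid       = "WriteOnlyOwnProjectPrefix"
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::example-tf-state/project-a/*"]
  }
}
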
Subutai avatar
Subutai

Hello Community, I’m looking for a TF module that caters for SQS, SNS and CW alerts. I am somewhat new to the tool. Any pointers greatly appreciated.

Alex Jurkiewicz avatar
Alex Jurkiewicz

What do you mean “caters for”?

Subutai avatar
Subutai

Sorry, I meant builds those 3 resources together.

Subutai avatar
Subutai

I feel like I am asking a dumb question though

Hao Wang avatar
Hao Wang

it is a good question. I came across issues before where combining multiple modules would cause some dependency issues, and I needed to run --target first to create some resources

Alex Jurkiewicz avatar
Alex Jurkiewicz

This Slack is run by the CloudPosse company, who release many open source modules. Searching the repository list at https://github.com/cloudposse/ is a good place to start. There are a couple of hits if I search for sqs and sns. One of them might be suitable.

Otherwise, we’re going to need more info. What are you trying to accomplish with SQS, SNS and CW alarms?

Cloud Posse

DevOps Accelerator for Startups Hire Us! https://slack.cloudposse.com/

Subutai avatar
Subutai

I was looking for a solution to build an SQS queue, with an CW Alert that monitors a DLQ configured to send to an SNS topic. I’ve tried creating my own module as follows:

resource "aws_sqs_queue" "sqs_queue" {
  name = var.sqs_queue_name
}

resource "aws_sqs_queue" "dlq_queue" {
  name = var.dlq_queue_name
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dlq_queue.arn
    maxReceiveCount     = 3
  })
}

resource "aws_cloudwatch_metric_alarm" "dlq_alarm" {
  alarm_name          = "dlq-message-received-alarm"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "ApproximateNumberOfMessagesVisible"
  namespace           = "AWS/SQS"
  period              = 60
  statistic           = "SampleCount"
  threshold           = var.cloudwatch_alarm_threshold

  alarm_description = "This alarm is triggered when the DLQ receives a message."

  alarm_actions = [
    var.sns_topic_arn
  ]
}

Not the cleanest of solutions.

Alex Jurkiewicz avatar
Alex Jurkiewicz

that looks clean to me! What don’t you like about it?

Ben Kero avatar
Ben Kero

Heya. I’m trying to use a Cloud Posse module with Terraform Cloud for the first time (terraform-aws-elasticache-redis). I keep running into a problem with things named after module.this.id. I’m looking into the this (null-label context) module and see it should be set to an ID, but it seems to be set to an empty string.

Is this something that’s known? I’m passing enabled = true to the module, which should pass it to the null context module as well.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the context must have the following attributes (all, or at least one)

namespace = "eg"
stage     = "test"
name      = "redis-test"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
enabled = true

region             = "us-east-2"
availability_zones = ["us-east-2a", "us-east-2b"]

namespace = "eg"
stage     = "test"
name      = "redis-test"

# Using a large instance vs a micro shaves 5-10 minutes off the run time of the test
instance_type = "cache.m6g.large"
cluster_size  = 1

family         = "redis6.x"
engine_version = "6.x"

at_rest_encryption_enabled = false
transit_encryption_enabled = true

zone_id = "Z3SO0TKDDQ0RGG"

cloudwatch_metric_alarms_enabled = false

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the final ID/name will be calculated as

{namespace}-{environment}-{stage}-{name}-{attributes}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to make all AWS resource names unique and consistent
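
A minimal sketch of how those context attributes become an ID, using the cloudposse/label/null module that backs the module.this pattern (values taken from the example above):

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"
  stage     = "test"
  name      = "redis-test"
}

# Empty parts (environment, attributes) are skipped, so this yields "eg-test-redis-test"
output "id" {
  value = module.label.id
}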

Ben Kero avatar
Ben Kero

Ah I see, thank you.

2023-05-24

Release notes from terraform avatar
Release notes from terraform
01:03:31 PM

v1.5.0-beta2 1.5.0-beta2 (May 24, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…

Release v1.5.0-beta2 · hashicorp/terraformattachment image

1.5.0-beta2 (May 24, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate…
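
For the curious, a minimal sketch of the new syntax (the resource and names here are hypothetical):

check "instance_type_is_expected" {
  assert {
    condition     = aws_instance.web.instance_type == "t3.micro"
    error_message = "Web instance is not using the expected t3.micro instance type."
  }
}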

2023-05-25

José avatar

Hello team. Any suggestion about how to do blue/green RDS environments with cloudposse modules? Let’s use the simpler one for this suggestion: https://github.com/cloudposse/terraform-aws-rds. Ideas? Thanks.

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We don’t support it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Open to supporting it though

mrwacky avatar
mrwacky

I just did some blue/green RDS upgrades recently; the API/Terraform support was lacking, so I had to do it mostly manually and update our Terraform after the fact to reflect the infra changes. Also, fun fact: you can’t use blue/green with RDS Proxy (yet).

1
Josh Pollara avatar
Josh Pollara

Also be careful with RDS blue/green on especially busy databases. It can have a lot of trouble acquiring a lock and eventually just time out after you initiate the update.

Josh Pollara avatar
Josh Pollara

This is my experience on Aurora MySQL at least

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Josh Pollara was this using the AWS managed “Amazon RDS Blue/Green Deployments”? Interesting insight…

Josh Pollara avatar
Josh Pollara

Yes it was

Josh Pollara avatar
Josh Pollara

Against a very busy database

Josh Pollara avatar
Josh Pollara

Additionally, it wasn’t a one time event. We tried multiple times without luck.

José avatar

Ok, so it’s a no-go for the time being. Yeah, it was just an idea; now I’ve got the facts for why not to. Thanks…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What we need is Amazon to offer https://neon.tech/ as a managed service

Neon — Serverless, Fault-Tolerant, Branchable Postgresattachment image

Postgres made for developers. Easy to Use, Scalable, Cost efficient solution for your next project.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(this is what vercel is now offering)

Josh Pollara avatar
Josh Pollara
PlanetScale: The world’s most advanced database platformattachment image

PlanetScale is the MySQL-compatible, serverless database platform.

Soren Jensen avatar
Soren Jensen

I’m using the terraform-aws-ec2-autoscale-group module at the moment, but in an attempt to do some cost saving I’d like to switch to spot instances. I see there is an option to set instance_market_options but can’t get the syntax right. The documentation says:

object({
    market_type = string
    spot_options = object({
      block_duration_minutes         = number
      instance_interruption_behavior = string
      max_price                      = number
      spot_instance_type             = string
      valid_until                    = string
    })
  })

I tried this with no luck:

  instance_market_options  = [
    {
      market_type = "spot",
      spot_options = [
        {
          spot_instance_type = "one-time"
        }  
      ]
    }
  ]

│ The given value is not suitable for module.autoscale_group.var.instance_market_options declared at .terraform/modules/autoscale_group/variables.tf:93,1-35: object required.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

variable "instance_market_options" is an object, but you use it as a list of objects

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
instance_market_options = {
  market_type = "spot"
  spot_options = {
    spot_instance_type = "one-time"
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

same with spot_options

2023-05-26

jwood avatar

I have a question regarding atmos stacks and cloudposse/terraform-aws-components/account. The email format in the example is something+%[email protected], but if the account name has hyphens in it, like foo-bar, you would have an account email of [email protected]. My question is, will this cause issues with email routing, and if so, is there a simple way to replace hyphens with dots without forking the account component?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please ask in refarch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(as this relates to our reference architecture)

Alex Jurkiewicz avatar
Alex Jurkiewicz

It won’t cause problems

2023-05-27

Mike Crowe avatar
Mike Crowe

Can somebody help me with cloudposse/ssm-tls-ssh-key-pair/aws please? I’m trying to create a keypair and store the output in SSM. This module creates the SSH keys in SSM properly, but I don’t see how to then use it:

• In terraform-aws-modules/key-pair/aws, if I use public_key = module.ssm_tls_ssh_key_pair.public_key, I get the error: InvalidKey.Format: Key is not in valid OpenSSH public key format (doing an ECDSA key)
• If I use cloudposse/key-pair/aws, it expects the key to be in a file (which defeats the whole purpose of using SSM, right?)

I’m sure this is obvious, but I’m missing it

Mike Crowe avatar
Mike Crowe

Hmm, OK, so ECDSA doesn’t output a public key format? If I change to RSA, it works as expected

Alex Jurkiewicz avatar
Alex Jurkiewicz
Create key pairs - Amazon Elastic Compute Cloud

You can use Amazon EC2 to create an RSA or ED25519 key pair, or you can use a third-party tool to create a key pair and then import the public key to Amazon EC2.
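
Per that page, EC2 key pairs only accept RSA or ED25519 keys, which would explain the ECDSA failure. A minimal sketch using the plain tls and aws providers (resource names are hypothetical):

# EC2 accepts RSA and ED25519 key pairs; ECDSA public keys are rejected
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "ssh" {
  key_name   = "example-ssh" # hypothetical name
  public_key = tls_private_key.ssh.public_key_openssh
}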

2023-05-29

Lele avatar

(probably?) outdated docs:

https://registry.terraform.io/modules/cloudposse/vpc/aws/latest

cidr_block         = "10.0.0.0/16"

not valid anymore… it’s ipv4_cidr_block now, and it’s a list of strings

Alex Jurkiewicz avatar
Alex Jurkiewicz

report it as an issue against the github repository. Or if you can’t do that, try #terraform-aws-modules

Lele avatar
#121 `cidr_block` is not a valid option anymore.. it's `ipv4_cidr_block` now

Describe the Bug

The option:

cidr_block         = "10.0.0.0/16"

is not valid anymore… it has been changed to ipv4_cidr_block, and it’s a list of strings, not one string.

Expected Behavior

cidr_block         = "10.0.0.0/16"

should work as per docs and examples

Steps to Reproduce

try to use latest main branch version of the module

Screenshots

No response

Environment

• Module: latest main branch commit
• TF version: tested with v1.4.5

Additional Context

No response

1

2023-05-30

Adrian Rodzik avatar
Adrian Rodzik

Hello, I have a Terraform issue to overcome. I am creating Azure key vaults and secrets inside them with a random password generator. It is working fine. The problem starts when I want to add another key vault to the list to be provisioned. When I run terraform again, all the passwords are regenerated because of the random function I’m using. Is there a possibility to persist the passwords for already created key vaults without regenerating them for existing ones?

key vaults are created with a for_each statement from the list.

key-vault.tf

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "kv" {
  for_each                    = toset(var.vm_names)
  name                        = format("%s", each.value)
  location                    = azurerm_resource_group.db_rg.location
  resource_group_name         = azurerm_resource_group.db_rg.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_retention_days  = var.soft_delete_retention_days
  purge_protection_enabled    = false

  sku_name = var.kv_sku_name

  # TODO: add group with permissions to manage key vaults
  access_policy {
    tenant_id    = data.azurerm_client_config.current.tenant_id
    object_id    = data.azurerm_client_config.current.object_id

    key_permissions = [
    ]

    secret_permissions = [
      "Get",
      "List",
      "Set",
    ]


    storage_permissions = [
    ]
  }
}

secrets.tf example resources from secrets

resource "random_password" "password" {
  count           = 11
  length          = 11
  special         = false
  min_upper       = 3
  min_numeric     = 3
  min_lower       = 3
}

resource "azurerm_key_vault_secret" "secret1" {
  depends_on   = [azurerm_key_vault.kv]
  for_each     = toset(var.vm_names)
  name         = "name1"
  value        = random_password.password[0].result
  key_vault_id = azurerm_key_vault.kv[each.value].id
}
resource "azurerm_key_vault_secret" "secret2" {
  depends_on   = [azurerm_key_vault.kv]
  for_each     = toset(var.vm_names)
  name         = "name2"
  value        = random_password.password[1].result
  key_vault_id = azurerm_key_vault.kv[each.value].id
}
Hao Wang avatar
Hao Wang

seems it is still a limitation of Terraform since 2017: https://github.com/hashicorp/terraform/issues/13417#issuecomment-297562588

Comment on #13417 Terraform recreates google_compute_instance resource with random even if random is not changed

Hey @zbikmarc! Sorry for the long delay on a response to this. It confounded me for a bit, and I kept chasing red herrings trying to find a root cause.

This is essentially the same bug as #13763, which itself is a manifestation of #3449. Basically, because you’re using element, Terraform sees that the list is changed, and assumes everything changes. @apparentlymart explained it much better than I can in #13763, so I’m just going to borrow his explanation:

The problem is that currently Terraform isn’t able to understand that element only refers to the count.index element of the list, and so it assumes that because the list contains something computed the result must itself be computed, causing Terraform to think that the name has updated for all of your addresses.

A workaround that worked in my reproduction of the issue is to run

terraform plan -out=tfplan -target=random_id.server-suffix

And make sure it only intends to add the new random_id. Then run terraform apply tfplan. Then a normal plan/apply cycle should only add the new instance, because Terraform already has the ID, so the list isn’t changing.

I’m going to go ahead and close this issue, but if you have questions or this doesn’t work for you, feel free to comment. Also, it looks like @apparentlymart has a fix for this opened as hashicorp/hil#51, so if that gets merged, this will become unnecessary in the future.

Eduardo Wohlers avatar
Eduardo Wohlers

Hm, do you have to run this multiple times??

One approach you can consider is appending a small hash after the name of the keys, e.g. key-name-j3hd1, and creating a separate workspace and repo (I’m assuming you are using Terraform Cloud) that you only run if necessary.

Even if anyone accidentally runs it, it will create a new set of keys and won’t change the ones you created before and are probably already using.

I think it solves for now but creates a technical debt with the automation gods.

1
Eduardo Wohlers avatar
Eduardo Wohlers

Also, you could use the azurecaf provider for naming conventions and the random provider for the small hash.

Fizz avatar

Could you add the plan where it wants to re-create stuff? As you are using a list, there could be an issue with lexical ordering. TF sorts lists alphabetically before looping through them, so you can’t rely on ordinal order. You could work around this by using maps, or adding a numeric prefix to each element in the list, so you can control the order TF loops through it and how it stores it in state.

Adrian Rodzik avatar
Adrian Rodzik

Thank you for the responses, I’ll check your suggestions.

More info about the setup: it is part of a bigger TF script. I’m trying to provision VMs on Azure that are supposed to host databases. Together with each VM (from the list) there is a key vault created with secrets for this database. Secrets should be created once.

I need to have the possibility to run it multiple times because there will be a need to remove one of the hosts or add another later on. I’m using for_each to not rely on count index. When I’m adding a new VM to the list or removing one, the secrets related to this machine are created/destroyed, but the rest, related to other machines that are supposed to be unchanged, are regenerated once again.

Probably using a different state for each key vault would do the trick, but it’s strictly connected to the VM. When I’m removing the VMs I would like the key vault and its secrets to be removed as well.

Adrian Rodzik avatar
Adrian Rodzik

It seems that this issue is fixed. Thanks. It was some dependency that was created because I was injecting those passwords into cloud-init scripts.

Adrian Rodzik avatar
Adrian Rodzik

Anyway, I’m facing another issue now.

variable "vm_names" {
  type    = list(string)
  default = [
   "vm-1-2ds31",
   "vm-2-412ae"
  ]
}

resource "random_password" "password" {
  count           = 2
  length          = 11
  special         = false
  min_upper       = 3
  min_numeric     = 3
  min_lower       = 3
}

resource "azurerm_key_vault_secret" "secret1" {
  depends_on   = [azurerm_key_vault.kv]
  for_each     = toset(var.vm_names)
  name         = "name1"
  value        = random_password.password[0].result
  key_vault_id = azurerm_key_vault.kv[each.value].id
}
resource "azurerm_key_vault_secret" "secret2" {
  depends_on   = [azurerm_key_vault.kv]
  for_each     = toset(var.vm_names)
  name         = "name2"
  value        = random_password.password[1].result
  key_vault_id = azurerm_key_vault.kv[each.value].id
}

I need to create a set of two different random passwords for each VM from the list. Right now with this setup it is creating only 2 passwords and propagating them to all the key vaults that are created

key vaults are created like this

resource "azurerm_key_vault" "kv" {
  for_each                    = toset(var.vm_names)
  name                        = format("%s", each.value)
.
.
.

if I add the for_each to random_password then I cannot specify how many passwords need to be created.

The solution may be another random_password resource, to have it in a different object, but I cannot do this as it should be created dynamically in relation to the var.vm_names list
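
One hedged way to do that, assuming the azurerm_key_vault.kv resource stays as above: key everything by (vm, secret) pairs built with setproduct, so each VM gets its own independent passwords and removing a VM only destroys its own entries.

locals {
  # One entry per (vm, secret) pair, e.g. "vm-1-2ds31.name1"
  vm_secrets = {
    for pair in setproduct(var.vm_names, ["name1", "name2"]) :
    "${pair[0]}.${pair[1]}" => { vm = pair[0], secret_name = pair[1] }
  }
}

resource "random_password" "password" {
  for_each    = local.vm_secrets
  length      = 11
  special     = false
  min_upper   = 3
  min_numeric = 3
  min_lower   = 3
}

resource "azurerm_key_vault_secret" "secret" {
  for_each     = local.vm_secrets
  name         = each.value.secret_name
  value        = random_password.password[each.key].result
  key_vault_id = azurerm_key_vault.kv[each.value.vm].id
}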

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Adrian Rodzik, sorry for the late reply here. Was the latest issue fixed?

2023-05-31

Release notes from terraform avatar
Release notes from terraform
09:03:32 PM

v1.5.0-rc1 1.5.0-rc1 (May 31, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…

Release v1.5.0-rc1 · hashicorp/terraformattachment image

1.5.0-rc1 (May 31, 2023) NEW FEATURES:

check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate a…

    keyboard_arrow_up