#terraform (2021-08)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-08-01

Alexander Scherer avatar
Alexander Scherer

Good evening everyone. I’m new here and hopefully you don’t mind a really noobish question, but I’m trying to add my lambda functions dynamically to the api gateway integration like so:

// need to do it dynamically somehow but it won't let me assign the key name with variable
  dynamic "integrations" {
    for_each = module.lambdas
    content {
      "ANY /hello-world" = {
        lambda_arn             = module.lambdas["hello-world"].lambda_function_arn
        payload_format_version = "2.0"
        timeout_milliseconds   = 12000
      }
    }
  }

// this works
//  integrations = {
//    "ANY /hello-world" = {
//      lambda_arn             = module.lambdas["hello-world"].lambda_function_arn
//      payload_format_version = "2.0"
//      timeout_milliseconds   = 12000
//    }
//  }

any help on how to achieve this in tf would be awesome!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  dynamic "integrations" {
    for_each = module.lambdas
    content {
      (integrations.value.lambda_function_name) = {
        lambda_arn             = integrations.value.lambda_function_arn
        payload_format_version = "2.0"
        timeout_milliseconds   = 12000
      }
    }
  }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
The keys in a map must be strings; they can be left unquoted if they are a valid identifier, but must be quoted otherwise. You can use a non-literal string expression as a key by wrapping it in parentheses, like (var.business_unit_tag_name) = "SRE".
Alexander Scherer avatar
Alexander Scherer

damn you are good sir! thank you! I’ve come up with a workaround but this is much cleaner. My workaround was:

  integrations = {
    for lambda in module.lambdas : lambda.function_name_iterator => {
      lambda_arn             = lambda.lambda_function_arn
      payload_format_version = "2.0"
      timeout_milliseconds   = 12000
    }
  }

I had to export a “fake” variable from my module where I added the “ANY /” prefix in front of the function name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add/format any prefix/suffix/variable to (integrations.value.lambda_function_name), it’s a standard terraform interpolation
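Putting that together with the workaround above, a minimal sketch (assuming module.lambdas exposes the lambda_function_name and lambda_function_arn outputs used earlier in this thread):

  integrations = {
    for key, lambda in module.lambdas :
    "ANY /${lambda.lambda_function_name}" => {
      lambda_arn             = lambda.lambda_function_arn
      payload_format_version = "2.0"
      timeout_milliseconds   = 12000
    }
  }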

Brad McCoy avatar
Brad McCoy

Hi all, we did a webinar on intro to IaC and Terraform over the weekend if anyone is interested: https://www.youtube.com/watch?v=2keKHXtvY5c

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is there a way to perform a deep merge like following:

locals {
  a = { foo = { age = 12, items = [1] } }
  b = { foo = { age = 12, items = [2] }, bar = { age = 4, items = [3] } }
  c = { bar = { age = 4, items = [] } }

  # desired output
  out = { foo = { age = 12, items = [1,2] }, bar = { age = 4, items = [3] } }
}

I have many maps of objects. The objects with same key are identical except for one list field. I want to merge all items with the same key, except concat the list field.
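A minimal sketch of one way to do this with for expressions, assuming (per the description above) every object carries an items list and the remaining fields agree across maps:

locals {
  maps = [local.a, local.b, local.c]

  out = {
    for key in distinct(flatten([for m in local.maps : keys(m)])) :
    key => merge(
      # non-list fields come from the first map that has this key
      [for m in local.maps : m[key] if contains(keys(m), key)][0],
      {
        # the list field is concatenated across all maps that have this key
        items = flatten([for m in local.maps : m[key].items if contains(keys(m), key)])
      }
    )
  }
}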

2021-08-02

bober2000 avatar
bober2000

Hi all. Have a question about working with provider aliases. In my case I have several accounts in AWS:

provider "aws" {
  region = var.region
}

provider "aws" {
  region = "eu-west-2"
  alias = "aws.eu-west-2"
}

provider "aws" {
  region = "eu-central-1"
  alias = "aws.eu-central-1"
}

I want to create the same shared WAF rules in these accounts. Could I use for_each somehow to call the module creating these rules, iterating over different provider aliases?

Alex S avatar

I don’t believe you can currently loop with a for_each over provider aliases, due to providers being a meta-argument, but you can consume the module you’re talking of like so:

module "waf_rules_euc1" {
  source = "./modules/waf-rules"

  providers = {
    aws = aws.eu-central-1
  }

  myvar = "value"
}
  • repeat for regions
Alex S avatar

Hi all, I’m using the terraform-aws-ecs-alb-service-task module and running into a bit of an issue; I’ve set deployment_controller_type to CODE_DEPLOY and am using the blue/green deployment method - when CodeDeploy diligently switches to the green autoscaling group, the next run of the module deletes/recreates the ECS service because it’s trying to put back the blue target group (or both)… Has anyone tried to run this setup? I can make a PR to ignore changes to load balancers, but if you look at the module it’s going to become an immediate nightmare to support the 3 different ignore combinations. Any advice greatly appreciated.

Kenan Virtucio avatar
Kenan Virtucio

Hi, just asking again https://github.com/cloudposse/terraform-aws-cloudfront-cdn for this module. Is there a plan in the roadmap for users to be able to modify default_cache_behavior ?


jose.amengual avatar
jose.amengual

anyone can create a PR to add features/flexibility


jose.amengual avatar
jose.amengual

there is no roadmap for features in community modules

jose.amengual avatar
jose.amengual

although the cache behavior already has variables and a dynamic block, so it seems to be pretty flexible

Kenan Virtucio avatar
Kenan Virtucio

Oh, alright. Thanks @jose.amengual

2021-08-03

Grubhold avatar
Grubhold

Hello folks, I’m facing this issue when trying to deploy https://github.com/cloudposse/terraform-aws-elasticsearch along with all the other modules that this one requires. I’ll link the files I have in the thread

│ Error: Error creating ElasticSearch domain: ValidationException: You must specify exactly two subnets because you've set zone count to two.
│
│   with module.elasticsearch.aws_elasticsearch_domain.default[0],
│   on modules/elasticsearch/main.tf line 100, in resource "aws_elasticsearch_domain" "default":
│  100: resource "aws_elasticsearch_domain" "default" {
│
╵
╷
│ Error: Error creating Security Group: InvalidGroup.Duplicate: The security group 'elastic-test-es-test' already exists for VPC 'vpc-0fbda4f1d6105a68c'
│ 	status code: 400, request id: 6ad8d766-7954-49d5-b257-8b2213d1f8ec
│
│   with module.vpc.module.security_group.aws_security_group.default[0],
│   on modules/sg-cp/main.tf line 28, in resource "aws_security_group" "default":
│   28: resource "aws_security_group" "default" {
Grubhold avatar
Grubhold

They’re basically CloudPosse’s modules.

Matt Gowie avatar
Matt Gowie

@Grubhold the error you’re getting spells it out — When ES is deployed it needs a certain number of subnets to account for the number of nodes that it has.

So for —

  dynamic "vpc_options" {
    for_each = var.vpc_enabled ? [true] : []
    content {
      security_group_ids = [join("", aws_security_group.default.*.id)]
      subnet_ids         = var.subnet_ids
    }
  }

Your var.subnet_ids needs to include more subnet IDs.

Grubhold avatar
Grubhold

Thank you so much @Matt Gowie for pointing that out, funny that I didn’t notice this for a whole day

Matt Gowie avatar
Matt Gowie

No problem — Glad that worked out for ya.

Ben avatar

Hello folks, I have a question regarding terraform-aws-eks-cluster https://github.com/cloudposse/terraform-aws-eks-cluster/blame/master/README.md#L100 The readme contains two statements that seem contradicting but maybe I’m just not getting it. It states:
The KUBECONFIG file is the most reliable [method], […]
and a few lines below:
At the moment, the exec option appears to be the most reliable method, […]


Matt Gowie avatar
Matt Gowie

I believe @Jeremy G (Cloud Posse) recently updated that documentation… There was a big re-work he just did to support the 2.x k8s provider, I’m pretty sure. Easy for something to slip through the cracks.


Matt Gowie avatar
Matt Gowie

You may look for his PR (which should be recent) and see what changed to try and figure out the answer to this one.

Matt Gowie avatar
Matt Gowie

PR to correct would be much appreciated!

Ben avatar
Ben
04:42:45 PM

I can only guess which one is correct (the exec method), but as both statements were added in the same PR, it’s not obvious which one is more accurate. I’m also a bit hesitant to start working on a PR because there seems to be some tooling involved in the README file creation.

Matt Gowie avatar
Matt Gowie

Ah that is indeed confusing then…

Matt Gowie avatar
Matt Gowie

Since Jeremy isn’t around… @Andriy Knysh (Cloud Posse) do you know which of the two options Jeremy meant as the recommended way forward?

Matt Gowie avatar
Matt Gowie

I would imagine it’s exec, but now I’m confused considering they’re both mentioned in the same PR.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Ben @Matt Gowie The README needs revision, as you note. Please read the updated Release Notes for v0.42.0. Fixing the README will take a PR, but I was able to update the Release Notes easily.

Release v0.42.0 Fix EKS upgrades, enable choice of service role · cloudposse/terraform-aws-eks-cluster

This release resumes compatibility with v0.39.0 and adds new features: Releases and builds on PR #116 by providing the create_eks_service_role option that allows you to disable the automatic creat…

Matt Gowie avatar
Matt Gowie

Good stuff — Thanks Jeremy.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The short answer is “use data_auth if you can, kubeconfig for importing things into Terraform state, and exec_auth if it works and data_auth doesn’t.”

Ben avatar

Great, thank you!

2021-08-04

Grubhold avatar
Grubhold

Hello folks, is there a workaround for the current security group module where, when using it with another module such as Elasticsearch, it complains that the xyz security group already exists? This PR seems to address this but it’s not yet merged.

│ Error: Error creating Security Group: InvalidGroup.Duplicate: The security group 'logger-test-es-test' already exists for VPC 'vpc-0e868046c92d7bb2a'
│ 	status code: 400, request id: be88d8b8-1ed0-43ac-8246-e37265782098
│
│   with module.vpc.module.security_group.aws_security_group.default[0],
│   on modules/sg-cp/main.tf line 28, in resource "aws_security_group" "default":
│   28: resource "aws_security_group" "default" {
Overhaul Module for Consistency by Nuru · Pull Request #17 · cloudposse/terraform-aws-security-group

what Input use_name_prefix replaced with create_before_destroy. Previously, create_before_destroy was always set to true but of course that fails if you are not using a name prefix, because the na…

Grubhold avatar
Grubhold
12:21:27 PM

My security group module taken from CloudPosse’s


Grubhold avatar
Grubhold
12:56:59 PM

After setting var.security_group_enabled to false, that error went away.

Grubhold avatar
Grubhold
12:57:42 PM

But this time it gave me this error

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use attributes, and add different attributes for the Elasticsearch and other modules you use with it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the issue is that all modules, when provided with the same context (e.g. namespace, environment, stage, name), generate the same names for the SGs that they create

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

adding attributes will make all generated IDs different
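A minimal sketch of the idea (module sources and values here are illustrative, assuming the standard Cloud Posse context inputs):

module "vpc" {
  source = "cloudposse/vpc/aws"

  namespace  = "eg"
  stage      = "test"
  name       = "es"
  attributes = ["vpc"] # generated names become eg-test-es-vpc
}

module "elasticsearch" {
  source = "cloudposse/elasticsearch/aws"

  namespace  = "eg"
  stage      = "test"
  name       = "es"
  attributes = ["search"] # same context, but distinct generated names
}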

Grubhold avatar
Grubhold

Oh wow, indeed all contexts in all module folders are the same. And I just found out that var.attributes is left without a reference in variables.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

many modules have flags to create the SG or not; you can create your own and provide it to the modules

Grubhold avatar
Grubhold

I wanted to use CP’s Security Group module as well for everything. But I think for the time being I can just use my own simple security group configs instead of CP’s, until I fully understand how it functions.

Grubhold avatar
Grubhold

And provide it to the modules as you say

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse) nothing really to do with our SG module, more, as Andriy said, about how we handle naming of SGs.

1
Grubhold avatar
Grubhold

If I understood correctly you’re suggesting to use my own custom SGs but maybe follow the naming conventions you have set up in your SGs?

Grubhold avatar
Grubhold

Also, thank you so much for sparing time to answer my rather newbie questions. I’m loving your modules and they have been teaching me a lot!

Gerald avatar

Hi folks, I want to include the snippet section below in my ECS task definition for the Datadog container, but the problem is the volume argument isn’t supported. Can anyone give me advice? Thank you

  mount_points                  = [
    {
      containerPath = "/var/run/docker.sock"
      sourceVolume = "docker_sock"
      readOnly = true
    },
    {
      containerPath = "/host/sys/fs/cgroup"
      sourceVolume = "cgroup"
      readOnly = true
    },
    {
      containerPath = "/host/proc"
      sourceVolume = "proc"
      readOnly = true
    }
  ]

  volumes                       =  [
    {
      host_path = "/var/run/docker.sock"
      name      = "docker_sock"
      docker_volume_configuration = []
    },
    {
      host_path = "/proc/"
      name      = "proc"
      docker_volume_configuration = []
    },
    {
      host_path = "/sys/fs/cgroup/"
      name      = "cgroup"
      docker_volume_configuration = []
    }
  ]

BTW: I’m using this module https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/main.tf


Matt Gowie avatar
Matt Gowie

@Gerald — If you’ve got the code then put it up on PR and mention it in #pr-reviews. We’d be happy to take a look at it.


Gerald avatar

Thanks Matt will do it

Gerald avatar

Is it possible we can add this block here?

  dynamic "volume" {
    for_each = var.volumes
    content {
      name      = volume.value.name
      host_path = lookup(volume.value, "host_path", null)

      dynamic "docker_volume_configuration" {
        for_each = lookup(volume.value, "docker_volume_configuration", [])
        content {
          autoprovision = lookup(docker_volume_configuration.value, "autoprovision", null)
          driver        = lookup(docker_volume_configuration.value, "driver", null)
          driver_opts   = lookup(docker_volume_configuration.value, "driver_opts", null)
          labels        = lookup(docker_volume_configuration.value, "labels", null)
          scope         = lookup(docker_volume_configuration.value, "scope", null)
        }
      }
    }
  }
Release notes from terraform avatar
Release notes from terraform
09:33:41 PM

v1.0.4 1.0.4 (August 04, 2021) BUG FIXES: backend/consul: Fix a bug where the state value may be too large for consul to accept (#28838) cli: Fixed a crashing bug with some edge-cases when reporting syntax errors that happen to be reported at the position of a newline. (#29048)…

Fix handling large states in the Consul backend by remilapeyre · Pull Request #28838 · hashicorp/terraform

The logic in e680211 to determine whether a given state is small enough to fit in a single KV entry in Consul is buggy: because we are using the Transaction API we are base64 encoding it so the pay…

command/views/json: Never generate invalid diagnostic snippet offsets by apparentlymart · Pull Request #29048 · hashicorp/terraform

Because our snippet generator is trying to select whole lines to include in the snippet, it has some edge cases for odd situations where the relevant source range starts or ends directly at a newli…

Unwoven avatar
Unwoven

Hello, I’m trying to create a proper policy for the cluster-autoscaler service account (AWS EKS) and I need the ASG ARNs. Any ideas on how I can get them from the eks_node_group module? I can only get the ASG name from the node group’s “resources” attribute, which I can use to get the ARN from the aws_autoscaling_group data source. Unfortunately this will be known only after apply. Do the more experienced folks have a decent workaround or a better way to do this?
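One common workaround is to build the ARN pattern yourself in the IAM policy, since only the ASG’s UUID segment is unknown at plan time; a minimal sketch (the node-group name pattern is hypothetical):

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    actions = [
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
    ]
    # ASG ARNs embed a UUID that is only known after apply; wildcard it and
    # match on the autoScalingGroupName suffix, which is known at plan time
    resources = [
      "arn:aws:autoscaling:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:autoScalingGroup:*:autoScalingGroupName/eks-my-node-group-*"
    ]
  }
}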

Brent Garber avatar
Brent Garber

Trying to use cloudposse/terraform-aws-ec2-instance-group, and it may be lack of sleep, but I can’t figure out how to get it to not generate spurious SSH keys when I’m using an existing key.

If I pass in ssh_key_pair, I’ll want to specify generate_ssh_key_pair as false, but if I do that the ssh_key_pair module goes “Oh, well we’re using an existing file then”, and then plan dies with

│ Error: Invalid function argument
│
│   on .terraform\modules\worker_tenants.worker_tenant.ssh_key_pair\main.tf line 19, in resource "aws_key_pair" "imported":
│   19:   public_key = file(local.public_key_filename)
│     ├────────────────
│     │ local.public_key_filename is "C:/projects/terraform-network-worker/app-dev-worker-worker-arrow.pub"
│
│ Invalid value for "path" parameter: no file exists at
Matt Gowie avatar
Matt Gowie

Sounds like a potential bug in the module? PR to fix would be welcome!

Brent Garber avatar
Brent Garber

Should be there already, horribly done. But there

Brent Garber avatar
Brent Garber
Disable ssh_key_pair generation when an existing keypair is being passed in by BGarber42 · Pull Request #32 · cloudposse/terraform-aws-ec2-instance-group

what If you're passing in a keypair name, don't generate one, and don't try to load a local file that doesn't exist why You want to use an existing keypair You don't want plan to …

Brent Garber avatar
Brent Garber

I can work around the error by passing true to generate_ssh_key_pair, but then I get an extra keypair generated for every instance group.

2021-08-05

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there an easy way to get TF to ignore data resource changes (see below)?

data "tls_certificate" "eks_oidc_cert" {
  url = aws_eks_cluster.eks.identity.0.oidc.0.issuer
}
~ id           = "2021-08-05 16:09:43.671261128 +0000 UTC" -> "2021-08-05 16:09:58.401961984 +0000 UTC"
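For context, a sketch of the typical consumption, assuming the certificate feeds an OIDC provider: the data source’s id here is just a read timestamp, so its churn alone does not modify resources that only reference the fingerprint.

resource "aws_iam_openid_connect_provider" "eks" {
  url            = aws_eks_cluster.eks.identity.0.oidc.0.issuer
  client_id_list = ["sts.amazonaws.com"]

  # only the fingerprint flows into the resource; the data source's
  # timestamp-style id changing on each refresh is cosmetic
  thumbprint_list = [data.tls_certificate.eks_oidc_cert.certificates.0.sha1_fingerprint]
}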
Rhys Davies avatar
Rhys Davies

Hey folks, wondering if anyone has what they consider a good, or definitive, resource on how to do a major version upgrade of an RDS database with Terraform?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Do it entirely by hand and then remediate your TF state after. I’m mostly serious. Terraform is not good at changing state in a sensible way

venkata.mutyala avatar
venkata.mutyala

^ that’s how I have usually done DB upgrades

venkata.mutyala avatar
venkata.mutyala

There is a flag to do an immediate apply but if the upgrade takes a while then I imagine the odds of Terraform timing out are high

loren avatar

It’s not in terraform, but I really liked this article… https://engineering.theblueground.com/blog/zero-downtime-postgres-migration-done-right/


Zach avatar

ugh I could have really used that a month ago

Eric Alford avatar
Eric Alford

Hoping someone can give an update on this issue with the terraform-aws-ses module. Any idea when we can get a fix in? I think ideally the resource would be configurable by an input variable, similar to how iam_permissions is configurable. https://github.com/cloudposse/terraform-aws-ses/issues/40


Matt Gowie avatar
Matt Gowie

@Eric Alford I’m assuming you just need the ability to associate additional domains with that SES sender? That seems like an easy one to wire in a new variable for. You can give it a shot and put up a PR and we’ll be happy to take a look at it.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just post your PR in #pr-reviews and it will get more eyes

Dustin Lee avatar
Dustin Lee

Just curious, I am using https://github.com/cloudposse/terraform-aws-alb and https://github.com/cloudposse/terraform-aws-alb-ingress; how do I set the instance targets for the default target group?


Zach avatar

Depends on whether you’re attaching EC2 (ASG), ECS, or lambdas


Zach avatar

For ECS it’s done in the ECS service definition itself, in the load_balancer block: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service

Dustin Lee avatar
Dustin Lee

Just with EC2
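For plain EC2 instances the usual route is a target group attachment; a minimal sketch (assuming the ALB module’s default_target_group_arn output and a hypothetical aws_instance.web):

resource "aws_lb_target_group_attachment" "web" {
  target_group_arn = module.alb.default_target_group_arn
  target_id        = aws_instance.web.id
  port             = 80
}

For instances managed by an ASG, attaching the target group to the ASG (aws_autoscaling_attachment) is the usual alternative.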

2021-08-06

Mazin Ahmed avatar
Mazin Ahmed

Hi all, I’m speaking today at the DEFCON Cloud Village about attacking Terraform environments. It will cover different attacks I have seen over the years against TF environments, and how engineers can make use of them in securing their TF environments.

Please join me today at 12:05 PDT on the Cloud Village livestream!

Mazin Ahmed avatar
Mazin Ahmed

Watch how I got RCE at HashiCorp Infrastructure today at #DEFCON. I’m dropping the PoC and reproducible exploit after the talk! https://pbs.twimg.com/media/E8HXnjnXMAElkdQ.png

Alex Jurkiewicz avatar
Alex Jurkiewicz

Do you have a link to the stream?

Mazin Ahmed avatar
Mazin Ahmed

@Alex Jurkiewicz it was livestreamed yesterday, here is the recorded talk: https://www.youtube.com/watch?v=3ODhxYY9-9U

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

whoot! will share on office hours


2021-08-07

mfridh avatar

https://github.com/cloudposse/terraform-aws-rds-cluster

Use case:
• provisioned
• aurora-mysql
• upgrading an example engine_version: 5.7.mysql_aurora.2.09.1 => 5.7.mysql_aurora.2.09.2

Instead of just bumping the cluster resource and letting RDS handle the instances, it deletes and recreates each aws_rds_cluster_instance as well.

Thoughts?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not really much to go on here… does the plan indicate what is causing it to recreate? My guess is it’s not the version change.


mfridh avatar

It is the version change. If I upgrade the cluster from the AWS Console, allow it to propagate to the individual instances, and then change the passed-in version in the Terraform stack, it detects the remote changes and is happy.

mfridh avatar

I can provide more details when I have time to revisit.

2021-08-09

Piece of Cake avatar
Piece of Cake

It’d be awesome if the aws_dynamic_subnet module had support for a specific number of private & public subnets

2021-08-10

Julian Gog avatar
Julian Gog

cross-post BUG: I was about to create a bug ticket and saw the link to your Slack, so I want to make sure it’s a bug before opening a ticket. It’s about terraform-aws-s3-bucket: if you specify the privileged_principal_arns option, it will never create a bucket policy. Is this wanted behaviour, given that the aws_iam_policy_document is created? My guess is that privileged_principal_arns is missing in the count condition here:

resource "aws_s3_bucket_policy" "default" {
  count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || var.policy != "") ? 1 : 0
  bucket     = join("", aws_s3_bucket.default.*.id)
  policy     = join("", data.aws_iam_policy_document.aggregated_policy.*.json)
  depends_on = [aws_s3_bucket_public_access_block.default]
}

ok I am almost 100% sure it’s a bug, so here are the issue and the PR:
Bug-Issue: <https://github.com/cloudposse/terraform-aws-s3-bucket/issues/100>
PR: <https://github.com/cloudposse/terraform-aws-s3-bucket/pull/101>
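For reference, the corrected condition from that PR (as quoted later in this archive) adds the missing length check:

count = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || var.policy != "") ? 1 : 0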

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

looks like @RB already approved and merged

Julian Gog avatar
Julian Gog

As a workaround, I thought I could specify a dedicated policy like this:

  policy = jsonencode({
    "Version" = "2012-10-17",
    "Id"      = "MYBUCKETPOLICY",
    "Statement" = [
      {
        "Sid" = "${var.bucket_name}-bucket_policy",
        "Effect" = "Allow",
        "Action" = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObject",
          "s3:ListBucket",
          "s3:ListBucketMultipartUploads",
          "s3:GetBucketLocation",
          "s3:AbortMultipartUpload"
        ],
        "Resource" = [
          "arn:aws:s3:::${var.bucket_name}",
          "arn:aws:s3:::${var.bucket_name}/*"
        ],
        "Principal" = {
          "AWS" : [var.privileged_principal_arn]
        }
      },
    ]
  })

but this results in this error:

Error: Invalid count argument

  on .terraform/modules/service.s3-bucket.s3_bucket/main.tf line 367, in resource "aws_s3_bucket_policy" "default":
 367:   count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || var.policy != "") ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

does anyone have a clue why?

Dustin Lee avatar
Dustin Lee

Hello, has anybody had issues with this before?

aws_cloudwatch_event_rule.this: Creating...
╷
│ Error: Creating CloudWatch Events Rule failed: InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object
Dustin Lee avatar
Dustin Lee

I have tried umpteen ways of getting the thing to work with jsonencode, tomap, etc.

Dustin Lee avatar
Dustin Lee
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_cloudwatch_event_rule.this will be created
  + resource "aws_cloudwatch_event_rule" "this" {
      + arn            = (known after apply)
      + description    = "This is event rule description."
      + event_bus_name = "default"
      + event_pattern  = "\"{\\\"detail\\\":{\\\"eventTypeCategory\\\":[\\\"issue\\\"],\\\"service\\\":[\\\"EC2\\\"]},\\\"detail-type\\\":[\\\"AWS Health Event\\\"],\\\"source\\\":[\\\"aws.health\\\"]}\""
      + id             = (known after apply)
      + is_enabled     = true
Dustin Lee avatar
Dustin Lee

The event pattern is what’s getting me

Dustin Lee avatar
Dustin Lee

sussed it out - issue in the module itself
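For anyone hitting the same error, a minimal sketch of a rule that encodes the pattern exactly once (resource and rule names are hypothetical); jsonencode-ing a string that is already JSON produces the doubly-escaped event_pattern shown in the plan above, which CloudWatch rejects with “Filter is not an object”:

resource "aws_cloudwatch_event_rule" "health" {
  name        = "aws-health-ec2-issues"
  description = "AWS Health events for EC2"

  # encode a plain object once; do not jsonencode an already-encoded string
  event_pattern = jsonencode({
    source        = ["aws.health"]
    "detail-type" = ["AWS Health Event"]
    detail = {
      service           = ["EC2"]
      eventTypeCategory = ["issue"]
    }
  })
}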

Mr.Devops avatar
Mr.Devops

Any tools to help simplify state migration in a mono repo without destroying your current infrastructure?

Alex Jurkiewicz avatar
Alex Jurkiewicz

what does monorepo have to do with the state migration?

Someone linked https://github.com/minamijoyo/tfmigrate recently, which seems good for more complex migrations



2021-08-11

Grubhold avatar
Grubhold

Hi folks, I’ve been working with https://github.com/cloudposse/terraform-aws-elasticsearch and its dependencies; it’s working great and deploying successfully. I have two questions regarding this that I need your assistance with:

  1. How is the CloudWatch subscription filter managed by this resource? I believe for ES we need a Lambda function for that; does CloudPosse have a module for this that I missed?
  2. How is access to Kibana managed by this module? Looking at the config it seems that it depends on the VPC and access through a Route53 resource; if so, how do I access the Kibana dashboard?
OZZZY avatar

hi

Release notes from terraform avatar
Release notes from terraform
03:23:44 PM

v1.1.0-alpha20210811 1.1.0 (Unreleased) NEW FEATURES: cli: terraform add generates resource configuration templates (#28874) config: a new type() function, only available in terraform console (#28501)…

commands: `terraform add` by mildwonkey · Pull Request #28874 · hashicorp/terraform

terraform add generates resource configuration templates which can be filled out and used to create resources. The template is output in stdout unless the -out flag is used. By default, only requir…

lang/funcs: add (console-only) TypeFunction by mildwonkey · Pull Request #28501 · hashicorp/terraform

The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it's handy to be abl…

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

If you use a Mac with M1, this is really cool: https://github.com/kreuzwerker/m1-terraform-provider-helper (I do, and have run into what the author described there)


managedkaos avatar
managedkaos

Hello team!

TLDR Question: Do you have tips/suggestions/pointers/resources on creating plugins for tflint?

Details: I have a group of 15-20 modules that I’d like to be coded consistently, specifically:
• All modules have inputs for name, environment, and tags
• All variables have a description, and optionally a type if applicable
• All outputs have a description
• All AWS resources that can be tagged have their tags attribute assigned like tags = merge(var.tags, local.tags), and optionally a resource-level override like tags = merge(var.tags, local.tags, {RESOURCE = OVERRIDE})

So far I have Python scripts doing most of these, but as I went deeper into the weeds I thought a tool like tflint might be better suited. So before I go down that route, I’m looking for best practices and tips from those that have been there and done that. Thanks!
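As a starting point, a couple of the checks above map onto existing rules in tflint’s terraform ruleset, so only the tagging conventions would need a custom plugin; a minimal .tflint.hcl sketch (rule names are real, but verify them against your tflint version):

rule "terraform_documented_variables" {
  enabled = true
}

rule "terraform_documented_outputs" {
  enabled = true
}

rule "terraform_typed_variables" {
  enabled = true
}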

Bill Davidson avatar
Bill Davidson

Hello, all.

I am trying to use the CloudPosse ec2-autoscale-group module and every time I do a terraform apply, it just repeatedly generates EC2s that are automatically terminated. Any thoughts on where to begin?

Thanks!

Bill Davidson avatar
Bill Davidson

Additional note: the Launch template it generates seems ok when I specify a subnet for it. I am passing in a list of 3 subnets for my 3 AZs.

Tony Bower avatar
Tony Bower

Hello! First, thanks for the cloudposse modules, they’ve been very helpful. I’ve got an issue trying to implement two instances of cloudposse/terraform-aws-datadog-integration

Details in thread.

I’m not sure if this is the right place to ask, but I figured I’d try.

Tony Bower avatar
Tony Bower

The modules …

module "datadog_integration_alpha_prod" {
  source  = "cloudposse/datadog-integration/aws"
  version = "0.13.0"

  namespace    = "alpha"
  stage        = "prod"
  name         = "datadog"
  integrations = ["all"]

  forwarder_rds_enabled        = true

  dd_api_key_source = {
    identifier = aws_ssm_parameter.dd_api_key_secret.name
    resource = "ssm"
  }

  providers = {
    aws = aws.alpha-prod
  }
}

module "datadog_integration_alpha_stg" {
  source  = "cloudposse/datadog-integration/aws"
  version = "0.13.0"

  namespace    = "alpha"
  stage        = "stg"
  name         = "datadog"
  integrations = ["all"]

  forwarder_rds_enabled        = true

  dd_api_key_source = {
    identifier = aws_ssm_parameter.dd_api_key_secret.name
    resource = "ssm"
  }

  providers = {
    aws = aws.alpha-stg
  }
}

The errors…

 Error: failed to execute "git": fatal: not a git repository (or any of the parent directories): .git
│ 
│ 
│   with module.datadog_integration_alpha_stg.module.forwarder_rds[0].data.external.git[0],
│   on .terraform/modules/datadog_integration_alpha_stg.forwarder_rds/main.tf line 7, in data "external" "git":
│    7: data "external" "git" {
│ 
jose.amengual avatar
jose.amengual

Tony, please set forwarder_rds_enabled = false

jose.amengual avatar
jose.amengual

and try again

Tony Bower avatar
Tony Bower

These both succeed if we set the forwarder_rds_enabled to false, but the whole reason we’re using this is that we want to enable enhanced RDS info being forwarded to DataDog

jose.amengual avatar
jose.amengual

we are moving all that code out of it and we are creating a module just for the DD forwarder

jose.amengual avatar
jose.amengual

I’m pretty sure it will be released today

jose.amengual avatar
jose.amengual

although the error you are getting is related to git not being able to pull the datadog repo

jose.amengual avatar
jose.amengual

you should be able to download this : <https://raw.githubusercontent.com/DataDog/datadog-serverless-functions/master/aws/rds_enhanced_monitoring/lambda_function.py?ref=3.34.0>

Tony Bower avatar
Tony Bower


although the error you are getting is related to git not able to pull the datadog repo
Yeah, it appears so. This is running from TF cloud though, and only an issue with this module. Other changes in our workspace are working fine, as does this module with the enhanced RDS functionality turned off.

I’ll look for that new module. Do you have a link to the repo?

jose.amengual avatar
jose.amengual

the repo is private until we release it

jose.amengual avatar
jose.amengual

I will send you the link when it’s ready

jose.amengual avatar
jose.amengual

but the functionality for how we download the file has not changed

jose.amengual avatar
jose.amengual

so I imagine you will have the same issue

Tony Bower avatar
Tony Bower

Am I missing anything in my module block?

Could the providers blocks be causing an issue?

jose.amengual avatar
jose.amengual

these are the providers needed

terraform {
  required_version = ">= 0.13"

  required_providers {
    # Update these to reflect the actual requirements of your module
    local = {
      source  = "hashicorp/local"
      version = ">= 1.2"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.2"
    }
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = ">= 2.2.0"
    }
  }
}
jose.amengual avatar
jose.amengual

the archive module is trying to do this :

jose.amengual avatar
jose.amengual
data "external" "git" {
  count   = module.this.enabled && var.git_ref == "" ? 1 : 0
  program = ["git", "-C", var.module_path, "log", "-n", "1", "--pretty=format:{\"ref\": \"%H\"}"]
}
jose.amengual avatar
jose.amengual

it’s trying to run git on that path

jose.amengual avatar
jose.amengual

it could be that git is not properly configured where you are running it

Tony Bower avatar
Tony Bower

Perhaps, but we’re running it in TF Cloud. I’ll try planning from my workstation later this evening and report back.

jose.amengual avatar
jose.amengual

ohhh tf cloud, that might be an issue

Tony Bower avatar
Tony Bower

yeah

Tony Bower avatar
Tony Bower

Other cloudposse modules haven’t been an issue so far, we’ve had success with them in TF Cloud

jose.amengual avatar
jose.amengual

not many modules use the git pull strategy

jose.amengual avatar
jose.amengual
GitHub - cloudposse/terraform-aws-datadog-lambda-forwarder: Terraform module to provision all the necessary infrastructure to deploy Datadog Lambda forwarders

Terraform module to provision all the necessary infrastructure to deploy Datadog Lambda forwarders - GitHub - cloudposse/terraform-aws-datadog-lambda-forwarder: Terraform module to provision all th…

jose.amengual avatar
jose.amengual

you will see in the example how to use a local file, so when running in TFC you can push the code into your repo and use the local file instead

Tony Bower avatar
Tony Bower

Thanks! I’ll give it a shot tonight! Thanks for updating the thread.

jose.amengual avatar
jose.amengual

np, let me know how that goes, I can make changes to the module since I’m still working on it so any feedback is appreciated

1
Tony Bower avatar
Tony Bower


you will see in the example how to use a local file so in the case of running in TFC then you can push the code in your repo and use the local file instead
I’m not sure I follow what you are suggesting here.

jose.amengual avatar
jose.amengual

so the problem you had with the RDS forwarder in Terraform cloud

jose.amengual avatar
jose.amengual

was I think related to the module trying to run git

jose.amengual avatar
jose.amengual

so with this new module you can point to a local file instead

Tony Bower avatar
Tony Bower

Ok, I didn’t find that in the example, so maybe I missed it or wasn’t sure.

jose.amengual avatar
jose.amengual

that was: if you are running in a system where remote URL connections are not allowed (like TFC), then you can use a local zip file with the code

jose.amengual avatar
jose.amengual

sorry, in the readme :

module "datadog_lambda_forwarder" {
  source = "cloudposse/datadog-lambda-forwarder/aws"
  forwarder_log_enabled = true
  forwarder_rds_artifact_url = "${file("${path.module}/function.zip")}"
  cloudwatch_forwarder_log_groups = {
    postgres =  "/aws/rds/cluster/pg-main/postgresql"
}
Tony Bower avatar
Tony Bower

Ah, ok! Thanks!

2021-08-12

Jackson Delahunt avatar
Jackson Delahunt
09:13:30 AM

Hi all! I’m using Terraform Cloud for state storage and terraform execution. I’ve been running on 0.12.26, but a module requires me to upgrade to 1.0.4. When changing the workspace version to the new version I get the error in the screenshot. I’m required to run terraform 0.13upgrade to upgrade the state files; however, I don’t know how to target state in Terraform Cloud from my local CLI. Can anyone advise how I can target Terraform Cloud state from my local CLI?

Yousuf Jawwad avatar
Yousuf Jawwad

you will have to go one by one

Yousuf Jawwad avatar
Yousuf Jawwad

first update to 0.13 .. it comes with a command that updates the tf files to the new format .. a minor refactoring will be required but it can be done.

Yousuf Jawwad avatar
Yousuf Jawwad

after upgrading .. the next time you do terraform plan .. it will upgrade automatically

Mihai Cindea avatar
Mihai Cindea

hey everyone! Trying to use terraform-aws-waf but no matter how I use it I get:

Error: Unsupported block type

  on .terraform/modules/wafv2/rules.tf line 253, in resource "aws_wafv2_web_acl" "default":
 253:             dynamic "forwarded_ip_config" {

Blocks of type "forwarded_ip_config" are not expected here.


Error: Unsupported block type

  on .terraform/modules/wafv2/rules.tf line 306, in resource "aws_wafv2_web_acl" "default":
 306:             dynamic "ip_set_forwarded_ip_config" {

Blocks of type "ip_set_forwarded_ip_config" are not expected here.


Error: Unsupported block type

  on .terraform/modules/wafv2/rules.tf line 409, in resource "aws_wafv2_web_acl" "default":
 409:             dynamic "forwarded_ip_config" {

Blocks of type "forwarded_ip_config" are not expected here.

I also tried using the code from examples/complete, but still have the same issue. Is there a minimum version other than 0.13? I’m currently using 0.14.11

GitHub - cloudposse/terraform-aws-waf

Contribute to cloudposse/terraform-aws-waf development by creating an account on GitHub.

Mihai Cindea avatar
Mihai Cindea

fixed, I was using an older version of the aws provider. Upgraded to 3.53.0 and it works flawlessly


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks like we should set a minimum version for the module

Murali Manohar avatar
Murali Manohar

Hi Team,

Can someone please help me set up RocksDB write-stall alerts using Terraform? I am new to Datadog and Terraform, and am looking for the syntax to set up an alert.

https://github.com/facebook/rocksdb/wiki/Write-Stalls

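As a starting point, a minimal sketch of a Datadog monitor in Terraform (assuming a recent Datadog provider; the metric name, threshold, and notification handle are hypothetical and should be replaced with whatever your RocksDB integration actually emits):

resource "datadog_monitor" "rocksdb_write_stalls" {
  name    = "RocksDB write stalls"
  type    = "metric alert"
  message = "RocksDB is stalling writes. @your-team"
  query   = "avg(last_5m):avg:rocksdb.stall_micros{*} > 1000000"

  monitor_thresholds {
    critical = 1000000
  }
}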

2021-08-13

atom avatar

Morning, is this the right place to ask questions about CloudPosse Terraform modules?

Julian Gog avatar
Julian Gog

Hey everyone, I am getting this error using your S3 module with the privileged_principal_arns option. Any clue why? I am using TF version 0.14.9. This was a common error in earlier versions:

Error: Invalid count argument

  on .terraform/modules/service.s3_bucket.s3_bucket/main.tf line 367, in resource "aws_s3_bucket_policy" "default":
 367:   count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || var.policy != "") ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

ERRO[0043] Hit multiple errors:
Hit multiple errors:
exit status 1 
GitHub - cloudposse/terraform-aws-s3-bucket at 0.42.0

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - GitHub - cloudposse/terraform-aws-s3-bucket at 0.42.0

Alex Jurkiewicz avatar
Alex Jurkiewicz

it’s because the number of ARNs depends on a computed value


Alex Jurkiewicz avatar
Alex Jurkiewicz

I guess you are passing in an ARN that is dynamically generated in the same terraform configuration

Alex Jurkiewicz avatar
Alex Jurkiewicz

in this case, Terraform doesn’t know what the length of var.privileged_principal_arns is at plan time, so it doesn’t know how many resources of resource "aws_s3_bucket_policy" "default" to create

Julian Gog avatar
Julian Gog

Yupp, I pass the ARN in from an ECS module. OK, then instead of checking the length, would it make sense to check != "" or != null?

Julian Gog avatar
Julian Gog

what confuses me is that if I try this out in a test repo where I just call the module with some random string as the ARN, it works

Alex Jurkiewicz avatar
Alex Jurkiewicz

Right. Because the string is hard coded

Julian Gog avatar
Julian Gog
fix(main.tf): change count check for privileged_principal_arns by avendretter · Pull Request #103 · cloudposse/terraform-aws-s3-bucket

what The length(var.privileged_principal_arns) is not determinable before apply if the input itself is dependent on other resources. why Terraform cannot know the length of the variable before i…

length of privileged_principal_arns not determinable before apply · Issue #102 · cloudposse/terraform-aws-s3-bucket

Describe the Bug If no resources are created yet and the var.privileged_principal_arns is a variable, this will lead in to this error: Error: Invalid count argument on .terraform/modules/service.s3…

Alex Jurkiewicz avatar
Alex Jurkiewicz

what’s that project which auto-adds tags to your terraform resources based on file/repo/commit?

Alex Jurkiewicz avatar
Alex Jurkiewicz

it has some name like yonder

Julian Gog avatar
Julian Gog
GitHub - bridgecrewio/yor: Extensible auto-tagger for your IaC files. The ultimate way to link entities in the cloud back to the codified resource which created it.

Extensible auto-tagger for your IaC files. The ultimate way to link entities in the cloud back to the codified resource which created it. - GitHub - bridgecrewio/yor: Extensible auto-tagger for you…

Alex Jurkiewicz avatar
Alex Jurkiewicz

man! i just found it too and was going to post

Alex Jurkiewicz avatar
Alex Jurkiewicz

thanks, that’s it


2021-08-15

Phillip Hocking avatar
Phillip Hocking

hey, any thoughts as to how to update the public PTR of an EC2 public IP programmatically? I think typically you have to ask AWS support to do that… but it would be nice to do it via terraform/api

loren avatar

If it can be done with the aws cli, then there is an api for it, which means terraform could do it also… If it doesn’t have the feature yet, check for an existing issue or open one!

roth.andy avatar
roth.andy

Example repo I made for running shell-based tests inside an ephemeral EC2 instance using Terratest, if anyone’s interested in that kind of thing

https://github.com/RothAndrew/terratest-shell-e2e-poc


2021-08-16

Mark juan avatar
Mark juan

Hi all, does anyone have an idea how to add CloudWatch as a Grafana data source via Terraform (Helm chart), and also verify it?

Alex Jurkiewicz avatar
Alex Jurkiewicz
GitHub - prometheus/cloudwatch_exporter: Metrics exporter for Amazon AWS CloudWatch

Metrics exporter for Amazon AWS CloudWatch. Contribute to prometheus/cloudwatch_exporter development by creating an account on GitHub.

OZZZY avatar

Hi guys, I am new to Terraform. How can I create more than one site2site vpn connection?

James Wade avatar
James Wade
Error: Cannot import non-existent remote object
│ 
│ While attempting to import an existing object to "aws_codebuild_project.lambda", the provider detected that no object exists with the given id. Only pre-existing objects can be imported; check that the id is correct and that it is associated with the provider's
│ configured region or endpoint, or use "terraform apply" to create a new remote object for this resource.

Anyone else ever had an issue with this?

loren avatar

only when the object doesn’t exist in the account

James Wade avatar
James Wade

hmm, I can see it via the console, I’ll try the CLI

James Wade avatar
James Wade

ah, typo


2021-08-17

OZZZY avatar

Hi.. I am new to Terraform. How can I create more than one site2site vpn connection?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry, this is overly broad. What have you already tried? Have you successfully created one VPN connection? Are you getting any errors?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Users of driftctl, I have a few questions for you: AFAIK, driftctl only catches three kinds of drift that are not already caught by Terraform’s refresh process: 1. SG rule changes, 2. IAM policy assignment, 3. SSO permission set assignments.

  1. Is there anything else driftctl catches that I’m missing?
  2. If I’m correct about the above, why do you use driftctl and not simply run a TF plan on a cron and see if any drifts are detected (like env0 are suggesting, or even Spacelift)?
marcinw avatar
marcinw

I think one of the benefits of using driftctl is that it can detect resources that are deployed in the cloud but not managed through Terraform.

At least that’s the promise.

marcinw avatar
marcinw

Spacelift drift detection will detect (and optionally fix) drift only against the resources it manages.

marcinw avatar
marcinw

So in that sense you can use both, though each for a different reason.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Good point. So it’s not so much drift as “unmanaged resources”. I’m curious how many people try to map out unmanaged resources and for what reason. For example, as a security person, I’d want to know if there are security issues in unmanaged resources. But I’m wondering if Engineering leaders care about them as much.

marcinw avatar
marcinw

There are also tools like https://steampipe.io/ or https://www.cloudquery.io/ that allow you to query entire accounts for possible security violations regardless of how resources are managed.


marcinw avatar
marcinw
06:09:35 PM

¯\_(ツ)_/¯

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Oh, there are dozens, if not hundreds of tools for that. But one could say that the approach to security in IaC (like what we do at Cloudrail, or Bridgecrew/Accurics/Fugue does), is different to security in unmanaged resources.

andylamp avatar
andylamp

hi there, I am trying to use https://github.com/cloudposse/terraform-aws-elasticache-redis; however, when I use subnets fetched through a data source in the form of:

data "aws_vpc" "vpc-dev" {
  tags       = { environment = "dev" }
  depends_on = [module.vpc-dev]
}

data "aws_subnet_ids" "vpc-dev-private-subnet-ids" {
  vpc_id     = data.aws_vpc.vpc-dev.id
  depends_on = [module.vpc-dev]
  tags = {
    Name = "*private*"
  }
}

and plug that into the configuration, it throws an error saying:

│ Error: Invalid count argument
│ 
│   on .terraform/modules/my-redis-cluster.redis/main.tf line 31, in resource "aws_elasticache_subnet_group" "default":
│   31:   count      = module.this.enabled && var.elasticache_subnet_group_name == "" && length(var.subnets) > 0 ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use
│ the -target argument to first apply only the resources that the count depends on.

has anyone encountered that error before? If so, how did you solve it?

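A likely cause of this error (an assumption about this setup, not a confirmed diagnosis): depends_on on a data source defers its read to apply time, so length(var.subnets) is unknown at plan. A minimal sketch that sidesteps the data sources, assuming the VPC module exposes the subnet IDs as outputs (output names are illustrative):

module "redis" {
  source  = "cloudposse/elasticache-redis/aws"
  version = ">= 0.40.0"

  name = "redis"

  vpc_id = module.vpc-dev.vpc_id
  # module outputs are known at plan time, unlike a data source whose
  # depends_on forces the read to happen at apply time
  subnets = module.vpc-dev.private_subnets
}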

jose.amengual avatar
jose.amengual

are you sure it is finding the subnet and vpc?


jose.amengual avatar
jose.amengual

did you see them in the plan?

andylamp avatar
andylamp

yep - they are already created.

andylamp avatar
andylamp

In fact, this is the plan in question:

(venv) andylamp@ubuntu-vm:~/Desktop/my-tf$ tf plan
module.vpc-dev.module.my-vpc.aws_vpc.this[0]: Refreshing state... [id=vpc-00337c2afe6fce5c3]
module.vpc-dev.module.my-vpc.aws_eip.nat[0]: Refreshing state... [id=eipalloc-012856d4a1d951283]
module.vpc-dev.module.my-vpc.aws_subnet.private[2]: Refreshing state... [id=subnet-08266fec0283b0297]
module.vpc-dev.module.my-vpc.aws_subnet.private[0]: Refreshing state... [id=subnet-0325a36af25038e1c]
module.vpc-dev.module.my-vpc.aws_subnet.private[1]: Refreshing state... [id=subnet-0d55505e94c067aa2]
module.vpc-dev.module.my-vpc.aws_route_table.public[0]: Refreshing state... [id=rtb-0d42a5eb7d09ee795]
module.vpc-dev.module.my-vpc.aws_internet_gateway.this[0]: Refreshing state... [id=igw-005edae1c9ed3fa6b]
module.vpc-dev.module.my-vpc.aws_subnet.public[0]: Refreshing state... [id=subnet-0975f87892ab81275]
module.vpc-dev.module.my-vpc.aws_subnet.public[2]: Refreshing state... [id=subnet-0f148b639a32bc4d0]
module.vpc-dev.module.my-vpc.aws_subnet.public[1]: Refreshing state... [id=subnet-05611d5e681c8ff3c]
module.vpc-dev.module.my-vpc.aws_route_table.private[0]: Refreshing state... [id=rtb-04a14b9c331a1298a]
module.vpc-dev.module.my-vpc.aws_route.public_internet_gateway[0]: Refreshing state... [id=r-rtb-0d42a5eb7d09ee7951080289494]
module.vpc-dev.module.my-vpc.aws_nat_gateway.this[0]: Refreshing state... [id=nat-05b7e44d3b553b476]
module.vpc-dev.module.my-vpc.aws_route_table_association.public[0]: Refreshing state... [id=rtbassoc-0f540546824c3129d]
module.vpc-dev.module.my-vpc.aws_route_table_association.private[1]: Refreshing state... [id=rtbassoc-033f6fc6e37b040e8]
module.vpc-dev.module.my-vpc.aws_route_table_association.public[1]: Refreshing state... [id=rtbassoc-08deab663c8d993d9]
module.vpc-dev.module.my-vpc.aws_route_table_association.public[2]: Refreshing state... [id=rtbassoc-08d46742cd13a384d]
module.vpc-dev.module.my-vpc.aws_route_table_association.private[0]: Refreshing state... [id=rtbassoc-045d114f50647d8a6]
module.vpc-dev.module.my-vpc.aws_route_table_association.private[2]: Refreshing state... [id=rtbassoc-011e95d58db228e53]
module.vpc-dev.module.my-vpc.aws_default_network_acl.this[0]: Refreshing state... [id=acl-01bbf32f28ecd8ea9]
module.vpc-dev.module.my-vpc.aws_route.private_nat_gateway[0]: Refreshing state... [id=r-rtb-04a14b9c331a1298a1080289494]
╷
│ Error: Invalid count argument
│ 
│   on .terraform/modules/my-redis-cluster.redis/main.tf line 31, in resource "aws_elasticache_subnet_group" "default":
│   31:   count      = module.this.enabled && var.elasticache_subnet_group_name == "" && length(var.subnets) > 0 ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use
│ the -target argument to first apply only the resources that the count depends on.
jose.amengual avatar
jose.amengual

did you set the var.elasticache_subnet_group_name ?

jose.amengual avatar
jose.amengual

no wait

andylamp avatar
andylamp

I was under the impression that this was not needed

andylamp avatar
andylamp
terraform-aws-elasticache-redis/main.tf at master · cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - terraform-aws-elasticache-redis/main.tf at master · cloudposse/terraform-aws-elasticache-redis

andylamp avatar
andylamp

followed this bit.

jose.amengual avatar
jose.amengual

var.subnets I think is basically 0

jose.amengual avatar
jose.amengual

module.subnets.private_subnet_ids is a list

jose.amengual avatar
jose.amengual

are you passing a list of subnets?

andylamp avatar
andylamp

yes, the private ones.

andylamp avatar
andylamp

what essentially I get from the output of this:

# grab the private subnets within the provided VPC
data "aws_subnet_ids" "vpc-private-subs" {
  vpc_id     = data.aws_vpc.redis-vpc.id
  depends_on = [data.aws_vpc.redis-vpc]
  tags = {
    Name = "*private*"
  }
}
andylamp avatar
andylamp

which is a list of subnets.

andylamp avatar
andylamp

matching to that tag.

andylamp avatar
andylamp

(btw, thanks for replying fast!)

jose.amengual avatar
jose.amengual

what does the module initialization look like?

andylamp avatar
andylamp

sec - let me fetch that.

andylamp avatar
andylamp
module "redis" {
  source = "cloudposse/elasticache-redis/aws"

  version = ">= 0.40.0"

  name = var.cluster-name

  engine_version = var.redis-stack-version
  instance_type  = var.redis-instance-type
  family         = var.redis-stack-family
  cluster_size   = var.cluster-size

  snapshot_window          = "04:00-06:00"
  snapshot_retention_limit = 7

  apply_immediately = true

  automatic_failover_enabled = false
  at_rest_encryption_enabled = false
  transit_encryption_enabled = false

  vpc_id  = var.vpc-id
  subnets = data.aws_subnet_ids.vpc-private-subs.ids

  snapshot_name = "redis-snapshot"

  parameter = [
    {
      name  = "notify-keyspace-events"
      value = "lK"
    }
  ]
}
andylamp avatar
andylamp

basically this - and then I call it in my main.tf as:

module "my-redis-cluster" {
  source = "./modules/my-ec-redis"
  vpc_id = data.aws_vpc.vpc-dev.id
}
andylamp avatar
andylamp

(the data bits are in the variables.tf of the my-ec-redis module)

jose.amengual avatar
jose.amengual

try passing a hardcoded subnet id to the subnets =

andylamp avatar
andylamp

so, with [] (which assumes hardcoded) it works.

jose.amengual avatar
jose.amengual

like subnets = [ "subnet-0325a36af25038e1c"]

andylamp avatar
andylamp

I think this works; it just does not like it when I dynamically fetch it from the target vpc

jose.amengual avatar
jose.amengual

I think this is a bit weird

andylamp avatar
andylamp

tell me about it

jose.amengual avatar
jose.amengual

you have a module creating the vpc, then you do a data lookup to look at stuff the vpc module created

andylamp avatar
andylamp

yep

jose.amengual avatar
jose.amengual

and on top of that you tell the data resource it depends on the module

andylamp avatar
andylamp

exactly.

jose.amengual avatar
jose.amengual

cyclical dependency

jose.amengual avatar
jose.amengual

so use the output of the module.vpc-dev and output the subnet ids

jose.amengual avatar
jose.amengual

and use the output as input for the redis module

jose.amengual avatar
jose.amengual

TF will know how to build the dependency from that relationship
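
For example, a minimal sketch of that rewiring, assuming the vpc-dev module exposes vpc_id and private_subnets outputs (both names hypothetical here):

module "my-redis-cluster" {
  source = "./modules/my-ec-redis"

  # referencing the module outputs directly gives Terraform the dependency
  # graph for free - no depends_on and no data lookup needed
  vpc_id  = module.vpc-dev.vpc_id
  subnets = module.vpc-dev.private_subnets
}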

andylamp avatar
andylamp

right, gotcha

andylamp avatar
andylamp

let me try this.

jose.amengual avatar
jose.amengual

when you start using depends_on you can make TF confused

andylamp avatar
andylamp

cool, let me try this and hopefully this works - thanks for the tip!

andylamp avatar
andylamp

(if not, I’ll just ping here again )

andylamp avatar
andylamp

seems to be able to plan it now. I am curious however why this did not happen with the aws eb resource (aws_elastic_beanstalk_environment); there I seem to be passing them as such:

  setting {
    name      = "Subnets"
    namespace = "aws:ec2:vpc"
    value     = join(",", data.aws_subnet_ids.vpc-public-subnets.ids)
    resource  = ""
  }

It is able to both find them and use them successfully - if it was a circular dependency, then surely it would be a problem there as well, right?

jose.amengual avatar
jose.amengual

but there is no depends_on

andylamp avatar
andylamp

there is

jose.amengual avatar
jose.amengual

so it can calculate the graph dependency

andylamp avatar
andylamp
module "eb-test" {
  source     = "./modules/my-eb"
  vpc_id     = module.vpc-dev.vpc_name
  depends_on = [module.vpc-dev]
}
andylamp avatar
andylamp

and this is how the my-eb module is defined:

# now create the app
resource "aws_elastic_beanstalk_application" "eb-app" {
  name = var.app_name
}

# configure the ELB environment for the target app
resource "aws_elastic_beanstalk_environment" "eb-env" {
  application         = aws_elastic_beanstalk_application.eb-app.name
  name                = var.env_name
  solution_stack_name = var.solution_stack_name == null ? data.aws_elastic_beanstalk_solution_stack.python_stack.name : var.solution_stack_name
  tier                = var.tier


  # Configure various settings for the environment, they are grouped based on their scoped namespace

  # -- Configure namespace: "aws:ec2:vpc"

  setting {
    name      = "VPCId"
    namespace = "aws:ec2:vpc"
    value     = var.vpc_id
    resource  = ""
  }

  # associate the ELB environment with a public IP address
  setting {
    name      = "AssociatePublicIpAddress"
    namespace = "aws:ec2:vpc"
    value     = "True"
    resource  = ""
  }

  setting {
    name      = "Subnets"
    namespace = "aws:ec2:vpc"
    value     = join(",", data.aws_subnet_ids.vpc-public-subnets.ids)
    resource  = ""
  }
  
  // more properties...
 }
jose.amengual avatar
jose.amengual

that will be because the count can be calculated

andylamp avatar
andylamp

right, it does not use count

jose.amengual avatar
jose.amengual

count or for_each can’t use values that can’t be calculated at plan time

jose.amengual avatar
jose.amengual

otherwise, how many would it create?

andylamp avatar
andylamp

ok I see, that’s helpful to know.

jose.amengual avatar
jose.amengual

you can sometimes do a local with a ternary that can calculate the value beforehand

andylamp avatar
andylamp

nice, do you happen to have an example of such a use case that I can see? It'd be really helpful.

jose.amengual avatar
jose.amengual

no, I was wrong, sorry - the value needs to be calculated at plan time, and I do not think it's possible to do it otherwise

jose.amengual avatar
jose.amengual

I had some logic to do that, but it did not work and I had to change the count arguments because I was trying to use a data resource

jose.amengual avatar
jose.amengual

if you apply a data resource once, like when using -target, and then use it in the count, that will work I think, but it adds a step that you do not need if you make your logic simpler

Julian Gog avatar
Julian Gog

one of your inputs, as well as a count argument, is not determinable by terraform. I had the same issue some days ago (https://sweetops.slack.com/archives/CB6GHNLG0/p1628844492126800) There is no workaround. Either hardcode the variable or create things on your own. I wasted 3 days figuring this out

Hey everyone, I am getting this error using your S3-module with the privileged_principal_arns option. Any clue why? I am using TF version 0.14.9. this was a common error at earlier versions

Error: Invalid count argument

  on .terraform/modules/service.s3_bucket.s3_bucket/main.tf line 367, in resource "aws_s3_bucket_policy" "default":
 367:   count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || var.policy != "") ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

ERRO[0043] Hit multiple errors:
Hit multiple errors:
exit status 1 
andylamp avatar
andylamp

@jose.amengual thanks a lot for the clarification, I think hard-coded values is the way to go for now.

jose.amengual avatar
jose.amengual

np

1
RB avatar
length of privileged_principal_arns not determinable before apply · Issue #102 · cloudposse/terraform-aws-s3-bucket

Describe the Bug If no resources are created yet and the var.privileged_principal_arns is a variable, this will lead in to this error: Error: Invalid count argument on .terraform/modules/service.s3…

2021-08-18

Mark juan avatar
Mark juan

Do anyone know how to add cloudwatch as grafana data source by using helm chart(prometheus-grafana)?

Mark juan avatar
Mark juan

For better context i’m using this values.yaml

podSecurityPolicy:
  enabled: true
grafana:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: Cloudwatch
        type: cloudwatch
        isDefault: true
        jsonData:
          authType: arn
          assumeRoleArn: "${ASSUME_ROLE_ARN}"
          defaultRegion: "${CLUSTER_REGION}"
          customMetricsNamespaces: ""
      version: 1
  grafana.ini:
    feature_toggles:
      enable: "ngalert"
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 80
  image:
    repository: grafana/grafana
    tag: 8.1.0
  ingress:
  %{ if GRAFANA_HOST != "" }
    enabled: true
    hosts:
      - ${GRAFANA_HOST}
  %{ else }
    enabled: false
  %{ endif }
prometheus:
  prometheusSpec:
    storageSpec:
      ## Using PersistentVolumeClaim
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
Mark juan avatar
Mark juan

using this in terraform

data "template_file" "prom_template" {
  template = file("./templates/prometheus-values.yaml")
  vars = {
    GRAFANA_HOST = var.domain_name == "" ? "" : "grafana.${local.cluster_name}.${var.domain_name}"
    CLUSTER_REGION = var.app_region
    ASSUME_ROLE_ARN = aws_iam_role.cloudwatch_role.arn 
  }
}

resource "helm_release" "prometheus" {
  chart            = "kube-prometheus-stack"
  name             = "prometheus"
  namespace        = kubernetes_namespace.monitoring.metadata.0.name
  create_namespace = true
  version          = "17.1.1"

  repository = "<https://prometheus-community.github.io/helm-charts>"

  values = [
    data.template_file.prom_template.rendered
  ]
}
Mark juan avatar
Mark juan

Then I set up kubectl and installed the helm chart, and after that port-forwarded to see the GUI, but I'm not able to see the cloudwatch data source

Pierre-Yves avatar
Pierre-Yves

nice trick I learned today: conditionally create a block https://codeinthehole.com/tips/conditional-nested-blocks-in-terraform/

Conditional nested blocks in Terraform — David Winterbottom

Using dynamic blocks to implement a maintenance mode.

2
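
The trick in the article boils down to iterating a dynamic block over a one-element list when a condition holds and an empty list otherwise. A minimal sketch (the allow_ssh variable and security group here are illustrative, not from the thread):

variable "allow_ssh" {
  type    = bool
  default = false
}

resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = var.vpc_id # assumed to be declared elsewhere

  # rendered exactly once when allow_ssh is true, omitted entirely otherwise
  dynamic "ingress" {
    for_each = var.allow_ssh ? [1] : []
    content {
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["10.0.0.0/8"]
    }
  }
}
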
AugustasV avatar
AugustasV

For logging in to AWS, right now I'm manually using some bash scripts to assume a role, but that MFA token expires every hour. What would be an option to automate those tasks? I would probably like to use docker

Alex Jurkiewicz avatar
Alex Jurkiewicz

logging in from your pc to use awscli?

Alex Jurkiewicz avatar
Alex Jurkiewicz

when you assume a role, you can set the expiry, the default is 1hr but you can go up to 8 generally

AugustasV avatar
AugustasV

Oh ok, it has security concerns…

loren avatar

Look into the refreshable credential in botocore, and the credential_process option in the awscli config. Between the two, you can have auto-refreshing temporary credentials

z0rc3r avatar

those are all workarounds; you probably want to have AWS SSO configured for your account. Its credentials are supported on the aws-cli and terraform provider side. Still, you have to log in from time to time once temporary credentials expire (this is configurable)

loren avatar

no thanks. AWS SSO is terribly limited when it comes to the full set of IAM features. sticking with Okta and an IAM Identity Provider for now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Checkout leapp.cloud. It handles automatic refreshes and provides best-in-class ui

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Leapp - One step away from your Cloud

Leapp grants to the users the generation of temporary credentials only for accessing the Cloud programmatically.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Andrea Cavagna

Andrea Cavagna avatar
Andrea Cavagna

Both for AWS SSO and Okta federated IAM roles, Leapp stores sensitive information, such as the AWS SSO token used to generate credentials and the SAML response, in a secure place locally (e.g. Keychain on macOS) https://docs.leapp.cloud/contributing/system_vault/

And in the app the user can choose which AWS account to log into; Leapp generates and rotates the temporary credentials.

If you have any question feel free to text me

System Vault - Leapp

Leapp is a tool for developers to manage, secure, and gain access to any cloud. From setting up your access data to activating a session, Leapp can help manage the underlying assets to let you use your provider CLI or SDK seamlessly.

loren avatar

I’ve been following leapp for a while, but haven’t yet given it a try

loren avatar

i can’t figure out the leapp setup when using okta. what role_arn? i have many roles through okta, and many accounts…

Andrea Cavagna avatar
Andrea Cavagna

It's the federated role arn with Okta: the Okta app adds a federated role for each role in any account you have access to; you can see it in the IAM Role panel https://docs.leapp.cloud/use-cases/aws_iam_role/#aws-iam-federated-role

loren avatar

If leapp is getting that info from Okta, why is it asking me for a role_arn in the setup?

Andrea Cavagna avatar
Andrea Cavagna

Leapp is not getting this information from Okta; it's up to you to know the correct roleArn for a given IAM Role federated with Okta

loren avatar

oh yeah, ok, no. this is too much setup for me. i’d have to add dozens of accounts and pick a single role for each one, or multiply by each role i wanted in Leapp

loren avatar

direct integration with okta would be a much nicer interface, similar to aws-okta-processor

Andrea Cavagna avatar
Andrea Cavagna

The Okta application is always the same; you just have to add all the role arns you can access. The integration with Okta is on the roadmap, I'll keep you updated

loren avatar

not for multiple accounts, the idp arn changes also

Andrea Cavagna avatar
Andrea Cavagna

ok thanks

loren avatar

i’ll be glad to try it again when Okta is supported directly as an IdP

loren avatar

i’m also curious if you had considered a credential_process integration via ~/.aws/config, instead of writing even the temp credential to ~/.aws/credentials?

Andrea Cavagna avatar
Andrea Cavagna

Absolutely, Leapp will soon move the core business logic to a local daemon that will communicate with the UI. In the daemon roadmap there is the credentials process written in the ~/.aws/config file: https://github.com/Noovolari/leapp-daemon/issues/20

1
managedkaos avatar
managedkaos

Hello team! Asking a question about name length limits.

TLDR Is there a document listing resources and their associated limits for the name and name_prefix lengths?

Details Some resources have limits imposed on how long the name or name_prefix can be when being created in terraform. For aws_iam_role , for example, the name_prefix limit is 32 characters.

I sometimes have long values for the variables i use to populate the name prefix so I protect from errors by using substr like this:

var.name        = "super-cool-unicorn-application"
var.environment = "staging"

resource "aws_iam_role" "task" {
    ...
    name_prefix = substr("${var.name}-${var.environment}-task-", 0, 32)
    ...
}

However, I know that other resources allow for longer values for name_prefix (can’t think of one off the top of my head… will add it if i find it).

I’d like to use a reference for these lengths so I can allow my names and prefixes to be as long as possible.

Does such a reference exist? If not, is there a way to “mine” it out of the terraform and/or provider source code?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have an id length limit parameter i believe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Sorry, we know of no comprehensive list of ID length limits. We did consider adding some pre-configured length limits but gave up because (a) we could not find such a list and (b) to the extent we did find limits, they were all over the place.

managedkaos avatar
managedkaos


“all over the place.” Agreed!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The null-label module that we use to generate all our ID fields does have a length limit field. If you find length limits on resource names, you can submit a PR or even just open an issue to set that length on the module that creates that resource.

Of course you can also use it directly.
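
For instance, a minimal sketch of using it directly to cap a generated id at the 32 characters mentioned above (inputs abbreviated; the version pin is illustrative):

module "role_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # illustrative pin

  namespace       = "eg"
  stage           = "staging"
  name            = "super-cool-unicorn-application"
  id_length_limit = 32 # truncates the generated id, keeping a short hash so it stays unique
}

# then, e.g.: name_prefix = module.role_label.id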

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)
GitHub - cloudposse/terraform-null-label: Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - GitHub - cloudposse/terraform-null-label: Terraform Module to define a consistent naming conven…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here our devops #office-hours are starting now! join us on zoom to talk shop url: cloudposse.zoom.us/j/508587304 password: sweetops

Release notes from terraform avatar
Release notes from terraform
07:53:44 PM

v1.0.5 1.0.5 (August 18, 2021) BUG FIXES: json-output: Add an output change summary message as part of the terraform plan -json structured logs, bringing this format into parity with the human-readable UI. (#29312) core: Handle null nested single attribute values (#29411)…

json-output: Add output changes to plan logs by alisdair · Pull Request #29312 · hashicorp/terraform

Extend the outputs JSON log message to support an action field (and make the type and value fields optional). This allows us to emit a useful output change summary as part of the plan, bringing the…

handle null NestingSingle values by jbardin · Pull Request #29411 · hashicorp/terraform

Null NestingSingle attributes were not being handled in ProposedNew. Fixes #29388

Tony Bower avatar
Tony Bower

I see a context.tf referred to in many of the CloudPosse modules. Is that a file I should copy and commit to my project unaltered?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes you can do that

Tony Bower avatar
Tony Bower

Excellent, thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s inside every module already

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t need to copy it if you are just using the modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just provide the namespace, env, stage, name

Tony Bower avatar
Tony Bower

Ok, that’s where I was confused I guess. Here’s an example of an example (heh) where I see it added.

https://github.com/cloudposse/terraform-aws-datadog-integration/tree/master/examples/rds-enhanced

terraform-aws-datadog-integration/examples/rds-enhanced at master · cloudposse/terraform-aws-datadog-integration

Terraform module to configure Datadog AWS integration - terraform-aws-datadog-integration/examples/rds-enhanced at master · cloudposse/terraform-aws-datadog-integration

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those vars are inside context.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the main purpose of the file is to provide common inputs to all modules w/o repeating them everywhere
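
For instance, a typical call then just threads the shared inputs through (a sketch; module.this is defined by the copied context.tf):

module "redis" {
  source = "cloudposse/elasticache-redis/aws"

  # module-specific inputs go here as usual...

  # namespace, environment, stage, name, tags, etc. all flow in together
  context = module.this.context
}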

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, note, we’ve just released an updated version that adds some more fields)

Mohammed Yahya avatar
Mohammed Yahya

Terraform v1.0.5 now adds a summary in the JSON plan - I just implemented this using jq last week

2

2021-08-19

Mark juan avatar
Mark juan

Do anyone know how to add cloudwatch as data source in grafana using this helm chart https://github.com/prometheus-community/helm-charts?

GitHub - prometheus-community/helm-charts: Prometheus community Helm charts

Prometheus community Helm charts. Contribute to prometheus-community/helm-charts development by creating an account on GitHub.

Alencar Junior avatar
Alencar Junior

Hi all, I have a question about creating service connections resources on Azure Devops. Terraform stores the personal_access_token value in the state file and I would like to avoid that. I was wondering if there is a better and more secure approach of creating this resource?

resource "azuredevops_serviceendpoint_github" "serviceendpoint_github" {
  project_id            = azuredevops_project.project.id
  service_endpoint_name = "xyz"

  auth_personal {
    personal_access_token = "TOKEN"
  }
}
Pierre-Yves avatar
Pierre-Yves

Hello @Alencar Junior, if you store the personal_access_token in an Azure Key Vault, you can then read it securely with a data source.

data.azurerm_key_vault_secret.example.value

https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/key_vault_secret

1
Alencar Junior avatar
Alencar Junior

Thanks @Pierre-Yves, I will try that.

Alencar Junior avatar
Alencar Junior

Hi @Pierre-Yves, I’ve created a key vault secret and I am using a data resource in order to fetch its value to my service connection however, when I run the command terraform state pull I can see the personal_access_token value as a plain text, am I missing something?

Pierre-Yves avatar
Pierre-Yves

when writing to Azure Key Vault, you should set the azurerm_key_vault_secret option to:

  content_type = "password"

variables should have the parameter “sensitive = true” https://learn.hashicorp.com/tutorials/terraform/sensitive-variables

Protect Sensitive Input Variables | Terraform - HashiCorp Learn

Protect sensitive values from accidental exposure using Terraform sensitive input variables. Provision a web application with Terraform, and mark input variables as sensitive to restrict when Terraform prints them out to the console.

Alencar Junior avatar
Alencar Junior
02:51:00 PM

Hi @Pierre-Yves, this is how I’m fetching the secrets:

data "azurerm_key_vault" "existing" {
  name                = "test-kv"
  resource_group_name = "test-rg"
}

data "azurerm_key_vault_secret" "github" {
  name         = "github-pat"
  key_vault_id = data.azurerm_key_vault.existing.id
}

resource "azuredevops_serviceendpoint_github" "serviceendpoint_ghes_1" {
  project_id            = data.azuredevops_project.project.id
  service_endpoint_name = "Test GitHub Personal Access Token"

  auth_personal {
    personal_access_token = data.azurerm_key_vault_secret.github.value
  }
}

I set the secret content type to password; however, it seems the secret value will always be stored in the raw state as plain text.

This is what I get when running terraform state pull :

{
      "mode": "managed",
      "type": "azuredevops_serviceendpoint_github",
      "name": "serviceendpoint_ghes_1",
      "provider": "provider[\"<http://registry.terraform.io/microsoft/azuredevops\|registry.terraform.io/microsoft/azuredevops\>"]",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "auth_oauth": [],
            "auth_personal": [
              {
                "personal_access_token": "[PLAIN-TEXT-VALUE]",
                "personal_access_token_hash": "......"
              }
            ],
            "authorization": {
              "scheme": "Token"
            },
            "description": "Managed by Terraform",
            "id": "..........",
            "project_id": "........",
            "service_endpoint_name": "Test GitHub Personal Access Token",
            "timeouts": null
          },
          "sensitive_attributes": [
            [
              {
                "type": "get_attr",
                "value": "auth_personal"
              }
            ]
          ],
Pierre-Yves avatar
Pierre-Yves

sorry, I don't have much more information on it. If you want to encrypt/decrypt your secrets, you might have a look at the rsadecrypt function and encryption

Alencar Junior avatar
Alencar Junior

That might be the way. Thanks for the help, I really appreciate it!

1
roth.andy avatar
roth.andy

Should I be able to do this?

module "foo" {
  source = "git::<https://foo.com/bar.git?ref=${var.module_version}>"

Specifically, pulling the ref from a TF variable

loren avatar

only if you’re using terragrunt

roth.andy avatar
roth.andy
08:29:57 PM

well then

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we “should be able to” but terraform doesn’t let us

loren avatar

Part of it is the module is cached during init, which doesn’t take vars, so the dereference needs to happen outside of TF

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ahhhhh yes, makes sense @loren
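
For anyone curious, the Terragrunt flavour looks roughly like this (a sketch; Terragrunt resolves the source string itself before running terraform init, and module_version here is a Terragrunt local, not a Terraform variable):

# terragrunt.hcl
locals {
  module_version = "v1.2.3" # hypothetical tag
}

terraform {
  source = "git::https://foo.com/bar.git?ref=${local.module_version}"
}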

2021-08-20

Grubhold avatar
Grubhold

SOLVED Hi folks, I'm using CloudPosse's ECS web app module along with SSM Parameter Store. I have a bunch of secrets and variables in .tfvars that I have used to create parameters in SSM, encrypted with KMS. But I'm not sure how to actually pass those from SSM to the ECS task definition for the containers? I couldn't figure it out from the modules, and I need it to be secure. Would appreciate your guidance.

Grubhold avatar
Grubhold
09:18:51 AM

How do I provide it to this

Grubhold avatar
Grubhold

Based on Foqal app's suggestions it led me to this page https://alto9.com/2020/05/21/aws-ssm-parameters-as-ecs-environment-variables/#comments

Grubhold avatar
Grubhold

If I understand correctly I need to have this block for every variable that I want to pass in the container

variable "secrets" {
  type = list(object({
    name      = string
    valueFrom = string
  }))
  description = "The secrets to pass to the container. This is a list of maps"
  default     = null
}
Grubhold avatar
Grubhold

For example this is the type of variables I have in .tfvars

directory_api_parameter_write = [
  {
    name        = "DB_PASSWORD"
    value       = "password"
    type        = "SecureString"
    overwrite   = "true"
    description = "Issuer Directory"
  },
  {
    name        = "DB_USER"
    value       = "sa"
    type        = "String"
    overwrite   = "true"
    description = "Issuer Directory"
  }
]
Grubhold avatar
Grubhold

How do I structure it? Sorry I couldn’t find an example in the repos

Grubhold avatar
Grubhold

SOLUTION for anyone interested this is how I did this.

  1. Provide your secrets in a list in .tfvars, for example
    directory_api_parameter_write = [
      {
        name        = "DB_PASSWORD"
        value       = "password"
        type        = "SecureString"
        overwrite   = "true"
        description = "Directory API"
      },
      {
        name        = "DB_USER"
        value       = "sa"
        type        = "String"
        overwrite   = "true"
        description = "Directory API"
      }
    ]
    
Grubhold avatar
Grubhold
  2. In variables.tf point to that
    variable "directory_api_parameter_write" {
      type        = list(map(string))
    }
    
Grubhold avatar
Grubhold
  3. Include this variable name in the task definition module call as var.directory_api_parameter_write
jose.amengual avatar
jose.amengual

but now the task def will have all your secrets in plain text

jose.amengual avatar
jose.amengual

or are they showing up as secrets? the reason I point this out is because ECS now supports secrets arn mapping in the task def, but a year ago it did not

managedkaos avatar
managedkaos

indeed it does. I don’t use a module for the task def, instead opting for a template. but mine is formatted like this:

... 
       "secrets": [
            {
                "name": "VARIABLE_NAME",
                "valueFrom": "arn:aws:ssm:${logs_region}:${account_id}:parameter/${name}/${environment}/VARIABLE_NAME"
            },
...
]
jose.amengual avatar
jose.amengual

that is exactly what I was referring to @managedkaos

jose.amengual avatar
jose.amengual

in the old days you needed to use something like chamber as an ENTRYPOINT to feed the ENV variables and such

Grubhold avatar
Grubhold

@jose.amengual @managedkaos indeed they are now showing as plain text in the task definition page on AWS. And they’re not being treated as secrets or valueFrom but instead as value.

Grubhold avatar
Grubhold
11:32:34 AM

@managedkaos I can't change it to a template at this point because I need to demo this in the upcoming week. I need to use CloudPosse's module as it's how everything is structured

Grubhold avatar
Grubhold
11:34:08 AM

What do you suggest I do? The corresponding variables for these two are these

Grubhold avatar
Grubhold
11:36:52 AM

I tried passing this from .tfvars but terraform gives an error that it doesn't accept quotes in valueFrom, and it becomes invalid when I remove the quotes

ClientException: The Systems Manager parameter name specified for secret TEST_DB_ENC_KEY is invalid. The parameter name can be up to 2048 characters and include the following letters and symbols: a-zA-Z0-9_.-,
Grubhold avatar
Grubhold

@managedkaos @jose.amengual Really out of options. I think this should be very easy but I must be missing a big part. I got so used to CloudPosse's modules and the structure and I want to use them, but I'm very stuck at this point. Need your guidance

Grubhold avatar
Grubhold

Also @managedkaos if a template is the best way to go, in that case do I only need to include the template instead of the container_definitions part in the module? Can you please show me an example of the template file and its variables/tfvars, and how they're being passed to the template?

jose.amengual avatar
jose.amengual

Systems Manager parameters are path+name, like /myapp/servicea/mysecret

jose.amengual avatar
jose.amengual

even if you have one layer it's still /mysecret

jose.amengual avatar
jose.amengual

and they need to exist beforehand, obviously

Grubhold avatar
Grubhold
04:55:35 PM

so the first time I'm actually writing the variables to Parameter Store I need to provide a path? I went ahead and changed them to this. Is this correct?

Grubhold avatar
Grubhold

But again terraform yells at me with the new name

Error: ClientException: The Systems Manager parameter name specified for secret /opdev/issuer/DB_PASSWORD is invalid. The parameter name can be up to 2048 characters and include the following letters and symbols: a-zA-Z0-9_.-,
Grubhold avatar
Grubhold
04:59:48 PM

Is it something to do with how secrets is configured in the module itself?

jose.amengual avatar
jose.amengual

no, you need name and value

jose.amengual avatar
jose.amengual

name is just the name that will end up as an env var

jose.amengual avatar
jose.amengual

the valueFrom is the arn of the secret, I believe

jose.amengual avatar
jose.amengual

I have an example but I'm not at my computer

Grubhold avatar
Grubhold

@jose.amengual I think I got somewhere with your comment. So I took the arn output of SSM

issuer_parameter_write = [
  {
    name        = "/opdev/issuer/DB_ENC_KEY"
    value       = "secretkey123"
    type        = "SecureString"
  },

And added it to the secrets variable that is being passed to the container definition as such

issuer_secrets = [
  {
    name      : "/dk/ssi_api/DB_ENC_KEY"
    valueFrom : "arn:aws:ssm:eu-west-1:1111123213123:parameter/opdev/issuer/DB_ENC_KEY"
  }
]

On the AWS Task Definition UI it is now showing the arn instead of the plain text value itself.

It no longer gave me that notorious error! it just yelled at me saying that

The secret name must be unique and not shared with any new or existing environment variables set on the container, such as '/opdev/issuer/DB_ENC_KEY'
Grubhold avatar
Grubhold

And that's because I have already registered that var with that name on SSM using that parameter_write for the SSM module

Grubhold avatar
Grubhold

But I wouldn't know the arn if I hadn't created it first. For the sake of automation I will maybe need to create a function to take the arn outputs and pass them to the consumer module. What do you suggest @jose.amengual? The output currently is this

name_list = tolist([
  "/opdev/issuer/DB_ENC_KEY",
  "/opdev/issuer/DB_SIG_KEY",
  "/opdev/issuer//NODE_CONFIG_DIR",
  "/opdev/issuer/NODE_CONFIG_ENV",
])
ssm-arn = tomap({
  "/opdev/issuer/DB_ENC_KEY" = "arn:aws:ssm:eu-west-1:111112312312:parameter/opdev/issuer/DB_ENC_KEY"
  "/opdev/issuer/DB_SIG_KEY" = "arn:aws:ssm:eu-west-1:111112312312:parameter/opdev/issuer/DB_SIG_KEY"
  "/opdev/issuer/NODE_CONFIG_DIR" = "arn:aws:ssm:eu-west-1:111112312312:parameter/opdev/issuer/NODE_CONFIG_DIR"
  "/opdev/issuer/NODE_CONFIG_ENV" = "arn:aws:ssm:eu-west-1:111112312312:parameter/opdev/issuer/NODE_CONFIG_ENV"
})
Grubhold avatar
Grubhold

Possible solution: added a new local called arn_list_map to outputs.tf to combine name_list and arn_list in the ssm-parameter-store module

# Splitting and joining, and then compacting a list to get a normalised list
locals {
  name_list = compact(concat(keys(local.parameter_write), var.parameter_read))

  value_list = compact(
    concat(
      [for p in aws_ssm_parameter.default : p.value], data.aws_ssm_parameter.read.*.value
    )
  )

  arn_list = compact(
    concat(
      [for p in aws_ssm_parameter.default : p.arn], data.aws_ssm_parameter.read.*.arn
    )
  )

  # Combining name_list and arn_list and mapping them together to produce output as a single object.
  arn_list_map = [
  for k, v in zipmap(local.name_list, local.arn_list) : {
    name  = k
    valueFrom = v
  }
  ]
}

output "names" {
  # Names are not sensitive
  value       = local.name_list
  description = "A list of all of the parameter names"
}

output "values" {
  description = "A list of all of the parameter values"
  value       = local.value_list
  sensitive   = true
}

output "map" {
  description = "A map of the names and values created"
  value       = zipmap(local.name_list, local.value_list)
  sensitive   = true
}

output "arn_map" {
  description = "A map of the names and ARNs created"
  value       = zipmap(local.name_list, local.arn_list)
}

output "arn_list" {
  description = "Key and valueFrom map list"
  value = local.arn_list_map
}
Grubhold avatar
Grubhold

So this will actually produce a list of objects that you can just point to in the task definition module, taking the arns directly from parameter_write

jose.amengual avatar
jose.amengual

so my container def has this:

"secrets": [
      {
        "valueFrom": "${db_secret_arn}:password::",
        "name": "DATABASE_PASSWORD"
      },
jose.amengual avatar
jose.amengual

then :

data "template_file" "server_task_definition" {
  template = file("${path.module}/templates/container_definition_ossindex_server.json")

  vars = {
    db_secret_arn                = data.aws_secretsmanager_secret.db_info.arn
}
}
jose.amengual avatar
jose.amengual

and on the module :

module "ecs_alb_service_task_index_server" {
  source                             = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=0.35.0"
  name                               = var.name
  namespace                          = var.namespace
  stage                              = var.environment
  attributes                         = var.attributes
  container_definition_json          = data.template_file.server_task_definition.rendered
...
jose.amengual avatar
jose.amengual

that is how I use it; if you use the container_definition module it's just a different input format, but the output is the same

Grubhold avatar
Grubhold

@jose.amengual Thanks for the examples

Desire BANSE avatar
Desire BANSE

Hi folks. I'm getting an AWS region issue when trying to run terraform-aws-config/examples/cis. For some reason it expects eu-west-1 while I have set it to us-east-1.

module.aws_config.aws_config_aggregate_authorization.central[0]: Creating...
aws_iam_policy.support_policy: Creating...
module.aws_config.aws_config_configuration_recorder.recorder[0]: Creating...
aws_iam_role.support_role: Creating...
module.aws_config_storage.module.storage[0].aws_s3_bucket.default[0]: Creating...
module.aws_config.aws_config_configuration_recorder.recorder[0]: Creation complete after 0s [id=config]
module.aws_config.aws_config_aggregate_authorization.central[0]: Creation complete after 1s [id=NNNNNN:us-east-1]
aws_iam_policy.support_policy: Creation complete after 1s [id=arn:aws:iam::NNNNNNNNN:policy/terraform-NNNNNN]
aws_iam_role.support_role: Creation complete after 1s [id=test-policy]
aws_iam_policy_attachment.support_policy_attach: Creating...
aws_iam_policy_attachment.support_policy_attach: Creation complete after 0s [id=test-policy]
╷
│ Error: Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-west-1'
│ 	status code: 400, request id: ID, host id: ID
│
│   with module.aws_config_storage.module.storage[0].aws_s3_bucket.default[0],
│   on .terraform/modules/aws_config_storage.storage/main.tf line 1, in resource "aws_s3_bucket" "default":
│    1: resource "aws_s3_bucket" "default" {
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
Create data source for aws iam roles by jlamande · Pull Request #18585 · hashicorp/terraform-provider-aws

The purpose of this new data source is to provide a way get ARNs and Names of IAM Roles that are created outside of the current Terraform state. E.g., in an AWS SSO powered environment, IAM Roles a…

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Wish they would add it generically for all resources. Eg finding a bucket by regex

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

How are your Go skills? You can open a PR too, like this person did

Alex Jurkiewicz avatar
Alex Jurkiewicz

The one I opened almost a year ago is not merged, seems like a poor ROI

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I was working on one and gave up on it at some point: https://github.com/hashicorp/terraform-provider-aws/pull/16989

[WIP] Support wildcard in Lambda permission by yi2020 · Pull Request #16989 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…

2021-08-21

Mohammed Yahya avatar
Mohammed Yahya

I faced this issue in https://github.com/cloudposse/terraform-aws-dynamic-subnets

╷
│ Error: Error in function call
│ 
│   on .terraform/modules/subnets/outputs.tf line 53, in output "nat_ips":
│   53:   value       = coalescelist(aws_eip.default.*.public_ip, aws_eip.nat_instance.*.public_ip, data.aws_eip.nat_ips.*.public_ip, list(""))
│ 
│ Call to function "list" failed: the "list" function was deprecated in Terraform v0.12 and is no longer available; use tolist([ ...
│ ]) syntax to write a literal list.

with version 1.0.5 if I have time I will create a PR

Mohammed Yahya avatar
Mohammed Yahya

never mind I’m using old version of the module

2021-08-22

2021-08-23

Gabriel avatar
Gabriel

I want to have dedicated private subnet sets for each AWS service used, such as RDS, ElastiCache, etc. I want to use a dedicated VPC CIDR, e.g. 10.0.0.0/16, and then break that IP range down into many more subnets with far fewer hosts, using a /29 mask.

I wish these subnet sets could be created dynamically and used for the different services without conflicts and without me having to specify each subnet manually, e.g.:

Some RDS MySQL Instance > 10.0.0.0/29, 10.0.0.8/29, 10.0.0.16/29 Another RDS MySQL Instance > 10.0.0.24/29, 10.0.0.32/29, 10.0.0.40/29 Some EC Redis Instance > 10.0.0.48/29, 10.0.0.56/29, 10.0.0.64/29 …. ….

Can I achieve that with https://github.com/cloudposse/terraform-aws-dynamic-subnets ?

GitHub - cloudposse/terraform-aws-dynamic-subnets: Terraform module for public and private subnets provisioning in existing VPC

Terraform module for public and private subnets provisioning in existing VPC - GitHub - cloudposse/terraform-aws-dynamic-subnets: Terraform module for public and private subnets provisioning in exi…
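
Whatever module ends up doing the provisioning, the ranges themselves can be computed instead of hand-written; a sketch with cidrsubnet (a /29 carved from a /16 means 13 extra bits), reproducing the layout above:

locals {
  vpc_cidr = "10.0.0.0/16"

  # 29 - 16 = 13 newbits; consecutive indexes guarantee non-overlapping /29s
  rds_mysql_a = [for i in range(0, 3) : cidrsubnet(local.vpc_cidr, 13, i)] # 10.0.0.0/29, 10.0.0.8/29, 10.0.0.16/29
  rds_mysql_b = [for i in range(3, 6) : cidrsubnet(local.vpc_cidr, 13, i)] # 10.0.0.24/29, 10.0.0.32/29, 10.0.0.40/29
  ec_redis    = [for i in range(6, 9) : cidrsubnet(local.vpc_cidr, 13, i)] # 10.0.0.48/29, 10.0.0.56/29, 10.0.0.64/29
}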

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - cloudposse/terraform-aws-named-subnets: Terraform module for named subnets provisioning.

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But honestly, I would avoid the strategy of allocating subnet ranges this way. It makes it very rigid and if you’re operating in lots of accounts, difficult to scale. We typically use security groups for isolation, not networks.

1
Michael Manganiello avatar
Michael Manganiello

The only place where I’ve found more specialized CIDRs being useful, is on AWS Client VPNs, to map user’s memberOf attribute to whitelisted CIDRs

1
Gabriel avatar
Gabriel

Recently I had to delete a subnet that was used in an RDS subnet group. It is a big hassle and problem moving an RDS instance from one subnet group to another. That's why the idea was to have one subnet group only for one RDS instance, to make services like RDS and their networks independent from each other, so that in the future when I want to do something with one service and its network, it does not impact other services.

@Erik Osterman (Cloud Posse) could you elaborate a bit more on rigidity and multiple accounts?

2021-08-24

Mark juan avatar
Mark juan

Hi everyone! After adding cloudwatch as a data source in grafana by using the kube-prometheus-stack helm chart, it got added, but as I am testing it, it's saying metric request error, and none of the dashboards are working!!

Mark juan avatar
Mark juan

The policy and role I'm using are:

data "aws_iam_policy_document" "grafana_cloudwatch" {
  statement {
    sid    = "AllowReadingMetricsFromCloudWatch"
    effect = "Allow"
    actions = [
      "cloudwatch:ListMetrics",
      "cloudwatch:GetMetricStatistics",
      "cloudwatch:GetMetricData"
    ]
    resources = ["*"]
  }

  statement {
    sid    = "AllowReadingLogsFromCloudWatch"
    effect = "Allow"
    actions = [
      "logs:DescribeLogGroups",
      "logs:GetLogGroupFields",
      "logs:StartQuery",
      "logs:StopQuery",
      "logs:GetQueryResults",
      "logs:GetLogEvents"
    ]
    resources = ["*"]
  }
  
  statement {
    sid       = "AllowReadingResourcesForTags"
    effect    = "Allow"
    actions   = ["tag:GetResources"]
    resources = ["*"]
  }
}

data "aws_iam_policy_document" "base_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
    effect = "Allow"
  }
}

data "aws_iam_policy_document" "account_assume_role" {
  source_json = data.aws_iam_policy_document.base_policy.json
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["${data.aws_caller_identity.current.arn}"]
    }
    effect = "Allow"
  } 
}

resource "aws_iam_role" "cloudwatch_role" {
  name               = "${local.cluster_name}-grafana-cloudwatch-role"
  assume_role_policy = data.aws_iam_policy_document.account_assume_role.json
}

resource "aws_iam_policy" "data_source_policy" {
  name_prefix = "${local.cluster_name}-grafana-cloudwatch-policy"
  policy      = data.aws_iam_policy_document.grafana_cloudwatch.json
}

resource "aws_iam_role_policy_attachment" "DataSourceCloudwatchPolicy" {
  policy_arn = aws_iam_policy.data_source_policy.arn
  role       = aws_iam_role.cloudwatch_role.name
}
Michael Dizon avatar
Michael Dizon
terraform-aws-components/providers.tf at master · cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/providers.tf at master · cloudposse/terraform-aws-components

terraform-aws-components/variables.tf at master · cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/variables.tf at master · cloudposse/terraform-aws-components

Markus Muehlberger avatar
Markus Muehlberger

Have a look at https://github.com/cloudposse/terraform-aws-components/tree/all-new-components

This branch is much more up-to-date (yet still outdated, I believe).

GitHub - cloudposse/terraform-aws-components at all-new-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem - GitHub - cloudposse/terraform-aws-components at all-new-components

Michael Dizon avatar
Michael Dizon

thanks! could you tell me how to specify a branch in vendir?

Markus Muehlberger avatar
Markus Muehlberger
ref: 0.140.0

can be a tag or a branch so you could use

ref: all-new-components

My information on the branch is a couple months old, so maybe someone from Cloudposse wants to chime in and tell me if I’m wrong. (it works for me though).

1
Michael Dizon avatar
Michael Dizon

damn it does not support TF 1.0

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

variable noob question … how do I make sure a string does not have the word latest in it?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
regex - Functions - Configuration Language - Terraform by HashiCorp

The regex function applies a regular expression to a string and returns the matching substrings.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

You can use something like regex(".*latest.*", var)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i can’t spell regex

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i had rege()

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

The big question is what is the plural of regex?

loren avatar

if that had been rage() i would have understood

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

based off the last two weeks I wouldn’t be surprised if i wrote rage

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

The plural of regex is regrets

3
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

@Yoni Leitersdorf (Indeni Cloudrail) your regex proposal didn’t seem to work unless I have done it wrong

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you post the code?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
variable "bottlerocket_ami_ssm_parameter_name" {
  description = "The SSM parameter name for the AMI ID to use (e.g. /aws/service/bottlerocket/aws-k8s-1.18/x86_64/1.2.0/image_id)."
  type        = string

  validation {
    condition     = length(regexall(".*latest.*", var.bottlerocket_ami_ssm_parameter_name)) != 0
    error_message = "The bottlerocket_ami_ssm_parameter_name value can not contain the word 'latest' in it."
  }
}
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I’m on my phone so can’t test it right now, but look at how they use regex here: https://www.terraform.io/docs/language/values/variables.html#custom-validation-rules

Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

They use the singular function, not regexall, and wrap it with can for the true case. You want latest not to be in there, so you'd need to check that can is false

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

@Steve Wade (swade1987) did it work for you?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Nope, I couldn't get it working, and it was like midnight local time so I just left it

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I tried a few attempts
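
For the record, a validation along the lines Yoni describes (an untested sketch) would look like:

variable "bottlerocket_ami_ssm_parameter_name" {
  description = "The SSM parameter name for the AMI ID to use (e.g. /aws/service/bottlerocket/aws-k8s-1.18/x86_64/1.2.0/image_id)."
  type        = string

  validation {
    # can() is true when the regex matches, i.e. when "latest" is present,
    # so negate it to reject such values
    condition     = !can(regex("latest", var.bottlerocket_ami_ssm_parameter_name))
    error_message = "The bottlerocket_ami_ssm_parameter_name value can not contain the word 'latest' in it."
  }
}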

Pavel avatar

trying to figure out why i cant reach my redis cluster using this https://github.com/cloudposse/terraform-aws-elasticache-redis#output_host

GitHub - cloudposse/terraform-aws-elasticache-redis: Terraform module to provision an ElastiCache Redis Cluster

Terraform module to provision an ElastiCache Redis Cluster - GitHub - cloudposse/terraform-aws-elasticache-redis: Terraform module to provision an ElastiCache Redis Cluster

Pavel avatar

which subnets should i put this on?

Pavel avatar

i have private and public ones

Pavel avatar

i have an ec2 which is on a public subnet, same vpc; i put the cluster on a private one, but if it's the same vpc shouldn't it be reachable? the sg just allows ingress for anything from the source vpc

Pavel avatar
Pavel
11:05:02 PM

what is this subnet group?

Pavel avatar

i didn’t make this

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

its a default one that redis creates

Pavel avatar

the tf module wants a list of ids

Pavel avatar

oo i see subnet groups exist in elasticache

Pavel avatar

so it has all my pub/priv subnets

Pavel avatar

still cant get my ec2 to connect

Pavel avatar

redis-cli -h master.nv21-development-redis.ez60p2.use1.cache.amazonaws.com ping - i get nothing, it just hangs there. not even a timeout

jose.amengual avatar
jose.amengual

are the instances and redis on the same subnet?

jose.amengual avatar
jose.amengual

is the SG of redis allowing the subnet?

jose.amengual avatar
jose.amengual

are they in the same VPC

jose.amengual avatar
jose.amengual

many things to check

Pavel avatar

“are the instances and redis on the same subnet?” the subnet group of the redis contains the subnet of the ec2

Pavel avatar

“is the SG of redis allowing the subnet?” im not sure I understand, they are both on the same VPC. the security group allows ingress from the default sg of the VPC

Pavel avatar
Pavel
12:05:48 AM
Pavel avatar
Pavel
12:06:47 AM

then the default sg is applied to the instance security groups

Pavel avatar

i think it has to do with the security groups, because i can open the SG up to all traffic from anywhere and it still doesn't connect

jose.amengual avatar
jose.amengual

then ping the redis endpoint and compare the ip subnet with the instance ip

jose.amengual avatar
jose.amengual

maybe is in a public subnet

Pavel avatar

for the sake of testing, im going to recreate the redis only on public subnet which the ec2 is on

Pavel avatar

i figure the ec2 and redis need to be on the same subnet?

jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

it is highly NOT recommended to have redis on a public subnet

jose.amengual avatar
jose.amengual

you should move your instance to the private subnet

jose.amengual avatar
jose.amengual

put an ALB in front or something to reach it

Pavel avatar

if i move ec2 to a private subnet then i cant ssh into it

Pavel avatar

im doing this just to test connectivity

jose.amengual avatar
jose.amengual

you can use SSM Session manager or Instance connect

jose.amengual avatar
jose.amengual

you need the help of someone in the Networking department; you could put everything in the public subnet but it will expose it to the internet, which is not recommended

Pavel avatar

i seemed to messed up all my stuff now

Pavel avatar

so i gotta start over

Pavel avatar

i would not put redis on public subnet normally

Pavel avatar

if it is on a public subnet in a vpc with an igw, should i be able to access it from my local machine?

jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

but keep in mind it is exposed to the internet

Pavel avatar

it didn’t used to be that way

Pavel avatar

ok .. i have no idea. i made it totally public.. can’t even ping it

Pavel avatar

sg allows all traffic from my IP

Pavel avatar

its on public subnets

Pavel avatar

this is becoming senseless

jose.amengual avatar
jose.amengual

you need guidance from someone with networking knowledge, there is some basic concepts you are missing that you need to understand before you can get this to work

Pavel avatar

i mean i can get literally any other service to work in this same manner: vpc with nat and igw, public subnets associated to the service, security group allowing ingress

Pavel avatar

before elasticache was in no way accessible from outside the vpc

Pavel avatar

but lets just suppose i use the example on that tf package

Pavel avatar

any service on the same vpc with that sg attached should be able to access it

Pavel avatar

im going to step away from this. i’ll try again tomorrow

Pavel avatar

thanks for your help

jose.amengual avatar
jose.amengual

np

Pavel avatar

im an idiot, i had tls on.

Pavel avatar

telnet just says connection timeout

Mr.Devops avatar
Mr.Devops

hi there - trying to perform some conditional expression based on the user_data and user_data_base64 arguments for aws_instance resources. I understand that you cannot use both arguments in the resource but figured i'd ask the community.

long story short - i'm migrating our state and trying to prevent a resource recreation from a previous engineer's sloppy mess.

jose.amengual avatar
jose.amengual

something like this :

module "ec2" {
  source  = "cloudposse/ec2-instance/aws"
  version = "0.32.1"

  enabled          = module.this.enabled
  ami              = data.aws_ami.default.id
  ami_owner        = "amazon"
  ssh_key_pair     = null
  vpc_id           = local.vpc_id
  subnet           = local.private_subnet_ids[0]
  instance_type    = var.instance_type
  security_groups  = [module.default_sg.id]
  user_data_base64 = base64encode(data.template_file.userdata[0].rendered)
  root_volume_size = 40
  root_volume_type = "standard"

  associate_public_ip_address   = false
  root_block_device_encrypted   = true
  create_default_security_group = false

  context = module.this.context
}
jose.amengual avatar
jose.amengual
data "template_file" "userdata" {
  count    = module.this.enabled ? 1 : 0
  template = file("${path.module}/templates/userdata.sh.tmpl")

  vars = {
    bucket    = local.bucket_id
    region    = var.region
    host_tags = yamlencode({ "tags" = formatlist("%s:%s", keys(module.this.tags), values(module.this.tags)) })
  }
}
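
Side note: on Terraform 0.12+, the built-in templatefile() function can replace the (now deprecated) template_file data source; a minimal sketch with the same inputs:

locals {
  userdata = templatefile("${path.module}/templates/userdata.sh.tmpl", {
    bucket    = local.bucket_id
    region    = var.region
    host_tags = yamlencode({ "tags" = formatlist("%s:%s", keys(module.this.tags), values(module.this.tags)) })
  })
}

The rendered string can then be passed as user_data_base64 = base64encode(local.userdata).
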
Mr.Devops avatar
Mr.Devops

hi @jose.amengual the problem i’m having is between aws regions, where one region is set to use user_data_base64 and another region is set to use user_data. Trying to find some way of using a single aws_instance resource that can use either one. Just an example

Mr.Devops avatar
Mr.Devops

hope that makes sense

Mr.Devops avatar
Mr.Devops
resource "aws_instance" "this" {
  count         = 1
  
  ami               = var.ami
  availability_zone = "${var.region}a"
  instance_type     = var.instance
  if user_data = true ? user_data else
  user_data_base64 = true ? base64encode(
  templatefile("${path.root}/cloud-config/test.yml"
  }
Mr.Devops avatar
Mr.Devops

just an example above, not really the code

jose.amengual avatar
jose.amengual

why not use base64 everywhere?

Mr.Devops avatar
Mr.Devops

Well.. that is the plan but at this moment I’m migrating state and the user data arguments are different for each aws region unfortunately

Mr.Devops avatar
Mr.Devops

The right way is to keep them consistent for each region and idk why this person did it this way

jose.amengual avatar
jose.amengual

you could have a local that checks against a list of non-base64 regions and use that to decide

jose.amengual avatar
jose.amengual

the list will get smaller once you migrate

jose.amengual avatar
jose.amengual

until it is empty

jose.amengual avatar
jose.amengual

and you can start a comment like

#### I HATE THIS, PLEASE MIGRATE
jose.amengual avatar
jose.amengual

and then define the local

Mr.Devops avatar
Mr.Devops

hmm, trying to picture what you mean by using a local, sorry

jose.amengual avatar
jose.amengual

a local variable

jose.amengual avatar
jose.amengual

base64_regions = { "us-west-2" = true, "us-east-2" = false }

Mr.Devops avatar
Mr.Devops

sorry i meant to say how would you use a local variable to switch between which resource argument to use (user_data vs user_data_base64) within your aws_instance resource block?

Mr.Devops avatar
Mr.Devops

since both will conflict with each other

jose.amengual avatar
jose.amengual

base64_region_enabled = lookup(local.base64_regions, var.region, false)

jose.amengual avatar
jose.amengual

something like that

jose.amengual avatar
jose.amengual

and then you set the value to null

jose.amengual avatar
jose.amengual
user_data_base64 = local.base64_region_enabled ? base64encode(templatefile("${path.root}/cloud-config/test.yml", {})) : null
jose.amengual avatar
jose.amengual

I think if you set it to null you can define it but it will not be used

jose.amengual avatar
jose.amengual

so it will not conflict

Mr.Devops avatar
Mr.Devops

yes this would work in the resource, but how would you tell your resource to use just user_data, since that would also need to be included in the aws_instance block?

Wouldn’t having this cause a conflict error?

resource "aws_instance" "this" {
user_data_base64 =  local.base64_region_enabled ? base64encode(
  templatefile("${path.root}/cloud-config/test.yml" : null
user_data = ""
} 
Mr.Devops avatar
Mr.Devops

something like that

Mr.Devops avatar
Mr.Devops

jose.amengual avatar
jose.amengual

you null that too

jose.amengual avatar
jose.amengual
resource "aws_instance" "this" {
user_data_base64 =  local.base64_region_enabled ? base64encode(
  templatefile("${path.root}/cloud-config/test.yml") : null
user_data = local.base64_region_enabled ? null : templatefile("${path.root}/cloud-config/test.yml")
} 
Mr.Devops avatar
Mr.Devops

ah ok let me give that a try thx Pepe!

jose.amengual avatar
jose.amengual

np

jose.amengual avatar
jose.amengual

It might not work, but give it a try

1
Mr.Devops avatar
Mr.Devops

yeah doesn’t work since it complains about conflicts between the two resource arguments. I’ve already tried this.

jose.amengual avatar
jose.amengual

even with null?

Mr.Devops avatar
Mr.Devops

yes

jose.amengual avatar
jose.amengual

then use our module

jose.amengual avatar
jose.amengual

and do a count to enable/disable the base64

jose.amengual avatar
jose.amengual

you can instantiate two, one that will do base64 and one that will not
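
If even null values trip the provider’s conflict validation, a minimal sketch of that two-instantiation idea, using plain aws_instance resources for brevity (variable names are assumptions):

locals {
  # regions already migrated to base64 user data; grow this list over time
  base64_regions = ["us-west-2"]
  use_base64     = contains(local.base64_regions, var.region)
  userdata       = templatefile("${path.root}/cloud-config/test.yml", {})
}

# exactly one of these two resources is created in any given region
resource "aws_instance" "base64" {
  count = local.use_base64 ? 1 : 0

  ami              = var.ami
  instance_type    = var.instance_type
  user_data_base64 = base64encode(local.userdata)
}

resource "aws_instance" "plain" {
  count = local.use_base64 ? 0 : 1

  ami           = var.ami
  instance_type = var.instance_type
  user_data     = local.userdata
}

Migrating a region from one resource address to the other would normally recreate the instance, so a terraform state mv would be needed at that point.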

Mr.Devops avatar
Mr.Devops

do you have a direct link to the module i can review?

Mr.Devops avatar
Mr.Devops

no need i can find it from ec2-instance/aws

Mr.Devops avatar
Mr.Devops

thx

Mr.Devops avatar
Mr.Devops

i will give the module a try tomorrow thx again!

jose.amengual avatar
jose.amengual

np

2021-08-25

Balazs Varga avatar
Balazs Varga

hello all, I would like to create a cert for a private hosted zone. Before, with ansible, we created a public hosted zone, then created the cert and waited until it became valid, then deleted the hosted zone and created a zone with the same name in private mode. How can I do the same in terraform?
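
For reference, ACM DNS validation records must be publicly resolvable, so one common Terraform pattern keeps a small public zone used only for validation while workloads use a private zone with the same name; a minimal sketch (domain and resource names are placeholders):

# public zone used only to satisfy ACM DNS validation
resource "aws_route53_zone" "validation" {
  name = "internal.example.com"
}

resource "aws_acm_certificate" "this" {
  domain_name       = "internal.example.com"
  validation_method = "DNS"
}

resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => dvo
  }

  zone_id = aws_route53_zone.validation.zone_id
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  ttl     = 60
  records = [each.value.resource_record_value]
}

resource "aws_acm_certificate_validation" "this" {
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
}

Unlike the delete-and-recreate flow, the public validation zone can stay in place, which also keeps automatic certificate renewal working.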

2021-08-26

Mark juan avatar
Mark juan

Hi everyone! I am using a policy and added a tag on it, and in the policy i’m filtering on the conditions based on the tag like this, but it’s not filtering out!

Mark juan avatar
Mark juan
data "aws_iam_policy_document" "grafana_datasource" {
  statement {
    sid    = "AllowReadingMetricsFromCloudWatch"
    effect = "Allow"
    actions = [
      "cloudwatch:ListMetrics",
      "cloudwatch:GetMetricStatistics",
      "cloudwatch:GetMetricData"
    ]

    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/Project"
      values = [
        "${local.cluster_name}"
      ]
    }
  }
  statement {
    actions   = ["sts:AssumeRole"]
    resources = [aws_iam_role.grafana_datasource.arn]
  }
}
Mark juan avatar
Mark juan
locals{
  common_tags = {
    Project     = local.cluster_name
    Provisioner = "TERRAFORM"
    Environment = local.environment
  }
}
Mark juan avatar
Mark juan
resource "aws_iam_policy" "data_source_policy" {
  name_prefix = "${local.cluster_name}-grafana-cloudwatch-policy"
  policy      = data.aws_iam_policy_document.grafana_datasource.json
  tags        = local.common_tags
}
Mark juan avatar
Mark juan

Can someone help me with this?

Alex Jurkiewicz avatar
Alex Jurkiewicz

iam policies are about permissions, not filtering

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can’t let people see only a subset of metrics with ListMetrics

Alex Jurkiewicz avatar
Alex Jurkiewicz

well, you can put the resources in separate regions or accounts

Mark juan avatar
Mark juan
Actions, resources, and condition keys for Amazon CloudWatch - Service Authorization Reference

Lists all of the available service-specific resources, actions, and condition keys that can be used in IAM policies to control access to Amazon CloudWatch.

Markus Muehlberger avatar
Markus Muehlberger

You can see in the actions table that no condition keys are supported for the cloudwatch:ListMetrics action.

Mark juan avatar
Mark juan

It’s just that i want to see the cloudwatch metrics for the particular cluster

Mark juan avatar
Mark juan

and that can be done by tagging the resources

Mark juan avatar
Mark juan

for a particular cluster

Markus Muehlberger avatar
Markus Muehlberger

Not with IAM permissions. IAM permissions are solely to allow or deny a request.

Mark juan avatar
Mark juan

Is there any other way to do so?

managedkaos avatar
managedkaos

Hello, team! Have you seen TF resources (IAM roles in particular) show bogus changes like this:

   ~ resource "aws_iam_role" "codedeploy" {
      ~ assume_role_policy    = jsonencode(
          ~ {
              ~ Statement = [
                  ~ {
                      ~ Principal = {
                          ~ Service = [
                              - "codedeploy.amazonaws.com",
                                "ecs-tasks.amazonaws.com",
                              + "codedeploy.amazonaws.com",
                            ]
                        }
                        # (3 unchanged elements hidden)
                    },
                ]
                # (1 unchanged element hidden)
            }
        )

Note that this section is really just a rehash of what’s already there:

                              - "codedeploy.amazonaws.com",
                                "ecs-tasks.amazonaws.com",
                              + "codedeploy.amazonaws.com",

I thought it might be an ordering thing but not sure…

RB avatar

yes, terraform often reorders array values

RB avatar
Order is lost for data `aws_iam_policy_document` when applied to S3 buckets, iam roles, kms keys, etc · Issue #11801 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…
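
One workaround that may help (a sketch; behavior varies by provider version) is to declare one statement per service principal, so there is no multi-element identifier list for AWS to reorder:

data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["codedeploy.amazonaws.com"]
    }
  }

  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "codedeploy" {
  name               = "codedeploy" # role name assumed for illustration
  assume_role_policy = data.aws_iam_policy_document.assume.json
}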

RB avatar

my first aws provider update. please upvote if you want better rabbitmq / mq broker support.

https://github.com/hashicorp/terraform-provider-aws/pull/20661

mq_broker: rm `auto_minor_version_upgrade` and `host_instance_type`'s `ForceNew` by nitrocode · Pull Request #20661 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

RB avatar

also if anyone wants to review, please feel free.

RB avatar

i was thinking about adding a test here but im a bit of a noob

Almondovar avatar
Almondovar

Hi colleagues, i am updating our eks terraform module from v13 to v17 and i noticed in the terraform plan that it wants to remove the autoscaling groups, any idea why it wants to do that?

 ~ resources              = [
          - {
              - autoscaling_groups              = [
                  - {
                      - name = "eks-xxxx"
                    },
                ]
              - remote_access_security_group_id = ""
            }, 
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have many EKS modules, so it’s best to say which one in particular

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the likely explanation is the module was refactored to “create before destroy” to make blue/green cluster upgrades easier.

2021-08-29

Amit Karpe avatar
Amit Karpe

What are the main advantages of using CP’s module vs the normal AWS EKS module? cloudposse/terraform-aws-eks-cluster vs terraform-aws-modules/terraform-aws-eks

z0rc3r avatar

terraform-aws-modules are not “normal”. there are no “normal” modules actually. cloudposse and terraform-aws-modules are the same in terms of origin and support: made by the community and supported by the community

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why is CP’s not normal?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

cloudposse/terraform-aws-eks-cluster is maintained by Cloud Posse

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-aws-modules modules are maintained by @antonbabenko and other people

z0rc3r avatar

I saw a lot of misunderstanding about terraform-aws-modules, due to its naming a lot of people think it’s some kind of “official” modules provided or endorsed by hashicorp

this1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

both modules should do almost the same things

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I saw a lot of misunderstanding about terraform-aws-modules, due to its naming a lot of people think it’s some kind of “official” modules provided or endorsed by hashicorp

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, they chose a good name

Almondovar avatar
Almondovar

if i had to choose, i would choose the most “Famous” one, because that might mean that there is a bigger community and bigger possibilities of “googling for errors”

Amit Karpe avatar
Amit Karpe

Let me rephrase it: what benefits will a user have if they choose to use CP’s EKS module?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Cloud Posse has more than 130 modules covering almost all AWS resources (we are adding new modules often)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all modules have similar interfaces, inputs, etc.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and all covered by tests that deploy the modules on a real AWS account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and all modules have a working example (examples/complete) folder that gets deployed and tested by Terratest (look into the test folder)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and not to forget: all the modules are used in production at tens of Cloud Posse’s customers (so we maintain them and fix bugs/issues as they are discovered)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I’m pretty sure the same could be said for terraform-aws-modules, but CP does not maintain those

z0rc3r avatar

my personal gripe with cloudposse modules is that they are versioned as 0.x, which means (according to semver) that they can break backward compatibility on minor version changes. as an end user I have to actively monitor releases and verify code changes. terraform-aws-modules are more conservative in introducing breaking changes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we try to not break the modules, but since we have many of them and we constantly improve them, and terraform and AWS make changes/improvements all the time, some breaking changes could and will be introduced

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we create releases from each change, so if you pin to a release (not to main/master), then it would continue working
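
For example, pinning a registry module to an exact release (module and inputs here are illustrative):

module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "0.25.0" # exact pin; review the changelog before bumping

  namespace  = "eg"
  stage      = "prod"
  name       = "app"
  cidr_block = "10.0.0.0/16"
}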

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the releases perform the same function as the versioning, and having 0.x.x and not 1.0.0 is just for historical reasons; we do not consider all 0.x.x versions as minor releases which we will deliberately break (as mentioned, we can/will break backwards compatibility in some cases when AWS/Terraform introduce changes or when we need to bring a module to our standard and best practices)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@z0rc3r I remember you had some issues with some of the modules, just don’t remember with which ones

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please remind me which modules you had issues with, maybe we were in the middle of some refactoring at the time (sorry for any problems you had)

Matt Simonsen avatar
Matt Simonsen

@Andriy Knysh (Cloud Posse) this thread is great. I appreciate your candor, helpfulness and all the work your team does!

1
z0rc3r avatar

@Andriy Knysh (Cloud Posse) this one was painful https://github.com/cloudposse/terraform-aws-eks-cluster/pull/114 because I had to migrate states for this release and the following ones. I do pin versions, but also perform periodic dependency refreshes, like with any other code

my point is, with 0.x I cannot expect any stability and every version update requires rigorous review. with terraform-aws-modules, if I see an update from 4.3 to 4.6 for example, I know my code will continue working as expected

feat: use security-group module instead of resource by SweetOps · Pull Request #114 · cloudposse/terraform-aws-eks-cluster

what: use security-group module instead of resource; update tests. why: more flexible than the current implementation; brings configuration of security group/rules to one standard. references: CPCO-409

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Fair point, we’ll discuss versioning and improve it, thanks

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@z0rc3r yes, the module was kind of broken b/c of the security group update, and sorry, we did not catch it. Should not happen again at least for the minor versions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

as much as I hate that we had to break compatibility, this is another reason why we’re still on 0.x - we need to continue standardization of our modules before the interface can stabilize. this overhaul of security group management (on the heels of our context.tf updates) is one of the last major things we need to undertake.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i suspect we might also need to do one more pass at IAM roles/policies and then our modules will be ultimately flexible and standardized for interoperability.

2021-08-30

ekristen avatar
ekristen

The release notes for the vpc module say “this version is not recommended for 0.26.1”; does that mean 0.26.1 or 0.26.0, since 0.26.0 introduced a breaking change? or should 0.26.x be avoided entirely because it’s not a major bump and apparently there are some breaking changes?

ekristen avatar
ekristen

@Andriy Knysh (Cloud Posse) any chance you know or who can answer? thanks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ekristen sorry for the confusion, please use 0.25.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we started to update the modules to a new security group module, which was not completed 100%

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is already fixed, and we’ll release new versions of the other modules soon

ekristen avatar
ekristen

Copy. Should you all delete 0.26.x, mark them as pre-release, or release a 0.27.0 that’s an update of 0.25.0, to prevent people from using it? For example, I’m using renovate to keep all my modules up-to-date and now I have a lot of terraform modules asking to update to a broken version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as for VPC, there is not much difference b/w 0.25.0 and what we wanted to improve - it does create a VPC and other resources w/o any issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Releases · cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

ekristen avatar
ekristen

oh that’s new

ekristen avatar
ekristen

huh, ok, odd

ekristen avatar
ekristen

BTW, since it’s going to be breaking changes, is the plan to rev to 1.0.0?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes we need to grow up and start using 1.x.x - we’ll discuss that

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Sorry @ekristen this is partly my fault. After the 0.26.0 release went out and we realized it was broken and we did not want people building on it, I marked it pre-release after the fact. I thought that would be sufficient, but in fact it was not, for 3 reasons. First, apparently the Renovate bot does not look at the GitHub pre-release designation. Also, by the time I made it pre-release, the Terraform registry had already published it. The third problem is that our auto-release workflow also does not respect the GitHub pre-release designation, and just cuts a normal release even if it is based on a pre-release.

This all led to our normal auto-release and auto-update systems generating several releases of broken code marked as patch releases. This is all stuff we had not dealt with before and why we are still on zero-based releases, but it is stuff @Erik Osterman (Cloud Posse) wants us to get a handle on so we can move to 1.0 releases with confidence.

ekristen avatar
ekristen

Hey @Jeremy G (Cloud Posse) thanks for following up. Interestingly, the renovate bot does take pre-release into account, but might not do so retroactively. I did some troubleshooting with them and now that they are marked as pre-release, renovate seems to be ignoring them.

Appreciate the information and the work you all have done to standardize this stuff! All my private modules now follow a very similar pattern, including using the context module a lot, which makes everything much easier.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@ekristen (cc: @Erik Osterman (Cloud Posse)) Thanks for the info on Renovate bot respecting pre-release. Makes sense that it might not revert from release to pre-release the way I did it, but great to know going forward. I think probably the first thing we need to do is fix our auto-release workflow so that if the most recent release was a pre-release, it does not publish a non-prerelease version.

In any case, thank you for your patience as we work out these kinks.

ekristen avatar
ekristen

No problem at all! Thanks for all you do.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so renovatebot should refer to the source of the modules, and if the source is the registry (vs github), then I don’t even think prerelease meta is available; that’s a github thing. hence, my guess is it’s not respected when using the registry as a source. we recently enabled renovatebot at a customer site well after this whole snafu (just a few days ago), and it opened PRs for the prereleases. cc: @Dylan

https://www.terraform.io/docs/registry/modules/publish.html#releasing-new-versions

Terraform Registry - Publishing Modules - Terraform by HashiCorp

Anyone can publish and share modules on the Terraform Registry.

Dylan avatar

Since [registry.terraform.io](http://registry.terraform.io) supports semantic versioning, I guess we should be able to mark new versions of our Terraform modules as pre-releases by appending some sort of tag (-rc1, etc.) to the version number in the Github release. (Although, idk if we’ve ever had a proof of concept for this.)

Dylan avatar

In the current situation, where we want to mark something as a pre-release after the fact, we’re looking at something a lot more complicated. Looking at https://registry.terraform.io/modules/cloudposse/vpc/aws/latest there’s definitely no hint that this is a pre-release for renovatebot to pick up on, unless it were able to follow the “source code” link and parse the github release history + tags.

ekristen avatar
ekristen

renovate definitely knows how to look at different sources, ie github-releases vs github vs terraform registry. it’s possible when referring to modules via SSH it’s not using github releases and instead using raw git tags, in that case, using -rcN is valid for pre-release in semantic versioning

Michael Dizon avatar
Michael Dizon

hey everyone. i’ve been using atmos to set up my latest project over the past two weeks. it’s been great so far! i am struggling a little bit with accessing the s3 bucket and dynamodb table which were created in the master account when i switch to a (terraform) role that i created in the identity account that the account module created. it seems like a role and policy need to be created for this to work. any ideas/suggestions?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

either the role you are using to provision needs to have access to the root/master account (at least to the state S3 bucket and DynamoDB table), or you need to add a role to the backend config which has permissions to access the S3 bucket and DynamoDB

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is an example of the backend config file backend.tf.json we usually use for that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "eg-uw2-root-tfstate",
        "dynamodb_table": "eg-uw2-root-tfstate-lock",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "us-west-2",
        "role_arn": "arn:aws:iam::xxxxxxxxx:role/eg-gbl-root-terraform",
        "workspace_key_prefix": "xxxxxxx"
      }
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where the role arn:aws:iam::xxxxxxxxx:role/eg-gbl-root-terraform has the required permissions

Michael Dizon avatar
Michael Dizon

oh cool!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that for cross-account access, you need to have permissions on both sides

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s say you are using this role to provision TF components eg-uw2-identity-admin

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the role is provisioned in the identity account and has admin permissions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the role eg-gbl-root-terraform is provisioned in the root account and has permissions to access the s3 state bucket and dynamoDB table

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, you need to give the role eg-uw2-identity-admin the permissions to assume the eg-gbl-root-terraform role in the other (root) account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you need to add a trust policy to the role eg-gbl-root-terraform with permissions for eg-uw2-identity-admin to assume the eg-gbl-root-terraform role
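
A minimal sketch of those two sides (account IDs and role names are placeholders):

# in the identity account: allow the admin role to assume the root terraform role
data "aws_iam_policy_document" "assume_root_terraform" {
  statement {
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::ROOT_ACCOUNT_ID:role/eg-gbl-root-terraform"]
  }
}

# in the root account: trust policy attached to eg-gbl-root-terraform
data "aws_iam_policy_document" "root_terraform_trust" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::IDENTITY_ACCOUNT_ID:role/eg-uw2-identity-admin"]
    }
  }
}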

Michael Dizon avatar
Michael Dizon

so, iam-primary-roles is currently in the identity stack and it doesn’t look like it creates any roles in the root/master account, did i do something wrong? also, do i need to create those roles and policies on my own as a separate component, do i need to modify iam-primary-roles, or can it be done in the yaml?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in iam-primary-roles you create the primary roles in the identity account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

eg-gbl-root-terraform should be created in the root account using the iam-delegated-roles component

Michael Dizon avatar
Michael Dizon

great, that’s what I did. is that supposed to generate roles in the root/master account as well?

Michael Dizon avatar
Michael Dizon

the name for the master account is not root, in case that matters, i supplied that name in the root_account_stage_name variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is an example of YAML config for the root delegated roles

    iam-delegated-roles:
      vars:
        account_number: "xxxxxx"
        exclude_roles: ["poweruser", "helm"]
        account_role_policy_arns:
          # IAM Policy ARNs to attach to each role, overriding the defaults
          ops: ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
          observer: ["arn:aws:iam::aws:policy/job-function/ViewOnlyAccess"]
          terraform: ["root-terraform"]

        
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


the name for the master account is not root, in case that matters, i supplied that name in the root_account_stage_name variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the name could be anything

Michael Dizon avatar
Michael Dizon

ok cool

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s just the function that the account performs, and that’s root (as the root of the Org structure), management (as AWS calls it), or billing

Michael Dizon avatar
Michael Dizon

i see

Michael Dizon avatar
Michael Dizon

what does root-terraform refer to?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module will auto-generate the ARN of the role

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it will be eg-gbl-root-terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where eg is the namespace

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

gbl is the environment (since IAM is global, not region based, we call it gbl and not uw2)

Michael Dizon avatar
Michael Dizon

this is where i’m having an issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

root is the stage (account name)

Michael Dizon avatar
Michael Dizon

it only creates the roles in identity

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform is the name of the role

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it follows Cloud Posse naming convention of {namespace}-{environment}-{stage}-{name}

Michael Dizon avatar
Michael Dizon

oh i see, i just realized i need to use iam-delegated-roles in each stack

Tom Vaughan avatar
Tom Vaughan

When using terraform-aws-tfstate-backend, is it possible to use a single dynamoDB table? Currently using it for several different services on AWS; each is in a separate folder and the result is a table for each service.

Zach avatar

You only need one dynamodb table for all of your terraform locking yes

Tom Vaughan avatar
Tom Vaughan

Could you tell me what I need to set? I have tried setting the dynamodb_table_name parameter but get an error saying that the table already exists.

Alex Jurkiewicz avatar
Alex Jurkiewicz

tables are free, so I wouldn’t worry much about this. If you need one table per stack it’s nbd

Zach avatar

what exactly are you setting this on?
I have tried setting dynamodb_table_name parameter

Zach avatar

oh. OH

Zach avatar

I’m not clear on the particular use-case of this module, but if you’re going to repeatedly use it you’ll need to define unique names for buckets and tables, so yes, you’ll get 1 of each per thing you’re initializing

Tom Vaughan avatar
Tom Vaughan

ok, thanks

RB avatar

you should only need to create that once

RB avatar

then you can point each of your terraform root modules to use the dynamodb table and s3 bucket via the backend block
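
In other words, create the bucket and table once, then every root module shares them through its backend block, each with a unique key (the names here are assumptions):

terraform {
  backend "s3" {
    bucket         = "acme-tfstate"                # shared state bucket
    key            = "service-a/terraform.tfstate" # unique per root module
    region         = "us-east-1"
    dynamodb_table = "acme-tfstate-lock"           # shared lock table
    encrypt        = true
  }
}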

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

even if it was modified so you could re-use, i would advise against it. architecturally, you want to build decoupled components. 2 different state backends should not share resources.

larry kirschner avatar
larry kirschner

question about setting up ECS for a microservice-based app:

Is there any example that shows how to set up an app that is a collection of micro services where:

• each micro service has its own docker image/container definition
• micro services can route to other micro services w DNS names, e.g. uploads can make requests to <http://graphql.microservice>
• the ingress (ALB?) load balancer maps HTTP paths to different microservices, e.g. /home, /graphql

…I’ve been looking at these two modules and their examples:

https://github.com/cloudposse/terraform-aws-ecs-web-app/

https://github.com/cloudposse/terraform-aws-ecs-alb-service-task

…but I can’t tell if either is intended to support an app composed of multiple ECS micro services that can intercommunicate?

GitHub - cloudposse/terraform-aws-ecs-web-app: Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - GitHub - cloudposse/terraform-aws-ecs-web-app: Terraform module that…

GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service which exposes a web service via ALB.

Terraform module which implements an ECS service which exposes a web service via ALB. - GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service whic…

1
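
For the two routing pieces the question asks about, a minimal sketch independent of either module (the listener, target group, and names are assumed to exist):

# private DNS namespace so services can reach each other by name,
# e.g. graphql.microservice
resource "aws_service_discovery_private_dns_namespace" "this" {
  name = "microservice"
  vpc  = var.vpc_id
}

# ALB listener rule mapping an HTTP path to one service's target group
resource "aws_lb_listener_rule" "graphql" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.graphql.arn
  }

  condition {
    path_pattern {
      values = ["/graphql*"]
    }
  }
}

Each ECS service then registers into the namespace via its service_registries block.
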
AugustasV avatar
AugustasV
  count = length(aws_db_instance.db_instance) 
 on aws_cloudwatch_alarm.tf line 4, in resource "aws_cloudwatch_metric_alarm" "unhealthyhosts":
   4:   alarm_name          = "${aws_db_instance.db_instance[count.index].identifier}.${var.environment} unhealthy machine in ${aws_db_instance.db_instance[count.index].identifier}!"
    |----------------
    | aws_db_instance.db_instance is object with 66 attributes
    | count.index is 27

Why is that?

Alex Jurkiewicz avatar
Alex Jurkiewicz

looks like aws_db_instance.db_instance object (aka map) does not have an item called "27"
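
The likely cause: aws_db_instance.db_instance was declared without count or for_each, so it is a single object whose length() is its attribute count (66), and there is nothing at index 27. A sketch of the fix under that assumption (the alarm settings are placeholders):

resource "aws_cloudwatch_metric_alarm" "unhealthyhosts" {
  # no count/index needed when db_instance is a single resource
  alarm_name          = "${aws_db_instance.db_instance.identifier}.${var.environment} unhealthy machine in ${aws_db_instance.db_instance.identifier}!"
  comparison_operator = "GreaterThanOrEqualToThreshold" # placeholder
  evaluation_periods  = 1                               # placeholder
  metric_name         = "UnHealthyHostCount"            # placeholder
  namespace           = "AWS/ApplicationELB"            # placeholder
  period              = 60
  statistic           = "Average"
  threshold           = 1
}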

2021-08-31

Manuel Morejón avatar
Manuel Morejón

Hi team! Nice to be here with you. I’m facing this issue https://github.com/cloudposse/terraform-aws-elasticsearch/issues/57 Do you have any suggestions to resolve it?

IAM Policy cannot be created due to InvalidTypeException · Issue #57 · cloudposse/terraform-aws-elasticsearch

Describe the Bug ES was created without iam_role_arns. After adding it and applying it failed with: module.elasticsearch.aws_iam_role.elasticsearch_user[0]: Creating… module.elasticsearch.aws_iam…
