#terraform (2021-05)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-05-01

Matt Gowie avatar
Matt Gowie

If anybody is interested in contributing to a good open source module — we have a few good first issues in our terraform-aws-multi-az-subnets repository. They range from super easy (change a variable to a different type + rename) to figuring out the difference between two modules and writing some quick docs. Check ’em out if you’re interested!

cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Alex Jurkiewicz avatar
Alex Jurkiewicz

How does that differ from dynamic subnets?

Zach avatar


How does that differ from dynamic subnets?
That’s actually one of the documentation issues https://github.com/cloudposse/terraform-aws-multi-az-subnets/issues/23

Alex Jurkiewicz avatar
Alex Jurkiewicz

nice, didn’t even know about this aws feature

2021-05-02

marc slayton avatar
marc slayton

Multi-AZ subnets occur in more than one availability zone.

2021-05-03

msharma24 avatar
msharma24

Starting a greenfield Terraform env — the customer has Bitbucket and Bamboo. Would you recommend Atlantis over Bamboo Pipeline (YAML Specs) for Terraform automation?

Matt Gowie avatar
Matt Gowie

Yeah — save yourself reinventing the wheel.

Matt Gowie avatar
Matt Gowie

There is an #atlantis channel if you have any questions.

Jeff Behl avatar
Jeff Behl

alright, apologies for this being so general, but… how/where are folks registering the outputs of terraform for use in app configurations? e.g. my app needs to use the SQS queue created by terraform, and it uses an env file with vars and values. The env file could/should be a template of some sort, but how to get the values? CI/CD on terraform output that looks for specific outputs and pushes to consul (or any persistent place)? Some place where ansible could get facts for template generation? For both, storing the results somewhere accessible seems to be the question for us. Parsing the state file seems like a horrible idea, so I’ll discount that… thx

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

We use terraform output. Note that it parses the state file… you shouldn’t do it yourself

loren avatar

people also push to key/value stores directly, instead of outputs. for example, there is a consul provider… https://registry.terraform.io/providers/hashicorp/consul/latest/docs/resources/keys

aws parameter store is also popular. or s3, or dynamodb…

Joe Niland avatar
Joe Niland

+1 for SSM param store
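
As a minimal sketch of that approach — publishing a value to SSM Parameter Store from the same configuration that creates it (the queue resource and parameter path here are assumptions, not from the thread):

resource "aws_sqs_queue" "app" {
  name = "my-app-queue" # hypothetical queue
}

resource "aws_ssm_parameter" "app_queue_url" {
  name  = "/my-app/prod/sqs_queue_url" # hypothetical parameter path
  type  = "String"
  value = aws_sqs_queue.app.url
}

The app (or Ansible, for template generation) can then read the parameter at deploy time instead of touching Terraform state or outputs.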

Jeff Behl avatar
Jeff Behl

thanks gents.

Jeff Behl avatar
Jeff Behl

@loren we’d considered adding to dynamodb (don’t ask) in just this way. It just seems… laborious? But I don’t think there’s an easy way out of it

loren avatar

Haha the dynamodb item syntax is annoying, could use some sugar

loren avatar

We have a keystore module that tries to be a wrapper for this kind of thing… https://github.com/plus3it/terraform-aws-tardigrade-keystore

plus3it/terraform-aws-tardigrade-keystore

Terraform module to create a keystore within S3/SSM - plus3it/terraform-aws-tardigrade-keystore

Alex Jurkiewicz avatar
Alex Jurkiewicz

+1 to using another service to store your Terraform outputs. It lets you decouple the consumer and use another technology.

We use Terraform outputs purely as diagnostic help in our CD logs. Nothing automated reads them, even other Terraform configurations

Jeff Behl avatar
Jeff Behl

@Alex Jurkiewicz meaning using a terraform resource to store the results, correct? This has the definite advantage as well of not having to pipe module output up the stack, effectively making one declare it multiple times. thx

Alex Jurkiewicz avatar
Alex Jurkiewicz

sorry, I think terraform outputs are useful to send data from a module to the calling Terraform configuration. I meant “outputs” as in top-level outputs only

Jeff Behl avatar
Jeff Behl

i see - thx. just trying to figure out the easiest way to gather and store these outputs. seems it’s either parse outputs and push them to an external service, or use a terraform resource to store them in one..

Petro Gorobchenko avatar
Petro Gorobchenko

Hello everyone, looking for support on this issue. I’m looking to utilize terraform-aws-ecs-web-app and am running into this error — cache location is required when cache type is “S3” — which seems to be coming from

on .terraform/modules/ecs_web_app.ecs_codepipeline.codebuild/main.tf line 292, in resource "aws_codebuild_project" "default":
 292: resource "aws_codebuild_project" "default" {

I can’t see what configuration may be causing this. Any help on this is greatly appreciated.

Matt Gowie avatar
Matt Gowie

Hey Petro, I’d open an issue if nobody responds and try digging in yourself. If the type is S3 then you should look at the corresponding resource that is failing and see where you need to add a new variable / value to provide the bucket name / ARN.

Petro Gorobchenko avatar
Petro Gorobchenko

hey @Matt Gowie, thanks for the input. noob question, but are you referring to modifying the modules that are imported to resolve the issue? or do you mean doing that within terraform-aws-ecs-web-app?

Joe Niland avatar
Joe Niland

the ecs-codepipeline module has a cache_type variable which defaults to ‘S3’. ecs-web-app doesn’t set it explicitly or expose it as a variable. @Petro Gorobchenko you could open a PR to make this change, if you are able.
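
Roughly the shape of that pass-through, sketched as a hypothetical change inside terraform-aws-ecs-web-app (the new variable name here is made up; cache_type is the existing ecs-codepipeline input):

variable "codebuild_cache_type" {
  type        = string
  default     = "S3"
  description = "Passed through to the ecs-codepipeline module's cache_type input"
}

module "ecs_codepipeline" {
  source = "cloudposse/ecs-codepipeline/aws"
  # ...existing arguments unchanged...
  cache_type = var.codebuild_cache_type
}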

Petro Gorobchenko avatar
Petro Gorobchenko

sounds good. Ill play around with it and see if I can create a PR for it.

Petro Gorobchenko avatar
Petro Gorobchenko

added a PR, unsure if I’m missing any steps for the process. https://github.com/cloudposse/terraform-aws-ecs-web-app/pull/147

Making s3_cache_type pass through for module ecs_codepiple by pgbce · Pull Request #147 · cloudposse/terraform-aws-ecs-web-app

what S3 CacheType is explicitly set and not exposed. why Attempting to run the module is causing on .terraform/modules/ecs_web_app.ecs_codepipeline.codebuild/main.tf line 292, in resource…


2021-05-04

Pierre-Yves avatar
Pierre-Yves

Hello, how do you initialize a new disk on an Azure VM with Terraform? I am looking to automate the next step, which has to be done after these two steps:

• azurerm_managed_disk

• azurerm_virtual_machine_data_disk_attachment

What I want to do with Terraform is mount and format the disk on Windows, which is described here: Initialize a new data disk
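
One possible sketch, assuming a Windows VM and a CustomScriptExtension that runs the PowerShell initialization from that article (the VM and disk-attachment resource names are assumptions):

resource "azurerm_virtual_machine_extension" "init_data_disk" {
  name                 = "init-data-disk"
  virtual_machine_id   = azurerm_windows_virtual_machine.example.id # assumed VM resource
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"
  depends_on           = [azurerm_virtual_machine_data_disk_attachment.example] # assumed attachment

  settings = jsonencode({
    # initialize any raw disk, create a partition, and format it NTFS
    commandToExecute = "powershell -NoProfile -Command \"Get-Disk | Where-Object PartitionStyle -Eq 'RAW' | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false\""
  })
}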

mudiki08 avatar
mudiki08

Hi folks, can anyone provide me details on how to spin up a full-blown AWS EKS cluster using Terraform with self-managed nodes and a Fargate profile? There was no clear documentation on how to get started with it! I would love to start using the modules that have already been built. Thanks for your help in advance!

Mr.Devops avatar
Mr.Devops

hoping someone can help. I’m using the azurerm_role_assignment resource. What I’d like to be able to do is have a list of resources I can scope to. I did something using map types, but using it this way

variable "role" {
  type        = map(any)
  description = "The permission block for roles assignment"
  default = {
    "default_assignment" = {
      scope                = ""
      role_definition_name = "Reader"
      principal_id         = ""
    }
  }
}

would result in me setting my inputs as

role = {
    "scope-001" = {
      scope = "/subscriptions/${local.sub_id}"
      role_definition_name = "Contributor"
      principal_id         = dependency.identity.outputs
    },
    "scope-002" = {
      scope = "/subscriptions/${local.sub_id}"
      role_definition_name = "Reader"
      principal_id         = dependency.identity
    }
}

whereas instead I would like to use something like this

role = {
    "scope-001" = {
      scope = "/subscriptions/${local.sub_id}"
      role_definition_name = "Contributor"
      principal_id         = dependency.identity.outputs
    },
    {
      scope = "/subscriptions/${local.sub_id}"
      role_definition_name = "Reader"
      principal_id         = dependency.identity
    }
}

Pierre-Yves avatar
Pierre-Yves

hello, you say you want to use a list of maps but you are using a map of maps.

it will be simpler with this

variable "role" {
  type        = list(any)
  description = "The permission block for roles assignment"
  default = [
    {
      scope                = ""
      role_definition_name = "Reader"
      principal_id         = ""
    }
  ]
}

then you can use a for_each loop to go through it, as sketched below
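
A minimal sketch of that loop (for_each needs a map or set, so the list is keyed by index here):

resource "azurerm_role_assignment" "this" {
  for_each = { for idx, r in var.role : tostring(idx) => r }

  scope                = each.value.scope
  role_definition_name = each.value.role_definition_name
  principal_id         = each.value.principal_id
}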

2021-05-05

Adrian avatar

hey, I used terraform-aws-elastic-beanstalk-environment to create an Elastic Beanstalk env. I want to upload a new Docker image. Is there a bucket for this? I didn’t see any reference for this.

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Saichovsky avatar
Saichovsky

I have set up security hub using terraform and part of the resources include a lambda which gets triggered by an EventBridge rule. So whenever I run terraform apply, a new aws_cloudwatch_event_target resource is created as a trigger attached to the existing lambda. So we have a duplication of triggers to the lambda, with the latest one being the active one and the former being disabled. Both triggers have the same ARN, but they have separate IDs

resource "aws_cloudwatch_event_target" "event_target" {
    arn            = "arn:aws:lambda:eu-west-1:123456789012:function:service-security_hub_to_jira"
    event_bus_name = "default"
    id             = "eng-security_hub_to_jira_rule-terraform-20210505102803432000000001"
    rule           = "eng-security_hub_to_jira_rule"
    target_id      = "terraform-20210505102803432000000001"
}

This is the output from terraform state show. It only lists one resource when i provide the resource address, but in the lambda console, under triggers, I have two EventBridge resources with the same ARN, but one is enabled and the other disabled.

  1. Is this a bug in terraform?
  2. Is there a way to have terraform apply ID the event rule by ARN and not by id which is not even viewable on the AWS console?
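
One commonly suggested way to avoid the generated-suffix target IDs is to set target_id explicitly so the target stays stable across applies (the resource names here are assumptions, not from your state):

resource "aws_cloudwatch_event_target" "event_target" {
  rule      = aws_cloudwatch_event_rule.security_hub_to_jira.name # assumed rule resource
  target_id = "security-hub-to-jira"                              # fixed ID instead of terraform-2021...
  arn       = aws_lambda_function.security_hub_to_jira.arn        # assumed lambda resource
}
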
Sergey Kvetko avatar
Sergey Kvetko

Hi! Could somebody make a release of https://github.com/cloudposse/terraform-provider-utils with darwin_arm64 support?

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

matt avatar

Sure, I’ll look into that

matt avatar

Although I think it won’t be possible right now as discussed in this issue

darwin/arm64 build · Issue #27257 · hashicorp/terraform

I didn't see an existing issue, so I thought I'd open an issue to track building an arm64 (Apple Silicon) binary for macOS. After migrating to a new Mac, I have seen at least one issue usin…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh interesting. @dansimau used to be at uber. He wrote https://github.com/uber/astro. I guess that’s why development stalled on it.

uber/astro

Astro is a tool for managing multiple Terraform executions as a single command - uber/astro

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know the minimal IAM policy required to read SQS messages?

wannafly37 avatar
wannafly37

I’m using a queue policy with:

      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does it need Send to read SQS messages?

wannafly37 avatar
wannafly37

Probably not, but my app uses the same perms for producer and consumers

Alex Jurkiewicz avatar
Alex Jurkiewicz

You may need KMS key access too if you are using a cmk
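
For reference, a minimal consumer-side policy sketch (the queue resource name is an assumption; add kms:Decrypt on the CMK if the queue is encrypted, per Alex’s note):

data "aws_iam_policy_document" "sqs_consumer" {
  statement {
    actions = [
      "sqs:ReceiveMessage",
      "sqs:DeleteMessage",
      "sqs:GetQueueAttributes",
      "sqs:GetQueueUrl",
    ]
    resources = [aws_sqs_queue.this.arn] # assumed queue resource
  }
}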

Mr.Devops avatar
Mr.Devops
04:42:30 PM

Reposting just in case anyone missed and willing to help

(See the azurerm_role_assignment question posted above on 2021-05-04.)
Release notes from terraform avatar
Release notes from terraform
10:03:41 PM

v0.15.2 0.15.2 (May 05, 2021) ENHANCEMENTS: terraform plan and terraform apply: Both now support a new planning option -replace=… which takes the address of a resource instance already tracked in the state and forces Terraform to upgrade either an update or no-op plan for that instance into a “replace” (either destroy-then-create or create-then-destroy depending on configuration), to allow replacing a degraded object with a new object of the same configuration in a single action and preview the…

Matt Gowie avatar
Matt Gowie

Huh -replace is interesting. Figure it’s the same as doing a terraform state rm $RESOURCE && terraform apply -target=$RESOURCE. I’m sure I’ve done that… but not too many times.

Zach avatar

more like ‘terraform taint <resource>; terraform apply’

Zach avatar

doing state rm doesn’t destroy the resource, you’d have a conflict with anything that had the same names/ids on the apply

loren avatar

From the hangops slack…

Matt Gowie avatar
Matt Gowie

Ah yeah, that is my goof in regards to rm > taint

Matt Gowie avatar
Matt Gowie

Is there a big presence of terraform folks in the hangops slack? I haven’t poked my head in there in a couple years.

loren avatar

Mostly just apparentlymart, with any regularity, but that’s just enough to be useful

Matt Gowie avatar
Matt Gowie

Yeah I could see just him being around being a big benefit.

loren avatar

I muted every other channel there, that slack has a way low signal:noise otherwise

Zach avatar

terraform and aws channels are incredibly helpful on that slack though

Dhaval Dedhia avatar
Dhaval Dedhia

Hi, I am trying to create multiple DataDog monitors via Terraform and I am faced with a weird issue. Can anyone please help me out here? My resource block looks like this (which is inspired by Cloudposse’s module):

resource "datadog_monitor" "monitor" {
  for_each = var.datadog_monitors

  name                = each.value.name
  type                = each.value.type
  query               = each.value.query
  message             = format("%s%s", each.value.message, var.alert_tags)
.
.
.
.
.
.
.
}

And in my tfvars file, I have a map of monitor configs which I pass in:

datadog_monitors = {
high-error-logs = {
    name                = "[P2] [uds] [prod] [naea1] monitor name here"
    type                = "log alert"
    query               = "logs("service:service-name platform:platform-name environment:prod status:error env:env-name region:us-east-1").index("*").rollup("count").last("10m") > 50"
    tags                = ["managed_by:Terraform", "platform:platform-name", "environment:prod", "env:env-name", "service:service-name", "region:us-east-1"]
  }
}

I am not able to pass in the query exactly like this because of the double quotes (“) in the query value. I tried to replace “ with ‘, but that won’t work because the query then becomes invalid. I even tried to escape the quotes in the middle with a backslash, but that gives me errors as well. I am stuck. Has anybody else faced a similar issue before and can help me out please?

Matt Gowie avatar
Matt Gowie

You can try doing the following:

query               = <<-EOT
  logs("service:service-name platform:platform-name environment:prod status:error env:env-name region:us-east-1").index("*").rollup("count").last("10m") > 50
EOT
Matt Gowie avatar
Matt Gowie

I believe that will do it.

Matt Gowie avatar
Matt Gowie

Or use the https://github.com/cloudposse/terraform-datadog-monitor module and define your monitors via YAML, which is great.

cloudposse/terraform-datadog-monitor

Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor

Dhaval Dedhia avatar
Dhaval Dedhia

Yup, that worked. I had tried to use the heredoc style, but I got an error because I put everything on a single line. This works, however. Thank you so much!

Dhaval Dedhia avatar
Dhaval Dedhia

And the reason why i cannot use cloudposse’s module is because of the yaml configuration. I will have to reference that for my monitor values. And almost all the remaining monitors/infra that we have uses tfvars in its native form.

managedkaos avatar
managedkaos

Hey, Team!  Question:  When you encounter a catastrophic error in TF (crash or resource conflict), what’s the best way to find and/or recover any resources that  have been created but not written to state?

Example, the TF config says create resource named X but resource X already exists (manually created, from another TF project, etc).  So TF encounters an error and stops processing (at best) or crashes (at worst).  The resources created up to that point may have not been written to state prior to the stop/crash.

On small projects, I’ve gone through the console or CLI and manually removed things.  But I’m wondering if there’s a better way in the event a project contains hundreds (or more!) resources all over the place.  TIA!

msharma24 avatar
msharma24

I usually start with TF_LOG=DEBUG and running plan to observe the logs

2021-05-06

SecOH avatar

Hello, guys. I have a question. Is it mandatory that the AmazonMQ (RabbitMQ) security group’s egress rule allow all outbound traffic (egress 0.0.0.0/0)? https://github.com/cloudposse/terraform-aws-mq-broker/blob/3951c8e1cf4faf94c3c92b2b01d26b078bc60d88/sg.tf#L8 If I create a security group for MQ, the egress rule must be added to my SG.

cloudposse/terraform-aws-mq-broker

Terraform module for provisioning an AmazonMQ broker - cloudposse/terraform-aws-mq-broker

Matt Gowie avatar
Matt Gowie

@SecOH not totally sure. I’m not sure if the AMQ service requires an outside connection to do updates or similar… but I would guess so. If you want to put up a PR to add an additional _enabled bool flag which disables adding the egress rule, then we’d be happy to review and get it merged, I’m sure.
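
The flag Matt describes might look roughly like this inside the module (the variable name and SG resource are assumptions):

variable "egress_enabled" {
  type        = bool
  default     = true
  description = "Whether to create the allow-all egress rule"
}

resource "aws_security_group_rule" "egress" {
  count             = var.egress_enabled ? 1 : 0
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.default.id # assumed SG resource in the module
}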

SecOH avatar

@Matt Gowie Happy to get your reply, thank you. I will make a PR if I get some free time!

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

If any of y’all use GitHub and you want to see Dependabot support for Terraform, there is a way to help: https://github.com/dependabot/dependabot-core/issues/1176#issuecomment-833383992

Tl;dr: dependabot is working on HCL2/tf0.14/tf0.15 support and is asking for any people with public repos interested in testing

greg n avatar

Hello guys, I just ran into https://github.com/cloudposse/terraform-aws-multi-az-subnets not supporting the var vpc_default_route_table_id like https://github.com/cloudposse/terraform-aws-dynamic-subnets#input_vpc_default_route_table_id does. Could that be useful and worth raising a GH issue for?

cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Matt Gowie avatar
Matt Gowie

@greg n GH issue or a small PR to help support would be much appreciated!

Michael Dizon avatar
Michael Dizon

anyone know of a way to conditionally create a resource (in my case an aws_lambda_function) only if another resource exists (an aws_ecr_image that gets uploaded as a separate process outside of TF)?

pjaudiomv avatar
pjaudiomv

For stuff like that I usually set a var from a bash script in my pipeline before terraform runs and have it conditionally create based off that

Michael Dizon avatar
Michael Dizon

interesting, do you have an example?

pjaudiomv avatar
pjaudiomv

you would have a variable, say a bool create_lambda, and only create if that var is true. Before terraform runs, for ECR you could do something like

aws ecr list-images --repository-name anchore | jq -r '.imageIds[].imageTag' | grep -w -q 1.0.0 && export TF_VAR_create_lambda=true || export TF_VAR_create_lambda=false

so if the tag 1.0.0 exists it sets the var to true otherwise false
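
The consuming side would then look something like this sketch (function and role names are made up; a container-image lambda is assumed since the gate is an ECR tag):

variable "create_lambda" {
  type    = bool
  default = false
}

resource "aws_lambda_function" "this" {
  count         = var.create_lambda ? 1 : 0
  function_name = "anchore-scanner"                                    # hypothetical
  role          = aws_iam_role.lambda.arn                              # assumed role resource
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.anchore.repository_url}:1.0.0" # assumed repo resource
}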

Michael Dizon avatar
Michael Dizon

i’m using TF Cloud, does that make a difference?

pjaudiomv avatar
pjaudiomv

idk, I’ve never used TF Cloud

Michael Dizon avatar
Michael Dizon

I’ll figure it out. Thanks for the tip!

greg n avatar

#!/usr/bin/env bash
set -eu
set -o pipefail
# set -x
###
# A shell script to be used as a Terraform external datasource to fetch AMI as created by ImageBuilder pipeline.
# Can't use a normal aws_ami datasource as that errors out if there's no result, giving a chicken&egg situation.
#
# Usage:
# data "external" "imagebuilder_ami" {
#   program = ["bash", "${path.module}/files/jq-ext-latest-imagebuilder-arn.sh"]
#   query = {
#     JQ_AWS_REGION = "eu-west-2"
#     JQ_IMAGE_NAME = "xxxx-ami-builder"
#   }
# }
#
# Unit Testing:
#   echo '{"JQ_AWS_REGION": "eu-west-2", "JQ_IMAGE_NAME": "xxxxx-ami-builder"}' | \
#     ./files/jq-ext-latest-imagebuilder-arn.sh
###

# Use JQ's @sh to escape the datasource arguments & eval to set env vars
eval "$(jq -r '@sh "JQ_AWS_REGION=\(.JQ_AWS_REGION) JQ_IMAGE_NAME=\(.JQ_IMAGE_NAME)"')"

aws imagebuilder list-images  --output json --owner 'Self'                               | \
    jq -r --arg JQ_IMAGE_NAME "${JQ_IMAGE_NAME}" '.imageVersionList[] |
        select(.name == $JQ_IMAGE_NAME) | [.] |
        max_by(.dateCreated).arn'                                                        | \
    xargs -n1 -I% aws imagebuilder --output json get-image --image-build-version-arn %   | \
    jq --arg JQ_AWS_REGION "${JQ_AWS_REGION}"                                              \
      '.image.outputResources.amis[] | select( .region == $JQ_AWS_REGION)'              || \
    true

greg n avatar

That will give you an external data source that won’t fail if an AMI isn’t found.

data "external" "imagebuilder_ami" {
  program = ["bash", "${path.module}/files/jq-ext-latest-imagebuilder-arn.sh"]
  query = {
    JQ_AWS_REGION = var.AWS_REGION
    JQ_IMAGE_NAME = "${local.full_name}-ami-builder"
  }
}

Michael Dizon avatar
Michael Dizon

amazing, i’m going to look at this over the weekend

greg n avatar

and I use it like this:

image_id  = length(keys(data.external.imagebuilder_ami.result)) > 0 ? data.external.imagebuilder_ami.result.image : data.aws_ami.ubuntu.id

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

could this be an asymmetric routing issue?

Release notes from terraform avatar
Release notes from terraform
07:03:45 PM

v0.15.3 0.15.3 (May 06, 2021) ENHANCEMENTS: terraform show: Add data to the JSON plan output describing which changes caused a resource to be replaced (#28608) BUG FIXES: terraform show: Fix crash for JSON plan output of new resources with sensitive attributes in nested blocks (https://github.com/hashicorp/terraform/issues/28624)…

2021-05-07

Paul Robinson avatar
Paul Robinson

Hi @Matt Gowie I’ve just joined following a couple of PRs to the terraform-aws-multi-az-subnets module. I have a question about this if you can explain please? https://github.com/cloudposse/terraform-aws-multi-az-subnets/pull/48#pullrequestreview-649820152
We are not going to support the use case of private subnets without NAT gateways, at least not in this module.
I saw you reviewed the follow up PR #50. Is there any contextual design discussion that I can read up on please?

Fix nat_gateway_enabled=false Invalid index error by paulrob-100 · Pull Request #48 · cloudposse/terraform-aws-multi-az-subnets

what Fix #44 Same test from #45 applied and still fails Readded test after merge of #47 and using us-east-2 as advised Tried to retain the output_map AZ => null design choice when nat_gateway_e…

Matt Gowie avatar
Matt Gowie

Hey @Paul Robinson — I don’t believe that was discussed. More just a unilateral decision. But your point about gateway load balancers in your final comment seems like a valid one to me.

@Jeremy G (Cloud Posse) can you please review Paul’s comment and discuss? If the functionality is disabled by default and it’s supporting a valid use-case with newer AWS patterns then I don’t see why we would turn away an eager contributor who is willing to implement.

Paul Robinson avatar
Paul Robinson

Thanks both. Yeah it was one of the reasons for choosing this module.

Private subnets without routes to nat gateways are standard with the advent of transit gateway and the gateway load balancer.

Linking again to an aws blog with reference VPCs. https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-gateway-load-balancer-supported-architecture-patterns/

It doesn’t seem like the existing module is incompatible here.

Introducing AWS Gateway Load Balancer: Supported architecture patterns | Amazon Web Services

Customers often ask me how they can maintain consistent policies and practices as they move to the cloud, especially as it relates to using the network appliances. They trust third-party hardware and software appliances to protect and monitor their on-premises traffic, but traditional appliance deployment models are not always well suited to the cloud. Last […]

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Paul Robinson Thank you for bringing this up.

First, this pattern of using a Gateway Load Balancer appliance is new to me, and AFAIK, new to Cloud Posse. In general, as the saying goes, we like to “eat our own dog food”, which means we only publish modules and features that we use so we can design the features appropriately and ensure that the module works in practice. This is not a hard-and-fast rule, but rather a reflection of our values. We do often accept contributions to modules that add features we have not used and do not plan on using, and we are grateful for the community support in enhancing the modules to be more useful to more people.

We recently had an internal discussion on the topic of private subnets without gateways due to a different PR and decided not to support them, because it adds a surprising amount of complexity to a module to make them completely optional. This was also in part due to the fact that we had never seen a use case for them and did not contemplate one, so this was not dog food we were ever going to eat. Thank you for educating me on this emerging use case; I will keep it in mind in future PR reviews.

Also, it was not at all clear to me that part of the enhancement you were seeking with your PR was enabling private subnets without NAT. I did not think it was important to you, I thought you were just trying to generalize.

We have 3 modules for creating subnets:

• terraform-aws-named-subnets

• terraform-aws-dynamic-subnets

• terraform-aws-multi-az-subnets

As far as I can recall, for the past 2 years we have only used terraform-aws-dynamic-subnets in client engagements, which makes me personally (and this is definitely me and not Erik or Cloud Posse in general) less interested in maintaining or enhancing the other 2 modules, because we have limited resources and are having trouble keeping up with all the PRs across all our modules, so I would rather we not try to keep 3 modules whose differences are difficult to articulate. (See, for comparison, our decision to deprecate and eventually freeze terraform-terraform-label in favor of terraform-null-label.)

Furthermore, when it comes to creating subnets, if you are only creating private subnets without gateways, I personally (and again, not speaking for Cloud Posse) do not see much point in using a Terraform module. It is easy enough to do just using the AWS provider directly and the Terraform built-in function cidrsubnet.
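
For example, a plain-provider sketch of gateway-less private subnets with cidrsubnet (the VPC resource, AZ list, and CIDRs are illustrative):

variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_subnet" "private" {
  count             = length(var.azs)
  vpc_id            = aws_vpc.main.id                           # assumed VPC resource
  availability_zone = var.azs[count.index]
  cidr_block        = cidrsubnet("10.0.0.0/22", 2, count.index) # carves /24s out of the /22
}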

So that is the long story behind the terse
We are not going to support the use case of private subnets without NAT gateways, at least not in this module.
I suggest you look at terraform-aws-named-subnets if you want to create private subnets without gateways. If that doesn’t work for you, then we can discuss what you need and how to get it done in the most appropriate way.

cloudposse/terraform-aws-named-subnets

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

cloudposse/terraform-terraform-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)
Do not require NAT gateway IDs for private subnets by Nuru · Pull Request #51 · cloudposse/terraform-aws-multi-az-subnets

what Do not require NAT gateway IDs for private subnets why Users should be able to create subnets without NAT gateways Implemented in #48, which was closed in favor of #50, but #50 left this fe…

Paul Robinson avatar
Paul Robinson

Thanks @Jeremy G (Cloud Posse) for your detailed and insightful response and also your PR.

I can appreciate the pain of maintaining so many modules and the need to deprecate the lesser used/older ones. I also appreciate the time you spend building community and encouraging contributions :raised_hands:

I found a few people asking about the differences between the various subnets modules in this channel. Indeed, there is an open issue on the terraform-aws-multi-az-subnets module. Perhaps I can add my thinking which informed my choice of module.

I did also consider the terraform-aws-dynamic-subnets and terraform-aws-named-subnets modules, but found the former to be simplistic for my needs and the latter to be limited to a single AZ. It does seem at least 2 modules are needed here.

The low barrier to entry terraform-aws-dynamic-subnets module is perfect as a starter/utility/shared services VPC . It splits the VPC CIDR range equally into public and private subnets. This means you are likely to choose a larger CIDR range due to generally needing more ips in the private subnet ranges. Also if you use VPC endpoints, they would probably be located in the public subnets since there are likely spare ips in those subnets.

However I like to use smaller VPC CIDR ranges and make best use of the available CIDR range by having smaller public subnets and relatively larger private subnets. I usually split the private subnets into app/data and a separate one for vpc endpoints. This is a safety design such that, for example, scaling lambda VPC ips cannot exhaust the data subnet range. Also a security design since AWS service traffic and gateway traffic is routed internally to the AWS backbone (which can also lead to a design with no NAT gateway as with this thread). Also a separate VPC endpoints subnet allows NACL rules and/or security groups on the endpoints. For that VPC design, the terraform-aws-multi-az-subnets is perfect.

As to the other benefits of the cloudposse modules, there’s the tagging support and features of the terraform-null-label, the clearly thoughtful implementation and versioning, the CI github actions/PR process, community and terraform registry. Many reasons to contribute rather than roll my own.

I also note the similarities between multi-az-subnets and dynamic-subnets. Your recent switch to using for_each from count in multi-az-subnets is a good example of an important contribution to robustness, btw. I can see how some of these PRs would ideally be done on each subnet module (eg/ #49).

I did also consider the named-subnets module but would have had to use for_each at the module level (one for each AZ), and I saw the multi-az-subnets properly calculates the subnet ranges for n AZs for the user, so multi-az-subnets was the winner for our needs.

All considered, it’s difficult to see how to reduce the number of modules to reduce maintenance burden. Perhaps if dynamic-subnets was extended to support multiple categories of private subnets and varying subnet cidr ranges per category, possibly using a sub-module? And make the public subnet optional? And incorporate the named subnets differences if any. This might complicate the interface tbh. I like the simplicity of dynamic-subnets, which I’m sure contributes to its popularity.

Thanks again!

Document the difference between this module and terraform-aws-dynamic-subnets · Issue #23 · cloudposse/terraform-aws-multi-az-subnets

This org offers two repo's with Terraform modules that almost seem to do similar things: https://github.com/cloudposse/terraform-aws-dynamic-subnets and this repo. Is it an idea to document wha…

Added Assign Public IP on Launch by nadenf · Pull Request #49 · cloudposse/terraform-aws-multi-az-subnets

what Provides option to auto-assign public IP addresses for public subnets. why EKS needs auto-assign public IP enabled for public subnets.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Thank you @Paul Robinson for your review of the 3 modules and the differences between them. I would appreciate it if you would edit it slightly and post it as a comment to https://github.com/cloudposse/terraform-aws-multi-az-subnets/issues/23 to help us get started on the requested documentation.

Thank you also for educating me about uses of subnets without gateways.

Paul Robinson avatar
Paul Robinson

great yes I think that I can help there

Paul Robinson avatar
Paul Robinson

@Jeremy G (Cloud Posse) @Matt Gowie I’ve added my comment to https://github.com/cloudposse/terraform-aws-multi-az-subnets/issues/23.

Any opinions welcome. Hope it’s useful

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Paul Robinson This is great! Thank you so much. @Erik Osterman (Cloud Posse) I suggest you look at Paul’s documentation because I agree with him on some key points moving forward:

• All the modules are missing support for some attributes/features, such as IPv6 support and map_customer_owned_ip_on_launch

• dynamic-subnets and multi-az-subnets are practically identical, but adding support for the new features would mean updating both of them.

• dynamic and named are both still using count instead of for_each, and one reason for not changing them is that it would cause existing installations to attempt to delete and re-create the subnets

• Given the feature overlap, missing features, and outdated nature of some of the code, it probably would be best going forward to either merge these 3 into 1 or create a modern 4th module and deprecate all 3 of the existing ones. I favor the latter, because if we merge them all into, say, multi-az (which is the only one using for_each), it may make it harder for existing users of multi-az to transition. Also, a new module could address the multiple outstanding issues with the existing modules without having to worry about backward compatibility.

Paul Robinson avatar
Paul Robinson

Glad to help :slightly_smiling_face:

Just a comment on the count change to for_each in the multi-az module in v0.12.0: I used the tf state management tools to upgrade an array index to the availability zone string, but it’s a manual process and great care is required. eg/ module.private_app_subnets.aws_subnet.private[0] becomes module.private_app_subnets.aws_subnet.private["us-east-1a"], and the same for the route tables etc.

You are very clear that modules should be pinned, so these sorts of upgrades are to be expected of course.

I think the missing feature that could force another change is the availability_zone_id which is useful in multi-account peering/transit-gateway scenarios. To minimise costs in a peering scenario you would want the subnets deployed in the same az_id across accounts. In terms of module implementation, this would likely mean adding the az_id to the for_each instance index (probably replacing the availability zone name). That change would result in the same state management headache.

Another one I thought of is the case where you want m subnets placed in n availability zones, where m>n. In that case, the state needs a unique identifier for the for_each instance index, so again there’s a state move. This might be what’s required to add subnets to an existing vpc, for example allocating new subnets in a large VPC CIDR range, or when allocating new subnets after adding a new CIDR range to the VPC.

Migrating from one module to another comes with additional complexity however. It would be nice if there was a tool which could manage the state moves needed from one module to another. Are you aware of anything like that?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

No, I know of no tools other than Terraform for managing Terraform state.

I don’t know of any resources that take availability zone IDs as inputs, and I would not expect there to be any, as that sort of defeats the purpose of having a dynamic mapping of AZs to AZ IDs. I think you need to manually manage your AZ selection based on AZ IDs, so I do not think AZ IDs will come into play in modules or for_each keys.

Matt Gowie avatar
Matt Gowie

@Paul Robinson there is https://github.com/minamijoyo/tfmigrate but it isn’t full featured and is still a good bit of work.

minamijoyo/tfmigrate

A Terraform state migration tool for GitOps. Contribute to minamijoyo/tfmigrate development by creating an account on GitHub.

Paul Robinson avatar
Paul Robinson

Cool, thanks for the link @Matt Gowie. tfmigrate used with the json form and jq might work for loops.

Paul Robinson avatar
Paul Robinson

@Jeremy G (Cloud Posse) it is useful in multi-account scenarios where you are using NLBs to expose a service in the service account as a local ENI in the client account. For that case, the NLB is defined in the service account and needs to know which AZ IDs to enable in the NLB. https://docs.aws.amazon.com/vpc/latest/privatelink/endpoint-service-overview.html#vpce-endpoint-service-availability-zones

Gateway Load Balancer also tries to route within the same AZ iirc.

The underlying subnet resource allows it as an input to handle these scenarios. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet#availability_zone_id

Also see https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-gateway-load-balancer.html#gwlbe-limitations
When you create a Gateway Load Balancer endpoint, the endpoint is created in the Availability Zone that is mapped to your account and that is independent from other accounts. When the service provider and the consumer are in different accounts, use the Availability Zone ID to uniquely and consistently identify the endpoint Availability Zone. For example, use1-az1 is an Availability Zone ID for the us-east-1 Region and maps to the same location in every AWS account.

To keep traffic within the same Availability Zone, we recommend that you create a Gateway Load Balancer endpoint in each Availability Zone that you will send traffic to.

VPC endpoint services for interface endpoints - Amazon Virtual Private Cloud

The following are the general steps to create an endpoint service for interface endpoints.

Gateway Load Balancer endpoints (AWS PrivateLink) - Amazon Virtual Private Cloud

A Gateway Load Balancer endpoint enables you to intercept traffic and route it to a service that you’ve configured using Gateway Load Balancers , for example, for security inspection. The owner of the service is the service provider , and you, as the principal creating the Gateway Load Balancer endpoint, are the

managedkaos avatar
managedkaos

anyone used this app to go from infra back to code? I’ve used terraformer but this looks slicker since it generates TF, CFN, CDK, and even Pulumi

https://former2.com/

Former2

Convert your existing cloud resources into CloudFormation / Terraform / Troposphere

jose.amengual avatar
jose.amengual

only Terraformer from google

Sachin c avatar
Sachin c

Hi Team, I was trying to use the latest version of the cloudposse/ec2-autoscale-group/aws module and found that the CloudWatch alarm name is duplicated in the default alarms.

Expected behavior:

  # module.autoscale_group.aws_cloudwatch_metric_alarm.all_alarms["cpu_high"] will be created
  + resource "aws_cloudwatch_metric_alarm" "all_alarms" {
      + actions_enabled                       = true
      + alarm_actions                         = (known after apply)
      + alarm_description                     = "Scale up if CPU utilization is above 70 for 120 seconds"
      + alarm_name                            = "appname-prod-backend-cpu-utilization-high"

Actual Result:

  # module.autoscale_group.aws_cloudwatch_metric_alarm.all_alarms["cpu_high"] will be created
  + resource "aws_cloudwatch_metric_alarm" "all_alarms" {
      + actions_enabled                       = true
      + alarm_actions                         = (known after apply)
      + alarm_description                     = "Scale up if CPU utilization is above 70 for 120 seconds"
      + alarm_name                            = "appname-prod-backend-appname-prod-backend-cpu-utilization-high"

2021-05-08

Alec Fong avatar
Alec Fong

Hello! What’s the differences between terraform-aws-multi-az-subnets and terraform-aws-dynamic-subnets? Why/when would I use one over the other?

cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Alec Fong avatar
Alec Fong

Ah I see — multi-az is more explicit in defining each public and private subnet, whereas dynamic creates both subnets for you.

Alec Fong avatar
Alec Fong

If starting fresh would there be any benefits to using multi-az?

jose.amengual avatar
jose.amengual

we usually use dynamic subnets

Brij S avatar

Hey all, I’m trying to set up EKS with managed node groups with the following config

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "eks-vpc"
  cidr = "172.21.0.0/16"

  azs            = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1]]
  public_subnets = ["172.21.0.0/20", "172.21.16.0/20"]

  enable_nat_gateway                = false
  enable_vpn_gateway                = true
  propagate_public_route_tables_vgw = true

  tags = merge(var.tags, {
    "kubernetes.io/cluster/eks" = "shared"
  })

  public_subnet_tags = merge(var.tags, {
    "kubernetes.io/cluster/eks" = "shared"
  })
}

module "eks" {
  source                          = "terraform-aws-modules/eks/aws"
  version                         = "15.2.0"
  cluster_name                    = "eks"
  cluster_version                 = "1.19"
  subnets                         = module.vpc.private_subnets
  vpc_id                          = module.vpc.vpc_id
  cluster_enabled_log_types       = ["scheduler"]
  tags                            = var.tags
  cluster_endpoint_private_access = true

  node_groups_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 20
    # subnets   = module.vpc.private_subnets
  }
  node_groups = {
    gitlab-eks = {
      name             = "gitlab-eks"
      desired_capacity = 3
      max_capacity     = 5
      min_capacity     = 3
      instance_types   = ["t3.2xlarge"]
      capacity_type    = "ON_DEMAND"
    }
  }
}

However, I keep running into the following error;

Error: List shorter than MinItems
  on .terraform/modules/eks/modules/node_groups/node_groups.tf line 8, in resource "aws_eks_node_group" "workers":
   8:   subnet_ids    = each.value["subnets"]
Attribute supports 1 item minimum, config has 0 declared

Has anyone else run into this? I’ve looked in the eks module issues and haven’t found anything, and I also tried adding/removing the subnet in the nodegroup defaults, with no success

loren avatar

It looks like you’re creating public subnets in the vpc module, but using private subnets in the eks module, so subnets is indeed an empty list
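
Sketched against the config above, the fix is either to point the cluster at the public subnets or to actually create private ones (the extra CIDRs here are illustrative):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "eks-vpc"
  cidr = "172.21.0.0/16"

  azs             = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1]]
  public_subnets  = ["172.21.0.0/20", "172.21.16.0/20"]
  private_subnets = ["172.21.32.0/20", "172.21.48.0/20"] # so module.vpc.private_subnets is non-empty

  enable_nat_gateway = true # private nodes need a route out to pull images

  # ...remaining arguments as above...
}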

Brij S avatar

oh man, good eye! thanks


2021-05-09

RO avatar

Hi everyone

Building a VPC with 3 subnets using Terraform. I have some more to add to it, but I want to know if anyone has a ready recipe I can check against for any mistakes I may have made

2021-05-10

François Davier avatar
François Davier

Hi, I want to use https://registry.terraform.io/modules/cloudposse/cloudwatch-events/aws/latest. The target is to monitor whether an AWS Backup job is ok/ko when copying a restore point from the source region to the target region’s vault. Do you have some example please? Thank you

Rhys Davies avatar
Rhys Davies

Hey can anyone recommend an article or series or blog post about how to correctly structure the layers in a terraform project

David Morgan avatar
David Morgan

hi, i am trying to use terraform-aws-modules/dynamodb-table/aws 0.13.0. when i specify ttl_attribute = “ttl” i get the following message

An argument named "ttl_attribute" is not expected here

this is what i’m trying to run

module "cache_dynamo_table_forum_post_count" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "0.13.0"

  name      = "mytable_name"
  hash_key  = "my_id"
  billing_mode   = "PAY_PER_REQUEST"
  ttl_attribute = "ttl"
}

thanks

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

ttl_attribute -> ttl_attribute_name

Oliver avatar

Hi David, I think the variable name is ttl_attribute_name
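
i.e. applied to the block above, only the argument name changes:

module "cache_dynamo_table_forum_post_count" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "0.13.0"

  name               = "mytable_name"
  hash_key           = "my_id"
  billing_mode       = "PAY_PER_REQUEST"
  ttl_attribute_name = "ttl"
}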

Oliver avatar

oh you beat me to it

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

The fastest in the West

David Morgan avatar
David Morgan

verified - thanks for the quick response!

2021-05-11

Jason avatar

Hi all I’m using terraform-aws-transit-gateway (https://github.com/cloudposse/terraform-aws-transit-gateway) to create TGW and share it with external principals.

I faced an issue when sharing TGW with external principal as below

Error: error reading EC2 Transit Gateway: InvalidTransitGatewayID.NotFound: Transit Gateway tgw-090ff1710310403a7 was deleted or does not exist.
        status code: 400, request id: 836b5c87-7b76-44f9-b318-f1fbf47fa785

  on ../../../modules/tgw/main.tf line 49, in data "aws_ec2_transit_gateway" "this":
  49: data "aws_ec2_transit_gateway" "this" {

The reason is this module has a check

data "aws_ec2_transit_gateway" "this" {
  id = local.transit_gateway_id
}

As you may know, for an external principal, the share needs to be accepted from the second AWS account before the TGW can be seen and defined as a data source

My question is whether we have a timeout/delay in the data source, or a dependency to wait for the accepter to accept the sharing, so it can proceed to the next steps?

Thanks everyone

cloudposse/terraform-aws-transit-gateway

Terraform module to provision AWS Transit Gateway, AWS Resource Access Manager (AWS RAM) Resource, and share the Transit Gateway with the Organization or another AWS Account. - cloudposse/terraform…
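
The data source itself doesn’t expose a wait, so one hedged workaround is a time_sleep resource (from the hashicorp/time provider) between the share acceptance and the lookup — the RAM association resource name here is an assumption:

resource "time_sleep" "wait_for_share" {
  depends_on      = [aws_ram_principal_association.this] # assumed RAM association resource
  create_duration = "120s"
}

data "aws_ec2_transit_gateway" "this" {
  id         = local.transit_gateway_id
  depends_on = [time_sleep.wait_for_share]
}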

Jason avatar

Does anyone face the same issue as me?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

There’s been some discussion here and other forums about third-party account access provided to vendors. Has anyone ever seen a collection of all of the known vendors’ IAM role requirements in IaC format (CloudFormation, Terraform, etc.)?

My thinking is this - let’s say you want to give a third party vendor access to your account through an IAM role, and you’d rather do it through your IaC process and not using a click-through-stack-deployment. It would be great to have the roles’ code just easily available for download and inclusion in your code repo.

This should be easy with CloudFormation (as the stack code is visible in click-through process), but more complicated for Terraform.

managedkaos avatar
managedkaos

@Yoni Leitersdorf (Indeni Cloudrail) if i was going that route, i would use the AWS managed policies for job functions. They are hecka easy to implement in TF and can be tweaked with additional permissions as needed. So depending on what the vendor is asking for (DB, support, billing, etc) you can just assign that role.

One thing to note: the read only role allows access to the contents of parameters and secrets that might contain sensitive values. If you just want someone to “see” resources but not their configuration or values, the “view only” role is the way to go.

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html

AWS managed policies for job functions - AWS Identity and Access Management

Use the special category of AWS managed policies to support common job functions.
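
A minimal sketch of attaching one of those job-function policies to a vendor role (the role resource is an assumption):

resource "aws_iam_role_policy_attachment" "vendor_view_only" {
  role       = aws_iam_role.vendor.name # assumed third-party access role
  policy_arn = "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess"
}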

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Yeah, that’s one of the known pitfalls with using AWS managed policies. People don’t fully understand what ReadOnly means

loren avatar

also, the managed policies change pretty frequently… https://twitter.com/mamip_aws

MAMIP - Monitor AWS Managed IAM Policies Changes (@mamip_aws) | Twitter

The latest Tweets from MAMIP - Monitor AWS Managed IAM Policies Changes (@mamip_aws). Monitor AWS Managed IAM Policies Changes - Crafted by @zoph from @zoph_io. eu-west-1

David Morgan avatar
David Morgan

is there a cloudposse way to specify attributes to ignore aka terraform lifecycle ignore_changes?

David Morgan avatar
David Morgan

specifically with this module - “cloudposse/ec2-instance/aws”

David Morgan avatar
David Morgan

an ami we were using has been deleted, now when we run terraform we get the following error

Your query returned no results. Please change your search criteria and try again

i don’t want to destroy the instance and redeploy with a new ami - i just want cloudposse to ignore it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, lifecycle blocks don’t support interpolation, so there’s no nice way to do it. In the past, we’d define 2 versions of the identical resource and feature-flag the lifecycle behavior in one of them. This adds a lot of overhead for maintenance though, so we do it very seldom.
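
A rough sketch of that feature-flag pattern (resource and variable names are illustrative, not from any Cloud Posse module):

resource "aws_instance" "default" {
  count         = var.ignore_ami_changes ? 0 : 1
  ami           = data.aws_ami.this.id # assumed AMI lookup
  instance_type = var.instance_type
}

resource "aws_instance" "ignore_ami" {
  count         = var.ignore_ami_changes ? 1 : 0
  ami           = data.aws_ami.this.id
  instance_type = var.instance_type

  lifecycle {
    ignore_changes = [ami]
  }
}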

David Morgan avatar
David Morgan

for this particular module cloudposse/ec2-instance/aws would it be better to not perform a data lookup on the AMI which fails when the AMI no longer exists?

Michael Warkentin avatar
Michael Warkentin

Looking to get some feedback on this issue: https://github.com/cloudposse/terraform-aws-dynamodb/issues/84

Not sure if I misunderstood how it should be configured or ran into a bug

Can't disable TTL · Issue #84 · cloudposse/terraform-aws-dynamodb

Describe the Bug I'm trying to disable the TTL feature on some dynamodb tables, and it doesn't seem possible with the current set of variables. It looks like there was an attempt to make it…

loren avatar

in case you’re using the ram share accepter resource and have started getting failures when destroying the accepter from the member account: https://github.com/hashicorp/terraform-provider-aws/issues/19319

Cannot destroy `aws_ram_resource_share_accepter` from member account when share contains some resource types · Issue #19319 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…

Steffan avatar
Steffan

Hi guys - wondering if someone can help me out here. I don’t see any data source config in the TF docs to grab an existing user’s key and secret so they can be used in another module. Is this something that can be done? How can I achieve something like

data "aws_iam_access_key" "example" {
  name = "example"
}

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can’t access the secret access key after creation

Alex Jurkiewicz avatar
Alex Jurkiewicz

it’s only returned as a response to the ‘create access key’ api action

Alex Jurkiewicz avatar
Alex Jurkiewicz

if you are creating the access key in terraform, you can save it somewhere secure and then load it from that location in other Terraform configurations

Steffan avatar
Steffan

do you by any chance have a sample of how i can save the returned value

Alex Jurkiewicz avatar
Alex Jurkiewicz
resource "aws_iam_access_key" "test" {
  user = "alex"
}

If you create an access key like ^, you can access the secret key with aws_iam_access_key.test.secret

1
Steffan avatar
Steffan

ahh okey

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can use a pattern like this to write it to SSM: https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair

cloudposse/terraform-aws-ssm-tls-ssh-key-pairattachment image

Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair

1
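
A minimal sketch of persisting the secret with that pattern (the parameter name is hypothetical):

resource "aws_ssm_parameter" "secret_access_key" {
  name  = "/iam/alex/secret_access_key" # hypothetical path
  type  = "SecureString"
  value = aws_iam_access_key.test.secret
}

Other configurations can then read it back with a data "aws_ssm_parameter" lookup instead of needing the original state.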
Steffan avatar
Steffan

okey guys thanks so much for the tips. You’ve been really helpful

2021-05-12

Almondovar avatar
Almondovar

Hi all, glad to find you! we are using module terraform-aws-vpc-peering-multi-account v0.5.0 with Terraform v0.11 and we need to upgrade our terraform to v0.14. may i ask some questions please?

  1. how do i know which version of terraform matches each version of the module?
  2. What is the optimal path to upgrade the module from 0.5.0 to 0.14.0? - do i need to upgrade the versions one by one or can i “merge” a few versions together?
  3. i don’t want in any case to run tf apply on the production system; is it possible to upgrade the module and tf version without ever running tf apply?
  4. if the need for terraform import comes, is it supported? i don’t see any import examples on github. Thank you!
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way in terraform 0.13 to perform a data lookup for region using an aliased provider?

loren avatar
data "aws_region" "current" {
  provider = aws<.alias>
}
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

thanks man, it’s a facepalm moment as i can’t spell my alias correctly

loren avatar

gotta love troubleshooting for an hour, to discover a single typo… administrator vs adminstrator, so many times

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

quick guardduty question if thats ok … i have configured the following for region X …

resource "aws_guardduty_organization_admin_account" "this" {
  provider         = aws.master-account
  admin_account_id = data.aws_organizations_organization.this.master_account_id
}

resource "aws_guardduty_detector" "this" {
  provider = aws.master-account
  enable   = true
}

resource "aws_guardduty_organization_configuration" "this" {
  provider    = aws.master-account
  auto_enable = true
  detector_id = aws_guardduty_detector.this.id
  depends_on  = [aws_guardduty_organization_admin_account.this]
}

however, when going to guardduty i can see all the accounts but they aren’t listed as members, do I need to execute something else as well or leave it for X hours?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @matt

matt avatar

I think you’re probably missing the nuance in the documentation here…

matt avatar
auto_enable - (Required) When this setting is enabled, all new accounts that are created in, or added to, the organization are added as member accounts of the organization's GuardDuty delegated administrator and GuardDuty is enabled in that AWS Region.
matt avatar

It only auto-enables accounts that are added to the organization after GuardDuty was enabled

matt avatar

This is why we created the guardduty sub-command on our turf utility

matt avatar
cloudposse/turfattachment image

CLI Tool to help with various automation tasks (mostly all that stuff we cannot accomplish with native terraform) - cloudposse/turf
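
For accounts that already existed before GuardDuty was delegated, one hedged Terraform-only alternative (account IDs and emails below are hypothetical) is to enroll them explicitly as members:

resource "aws_guardduty_member" "existing" {
  for_each = {
    "111111111111" = "aws+dev@example.com"
    "222222222222" = "aws+prod@example.com"
  }

  provider    = aws.master-account
  detector_id = aws_guardduty_detector.this.id
  account_id  = each.key
  email       = each.value
  invite      = false # accounts already in the org don't need an invitation
}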

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add `templatestring` function by nitrocode · Pull Request #28686 · hashicorp/terraformattachment image

Closes #26838 This allows us to fully deprecate the template provider by allowing us to templatize a string. ✗ go install . ✗ ~/go/bin/terraform console > templatestring("Hello, $${name}!…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add templatestring function to improve flexibility of templating. · Issue #26838 · hashicorp/terraformattachment image

Current Terraform Version v0.13.3 Use-cases As part of our Azure configuration, we’re making extensive use of API management’s policies to perform various request validations. The policies …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

instead, we might want to add this to cloudposse/terraform-provider-utils

1
RB avatar

ya i just started reading up on the terraform-provider-utils and i think that might be the best way to go

RB avatar

this pr is older and it looks like pselle is trying to push it into terraform core

https://github.com/hashicorp/terraform/pull/24978#issuecomment-721867351

lang/funcs: add template function by barryib · Pull Request #24978 · hashicorp/terraformattachment image

The templatefile function is great way ton render template in a consistent way. But so far, we don&#39;t have the ability to render dynamic templates in the same way and are forced to use the templ…

RB avatar

i think we can recreate the templatestring functionality using a hacky replace() function across a map of variables. not my favorite but would work until hashicorp changes their mind
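
A minimal sketch of that workaround (template and values are hypothetical); since Terraform has no reduce/fold, each known key gets its own replace() call:

locals {
  template = "Hello, $${name}! Welcome to $${env}."
  vars     = { name = "world", env = "dev" }

  # "$${name}" in HCL source is the literal string "${name}"
  rendered = replace(
    replace(local.template, "$${name}", local.vars.name),
    "$${env}",
    local.vars.env
  )
  # rendered => "Hello, world! Welcome to dev."
}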

loren avatar

The nice part about the template functions is being able to use complex objects and other terraform functions in your template. Would love a templatestring function that supported that

Brij S avatar

are you able to store locals in a .tfvars file?

managedkaos avatar
managedkaos

I don’t think so. :thinking_face:

Locals are more like temp values that TF uses during execution. Their values are usually set by some expression or combination of other variables.

Maybe try using a locals.tf that only contains your locals?

managedkaos avatar
managedkaos

If you are trying to set some local value before execution starts, you probably want a variable instead.
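
A tiny sketch of the distinction (names are illustrative): a variable is settable from .tfvars before execution, while a local is derived in code:

# variables.tf - can be set via terraform.tfvars or -var
variable "environment" {
  type    = string
  default = "dev"
}

# locals.tf - computed at plan time, never set from tfvars
locals {
  name_prefix = "myapp-${var.environment}"
}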

Brij S avatar

ahh I figured, thanks! Just double checking

1
Troy Taillefer avatar
Troy Taillefer

I use terragrunt to share locals across different directories, but in pure tf there isn’t any way to do that, I think

2021-05-13

Evgenii Prokofev avatar
Evgenii Prokofev

Hi. Can someone give me a clue what the purpose of the cloudposse/route53-cluster-hostname/aws module is? How can it be used?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The purpose is just to add a hostname to a zone (e.g. a cluster’s hostname). I hate the name of the module and would like to rename it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fun fact, this was probably the first module we released, about 5 years ago

1
Quentin BERTRAND avatar
Quentin BERTRAND

Hi (sorry for digging this up). I was looking for the reason behind this module’s name, and now I know

Do you know if it would be a difficult job to rename it (without breaking the current uses)?

Albert Balinski avatar
Albert Balinski

Hello, I have seen that yesterday a PR, Greater control over Access Logging, was merged in https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/161 (btw, thank you very much for maintaining it, great stuff!). I am not sure if this is a coincidence, but today I got an error:

│ Error: Error putting S3 logging: InvalidTargetBucketForLogging: The owner for the bucket to be logged and the target bucket must be the same.
│       status code: 400, request id: VGEG9C37YWX8BH14, host id: abc=
│
│   with module.qa.aws_s3_bucket.origin[0],
│   on .terraform\modules\qa\main.tf line 200, in resource "aws_s3_bucket" "origin":
│  200: resource "aws_s3_bucket" "origin" {

After I have downgraded version from 0.65.0 to 0.64.0 it works correctly

jacob.tran avatar
jacob.tran

Hello all, I’m using terraform-aws-ec2-instance to provision an EC2 instance, but I got an error:

Error: ConflictsWith

  with module.ec2_instance.module.default_sg.aws_security_group_rule.default["ingress-tcp-443-443-ipv4-no_ipv6-no_ssg-no_pli-no_self-no_desc"],
  on .terraform/modules/ec2_instance.default_sg/main.tf line 58, in resource "aws_security_group_rule" "default":
  58: self = lookup(each.value, "self", null) == null ? false : each.value.self

"self": conflicts with cidr_blocks

Could someone help me?

emem avatar

i have my terraform.state file created but it’s not being used

GitRepository Git avatar
GitRepository Git

Good Morning To All

GitRepository Git avatar
GitRepository Git

on main.tf line 160, in resource "aws_security_group" "websg":
 160: ingress = {
 161:   cidr_blocks = [ local.anywhere ]
 162:   description = "open ssh prt"
 163:   from_port = 22
 164:   protocol = "tcp"
 165:   to_port = 22
 166:   #security_groups = [ "value" ]
 167:   #self = false
 168:   #ipv6_cidr_blocks = [ "value" ]
 169:   #prefix_list_ids = [ "value" ]
 170:
 171:
 172: }

local.anywhere is "0.0.0.0/0"

Inappropriate value for attribute "ingress": set of object required.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Use ingress {
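
That is, a minimal sketch of the fix (the security group name is an assumption and the remaining arguments from the original config are elided): ingress is written as a repeatable block, not a map assignment:

locals {
  anywhere = "0.0.0.0/0"
}

resource "aws_security_group" "websg" {
  name = "websg" # assumption; other arguments elided

  ingress {
    cidr_blocks = [local.anywhere]
    description = "open ssh port"
    from_port   = 22
    protocol    = "tcp"
    to_port     = 22
  }
}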

GitRepository Git avatar
GitRepository Git

Yes

GitRepository Git avatar
GitRepository Git

i got this error

2021-05-14

loren avatar
Terraform 0.14 support · Issue #1176 · dependabot/dependabot-coreattachment image

Dependabot’s terraform support doesn’t work with HCL 2.0. In particular, our logic for parsing HCL files is here and shells out to this tool which only supports HCL 1.0 and, unfortunately, l…

1
Zach avatar

oooo

Heath Snow avatar
Heath Snow

nice

David Morgan avatar
David Morgan

hello, i am using “cloudposse/elasticache-redis/aws” with terraform 0.13.7 and specifying version “0.13.0”, however i get the following errors when running init:

Module module.cache_redis.module.redis.module.dns (from
git::<https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.3.0>)
does not support Terraform version 0.13.7

and

Module module.cache_redis.module.redis.module.label (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.14.1>)
does not support Terraform version 0.13.7

i’m not sure how to resolve the version incompatibility. thank you…

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Did you try using more recent versions of those modules?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
cloudposse/terraform-aws-route53-cluster-hostnameattachment image

Terraform module to define a consistent AWS Route53 hostname - cloudposse/terraform-aws-route53-cluster-hostname

David Morgan avatar
David Morgan

i am not specifying them directly - i am only specifying the redis module

module "redis" {
  source = "cloudposse/elasticache-redis/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "0.13.0"
  ....
}

they seem to be referenced from the cloudposse module, not from my tf file

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Why version 0.13.0 of the module?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

There’s 0.38.0 available.

David Morgan avatar
David Morgan
11:34:27 PM

ok - that fixed it - i was going off of this

David Morgan avatar
David Morgan

i thought the version was in reference to terraform

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Ah - you’re confusing the version of Terraform with the version of the module.

1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Yep

David Morgan avatar
David Morgan

thanks

1

2021-05-15

GitRepository Git avatar
GitRepository Git

i’m using Terraform v0.15.1

2021-05-16

mikesew avatar
mikesew

Q: I’ve accidentally tried to rename an AWS RDS subnet group with an RDS instance still attached. Unfortunately, terraform complained I needed to move the DB to a diff subnet group.

Error: Error modifying DB Instance dev-db-01: InvalidVPCNetworkStateFault: You cannot move DB instance dev-mpa-spa-db-01 to subnet group dev-db-01-dbsg. The specified DB subnet group and DB instance are in the same VPC. Choose a DB subnet group in different VPC than the specified DB instance and try again.

.. now it seems that the terraform state is a little messed up. Does anybody have suggestions to unmuck this? is the usual fix to manually correct it and then re-import the resource? or untaint something?

Alex Jurkiewicz avatar
Alex Jurkiewicz

What do you mean by “messed up”?

Alex Jurkiewicz avatar
Alex Jurkiewicz

The terraform state rm and terraform import commands are what you would use here. But I’m not convinced from your info that the state is messed up.

mikesew avatar
mikesew

I didn’t provide full info. however, doing a terraform state rm/import ended up doing the trick.

# create a dummy.tfvars file from my TFE workspace variables
export AWS_DEFAULT_REGION=us-west-2
export AWS_ACCESS_KEY_ID=XXXX
export AWS_SECRET_ACCESS_KEY=XXX

terraform state rm  module.rds_instance.aws_db_subnet_group.default
terraform import module.rds_instance.aws_db_subnet_group.default  my-existing-db-subnet-grp-01

I really don’t like having to handle AWS-dependencies with terraform.. things like db option groups and db subnet groups

1

2021-05-17

Brandon Metcalf avatar
Brandon Metcalf

hello everyone. with version 0.17.2 of cloudposse/terraform-aws-s3-bucket and terraform 0.14.5, the policy that gets generated and to be applied to the newly created bucket looks like

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      }
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail/*",
      "Principal": {
        "Service": [
          "config.amazonaws.com",
s.com"
        ]
      },
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": [
            "bucket-owner-full-control"
          ]
        }
      }
    }
  ]
}
Brandon Metcalf avatar
Brandon Metcalf

notice the s.com on a line by itself. i believe “cloudtrail.amazonaws.com” is getting truncated resulting in this. and when terraform tries to apply the policy, the following error occurs:

Error putting S3 policy: MalformedPolicy: Policy has invalid resource
Brandon Metcalf avatar
Brandon Metcalf

plan output shows the policy as

      + policy                      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "s3:GetBucketAcl"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "cloudtrail.amazonaws.com"
                        }
                      + Resource  = "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail"
                      + Sid       = "AWSCloudTrailAclCheck"
                    },
                  + {
                      + Action    = "s3:PutObject"
                      + Condition = {
                          + StringEquals = {
                              + s3:x-amz-acl = [
                                  + "bucket-owner-full-control",
                                ]
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = [
                              + "config.amazonaws.com",
                              + "cloudtrail.amazonaws.com",
                            ]
                        }
                      + Resource  = "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail/*"
                      + Sid       = "AWSCloudTrailWrite"
                    },
                ]
              + Version   = "2012-10-17"
            }
Brandon Metcalf avatar
Brandon Metcalf

it turns out the debug output seems to be a red herring. this is actually occurring in govcloud, so the arn is incorrect. instead of aws the partition should be aws-us-gov. i’ll look into submitting a PR.

pjaudiomv avatar
pjaudiomv

using module version 0.20.0 or above should work

Brandon Metcalf avatar
Brandon Metcalf

the latest is 0.17.2

Brandon Metcalf avatar
Brandon Metcalf

i’ve worked around the issue by passing in arn_format and doing a lookup on the current partition
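
A sketch of that workaround (arn_format is the module input mentioned above; the data source is standard):

# look up the current partition so ARNs resolve correctly in GovCloud
# ("aws-us-gov") as well as commercial ("aws") regions
data "aws_partition" "current" {}

locals {
  arn_format = "arn:${data.aws_partition.current.partition}"
}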

pjaudiomv avatar
pjaudiomv

Hmm I guess the GitHub tags and terraform ones are different

mikesew avatar
mikesew

Question about structuring terraform variables (as I’m learning). Do you prefer putting variables (aka *.tfvars variables)

• A) in terraform cloud workspaces variables section? OR

• B) in your git repo alongside your .tf code?

main.tf
/env
  /dev
    dev.auto.tfvars
  /prd
    prd.auto.tfvars

With the latter option, it seems like I need a separate batch file or cmd to properly point to the environment’s -var-file , and that this wouldn’t work with terraform cloud unless I’m missing something.

Mohammed Yahya avatar
Mohammed Yahya

small A and lots of B

Mohammed Yahya avatar
Mohammed Yahya

in A put sensitive variables, and everything else in B; you could even generate B with Terraform itself and manage it

2021-05-18

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know the appropriate IAM role to allow users to change their password and set up MFA?

Joe Hosteny avatar
Joe Hosteny
cloudposse/terraform-aws-iam-assumed-rolesattachment image

Terraform Module for Assumed Roles on AWS with IAM Groups Requiring MFA - cloudposse/terraform-aws-iam-assumed-roles
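
Beyond the module, a hedged sketch of the kind of self-service policy involved (the policy name is hypothetical; the $${aws:username} variables restrict each user to their own credentials):

data "aws_iam_policy_document" "self_service" {
  statement {
    sid       = "AllowChangeOwnPassword"
    actions   = ["iam:ChangePassword", "iam:GetUser"]
    resources = ["arn:aws:iam::*:user/$${aws:username}"]
  }

  statement {
    sid = "AllowManageOwnMFA"
    actions = [
      "iam:CreateVirtualMFADevice",
      "iam:EnableMFADevice",
      "iam:ListMFADevices",
      "iam:ResyncMFADevice",
    ]
    resources = [
      "arn:aws:iam::*:mfa/$${aws:username}",
      "arn:aws:iam::*:user/$${aws:username}",
    ]
  }
}

resource "aws_iam_policy" "self_service" {
  name   = "self-service-password-mfa" # hypothetical
  policy = data.aws_iam_policy_document.self_service.json
}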

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

thanks man thats awesome

loren avatar

i forget, which of the TACOS other than atlantis supported self-hosted deployments?

loren avatar
SweetOps #office-hours for December, 2020

SweetOps Slack archive of #office-hours for December, 2020. Meeting password: sweetops Public “Office Hours” are held every Wednesday at 11:30 PST via Zoom. It’s open to everyone. Ask questions related to DevOps & Cloud and get answers!

2021-05-19

Marko Sustarsic avatar
Marko Sustarsic

Hi there, I have a question about terraform-aws-cloudfront-s3-cdn which we’re using to provide images to a number of our client applications. Currently if an image is unavailable on Cloudfront the clients will receive a 404 response with no content, and we’d like to change that so that we return a fallback image, whilst still maintaining the 404 http code for backwards compatibility. Looking at the docs of this module, if we set website_enabled = true we would be able to provide a custom error response. However, presumably the “error document” that is passed in should be an html file, not an alternative asset to be served. Does anyone know if there’s an easy way of achieving what we need using this module?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sounds like you want to use the error page handler logic of cloudfront. No need to mess with an S3 website backend

Marko Sustarsic avatar
Marko Sustarsic

Thank you @Alex Jurkiewicz. I’m guessing that’s not something that this module’s api exposes? Would you suggest doing it in aws management console?

Alex Jurkiewicz avatar
Alex Jurkiewicz
cloudposse/terraform-aws-cloudfront-s3-cdnattachment image

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Marko Sustarsic avatar
Marko Sustarsic

Ah brilliant, thanks

Release notes from terraform avatar
Release notes from terraform
07:23:42 PM

v0.15.4 0.15.4 (May 19, 2021) NEW FEATURES:

Noting changes made outside of Terraform: Terraform has always, by default, made a point during the planning operation of reading the current state of remote objects in order to detect any changes made outside of Terraform, to make sure the plan will take those into account. Terraform will now report those detected changes as part of the plan result, in order to give additional context about the planned changes. We’ve often heard that people find it…

3
loren avatar

pretty cool release there, lot going on

Michael Warkentin avatar
Michael Warkentin

Is there a way to disable tags entirely for the cloudposse modules? I’m trying to reuse some config to configure DynamoDB local tables, and it doesn’t support tagging..

WC avatar

Hi, I got error this when create the efs_file_system resource:

Error: error reading EFS FileSystem: empty output

  on main.tf line 33, in data "aws_efs_file_system" "tf-efs-fs":
  33: data "aws_efs_file_system" "tf-efs-fs" {
WC avatar

the resource is like this:

data "aws_efs_file_system" "tf-efs-fs" {
  creation_token = "my-efs-file-system"
}
WC avatar

I’ve spent a whole day trying to fix this, but still no progress. Please help me figure out what is wrong. Thank you.

2021-05-20

greg n avatar

You said you’re trying to create an EFS filesystem, but the snippet you posted is a data source for looking up an existing resource. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_file_system vs https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/efs_file_system
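
A minimal sketch of the difference, using the creation token from the snippet:

# creates a new filesystem
resource "aws_efs_file_system" "tf-efs-fs" {
  creation_token = "my-efs-file-system"
}

# reads a filesystem that must already exist; "empty output" means the
# lookup found nothing
data "aws_efs_file_system" "existing" {
  creation_token = "my-efs-file-system"
}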

Jeff Dyke avatar
Jeff Dyke

Dumb question time. I’m building new vpcs to replace those created by console. Each time i create a new state folder, with a backend, the first plan doesn’t have a remote state. Is there a better way than -lock=false ? Its only me building this so i’m not worried about it…just more interested in the workflow. First time working on completely new infra with terraform.

Alex Jurkiewicz avatar
Alex Jurkiewicz

what backend are you using? s3?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can create the state storage area by hand to bootstrap

Jeff Dyke avatar
Jeff Dyke

I am using s3. Should have mentioned that. The bucket is already created, the key/path is obviously not there yet. I could put an empty file up there, but doesn’t seem much better than -lock=false when working alone. Thanks for the comments.
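
For reference, a hedged sketch of that bootstrap (bucket and table names are hypothetical): the s3 backend creates the state object at the configured key on the first apply, and locking comes from a DynamoDB table rather than from the state object itself:

terraform {
  backend "s3" {
    bucket         = "my-tf-state"           # must exist before terraform init
    key            = "vpc/terraform.tfstate" # created automatically on first apply
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"       # pre-created table enables locking
  }
}

With the lock table in place, the first plan can acquire a lock normally, so -lock=false shouldn’t be needed.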

Alex Jurkiewicz avatar
Alex Jurkiewicz

RFC for a cloudposse null label feature request

We use this module a lot, but some of the tag names don’t match our internal standard. For example, we use “environment” rather than “stage”. As a workaround, we set additional_tag_map = { environment = "dev" }, but this means we have a Stage tag which duplicates the value.

In AWS this is a problem since there is a limit of 10 tags per resource, and we add other tags globally for cost /security control.

So, I’d like to enhance this module so you can disable certain default tags or override their name. For example:

module "context" {
  ...
  stage_tag_name = "environment" # use 'environment' instead of 'stage'
  namespace_tag_name = "" # don't create a namespace tag at all
}

Thoughts?

Chris Fowles avatar
Chris Fowles

makes sense to me - we use “env” to align things with datadog’s unified service tagging model, just because it was something to follow: https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging

Unified Service Taggingattachment image

Datadog, the leading service for cloud-scale monitoring.

1
Jeff Dyke avatar
Jeff Dyke

This also makes a ton of sense to me, as stage is often repetitive. While i appreciate the ability to be ultimately expressive about what i’m building, i find it’s often never really used. i like the optional nature of this RFC b/c my/our situation is not everyone’s.

1

2021-05-21

2021-05-22

adebola olowose avatar
adebola olowose

Hello Guys, please i need your advice on how to alter a baseline. we want to move our cloudtrail logs from the master account to a centralized audit account. the question is, we have cloudtrail and s3 buckets which collect logs for all the other accounts on the master account; now we want to move that to the audit account. what’s the best approach to do this?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

We were confronted with the same situation. What we ended up doing is setting up new CloudTrail settings that send everything to the audit account, shutting down the old CloudTrail settings, and copying the data over to separate buckets.

The idea here is that you don’t have to merge the old data with the new data, as long as you know when the cutover was done and adapt your queries accordingly.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Re your question about using TF - are you referring to how to create a central cloudtrail audit account?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Or what specifically?

adebola olowose avatar
adebola olowose

Like moving it from the master account where it sits now to the audit account. essentially what we want to do is alter our baseline, and this will include AWS Config

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Oh we didn’t do that part via TF. We used TF to build the new audit.

adebola olowose avatar
adebola olowose

Thank you

adebola olowose avatar
adebola olowose

can you be so kind as to send a sample of the TF for the audit account?

2021-05-23

2021-05-24

adebola olowose avatar
adebola olowose

Thank you @Yoni Leitersdorf (Indeni Cloudrail). any idea how to use terraform to do this?

Karl Webster avatar
Karl Webster

Is there a maintainer of the terraform-aws-dynamic-subnets here? The following PR (created by the renovate bot) fixes my issues:

• The PR: https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/128

• The Issue it fixes: https://github.com/cloudposse/terraform-aws-dynamic-subnets/issues/133

I would @ but I have no idea who the right person is..

Matt Gowie avatar
Matt Gowie

@Karl Webster you can always bump any PR in #pr-reviews to get some eyes on it. I’ll try to get this one merged for ya.

Matt Gowie avatar
Matt Gowie

Merged — Thanks for the bump @Karl Webster

Karl Webster avatar
Karl Webster

Ah, I did not even realise that was a channel, noted for next time

Mohammed Yahya avatar
Mohammed Yahya

upcoming TF Project https://www.terrateam.io/

TerraTeam

Seamless GitHub integration for Terraform pull request automation

Alex Jurkiewicz avatar
Alex Jurkiewicz

looks like atlantis, right?

Mohammed Yahya avatar
Mohammed Yahya

yes.

Chris Fowles avatar
Chris Fowles

what’s the difference?

mfridh avatar

A much more clever business model, definitely.

Mohammed Yahya avatar
Mohammed Yahya

I did not test it yet, it is open for beta testers

Zach avatar


what’s the difference?
atlantis does the apply before you merge right? The screenshots look like terrateam does it on/after the merge.

2021-05-25

Andrew Nazarov avatar
Andrew Nazarov

Hah, thanks for Terrateam, will check it out for sure. It’s funny, I came to this channel to ask about best practices for running TF in a CI/CD system, and the last message is kinda related to this:) Nonetheless, let me ask the question anyway to collect some valuable feedback. Say we have a repo with TF code that creates a piece of infrastructure. What would be the safest and most convenient way to make changes and run the code? A natural thing for people with a dev background is to leverage PR/MRs to review changes: to do this we would run terraform plan in a feature branch (MR), and we can even make the plan results easily visible via comments or MR widgets. However, things might go wrong - somebody else could make changes to the code in his/her branch, and that MR will be reviewed separately. It doesn’t include the changes of the first one, and if it’s merged first, then the plan of the first, probably already approved, MR becomes obsolete. That means reviewing the MR doesn’t have that much value. We also need to take a look at the plan result in the mainline, and therefore we have to apply manually. And somehow we need a mechanism to ask peers to check out the changes. What are the better workflows, with and (which is even more interesting:) without additional tooling? How are you running TF code and keeping the confidence?:)

Hah, Terrateam even uses a word “confidence” in their slogan:)

Mohammed Yahya avatar
Mohammed Yahya

I would do it this way: all feature branches should display the plan for upcoming changes, and only one branch (main or master) can apply changes after the PR is reviewed and merged. so your CICD should watch for main branch changes and trigger based on changes there.

Mohammed Yahya avatar
Mohammed Yahya

If each feature branch can apply TF changes, some branches will overwrite the previous one, and two branches could run at the same time in big teams.

Mohammed Yahya avatar
Mohammed Yahya

so to summarize my approach:

• create feature branch

• display cost using infracost

• display security findings using checkov or cloudrail

• display graph of new resources

• display plan output of changes

• assign senior engineers for review

• approve

• merge

Then CICD will be triggered, apply changes, and send a notification to slack

This can be achieved in multiple ways - so no secret recipe here.

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

yep, we follow the above model. When you add a commit to a pull request, your CD system runs a speculative plan with your changes.

The key is that you only allow PRs to merge when they are up to date with the base branch. This means your speculative plans are always “fresh”. On the downside, if you have many PRs open, you can get into an annoying cycle of “oh, someone else merged, let’s update my branch, oh, someone else merged”. But it’s not a real issue

2
Alex Jurkiewicz avatar
Alex Jurkiewicz

this workflow is described in detail in the Spacelift docs. We use Spacelift, but this workflow is generic and can be used with any CD system: https://docs.spacelift.io/integrations/source-control/github#proposed-workflow

1
1
Alex Jurkiewicz avatar
Alex Jurkiewicz

you also mention that reviews become “obsolete” in this situation. You’re exactly right, and Github has an option to dismiss review approvals when new commits are pushed to a PR. I don’t know why it’s not enabled by default, but IMO it should be

1
Andrew Nazarov avatar
Andrew Nazarov

Thanks for the feedback, really appreciate this:)

Actually we are working like what was mentioned:

• create a feature branch

• open a PR/MR

• assign reviewers

• checkov is going to be implemented soon

• we showed graph, but for some reason stopped doing this

• display plan

• wait for approvals

• merge

But additionally we have to:

• check a plan in the mainline after the merge

• apply manually in the mainline if everything is as expected

The problem is that we either need to merge the latest code into all PRs manually (usually we don’t have many of them, that’s true) or double-check the plan in the mainline. And actually the latter is done in both cases to have this confidence:)
Github has an option to dismiss review approvals when new commits are pushed to a PR.
That’s true, but it removes approvals if a new commit is landed into this PR’s branch, not in case of another PR that’s created recently, correct?:)

All in all, I’m more concerned about those two final steps I mentioned. And there is something in the air suggesting things could be simpler or done differently with even better confidence. At the least, there should probably be some automatic rebase hook that updates all PRs when new code appears in the mainline; hence approvals will be removed with the aforementioned feature enabled.

Michael Warkentin avatar
Michael Warkentin

The most common github integration is atlantis: https://www.runatlantis.io

It operates by applying from your feature branch in combination with PRs locking the state for that project (so only one PR is planning and applying at any given time)

Terraform Pull Request Automation | Atlantis

Atlantis: Terraform Pull Request Automation

Alex Jurkiewicz avatar
Alex Jurkiewicz


That’s true, but it remove approvals if a new commit is landed into this PR’s branch, not in case of another PR that’s created recently, correct?:)
If the base branch gets changed, your pull request cannot be merged because it’s out of date. You update it, and this causes the plan to be regenerated and old approvals to be dismissed.

Andrew Nazarov avatar
Andrew Nazarov

@Alex Jurkiewicz Depending on the merge strategy and settings, but I see your point:) Huge thanks for your comments!

Andrew Nazarov avatar
Andrew Nazarov

I’m trying to find anything related to the reasons why we put Atlantis on hold years ago. No luck so far, however occasionally I’ve found a saved thread started by @sheldonh and just want to link it here as well: https://sweetops.slack.com/archives/CB6GHNLG0/p1617922933434300

Separate discussion… I get that terraform doesn’t fit into the typical CI/CD workflow very well, at least out of the box.

To be fair though, if these tools such as terraform cloud, spacelift, and env0 are in essence running the same CLI tool that you can run in your own CI/CD job that preserves the plan artifact, what do you feel is the actual substantial difference for the core part of terraform plans?

Don’t get me wrong, I love working with stuff like terraform cloud, but I guess I’m still struggling to see the value in it if you write a pipeline that handles plan artifacts

sheldonh avatar
sheldonh

I downloaded the module by Cloudposse for Atlantis. Was interested in running in ECS fargate but found Azure DevOps didn’t seem to be supported except in a fork/PR version and never reconciled this.

I’ve been running from my machine directly for now as i’m the only one writing terragrunt/terraform at this time. I wrote a Go wrapper for coordinating the runs but would like to get back to a PR comment based workflow in the future if others start contributing.

I’m finding that, working directly in a dev team, the need for a collaborative PR workflow is less than in a Cloud Operations team, as the work is more application-deployment focused and less shared.

Andrew Nazarov avatar
Andrew Nazarov

Thanks for sharing:)

Aumkar Prajapati avatar
Aumkar Prajapati

Hey all, does anyone know a good way to migrate worker_groups to node_groups with the terraform-aws-modules/eks/aws Terraform module in an EKS environment? We’re looking to move to managed nodes from our unmanaged setup.

mfridh avatar

Basically: Add new managed node groups “on the side”…

cordon and drain the old worker ASGs and let the new managed ones scale up..

Then remove the non-managed ones…
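
A rough sketch of that side-by-side migration in the terraform-aws-modules/eks module (names, sizes, and the ~> 17.0 pin are illustrative):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0"

  cluster_name    = "rc-cluster"
  cluster_version = "1.20"
  vpc_id          = var.vpc_id
  subnets         = var.private_subnets

  # legacy unmanaged workers, kept until drained
  worker_groups = [
    {
      name                 = "legacy-workers"
      instance_type        = "m5.large"
      asg_desired_capacity = 3
    },
  ]

  # new managed node group added "on the side"
  node_groups = {
    managed = {
      min_capacity     = 1
      desired_capacity = 3
      max_capacity     = 6
      instance_types   = ["m5.large"]
    }
  }
}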

Aumkar Prajapati avatar
Aumkar Prajapati

I’ll give that a shot, thanks! I was thinking about other ways beyond just running them alongside but it’ll do, I need to dismantle our cluster autoscaler in the pod as well probably…

mfridh avatar

The cluster-autoscaler can handle any number of ASGs.

But do read some of the notes here https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html

Managed node groups - Amazon EKS

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.

Aumkar Prajapati avatar
Aumkar Prajapati

iirc, doesn’t amazon manage autoscaling?

Aumkar Prajapati avatar
Aumkar Prajapati

for node groups

Aumkar Prajapati avatar
Aumkar Prajapati

It’s worker groups where cluster-autoscaler needs to be installed, right?

mfridh avatar

The Managed Node Group is an “upper” layer on top of the “regular” resources such as an underlying Autoscaling Group.

Aumkar Prajapati avatar
Aumkar Prajapati

Ahhh, gotcha, must’ve been lost in my readings. so the cluster-autoscaler does need to be installed, right?

mfridh avatar

So to actually automatically scale it - you still need the cluster-autoscaler.

mfridh avatar

Personally I suggest having a node group (or worker group) which is more or less “static”, for some particular jobs such as the cluster-autoscaler itself.

1
Aumkar Prajapati avatar
Aumkar Prajapati

Gotcha, thanks! Not a problem to install

mfridh avatar

Usually not much cpu or ram needed… it all depends on how many of those jobs you have.

Aumkar Prajapati avatar
Aumkar Prajapati

What do you mean by that last point?

mfridh avatar

the “static” group for “Cluster tools”, usually doesn’t need to be big and costly …

mfridh avatar

so I feel most of the time - it is worth it.

Aumkar Prajapati avatar
Aumkar Prajapati

Yeah, this is mostly for a small RC environment our devs are using.

Sean Turner avatar
Sean Turner

Would someone be willing to test drive the deployment of a serverless single page application I built that deploys via terraform module? Please have a look at the code first as well so you know what is being deployed. github.com/seanturner026/moot.git

One would need go, yarn, and the awscli as the module builds golang lambdas (latest version is better as it uses go modules), builds a vuejs bundle with yarn, and also aws s3 syncs the bundle to the s3 bucket built by the module.

It works on my machine currently (lol), but I want to share this around and want to iron out any deployment kinks first.

It deploys api gateway, ssm parameters, iam roles, dynamodb, cloudfront, cognito. If you uncomment fqdn_alias and hosted_zone_name, you’ll also get an ACM cert and can access cloudfront via custom DNS (moot.link in my example as I bought a cheap domain).

module "moot" {
  source = "github.com/seanturner026/moot.git"

  name                           = "moot"
  aws_profile                    = "default" // whatever profile you want to use in .aws/config
  admin_user_email               = "[email protected]" // or your email if you want the email with cognito creds
  enable_delete_admin_user       = false
  github_token                   = "42"
  gitlab_token                   = "42"
  slack_webhook_url              = "42"
#   fqdn_alias                     = "moot.link"
#   hosted_zone_name               = "moot.link"
  enable_api_gateway_access_logs = true
  tags                           = {}
}
managedkaos avatar
managedkaos

sounds cool but i’m tied up this week. i added a slack reminder to give this a shot over the weekend

1
Sean Turner avatar
Sean Turner

Thanks so much!

managedkaos avatar
managedkaos

@Sean Turner i took the code for a spin! here’s an update: • I used GitHub as my repo source and created a token but it may be helpful to provide the exact permissions the token needs. People will likely want to scope the token to the exact permissions vs creating an admin token with all access. • I used the code in moot/terraform_examples/complete. I did not add a custom domain so I had to add the following to get the domain that was created. Might be nice to already have that in the example.

output "moot" {
  value = module.moot
}

• I used an admin email address and successfully received the email with the password but when i tried to log in, nothing happened.

• The cloudwatch logs for the API gateway were useful to indicate that there is some sort of integration error but I didn’t go much further than this in terms of debugging:

{
    "httpMethod": "POST",
    "integrationError ": "-",
    "ip": "...",
    "protocol": "HTTP/1.1",
    "requestId": "AHRliheDIAMEMrQ=",
    "requestTime": "29/May/2021:23:08:06 +0000",
    "responseLength": "109",
    "routeKey": "POST /auth/login",
    "status": "400"
}
{
    "httpMethod": "OPTIONS",
    "integrationError ": "-",
    "ip": "...",
    "protocol": "HTTP/1.1",
    "requestId": "AHRlmgJpoAMEMmQ=",
    "requestTime": "29/May/2021:23:08:06 +0000",
    "responseLength": "0",
    "routeKey": "-",
    "status": "204"
}
{
    "httpMethod": "GET",
    "integrationError ": "-",
    "ip": "...",
    "protocol": "HTTP/1.1",
    "requestId": "AHRlngCSoAMEMvw=",
    "requestTime": "29/May/2021:23:08:06 +0000",
    "responseLength": "26",
    "routeKey": "GET /repositories/list",
    "status": "401"
}
managedkaos avatar
managedkaos

• I didn’t try the Slack integration

Sean Turner avatar
Sean Turner

Thanks! Great feedback. Weird that you were getting an integration error. I think I got something like that once and recreating the lambdas fixed it?

Sean Turner avatar
Sean Turner

Really appreciate the follow up.

managedkaos avatar
managedkaos

no problem. i will give the lambda recreate a try

Sean Turner avatar
Sean Turner

The good news is that terraform apply worked ;) There’s quite a few dependencies in there

1
managedkaos avatar
managedkaos

actually, i may be reading the logs wrong. if there is a - next to integrationError, it means there’s no data which likely means there isn’t an integration error.

so i’ll try the lambdas and update

Sean Turner avatar
Sean Turner

Any lambda cloud watch logs?

managedkaos avatar
managedkaos

ahh let me see

Sean Turner avatar
Sean Turner

If not then they aren’t being triggered

managedkaos avatar
managedkaos

ok i found the lambda logs. maybe my password is bad?

START RequestId: a803588c-7553-4b72-b32a-7ac0e9f6dbe7 Version: $LATEST
{
    "level": "info",
    "msg": "handling request on /auth/login",
    "time": "2021-05-29T23:04:58Z"
}

{
    "level": "error",
    "msg": "InvalidParameterException: Missing required parameter USERNAME",
    "time": "2021-05-29T23:04:59Z"
}

END RequestId: a803588c-7553-4b72-b32a-7ac0e9f6dbe7
REPORT RequestId: a803588c-7553-4b72-b32a-7ac0e9f6dbe7	Duration: 928.26 ms	Billed Duration: 929 ms	Memory Size: 128 MB	Max Memory Used: 45 MB	Init Duration: 115.99 ms	
START RequestId: 74db7a5f-0d1c-4b66-8fc2-df367d977d70 Version: $LATEST
{
    "level": "info",
    "msg": "handling request on /auth/login",
    "time": "2021-05-29T23:08:06Z"
}

{
    "level": "error",
    "msg": "NotAuthorizedException: Incorrect username or password.",
    "time": "2021-05-29T23:08:06Z"
}

END RequestId: 74db7a5f-0d1c-4b66-8fc2-df367d977d70
REPORT RequestId: 74db7a5f-0d1c-4b66-8fc2-df367d977d70	Duration: 210.39 ms	Billed Duration: 211 ms	Memory Size: 128 MB	Max Memory Used: 45 MB	
Sean Turner avatar
Sean Turner

yep, that would be it ha. my first frontend

Sean Turner avatar
Sean Turner

Need to add a toast or something if it fails

1
managedkaos avatar
managedkaos

ok i copied the password with care and got the “new password” prompt. entered new pass and now nothing. will check the log again

managedkaos avatar
managedkaos

no update in the logs

Sean Turner avatar
Sean Turner

I usually just add a $ to the password from the email and it logs me in

Sean Turner avatar
Sean Turner

Might be the password updated already. Do a refresh and use the new password?

managedkaos avatar
managedkaos

i refreshed and tried the temp pwd with the $. that worked. if there is a password complexity requirement, that might be useful to share. my password was good but still not as complex as the generated one. shorter for sure. one special char. maybe no caps.

Sean Turner avatar
Sean Turner

Yeah, might be worth parameterising the cognito password requirements
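
A sketch of what parameterising that could look like (pool name and values are hypothetical):

resource "aws_cognito_user_pool" "this" {
  name = "moot"

  password_policy {
    minimum_length    = 12
    require_lowercase = true
    require_uppercase = true
    require_numbers   = true
    require_symbols   = true
  }
}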

managedkaos avatar
managedkaos

i was able to add a repo in github. i clicked the link to get to the repo and the link is malformed…no .com : https://github/console6500/cherokee

Sean Turner avatar
Sean Turner

Ah interesting! I’ll need to fix that

managedkaos avatar
managedkaos

ok i added a note and a version, clicked deploy. not sure what’s supposed to happen

Sean Turner avatar
Sean Turner

Generally you would need to have a change on the HEAD branch. That gets merged into the BASE branch, and then a github release is created from the BASE branch

managedkaos avatar
managedkaos

got it.

Sean Turner avatar
Sean Turner

My head branch was new, I was using this echo 1 >> some_new_file && ga . && gcmsg "blah" && git push origin new to test easily

managedkaos avatar
managedkaos

ok i’ll give it a shot later.

Sean Turner avatar
Sean Turner

Thanks again !!

1
Sean Turner avatar
Sean Turner

Added the toasts to the login view, added outputs.tf to the examples, and fixed the no .com issue as well.

1
marc slayton avatar
marc slayton

Provider crash. Hey all – I’ve recently been working with the latest versions of yaml_stack_config. I’ve been trying to wire up the account-map component to perform its tfstate.tf lookups using the remote_state modules that come with yaml_stack_config. When I perform the lookups, I get the following message, fairly consistently:

Error: rpc error: code = Unavailable desc = transport is closing


Error: 1 error occurred:
	* step "plan cmd": job "terraform subcommand": command "terraform plan -out gbl-master-account-map.planfile -var-file gbl-master-account-map.terraform.tfvars.json" in "./components/terraform/account-map": exit status 1
marc slayton avatar
marc slayton

It feels like the aws provider might have something to do with it. I’ve tried various combinations of the cloudposse/utils provider: 0.4.3, 0.4.4, v0.6.0 and v0.8.0 with no change in symptoms. The backtraces mostly show healthy operation, but when control is relinquished, the aws provider appears to be dead.

2021/05/26 02:06:04 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021/05/26 02:06:04 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021/05/26 02:06:04 [INFO] backend/local: plan operation completed

2021/05/26 02:06:04 [TRACE] statemgr.Filesystem: removing lock metadata file terraform.tfstate.d/gbl-master/.terraform.tfstate.lock.info
2021/05/26 02:06:04 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate.d/gbl-master/terraform.tfstate using fcntl flock
2021-05-26T02:06:04.172Z [DEBUG] plugin: plugin exited
Error: rpc error: code = Unavailable desc = transport is closing

Just curious if anyone else here has seen this phenomenon and knows a workaround, or a better way to find the cause. I’ve also tried the cloudposse/utils provider and installing from scratch. They all seem to work reasonably well. Alas, debugging terraform execution isn’t my strong suit.

marc slayton avatar
marc slayton

Actually, this looks like the place where it is failing:

2021/05/26 02:40:08 [TRACE] ExecuteWriteOutput: Saving Create change for output.cicd_roles in changeset
2021/05/26 02:40:08 [TRACE] EvalWriteOutput: Saving value for output.cicd_roles in state
2021/05/26 02:40:08 [TRACE] vertex "output.cicd_roles": visit complete
2021/05/26 02:40:08 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021-05-26T02:40:08.127Z [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/aws/3.42.0/linux_amd64/terraform-provider-aws_v3.42.0_x5 pid=19511
2021-05-26T02:40:08.127Z [DEBUG] plugin: plugin exited

It looks like somehow the response object is not getting reset. ‘output.cicd_roles’ seems to come up every time.

Matt Gowie avatar
Matt Gowie

@Andriy Knysh (Cloud Posse) would likely be the best to weigh in on this one if he’s got a minute.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

must be an invalid YAML config

marc slayton avatar
marc slayton

I’ll double check…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-yaml-stack-configattachment image

Terraform module that loads an opinionated &quot;stack&quot; configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and replace the config files with your own to test

marc slayton avatar
marc slayton

ok, will do. I did run all my files through yamllint. No changes. I’ll try your files next.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can DM your YAML config and code so I can take a look if you’re still having issues

marc slayton avatar
marc slayton

You were completely right. I am so embarrassed. I was using anchors to define my tags in a way that was passing the lint checks, but was not producing a valid config. Mea culpa, and thanks for the help.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in any case, we don’t have good enough documentation, so any help on finding issues and improving the modules is greatly appreciated, thanks

Matt Gowie avatar
Matt Gowie

Don’t think it would’ve helped here, but we do have some docs on stack files which might be useful: https://docs.cloudposse.com/reference/stacks/

1
1
marc slayton avatar
marc slayton

cool! I will check these out. I’ve been making lots of notes as I ramp up – will have some tutorial material fairly soon I think.

2021-05-26

Michael avatar
Michael

Hi there. I’ve got a beginner’s question :slightly_smiling_face: I structured my Terraform setup in modules. How can I output data (e.g. the public IP of my EC2 instance) after the deployment with terraform console?

$ terraform apply -auto-approve
...
Plan: 14 to add, 0 to change, 0 to destroy.
...
module.default.module.aws_ec2.aws_instance.debian: Creation complete after 13s [id=i-0000aaaabbbbccccd]

$ terraform console
> module.default.module.aws_ec2.aws_instance.debian.public_ip

> ╷
│ Error: Unsupported attribute
│ 
│   on <console-input> line 1:
│   (source code not available)
│ 
│ This object does not have an attribute named "module".
managedkaos avatar
managedkaos

you need to create outputs that expose the value you are looking for; a nested resource address like module.default.module.aws_ec2.aws_instance.debian.public_ip works in state commands but not in expressions, so each module has to pass the value up through its own outputs (see the sketch after this message):

output "public_ip" {
    value       = module.default.public_ip
    description = "The EC2 public IP"
}
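
A minimal sketch of the chain that makes that root output work, using the module names from the question; each level re-exports the value from the level below:

# inside the aws_ec2 module
output "public_ip" {
  value = aws_instance.debian.public_ip
}

# inside the default module
output "public_ip" {
  value = module.aws_ec2.public_ip
}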
managedkaos avatar
managedkaos

then run terraform refresh

Mohammed Yahya avatar
Mohammed Yahya

Terraformers, anyone test/use Scalr agent on-premise?

ohad avatar

No but how about trying env0 self hosted agent? https://docs.env0.com/docs/security-overview#self-hosted-agents is that relevant for you? (Disclaimer, i am founder at env0)

Security Overview

Our mission at env0 is to empower teams to make use of the freedom of the cloud while maintaining the governance and control needed in today’s world. For the first time ever, env0 gives teams the ability to provision and manage their own environments in the cloud, using your existing Infrastructure …

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Sebastian Stadil

marc slayton avatar
marc slayton

Hey all – quick question about importing resources. I noticed that atmos supports the ‘import’ command, but I’m not completely sure if it can be used to import resources into a stack component. Is this possible?

Brian Ojeda avatar
Brian Ojeda

I used it yesterday to import accounts into the account component of the CP ref arch aws components.

Brian Ojeda avatar
Brian Ojeda
atmos terraform import account aws_organizations_organization.this o-abcef12345 --stack glb-root --region us-east-1
atmos terraform import account 'aws_organizations_account.organization_accounts["dev"]' 000000000000 --stack glb-root --region us-east-1
atmos terraform import account 'aws_organizations_account.organization_accounts["prod"]' 000000000000 --stack glb-root --region us-east-1
Brian Ojeda avatar
Brian Ojeda

Posting it as an example. I had to pass the region arg too.

marc slayton avatar
marc slayton

Thanks, Brian – this really helps. Let you know how it goes. Cheers –

marc slayton avatar
marc slayton

After a little playing around with turning on/off sections of the main.tf in the account component, I was able to rebuild the entire account component from scratch – Org, OUs, Accounts, Policies – everything. Thanks so much for your pointer on this!

Brian Ojeda avatar
Brian Ojeda

No problem. Anytime.

mfridh avatar

When the tools are fighting the engineer’s explicit intentions, example #4568:

│ Error: Output refers to sensitive values
│ 
│   on outputs.tf line 13:
│   13: output recommended_ecs_ami {
│ 
│ To reduce the risk of accidentally exporting sensitive data that was intended to be only internal, Terraform requires that any root module output containing sensitive data be explicitly marked as
│ sensitive, to confirm your intent.
│ 
│ If you do intend to export this data, annotate the output value as sensitive by adding the following argument:
│     sensitive = true
╵

Yes, I want to output it… it’s actually not sensitive, just because the source happens to be a publicly available ssm parameter, which I’ve even explicitly marked as sensitive = false in the sub-module.

loren avatar

I think there is a new function unsensitive() or something, to remove the sensitive attribute from the object

loren avatar
nonsensitive - Functions - Configuration Language - Terraform by HashiCorp

The nonsensitive function removes the sensitive marking from a value that Terraform considers to be sensitive.
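
A minimal sketch for the output above (the data source and parameter path are illustrative; nonsensitive() needs Terraform 0.15+):

data "aws_ssm_parameter" "recommended_ecs_ami" {
  # a publicly available parameter, yet still marked sensitive by the provider
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

output "recommended_ecs_ami" {
  value = nonsensitive(data.aws_ssm_parameter.recommended_ecs_ami.value)
}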

mfridh avatar

awesome, thanks!

mfridh avatar

it’s pretty cool that the meta of values carries all the way through.

Alex Jurkiewicz avatar
Alex Jurkiewicz

yeah, the system is actually pretty good. Just that there’s no way to know ahead of time what resource attributes are sensitive, so you have to add sensitive and nonsensitive incrementally as you find problems

2021-05-27

emem avatar

hi, has anyone experienced this while creating a zone with the cloudflare module?

Enter a value: yes

module.zone.cloudflare_zone.default[0]: Destroying... [id=68ecf4a68af4f9ee970ce00a6f275064]
module.zone.cloudflare_zone.default[0]: Destruction complete after 1s
module.zone.cloudflare_zone.default[0]: Creating...

Error: Error setting plan free for zone "68ecf4a68af4f9ee970ce00a6f275064": HTTP status 403: Authentication error (10000)

  on ../../cloudflare/modules/main.tf line 26, in resource "cloudflare_zone" "default":
  26: resource "cloudflare_zone" "default" {
Fabian avatar

Hi. We have had terraform validations break twice in the last few months without us changing anything. This is very frustrating. The first time some AWS thing changed somehow. Still not sure about the second time. Has anyone else had similar situations? Are you just spending time fixing or has anyone considered moving away from TF? I think there are benefits to Terraform with AWS, but I don’t want to spend engineering time fixing these issues.

Brian Ojeda avatar
Brian Ojeda

Relatively new to TF, so I haven’t run into that issue. The alternatives are CDK and CloudFormation. Although I cannot speak to CDK, I would not switch back to CFN over TF. CFN has its problems…

• roughly 50% slower to deploy the same stacks

• import functionality is minimal and clunky

• very slow to add support for new services

• works only with aws resources

• deployments to k8s are more complex due to the need for multiple IaC languages

• error messages are cryptic for the inexperienced

• requires much more profound knowledge of aws

Experience: I spent the past 4-5 years writing hundreds of CFN templates. I wrote nearly all the automation using those templates to deploy thousands of stacks across multi-region and several dozen accounts.

Fabian avatar

Ok. Thank you.

Fabian avatar

I find it quite frustrating to see code break without changes. I haven’t seen anything like this before, I think.

Gerald avatar

(A bit of a shameless plug as I’m part of the team, sorry about that ) but if you have issues spotting unwanted changes, you might want to try running driftctl (OSS) against your infra to get a clear overview of what has actually changed. It will compare your state against your AWS account and list all resources not managed by TF. Once you’ve done that, you can create a baseline with a .driftignore file (here’s a link to a tutorial), and you’ll see each new change to your infra appear when you run the tool.

Start tracking drifts from a clean state with a .driftignore fileattachment image

How to start tracking drifts from a clean state whatever your IaC coverage, by automatically generating a .driftignore file

loren avatar

make sure you pin your terraform version in your root module, and at least be aware of changes in the provider versions that you are using

loren avatar

and for sure pin all module versions
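
A minimal sketch of pinning at all three levels (versions shown are illustrative):

terraform {
  required_version = "~> 0.14.0" # terraform itself

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.42" # the provider
    }
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.78.0" # exact module pin, so upgrades are deliberate
  # ...
}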

Fabian avatar

Hi Loren. That’s a good point. I think quite a few versions are pinned, but I’ll check. Stupid question - is there a good way to follow provider changes (in our case AWS)?

loren avatar

i’d recommend subscribing to the github releases… if you haven’t done that before, go here: https://github.com/hashicorp/terraform-provider-aws. click “Watch”, Custom, check Releases, Apply

hashicorp/terraform-provider-awsattachment image

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

loren avatar

you can also use the github integration for slack to post releases to a channel

MattyB avatar

@Fabian - what do you mean terraform validations broke? Can you go into a bit more detail? Terraform still has not reached an official v1 release. There are typically more headaches associated with a tool or language that moves as fast as Terraform has. If you’re not pinning to a specific version and stay on the latest… you may have migraines instead of a caffeine headache.

Fabian avatar

For example this alert started (my engineers said that nothing in our code had changed to trigger this)

[TF-WRAPPER][Terraform][Plan] Planning Terraform Code

Error: Unsupported argument

  on vpc.tf line 25, in module "vpc":
  25:   enable_s3_endpoint   = true

An argument named "enable_s3_endpoint" is not expected here.
loren avatar

that’s almost certainly from not pinning, and getting the latest module version, where the module has removed that argument

loren avatar

pin your modules!
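
If that module is the registry VPC module (an assumption), v3.0 dropped enable_s3_endpoint in favor of a separate endpoints submodule, so pinning back to the last 2.x release keeps the old argument working, e.g.:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  # assumption: the last release line that still accepts enable_s3_endpoint
  version = "2.78.0"

  # ...
  enable_s3_endpoint = true
}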

Fabian avatar

ok. I’ll have a look at that! thank you for the direction.

MattyB avatar

Right. There are also some other best practices that CloudPosse suggests you follow. I’ll link them later on unless someone else has them handy and can share.

Fabian avatar

Awesome! Thank you!

barak avatar

Hi everyone, happy to introduce a new open-source tool to tag and trace IaC (Terraform, CloudFormation, Serverless). We are using it in our CI to get a consistent owner, cost center, code-to-cloud trace, and other tags automatically added to each IaC resource. Feedback & GitHub stars are highly welcome! (A quick usage sketch follows below.)

Github Repo: https://github.com/bridgecrewio/yor Blog: https://bridgecrew.io/blog/announcing-yor-open-source-iac-tag-trace-cloud-resources/

bridgecrewio/yorattachment image

Extensible auto-tagger for your IaC files. The ultimate way to link entities in the cloud back to the codified resource which created it. - bridgecrewio/yor

Announcing our latest open-source project, Yor: Automated IaC tag and trace | Bridgecrew Blogattachment image

Yor is an automated IaC tag and trace tool that automatically adds attribution and trace tags to lower MTTR and simplify access control and cost allocation.

3
1
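
A minimal usage sketch, assuming the tag subcommand described in the project README:

# add/refresh trace and attribution tags on every IaC file under a directory
yor tag --directory ./terraform
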
Alex Jurkiewicz avatar
Alex Jurkiewicz

This is a cool idea! How does it work? Static analysis of the source code?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I imagine with Terraform there will be difficulties around supporting modules and in particular the CloudPosse “null label” convention. But still, I can see a lot of value here!

1
1

2021-05-28

Pierre-Yves avatar
Pierre-Yves

Hello, is there a way to do some math in Terraform? I would like to check if my server name contains an even or odd number, to dynamically set an Azure zone (1 if odd, 2 if even).

Alex Jurkiewicz avatar
Alex Jurkiewicz

do you need the rule to be what you said? Or do you just need 50% of servers to go into each zone?

Pierre-Yves avatar
Pierre-Yves

Hi, the Foqal bot told me that I can use the modulo (%) operator

Pierre-Yves avatar
Pierre-Yves

haha that’s better, then I can support 3 zones, but I may have no luck when there’s a low number of VMs…

Zoltan K avatar
Zoltan K

as I see, no even or odd number function is defined in https://www.terraform.io/docs/language/functions/index.html, so you need to go for the remainder operator https://www.terraform.io/docs/language/expressions/operators.html (quick sketch after the link below)

Functions - Configuration Language - Terraform by HashiCorp

The Terraform language has a number of built-in functions that can be called from within expressions to transform and combine values.
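
A quick sketch of that, with an illustrative variable name and regex:

locals {
  # pull the trailing digits out of a name like "web042" -> 42
  srv_num = tonumber(regex("([0-9]+)$", var.server_name)[0])

  # remainder operator: zone 1 if the number is odd, zone 2 if it is even
  zone = local.srv_num % 2 == 1 ? 1 : 2
}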

Pierre-Yves avatar
Pierre-Yves

@Zoltan K thanks that’s what I was looking for

1
Pierre-Yves avatar
Pierre-Yves

yes that’s what I am doing right now

Zoltan K avatar
Zoltan K

and we need to load the Terraform manual into Foqal to make it smarter, I guess

Pierre-Yves avatar
Pierre-Yves

well he gave me the math “%” to use :)

1
Pierre-Yves avatar
Pierre-Yves

so I end up selecting my zone_id for servers named testXXX with a “% 3”, as there are 3 zones in the Azure region: regex("([1-9][0-9]{0,2})$", srv_name)[0] % 3 + 1, which produces a zone number in 1, 2, 3

1
foqal avatar
foqal

@Pierre-Yves’s question was answered by <@Foqal>

david hoang avatar
david hoang

GM! Trying to update a dynamic subnet and having some issues. Goal: update the current wide-open NACL to a stricter NACL

I added resources for creating a public and a private network ACL, then added public_network_acl_id (pointing to the resource ID) to the dynamic_subnets module.

Getting error: count = local.public_network_acl_enabled

any examples of using a created network ACL and associating it with the subnets?
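
Not module-native, but a rough sketch of attaching a hand-rolled NACL to existing subnets (the module output names here are assumptions):

resource "aws_network_acl" "public" {
  vpc_id = module.vpc.vpc_id
  # listing subnet_ids here replaces the default NACL association for them
  subnet_ids = module.subnets.public_subnet_ids

  # allow inbound HTTPS only (illustrative rule)
  ingress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 443
    to_port    = 443
  }

  # allow all outbound traffic
  egress {
    rule_no    = 100
    protocol   = "-1"
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}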

mrwacky avatar
mrwacky

It would be great to be able to customize the NACLs managed by this module… Or is there another preferred Cloud Posse way to manage NACLs?

mrwacky avatar
mrwacky

I found https://github.com/cloudposse/terraform-aws-named-subnets Is this the alternative to dynamic-subnets?

cloudposse/terraform-aws-named-subnetsattachment image

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we could have a more robust way of managing NACLs, more like our new security group module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-security-groupattachment image

Terraform module to provision AWS Security Group. Contribute to cloudposse/terraform-aws-security-group development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But we haven’t gotten to that

david hoang avatar
david hoang

I have a workaround — testing and will update for a PR soon, thanks!

Alex Kagno avatar
Alex Kagno

Hi all, trying to leverage this AWS ES module and I can’t seem to get access to it… Is there a way from this module to enable open access? I’m trying to secure it with AWS Cognito and it creates without error but I can’t seem to turn open access on.

https://github.com/cloudposse/terraform-aws-elasticsearch

Thanks for all the great modules

cloudposse/terraform-aws-elasticsearchattachment image

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Matt Gowie avatar
Matt Gowie

This is more about debugging the deployed configuration and seeing what is different about your configured Cognito-backed ES instance vs. an AWS blog post’s Cognito-backed ES instance. The module should be providing the correct flags, otherwise there would likely be an open issue about this, so I would suggest digging into what you’re missing by comparing it to an AWS blog post.

Alex Kagno avatar
Alex Kagno
module "elasticsearch" {
  source = "cloudposse/elasticsearch/aws"

  namespace   = "main"
  stage       = "development"
  name        = "logging"

  zone_awareness_enabled  = true

  vpc_enabled = false

  cognito_authentication_enabled  = true
  cognito_identity_pool_id        = aws_cognito_identity_pool.main.id
  cognito_user_pool_id            = aws_cognito_user_pool.main.id
  cognito_iam_role_arn            = aws_iam_role.cognito.arn
  
  elasticsearch_version     = "7.10"
  instance_type             = "t3.medium.elasticsearch"
  instance_count            = 4
  ebs_volume_size           = 10
  encrypt_at_rest_enabled   = true
  
  create_iam_service_linked_role  = true

  domain_hostname_enabled               = true
  dns_zone_id                           = data.aws_route53_zone.main.zone_id
  domain_endpoint_options_enforce_https = true

  custom_endpoint                   = "aes.${local.r53_zone_name}"
  custom_endpoint_certificate_arn   = module.acm_aes.acm_certificate_arn
  custom_endpoint_enabled           = true

  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  tags = local.common_tags
}

2021-05-29

2021-05-30

DevOpsGuy avatar
DevOpsGuy

Guys, I am trying to use S3 as the backend for storing Terraform state files. We have only one account for QA, STG, and PROD in our AWS. I am using GitLab for CI/CD. Not sure how to store environment-specific state files. Can someone please help with how to use the Terraform workspace concept when we have only one service account in AWS for all the environments?

msharma24 avatar
msharma24

Hey @DevOpsGuy 1 - You should use a separate AWS account for dev and prod. 2 - Here is an example of how I have done this in the past - https://github.com/msharma24/multi-env-aws-terraform (see the backend sketch after the link below)

( I just pushed this example for you )

msharma24/multi-env-aws-terraformattachment image

Multi environment AWS Terraform demo. Contribute to msharma24/multi-env-aws-terraform development by creating an account on GitHub.
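
A rough sketch of the single-account approach: one S3 backend block, with per-environment state separated by workspace (the bucket and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"       # placeholder
    key            = "infra/terraform.tfstate"
    region         = "ap-southeast-2"
    dynamodb_table = "my-tf-locks"              # placeholder, used for state locking
    encrypt        = true
  }
}

# terraform workspace new qa     -> state lands at env:/qa/infra/terraform.tfstate
# terraform workspace select stg
# terraform workspace select prod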

DevOpsGuy avatar
DevOpsGuy

@msharma24 Thank you so much. It worked.

msharma24 avatar
msharma24

Mate

2021-05-31

marcoscb avatar
marcoscb

Hello all, I am using the terraform-aws-eks-cluster module 0.38.0 with TF 0.14.11 and kubernetes provider 2.1.0. Although the initial from-scratch deployment works fine, trying to plan the same deployment from another environment (same code and config) fails with the error “Error: Get “http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth”: dial tcp [::1]:80: connect: connection refused”. Using TF 0.13.7 works fine from everywhere, every time. I think it’s related to TF 0.14 managing the kubernetes provider’s dependency on the EKS cluster (created in the same apply through data resources) differently than TF 0.13. Has someone experienced this before? Is there any plan to split the EKS cluster and dependent resources into separate stacks to avoid this in the terraform-aws-eks-cluster module? Thanks.
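
One common workaround is to configure the kubernetes provider with exec-based auth straight off the module outputs instead of data sources, so nothing has to be known before apply (the output names below are assumptions about the cloudposse module):

provider "kubernetes" {
  host                   = module.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # fetch a fresh token at plan/apply time instead of persisting one
    args        = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
  }
}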

Titouan avatar
Titouan

Hi, I’m also having an “eks-cluster” module issue similar to the one in these threads. Any leads as to what might be going on? https://sweetops.slack.com/archives/CB6GHNLG0/p1612683973314300
