#terraform (2021-03)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-03-01

discourse avatar
discourse
04:07:03 PM
Need a way to toggle label_key_case context [Terraform]

Before I went off to develop a method, I thought I would reach out and ask for some help/guidance. Here’s the situation:

I have a unique scenario where the IAM role needs to have the “Name” tag value all capitalized to match the AD naming schema requirement (i.e., SERVICE-TEAM-ROLE).

I am using the terraform-null-label module for everything and have the name = module.custom_role_label where I can …

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know how you can run terraform state show on module.gitlab_repository_webhook.gitlab_project_hook.this[15] ?

RB avatar
terraform state show "module.gitlab_repository_webhook.gitlab_project_hook.this[15]"

should work

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i was missing the quotes thanks @RB

RB avatar

it’s not obvious but any time you use square brackets, you’ll need quotations

RB avatar

best to default to using quotations all the time

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

makes sense

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

in tf 0.13 can you do a for_each on a module?

loren avatar

yes

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the issue i have is each time it needs to execute against a different account

jose.amengual avatar
jose.amengual

I think for loops are totally random

jose.amengual avatar
jose.amengual

so every time you execute them it will pick a different account

jose.amengual avatar
jose.amengual

I do not know if there is a way to force an order

jose.amengual avatar
jose.amengual

maybe @loren knows?

loren avatar

i’m guessing getting the provider right is the issue, more than order?

loren avatar

there is not yet a way to use an expression in a module’s “providers” block… it is static, even if the module is using for_each. so i don’t think you can do what you are thinking yet…

loren avatar

you could maybe template the tf files from outside terraform, not using for_each, but creating an instance of the module per account and setting the provider
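A minimal sketch of that static pattern, since a module’s providers block cannot take an expression (provider aliases and the module path here are hypothetical):

provider "aws" {
  alias   = "account_a"
  profile = "account-a" # hypothetical profile
}

provider "aws" {
  alias   = "account_b"
  profile = "account-b" # hypothetical profile
}

# One module instance per account, each with a statically wired provider
module "thing_account_a" {
  source    = "./modules/thing" # hypothetical module
  providers = { aws = aws.account_a }
}

module "thing_account_b" {
  source    = "./modules/thing"
  providers = { aws = aws.account_b }
}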

jose.amengual avatar
jose.amengual

good point I was assuming he had that working

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want to loop around each account in a list and execute a module

Alex Jurkiewicz avatar
Alex Jurkiewicz

if I have an aws_iam_policy_document and a JSON policy statement, what’s the best way to append the latter to the former?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have an IAM policy aggregator module that maybe can help?

1
loren avatar

On the aws_iam_policy_document, use source_json and/or override_json with pre-3.28.0 versions of the aws provider. Or with >=3.28.0, use source_policy_documents and/or override_policy_documents

2
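A minimal sketch of the >=3.28.0 approach (var.base_json is a hypothetical stand-in for the existing JSON policy being appended):

# source_policy_documents merges the given JSON documents into this one
data "aws_iam_policy_document" "combined" {
  source_policy_documents = [var.base_json]

  statement {
    sid       = "ExtraRead"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}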
loren avatar

Meant to open an issue @Erik Osterman (Cloud Posse), with 3.28.0, that aggregator module can probably be archived…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ah cool, is this related to that AWS provider release that made you cry tears of joy?

loren avatar

Yeah, haha, that was the first of them… The second was managing roles with exclusive policy attachments

Alex Jurkiewicz avatar
Alex Jurkiewicz

wow. I wish I’d waited before rewriting this policy statement in HCL/jsonencode

Matt Gowie avatar
Matt Gowie

Great tool for if you ever need to do that in the future @Alex Jurkiewicz: https://github.com/flosell/iam-policy-json-to-terraform

flosell/iam-policy-json-to-terraform

Small tool to convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document - flosell/iam-policy-json-to-terraform

2
Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m thinking about doing something horrible with jsonencode and jsondecode

discourse avatar
discourse
06:52:09 AM
Need a way to toggle label_key_case context [Terraform]

This is an interesting use-case and it makes sense what you’re trying to do. Unfortunately, we don’t support selectively manipulating the case for an individual tag.

I suggest opening an issue to see if we can collect support for it. In the meantime, the best workaround I can suggest would be using a local and doing your own transformations there.
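A minimal sketch of that workaround, assuming the standard null-label outputs (id, tags) and that only the Name tag must be upper-cased:

locals {
  # Override just the Name tag with an upper-cased value to match the AD schema
  role_tags = merge(
    module.custom_role_label.tags,
    { Name = upper(module.custom_role_label.id) }
  )
}

resource "aws_iam_role" "service" {
  name               = upper(module.custom_role_label.id)
  assume_role_policy = data.aws_iam_policy_document.assume.json # hypothetical policy
  tags               = local.role_tags
}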

2021-03-02

nagakonduru379 avatar
nagakonduru379

Does anyone know how to work with dynamodb backup and restore using terraform? Or by using aws backup with terraform?

Pavel avatar

hey all

wave1
Pavel avatar

I started using the /terraform-aws-ecs-codepipeline module yesterday and I ran into a couple of issues. The first, more immediate one is that when the Deploy to ECS stage runs, it sets the desired count to 3 containers. I am not setting this anywhere in my configuration; I actually have it set to 1 as this is for dev. I am running EC2 to host my containers.

Pavel avatar
Pavel
06:12:52 PM

basically its just stuck here til it times out

Pavel avatar

is there some setting in my ecs service that im missing?

Pavel avatar

i think it may have to do with ec2 capacity

Pavel avatar

the desired count being mysteriously set to 3 needs to be solved

RB avatar

create a ticket with a Minimal, Reproducible Example and then someone will investigate eventually

How to create a Minimal, Reproducible Example - Help Center
Stack Overflow: The World’s Largest Online Community for Developers
Pavel avatar

this is probably upstream from this package tbh

Pavel avatar

i’ve reviewed all the tf code and i don’t see anything that would set a desired count at all. the module doesn’t control my service or anything and the Deploy stage part of the code is pretty minimal.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s just not enough to go on. The cloudposse/terraform-aws-ecs-codepipeline doesn’t even create the tasks. You must be using other modules. Provide as much context as possible. We have hundreds of modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(and use threads)

Pavel avatar

i think i found the culprit actually, im running a next js deployment, and im doing a build on the container once it starts. that build appears to trigger the autoscale based on cpu usage

1
Pavel avatar

the deployment not removing the previous deployment looks like a host capacity issue

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that makes sense

Pavel avatar

i gotta figure out how to get my app autoscale policy to scale up more ec2s to fit the service needs

Mohammed Yahya avatar
Mohammed Yahya
Generate Terraform module documentation

Generate documentation from Terraform modules in various output formats.

1
RB avatar

Yes! this is currently used in all the cloudposse tf modules


RB avatar

and in my company modules

Matt Gowie avatar
Matt Gowie

Ah they launched a docs site though — that looks fresh.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(didn’t know it supported a config)

Mohammed Yahya avatar
Mohammed Yahya

Yes, and you can pull the header from a file like doc.tf and add it to README.md

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Want to share some cool stuff about this today during announcements in #office-hours?

1
Jeff Dyke avatar
Jeff Dyke

Curious if anyone has done a TG -> TF migration. I’m about to embark on my own, and if you have any info to share it would be appreciated. I started with TF, but it’s been a couple of years, so I’m mostly trying to get my head around replacing the abstractions; for one, I’ll be using https://github.com/cloudposse/terraform-provider-utils/blob/main/examples/data-sources/utils_stack_config_yaml/data-source.tf as a complementary solution to TG going up two levels by default. Thanks for any input. (Edited due to response in thread, posted the wrong repo)

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (E.g. deep merging) - cloudposse/terraform-provider-utils

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://github.com/cloudposse/terraform-yaml-config is a more or less generic module for working with YAML files and deep-merging them (using imports to be DRY)

cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have a more opinionated one for YAML stack configs (Terraform and helmfile vars defined in YAML files for stacks of any level of hierarchy ) https://github.com/cloudposse/terraform-yaml-stack-config

cloudposse/terraform-yaml-stack-config

Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

Jeff Dyke avatar
Jeff Dyke

ahh, crud, thats what i meant to post.

Jeff Dyke avatar
Jeff Dyke

Thanks for chiming in and correcting that.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it uses this TF provider https://github.com/cloudposse/terraform-provider-utils/blob/main/examples/data-sources/utils_stack_config_yaml/data-source.tf to deep-merge YAML configs (TF deep-merging was very slow, go is much faster)

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (E.g. deep merging) - cloudposse/terraform-provider-utils

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(having said that, we did not try to convert TG to TF )

Jeff Dyke avatar
Jeff Dyke

That makes complete sense. The stack module which has the child modules seems like a great fit to the default actions of TG. It also seems like it can solve my remote_state configuration issue, which was the second thing i was concerned about.

Jeff Dyke avatar
Jeff Dyke

It’s all in a branch, of course, and if I can scrub enough company data for it to still be useful to others, I’ll make sure I do a write-up and share it here.

1
Mohammed Yahya avatar
Mohammed Yahya

It’s all about the Terraform modules you are using. Start with one module at a time, replacing Terragrunt blocks with their Terraform equivalents (see the sketch after this list):

• backend block >> backend.tf

• provider block >> provider.tf

• inputs block >> terraform.tfvars

• locals block >> locals.tf, and for each env in Terragrunt create an env folder in Terraform
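For example, a minimal sketch (with hypothetical names) of what a Terragrunt remote_state block becomes as a plain backend.tf:

# backend.tf - replaces the Terragrunt remote_state block
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"
    key            = "prod/vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}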

Mohammed Yahya avatar
Mohammed Yahya
mhmdio/terraform-infracost-atlantis

Contribute to mhmdio/terraform-infracost-atlantis development by creating an account on GitHub.

Mohammed Yahya avatar
Mohammed Yahya

so you need to map back the abstraction from Terragrunt into Terraform, let me know if you need any help

Jeff Dyke avatar
Jeff Dyke

Thanks!

2021-03-03

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know of a module or anything that can lock down the default security group in all regions within an account?

Mohammed Yahya avatar
Mohammed Yahya

delete them!

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can you do that via terraform though?

Mohammed Yahya avatar
Mohammed Yahya

nope, try cloud-nuke

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

my favourite tool

Gareth avatar

Hi all, can anybody share their wisdom on the best way to rotate aws_iam_access_keys? Do most people taint the resource/module that created them, or do you have two resources, like aws_iam_access_key.keyone and aws_iam_access_key.keytwo, each gated by a count on a var.rotate flag, with an output switching between the two? Once applied, you would then roll your environment and set keyone to false. When it comes to rotating them again in the future, you’d state-move keytwo to keyone and repeat?
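A minimal sketch of the two-key toggle described above (variable and resource names are hypothetical):

variable "key_one_enabled" {
  type    = bool
  default = true
}

variable "key_two_enabled" {
  type    = bool
  default = false
}

resource "aws_iam_access_key" "key_one" {
  count = var.key_one_enabled ? 1 : 0
  user  = aws_iam_user.app.name # hypothetical user
}

resource "aws_iam_access_key" "key_two" {
  count = var.key_two_enabled ? 1 : 0
  user  = aws_iam_user.app.name
}

output "active_access_key_id" {
  # Enable key two, roll the environment, then disable key one
  value = var.key_two_enabled ? aws_iam_access_key.key_two[0].id : aws_iam_access_key.key_one[0].id
}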

Mohammed Yahya avatar
Mohammed Yahya

AWS SSO would be a better alternative

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think managing credentials that rotate with terraform is a bad idea.

Gareth avatar

Thanks for taking the time to leave comments. I’m always open to learning new ways to do things, so could you expand on your answers to suggest how I’d best manage things such as SMTP authentication details within applications?

I can’t always use an instance role, as the application requires a username and password to be configured or it won’t start. Typically I create a user, and then user data grabs the ID and ses_smtp_password_v4 details from the secret store on boot. This allows us to change the details every time we flatten and rebuild the environment. My challenge comes when I get to a prod system where I need to leave the access keys in place while a new ASG spins up and replaces the old machines/keys. I can simply do this by removing the current key from state and running TF apply again, but this never feels quite right. Hence my original thought about two variables that I toggle, but I’m interested to hear an alternative approach.

1
Gareth avatar

Anybody got further thoughts on this?

Mohammed Yahya avatar
Mohammed Yahya

I think you should move this out of Terraform; use a Lambda function with boto3 to provide temp credentials to your ASG when needed, or use something like HashiCorp Vault to implement that.

Gareth avatar

Thanks Mohammed, interesting idea, I’ll give it some further thought as I can see how that could work.

mikesew avatar
mikesew

can hashicorp vault actually produce IAM secret key & access keys? doesn’t seem so with the kv engine. You’d have to re-gen the new keys and store them in the kv engine as a new set… basically the multi-step plan @Gareth was initially thinking about

1
Gareth avatar

The issue I see with tainting is stopping the removal of the current key before we’ve rolled our environment to the new key. I guess you could target the creation or delete it from the state before applying, but both options feel fudgy.

Mike Robinson avatar
Mike Robinson

Hello,

In the module eks-iam-role, is there a recommended way to pass in an AWS managed policy? An example use case would be creating an OIDC role for the VPC CNI add-on as described here. Currently all I can think of is something like:

data "aws_iam_policy" "cni_policy" {
  arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

module "vpc_cni_oidc_role" {
  source = "cloudposse/eks-iam-role/aws"
  version = "x.x.x"

  [...a bunch of vars...]

  aws_iam_policy_document = data.aws_iam_policy.cni_policy.policy
}
cloudposse/terraform-aws-eks-iam-role

Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role

Configuring the VPC CNI plugin to use IAM roles for service accounts - Amazon EKS

The Amazon VPC CNI plugin for Kubernetes is the networking plugin for pod networking in Amazon EKS clusters. The CNI plugin is responsible for allocating VPC IP addresses to Kubernetes nodes and configuring the necessary networking for pods on each node. The plugin:

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know what IAM permissions / RBAC are required to view workloads in the EKS cluster via the AWS console? I can’t for the life of me find it documented anywhere!

Mike Robinson avatar
Mike Robinson
Troubleshooting IAM - Amazon EKS

This topic covers some common errors that you may see while using Amazon EKS with IAM and how to work around them.

Bart Coddens avatar
Bart Coddens

Hi all, I am wondering how you are managing the S3 state files

RB avatar

S3 with dynamodb locking

RB avatar

Versioned s3 bucket

Bart Coddens avatar
Bart Coddens

yeah, cloudposse has a module for this

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

Bart Coddens avatar
Bart Coddens

which works fine

Bart Coddens avatar
Bart Coddens

but you cannot use a variable in the file it generates

Bart Coddens avatar
Bart Coddens
 backend "s3" {
   region         = "us-east-1"
   bucket         = "< the name of the S3 state bucket >"
   key            = "terraform.tfstate"
   dynamodb_table = "< the name of the DynamoDB locking table >"
   profile        = ""
   role_arn       = ""
   encrypt        = true
 }
Bart Coddens avatar
Bart Coddens

I want to use a variable here because the profile can change, and the key as well

RB avatar

last time i checked, that module cannot reuse an s3 bucket unfortunately

Bart Coddens avatar
Bart Coddens

what I can do is copy over the backend.tf file it generates and replace the key there

Bart Coddens avatar
Bart Coddens

per thing that I need

Bart Coddens avatar
Bart Coddens

I don’t want to put everything in a single state file

Bart Coddens avatar
Bart Coddens

but I want to use a single bucket

RB avatar

i run this script

# bucket name
export tfstateBucket=mybucket
# get repo name
export repoName=$(git config --get remote.origin.url | cut -d '/' -f2 | cut -d '.' -f1)
# get current directory
export repoDir=$(git rev-parse --show-prefix | rev | cut -c 2- | rev)
# create backend
cat <<EOF > backend.tf
terraform {
  required_version = ">=0.12"
  backend "s3" {
    encrypt        = true
    bucket         = "$tfstateBucket"
    dynamodb_table = "TerraformLock"
    region         = "us-west-2"
    key            = "$repoName/$repoDir/terraform.tfstate"
  }
}
EOF
# see the newly created file just to be safe
cat backend.tf
RB avatar

it always gives me a unique key based on the repo name and repo path

RB avatar

youll have to change tfstateBucket

Bart Coddens avatar
Bart Coddens

that’s easy, we have multiple customers

Bart Coddens avatar
Bart Coddens

and I use the customername as a single identifier

Bart Coddens avatar
Bart Coddens

hah you can use it

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

Bart Coddens avatar
Bart Coddens

they generate file based on variables

Bart Coddens avatar
Bart Coddens

so a combo of init, apply and init again should work

RB avatar
RB
06:43:27 PM

¯\_(ツ)_/¯

Bart Coddens avatar
Bart Coddens

hmmm interesting

RB avatar

it wont work for me yet as the bucket is not configurable but im glad it works for you

Bart Coddens avatar
Bart Coddens

don’t you use a single bucket to put all the state files in, per project?

Bart Coddens avatar
Bart Coddens

separated by keys?

RB avatar

that module would have to be consumed in everyone of our modules for the backend.tf to be generated, which would then create a new bucket and dynamodb table for each module

RB avatar

what id want is to reuse a single dynamodb and state bucket for every terraform module

Bart Coddens avatar
Bart Coddens

yeah

Bart Coddens avatar
Bart Coddens

same here

RB avatar

but i dont believe the cloudposse tf module supports that (yet)

Bart Coddens avatar
Bart Coddens

I want to make the location of the state file configurable

1
Bart Coddens avatar
Bart Coddens

the rest can stay as is

RB avatar

im sure everyone would be open to a PR. probably no one has gotten around to it.

Bart Coddens avatar
Bart Coddens

because I get in trouble as well when I have a single AWS account that we use as an internal account and we want to add a new project

Bart Coddens avatar
Bart Coddens

then you need to import the current bucket and dynamodb table

Bart Coddens avatar
Bart Coddens

and then it gets a bit messy

RB avatar

gross

RB avatar

i wouldnt import a bucket and dynamodb table to more than one module

RB avatar

i also wouldnt import any resource to more than one module

RB avatar

thats an antipattern

Bart Coddens avatar
Bart Coddens

thanks for the suggestion

Bart Coddens avatar
Bart Coddens

I am still a bit newbie in terraform

RB avatar

there’s a lot to learn but basically resources should be stored in a single state. if they are stored in multiple states then multiple modules may try to modify the same resource and then have conflicting states

RB avatar

keep shared resources in their own modules.

Bart Coddens avatar
Bart Coddens

I do

Bart Coddens avatar
Bart Coddens

what I am puzzled about is the best way to handle this:

Bart Coddens avatar
Bart Coddens

I have one AWS account

Bart Coddens avatar
Bart Coddens

we have several machines under that account

Bart Coddens avatar
Bart Coddens

that we use internally

Bart Coddens avatar
Bart Coddens

for all external customers, they have their own account

Bart Coddens avatar
Bart Coddens

which makes handling a state file easy

Bart Coddens avatar
Bart Coddens

for the multiple projects under our own internal account

Bart Coddens avatar
Bart Coddens

I am puzzled how to handle this the best

Bart Coddens avatar
Bart Coddens

a bucket per internal project?

Bart Coddens avatar
Bart Coddens

or prefixes in one big s3 bucket

RB avatar

this is what i do with multiple accounts.

• 1 tfstate bucket
• 1 dynamodb table
• reuse the bucket with a key of the repo name
• inside the repo name key, use a repo path key

RB avatar

same shared bucket across all accounts

RB avatar

same shared dynamodb across all accounts

RB avatar

the unique identifier is the key / prefix

Bart Coddens avatar
Bart Coddens

so you dynamically create the s3state.tf that pushes the state to the backend, right?

RB avatar

no. i dynamically create the backend.tf using the script i shared earlier

RB avatar

then it magically works

Bart Coddens avatar
Bart Coddens

nice

Bart Coddens avatar
Bart Coddens

it’s more clear now, thanks a lot

np1
Bart Coddens avatar
Bart Coddens

in the past I was the main creator of the terraform configs; now I need to share the work with coworkers, which is better in one way but gives me way more work

Bart Coddens avatar
Bart Coddens

but in the end, we will all benefit (I hope so)

RB avatar

divide and conquer!

MattyB avatar

We currently have the concept of a centralized aws account that manages buckets & dynamo (via terraform) as well as other things unrelated to this conversation (logging, monitoring, etc..). Using terraform workspaces & assume_role in the provider - you can do exactly what you’re looking for. This may be a bit more advanced and makes assumptions about having SSO and other policies in place already IIRC. I’ll try to provide a scrubbed example shortly.

Bart Coddens avatar
Bart Coddens

nice thanks MattyB

MattyB avatar

Here’s an example I found for the provider portion: https://gist.github.com/RoyLDD/156d384bd33982e77630b27ff9568f63

Will try to get back to you about the terraform backend. Gotta run to a meeting..

Jeff Dyke avatar
Jeff Dyke

Thanks for the convo

MattyB avatar

Back to it…using the examples above and a backend config like so:

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "prod-terraform-state-bucket"
    key    = "terraform.tfstate"
    region = "us-west-1"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "prod-terraform-state-lock-table"
    encrypt        = true
  }
}

When you use terraform workspace with this config, it stores the state in the bucket at env:/<workspace>/terraform.tfstate in the centralized account
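For reference, the S3 backend stores each workspace’s state under a configurable prefix; workspace_key_prefix defaults to env: (a sketch reusing the config above):

terraform {
  backend "s3" {
    bucket         = "prod-terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-west-1"
    dynamodb_table = "prod-terraform-state-lock-table"
    encrypt        = true

    # Optional; defaults to "env:", yielding env:/<workspace>/terraform.tfstate
    workspace_key_prefix = "env:"
  }
}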

Jeff Dyke avatar
Jeff Dyke

I avoided workspaces when i first started b/c they were rather opaque. Do you use these as a go to?

MattyB avatar

Yep. We use it with every account. 1 coworker copy/pasted between staging and prod directories instead…and that account is a mess. He also didn’t use CloudPosse modules, and developed his own instead. It’s seen as the red headed step child that nobody wants to touch. lol

Jeff Dyke avatar
Jeff Dyke

Fair enough. I plan on using the modules plus some tips here and in #office-hours today. trying to get as many opinions as possible before i start changing everything.

MattyB avatar

Gotcha. It’s straightforward and IMO helps keep environments as close to the same as possible. If you want a smaller DB in staging than you need in prod just set a different variable. Super simple. I don’t know what copy/pasting between directories (dev/stage/prod) buys you.

Jeff Dyke avatar
Jeff Dyke

Not something that drives me. I’m more interested in keeping environments clean and reducing copy/paste.

Jeff Dyke avatar
Jeff Dyke

right now i have none

MattyB avatar

Nice! Bug free

Jeff Dyke avatar
Jeff Dyke

yep, and that’s what i’m trying to do with plain TF, with some CP module assistance.

Jeff Dyke avatar
Jeff Dyke

now all abstractions are handled via TG.

Bart Coddens avatar
Bart Coddens

it’s a pain that terraform does not allow variables here

Alex Jurkiewicz avatar
Alex Jurkiewicz

today’s dumb terraform code. I had some Lambda functions defined as a map for for_each purposes, like

locals {
  lambda_functions = {
    foo = { memory=256, iam_policies = tolist([...]) }
    bar = { memory=128 }
  }
}

and I wanted to loop over any custom iam policies defined to add them to the execution role. This is the simplest for_each loop I could write that worked:

dynamic "inline_policy" {
    for_each = length(lookup(local.lambda_functions[each.key], "iam_policies", [])) > 0 ? each.value.iam_policies : []
    content {
      name   = "ExtraPermissions${md5(jsonencode(inline_policy.value))}"
      policy = jsonencode(inline_policy.value)
    }
  }

You’d think for_each = lookup(local.lambda_functions[each.key], "iam_policies", []) would work, but it doesn’t, because you can’t build a default value with the same type as the iam_policies values from your data type. Sometimes, I wish Terraform never tried to add strict typing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform “static” typing is annoying in many cases

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
try(local.lambda_functions[each.key]["iam_policies"], [])
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe that ^ will work

loren avatar

Just add iam_policies = [] to each item. Simplifies all the logic away

2
loren avatar

for example (now that i’m back at a keyboard…)

locals {
  lambda_functions = {
    foo = { memory=256, iam_policies = [...] }
    bar = { memory=128, iam_policies = [] }
  }
}

...

dynamic "inline_policy" {
    for_each = local.lambda_functions[each.key].iam_policies
    content {
      name   = "ExtraPermissions${md5(jsonencode(inline_policy.value))}"
      policy = jsonencode(inline_policy.value)
    }
  }
CedricT avatar
CedricT

Hello everybody, nice to meet you! I’m Cedric and I just joined.

wave3
CedricT avatar
CedricT

My first question is about the terraform-aws-rds-cluster module. I see the input “enabled_cloudwatch_logs_exports” we can use to select which logs to send to CloudWatch, but I don’t find any input for the Log Group the logs will be sent to. Any clue?

jose.amengual avatar
jose.amengual

you could set it like this

enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
jose.amengual avatar
jose.amengual

the docs are related to cloudwatch log groups for RDS

jose.amengual avatar
jose.amengual

that is where you can find the info and details of the valid loggroups

CedricT avatar
CedricT

Thanks @jose.amengual, that works great. Actually what I didn’t get is that I don’t have to declare any CloudWatch Log group. So I used enabled_cloudwatch_logs_exports = ["postgresql"] and everything has been fine! Thanks again.

jose.amengual avatar
jose.amengual

np

jose.amengual avatar
jose.amengual

ahhhh yes in postgres you only have one

1
CedricT avatar
CedricT

Thanks.

Pavel avatar

is there a way to integrate “cloudposse/ecs-codepipeline/aws” module with SSM Parameter Store to feed the build image environment vars

Mohammed Yahya avatar
Mohammed Yahya
Prototype functionality in support of ongoing Module Acceptance Testing research by apparentlymart · Pull Request #27873 · hashicorp/terraform

During subsequent release periods we're hoping to perform more research and development on the problem space of module acceptance testing. So far we've been doing this research using out-of…

2021-03-04

Mohammed Yahya avatar
Mohammed Yahya

I created a VSCode Terraform IaC Extension Pack to help with developing Terraform templates and modules; please test and give feedback https://marketplace.visualstudio.com/items?itemName=mhmdio.terraform-extension-pack

Terraform IaC Extension Pack - Visual Studio Marketplace

Extension for Visual Studio Code - Awesome Terraform Extensions

6
1
Andy avatar

indent-rainbow FTW

Terraform IaC Extension Pack - Visual Studio Marketplace

Extension for Visual Studio Code - Awesome Terraform Extensions

1
maarten avatar
maarten

@Andriy Knysh (Cloud Posse) my greetings. I’m checking out the terraform-aws-backup module. I also want to keep a cross-organisational copy. Would it make sense to use the module on both accounts, or to use a simple aws_backup_vault resource on the receiving end?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Hi @maarten

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what resources do you need to backup?

maarten avatar
maarten

dynamodb (cmk)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Cross-account replication with Amazon DynamoDB | Amazon Web Services

Hundreds of thousands of customers use Amazon DynamoDB for mission-critical workloads. In some situations, you may want to migrate your DynamoDB tables into a different AWS account, for example, in the eventuality of a company being acquired by another company. Another use case is adopting a multi-account strategy, in which you have a dependent account […]

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-backup

Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services such as EBS volumes, RDS databases, Dy…

Cross-Account Backups - AWS Backup

Using AWS Backup, you can back up to multiple AWS accounts on demand or automatically as part of a scheduled backup plan. Cross-account backup is valuable if you need to store backups to one or more AWS accounts. Before you can do this, you must have two accounts that belong to the same organization in the AWS Organizations service. For more information, see

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, you can add the above to the top-level module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then yes, you create a separate aws_backup_vault and use https://github.com/cloudposse/terraform-aws-backup/blob/master/variables.tf#L42 to copy into it

cloudposse/terraform-aws-backup

Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services such as EBS volumes, RDS databases, Dy…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(IAM permissions must be in place)
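A rough sketch of that layout, assuming a provider alias for the receiving account and that the module’s destination-vault input is named as in the linked variables.tf:

# Receiving account: a plain vault is enough on that side
resource "aws_backup_vault" "replica" {
  provider = aws.backup_target # hypothetical provider alias
  name     = "cross-account-replica"
}

# Source account: point the module's copy destination at the replica vault
module "backup" {
  source  = "cloudposse/backup/aws"
  version = "x.x.x"

  # input name assumed from the linked variables.tf
  destination_vault_arn = aws_backup_vault.replica.arn
}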

Pavel avatar

I noticed that the github_webhook module has an inline provider; this prevents any parent modules from being used in a for_each. For example, “cloudposse/ecs-codepipeline/aws” is affected by this. I was not able to spin up multiple codepipelines in a loop unless I removed all the github webhook stuff from it. A separate issue is that the whole webhook thing doesn’t work with my org for some reason: when it tries to post to the github api it’s missing the org, so it ends up looking like https://api.github.com/repo//repo-name/hooks where the // should be the org. Maybe there is some documentation about required parameters, but if you test the examples with 0.23 and terraform 0.14, it won’t work.

Justin Seiser avatar
Justin Seiser

Anyone familiar with the <https://github.com/cloudposse/terraform-aws-efs> module? I cannot figure out how it wants me to pass https://github.com/cloudposse/terraform-aws-efs/blob/master/variables.tf#L12

cloudposse/terraform-aws-efs

Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-efs

Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs

Justin Seiser avatar
Justin Seiser

Word.

Justin Seiser avatar
Justin Seiser

still not sure how to pass a map of maps like that

Justin Seiser avatar
Justin Seiser

I’ve tried tons of variations on..

  access_points = {
    jenkins = {
      posix_user = {
        uid = 1000
        gid = 1000
      }
      root_directory = {
        path = "/jenkins"
        creation_info = {
          owner_gid = 1000
          owner_uid = 1000
          permissions = 777
        }
      }
    }
  }
Justin Seiser avatar
Justin Seiser

but they all error out with things similar to

The given value is not suitable for child module variable "access_points"
defined at .terraform/modules/efs/variables.tf:12,1-25: element "jenkins":
element "root_directory": all map elements must have the same type.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the issue with the latest TF versions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they introduced so-called “strict” type system

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all items in a map must have the same types even for all complex items

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to make all map entries have the same fields, and set those not in use to null

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
posix_user = {
  uid  = 1000
  gid  = 1000
  path = null
  creation_info = {
    owner_gid   = null
    owner_uid   = null
    permissions = null
  }
}
root_directory = {
  uid  = null
  gid  = null
  path = "/jenkins"
  creation_info = {
    owner_gid   = 1000
    owner_uid   = 1000
    permissions = 777
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

something like that ^

Justin Seiser avatar
Justin Seiser

that looks horrible.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not pretty (and might not work, in which case we’ll have to redesign the module)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can separate those into two vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

PRs are always welcome

Justin Seiser avatar
Justin Seiser

If I were able to understand what this was doing, I wouldn’t be here asking

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, but you are trying to use a feature of the module which was tested with TF 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF 0.13 and up introduced a lot of changes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the feature needs to be updated to support TF 0.13/0.14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the only ways around are: 1) use the ugly syntax above; 2) open a PR to separate the map into two variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Justin Seiser would you like to open a PR with the changes? We’ll help with it and review

Justin Seiser avatar
Justin Seiser

I am not capable of writing the PR.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I see you opened an issue, thanks (we’ll get to it)

Tomek avatar

Given the map:

map = {
  events = ["foo"]
  another_key = "bar"
}

How would you go about appending "baz" to the list in the events key so that you end up with:

new_map = {
  events = ["foo", "baz"]
  another_key = "bar"
}
loren avatar
new_map = merge(
  map,
  {
    events = concat(map.events, ["baz"])
  },
)
1
kumar k avatar
kumar k

Hello

kumar k avatar
kumar k

Can someone please provide the upgrade steps from 0.12.29 -> 0.13.6?

MattyB avatar

In regards to CloudPosse Terraform modules or with Terraform in general?

  1. Start your morning with an extra hit of caffeine.
  2. Be thankful it’s not as bad as 0.11 to 0.12
3
jose.amengual avatar
jose.amengual

it should work right away

kumar k avatar
kumar k

Terraform In general

jose.amengual avatar
jose.amengual

including Cloudposse modules

jose.amengual avatar
jose.amengual

most of the modules are 0.13-0.14 compatible

kumar k avatar
kumar k

I am running the following steps tfenv use latest:^0.13

kumar k avatar
kumar k

terraform 0.13upgrade

kumar k avatar
kumar k

terraform init terraform apply

1
jose.amengual avatar
jose.amengual

in your modules?

kumar k avatar
kumar k

yes

jose.amengual avatar
jose.amengual

that is fine

jose.amengual avatar
jose.amengual

then you need to update the source of cloudposse modules if one of them does not work

kumar k avatar
kumar k

Is this mandatory to run terraform 0.13upgrade?

kumar k avatar
kumar k

Even if I just run these, it upgrades: tfenv use latest:^0.13, terraform init, terraform apply

jose.amengual avatar
jose.amengual

in your modules, probably

jose.amengual avatar
jose.amengual

you need to change a few things

jose.amengual avatar
jose.amengual

the providers and such

jose.amengual avatar
jose.amengual

so it’s better to run it

kumar k avatar
kumar k

For upgrading 0.13.6 -> 0.14.5: tfenv use latest:^0.14, terraform init, terraform apply

Are these steps good?

jose.amengual avatar
jose.amengual

you still need to run the upgrade command I think

mikesew avatar
mikesew

…and if our modules are in their own git repo, once terraform init and terraform apply succeed, we’d have to commit in a feature branch, then merge to master, correct?

git checkout -b upgrade_to_terraform013
tfenv use latest:^0.13
terraform init
terraform apply
git commit -m 'upgrade module to terraform 0.13.6'
git tag -a "v1.5" -m "v1.5"  #(or whatever your next v is)
git push --follow-tags
kumar k avatar
kumar k

Looks like the upgrade step is not needed for the 0.13 -> 0.14 upgrade

RB avatar

i use this all over the place at my current job and it works well

2021-03-05

Takan avatar

hi guys, anyone knows how to create “trusted advisor” in terraform?

Gareth avatar

Good evening, I need to create an API Gateway => Lambda => DynamoDB setup for the first time. As far as I can tell, each of these items sits outside of a VPC, and while I can see there are VPC endpoints for Lambda and DynamoDB, do I actually need to use them? Is this one of those times where you can do it either way, and one is more secure than the other but has double the operating costs?

My Lambda only needs to talk to DynamoDB and nothing else. All requests come from the public Internet to the API. I’m used to my application always being on a private subnet; does that concept exist in this scenario? The tutorials I’ve watched on this from HashiCorp and AWS don’t really mention VPCs, though they do in an announcement about endpoint support, which is why I think I’m overthinking this. Thanks for your time.

loren avatar

Leverage the serverless paradigm for all it’s worth! Skip the vpc, it is gloriously freeing

1
Gareth avatar

Thanks @loren you’re a fountain of knowledge as usual.

loren avatar

Just pay attention to your resource policies, control them tightly and actively with terraform, so you don’t expose anything you don’t mean to expose

1
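One concrete example of a tightly-scoped resource policy (a sketch with hypothetical resource names): only a single API Gateway route may invoke the Lambda.

resource "aws_lambda_permission" "api_invoke" {
  statement_id  = "AllowSingleRouteInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.app.function_name # hypothetical function
  principal     = "apigateway.amazonaws.com"

  # Restrict invocation to this API's GET /items route only
  source_arn = "${aws_api_gateway_rest_api.api.execution_arn}/*/GET/items"
}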
tolstikov avatar
tolstikov

VPC endpoints are needed only when you want to connect to your services from VPC without using public Internet (via AWS PrivateLink): https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html

VPC endpoints - Amazon Virtual Private Cloud

Use a VPC endpoint to privately connect your VPC to other AWS services and endpoint services.

1
vicken avatar

Hi, I’m also having a similar “eks-cluster” module issue to the ones in these threads. Any leads as to what might be going on? https://sweetops.slack.com/archives/CB6GHNLG0/p1612683973314300

I also have this issue, is there any solution for this? thanks!

vicken avatar

I’m using the cloudposse/eks-cluster with the cloudposse/named-subnets module mostly configured like the examples:


vicken avatar
module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "0.20.4"

  context    = module.this.context
  cidr_block = "10.0.0.0/16"
}

locals {
  us_east_1a_public_cidr_block  = cidrsubnet(module.vpc.vpc_cidr_block, 2, 0)
  us_east_1a_private_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 2, 1)
  us_east_1b_public_cidr_block  = cidrsubnet(module.vpc.vpc_cidr_block, 2, 2)
  us_east_1b_private_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 2, 3)
}

module "us_east_1a_public_subnets" {
  source  = "cloudposse/named-subnets/aws"
  version = "0.9.2"

  context           = module.this.context
  subnet_names      = ["eks"]
  vpc_id            = module.vpc.vpc_id
  cidr_block        = local.us_east_1a_public_cidr_block
  type              = "public"
  igw_id            = module.vpc.igw_id
  availability_zone = "us-east-1a"
  attributes        = ["us-east-1a"]
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
  tags = {
    "kubernetes.io/cluster/cluster" : "shared"
    "kubernetes.io/role/elb" : "1"
  }
}

module "us_east_1a_private_subnets" {
  source  = "cloudposse/named-subnets/aws"
  version = "0.9.2"

  context           = module.this.context
  subnet_names      = ["eks"]
  vpc_id            = module.vpc.vpc_id
  cidr_block        = local.us_east_1a_private_cidr_block
  type              = "private"
  availability_zone = "us-east-1a"
  attributes        = ["us-east-1a"]
  ngw_id            = module.us_east_1a_public_subnets.ngw_id
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
  tags = {
    "kubernetes.io/cluster/cluster" : "shared"
    "kubernetes.io/role/internal-elb" : "1"
  }
}

module "us_east_1b_public_subnets" {
  source  = "cloudposse/named-subnets/aws"
  version = "0.9.2"

  context           = module.this.context
  subnet_names      = ["eks"]
  vpc_id            = module.vpc.vpc_id
  cidr_block        = local.us_east_1b_public_cidr_block
  type              = "public"
  igw_id            = module.vpc.igw_id
  availability_zone = "us-east-1b"
  attributes        = ["us-east-1b"]
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
  tags = {
    "kubernetes.io/cluster/cluster" : "shared"
    "kubernetes.io/role/elb" : "1"
  }
}

module "us_east_1b_private_subnets" {
  source  = "cloudposse/named-subnets/aws"
  version = "0.9.2"

  context           = module.this.context
  subnet_names      = ["eks"]
  vpc_id            = module.vpc.vpc_id
  cidr_block        = local.us_east_1b_private_cidr_block
  type              = "private"
  availability_zone = "us-east-1b"
  attributes        = ["us-east-1b"]
  ngw_id            = module.us_east_1b_public_subnets.ngw_id
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
  tags = {
    "kubernetes.io/cluster/cluster" : "shared"
    "kubernetes.io/role/internal-elb" : "1"
  }
}

module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "0.34.0"

  context = module.this.context
  region  = "us-east-1"
  vpc_id  = module.vpc.vpc_id
  subnet_ids = [
    module.us_east_1a_public_subnets.named_subnet_ids["eks"],
    module.us_east_1b_public_subnets.named_subnet_ids["eks"],
    module.us_east_1a_private_subnets.named_subnet_ids["eks"],
    module.us_east_1b_private_subnets.named_subnet_ids["eks"]
  ]
  kubernetes_version                = "1.18"
  oidc_provider_enabled             = true
  enabled_cluster_log_types         = ["api", "authenticator", "controllerManager", "scheduler"]
  cluster_log_retention_period      = 90
  cluster_encryption_config_enabled = true
  map_additional_aws_accounts       = [REDACTED]
}

# Ensure ordering of resource creation to eliminate the race conditions when applying the Kubernetes Auth ConfigMap.
# Do not create Node Group before the EKS cluster is created and the `aws-auth` Kubernetes ConfigMap is applied.
# Otherwise, EKS will create the ConfigMap first and add the managed node role ARNs to it,
# and the kubernetes provider will throw an error that the ConfigMap already exists (because it can't update the map, only create it).
# If we create the ConfigMap first (to add additional roles/users/accounts), EKS will just update it by adding the managed node role ARNs.
data "null_data_source" "wait_for_cluster_and_kubernetes_configmap" {
  inputs = {
    cluster_name             = module.eks_cluster.eks_cluster_id
    kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
  }
}

module "eks-node-group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.1"

  context = module.this.context
  subnet_ids = [
    module.us_east_1a_private_subnets.named_subnet_ids["eks"],
    module.us_east_1b_private_subnets.named_subnet_ids["eks"]
  ]
  cluster_name              = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
  desired_size              = 2
  min_size                  = 1
  max_size                  = 2
}
vicken avatar

After creation, terraform plan works fine.

When I change the existing us_east_1a_private_subnets/subnet_names and us_east_1b_private_subnets/subnet_names to be ["eks", "mysql"] and do terraform plan, I see:

Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused

Releasing state lock. This may take a few moments...

With the subnet_names change, the debug output contains a WARNING: Invalid provider configuration was supplied. Provider operations likely to fail: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

2021-03-05T10:10:43.078-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [WARN] Invalid provider configuration was supplied. Provider operations likely to fail: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable: timestamp=2021-03-05T10:10:43.078-0800
2021-03-05T10:10:43.079-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [DEBUG] Enabling HTTP requests/responses tracing: timestamp=2021-03-05T10:10:43.078-0800
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "local.enabled"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.apply_config_map_aws_auth"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.kubernetes_config_map_ignore_role_changes"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "local.map_worker_roles"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_roles"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_users"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.map_additional_aws_accounts"
2021/03/05 10:10:43 [DEBUG] ReferenceTransformer: "module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]" references: []
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Refreshing state... [id=kube-system/aws-auth]
2021-03-05T10:10:43.083-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [INFO] Checking config map aws-auth: timestamp=2021-03-05T10:10:43.083-0800
2021-03-05T10:10:43.083-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [DEBUG] Kubernetes API Request Details:
---[ REQUEST ]---------------------------------------
GET /api/v1/namespaces/kube-system/configmaps/aws-auth HTTP/1.1
Host: localhost
User-Agent: HashiCorp/1.0 Terraform/0.14.7
Accept: application/json, */*
Accept-Encoding: gzip


-----------------------------------------------------: timestamp=2021-03-05T10:10:43.083-0800
2021-03-05T10:10:43.084-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [DEBUG] Received error: &url.Error{Op:"Get", URL:"<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>", Err:(*net.OpError)(0xc000e67cc0)}: timestamp=2021-03-05T10:10:43.084-0800

Without the change, there is no WARNING.

vicken avatar
2021-03-05T10:07:59.545-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:07:59 [DEBUG] Enabling HTTP requests/responses tracing: timestamp=2021-03-05T10:07:59.545-0800
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "local.enabled"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.apply_config_map_aws_auth"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.kubernetes_config_map_ignore_role_changes"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "local.map_worker_roles"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_roles"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_users"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.map_additional_aws_accounts"
2021/03/05 10:07:59 [DEBUG] ReferenceTransformer: "module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]" references: []
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Refreshing state... [id=kube-system/aws-auth]
2021-03-05T10:07:59.548-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:07:59 [INFO] Checking config map aws-auth: timestamp=2021-03-05T10:07:59.548-0800
2021-03-05T10:07:59.548-0800 [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:07:59 [DEBUG] Kubernetes API Request Details:
---[ REQUEST ]---------------------------------------
GET /api/v1/namespaces/kube-system/configmaps/aws-auth HTTP/1.1
Host: [REDACTED]
User-Agent: HashiCorp/1.0 Terraform/0.14.7
Accept: application/json, */*
Authorization: Bearer [REDACTED]
Accept-Encoding: gzip

Versions:

Terraform v0.14.7
+ provider registry.terraform.io/hashicorp/aws v3.28.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.2
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/random v3.0.1
+ provider registry.terraform.io/hashicorp/template v2.2.0
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not seeing any solutions out there, but lots of similar references https://github.com/cloudposse/terraform-aws-eks-cluster/issues/104#issuecomment-792520725

Fail with I/O timeout due to bad configuration of the Kubernetes provider · Issue #104 · cloudposse/terraform-aws-eks-cluster

Describe the Bug Creating an EKS cluster fails due to bad configuration of the Kubernetes provider. This appears to be more of a problem with Terraform 0.14 than with Terraform 0.13. Error: Get "…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) is still actively working on this, but no silver bullet yet.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, this is a race condition

Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

between TF itself and the kubernetes provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

many people see a similar error, but in different cases (some on destroy, some on updating cluster params)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(if you just provision the cluster, all seems ok, only updating/destroying it causes those race condition errors)

Titouan avatar
Titouan

Hey, same issue here: applying a fresh eks cluster is all good, but when I want to update or destroy:

╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp 127.0.0.1:80: connect: connection refused
│ 
│ 
Titouan avatar
Titouan

I cleaned the state with terraform state rm module.eks.kubernetes_config_map.aws_auth_ignore_changes[0] and it worked

Matt Gowie avatar
Matt Gowie

Amazon RabbitMQ support just shipped in the AWS Provider — https://github.com/hashicorp/terraform-provider-aws/pull/16108#event-4416150216

resource/aws_mq_broker: Add RabbitMQ engine type by yardensachs · Pull Request #16108 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…

3

2021-03-08

Tiago Casinhas avatar
Tiago Casinhas

Any resource on how to add a shared LB to multiple Beanstalk environments? I don’t see such options in the example in the cloudposse terraform GitHub or the official provider’s docs

jose.amengual avatar
jose.amengual

if you do it by path or port you can do it

jose.amengual avatar
jose.amengual

but you can’t use the same path and port

jose.amengual avatar
jose.amengual

you could do the same port (443) with paths /app1, /app2, and / (the default app)

Tiago Casinhas avatar
Tiago Casinhas

Thank you

Tiago Casinhas avatar
Tiago Casinhas

I’m going to give it a try

Bart Coddens avatar
Bart Coddens

Hi all, I want to migrate existing infrastructure to a module-based configuration; what’s the best approach? Importing the existing configuration does not seem to be the best idea

Matt Gowie avatar
Matt Gowie

Can you expand on the context / your migration a bit Bart? Might be able to provide some guidance, but I’m not sure what you’re referring to.

Bart Coddens avatar
Bart Coddens

hey Matt

Bart Coddens avatar
Bart Coddens

like, I built IAM policies with terraform before

Bart Coddens avatar
Bart Coddens

now I used mostly the cloudposse modules, like this one for example:

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-iam-role

A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role

Bart Coddens avatar
Bart Coddens

the configuration does exist already

Bart Coddens avatar
Bart Coddens

another example, I have existing s3 buckets

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Bart Coddens avatar
Bart Coddens

so when I deploy the module, you get conflicts because the resource already exists

Bart Coddens avatar
Bart Coddens

so what I want to do is lay the module configuration over the existing resources

Matt Gowie avatar
Matt Gowie

Ah, so you created those resources via ClickOps already?

Bart Coddens avatar
Bart Coddens

yes, and by creating the resources in terraform myself

Bart Coddens avatar
Bart Coddens

now I want to base my config on one codebase, mostly built from modules

Matt Gowie avatar
Matt Gowie

If you created resources via ClickOps then you only have a few options:

  1. Delete the resources and let them be recreated by your Terraform code. This only works if the resources are not critical and are not in a production environment.
  2. Import the resources using terraform import
  3. Accept that those resources are managed outside of IaC, document them, and then work to never do ClickOps again so you don’t run into that issue again in the future.
this1
Bart Coddens avatar
Bart Coddens

yeah, but I inherited quite a legacy installed base

Matt Gowie avatar
Matt Gowie

You could look into terraformer as another option… but I haven’t used it myself so I’m not sure if that will work out for you or not.

GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

Alex Jurkiewicz avatar
Alex Jurkiewicz
Serialise origin bucket modifications by alexjurkiewicz · Pull Request #136 · cloudposse/terraform-aws-cloudfront-s3-cdn

You can't modify an S3 bucket's policy & public access block at the same time, AWS API will complain: OperationAborted: A conflicting conditional operation is currently in progress agai…

1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Anyone with experience in using tools (like parliament) in CI/CD to catch overly privileged IAM policies?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I’m writing a blog post about how to do this and am looking to review the different methods people have used out there.

Ofir Rabanian avatar
Ofir Rabanian

wouldn’t that force some sort of code inspection? I’m familiar with tools that’ll scan traffic in order to figure out the least privilege of an app, and you might be able to have that as part of a complex integration test.. or are you talking about something else?

2021-03-09

Martin Heller avatar
Martin Heller

Hi guys, anyone can give me a hand? When initially deploying cloudposse/alb/aws with cloudposse/ecs-alb-service-task/aws I am always getting:

The target group with targetGroupArn ... does not have an associated load balancer.

On the second run it works. I guess there is a depends_on missing in cloudposse/alb/aws or am I missing sth? Thx

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Load balancer takes time to become available after it gets created, so TF calls the API, which can’t find the LB in the “ready” state. At the time when we created the module, we could not find a workaround, except: 1) using a two-stage apply with -target; 2) separating the LB into a diff folder and applying it first, then all the rest.

RB avatar

Can you add a depends_on argument directly to the alb service task module that depends on the alb? I believe that’s been available since tf 0.13 and should work.
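
A minimal sketch of that suggestion (module names are hypothetical; module-level depends_on has been supported since Terraform 0.13):

module "service" {
  source = "cloudposse/ecs-alb-service-task/aws"
  # ... service/task inputs ...

  # Ensure the ALB, its listeners and target groups exist before the service.
  depends_on = [module.alb]
}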

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that prob would work. Can you test and open a PR? We’ll review promptly, thanks

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
02:42:22 PM

can anyone help me with an issue with the upstream RDS module

module "db" {
  source  = "terraform-aws-modules/rds/aws"
  version = "2.20.0"

I am trying to upgrade from 5.6 to 5.7 and getting the following error …

Error: Error Deleting DB Option Group: InvalidOptionGroupStateFault: The option group 'de-qa-env-01-20210119142009861600000003' cannot be deleted because it is in use.
	status code: 400, request id: e9a3c5b5-61fa-4648-bc95-183fba0fa32b

however the instance has been upgraded fine

Willie avatar

I’m having trouble mounting an EFS volume in an ECS Fargate container. The container fails to start with ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: mount.nfs4: Connection timed out : unsuccessful EFS utils command execution; code: 32

Terraform config in thread 

Willie avatar

The EFS resources:

resource "aws_efs_file_system" "ecs01" {
  tags = {
    Name = "ecs-efs-01"
  }
}

resource "aws_efs_mount_target" "main" {
  file_system_id  = aws_efs_file_system.ecs01.id
  subnet_id       = aws_subnet.private2.id
  security_groups = [aws_security_group.ecs_efs.id]
}

Networking:

resource "aws_security_group" "ecs_efs" {
  name   = "ecs-efs"
  vpc_id = aws_vpc.ecs-service-vpc2.id

  egress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group_rule" "ecs_efs_to_cluster" {
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ecs_efs.id
  source_security_group_id = aws_security_group.ecs_cluster.id
  type                     = "egress"
}

resource "aws_security_group" "ecs_cluster" {
  name   = "ecs-cluster"
  vpc_id = aws_vpc.ecs-service-vpc2.id

  egress {
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
  }

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_alb.id]
  }

  egress {
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_efs.id]
  }

  ingress {
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_efs.id]
  }

  lifecycle {
    create_before_destroy = true
  }
}

ECS task definition:

resource "aws_ecs_task_definition" "service_task" {
  family                = "service"
  execution_role_arn    = aws_iam_role.ecs_task_execution_role.arn
  container_definitions = <<TASK_DEFINITION
    [
        {
            "essential": true,
            "name": "service",
            "image": "${module.service_ecr.repository_url}:latest",
            "command": [
                "sh",
                "-c",
                "service serve -api-key $service_API_KEY -tls=false -server-url $service_FQDN -http-addr :80"
            ],
            "environment": [
                  {
                    "name": "SERVICE_FQDN",
                    "value": "https://${aws_route53_record.ecs01.fqdn}"
                  },
                  {
                    "name": "SERVICE_API_KEY",
                    "value": "foo"
                  }
                ],
            "logConfiguration": {
              "logDriver": "awslogs",
              "options": {
                "awslogs-group": "${aws_cloudwatch_log_group.ecs_service.name}",
                "awslogs-region": "${data.aws_region.current.name}",
                "awslogs-stream-prefix": "ecs"
              }
            },
            "mountPoints": [
              {
              "containerPath": "/var/db/service",
              "sourceVolume": "service-db"
              }
            ],
            "portMappings": [
                {
                    "containerPort": 80,
                    "protocol": "tcp",
                    "hostPort": 80
                },
                {
                    "containerPort": 2049,
                    "protocol": "tcp",
                    "hostPort": 2049
                }
            ]
        }
    ]
TASK_DEFINITION

  volume {
    name = "service-db"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.ecs01.id
      # root_directory = "/opt/db/service/01/"
      root_directory = "/"
      # transit_encryption      = "ENABLED"
      # transit_encryption_port = 2999
      # authorization_config {
      #   access_point_id = aws_efs_access_point.test.id
      #   iam             = "ENABLED"
      # }
    }
  }
  cpu                      = 256
  memory                   = 512
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
}
rei avatar

Have you set the platform version for the ecs service to 1.4.0?

rei avatar

# Set platform version to 1.4.0, otherwise mounting EFS volumes won't work with Fargate.
# Defaults to LATEST, which is set to 1.3.0, which doesn't allow EFS volumes.
platform_version = "1.4.0"

Willie avatar

yeah I have this in my service hah platform_version = "1.4.0" # 1.3.0 doesn't support EFS mounts

Willie avatar

I was missing an ingress rule in resource "aws_security_group" "ecs_efs" 
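
For reference, a sketch of the kind of rule that was missing, reusing the security group names from the config above (NFS traffic from the cluster has to be allowed *into* the EFS security group):

resource "aws_security_group_rule" "ecs_efs_from_cluster" {
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ecs_efs.id
  source_security_group_id = aws_security_group.ecs_cluster.id
}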

rei avatar

:yes

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know how to get RDS version upgrades working (e.g. 5.6 to 5.7) using the upstream RDS module?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i can’t seem to figure it out as it tries to delete the old option group which is referenced by the snapshot before performing the upgrade

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i have been unable to get TF to apply cleanly without deleting the existing snapshot prior to the upgrade which is just crazy as i have no way of rolling back if the upgrade borks out

mikesew avatar
mikesew

I’m in somewhat the same boat. I’ve never heard of TF trying to delete the snapshot (I mean, terraform doesn’t even control the snapshot as a resource). Do you mean delete the option group, as you stated earlier? Because that HAS happened to me too. My brainstorming would be that we’d have to pre-create a second parameter group and option group for the NEW-INCOMING version. Then the upgrade path would be to change engine_version, option_group_name and parameter_group_name all at the same time from the 5.6 equivalents to the 5.7 equivalents:

  engine_version       = "5.7"
  option_group_name    = module.my_57_option_group.name
  parameter_group_name = module.my_57_parameter_group.id

???

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Yeh the option group for 5.6 couldn’t be deleted as it is used by previous snapshots (which is fine)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I have no idea why it’s trying to be deleted in the first place; it’s too tight a coupling really

mikesew avatar
mikesew

I agree - terraform is really ugly when handling RDS, in my opinion. I thought I was going crazy until I happened on a youtube talk saying the same things. In theory, if you switch the RDS instance to a new option group, then it won’t try to delete it, is my thought..

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I am doing this with the upstream module. I am wondering if the issue is that I’m using the option group module and passing it to the RDS module, rather than directly to the instance submodule inside the RDS module

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

@mikesew i managed to hack this by deleting the option group and parameter group from the terraform remote state so that they do not get deleted; it’s a hack, but it works well
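
A hedged sketch of that state surgery; the exact resource addresses depend on the module version, so list them first rather than trusting these hypothetical paths:

terraform state list | grep -E 'option_group|parameter_group'
terraform state rm 'module.db.module.db_option_group.aws_db_option_group.this[0]'
terraform state rm 'module.db.module.db_parameter_group.aws_db_parameter_group.this[0]'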

mikesew avatar
mikesew

that’s great, but a non-optimal solution.. the thing is, I haven’t seen this side of terraform management (stateful managed databases like RDS or AzureSQL) mentioned nearly enough in the blogs. I think I’m honestly doing it wrong.

• Do you have 1 option group per database, or 1 common option group for a group of DB’s? (we had been doing a common option group but are seeing how inflexible it has become with the snapshots being tied to it)

• how about parameter groups? 1 per db, or a common one?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I’m creating a parameter, option and subnet group per database as all our databases are different.

melissa Jenner avatar
melissa Jenner

Module: cloudposse/elasticache-redis/aws. I got error below. Can someone help?

Error: Unsupported argument

on .terraform/modules/redis/main.tf line 92, in resource "aws_elasticache_replication_group" "default": 92: multi_az_enabled = var.multi_az_enabled

An argument named “multi_az_enabled” is not expected here.

module "redis" {
  source = "cloudposse/elasticache-redis/aws"

  availability_zones               = data.terraform_remote_state.vpc.outputs.azs
  vpc_id                           = data.terraform_remote_state.vpc.outputs.vpc_id
  enabled                          = var.enabled
  name                             = var.name
  tags                             = var.tags

  allowed_security_groups          = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  allowed_cidr_blocks              = ["20.10.0.0/16"]
  subnets                          = data.terraform_remote_state.vpc.outputs.elasticache_subnets
  cluster_size                     = var.redis_cluster_size #number_cache_clusters
  instance_type                    = var.redis_instance_type
  apply_immediately                = true
  automatic_failover_enabled       = true
  multi_az_enabled                 = true
  engine_version                   = var.redis_engine_version
  family                           = var.redis_family
  cluster_mode_enabled             = false
  replication_group_id             = var.replication_group_id
  at_rest_encryption_enabled       = var.at_rest_encryption_enabled
  transit_encryption_enabled       = var.transit_encryption_enabled
  cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled
  cluster_mode_num_node_groups     = var.cluster_mode_num_node_groups
  snapshot_retention_limit         = var.snapshot_retention_limit
  snapshot_window                  = var.snapshot_window
  dns_subdomain                    = var.dns_subdomain
  cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group

  parameter = [
    {
      name  = "notify-keyspace-events"
      value = "lK"
    }
  ]
}
RB avatar

have you tried removing multi_az_enabled ?

RB avatar

it does also look like a bug in the module

RB avatar

pretty weird because the arg multi_az_enabled exist on both the module level and on the aws_elasticache_replication_group resource
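
One thing worth ruling out (an assumption, since the thread never pins down the root cause): the module passes the variable straight through to the resource, so this error can come from an AWS provider pinned below the release that added multi_az_enabled to aws_elasticache_replication_group. A sketch of the check:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.26" # assumption: a version new enough to know multi_az_enabled
    }
  }
}

Then re-run terraform init -upgrade so the newer provider is actually downloaded.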

melissa Jenner avatar
melissa Jenner

Removed multi_az_enabled. #multi_az_enabled = true. But, still got error.

Error: Unsupported argument

on .terraform/modules/redis/main.tf line 92, in resource "aws_elasticache_replication_group" "default": 92: multi_az_enabled = var.multi_az_enabled

An argument named “multi_az_enabled” is not expected here.

[terragrunt] 2021/03/09 1301 Hit multiple errors:

RB avatar

let’s chat in this thread

RB avatar

can you share your full terraform ?

cat *.tf

and dump it here ?

melissa Jenner avatar
melissa Jenner

$ cat context.tf module “this” { source = “cloudposse/label/null” version = “0.24.1” # requires Terraform >= 0.13.0

enabled = var.enabled namespace = var.namespace environment = var.environment stage = var.stage name = var.name delimiter = var.delimiter attributes = var.attributes tags = var.tags additional_tag_map = var.additional_tag_map label_order = var.label_order regex_replace_chars = var.regex_replace_chars id_length_limit = var.id_length_limit label_key_case = var.label_key_case label_value_case = var.label_value_case

context = var.context }

variable “context” { type = any default = { enabled = true namespace = null environment = null stage = null name = null delimiter = null attributes = [] tags = {} additional_tag_map = {} regex_replace_chars = null label_order = [] id_length_limit = null label_key_case = null label_value_case = null } description = <<-EOT Single object for setting entire context at once. See description of individual variables for details. Leave string and numeric variables as null to use default value. Individual variable settings (non-null) override settings in context object, except for attributes, tags, and additional_tag_map, which are merged. EOT

validation { condition = lookup(var.context, “label_key_case”, null) == null ? true : contains([“lower”, “title”, “upper”], var.context[“label_key_case”]) error_message = “Allowed values: lower, title, upper.” }

validation { condition = lookup(var.context, “label_value_case”, null) == null ? true : contains([“lower”, “title”, “upper”, “none”], var.context[“label_value_case”]) error_message = “Allowed values: lower, title, upper, none.” } }

variable “enabled” { type = bool default = true description = “Set to false to prevent the module from creating any resources” }

variable “namespace” { type = string default = null description = “Namespace, which could be your organization name or abbreviation, e.g. ‘eg’ or ‘cp’” }

variable “environment” { type = string default = null description = “Environment, e.g. ‘uw2’, ‘us-west-2’, OR ‘prod’, ‘staging’, ‘dev’, ‘UAT’” }

variable “stage” { type = string default = null description = “Stage, e.g. ‘prod’, ‘staging’, ‘dev’, OR ‘source’, ‘build’, ‘test’, ‘deploy’, ‘release’” }

variable “name” { type = string default = “redis-blue-green” description = “Name for the cache subnet group. Elasticache converts this name to lowercase.” }

variable “delimiter” { type = string default = null description = «-EOT Delimiter to be used between namespace, environment, stage, name and attributes. Defaults to - (hyphen). Set to "" to use no delimiter at all. EOT }

variable “attributes” { type = list(string) default = [] description = “Additional attributes (e.g. 1)” }

variable “tags” { type = map(string) default = { Name = “redis-blue-green” }

description = “Additional tags (e.g. map('BusinessUnit','XYZ')” }

variable “additional_tag_map” { type = map(string) default = {} description = “Additional tags for appending to tags_as_list_of_maps. Not added to tags.” }

variable “label_order” { type = list(string) default = null description = «-EOT The naming order of the id output and Name tag. Defaults to [“namespace”, “environment”, “stage”, “name”, “attributes”]. You can omit any of the 5 elements, but at least one must be present. EOT }

variable “regex_replace_chars” { type = string default = null description = «-EOT Regex to replace chars with empty string in namespace, environment, stage and name. If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits. EOT }

variable “id_length_limit” { type = number default = null description = «-EOT Limit id to this many characters (minimum 6). Set to 0 for unlimited length. Set to null for default, which is 0. Does not affect id_full. EOT validation { condition = var.id_length_limit == null ? true : var.id_length_limit >= 6 || var.id_length_limit == 0 error_message = “The id_length_limit must be >= 6 if supplied (not null), or 0 for unlimited length.” } }

variable “label_key_case” { type = string default = null description = «-EOT The letter case of label keys (tag names) (i.e. name, namespace, environment, stage, attributes) to use in tags. Possible values: lower, title, upper. Default value: title. EOT

validation { condition = var.label_key_case == null ? true : contains([“lower”, “title”, “upper”], var.label_key_case) error_message = “Allowed values: lower, title, upper.” } }

variable “label_value_case” { type = string default = null description = «-EOT The letter case of output label values (also used in tags and id). Possible values: lower, title, upper and none (no transformation). Default value: lower. EOT

validation { condition = var.label_value_case == null ? true : contains([“lower”, “title”, “upper”, “none”], var.label_value_case) error_message = “Allowed values: lower, title, upper, none.” } }

melissa Jenner avatar
melissa Jenner

Compare to terraform-aws-elasticache-redis/examples/complete/context.tf. Changes I made:

62c84
< default = true
---
> default = null
86,87c108,109
< default = "redis-blue-green"
< description = "Name for the cache subnet group. Elasticache converts this name to lowercase."
---
> default = null
> description = "Solution name, e.g. 'app' or 'jenkins'"
107,110c129
< default = {
<   Name = "redis-blue-green"
< }
<
---
> default = {}

melissa Jenner avatar
melissa Jenner

I did not make much changes in context.tf.

RB avatar

could you use triple backticks to format your code ? it makes it easier to read

RB avatar

also could you provide a minimal viable reproducible example ?

melissa Jenner avatar
melissa Jenner
$ cat context.tf 
module "this" {
  source  = "cloudposse/label/null"
  version = "0.24.1" # requires Terraform >= 0.13.0

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit
  label_key_case      = var.label_key_case
  label_value_case    = var.label_value_case

  context = var.context
}

variable "context" {
  type = any
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
    label_key_case      = null
    label_value_case    = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT

  validation {
    condition     = lookup(var.context, "label_key_case", null) == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }

  validation {
    condition     = lookup(var.context, "label_value_case", null) == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

variable "enabled" {
  type        = bool
  default     = true
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = null
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = "redis-blue-green"
  description = "Name for the cache subnet group. Elasticache converts this name to lowercase."
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {
    Name = "redis-blue-green"
  }

  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters (minimum 6).
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
  validation {
    condition     = var.id_length_limit == null ? true : var.id_length_limit >= 6 || var.id_length_limit == 0
    error_message = "The id_length_limit must be >= 6 if supplied (not null), or 0 for unlimited length."
  }
}

variable "label_key_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
    Possible values: `lower`, `title`, `upper`.
    Default value: `title`.
  EOT

  validation {
    condition     = var.label_key_case == null ? true : contains(["lower", "title", "upper"], var.label_key_case)
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }
}

variable "label_value_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of output label values (also used in `tags` and `id`).
    Possible values: `lower`, `title`, `upper` and `none` (no transformation).
    Default value: `lower`.
  EOT

  validation {
    condition     = var.label_value_case == null ? true : contains(["lower", "title", "upper", "none"], var.label_value_case)
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}
1
RB avatar

could you provide a minimal example ?

RB avatar

minimal as in the minimum required to reproduce the same error message

melissa Jenner avatar
melissa Jenner
$ cat variables.tf 
variable "region" {
  default     =  "us-west-2"
}

variable "redis_cluster_size" {
  type        = number
  description = "Number of nodes in cluster"
  default     =  2
}

variable "redis_instance_type" {
  type        = string
  description = "Elastic cache instance type"
  default     = "cache.t2.small"
}

variable "redis_family" {
  type        = string
  description = "Redis family"
  default     = "redis5.0"
}

variable "redis_engine_version" {
  type        = string
  description = "Redis engine version"
  default     = "5.0.6"
}

variable "at_rest_encryption_enabled" {
  type        = bool
  description = "Enable encryption at rest"
  default     = false
}

variable "transit_encryption_enabled" {
  type        = bool
  description = "Enable TLS"
  default     = false
}

variable "cloudwatch_metric_alarms_enabled" {
  type        = bool
  description = "Boolean flag to enable/disable CloudWatch metrics alarms"
  default     = true
}

variable "replication_group_id" {
  type        = string
  description = "The replication group identifier. This parameter is stored as a lowercase string."
  default     = "redis-blue-green"
}

#variable "replication_group_description" {
#  type        = string
#  description = "A user-created description for the replication group."
#  default     = "redis-cluster-blue-green"
#}

variable "cluster_mode_num_node_groups" {
  type        = number
  description = "Number of node groups (shards) for this Redis replication group"
  default     = 0
}

variable "cluster_mode_replicas_per_node_group" {
  type        = number
  description = "Number of replica nodes in each node group. Valid values are 0 to 5."
  default     = 3
}

variable "automatic_failover_enabled" {
  type        = bool
  default     = true
  description = "Specifies whether a read-only replica will be automatically promoted to read/write primary if the existing primary fails."
}

variable "multi_az_enabled" {
  type        = bool
  default     = true
  description = "Multi AZ (Automatic Failover must also be enabled.)"
}

variable "snapshot_retention_limit" {
  type        = number
  description = "The number of days for which ElastiCache will retain automatic cache cluster snapshots before deleting them."
  default     = 1
}

variable "snapshot_window" {
  type        = string
  description = "The daily time range (in UTC) during which ElastiCache will begin taking a daily snapshot of your cache cluster."
  default     = "06:30-07:30"
}

variable "apply_immediately" {
  type        = bool
  default     = true
  description = "Apply changes immediately"
}

variable "dns_subdomain" {
  type        = string
  default     = "redis-blue-green"
  description = "The subdomain to use for the CNAME record. If not provided then the CNAME record will use var.name."
}
RB avatar

oof… you’re doing this one at a time, one file per…

RB avatar

if you can create a minimal reproducible example, i’ll be able to help

RB avatar

but at this point, this is a lot of code to trudge through

RB avatar

the tests pass, for now. i’d point you to the current tf code in the example..

RB avatar
cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

RB avatar

@melissa Jenner ^

melissa Jenner avatar
melissa Jenner
$ cat main.tf 
module "redis" {
  source = "cloudposse/elasticache-redis/aws"

  availability_zones               = data.terraform_remote_state.vpc.outputs.azs
  vpc_id                           = data.terraform_remote_state.vpc.outputs.vpc_id
  enabled                          = var.enabled
  name                             = var.name
  tags                             = var.tags
  
  allowed_security_groups          = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  allowed_cidr_blocks              = ["20.10.0.0/16", "20.10.51.0/24", "20.10.52.0/24"]
  subnets                          = data.terraform_remote_state.vpc.outputs.elasticache_subnets
  cluster_size                     = var.redis_cluster_size #number_cache_clusters
  instance_type                    = var.redis_instance_type
  apply_immediately                = true
  automatic_failover_enabled       = true
  #multi_az_enabled                 = true
  engine_version                   = var.redis_engine_version
  family                           = var.redis_family
  cluster_mode_enabled             = false
  replication_group_id             = var.replication_group_id
  #replication_group_description    = var.replication_group_description

  #at-rest encryption is to increase data security by encrypting on-disk data.
  at_rest_encryption_enabled       = var.at_rest_encryption_enabled

  #in-transit encryption protects data when it is moving from one location to another.
  transit_encryption_enabled       = var.transit_encryption_enabled

  cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled

  cluster_mode_num_node_groups     = var.cluster_mode_num_node_groups
  snapshot_retention_limit         = var.snapshot_retention_limit
  snapshot_window                  = var.snapshot_window
  dns_subdomain                    = var.dns_subdomain
  cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group

  parameter = [
    {
      name  = "notify-keyspace-events"
      value = "lK"
    }
  ]
}
MattyB avatar

@melissa Jenner - This is a bug due to the overall configuration of what’s being passed into the CloudPosse module. If you’d like to do some pair programming I can spend some time helping you out since I just went through and configured this module myself a couple of weeks ago.

RB avatar

if you 2 could figure out what the issue is, perhaps a PR can be submitted to make the module easier to use

2
melissa Jenner avatar
melissa Jenner

@MattyB As of now, do you have ideas of how to fix it?

melissa Jenner avatar
melissa Jenner

@MattyB You said, “I just went through and configured this module myself a couple of weeks ago.”. Did you actually fix it?

MattyB avatar

@melissa Jenner Not without seeing more of the variables that are being passed in. There are a few gotchas depending on how you set it up. Clustered mode, and other settings.

melissa Jenner avatar
melissa Jenner

@MattyB I posted context.tf and variables.tf. Do you see anything?

RB avatar

good luck! let us know when you pr

MattyB avatar

Without additional context (missing variables being passed into here) I’m unable to help you.

melissa Jenner avatar
melissa Jenner

Can you share your code?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t think multi AZ is valid for Redis. It only applies to Memcached, right?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Redis instead has the concept of “cluster mode”

MattyB avatar

Unfortunately not. I can tell you we’re running in clustered mode, and that to use the module in any fashion you’ll need to thoroughly understand the CloudPosse implementation and do some reading here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_replication_group. For example:

number_cache_clusters - (Optional) The number of cache clusters (primary and replicas) this replication group will have. If Multi-AZ is enabled, the value of this parameter must be at least 2. Updates will occur before other modifications. One of number_cache_clusters or cluster_mode is required.
melissa Jenner avatar
melissa Jenner

It is valid. I can manually configure it in AWS console.

1
melissa Jenner avatar
melissa Jenner

And, I do need multi_az_enabled. Regardless, I need to be able to provision redis. By removing multi_az_enabled, I am still not able to provision redis.

2021-03-10

Hank avatar

Hi all, I am looking at terraform-aws-eks modules and have a question: what’s the difference between the cloudposse and AWS ones? What’s the purpose of writing our own EKS modules rather than using the open source EKS modules?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

hey @Hank i would personally swerve the AWS one, it’s trying to be everything for everyone and in my personal opinion needs a major refactor

this1
Hank avatar

Thanks Steve.

Hank avatar

What I’m considering is how we handle upgrades or new feature integration from AWS. We need to add it into our own EKS modules after a new feature comes out, right?

Matt Gowie avatar
Matt Gowie

@Hank One important thing to note: The terraform-aws-modules organization is not built by AWS. It’s built by Anton Babenko who is an AWS Hero I believe, but that does not mean those modules are actively maintained by AWS.

The large majority of Cloud Posse modules are kept up to date with best practices and new features. Particularly a module surrounding EKS since Cloud Posse and the community here use them very heavily.

Hank avatar

Thank u very much

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

at present i manage my own modules for EKS that work with bottlerocket

Gene Fontanilla avatar
Gene Fontanilla

Can anyone recommend a managed node group module for EKS?

Matt Gowie avatar
Matt Gowie
cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also have managed Spot.io node groups

Gene Fontanilla avatar
Gene Fontanilla

@Erik Osterman (Cloud Posse), I’m interested in how this works with spot.io

Release notes from terraform avatar
Release notes from terraform
05:44:12 PM

v0.14.8 Version 0.14.8

MattyB avatar

Awe shoot, someone forgot release notes.

loren avatar

nah, their release automation always does this… creates release, then follows up a bit later to post the description and any artifacts. the rss integration isn’t smart enough to handle that

MattyB avatar

oh nice, i assumed since it said release notes that it was going to be the release notes haha

Release notes from terraform avatar
Release notes from terraform
06:24:16 PM

v0.14.8 BUG FIXES: config: Update HCL package to fix panics when indexing using sensitive values (#28034) core: Fix error when using sensitive…

config: update hcl and cty by jbardin · Pull Request #28034 · hashicorp/terraform

This fixes some panics with marked values, but also includes Unicode 13 updates in both the hcl and cty packages. The CHANGELOG update for this will mention the unicode changes as well as the bug f…

msharma24 avatar
msharma24

Hi - how do you run a target apply on a resource when using Terraform Cloud?

tim.davis.instinct avatar
tim.davis.instinct

I’m not exactly sure this is natively possible with TFC as of now. It wasn’t possible before, and I don’t see any documents updating the support for it.

I know we support this directly using a variable in env0 that passes the target flag to the apply:

https://docs.env0.com/docs/additional-controls#terraform-partial-apply

Disclaimer: I’m the DevOps Advocate for env0.

Additional Controls

Using the environment variable ENV0_TERRAFORM_TARGET, you can specify specific resources that will be targeted for apply. The value of the variable will be passed to Terraform -target flag.Read more here.Using the environment variable ENV0_TF_VERSION, you can specify the Terraform version you would…

tim.davis.instinct avatar
tim.davis.instinct

Hey, sorry. We did some digging, and you can do this with TFC now. Using CLI runs, you can use resource targeting. It is still not available in the UI.
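
With a CLI-driven TFC workspace the flags pass through just like a local run; a minimal sketch (the resource address is hypothetical):

terraform apply -target='module.app.aws_ecs_service.this'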

msharma24 avatar
msharma24

Thanks for your response. I just got introduced to a customer who has ruined their TF state with weird cyclic dependencies and deleted the resources manually. TF target and state remove are the only saviors

tim.davis.instinct avatar
tim.davis.instinct

Oof! Totally understandable. Hopefully you can get it all cleaned up.

msharma24 avatar
msharma24

I managed to fix the bad terraform state issue; what I could have fixed in an hour with the Terraform CLI took me 8 hours with Terraform Cloud

Takan avatar

hi all, how can we revert to a previous state?

jose.amengual avatar
jose.amengual

there is no such thing as revert in TF

jose.amengual avatar
jose.amengual

but if you saved the previous plan you might be able to revert

jose.amengual avatar
jose.amengual

if you screwed up the state but have not applied, you could maybe restore to an old version of the state

Takan avatar

do you know any tools to export existing AWS resources to terraform style?

jose.amengual avatar
jose.amengual

sure

jose.amengual avatar
jose.amengual
GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

jose.amengual avatar
jose.amengual

the best there is

Takan avatar

yes, that one is good as far as I know, but for example: how do you generate a tfstate file from the existing AWS resources with it?

jose.amengual avatar
jose.amengual

there is plenty of examples in the doc

jose.amengual avatar
jose.amengual

you need to do it per product manually

jose.amengual avatar
jose.amengual

there is no such thing as “scan it all and give me a state and tf files”

jose.amengual avatar
jose.amengual

although you can use a for loop and go through all the products supported by terraformer
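
A sketch of what that per-product invocation looks like (the resource list and region are placeholders; check the terraformer docs for the exact flags):

for r in vpc subnet sg; do
  terraformer import aws --resources=$r --regions=us-east-1
done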

2021-03-11

Bart Coddens avatar
Bart Coddens

Hi all, the documentation is a bit unclear on this module:

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-sns-topic

Terraform Module to Provide an Amazon Simple Notification Service (SNS) - cloudposse/terraform-aws-sns-topic

Bart Coddens avatar
Bart Coddens

it says: subscribers:

(email is an option but is unsupported, see below).
Bart Coddens avatar
Bart Coddens

but then no extra info, does this refer to:

    # The endpoint to send data to, the contents will vary with the protocol. (see below for more information)
    endpoint_auto_confirms = bool
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there an easy way to obtain the difference in hours between two dates?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want to provide an expiry date to a cert module in the format yyyy-mm-dd

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

and then work out the validity in hours
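
Terraform 0.14 has no built-in date-difference function, so one workaround (a sketch only, assuming GNU date is available on the machine running terraform; the names are hypothetical) is an external data source:

data "external" "cert_validity" {
  program = ["bash", "-c", <<-EOT
    expiry=$(date -d "${var.expiry_date}" +%s)
    now=$(date +%s)
    echo "{\"hours\": \"$(( (expiry - now) / 3600 ))\"}"
  EOT
  ]
}

# e.g. validity_period_hours = data.external.cert_validity.result.hours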

Bart Coddens avatar
Bart Coddens

hmmm terraform can confuse me with this:

Bart Coddens avatar
Bart Coddens
data "aws_sns_topic" "cloudposse-hosting" {
  name = "cloudposse-hosting-isawesome"
}

alarm_actions = ["${data.aws_sns_topic.cloudposse-hosting.arn}"]
loren avatar

remove the interpolation syntax:

alarm_actions = [data.aws_sns_topic.cloudposse-hosting.arn]
Bart Coddens avatar
Bart Coddens

ha ok, that worked

Bart Coddens avatar
Bart Coddens

thx

Bart Coddens avatar
Bart Coddens

then it complains with:

Bart Coddens avatar
Bart Coddens

Template interpolation syntax

Bart Coddens avatar
Bart Coddens

what’s the best way to format this ?

Ron Basumallik avatar
Ron Basumallik

Hi I’m trying to use cloudposse’s elastic beanstalk module and getting this error

Error: Invalid count argument

  on .terraform/modules/elastic_beanstalk.elastic_beanstalk_environment.dns_hostname/main.tf line 2, in resource "aws_route53_record" "default":
   2:   count   = module.this.enabled ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

Anyone seen this before?

Ron Basumallik avatar
Ron Basumallik

Ok looks like I won’t be using cloudposse

David Napier avatar
David Napier

Can someone point me in the right direction regarding the best way to use context?

jose.amengual avatar
jose.amengual

in the context.tf file itself there is comments

David Napier avatar
David Napier

Thank you very much (both). I’ll watch the video now. I just need a demonstration of its use.

1
jose.amengual avatar
jose.amengual

it takes a bit to get around it and understand it well, but when it clicks you are like “I should have started using this last week”
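
A minimal sketch of the pattern being discussed, reusing the cloudposse/label version posted earlier in this channel; any context-enabled Cloud Posse module accepts the context the same way:

module "label" {
  source    = "cloudposse/label/null"
  version   = "0.24.1"
  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

module "bucket" {
  source = "cloudposse/s3-bucket/aws"
  # Pass the whole naming/tagging context down in one shot.
  context = module.label.context
}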

David Napier avatar
David Napier

Got it sorted, looking forward to cleaning up my code with this, thanks guys!

1
melissa Jenner avatar
melissa Jenner

I have two VPCs. One is the blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e), the other is the shared VPC (vpc_id = vpc-076a4c26ec2217f9d). VPC peering connects these two VPCs. I provision MariaDB in the shared VPC. But, I got the errors below. Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d

Below is the code. Can someone help?

module "master" {
  source = "terraform-aws-modules/rds/aws"
  version = "2.20.0"
  identifier = var.master_identifier
  engine            = var.engine
  engine_version    = var.engine_version
  instance_class    = var.instance_class
  allocated_storage = var.allocated_storage
  storage_type      = var.storage_type
  storage_encrypted = var.storage_encrypted
  name     = var.mariadb_name
  username = var.mariadb_username
  password = var.mariadb_password
  port     = var.mariadb_port
  vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
                            data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
  maintenance_window = var.maintenance_window_master
  backup_window      = var.backup_window_master
  multi_az = true
  tags = {
    Owner       = "MariaDB"
    Environment = "blue-green"
  }
  enabled_cloudwatch_logs_exports = ["audit", "general"]
  subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
  create_db_option_group = true
  apply_immediately = true
  family = var.family
  major_engine_version = var.major_engine_version
  final_snapshot_identifier = var.final_snapshot_identifier
  deletion_protection = false
  parameters = [
    {
      name  = "character_set_client"
      value = "utf8"
    },
    {
      name  = "character_set_server"
      value = "utf8"
    }
  ]
  options = [
    {
      option_name = "MARIADB_AUDIT_PLUGIN"
      option_settings = [
        {
          name  = "SERVER_AUDIT_EVENTS"
          value = "CONNECT"
        },
        {
          name  = "SERVER_AUDIT_FILE_ROTATIONS"
          value = "7"
        },
      ]
    },
  ]
}

module "replica" {
  source = "terraform-aws-modules/rds/aws"
  version = "2.20.0"
  identifier = var.replica_identifier
  replicate_source_db = module.master.this_db_instance_id
  engine            = var.engine
  engine_version    = var.engine_version
  instance_class    = var.instance_class
  allocated_storage = var.allocated_storage
  username = ""
  password = ""
  port     = var.mariadb_port
  vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
                            data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
                            data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]

  maintenance_window = var.maintenance_window_replica
  backup_window      = var.backup_window_replica
  multi_az = false
  backup_retention_period = 0
  create_db_subnet_group = false
  create_db_option_group    = false
  create_db_parameter_group = false
  major_engine_version = var.major_engine_version
}
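
The error reads literally: every group in vpc_security_group_ids must live in the DB’s VPC (vpc-076a…). A hedged sketch of the usual workaround, allowing the peered VPC by CIDR from a group in the shared VPC rather than attaching blue-VPC groups (names and CIDR are hypothetical):

resource "aws_security_group" "mariadb" {
  name   = "mariadb-shared"
  vpc_id = data.terraform_remote_state.vpc-shared.outputs.vpc_id

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    # CIDR of the peered blue VPC; adjust to the real range.
    cidr_blocks = ["10.0.0.0/16"]
  }
}

Then pass only shared-VPC group IDs in vpc_security_group_ids.
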
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is anyone using Terraform to manage and provision ECS Fargate + AWS CodeDeploy and doing database migrations? We’re using GitHub Actions as our CI platform to build the docker image for a Rails app, then using a continuous delivery platform to deploy the terraform (spacelift). I’m curious how to run the rake migrations?

Coming from an EKS world, we’d just run them as a Job , but with ECS, there’s no such straight equivalent (scheduled tasks don’t count).

Ideas considered:

• Using the new AWS Lambda container functionality ( but still not sure how we’d trigger it relative to the ECS tasks)

• Using a CodePipeline, but also not sure how we’d trigger it in our current continuous delivery model, since right now, we’re calling terraform to deploy the ECS task and update the container definition. I don’t believe there’s any way to deploy a code pipeline and have it trigger automatically.

• Using Step Functions (haven’t really used them before). Just throwing out buzz words.

• Using ECS task on a cron schedule (but we have no way to pick the appropriate schedule)

Mohammed Yahya avatar
Mohammed Yahya

I would split the IaC codebase pipeline and the app codebase pipeline: deploy Terraform using Spacelift, then deploy app code using GitHub CI by building the image and uploading it to ECR; ECR should trigger CodePipeline, which uses CodeDeploy for deployments (like blue/green)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, that would be a conventional approach. What I don’t like about it is that we have multiple systems modifying the ECS task. We wouldn’t be able to deploy ECS task changes (e.g. new SSM parameters) alongside a new image deployment.

We’ve managed to recreate an argocd style deployment strategy for infra using spacelift. So really just want to solve the specific problem of running migrations.

Mohammed Yahya avatar
Mohammed Yahya

Yes, true, you need to ignore the ECS task definition and LB values; lots of pain

joshmyers avatar
joshmyers

Ideally I prefer not coupling deployments and migrations

joshmyers avatar
joshmyers

They need to be backwards compatible anyways

joshmyers avatar
joshmyers

Or just have the migration task def sitting around and trigger a run-task with the task def for migration?

joshmyers avatar
joshmyers

Not sure exactly which part of the migration is causing issues? Where to run the migration from? Coupling it with CI/CD ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


trigger a run-task
would be perfect if it was supported natively in terraform, but requires local exec
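
A sketch of that local-exec variant (assumes the AWS CLI is on the runner; all names are hypothetical):

resource "null_resource" "db_migration" {
  # Re-run whenever the migration task definition changes.
  triggers = {
    task_definition = aws_ecs_task_definition.migrate.arn
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ecs run-task \
        --cluster ${aws_ecs_cluster.main.name} \
        --launch-type FARGATE \
        --task-definition ${aws_ecs_task_definition.migrate.arn} \
        --network-configuration 'awsvpcConfiguration={subnets=[${join(",", var.private_subnet_ids)}],assignPublicIp=DISABLED}'
    EOT
  }
}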

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We need to run the migration as part of the CD

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the CD is calling terraform when files in a git repo change (~like argocd). By design, in this part of the CD, there’s no workflow associated with calling terraform, so there’s no way to run other steps like in a CI pipeline. Terraform’s job is simply to synchronize the git repo with AWS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Ideally I prefer not coupling deployments and migrations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

agree

loren avatar


trigger a run-task
would be perfect if it was supported natively in terraform, but requires local exec
Can you maybe post to an s3 object or ddb item with terraform, and have that event trigger a lambda that invokes the run-task?

Mohammed Yahya avatar
Mohammed Yahya

terraform-provider-aws v3.32.0 is out now with new resource ACM Private CA, and more support for AWS managed RabbitMQ https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.32.0

Release v3.32.0 · hashicorp/terraform-provider-aws

FEATURES: New Data Source: aws_acmpca_certificate (#10213) New Resource: aws_acmpca_certificate (#10213) New Resource: aws_acmpca_certificate_authority_certificate (#17850) ENHANCEMENTS: resourc…

2021-03-12

msharma24 avatar
msharma24

Anyone using Terraform Cloud who can share their Git workflow and repo structure? I RTFM’d for TF Cloud and GitHub, and what Hashi suggests is to either use a persistent branch for each stage (a dev stage branch and a prod stage branch), or else use folders for each env stage, which translate to TF Cloud workspaces, and apply everything when merged to the `main` branch. Those workflows don’t appear DRY or easy to change IaC with, to me.

Is there a better workflow anyone is using?

Mohammed Yahya avatar
Mohammed Yahya

one of the common questions asked here. There is no single best approach for this, but I will list them:

• repo level: each env in a standalone repo

• branch level: each env on a standalone branch

• folder level: (I prefer this) each env in a standalone folder

My selection would be repo level for enterprise clients, branch or folder level for startups and small-to-medium clients. Also, you need to split your terraform codebase into stacks:

• network stack

• data stack

• app stack

this will help you keep your state file small and fast, with a small blast radius. So IMHO my current approach for a TF cloud project:

• tf-xyz repo

• env/dev/us-east-1 folder contains the TF codebase

• create a workspace on TF cloud called dev-us-east-1 that points to that folder

I hope this gives some light

mikesew avatar
mikesew

Thank you from another observer. I (a DBA) have been advocating for a separate data layer, but my current org seems to only split between core and application, grouping the database alongside the app. We also have the env inside a folder. I like your use of the region as a sub-dir underneath. Do you only have .tfvars inside ./env/dev/us-east-1, or are there actual full-on main.tf files etc.?

Mohammed Yahya avatar
Mohammed Yahya

here is the all-in-one approach

.
├── README.md
├── Taskfile.yml
└── env
    └── dev
        └── eu-central-1
            ├── README.md
            ├── data.tf
            ├── dev.auto.tfvars
            ├── doc.tf
            ├── eks.tf
            ├── elasticache.tf
            ├── helm.tf
            ├── kms.tf
            ├── kubernetes.tf
            ├── locals.tf
            ├── mq.tf
            ├── outputs.tf
            ├── provider.tf
            ├── random.tf
            ├── rds.tf
            ├── sg.tf
            ├── ssm.tf
            ├── terraform.tf
            ├── variables.tf
            └── vpc.tf

3 directories, 22 files
Mohammed Yahya avatar
Mohammed Yahya

here are the separated stacks

├── Makefile
├── README.md
├── Taskfile.yml
├── app
│   ├── backend.tf
│   ├── cfn
│   │   └── mq.yaml
│   ├── codedeploy-ecs-ingest.tf
│   ├── codedeploy-ecs.tf
│   ├── codedeploy-iam.tf
│   ├── cross-account-access.tf
│   ├── data.tf
│   ├── ecs-alb.tf
│   ├── ecs-cloudwatch.tf
│   ├── ecs-cluster.tf
│   ├── ecs-container-definition-ingest.tf
│   ├── ecs-container-definition.tf
│   ├── ecs-iam.tf
│   ├── ecs-ingest-nlb.tf
│   ├── ecs-route53.tf
│   ├── ecs-service-ingest.tf
│   ├── ecs-service.tf
│   ├── ecs-sg.tf
│   ├── ecs-variables.tf
│   ├── locals.tf
│   ├── mq-nlb.tf
│   ├── mq-route53.tf
│   ├── mq-sg.tf
│   ├── mq.tf
│   ├── outputs.tf
│   ├── provider.tf
│   ├── random.tf
│   ├── secretsmanager.tf
│   ├── terraform.tfvars
│   └── vars.tf
├── data
│   ├── backend.tf
│   ├── data.tf
│   ├── provider.tf
│   ├── random.tf
│   ├── rds.tf
│   ├── reoute53.tf
│   ├── secret.tf
│   ├── sg.tf
│   ├── terraform.tfvars
│   └── vars.tf
└── network
    ├── acm.tf
    ├── backend.tf
    ├── cvpn-cloudwatch.tf
    ├── cvpn-endpoint.tf
    ├── cvpn-sg.tf
    ├── cvpn-tls-users.tf
    ├── cvpn-tls.tf
    ├── outputs.tf
    ├── plan.out
    ├── provider.tf
    ├── route53-resolver.tf
    ├── route53.tf
    ├── terraform.tfvars
    ├── vars.tf
    └── vpc.tf
TED Vortex avatar
TED Vortex

i rotate multiple configurations through the backend config

msharma24 avatar
msharma24

Thank you for the response. Did you mean you reset the backend between local and TF Cloud to rotate the config?

TED Vortex avatar
TED Vortex

that’s a different topic; I have a one-time setup for the remote backend states

TED Vortex avatar
TED Vortex

then i have a main(global) config and a per env one

TED Vortex avatar
TED Vortex

so I make conditionals because my envs are not very different

TED Vortex avatar
TED Vortex
ozbillwang/terraform-best-practices

Terraform Best Practices for AWS users. Contribute to ozbillwang/terraform-best-practices development by creating an account on GitHub.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Which item in that list? Not quite getting what you mean sorry. But it sounds interesting

TED Vortex avatar
TED Vortex

backend config section
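
A sketch of what rotating configurations through the backend config can look like (the file paths are hypothetical; -backend-config and -reconfigure are standard terraform init flags):

terraform init -reconfigure -backend-config=backend/dev.tfvars
terraform init -reconfigure -backend-config=backend/prod.tfvars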

Bart Coddens avatar
Bart Coddens

what do you prefer, remote states or data sources ?

Bart Coddens avatar
Bart Coddens

I tend to go for data sources when they are available

RB avatar

data sources

RB avatar

technically you can use a data source for a terraform remote state too
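
The data source being referred to, as a minimal sketch (bucket and key are hypothetical):

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-tfstate-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# e.g. data.terraform_remote_state.vpc.outputs.vpc_id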

Bart Coddens avatar
Bart Coddens

in a cloudwatch alarm, thinking howto best implement this:

Bart Coddens avatar
Bart Coddens

dimensions = { InstanceId = "i-cloudposseisawesome" }

Bart Coddens avatar
Bart Coddens

a remote state pull is the best option here I guess because the data source is a bit messy

Zach avatar

messy in what way

joshmyers avatar
joshmyers

remote state pull has some less than ideal side effects e.g. state versioning not being backwards compat

RB avatar

what does the data source of the instance id look like

Bart Coddens avatar
Bart Coddens

ah, now they don’t seem to discourage this; in the past they did

RB avatar

you’re not going to show us, are you ?

RB avatar

what a tease

Bart Coddens avatar
Bart Coddens

hehe, cannot find the reference, I used it in the past but it was a bit messy. I should retry
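
A sketch of the kind of data source lookup under discussion (the tag value is hypothetical):

data "aws_instance" "hosting" {
  filter {
    name   = "tag:Name"
    values = ["cloudposse-hosting"]
  }
}

# dimensions = { InstanceId = data.aws_instance.hosting.id }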

2021-03-14

patrykk avatar
patrykk

Hi. I would like to use terraform-aws-cloudwatch-flow-logs with Terraform >=0.12.0, so I pulled branch 0.12/master from the git repo. I get multiple warnings about interpolation syntax. I checked other branches and all of them use the old interpolation syntax "${var.something}". Is there any branch with sorted interpolation (terraform 0.12.x) for that module? I can do it myself, but there’s no sense in that if it’s already done and I am just blind

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this module was not converted to the new syntax yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

pull requests are welcome

patrykk avatar
patrykk

ok. Thanks

bk avatar

Hi folks, I wouldn’t normally post so soon after filing a bug, but the github bug template suggested joining this Slack. Please shout at me if this was bad etiquette.

Anyone run into an issue where the s3 bucket creation doesn’t respect the region you set in the provider? https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/88

Error creating S3 bucket ... the region 'us-east-1' is wrong; expecting 'eu-central-1' · Issue #88 · cloudposse/terraform-aws-tfstate-backend

Found a bug? Maybe our Slack Community can help. Describe the Bug I've set the AWS provider to use us-east-1 but I'm getting this error when the module tries to create the s3 bucket: Error …

RB avatar

Interesting. Do you have eu-central-1 used anywhere in your terraform code? I can’t find that string used in the module itself

RB avatar

The region in the module is using the region data source so it should use the region from the provider

bk avatar

Hey Rb, thanks for replying. Nope, I don’t think I’m specifying that anywhere in my code. I attached a gist to the bug.

I suppose I could create a new directory and copy bit by bit to be completely sure.

bk avatar

same error in a fresh directory/fresh terraform init

bk avatar

weird

bk avatar

I do see eu-west-1 explicitly defined in .terraform\modules\terraform_state_backend.dynamodb_table_label\examples\autoscalinggroup but that’s an example file and also not eu-central-1

bk avatar

I gotta assume the module properly creates unique bucket names, right? https://github.com/hashicorp/terraform/issues/2774

AuthorizationHeaderMalformed for S3 remote · Issue #2774 · hashicorp/terraform

I'm using S3 remote and configuring with the following command: terraform remote config -backend=S3 -backend-config="bucket=test" -backend-config="key=terraform.tfstate" -ba…

bk avatar

gah… this solves it I think:

s3_bucket_name = random_string.random.result

the bucket name wasn’t unique somehow
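
For anyone landing here later, a minimal sketch of that fix (module source and inputs assumed from the thread):

resource "random_string" "random" {
  length  = 8
  special = false
  upper   = false # S3 bucket names must be lowercase
}

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # ...other inputs elided...
  s3_bucket_name = random_string.random.result
}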

2021-03-15

Bart Coddens avatar
Bart Coddens

any keybase experts in the group? We are migrating to more teamwork in terraform configurations. If I use module: https://github.com/cloudposse/terraform-aws-iam-user and I configure the login profile, the console password is encrypted as a base64 key with my own encryption key in keybase. In the workflow I decrypt the key and store it in a password vault. If I leave the company, is it best that my colleagues taint the resource and recreate it with their own key?

cloudposse/terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

Joe Niland avatar
Joe Niland

You can use any public PGP key so perhaps you should use a ‘shared’ one in the first place?
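
For illustration, the underlying aws_iam_user_login_profile resource accepts either a base64-encoded public key or a keybase:username reference, so a shared identity might look like this (names assumed):

resource "aws_iam_user_login_profile" "this" {
  user    = aws_iam_user.this.name
  pgp_key = "keybase:acme_shared_ops" # a shared team key rather than a personal one
}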

hkaya avatar

Hi Bart, did you find a solution to your problem? I think a ‘shared’ private key would lead to the situation that a new key would eventually be required, which would also bring re-encryption of secrets to the table. You would have to implement your own rotation method to handle this regularly and without breaking stuff.

hkaya avatar

I am currently facing the same issue and would like to learn how others are dealing with this challenge.

Joe Niland avatar
Joe Niland

@hkaya have a listen to the latest office hours. It’s discussed on there.

hkaya avatar

@Joe Niland thanks, will surely do.

LT avatar

Hi All, got a silly question, I’ve deployed the https://github.com/cloudposse/terraform-aws-jenkins into AWS environment, but I can’t seemed to find the URL to access jenkins server, I tried Route53 DNS name with port 8080 and 80 in the URL, nothing seemed to work. Could anymore point me how to access the jenkins server?

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Zach avatar

From the network diagram it should be open on port 443, https

LT avatar

@Zach Thanks for the quick response, I’ve tried all of them, 443, 80, 8080 on Route53 DNS, ALB DNS name, EFS hostname, efs_dns_name

LT avatar

still not hitting it.

Zach avatar

I’ve not used this module myself but you probably just need to hit all the basic troubleshooting then - go see what the R53 record is pointed at, check any ALBs and TG health, etc

Zach avatar

like when you say you can’t reach it… what’s happening? Is it timing out? (that’s probably a SG issue) 404 (reaches jenkins but something is screwy so it’s not found)? 503 (hitting the LB but jenkins isn’t responding)?

Mohammed Yahya avatar
Mohammed Yahya
output "elastic_beanstalk_environment_hostname" {
  value       = module.elastic_beanstalk_environment.hostname
  description = "DNS hostname"
}
LT avatar
LT
01:38:40 PM

@Zach route53 zone name just returned no such endpoint I believe.

LT avatar
LT
01:39:40 PM

The ALB DNS however is showing this, but not the Jenkins login page nor a place where I can navigate to it.

LT avatar
LT
01:42:04 PM

@Mohammed Yahya Not sure what you mean with that, but I have that output and it’s not responding via web request.

LT avatar

@Andriy Knysh (Cloud Posse) @Maxim Mironenko (Cloud Posse), sorry for the ping, but seeing the great work from both of you, if could provide some insight as to what is happening here, I would greatly appreciate this.

LT avatar

just to provide some info, I am using the complete module where I go with all the default info provided in the example tfvars file except different dns_zone_id and github_oauth_token and jenkins username and pw of course.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it gets deployed to Elastic Beanstalk, so this output should work

output "elastic_beanstalk_environment_hostname" {
  value       = module.elastic_beanstalk_environment.hostname
  description = "DNS hostname"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

unless it did not get deployed successfully

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please login to the AWS console and look at the CodePipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it had any errors, it would show the failed steps in red

LT avatar
LT
07:08:07 PM

@Andriy Knysh (Cloud Posse) Thank you for the quick response, everything is deployed successfully.

LT avatar
LT
07:09:41 PM

there’s an error like you said, but I have a question: would a failed build stop me from accessing the Jenkins server?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the pipeline could not access the repo or the branch

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it either does not exist, or is private

LT avatar

Fixing this now. Thanks Andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/jenkins

Contribute to cloudposse/jenkins development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

once working, you can switch the repo

LT avatar

Thanks @Andriy Knysh (Cloud Posse). Looks like all good now.

LT avatar

I have another question, after looking through all the root modules, is changing from github repo to Bitbucket repo possible using this module?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Bitbucket is prob not supported by the current version (but should be possible to switch to it)

LT avatar

Thanks for clarifying that. Appreciate it.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I create some Lambda functions like this:

resource "aws_lambda_function" "main" {
  for_each = local.functions
  ...
}

Is it possible to add dependencies so these functions are created in serial, or so they depend on each other?

Alex Jurkiewicz avatar
Alex Jurkiewicz

re-posting this question

loren avatar

i don’t think so… not literally, anyway, which i imagine would look like this:

resource "aws_lambda_function" "main" {
  for_each = local.functions

  <attr> = aws_lambda_function.main["<label>"].<attr>
}
loren avatar

only option i can think of is to split it into as many resource blocks as you need to link the dependencies
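
A sketch of that splitting approach (the two batch locals are assumed); depends_on works between whole resource blocks even when both use for_each:

resource "aws_lambda_function" "batch_one" {
  for_each = local.functions_batch_one
  # ...
}

resource "aws_lambda_function" "batch_two" {
  for_each   = local.functions_batch_two
  depends_on = [aws_lambda_function.batch_one]
  # ...
}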

loren avatar

i’d be interested in following if you open a feature request…

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah. It seems like when you try and create a lot of functions at the same time, AWS will return rate limit failures for most of them as the Lambda control plane limit is 15/sec, and you can only create 1 Lambda in any one VPC at a time. So this results in Terraform creating about one Lambda every 2mins if they are all in the same VPC

loren avatar

oh, that should definitely be opened as a bug

loren avatar

terraform has retry handlers for rate limiting

Alex Jurkiewicz avatar
Alex Jurkiewicz

Well, that’s the thing. The internal retry logic works and it eventually succeeds. It just takes forever

loren avatar

lulz ok

Alex Jurkiewicz avatar
Alex Jurkiewicz

So it’s not really a Terraform bug, and I doubt Hashicorp would do anything to address this. For now we work around it with -parallelism=1 but that slows down the whole configuration

Alex Jurkiewicz avatar
Alex Jurkiewicz

Another use-case is AppSync resources: you can only modify each AppSync API in serial, so if you have for_each on appsync resolvers or data sources you get failures (the AWS provider has no rate-limiting retry logic there as it’s a new service and this is always forgotten)

loren avatar

the provider might be able to be smarter about how it schedules them… is there an api to check the rate limit?

loren avatar

for_each is still relatively new also… i feel like this is a good issue to open either way

Alex Jurkiewicz avatar
Alex Jurkiewicz

yeah. That would be ideal. There’s no AWS API to check the rate limit. I wonder whether the AWS provider uses the AWS SDK’s built-in retry logic or rolls its own. The AWS SDK retry logic is really dumb, it’s plain exponential backoff w/ jitter. So you end up with cases like this where you are retrying every 120 seconds when you could be retrying every 10

Alex Jurkiewicz avatar
Alex Jurkiewicz

That’s fair. I’ll open a bug. I’m not hopeful though.

Alex Jurkiewicz avatar
Alex Jurkiewicz
resource "aws_resource" "foo" {
  count      = n
  depends_on = [aws_resource.foo[n-1]]
}

I wonder if it works with count

loren avatar

heh. yeah, but if there’s some discussion it would be worth it

loren avatar

maybe another approach would be to expose retry logic to every resource as a core feature so you can override it some… retry_method and retriable_errors or somesuch

Alex Jurkiewicz avatar
Alex Jurkiewicz

I hastily wrote up this feature request – per-resource parallelism https://github.com/hashicorp/terraform/issues/28152

Allow resource-level parallelism management (for for_each blocks) · Issue #28152 · hashicorp/terraform

Current Terraform Version 0.14 Use-cases Sometimes, providers have limitations, or the backend API has a limitation, regarding parallel modification of a certain resource. Adding a lifecycle { para…

1
loren avatar

fwiw, you can avoid the deadlock in your example by threading the bucket name to the public access block from the bucket policy…

resource "aws_s3_bucket" "default" {
  bucket = "mybucket"
}
resource "aws_s3_bucket_policy" "default" {
  bucket = aws_s3_bucket.default.bucket
  policy = ...
}
resource "aws_s3_bucket_public_access_block" "default" {
  bucket = aws_s3_bucket_policy.default.bucket
  ...
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah. I know

1

2021-03-16

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have an example of tf variable validation to make sure a date is in the format YYYY-MM-DD ?

is the below valid?

variable "expiry_date" {
  description = "The date you wish the certificate to expire."
  type        = string

  validation {
    condition = regex("(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)")
    error_message = "The expiry_date value must be in the format YYYY-MM-DD."
  }
}
Matt Gowie avatar
Matt Gowie

Looks almost correct — You need to specify var.expiry_date in the condition for the regex to be run on that input variable.

loren avatar

slack needs a line-by-line review and suggestion feature

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i fixed using …

variable "expiry_date" {
  description = "The date you wish the certificate to expire."
  type        = string

  validation {
    condition     = length(regexall("(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)", var.expiry_date)) > 0
    error_message = "The expiry_date value must be in the format YYYY-MM-DD."
  }
}
2
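
A common variant (not from the thread) wraps the match in can() and anchors the pattern so trailing characters also fail validation:

  validation {
    condition     = can(regex("^\\d{4}-\\d{2}-\\d{2}$", var.expiry_date))
    error_message = "The expiry_date value must be in the format YYYY-MM-DD."
  }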
Florian SILVA avatar
Florian SILVA

Hello guys! I recently joined this Slack since I’m starting to use the CloudPosse module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment. I’m very satisfied with the module, but I think a feature is missing. I’ve opened an issue for it and would be glad to discuss and help push it forward if I’m doing things right.

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Florian SILVA thanks, your contributions are very welcome

Florian SILVA avatar
Florian SILVA

Thank you @Andriy Knysh (Cloud Posse). The current issue that I opened is the following: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues/174 I’d appreciate feedback when someone has time. I’m not sure yet how best to resolve it. :)

OliverS avatar
OliverS

In case anyone is interested, I published a module on terraform registry that provides an alternative method of integrating provisioned state with external tools like helm and kustomize: https://github.com/schollii/terraform-local-gen-files. API is still alpha, although I have made use of it in a couple projects and I really like the workflow it supports. So any feedback welcome!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nice!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is something we have wanted to do more of; however, we have trouble reconciling it with the gitops patterns we follow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The end result needs to be that it opens a PR against the repo with the changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(it’s that last part we haven’t looked into)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TIL: The terraform (cli) does some interesting caching.

Error: Failed to download module

Could not download module "pr-16696-acme-migrate"
(pr-16696-acme-migrate.tf.json:3) source code from
"git::<https://github.com/acme/repo.git?ref=>....": subdir ".terraform" not found
  1. Terraform downloads all the module sources from git to .terraform/
  2. The first clone is always a deep clone, with all files (including all dot files)
  3. The next time terraform encounters a module source and there’s a “cache hit” on the local filesystem, it does a copy of all the files, but ignores all dot files
  4. If (like we did) you happen to have a .terraform directory with terraform code for a microservice repo, this “dot file” was ignored.
  5. Renaming .terraform to terraform resolved the problem.
2
1

2021-03-17

Ashish Srivastava avatar
Ashish Srivastava

Hi, I facing some issues with output value even after using depends_on block

Ashish Srivastava avatar
Ashish Srivastava

I’m provisioning PrivateLink on MongoDB Atlas and require a connection string; following their GitHub example I created my script, but it fails at the output step.

Jonathan Chapman avatar
Jonathan Chapman

I’m working on upgrading Terraform from 0.12 to 0.13 and it is telling me that it will make the following change. Also, I’m upgrading the AWS provider to >= 3

  # module.redirect_cert.aws_acm_certificate_validation.cert[0] must be replaced
-/+ resource "aws_acm_certificate_validation" "cert" {
        certificate_arn         = "arn:aws:acm:us-east-1:000:certificate/b55ddee7-8d98-4bf2-93eb-0029cb3e8929"
      ~ id                      = "2020-10-28 18:31:37 +0000 UTC" -> (known after apply)
      ~ validation_record_fqdns = [ # forces replacement
          + "_2b63a2227feb97338346b0920e49818b.xxx.com",
          + "_423e90cf36285adac5ee4213289e73ab.xxx.com",
        ]
    }

The validation records exist in both AWS and the terraform state, but not in the aws_acm_certificate_validation. I’ve read the documentation for upgrading the AWS provider to 3 and they mention it should be ok.

I’m uncertain what will happen if I apply this. Can anyone help confirm what will happen if I do apply this change? My biggest concern is that it doesn’t do anything to my cert.

Jonathan Chapman avatar
Jonathan Chapman

In case anyone has this question. The answer is no, it will not delete the certificate.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to perform client auth on the nginx ingress controller using a CA, server cert and client cert created via Terraform

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know how i can get the server cert using tls_locally_signed_cert to include the CA in the chain?

Brij S avatar

Does anyone here use TFE/TFC internally? How do you manage module versions in Github and test new releases?

Matt Gowie avatar
Matt Gowie

You can look at the way that Cloud Posse does module version tagging + testing for any of their modules as a good way to accomplish this.

Matt Gowie avatar
Matt Gowie

tl;dr for that topic is:

  1. We use the release-drafter GH action to automate tagging / releases: https://github.com/cloudposse/terraform-datadog-monitor/blob/master/.github/workflows/auto-release.yml
  2. And we use terratest to accomplish module tests: https://github.com/cloudposse/terraform-datadog-monitor/blob/master/test/src/examples_complete_test.go
cloudposse/terraform-datadog-monitor

Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor

Matt Gowie avatar
Matt Gowie

The test-harness setup can be easily reused to make that repeatable across many module repos: https://github.com/cloudposse/test-harness

cloudposse/test-harness

Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness

Brij S avatar

oh this test harness looks interesting. How do you use it though?

Matt Gowie avatar
Matt Gowie

There are no good docs to follow, but if you look at any Cloud Posse module you can reverse engineer the setup and go from there.

Brij S avatar

I just upgraded a terraform module to TF13 by running terraform 0.13upgrade . I created a versions.tf file with the following content:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
  required_version = ">= 0.13"
}

When I publish this to TFE, I get the following error:

Error: error loading module: Unsuitable value type: Unsuitable value: string required (in versions.tf line 3)

I’m not sure what this error alludes to; I’ve checked other public terraform modules with the same file and I don’t notice anything different

Hao Wang avatar
Hao Wang

what is the version of TFE?

Brij S avatar

how do I find the version? We do have several versions of terraform available

Brij S avatar

TFE v201910-1

Hao Wang avatar
Hao Wang
terraform {
  required_providers {
    tfe = {
      version = "~> 0.24.0"
    }
  }
}
Hao Wang avatar
Hao Wang

should be similar to this ^^^

Brij S avatar

thats required for a module?

Hao Wang avatar
Hao Wang

oh it should be in the root folder

Brij S avatar

I’m confused - this is a standalone terraform module

  1. how come I have to add the tfe provider
Brij S avatar

and it is in the ‘root’ folder already

Hao Wang avatar
Hao Wang

tfe not seen in the code you posted

Hao Wang avatar
Hao Wang

or I missed something

Brij S avatar

yeah i’m trying to publish a module to terraform enterprise

Hao Wang avatar
Hao Wang

what is in version.tf?

Hao Wang avatar
Hao Wang

oh ic, tfe is not needed

Brij S avatar

right..

Hao Wang avatar
Hao Wang

sorry, we should look at version.tf

Brij S avatar
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
  required_version = ">= 0.13"
}
Hao Wang avatar
Hao Wang

can I run the code on my local laptop?

Hao Wang avatar
Hao Wang

did you enable DEBUG?
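
One plausible cause (an assumption, not confirmed in the thread): TFE v201910-1 ships a pre-0.13 parser, and older 0.12 releases (before roughly 0.12.26, if I recall correctly) only accept plain version-constraint strings inside required_providers, which would produce exactly “string required”. A 0.12-compatible shape (constraints hypothetical):

terraform {
  required_providers {
    aws        = ">= 2.0"
    helm       = ">= 1.0"
    kubernetes = ">= 1.10"
  }
  required_version = ">= 0.12"
}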

Release notes from terraform avatar
Release notes from terraform
06:04:18 PM

v0.15.0-beta2 No content.

1
Release notes from terraform avatar
Release notes from terraform
06:24:17 PM

v0.15.0-beta2 UPGRADE NOTES: The output of terraform validate -json has been extended to include a code snippet object for each diagnostic. If present, this object contains an excerpt of the source code which triggered the diagnostic. Existing fields in the JSON output remain the same as before. (#28057) ENHANCEMENTS: config: Improved type…

cli: Add comprehensive JSON diagnostic structure by alisdair · Pull Request #28057 · hashicorp/terraform

The motivation for this work is to unify the diagnostic rendering path, ensuring that there is one standard representation of a renderable diagnostic. This means that commands which emit JSON diagn…

Tomek avatar

What’s the best way to have terraform start tracking an s3 bucket that was created in the console (and has data in it already)? The terraform has a definition for the s3 bucket but is currently erroring because of BucketAlreadyOwnedByYou

Tomek avatar

is this something I can accomplish with the terraform state command?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

terraform import
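
With the bucket already defined in config, that looks like this (resource address and bucket name assumed):

terraform import aws_s3_bucket.this my-existing-bucket-name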

Tomek avatar

oh nice, thanks!

Ryan Fisher avatar
Ryan Fisher

Anyone know how I can update a parameter for an existing object? I need to modify this:

obj = {
  one = {
    two = {
      foo = "bar"
    }
  },
  three = "four"
}

Into this:

obj = {
  one = {
    two = {
      foo = "bar",
      biz = "baz"
    }
  },
  three = "four"
}
Matt Gowie avatar
Matt Gowie

Terraform data structures are immutable so you need to create a new local using the previous object. For example:

new_data = merge(var.old_data, {
  one = {
    two = {
      biz = "baz"
    }
  } 
})
Matt Gowie avatar
Matt Gowie

I didn’t test the above so I might be off in some syntax, but you get the idea.

Ryan Fisher avatar
Ryan Fisher

ok, yeah, that is what I am trying. Thanks.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also if you need true deep merging you cannot do it in terraform core with merge

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils

1
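
To make Erik’s caveat concrete: merge() only replaces top-level keys, so Matt’s snippet above would drop old_data.one.two.foo entirely. A plain-HCL workaround is to rebuild each nested level (a sketch using the thread’s names):

locals {
  new_data = merge(var.old_data, {
    one = merge(var.old_data.one, {
      two = merge(var.old_data.one.two, {
        biz = "baz"
      })
    })
  })
}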
OliverS avatar
OliverS

Hey sometimes I get asked why I prefer TF over CF (cloudformation). I’m curious what others’ reasons are. Mine, after using CF a couple months (so not nearly as much as terraform which been using almost 3 years):

• CF difficult to modularize (nesting doesn’t cut it and IIRC nesting is discouraged)

• CF has clunky template language

• Planning is shallow and it is often difficult to know why something will be changed

• Can get stuck in messed up state eg upgrade failed then rollback failed too

• Infra upgrade-as-a-transaction (atomic, all or nothing) just feels weird

• Having to load templates to s3 is annoying Could probably find more but that’s all that comes to mind just now.

1
loren avatar

lack of data sources to lookup values from existing resources

loren avatar

no locals or other method for easily generating/reusing intermediate values

loren avatar

very limited functions and expressions available in the DSL

loren avatar

it’s yaml, but not really, with syntax that isn’t really yaml, and syntax that IS yaml but isn’t supported (anchors)

loren avatar

parameter validation is downright annoying when using the AWS types and trying to make a variable optional (try using the type AWS::EC2::KeyPair::KeyName and making the keypair an optional parameter…)

loren avatar

terraform isn’t just for aws! IaC for github, or ldap, or dns, or or or or…

this1
1
10002
Zach avatar


lack of data sources to lookup values from existing resources
this one blows my mind every time
terraform isn’t just for aws!
and this is pretty key. terraform brings together or covers most/all your sources not just the AWS pieces.

OliverS avatar
OliverS

All good ones!

OliverS avatar
OliverS

• exported outputs of stacks are not allowed to change

• lacks a central place where templates can be shared

Alex Jurkiewicz avatar
Alex Jurkiewicz

No escape hatches to import/export/manually edit resources / statefiles

loren avatar

oh that’s a good one @Alex Jurkiewicz! state manipulation commands are huge. so many times i’ve wished i could just move stuff around in the cfn stack to avoid resource cycles or get myself out of a broken stack launch/update

10002
Alex Jurkiewicz avatar
Alex Jurkiewicz

We have some redshift clusters managed as nested stacks in cloudformation, and they are wedged in a completely inoperable state. It’s impossible to make ANY change to the resources within. We’ve had to fall back to importing them into a new Terraform configuration and setting a “deny all” stack update policy in cloudformation. But those stacks will live forever as some busted thing

2
Matt Gowie avatar
Matt Gowie

These are great

I feel like we should make a cheap website called tf-vs-cf.com that just lists these things for easily sending to clients, managers, new engineers, etc.

2
1
managedkaos avatar
managedkaos

@Matt Gowie let’s do it! i’ve created an org and a repo to host the static site and picked up the domain tfvscf.com (no dashes).

we can do a static site hosted on github pages.

for now maybe add these topics as issues? and then figure out the site design as we go.

all contributors are welcome! wave

managedkaos avatar
managedkaos
tfvscf/tfvscf.com

Hosting the website, tfvscf.com. Contribute to tfvscf/tfvscf.com development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

Hahah you went and did it @managedkaos — cool. I’ll try to contribute over the weekend

managedkaos avatar
managedkaos

let me know your github ID and i’ll send an invite

Matt Gowie avatar
Matt Gowie

Would be happy to host on Amplify unless someone has a better option. I know GH pages is a good option… but I have a whole simple Amplify setup that I like and use for customer sites. That is how I cheaply host mattgowie.com + masterpoint.io:

module "masterpoint_site" {
  source = "git::<https://github.com/masterpointio/terraform-aws-amplify-app.git?ref=master>"

  namespace                    = var.namespace
  stage                        = var.stage
  name                         = "masterpointio"
  organization                 = "masterpointio"
  repo                         = "masterpoint.io"
  gh_access_token              = local.secrets.gh_access_token
  domain_name                  = "masterpoint.io"
  description                  = "The simple HTML website for Masterpoint Consulting (@masterpointio)."
  build_spec_content           = data.local_file.masterpoint_build_spec.content
  enable_basic_auth_on_master  = false
  enable_basic_auth_on_develop = true
  basic_auth_username          = "masterpoint"
  basic_auth_password          = local.secrets.basic_auth_password
  develop_pull_request_preview = true

  custom_rules = [{
    source    = "<https://www.masterpoint.io>"
    target    = "<https://masterpoint.io>"
    status    = "301"
    condition = null
  }]
}
1
Matt Gowie avatar
Matt Gowie

@managedkaos GH is @Gowiem

Gowiem - Overview

Terraform and AWS Consultant. AWS Community Builder. Owner @ Masterpoint Consulting. - Gowiem

managedkaos avatar
managedkaos

yeah that would be cool! i picked this up as a project to do some static site work (which i have been meaning to ramp up on). I’ve also been wanting to try amplify so yep, I’m open!

2
sheldonh avatar
sheldonh

Matt that’s awesome. Is it free? Netlify is pretty much my go to for static site hosting and has cicd and is free. I thought amplify with aws resources would cost?

Mohammed Yahya avatar
Mohammed Yahya

Thank you guys for this, I was looking into make it based on @Matt Gowie suggestion.

Matt Gowie avatar
Matt Gowie

@sheldonh it’s basically free. With Amplify, you pay per build I’m pretty sure, but the hosting is free? Don’t quote me. I do know that I pay somewhere between nothing and 50 cents a month for the two static sites that I manage on Amplify.

1
sheldonh avatar
sheldonh

Got you. So it’s free because it’s within the AWS free tier, but not free as in a free service tier, right? Asking as netlify is pretty much the gold standard for static website ease of use when I looked in the past, but I am open to trying something new for future projects if it makes sense, esp integrated with AWS.

managedkaos avatar
managedkaos

I think it’s free/next-to-free because it’s cheap.
Static Web Hosting - Pay as you go

Build & Deploy
$0.01 per build minute

HOSTING
$0.023 per GB stored per month
$0.15 per GB served

https://aws.amazon.com/amplify/pricing/

AWS Amplify Pricing | Front-End Web & Mobile | Amazon Web Services

With AWS Amplify, you pay only for what you use with no minimum fees or mandatory service usage. Get started for free with the AWS Free Tier.

this1
Matt Gowie avatar
Matt Gowie

I don’t even know if it’s within free tier… I just think they charge you for each build minute and for the large majority of FE sites build minutes are extremely low.

managedkaos avatar
managedkaos

they have some good examples on the pricing page

managedkaos avatar
managedkaos

This example:
A startup team with 5 developers have an app that has 300 daily active users. The team commits code 2 times per day.
comes out to $8/mo.

so a site that is waaaaay less than even that will be crazy cheap

managedkaos avatar
managedkaos

i think the benefit to Netlify is you are not tied to AWS. i used it a while back and it was easy to onboard from GitHub. I can see the AWS onboarding being overwhelming if you’re not already using it.

For an everyday dev that just wants to deploy a static site, yeah use Netlify. Easy decision.

If you are already on AWS, or at least know how to set up an account and maybe already have some other workloads there, perhaps consider Amplify?

this2
Matt Gowie avatar
Matt Gowie

Yeah, I fully agree with that decision tree.

1
Takan avatar

anyone knows how to fix the following error ?

Failed to load state: Terraform 0.14.5 does not support state version 4, please update.
Alex Jurkiewicz avatar
Alex Jurkiewicz

Update your version of terraform

Takan avatar

i did it with tfenv. But it still doesn’t work at all

2021-03-18

Bart Coddens avatar
Bart Coddens

Here terraform confuses me a bit, I source a module like this:

module "global-state" {
  source       = "../../modules/s3state-backend"
  profile      = "cloudposse-rocks"
  customername = "cloudposseisawesome"
}

Bart Coddens avatar
Bart Coddens

I need to pass the values of the variables in the main terraform file like this; is there a handier way to do this?

Bart Coddens avatar
Bart Coddens

because the values are variable, like cloudposse-rocksforever for example
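
One common pattern (a sketch; variable wiring assumed) is to declare variables for the changing values and feed them from a tfvars file, so the module block itself stays fixed:

variable "customername" {
  type = string
}

module "global-state" {
  source       = "../../modules/s3state-backend"
  profile      = "cloudposse-rocks"
  customername = var.customername
}

# terraform.tfvars:
# customername = "cloudposse-rocksforever"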

Jon avatar

There was a really well put together document on Cloudposse’s GitHub a little while back that talked about Terraform modules. In a gist, it basically said “we will do our best to make a well developed module but in the end you might need to add your own secret sauce to make it work well for your use case.” I thought I had bookmarked it but I guess not. If anyone knows what readme I was referring to and has the link handy, I’d really appreciate it if you posted it! Until then, I’ll keep looking around for it.

joshmyers avatar
joshmyers

Anybody else hit this? KMS key policy re ordering - https://github.com/hashicorp/terraform-provider-aws/issues/11801

Order is lost for data aws_iam_policy_document when applied to resource aws_kms_key · Issue #11801 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

joshmyers avatar
joshmyers

FYI this just looks like a KMS issue that can be easily replicated in the UI. KMS key policies are saved in a random order no matter how they are applied/saved

joshmyers avatar
joshmyers

nvm, done some digging and added a comment to that ticket.

joshmyers avatar
joshmyers

Also opened an AWS support request, will see what they say.

loren avatar

default_tags are coming in v3.33.0 as an attribute of the aws provider… https://github.com/hashicorp/terraform-provider-aws/pull/17974

[Proof of Concept] Default tags implementation: provider and subnet/vpc resources by anGie44 · Pull Request #17974 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

10003
loren avatar

note it’s public preview and currently limited to aws_subnet and aws_vpc

provider: New default_tags argument as a public preview for applying tags across all resources under a provider. Support for the functionality must be added to individual resources in the codebase and is only implemented for the aws_subnet and aws_vpc resources at this time. Until a general availability announcement, no compatibility promises are made with these provider arguments and their functionality.
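
For reference, the shape of the new argument (tag values hypothetical):

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "dev"
      ManagedBy   = "terraform"
    }
  }
}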

Matt Gowie avatar
Matt Gowie

Wow. That’s huge. I love it.

2
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Poor @Imanuel Mizrachi just implemented a Cloudrail rule to alert if no tags were added to a resource, now he’ll need to see if default_tags is set and know which resources are impacted

2
Imanuel Mizrachi avatar
Imanuel Mizrachi

Just missing “advanced_tags” now…

1
melissa Jenner avatar
melissa Jenner

How do I code terraform properly so that it provisions the security groups I manually created?

In the AWS console, I manually provisioned the security rules below for ElasticSearch. There are three VPCs. A transit gateway connects them. ElasticSearch is installed in VPC-A.

Inbound rules:
   Type      Protocol   Port range      Source

All traffic     All        All       40.10.0.0/16  (VPC-A)
All traffic     All        All       20.10.0.0/16  (VPC-B)
All traffic     All        All       30.10.0.0/16  (VPC-C)

Outbound rules:
   Type      Protocol   Port range    Destination
All traffic     All        All        0.0.0.0/0

But, the terraform code below is not able to provision the above security groups.

resource "aws_security_group" "shared-elasticsearch-sg" {
  name = var.name_sg
  vpc_id = data.terraform_remote_state.vpc-A.outputs.vpc_id
  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    cidr_blocks = [data.terraform_remote_state.vpc-A.outputs.vpc_cidr_block,
                   data.terraform_remote_state.vpc-B.outputs.vpc_cidr_block,
                   data.terraform_remote_state.vpc-C.outputs.vpc_cidr_block]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = var.name_sg
  }
}

module "elasticsearch" {
  source                = "git::<https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1>"
  security_groups       = [aws_security_group.shared-elasticsearch-sg.id,
                           data.terraform_remote_state.vpc-A.outputs.default_security_group_id]
  vpc_id                 = data.terraform_remote_state.vpc-A.outputs.vpc_id
  ......
}

The above code provision the security rules below:

Inbound rules:
  Type      Protocol   Port range         Source
All TCP       TCP       0 - 65535   sg-0288988f38d2007be / shared-elasticSearch-sg
All TCP       TCP       0 - 65535   sg-0893dfcdc1be34c63 / default
Outbound rules:
  Type      Protocol   Port range    Destination
All TCP       TCP      0 - 65535      0.0.0.0/0

Security rules of sg-0288988f38d2007be / shared-elasticSearch-sg

   Type      Protocol   Port range      Source
All traffic     All        All       40.10.0.0/16  (VPC-A)
All traffic     All        All       20.10.0.0/16  (VPC-B)
All traffic     All        All       30.10.0.0/16  (VPC-C)

Outbound rules:
   Type      Protocol   Port range    Destination
All traffic     All        All        0.0.0.0/0

The security groups provisioned by the terraform code do not work: from VPC-B and VPC-C, I cannot reach elasticsearch in VPC-A. How do I code terraform properly so that it provisions the security groups I manually created?

khabbabs avatar
khabbabs

What are the rules in the default SG?

melissa Jenner avatar
melissa Jenner

I do not know. Actually, I do not need default SG.

melissa Jenner avatar
melissa Jenner

I only need the one SG which I manually created. “data.terraform_remote_state.vpc-A.outputs.default_security_group_id” is included in the Terraform code, but I do not actually need it. The problem would be solved if the Terraform code provisioned the SG with inbound rules of “Type: All traffic, Protocol: All, Port: All”, not “Type: All TCP, Protocol: TCP, Port range: 0 - 65535”.

Jeff Dyke avatar
Jeff Dyke

Sorry i may have read this wrong, but if these were created in the console, were they properly imported into terraform and the statefiles, to the point where you could run terraform plan and get no changes? Also, when talking about VPCs, you don’t mention peering. I realize these are likely example CIDRs, but you’re not using random internal CIDRs. I clicked on this a couple days ago and wanted to respond before clicking another thread. Hope you’ve made some progress.

2021-03-19

Mohammed Yahya avatar
Mohammed Yahya

@loren https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.33.0 with provider: New default_tags for applying tags across all resources under a provider

Release v3.33.0 · hashicorp/terraform-provider-aws

NOTES: data-source/aws_vpc_endpoint_service: The service_type argument filtering has been switched from client-side to new EC2 API functionality (#17641) provider: New default_tags argument as a p…

1
Bart Coddens avatar
Bart Coddens

Hi all, I would like to query a existing security group id and assign it to a ec2 instance

Bart Coddens avatar
Bart Coddens

for example:

hkaya avatar

Hi folks, can someone help me with an issue regarding the gitlab provider? I’d like to try setting up a vanilla gitlab.com workspace from scratch. Right now the repo is completely empty, only one initial (owner) account exists. I tried using his personal access token to create a first gitlab_group resource, but I’m only getting 403 forbidden errors. Am I missing something or is there another requirement beforehand?

hkaya avatar

the exact error including the api path looks like this:

Error: POST <https://gitlab.com/api/v4/groups>: 403 {message: 403 Forbidden}
hkaya avatar

turns out, I can import a manually created group as a resource into the terraform state when I set it to public. Looks like the free gitlab product is not exactly suitable for this purpose.

Bart Coddens avatar
Bart Coddens
data "aws_security_groups" "cloudposse-ips" {
  tags = {
    Name = "cloudposse-ips"
  }
}


vpc_security_group_ids      = ["data.aws_security_groups.cloudposse-ips.ids"]
imran hussain avatar
imran hussain

You do not need to quote it: vpc_security_group_ids = [data.aws_security_groups.cloudposse-ips.ids]

Bart Coddens avatar
Bart Coddens

this does not seem to work

Bart Coddens avatar
Bart Coddens

though the security group gets queried correctly:

Bart Coddens avatar
Bart Coddens
data "aws_security_groups" "cloudposse-ips" {
    arns    = [
        "arn:aws:ec2:eu-west-1:564923036937:security-group/sg-0d5e812c1bb1c471a",
    ]
    id      = "eu-west-1"
    ids     = [
        "sg-0d5e812c1bb1c471a",
    ]
    tags    = {
        "Name" = "cloudposse-ips"
    }
    vpc_ids = [
        "vpc-0baf4791f3db9bd8c",
    ]
}
Jo avatar

Should setting the following environment variables in the shell (zsh) ensure that the variables are set in the azurerm provider section?

export ARM_CLIENT_ID=aaaa
export ARM_CLIENT_SECRET=bbbb
export ARM_SUBSCRIPTION_ID=cccc
export ARM_TENANT_ID=dddd
Jo avatar
provider "azurerm" {
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
  version         = ">=2.51.0"
  features {}
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

no. Terraform doesn’t know how to map those env vars to your Terraform variables

Alex Jurkiewicz avatar
Alex Jurkiewicz

Terraform execution engine can’t access environment variables at all, it’s up to you to explicitly specify every input variable

Jo avatar

should I be using TF_VAR_client_id when running locally?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, or use -var client_id=foo to your terraform plan/apply, or create a tfvars file
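
Both shapes for reference (values assumed):

export TF_VAR_client_id=aaaa
terraform plan

# or inline, without the env var:
terraform plan -var "client_id=aaaa"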

Jo avatar

something very odd happens in my environment - it’s different between the init and plan steps

Jo avatar

if i set ENV variables in the ARM_CLIENT_ID format the init works, but then plan doesn’t

Jo avatar

then I set the TF_VAR_client_id and others then the plan works

Jo avatar

and the init stage only works if I have the ARM_XXXXX_ID ENV variables in place

Alex Jurkiewicz avatar
Alex Jurkiewicz

i guess the azurerm provider reads those environment variables directly as a configuration source

Alex Jurkiewicz avatar
Alex Jurkiewicz

check the docs for it
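
The azurerm docs do describe reading the ARM_* variables directly for service principal auth, so with those exported the provider block can drop the explicit arguments entirely:

provider "azurerm" {
  features {}
}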

Bart Coddens avatar
Bart Coddens

when I specify it as such: vpc_security_group_ids = ["${data.aws_security_group.cloudposse-ips.ids}"]

joshmyers avatar
joshmyers

Have you tried vpc_security_group_ids     = [data.aws_security_group.cloudposse-ips.ids]

Bart Coddens avatar
Bart Coddens

that works, I was confused how to make a list

Bart Coddens avatar
Bart Coddens

but that seems to work as well ! thx

Bart Coddens avatar
Bart Coddens

it works

Bart Coddens avatar
Bart Coddens

but then I get an interpolation warning
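
For the record, the ids attribute of the aws_security_groups data source is already a list, so the warning-free 0.12+ form drops both the quotes and the wrapping brackets:

vpc_security_group_ids = data.aws_security_groups.cloudposse-ips.ids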

Mohammed Yahya avatar
Mohammed Yahya
Checkmarx/kics

Find security vulnerabilities, compliance issues, and infrastructure misconfigurations early in the development cycle of your infrastructure-as-code with KICS by Checkmarx. - Checkmarx/kics

2021-03-20

bk avatar

Hi friends, I’m creating an ec2 instance with https://github.com/cloudposse/terraform-aws-ec2-instance and the ssh keypair with https://github.com/cloudposse/terraform-aws-key-pair. the ssh connection seems to be timing out (no authorization error).

Is there some non-obvious, not-default setting I need to use to get the networking bits to work?

cloudposse/terraform-aws-ec2-instance

Terraform module for provisioning a general purpose EC2 host - cloudposse/terraform-aws-ec2-instance

cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

pjaudiomv avatar
pjaudiomv

What are the SG rules on ec2

bk avatar

I have all the rules listed here: https://github.com/cloudposse/terraform-aws-ec2-instance#example-with-additional-volumes-and-eip though most importantly:

{
   type        = "ingress"
   from_port   = 22
   to_port     = 22
   protocol    = "tcp"
   cidr_blocks = ["0.0.0.0/0"]
}
pjaudiomv avatar
pjaudiomv

Is it in a private subnet or a public one? If private, do you have a NAT?

bk avatar

instance has a public ip.

pjaudiomv avatar
pjaudiomv

Check the route table for the subnet; make sure 0.0.0.0 isn’t a black hole for some reason

bk avatar

I am slightly suspicious of

100	All traffic	All	All	0.0.0.0/0	 Allow
*	All traffic	All	All	0.0.0.0/0	 Deny
bk avatar

does * mean apply to all or like n/a rule applies to nothing?

pjaudiomv avatar
pjaudiomv

Looks good to me, can you post your terraform

bk avatar

yeah, one minute

bk avatar
discentem/mdmrocketship

Contribute to discentem/mdmrocketship development by creating an account on GitHub.

bk avatar

I’m sure I am making some dumb mistake with the networking bits

pjaudiomv avatar
pjaudiomv

I think the subnet has no egress

pjaudiomv avatar
pjaudiomv

What is route 0.0.0.0 pointed to if there is one

pjaudiomv avatar
pjaudiomv

You’re using a private subnet but you have nat set to false

bk avatar

ah. so should be nat_gateway_enable = true ?

pjaudiomv avatar
pjaudiomv

Yea

pjaudiomv avatar
pjaudiomv

It is probably the best module ever written

pjaudiomv avatar
pjaudiomv

I love cloudposse modules but nothing beats that one for creating vpcs and subnets and nats etc

bk avatar

I will check it out

pjaudiomv avatar
pjaudiomv

I usually call it like this

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name                           = "gitlab-runner-vpc"
  cidr                           = var.vpc_cidr_block
  azs                            = local.vpc_azs
  public_subnets                 = local.vpc_public_subnets
  private_subnets                = local.vpc_private_subnets
  enable_nat_gateway             = true
  single_nat_gateway             = true
  one_nat_gateway_per_az         = false
  enable_dns_hostnames           = true
  enable_dns_support             = true
  enable_s3_endpoint             = false
  enable_dynamodb_endpoint       = false
  manage_default_security_group  = true
  default_security_group_ingress = []
  default_security_group_egress  = []

  tags = local.tags
}
pjaudiomv avatar
pjaudiomv

then I’ll do something like this

locals {
  tags = {
    "environment" = "gitlab-runners"
  }

  vpc_public_subnets = [
    cidrsubnet(var.vpc_cidr_block, 8, 0),
    cidrsubnet(var.vpc_cidr_block, 8, 1),
    cidrsubnet(var.vpc_cidr_block, 8, 2),
  ]

  vpc_private_subnets = [
    cidrsubnet(var.vpc_cidr_block, 2, 1),
    cidrsubnet(var.vpc_cidr_block, 2, 2),
    cidrsubnet(var.vpc_cidr_block, 2, 3),
  ]

  vpc_azs = [
    "${var.aws_region}a",
    "${var.aws_region}b",
    "${var.aws_region}c",
  ]
}
pjaudiomv avatar
pjaudiomv
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "vpc_cidr_block" {
  type    = string
  default = "10.15.0.0/16"
}
pjaudiomv avatar
pjaudiomv
output "public_subnets" {
  value = module.vpc.public_subnets
}

output "private_subnets" {
  value = module.vpc.private_subnets
}

output "vpc_id" {
  value = module.vpc.vpc_id
}

output "vpc_cidr_block" {
  value = module.vpc.vpc_cidr_block
}
bk avatar

hmm. I think there is also something else wrong. should I put the instance on the public subnet heh?

pjaudiomv avatar
pjaudiomv

to get a single subnet you could do module.vpc.private_subnets[1]

bk avatar

okay it works if I put the instance on the public subnet. is that bad for security though?

pjaudiomv avatar
pjaudiomv

yup

pjaudiomv avatar
pjaudiomv

you should just use a nat

pjaudiomv avatar
pjaudiomv

or dont use ssh and use ssm

bk avatar

okay I’m going to switch to vpc module you mentioned.

pjaudiomv avatar
pjaudiomv

those are your two options if you want to access the ec2 in a private subnet

bk avatar

thanks for answering my obviously noob questions

pjaudiomv avatar
pjaudiomv

no prob

bk avatar

hmm, would this be module.vpc.private_subnets[1] or module.vpc.private_subnets[0]

pjaudiomv avatar
pjaudiomv

0 would be in az a and 1 would be in az b

pjaudiomv avatar
pjaudiomv

Ex us-east-1b

jose.amengual avatar
jose.amengual

if the connection is timing out you have a security group problem

bk avatar

makes sense. so I’m still not understanding something about nat, after switching to the aws vpc module.

pjaudiomv avatar
pjaudiomv

In this case it was definitely not having a route out to the internet - the private subnet had no routes to get out

bk avatar

I think i am having the same or similar problem still: https://github.com/discentem/mdmrocketship/blob/main/main.tf

pjaudiomv avatar
pjaudiomv

Ok maybe you had two problems

bk avatar

probably

pjaudiomv avatar
pjaudiomv

If there is now a route to the nat on that subnet’s route table then yeah, I would double check your SG

bk avatar

i could also just not use the ec2 module and use normal resource. that might be abstracting too much right now

bk avatar

I’m still lacking understanding on my issue (due to little networking experience), especially given the advice to keep the host on a private subnet.

I found this article suggesting a jumphost setup but.. is that required or can I do it directly somehow with an internet gateway/nat gateway? https://dev.betterdoc.org/infrastructure/2020/02/04/setting-up-a-nat-gateway-on-aws-using-terraform.html

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I would use SSM instead - it installs an agent on your EC2 and you can access it far more securely.

I might have missed this - but why do you want to have SSH access? Is it just to manage the host? If so, AWS SSM would be your solution.

1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Opening SSH to the world on a public subnet is a big no-no.

bk avatar

Yeah just to manage it. Okay, I’ll go with ssm I guess. Any material you can share on why it’s more secure?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

There’s a bunch of it online, but the idea is this: If you have SSH open to the world, then you’re only relying on the authentication mechanism. Specifically, you’re hoping nobody gets your SSH key_pair. So, for a hacker, they just need to get that file somehow and they can get into your system. (and there’s a lot of ways for them to get the file)

However, when you put the server inside a private subnet, then now they need to actually establish authentication via AWS’s IAM. In most organizations, that is far more secure, as it requires MFA and may have additional mechanisms attached to it.
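
In practice that flow is a single CLI call once the prerequisites are in place (instance ID hypothetical):

# requires the SSM agent on the instance and an instance profile
# with the AmazonSSMManagedInstanceCore policy attached
aws ssm start-session --target i-0123456789abcdef0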

bk avatar

Makes sense, thanks for the explanation!

2021-03-21

loren avatar
Release v0.2.0 · hashicorp/terraform-cdk

Breaking Changes Generated classes for modules from the registry will change - see this comment for more details Phased out testing and support for Terraform 0.12. It’s likely to still work for no…

1

2021-03-22

Or Azarzar avatar
Or Azarzar

Hi All!

What is the best approach to integrate a security audit step on a Terraform pipeline in Jenkins using a third-party provider?

  1. Should the provider supply a Jenkins plugin that adds an extra step having access to a repo with the Terraform plan file output?
  2. Should the provider supply a Jenkins shared library that can be imported in any existing pipeline, calling a dedicated function with the Terraform plan output or path?
  3. Should the provider supply a docker image that exposes a rest API endpoint receiving the Terraform plan output?
roth.andy avatar
roth.andy

Not sure I’m fully understanding your question, but generally I don’t want something that is locked to Jenkins. I prefer something like a CLI that I can run in Jenkins, but also in any other CI tool or even locally

roth.andy avatar
roth.andy

But this is one of those things where if you ask 10 people you’ll get ~10~ 1 different answers

bazbremner avatar
bazbremner

I’m with @roth.andy on this one - I want to be able to run everything my CI system can run (regardless of the tool in choice) outside of CI and wrap the CI system du jour around it later/separately. As far as I’m concerned, CI is just there to run code and tell me if it worked, rather than being forced to use Jenkins or whatever to be able to make use of another tool.

roth.andy avatar
roth.andy


I want to be able to run everything my CI system can run
Yep. We even make Jenkins use the same commands that we’d use locally. Jenkins doesn’t run some long script, it literally just runs task test, task deliver, task deploy with environment variables as parameters.

Or Azarzar avatar
Or Azarzar

Ok that’s an interesting thought. does a cli tool equal a running container?

roth.andy avatar
roth.andy

Most third party providers I’ve used have a CLI tool that accepts an api token, that takes care of interfacing with the SaaS service in order to upload your scans/data/whatever so they can do their thing. They might also provide that CLI tool in a docker image as a convenience, but there’s nothing requiring that that image be used. We have our own docker image that is used as the Jenkins execution environment that includes the CLI tool

roth.andy avatar
roth.andy

For example: fossa-cli

fossas/fossa-cli

Fast, portable and reliable dependency analysis for any codebase. Supports license & vulnerability scanning for large monoliths. Language-agnostic; integrates with 20+ build systems. - fossas…

roth.andy avatar
roth.andy
Connect to Bridgecrew CLI

Introduction In addition to the automatic scans run periodically by Bridgecrew, you can run scans from a command line. This allows you to: Skip particular checks on Terraform Definition Blocks you chooseSkip particular checks on all resourcesRun only specific checks Installation Running Scans by C…

bazbremner avatar
bazbremner

No, running from the CLI != running a container, but there shouldn’t be anything that you do that prevents containerising that tool, and bonus points for making a sane and regularly updated container available, since it is a common use case, but again, not everyone is running containers by default.

Or Azarzar avatar
Or Azarzar

@michael.l

mrwacky avatar
mrwacky

I thought I saw something in Cloudposse TF to limit the length of resource names (ie for when we generate a label longer than AWS allows us to name a resource).. but I can’t find it

jose.amengual avatar
jose.amengual

module.this

MattyB avatar

I’m on mobile, IIRC check context.tf or the module it’s copied from.

jose.amengual avatar
jose.amengual
mrwacky avatar
mrwacky

ty

mrwacky avatar
mrwacky

oic, this is only in terraform-null-label, and never made it into terraform-label

mrwacky avatar
mrwacky

pretty sure I asked this same question here a couple months ago
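
For the record, terraform-null-label (and modules that have adopted its context.tf) expose this as the id_length_limit input (a sketch; version pin and other inputs elided):

module "label" {
  source          = "cloudposse/label/null"
  namespace       = "eg"
  name            = "app"
  id_length_limit = 32 # truncate the generated id to at most 32 characters
}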

2021-03-23

Mohammed Yahya avatar
Mohammed Yahya

Terraform Cloud now supports README.md and output.tf files, and their values are shown in the UI. Pretty neat

jose.amengual avatar
jose.amengual

wait, they discovered the use of README.md files?

1
jose.amengual avatar
jose.amengual

lol

Mohammed Yahya avatar
Mohammed Yahya

they did. I wish they allowed installing the aws cli there

jose.amengual avatar
jose.amengual

local_exec { curl......} no?

Mohammed Yahya avatar
Mohammed Yahya

not sure, we can use http provider instead.

Mohammed Yahya avatar
Mohammed Yahya
data "http" "example" {
  url = "<https://checkpoint-api.hashicorp.com/v1/check/terraform>"

  # Optional request headers
  request_headers = {
    Accept = "application/json"
  }
}
Matt Gowie avatar
Matt Gowie

Yeah… I saw this for the first time the other day. I laughed, because it’s such a useless feature for them to build instead of some of the other things they could have built. They don’t touch the product in what seems like 12+ months and then they build something that just shows the README. Yeah… not that important to me, to be honest.

1
Mohammed Yahya avatar
Mohammed Yahya

well

null_resource.aws (local-exec): Executing: ["/bin/sh" "-c" "aws"]
null_resource.aws (local-exec): usage:
null_resource.aws (local-exec): Note: AWS CLI version 2, the latest major version of the AWS CLI, is now stable and recommended for general use. For more information, see the AWS CLI version 2 installation instructions at: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

null_resource.aws (local-exec): usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
null_resource.aws (local-exec): To see help text, you can run:

null_resource.aws (local-exec):   aws help
null_resource.aws (local-exec):   aws <command> help
null_resource.aws (local-exec):   aws <command> <subcommand> help
null_resource.aws (local-exec): aws: error: the following arguments are required: command

null_resource.aws: Creation complete after 2s [id=7470069141691516634]
Mohammed Yahya avatar
Mohammed Yahya

looks like the AWS CLI is installed on the Terraform runners in TFC, I can rest in peace now

jose.amengual avatar
jose.amengual

lol

jose.amengual avatar
jose.amengual

run aws s3 ls….. and see if you see something

Mohammed Yahya avatar
Mohammed Yahya

I did

Mohammed Yahya avatar
Mohammed Yahya

hahaha

jose.amengual avatar
jose.amengual

I would imagine they have them locked down

jose.amengual avatar
jose.amengual

I guess it did not work?

Mohammed Yahya avatar
Mohammed Yahya
null_resource.aws_v2 (local-exec): Executing: ["/bin/sh" "-c" "aws s3 ls"]
null_resource.aws_v2 (local-exec): 2020-12-01 19:09:27 xxxxx-eu-central-1-tf-state
null_resource.aws_v2 (local-exec): 2020-10-15 14:05:37 airflow-xxxxx
jose.amengual avatar
jose.amengual

interesting

walicolc avatar
walicolc

Masha’allah, is there a blog about this @Mohammed Yahya?

1
Mohammed Yahya avatar
Mohammed Yahya
04:34:30 PM

this screenshot, I hope it could help

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Our Engineering team decided to share how they’re doing CI/CD for Terraform with Jenkins, hopefully this could be helpful for any one here: https://indeni.com/blog/terraform-goes-hand-in-hand-with-ci-cd/

Terraform goes hand in hand with CI/CD | Indeni

In today’s competitive market, our success depends on how quickly we can innovate and deliver value to customers. It’s all about speeding time to market […]

3
Mohammed Yahya avatar
Mohammed Yahya

Thanks for sharing, gives good vibes about DevSecOps

1
Jeff Behl avatar
Jeff Behl

hey all - first time caller, long time listener :slightly_smiling_face: FYI on something that was vexing me in the terraform-aws-elasticsearch module: being a smart user, I also use terraform-null-label, applying it like so to a security group I’ll be using for my elasticsearch domains:

resource "aws_security_group" "es_internal" {
  description = "internal traffic"
  name        = module.label.id
  tags        = module.label.tags
  vpc_id      = data.aws_vpc.main.id
}


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best bet is to dig through our modules for more examples

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in your case, you need to add attributes for disambiguation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

see the attributes argument
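
For instance, a hedged sketch of the attributes approach (the module version and the attribute value are illustrative):

module "es_internal_label" {
  source  = "cloudposse/label/null"
  version = "~> 0.24"

  # Inherit everything from the base label, then disambiguate.
  context    = module.label.context
  attributes = ["internal"] # appended to the ID, e.g. ns-stage-name-internal
}

resource "aws_security_group" "es_internal" {
  description = "internal traffic"
  name        = module.es_internal_label.id
  tags        = module.es_internal_label.tags
  vpc_id      = data.aws_vpc.main.id
}

That way the module (fed the base context) and your own security group no longer compute the same name.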

Jeff Behl avatar
Jeff Behl

figured that out afterwards. you guys think of everything, thanks

1
Jeff Behl avatar
Jeff Behl

in the terraform module, I use context = module.label.context. the end result? the terraform module tries to create a security group with the same name as the one I already created, so it errors out.

Jeff Behl avatar
Jeff Behl

on an AWS error of “security group with that name already created!”

Jeff Behl avatar
Jeff Behl

interesting side effect of using best practices

sheldonh avatar
sheldonh

Quick catch-up: any progress by folks on a better Azure Pipelines Terraform workflow? I can use multistage pipelines or potentially use another tool like Terraform Cloud, env0, scalyr, but Azure Pipelines will be the easiest with no approval required.

Any reason to push for the others right now, or is Azure Pipelines serving folks well right now with multistage YAML pipelines?

tim.davis.instinct avatar
tim.davis.instinct

Hey there! I am the DevOps Advocate with env0. We’d be glad to set up a demo for you to show you our value over ADO by itself.

sheldonh avatar
sheldonh

Very little time right now due to onboarding. Any comparison on site?

Ideally knowing modules + state management + open policy checks + PR integration with azure devops is key + reasonable free tier to get started and show value. Very small team and experience I have is with terraform cloud mostly. Azure devops is honestly most likely outcome but would love any quick value proposition to weigh more heavily when I get time.

tim.davis.instinct avatar
tim.davis.instinct

Totally understand on the no-time thing. Unfortunately, I don’t have anything direct for ADO + env0 built yet; it’s on my list. Time is not on my side either. And to be honest, we have GitHub and just finished GitLab, but don’t have full PR plan and service checks just yet with ADO. OPA for policy enforcement is absolutely there. We also have a strong free tier and a very lax POC policy right now, so no worries to anyone on testing it out, proving value. I can get with product and get a date on full ADO completion. In the meantime, I have these 3 videos (10.5 mins total) that illustrate our full use cases from end to end.

https://m.youtube.com/playlist?list=PLFFBGbxfEa7ZPUvNWIAvdLpAtXpK_fjSm

Use Case Videos

Share your videos with friends, family, and the world

sheldonh avatar
sheldonh

Thank you! Normally no time would be an excuse, but I onboarded this week and so it’s literally true. I’ll keep it in mind for sure. Ty!

tim.davis.instinct avatar
tim.davis.instinct

Congrats on the new gig! Best of luck with everything. Feel free to reach out if anything looks good, or if you ever think we could help with anything.

1
EvanG avatar

Back to this thread: https://sweetops.slack.com/archives/CB6GHNLG0/p1611948097136100. I figured out how to write an AWS policy that only requires MFA for human users in a group. It’s pretty cool. You have to enter your MFA code when you assume role. Not enforcing this is a HUGE security risk for cloud-based companies.

Question for AWS users. Has anyone figured out how to use cli MFA with terraform?

1
walicolc avatar
walicolc

Share

Question for AWS users. Has anyone figured out how to use cli MFA with terraform?

EvanG avatar
data "aws_caller_identity" "current" {
}
data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    effect = "Allow"
    actions = ["sts:AssumeRole"]
    resources = "${var.assume_role_arns}"
  }
}

resource "aws_iam_policy" "assume_role_group" {
  name = "${var.assume_role_policy_name}"
  policy = "${var.enable_mfa == "true" ? data.aws_iam_policy_document.require_mfa_for_assume_role.json : data.aws_iam_policy_document.assume_role_policy.json}"
}

resource "aws_iam_policy" "require_mfa" {
  count = "${var.enable_mfa == "true" ? 1 : 0}"
  name = "${var.require_mfa_policy_name}"
  policy = "${data.aws_iam_policy_document.mfa_policy.json}"
}

resource "aws_iam_group" "assume_role_group" {
  name = "${var.assume_role_group_name}"
}

resource "aws_iam_group_policy_attachment" "assume_role_attach" {
  group = "${aws_iam_group.assume_role_group.name}"
  policy_arn = "${aws_iam_policy.assume_role_group.arn}"
}

resource "aws_iam_group_policy_attachment" "mfa_requirement_attach" {
  count = "${var.enable_mfa == "true" ? 1 : 0}"
  group = "${aws_iam_group.assume_role_group.name}"
  policy_arn = "${aws_iam_policy.require_mfa.arn}"
}

data "aws_iam_policy_document" "require_mfa_for_assume_role" {
  statement {
    sid = "AllowAssumeRole"
    effect = "Allow"
    actions = ["sts:AssumeRole"]
    resources = "${var.assume_role_arns}"
    condition {
      test = "BoolIfExists"
      variable = "aws:MultiFactorAuthPresent"
      values = ["true"]
    }
  }
}

data "aws_iam_policy_document" "mfa_policy" {
  statement {
    sid = "AllowManageOwnVirtualMFADevice"
    effect = "Allow"
    actions = [
      "iam:CreateVirtualMFADevice",
      "iam:DeleteVirtualMFADevice"
    ]
    resources = [
      "arn:aws:iam::${data.aws_caller_identity.current.account_id}:mfa/$${aws:username}",
    ]
  }

  statement {
    sid = "AllowManageOwnUserMFA"
    effect = "Allow"
    actions = [
      "iam:DeactivateMFADevice",
      "iam:EnableMFADevice",
      "iam:GetUser",
      "iam:ListMFADevices",
      "iam:ResyncMFADevice"
    ]
    resources = [
      "arn:aws:iam::${data.aws_caller_identity.current.account_id}:user/$${aws:username}",
      "arn:aws:iam::${data.aws_caller_identity.current.account_id}:mfa/$${aws:username}"
    ]
  }

  statement {
    sid = "DenyAllExceptListedIfNoMFA"
    effect = "Deny"
    not_actions = [
      "iam:CreateVirtualMFADevice",
      "iam:EnableMFADevice",
      "iam:GetUser",
      "iam:ListMFADevices",
      "iam:ListVirtualMFADevices",
      "iam:ResyncMFADevice",
      "sts:GetSessionToken"
    ]
    resources = ["*"]
    condition {
      test     = "BoolIfExists"
      variable = "aws:MultiFactorAuthPresent"
      values   = ["false"]
    }
  }
}
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Quiz: If you use an aws_autoscaling_group with aws_launch_configuration, without specifying a VPC (that is, AWS is expected to use the default VPC), and without setting associate_public_ip_address, do the EC2 instances generated have a public IP address or not? (answer in the thread)

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Answer: Interestingly, associate_public_ip_address defaults to false in a launch config; however, Terraform ignores it and doesn’t actually set the value, so the instances get a public IP anyway.

https://github.com/hashicorp/terraform-provider-aws/issues/1484

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
If the instance is launched into a default subnet in a default VPC, the default is true. If the instance is launched into a nondefault subnet in a VPC, the default is false
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Intuitive eh?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even more

In the "Create Launch Configuration", select "Advanced Details" and look for the "IP Address Type" Section, you'll see:

IP Address Type

Only assign a public IP address to instances launched in the default VPC and subnet. (default)

Assign a public IP address to every instance.

Do not assign a public IP address to any instances. Note: this option only affects instances launched into an Amazon VPC
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Yeah, so Terraform isn’t assigning anything, and then AWS follows the default behavior.
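
A minimal sketch of sidestepping the surprise by pinning the value explicitly (resource names, AMI, and instance type are illustrative) - assuming your provider version actually sends an explicit false, which the linked issue suggests older versions did not:

resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.micro"

  # Set this explicitly; when it is omitted, AWS applies its own default
  # (public IP in a default subnet, none in a nondefault subnet).
  associate_public_ip_address = false

  lifecycle {
    create_before_destroy = true
  }
}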

Luis Masaya avatar
Luis Masaya

The default should be the same for both (false) as it seems to me to be the option with less security risk.

loren avatar

this is like the “default” in the console click-wizard where aws will gladly open your instance to tcp/22 to the world when you launch into a default subnet in the default vpc…

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It’s all so secure

loren avatar

the combination of tftest and pytest is really feeling so much nicer and more robust/extensible than terratest…. https://github.com/GoogleCloudPlatform/terraform-python-testing-helper

GoogleCloudPlatform/terraform-python-testing-helper

Simple Python test helper for Terraform. Contribute to GoogleCloudPlatform/terraform-python-testing-helper development by creating an account on GitHub.

loren avatar

i think i shared it before, but working with it more lately, and it’s worth sharing again

Matt Gowie avatar
Matt Gowie

Do you run a full plan / apply / test lifecycle with this or do you just use it to statically check your TF code?

loren avatar

we’re just starting down this road, but the idea is to run the apply using localstack in CI. or maybe using moto in server mode. moto does more services, but localstack is a little easier to get going

loren avatar

so yes, we’ll have “terraform test configs” that reference the module, create dependent resources, and pass in varying arguments to cover any module logic. and we’ll use pytest/tftest to invoke the apply/destroy for each config, hitting the localstack endpoints

Matt Gowie avatar
Matt Gowie

Ah wow you’re going for it.

loren avatar

we can do a bit more, reading back the outputs and assert‘ing they match what we expect. but personally i don’t find a ton of value in that, as i expect terraform’s acceptance tests to validate things like that reasonably well

loren avatar

one “extra” bit i find valuable is being able to actually invoke a lambda this way, to confirm that the packaging is valid. say the lambda has dependencies not present in the lambda runtime - then they need to be in the lambda package, and it’s easy to get this packaging wrong. so it is useful to actually invoke the lambda, test the imports (for example), and report all is well or fail

loren avatar

we may also have tftest run a plan -detailed-exitcode subsequent to the apply to detect persistent diffs and occasional issues with for_each logic. so it’ll actually be apply/plan/destroy…

Matt Gowie avatar
Matt Gowie

Cool. Extensive! You should write it all up. I’d read.

loren avatar

we’ve been doing similar for a while with terratest, but without localstack. which meant we had to manually run the tests for each pr, and hit real AWS endpoints (and costs) (and cleanup, people forget)

loren avatar

but testing in golang is nowhere near as nice as pytest, plus golang syntax is just not as clean as python, and seems harder for devops folks to pick up

Matt Gowie avatar
Matt Gowie

Yeah — I’ve done the using terratests against a test AWS Account thing and then wipe that account on a schedule. Not too bad on costs, but I get your point.

The golang vs python decision is definitely an org to org thing. Neither are perfect, but ya gotta pick one. I like that this is an option that I didn’t know about before though

loren avatar

yeah, i like localstack because i can’t (easily) test terraform modules with a real account with real credentials on all pull requests in public projects. and pretty much all our modules are public. so we needed to do something anyway. we did get it working with terratest, but in the process we found tftest and liked it so much that now we’re planning to switch everything to use it instead

Matt Gowie avatar
Matt Gowie

Good stuff

kgib avatar
cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

TED Vortex avatar
TED Vortex

anyone here used the cloudflare modules ? can give me some backend best practices ? cheers

2021-03-24

Matt Gowie avatar
Matt Gowie

Appreciate some :thumbsup:s on this AWS provider aws_cognito_user resource addition issue: https://github.com/hashicorp/terraform-provider-aws/issues/4542

Add aws_cognito_user resource · Issue #4542 · hashicorp/terraform-provider-aws

Description Currently the aws_cognito has an aws_cognito_user_group resource which represents a group of users. In the AWS IDP console there is an option to create a user, and assign it to groups. …

3
Mohammed Yahya avatar
Mohammed Yahya
tftest (attachment image)

Simple Terraform test helper

loren avatar

yep, that’s the one i’m talking about here, https://sweetops.slack.com/archives/CB6GHNLG0/p1616536971126300

the combination of tftest and pytest is really feeling so much nicer and more robust/extensible than terratest…. https://github.com/GoogleCloudPlatform/terraform-python-testing-helper

Mohammed Yahya avatar
Mohammed Yahya

nice

Mohammed Yahya avatar
Mohammed Yahya

I saw it somewhere and forgot where,

walicolc avatar
walicolc

You need this tool like yesterday. Absolutely what you need when dealing with imports on tf

https://github.com/GoogleCloudPlatform/terraformer

GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

1
1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

We’re running a survey on the difference between identifying cloud security issues in production, vs in code (“shift left”). It’s a short one, meant to get a high level of understanding of people’s sentiments: https://docs.google.com/forms/d/e/1FAIpQLSc7izchAxnCqkQbdwIBETYX51hGmX_GMdqO9ZnEYSx34V_20Q/viewform?usp=sf_link

We’re giving away free Visa gift cards to help incentivize people for their time. However, I can also share here, that we will be sharing the raw results (minus PII, etc.), and then the conclusions, for the benefit of everyone here who is thinking about “shift left”.

Any comments on the survey are also very welcome.

1
uselessuseofcat avatar
uselessuseofcat

What is your way of managing Security Groups through Terraform? I would like to create a module where I specify the list of ports and allowed CIDR block multiple times. I can do for_each but that can only be done for one thing, for example for ports; I do not know how to apply for_each for both ports and CIDR block? Thanks!

uselessuseofcat avatar
uselessuseofcat

hmm, it looks like there are each.key and each.value

1
managedkaos avatar
managedkaos

you might need to do some sort of map and then loop over the map in your for_each.

For example your dev SG might have one CIDR and set of ports and your prod SG might have a different CIDR and ports. Put them into a map (or separate maps) and pull the values out that way.

I’m waving my hands at the moment and don’t have an example, but that’s how I would approach it. Will share soon… (see the sketch below)
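
In that spirit, a sketch (rule names and the local structure are illustrative): flatten a map of rules into one element per port/CIDR pair, then for_each over the result.

locals {
  rules = {
    web = { ports = [80, 443], cidrs = ["10.0.0.0/16"] }
    ssh = { ports = [22], cidrs = ["10.1.0.0/24", "10.2.0.0/24"] }
  }

  # One element per (rule, port, cidr) combination.
  rule_pairs = flatten([
    for name, rule in local.rules : [
      for pair in setproduct(rule.ports, rule.cidrs) : {
        key  = "${name}-${pair[0]}-${pair[1]}"
        port = pair[0]
        cidr = pair[1]
      }
    ]
  ])
}

resource "aws_security_group" "this" {
  name   = "example"
  vpc_id = var.vpc_id
}

resource "aws_security_group_rule" "ingress" {
  for_each = { for r in local.rule_pairs : r.key => r }

  type              = "ingress"
  security_group_id = aws_security_group.this.id
  protocol          = "tcp"
  from_port         = each.value.port
  to_port           = each.value.port
  cidr_blocks       = [each.value.cidr]
}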

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-security-group

Terraform module to provision AWS Security Group. Contribute to cloudposse/terraform-aws-security-group development by creating an account on GitHub.

1
1
uselessuseofcat avatar
uselessuseofcat

Great! Thanks a lot!

Release notes from terraform avatar
Release notes from terraform
07:14:23 PM

v0.14.9 0.14.9 (March 24, 2021) BUG FIXES: backend/remote: Fix error when migrating existing state to a new workspace on Terraform Cloud/Enterprise. (#28093)

backend/remote: Fix new workspace state migration by alisdair · Pull Request #28093 · hashicorp/terraform

When migrating state to a new workspace, the version check would error due to a 404 error on fetching the workspace record. This would result in failed state migration. Instead we should look speci…

2021-03-25

sohel2020 avatar
sohel2020

which one is the best practice (tf version 0.12 / 0.13), and why?

  1. name = format("%s-something", var.my_var)
  2. name = "${var.my_var}-something"
6
2
Alex Jurkiewicz avatar
Alex Jurkiewicz
  1. Use format only if you need to use an exotic format option
2
this1
sohel2020 avatar
sohel2020

@Alex Jurkiewicz any example?

aaratn avatar

"${[var.my](http://var.my)_var}-something" - Keep it simple !

Mohammed Yahya avatar
Mohammed Yahya

it depends; use either of them while keeping consistent throughout your whole modules and templates

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m the outlier. I prefer format because the format template can be a variable itself.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Writing open source modules means everyone has an opinion on the format. This is why null-label is so flexible.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Interpolation forces one format.
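
As a sketch of that point (variable names are illustrative), the format spec itself can be configurable, which interpolation can’t express:

variable "name_format" {
  type    = string
  default = "%s-something"
}

locals {
  # Callers can override name_format without touching this expression.
  name = format(var.name_format, var.my_var)
}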

Jeff Behl avatar
Jeff Behl

readability

mikesew avatar
mikesew

This seems similar to an echo vs printf argument? Great stuff, I haven’t even thought of the debate

OliverS avatar
OliverS

Some of you might be interested in terraform-aws-multi-stack-backends - this is the first release: https://registry.terraform.io/modules/schollii/multi-stack-backends/aws. There are diagrams in the examples/simple folder. Any feedback welcome!

Alex Jurkiewicz avatar
Alex Jurkiewicz

tl;dr?

OliverS avatar
OliverS

@Alex Jurkiewicz @Erik Osterman (Cloud Posse) It makes it easy to correlate terraform states that relate to the same “stack”.

Eg if you have a stack that consists of a state for VPC/network, another for EKS, a third state for resources specific to a sandbox deployment of microservices in that cluster (eg RDS instances used by the micro-services), and a fourth state for AWS resources specific to a staging deployment in that cluster, then you will see all 4 backends in that module’s tfvars file.

The module creates a bucket and stores all states mentioned in the tfvars file there. You can of course have multiple buckets if you want (say one per stack).

So the other thing this module does is enable you to never again have to worry about creating backend.tf files; it creates them for you.

melissa Jenner avatar
melissa Jenner

Does anyone know a Kinesis Firehose Terraform Module that sends Data Streams to Redshift?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Hey guys, would appreciate your collective minds for some feedback: Our IaC security tool (Cloudrail) now has a new capability called “Mandate On New Resources Only”. If this is set on a rule, Cloudrail will only flag resources that are set to be created under the TF plan.

This brought up an interesting philosophical question: If a developer is adding new TF code that uses an existing module, is it really a new resource? Technically, it is - actually several resources in many cases, generated by the module. But in reality, it’s just the same module again, with different parameters.

Some of our devs said “well, technically, yes, but it’s the same module, so from an enforcement perspective, it’s not a new resource, it’s just like other uses of the same module”.

I’m adding examples in a thread on this message. Appreciate your guys’ and gals’ thoughts on this matter as we think through it.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

So, for example, adding a new resource like so is clearly a new resource:

resource "aws_vpc" "myvpc" { ... }
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

But, adding some new code that looks like this:

module "policies-id-rules-delete" {
  source                  = "../../modules/api-gw/method"
...
}

Is technically a new resource, but using a module that’s already used elsewhere before.

Alex Jurkiewicz avatar
Alex Jurkiewicz

The exception is a little too complex imo. I’m happy for the developer to have to fix the module and then use a newer version for their new addition

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

What if the developer is not an infrastructure dev, just a regular software dev who copy-pasted some code? He doesn’t know how to fix a module.

Alex Jurkiewicz avatar
Alex Jurkiewicz

They complain to infra dev, who can exclude the resource from enforcement if they want to grandfather in one last usage of the old version

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Makes sense.

loren avatar

heh. and you have for_each on a module, and the dev just adds a new item to the list of keys…

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Oh yeah, what do you do with that?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

New resource @Alex Jurkiewicz?

loren avatar

no new tf code at all, but you have new resources! potentially with different inputs that violate policies

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Let’s say we can identify if the new resources have the same violations or new violations.

loren avatar

i’d call it a new resource if the plan says so

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

What Loren said. Adding more logic will feel like magic, and make the system less understandable

this1
loren avatar

i think i’m in agreement with @Alex Jurkiewicz, basically… this is static analysis, same as code style enforcement. CI says you’re wrong, you don’t fight it, you go figure out how to fix it

1
Tomek avatar

i have a module that creates an ECS task that is used with a for_each. Is there a way to use the same execution role across each invocation of the module? (Only way I can think of is creating the role outside the module and passing the ARN in as a var.)

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It depends on how creative you want to get. You can’t use a data source for this, because it will fail if it can’t find the role. So you could use some bash scripting, the AWS CLI, etc.

But that’s quite a mess.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

If you look through this ticket, you’ll see some terrible examples of how to achieve this: https://github.com/hashicorp/terraform/issues/16380

Alex Jurkiewicz avatar
Alex Jurkiewicz
  1. Inject the execution role from outside
  2. Split your for_each into the first element and the rest. Create the first separately, and then add it as a dependency for the rest, which use your for_each loop

Approach 1 sounds waaaay better
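
A sketch of approach 1 (the module path and var.services are hypothetical):

# Create the execution role once, outside the module.
data "aws_iam_policy_document" "ecs_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_execution" {
  name               = "ecs-execution"
  assume_role_policy = data.aws_iam_policy_document.ecs_assume.json
}

# Pass the same ARN into every instance of the task module.
module "task" {
  source   = "./modules/ecs-task" # hypothetical module
  for_each = var.services

  name               = each.key
  execution_role_arn = aws_iam_role.ecs_execution.arn
}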

2021-03-26

joshmyers avatar
joshmyers

I have a list of maps (nested) - can these be collapsed down?

joshmyers avatar
joshmyers
  + badgers = [
      + {
          + "dev" = {
              + "us-east-1" = {
                  + "profile-service" = {}
                }
            }
        },
      + {
          + "dev" = {
              + "us-west-2" = {
                  + "profile-service" = {}
                }
            }
        },
      + {
          + "qa" = {
              + "us-east-1" = {
                  + "profile-service" = {}
                }
            }
        },
      + {
          + "qa" = {
              + "us-west-2" = {
                  + "profile-service" = {}
                }
            }
        },
      + {
          + "prod" = {
              + "us-east-1" = {
                  + "profile-service" = {}
                }
            }
        },
      + {
          + "prod" = {
              + "us-west-2" = {
                  + "profile-service" = {}
                }
            }
        },
      + {
          + "dev" = {
              + "us-east-1" = {
                  + "account-service" = {}
                }
            }
        },
      + {
          + "dev" = {
              + "us-west-2" = {
                  + "account-service" = {}
                }
            }
        },
      + {
          + "qa" = {
              + "us-east-1" = {
                  + "account-service" = {}
                }
            }
        },
      + {
          + "qa" = {
              + "us-west-2" = {
                  + "account-service" = {}
                }
            }
        },
      + {
          + "dev" = {
              + "us-east-1" = {
                  + "compliance-service" = {}
                }
            }
        },
      + {
          + "dev" = {
              + "eu-west-1" = {
                  + "compliance-service" = {}
                }
            }
        },
      + {
          + "qa" = {
              + "us-east-1" = {
                  + "compliance-service" = {}
                }
            }
        },
      + {
          + "qa" = {
              + "eu-west-1" = {
                  + "compliance-service" = {}
                }
            }
        },
    ]
joshmyers avatar
joshmyers

can that be collapsed down into

joshmyers avatar
joshmyers

e.g.

joshmyers avatar
joshmyers
tomap({
  "dev" = tomap({
    "us-east-1" = tomap({
      "account-service" = {}
      "compliance-service" = {}
    })
  })
  "qa" = tomap({
    "us-east-1" = tomap({
      "account-service" = {}
      "compliance-service" = {}
    })
  })
})
joshmyers avatar
joshmyers

tl;dr how to merge a list of maps

joshmyers avatar
joshmyers

I wonder if I need a deep merge…
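
For a structure whose depth is fixed and known, like this one, a native-HCL deep merge is possible with nested for expressions - a sketch, assuming local.badgers is the list of maps shown above (env, then region, then service), and allowing that the lookup() calls may need type massaging:

locals {
  envs = distinct(flatten([for m in local.badgers : keys(m)]))

  merged_badgers = {
    for env in local.envs : env => {
      for region in distinct(flatten([
        for m in local.badgers : keys(lookup(m, env, {}))
      ])) : region => merge([
        for m in local.badgers : lookup(lookup(m, env, {}), region, {})
      ]...)
    }
  }
}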

joshmyers avatar
joshmyers

Hmm, get Error: json: cannot unmarshal array into Go value of type map[string]interface {} using https://github.com/cloudposse/terraform-provider-utils

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You need to call jsonencode on the maps

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we operate on strings due to terraform’s handling of objects

joshmyers avatar
joshmyers

Aye, tried that, still the same. Resorted to using a JSON file as per the example, still no dice

joshmyers avatar
joshmyers
[
    {
        "dev": {
            "us-east-1": {
                "profile-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "dev": {
            "us-west-2": {
                "profile-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "qa": {
            "us-east-1": {
                "profile-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "qa": {
            "us-west-2": {
                "profile-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "prod": {
            "us-east-1": {
                "profile-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "prod": {
            "us-west-2": {
                "profile-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "dev": {
            "us-east-1": {
                "account-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "dev": {
            "us-west-2": {
                "account-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "qa": {
            "us-east-1": {
                "account-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "qa": {
            "us-west-2": {
                "account-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "dev": {
            "us-east-1": {
                "compliance-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "dev": {
            "eu-west-1": {
                "compliance-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "qa": {
            "us-east-1": {
                "compliance-service": {
                  "badgers": "foo"
                }
            }
        }
    },
    {
        "qa": {
            "eu-west-1": {
                "compliance-service": {
                  "badgers": "foo"
                }
            }
        }
    }
]
joshmyers avatar
joshmyers

Still get Error: json: cannot unmarshal array into Go value of type map[string]interface {}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you share the HCL?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt @Andriy Knysh (Cloud Posse)

joshmyers avatar
joshmyers

Yup, gimme a mo, let me create a gist

joshmyers avatar
joshmyers

merge(local.badgers...) isn’t deep merging, so I end up with

joshmyers avatar
joshmyers
merged_badgers = tomap({
  "dev" = tomap({
    "eu-west-1" = tomap({
      "compliance-service" = {}
    })
  })
  "prod" = tomap({
    "us-west-2" = tomap({
      "profile-service" = {}
    })
  })
  "qa" = tomap({
    "eu-west-1" = tomap({
      "compliance-service" = {}
    })
  })
})
joshmyers avatar
joshmyers

Ah, no - thinking about it, “If more than one given map or object defines the same key or attribute, then the one that is later in the argument sequence takes precedence” is what’s happening

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this provider https://github.com/cloudposse/terraform-provider-utils can deep-merge list of maps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
The `deep_merge_yaml` data source accepts a list of YAML strings as input and deep merges into a single YAML string as output
joshmyers avatar
joshmyers

Thanks @Andriy Knysh (Cloud Posse) yeah, killer feature, nice work guys!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it accepts a list of YAML strings (not terraform objects/maps) b/c of TF provider limitations

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

same with the outputs - it’s a string of merged contents

joshmyers avatar
joshmyers

I’m trying to use deep_merge_json @Andriy Knysh (Cloud Posse) see https://gist.github.com/joshmyers/7e96e291a920fac77f9a7314bc3397ba

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils


joshmyers avatar
joshmyers

Yeah, am using that data source in the above gist, basic example, only difference is the JSON

joshmyers avatar
joshmyers

Getting Error: json: cannot unmarshal array into Go value of type map[string]interface {}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you provide a list of JSON-encoded strings (possibly read from files), and then can convert the result from string to JSON

locals {
  json_data_1 = file("${path.module}/json1.json")
  json_data_2 = file("${path.module}/json2.json")
}

data "utils_deep_merge_json" "example" {
  input = [
    local.json_data_1,
    local.json_data_2
  ]
}

output "deep_merge_output" {
  value = jsondecode(data.utils_deep_merge_json.example.output)
}
joshmyers avatar
joshmyers

Yes, am trying to read a JSON file, other than that, same as the example for JSON

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, so this file(“${path.module}/badgers.json”) is already an array

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and here you put it into another array

input = [
    local.json_data_2
  ]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

your json should be a map, not an array

joshmyers avatar
joshmyers

OK, if I use input = local.json_data_2

joshmyers avatar
joshmyers
❯ terraform plan

Error: Incorrect attribute value type

  on main.tf line 96, in data "utils_deep_merge_json" "example":
  96:   input = local.json_data_2
    |----------------
    | local.json_data_2 is "[\n    {\n        \"dev\": {\n            \"us-east-1\": {\n                \"profile-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"dev\": {\n            \"us-west-2\": {\n                \"profile-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"qa\": {\n            \"us-east-1\": {\n                \"profile-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"qa\": {\n            \"us-west-2\": {\n                \"profile-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"prod\": {\n            \"us-east-1\": {\n                \"profile-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"prod\": {\n            \"us-west-2\": {\n                \"profile-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"dev\": {\n            \"us-east-1\": {\n                \"account-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"dev\": {\n            \"us-west-2\": {\n                \"account-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"qa\": {\n            \"us-east-1\": {\n                \"account-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"qa\": {\n            \"us-west-2\": {\n                \"account-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"dev\": {\n            \"us-east-1\": {\n                \"compliance-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"dev\": {\n            \"eu-west-1\": {\n                \"compliance-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"qa\": {\n            \"us-east-1\": {\n                \"compliance-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    },\n    {\n        \"qa\": {\n            \"eu-west-1\": {\n                \"compliance-service\": {\n                  \"badgers\": \"foo\"\n                }\n            }\n        }\n    }\n]\n"

Inappropriate value for attribute "input": list of string required.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe try this

{
    {
        "dev": {
            "us-east-1": {
                "profile-service": {
                    "badgers": "foo"
                }
            }
        }
    },

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that the provider was created for specific purposes, it’s not a universal thing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
input = [
    local.json_data_2
  ]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

input should be an array of strings

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each string should be json-encoded map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the provider deep-merges maps, not arrays

joshmyers avatar
joshmyers

OK, will have a play around with that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try this

{
    {
        "dev": {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note the top-level { to make it a map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will def work if you put all of these parts

{
        "qa": {
            "eu-west-1": {
                "compliance-service": {
                  "badgers": "foo"
                }
            }
        }
    }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

into separate files

joshmyers avatar
joshmyers

OK, I’ll see if I can break it down and pass each in

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but you can try

{
    {
        "dev": {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and see what happens (it will deep-merge that map, I’m just not sure what the result will be)

joshmyers avatar
joshmyers

Nope Error: invalid character '{' looking for beginning of object key string - will try breaking it down and passing in each

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try

joshmyers avatar
joshmyers

I don’t control the JSON either, it comes back from Terraform as a list of maps, so will try passing in each

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
data = {
    {
        "dev": {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, in TF you can loop thru the list of maps, convert each one to string, and add it to an array to provide as input to the provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that will work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(maybe we can improve the provider to accept one input with a JSON/YAML-encoded string of a list of maps, instead of giving it a list of encoded strings)

joshmyers avatar
joshmyers

Aye, realise this may not be your use case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


in TF you can loop thru the list of maps, jsonencode each one to string, and add it to an array to provide as input to the provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try that ^, will work
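
Concretely, that loop might look like this - a sketch, assuming the utils provider is configured and badgers.json holds the JSON array shown earlier:

locals {
  # Decode the JSON array, then re-encode each element so the provider
  # receives what it expects: a list of JSON-encoded map strings.
  badger_maps = [
    for m in jsondecode(file("${path.module}/badgers.json")) : jsonencode(m)
  ]
}

data "utils_deep_merge_json" "merged" {
  input = local.badger_maps
}

output "merged" {
  value = jsondecode(data.utils_deep_merge_json.merged.output)
}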

1
joshmyers avatar
joshmyers

So

joshmyers avatar
joshmyers
data "utils_deep_merge_json" "example" {
  input = [
    "{\"qa\":{\"us-east-1\":{\"profile-service\":{}}}}",
    "{\"qa\":{\"us-east-1\":{\"account-service\":{}}}}"
  ]
}
joshmyers avatar
joshmyers

Should that work?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s cryptic… but it should in principle

joshmyers avatar
joshmyers

It is a jsonencoded map

joshmyers avatar
joshmyers

Error: json: unsupported type: map[interface {}]interface {}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you run the example first to confirm it’s working for you?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then add your files to the example to confirm your files are ok

joshmyers avatar
joshmyers

Yeah, good point

joshmyers avatar
joshmyers

So, I’ve cloned https://github.com/cloudposse/terraform-provider-utils

joshmyers avatar
joshmyers

cd examples/data-sources/utils_deep_merge_json/

joshmyers avatar
joshmyers

terraform init

joshmyers avatar
joshmyers

terraform plan

joshmyers avatar
joshmyers

Error: json: unsupported type: map[interface {}]interface {}

joshmyers avatar
joshmyers

Terraform 0.14.6 …

joshmyers avatar
joshmyers

So example isn’t working for me either…

joshmyers avatar
joshmyers

If I drop down to 0.2.1 it works

joshmyers avatar
joshmyers

0.3.0 / 0.3.1 aren’t working, not sure what it could be locally?

joshmyers avatar
joshmyers

Yup, my use case works with 0.2.1 too

joshmyers avatar
joshmyers
Changes to Outputs:
  + deep_merge_output = {
      + qa = {
          + us-east-1 = {
              + account-service = {}
              + profile-service = {}
            }
        }
    }
joshmyers avatar
joshmyers
deep_merge_output = {
  "dev" = {
    "eu-west-1" = {
      "compliance-service" = {}
    }
    "us-east-1" = {
      "account-service" = {}
      "compliance-service" = {}
      "profile-service" = {}
    }
    "us-west-2" = {
      "account-service" = {}
      "profile-service" = {}
    }
  }
  "prod" = {
    "us-east-1" = {
      "profile-service" = {}
    }
    "us-west-2" = {
      "profile-service" = {}
    }
  }
  "qa" = {
    "eu-west-1" = {
      "compliance-service" = {}
    }
    "us-east-1" = {
      "account-service" = {}
      "compliance-service" = {}
      "profile-service" = {}
    }
    "us-west-2" = {
      "account-service" = {}
      "profile-service" = {}
    }
  }
}
joshmyers avatar
joshmyers

Working with 0.2.1 - thanks guys!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

0.2.1 is the version of what?

joshmyers avatar
joshmyers
cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils

joshmyers avatar
joshmyers

The example doesn’t work for me locally beyond version 0.2.1

1
Bart Coddens avatar
Bart Coddens

terraform confuses me a bit again

Bart Coddens avatar
Bart Coddens
I query the instance id with a datasource like this:

data "aws_instance" "instancetoapplyto" {
  filter {
    name   = "tag:Name"
    values = ["${var.instancename}"]
  }
}
Bart Coddens avatar
Bart Coddens

and then I use it in a cloudwatch alarm:

Bart Coddens avatar
Bart Coddens
    InstanceId = data.aws_instance.instancetoapplyto.id
Bart Coddens avatar
Bart Coddens

this works but I get a warning like this:

Bart Coddens avatar
Bart Coddens
Warning: Interpolation-only expressions are deprecated

  on ../../../modules/cloudwatch/alarms.tf line 8, in data "aws_instance" "instancetoapplyto":
   8:     values = ["${var.instancename}"]
Bart Coddens avatar
Bart Coddens

I know about the interpolation-only expression warning, but here it confuses me

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in TF versions after 0.11, you don’t need to use string interpolation for a bare variable reference

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try

values = [var.instancename]
Bart Coddens avatar
Bart Coddens

that works indeed ! Thanks Andriy !

Matt Gowie avatar
Matt Gowie

Would appreciate some :thumbsup:s on this GH provider issue: https://github.com/integrations/terraform-provider-github/issues/612

4
Matt Gowie avatar
Matt Gowie

Being able to enable security alerting is fairly useless if the majority of the team can’t see it and I need to manually click into a client’s 40 or so repos to enable them to be able to see it.

Marcin Brański avatar
Marcin Brański

I got a map providing IAM group name and policies that should be attached to it.

  groups = {
    Diagnose: [
      "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
      "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
      ...
     ], ...

I want the policies to be attached to groups with aws_iam_group_policy_attachment. But because of the group format, it requires double iteration to enumerate all policies attached to a group.

locals {
    groups_helper = chunklist(flatten([for key in keys(local.groups): setproduct([key], local.groups[key])]), 2)
}

resource "aws_iam_group_policy_attachment" "this" {
    for_each = {
      for group in local.groups_helper : "${group[0]}.${group[1]}" => group
  }

  group = each.value[0]
  policy_arn = each.value[1]
}

I did it with the :point_up: code, but I think it should be much simpler than my hacky groups_helper.

loren avatar

this is how i would do it:

locals {
  groups_policies = flatten([for group, policies in local.groups : [
    for policy in policies : {
      name   = group
      policy = policy
    }
  ]])
}

resource "aws_iam_group_policy_attachment" "this" {
  # Iterate the flattened local defined above.
  for_each = { for group in local.groups_policies : "${group.name}.${group.policy}" => group }

  group      = each.value.name
  policy_arn = each.value.policy
}
loren avatar

fairly similar in the end, but i feel like the data model is more explicit

Marcin Brański avatar
Marcin Brański

Yeah. It’s also a viable option, maybe a little bit better because you have a list of dictionaries instead of a list of tuples/lists, so it’s more explicit.

Such a map should be so easy to iterate over

I’d imagine there could be some sugar syntax that I’m not aware of. Something like this

resource "aws_iam_group_policy_attachment" "this" {
  for_each = local.groups

  group = each.value[0]
  policy_arn = [for value in each.value[1]]
}
loren avatar

fwiw, if exclusive management of policy attachments is something you’re looking for, there is this feature request… it was recently implemented for roles and i think makes things much easier… https://github.com/hashicorp/terraform-provider-aws/issues/17511

Exclusive management of inline & managed policies for IAM groups · Issue #17511 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

Marcin Brański avatar
Marcin Brański

Exactly man! This is what I would like to see

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to get my head around how much of a cluster <insert swear word here> the upgrade from 0.13.4 to 0.14.x is

mainly from a CI perspective (we use Atlantis) and a pre-commit / dev perspective

pjaudiomv avatar
pjaudiomv

0.13.x to 0.14.x should give you no <expletive> issues

pjaudiomv avatar
pjaudiomv

I’ve had no issues

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

what about the lock files?

pjaudiomv avatar
pjaudiomv

Ok ha was just about to respond to that

pjaudiomv avatar
pjaudiomv

That’s the question, to check them in to source control or not

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

how can you do it without?

pjaudiomv avatar
pjaudiomv

A bunch of repos I do, as I want them to be locked to that version. However some I don’t

pjaudiomv avatar
pjaudiomv

The pipeline will generate it each time on init

pjaudiomv avatar
pjaudiomv

so if you check it in to source control, the only way you can bump a provider version is by doing an init locally and then recommitting the lock file

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yeh thats a little rough

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

probably needs a pre-commit hook

loren avatar

i just gitignore the lock file, and have a pre-terraform step for CI that initializes the terraform plugin directory with versions that i manage using terraform-bundle

pjaudiomv avatar
pjaudiomv

yup it depends on your ci and workflow. in general we dont allow people to run terraform locally so its a little easier

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

we do the same

1
pjaudiomv avatar
pjaudiomv

yup i do what loren said on many repos

loren avatar
hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

interesting never seen this

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

any good documentation on using this?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

@loren maybe?

loren avatar

i am not good documentation

loren avatar

the readme is actually pretty great

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i get you add a file and bundle it but how does that help?

loren avatar

the file pins the versions you want. you create the bundle out-of-band as a zip archive, host it somewhere your CI can reach it. then curl it, unzip it, copy the plugin directory to the value of the env TF_PLUGIN_CACHE_DIR

loren avatar

now you have all your plugins locally, in a place terraform will look for them, cuz that’s how TF_PLUGIN_CACHE_DIR works
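
For reference, the bundle config file is itself HCL - roughly like this sketch (versions are illustrative; check the terraform-bundle README for the exact schema for your Terraform version):

terraform {
  # Exact version of Terraform to include in the bundle.
  version = "0.14.9"
}

providers {
  # Include the newest aws provider in the 3.x series.
  aws = {
    versions = ["~> 3.0"]
  }
}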

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

do you do this with all your tf root repos?

loren avatar

that’s the only place i do it, yes

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so you check in the bundle?

loren avatar

no

loren avatar

the bundle file, yes, not the zip

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so atlantis creates the bundle before being executed?

loren avatar

atlantis downloads the bundle and extracts it

loren avatar

or whatever your CI is, i use codebuild

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

what creates the bundle and where do you host it?

loren avatar

that is done out-of-band, when we’re ready to update the tf and provider versions

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to work out the path here

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

shall we take to DM or do it here?

loren avatar
loren
09:05:40 PM

¯\_(ツ)_/¯

loren avatar

it’s your thread

TED Vortex avatar
TED Vortex

anyone happen to have a tutorial on https://github.com/cloudposse/terraform-aws-tfstate-backend ? terraform newbie here, could use some hints and best practices for implementing it here: https://github.com/0-vortex/cloudflare-terraform-infra

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

0-vortex/cloudflare-terraform-infra

Terraform infrastructure templates for our CloudFlare instances - 0-vortex/cloudflare-terraform-infra

2021-03-27

sheldonh avatar
sheldonh

@Erik Osterman (Cloud Posse) do you all still like the yaml_config approach? I’m building out some 2 environment pipelines and started with this. It’s elegant, but verbose.

Would you recommend using a tfvars as an argument for an environment instead of the yaml config in certain cases? I like the yaml config, but just trying to think through what I can do to avoid adding abstractions on abstractions when possible, and make it easier to collaborate.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The yaml config approach is central to what we do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our stack config is incredibly DRY - more DRY than anything feasible in terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We support imports, and deep merging

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If something is too verbose it’s probably your schema that is wrong :-)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However you will notice that few of our modules themselves use the yaml - they all use native HCL variables

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But our root modules are what use YAML

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’d be happy to walk you through the approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also our docs.CloudPosse.com is updated now with some more details

sweetops1
Marcin Brański avatar
Marcin Brański

@Erik Osterman (Cloud Posse) Is atmos the preferred way now to start terraforming infrastructure? I have a new client that is starting from scratch - AWS, K8S, helm, istio - so it seems like atmos would be a perfect match for them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, this is what we are using everywhere now. It makes it easy to separate config from business logic. Happy to give you a walk through.

sheldonh avatar
sheldonh

Would love to see this stuff or participate in a call or the weekly session to cover a deeper dive.

I’m setting stuff up brand new right now and looking to make this as flexible as I can for duplicating environments based on some input, but struggling a little with not relying on the remote terraform backend for state and so on. I like the yaml config approach in concept, but the other fixtures.eu-west-1.tfvars feels easier to understand for a cli driven approach.

sheldonh avatar
sheldonh

definitely wanting to be able to deal with the backend + plans not being so brittle, so I see some of the initial value in the yaml merging - just probably haven’t figured out the full potential yet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I like the yaml config approach in concept, but the other fixtures.eu-west-1.tfvars feels easier to understand for a cli driven approach.
It’s only easier since it’s familiar

sheldonh avatar
sheldonh

I’m getting there. Not sure why I’m nesting vars under components, but I’m getting close.

sheldonh avatar
sheldonh

[Separate Question] I also have to initialize my backend in a setup step ahead of time instead of being more dynamic, since the terraform-tfstate-backend module generates the backend tf. Is there any “pipeline” type setup you’d recommend to take advantage of your tf-state locking module, but not require the backend.tf and other manual setup steps to be completed first?

I’m used to the backend being terraform cloud, which made it basically stupid simple with no setup requirement. Would like to approach something like that, but with S3.

No rush, just trying to think through a “non-terraform-cloud” oriented remote backend setup with the same ease of setup.

1
Matt Gowie avatar
Matt Gowie

I usually include a “bootstrap” root module in all client projects which invokes tf-state-backend either once all together OR once for each root module in the project. Then you only need to do it once; it templates out the backend.tf for each root module and you don’t worry about it going forward. I also use this to templatize my versions.tf files so they’re all consistent across all root modules. Can provide a dummy example if that is useful (a rough sketch follows below).
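
Such a bootstrap root module might look roughly like this (a sketch assuming the module’s backend-file templating inputs; the names and the ../vpc path are hypothetical):

module "tfstate_backend" {
  source = "cloudposse/tfstate-backend/aws"

  namespace = "acme"
  stage     = "dev"
  name      = "terraform"

  # template out backend.tf into the root module that will consume it
  terraform_backend_config_file_path = "../vpc"
  terraform_backend_config_file_name = "backend.tf"
}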

hkaya avatar

I’d love to see your dummy example

sheldonh avatar
sheldonh

Me too. I need to start making this a bit more flexible to repeat in multiple environments. I want to figure out how to do this, as the backend with terraform cloud was fire and forget. With azure pipelines needing to set up the backend in each region, I need this as stupid simple as possible, even if it’s a backend pipeline that runs all the state file bucket setup stuff (assuming I set up one bucket per plan).

This needs to be easy to work with and allow me to tear down and rebuild plans without destroying state buckets ideally.

sheldonh avatar
sheldonh

@Erik Osterman (Cloud Posse) so I want to avoid having to create the backend.tf file manually using the process described in the tfstate-backend module.

https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/remote-state

Is this an example I could use if I’m already using yaml config to define my backend and initialize all of these at once for the region? I’m a bit unclear. I’m fine with creating backend.tf if I have to, but was hoping to use the yaml config to generate all my backend buckets more dynamically and also prevent the stacks from tearing down the backend buckets when running terraform destroy

cloudposse/terraform-yaml-stack-config

Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

sheldonh avatar
sheldonh

I think I’m good now. I was trying to set up a separate bucket and Dynamo table per module or plan. I found some prior conversation about this that talked about just creating a single state bucket and using prefixes. That’s a lot easier to handle and should eliminate the backend config worries. I’m assuming the locking is per prefix not global for all state files in a bucket right?
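
(For what it’s worth: the S3 backend’s DynamoDB lock ID is derived from the bucket and key, so locking is per state file rather than global to the bucket. A sketch of the single-bucket layout, names hypothetical:)

terraform {
  backend "s3" {
    bucket         = "acme-tfstate"           # one shared bucket
    key            = "vpc/terraform.tfstate"  # unique key per root module
    region         = "us-east-1"
    dynamodb_table = "acme-tfstate-lock"      # lock entries are keyed by bucket/key
    encrypt        = true
  }
}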

sheldonh avatar
sheldonh

Ok, could use help on one last thing. I see patterns for tfstate used in places in the cloudposse modules for locals, but I’m not sure how to use them. I want to make my pipeline use a unique state file in the bucket. Am I supposed to be able to make the backend file name in the bucket a variable, or provide it via an environment variable?

sheldonh avatar
sheldonh

Bump…. I’m using yaml config. I’m using the stack sorta approach with an input variable of

module "yaml_config" {
  source                     = "cloudposse/config/yaml"
  map_config_local_base_path = "../config"
  map_config_paths = [
    "default.config.yml"
    ,var.config_import
  ]
  # context = module.label.context
}

Now I have one last piece I don’t have confidence in… the state management. I’m using tf-state-backend to deploy a state bucket per account. Now how do I make this variable based on the yaml stack I’m deploying?

terraform {
  required_version = ">= 0.12.2"

  backend "s3" {
    region         = "eu-central-1"
    bucket         = "qa-tfstate-backend"
    key            = "qa/terraform.tfstate"
    dynamodb_table = "qa-tfstate-backend-lock"
    profile        = ""
    role_arn       = ""
    encrypt        = true
  }
}

I don’t think those can be variables? Do I remove the key and provide a partial backend config variable input on the cli to be able to do this? Any guidance would be appreciated as this is the last thing I think I need to do some test deploys
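
The partial-configuration route would work something like this sketch (reusing the values from the block above): keep only the static skeleton in backend.tf, then supply the rest at init time.

terraform {
  backend "s3" {}
}

terraform init \
  -backend-config="region=eu-central-1" \
  -backend-config="bucket=qa-tfstate-backend" \
  -backend-config="key=qa/terraform.tfstate" \
  -backend-config="dynamodb_table=qa-tfstate-backend-lock" \
  -backend-config="encrypt=true"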

sheldonh avatar
sheldonh

bump see thread comment, any help appreciated

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is there a way to iterate over a list and get an index as well as the value? At the moment I am doing this in a string template which is sort of ugly:

      %{for index in range(length(local.mylist))}
      [item${index}]
      name=${local.mylist[index].name}
      %{endfor}
loren avatar

Not sure about in a string template, but in a regular for expression, it’s just for index, value in <list> : ...
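
For example, with a hypothetical local.mylist of objects that have a name attribute, both forms look like this (the two-variable form also works inside template directives, which is what gets confirmed below):

locals {
  # regular for expression: index and value together
  lines = [for index, value in local.mylist : "[item${index}] ${value.name}"]

  # the same iteration inside a string template
  ini = <<-EOT
    %{ for index, value in local.mylist ~}
    [item${index}]
    name=${value.name}
    %{ endfor ~}
  EOT
}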

Alex Jurkiewicz avatar
Alex Jurkiewicz

wow, i was hoping that would work but i couldn’t find it in the docs

Alex Jurkiewicz avatar
Alex Jurkiewicz

can you? or is it a little secret

pjaudiomv avatar
pjaudiomv

I’ll often convert the list to a map with the index as the keys depending on what I’m trying to do
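
e.g. a sketch of that conversion, with the same hypothetical local.mylist:

locals {
  # keys must be strings if the result will feed a for_each
  mylist_by_index = { for index, value in local.mylist : tostring(index) => value }
}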

loren avatar


wow, i was hoping that would work but i couldn’t find it in the docs
i believe i first saw it in some examples on the discourse site. for syntax stuff like this, i go back to the hcl spec… it has several examples of how it works, though they’re a bit subtle and you still need to know what you’re wanting to find… https://github.com/hashicorp/hcl/blob/main/hclsyntax/spec.md#for-expressions

hashicorp/hcl

HCL is the HashiCorp configuration language. Contribute to hashicorp/hcl development by creating an account on GitHub.

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

yup, it works! thanks @loren

2

2021-03-28

2021-03-29

Gareth avatar

Good morning, has anybody got an idea if it is possible to override a value in a map of objects that’s set in tfvars? Docs suggest that the cmd line takes priority over tfvars and that you can override items via cmd line, but I’m struggling to get the nesting right. terraform plan -var=var.site_configs.api.lambdas_definitions.get-data={"lambda_zip_path":"/source/dahlia_v1.0.0_29.zip"}

structure is:

variable "site_configs" {
type = map(object({
   lambdas_definitions = map(object({
                      lambda_zip_path = string
                     }))
  }))
}

above is a large data structure but I’ve tried to simplify it for the purpose of this question.

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can’t perform a deep merge of data like that. You can only override the entire variable value
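
So overriding from the command line means re-supplying the whole structure, e.g. (single-quoted for bash, reusing the values from the question):

terraform plan -var='site_configs={"api":{"lambdas_definitions":{"get-data":{"lambda_zip_path":"/source/dahlia_v1.0.0_29.zip"}}}}'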

Gareth avatar

Thanks Alex, I feared you’d say that.

Alex Jurkiewicz avatar
Alex Jurkiewicz

if you want to perform a deep merge of input variable data, I suggest you convert the input data to a json/yaml file, pre-process it using your own logic, and then load it with jsondecode(file(var.lambda_definition_file))
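
i.e. something like this sketch, where var.lambda_definition_file is a hypothetical path to the pre-processed file:

locals {
  # the deep-merge/override logic lives outside terraform; this just loads the result
  lambdas_definitions = jsondecode(file(var.lambda_definition_file))
}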

Gareth avatar

Yep, understand. Guess I’ve been lucky so far that I’ve not needed to change anything in this manner before. I had your suggestion in the back of my mind originally but I was hopeful I could avoid it. Thanks for the help.

1
Mohammed Yahya avatar
Mohammed Yahya

watch out for this nasty bug in RabbitMQ (enable logging) https://github.com/hashicorp/terraform-provider-aws/issues/18067

aws_mq_broker RabbitMQ general logs cannot be enabled · Issue #18067 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…

1
John Clawson avatar
John Clawson

I’m looking at building out a new account structure (currently all dumped into one wild-west style account) for my company using https://github.com/cloudposse/reference-architectures, but I don’t think we’ll wind up using EKS or kubernetes in any fashion. For now our needs are pretty simple and fargate should suffice. Will I regret using the reference architecture to build out the Organizations account structure even if I don’t need all of the components that it’s made to create?

cloudposse/reference-architectures

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

1
John Clawson avatar
John Clawson

never mind, just reading more and finding that this has been deprecated in favor of atmos, so I’ll go play with that!

Bart Coddens avatar
Bart Coddens

Good morning (in Belgium/Europe) to all, I am a bit confused by this module:

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Bart Coddens avatar
Bart Coddens

you can assign a logging target and it’s described as such:

Bart Coddens avatar
Bart Coddens
object({
    bucket_name = string
    prefix      = string
  })
Bart Coddens avatar
Bart Coddens

but how to define this in the terraform config?

aaratn avatar
logging = {
  bucket_name = "foobar-bucket"
  prefix      = "foo"
}
aaratn avatar

this is a map

Bart Coddens avatar
Bart Coddens

thx a lot !

2021-03-30

sheldonh avatar
sheldonh

Hate bugging folks again, but I’m so close. I just need a pointer on the backend remote state with the new yaml config stuff. @Erik Osterman (Cloud Posse) anyone can point me towards a good example?

I’m unclear if I have to use cli options or if module “backend” actually works to define the remote state dynamically for me as part of the yaml config stack setup

https://github.com/cloudposse/terraform-yaml-stack-config/blob/75cd7c6d6e17a9c701d4067dbcd1eedcf6039aa4/examples/complete/main.tf#L12

I found this. I thought backend configs must be hard-coded and can’t be variables, so could someone point me towards a post or example where the S3 remote backend is used with yaml config for stacks, pretty please? I want to deploy my stacks to 3 accounts, and each has custom yaml overrides from default.

cloudposse/terraform-yaml-stack-config

Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Did you see the atmos command to generate the backend config?

Basically there’s no need to use variables

At least, we haven’t needed to - so I am thinking there’s a simpler way without variables. Configs are static by design.

(E.g. if we support variables we are reinventing terraform, the goal here is to define what is static and have terraform compute that which is dynamic)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So let’s say you start with a backend like this:

This might be defined in globals.yaml

terraform:
  vars: {}
  backend_type: s3 # s3, remote, vault, etc.
  backend:
    s3:
      encrypt: true
      bucket: "eg-ue2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "eg-ue2-root-tfstate-lock"
      role_arn: "arn:aws:iam::123456789:role/eg-gbl-root-terraform"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
    remote:
    vault:
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then you can have some-other-account.yaml with:

import:
  - globals

terraform:
  backend:
    s3:
      bucket: "eg-uw2-root-tfstate"
      region: "us-west-2"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

see what I did there? You define what your standard backend configuration looks like

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then you overwrite the globals with what’s specific or unique.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in this case, I’m now pointing to a bucket in us-west-2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to generate the backend configurations, I would run atmos terraform backend generate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that will drop a .tf.json file that you should commit to source control (if practicing gitops)
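
For what it’s worth, after deep-merging the two YAML files above, the generated .tf.json should have roughly this shape (a sketch; exact contents depend on the atmos version):

{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "eg-uw2-root-tfstate",
        "dynamodb_table": "eg-ue2-root-tfstate-lock",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "us-west-2",
        "role_arn": "arn:aws:iam::123456789:role/eg-gbl-root-terraform"
      }
    }
  }
}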

sheldonh avatar
sheldonh

Looking forward to evaluating. Ok… one quick thing. I’m trying to use mage right now. While I want to examine atmos, the CI solution I have in place would need to be gutted. Can you link me to the code for atmos so I can look at what it’s doing? Or do I stick with atmos just for backend configuration and that’s it?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/atmos

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, atmos compiles down to a binary so you can just call it from mage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s just a command like any other

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so we use atmos for everything, way more than just backend generation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ultimately, all we do is serialize the backend config as json, after doing all the deep merging

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can implement the pattern however you want.

sheldonh avatar
sheldonh

I know it’s most likely epic if you are involved. I think you all write some Go as well, so any one-sentence answer on why to choose atmos over using mage and writing native Go commands?

Would like to avoid nesting more tools than necessary so feedback would be welcome

sheldonh avatar
sheldonh

trying to plug these into azure pipelines so sticking with mage might make sense for working with a bunch of Go devs for example, but open to input!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think using mage makes sense

1
sheldonh avatar
sheldonh

trying to stretch myself by leveraging more native tooling than my current build frameworks, so will examine your backend logic for initialization. That’s the only piece I’m sorta stuck on. thanks for helping me out today!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Mage has no dependencies (aside from go) and runs just fine on all major operating systems, whereas make generally uses bash which is not well supported on Windows. Go is superior to bash for any non-trivial task involving branching, looping, anything that’s not just straight line execution of commands. And if your project is written in Go, why introduce another language as idiosyncratic as bash? Why not use the language your contributors are already comfortable with?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it’s a strong argument. One of the challenges with #variant has been debugging it.

sheldonh avatar
sheldonh

yes, that’s what I’m working on using more.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using a DSL in go makes sense.

sheldonh avatar
sheldonh

The thing is I’m doing non-Go work too, esp with terraform. If I leverage mage to run it, it’s more verbose, but it would be easier to plug into other teams’ projects if they’re Go developers, I think

sheldonh avatar
sheldonh

so a library of mage functions for run/init. I just have to figure out how you are using the backend config stack so I can still use your yaml config, but ensure the backend is filled.

Are you generating backend.tf files for every stack as part of cli or is this done in some other dynamic way?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The CLI generates the backend for each component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Btw, our idea with the stack config is that it’s tool agnostic. We use it with Spacelift, Terraform Cloud, and then wrote our own on the command line called atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you subscribe to the idea, then using mage would just be another tool that operates on the schema.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No one tool implements the full power of the schema. The stack config schema is just a way to express cross-tool configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We wrote a terraform provider to read the configuration https://github.com/cloudposse/terraform-provider-utils

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils

sheldonh avatar
sheldonh

Ok. So I’m back to: the backend config has to be generated for every folder, but its core operation could be using a backend s3 config with a different key prefix for each one, right? You are just iterating through all the stacks to generate backend.tf files for each of these, but I could do that manually for the low volume I have right now?

sheldonh avatar
sheldonh

like shown at top of thread.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, could just do it manually

sheldonh avatar
sheldonh

Ok that helps, not as fancy, but it helps. I want to use it, but trying to force myself to stick with mage for right now for the sake of others’ adoption later. Might come back around to experiment with atmos more too. Maybe I’ll try it this week if I get a chance.

thanks again!

sheldonh avatar
sheldonh

So basically what I’m coming to is that the backend.tf file needs to be a cli variable for me to change it dynamically, with a partial backend config.

Basically atmos would handle creating more configs for me otherwise, but without its benefit I have to do a partial backend config and change the terraform state file name based on the staging.config.yml or qa.config.yml, as otherwise it won’t know where to look, and the remote datasource prohibits variables for the file name.

sheldonh avatar
sheldonh

I think that’s what I was trying to confirm, but all the abstractions, as elegant as they are, are hard to parse through when I didn’t build the yaml stack stuff.

sheldonh avatar
sheldonh

Whelp. I’m seeing that I think the key for the file CAN be a variable. The bucket can’t. Trying that now.

sheldonh avatar
sheldonh

Variables may not be used here. Oh well. Looks like it’s back to the cli backend partial config. I guess you can’t use variables for key, though a terraform issue points towards possible improvements on that being imminent.

sheldonh avatar
sheldonh
Using variables in terraform backend config block · Issue #13022 · hashicorp/terraform

Terraform Version v0.9.0 Affected Resource(s) terraform backend config Terraform Configuration Files variable "azure_subscription_id" { type = "string" default = "74732435-…

Marcin Brański avatar
Marcin Brański

Wow. That topic got me going with atmos. The part about the backend should be included in the documentation.

3
sheldonh avatar
sheldonh

I don’t get it. All the examples are using variables too lol. All I want is to provide this using yaml config and I’d be golden. I’m assuming though that I can’t do this because I have to provide it for the very first step to run. Order of operations says the backend is accessed first, before variables are evaluated, so I can’t leverage them.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie

Matt Gowie avatar
Matt Gowie

Hard to parse out your exact need @sheldonh — Feel like I’m missing something but can’t put my finger on it.

Regardless, I’d be happy to chat about this with you and help you get it sorted. Want to schedule some time to chat? It’s awesome that you’re pushing to adopt the approach early so would love to help if I can. I’m also going to start writing some docs that utilize and highlight the Atmos + Backend generation functionality in the coming week, so hopefully those will help you / others in the future surrounding this topic.

1
Marcin Brański avatar
Marcin Brański

Awesome Matt. I will be creating documentation for bootstrapping atmos including backend generation as well. I will ask them if they would like to open source it. Keep me in the loop if you would like some help on it

Gareth avatar

Good evening all, I am trying to use the below json input to supply a value to a map I’ve created normally within terraform, and in part it works fine on Windows

terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"} 

However when I try it on centos it fails with the below error. I assume it’s to do with the escaping within bash?

Error: Extra characters after expression

  on <value for var.build_version_inputs> line 1:
  (source code not available)

An expression was successfully parsed, but extra characters were found after
it.

I’ve tried a variety of escaping sequences but with no luck. Any suggestions, please?

/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={"get-data":"1.0.1_1"},{"post-data":"1.0.1_1"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={\"get-data\":\"1.0.1_1\",\"post-data\":\"1.0.1_1\"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:"1.0.1_1",post-data:"1.0.1_1"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:1.0.1_1,post-data:1.0.1_1}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={"get-data":"1.0.1_1","post-data":"1.0.1_1"}

Last question: if specifying {"get-data":"1.0.1_1","post-data":"1.0.1_1"} in a json file, how do you reference the variable you are trying to supply data to? Like this? I assume not, given the error I’ve just got

 {
	"variable": {
		"build_version_inputs": {
			"get-data": "1.0.1_1",
			"post-data": "1.0.1_1"
		}
	}
}

/usr/local/bin/terraform_0.13.5 plan -var-file my.tf.json

Alex Jurkiewicz avatar
Alex Jurkiewicz

The json file you have there is probably not in the format you want. As a variable file, that is creating a single variable called variable.

You could change the format to:

{
  "build_version_inputs": {...}
}

to create a variable called build_version_inputs

Alex Jurkiewicz avatar
Alex Jurkiewicz

if you want to supply that same data on the commandline, use

terraform_0.13.5 plan -var "build_version_inputs={\"get-data\":\"1.0.1_1\",\"post-data\":\"1.0.1_1\"} "

You need to quote the keys because they have - characters in them. I suggest using _ instead.

Also, if you want to load complex variable types like this via the command line, make sure the variable’s type is accurate. If you use type = any, Terraform may get confused and try to load it as a string.

2
Gareth avatar

Hi Alex, feels like you’re always having to come to my aid. Thank you for taking the time to do that.

In terms of my variable, I’ve already defined it as

variable "build_version_inputs" {
  type        = map(string)
  description = "map of built assets by Jenkins"
}

the data structure I’m using {"get-data":"1.0.1_1","post-data":"1.0.1_1"} does work for my needs, as it’s simply accessed later via a lookup. Snippet from the resource…

for_each = var.lambdas_definitions

s3_key = format("%s_v%s.zip", each.value.lambda_zip_path, var.build_version_inputs[each.key])

which gives me what I expect; here is a snippet from the Windows-based plan of terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}

~ s3_key           = "my-v1.0.1.zip" -> "/source/my_v1.0.1_1.zip"
s3_bucket          = "assets.mybucket.dev"
~ s3_key           = "my-v1.0.1.zip" -> "/source/my_v1.0.1_1.zip"
source_code_hash   = "p1slm77OpGkBvYGSyki/hItZ6lx0AVRastFep1bdoK8="

Looking at what you’ve kindly put above, I see you’ve managed to give me the correctly escaped sequence for the unix side. I knew it was an escaping issue, I just couldn’t find the culprit. Looks like the wrapping of the whole var string was one of my mistakes.

Thank you very much for the correct syntax. Given the issues with escaping etc., I guess it’ll be best to supply this via a file. The syntax you supplied

{
  "build_version_inputs": {...}
}

will that just allow the values to be set, given the variable is created elsewhere? Could have sworn I’d tried this already but guess I screwed it up somewhere.

Alex Jurkiewicz avatar
Alex Jurkiewicz

If you create a file myvars.tfvars.json with content

{
  "build_version_inputs": {
    "get-data": "foo",
    "post-data": "bar"
  }
}

And run terraform plan -var-file myvars.tfvars.json, it will do what you want. I think.

1
Gareth avatar

Thanks Alex, just giving that a go. brb

Gareth avatar

Can confirm that gets me to where I needed to be. Thank you once again Alex.

2

2021-03-31

Saichovsky avatar
Saichovsky

Hey peeps,

How do I list resources that I can import into my statefile? In other words, I know that I can import already existing resources using terraform import <address> <ID>, but before importing I would like to see what’s available - a list containing <ID> and probably other resources that were created outside of terraform

Fred Torres avatar
Fred Torres
GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

dtan4/terraforming

Export existing AWS resources to Terraform style (tf, tfstate) - dtan4/terraforming

Importing existing AWS resources to terraform using terraforming

Yes !! you can easily import your AWS infrastructure to terraform using terraforming.
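
For example, the terraformer flow is roughly (a sketch; verify the flags against the README for your version):

# see which AWS resource types terraformer supports
terraformer import aws list

# export selected resource types to HCL + state files
terraformer import aws --resources=vpc,subnet,s3 --regions=eu-west-1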

sheldonh avatar
sheldonh

I’ve followed our past discussions on pulumi. I’m curious if anyone has actually had a great result using it if you are working with Go/python/node devs?

Since it’s an abstraction of the cloud APIs just like terraform, for application stacks it sorta makes sense to me over learning HCL in depth for some. I was thinking it could provide value as a serverless-cli-oriented alternative that could handle more of the resource creation that’s needed specifically for the app.

I don’t find it as approachable as HCL, but in prior roles I was working with folks who knew HCL but not Go; now it’s the opposite: they know Go, but not HCL

Release notes from terraform avatar
Release notes from terraform
03:04:22 PM

v0.15.0-rc1 0.15.0-rc1 (Unreleased) ENHANCEMENTS: backend/azurerm: Dependency Update and Fixes (#28181) BUG FIXES: core: Fix crash when referencing resources with sensitive fields that may be unknown (#28180)

check for unknowns when marking resource values by jbardin · Pull Request #28180 · hashicorp/terraform

When we map schema sensitivity to resource values, there may be unknowns when dealing with planned objects. Check for unknowns before iterating over block values. Fixes #28153

1
1
1
cool-doge1
hkaya avatar

Hi, did anyone ever try to manually remove an EKS cluster from the state and, after changing some stuff in the console, reimport it back into the state? I am running into a race condition when adding subnets to the cluster and was wondering if the destroy + create path could be avoided…
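
(For reference, the remove/re-import cycle itself is short; a sketch with a hypothetical address and cluster name, noting that aws_eks_cluster imports by cluster name:)

terraform state rm aws_eks_cluster.default
terraform import aws_eks_cluster.default my-cluster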
