#terraform (2021-06)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-06-01

loren avatar

I forget who else was looking for this, but the new aws provider release has support for aws amplify… https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.43.0

1
Matt Gowie avatar
Matt Gowie

Me. For way too long. Found out about this a week or two back — Stoked it finally shipped party_parrot

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Ditto. We gave up long ago and moved to self-managed S3 + CloudFront

Michael Warkentin avatar
Michael Warkentin

I opened the initial issue, so yeah I’ve been waiting.

1
Michael Warkentin avatar
Michael Warkentin

Sucks that there’s no integration for env vars with param store / secret mgr (in amplify) so moving to terraform would mean committing some tokens to source..

Matt Gowie avatar
Matt Gowie

@Michael Warkentin Use sops + the sops provider. Better way of dealing with secrets than PStore or Secrets Manager IMO.

mozilla/sops

Simple and flexible tool for managing secrets. Contribute to mozilla/sops development by creating an account on GitHub.

carlpett/terraform-provider-sops

A Terraform provider for reading Mozilla sops files - carlpett/terraform-provider-sops

1
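
For reference, a minimal sketch of reading a sops-encrypted file with the carlpett/sops provider (the file name and secret key below are placeholders):

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
    }
  }
}

data "sops_file" "secrets" {
  # hypothetical encrypted file committed alongside the Terraform code
  source_file = "secrets.enc.yaml"
}

# e.g. feed a decrypted value into whatever needs it (here just an output)
output "api_token" {
  value     = data.sops_file.secrets.data["api_token"]
  sensitive = true
}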
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie will you be joining us for #office-hours today?

Matt Gowie avatar
Matt Gowie

Yeah @Erik Osterman (Cloud Posse) — I’ll be on.

Harry avatar

Does anyone know of a good terraform module for creating an S3 bucket set up to host a static site?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

Harry avatar

oh perfect, thanks @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

see examples folders

robschoening avatar
robschoening

For those of you using terraform static analysis and plan verification tools (Sentinel, Snyk, tfsec, checkov, etc.), it would be great to hear your thoughts on what features are missing and what approaches you see working or not working. Do you see this as something that should be coupled with the PR process, CI/CD, a TACOS platform, all of the above, or something else entirely? In full transparency, I'm the founder of https://soluble.ai which integrates a variety of static analysis tools into a (free) GitHub App. But the question is just honest discovery, useful to all. Curious what you all think and what your experiences have been.

Soluble: Secure your cloud infrastructure

Secure your cloud infrastructure – Infrastructure as Code (IaC) – Terraform, CloudFormation, Kubernetes

Pierre-Yves avatar
Pierre-Yves

hello Rob, I am using tfsec as a pre-commit hook to validate my terraform code before pushing to Azure. I haven't integrated it into CI/CD "yet" but will do so

robschoening avatar
robschoening

Curious whether it is just you authoring, one team, or many teams? Does tfsec do about what you need? Are you writing a lot of custom policy?

Pierre-Yves avatar
Pierre-Yves

one team, but I have split the code across several repos. The default tfsec rules fit my needs, and yes, I am authoring mainly for our infra team. For now tfsec findings are treated as warnings and will not block the CI/CD; when they do block, that should happen in the CI/CD

2021-06-02

Luis avatar

Does anyone have an example of how to use the "kubelet_additional_options" variable for the terraform-aws-eks-node-group module? I am testing it like this without any luck so far. Thanks

kubelet_additional_options = "--allowed-unsafe-sysctls=net.core.somaxconn,net.ipv4.ip_local_port_range=1024 65000"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The space here 1024 65000 is suspicious

1
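
Side note, hedged: the kubelet's --allowed-unsafe-sysctls flag takes a comma-separated whitelist of sysctl names/patterns, not values (the values themselves are set per pod via securityContext.sysctls), so assuming the module expects a single string of kubelet flags, the argument would look more like:

kubelet_additional_options = "--allowed-unsafe-sysctls=net.core.somaxconn,net.ipv4.ip_local_port_range"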
Release notes from terraform avatar
Release notes from terraform
06:13:43 PM

v0.15.5 0.15.5 (June 02, 2021) BUG FIXES: terraform plan and terraform apply: Don’t show “Objects have changed” notification when the detected changes are only internal details related to legacy SDK quirks. (#28796) core: Prevent crash during planning when encountering a deposed instance that has been removed from the configuration. (<a…

emem avatar

hi guys, does anyone have an idea how to resolve this error in our cloudflare terraform module? I first thought I should set the attribute paused: true, but it still does not seem to work. Please help

➜  staging git:(BTA-6363-Create-a-terraform-code-base-for-cloudflare) ✗ terraform plan
Acquiring state lock. This may take a few moments...

Error: Unsupported attribute

  on ../../cloudflare/modules/firewall.tf line 6, in locals:
   6:       md5(rule.expression),

This object does not have an attribute named "expression".
managedkaos avatar
managedkaos

@emem I'm not familiar with cloudflare resources but I'm wondering, what is the resource/variable/object/etc named rule? Seems as though you are not referencing it correctly… :thinking_face:

Is rule a value that you are creating or is this from a third party module you are using?

emem avatar

thanks @managedkaos was able to find the issue

managedkaos avatar
managedkaos

no problem! glad you worked it out

emem avatar

have you encountered this before?

nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform
Chris Fowles avatar
Chris Fowles

i’m hitting a problem with the way some of our modules are designed now that we’re starting to switch to AWS SSO for auth. we use data "aws_caller_identity" "current" {} a bit to get the current account id rather than having to pass it in, unfortunately when using SSO it looks like this is the root account rather than the account you’re applying against. does anyone have an easy way around this or do i need to go on an adventure?

Brian Ojeda avatar
Brian Ojeda

Something isn’t right. That should return the respective account’s id. I use it all the time. I also use the aws-cli implementation of the same command all the time to check the current account.

aws sts get-caller-identity --profile dev
aws sts get-caller-identity --profile prod
1
Chris Fowles avatar
Chris Fowles

do you use AWS SSO?

Brian Ojeda avatar
Brian Ojeda

Here is a quick way to test…

provider "aws" {
  region  = "us-east-1"
  profile = "sandbox"
}

data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}

output "id" {
  value = data.aws_caller_identity.current.id
}
Brian Ojeda avatar
Brian Ojeda

Yes. I have for 2-3 years.

Brian Ojeda avatar
Brian Ojeda

AWS SSO (not old school SSO via IAM).

Chris Fowles avatar
Chris Fowles

yeh ok - i’ll do some more digging then

Brian Ojeda avatar
Brian Ojeda
[profile default]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 000000000000
sso_role_name = AdministratorAccess
region = us-east-1

[profile sandbox]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 000000000000
sso_role_name = AdministratorAccess
region = us-east-1

[profile dev]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 111111111111
sso_role_name = AdministratorAccess
region = us-east-1

[profile prod]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 222222222222
sso_role_name = AdministratorAccess
region = us-east-1
# sso login (using default profile)
aws sso login
# now have access to all profiles despite only logging in with the "default" profile
aws sts get-caller-identity
aws sts get-caller-identity --profile dev
aws sts get-caller-identity --profile prod
Chris Fowles avatar
Chris Fowles

you are correct - i was looking at this at 11pm last night and came to the wrong conclusion when i saw something change

Chris Fowles avatar
Chris Fowles

it was another issue

Chris Fowles avatar
Chris Fowles

thanks for diving so deep on this to help me out

Brian Ojeda avatar
Brian Ojeda

Np

2021-06-03

emem avatar

hi guys who has gotten around resolving this terraform import issue before

nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform
Henry Course avatar
Henry Course

guess this might be the right place to put this, got a contribution PR that should now be ready for review: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/22

Added support for incoming SASL/IAM auth by hcourse-nydig · Pull Request #22 · cloudposse/terraform-aws-msk-apache-kafka-cluster

what Added support for the incoming (AWS provider 3.43.x) SASL/IAM auth method. why Allows access control to an MSK cluster via IAM instead of requiring SCRAM secret management. references AWS…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Added support for incoming SASL/IAM auth by hcourse-nydig · Pull Request #22 · cloudposse/terraform-aws-msk-apache-kafka-cluster

what Added support for the incoming (AWS provider 3.43.x) SASL/IAM auth method. why Allows access control to an MSK cluster via IAM instead of requiring SCRAM secret management. references AWS…

Pierre-Yves avatar
Pierre-Yves

Hello, when using terraform cloud, how do you provide terraform init arguments? I didn't find a way to do it. I am used to providing variables to connect to the remote state like this: terraform init -reconfigure -backend-config="login=$TF_VAR_login" ...

tim.davis.instinct avatar
tim.davis.instinct

Hey there, you should be able to pass CLI args using the TF_CLI_ARGS as a variable: https://www.terraform.io/docs/cli/config/environment-variables.html#tf_cli_args-and-tf_cli_args_name

Environment Variables - Terraform by HashiCorp

Terraform uses environment variables to configure various aspects of its behavior.

1
msharma24 avatar
msharma24

I'm advising a customer not to use TF Cloud. The business plan costs an arm and a leg

tim.davis.instinct avatar
tim.davis.instinct

@msharma24 We’d love for you and your customers to check out our pricing models at env0 if the TFC quotes have your head spinning

https://www.env0.com/pricing

Disclaimer: I’m the DevOps Advocate at env0

Note: The Enterprise tier pricing isn’t listed because these are 100% customized agreements from top to bottom, so we don’t know what one looks like until we spec out what is needed.

1
managedkaos avatar
managedkaos

Have you seen something like this where you know there are changes (made manually in the console), terraform knows there are changes, and yet there is no plan to revert the changes?

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":


Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

No changes. Your infrastructure matches the configuration.
loren avatar

They’ve been tracking that, I think… Patched some instances in 0.15.5, but sounds like there are still some occasions… https://github.com/hashicorp/terraform/issues/28776

"Objects have changed outside of Terraform" but no actual changes are shown · Issue #28776 · hashicorp/terraformattachment image

I have a configuration I've just updated to 0.15.4 and now terraform plan/apply always reports the following: Note: Objects have changed outside of Terraform Terraform detected the following ch…

pjaudiomv avatar
pjaudiomv

my plans got worse after 0.15.5

Vijay LL avatar
Vijay LL

Hello Guys, Is anyone using Terraform API driven runs? curl -s --header "Content-Type: application/octet-stream" --request PUT --data-binary @${config_dir}.tar.gz "$upload_url" I am trying to understand and use this. I'd like to do this through Go or Python

Alex Jurkiewicz avatar
Alex Jurkiewicz

Terraform Cloud, I’m assuming

Vijay LL avatar
Vijay LL

Yes or Terraform Enterprise

marcoscb avatar
marcoscb

Hello, I'm trying to update the AMI on an EKS cluster created with the terraform-aws-eks-cluster-0.38.0 module and terraform-aws-eks-node-group-0.19.0, setting create_before_destroy = true in the eks_node_group module, but pods are not relocated to the new nodes and the node group keeps modifying and times out. Anybody using this kind of rolling update with these modules? Any hint about how to orchestrate these rolling updates? Thanks.

Hao Wang avatar
Hao Wang

do you mean rolling update in ASG or k8s deployment?

2021-06-04

Raja Miah avatar
Raja Miah

hi, anyone have any good resources or links for terraforming an AWS API Gateway??

rms1000watt avatar
rms1000watt

Can I get some upvotes on this? lol for some reason it’s been sitting there for a long time, but adding S3 Replication Time Control would be very valuable from Terraform https://github.com/hashicorp/terraform-provider-aws/pull/11337

original issue I think https://github.com/hashicorp/terraform-provider-aws/issues/10974

Add replication time control by rebrowning · Pull Request #11337 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…

2
2
matt.bernard2006 avatar
matt.bernard2006

Hey all. Is this still under review? I’m manually editing the module with this PR and it’s working well so far. Any idea on a new release? https://github.com/cloudposse/terraform-aws-sso/pull/13

issue 12 possible fix by innominatus · Pull Request #13 · cloudposse/terraform-aws-sso

what a.permission_set_arn is providing a unique value to the account_assignment name. However the permission_set_arn can not be determined until after the apply of the permission sets. Using a.per…

matt.bernard2006 avatar
matt.bernard2006

Any updates on this yet?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB and @Andriy Knysh (Cloud Posse) I think are taking a look at this right now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we encountered this as well)

2021-06-06

Alex Jurkiewicz avatar
Alex Jurkiewicz
  on .terraform/modules/apigw_certificate/main.tf line 37, in resource "aws_route53_record" "default":
  37:   name            = each.value.name

A reference to "each.value" has been used in a context in which it
unavailable, such as when the configuration no longer contains the value in
its "for_each" expression. Remove this reference to each.value in your
configuration to work around this error.

Started seeing this error with cloudposse / terraform-aws-acm-request-certificate . Anyone familiar with this Terraform error? I’ve never seen it before and can’t quite understand it

cloudposse/terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate

pjaudiomv avatar
pjaudiomv

did you upgrade this module from a previous version, also is this happening on plan or apply

Alex Jurkiewicz avatar
Alex Jurkiewicz

on plan. I figured it out – if you pass in a hostname with uppercase letters, you get this error

pjaudiomv avatar
pjaudiomv

ahh good to know

Alex Jurkiewicz avatar
Alex Jurkiewicz
1
Zach avatar

quite the weird error

2021-06-07

Gabriel avatar
Gabriel

Hi All, terraform plan already does some validation, like catching duplicate variables, but what is missing is duplicate validation for the contents of maps and lists. Does anyone know of a way/tool to validate .tfvars files for duplicates, including duplicates inside maps and lists?

Brian A. avatar
Brian A.

https://github.com/terraform-linters/tflint might be able to do what you need @Gabriel

terraform-linters/tflint

A Pluggable Terraform Linter. Contribute to terraform-linters/tflint development by creating an account on GitHub.

2
Gene Fontanilla avatar
Gene Fontanilla

is it possible to pass outputs as inputs for variables?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, you can pass outputs from modules as inputs to other modules
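
For example, a minimal sketch with hypothetical module names:

module "network" {
  source = "./modules/network"
}

module "app" {
  source = "./modules/app"

  # an output declared in ./modules/network becomes an input of ./modules/app
  vpc_id = module.network.vpc_id
}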

Rhys Davies avatar
Rhys Davies

hey guys, this is probably a FAQ so sorry if so: What's a good article or series for writing CI for Terraform? Specifically, I now have a small team of people all working on a project together; what's a good resource to follow on how to test, deploy and not step on each other's toes?

*We use CircleCI and Terraform, no PaaS (yet)

Hemanth Gokavarapu avatar
Hemanth Gokavarapu

You can use Terraform Cloud if you are looking for a paid service… https://www.hashicorp.com/blog/learn-ci-cd-automation-with-terraform-and-circleci

If you don’t want terraform cloud, you can try something like this..

https://victorops.com/blog/a-ci-cd-template-for-terraform

Learn CI/CD Automation with Terraform and CircleCI

Get started automating Terraform in CI/CD with a new tutorial that walks you through deploying a web app and its underlying infrastructure using the same CircleCI workflow.

A CI/CD Template for Terraform

Use our CI/CD template for Terraform to learn how you can use Infrastructure-as-Code (IaC) to improve CI/CD processes. This template will show you exactly how to implement and maintain a CI/CD pipeline with Terraform.

Hemanth Gokavarapu avatar
Hemanth Gokavarapu

if you want to validate and find configuration issues of your terraform in the CI process.. you can use our free product https://get.soluble.cloud/

Soluble: Secure your cloud infrastructure

Automated Infrastructure as Code (IaC – Terraform, CloudFormation, Kubernetes) static security testing for developers

Rhys Davies avatar
Rhys Davies

awesome! I’ll do some reading, thank you

1
Matt Gowie avatar
Matt Gowie

I’d suggest against Terraform Cloud. They’re getting better, but are still fairly behind their competitors. Scalr or Spacelift are the way to go IMO:

https://scalr.com/

https://spacelift.io/

Collaboration and Automation for Terraform | Scalr

Scalr is a remote state & operations backend for Terraform with access controls, policy as code, and many quality of life features.

The best CI/CD for Infrastructure as Code

Enable collaboration. Ensure control and compliance. Customize and automate your workflows.

Hemanth Gokavarapu avatar
Hemanth Gokavarapu

Spacelift has quite a few disadvantages compared to Scalr and Terraform Cloud… I like the Terraform Cloud triggers, which I use a lot and which don't exist in Scalr, but if you are more into OPA, shared modules, custom policies… Scalr might be a good fit.

1
ohad avatar

You can check out our product ( disclaimer - i am CEO of env0) at www.env0.com which allows you to do much more than Terraform Cloud imho.

You can check out this video which presents all 4 solutions - Terraform Cloud, env0, Scalr, Spacelift https://youtu.be/4MLBpBqZmpM

2
Michael Warkentin avatar
Michael Warkentin

We use the Fargate module for deploying atlantis: https://www.runatlantis.io

Terraform Pull Request Automation | Atlantis

Atlantis: Terraform Pull Request Automation

2021-06-08

Thomas Hoefkens avatar
Thomas Hoefkens

Hi all, I am using the helm provider to deploy a chart… but when adding a template in the helm chart, the tf deployment does not detect the fact that I added a yaml file… how can this be resolved?

Brian Ojeda avatar
Brian Ojeda

https://registry.terraform.io/ - Anyone else having issues reaching the site?

Partha avatar

i can access

Partha avatar

the site

Partha avatar

@Brian Ojeda

Brian Ojeda avatar
Brian Ojeda

me too now.

1
Brian Ojeda avatar
Brian Ojeda
Announcing HashiCorp Terraform 1.0 General Availability

Terraform 1.0 — now generally available — marks a major milestone for interoperability, ease of upgrades, and maintenance for your automation workflows.

Release notes from terraform avatar
Release notes from terraform
11:43:37 AM

v1.0.0 1.0.0 (June 08, 2021) Terraform v1.0 is an unusual release in that its primary focus is on stability, and it represents the culmination of several years of work in previous major releases to make sure that the Terraform language and internal architecture will be a suitable foundation for forthcoming additions that will remain backward compatible. Terraform v1.0.0 intentionally has no significant changes compared to Terraform v0.15.5. You can consider the v1.0 series as a direct continuation…

9
Mohammed Yahya avatar
Mohammed Yahya

at last

Matt Gowie avatar
Matt Gowie

Feels unexciting as there isn’t much new being released, but at least we’ll finally stop hearing jokes about terraform not being 1.0

this1
Mohammed Yahya avatar
Mohammed Yahya

exactly

Chris Fowles avatar
Chris Fowles

it’s important to know that we can now stop having to consider a version upgrade as a major activity - which is nice

bp avatar

you called it @Erik Osterman (Cloud Posse) !

1
bp avatar

wonder if terraform test is still beta in v1.0 or staying with v0.15

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


@Chris Fowles
it’s important to know that we can now stop having to consider a version upgrade as a major activity - which is nice
Yes/no.

Now we’re back to 0.11 and 0.12 style version upgrades - the kind that happen every year and are scary.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With regular breaking changes, we got much better at handling them.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


but at least we’ll finally stop hearing jokes about terraform not being 1.0
But now I lose my excuse for why cloudposse modules are 0.x

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Jon Butterworth avatar
Jon Butterworth

Hi all. QQ if I may.. I’m seeing the following error

Error: "name_prefix" cannot be less than 3 characters

This is coming from the eks-workers module. Looks as though it’s then coming from the ec2-autoscale-group module and then from the label/null module.

Full Error:

│   on .terraform/modules/eks_workers.autoscale_group/main.tf line 4, in resource "aws_launch_template" "default":
│    4:   name_prefix = format("%s%s", module.this.id, module.this.delimiter)

I can’t seem to see why it’s not getting an id.. FYI, I’ve changed nothing. Just calling the eks-workers module…

module "eks_workers" {
  source = "./modules/eks-workers"

  cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
  cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
  cluster_name                       = module.eks_cluster.eks_cluster_id
  cluster_security_group_id          = module.eks_cluster.security_group_id
  instance_type                      = "t3.medium"
  max_size                           = 8
  min_size                           = 4
  subnet_ids                         = module.vpc.public_subnets
  vpc_id                             = module.vpc.vpc_id

  associate_public_ip_address        = true
}

NB: Although the module is local, it was cloned this morning so is up to date.

Jon Butterworth avatar
Jon Butterworth

Anyone got any thoughts on this? Would a GH Issue be more suitable for this?

Brij S avatar

Hey all, I'm using the terraform eks community module. I'm trying to tag the managed nodes with the following:

      additional_tags = {
        "k8s.io/cluster-autoscaler/enabled"             = "true"
        "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
        "Name"                                          = var.cluster_name
      }

In addition to this i’m trying to merge the tags above with var.tags with minimal success - does anyone know how to do that?

I tried the following with no luck

      additional_tags = {
        merge(var.tags, 
          "k8s.io/cluster-autoscaler/enabled"             = "true"
          "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
          "Name"                                          = var.cluster_name
        )
      }
Avenia avatar
 tags = merge(
    {
      "Name" = format("%s", var.name)
    },
    local.tags,
  )
}
Avenia avatar

i think your issue is the { } missing around your 3 bottom tags.

Brij S avatar

let me try adding the { }

Avenia avatar
additional_tags = {
        merge(var.tags, {
          "k8s.io/cluster-autoscaler/enabled"             = "true"
          "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
          "Name"                                          = var.cluster_name
        })
      }
Brij S avatar

that results in

  50:       additional_tags = {
  51:         merge(var.tags, {
  52:           "k8s.io/cluster-autoscaler/enabled"             = "true"
  53:           "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
  54:           "Name"                                          = var.cluster_name
  55:         })
  56:       }

Expected an attribute value, introduced by an equals sign ("=")
Avenia avatar

= ${var.cluster_name}” ?

Avenia avatar

it shouldn't need that. but that's odd.

Brij S avatar

still the same error

Avenia avatar

what version is this?

Avenia avatar

Ohj

Avenia avatar

you still have a syntax error

Brij S avatar

14.10

Avenia avatar
additional_tags = merge(var.tags, {
          "k8s.io/cluster-autoscaler/enabled"             = "true"
          "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
          "Name"                                          = var.cluster_name
        })
Avenia avatar

try that.

Brij S avatar

An argument or block definition is required here.

Brij S avatar

additional_tags is a map(string) value so that should work

Avenia avatar

turn your tags into a local and see if it works then

Avenia avatar
locals {
  #Instance Tagging
  tags = {
    "service"   = var.service_name
    "env"       = var.environment
    "stackname" = "${var.environment}-${var.application_name}"
  }
}

etc

Avenia avatar

then do local.tags in the merge.

Brij S avatar

hmm I’ll try - the thing is, var.tags are picked up from various *.tfvars files

Brij S avatar

so locals might make it so i duplicate some tags

Avenia avatar

are you outputting them?

Brij S avatar

the tags? no

Avenia avatar

Threading this to reduce noise.

Brij S avatar

good call

Avenia avatar

so your vars are in multiple files?

Brij S avatar

yeah

Brij S avatar

for different environments

Avenia avatar

How exactly are you structuring your terraform?

each app/env should have its own set of terraform.tfvars files

something like

app
  terraform
    dev
      terraform.tfvars
      main.tf
      outputs.tf
      variables.tf
    stage
      terraform.tfvars
      main.tf
      outputs.tf
      variables.tf
    prod
      terraform.tfvars
      main.tf
      outputs.tf
      variables.tf

(your experience may vary this is what we use basically)

Or use something like terragrunt where you can define them all in a single place and it keeps it a bit more DRY.

you should be able to import your module in the main.tf call, and expose the locals to the module there, where it can generate the local tags.

Brij S avatar
      additional_tags = merge(var.tags, {
        Name                                            = var.cluster_name
        "k8s.io/cluster-autoscaler/enabled"             = "true"
        "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
      }, )
Brij S avatar

that seems to have worked, however my instances don't have any of the tags after apply

Brij S avatar

the plan didn't show a change either

Thomas Hoefkens avatar
Thomas Hoefkens

Hi all, I am using the helm provider to deploy a chart… but when adding a template in the helm chart, the tf deployment does not detect the fact that I added a yaml file… how can this be resolved?

Gabriel avatar
Gabriel

I am not sure if there is a clean way to make it work with terraform. a hack/workaround could be to determine if there are changes some other way, if yes taint/replace the resource. or just handle helm separately :D
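
A hedged sketch of the "detect changes some other way" idea: hash the chart's template files and pass the hash as a release value, so adding or editing a template forces helm_release to see a diff (the chart path and release name are placeholders):

resource "helm_release" "app" {
  name  = "app"
  chart = "${path.module}/chart"

  set {
    name = "templatesHash"
    # any change to a template file changes this value, and therefore the release
    value = sha1(join("", [for f in fileset("${path.module}/chart/templates", "**") : filesha1("${path.module}/chart/templates/${f}")]))
  }
}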

Thomas Hoefkens avatar
Thomas Hoefkens

@Gabriel So is this a known “side-effect” of using the helm provider? That is a big issue imo…

Gabriel avatar
Gabriel

yes, I think it's a known issue https://github.com/hashicorp/terraform-provider-helm/issues/372, not aware if it has been fixed somewhere

Values modified outside of terraform not detected as changes · Issue #372 · hashicorp/terraform-provider-helm

Terraform Version Terraform v0.12.12 Helm provider Version ~> 0.10 Affected Resource(s) helm_resource Terraform Configuration Files resource "helm_release" "service" { name =…

2021-06-09

Jon Butterworth avatar
Jon Butterworth

I posted a question for module support yesterday and it’s lost in the scroll back. Is this the best place for module support? Or should I raise a github issue? TIA.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Here is good

Jon Butterworth avatar
Jon Butterworth

Thanks, I’ve reshared.

Jon Butterworth avatar
Jon Butterworth
07:54:48 AM

Reshared here so it doesn’t get lost in scrollback

Hi all. QQ if I may.. I’m seeing the following error

Error: "name_prefix" cannot be less than 3 characters

This is coming from the eks-workers module. Looks as though it’s then coming from the ec2-autoscale-group module and then from the label/null module.

Full Error:

│   on .terraform/modules/eks_workers.autoscale_group/main.tf line 4, in resource "aws_launch_template" "default":
│    4:   name_prefix = format("%s%s", module.this.id, module.this.delimiter)

I can’t seem to see why it’s not getting an id.. FYI, I’ve changed nothing. Just calling the eks-workers module…

module "eks_workers" {
  source = "./modules/eks-workers"

  cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
  cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
  cluster_name                       = module.eks_cluster.eks_cluster_id
  cluster_security_group_id          = module.eks_cluster.security_group_id
  instance_type                      = "t3.medium"
  max_size                           = 8
  min_size                           = 4
  subnet_ids                         = module.vpc.public_subnets
  vpc_id                             = module.vpc.vpc_id

  associate_public_ip_address        = true
}

NB: Although the module is local, it was cloned this morning so is up to date.

MrAtheist avatar
MrAtheist

Does anyone know if there's a way to ignore changes to the entire module? I've got this tgw module originally deployed, but it has been messed with manually a couple of times, and I don't know if I could salvage it by monkey patching the tf code, hence this question…

Jon Butterworth avatar
Jon Butterworth

Resources have the lifecycle meta-argument, with which you can use ignore_changes - I know this doesn't answer your question, but the reason for mentioning it is: https://www.terraform.io/docs/language/modules/syntax.html - This mentions that the lifecycle argument is reserved for future releases.. so perhaps lifecycle is/will (be) available for modules

Jon Butterworth avatar
Jon Butterworth
│ Error: Unsupported argument
│ 
│   on main.tf line 19, in module "vpc":
│   19:   lifecycle = {
│ 
│ An argument named "lifecycle" is not expected here.
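
For reference, ignore_changes only works inside a resource block, so it has to live in the module's source rather than on the module call; a minimal sketch with a hypothetical resource:

resource "aws_ec2_transit_gateway" "this" {
  description = "example"

  lifecycle {
    # ignore drift on attributes that were changed by hand
    ignore_changes = [description, tags]
  }
}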
MrAtheist avatar
MrAtheist

sadly just stumbled upon this.. hopefully it’ll get incorporated somehow

https://www.reddit.com/r/Terraform/comments/mrzsbg/how_to_use_lifecycle_feature_with_ec2instance/

How to use lifecycle feature with ec2-instance module with terraform?

In a terraform task, created an ec2_instance creation module module “ec2_instance” { source =…

MrAtheist avatar
MrAtheist

any other shady hacks for this? or am i doomed to monkey patch this mess…?

Jon Butterworth avatar
Jon Butterworth

Could you use terraform state mv to move resources into a new module which represents what is in state?

MrAtheist avatar
MrAtheist

thanks, checking it out, I'm pretty much a newb when it comes to tf…

MrAtheist avatar
MrAtheist

slight update: I ended up messing with the state file instead of monkey patching the tf code… I'm not endorsing my actions in any way, shape or form lol

Mr.Devops avatar
Mr.Devops

Running into an issue creating an AKS cluster in Azure when using managed identity and private DNS zones. Hoping to find anyone who has worked with AKS and could possibly provide some guidance, please

Dias Raphael avatar
Dias Raphael

Hi Team, I would like to create a hosted zone in AWS through terraform… Can you suggest a terraform module which does this? Any guidance would be helpful.

jose.amengual avatar
jose.amengual
Announcing HCP Packer

HCP Packer is a new cloud service designed to bridge the gap between image creation and deployment with image-management workflows. The service will be available for beta testing in the coming months.

Zach avatar


While HCP Packer is not “Packer in the cloud,”
Too late, it's 100% going to be branded "Packer in the cloud"

pjaudiomv avatar
pjaudiomv

I created a module for Route 53 Resolver DNS Firewall using the cloudposse scaffolding if anyone wants to kick the tires on it https://github.com/pjaudiomv/terraform-aws-route53-resolver-dns-firewall

pjaudiomv/terraform-aws-route53-resolver-dns-firewall

Terraform module to provision AWS DNS firewall resources. - pjaudiomv/terraform-aws-route53-resolver-dns-firewall

Alex Jurkiewicz avatar
Alex Jurkiewicz

nice, in Terraform 1.0, terraform destroy -help states only that it’s an alias for terraform apply -destroy. But terraform apply -help doesn’t mention -destroy

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think it's because -destroy comes from plan

1

2021-06-10

Jon Butterworth avatar
Jon Butterworth

In regards to context.tf and this module.. can someone tell me where module.this.id is coming from? In specific reference to the aws-ec2-autoscale-group and aws-eks-workers modules. But this seems to be a standard configuration across a lot of modules.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s using the null-label module. This is a module which doesn’t create infrastructure, but is designed to create a consistent name based on inputs

Alex Jurkiewicz avatar
Alex Jurkiewicz

the module is instantiated as “this” and id is one of the null-label outputs. Specifically the one that outputs the “consistent name”

Jon Butterworth avatar
Jon Butterworth

I’m having a hard time trying to narrow down the error I’m seeing when I use the eks-workers module.

Jon Butterworth avatar
Jon Butterworth

It calls EC2-Autoscale-Group.

Jon Butterworth avatar
Jon Butterworth

Which has a name prefix, which it gets from module.this.id

Jon Butterworth avatar
Jon Butterworth

However the error I'm seeing suggests it's getting no value from module.this.id

Alex Jurkiewicz avatar
Alex Jurkiewicz

there’s no default name. Did you pass in any of the variables used by null-label module?

Jon Butterworth avatar
Jon Butterworth

Starting to see this now.. there’s no namespace, environment or stage.. which are inputs.

Jon Butterworth avatar
Jon Butterworth

I haven’t done anything, I’m just using a CP module.

Alex Jurkiewicz avatar
Alex Jurkiewicz

namespace, environment, stage, name, attributes

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, this requirement is not well documented

Alex Jurkiewicz avatar
Alex Jurkiewicz

simplest approach is to set name only to specify the name you want to use for the module’s resources

Jon Butterworth avatar
Jon Butterworth

I see.. So I must pass these attributes into the eks-workers module?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, you have to pass at least one of them
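
For example, a minimal sketch of adding the label inputs to the module call above (values are placeholders):

module "eks_workers" {
  source = "./modules/eks-workers"

  # null-label inputs used to build module.this.id
  namespace = "eg"
  stage     = "dev"
  name      = "eks-workers"

  # ...plus the cluster/VPC inputs from the original module call
}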

Jon Butterworth avatar
Jon Butterworth

The confusion came because the offending module is nested

Alex Jurkiewicz avatar
Alex Jurkiewicz

passing multiple of them, and the null label module’s other variables are designed for advanced workflows where you compose or nest multiple labels

Jon Butterworth avatar
Jon Butterworth

Brilliant, thank you. That was a simple fix

Jon Butterworth avatar
Jon Butterworth

I passed name, and now I’m onto the next error! But at least I’m passed that point.

1
Jon Butterworth avatar
Jon Butterworth

Thanks again,

Johnmary avatar
Johnmary

I am new to terraform. I am trying to use the cloudposse module (git url: https://github.com/cloudposse/terraform-aws-tfstate-backend) to save the terraform state in an S3 bucket on AWS, but I keep getting this error on Jenkins: (

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Error: Failed to get existing workspaces: S3 bucket does not exist.

The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.

Error: NoSuchBucket: The specified bucket does not exist
	status code: 404, request id: RTY8A45R6KR8G72F, host id: yEzmd9hrvPSY3MY3trWfvdtyw4VcJZ+L+hf79QpkOkbSD7GU4Xz9EViWHbDRXiHjTp8k5LgPIzM=

). Any help and guidance will be appreciated thanks.

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

Jon Butterworth avatar
Jon Butterworth
Error: NoSuchBucket: The specified bucket does not exist

The bucket doesn’t exist - Create it first

Johnmary avatar
Johnmary

when I created the bucket I get this errors:

Acquiring state lock. This may take a few moments...

Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
	* ResourceNotFoundException: Requested resource not found
	* ResourceNotFoundException: Requested resource not found

Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Jon Butterworth avatar
Jon Butterworth

So you created the bucket and now you’re getting that error from TF?

Johnmary avatar
Johnmary

I created it from the AWS console with the exact name that was expected by terraform for storing the state.

Jon Butterworth avatar
Jon Butterworth

That’s an issue with DynamoDB - I’ve not used that module before, but it looks as though it creates the bucket for you if you follow the guide.

Jon Butterworth avatar
Jon Butterworth

cloudposse/tfstate-backend/aws should have created the bucket for you if you followed this?

Johnmary avatar
Johnmary

yes, that's what it should do, but I don't know why it's not creating it

Jon Butterworth avatar
Jon Butterworth
resource "aws_s3_bucket" "default" {
....

According to the module it does create the bucket.

resource "aws_dynamodb_table"
...

It also sorts the dynamo table.

Jon Butterworth avatar
Jon Butterworth

Step one on the readme… where did you put it?

Johnmary avatar
Johnmary

I added it in a folder called backend and then created a main.tf and added it there

Johnmary avatar
Johnmary
08:29:37 AM

this is my terraform structure

Johnmary avatar
Johnmary

So the first step is in the main.tf in the backend folder and then the second step is in the backend.tf

Jon Butterworth avatar
Jon Butterworth

That’s probably your problem.

Jon Butterworth avatar
Jon Butterworth

add that module to management-site/main.tf

Johnmary avatar
Johnmary
08:39:17 AM

not sure that's the problem, the jenkins deploy job calls it first before the management site.

Jon Butterworth avatar
Jon Butterworth

Have you tried just following the steps in the module first? To see if it works that way? Then moving things around once you know it works?

Jon Butterworth avatar
Jon Butterworth

I'm not sure why you've got logic in a script to check whether the bucket exists, and if it doesn't exist, run the backend module. To me that doesn't make sense… the whole point of Terraform is that it creates things which don't exist and doesn't re-create things which do exist.

Jon Butterworth avatar
Jon Butterworth

What you’ve done won’t work though, you’ve created a backend with no state to put into it.

Jon Butterworth avatar
Jon Butterworth

You need to bin that backend directory and bring the module into your main.tf

Jon Butterworth avatar
Jon Butterworth

Then run terraform init followed by terraform apply followed by terraform init -force-copy

Jon Butterworth avatar
Jon Butterworth

The first init pulls the module down, the apply creates the S3 bucket and the second init copies the backend to the bucket.

Johnmary avatar
Johnmary

This has been resolved, thank you, but I didn't use cloudposse again as the issue persisted even after putting it all in the same file as you advised. Thanks for the help.

Johnmary avatar
Johnmary

This was the one I used to achieve that. https://github.com/stavxyz/terraform-aws-backend

stavxyz/terraform-aws-backend

A Terraform module for your AWS Backend + a guide for bootstrapping your terraform managed project - stavxyz/terraform-aws-backend

Raja Miah avatar
Raja Miah

hi everyone, looking for any ideas or resources that I can use to set up an API Gateway with a Cognito user pool using terraform. Any help would be much appreciated; if you want to contact me I can explain our current setup and the issues we are facing in more detail

Thomas W. avatar
Thomas W.

Hi there. I ran into a weird error with the aws provider and wonder if anyone has run into this too:

resource "aws_synthetics_canary" "api" {
  name                 = "test"
  artifact_s3_location = "s3://${aws_s3_bucket.synthetic.id}"
  execution_role_arn   = aws_iam_policy.synthetic.arn
  handler              = "apiCanaryBlueprint.handler"
  runtime_version      = var.synthetic_runtime_version
  zip_file             = data.archive_file.synthetic.output_path

  schedule {
    expression = "rate(60 minutes)"
  }
}

terraform apply and:

│ Error: error reading Synthetics Canary: InvalidParameter: 1 validation error(s) found.
│ - minimum field size of 1, GetCanaryInput.Name.
│ 
│ 
│   with aws_synthetics_canary.api,
│   on monitoring.tf line 94, in resource "aws_synthetics_canary" "api":
│   94: resource "aws_synthetics_canary" "api" {
│ 
╵
Harry avatar

I’ve got a VPC with some private subnets, and I’m passing those subnet IDs into a module to deploy instances to run an app. I’m also passing in an instance type, but not all instance types are available in all regions and one subnet doesn’t have the instance type I need in it. I’m trying to use the aws_subnet data resource to retrieve the AZs each subnet is in, then use aws_ec2_instance_type_offerings to filter the list of subnets so I only deploy in ones where the instance type is available, but I’m not sure how to create a data resource for each subnet. Can I use foreach here?
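
A hedged sketch of that approach (variable names are assumptions): look up each subnet's AZ, query the offerings per AZ, and keep only the subnets whose AZ offers the instance type:

data "aws_subnet" "selected" {
  for_each = toset(var.subnet_ids)
  id       = each.value
}

data "aws_ec2_instance_type_offerings" "per_subnet" {
  for_each      = data.aws_subnet.selected
  location_type = "availability-zone"

  filter {
    name   = "instance-type"
    values = [var.instance_type]
  }

  filter {
    name   = "location"
    values = [each.value.availability_zone]
  }
}

locals {
  # subnet IDs whose AZ actually offers var.instance_type
  usable_subnet_ids = [
    for id, offering in data.aws_ec2_instance_type_offerings.per_subnet : id
    if length(offering.instance_types) > 0
  ]
}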

Michael Dizon avatar
Michael Dizon
Aurora postgres iam_roles failing to apply · Issue #129 · terraform-aws-modules/terraform-aws-rds-aurora

When passing iam db role to iam_roles variable, e.g. iam_roles = [<db_role_arn>] , it fails to apply role to Aurora postgres db cluster with following error - Error: InvalidParameterValue: Th…

Michael Dizon avatar
Michael Dizon

trying to provide iam_roles but get this error: Error: InvalidParameterValue: The feature-name parameter must be provided with the current operation for the Aurora (PostgreSQL) engine.

Andy Miguel (Cloud Posse) avatar
Andy Miguel (Cloud Posse)

@here We’re having a special edition of #office-hours next week and will be joined by @Taylor Dolezal who is a Senior Developer Advocate at HashiCorp. Please queue up any questions (or gripes) you have about Terraform on this thread and we’ll have Taylor review them live on the call, thanks!

@here we have another special edition of Office Hours next week Wednesday June 16th!

@Taylor Dolezal will be joining us! Taylor is a Senior Developer Advocate at HashiCorp and we’ll be talking to him about an array of topics including: his role, what’s it like to be a developer at HashiCorp, what we can expect next for Terraform, Nomad vs Kubernetes, security considerations with custom providers, and answering live Q&A from anyone who joins! Hope to see you there

1
2
Taylor Dolezal avatar
Taylor Dolezal
12:35:16 AM

@Taylor Dolezal has joined the channel

2021-06-11

Jon Butterworth avatar
Jon Butterworth

Hi all, quick question for sanity’s sake… In the EKS-Workers module where it refers to autoscaling groups.. This is not the same as Cluster Autoscaler? Or is it?

Jon Butterworth avatar
Jon Butterworth

I’ve deployed a cluster using EKS-Workers.. set max nodes to 8 and min nodes to 3.. but when I deploy 20 nginx pods the nodes don’t scale.

Jon Butterworth avatar
Jon Butterworth

Perhaps there’s an input to enable autoscaling? Or do I need to look at writing something myself to enable cluster auto scaling?

Jon Butterworth avatar
Jon Butterworth

I think I’ve answered this myself. I needed to deploy the autoscaler pod

Mohammed Yahya avatar
Mohammed Yahya

I love the feeling when support asks you how you solved it

3
1

2021-06-12

Mohammed Yahya avatar
Mohammed Yahya

https://github.com/tonedefdev/terracreds allows you to store tokens for TF Cloud or similar SaaS (env0, Scalr, Spacelift) in the macOS or Windows vault instead of plain text, same as aws-vault. I used this when switching my TF CLI workflow between TF Cloud and Scalr.

tonedefdev/terracreds

A Terraform Cloud/Enterprise credentials helper. Contribute to tonedefdev/terracreds development by creating an account on GitHub.

4

2021-06-13

MrAtheist avatar
MrAtheist

Question for terraform-aws-modules/vpc/aws: I'm switching from a single NATGW to a multi NATGW setup per AZ. The plan wants to destroy the NATGW that was originally created. This seems fishy to me as it would basically cut the outgoing traffic while the apply is doing its thing… Does anyone know a way to skip the destroy? Or is there a better way to go about this?

Brian Ojeda avatar
Brian Ojeda

Check if create_before_destroy is set and is true. If it is set and true, then there is little to no downtime.

https://www.terraform.io/docs/language/meta-arguments/lifecycle.html

MrAtheist avatar
MrAtheist

hmm, don't think that would work in this case as it would try to create a NATGW with the elastic IP hooked up to the original NATGW…

the problem here is that I have whitelisted the original elastic IP somewhere else, and messing with the original NATGW in any way, shape or form would break this link

MrAtheist avatar
MrAtheist
Modifying single_nat_gateway destroys existing nat gatway · Issue #506 · terraform-aws-modules/terraform-aws-vpc

I've created a VPC with single_nat_gateway=true. When attempting to change to single_nat_gateway=false the plan shows the following: # module.vpc.aws_nat_gateway.this[0] must be replaced -/+ re…

Ashish Sharma avatar
Ashish Sharma

Hi Guys…. Do we have any utility like tfenv for Windows, to use whichever tf version we like?

2021-06-14

rei avatar

Hi folks, I am starting to migrate my terraform state and stuff to Terraform Cloud. So far so good, however now I encountered the following error when migrating the module using the cloudposse eks module.

│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused
│ 
│   with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/eks_cluster/auth.tf line 83, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│   83: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│ 

Any ideas/hints?

I have tried to change the kubernetes provider, checked the IAM credentials. Still no clue

zeid.derhally avatar
zeid.derhally

looks like your kubernetes provider is not configured correctly or is expecting k8s on localhost

rei avatar

however before importing the state to Terraform Cloud it worked.

rei avatar

There is nothing in the docs saying that the provider needs a config file, just to pass the host, cluster_ca_certificate and token from the data resources

zeid.derhally avatar
zeid.derhally

correct, the config file is not necessary if you set those properties. I was just going by the error message where it shows that it is trying to connect to 127.0.0.1

rei avatar

it does not make sense, it tries to connect to localhost… the only change I made was adding the remote backend pointing to terraform cloud

marc slayton avatar
marc slayton

Is your backend initialized? There’s a one-time step to push your s3 config.

rei avatar

Yes it is. I see the state file and the resources in the TFC GUI. When I run plan it executes it remotely

rei avatar

But then it throws the error

marc slayton avatar
marc slayton

It's a little hard to say, but it feels like a configuration problem. I've often found those out by looking at the TF_LOG=TRACE output. There's a lot of info given about each call, including where the variables are referenced from.

rei avatar

Thx, would give it a try

rei avatar

I found the cause, the missing config path: Add this env var to fix

export KUBE_CONFIG_PATH=~/.kube/config
marc slayton avatar
marc slayton

Nice catch!

rei avatar

Well I needed to go through 70k lines of trace logs

marc slayton avatar
marc slayton

That does seem like a lot. I’ve been thinking of a tool that might help make that easier – something that parses for diagnostic info, and helps the user interpret the logs a bit better. It might be something that could be added to the utils provider, given time.

uselessuseofcat avatar
uselessuseofcat

I was able to send a notification to an SNS topic when a new log event appears in a Log Group via aws_cloudwatch_log_metric_filter and aws_cloudwatch_metric_alarm, but I was wondering, how can I send the message itself and not just metric values? Thanks!
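
If the goal is the log message itself rather than a metric, one hedged option is a subscription filter that streams matching events to a destination such as a Lambda that republishes to SNS (the log group and function names below are placeholders):

resource "aws_cloudwatch_log_subscription_filter" "to_forwarder" {
  name            = "forward-to-sns"
  log_group_name  = aws_cloudwatch_log_group.app.name
  filter_pattern  = "ERROR"
  destination_arn = aws_lambda_function.sns_forwarder.arn
}

# allow CloudWatch Logs to invoke the forwarding Lambda
resource "aws_lambda_permission" "allow_cloudwatch_logs" {
  statement_id  = "AllowExecutionFromCloudWatchLogs"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.sns_forwarder.function_name
  principal     = "logs.amazonaws.com"
  source_arn    = "${aws_cloudwatch_log_group.app.arn}:*"
}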

Jags avatar

hi there, I'm a new user of the atmos workflow. Just wondering how to import existing resources using atmos, or should I do it outside of atmos and then use atmos after?

marc slayton avatar
marc slayton

Hi all – I’m troubleshooting a specific problem in terraform-aws-components//account-map, a shared component which makes remote-state calls using modules defined in terraform-yaml-stack-config. I’ve been troubleshooting a few cases where the terraform-aws-provider seems to hang up for various reasons during the remote-state call. The reasons aren’t always clear, but they result in terraform errors such as: Error: rpc error: code = Unavailable desc = transport is closing Would any of you have an idea what the provider might be giving up on here? Are there techniques that might pull more debugging info out of the utils provider?

marc slayton avatar
marc slayton

Here’s a TF_LOG=TRACE output. I’ve found this particular issue to be more difficult than most.

marc slayton avatar
marc slayton
marc slayton avatar
marc slayton

This line seems to be central to the issue:

path=.terraform/providers/registry.terraform.io/cloudposse/utils/0.8.0/linux_amd64/terraform-provider-utils_v0.8.0 pid=16307 error="exit status 2"

The provider seems to fail without a lot of additional info. In this case, I’ve linted the yaml, checked the variable dependencies, etc. I might try looking for version compatibilities next – or perhaps rebuild the module to add a bit more debugging, if available.

marc slayton avatar
marc slayton

I figured this out. There were pieces of terraform-yaml-stack-config I didn’t understand. Once I found the TF_LOG_PROVIDER flag and the terraform-provider-utils source, I realized the merge tool could actually be a powerful scanning/diagnostic/debug tool to help with bad configurations like my own. Might be a win for other people down the road. I’ll try and post a few example diagnostics – like maybe something that warns about bad remote-state configs.

2
Florian SILVA avatar
Florian SILVA

Hello guys,

I just pushed a new PR to the Beanstalk env module. Could somebody take a look at it when possible? This feature would close some old issues and PRs at the same time. https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/182

Feat/add nlb by florian0410 · Pull Request #182 · cloudposse/terraform-aws-elastic-beanstalk-environment

what Add NLB support in the module Set default protocol to TCP in case that loadbalancer_type == "network" S3 logs and Security Groups are not valid for Network ELB. HealthCheckPath appl…

joshmyers avatar
joshmyers

Anyone here using  default_tags with the AWS provider? Seen any gotchas? A few open issues around perpetual diff/conflicting resource tag problems. Looks maybe not fully baked yet….
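
For context, this is the provider-level block in question; a minimal sketch with placeholder tag values:

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "dev"
      ManagedBy   = "Terraform"
    }
  }
}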

pjaudiomv avatar
pjaudiomv

Only issue I had was with ecs on govcloud

joshmyers avatar
joshmyers

Perpetual diffs when changing a resource not related to the tag change? Are you using terraform-null-label at the mo on all resources and passing them around into other modules etc?

joshmyers avatar
joshmyers
default_tags always shows an update · Issue #18311 · hashicorp/terraform-provider-aws

Description I have been looking forward to the default tagging support and tested it on a project yesterday which uses https://github.com/terraform-aws-modules/terraform-aws-vpc/ — this immediately…

joshmyers avatar
joshmyers
Tag error for ECS in govcloud partion using default_tags · Issue #19185 · hashicorp/terraform-provider-aws

Provider version 3.38.0 Terraform 0.15.1 when using default_tags feature apply fails as it tries to create tags on ecs resource. Community Note Please vote on this issue by adding a reaction to t…

pjaudiomv avatar
pjaudiomv

Yup that’s the one

2021-06-15

brad.whitcomb97 avatar
brad.whitcomb97

Hi all, I’m fairly new to Terraform and I’m still getting to grips with the best practices….

I’m currently in the process of creating a simple environment which includes a newly created - VPC, IGW, Public Subnet and a EC2 Instance.

However, at the point of applying the config I receive the error message below, has anyone seen anything like this before? Any help/advice would be greatly appreciated

terraform apply --auto-approve
module.vpc.aws_subnet.main_subnet: Creating...
module.vpc.aws_vpc.vpc: Creating...
module.vpc.aws_vpc.vpc: Creation complete after 4s [id=vpc-09da0001c2b98a15f]
module.network.aws_internet_gateway.igw: Creating...
module.network.aws_internet_gateway.igw: Creation complete after 1s [id=igw-0e922b721b610639f]
╷
│ Error: error creating subnet: InvalidVpcID.NotFound: The vpc ID 'aws_vpc.vpc.id' does not exist
│ 	status code: 400, request id: 5b4df02a-6826-45a4-a3ca-1e7fcaff4920
│ 
│   with module.vpc.aws_subnet.main_subnet,
│   on modules/vpc/main.tf line 11, in resource "aws_subnet" "main_subnet":
│   11: resource "aws_subnet" "main_subnet" {
roth.andy avatar
roth.andy

Code snippet?

bazbremner avatar
bazbremner

What are you supplying as the value for the availability_zone if anything? That looks like your problem.

brad.whitcomb97 avatar
brad.whitcomb97

resource "aws_subnet" "main_subnet" { vpc_id = aws_vpc.main.id cidr_block = var.subnet_cidr map_public_ip_on_launch = “true” availability_zone = var.availability_zone

tags = { name = “main Subnet” } }

variable “availability_zone” { type = string default = “eu-west-2a” }

loren avatar

are you passing any vars when calling terraform apply? do you have a terraform.tfvars file setting the availability_zone?

loren avatar

the error is coming from the aws api, it’s not really a terraform error, exactly. might inspect the console to see what’s up…

loren avatar

actually, the error is saying you are passing the literal string "var.availability_zone" as the value for var.availability_zone

InvalidParameterValue: Value (var.availability_zone)
loren avatar

so i’d take another look at your aws_subnet block, and make sure it’s not actually this:

availability_zone = "var.availability_zone"
brad.whitcomb97 avatar
brad.whitcomb97

I haven’t got a .tfvars file

Any ideas why the variable I have set isn’t being picked up?

module "vpc" { source = “./modules/vpc” vpc_id = “var.vpc_id” vpc_cidr = “10.0.0.0/24” subnet_cidr = “10.0.1.0/24” availability_zone = “eu-west-2a” }

loren avatar

this is incorrect syntax:

vpc_id = "var.vpc_id"

(at least, it is incorrect if you mean to pass the value of the variable vpc_id, instead of the literal string "var.vpc_id")

loren avatar

remove the quotes:

vpc_id = var.vpc_id
brad.whitcomb97 avatar
brad.whitcomb97

quotes have been removed, but error still occurs

brad.whitcomb97 avatar
brad.whitcomb97

New/different error message received -

module.network.aws_internet_gateway.igw: Creation complete after 1s [id=igw-065812ef10aa22484]
╷
│ Error: error creating subnet: InvalidVpcID.NotFound: The vpc ID 'aws_vpc.vpc.id' does not exist
│ 	status code: 400, request id: b5a36b6d-fe2f-4f46-b2c5-eb6452e7b23e
│
│   with module.vpc.aws_subnet.main_subnet,
│   on modules/vpc/main.tf line 11, in resource "aws_subnet" "main_subnet":
│   11: resource "aws_subnet" "main_subnet" {

brad.whitcomb97 avatar
brad.whitcomb97
resource "aws_subnet" "main_subnet" {
  vpc_id = var.vpc_id
  cidr_block = var.subnet_cidr
  map_public_ip_on_launch = "true"
  availability_zone = var.availability_zone

  tags = {
    name = "main Subnet"
  }
}

variable "vpc_id" {
  type = string
  default = "aws_vpc.vpc.id"
}
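A minimal sketch of the likely fix, assuming the VPC and subnet both live in modules/vpc/main.tf: drop the string-typed vpc_id variable and reference the VPC resource directly (the error above comes from passing the literal string "aws_vpc.vpc.id" to the API).

# modules/vpc/main.tf - reference the VPC resource directly
resource "aws_vpc" "vpc" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "main_subnet" {
  vpc_id                  = aws_vpc.vpc.id   # a real reference, not the literal string "aws_vpc.vpc.id"
  cidr_block              = var.subnet_cidr
  map_public_ip_on_launch = true
  availability_zone       = var.availability_zone

  tags = {
    name = "main Subnet"
  }
}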
Eric Jacques avatar
Eric Jacques

Hello folks, I’m trying to play with terraform-aws-ecs-web-app, launching examples/complete with all defaults. terraform apply went well, but the task eg-test-ecs-web-app keeps dying because of “Task failed ELB health checks”, maybe because of Fargate. Does someone have an idea?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

FYI, if anybody here was using my gitpod-terraform image for Gitpod, I moved it to ECR Public as Dockerhub annoyed me: https://github.com/Vlaaaaaaad/gitpod-terraform/pull/11 AKA https://gallery.ecr.aws/vlaaaaaaad/gitpod-terraform

Cleanup and move to ECR Public by Vlaaaaaaad · Pull Request #11 · Vlaaaaaaad/gitpod-terraformattachment image

DockerHub got… lazy and user-hostile so I am moving this image to Amazon ECR Public. Also doing some long-needed cleanup

ECR Public Gallery

Amazon ECR Public Gallery is a website that allows anyone to browse and search for public container images, view developer-provided details, and see pull commands

3
Kyle Johnson avatar
Kyle Johnson

Question on using multiple workspaces and graphs

We currently use terragrunt and its dependency resource to pull outputs from other workspaces (example: one workspace is for VPC config and most other resources pull subnet IDs from it). It seems we could be doing this with the terraform_remote_state data source, but we would miss out on terragrunt’s ability to understand the graph of dependencies (the run-all commands are smart about ordering based on the graph).

How do folks handle the graph without a tool like Terragrunt? Some form of pipeline which understands dependencies? Avoid having deeply nested graphs to begin with?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use spacelift to set up dependencies using rego policies

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we label all of our components and then make other components dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

plus spacelift provides graph visualization across root modules

Kyle Johnson avatar
Kyle Johnson

I guess I need to go look at spacelift thanks!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

checkout env0, scalr, and TFC too

Anand Gautam avatar
Anand Gautam

I am using this module (https://registry.terraform.io/modules/cloudposse/config/aws/latest) to deploy AWS Config using the CIS 1.2 AWS benchmark with this submodule (https://registry.terraform.io/modules/cloudposse/config/aws/latest/submodules/cis-1-2-rules). I get an error on the terraform plan though:

│ Error: Invalid index
│
│   on .terraform/modules/aws_config/main.tf line 99, in module "iam_role":
│   99:     data.aws_iam_policy_document.config_sns_policy[0].json
│     ├────────────────
│     │ data.aws_iam_policy_document.config_sns_policy is empty tuple
│
│ The given key does not identify an element in this collection value.

The error goes away when I set create_sns_topic to true. Any insights on how to get rid of this error? It seems like the module expects the SNS policy to exist

Alex Jurkiewicz avatar
Alex Jurkiewicz

there’s probably a parameter to pass in an existing SNS topic

Alex Jurkiewicz avatar
Alex Jurkiewicz

I guess the module expects you to either specify create_sns_topic = true or to specify a pre-existing SNS topic

Anand Gautam avatar
Anand Gautam

Ahh okay, Saw the readme and it says to add this var findings_notification_arn if I pass false for create_sns_topic . Thanks for the insight!
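For reference, a minimal sketch of that combination (the variable names come from the README discussion above; the topic ARN is hypothetical):

module "aws_config" {
  source  = "cloudposse/config/aws"
  version = "x.y.z"   # pin as appropriate

  create_sns_topic          = false
  findings_notification_arn = "arn:aws:sns:us-east-1:111111111111:existing-config-topic"   # hypothetical pre-existing topic
  # ...remaining required inputs omitted
}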

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @matt

2021-06-16

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here We’re having a special edition of #office-hours next week and will be joined by @Taylor Dolezal who is a Senior Developer Advocate at HashiCorp. Please queue up any questions (or gripes) you have about Terraform on this thread and we’ll have Taylor review them live on the call, thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s this week, now

@here We’re having a special edition of #office-hours next week and will be joined by @Taylor Dolezal who is a Senior Developer Advocate at HashiCorp. Please queue up any questions (or gripes) you have about Terraform on this thread and we’ll have Taylor review them live on the call, thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ideas…. what do you not like about terraform?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What are the hardest parts getting terraform adopted by your organization?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What’s it like behind the scenes running such a popular open source project with thousands of contributors?

managedkaos avatar
managedkaos

What are the benefits of using CDK for Terraform over vanilla Terraform? I understand the case for using a programming language a developer might be more familiar with compared to Terraform, but is there any other huge benefit to the CDK that might be overlooked by someone already using Terraform? Also, are there any anti-patterns that can be avoided by someone just starting out with the CDK? Thank you!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good one

Release notes from terraform avatar
Release notes from terraform
02:03:41 PM

v1.1.0-alpha20210616 1.1.0 (Unreleased) NEW FEATURES: lang/funcs: add a new type() function, only available in terraform console (#28501) ENHANCEMENTS: configs: Terraform now checks the syntax of and normalizes module source addresses (the source argument in module blocks) during configuration decoding rather than only at module installation time. This…

lang/funcs: add (console-only) TypeFunction by mildwonkey · Pull Request #28501 · hashicorp/terraformattachment image

The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it's handy to be abl…

Austin Loveless avatar
Austin Loveless

When using https://github.com/cloudposse/terraform-aws-eks-iam-role?ref=tags/0.3.1 how can I add an annotation to the serviceaccount to use the IAM role I created? I had to do it manually after the serviceaccount and IAM role were created. Is there a way I can automate this?

Command I used: kubectl annotate serviceaccount -n app service eks.amazonaws.com/role-arn=arn:aws:iam::xxxxx:role/rolename@app

cloudposse/terraform-aws-eks-iam-roleattachment image

Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role
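One way to automate that annotation, assuming the Kubernetes provider is configured against the cluster, is to manage the service account from Terraform and set the annotation there. A minimal sketch (names taken from the kubectl command above):

resource "kubernetes_service_account" "app" {
  metadata {
    name      = "service"   # names from the kubectl command above
    namespace = "app"

    annotations = {
      # replace with the role ARN output exposed by the eks-iam-role module
      "eks.amazonaws.com/role-arn" = "arn:aws:iam::xxxxx:role/rolename@app"
    }
  }
}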

MattyB avatar
cloudposse/terraform-aws-eks-iam-roleattachment image

Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role

managedkaos avatar
managedkaos

This is more of an annoyance with AWS than with Terraform but…

Is there a way to deregister all task definitions for a given task family on destroy?

TLDR: I’m finding that when I work with ECS, each service creates a “family” of task definitions. Once I’m done with the service I can terraform destroy and it goes away but the task family and the task definitions hang around. I can clean them up from the console and/or CLI but is there a way to nudge TF to do it for me? It would be one less thing to have to do to keep my “unused-resources-in-the-AWS-console” OCD in check.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is the kind of thing we want to add to a forthcoming terraform-aws-utils provider (nothing there yet, but we’re pretty close to pulling the trigger on it)

1
1
Nate Diaz avatar
Nate Diaz

anyone know why during a terraform destroy terraform would still try to resolve the data resources? for example, i have a blah.tf file that has some data sources for resources that no longer exist. This makes my terraform destroy error out. Why does terraform care about those data sources? shouldn’t it just try to destroy everything within the state?

Ian Bartholomew avatar
Ian Bartholomew

im starting out a new greenfield terraform project, and I am curious how people are structuring their projects? Most recently, in the last few years, I have been using a workspaces based approach (OSS workspaces, not tf cloud workspaces), but I found a lot of issues with this approach (the project and state grew to the point of needing to be moved to separate repos, which led to issues of how to handle changes that crossed repos, and also how to handle promotion of changes, etc), so I’m looking around to see other ways of structuring a TF project. Are people still using terragrunt, and structuring their project accordingly? Thanks!

msharma24 avatar
msharma24
msharma24/multi-env-aws-terraformattachment image

Multi environment AWS Terraform demo. Contribute to msharma24/multi-env-aws-terraform development by creating an account on GitHub.

msharma24 avatar
msharma24

Using Terraform workspaces sucks, so I figured on creating a wrapper script to do multi-env deployment, and here is the reference I posted on my github

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

I think workspaces are on the way out. The Terraform 1.0 compatibility guarantees explicitly call out workspaces as not being guaranteed.

The recommended approach from Hashicorp is one directory per environment.

Personally, I use a single directory and a different backend configuration per environment – so like Terragrunt, but we’re using Spacelift.
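A sketch of the single-directory pattern with a partial backend configuration (file names and values are hypothetical):

# backend.tf - partial configuration shared by every environment
terraform {
  backend "s3" {}
}

# env/dev.s3.tfbackend (hypothetical) - one small file per environment
bucket         = "acme-terraform-state"
key            = "app/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "acme-terraform-locks"

The environment is then selected at init time, e.g. terraform init -backend-config=env/dev.s3.tfbackend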

Ian Bartholomew avatar
Ian Bartholomew

how do you handle promotion of changes with the different backends?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Generally, we continuously deploy our infra, so once you merge a PR, it will automatically deploy to dev, then stg, then prod. So there’s no real promotion in the usual case.

If we are making a major change which does require promotion, generally we rely on logic in the Terraform configuration which creates something only if environment is “dev” or whatever.

2
Ian Bartholomew avatar
Ian Bartholomew

awesome, thanks!

Pierre-Yves avatar
Pierre-Yves

in my team we are doing the same as @Alex Jurkiewicz

msharma24 avatar
msharma24

This is an interesting discussion! @Alex Jurkiewicz @Pierre-Yves I have followed a similar approach but only auto deploying feature/* branches to dev, and then when a PR is opened we run a TF plan against prod, so that we can review the plan + code review in the PR and when merged auto deploy to Prod

Alex Jurkiewicz avatar
Alex Jurkiewicz

hm, you deploy feature branches? What if the PR is not later merged

managedkaos avatar
managedkaos

@Alex Jurkiewicz can you comment on this?
The Terraform 1.0 compatibility guarantees explicitly call out workspaces as not being guaranteed.
I just started a project that is using TF 1.0 and workspaces so reading this gives me pause

managedkaos avatar
managedkaos

Typically, I have one environment per directory but I’m setting up a project in automation that uses workspaces as it simplifies the project structure. The project will have multiple (>5) environments at a time and they will come and go randomly as devs work on them so I figured using workspaces would be the best approach.

Alex Jurkiewicz avatar
Alex Jurkiewicz

There is a section in the Terraform docs about their compatibility guarantees for Terraform 1.x. Search that for “workspace”

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think workspaces are the only viable native-terraform approach to dynamic environments like you describe, and I still use them for this purpose. But I’m trying to move away from them for static environments

1
managedkaos avatar
managedkaos

thanks

Pierre-Yves avatar
Pierre-Yves

I have static environments (stage and production), but no workspaces. I think I’ll need workspaces to be able to test my module and destroy everything after the test, but I haven’t found other usage…

Alex Friedman avatar
Alex Friedman

@Ian Bartholomew Which direction did you end up going? I’m about to hit the same question.

Ian Bartholomew avatar
Ian Bartholomew

I’m going with terragrunt and a folder per environment, then a nested folder per domain, and using external modules to handle promotion of changes

10001
Alex Friedman avatar
Alex Friedman

Which external modules?

Ian Bartholomew avatar
Ian Bartholomew

our own. basically having a separate repo for a given domain, versioning it using git tags, and referencing it in the primary infrastructure repo using the ref option to point to a specific release. that way you can promote changes in your infra with PRs, changing the version to point to a new one. This section in their docs does a good job at explaining it

Quick start

Learn how to start with Terragrunt.
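A sketch of the pinning Ian describes, with a hypothetical domain repo and tag:

module "networking" {
  # hypothetical domain repo, pinned to a release tag; bump ref in a PR to promote the change
  source = "git::https://github.com/acme/terraform-networking.git?ref=v1.4.0"

  # ...inputs
}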

Alex Friedman avatar
Alex Friedman

Thanks. I’m a bit wary of creating a separate repo for each domain, but the flow makes sense otherwise.

2
Ian Bartholomew avatar
Ian Bartholomew

yah, i feel you. The reason i am is that in the past, when we have put all the resources into a single state, it gets to be hard to manage pretty quick, both in terms of actual state size, as well as just understanding what all is changing. like on one project, doing a plan required redirecting the plan output to a separate file to inspect, since the output would exceed the console line limit

Ian Bartholomew avatar
Ian Bartholomew

so, for me, breaking it up makes it easier to manage

Alex Friedman avatar
Alex Friedman

I’ve always separated out state by env/module. Is that not the norm?

Ian Bartholomew avatar
Ian Bartholomew

i think it is. we might be talking about the same thing. i just mean domain as in area of concern

Alex Friedman avatar
Alex Friedman

Yeah, I think we are, gotcha.

1
Pierre-Yves avatar
Pierre-Yves

I agree with @Ian Bartholomew: split the tfstate amongst repos = lightweight tfstate + better maintainability + improved access rights

2

2021-06-17

Johann avatar

Can someone share with me good terraform examples for eks+fargate?

Mohammed Yahya avatar
Mohammed Yahya

this is an awesome reference, I highly recommend it: https://github.com/maddevsio/aws-eks-base

maddevsio/aws-eks-baseattachment image

This boilerplate contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud. - maddevsio…

Johann avatar

@Mohammed Yahya Thank you!

Johann avatar

just to learn

greg n avatar

How can I suppress Checkov failures coming from upstream modules pls? Putting suppression comments in the module call doesn’t seem to work.

module "api" {
  #checkov:skip=CKV_AWS_76:Do not enable logging for now
  #checkov:skip=CKV_AWS_73:Do not enable x ray tracing for now
  source       = "[email protected]:XXXXXXXX/terraform-common-modules.git//api-gateway?ref=main"
<snip>
}
Mohammed Yahya avatar
Mohammed Yahya

Skipping directories: to skip a whole directory, use the environment variable CKV_IGNORED_DIRECTORIES. Default is CKV_IGNORED_DIRECTORIES=node_modules,.terraform,.serverless

Mohammed Yahya avatar
Mohammed Yahya

you know where the module paths are, right? In the .terraform folder > modules. You can skip the whole folder or pass down the specific module.

Mohammed Yahya avatar
Mohammed Yahya

also you can skip the check itself.

Mohammed Yahya avatar
Mohammed Yahya

another solution would be to exit with non-zero
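A concrete sketch of those two options (check IDs from the module call above; confirm flags against the Checkov version in use):

# skip specific checks for the whole run
checkov -d . --skip-check CKV_AWS_76,CKV_AWS_73

# or skip whole directories via the env var mentioned above
CKV_IGNORED_DIRECTORIES=node_modules,.terraform,.serverless checkov -d .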

greg n avatar

perfect! thank you

Bernard Gütermann avatar
Bernard Gütermann
02:43:03 PM

Hi, I don’t know if this is the right place to ask, but I’m trying to run the Elastic Beanstalk example at https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete and it is not working. I get the following output. The only thing I changed was the value of the “source” and “version” fields in the “elastic_beanstalk_environment” module to “cloudposse/elastic-beanstalk-environment/aws” and “0.41.0”. What am I missing?

Raja Miah avatar
Raja Miah

Did you update the variables file?

Bernard Gütermann avatar
Bernard Gütermann

yes

Bernard Gütermann avatar
Bernard Gütermann

Actually I fixed it. It’s a version issue. I will do a PR to the repo

Bernard Gütermann avatar
Bernard Gütermann
Fix Example dependencies so that it runs by bernardgut · Pull Request #183 · cloudposse/terraform-aws-elastic-beanstalk-environmentattachment image

what Updated the versions of dependencies in the example so that it works with terraform 1.0 why Previous version returned "deprecated" errors on vpc module. Example didn't run out…

JG avatar

Hello. I am trying to learn how to use the for_each meta argument and am really hoping someone can help me out. I am trying to create 4 subnets each with a different name & cidr, like so:

resource "aws_subnet" "public_a" {  for_each = {   "public_subnet_a" = "10.10.0.0/28"   "public_subnet_b" = "10.10.0.16/28"  }

 vpc_id      = aws_vpc.this.id  cidr_block    = each.value  availability_zone = "us-west-1a"

 tags = merge(   var.tags,   {    "Name" = each.key   },  ) }

I need to use the resultant subnet ids in several other resources like acls and route tables but am having issues because everything then seems to require I add a for_each argument to each resource so I can then refer to the aws_subnet.public_a[each.key].id. Questions:  1) Is there a way around doing this such as splitting these into individual elements and then not having to add a for_each to every resource that references the subnet id? 2) Even if I add the for_each to something like a route table I still get errors and am not sure what the for_each should reference since if I put something like for_each = aws.subnet.public_a.id I would have to add the [each.key]. Assuming I do have to use a for_each for every resource that references the subnets, what is the proper way to handle this? 3) Is my code for the subnet ok or should I have handled it differently - perhaps it is inherently faulty?

I have tried element, flatten, using a data source block, using [*], etc.. I appreciate any help but please explain in terms someone who is learning can understand as I really want to progress in my understanding. Thank you.

Joe Niland avatar
Joe Niland

Can you show the complete module code?

Raja Miah avatar
Raja Miah

Separate out the values:
"public_subnet_a" = "10.10.0.0/28"
"public_subnet_b" = "10.10.0.16/28"

into the variable.tf and then use: for_each = var.{name of variable}
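A sketch of how that can look end-to-end, reusing the same map keys for the downstream resources (aws_route_table.public is assumed to exist):

variable "public_subnets" {
  type = map(string)
  default = {
    public_subnet_a = "10.10.0.0/28"
    public_subnet_b = "10.10.0.16/28"
  }
}

resource "aws_subnet" "public" {
  for_each = var.public_subnets

  vpc_id            = aws_vpc.this.id
  cidr_block        = each.value
  availability_zone = "us-west-1a"

  tags = merge(var.tags, { Name = each.key })
}

# downstream resources iterate over the subnets themselves, so each one
# gets its own association keyed by the same subnet name
resource "aws_route_table_association" "public" {
  for_each = aws_subnet.public

  subnet_id      = each.value.id
  route_table_id = aws_route_table.public.id   # assumes a single shared route table
}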

sheldonh avatar
sheldonh

For what it’s worth, not that it solves your immediate issue, but it might help… I wrote up how to use for_each a bit more on my blog, and to date it’s gotten a ton of traffic as many people get confused on this.

Your mileage may vary, but maybe you’ll find something useful there, and if not, add a comment and let me know. I had to write that down as it’s not super intuitive.

I feel like the for_each design reflects the Go developers that built it, but not normal users, so the behavior isn’t similar to many other tools I’ve used. Once I started learning Go it made a lot more sense.

https://www.sheldonhull.com/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/

How to Iterate Through A List of Objects with Terraform's for_each functionattachment image

While iterating through a map has been the main way I’ve handled this, I finally ironed out how to use expressions with Terraform to allow an object list to be the source of a for_each operation. This makes feeding Terraform plans from yaml or other collection input much easier to work with.

JG avatar

Hey thanks for the responses - I will read your blog and see if I can figure it out @sheldonh and if not I will post the code after I sanitize it @Joe Niland

David Morgan avatar
David Morgan

does the cloudposse “terraform-aws-ecs-cloudwatch-autoscaling” module support target tracking scaling strategy?

David Morgan avatar
David Morgan

or is it just the step scaling strategy

2021-06-18

Mark juan avatar
Mark juan

Hey everyone!

Mark juan avatar
Mark juan

I got a problem, can someone please help me with this?

Mark juan avatar
Mark juan

i) What I am getting: I am creating prometheus using a helm release and this repo https://prometheus-community.github.io/helm-charts, so I am getting service names like 1) prometheus-kube-prometheus-operator 2) prometheus-kube-state-metrics 3) prometheus-grafana, etc. ii) What I am expecting: 1) prometheus-operator 2) state-metrics 3) grafana, etc. iii) Where I am stuck: as I am using this repo, I am only setting name = “prometheus” in the helm release, so I think the suffix is coming from the chart; how can I make the names what I want?

Prometheus Community Kubernetes Helm Charts

Prometheus community Helm charts

Markus Muehlberger avatar
Markus Muehlberger

If you have a look at the kube-state-metrics chart, you can see that the name is composed of the release name and chart name, if you don’t provide a fullnameOverride value. In your case that would be prometheus (release name) and kube-state-metrics (chart name) which results in prometheus-kube-state-metrics.

If you don’t want that you will have to set the fullnameOverride in your values file.

prometheus-community/helm-chartsattachment image

Prometheus community Helm charts. Contribute to prometheus-community/helm-charts development by creating an account on GitHub.

1
Mark juan avatar
Mark juan
resource "helm_release" "prometheus" {
  chart = "kube-prometheus-stack"
  name = "prometheus"
  namespace = "monitoring"
  create_namespace = true

  repository = "<https://prometheus-community.github.io/helm-charts>"

  # When you want to directly specify the value of an element in a map you need to escape the dot with \\.
  set {
    name = "podSecurityPolicy\\.enabled"
    value = true
  }

  set {
    name = "server\\.persistentVolume\\.enabled"
    value = false
  }

  set {
    name = "server\\.resources"
    # You can provide a map of value using yamlencode
    value = yamlencode({
      limits = {
        cpu = "200m"
        memory = "50Mi"
      }
      requests = {
        cpu = "100m"
        memory = "30Mi"
      }
    })
  }
}
Mark juan avatar
Mark juan

where can I pass it in this?

Markus Muehlberger avatar
Markus Muehlberger

I haven’t worked with the helm Terraform provider yet, but it would be something like

set {
  name = "fullnameOverride"
  value = "my-full-name"
}

I reckon

Markus Muehlberger avatar
Markus Muehlberger

The stack resource does use a suffix (see here for example), so you might be out of luck there.

prometheus-community/helm-chartsattachment image

Prometheus community Helm charts. Contribute to prometheus-community/helm-charts development by creating an account on GitHub.

Mark juan avatar
Mark juan

the value should be what i want to give the name of prometheus service right?

Markus Muehlberger avatar
Markus Muehlberger

Yes, in case of kube-state-metrics it would be the full name, for the stack it would probably be prometheus (to get prometheus-operator).

Mark juan avatar
Mark juan

but the single helm chart is installing all the things

Mark juan avatar
Mark juan

all the services

Mark juan avatar
Mark juan

i tried by passing like this

set {
  name = "fullnameOverride"
  value = "my-full-name"
}
Mark juan avatar
Mark juan

but not worked

Mark juan avatar
Mark juan

Is it possible or not?

Mark juan avatar
Mark juan

Like without changing anything in repo

nicolasvdb avatar
nicolasvdb

Hi, I’m using the module terraform-aws-config from https://github.com/cloudposse/terraform-aws-config and it seems you can’t create the resources when using “create_sns_topic = false”; you get this error:

╷
│ Error: Invalid index
│ 
│   on main.tf line 99, in module "iam_role":
│   99:     data.aws_iam_policy_document.config_sns_policy[0].json
│     ├────────────────
│     │ data.aws_iam_policy_document.config_sns_policy is empty tuple
│ 
│ The given key does not identify an element in this collection value.

Just letting you guys know.. not a breaking issue; using terraform 0.15.5 btw

Alex Jurkiewicz avatar
Alex Jurkiewicz

did you also see findings_notification_arn?

Alex Jurkiewicz avatar
Alex Jurkiewicz

someone else reported this same confusion recently, sounds like the docs need an improvement here

Alyson avatar
Alyson
08:51:20 PM

Hi, I didn’t understand how to set the value of the map_additional_iam_roles variable in the terraform-aws-eks-cluster module

I tried it this way and was unsuccessful:

map_additional_iam_roles = {"rolearn":"arn:aws:iam::xxxxxxxx:role/JenkinsRoleForTerraform"}

https://github.com/cloudposse/terraform-aws-eks-cluster

Alex Jurkiewicz avatar
Alex Jurkiewicz

In the README, the type for this variable is defined as:

list(object({
    rolearn  = string
    username = string
    groups   = list(string)
  }))

You need to provide values in that format

Alex Jurkiewicz avatar
Alex Jurkiewicz

eg, you are missing username and groups values

Alyson avatar
Alyson
09:48:15 PM

I tried that way, but it failed too.

I’m checking the hashicorp documentation to see if I can get a light

Alex Jurkiewicz avatar
Alex Jurkiewicz

how did it fail? Did you get the same error message, or a different one?

Alyson avatar

the error is different

Error: Variables not allowed                                                                                                       
                                                                                                                                   
  on vars.tf line 137, in variable "map_additional_iam_roles":                                                                     
 137:     rolearn = "arn:aws:iam::xxxxx:role/JenkinsRoleForTerraform"                                                       
                                                                                                                                   
Variables may not be used here.                                                                                                    
                                                                                                                                   
                                                                                                                                   
Error: Missing item separator                                                                                                      
                                                                                                                                   
  on vars.tf line 137, in variable "map_additional_iam_roles":                                                                     
 136:   default = [                                                                                                                
 137:     rolearn = "arn:aws:iam::xxxxxxx:role/JenkinsRoleForTerraform"                                                       
                                                                                                                                   
Expected a comma to mark the beginning of the next item.                                                                           
                                                                                                                                   
ERRO[0002] Hit multiple errors:                                                                                                    
Hit multiple errors:                                                                                                               
exit status 1 
Alex Jurkiewicz avatar
Alex Jurkiewicz

sounds like you have some syntax errors then. Can you post your full code for this module block?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m guessing you are trying to wrap this module and pass in this value as a variable. And that you have some syntax errors in the wrapping part.

To start with, I suggest you hardcode the values in your module definition.

Alex Jurkiewicz avatar
Alex Jurkiewicz
  module "eks_cluster" {
    source = "cloudposse/eks-cluster/aws"
    # ...
    map_additional_iam_roles = [
      {
        rolearn = "x"
        username = "y"
        groups = []
      }
    ]
  }
Alyson avatar

With your tip it worked perfectly. Thanks a lot for the help, @Alex Jurkiewicz

1

2021-06-19

2021-06-20

Mark juan avatar
Mark juan

Hi all, I’ve raised the same issue before. i) What I am getting: I am creating prometheus using a helm release and this repo https://prometheus-community.github.io/helm-charts, so I am getting service names like 1) prometheus-kube-prometheus-operator 2) prometheus-kube-state-metrics 3) prometheus-grafana, etc. ii) What I am expecting: 1) prometheus-operator 2) state-metrics 3) grafana, etc. iii) Where I am stuck: as I am using this repo, I am only setting name = “prometheus” in the helm release, so I think the suffix is coming from the chart; how can I make the names what I want? This time I was able to rename the prometheus operator using the fullnameOverride set, but not the other services like node exporter, etc.

Prometheus Community Kubernetes Helm Charts

Prometheus community Helm charts

2021-06-21

Jon Butterworth avatar
Jon Butterworth

Anyone thought of a way to achieve this without using null_data_source which is deprecated. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/c25940a8172fac9f37bc2a74c99acf4c21ef12b0/examples/complete/main.tf#L89 I tried moving it to locals but kept seeing the aws-auth configmap error.

Pierre-Yves avatar
Pierre-Yves

It was just moved to a dedicated “null provider” resource, so I guess you just have to load the provider to get it working. https://registry.terraform.io/providers/hashicorp/null/latest/docs https://registry.terraform.io/providers/hashicorp/null/latest/docs/data-sources/data_source

Jon Butterworth avatar
Jon Butterworth

Am I right in saying that in theory, we should be able to move this into locals? Since all we’re doing here is waiting until the cluster & config map is up before we deploy the node group?

Zach avatar

Where do you see that its deprecated? That isn’t mentioned on the docs. However the documentation says that locals can achieve the same effect. https://registry.terraform.io/providers/hashicorp/null/latest/docs/data-sources/data_source

pjaudiomv avatar
pjaudiomv

In certain versions of terraform a warning is displayed saying it’s deprecated when using it

pjaudiomv avatar
pjaudiomv
hashicorp/terraform-provider-nullattachment image

Provides constructs that intentionally do nothing, useful in various situations to help orchestrate tricky behavior or work around limitations. This provider is maintained by the HashiCorp Terrafor…

Zach avatar

Huh, weird that it doesn’t show in the docs

pjaudiomv avatar
pjaudiomv

Somewhere I believe I read that there would be no more development in ways of enhancements or feature requests too

Zach avatar

oh well, anyways the Locals should work for him

1
Jon Butterworth avatar
Jon Butterworth

Yes, TF is complaining that it’s deprecated. I’ve tried using locals to get the cluster name, but it does not have the same effect. Which is strange.

Jon Butterworth avatar
Jon Butterworth

If I add a local cluster_name to pull from the eks_cluster module, and then set the relevant field in the worker nodes module.. I get the config map error

Zach avatar

what is “the config map error”?

Jon Butterworth avatar
Jon Butterworth

I wonder if there’s a difference with where a local would get its value from and where a data source would? Presumably a data source gets its inputs from the API, which means the resource has to be up, while a local gets its value from the module… so in theory a local variable could be populated before the cluster is fully up?

Zach avatar

can you paste what you’re defining for the local

Jon Butterworth avatar
Jon Butterworth
locals {
  cluster_name = module.eks_cluster.cluster_id
}
Jon Butterworth avatar
Jon Butterworth

Then using local.cluster_name in eks_node_group

module "eks_node_group" {
...
cluster_name = local.cluster_name
...
}
Zach avatar

Ah, thats kind of what I was suspecting. The null data source was using both the cluster name AND the config map attribute, so that everything was synchronized. So you’d need a local defined that uses both those values

Jon Butterworth avatar
Jon Butterworth

I did add a local for the config map ID

Zach avatar

they need to be joined in single local though

Jon Butterworth avatar
Jon Butterworth

Oh, that will be my problem.

Jon Butterworth avatar
Jon Butterworth
locals {
  cluster_name = module.eks_cluster.cluster_id
  kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
}
Jon Butterworth avatar
Jon Butterworth

Not this then?

Zach avatar

yah join those into 1 string basically

Zach avatar

that will cause the local to be undefined until both those values are available

Jon Butterworth avatar
Jon Butterworth

Not completely sure what you mean?

Jon Butterworth avatar
Jon Butterworth

You mean join using join() ?

Zach avatar
locals {
  wait_on_thing = "${module.eks_cluster.cluster_id}- ${module.eks_cluster.kubernetes_config_map_id}"
}
Jon Butterworth avatar
Jon Butterworth

I see.

Jon Butterworth avatar
Jon Butterworth

Got it, excellent, thank you. I’ll test that now

Zach avatar

That’ll only help you part-way though

Zach avatar

you’ll have to then somehow use that variable in the other resource, so that it is dependent on it

Zach avatar

might mean you do some ugly string splitting to get the cluster-id (or whatever it is you need)

Jon Butterworth avatar
Jon Butterworth

Ah yeah, that’s true…

Jon Butterworth avatar
Jon Butterworth

Seems logical to use null_data_source really, since it gives what we want. But having the deprecated warning is annoying.

Jon Butterworth avatar
Jon Butterworth

Something like this I guess? element(split("-", "${wait_for_thing}"),0)

Zach avatar

yah
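Pulling the thread together, a sketch of the locals-only replacement for the null_data_source (module and output names taken from this thread; the space separator just needs to be a character that can’t appear in the cluster name):

locals {
  # depends on both the cluster and the aws-auth config map, like the old null_data_source did
  cluster_name_and_configmap = "${module.eks_cluster.cluster_id} ${module.eks_cluster.kubernetes_config_map_id}"
  cluster_name               = split(" ", local.cluster_name_and_configmap)[0]
}

module "eks_node_group" {
  # ...
  cluster_name = local.cluster_name
}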

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can anyone help me with the below please …

resource "aws_acm_certificate" "cert" {
  count             = var.enable_ingress_alb ? 1 : 0
  domain_name       = "alb.platform.${var.team_prefix}.${var.platform}.${var.root_domain}"
  validation_method = "DNS"

  tags = {
    CreatedBy       = "Terraform"  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = { for domain in aws_acm_certificate.cert.*.domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } }
  name     = each.value["name"]
  type     = "CNAME"
  records  = [each.value["record_value"]]
  zone_id  = data.terraform_remote_state.dns.outputs.zone
  ttl      = 60
}

when enable_ingress_alb = true I am getting the following error

Error: Unsupported attribute

  on .terraform/modules/kubernetes/modules/kubernetes-bottlerocket/ingress-alb-certs.tf line 17, in resource "aws_route53_record" "cert_validation":
  17:   for_each = { for domain in aws_acm_certificate.cert.*.domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } }

This value does not have any attributes.

I think it has something to do with the * in aws_acm_certificate.cert.*.domain_validation_options

loren avatar

try:

  for_each = var.enable_ingress_alb ? { for domain in aws_acm_certificate.cert[0].domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } } : {}

?

loren avatar

though note, if you start using subject alternative names, the validation record for root.zone and for *.root.zone are the same, which can lead to a race condition… here’s how we handled that: https://github.com/plus3it/terraform-aws-tardigrade-acm/blob/master/main.tf#L40-L46

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Interesting thanks man I think the [0] doesn’t work so looking at options

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
Error: Unexpected resource instance key

  on .terraform/modules/kubernetes/modules/kubernetes-bottlerocket/ingress-alb-certs.tf line 25, in resource "aws_acm_certificate_validation" "cert":
  25:   certificate_arn = aws_acm_certificate.cert.0.arn

Because aws_acm_certificate.cert does not have "count" or "for_each" set,
references to it must not include an index key. Remove the bracketed index to
refer to the single instance of this resource.
loren avatar

did you change something?

resource "aws_acm_certificate" "cert" {
  count             = var.enable_ingress_alb ? 1 : 0

count is right there

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I removed it by accident

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Schoolboy error

Babar Baig avatar
Babar Baig

Hey guys. Quick question. Can we create a terraform module conditionally?

MattyB avatar

I’d love to know this as well. I believe people have asked for the functionality, but I’m not sure if it’s been added lately. CloudPosse modules allow you to pass variables in like “enabled” or “<resource>_enabled”.

1
Tim Birkett avatar
Tim Birkett

A common pattern in many Terraform modules is to add some sort of “create” or “enabled” parameter as mentioned above, but in recent Terraform versions it’s possible to use for_each and count on modules: https://www.terraform.io/docs/language/meta-arguments/for_each.html

1
Babar Baig avatar
Babar Baig

I found a solution. We can use count on modules for Terraform versions 0.13 +

Babar Baig avatar
Babar Baig

So I was able to do something like count = var.myflag ? 0 : 1 in the module definition.

Babar Baig avatar
Babar Baig
08:17:55 PM
Mohammed Yahya avatar
Mohammed Yahya

also consider for_each to preserve index position when applying the same module multiple times

1
1
Pierre-Yves avatar
Pierre-Yves

here is an example using count to trigger a module for a given environment variable

module "servers" {
  count                              = var.env == "stage" ? 0 : 1
  source                             = "[email protected]:v3/your_module?ref=v1.67"
  location                           = var.location
  env                                = var.env

2021-06-22

Andrea Cavagna avatar
Andrea Cavagna

hey guys, I just found this: https://github.com/iann0036/cfn-tf-custom-types custom types for CloudFormation; you can now also add Terraform resources. This is awesome to me

iann0036/cfn-tf-custom-typesattachment image

CloudFormation Custom Types for Terraform resources. - iann0036/cfn-tf-custom-types

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have any useful resources for path based DENY rules on WAF v2?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am reading the docs and getting myself totally confused
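For reference, a minimal sketch of a path-based block rule with aws_wafv2_web_acl (names and the /admin path are hypothetical; double-check the block structure against the provider docs for your version):

resource "aws_wafv2_web_acl" "this" {
  name  = "path-deny-example"   # hypothetical
  scope = "REGIONAL"            # use "CLOUDFRONT" for CloudFront distributions

  default_action {
    allow {}
  }

  rule {
    name     = "block-admin-path"
    priority = 1

    action {
      block {}
    }

    statement {
      byte_match_statement {
        search_string         = "/admin"
        positional_constraint = "STARTS_WITH"

        field_to_match {
          uri_path {}
        }

        text_transformation {
          priority = 0
          type     = "NONE"
        }
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "block-admin-path"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "path-deny-example"
    sampled_requests_enabled   = true
  }
}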

Pavel avatar

hi all

Pavel avatar

im having some weirdness with the key-pair/aws module

Pavel avatar

so i have this

module "key_pair" {
  source = "cloudposse/key-pair/aws"
  namespace             = var.app_name
  stage                 = var.environment
  name                  = "key"
  ssh_public_key_path   = "./.secrets"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"

}

I have the .pem files it generates on one machine, but i want to transfer this to another machine. I put the same key files in the same folder. But tf apply wants to create new private/public key pairs for some reason

 # module.key_pair.local_file.private_key_pem[0] will be created
  + resource "local_file" "private_key_pem" {
      + directory_permission = "0777"
      + file_permission      = "0600"
      + filename             = "./.secrets/xx-development-key.pem"
      + id                   = (known after apply)
      + sensitive_content    = (sensitive value)
    }

  # module.key_pair.local_file.public_key_openssh[0] will be created
  + resource "local_file" "public_key_openssh" {
      + content              = <<-EOT
            ssh-rsa xxx
        EOT
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./.secrets/xx-development-key.pub"
      + id                   = (known after apply)
    }
Joe Niland avatar
Joe Niland

Are you using remote state?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ssm-tls-ssh-key-pairattachment image

Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it writes the keys to SSM

1

2021-06-23

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way to only perform a remote state lookup if a value is true?

Zach avatar

put a count on the data resource?

Zach avatar

or does it not support that?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

oh it looks like it does

Zach avatar

easy then

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

100% inserts embarrassed face

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i guess this is what i get for working for a medical procedure this morning
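A sketch of the count-on-a-data-source approach (the flag name and backend settings are hypothetical; the output reference mirrors the zone lookup earlier in this channel):

data "terraform_remote_state" "dns" {
  count = var.lookup_dns ? 1 : 0   # hypothetical flag name

  backend = "s3"
  config = {
    bucket = "acme-terraform-state"   # hypothetical backend settings
    key    = "dns/terraform.tfstate"
    region = "us-east-1"
  }
}

# downstream references then need the [0] index plus a fallback for the disabled case
locals {
  dns_zone_id = var.lookup_dns ? data.terraform_remote_state.dns[0].outputs.zone : null
}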

Nikola Milic avatar
Nikola Milic

I think I’m stuck on a past decision to include stage option inside my backend declaration for my application x.

main.tf

# Backend ----------------------------------------------------------------------

 module "terraform_state_backend" {
    source                             = "cloudposse/tfstate-backend/aws"
    version                            = "~> 0.32"
    namespace                          = var.application
    stage                              = terraform.workspace
    name                               = "terraform"
    profile                            = var.aws_credentials_profile_name
    terraform_backend_config_file_path = "."
    terraform_backend_config_file_name = "backend.tf"
    force_destroy                      = false
 }

Since my workspace was dev at the time, my remote backend bucket, as I now realize, has been unfortunately called x-dev-terraform instead of what I think should have been from the beginning, just x-terraform.

Now, when I added my new terraform workspace called prod and doing a simple terraform plan, I see that it would create additional state bucket, dynamodb table etc. All of that backend-y stuff which shouldn’t really be added since my unfortunate x-dev-terraform state bucket already has subfolders for each of my workspaces, right?

So now, I’m stuck. There is a prod/ folder inside my state bucket, but the state is empty, so it wants to create everything including the backend (which I guess should not be added). If i edit this module declaration from the top, and remove the stage line, it cannot just edit resources but must replace them, which I think would break in half as it tries to keep state but also replace state bucket. How do i escape this?

In short, my idea is to recreate everything from dev back on prod, in the same state bucket, by using workspaces.

Matt Gowie avatar
Matt Gowie

@Nikola Milic the tfstate-backend module is typically (and recommended) invoked by itself as a single root module, in isolation, once for all of your other root modules. You can store all state files in a single bucket and utilize workspace_key_prefix in your backend configuration to properly separate root modules. Your workspaces will then create the various folders for each environment that you create.
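A sketch of what that looks like in one root module’s backend configuration (bucket name from this thread, everything else hypothetical):

terraform {
  backend "s3" {
    bucket               = "x-terraform"         # the single shared state bucket
    key                  = "terraform.tfstate"
    workspace_key_prefix = "main"                # one prefix per root module (hypothetical)
    region               = "us-east-1"           # hypothetical
    dynamodb_table       = "x-terraform-lock"    # hypothetical lock table
  }
}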

Matt Gowie avatar
Matt Gowie

My suggestion is to create a new root module to invoke tfstate-backend, transition your state for your root module to that new backend, and then remove + destroy the tfstate-backend module usage in your environment root module.

Nikola Milic avatar
Nikola Milic

How do you isolate a root module?

Matt Gowie avatar
Matt Gowie

Root modules are terraform projects. They store state.

Child modules are modules you consume in root modules.

Matt Gowie avatar
Matt Gowie

So what I’m saying is you’ll create a separate terraform project alongside your existing one called bootstrap or whatever you want to call it that then creates your state bucket in isolation from the rest of your terraform resources.

Nikola Milic avatar
Nikola Milic

I see. Let’s say that I create that brand new folder for creating new state bucket (which will be correctly called x-terraform). If I follow bootstraping guide on the repo, there will be a backend.tf file as a result of that process. Should that file replace current backend.tf in this original folder?

Matt Gowie avatar
Matt Gowie

Yeah.

Matt Gowie avatar
Matt Gowie

Then do a tf init and it’ll ask you to transition the state to the newly configured bucket.

Nikola Milic avatar
Nikola Milic

At what point do i delete the module declaration from my original main.tf?

Matt Gowie avatar
Matt Gowie

When doing this method, be sure to look into the workspace_key_prefix argument for the s3 backend configuration. You’ll need that to manage multiple root modules in the same bucket.

Matt Gowie avatar
Matt Gowie

After you transition state to the new bucket. Once you do that, your legacy state bucket is no longer necessary.

Matt Gowie avatar
Matt Gowie

You can keep it around for historical purposes if you want by just removing it from state, but everything should get dup’d over to the new bucket so that’s up to you.

Nikola Milic avatar
Nikola Milic

the workspace_key_prefix argument does not exist on the cloudposse module? https://github.com/cloudposse/terraform-aws-tfstate-backend

Matt Gowie avatar
Matt Gowie

No, you’ll need to supply that yourself for each root module’s backend.tf file.

Matt Gowie avatar
Matt Gowie

The other way is to create a tfstate-backend for each root module that you have, but we’re moving away from that as it’s unnecessary.

Nikola Milic avatar
Nikola Milic

Hm I’m kind of confused by the terminology, let me see if we are on the same page. When you say “each root module that you have” what do you exactly mean?

If I understand you correctly, I should have this:

/infra
  bootstrap/
    main.tf       <- backend config
  main/
    dev/
      dev.tfvars
    prod/
      prod.tfvars
    backend.tf    <- copied from bootstrap
    main.tf       <- declaration of app resources

What are the root modules in this scenario?

Matt Gowie avatar
Matt Gowie

main/ and bootstrap/ are the root modules.

Nikola Milic avatar
Nikola Milic

so there should be two backend.tf files in those two folders, each of them having additional workspace_key_prefix value same as the name of the folder

Matt Gowie avatar
Matt Gowie

In smaller projects, having one root module can work. But in larger environments where you’re managing 1000s of resources it quickly becomes a huge headache so the community best practice is to separate root modules for areas of concern to decrease blast radius (think of it as having a root module for your various tiers of infra: Network, Data, Application Cluster, Monitoring, etc.)

Matt Gowie avatar
Matt Gowie

Cloud Posse themselves goes with very fine grained root modules where they create one for each type of AWS service (check out terraform-aws-components for that), but that isn’t necessary for all projects.

cloudposse/terraform-aws-componentsattachment image

Opinionated, self-contained Terraform root modules that each solve one, specific problem - cloudposse/terraform-aws-components

Matt Gowie avatar
Matt Gowie


so there should be two backend.tf files in those two folders, each of them having additional workspace_key_prefix value same as the name of the folder
Yeah

Nikola Milic avatar
Nikola Milic

Gotcha. Thanks for the really well explained solution.

Nikola Milic avatar
Nikola Milic

I’ll try and make it work, if I get stuck, expect me here for more answers

Matt Gowie avatar
Matt Gowie

Np. This is confusing stuff for anybody newer to Terraform. Hashi doesn’t do a good job in pushing best practices as well as they should.

1
Nikola Milic avatar
Nikola Milic

Btw, one more quick question, do i need to worry about workspaces in my bootstrap project? I guess not?

Nikola Milic avatar
Nikola Milic

@Matt Gowie All went smoothly, thanks again!

sheldonh avatar
sheldonh

Related to the prior question on backend declarations. I want dynamic backend creation in S3/Dynamo, like how terragrunt does project initialization. However, I want to keep things as native terraform as possible.

I know Go. Should I just look at some code and write the backend initialization myself, or is there some Go tool I’m missing out there that creates the remote backend (dynamic S3 creation and policies)? Something like tfbackend init, so I can use native terraform for the remaining tasks without more frameworks? (I looked at Terraspace; promising, but like Terragrunt it’s another abstraction layer to troubleshoot.)

sheldonh avatar
sheldonh

Ideally I’d use the cloudposse backend config module, except I’m not ok with having to run that first to init then generate the tf file. I’m half tempted to just flip back to using terraform cloud for remote runs and be done with it.

sheldonh avatar
sheldonh

I could use Go, but more code to write and tear down, which feels like a stubborn refusal to then benefit from pulumi/terragrunt at that point

sheldonh avatar
sheldonh

Terragrunt output is so messy it’s hard to debug at times, so I’m considering backing out some of the terragrunt for native terraform stacks. I have tf var files already and a wrapper for handling this… but I don’t have backend S3 creation handled.

Even if I’m using a PR workflow with Azure Pipelines, it might just make sense to leverage terraform cloud and be done with it, I guess.

Matt Gowie avatar
Matt Gowie

The pattern nowadays with the tfstate backend module is to just create it once and use the one bucket for all root modules using the workspace_key_prefix. I and others are digging it as you only need to create the backend once and then it’s a fairly untouched root module going forward.

Does that not work for you?

sheldonh avatar
sheldonh

Ok, so one “stack” = backend creation, and just use that going forward. I thought it would cause locks due to the single DynamoDB table provisioned, but I’m guessing the lock is per backend state path instead, so I’m still not locked into a single non-parallel run for all stacks using the bucket

Matt Gowie avatar
Matt Gowie

“stack” in the SweetOps sense or stack in some other sense? Damn terminology is overused so I have to check

Matt Gowie avatar
Matt Gowie

But in general, one tfstate backend creation period.

Matt Gowie avatar
Matt Gowie

Dynamo locking still works the way you would want it to as long as you’re utilizing workspace_key_prefix.

sheldonh avatar
sheldonh

Hmm. One directory containing this action and then that’s it for that AWS account. Everything else is purely path changes. Back to using a backend config file/vars to set this at runtime.

thanks for confirming the key prefix is used with dynamo. Thought I’d be locking everything up with a single run at a time. Wasn’t aware the key prefix was the way it was locked. thanks!

oskar avatar
concourse/governanceattachment image

Documentation and automation for the Concourse project governance model. - concourse/governance


oskar avatar

just curious what others think, any 2c welcome

concourse/governanceattachment image

Documentation and automation for the Concourse project governance model. - concourse/governance


Matt Gowie avatar
Matt Gowie

Confused on what pattern you’re asking exactly? Concourse overall or some tf pattern within the files you shared?

oskar avatar

check out the inputs. it’s all yaml driven.

oskar avatar

not a common pattern.

oskar avatar

at least haven’t seen this in the wild before.

oskar avatar
locals {
  contributors = {
    for f in fileset(path.module, "contributors/*.yml") :
    trimsuffix(basename(f), ".yml") => yamldecode(file(f))
  }

  teams = {
    for f in fileset(path.module, "teams/*.yml") :
    trimsuffix(basename(f), ".yml") => yamldecode(file(f))
  }

...

resource "github_membership" "contributors" {
  for_each = local.contributors

  username = each.value.github
  role     = "member"
}

resource "github_team" "teams" {
  for_each = local.teams

  name        = each.value.name
  description = trimspace(join(" ", split("\n", each.value.purpose)))
  privacy     = "closed"

  create_default_maintainer = false
}

...
sheldonh avatar
sheldonh

It’s not uncommon I’d think. Actually moved towards using yaml for a bit and merging but there’s some gotchas and it adds an additional abstraction that can be problematic at times to troubleshoot. I ended up trying to stick mostly with TFVars when possible.

1
Matt Gowie avatar
Matt Gowie

Ah using YAML as a datasource is what you mean. Yeah this is becoming a more common approach IMO. I use it a lot for the same purpose: team.yaml, repos.yaml, accounts.yaml, etc.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(ya, tfvars cannot be loaded selectively at run time, which makes yaml better plus it’s portable)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a module for this pattern https://github.com/cloudposse/terraform-yaml-config

cloudposse/terraform-yaml-configattachment image

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-datadog-monitorattachment image

Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor

cloudposse/terraform-aws-configattachment image

This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config

cloudposse/terraform-opsgenie-incident-managementattachment image

Terraform module to provision Opsgenie resources from YAML configurations using the Opsgenie provider, complete with automated tests - cloudposse/terraform-opsgenie-incident-management

oskar avatar

thanks for the pointers there folks.
I ended up trying to stick mostly with TFVars when possible.
i feel like this should still be the default so you don’t throw away the “typing” checks and the potential vars validation. if granular loading at runtime is not an issue, i guess this is not a pattern to adopt then. very cool though for when needed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what we do is use native tf variables in our open source modules, but leverage YAML for the configuration in our components (aka root modules)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that way we get both.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

our modules validate the types, while our configuration is separate from the code.

cool-doge1
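
For reference, a minimal sketch of the split Erik describes: a root module decodes a per-environment YAML file and feeds it into a child module with typed variables (the ./modules/vpc path, the variable names, and the YAML keys are all placeholders).

variable "environment" {
  type = string
}

locals {
  # e.g. config/dev.yaml, config/prod.yaml checked in next to the root module
  config = yamldecode(file("${path.module}/config/${var.environment}.yaml"))
}

module "vpc" {
  source = "./modules/vpc" # hypothetical child module with typed variables

  cidr_block         = local.config.vpc.cidr_block
  availability_zones = local.config.vpc.availability_zones
}
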
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can anyone recommend a WAF module with kinesis firehose setup?

Alex Jurkiewicz avatar
Alex Jurkiewicz

the wafv2 resources in Terraform are quite poor. We actually stopped using them because they were so slow

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

interesting

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

do you still not stream it to firehose?

Alex Jurkiewicz avatar
Alex Jurkiewicz

we had firehose streaming for a while, but dropped it. We use access logs from cloudfront level now instead

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

we don’t use cloudwatch unfortunately

Matt Gowie avatar
Matt Gowie


We actually stopped using them because they were so slow
Slow to plan / apply?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes

Alex Jurkiewicz avatar
Alex Jurkiewicz

cloudfront access logs write to s3 fwiw

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are implementing the terraform-aws-firewall-manager with WAFv2. We stopped short of kinesis only due to time, but will probably add it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-firewall-managerattachment image

Terraform module to configure AWS Firewall Manager - cloudposse/terraform-aws-firewall-manager

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

adds firehose

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Ben Smith (Cloud Posse)

2021-06-24

Mazin Ahmed avatar
Mazin Ahmed
attachment image

I am excited to be speaking at Bsides Amman about my ongoing research on cloud security, starting with: Attack Vectors on Terraform Environments!

Save the date: July 3rd. More details to come! https://pbs.twimg.com/media/E4peMkzX0AIWWPP.jpg

1
Babar Baig avatar
Babar Baig

Hello everyone. I have a question. I want to use this module to create my organisation, workspaces and variables required by those workspaces https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/tfe/latest Below are the points where I am confused.

  1. Do I need to set up a separate repository (or even the same repository with a different path) and place all the TF Cloud related infra setup code there?
  2. Create a workspace for that repository in Terraform Cloud
  3. Create this specific workspace and variables related to it manually in Terraform Cloud. That’s what I can think of. Is there any other way? I want to know how the community is using this module.

Thanks.

Release notes from terraform avatar
Release notes from terraform
04:03:44 PM

v1.0.1 1.0.1 (June 24, 2021) ENHANCEMENTS: json-output: The JSON plan output now indicates which state values are sensitive. (#28889) cli: The darwin builds can now make use of the host DNS resolver, which will fix many network related issues on MacOS. BUG FIXES: backend/remote: Fix faulty Terraform Cloud version check when migrating…

jsonplan and jsonstate: include sensitive_values in state representations by mildwonkey · Pull Request #28889 · hashicorp/terraformattachment image

A sensitive_values field has been added to the resource in state and planned values which is a map of all sensitive attributes with the values set to true. To achieve this, I stole and exported the…

1
Mohammed Yahya avatar
Mohammed Yahya

https://github.com/hashicorp/envconsul very nice tool to pass env variables generated on the fly from Consul (configuration) or Vault (secrets)

hashicorp/envconsulattachment image

Launch a subprocess with environment variables using data from @HashiCorp Consul and Vault. - hashicorp/envconsul

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Env0 lands $17M Series A as infrastructure as code control plane gains traction – TechCrunchattachment image

As companies deliver code ever faster, they need tooling to provide some semblance of control and governance over the cloud resources being used to deliver it. Env0, a startup that is helping companies do just that, announced a $17 million Series A today. M12, Microsoft’s Venture Fund, led the roun…

4
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc: @ohad congrats!

omry avatar

Thanks @Erik Osterman (Cloud Posse)

ohad avatar

Thanks a lot @Erik Osterman (Cloud Posse)!!

Ryan Ryke avatar
Ryan Ryke

hi guys long time no talk can someone please merge https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/45

Perform aws partition lookup for arn by bwmetcalf · Pull Request #45 · cloudposse/terraform-aws-cloudtrail-s3-bucketattachment image

what Instead of requiring the user to define arn_format for gov or china regions, lookup the partition in this module why Makes using this module easier Fixes #44

1
1
Matt Gowie avatar
Matt Gowie

Released as 0.19.0 — Thanks @Ryan Ryke!

Matt Gowie avatar
Matt Gowie
Release v0.19.0 · cloudposse/terraform-aws-cloudtrail-s3-bucketattachment image

Perform aws partition lookup for arn @bwmetcalf (#45) what Instead of requiring the user to define arn_format for gov or china regions, lookup the partition in this module why Makes using this m…

Ryan Ryke avatar
Ryan Ryke

trying to use it in gov cloud

Ryan Ryke avatar
Ryan Ryke

cc @Erik Osterman (Cloud Posse)

Ryan Ryke avatar
Ryan Ryke

i lied… my problem is a flow logs issue in gov cloud…

Ryan Ryke avatar
Ryan Ryke

wave

msharma24 avatar
msharma24

What is the practice followed to grant Terraform IAM access to multiple AWS accounts? In the past I have just created one IAM user in the SharedServices account which can assume a “Terraform Deploy IAM Role with Admin Policy” in all other accounts where I wish to create resources with terraform, and I would just use the IAM Access Keys in the CICD Configuration securely.

Ryan Ryke avatar
Ryan Ryke

i usually set assume role in the provider section
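
A minimal sketch of what Ryan describes, with a placeholder account ID, role name, and bucket: the credentials available to CI belong to the shared-services account, and each provider alias assumes a deploy role in the target account.

provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::111111111111:role/TerraformDeploy" # placeholder account/role
    session_name = "terraform"
  }
}

resource "aws_s3_bucket" "example" {
  provider = aws.prod
  bucket   = "example-prod-bucket" # placeholder
}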

Release notes from terraform avatar
Release notes from terraform
10:53:41 PM

v1.0.1 1.0.1 (June 24, 2021) ENHANCEMENTS: json-output: The JSON plan output now indicates which state values are sensitive. (#28889) cli: The darwin builds can now make use of the host DNS resolver, which will fix many network related issues on MacOS. BUG FIXES: backend/remote: Fix faulty Terraform Cloud version check when migrating…

Release v1.0.1 · hashicorp/terraformattachment image

1.0.1 (June 24, 2021) ENHANCEMENTS: json-output: The JSON plan output now indicates which state values are sensitive. (#28889) cli: The darwin builds can now make use of the host DNS resolver, whi…

jsonplan and jsonstate: include sensitive_values in state representations by mildwonkey · Pull Request #28889 · hashicorp/terraformattachment image

A sensitive_values field has been added to the resource in state and planned values which is a map of all sensitive attributes with the values set to true. To achieve this, I stole and exported the…

Ryan Ryke avatar
Ryan Ryke
add arn format to the kms policy by rryke · Pull Request #27 · cloudposse/terraform-aws-vpc-flow-logs-s3-bucketattachment image

currently getting: Error: MalformedPolicyDocumentException: Policy contains a statement with one or more invalid principals. on .terraform/modules/flow_logs.kms_key/main.tf line 1, in resource "…

Ryan Ryke avatar
Ryan Ryke

i have an issue in gov cloud with the kms key being cranky at me

Ryan Ryke avatar
Ryan Ryke

looks like the bucket policy was updated to add the arn:aws-gov-cloud option but the kms key policy was not

Ryan Ryke avatar
Ryan Ryke

trying to roll with just cp modules on this customer

1
Ryan Ryke avatar
Ryan Ryke

cc @Erik Osterman (Cloud Posse) again (sorry)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan Ryke thanks for the PR, it looks good, @Matt Gowie and myself made some comments

Ryan Ryke avatar
Ryan Ryke

updated, i added iam in (missed that, thanks). not sure what you meant about changing the format

Matt Gowie avatar
Matt Gowie

@Ryan Ryke shipped as 0.12.1

Release v0.12.1 · cloudposse/terraform-aws-vpc-flow-logs-s3-bucketattachment image

Enhancements add arn format to the kms policy @rryke (#27) currently getting: Error: MalformedPolicyDocumentException: Policy contains a statement with one or more invalid principals. on .terr…

Ryan Ryke avatar
Ryan Ryke

thanks a bunch

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Ryan Ryke

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

by changing the format I meant you can provide your own value in the variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since the code now uses the var, it will work

Phil Hadviger avatar
Phil Hadviger

Anybody know where I can find info about module.this ? https://github.com/cloudposse/terraform-aws-vpc/blob/master/main.tf#L13 I haven’t been able to find anything in the Terraform docs, and have only seen this in CloudPosse modules so far.

Alex Jurkiewicz avatar
Alex Jurkiewicz

it’s a cloudposse convention to use https://github.com/cloudposse/terraform-null-label as module "this"

cloudposse/terraform-null-labelattachment image

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Alex Jurkiewicz avatar
Alex Jurkiewicz

essentially, it’s a module with no resources that exports a consistent name you can use for other resources

Phil Hadviger avatar
Phil Hadviger

I guess that’s what I’m confused about. I expected to find something like module "this" in the .tf files, but I searched through all the code and can’t find that. I see module "label" and all kinds of other references, but not the reference to this. Sorry if I’m a bit slow on the uptake.

Alex Jurkiewicz avatar
Alex Jurkiewicz

there is, in context.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-groupattachment image

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then you have module.this

Alex Jurkiewicz avatar
Alex Jurkiewicz

which is also, by convention, the same file & filename. You can see the original in the null-label repo

1
Phil Hadviger avatar
Phil Hadviger

haha… wow… it’s right there. I guess GitHub search failed me. Sorry about that.

Phil Hadviger avatar
Phil Hadviger

Thanks for taking the time!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this way, we don’t have to specify all the common vars in each module; the file just provides all the vars we use in ALL modules, so we have a consistent naming convention for the common vars

1
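
For anyone landing here later, a minimal sketch of the pattern (the module source is real; the namespace/stage/name values and the bucket are placeholders, and in the Cloud Posse modules the module "this" block and its shared variables live in the generated context.tf):

module "this" {
  source  = "cloudposse/label/null"
  version = "0.24.1" # example version

  namespace = "acme" # placeholder values
  stage     = "dev"
  name      = "app"
}

resource "aws_s3_bucket" "default" {
  bucket = module.this.id   # e.g. "acme-dev-app"
  tags   = module.this.tags
}
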
Phil Hadviger avatar
Phil Hadviger

Yeah makes perfect sense actually.

2021-06-25

Alencar Junior avatar
Alencar Junior

Folks, I was wondering how you deal with aws_ecs_task_definition and continuous delivery to ECS. Do you keep the task-definition.yml within the application repository or do you manage it with Terraform? I’m stuck on being able to build my app and deploy the latest release tag to ECS within the pipeline; however, I have environment variables and configs which depend on other terraform resources’ outputs.

Bruce avatar

We use Terraform to deploy our ECS services as part of our CI/CD pipeline. To be honest, Terraform isn’t exactly designed for the deployment of apps (Waypoint may be better for this). That said, we update the ECS task definition (in HCL) with the new service tag number, which is passed in as a variable at runtime. This then updates the service and a new one is rolled out.
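
A hedged sketch of the approach Bruce describes, assuming a hypothetical app and ECR repository: the pipeline passes -var image_tag=<new tag>, the task definition is re-rendered, and ECS rolls out a new revision.

variable "image_tag" {
  type = string
}

resource "aws_ecs_task_definition" "app" {
  family = "app" # placeholder

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:${var.image_tag}" # placeholder repo
      cpu       = 256
      memory    = 512
      essential = true
    }
  ])
}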

Bruce avatar

A downside of this approach is that it will delete the previous task definition, which means a roll back would require a redeployment of the previous version.

Alencar Junior avatar
Alencar Junior

@Bruce Thanks for sharing your approach! Indeed, that’s definitely a downside. It would be nice to keep the task definition revisions.

mfridh avatar

Seems like some good discussion about just this has been happening https://github.com/hashicorp/terraform-provider-aws/issues/258#issuecomment-655764975

aws_ecs_task_definition overwrites previous revision · Issue #258 · hashicorp/terraform-provider-awsattachment image

This issue was originally opened by @dimahavrylevych as hashicorp/terraform#8740. It was migrated here as part of the provider split. The original body of the issue is below. Hello community, I fac…

2
Chris Childress avatar
Chris Childress

Hello, everyone. Hope all’s well. Before I submit an issue on Github, I wanted to make sure I wasn’t doing something “dumb”.

I am attempting to use the rds proxy module located at “cloudposse/rds-db-proxy/aws”. I’ve filled in most of the values, and I want the module to create an IAM role for accessing the RDS authentication Secret (rather than providing my own). I’m getting the following errors when I try a “terraform plan”:

Error: expected length of name to be in the range (1 - 128), got

  on .terraform/modules/catalog_aurora_proxy/iam.tf line 78, in resource "aws_iam_policy" "this":
  78:   name   = module.role_label.id



Error: expected length of name to be in the range (1 - 64), got

  on .terraform/modules/catalog_aurora_proxy/iam.tf line 84, in resource "aws_iam_role" "this":
  84:   name               = module.role_label.id

I have tried:

• terraform init

• terraform get

• terraform get inside the module (the “cloudposse/label/null” module didn’t appear to download automatically)

Chris Childress avatar
Chris Childress

I’m using version 0.2.0 of the module, though I believe there weren’t any changes since 0.1.0?

I’m currently on terraform 0.14.11.

Chris Childress avatar
Chris Childress

Ah!! I figured it out

Chris Childress avatar
Chris Childress

it was because the “name” parameter for the RDS proxy module wasn’t set yet

Chris Childress avatar
Chris Childress

I narrowed it down by setting the iam_role_attributes field to insert a letter, which got past the IAM role creation issue and then gave me an error about the length of the name in “module.this.id”

Chris Childress avatar
Chris Childress

after I commented out the iam_role_attributes parameter line and set the name for the module, everything was fine
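
In other words, the label inputs have to produce a non-empty id. A sketch under that assumption (the values are placeholders and the remaining proxy inputs are omitted):

module "catalog_aurora_proxy" {
  source  = "cloudposse/rds-db-proxy/aws"
  version = "0.2.0"

  namespace = "acme"         # placeholder label inputs; these feed module.role_label.id
  stage     = "dev"
  name      = "aurora-proxy"

  # ...auth, subnet, and security group inputs omitted...
}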

oskar avatar

has anyone in here integrated https://driftctl.com into their workflows somehow? just curious.

Catch Infrastructure Driftattachment image

driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.

Phil Hadviger avatar
Phil Hadviger

I’ve just started last week, but still experimenting with it. I love what it does so far, but I’m struggling a bit to get the multi-region issues worked out.

1
Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

How are teams handling terraform destroy of managed s3 buckets that have 500K+ objects? We have sometimes resorted to emptying the bucket via the management portal. We have been looking at using the on-destroy provisioner step, but passing the correct creds down into the script is problematic in our case.

loren avatar

add force_destroy = true to the config, run the apply, then destroy, https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#force_destroy?
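
For context, force_destroy has to be applied to the bucket before the destroy run, otherwise Terraform refuses to delete a non-empty bucket (bucket name is a placeholder):

resource "aws_s3_bucket" "logs" {
  bucket        = "example-logs-bucket" # placeholder
  force_destroy = true
}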

Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

We have force_destroy set to true but it still takes a long long long long long time to complete.

loren avatar

oh yeah, it is sloooooow

loren avatar

i guess you could terraform state rm <bucket> and then destroy? and handle the bucket removal out-of-band?

Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

not ideal, but it is a work around

loren avatar

aws knowledge center says to use a lifecycle policy to expire/delete all objects and versions after one day
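
A sketch of that lifecycle approach, using the AWS provider 3.x inline lifecycle_rule syntax (bucket name is a placeholder). S3 expires the objects asynchronously, so the bucket empties out-of-band and a later destroy is quick.

resource "aws_s3_bucket" "big" {
  bucket = "example-very-large-bucket" # placeholder

  lifecycle_rule {
    id      = "empty-bucket"
    enabled = true

    expiration {
      days = 1
    }

    noncurrent_version_expiration {
      days = 1
    }
  }
}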

pjaudiomv avatar
pjaudiomv

Yea, for really large versioned buckets I end up running a boto script to empty the bucket before running the destroy. This is a lot quicker but certainly not ideal, and I have yet to find a better way.

2
Phil Hadviger avatar
Phil Hadviger

Not sure if it’ll help, but Terraform has this feature: https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep

Phil Hadviger avatar
Phil Hadviger

But if the S3 command itself times out, not sure that’ll help.

loren avatar

Anyone play with this tool yet? Claiming to be an open source alternative to Sentinel… https://github.com/terraform-compliance/cli

terraform-compliance/cliattachment image

a lightweight, security focused, BDD test framework against terraform. - terraform-compliance/cli

Alex Jurkiewicz avatar
Alex Jurkiewicz

that seems like it provides human readable definitions of rules, which Sentinel doesn’t provide, right?

Alex Jurkiewicz avatar
Alex Jurkiewicz

we’ve decided to go with OPA for writing rules directly against Terraform. It’s not the easiest language to work with. But I don’t have a lot of confidence there’s a clear winner in this space yet

2
loren avatar

Me either, not investing time in anything just yet

corcoran avatar
corcoran

Probably worth looking at Checkov too. Yeah OPA + Conftest is a decent shout.

2021-06-26

2021-06-27

Michael Koroteev avatar
Michael Koroteev

Hi guys, did anyone encounter this and can assist? https://github.com/cloudposse/terraform-aws-eks-cluster/issues/117 thanks!

Unable to add cluster_log_types · Issue #117 · cloudposse/terraform-aws-eks-clusterattachment image

On an existing EKS cluster that was created with this module, I'm unable to add cluster_log_types to the cluster. module "eks_cluster" { source = "cloudposse/eks-cluster/aws…

2021-06-28

Mark juan avatar
Mark juan

Anyone encountered this issue

│ Error: Error creating ElastiCache Replication Group (cosmos-test-elasticache): InvalidParameterValue: When specifying preferred availability zones, the number of cache clusters must be specified and must match the number of preferred availability zones.
│       status code: 400, request id: a29ff76d-dad3-4775-b1cf-6b265a37dbe4
│ 
│   with module.redis["cluster-2"].aws_elasticache_replication_group.redis_cluster,
│   on ../redis/main.tf line 3, in resource "aws_elasticache_replication_group" "redis_cluster":
│    3: resource "aws_elasticache_replication_group" "redis_cluster" {
│ 
╵
Mark juan avatar
Mark juan

this is my main.tf file

data "aws_availability_zones" "available" {}

resource "aws_elasticache_replication_group" "redis_cluster" {

  automatic_failover_enabled    = true
  availability_zones            = data.aws_availability_zones.available.names
  replication_group_id          = "${var.name}-elasticache"
  replication_group_description = "redis replication group"
  node_type                     = var.node_type
  number_cache_clusters         = 2
  parameter_group_name          = "default.redis6.x"
  port                          = 6379
  subnet_group_name             = aws_elasticache_subnet_group.redis_subnets.name
}

resource "aws_elasticache_subnet_group" "redis_subnets" {
  name       = "tf-test-cache-subnet"
  subnet_ids = var.redis_subnets
}
Tim Birkett avatar
Tim Birkett

Well… What region are you using? How many AZ names are returned by the data resource? Is that the same number as the number of cache clusters?

Mark juan avatar
Mark juan

ap-south-1

Mark juan avatar
Mark juan
multi-az-disabled
Mark juan avatar
Mark juan

should i enable it ?

Raja Miah avatar
Raja Miah

this is related to a mismatch with AZ == cache clusters?

1
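
One way to resolve the mismatch, assuming two cache clusters are wanted: trim the AZ list so its length matches number_cache_clusters instead of passing every AZ the data source returns.

resource "aws_elasticache_replication_group" "redis_cluster" {
  replication_group_id          = "${var.name}-elasticache"
  replication_group_description = "redis replication group"
  node_type                     = var.node_type
  number_cache_clusters         = 2
  # pass exactly as many AZs as there are cache clusters
  availability_zones            = slice(data.aws_availability_zones.available.names, 0, 2)
  automatic_failover_enabled    = true
  parameter_group_name          = "default.redis6.x"
  port                          = 6379
  subnet_group_name             = aws_elasticache_subnet_group.redis_subnets.name
}
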
praneeth avatar
praneeth

I am facing the same issue, and this says adding the element works, but it does not

support more nodes than there are AZs defined by gusse · Pull Request #108 · cloudposse/terraform-aws-elasticache-redisattachment image

what Allows not defining availability_zones Can create more nodes than you have defined AZs (once AWS provider is fixed hashicorp/terraform-provider-aws#14070 (comment)) why availability_zones p…

Rene avatar

Hello. I’m wondering about this ECS module. How exactly can I work EFS into the container definition? https://registry.terraform.io/modules/cloudposse/ecs-container-definition/aws/latest

Rene avatar

Because as it seems, I can only really use a mount_points argument; once I involve volumes it naturally doesn’t work, since this seems not to be supported. Am I missing something?

Rene avatar

Okay, so I found that it should be adjusted in the task definition. But there I seem to run into other issues..

Rene avatar
Error: Invalid value for module argument
  on main.tf line 90, in module "thehive_service_task":
  90:   volumes = [
  91:     {
  92:     host_path = "/",
  93:     name      = "efs-ecs"
  94:     efs_volume_configuration = [
  95:       {
  96:       file_system_id = "fs-XXXXX"
  97:       root_directory = "/"
  98:       transit_encryption = "ENABLED"
  99:       transit_encryption_port = null
 100:       authorization_config = []      
 101:       }
 102:     ]
 103:   #  docker_volume_configuration = null
 104:   },
 105:   ]
The given value is not suitable for child module variable "volumes" defined at
.terraform/modules/thehive_service_task/variables.tf:205,1-19: element 0:

attribute "docker_volume_configuration" is required.

Rene avatar

So I need to set the docker_volume_configuration as well - and that gets me into this issue:

Rene avatar
Plan: 1 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
module.test_service_task.aws_ecs_task_definition.default[0]: Creating...
Error: ClientException: When the volume parameter is specified, only one volume configuration type should be used.
  on .terraform/modules/test_service_task/main.tf line 36, in resource "aws_ecs_task_definition" "default":
  36: resource "aws_ecs_task_definition" "default" {
Rene avatar

Sooo.. kind of deadlocked there right now, as I cannot null the for_each either.

Rene avatar

Okay.. So this apparently works.. since it’s still implicitly applying the source-dest mapping even though you supply the EFS config.. It’s weird, and not the nicest way, or am I mistaken? Empty vars needing to be set even though the code already implied them to be empty? Feels nasty and cluttery at least.

module "test_service_task" {
  source    = "cloudposse/ecs-alb-service-task/aws"
  version   = "0.57.0"
...
  volumes = [
    {
    host_path = null
    name      = "efs-ecs"
     efs_volume_configuration = [
       {
       file_system_id = "fs-XXXX"
       root_directory = "/"
      transit_encryption = "ENABLED"
      transit_encryption_port = null
      authorization_config = []
      }
    ]
    docker_volume_configuration = []
...
Matt Gowie avatar
Matt Gowie

Anyone know if there is an open issue / discussion within the TF community around terraform.lock.hcl files not supporting multiple operating systems? Or what to do about that hangup? I’m starting to think about checking in lock files… but if they don’t work cross platform then I’m unsure how folks make em work for their whole team.

Matt Gowie avatar
Matt Gowie

Will post to #office-hours if no real good answers.

tomv avatar
terraform providers lock -platform=windows_amd64 -platform=darwin_amd64 -platform=linux_amd64
1
tomv avatar

will generate a lock.hcl for all platforms

Florian SILVA avatar
Florian SILVA

I had the issue too while using TFE. It sounds like it was because I was using the plugin_cache_dir configuration, which was in some way forcing the platform of my providers to my local terraform and not the remote one. I resolved my issue by using a .terraformignore file specifying to exclude .terraform.lock.hcl* from remote execution.

Mohammed Yahya avatar
Mohammed Yahya

I’m not using it for now, I’m adding it to gitignore; lots of issues when using CI

Mohammed Yahya avatar
Mohammed Yahya

until it mature I guess.

Matt Gowie avatar
Matt Gowie

Huh interesting. I will try out the providers lock CMD. Maybe that’ll help…

Matt Gowie avatar
Matt Gowie

My issue also could be plugin_cache_dir as I of course use that as well.

Alex Jurkiewicz avatar
Alex Jurkiewicz

You also need to lock Darwin arm64 now

Alex Jurkiewicz avatar
Alex Jurkiewicz

@Matt Gowie this is the closest I know https://github.com/hashicorp/terraform/issues/27769

Caching not usable in 0.14.x due to lock file checksums · Issue #27769 · hashicorp/terraformattachment image

Plugin caching is unusable since it fails when verifies checksum in dependency lock file. Is there way to disable this locking feature? Tbh I can see that caching feature is more needed than this s…

1
Matt Gowie avatar
Matt Gowie

Good find. That sheds some more light.

RB avatar

upvote plz https://github.com/hashicorp/terraform/pull/28700

that will eventually add the ability to templatize strings instead of being stuck creating a file just to feed it into templatefile

String templates as values by apparentlymart · Pull Request #28700 · hashicorp/terraformattachment image

This is a design prototype of one possible way to meet the need of defining a template in a different location from where it will ultimately be evaluated, as discussed in #26838. For example, a gen…

1
sheldonh avatar
sheldonh

I revisited using native terraform with terraform cloud instead of terragrunt and was annoyingly reminded of the limitations when I tried to pass my env variables file with -var-file and it complained :laughing:

I think that’s probably my biggest annoyance right now. If I could rely on env.auto.tfvars working I’d do that. Otherwise I’d have to use a CLI/Go SDK to set all the variables in the workspace in Terraform Cloud itself.

Otherwise I feel I’m back to terragrunt if I don’t want to use my own wrappers to pass environment based configurations. I do like the yaml config module, but it’s too abstracted right now for easy debugging, so I’m sticking with environment files.

msharma24 avatar
msharma24

I went down this road last month and am sticking with native TF + env files in a BB Cloud Pipeline.

Alex Jurkiewicz avatar
Alex Jurkiewicz

what doesn’t work about -var-file or auto.tfvars?

Mohammed Yahya avatar
Mohammed Yahya

my exact annoying problem - you can solve this by using an environment variable

TF_CLI_ARGS_plan=-var-file=../../env/prod/us-east-1/prod.tfvars

using this env var instead of a flag will allow running in TFC or TF CLI workflows

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@sheldonh you’re overdue for an update on atmos and what we are doing with stack configs.

sheldonh avatar
sheldonh

Sneaky! I’ll check it out soon then. I did explore stack configs a few months back and it wasn’t the right fit for me at the time.

I will say while I get the general appeal of Variant2, i’ve struggled to find use for it rather than just writing Go/PowerShell as it’s another DSL and very verbose. For you working across many tools it probably makes sense, but my last go at it didn’t provide the right fit. Always willing to recheck this stuff out though!

sheldonh avatar
sheldonh

Oh, and one thing that I found I really missed with vanilla terraform was the dependency being pulled from local outputs. For instance, I use the Cloudposse label module. I use it as an input for context for every other item in terragrunt, but with native terraform I found it more complicated to use the remote state in s3 that is also dynamically set in backend config. Felt like I was adding more complexity… though that’s just my reaction as I tried to convert back 2-3 simple modules.

sheldonh avatar
sheldonh

It’d be nice if terraform had some more opinionated workflow options built in to simplify inherited envs, such as terraform -env staging that would automatically load env/staging.tfvars…. I can wish, right!

Matt Gowie avatar
Matt Gowie

@sheldonh When I first started into terraform I created something along those lines by bash scripting around it locally by keying off of the selected workspace name: https://github.com/Gowiem/DotFiles/blob/master/terraform/functions.zsh#L1-L30

Problem of course is that it doesn’t scale. You can write scripts / make targets at the root of your terraform repo that do the same and then push your team to always use those scripts / targets… but yeah not great.

Gowiem/DotFilesattachment image

Gowiem DotFiles Repo. Contribute to Gowiem/DotFiles development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

I like Mohammed’s solution though — That’s a good one

sheldonh avatar
sheldonh

I wrote a go wrapper for project that does goyek -env dev -stack 'vpc,security-groups' -runall and loops through the directories to avoid using run-all at a parent level. Of course if things work well I also built the folder structure based on cloudposse docs so goyek -env staging -stack '02-foundation' -runall and it does runall in each directory.

I’m not saying this is perfect, but with streams of stdout/stderr it’s pretty reliable.

I like the cli args concept, will have to think on that to figure out if it does what I want as feeding outputs from one small piece into another is a strength of terragrunt. Great ideas and thank you for sharing it all!

2021-06-29

Michael Koroteev avatar
Michael Koroteev
Unable to add cluster_log_types · Issue #117 · cloudposse/terraform-aws-eks-clusterattachment image

On an existing EKS cluster that was created with this module, I'm unable to add cluster_log_types to the cluster. module "eks_cluster" { source = "cloudposse/eks-cluster/aws…

Thomas Hoefkens avatar
Thomas Hoefkens

Hi everyone, could you give me a hint on how to pass a json object as an input variable to a module… e.g. the module contains the <<POLICY EOF>> or <<ITEM >> syntax.. can I pass json into a variable and then use jsonencode in the module? If yes, how do you pass json as an input? Perhaps as a string?

Alex Jurkiewicz avatar
Alex Jurkiewicz

if the JSON has a known static structure, you can pass it as native Terraform object. If the JSON is freeform, you can pass it as a string

Thomas Hoefkens avatar
Thomas Hoefkens

Hi Alex, yes, I am trying to figure out how to pass a dynamodb item as a variable..

Thomas Hoefkens avatar
Thomas Hoefkens

The docs only show an example with an inline <<ITEM >>

Alex Jurkiewicz avatar
Alex Jurkiewicz
variable "user_provided_json" {
  type = object({
    name = string
    age = number
    alive = bool
    friends = list(string)
  })
}

or

variable "user_provided_json_string" {
  type = string
}
locals {
  user_provided_json = jsondecode(var.user_provided_json_string)
}
Thomas Hoefkens avatar
Thomas Hoefkens

if I pass it as string, do I need to escape things? I am guessing so…

Alex Jurkiewicz avatar
Alex Jurkiewicz

not sure what you mean exactly but maybe

Thomas Hoefkens avatar
Thomas Hoefkens

thank you Alex

Thomas Hoefkens avatar
Thomas Hoefkens

short question still.. if I pass it as an object, how do I pass it to the item or policy property?

Thomas Hoefkens avatar
Thomas Hoefkens

do I just say policy = var.user_provided_json ?
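
Roughly, yes, as long as the value ends up as a JSON string. A sketch assuming a hypothetical table and variable name: if the variable is a native object, jsonencode() turns it into the JSON string that arguments like item or policy expect; if it was passed in as a JSON string already, it can be used unchanged.

variable "table_item" {
  # DynamoDB item in attribute-value form, e.g. { id = { S = "abc" } }
  type = any
}

resource "aws_dynamodb_table_item" "example" {
  table_name = "example-table" # placeholder
  hash_key   = "id"            # placeholder
  item       = jsonencode(var.table_item)
}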

Grubhold avatar
Grubhold

Hi folks, I need your help in understanding the folder structure I need to have for different environments (dev, stage, prod) when building an infra very similar to https://github.com/cloudposse/terraform-aws-ecs-web-app

cloudposse/terraform-aws-ecs-web-appattachment image

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

Mohammed Yahya avatar
Mohammed Yahya

it’s up to you how you want to do this layout. if you follow the cloudposse approach, check this https://docs.cloudposse.com/tutorials/first-aws-environment/ if not, you can create workspaces using https://www.terraform.io/docs/language/state/workspaces.html

State: Workspaces - Terraform by HashiCorp

Workspaces allow the use of multiple states with a single configuration directory.
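
For the workspace route, a minimal sketch of keying per-environment values off terraform.workspace (the instance types are placeholders):

locals {
  env = terraform.workspace # e.g. "dev", "stage", "prod"

  instance_type_by_env = {
    dev   = "t3.small"
    stage = "t3.medium"
    prod  = "t3.large"
  }

  instance_type = local.instance_type_by_env[local.env]
}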

Grubhold avatar
Grubhold
10:21:29 PM

Thank you for the link @Mohammed Yahya, actually it doesn’t matter which approach. I’m just trying to understand how to structure things to be able to deploy to 3 different environments with slight differences, choosing the best approach for my case. I currently have the following folder structure. I’m unsure how to manage tfvars files for each environment with the modules structure of Cloud Posse

Grubhold avatar
Grubhold
10:22:47 PM
2
Mohammed Yahya avatar
Mohammed Yahya

for the modules folder:

Grubhold avatar
Grubhold

@Mohammed Yahya yes?

Grubhold avatar
Grubhold
12:11:12 PM

I thought of it like this. But I’m not sure whether this works with how Cloud Posse has structured the modules etc.

Grubhold avatar
Grubhold

The .tf files found in root are the ECS https://github.com/cloudposse/terraform-aws-ecs-web-app

Mohammed Yahya avatar
Mohammed Yahya

I see, this is my approach for layout https://github.com/mhmdio/terraform-templates-base

mhmdio/terraform-templates-baseattachment image

Terraform Templates Base - monoRepo. Contribute to mhmdio/terraform-templates-base development by creating an account on GitHub.

1
Mohammed Yahya avatar
Mohammed Yahya

I think there are tons of ways to do the layout, just test and see what matches your use case.

1
Grubhold avatar
Grubhold

Thanks for sharing, the template looks very clean. You just earned yourself a follower and a ^^

1
jason einon avatar
jason einon

Hey, anyone ever had to do a TF-based interview test… looking to pull one together and thought this would be a good place for some ideas

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think it can be easy to set the bar too high with a test of a specific tool. I tend to ask candidates if they know how to create a resource that will sometimes be deployed and sometimes not, eg count 0/1.
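
For reference, the count 0/1 idiom Alex mentions looks like this (the resource and names are placeholders):

variable "enabled" {
  type    = bool
  default = true
}

resource "aws_sns_topic" "alerts" {
  count = var.enabled ? 1 : 0 # created only when enabled
  name  = "alerts"            # placeholder
}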

sheldonh avatar
sheldonh

Agreed. Ability to learn and show adaptability for infra as code etc over specific domain language.

Before I had to do it recently I had never deployed a load balancer due to the nature of my work and VPC/subnets was new due to limitations in last company. Now I’ve done full stack deployment.

Eagerness to demonstrate some area of infra as code makes sense, but maybe let them pick what they excel at so they can shine?

If they have only used console… that’s a different story and will answer if they are starting from scratch on infra as code.

sheldonh avatar
sheldonh

I’d also suggest maybe offering part of it to be show and tell and let them show something they are enthusiastic about

Mohammed Yahya avatar
Mohammed Yahya

also understand this very well (automation for terraform), since this topic is most probably what you’re gonna work on in real life, besides normal tf dev tasks. https://learn.hashicorp.com/collections/terraform/automation

Automate Terraform | Terraform - HashiCorp Learnattachment image

Automate Terraform by running Terraform in Automation with CircleCI, or following guidelines for other CI/CD platforms.

Vikram Yerneni avatar
Vikram Yerneni

Folks, anyone here got into Terraform module testing for infrastructure code? Source: https://www.terraform.io/docs/language/modules/testing-experiment.html#writing-tests-for-a-module

sheldonh avatar
sheldonh

i took a minor swing at it with Terratest as I’m working with Go now. I haven’t tried the new testing framework in the current experiment. I’m kinda waiting for this work to stabilize before I mess around with it.

Vikram Yerneni avatar
Vikram Yerneni

Thanks Sheldon

2021-06-30

Release notes from terraform avatar
Release notes from terraform
04:13:45 PM

v1.1.0-alpha20210630 1.1.0 (Unreleased) NEW FEATURES: cli: terraform add generates resource configuration templates (#28874) config: a new type() function, only available in terraform console (#28501)…

commands: `terraform add` by mildwonkey · Pull Request #28874 · hashicorp/terraformattachment image

terraform add generates resource configuration templates which can be filled out and used to create resources. The template is output in stdout unless the -out flag is used. By default, only requir…

lang/funcs: add (console-only) TypeFunction by mildwonkey · Pull Request #28501 · hashicorp/terraformattachment image

The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it&#39;s handy to be abl…

Alex Jurkiewicz avatar
Alex Jurkiewicz

Hmm, weird workflow to import and then add

Alex Jurkiewicz avatar
Alex Jurkiewicz

I guess it is aimed at teams who are importing legacy stuff
