#terraform (2021-07)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-07-30

Ricardo Underwood avatar
Ricardo Underwood

hi everyone, I’m trying to create a GKE cluster, but somehow I’m getting a weird problem. Maybe someone has seen this before; any comments are welcome. Terraform v1.0.0, Google Provider v3.77, terraform-google-modules/kubernetes-engine/google v16.0.1

Ricardo Underwood avatar
Ricardo Underwood
2021-07-30T13:42:43.732Z [INFO]  provider.terraform-provider-google_v3.77.0_x5: 2021/07/30 13:42:43 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 404 Not Found
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Fri, 30 Jul 2021 13:42:43 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0

{
  "error": {
    "code": 404,
    "message": "Not found: projects/common-yvfxgaubr/locations/europe-west1/clusters/gke-common-cluster1.",
    "errors": [
      {
        "message": "Not found: projects/common-yvfxgaubr/locations/europe-west1/clusters/gke-common-cluster1.",
        "domain": "global",
        "reason": "notFound"
      }
    ],
    "status": "NOT_FOUND"
  }
}

POST /v1beta1/projects/common-yvfxgaubr/locations/europe-west1/clusters?alt=json&prettyPrint=false HTTP/1.1
Host: container.googleapis.com
User-Agent: google-api-go-client/0.5 Terraform/1.0.0 (+<https://www.terraform.io>) Terraform-Plugin-SDK/2.5.0 terraform-provider-google/3.77.0 blueprints/terraform/terraform-google-kubernetes-engine/v16.0.1
Content-Length: 1755
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.16.2 gdcl/20210606
Accept-Encoding: gzip

{
 "cluster": {
  "addonsConfig": {
   "horizontalPodAutoscaling": {
    "disabled": false
   },
   "httpLoadBalancing": {
    "disabled": false
   },
   "networkPolicyConfig": {
    "disabled": false
   }
  },
  "autopilot": {
   "enabled": false
  },
  "autoscaling": {
   "autoprovisioningNodePoolDefaults": {}
  },
  "binaryAuthorization": {
   "enabled": false
  },
  "databaseEncryption": {
   "state": "DECRYPTED"
  },
  "defaultMaxPodsConstraint": {
   "maxPodsPerNode": "110"
  },
  "initialClusterVersion": "1.20.8-gke.900",
  "ipAllocationPolicy": {
   "clusterSecondaryRangeName": "europe-west1-subnet-01-secondary-pods",
   "servicesSecondaryRangeName": "europe-west1-subnet-01-secondary-svc",
   "useIpAliases": true
  },
  "legacyAbac": {
   "enabled": false
  },
  "locations": [
   "europe-west1-b",
   "europe-west1-c",
   "europe-west1-d"
  ],
  "loggingService": "logging.googleapis.com/kubernetes",
  "maintenancePolicy": {
   "window": {
    "dailyMaintenanceWindow": {
     "startTime": "05:00"
    }
   }
  },
  "masterAuth": {
   "clientCertificateConfig": {}
  },
  "masterAuthorizedNetworksConfig": {
   "cidrBlocks": [
    {
     "cidrBlock": "10.20.0.0/14",
     "displayName": "All Networks"
    }
   ],
   "enabled": true
  },
  "monitoringService": "monitoring.googleapis.com/kubernetes",
  "name": "gke-common-cluster1",
  "network": "projects/host-project-tozs/global/networks/global-network-demo",
  "networkConfig": {},
  "networkPolicy": {
   "enabled": true,
   "provider": "CALICO"
  },
  "nodePools": [
   {
    "config": {
     "serviceAccount": "[email protected]",
     "workloadMetadataConfig": {
      "nodeMetadata": "GKE_METADATA_SERVER"
     }
    },
    "locations": [
     "europe-west1-b",
     "europe-west1-c",
     "europe-west1-d"
    ],
    "name": "default-pool"
   }
  ],
  "shieldedNodes": {
   "enabled": true
  },
  "subnetwork": "projects/host-project-tozs/regions/europe-west1/subnetworks/europe-west1-subnet-01",
  "verticalPodAutoscaling": {
   "enabled": true
  },
  "workloadIdentityConfig": {
   "identityNamespace": "common-yvfxgaubr.svc.id.goog"
  }
 }
}


2021-07-30T13:42:44.721Z [INFO]  provider.terraform-provider-google_v3.77.0_x5: 2021/07/30 13:42:44 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 500 Internal Server Error
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Fri, 30 Jul 2021 13:42:44 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0

{
  "error": {
    "code": 500,
    "message": "Internal error encountered.",
    "errors": [
      {
        "message": "Internal error encountered.",
        "domain": "global",
        "reason": "backendError"
      }
    ],
    "status": "INTERNAL"
  }
}
Almondovar avatar
Almondovar
02:50:16 PM

hi all, anyone recently updated the cloudinit from 2.1 to 2.2? any pitfalls that we need to be aware of?

Jonas Steinberg avatar
Jonas Steinberg

why does terraform’s yamldecode function sort lexicographically on keys?

> yamldecode("{a: 1, c: 3, b: 2}")
{
  "a" = 1
  "b" = 2
  "c" = 3
}
Jonas Steinberg avatar
Jonas Steinberg

thankfully (for me) these structures also sort lexicographically, so zipping lists works out.

Alex Jurkiewicz avatar
Alex Jurkiewicz

A yaml object has no intrinsic sorting, so “why” is probably just “that’s what the deserialiser happens to do”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, in Go, map iteration order is not preserved, so they probably just sort it by default
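
This matches how Terraform’s functions are documented to behave: `keys()` returns keys in lexical order, and `values()` returns values in the same key order. A minimal sketch (hypothetical locals) showing why zipping the two back together is safe:

```hcl
# Sketch: yamldecode returns an object whose keys Terraform
# iterates in lexical order, regardless of source-YAML order.
locals {
  decoded = yamldecode("{a: 1, c: 3, b: 2}")

  # keys() is documented to return keys in lexical order: ["a", "b", "c"];
  # values() returns the values in that same key order: [1, 2, 3];
  # so zipmap() reconstructs the mapping consistently.
  roundtrip = zipmap(keys(local.decoded), values(local.decoded))
}
```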

sheldonh avatar
sheldonh

Now that Terraform is officially publishing regularly updated Docker images (pretty sure it’s recent that they’re officially doing this?), is anyone using Terraform via docker run only?

I need to check, but I think I’d need to mount the .aws/cred, .terraformrc, and local directory at a minimum. If I wrap that up in an alias, I’m curious what issues folks have run into with Docker-based runs.

Alex Jurkiewicz avatar
Alex Jurkiewicz

the docker hub image has existed for ages, but it went through a long period of neglect. Nice to see it’s being updated again.

Personally I use tfenv everywhere, even in CI

1
sheldonh avatar
sheldonh

I’ve been using tfswitch. I like tfenv too. I guess I liked that tfswitch read the version information from the terragrunt or terraform plans and would allow dynamic switching. Practically I realize I haven’t had need for that as much as I thought

Was thinking Docker would simplify versioning among many environments with fewer issues, though, so I’ll have to give it a shot and see.

Reinholds Zviedris avatar
Reinholds Zviedris

I would suggest looking at Atlantis, as it also supports multiple TF versions and is quite nicely integrated into CI/CD pipelines.

1

2021-07-29

mcseoliver avatar
mcseoliver

Is anyone here good at Terraform for OCI and GCP?

Kenan Virtucio avatar
Kenan Virtucio

Hello, I’m using this module https://registry.terraform.io/modules/cloudposse/cloudfront-cdn/aws/latest — is there a way to modify the Default (*) behavior in the ordered_cache input?

Ryan Ryke avatar
Ryan Ryke

hello, updated the terraform-aws-backup module to work in gov cloud… was following the pattern that was recently used in the flow logs s3 bucket module: https://github.com/cloudposse/terraform-aws-backup/pull/22

Update main.tf by rryke · Pull Request #22 · cloudposse/terraform-aws-backup attachment image

what updating the iam are for gov cloud usage why default arns will not work references Link to any supporting github issues or helpful documentation to add some context (e.g. stackoverflow). Us…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Ryan Ryke

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please run

make init
make github/init
make readme
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and terraform fmt

Ryan Ryke avatar
Ryan Ryke
06:54:29 PM

apparently my developer tools are broken… reinstall

joshmyers avatar
joshmyers

Any nicer way than this to lowercase keys/values of a map() into a new map?

joshmyers avatar
joshmyers
  lower_keys = flatten([
    for k in keys(module.label.tags) : [
      lower(k)
    ]
  ])

  lower_values = flatten([
    for k in values(module.label.tags) : [
      lower(k)
    ]
  ])

  foo = zipmap(local.lower_keys, local.lower_values)
joshmyers avatar
joshmyers

nvm, got it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

{ for k,v in module.label.tags … }

1
joshmyers avatar
joshmyers

Yup

joshmyers avatar
joshmyers

brain fart. Thanks.
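Spelled out, the for-expression does the whole keys/values/zipmap dance in one pass (a sketch, assuming `module.label.tags` is a map of strings):

```hcl
locals {
  # Lowercase both keys and values of the tags map in a single expression.
  lower_tags = { for k, v in module.label.tags : lower(k) => lower(v) }
}
```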

2021-07-28

Almondovar avatar
Almondovar

Hi all, the renovate bot suggested updating the EKS module terraform-aws-modules/eks/aws from version 15.2.0 to 17.1.0. Since this will be applied to production, how can i understand the underlying changes it will trigger? since there are 8 versions’ difference, reading 8 different READMEs is not the proper approach, i guess, right? thanks!

Mohammed Yahya avatar
Mohammed Yahya

your best option may be to create a new branch and test with terraform plan, without applying, and see the changes; if anything will be replaced, then you need to plan out the migration to the new changes.

Almondovar avatar
Almondovar

alright, sounds like a solid plan, thanks mate!

Almondovar avatar
Almondovar

i am getting the error

│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 3.40.0, 3.42.0, >= 3.43.0

but in our providers.tf file we got

required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.42.0"
    }

any ideas what might be wrong please?
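
That error usually means another module in the same configuration pins a conflicting constraint: here something requires exactly 3.42.0 while the upgraded EKS module requires >= 3.43.0, so no single version satisfies all constraints. A hedged sketch of one way out, loosening the exact pin (whether `~> 3.43` is acceptable for the root configuration is an assumption):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # "~> 3.43" allows any 3.x release at or above 3.43.0,
      # which can satisfy both the module's ">= 3.43.0" constraint
      # and the root configuration at once.
      version = "~> 3.43"
    }
  }
}
```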

Mohammed Yahya avatar
Mohammed Yahya

I saw this today, funny enough. I tried lowering the version to something like 3.41.0, ran init, then updated it back to 3.42.0

Almondovar avatar
Almondovar

i had to stop because it seems that some breaking changes take place

 Error: Invalid function argument
│
│   on .terraform/modules/eks/local.tf line 188, in locals:
│  187:     for index in range(var.create_eks ? local.worker_group_count : 0) : templatefile(
│  188:       lookup(
│  189:         var.worker_groups[index],
│  190:         "userdata_template_file",
│  191:         lookup(var.worker_groups[index], "platform", local.workers_group_defaults["platform"]) == "windows"
│  192:         ? "${path.module}/templates/userdata_windows.tpl"
│  193:         : "${path.module}/templates/userdata.sh.tpl"
│  194:       ),
│  195:       merge({
│  196:         platform            = lookup(var.worker_groups[index], "platform", local.workers_group_defaults["platform"])
│  197:         cluster_name        = coalescelist(aws_eks_cluster.this[*].name, [""])[0]
│  198:         endpoint            = coalescelist(aws_eks_cluster.this[*].endpoint, [""])[0]
│  199:         cluster_auth_base64 = coalescelist(aws_eks_cluster.this[*].certificate_authority[0].data, [""])[0]
│  200:         pre_userdata = lookup(
│  201:           var.worker_groups[index],
│  202:           "pre_userdata",
│  203:           local.workers_group_defaults["pre_userdata"],
│  204:         )
│  205:         additional_userdata = lookup(
│  206:           var.worker_groups[index],
│  207:           "additional_userdata",
│  208:           local.workers_group_defaults["additional_userdata"],
│  209:         )
│  210:         bootstrap_extra_args = lookup(
│  211:           var.worker_groups[index],
│  212:           "bootstrap_extra_args",
│  213:           local.workers_group_defaults["bootstrap_extra_args"],
│  214:         )
│  215:         kubelet_extra_args = lookup(
│  216:           var.worker_groups[index],
│  217:           "kubelet_extra_args",
│  218:           local.workers_group_defaults["kubelet_extra_args"],
│  219:         )
│  220:         },
│  221:         lookup(
│  222:           var.worker_groups[index],
│  223:           "userdata_template_extra_args",
│  224:           local.workers_group_defaults["userdata_template_extra_args"]
│  225:         )
│  226:       )
│  227:     )
│     ├────────────────
│     │ local.workers_group_defaults["platform"] is "linux"
│     │ path.module is ".terraform/modules/eks"
│     │ var.worker_groups is tuple with 1 element
│
│ Invalid value for "path" parameter: no file exists at #!/bin/bash -xe
│
│ # Allow user supplied pre userdata code
│ ${pre_userdata}
│
│ # Bootstrap and join the cluster
│ /etc/eks/bootstrap.sh --b64-cluster-ca '${cluster_auth_base64}' --apiserver-endpoint '${endpoint}' ${bootstrap_extra_args} --kubelet-extra-args '${kubelet_extra_args}' '${cluster_name}'
│
│ # Allow user supplied userdata code
│ ${additional_userdata}
│ ; this function works only with files that are distributed as part of the configuration source code, so if this file will be created by a resource in this configuration you must instead obtain this result
│ from an attribute of that resource.
╵
Brad McCoy avatar
Brad McCoy

Hi all I just finished my latest blog on Getting Certified in Terraform, hope it helps people that want to go for the cert: https://bradmccoydev.medium.com/devops-journey-how-to-get-certified-in-terraform-c0bce1caa3d?source=friends_link&sk=517761f1f657b610207662d6a87cf871

DevOps Journey — How to get certified in Terraform attachment image

This month I have been putting my team through the Hashicorp Terraform Associate exam and decided to take the exam first myself to lead by…

1
othman issa avatar
othman issa

Hello everyone here

othman issa avatar
othman issa

i have an issue in terraform connecting modules together, i need help plz?

venkata.mutyala avatar
venkata.mutyala

So having one module use another module?

sheldonh avatar
sheldonh

@Erik Osterman (Cloud Posse) is there a template/starter out there for variant, just for running a terraform workflow over a couple of directories without all the placeholder yaml? I find the examples repo for using Atmos/terraform confusing due to the level of placeholder yaml files I need to replace. Was hoping to try again but find a barebones one.

Also I am assuming the workflow examples I’ve seen that mention backend as first step just run your cloudposse backend tf state module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
variant2/examples at master · mumoshu/variant2 attachment image

Turn your bash scripts into a modern, single-executable CLI app today - variant2/examples at master · mumoshu/variant2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is the best place to start

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I wouldn’t use our variant code to start - it’s a master class and more complicated.

sheldonh avatar
sheldonh

Thank you!

sheldonh avatar
sheldonh

That explains my confusion. The last time i used the yaml stack config it was brand new, and I got seriously confused after 2-3 days of work on it. Will revisit now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s less confusing now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It is definitely a different way to think about terraform, but it’s proven to be immensely popular with our customers.

sheldonh avatar
sheldonh

Thank you! I tried it when it still wasn’t mature, but really wanted to avoid terragrunt for this project if I could stick with native, as it’s so hard to convert back later on.

sheldonh avatar
sheldonh

Will take a look! Appreciate all the insight as always

sheldonh avatar
sheldonh

@Erik Osterman (Cloud Posse) is this part of the tutorial still relevant with atmos? https://docs.cloudposse.com/tutorials/atmos-getting-started/#5-invoke-an-atmos-workflow

Was trying to avoid using the entire atmos config/docker setup and focus just on what’s being run with variant/cli without dependency on the geodesic container for now.

NOTE: I built my entire terragrunt stack to start with the terraform-null-label + random pets so I’m feeding those values into every single part of deployment from thereafter. That’s another reason I’m trying to make sure I know where the logic is being set for the workflow, because ideally I don’t gut all of that work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Geodesic is not required

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It just makes it easier to start up since we document all the requirements in the Docker file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also check out geodesic today. We invested a lot into usability. It’s hard to make a mistake now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That was about 4 months ago

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There is also a geodesic tutorial

sheldonh avatar
sheldonh

I can. Trying to keep it easy to plug into azure pipelines + spacelift and don’t need any role assumption and all the rest. If I can execute the planning commands with zero need to customize then it might be worth using. Otherwise, I still have to call it externally.

Are you saying basically the entire stack stuff is just native terraform, other than the workflows, which is variant still?

sheldonh avatar
sheldonh

Oh, I think I’m close. My problem is most likely that I don’t see any variant files for the examples on plan-all. I’m trying to figure out where this stuff sits. I see the atmos repo has terraform-plan files, which are useful and show me that the config/vars are being set. Going to go look in github org and search for more on where the plan-all type logic is coming in.

variable "default-args" {
    type  = list(string)
    value = [
      "-out",
      "${opt.stack}-${param.component}.planfile",
      "-var-file",
      "${opt.stack}-${param.component}.terraform.tfvars.json"
    ]
  }
sheldonh avatar
sheldonh

Maybe this an import or abstracted somewhere i’m missing

sheldonh avatar
sheldonh

There we go… logging my progress for others in case this is useful.

… so the plan-all = https://github.com/cloudposse/reference-architectures/blob/63b99a8f7f8e8dc06de6de4cda1465154ab00632/stacks/workflows-ue2.yaml#L9

This is actually calling the -job: terraform plan vpc. In looking at the atmos repo you can see:

job "terraform plan" {
  concurrency = 1
  description = "Run 'terraform plan'"

  parameter "component" {

....

So I’m seeing that these job names (which surprised me with a space in them, making me think they were subcommands) are actually variant run jobs, configured from the yaml file to organize the list of jobs to iterate on.

reference-architectures/workflows-ue2.yaml at 63b99a8f7f8e8dc06de6de4cda1465154ab00632 · cloudposse/reference-architectures attachment image

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - reference-architectures/workflows-ue2.yaml at 63b99a8f7f8e8dc06de6de4cda14651…

sheldonh avatar
sheldonh

And lastly the way this is being built is from the atmos/modules/workflow/workflow.variant file. https://github.com/cloudposse/atmos/blob/a5fd6ea6515e40b0ac278e454db86009bdcb3286/atmos/modules/workflow/workflow.variant#L55

This shows it does an HCL `items = conf.env_config.workflows[param.workflow].steps` and uses this to build the collection of jobs to run one at a time, from what I see.

Geez. No wonder my first go-around was confusing. Super flexible, but I was expecting this in code/Go/scripts. HCL made this a bit confusing since I wasn’t aware it was building the entire flow inside the HCL variant files.

cc @Erik Osterman (Cloud Posse) there’s my logged results I know things are changing still. Unless you say something is way off I plan on trying out the workflow file + terraform variant files for now with some terraform stack yaml from your tutorials repo and see how it goes. I think this is all compatible with spacelift and the work you’ve been doing.

Thank you!

atmos/workflow.variant at a5fd6ea6515e40b0ac278e454db86009bdcb3286 · cloudposse/atmos attachment image

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - atmos/workflow.variant at a5fd6ea6515e40b0ac278e454db86009bdcb3286 · cloudposse/atmos

sheldonh avatar
sheldonh

Getting there! variant run terraform plan tfstate-backend --stack euc1-root ran. The inheritance with yaml stuff adds complexity to do natively. Pretty awesome how the generate command sets up the dynamic key info in the backend each time. This is the stuff I was using terragrunt to solve.

If it works out I’ll post up a blog post on my experience. I’m thankful for the terraform module variant files as that’s pretty darn cool! Can’t wait to try out the spacelift stuff too. Thank you!

Fernando avatar
Fernando

@sheldonh could you please share your blog url here? I’m quite interested on knowing how this process ends. Thanks!

sheldonh avatar
sheldonh

Sure! Don’t have a post yet. We’ll see how it goes. I post a weekly update here https://app.mailbrew.com/sheldonhull?aff=sheldonhull

Sheldon Hull attachment image

Create beautiful, automated newsletters with content from the sites and apps you love.

1
sheldonh avatar
sheldonh

@Erik Osterman (Cloud Posse) real quick, just to confirm I’m not on the wrong track….

With the new work y’all are doing am I going down a path that y’all are moving away from by using variant to run the defined workflow from the yaml (like ue1.dev.yml) file? I finally figured out that stuff yesterday. (Avoiding Atmos for now to understand the workflow itself better)

I’m guessing that I define the workflow for deploy (probably already there) and it’s basically running the back end generate initialization plan and apply.

My option using the yaml stack config seems to be use that variant based workflow (that Atmos is a wrapper for I think) and it manages my backend state and commands and hopefully this sets everything up with spacelift next week.

Otherwise I write tasks to do the same steps that variant is doing in my own Task library which I’m trying to avoid so I can benefit from work y’all have done. I see that I can’t just run the yaml stack config without variant/wrapper because it requires the dynamic building of the backend.tf.json.

It’s making a lot more sense after digging through it yesterday.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


it’s basically running the back end generate initialization plan and apply.
Yes, we write a backend.tf.json that is derived from the deep merged configuration of backend. Then we write a terraform.tfvars.json which is based on the deep merged configuration of vars

This deepmerging is what’s so powerful. It’s what allowed us to build a service catalog. We can deploy services this easily:

import:
- catalog/eks
- catalog/rds
- catalog/vpc

And we’ll have a new environment with a VPC, EKS cluster and RDS cluster in 4 lines of code.

Plus, we can override settings for anything very succinctly as well.

Everything is ultimately pure terraform using our custom terraform provider.

the stack config allows us to define configuration in a tool agnostic way. so we have written atmos as one tool. but we also have written many other things that operate on the stack configuration. stack configs provide something impossible to do in OEM terraform: deep merging, imports, YAML anchors, etc. And yet, since it’s portable, we’re able to use pure terraform with it.

https://github.com/cloudposse/terraform-provider-utils

https://github.com/cloudposse/terraform-yaml-stack-config

https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation

https://github.com/cloudposse/terraform-tfe-cloud-infrastructure-automation (not currently maintained, but shows how we defined a portable config format, and then we were able to use it interchangeably with spacelift, atmos, and TFC).

• Also, I think the word workflow is used interchangeably, so I’m not sure of the usage in the context.

What we were talking about yesterday on the call is to think more in terms of policies than workflows.

Atmos implements something we called workflows, which are not policy based. They just run a set of steps in order. But that is what I say we should move away from. Also, just a reminder: we’re rewriting atmos in native Go and moving away from variant. Variant served its purpose: it helped us prototype what we wanted, but now that we know what the interface is, we’re making a CLI for it. Also, atmos is not to be confused with a general purpose task runner like make or gotask.

With spacelift, we implement a policy driven approach. But it won’t make much sense until you see it first hand. This stuff is radically farther ahead than the way 99% of companies use terraform or even think of using terraform.

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

even the way most companies use policies with terraform is different. e.g. atlantis added support for policies, but like most other solutions, those policies are scoped to what one run of terraform is doing. Imagine if you had a policy that governed every execution of terraform (plan/apply) across all root modules, and was aware of all other root modules. so you could have an overarching collection of policies that govern when/how everything runs.

cool-doge1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…that’s possible with spacelift

sheldonh avatar
sheldonh
07:32:47 PM

@Erik Osterman (Cloud Posse) This really helped. Variant is neat, but the “workflow” was very confusing initially because variant was newer to me. It duplicates what I coded in my Go based cli task and so abstracting from Go was a bit clunkier than just writing the code for me.

My problem is that:

  • root module/core infra is very hard to refactor once you start relying on it across the team. Thus trying to get this ready for something like Spacelift.
  • I like terragrunt for DRY, but trying to stick with native so the work I do will benefit more of the org without relying on more Terragrunt quirks.

Right now, as part of an application team rather than cloud operations, I own one complete aws app. This means I’m doing the entire stack in dev/prod for VPC -> ECS tasks. So my takeaway:

  1. Variant, Make, Mage, whatever to call terraform native init, plan, and apply at this time. That’s basically what variant was trying to do.
  2. Variant workflows are just another form of the same problem: cli driven vs policy driven. Policy driven is more the future, BUT right now it’s not plug-and-play. It’s likely something I’ll need to do after I get things working with spacelift and my stack configs.

Therefore my action item is use whatever to do the run commands right now, leverage the yaml stacks to be compatible with this new approach being taken, and come back soon to look into the policy driven approach before I scale to team.

Sound about right? On the same page? Thanks again, I’ve really enjoyed the creative thinking you have on this topic and appreciate all the pointers!

Sound about right? Is your new Go based cli open sourced yet so I can look sometime as that’s what I’m now working in full time. Would like to see what’s there to learn and maybe eventually contribute if I see something I can help with.

sheldonh avatar
sheldonh

Love how much of what I’ve been doing crosses over into similar areas. Just noticed the randompet script that’s using yq. I love that I used yq to convert a powershell object into datadog configuration files dynamically. It’s a great tool! I also get a laugh every time folks look strangely at me for itchy-elephant or the like in the vpc name

Jaden Sullivan avatar
Jaden Sullivan

Been working on the Kafka module - still having issues with the Zone ID. Documentation says it’s not necessary afaik, but running without the variable makes TF lose its mind. I’ve got a zone being created on my end, and tried directly referencing that in the module call for the Kafka, but that doesn’t seem to be working either

Release notes from terraform avatar
Release notes from terraform
06:03:41 PM

v1.1.0-alpha20210728 1.1.0 (Unreleased) NEW FEATURES: cli: terraform add generates resource configuration templates (#28874) config: a new type() function, only available in terraform console (#28501)

commands: `terraform add` by mildwonkey · Pull Request #28874 · hashicorp/terraform attachment image

terraform add generates resource configuration templates which can be filled out and used to create resources. The template is output in stdout unless the -out flag is used. By default, only requir…

lang/funcs: add (console-only) TypeFunction by mildwonkey · Pull Request #28501 · hashicorp/terraform attachment image

The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it&#39;s handy to be abl…

2021-07-27

Almondovar avatar
Almondovar

Hi guys, i am successfully importing DBs into terraform, but i have an issue: there is a second db with the same name, but in tokyo instead of frankfurt, and although the second module has a different name, when i import it with the same identifier/name, it messes with the first database. How can i make terraform understand that the second db with the same name is in tokyo and not frankfurt? i searched for a region tag in the module’s [inputs](https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest?tab=inputs) but no luck

Alex Jurkiewicz avatar
Alex Jurkiewicz

The AWS provider can only work on a single region at a time. Look up multiple provider configurations and provider aliases.

You need two AWS providers in your code, one for each region

Almondovar avatar
Almondovar

Hi alex, i tried to work with aliases,

provider "aws" {
  alias      = "fr"
  access_key = "${var.AWS_ACCESS_KEY_ID}"
  secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
  region     = "eu-central-1"

Then i tried to call it within the module - but i cannot call the provider from within the module

 module "db1" {
  source  = "terraform-aws-modules/rds/aws"
  version = "3.3.0"

  providers {
    aws = "aws.fr"
  }

  identifier = "db1"

This is the detailed error

│ Error: Unsupported block type
│
│   on main.tf line 221, in module "db1":
│  221:   providers {
│
│ Blocks of type "providers" are not expected here.
Alex Jurkiewicz avatar
Alex Jurkiewicz

providers = {

Almondovar avatar
Almondovar

that was it! thank you very much!!
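
Putting the thread together, a sketch of the two-region layout (the `tokyo` alias, region names, and second-module arguments are assumptions):

```hcl
provider "aws" {
  alias  = "fr"
  region = "eu-central-1"
}

provider "aws" {
  alias  = "tokyo"
  region = "ap-northeast-1"
}

module "db1" {
  source  = "terraform-aws-modules/rds/aws"
  version = "3.3.0"

  # providers is an argument (note the "="), not a block.
  providers = {
    aws = aws.fr
  }

  identifier = "db1"
}

module "db2" {
  source  = "terraform-aws-modules/rds/aws"
  version = "3.3.0"

  providers = {
    aws = aws.tokyo
  }

  # The same identifier is fine here: each module instance
  # talks to its own region via its configured provider.
  identifier = "db1"
}
```

An import against module.db2 then resolves through the tokyo provider, so the two same-named databases no longer collide.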

sheldonh avatar
sheldonh

@ or other spacelift contact… curious if any further updates on support for Azure DevOps Pull requests yet?

Paweł Hytry - Spacelift avatar
Paweł Hytry - Spacelift

Launching on Monday! Will let you know as soon as it’s live

1
sheldonh avatar
sheldonh

woot woot

sheldonh avatar
sheldonh

Any details? PR comments on plan or anything else? Looking forward to checking things out

Paweł Hytry - Spacelift avatar
Paweł Hytry - Spacelift

Integration with Azure DevOps repos and webhooks are there, PR comments will be coming a bit later; hope we will be able to get some feedback from you and a few other customers and iterate (=add what’s needed) very quickly

bananadance1
Rhys Davies avatar
Rhys Davies

Hey all, having a problem on Terraform 0.11 with AWS. Under what circumstances would taint not taint a resource, such that it would not be recreated on the next apply? I wasn’t aware of any circumstance, but currently I am trying to taint a DB resource so that it can be recreated. I see:

The resource aws_db_instance.test_db in the module root has been marked as tainted!

after running terraform taint aws_db_instance.test_db yet I don’t see the resource being recreated when I next terraform apply

Rhys Davies avatar
Rhys Davies

to add a little context, I’m debugging an old project with a CI that uses this command to recreate a DB, which has previously been working solidly.

AWS provider 2.70 DB in question is an RDS with 9.6.20 for the engine

2021-07-26

emem avatar

hi guys, has anyone mistakenly deleted their terraform state and still been able to target existing infrastructure and destroy it with Terraform before?

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can rebuild your state by hand using terraform import

Alex Jurkiewicz avatar
Alex Jurkiewicz

not fun, but doable
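A sketch of what the rebuild looks like (resource addresses and IDs below are hypothetical; check each resource’s docs for its import ID format):

```shell
# Re-initialize against the backend, then import each resource into
# state one at a time. Addresses and IDs here are examples only.
terraform init
terraform import aws_vpc.main vpc-0123456789abcdef0
terraform import aws_subnet.private[0] subnet-0123456789abcdef0
terraform import aws_iam_role.cluster my-cluster-role
# After importing, run a plan and reconcile any diffs until clean:
terraform plan
```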

omry avatar
GitHub - GoogleCloudPlatform/terraformer: CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code attachment image

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GitHub - GoogleCloudPlatform/terraformer: CLI tool to generate terraform files from e…

emem avatar

there are multiple resources

emem avatar

eks,vpc,iam,subnet,fargate

emem avatar

how do i go about it in that case

omry avatar
terraformer/aws.md at master · GoogleCloudPlatform/terraformer attachment image

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - terraformer/aws.md at master · GoogleCloudPlatform/terraformer

emem avatar

nice, thanks a lot. Let me try this out

1
emem avatar

@ quick question: have you tried this with EKS before? It seems like it’s not supported, but it’s part of the supported resources in the terraformer docs

omry avatar

No, sorry but I have not tried it with EKS. If it’s not working I suggest you should open an issue.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

interested to hear how people are using checkov in combination with tflint and tfsec. Our tester is currently spiking it, but I’m interested in how the wider community integrates it

Paul Robinson avatar
Paul Robinson

hey all, I’m looking for some help with this module in preparation for a couple of prs please. https://github.com/cloudposse/terraform-aws-transit-gateway

The go tests are failing due to

TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: Error: Invalid count argument
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: 
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121:   on ../../modules/subnet_route/main.tf line 15, in resource "aws_route" "count":
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121:   15:   count                  = var.route_keys_enabled ? 0 : length(local.route_config_list)
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: 
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: The "count" value depends on resource attributes that cannot be determined
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: until apply, so Terraform cannot predict how many instances will be created.
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: To work around this, use the -target argument to first apply only the
TestExamplesCompleteDisabledModule 2021-07-26T09:38:47+01:00 command.go:121: resources that the count depends on.

The solution would appear to be to create the vpcs/subnets before the transit gateway/route tables. Just wondering if the authors are also seeing this issue?

GitHub - cloudposse/terraform-aws-transit-gateway: Terraform module to provision AWS Transit Gateway, AWS Resource Access Manager (AWS RAM) Resource, and share the Transit Gateway with the Organization or another AWS Account. attachment image

Terraform module to provision AWS Transit Gateway, AWS Resource Access Manager (AWS RAM) Resource, and share the Transit Gateway with the Organization or another AWS Account. - GitHub - cloudposse/…

Jaden Sullivan avatar
Jaden Sullivan

Hey guys - still got a lot to learn, but I’m looking at the Kafka module and the zone_id variable specifically. Where is that zone ID obtained from AWS? Haven’t been able to find anything but AZ IDs in their docs.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know if you can easily use lambdas with docker containers (hosted in a private repo like Harbor) ?

robschoening avatar
robschoening

It’s a stretch to say that lambda uses docker containers. Lambda uses container images as an alternate packaging format and distribution mechanism to the zip file in S3. AFAIK only ECR is supported.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

that is what I meant, my bad; also that is what I thought as well

loren avatar
loren

This might work for you, once implemented, https://github.com/aws/containers-roadmap/issues/939

[ECR] [Remote Docker Repositories]: Pull through cache · Issue #939 · aws/containers-roadmap attachment image

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

1
1

2021-07-23

Mohammed Yahya avatar
Mohammed Yahya
Indeni on LinkedIn: #IaC #Terraform #CICD attachment image

Join us next week for a live chat with Mohammed Yahya, a DevOps engineer at globaldatanet! He helps clients automate infrastructure-as-code to achieve …

3
Mohammed Yahya avatar
Mohammed Yahya

let me know if posting like this one is not appropriate

Indeni on LinkedIn: #IaC #Terraform #CICD attachment image

Join us next week for a live chat with Mohammed Yahya, a DevOps engineer at globaldatanet! He helps clients automate infrastructure-as-code to achieve …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

all good, @

1
robschoening avatar
robschoening

For those of you that have added OPA to your terraform pipelines, what approach did you take? Did you use OPA directly, conftest, a 3rd party security tool that embedded OPA, remote execution engine that embeds it?

Alex Jurkiewicz avatar
Alex Jurkiewicz

We use it via Spacelift. Works great

1
robschoening avatar
robschoening

Do you maintain a single rego rule set? Multiple? How do you test/validate rego changes to prevent unintended regression?

Alex Jurkiewicz avatar
Alex Jurkiewicz

We define a set of rules centrally, and stacks can opt in to using them. Almost all stacks do.

At the moment our rules are all warnings only, so they don’t block merge. We don’t change the rules often so dealing with regressions is ignored

robschoening avatar
robschoening

Thanks. The team authoring the rego is deciding what rules to add in the spirit of unit/integration testing to maximize quality? Or mandate? Presume the former. Really appreciate the feedback.

Alex Jurkiewicz avatar
Alex Jurkiewicz

We aren’t really at the scale you are talking about. There are 10 SREs who write and own the Terraform files. A larger number of developers can contribute pull requests.

But I don’t think we have any policies around what rules to add or not. It’s a best-effort thing to reduce security issues. There’s no one outside our team driving this

Mohammed Yahya avatar
Mohammed Yahya
Policy-based infrastructure guardrails with Terraform and OPA attachment image

Learn how Open Policy Agent (OPA) can be leveraged to secure infrastructure deployments by building policy-based guardrails around them.

Mohammed Yahya avatar
Mohammed Yahya
GitHub - Scalr/sample-tf-opa-policies attachment image

Contribute to Scalr/sample-tf-opa-policies development by creating an account on GitHub.

2021-07-22

Julien Bonnier avatar
Julien Bonnier

Hey there, I’m trying to create an eks_cluster with fargate using cloudposse modules but I keep getting a TLS handshake timeout after an hour… Any one knows what could be the issue? I am new to EKS and Kubernetes so I might be doing something wrong module.eks_cluster.aws_eks_cluster.default[0]: Still creating... [56m40s elapsed] ╷ │ Error: error creating EKS Cluster (dw-dev-common-eks-cluster-cluster): RequestError: send request failed │ caused by: Post “https://eks.us-east-1.amazonaws.com/clusters”: net/http: TLS handshake timeout │ │ with module.eks_cluster.aws_eks_cluster.default[0], │ on .terraform/modules/eks_cluster/main.tf line 47, in resource “aws_eks_cluster” “default”: │ 47: resource “aws_eks_cluster” “default” { │ ╵ Releasing state lock. This may take a few moments…

Mohammed Yahya avatar
Mohammed Yahya

TLS handshake timeout: when a node is unable to establish a connection to the public API server endpoint, you may see an error similar to the following.

server.go:233] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-1111f2222f333e44c: "error listing AWS instances: \"RequestError: send request failed\\ncaused by: Post  net/http: TLS handshake timeout\""

The kubelet process will continually respawn and test the API server endpoint. The error can also occur temporarily during any procedure that performs a rolling update of the cluster in the control plane, such as a configuration change or version update. To resolve the issue, check the route table and security groups to ensure that traffic from the nodes can reach the public endpoint.

Mohammed Yahya avatar
Mohammed Yahya

so check nodes sg, if not working try to update k8s version to 1.20

Julien Bonnier avatar
Julien Bonnier

I finally sort of figured it out. I think my problem was some sort of race condition. I was following the examples from different CP modules but they are very confusing. Some say you have to create a null resource, some say you have to set depends_on on the node group module. It turns out that since I’m using module_depends_on I don’t get the error anymore. I’m not sure if this is what fixed the issue though.
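For reference, a sketch of the ordering workaround described above (module and output names hypothetical, loosely following the cloudposse examples): make the node group wait for the cluster and its aws-auth config map.

```hcl
# Sketch (names hypothetical): force the node group to be created
# only after the EKS cluster and its aws-auth config map exist.
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name = module.eks_cluster.eks_cluster_id

  # Older cloudposse modules exposed module_depends_on for this;
  # on current Terraform you can use depends_on on the module block.
  module_depends_on = module.eks_cluster.kubernetes_config_map_id
}
```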

1
Julien Bonnier avatar
Julien Bonnier

Thanks for your help @

1

2021-07-21

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

HashiCorp Waypoint Demo happening now!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here our devops #office-hours are starting now! join us on zoom to talk shop url: cloudposse.zoom.us/j/508587304 password: sweetops

Release notes from terraform avatar
Release notes from terraform
07:03:40 PM

v1.0.3 1.0.3 (July 21, 2021) ENHANCEMENTS terraform plan: The JSON logs (-json option) will now include resource_drift, showing changes detected outside of Terraform during the refresh step. (#29072) core: The automatic provider installer will now accept providers that are recorded in their registry as using provider protocol version 6….

json-output: Add resource drift to machine readable UI by alisdair · Pull Request #29072 · hashicorp/terraform attachment image

The new resource_drift message allows consumers of the terraform plan -json output to determine which resources Terraform found to have changed externally after the refresh phase. Note to reviewers…

2021-07-20

loren avatar
loren

coworker posted a handy one-liner for seeing all the aws permissions your terraform is using…

TF_LOG=trace terraform apply --auto-approve 2>&1 | \
grep 'DEBUG: Request ' | \
sed -e 's/.*: Request//' \
    -e 's/ Details:.*$//' \
    -e 's#/#:#' | \
sort -u
7
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s interesting!

2
loren avatar
loren

i recommended running a second time with plan to catch any “read” calls that only happen when the resource is already in the tfstate, and also destroy if your role needs that level of access…

Tim Birkett avatar
Tim Birkett

I find iamdump really useful for this.

https://github.com/claranet/iamdump

claranet/iamdump attachment image

Like tcpdump for AWS IAM policies. Contribute to claranet/iamdump development by creating an account on GitHub.

loren avatar
loren

iamlive is another one. i just like not having to use any tool or start any server… https://github.com/iann0036/iamlive

iann0036/iamlive

Generate an IAM policy from AWS calls using client-side monitoring (CSM) or embedded proxy - iann0036/iamlive

Matt Gowie avatar
Matt Gowie

Just pulled up this list for a SweetOps member who DM’d me privately, but if anybody is looking to help work on Cloud Posse open source modules, we’ve got a bunch of “help wanted” tickets ready to go!

https://github.com/issues?q=is%3Aissue+is%3Aopen+org%3Acloudposse+archived%3Afalse+sort%3Aupdated-desc+label%3A%22help+wanted%22+

3
egy ardian avatar
egy ardian

Hi, new here to modules. Is there any link or video to learn how to use public modules? I find the “how to use” part of modules confusing.

loren avatar
loren
Reuse Configuration with Modules | Terraform - HashiCorp Learn attachment image

Create and use Terraform modules to organize your configuration. Host a static website in AWS using two modules: one from the Terraform Registry and one you will build yourself. Organize configuration using directories and workspaces.

egy ardian avatar
egy ardian

alright i’ll read it

egy ardian avatar
egy ardian

thanks

egy ardian avatar
egy ardian

my question is: do I need to make new main.tf, variables.tf and outputs.tf files to use modules? I tried defining the module and putting every required input in main.tf. Is that a good way of using modules, or do I need to put them in variables.tf?

Brij S avatar
Brij S

Has anyone here been able to enable ASG metrics using the terraform-eks module?

kumar k avatar
kumar k

Hello, I have a requirement to restore an ElastiCache Redis cluster from a snapshot using Terraform. I am using the snapshot_name parameter to restore, but the automatic backup is getting deleted once it creates the new cluster. I found another parameter, snapshot_arns (RDB from S3), to restore from. Which is the best option to restore using Terraform?

2021-07-19

Lyubomir avatar
Lyubomir

Hi All,

I am facing an issue with the terraform-aws-eks-node-group module. Can someone spot what the issue is?

The code for the nodegroup is the following:

module "linux_nodegroup_1" {
  source  = "xxx"
  enabled = var.linux_nodegroup_1_enabled

  name                              = var.linux_nodegroup_1_name
  ami_image_id                      = var.linux_nodegroup_1_ami_image_id
  subnet_ids                        = var.vpc.private_subnet_ids
  cluster_name                      = module.eks_cluster.eks_cluster_id
  instance_types                    = var.linux_nodegroup_1_instance_types
  desired_size                      = var.linux_nodegroup_1_desired_size
  min_size                          = var.linux_nodegroup_1_min_size
  max_size                          = var.linux_nodegroup_1_max_size
  kubernetes_labels                 = var.linux_nodegroup_1_kubernetes_labels
  kubernetes_taints                 = var.linux_nodegroup_1_kubernetes_taints
  kubernetes_version                = var.linux_nodegroup_1_kubernetes_version
  disk_size                         = var.linux_nodegroup_1_disk_size
  create_before_destroy             = var.linux_nodegroup_1_create_before_destroy
  cluster_autoscaler_enabled        = var.linux_nodegroup_1_cluster_autoscaler_enabled
  existing_workers_role_policy_arns = local.linux_nodegroup_1_existing_workers_role_policy_arns

  context                           = module.this.context
}

We want to spin up a node group with a specific AMI, however we observe strange behaviour. The ASG is creating its own launch template, ignoring the launch template created by the Terraform module. The launch template created by the Terraform module is correct; the LT created by the ASG uses the default Amazon Linux 2 AMI.

Looking at the code, it is a bit difficult to understand what might be going wrong.

Balazs Varga avatar
Balazs Varga

any idea ?

Error: error creating Route in Route Table (rtb-0d0b28cb13c1c4a5c) with destination (192.168.1.0/24): InvalidTransitGatewayID.NotFound: The transitGateway ID 'tgw-0b127487563c95832' does not exist.
│ 	status code: 400, request id: 07f5837d-bfdb-468c-afcc-f3ce90626923

the goal: create a route in a route table and attach it to the subnet I just created earlier. The transit gateway is an external, already-created resource. I got the ID using

data "aws_ec2_transit_gateway" "exists_tgw" {
}

in terraform plan I see the correct ID, but when I run the apply I got that error

loren avatar
loren

is the vpc attached to the transit gateway?

Balazs Varga avatar
Balazs Varga

oh… you are right. that is missing. thanks
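A sketch of the missing piece (resource names hypothetical): attach the VPC to the transit gateway, and make the route depend on the attachment so it isn’t created first.

```hcl
# Sketch (names hypothetical): the route only becomes valid once the
# VPC is attached to the transit gateway.
resource "aws_ec2_transit_gateway_vpc_attachment" "this" {
  transit_gateway_id = data.aws_ec2_transit_gateway.exists_tgw.id
  vpc_id             = aws_vpc.this.id
  subnet_ids         = [aws_subnet.this.id]
}

resource "aws_route" "to_tgw" {
  route_table_id         = aws_route_table.this.id
  destination_cidr_block = "192.168.1.0/24"
  transit_gateway_id     = data.aws_ec2_transit_gateway.exists_tgw.id

  depends_on = [aws_ec2_transit_gateway_vpc_attachment.this]
}
```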

1
Anton Sh. avatar
Anton Sh.

Hello everyone! What are the best practices to scan and check AWS IAM policies (permissions)? For example, a lot of people work on one Terraform codebase and we want to check that policies don’t grant too much (e.g. “*”). Could this be a pre-commit hook or a Terraform module?

Andrew Miskell avatar
Andrew Miskell

I know I’m missing something here, but could use a point in the right direction. I’ve built an EKS cluster using the updated terraform-aws-eks-cluster module and everything got built properly. When I attempt to run terraform plan or apply again afterwards, I’m presented with the following error.

╷
│ Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
│
│   with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/eks_cluster/auth.tf line 112, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│  112: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│
╵

I verified I have a valid kubectl configuration file and have access to the cluster.

vicken avatar
vicken

I ran into the same issue going from an existing cluster with v0.39.0 to the same one with v0.42.1 of the terraform-aws-eks-cluster module

terraform plan fails with the same error message. cc @Andriy Knysh (Cloud Posse)

vicken avatar
vicken

Prior to Nuru’s new PR - Enhance Kubernetes provider configuration (#119)…. I worked around it by manually passing a KUBE_CONFIG_PATH (which was something new I had to do)

KUBE_CONFIG_PATH=~/.kube/config terraform plan

Also tried it now (defining the KUBE_CONFIG_PATH) and plan ran successfully.

(Wondering if I should always be defining that env var everywhere now like in pipelines, etc but I’m not sure yet if it will work in all cases, first-time run vs updates)

vicken avatar
vicken

Interesting. The new module inputs:

+  kubeconfig_path_enabled = true
+  kubeconfig_path = "~/.kube/config"

seems to do the equivalent: KUBE_CONFIG_PATH=~/.kube/config terraform plan without having to define KUBE_CONFIG_PATH

Andrew Miskell avatar
Andrew Miskell

I wound up using kube_exec_auth_enabled = true property in the cluster configuration to get it working.

vicken avatar
vicken

I tried that one and got “Unauthorized” Was there another input to set?

Andrew Miskell avatar
Andrew Miskell

No, although check that aws eks get-token works in the shell

1
Andrew Miskell avatar
Andrew Miskell

All it’s supposed to do is call out to aws eks get-token for the token and use it.

Andrew Miskell avatar
Andrew Miskell

You might need kube_exec_auth_aws_profile and kube_exec_auth_aws_profile_enabled if you have multiple profiles in your aws cli config.

vicken avatar
vicken

Yes I do (for multiple profiles). From the cmd line, aws eks get-token works. I’ll give it a try thanks!

Andrew Miskell avatar
Andrew Miskell

At the moment, I don’t… I just have a single default profile.

Andrew Miskell avatar
Andrew Miskell

Is it also normal for terraform to want to update/replace the OIDC thumbprints and the cluster tls data on every run?

vicken avatar
vicken

Thanks! This also works!

  kube_exec_auth_enabled = true
  kube_exec_auth_aws_profile_enabled = true
  kube_exec_auth_aws_profile = "project-1-admin"

Do people also standardize on profile names within teams? Wondering because the way I have the profile name defined may differ from the way others have it defined, and may differ from a CI environment (default)

Andrew Miskell avatar
Andrew Miskell

Not sure, likely not… may need to have a default value and each user overrides the default if they have a different name.

Andrew Miskell avatar
Andrew Miskell

I’m sure there’s probably a better way to do it… But for some reason the default wouldn’t work for me.

vicken avatar
vicken

Yes. So it’s a matter of passing something extra, terraform ... -var=profile-name-goes-here, each time. For the kubeconfig_path, that config location is pretty standard

Andrew Miskell avatar
Andrew Miskell

yeah, and I have mine in the standard location, but it doesn’t pick it up for some reason.

vicken avatar
vicken

Normally, no I don’t notice TF replacing the OIDC thumbprints (when there are no code changes). But during the plan of updating the module version, yes I notice it is trying to replace it.

vicken avatar
vicken

thank you @! I think I settled on

  kube_exec_auth_enabled          = true
  kube_exec_auth_role_arn_enabled = true
  kube_exec_auth_role_arn         = var.role_arn

Turns out I did have a role defined in .tfvars which works when passed in.

Eric López avatar
Eric López
07:33:36 PM

Hello! I am trying the Jenkins module, but I am currently get an error with Backup Vault. Could you please confirm what permissions are required?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know, with RDS, what the logic is for calculating the max storage size (for MySQL)? When you choose a small initial size (20/50 GB) you can’t set the max size to 65536 GB; it fails

Alex Jurkiewicz avatar
Alex Jurkiewicz

From memory there is a page in the docs going through per instance size limitations

Alex Jurkiewicz avatar
Alex Jurkiewicz

Just use aurora though

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

If only … I would if I could believe me!

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I’m trying to find the docs as then I’ll do a lookup in my terraform module

Mark juan avatar
Mark juan

I have these inputs and I want to map them to two maps (i.e. map_users and map_roles for the user and role policies)

app_admins  = ["arn:aws:iam::617369727400:user/sam","arn:aws:iam::617369727400:user/rock"]
app_viewers = ["arn:aws:iam::617369727400:user/rock","arn:aws:iam::617369727400:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS"]
app_editors = ["arn:aws:iam::617369727400:user/rock","arn:aws:iam::617369727400:role/AWS-QuickSetup-StackSet-Local-AdministrationRole"]

How can i map these?

Mark juan avatar
Mark juan

Right now, i’m trying to do like this!

map_users = concat([
    for user in var.app_admins:
    {
      userarn  = user
      username = split("/",user)[1]
      groups   = ["system:masters"]
    }]
  ],
  [
    for user in var.app_viewers:
    {
      userarn  = user
      username = split("/",user)[1]
      groups   = ["cluster-viewer"]
    }
  ],
  [
    for user in var.app_editors:
    {
      userarn  = user
      username = split("/",user)[1]
      groups   = ["cluster-editor"]
    }
  ]
  )

but the problem is, it is mapping all the values from the list of strings to both maps

Tim Birkett avatar
Tim Birkett

In your first app_admins section it looks like there’s an extra ] - is that an intentional extra closing square bracket?

When you say “it is mapping all the values from the list of string to both the maps” what do you mean? Can you paste the result?

It may be to do with you using the same variable name (user) for iterating through each array. Try renaming to admin_user, viewer_user and editor_user to rule that out.

Tim Birkett avatar
Tim Birkett

It also looks like you’re mixing users and roles in your arrays in the first post you made which probably helps answer one of the questions above.

Checkout the “filtering elements” section here: https://www.terraform.io/docs/language/expressions/for.html

Tim Birkett avatar
Tim Birkett

Also see regexall for testing the elements of the lists to identify roles vs users.
regexall can also be used to test whether a particular string matches a given pattern, by testing whether the length of the resulting list of matches is greater than zero.
https://www.terraform.io/docs/language/functions/regexall.html

regexall - Functions - Configuration Language - Terraform by HashiCorp

The regex function applies a regular expression to a string and returns a list of all matches.
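Putting Tim’s two suggestions together, a sketch of splitting the mixed ARN lists with filtered for expressions (using regexall as a boolean test):

```hcl
# Sketch: split a mixed list of user/role ARNs using regexall inside
# filtered "for" expressions.
locals {
  user_app_admins = [
    for arn in var.app_admins : arn
    if length(regexall(":user/", arn)) > 0
  ]
  role_app_admins = [
    for arn in var.app_admins : arn
    if length(regexall(":role/", arn)) > 0
  ]
}
```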

Mark juan avatar
Mark juan

now using this

 map_users = concat([
    for user in local.user_app_admins:
    {
      userarn  = user
      username = split("/",user)[1]
      groups   = ["system:masters"]
    }
  ],
  [
    for user in local.user_app_viewers:
    {
      userarn  = user
      username = split("/",user)[1]
      groups   = ["cluster-viewer"]
    }
  ],
  [
    for user in local.user_app_editors:
    {
      userarn  = user
      username = split("/",user)[1]
      groups   = ["cluster-editor"]
    }
  ]
  )
//substr(split("/",user)[0], -4, -1) == "user"

  map_roles = concat([
    for role in local.role_app_admins:
    {
      rolearn  = role
      username = split("/",role)[1]
      groups   = ["system:masters"]
    }
  ],
  [
    for role in local.role_app_viewers:
    {
      rolearn  = role
      username = split("/",role)[1]
      groups   = ["cluster-viewer"]
    }
  ],
  [
    for role in local.role_app_editors:
    {
      rolearn  = role
      username = split("/",role)[1]
      groups   = ["cluster-editor"]
    }
  ]
  )
Mark juan avatar
Mark juan
locals{user_app_admins = {
    for name, user in var.app_admins : name => user
    if substr(split("/",user)[0],-4,-1)=="user"
    }
  user_app_viewers= {
    for name, user in var.app_viewers : name => user
    if substr(split("/",user)[0],-4,-1)=="user"
    }
  user_app_editors= {
  for name, user in var.app_editors : name => user
  if substr(split("/",user)[0],-4,-1)=="user"
  }

  role_app_admins = {
  for name, user in var.app_admins : name => user
  if substr(split("/",user)[0],-4,-1)=="role"
  }

  role_app_viewers = {
  for name, user in var.app_viewers : name => user
  if substr(split("/",user)[0],-4,-1)=="role"
  }

  role_app_editors = {
  for name, user in var.app_editors : name => user
  if substr(split("/",user)[0],-4,-1)=="role"
  }
}
Mark juan avatar
Mark juan

Didn’t work

1
Mark juan avatar
Mark juan
│ Error: Error putting IAM role policy zs-test-k8sAdmins-policy: ValidationError: The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.@_-
│       status code: 400, request id: 1dc44f02-4c71-4a64-9814-c106b4add52c
│ 
│   with aws_iam_role_policy.admin_policy["0"],
│   on rbac.tf line 3, in resource "aws_iam_role_policy" "admin_policy":
│    3: resource "aws_iam_role_policy" "admin_policy" {
│ 
╵
╷
│ Error: Error putting IAM role policy zs-test-k8sAdmins-policy: ValidationError: The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.@_-
│       status code: 400, request id: 238e3a81-1306-44e4-96fd-93bb9fae34d2
│ 
│   with aws_iam_role_policy.admin_policy["1"],
│   on rbac.tf line 3, in resource "aws_iam_role_policy" "admin_policy":
│    3: resource "aws_iam_role_policy" "admin_policy" {
│ 
╵
╷
│ Error: Error putting IAM role policy zs-test-k8sDevs-policy: ValidationError: The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.@_-
│       status code: 400, request id: 088ce065-d0b8-46d2-bbdd-8432eb597681
│ 
│   with aws_iam_role_policy.dev_policy["1"],
│   on rbac.tf line 33, in resource "aws_iam_role_policy" "dev_policy":
│   33: resource "aws_iam_role_policy" "dev_policy" {
│ 
╵

getting this error

Mark juan avatar
Mark juan

I am doing the same for map_roles.

2021-07-18

2021-07-17

R Dha avatar
R Dha

hi, I have some manually created resources and I am importing them using terraform import. I am planning to include this in the first stage of a Jenkins pipeline, where the second stage is terraform init and the third stage is terraform apply, using S3 as the backend. How can I sync the newly imported state for the manually created resources with the state file already stored in the S3 bucket? Should I do a terraform state push? FYI this is for PagerDuty

ivan.pinatti avatar
ivan.pinatti

I believe you have to init your backend before importing anything.
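In other words (a sketch; the import address and ID below are hypothetical), once the S3 backend is initialized, terraform import writes straight into the remote state, so there is nothing extra to sync:

```shell
# With the S3 backend configured, init first; import then updates
# the remote state in the bucket directly (address/ID are examples).
terraform init -input=false
terraform import pagerduty_service.example PXXXXXX
terraform plan   # verify the imported resource matches the config
```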

2021-07-16

Andrew Miskell avatar
Andrew Miskell

Hi All, wanted to run something by here before submitting an issue on GitHub. I’m trying to use https://github.com/cloudposse/terraform-aws-eks-cluster to build an EKS cluster and workers. Using the example for 0.39.0, I’m getting the following error on Terraform 1.0.2.

│ Error: Error in function call
│
│   on main.tf line 82, in locals:
│   82:   tags = merge(local.common_tags, map("kubernetes.io/cluster/${module.label.id}", "shared"))
│     ├────────────────
│     │ module.label.id will be known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ... }) syntax to write a literal map.
cloudposse/terraform-aws-eks-cluster attachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
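The fix on the calling side is mechanical; a sketch of the updated expression from the error message, using the tomap({ ... }) syntax the error suggests:

```hcl
# Replace the removed map() function with a tomap({ ... }) literal:
tags = merge(local.common_tags, tomap({
  "kubernetes.io/cluster/${module.label.id}" = "shared"
}))
```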

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ thanks for reporting this


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we need to test the module with TF 1.0.2

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently we are working on this PR https://github.com/cloudposse/terraform-aws-eks-cluster/pull/119 to improve the module

Enhance Kubernetes provider configuration by Nuru · Pull Request #119 · cloudposse/terraform-aws-eks-cluster attachment image

what Make Kubernetes provider configuration more robust, and provide alternative options. Enhance and correct README Make explicit the dependency of EKS cluster on Security Group rules Revert PR #…

Andrew Miskell avatar
Andrew Miskell

Excellent, thanks!

2021-07-15

AugustasV avatar
AugustasV

are list(string) and map(string) the same thing? terraform fmt changed map(string) to list(string) somehow

github140 avatar
github140

Do you have a default value assigned?

Balazs Varga avatar
Balazs Varga

hi all, I have a question. If I created resources with Ansible and would like to use them in Terraform, e.g. a transit gateway was created earlier with Ansible, can I get info from that resource with Terraform, e.g. the ID of the TGW, and create an attachment? Is that possible only by importing that resource into Terraform?

github140 avatar
github140

Importing resources means that Terraform will be used to manage those. If a lookup is the objective then data sources can be used. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_transit_gateway
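A sketch of looking up an Ansible-created transit gateway by filter instead of importing it (tag values and resource names hypothetical):

```hcl
# Sketch (tag values hypothetical): read an existing transit gateway
# without managing it in this state.
data "aws_ec2_transit_gateway" "shared" {
  filter {
    name   = "tag:Name"
    values = ["core-tgw"]
  }
}

resource "aws_ec2_transit_gateway_vpc_attachment" "this" {
  transit_gateway_id = data.aws_ec2_transit_gateway.shared.id
  vpc_id             = aws_vpc.this.id
  subnet_ids         = aws_subnet.this[*].id
}
```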

Balazs Varga avatar
Balazs Varga

I know this, but after that if I do a destroy, it will destroy it too, right? No other solution? Can it be read from another Terraform statefile? i.e. create one state for the base network and one for new objects that can use the other TF’s resources

github140 avatar
github140

You also can read outputs of external / other statefiles.
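A sketch of reading another configuration’s outputs (bucket, key, and output names hypothetical, assuming an S3 backend):

```hcl
# Sketch (bucket/key/output names hypothetical): read the base
# network stack's outputs from its remote state.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "base-network/terraform.tfstate"
    region = "eu-central-1"
  }
}

# e.g. use the TGW ID the base stack exported:
# transit_gateway_id = data.terraform_remote_state.network.outputs.tgw_id
```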

github140 avatar
github140

Indeed destroying imported resources is destructive. HOWEVER you can find and consume resources using data sources.

1
Balazs Varga avatar
Balazs Varga

will try that. thanks

Brij S avatar
Brij S
04:02:57 PM

Hi all, I’m using the terraform eks module. I’m creating managed nodes with it and I realize that the ASG has an activity notification created which you can hook an SNS topic into. How does this notification get created? I looked in the module and can’t find it. Any ideas?

Tomek avatar
Tomek

Given a data source like https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket, is there a way to see what kind of IAM permissions are required to use the data source?

github140 avatar
github140

You could trace the API Calls in Cloudtrail and even use it for an IAM policy. https://securingthe.cloud/aws/automatically-generate-aws-iam-policies/

Automatically create AWS IAM policies

AWS Identity & Access Managament (IAM) can now generate a policy based on previous event history!

Alex Jurkiewicz avatar
Alex Jurkiewicz

how could I convert

{a = ["foo", "bar"], b = ["baz"]}

into

[["a", "foo"], ["a", "bar"], ["b", "baz"]]

?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
locals {
  data = {
    a = [
      "foo",
      "bar"
    ],
    b = [
      "baz"
    ]
  }

  result = concat([
    for k, v in local.data : [
      for i in v : [k, i]
    ]
  ]...)
}

output "result" {
  value = local.result
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Outputs:

result = [
  [
    "a",
    "foo",
  ],
  [
    "a",
    "bar",
  ],
  [
    "b",
    "baz",
  ],
]
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@

Alex Jurkiewicz avatar
Alex Jurkiewicz

aha, concat and spread, nice

Mark juan avatar
Mark juan

I have this map of object

sql_db={
    postgres={
      node_type       = "db.t3.micro"
      type            = "postgresql"
      disk_size            = 6 // Disk space in GB
      db_names        =     ["db1","db2"]
      admin_user      = "postgresadmin"
    }
  }

If I want to iterate over the db_names list in this map, how can I do that?

Alex Jurkiewicz avatar
Alex Jurkiewicz
resource "foo" "this" {
  for_each = local.sql_db.db_names
  name = each.key
}
Mark juan avatar
Mark juan

Hey, but that is input from a Terragrunt file, and I want to iterate over it in a Terraform file.

Mark juan avatar
Mark juan
module "rds" {
  source = "../rds"
  aws_region        = var.app_region
  db_subnets        = var.db_subnets
  vpc_id            = local.vpc_id

  for_each          = var.sql_db
  rds_name          = "${local.cluster_name}-${each.key}-sql-db"
  admin_user        = each.value.admin_user
  instance_class    = each.value.node_type
  allocated_storage = each.value.disk_size
  rds_type          = each.value.type
  name              = each.value.db_names
}
Mark juan avatar
Mark juan

I want to use that here in postgres db

resource "postgresql_role" "user" {
  ---->for_each = keys(var.sql_db)[0]
  name     = lookup(keys(var.sql_db)[0],"db_names")
  login     = true
  password  = random_string.suffix.result
}

resource "postgresql_database" "postgres" {
  ---->for_each = keys(var.sql_db)[0]
  name     = lookup(keys(var.sql_db)[0],"db_names")
  owner    = postgresql_role.user[each.value].name
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

small note: i would suggest you put for_each at the top of any module/resource block, since it modifies the whole block

Mark juan avatar
Mark juan

Okay, but how can I iterate over the list of DBs to create the databases and assign roles to them

Alex Jurkiewicz avatar
Alex Jurkiewicz

so your second code block is code in the “../rds” module?

Mark juan avatar
Mark juan

yes

Alex Jurkiewicz avatar
Alex Jurkiewicz

when you use for_each with a module block, it’s going to create two copies of the module

Mark juan avatar
Mark juan

actually no, they are in the same script

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m sorry, but I’m really confused. Can you provide a simpler example?

Mark juan avatar
Mark juan

yea. So, on a wider picture: if there are 2 db instances, then that loop inside the module will create 2 db instances on the basis of the sql_db map, right

Mark juan avatar
Mark juan

But I want to create multiple DBs within a DB instance, and for that I am using the postgres database and role resources in this script with the postgres provider

Mark juan avatar
Mark juan

for that I need to iterate over the list of DBs present inside that sql_db map

Alex Jurkiewicz avatar
Alex Jurkiewicz

ok, there is a limitation of Terraform you will run into here. You can’t create a postgres instance and then create DBs inside it in a single Terraform configuration

Mark juan avatar
Mark juan

in resource block of postgres db and role

Alex Jurkiewicz avatar
Alex Jurkiewicz

Terraform needs to initialise all providers at startup. In this case, the Postgres provider will be unable to initialise because the Postgres instance doesn’t exist. So Terraform will fail.

The only solution is to split these tasks into two separate Terraform configurations

Mark juan avatar
Mark juan

the creation of the DB instance is in a different script, and the creation of the DBs and roles is in the root file itself

Mark juan avatar
Mark juan

Just want a way to iterate over that list — it is creating the roles and DB from the key name, but I need it to use the db_names list

Alex Jurkiewicz avatar
Alex Jurkiewicz

based on this block you posted before:

resource "postgresql_role" "user" {
  ---->for_each = keys(var.sql_db)[0]
  name     = lookup(keys(var.sql_db)[0],"db_names")
  login     = true
  password  = random_string.suffix.result
}

I think you want something like this:

resource "postgresql_role" "user" {
  for_each = var.sql_db.db_names
  name     = each.key
  login     = true
  password  = random_string.suffix[each.key].result
}
resource "random_string" "suffix" {
  for_each = var.sql_db.db_names
  length   = 16 # random_string requires a length
}
Mark juan avatar
Mark juan
╷
│ Error: Missing map element
│ 
│   on db.tf line 45, in resource "postgresql_role" "user":
│   45:   for_each  = var.sql_db.db_names
│     ├────────────────
│     │ var.sql_db is map of object with 1 element
│ 
│ This map does not have an element with the key "db_names".
╵
ERRO[0038] 1 error occurred:
        * exit status 1
Mark juan avatar
Mark juan

got this error

Mark juan avatar
Mark juan

After using this

Mark juan avatar
Mark juan
resource "postgresql_role" "user" {
  for_each  = var.sql_db.db_names
  name      = each.value
  login     = true
  password  = random_string.suffix.result
}

resource "postgresql_database" "postgres" {
  for_each = var.sql_db.db_names
  name     = each.value
  owner    = postgresql_role.user[each.key].name
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

oh, you need var.sql_db.postgres.db_names

Mark juan avatar
Mark juan

Getting this error

Mark juan avatar
Mark juan
╷
│ Error: Invalid for_each argument
│ 
│   on db.tf line 45, in resource "postgresql_role" "user":
│   45:   for_each  = var.sql_db["postgres"].db_names
│     ├────────────────
│     │ var.sql_db["postgres"].db_names is list of string with 2 elements
│ 
│ The given "for_each" argument value is unsuitable: the "for_each" argument
│ must be a map, or set of strings, and you have provided a value of type
│ list of string.
╵
ERRO[0029] 1 error occurred:
        * exit status 1
          
Mark juan avatar
Mark juan

After using this


resource "postgresql_role" "user" {
  for_each  = var.sql_db["postgres"].db_names
  name      = each.value
  login     = true
  password  = random_string.suffix.result
}

resource "postgresql_database" "postgres" {
  for_each = var.sql_db["postgres"].db_names
  name     = each.value
  owner    = postgresql_role.user[each.key].name
}
managedkaos avatar
managedkaos

instead of

for_each  = var.sql_db["postgres"].db_names

try

for_each  = toset(var.sql_db["postgres"].db_names)

Note that this will make each.key and each.value the same. Specifically, each.key/value will hold db1 and then db2 on each iteration.

the problem you are running into is that without casting ["db1","db2"] into a set, or indexing into it as an array, you are operating on it as the complete list.

managedkaos avatar
managedkaos

For example (note this quick example does not use for_each, but the concept of using toset() is still the same):

variable "sql_db" {
  default = {
    postgres = {
      node_type  = "db.t3.micro"
      type       = "postgresql"
      disk_size  = 6
      db_names   = ["db1", "db2"]
      admin_user = "postgresadmin"
    }
  }
}

output "dbnames" {
  value = [for i in toset(var.sql_db["postgres"].db_names) : i]
}
dbnames = [
  "db1",
  "db2",
]
1
1
Mark juan avatar
Mark juan

Thanks, it worked!

managedkaos avatar
managedkaos

awesome

2021-07-14

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

How are people (if at all) maintaining a CRL via Terraform without https://github.com/hashicorp/terraform-provider-tls/pull/73 being merged yet?

Matt Gowie avatar
Matt Gowie

Does anyone have a good terraform module for deploying a GitHub Enterprise Server to AWS that they would recommend?

Matt Gowie avatar
Matt Gowie

cc @Erik Osterman (Cloud Posse) — I’m assuming Cloud Posse has had to do this before but you likely deployed it via K8s?

Release notes from terraform avatar
Release notes from terraform
05:23:40 PM

v1.1.0-alpha20210714 1.1.0 (Unreleased) NEW FEATURES: cli: terraform add generates resource configuration templates (#28874) config: a new type() function, only available in terraform console (#28501)…

commands: `terraform add` by mildwonkey · Pull Request #28874 · hashicorp/terraform attachment image

terraform add generates resource configuration templates which can be filled out and used to create resources. The template is output in stdout unless the -out flag is used. By default, only requir…

lang/funcs: add (console-only) TypeFunction by mildwonkey · Pull Request #28501 · hashicorp/terraform

The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it&#39;s handy to be abl…

2021-07-13

Saichovsky avatar
Saichovsky

I have a terraform-compliance question on [Stack Overflow](https://stackoverflow.com/questions/68361457/terraform-compliance-steps-not-distinguishing-resources-in-plan). Grateful if someone could take a look.

terraform-compliance steps not distinguishing resources in plan

I would like to use terraform-compliance to prevent users from creating a aws_s3_bucket_policy to attach policies to aws_s3_bucket resources and instead, I would like for them to use the

Mark juan avatar
Mark juan

Hey everyone! A quick question: I want to create multiple DBs within a single RDS instance — is it possible? And if it’s possible, how can we create multiple DBs across multiple RDS instances? Edit: this is possible with the postgres provider, but the thing is we can’t use for_each on a module with providers (postgres and mysql). Is there any way to do so?

Florian SILVA avatar
Florian SILVA

Hello guys! I’m trying to find a way to identify the combination of an IP address + one of its ports with a unique int value. Sounds like what we call a network socket if we add the protocol (https://en.wikipedia.org/wiki/Network_socket). But is there a way to get a unique int value we could use as an index in Terraform? Like an equation or something? My use case is to identify my target groups created with the community ALB module: https://github.com/terraform-aws-modules/terraform-aws-alb Since I have many target groups to create, I did some loops, but they ended up with automatic indexes which are hard to access when I want to add specific rules ^^

Matt Gowie avatar
Matt Gowie
md5 - Functions - Configuration Language - Terraform by HashiCorp

The md5 function computes the MD5 hash of a given string and encodes it with hexadecimal digits.

1
Florian SILVA avatar
Florian SILVA

It solves my issue if I don’t need numbers, or if I convert to string, so it looks like a good way for me. In the end, it seems we cannot define the indexes of these targets in the module ourselves. I’m still working on it, but thank you for the solution, I may reuse it
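
One way around the automatic numeric indexes is to skip the int entirely and key a map by the "ip:port" pair itself, then use that map with for_each; md5() can be applied to the key if a fixed-length string is needed. A sketch with hypothetical values and variable names:

```hcl
locals {
  targets = [
    { ip = "10.0.1.10", port = 8080 },
    { ip = "10.0.1.11", port = 9090 },
  ]

  # Stable, human-readable keys like "10.0.1.10:8080" instead of 0, 1, 2, ...
  targets_by_key = {
    for t in local.targets : "${t.ip}:${t.port}" => t
  }
}

resource "aws_lb_target_group_attachment" "this" {
  for_each         = local.targets_by_key
  target_group_arn = var.target_group_arn # hypothetical; assumes target_type = "ip"
  target_id        = each.value.ip
  port             = each.value.port
}
```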

loren avatar
loren
HashiCorp attachment image

Watch Mitchell Hashimoto, the co-creator of HashiCorp Terraform, share the history and state of Terraform today.

2021-07-12

jonjitsu avatar
jonjitsu

Is there a good way to access the docs for a previous version of terraform? The main website is for latest and I need to work with 0.14.

loren avatar
loren

which version? they do have docs for v0.11 and older (basically HCL1 instead of HCL2…) https://www.terraform.io/docs/configuration-0-11/index.html

0.11 Configuration Language - Terraform by HashiCorp

Terraform uses text files to describe infrastructure and to set variables. These text files are called Terraform configurations and end in .tf. This section talks about the format of these files as well as how they’re loaded.

jonjitsu avatar
jonjitsu

0.14

loren avatar
loren

sigh. my bad. poor reading

jonjitsu avatar
jonjitsu

that’s cool

loren avatar
loren

the hack I’ve used in the past is to browse the repo by branch or tag… it’s not always pretty, and sometimes you have to view the raw files when there is embedded html to really see what’s up, but it works… https://github.com/hashicorp/terraform/tree/v0.14/docs

hashicorp/terraform attachment image

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…

jonjitsu avatar
jonjitsu

basically the code for multiple providers here is not working in 0.14 https://www.terraform.io/docs/language/providers/configuration.html

Provider Configuration - Configuration Language - Terraform by HashiCorp

Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.

jonjitsu avatar
jonjitsu

ya that’s basically what I’m doing

jonjitsu avatar
jonjitsu

was wondering if there was something better

jonjitsu avatar
jonjitsu

thanks

loren avatar
loren

for that, do not specify configuration_aliases in the required_providers block. that argument is not backwards-compatible
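
For reference, this is the syntax in question — declaring it makes a module unusable on 0.14, since configuration_aliases only exists on Terraform 0.15 and later (a sketch):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Terraform 0.15+ only; not backwards-compatible with 0.14:
      configuration_aliases = [aws.dns]
    }
  }
}
```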

loren avatar
loren

instead, define the provider block with an alias for the second name. everything else should be the same

jonjitsu avatar
jonjitsu

I’m writing a module that takes in two aws providers and it’s not working. I tried

  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    aws-dns = {
      source = "hashicorp/aws"
    }
  }

with

resource ... {
  provider = aws-dns
...
}

and

module ... {
  providers = {
    aws     = aws
    aws-dns = aws.dns
  }
  ...
}

but that didn’t work, although there are no syntax errors

jonjitsu avatar
jonjitsu

it just ignores the second provider

jonjitsu avatar
jonjitsu

any ideas on passing in multiple providers to modules?

jonjitsu avatar
jonjitsu

with version 0.14

loren avatar
loren
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
provider "aws" {
  alias = "aws-dns"
}

with

resource ... {
  provider = aws-dns
...
}

and

module ... {
  providers = {
    aws     = aws
    aws-dns = aws.dns
  }
  ...
}
loren avatar
loren

just checking if that worked for you?

jonjitsu avatar
jonjitsu

oh sorry, didn’t see you post, I’ll check now..

jonjitsu avatar
jonjitsu

I can’t get that to run.

loren avatar
loren

well, that’s the gist of doing it in 0.14. the reusable module declares the provider block with the alias. and the caller (e.g. the root module) passes the providers inside the module block

jonjitsu avatar
jonjitsu

Just to settle this conversation, this is what worked: mod1/main.tf:

provider "aws" {
  alias = "other"
}

resource "aws_s3_bucket" "bucket1" {
  bucket_prefix = "bucket1"
}

resource "aws_s3_bucket" "bucket2" {
  provider      = aws.other
  bucket_prefix = "bucket2"
}

main.tf:

provider "aws" { ... }

provider "aws" {
  alias   = "account1"
  ...
}

module "mod1" {
  source = "./mod1"
  providers = {
    aws       = aws
    aws.other = aws.account1
  }
}

thanks again

SlackBot avatar
SlackBot
03:40:14 PM

This message was deleted.

MSaad avatar
MSaad

Hi, I am new to this community and have an issue which I am hoping someone could help me with. I am currently looking at a problem where the use of tfmask seems to allow a failed CircleCI step which runs a terraform plan to pass/show green although it has an error. The command looks similar to the one below — is anyone aware of this? When I remove tfmask, the build fails as expected.

terraform plan -out terraform.plan -var-file=env/$(ENVIRONMENT).tfvars dep/$(PROJECT) | tfmask
bazbremner avatar
bazbremner

How are you running that command? If it’s a shell pipeline, you probably want to make sure you have set -o pipefail or similar
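
To illustrate why pipefail matters here (a sketch, assuming the Makefile recipe pipes the plan into tfmask and runs under bash): a pipeline’s exit status is normally that of the last command only, so a failing terraform plan piped into tfmask still exits 0 and CI reports success.

```shell
# The pipeline's exit status is normally that of its LAST command,
# so `false | cat` succeeds even though `false` failed.
false | cat
echo "without pipefail: $?"   # prints: without pipefail: 0

# With pipefail, the pipeline fails if ANY command in it fails.
set -o pipefail
false | cat
echo "with pipefail: $?"      # prints: with pipefail: 1
```

Note that make runs each recipe line under /bin/sh by default; with GNU make, one common approach is setting `SHELL := /bin/bash` and `.SHELLFLAGS := -o pipefail -c` so the option applies to recipe pipelines.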

MSaad avatar
MSaad

that command is in a Makefile and I am calling it from a YAML config file

MattyB avatar
MattyB

Regarding the public terraform registry - https://registry.terraform.io, can the people that publish their modules just remove them at any point? I’m not seeing any info on how that’s handled in their documentation.

Matt Gowie avatar
Matt Gowie
06:21:19 PM

Yeah, you can delete and manage versions via the registry UI if you’re the owner.

1
Raymond Chen avatar
Raymond Chen

Hi, what’s the ‘exports’ directory for? Like this: https://github.com/cloudposse/terraform-null-label/tree/master/exports

cloudposse/terraform-null-label attachment image

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Matt Gowie avatar
Matt Gowie

To support null-label chaining and passing around Terraform project “context”, the terraform-null-label module exports a special file called [context.tf](http://context.tf) that you can add to any child or root module to be able to utilize a top-level label. It’s used in all of the cloudposse modules and provides a consistent interface for naming across all the modules.

cloudposse/terraform-null-label attachment image

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Matt Gowie avatar
Matt Gowie

The [context.tf](http://context.tf) file is specifically generated so it can be easily copy / pasted into any other module. So it’s referred to as an “export”.

Raymond Chen avatar
Raymond Chen

Thanks for responding. I got these in my .terraform/module/module.json

    {
      "Key": "terraform_state_backend.dynamodb_table_label",
      "Source": "cloudposse/label/null",
      "Version": "0.22.0",
      "Dir": ".terraform/modules/terraform_state_backend.dynamodb_table_label"
    },
    {
      "Key": "ecs_alb_service_task_myweb_server.task_label",
      "Source": "cloudposse/label/null",
      "Version": "0.24.1",
      "Dir": ".terraform/modules/ecs_alb_service_task_myweb_server.task_label"
    },

Could you elaborate a little bit on how the exports.tf leads to an entry in this file with different versions? What I’m trying to do is get an inventory of the repo’s modules and their versions by inspecting this file. Thanks @Matt Gowie

Raymond Chen avatar
Raymond Chen

I know the reason I have different versions is because the modules that use the null_label module require different versions. But not sure how they get here

Matt Gowie avatar
Matt Gowie

So what your module.json is telling you is that your terraform_state_backend module is utilizing null-label as a child module at version 0.22.0 AND your ecs_alb_service_task module is utilizing null-label as a child module at version 0.24.1

Matt Gowie avatar
Matt Gowie

Does that make sense?

Matt Gowie avatar
Matt Gowie

They both consume that null-label module and for both consumptions, it requires an instance of that child module on your local.

Raymond Chen avatar
Raymond Chen

Yes, I get that. But I also have other entries like this: { "Key": "default_label", "Source": "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.21.0", "Dir": ".terraform/modules/default_label" }

They are the same module, right? I wonder why they have a different form in the ‘Source’ field. Why is it named ‘cloudposse/label/null’? Where is that from?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

every single module invocation specifies a source, and every source could vary. It’s just up to the author/maintainer to update it, and we don’t always do that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for example, if we cut a new release of terraform-null-label, that has a trickle-down effect of literally hundreds of PRs and updates, since modules depend on modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

'cloudposse/label/null' is the registry notation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

git::https://github.com/... is the git source notation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

historically, we used git. but now we don’t use that anywhere. So anything using the git:: notation is definitely out of date (probably 1+ years)
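
Side by side, the two notations look like this (version pinned arbitrarily for illustration):

```hcl
# Registry notation - resolved through registry.terraform.io:
module "label_registry" {
  source  = "cloudposse/label/null"
  version = "0.24.1"
}

# Git source notation - fetched straight from the repository:
module "label_git" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.24.1"
}
```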

Raymond Chen avatar
Raymond Chen

Thanks @Erik Osterman (Cloud Posse), I just realized it’s a notation difference. So you have officially switched to registry notation? Good to know.

1
Brij S avatar
Brij S

Has anyone been able to enable ASG CloudWatch metrics for EKS managed nodes? I saw this issue https://github.com/hashicorp/terraform-provider-aws/issues/13793; two responses seem to link to a possible solution, but elsewhere it says that this isn’t supported by EKS. It’s a bit confusing..

EKS Node Group ASG metrics · Issue #13793 · hashicorp/terraform-provider-aws attachment image

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

Mohammed Yahya avatar
Mohammed Yahya
Very excited to announce @AquaSecTeam has acquired @tfsec_dev! I will be joining Aqua along with @owenrum to work full-time on the project - watch this space! https://www.aquasec.com/news/aqua-security-acquires-tfsec/
1

2021-07-11

2021-07-10

2021-07-09

Devops alerts avatar
Devops alerts

I am using Terraform to deploy my resources on AWS, with an Auto Scaling Group for EC2 instance deployment. The issue is that every time the Auto Scaling Group terminates and deploys a new instance due to workload, the private IP changes. I want to use a network interface for internal Route53 records so users can access the app without any issue.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would consider using a different module if you want directly addressable instances in something like an autoscale group.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

IMO, this is more of a use case for what kubernetes calls “Stateful Sets”, which is why we have this “instance group” module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, it does not auto-scale.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

with this module, you can even assign each instance in the group an elastic ip

Devops alerts avatar
Devops alerts

but in my case, I am using a private local IP and the Route53 record will route traffic to that IP. Let’s say we host a web app on an EC2 instance, and that instance’s local IP changes whenever it gets terminated and a new one is created. I want to create Route53 records in a way that I don’t have to go back again and again to update the IP in the records. Is this possible?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

why not use a private load balancer? why do the instances in the autoscale group need to be directly addressed by their private IP address?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

let’s understand the business use-case first, then see what architectural choices will support that.

Devops alerts avatar
Devops alerts

I am trying to deploy a sensu-go server with RDS; the database will be hosted in RDS using Postgres. And yes, you are right, we don’t need to put the EC2 instance in an autoscale group. In that case the problem will get solved automatically. Am I right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

well, instances will preserve their private IP for the lifetime of the instance. The risk of that instance going away, however, in an autoscale group is heightened, because the objective there is to provide fault tolerance and availability, not preservation of state (including IPs). I don’t know enough about sensu — so what part of it needs/wants static IP addresses? My assumption is that the sensu server itself is stateless and uses RDS to preserve state, and then any agents just need to address the sensu server endpoint, either by DNS or IP. If that’s all correct, then placing the load balancer in front of sensu will be what you want.

1
Devops alerts avatar
Devops alerts

Cool. Thanks a lot. i got it. let me try and see the results.

2021-07-08

Mark juan avatar
Mark juan

Hi - does anyone know: I have a Terraform project containing an EKS module, and, separately, a directory just for providers (helm, kubectl, and kubernetes). Those providers require 3 inputs: host, cluster CA certificate, and token. How can I take those as inputs in the module?

Max avatar
Max

Hey guys, I’ve got a problem with Terraform. After one of my applies, certain resources weren’t created due to errors with the Datadog API. After another try I managed to fix the error, but now I have duplicates of Datadog monitors. Is there any proper way to clean that up?

Mohammed Yahya avatar
Mohammed Yahya

yes, do terraform state list, figure out which resource you don’t want, and remove it with terraform state rm x.y.z

Mohammed Yahya avatar
Mohammed Yahya

then remove the resource from datadog, manually

1

2021-07-07

Phillip Hocking avatar
Phillip Hocking

oh hi everyone, I’m trying to instantiate a project that uses the cloudposse/label/null module and it seems to error out on Terraform 0.12, but then if I bump it up to 0.14 I get hit with the version constraint described in this issue: https://github.com/masterpointio/terraform-aws-amplify-app/issues/1

Version constraint issues with 0.14 · Issue #1 · masterpointio/terraform-aws-amplify-app attachment image

Hey Matt, I was trying to build this after all the Amplify stuff got brought into the actual hashi aws_provider and it looks like the version constraint of >= 0.14.0 causes issue with the cloudp…

loren avatar
loren

Your issue log points out that Terraform is pulling an old version of the null label module. Update your refs, you aren’t actually using 0.24.1

Version constraint issues with 0.14 · Issue #1 · masterpointio/terraform-aws-amplify-app attachment image

Hey Matt, I was trying to build this after all the Amplify stuff got brought into the actual hashi aws_provider and it looks like the version constraint of >= 0.14.0 causes issue with the cloudp…

loren avatar
loren
phillhocking/hubsub.io attachment image

This repository represents the infrastructure necessary to stand up the hubsub.io application platform. - phillhocking/hubsub.io

loren avatar
loren

Try version 0.7.0

Anton Sh. avatar
Anton Sh.

Hello everyone! I have a question about https://github.com/cloudposse/terraform-aws-elasticsearch — how do I make Elasticsearch public? I created an extra VPC with a public subnet and put Elasticsearch into this public subnet, and I still get

Error: Head "https://***************.us-east-1.es.amazonaws.com": context deadline exceeded

  on es-indexes.tf line 14, in resource "elasticsearch_index" "this":
  14: resource "elasticsearch_index" "this" {

when I want to create index with mapping.

cloudposse/terraform-aws-elasticsearch attachment image

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Matt Gowie avatar
Matt Gowie

It’s not possible as far as I know. AWS has lots of documentation about the fact that when you associate an ES cluster with a VPC then it’s intrinsically private.

cloudposse/terraform-aws-elasticsearch attachment image

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Matt Gowie avatar
Matt Gowie
How To: Access Your AWS VPC-based Elasticsearch Cluster Locally - Jeremy Daly

AWS recently announced that their Elasticsearch Service now supports VPC. Learn how to access your secure clusters from your local development machine.

Anton Sh. avatar
Anton Sh.

@Matt Gowie thank you very much! so as I understood I have to use this example https://github.com/cloudposse/terraform-aws-elasticsearch/tree/master/examples/non_vpc

cloudposse/terraform-aws-elasticsearch attachment image

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Matt Gowie avatar
Matt Gowie

Yeah if you want a public cluster I don’t believe you can associate it with a VPC.

Matt Gowie avatar
Matt Gowie

Though I would question if you really want a public cluster vs being able to access that cluster yourself. But I’ll leave that to you to figure out.

Anton Sh. avatar
Anton Sh.

@Matt Gowie thank you so much. I want to create the index and mapping with the same Terraform, so I have to have access. But thank you for your advice, I will think about how to do it privately.

1
Vikram Yerneni avatar
Vikram Yerneni

Fellas, anyone got to this bug yet? https://github.com/cloudposse/terraform-aws-elasticsearch/issues/18

CC @Erik Osterman (Cloud Posse)

Matt Gowie avatar
Matt Gowie

@ replied in the issue. See if that helps you.

Vikram Yerneni avatar
Vikram Yerneni

Thanks a lot @Matt Gowie. Appreciate the help here

Vikram Yerneni avatar
Vikram Yerneni

I am testing the example you mentioned

Vikram Yerneni avatar
Vikram Yerneni

@Matt Gowie Instead of all four variables namespace, stage, environment, name (based on my colleague’s advice), I went with the name variable and it worked.

Vikram Yerneni avatar
Vikram Yerneni

I updated the ticket with the same

Release notes from terraform avatar
Release notes from terraform
06:23:43 PM

v1.0.2 1.0.2 (July 07, 2021) BUG FIXES: terraform show: Fix crash when rendering JSON plan with sensitive values in state (#29049) config: The floor and ceil functions no longer lower the precision of arguments to what would fit inside a 64-bit float, instead preserving precision in a similar way as most other arithmetic functions…

command/jsonstate: remove redundant remarking of resource instance by mildwonkey · Pull Request #29049 · hashicorp/terraform attachment image

ResourceInstanceObjectSrc.Decode already handles marking the decoded values with any marks stored in AttrSensitivePaths, so re-applying those marks is not necessary and seems to be related to panic…

msharma24 avatar
msharma24

Hi - I have a quick question about S3 backend - Do you guys create a TF Backend S3 bucket per account or One S3 bucket to store TF state AWS org wide ?

pjaudiomv avatar
pjaudiomv

I do one per account

1
msharma24 avatar
msharma24

I have used one S3 bucket org-wide in the past, but have had issues wherein someone accidentally overwrote the state file keys

pjaudiomv avatar
pjaudiomv

I use versioned bucket

2
msharma24 avatar
msharma24

Yeah - versioned buckets help with rollback and DR

Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

one per account as well

msharma24 avatar
msharma24

Thank you for your response. I figured using a CloudFormation StackSet would be the best option to deploy the TF state bucket and DynamoDB table in each account org-wide

Mohammed Yahya avatar
Mohammed Yahya

3 options here (lots of tradeoffs and debates):

• 1 bucket with all state files, when using a shared services account where the IaC pipeline lives. You can have very tight access control on this bucket.

• 1 bucket per account with 1 state file for all resources in that account

• 1 bucket per account with x states files based on tf workspaces in that account.

Mohammed Yahya avatar
Mohammed Yahya

always reduce the blast radius in your state file, you don’t want to put all the eggs in one basket.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We used to create one bucket per account, but when you get into programmatic creation of accounts, creating that S3 bucket for maintaining state is such a major blocker for automation. Thus we backtracked on this and predominantly use a single S3 bucket.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Programmatic creation of backend configurations also helps mitigate misconfiguration.

3
mfridh avatar
mfridh

One versioned bucket in an “admin” account which has no other infra, basically. Then I make use of a wrapper around terraform which handles the -backend-config and the state file key part for me. It mirrors the path in my repo almost 1:1. /foo/infra/prd == s3://bucket/…/foo/infra/prd/terraform.tfstate
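A hedged sketch of what such a partial backend configuration might look like (bucket and key names are placeholders; the wrapper supplies them at init time):

```hcl
terraform {
  backend "s3" {
    # bucket, key and region are injected by the wrapper at init time, e.g.:
    #   terraform init \
    #     -backend-config="bucket=admin-tfstate" \
    #     -backend-config="key=foo/infra/prd/terraform.tfstate" \
    #     -backend-config="region=eu-west-1"
    encrypt = true
  }
}
```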

msharma24 avatar
msharma24

Thank you for your response. I’m using a CFN StackSet to create the S3 bucket and DynamoDB table, deployed to a targeted OU from the management account which has Control Tower enabled - so the CFN automatically made use of the Control Tower execution IAM roles. It also has an auto-deployment feature, so that when new accounts are added to the OU, the stack will auto-create the S3 bucket and DynamoDB table in the new accounts

1
msharma24 avatar
msharma24
Terraform Backend for multi-account AWS Architecture

TL;DR How to create S3 Bucket and DynamoDB Table for Terraform backend in a multi-account AWS environment.

Raja Miah avatar
Raja Miah

one per acct..

2021-07-06

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have or know about a nice pattern for passing additional security group rules (as a module input) ?

loren avatar
loren

i just call them “security_group_rules” and define it as a list of objects, with attributes mapped to the security group rule resource

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am needing to allow a security group in account X to communicate with a RDS db in account Y

loren avatar
loren

if you don’t want to define the object, you can do it as list(any) and then use try(object.value.attr, null) to default the value to null if not provided

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to work out what the source_security_group_id will be if its cross account

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can you pass <account id>/<sg id> ?

loren avatar
loren

is that an attribute of a resource, or of the module you are writing that i don’t have access to?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so the module that i am creating creates the SG attached to the DB (as well as the DB itself)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want to provide a way to add additional SG rules to that SG

loren avatar
loren

if you use vpc peering, you can use references in a rule to a security group in another account, yes.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yes we are peering

loren avatar
loren

if you use transit gateway instead, you are SoL

loren avatar
loren

yes, the format for the cross-account sg reference is <account id>/<sg id>

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

perfect thanks man much appreciated

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to work out the foreach as i really want to pass a list of objects containing the account_id and the security group id

loren avatar
loren

honestly, for security group rules, i would just force the caller to define the label as one of the object attributes

loren avatar
loren

then your for_each is just:

for_each = { for rule in var.rules : rule.label => rule }
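Filled out as a hedged sketch of that pattern (the variable shape and names are illustrative; aws_security_group.rds is the SG created by Steve’s module):

```hcl
variable "security_group_rules" {
  description = "Extra rules the caller wants on the DB security group"
  type = list(object({
    label                    = string # unique, stable for_each key
    type                     = string # "ingress" or "egress"
    from_port                = number
    to_port                  = number
    protocol                 = string
    source_security_group_id = string # "<account id>/<sg id>" for cross-account peers
  }))
  default = []
}

resource "aws_security_group_rule" "additional" {
  for_each = { for rule in var.security_group_rules : rule.label => rule }

  security_group_id        = aws_security_group.rds.id
  type                     = each.value.type
  from_port                = each.value.from_port
  to_port                  = each.value.to_port
  protocol                 = each.value.protocol
  source_security_group_id = each.value.source_security_group_id
}
```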
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i was thinking of doing …

# Allow additional security groups to access the database.
resource "aws_security_group_rule" "rds_ingress_cross_account" {
  for_each                 = toset(var.cross_account_security_group_ids)
  description              = "Ingress: Allow access from ${aws_security_group.pods.name}."
  from_port                = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id
  source_security_group_id = each.key
  to_port                  = 3306
  type                     = "ingress"
}
loren avatar
loren

aws_security_group["pods"].name

Pierre-Yves avatar
Pierre-Yves

Hello, I am looking for some information on validating deployments: for example, do you use the “http data source” to check network connectivity after a terraform deployment? Which other post-install checks do you do? Thanks
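A minimal sketch of such an “http data source” check (the URL is a placeholder): with older versions of the hashicorp/http provider, a non-200 response makes the data source fail, so simply reading it during an apply acts as the connectivity check:

```hcl
data "http" "smoke_test" {
  url = "https://app.example.com/healthz"
}
```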

Jawn avatar

Has anyone tried referencing the output on a submodule? The submodule’s resources were also created with a for_each, adding to the fun here

I’ve seen some references talking about using a data resource from the state, but I’m hoping I can just directly reference the objects

loren avatar
loren

what does the code look like in the submodule? and what attribute from the resource do you want?

Jawn avatar

i’ll grab it, 1 sec

Jawn avatar

here is the call to the submodule

module "s3-static-site" {
  for_each = lookup(var.root_true, var.env) ? {} : local.static-site-template
  source   = "./modules/static-sites"

  env             = var.env
  region          = var.region
  is_root_account = var.is_root_account

  server_access_logs_bucket = aws_s3_bucket.bucket-server-access-logs.id
  kms_key_s3                = var.kms_key_s3

  site_name            = "${lookup(var.static_bucket_prefix, var.env)}${each.key}"
  acl                  = each.value.acl
  index_document       = each.value.index_document
  error_document       = each.value.error_document
  cors_allowed_headers = each.value.cors_allowed_headers
  cors_allowed_methods = each.value.cors_allowed_methods
  cors_allowed_origins = each.value.cors_allowed_origins
  cors_max_age_seconds = each.value.cors_max_age_seconds
}

Here is the submodule main.tf

resource "aws_s3_bucket" "static-site" {
  bucket = "${lookup(var.static_bucket_prefix, var.env)}${var.site_name}"
  acl    = var.acl
  website {
    index_document = var.index_document
    error_document = var.error_document
  }
  cors_rule {
    allowed_headers = [var.cors_allowed_headers]
    allowed_methods = [var.cors_allowed_methods]
    allowed_origins = [var.cors_allowed_origins]
    max_age_seconds = var.cors_max_age_seconds
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "AES256"
      }
    }
  }
  logging {
    target_bucket = var.server_access_logs_bucket
    target_prefix = "logs/${var.env}/${var.site_name}/"
  }
  tags = merge(
    local.common_tags,
    {
      "Name" = "${lookup(var.static_bucket_prefix, var.env)}${var.site_name}"
    },
  )
}

I’m trying to do something like this in the outputs. Output from the submodule:

output "static-site-url" {
	value = aws_s3_bucket.static-site.website_endpoint
}

output from calling module (which will be later referenced in the top level main)

output "static-site-url-capture" {
  value = module.s3-static-site.static-site-url
}
Jawn avatar

i’m basically trying to pull the website_urls (4 of them) and then declare them in my cloudflare module for DNS CNames

Jawn avatar

I could just move this submodule up to be a main module, I just wanted to try this and see if it worked

Jawn avatar

I’m also finding this for_each output construct a bit challenging to understand based on the docs

loren avatar
loren

when you use for_each, the result becomes a map of objects instead of a single object. the key in the map is the same as the each.key value from the for_each expression

Jawn avatar

sure, so like ["theitemcreated"]

loren avatar
loren

exactly, and your for_each is basically local.static-site-template , which appears to be a map. so each key in that map can be used as a key on module["s3-static-site"]["your key"]

loren avatar
loren

for your outputs to be usable and make sense, you’ll almost certainly want them to also be a map, otherwise you won’t know which static-site bucket the output is coming from…

Jawn avatar

it’s dynamically referencing it that I’m struggling with I think

So if I had an output like this

output "website_dns" value = aws_s3_bucket.static-site["websiteA"].website_endpoint

and in the main.tf where I use that variable I could do this

my_website = module.bucket.aws_s3_bucket.website_dns

Jawn avatar

ah, I’m seeing your post (I was typing at the time)

loren avatar
loren

this will give you a map of { name : website_endpoint }

output "static-site-url" {
	value = { for name, site in module.s3-static-site : name => site.static-site-url }
}
loren avatar
loren

i often recommend using outputs just to see the shape of the object, even before you try to use it as a reference somewhere else

loren avatar
loren

once you see the shape, it makes a lot more sense as far as how you get at it

Jawn avatar

I thought you couldn’t print outputs anymore from modules

loren avatar
loren

you could for example just output the whole module resource:

output "s3-static-site" {
	value = module.s3-static-site
}
Jawn avatar

(I’m clearly missing something)

loren avatar
loren

i’m having to guess at what you’re meaning, but i think you’re getting at the change from terraform 0.11 to terraform 0.12, when the outputs became top-level attributes of the module instead of being nested under the outputs attribute… you can still access them, you just won’t find them under .outputs anymore cuz it doesn’t exist

Jawn avatar

@loren This was extremely helpful. Thank you

Jawn avatar

This was the secret sauce I needed

output "static-site-url" {
  value = { for name, site in module.s3-static-site : trimsuffix(name, ".domain.com") => site.bucket-websites }
}

then declaring the variable

static_site_url = module.buckets.static-site-url

then to create the records

resource "cloudflare_record" "static-sites" {
  for_each = var.static_site_url
  zone_id  = lookup(var.cloudflare_zone_id, var.env)
  name     = "${lookup(var.cloudflare_dns_previx, var.env)}${each.key}" # record name, site
  value    = each.value # s3 dns name
  type     = "CNAME"
  proxied  = true
  ttl      = 1
}
Pipo avatar

Hey, I made this module to backup the terraform states from Terraform Cloud to S3. If you have any suggestions or feature requests, let me know. https://github.com/mnsanfilippo/terraform-cloud-backup

mnsanfilippo/terraform-cloud-backup attachment image

Terraform Cloud Backup. Contribute to mnsanfilippo/terraform-cloud-backup development by creating an account on GitHub.

2

2021-07-05

2021-07-04

Rostom avatar
Rostom
10:35:58 PM

Hi y’all! I have a question for y’all!

I’m running a cross-region deployment where Osaka (ap-northeast-3) in AWS is one of the regions. I’m getting access denied issues -> Code Signing Config

│ Error: error getting Lambda Function (pushlishGuardDutyFindingsToMSTeams) code signing config AccessDeniedException: │ status code: 403, request id: 128f5476-b28c-4183-91ec-459acfb6038b

https://github.com/hashicorp/terraform-provider-aws/issues/18328

The region doesn’t currently have code signing enabled. I’m deploying a zip of a Python function.

This user suggested using a dynamic block to disable code signing when deploying to ap-northeast-3. The thing is, I’m not even enabling it. I assume it is enabled by default.

Any thoughts/suggestions?

Alex Jurkiewicz avatar
Alex Jurkiewicz

checking the code, it doesn’t seem like code signing is specified at all if the user doesn’t specify it: https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/resource_aws_lambda_function.go#L453

you might want to use TF_LOG=trace or read your cloudtrail logs to check details of why your function is failing. Perhaps it’s not code signing related

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

oh, this is an error while loading the resource: https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/resource_aws_lambda_function.go#L849

It seems like the case of regions without code signing support is not handled by the provider. Fixing this would probably require an update to the provider code. Someone might have submitted a PR for this already.

1
Rostom avatar
Rostom

yeah exactly! it’s been submitted already. The issue is that AWS doesn’t support it in osaka and other regions. So I was wondering if there is a way around it rather than wait for AWS to actually enable this in the region.

Rostom avatar
Rostom

there’s an attached pic where someone suggests the use of dynamic blocks to disable code signing.. not sure how this can be achieved! was wondering if anyone had an idea!

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can’t work around the issue while still using that resource. You would have to use some other structure in terraform. For example, create a CloudFormation stack and manage that with terraform, or use the awscli and shell out with the external data source

1
Rostom avatar
Rostom

ok. good to know. Thanks! (y)

2021-07-03

Mohammed Yahya avatar
Mohammed Yahya

@Erik Osterman (Cloud Posse) Are you using atmos in your CICD pipelines ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are using spacelift to CD

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use spacelift to run terraform, and terraform to configure spacelift

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use the stack YAML configuration to configure everything.

2021-07-02

joshmyers avatar
joshmyers

How to for loop over a map to create a string of maps e.g. { "foo" = "bar", "environment" = "dev"}

Igor Urazov avatar
Igor Urazov
11:56:00 AM

jsonencode would work, but you need = as delimiter, so ¯\_(ツ)_/¯

joshmyers avatar
joshmyers

Yeah, not what I need

joshmyers avatar
joshmyers

My keys have colons in them too which doesn’t help { "foo:bar" = "badgers", "environment" = "dev"}

loren avatar
loren

Probably a string template
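A string template is one way; a hedged sketch (local names are illustrative) that renders a map as `{ "k" = "v", ... }` with = as the delimiter:

```hcl
locals {
  tags = { "foo:bar" = "badgers", "environment" = "dev" }

  # format's %q verb emits a quoted string, which also survives
  # keys containing colons
  tags_string = format("{ %s }", join(", ",
    [for k, v in local.tags : format("%q = %q", k, v)]
  ))
  # yields: { "environment" = "dev", "foo:bar" = "badgers" }
}
```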

2021-07-01

Steffan avatar
Steffan

just wondering if Cloud Posse has any video tutorials out there about using any of its modules. Anyone at all?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think it’s all pretty standard stuff. Except null label. That definitely deserves more documentation as it is complex and opinionated

Alex Jurkiewicz avatar
Alex Jurkiewicz

You might want to read the readme for that, I don’t know any better resources to learn

Mohammed Yahya avatar
Mohammed Yahya

but definitely like the video addition

Mohammed Yahya avatar
Mohammed Yahya

@Erik Osterman (Cloud Posse) ^^

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The modules are pretty standard. The null label, like @ says, is opinionated. And how we actually use our modules is pretty radical compared to what I see most companies doing. The tutorial @ points to is how we actually manage our environments.

Mark juan avatar
Mark juan
module "rds" {
  source = "../rds"
  aws_region = var.app_region
  db_subnets = var.db_subnets
  vpc_id = local.vpc_id

  for_each=var.sql_db
  rds_name = "${local.cluster_name}-${each.key}-sql-db"
  admin_user = each.value.admin_user
  instance_class = each.value.node_type
  allocated_storage = each.value.disk_size
  rds_type          = each.value.type
}



resource "kubernetes_service" "db_service" {
  for_each = module.rds
  metadata {
    name = "${each.key}-rds"
  }
  spec {
    port {
      port =  each.value.db_port
    }
  }
}

resource "kubernetes_endpoints" "db_service" {
  for_each = module.rds
  metadata {
    name = "${each.key}-rds"
  }
  subset {
    address {
      ip = each.value.db_url
    }
    port {
      port = each.value.db_port
    }
  }
}

I want to use the values for port and ip coming from the output of the rds map of objects. How can I do that? I tried with the above code and it gave errors

Mark juan avatar
Mark juan

output.tf

## DB Output Variables
output "db_admin_user" {
  value       = tomap({
  for k, v in module.rds : k => v.db_admin_user
  })
}

output "db_password" {
  value       = tomap({
  for k, v in module.rds : k => v.db_password
  })
  sensitive = true
}

output "db_port" {
  value       = tomap({
  for k, v in module.rds : k => v.db_port
  })
}

output "db_url" {
  value       = tomap({
  for k, v in module.rds : k => v.db_url
  })
}

output "db_name" {
  value       = tomap({
  for k, v in module.rds : k => v.db_name
  })
}
Mohammed Yahya avatar
Mohammed Yahya
output "db_config" {
value = {
  user = aws_db_instance.database.username
  password = aws_db_instance.database.password
  database = aws_db_instance.database.name
  hostname = aws_db_instance.database.address
  port = aws_db_instance.database.port
}
}

module.rds.db_config["xyz"].user
module.rds.db_config["xyz"].password
…

Mohammed Yahya avatar
Mohammed Yahya

in your case use each.value.db_config.port

Mark juan avatar
Mark juan

can you show what my output.tf would look like? It would be great

Mark juan avatar
Mark juan

because there’s also a for each loop

Mark juan avatar
Mark juan
│ Error: Incorrect attribute value type
│ 
│   on redis.tf line 20, in resource "kubernetes_service" "redis_service":
│   20:       port = each.value.redis_port
│     ├────────────────
│     │ each.value.redis_port is list of number with 1 element
│ 
│ Inappropriate value for attribute "port": number required.
╵
╷
│ Error: Incorrect attribute value type
│ 
│   on redis.tf line 33, in resource "kubernetes_endpoints" "redis_service":
│   33:       ip = each.value.redis_url
│     ├────────────────
│     │ each.value.redis_url is list of string with 1 element
│ 
│ Inappropriate value for attribute "ip": string required.
╵
╷
│ Error: Incorrect attribute value type
│ 
│   on redis.tf line 36, in resource "kubernetes_endpoints" "redis_service":
│   36:       port = each.value.redis_port
│     ├────────────────
│     │ each.value.redis_port is list of number with 1 element
│ 
│ Inappropriate value for attribute "port": number required.
╵
Mark juan avatar
Mark juan

getting this error with my above code

Mark juan avatar
Mark juan

can you please tell what do i need to try

Mohammed Yahya avatar
Mohammed Yahya

use something like

  dynamic "subnet_mapping" {
    for_each = var.subnet_mapping

    content {
      subnet_id            = subnet_mapping.value.subnet_id
      allocation_id        = lookup(subnet_mapping.value, "allocation_id", null)
      private_ipv4_address = lookup(subnet_mapping.value, "private_ipv4_address", null)
      ipv6_address         = lookup(subnet_mapping.value, "ipv6_address", null)
    }
  }
Mohammed Yahya avatar
Mohammed Yahya

var.db_config in your case

Mark juan avatar
Mark juan

Used a dynamic block; it still gave an error
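The type errors above say each.value.redis_port is a single-element list of number (and redis_url a single-element list of string) where scalars are required, so rather than a dynamic block, unwrapping the lists may be all that’s needed. A hedged sketch (module and resource names are illustrative, following the redis.tf in the errors), assuming Terraform >= 0.15 for one():

```hcl
resource "kubernetes_service" "redis_service" {
  for_each = module.redis
  metadata {
    name = "${each.key}-redis"
  }
  spec {
    port {
      # one() unwraps a single-element list and errors if it ever has
      # more than one element; each.value.redis_port[0] would also work
      port = one(each.value.redis_port)
    }
  }
}
```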

adebola olowose avatar
adebola olowose

Hey guys, how do I make my load balancer conditional in a module, such that if the type is application it enforces that a security group be set, but if the type is set to network it creates without a security group? Thanks

Mohammed Yahya avatar
Mohammed Yahya
terraform-aws-modules/terraform-aws-alb attachment image

Terraform module to create an AWS Application/Network Load Balancer (ALB/NLB) and associated resources - terraform-aws-modules/terraform-aws-alb

adebola olowose avatar
adebola olowose

Thanks @
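The conditional adebola describes (security groups for application LBs, none for network LBs) could be sketched like this; all variable names are illustrative:

```hcl
variable "load_balancer_type" {
  type    = string
  default = "application"
}

variable "subnet_ids" {
  type = list(string)
}

variable "security_group_ids" {
  type    = list(string)
  default = []
}

resource "aws_lb" "this" {
  load_balancer_type = var.load_balancer_type
  subnets            = var.subnet_ids

  # NLBs take no security groups, so pass null to omit the argument
  # entirely when the type is "network"
  security_groups = var.load_balancer_type == "application" ? var.security_group_ids : null
}
```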

Zach avatar

Folks using env0 - does the pricing ‘per environment’ boil down to “per unique terraform state file”?

tim.davis.instinct avatar
tim.davis.instinct

Hey Zach, I’m the DevOps Advocate at env0. Yes, the pro tier is currently set per environment (state file) per month, but we’re currently talking internally about adding more options. We would love to get your feedback as to what you would like to see. This tier is also unlimited users, and unlimited concurrent runs. Also access to SAML, and our Custom Flows which enables a vast area of extensibility, including Policy as Code.

Zach avatar

hello! Just looking at pricing of the TACO vendors and features, and trying to make sense of the env0 one. The issue on my side from a cost perspective is that ‘per state’, if we follow a lot of standard Terraform practices, means I have lots of small ‘low blast radius’ states multiplied by ‘n’ environments

1
Zach avatar

I guess I might be looking at around $3k/month in that case

tim.davis.instinct avatar
tim.davis.instinct

You’re absolutely right. We’re entertaining a possible per user type thing as well.

Zach avatar

right on.

Zach avatar

I’m still just poking into the idea of hosted IAC runs anyways, we’re a small org so we’ve gotten by w/o it for awhile. Just starting to hit the problems of “hey … did someone not run this updated state?”

1
tim.davis.instinct avatar
tim.davis.instinct

If you don’t mind sending me a DM with how many users and state files you think you would be working with, I can take it to our CEO and CTO to see what they think. We know pricing is a major concern for the TACoS, so we want to make sure we are doing right by everybody.

loren avatar
loren

personally, i like pricing per runtime minute (like codebuild), or per simultaneous job (like travis-ci). per user sucks, just means we’ll optimize to minimize individuals actually touching the service (not good for you either!)

tim.davis.instinct avatar
tim.davis.instinct

That makes sense @loren. We’re also exploring per deployment models as well.

brad.whitcomb97 avatar
brad.whitcomb97

Hi all, I’m hoping someone can point me in the right direction. I’m currently in the process of learning Terraform and still invariably getting some of the basics wrong (so please bear with me!).

I’m attempting to create a multi-tiered/multi-az infrastructure, but I’m struggling to get my subnets to work! I’ve got as far as my code being ‘valid’, but at the point of applying I receive the following error -

Error: error creating application Load Balancer: ValidationError: At least two subnets in two different Availability Zones must be specified
│ 	status code: 400, request id: 48dcf4c9-ef9c-455b-bdd4-1140be1ccffc
│ 
│   with module.security.aws_lb.app_lb,
│   on modules/security/main.tf line 2, in resource "aws_lb" "app_lb":
│    2: resource "aws_lb" "app_lb" {
│ 
╵
╷
│ Error: Error creating Auto Scaling Group: ValidationError: The subnet ID 'aws_subnet.public_subnet.id' does not exist
│ 	status code: 400, request id: b633e797-4b8e-4edc-9311-befee780686b
│ 
│   with module.security.aws_autoscaling_group.web_asg,
│   on modules/security/main.tf line 84, in resource "aws_autoscaling_group" "web_asg":
│   84: resource "aws_autoscaling_group" "web_asg" {

So on the back of the error received, is there any way I can get my output.tf to include both ‘eu-west-2a’ and ‘eu-west-2b’? I’m sure there will be a simple way of doing this, but it’s left me scratching my head for a while.

output "aws_subnet_public_subnet" {
  value = aws_subnet.public_subnet["eu-west-2a"].id
}
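One hedged way to expose every AZ’s subnet (assuming aws_subnet.public_subnet is created with for_each keyed on the AZ, as the index above suggests) is to output the whole map:

```hcl
output "aws_subnet_public_subnet_ids" {
  value = { for az, subnet in aws_subnet.public_subnet : az => subnet.id }
}
```

The load balancer could then take subnets = values(module.vpc.aws_subnet_public_subnet_ids) (the module path is illustrative), which satisfies the two-AZ requirement in the error.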

My infrastructure consists of the following -

Root Module
• main.tf
• variables.tf
• versions.tf

Child Modules

Security
• main.tf
• variables.tf
• output.tf

VPC
• main.tf
• variables.tf
• output.tf

EC2
• main.tf
• variables.tf
• output.tf

Thanks!

Mark juan avatar
Mark juan

you can use default subnets, right? You can take them from the console

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to execute a lambda in terraform and getting the following error …

403 AccessDenied

does anyone have any ideas? I am running out of them. It does obtain the source from an S3 bucket in a different account

loren avatar
loren

to kinda go after the obvious first, does the lambda execution role have permissions to perform the action? and does the s3 bucket policy allow the lambda to perform the action?

jose.amengual avatar
jose.amengual

if the lambda does not have access to the bucket AND the code file is not in the bucket, you will get that error

jose.amengual avatar
jose.amengual
resource "aws_lambda_function" "lambda" {
  function_name = module.this.id
  s3_bucket     = module.lambda_artifact_bucket.outputs.bucket_name
  s3_key        = var.s3_bucket_key
  handler       = var.lambda_handler
  runtime       = "java11"
  timeout       = "60"
  memory_size   = "512"
}
jose.amengual avatar
jose.amengual

s3_key is something like /dev/app.zip

jose.amengual avatar
jose.amengual

so if the file is not at s3://${module.lambda_artifact_bucket.outputs.bucket_name}//dev/app.zip you get that error

jose.amengual avatar
jose.amengual

and obviously the bucket permissions need to be correct

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the bucket and key exist

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does the bucket policy need to allow the role used by the lambda to access the file from s3 to use?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

or does the IAM role we use to execute terraform need access to get the file from s3?

loren avatar
loren

you said you are “executing” a lambda. as in, invoking an existing lambda. if that’s correct, the lambda execution role needs permissions, and the bucket policy needs to allow the lambda execution role (or the account) as the principal

loren avatar
loren

if you are attempting instead to “create” or “update” a lambda, then it is the principal running terraform that needs permission to the bucket, and the bucket policy needs to allow that principal

jose.amengual avatar
jose.amengual

@loren is spot on; one is creation, the other is execution (where the lambda does something to that s3 bucket)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

We are getting access denied when creating the lambda via terraform

jose.amengual avatar
jose.amengual

then you need to check permissions on the bucket

jose.amengual avatar
jose.amengual

the lambda role needs access to the bucket

jose.amengual avatar
jose.amengual
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::xxxxxxx:role/lambda-role",
                    "arn:aws:iam::xxxxxxxx:root"
                ]
            },
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::lambda-artifacts/*",
                "arn:aws:s3:::lambda-artifacts"
            ]
        }
    ]
}
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Is the issue the lambda role not being in the policy, or the IAM role we assume for terraform not being in it?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is the bucket in the same region?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Another approach I suggest is to create a temporary bucket and use aws_s3_object_copy to stage the object into a bucket you control

Alex Jurkiewicz avatar
Alex Jurkiewicz

Also, check cloudtrail. The full error will be there

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

its in the same region

Joe Niland avatar
Joe Niland

@ sometimes TF_LOG=debug terraform apply is useful to see the underlying AWS API calls

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I figured it out - it was regarding object ownership of the object the lambda wanted to use, so I had to fix the ACLs

2
1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

thanks for all the help guys, was much appreciated
