#terraform (2024-08)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-08-01

Veerapandian M avatar
Veerapandian M

@ Team, I am interested in learning Terraform and would appreciate it if anyone could advise me on the best way to learn quickly, as I am an expert in cloud technology but not in IaC (Terraform, etc.).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What is your software development background, and what type of learner are you? Hands on, or by the book?

2024-08-04

Iftach avatar

Hi, I hope this is the place to ask such questions. I'm trying to upgrade our MSK version. We installed MSK using https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster (we're still on version 1.4.0) and I'm running into this error:

Error: error deleting MSK Configuration XXX: BadRequestException: Configuration is in use by one or more clusters. Dissociate the configuration from the clusters.

I'm not sure how to overcome this when using the Cloud Posse module. Can I provide it with the configuration from an external MSK configuration resource? I didn't seem to find such an input.

Has anyone faced this issue and can help? Thanks!

cloudposse/terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Igor Rodionov avatar
Igor Rodionov

@Iftach Unfortunately this is a chicken-and-egg problem with how the Terraform resources for MSK are structured: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#configuration_info

The configuration has to be created/replaced before the MSK cluster can be created/replaced. While that's the right order for initial creation, it causes problems for updates: the configuration is attached to the current MSK cluster (which would be recreated/updated in the next step), so you see the error.

If you're fine with downtime and do not care about data, you can do terraform destroy and terraform apply.

Can you send me the plan output so I can suggest a solution?

Iftach avatar

Hi @Igor Rodionov, I do care about avoiding downtime as much as possible. Generally, if I were using resource blocks to create the cluster/configuration, I could have used this lifecycle option on the configuration resource to 'fix' this issue:

  lifecycle {
    create_before_destroy = true
  }

Is there a way to provide the Cloud Posse MSK module with an external configuration resource? As far as I could tell, no, but maybe I'm missing something?
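For reference, a rough HCL sketch of what managing the configuration outside the module could look like. The module does not appear to expose such an input as of this thread, so the resource and name here are hypothetical, not the module's actual interface:

```hcl
# Hypothetical sketch: manage the MSK configuration yourself so that
# create_before_destroy applies. Note the name must change per revision
# of kafka_versions, otherwise the new configuration collides with the
# one still attached to the cluster.
resource "aws_msk_configuration" "external" {
  name           = "my-cluster-config-3-5-1" # hypothetical name
  kafka_versions = ["3.5.1"]

  server_properties = <<-PROPERTIES
    auto.create.topics.enable = true
  PROPERTIES

  lifecycle {
    create_before_destroy = true
  }
}
```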

Iftach avatar

This is the relevant part of the plan causing me issues:

  # module./../.module.msk.module.msk-apache-kafka-cluster.aws_msk_configuration.config[0] must be replaced
-/+ resource "aws_msk_configuration" "config" {
      ~ arn             = "arn:aws:bla" -> (known after apply)
      ~ id              = "arn:aws:kafka:bla -> (known after apply)
      ~ kafka_versions  = [ # forces replacement
          - "3.5.1",
          + "2.8.0",
        ]
      ~ latest_revision = 1 -> (known after apply)
        name            = "bla"
        # (1 unchanged attribute hidden)
    }
Iftach avatar

I am capable of working around this issue with some manual intervention (create an external config unmanaged by TF, move the existing clusters to use it, destroy the old configs and let TF recreate them),

but I wanted to know if this can be fixed at the TF level.

2024-08-05

Serdal Kepil avatar
Serdal Kepil

👋 Hello, team! I hope this thread is right for these issues. I get the following error when I try to deploy https://github.com/cloudposse/terraform-aws-rds-cluster/tree/main/examples/complete

╷
│ Error: Unsupported argument
│ 
│   on main.tf line 97, in module "rds_cluster":
│   97:   context = module.this.context
│ 
│ An argument named "context" is not expected here.

I use the same variables as in the example, so what could be the issue here?

1
theherk avatar
theherk

Maybe share how you are invoking the module?

Serdal Kepil avatar
Serdal Kepil

I created the project in my local

terraform init
terraform plan -var-file="fixtures.us-east-2.tfvars"
Serdal Kepil avatar
Serdal Kepil

It generates this error for all properties under module “rds_cluster”

theherk avatar
theherk

Without seeing how you set up that module it is impossible to say. I could, for example, do:

module "rds_cluster" {
  source = "cloudposse/route53-alias/aws"

  version         = "0.12.0"
  other_attrs = "..."
}

And get errors that look like they're about terraform-aws-rds-cluster, but they aren't actually. So I need to see how you are setting up the module.

Serdal Kepil avatar
Serdal Kepil

All right, I found an issue with the path and it's resolved. You were right. Thanks a lot!

2

2024-08-06

Juan Pablo Lorier avatar
Juan Pablo Lorier

Hi, I'm trying to autoscale ECS services. I've used the ecs_cloudwatch_autoscale module plus the ecs-cloudwatch-sns-alarms module, but the alarm module only accepts ARNs for the actions. Is it correct to pass the autoscale policy ARNs as the actions?

1
Juan Pablo Lorier avatar
Juan Pablo Lorier

To leave an answer here: it does work as expected.
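For anyone finding this later, a minimal sketch of the wiring. The autoscaling module's output name is an assumption (check the module's actual outputs), and the alarm is shown with the raw resource for brevity; the ecs-cloudwatch-sns-alarms module accepts ARNs the same way:

```hcl
module "autoscale" {
  source = "cloudposse/ecs-cloudwatch-autoscaling/aws"
  # version pin and service/cluster inputs omitted for brevity
}

# Pass the scaling policy ARN as the alarm action.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "ecs-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Average"
  threshold           = 80
  alarm_actions       = [module.autoscale.scale_up_policy_arn] # assumed output name
}
```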

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Ben Smith (Cloud Posse)

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Hey @Juan Pablo Lorier, to be clear: passing the ARNs from one module to another worked for you?

Juan Pablo Lorier avatar
Juan Pablo Lorier

yes, it did. All went fine by using both modules together

1
Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Perfect, glad to hear

Juan Pablo Lorier avatar
Juan Pablo Lorier

thanks!

2024-08-07

Release notes from terraform avatar
Release notes from terraform
07:33:31 AM

v1.9.4 1.9.4 (August 7, 2024) BUG FIXES:

core: Unneeded variable validations were being executed during a destroy plan, which could cause plans starting with incomplete state to fail. (#35511) init: Don't crash when discovering invalid syntax in duplicate required_providers blocks…

skip unneeded variable validation during destroy by jbardin · Pull Request #35511 · hashicorp/terraform

Arbitrary expressions cannot be evaluated during destroy, because the "state" of the state is unknown. Missing resources or invalid data will cause evaluations to fail, preventing any pro…

Release notes from terraform avatar
Release notes from terraform
08:03:32 AM

v1.10.0-alpha20240807 1.10.0-alpha20240807 (August 7, 2024) BUG FIXES:

The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type. (#35465)

Release v1.10.0-alpha20240807 · hashicorp/terraform

1.10.0-alpha20240807 (August 7, 2024) BUG FIXES:

The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type. …

configs: Include context when variable default has nested problem by apparentlymart · Pull Request #35465 · hashicorp/terraform

Previously this error message was including only the main error message and ignoring any context about what part of a potential nested data structure it arose from. Now we'll use our usual path…

Boris Dyga avatar
Boris Dyga

Hi! Any way to create a lambda layer with a CloudPosse module?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Ben Smith (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a component for lambdas that we use

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using atmos?

Boris Dyga avatar
Boris Dyga

HI @Erik Osterman (Cloud Posse)! Yes I use atmos. Could you please post a sample code snippet on how to create a layer with that module?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our docs should be publicly available tomorrow

Boris Dyga avatar
Boris Dyga

Great! Looking forward!

Boris Dyga avatar
Boris Dyga

@Erik Osterman (Cloud Posse), any updates?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Boris Dyga it will go live today. I will let you know when it is up

Boris Dyga avatar
Boris Dyga

Thank you!

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

So our component here is the starting point. Then you'd probably want to enable CI/CD if this is a constantly updated lambda app. If it's static, then throw the code in an S3 bucket (or local to the component) and be done.

If CI/CD is wanted, then you'll want a repo with a workflow similar to https://github.com/cloudposse/example-app-on-lambda-with-gha

This will build and publish (push to an S3 bucket) the lambda, then write an SSM key with the commit SHA and the S3 key where the code lives (like <my-lambda-s3-bucket>/<my-app>/<sha>.zip). We then have promote steps on main-branch commits or releases; promotion copies the SSM key to other AWS accounts so that the lambda can be updated in other environments. Finally, in all the workflows we re-deploy the stack that actually deploys the lambda from S3 using terraform.

The link Erik posted handles most of this
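A rough sketch of the publish step described above, as a GitHub Actions job. The bucket name, app name, SSM path, and build commands are placeholders, not taken from the example repo:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and zip the lambda
        run: |
          make build            # placeholder build command
          zip -r lambda.zip dist/
      - name: Push the artifact to S3, keyed by commit SHA
        run: aws s3 cp lambda.zip "s3://my-lambda-s3-bucket/my-app/${GITHUB_SHA}.zip"
      - name: Record the artifact location in SSM for the terraform deploy
        run: |
          aws ssm put-parameter \
            --name "/lambda/my-app/current" \
            --type String --overwrite \
            --value "my-app/${GITHUB_SHA}.zip"
```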

cloudposse/example-app-on-lambda-with-gha
1
Michal Tomaszek avatar
Michal Tomaszek

hi! new here, still reading and learning. Out of curiosity: why, in the versions.tf file for, say, VPC, is only a greater-than limit specified for terraform/providers? Normally, in semver, a major version can include breaking changes in the API.

1
loren avatar
Version Constraints - Configuration Language | Terraform | HashiCorp Developer

Version constraint strings specify a range of acceptable versions for modules, providers, and Terraform itself. Learn version constraint syntax and behavior.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


normally, in semver major version can include breaking changes in API

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is correct; it can introduce breaking changes (in fact, it has happened a few times in the past with the aws provider)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do it in TF modules so we don't have to update hundreds of them every time a new major version of a provider is released

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the case of breaking changes, we fix the particular module

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

however, in your TF root-modules (components), you can always restrict it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so even if a child module uses a greater-than limit, you can still restrict it in your root modules to exclude major versions
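For example (illustrative version numbers): Terraform takes the intersection of all constraints declared for a provider, so a root module can narrow a child module's open-ended range:

```hcl
# versions.tf in a child module: declares only a floor
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
```

```hcl
# versions.tf in the root module: pin to a major version; the effective
# constraint is the intersection, so future breaking majors are excluded.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```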

Michal Tomaszek avatar
Michal Tomaszek

thanks for quick and comprehensive replies!

1

2024-08-08

2024-08-09

george.m.sedky avatar
george.m.sedky

Hey everyone, I just published this VSCode extension for more reliable Terraform/OpenTofu code modifications. It's still in alpha, but it's much more accurate than GitHub Copilot and Claude 3.5 because it makes use of the latest provider docs, plus structured generation makes sure the TF code is 100% syntactically valid.

I’m not sure how useful this extension is, so I’d love to hear your thoughts

https://youtu.be/Uxk1hsSs6tI?si=PO4xxf9WTODM3fMF

1
1
mayank2299 avatar
mayank2299

That's great, @george.m.sedky

1
1

2024-08-11

2024-08-12

managedkaos avatar
managedkaos
06:31:26 PM
Terraform Roadmap - roadmap.sh

Step by step guide to learn Terraform in 2024. We also have resources and short descriptions attached to the roadmap items so you can get everything you want to learn in one place.

2024-08-13

Michael avatar
Michael

Does anyone use the tool tfschema in your Terraform workflows? I was looking for tooling similar to TFLint that gives me greater visibility into desired configuration parameters and found it. Curious if anyone has any cool use cases or integrations.

❯ tfschema resource show google_compute_instance
+---------------------------+--------------------------------+----------+----------+----------+-----------+
| ATTRIBUTE                 | TYPE                           | REQUIRED | OPTIONAL | COMPUTED | SENSITIVE |
+---------------------------+--------------------------------+----------+----------+----------+-----------+
| allow_stopping_for_update | bool                           | false    | true     | false    | false     |
| can_ip_forward            | bool                           | false    | true     | false    | false     |
| cpu_platform              | string                         | false    | false    | true     | false     |
| current_status            | string                         | false    | false    | true     | false     |
| deletion_protection       | bool                           | false    | true     | false    | false     |
| description               | string                         | false    | true     | false    | false     |
| desired_status            | string                         | false    | true     | false    | false     |
| effective_labels          | map(string)                    | false    | false    | true     | false     |
| enable_display            | bool                           | false    | true     | false    | false     |
| guest_accelerator         | list(object({ count=number,    | false    | true     | true     | false     |
|                           | type=string }))                |          |          |          |           |
| hostname                  | string                         | false    | true     | false    | false     |
| id                        | string                         | false    | true     | true     | false     |
| instance_id               | string                         | false    | false    | true     | false     |
| label_fingerprint         | string                         | false    | false    | true     | false     |
| labels                    | map(string)                    | false    | true     | false    | false     |
| machine_type              | string                         | true     | false    | false    | false     |
| metadata                  | map(string)                    | false    | true     | false    | false     |
| metadata_fingerprint      | string                         | false    | false    | true     | false     |
| metadata_startup_script   | string                         | false    | true     | false    | false     |
| min_cpu_platform          | string                         | false    | true     | true     | false     |
| name                      | string                         | true     | false    | false    | false     |
| project                   | string                         | false    | true     | true     | false     |
| resource_policies         | list(string)                   | false    | true     | false    | false     |
| self_link                 | string                         | false    | false    | true     | false     |
| tags                      | set(string)                    | false    | true     | false    | false     |
| tags_fingerprint          | string                         | false    | false    | true     | false     |
| terraform_labels          | map(string)                    | false    | false    | true     | false     |
| zone                      | string                         | false    | true     | true     | false     |
+---------------------------+--------------------------------+----------+----------+----------+-----------+
George Yermulnik avatar
George Yermulnik

I use it every so often when I get stuck figuring out the type of an attribute in a resource (especially with complex types). Great tool

1
this1

2024-08-14

Michal Tomaszek avatar
Michal Tomaszek

hi, I'm playing around with Atmos and I wonder: would it be considered best practice not to include *.tfvars / *.tfvars.json files in the repository, since these are managed and created on the fly by Atmos anyway? I could ask a similar question about the auto-generated backend.tf.json.

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the tfvars.json files and the backend.tf.json are automatically created by Atmos from the config in YAML manifests - we don’t push those to VCS (and you can even exclude them in .gitignore)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you add any tfvars files, then Terraform will combine them with those that Atmos generates - this is not recommended because Terraform processes the files in lexicographical order, and depending on the generated file names, the values in them or in your own tfvars files will take precedence (which is not OK)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, if you have all the vars and backend config (and other settings) in YAML, then the command atmos describe component <component> -s <stack> will show you everything about the component in the stack. If you have some vars defined in the terraform code, then Atmos will not be able to show you those values

Michal Tomaszek avatar
Michal Tomaszek

thank you for the excellent answer, @Andriy Knysh (Cloud Posse)

1
Release notes from terraform avatar
Release notes from terraform
11:53:34 AM

v1.10.0-alpha20240814 1.10.0-alpha20240814 (August 14, 2024) BUG FIXES:

The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type. (#35465) Sensitive marks could be incorrectly transferred to nested resource values, causing erroneous changes during a plan…

Release v1.10.0-alpha20240814 · hashicorp/terraform

1.10.0-alpha20240814 (August 14, 2024) BUG FIXES:

The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type….

configs: Include context when variable default has nested problem by apparentlymart · Pull Request #35465 · hashicorp/terraform

Previously this error message was including only the main error message and ignoring any context about what part of a potential nested data structure it arose from. Now we'll use our usual path…

2024-08-19

Ahmed Ellejji avatar
Ahmed Ellejji

👋 Hello, team! I am trying to automate (IaC) our Glue services: IAM role, S3, Glue Job, Workflow. I am facing this error when I use

terraform {
    source = "tfr:///cloudposse/s3-bucket/aws?version=2.0.3"
}

to import the s3 module, or when importing the iam-role module:

terraform {
    source = "tfr:///cloudposse/iam-role/aws?version=0.19.0"
}

Error is

'.terraform\modules\s3_user.s3_user'...
│ fatal: '$GIT_DIR' too big

I tried everything possible: increased the buffer size, downloaded the module manually and pointed the source to my local copy, but still no use. Any help please?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

any one can support me with this

1

2024-08-20

Release notes from terraform avatar
Release notes from terraform
06:23:32 PM

v1.9.5 1.9.5 (August 20, 2024) ENHANCEMENTS:

cloud: The cloud block can now interact with workspaces that have HCP resource IDs. (#35495)

BUG FIXES:

core: removed blocks with provisioners were not executed when the resource was in a nested module. (<a href=”https://github.com/hashicorp/terraform/issues/35611“…

Upgrade go-tfe from 1.51.0 to 1.58.0 by jbonhag · Pull Request #35495 · hashicorp/terraform

This PR bumps the go-tfe dependency in Terraform to v1.58.0, which allows the Terraform cloud block to interact with workspaces that have HCP IDs. This requires changing the mock dependency from go…

Removed targets can be within nested modules by jbardin · Pull Request #35611 · hashicorp/terraform

The initial implementation of removed block provisioners didn't take into account that the from target could reside within a nested module. This means that when iterating over the module config…

party_parrot1

2024-08-22

aj_baller23 avatar
aj_baller23

was wondering if you guys can point me in the right direction… trying to find a Terraform module for the AWS Personalize service. It doesn't seem like the Terraform AWS provider has a resource for Personalize, unless I'm not looking in the correct location. I didn't find any from Cloud Posse either. Any recommendation would be appreciated.

kevcube avatar
kevcube

Providers don't provide modules, but if you mean a resource, then yes, the provider doesn't seem to have a resource for that API. https://github.com/hashicorp/terraform-provider-aws/issues/30060

Because of this, you likely won't find any modules, as the only option for managing this service in Terraform is the awscc provider, which I have found to be a useful provider, but I haven't seen it used in any public modules before.

Your options are to write your own module or terraform code using the awscc provider or to upvote the issue and wait.

#30060 [New Service]: AWS Personalize

Description

While Personalize isn’t a particularly new service, it does seem like there is no support for it whatsoever in terraform. I could be completely missing it, but are there any plans to add this?

Requested Resource(s) and/or Data Source(s)

aws_personalize_recommender

Potential Terraform Configuration

No response

References

No response

Would you like to implement a fix?

None

2024-08-23

Michael avatar
Michael

Just came across this opinionated article on “one root module to rule them all”: https://medium.com/@maximonyshchenko/the-secret-to-terraform-efficiency-a76140a5dfa1

The secret to Terraform efficiency

*This article is for Terraform heavy users, who manage complex infrastructures.

2
1
kevcube avatar
kevcube

wtf this image really offended me because i’m a terragrunt pusher

2
Joe Perez avatar
Joe Perez

I really enjoy seeing how others tackle terraform organization

1
Joe Perez avatar
Joe Perez

I'm not a big fan of Terraform Cloud's resource-based pricing model, so I lean more into S3-based backends, reusable Terraform modules, data-source lookups for loose coupling, and tier-based separation of resources, similar to the author
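A minimal S3 backend block of the sort described (bucket, key, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"           # placeholder bucket
    key            = "networking/terraform.tfstate" # one state per tier
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"              # state locking
    encrypt        = true
  }
}
```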

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

"This article is for Terraform heavy users, who manage complex infrastructures." Anyone can say that, and all things are relative. I know one of our customers learned the hard way that this doesn't work when their Terraform plans took 25GB of RAM to run. Now they use Atmos.

10002
Joe Perez avatar
Joe Perez

That's crazy; what warrants needing 25GB to run a plan? Thousands of resources in a single state?

Joe Perez avatar
Joe Perez

I've heard of plans taking an hour to finish and targeting being used as a band-aid

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It can happen easily if you create a resource factory inside a root module. Imagine a root module that not only leverages a lot of child modules, but also defines a complete SDLC for dev/staging/prod and N environments.

Joe Perez avatar
Joe Perez

I can definitely see how someone adding one thing for their project or standalone ticket creates lots of bloat over time

Joe Perez avatar
Joe Perez

The worst I've experienced of that has been a single state for each env, which I guess is good to get started, but scales poorly for teams

Rishav avatar

Really fascinating read, especially since our team just decided to move away from this approach after a little over a year!

Like @Erik Osterman (Cloud Posse) says, expertise is relative. Our trio had nearly a decade of Terraform experience between us when we began the greenfield project, so we went all in on dynamic backends, var-files and even workspaces, since we wanted the DRYest setup.

After a year's experience, some issues became too much to work around:

• Module version tagging
  ◦ Simply not possible to have different module tags in dev/stg/prod, since all TF config is shared.
  ◦ Thanks to OpenTofu's early static evaluation, this is at least possible now.

• More complexity than resource count
  ◦ I think it's naive to believe all envs are "identical", excepting resource counts for cost optimisation.
  ◦ In order to launch a new feature/implementation, you test and tinker with it in dev, then move it to stg and probably introduce more changes from feedback, before releasing a polished version in prod.
  ◦ Some additional features (be that backup or security options) simply don't exist or are incompatible with how prod requires it.

• Plans per environment for each change
  ◦ The worst one of all: since the core TF config is shared by all environments, we regularly had to run plan for all envs for each change, just to make sure we weren't inadvertently breaking prod by changing something in dev.
  ◦ This one is unforgivable, and not even a branch-based version strategy can help with this mess.

With that being said, I do not enjoy having to diff between directories nor the repetition brought about by having a dedicated folder for each environment.

But the simplicity of a 1:1 relationship is undeniable. What you see is what you get, and you can rest easy knowing any change in dev will never impact stg, let alone prod.

3
Joe Perez avatar
Joe Perez

Great insight, I think you should repost this with “also send to #terraform

Rishav avatar

Aw, thanks!

You reckon? Alright then.

1
Joe Perez avatar
Joe Perez

I’ve wanted to test moving to that pattern, but I agree that it can be rigid/unforgiving in both good and bad ways

Rishav avatar

Spot-on, that’s it!

When we started out, we were fully onboard with the strict rigidity. After all, all envs should be (near-)identical for a given setup, right?

Well, we believed in it hard enough to push through for a year, managing 6 different envs with the same DRY setup.

However, the tipping point was regularly waiting for plans to complete for 6 different envs for the most trivial of config changes.

If anything, the experience has reconfirmed that this is a perfectly valid (and positive!) approach IF you can guarantee there'll never be any "major" differences between any of your environments beyond changing counts.

1
Joe Perez avatar
Joe Perez

Experimentation is key and the important thing is that you found something that works for your team

1
Paweł Rein avatar
Paweł Rein

I may be missing something, but imagine TF config (variables) were separate from "code". It would be layered the way Opscode Chef did it with environments and roles, allowing for overrides with a total of 11 levels of precedence (some used to find that too complex, I know). Isn't the reason for this thread to exist, and for this problem to have "no right way to do it", that TF doesn't allow for code/config separation and config per env?

Paweł Rein avatar
Paweł Rein

Imagine, just hypothetically, that you had no env/region directories and just one plain structure in the TF repo, and in another repo you had a Chef-like variables layout with defaults and env, region, and role overrides.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Paweł Rein if you'd like to keep config separate from code, organized by environment, leveraging overrides, inheritance and imports, it sounds like you may like atmos https://atmos.tools

1
Paweł Rein avatar
Paweł Rein

One other thought: what the author calls "tier 2" - the IaC stack that is the application team's concern. If done the way he suggests, with TF in the application repo, it can lead to duplicated and orphaned resources. Not sure if that approach is compatible with Atmos or other generators like Terramate. Crossplane seems well suited for this kind of separation of concerns.

2024-08-26

Sara Jarjoura avatar
Sara Jarjoura

hey! new to atmos, trying it out for the first time. I'm getting the following error trying to convert a repo to use atmos:

Could not find the component 'aws-to-github' in the stack 'dev'.
Check that all the context variables are correctly defined in the stack manifests.
Are the component and stack names correct? Did you forget an import?

the directory structure

.
├── atmos.yaml
└── components
    ├── stacks
    │   ├── catalog
    │   │   └── aws-to-github.yaml
    │   ├── deploy
    │   │   ├── _defaults.yaml
    │   │   └── dev.yaml
    │   └── mixins
    │       └── module
    │           ├── aws-to-github.yaml
    │           ├── google-workspace-to-aws.yaml
    │           └── infra-shared.yaml
    └── terraform
        └── aws
            ├── aws-to-github
            │   ├── data.tf
            │   ├── locals.tf
            │   ├── main.tf
            │   ├── policies.tftpl
            │   ├── trust-document.tftpl
            │   └── variables.tf
            └── identity-center
                ├── main.tf
                └── variables.tf

atmos.yaml

base_path: "./"

components:
  terraform:
    base_path: "components/terraform/aws"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false

stacks:
  base_path: "components/stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "deploy/*/_defaults.yaml"
  name_pattern: "{{.vars.env}}-{{.vars.region}}"

logs:
  file: "/dev/stderr"
  level: Info

I’m sure I’m missing something obvious - any ideas?

1
Sara Jarjoura avatar
Sara Jarjoura

stacks/deploy/dev.yaml

vars:
  env: "dev"
  account: "<AWS ACCOUNT>"

components:
  terraform:
    aws-to-github: {}

import:
  - catalog/aws-to-github

terraform:
  providers:
    aws:
      allowed_account_ids: "{{ .vars.account }}"
      profile: "{{ .vars.env }}-administrator"
Sara Jarjoura avatar
Sara Jarjoura

the command I’m using is

atmos terraform init aws-to-github -s dev
Sara Jarjoura avatar
Sara Jarjoura

ok, I found the atmos describe stacks command, which showed me my stack is named deploy/dev for some reason

Sara Jarjoura avatar
Sara Jarjoura

ok - for some reason if I use the {stage} naming pattern it works, but with {{.vars.env}} it doesn't

Sara Jarjoura avatar
Sara Jarjoura
04:32:55 PM

¯\_(ツ)_/¯

Miguel Zablah avatar
Miguel Zablah

I think it's your stack naming; try using something like {environment}-{stage} instead of {{.vars.env}}-{{.vars.region}}

it might be that .vars is not available in the atmos.yaml file

also, this might be a question for the atmos channel

1
Sara Jarjoura avatar
Sara Jarjoura

awesome, that works! I can even remap environment to env in the defaults with

vars:
  env: "{{ .vars.environment }}"

super cool, thank you!
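For reference, the working form of the stacks section from the thread: {environment} and {stage} are Atmos's built-in context tokens, and as discovered above, arbitrary {{ .vars.* }} template lookups were not available in name_pattern:

```yaml
stacks:
  base_path: "components/stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "deploy/*/_defaults.yaml"
  # built-in context tokens, not Go-template variable lookups
  name_pattern: "{environment}-{stage}"
```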

Miguel Zablah avatar
Miguel Zablah

Niceee!! Glad I was of help!

1
Jmarz avatar

Hi, at work we use CF and are planning to use TF soon, as changes are happening at work. (I couldn't find folks here asking about learning TF from scratch.) What resources would you suggest looking into in order to learn TF (other than HashiCorp docs and YouTube videos)? I know there are resources out there, but I've run into some that are outdated (uploads from 5 years ago, etc.). Any suggestions would help, thank you! Also, I would consider paid ones too.

haque.zubair avatar
haque.zubair

We switched 7 months ago; CloudFormation is so painful lol

haque.zubair avatar
haque.zubair
AWS API Gateway & Lambda with Terraformattachment image

I attended a PI planning session with the product development team a short while ago. We were preparing for a new feature rollout, I was…

1
Joe Perez avatar
Joe Perez

Shameless plug www.taccoform.com

1
cb578 avatar
Terraform Roadmap - roadmap.shattachment image

Step by step guide to learn Terraform in 2024. We also have resources and short descriptions attached to the roadmap items so you can get everything you want to learn in one place.

1
Jmarz avatar

Thanks all, i’ll look into this roadmap too! Nicely done.

Michal Tomaszek avatar
Michal Tomaszek

hey, I was playing with the VPC component from the terraform-aws-components repo yesterday. it looks like account-map is a prerequisite for it. account-map, on the other hand, needs the account component. is there some workaround to use just the VPC component without account-map?

Miguel Zablah avatar
Miguel Zablah

Is this the component you're referring to? https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/README.md if so, where do you see that account-map?


tags:

  • component/vpc
  • layer/network
  • provider/aws


Component: vpc

This component is responsible for provisioning a VPC and corresponding Subnets. Additionally, VPC Flow Logs can
optionally be enabled for auditing purposes. See the existing VPC configuration documentation for the provisioned
subnets.

Usage

Stack Level: Regional

Here’s an example snippet for how to use this component.

catalog/vpc/defaults or catalog/vpc

components:
  terraform:
    vpc/defaults:
      metadata:
        type: abstract
        component: vpc
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        name: vpc
        availability_zones:
          - "a"
          - "b"
          - "c"
        nat_gateway_enabled: true
        nat_instance_enabled: false
        max_subnet_count: 3
        vpc_flow_logs_enabled: true
        vpc_flow_logs_bucket_environment_name:
        vpc_flow_logs_bucket_stage_name: audit
        vpc_flow_logs_traffic_type: "ALL"
        subnet_type_tag_key: "example.net/subnet/type"
        assign_generated_ipv6_cidr_block: true

import:
  - catalog/vpc

components:
  terraform:
    vpc:
      metadata:
        component: vpc
        inherits:
          - vpc/defaults
      vars:
        ipv4_primary_cidr_block: "10.111.0.0/18"

Requirements

  • terraform: >= 1.0.0
  • aws: >= 4.9.0

Providers

  • aws: >= 4.9.0

Modules

  • endpoint_security_groups: cloudposse/security-group/aws (2.2.0)
  • iam_roles: ../account-map/modules/iam-roles (n/a)
  • subnets: cloudposse/dynamic-subnets/aws (2.4.2)
  • this: cloudposse/label/null (0.25.0)
  • utils: cloudposse/utils/aws (1.3.0)
  • vpc: cloudposse/vpc/aws (2.1.0)
  • vpc_endpoints: cloudposse/vpc/aws//modules/vpc-endpoints (2.1.0)
  • vpc_flow_logs_bucket: cloudposse/stack-config/yaml//modules/remote-state (1.5.0)

Resources

  • aws_flow_log.default (resource)
  • aws_shield_protection.nat_eip_shield_protection (resource)
  • aws_caller_identity.current (data source)
  • aws_eip.eip (data source)

Inputs

  • additional_tag_map: Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id. This is for some rare cases where resources want additional configuration of tags and therefore take a list of maps with tag key, value, and additional configuration. Type: map(string). Default: {}. Required: no
  • [assign_generated_ip…
Michal Tomaszek avatar
Michal Tomaszek

@Miguel Zablah, this one exactly. have a look at the providers.tf file

Miguel Zablah avatar
Miguel Zablah

I see. You can always modify it to not need that part and let atmos generate the backend for you. I actually use the community AWS module for this, https://github.com/terraform-aws-modules/terraform-aws-vpc, so I might not be of much help

terraform-aws-modules/terraform-aws-vpc

Terraform module to create AWS VPC resources
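As a rough sketch of that approach (all values below are placeholders, not anything from this discussion; check the module's inputs for your use case):

```hcl
# Hypothetical usage of terraform-aws-modules/vpc; the name, CIDRs, and AZs
# are illustrative placeholders.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "dev-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # One NAT gateway shared across AZs keeps costs down in non-prod environments
  enable_nat_gateway = true
  single_nat_gateway = true
}
```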

Michal Tomaszek avatar
Michal Tomaszek

that's exactly what I did. however, I'm still wondering whether I could keep the vendored component intact and still have that option (no account-map component usage) without removing anything.

Miguel Zablah avatar
Miguel Zablah

without account and account-map I don't think it's possible, but you can also ask in the atmos channel; maybe someone there knows of a way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, the terraform-aws-components repo consists of our opinionated root modules as part of our refarch that is optimized for use with atmos and documented at docs.cloudposse.com

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Components | atmos

Components are opinionated building blocks of infrastructure as code that solve one specific problem or use-case.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Future versions of our refarch will move away from the account-map, but no ETA on that.

Michal Tomaszek avatar
Michal Tomaszek

assuming I have a spare moment, would it make sense to prepare a PR that would make it more generic? i.e. some flag to enable/disable sourcing values from account-map. or do you plan to change the architecture of this component completely? in that case, maybe such a PR isn't worth the effort

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We were discussing this again last Friday on our regular Architecture Review Board call. We have some longer-term plans that would be too much, but shorter term there might be something we could do that benefits you.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Out of curiosity, are you also using atmos?

Michal Tomaszek avatar
Michal Tomaszek

ok, I see. yes, Atmos it is

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I understand your use case better, is your goal to use the components in accounts that were already provisioned, such as with control tower?

Michal Tomaszek avatar
Michal Tomaszek

yes, I already have the part related to account creation in place. sort of a brownfield scenario for VPC component usage.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, great that is the use-case we were discussing. we would like to make it so all the roles can be imperatively defined in the stack configurations, rather than relying on the outputs of the account-map.

the short term workaround is to use the static account map. I see we are missing the definition of that in the documentation here: https://atmos.tools/core-concepts/components/terraform/brownfield#hacking-remote-state-with-static-backends

Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.
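The static-backend workaround mentioned above can be sketched roughly like this (the output names shown are illustrative only; the real account-map outputs are more involved, so mirror whatever your components actually read):

```yaml
# Hypothetical stack config: point account-map's remote state at static values
# so dependent components (like vpc) can resolve it without the real
# account-map state ever being provisioned.
components:
  terraform:
    account-map:
      remote_state_backend_type: static
      remote_state_backend:
        static:
          # illustrative outputs only
          full_account_map:
            dev: "111111111111"
```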

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse) can you add a task for @Andriy Knysh (Cloud Posse) to document that? It’s come up a lot lately

2
1

2024-08-27

parth bansal avatar
parth bansal

hi, every time we plan and apply the Terraform script, the Elastic Beanstalk resource gets tainted, which causes it to be replaced every time. can anyone please tell me how to resolve this problem?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

2024-08-28

Release notes from terraform avatar
Release notes from terraform
01:03:31 PM

v1.10.0-alpha20240828 1.10.0-alpha20240828 (Aug 28, 2024) BUG FIXES:

The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type. (#35465) Sensitive marks could be incorrectly transferred to nested resource values, causing erroneous changes during a plan (<a…

Release v1.10.0-alpha20240828 · hashicorp/terraform

1.10.0-alpha20240828 (Aug 28, 2024) BUG FIXES:

The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type. (#…

configs: Include context when variable default has nested problem by apparentlymart · Pull Request #35465 · hashicorp/terraformattachment image

Previously this error message was including only the main error message and ignoring any context about what part of a potential nested data structure it arose from. Now we'll use our usual path…

2024-08-29

Aditya avatar

Hi Team,

I am trying to delete eks components using the destroy command and getting this error:

Error: Unsupported attribute

  on providers.tf line 5, in provider "aws":
   5:   profile = module.iam_roles.terraform_profile_name
    ├────────────────
    │ module.iam_roles is object with 13 attributes

This object does not have an attribute named "terraform_profile_name".

Releasing state lock. This may take a few moments…

This module is under account-map and is used in providers.tf files. Can somebody help me understand why this creates an issue during destruction but was not an issue during creation? A version issue?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Jeremy G (Cloud Posse)

1
Aditya avatar

anything on this?

Aditya avatar

the difference between the components that delete successfully and the ones that don't is this

Not able to delete (providers.tf):

provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use,
  # terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

and able to delete:

provider "aws" {
  region = var.region

  profile = module.iam_roles.profiles_enabled ? coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name) : null

  dynamic "assume_role" {
    for_each = module.iam_roles.profiles_enabled ? [] : ["role"]
    content {
      role_arn = coalesce(var.import_role_arn, module.iam_roles.terraform_role_arn)
    }
  }
}

is profile being null an issue somehow?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Aditya What version of account-map are you using? What version of Terraform or OpenTofu are you using?

Older versions of Terraform have issues with destroy that could be the cause of this. I have not seen this problem with any of our current customers, and I suspect the issue is with Terraform.

The profiles feature was something we used for a short time until we found better solutions, and it is only supported in a legacy mode. I expect you are not using it, in which case you can simply delete the offending line from providers.tf, although if I'm right about this being a resolved Terraform bug, the best solution is to use a more current version of Terraform, such as 1.5.7.
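If the profiles feature is unused, the trimmed provider block would look roughly like this (a sketch based on the snippet shared earlier in the thread, not the exact component source):

```hcl
provider "aws" {
  region = var.region

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}
```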

1
mlanci91 avatar
mlanci91

Hello, I have a few questions about the EKS node group module. I want to create another node group (my last node group is pinned to version 0.27.1! Trying to use v3.1.0 now). I'm trying to use an existing launch template, so I set launch_template_id and ami_type to CUSTOM, but I'm getting an error: "InvalidParameterException: You cannot specify a kubernetes version to use when specifying an image id to use within the launch template."

I also haven’t been able to figure out how to provide a node group name to the module.. it seems to use random_pets to name the new node group.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Jeremy G (Cloud Posse)

1
mlanci91 avatar
mlanci91

Wondering if this is some other issue on the EKS side. The node actually shows Ready status and even has pods running on it, but in the AWS console the node group doesn't show this node associated with it.

If I manually create a node group (clickops) and use a different launch template, it works. But if I try to use the node group module to point to that launch template, it fails. Doing another run now to get the proper error message, and I'll share the code snippet as well.

mlanci91 avatar
mlanci91
module "eks_node_group_db_az1" {
  source  = "cloudposse/eks-node-group/aws"
  version = "v3.1.0"

  instance_types = ["m5.2xlarge"]
  subnet_ids     = module.us-east-1.subnet_ids
  min_size       = 1
  max_size       = 3
  desired_size   = 1
  cluster_name   = module.eks_cluster.eks_cluster_id

  ami_image_id            = ["ami-0cb06ac50a7eea4f2"]
  launch_template_id      = ["lt-0cd374a6cfc0cf659"]
  ami_type                = "CUSTOM"
  launch_template_version = ["7"]

  block_device_mappings = [{
    "delete_on_termination" : true,
    "device_name" : "/dev/xvda",
    "encrypted" : false,
    "volume_size" : 40,
    "volume_type" : "gp2"
  }]

  # Enable the Kubernetes cluster auto-scaler to find the auto-scaling group
  cluster_autoscaler_enabled = true

  node_role_arn = [module.eks_node_group.eks_node_group_role_arn]

  kubernetes_labels = {
    "apptype" = "stateful"
  }
  kubernetes_taints = [{
    key    = "nodeType",
    value  = "database",
    effect = "NO_SCHEDULE"
  }]

  # Ensure the cluster is fully created before trying to add the node group
  module_depends_on = module.eks_cluster.kubernetes_config_map_id
}
mlanci91 avatar
mlanci91

Maybe i’m losing my mind.. but this seemed to have worked now.

mlanci91 avatar
mlanci91

I’ll call it a layer 8 issue for now

mlanci91 avatar
mlanci91

I do still want to understand how I can name the node groups what I want… It seems the node group module uses the label module internally, so I'm not sure if it's possible to pass the name I want myself.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@mlanci91 I’m glad you figured out the issue with the Kubernetes version.

Regarding naming the node group, this module uses the Cloud Posse standard null-label module to construct a name. Give it any of the inputs (e.g. name), and the module will append “-workers” to the constructed ID and use that as the name of the node group. At the moment, there is no way to suppress the “-workers” suffix, but other than that you can set whatever name you want via the null-label inputs.

label | The Cloud Posse Reference Architecture

Terraform module designed to generate consistent names and tags for resources. Use terraform-null-label to implement a strict naming convention.

There are 6 inputs considered “labels” or “ID elements” (because the labels are used to construct the ID):

  1. namespace
  2. tenant
  3. environment
  4. stage
  5. name
  6. attributes

This module generates IDs using the following convention by default: {namespace}-{environment}-{stage}-{name}-{attributes}. However, it is highly configurable. The delimiter (e.g. -) is configurable. Each label item is optional (although you must provide at least one). So if you prefer the term stage to environment and do not need tenant, you can exclude them and the label id will look like {namespace}-{stage}-{name}-{attributes}.

  • The tenant label was introduced in v0.25.0. To preserve backward compatibility, it is not included by default.
  • The attributes input is actually a list of strings and {attributes} expands to the list elements joined by the delimiter.
  • If attributes is excluded but namespace, stage, and environment are included, id will look like {namespace}-{environment}-{stage}-{name}. Excluding attributes is discouraged, though, because attributes are the main way modules modify the ID to ensure uniqueness when provisioning the same resource types.
  • If you want the label items in a different order, you can specify that, too, with the label_order list.
  • You can set a maximum length for the id, and the module will create a (probably) unique name that fits within that length. (The module uses a portion of the MD5 hash of the full id to represent the missing part, so there remains a slight chance of name collision.)
  • You can control the letter case of the generated labels which make up the id using var.label_value_case.
  • By default, all of the non-empty labels are also exported as tags, whether they appear in the id or not. You can control which labels are exported as tags by setting labels_as_tags to the list of labels you want exported, or the empty list [] if you want no labels exported as tags at all. Tags passed in via the tags variable are always exported, and regardless of settings, empty labels are never exported as tags. You can control the case of the tag names (keys) for the labels using var.label_key_case. Unlike the tags generated from the label inputs, tags passed in via the tags input are not modified.

There is an unfortunate collision over the use of the key name. Cloud Posse uses name in this module to represent the component, such as eks or rds. AWS uses a tag with the key Name to store the full human-friendly identifier of the thing tagged, which this module outputs as id, not name. So when converting input labels to tags, the value of the Name key is set to the module id output, and there is no tag corresponding to the module name output. An empty name label will not prevent the Name tag from being exported.

It’s recommended to use one terraform-null-label module for every unique resource of a given resource type. For example, if you have 10 instances, there should be 10 different labels. However, if you have multiple different kinds of resources (e.g. instances, security groups, file systems, and elastic ips), then they can all share the same label assuming they are logically related.

For most purposes, the id output is sufficient to create an ID or label for a resource, and if you want a different ID or a different format, you would instantiate another instance of null-label and configure it accordingly. However, to accommodate situations where you want all the same inputs to generate multiple descriptors, this module provides the descriptors output, which is a map of strings generated according to the format specified by the descriptor_formats input. This feature is intentionally simple and minimally configurable and will not be enhanced to add more features that are already in null-label. See examples/complete/descriptors.tf for examples.

All Cloud Posse Terraform modules use this module to ensure resources can be instantiated multiple times within an account and without conflict.

The Cloud Posse convention is to use labels as follows:

  • namespace: A short (3-4 letters) abbreviation of the company name, to ensure globally unique IDs for things like S3 buckets
  • tenant: (Rarely needed) When a company creates a dedicated resource per customer, tenant can be used to identify the customer the resource is dedicated to
  • environment: A short abbreviation for the AWS region hosting the resource, or gbl for resources like IAM roles that have no region
  • stage: The name or role of the account the resource is for, such as prod or dev
  • name: The name of the component that owns the resources, such as eks or rds

NOTE: The null originally referred to the primary Terraform provider used in this module. With Terraform 0.12, this module no longer needs any provider, but the name was kept for continuity.

  • Releases of this module from 0.23.0 onward only work with Terraform 0.13 or newer.
  • Releases of this module from 0.12.0 through 0.22.1 support HCL2 and are compatible with Terraform 0.12 or newer.
  • Releases of this module prior to 0.12.0 are compatible with earlier versions of terraform like Terraform 0.11.
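As a concrete sketch of the convention described above (all values here are hypothetical, not from this thread):

```hcl
# Hypothetical null-label instantiation illustrating the default
# {namespace}-{environment}-{stage}-{name}-{attributes} ID convention.
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "acme" # short company abbreviation
  environment = "ue1"  # region abbreviation
  stage       = "prod" # account role
  name        = "eks"  # component name
  attributes  = ["blue"]
}

# With these inputs, module.label.id would be "acme-ue1-prod-eks-blue",
# and the eks-node-group module would append "-workers" to such an ID
# when naming the node group.
```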
mlanci91 avatar
mlanci91

Thanks @Jeremy G (Cloud Posse)

mlanci91 avatar
mlanci91

Do you mean “name” as an input to the node group module?

mlanci91 avatar
mlanci91

I didn’t see name as a variable in variables.tf

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, name = "foo". The variables associated with naming and the null-label module are in a separate file, context.tf, to facilitate automated upgrades and remove all this boilerplate from variables.tf.

mlanci91 avatar
mlanci91

Explains why I couldn’t find it….

mlanci91 avatar
mlanci91

Need to look further into utilizing the label module elsewhere, thanks so much for the reply.

1

2024-08-30

Guru Prasad avatar
Guru Prasad

Hi, is anyone using the latest cloudposse/terraform-aws-eks-node-group for Windows nodes please? I am having an issue when the node tries to connect to the EKS cluster.

cloudposse/terraform-aws-eks-node-group

Terraform module to provision a fully managed AWS EKS Node Group

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

cloudposse/terraform-aws-eks-node-group

Terraform module to provision a fully managed AWS EKS Node Group

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

If you are using the recommended API authentication mode and deploying Windows nodes, then you have to configure the node group's access using EC2_WINDOWS, not EC2_LINUX.

Our eks/cluster component does not currently support Windows node groups, although our terraform-aws-eks-cluster module does.
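With the API authentication mode, that access type is set on the EKS access entry for the node role. A minimal sketch (the module references are hypothetical placeholders for your cluster name and node role ARN):

```hcl
# Hypothetical access entry for a Windows node role, assuming the cluster
# uses API (access entry) authentication mode.
resource "aws_eks_access_entry" "windows_nodes" {
  cluster_name  = module.eks_cluster.eks_cluster_id
  principal_arn = module.eks_node_group.eks_node_group_role_arn
  type          = "EC2_WINDOWS"
}
```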

cloudposse/terraform-aws-eks-cluster