#atmos (2023-11)

2023-11-01

Matt Gowie avatar
Matt Gowie

Anyone using Atmos with OpenTofu yet?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We haven’t had a chance to test it yet.

Matt Gowie avatar
Matt Gowie

I want to swap over a client to tofu early next year so we may give it a go if we get the time. I will be sure to reach out and share what we find if / when that happens.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add command: tofu to the components, and Atmos will execute the tofu binary without any changes to Atmos or to the stacks
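For example, at the component level (the component name here is hypothetical):

```yaml
components:
  terraform:
    vpc:
      # run the tofu binary instead of terraform for this component only
      command: tofu
```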

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or globally at any level

terraform:
  command: tofu
Matt Gowie avatar
Matt Gowie

Great stuff – Thanks Andriy!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Overrides | atmos

Use the ‘overrides’ pattern to modify component(s) configuration and behavior in the current scope.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use Teams and split your components per Team (each Team manages a set of components)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then override command per Team and set it to tofu - only that Team will use tofu w/o affecting everyone else
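A sketch of what that per-Team override could look like (file path and team name are hypothetical; the manifest would be imported only by the stacks that Team manages):

```yaml
# e.g. teams/devops.yaml
overrides:
  # every component in this scope will be executed with the tofu binary
  command: tofu
```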

Matt Gowie avatar
Matt Gowie

Nice Andriy! Glad to see it!

2023-11-02

Release notes from atmos avatar
Release notes from atmos
05:14:37 PM

v1.48.0

what

Add overrides section to Atmos stack manifests
Update docs: https://atmos.tools/core-concepts/components/overrides/

why

Atmos supports the overrides pattern to modify component(s) configuration and behavior using the overrides section in Atmos stack manifests. You can override the following sections in the component(s) configuration: vars, settings, env, command.

The overrides section can be used at the…


Release notes from atmos avatar
Release notes from atmos
04:34:38 AM

v1.49.0

what

Update depends_on and atmos describe dependents
Restore checking for the context in imports with Go templates (removed in #464)
Update README

why

The settings.depends_on was incorrectly proce…


2023-11-03

Matthew Reggler avatar
Matthew Reggler

Given the recent addition of passing outputs between stacks in Spacelift’s “Stack Dependencies V2” using the spacelift_stack_dependency_reference resource, has there been any thought toward adding support for this to the existing V1 support in Atmos? (https://atmos.tools/integrations/spacelift/#spacelift-stack-dependencies)

This would be a way to add Terraspace/Terragrunt-style stack-to-stack output sharing in the YAML, without having to acquire the value “in-code” via the remote-state module. For cases where you just want to pass a resource ARN to a downstream component’s IAM policy, this would be life-changing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is an interesting topic

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have three diff systems here:

• Terraform

• Spacelift

• Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently we designed Atmos to be independent of any tools, it just manages the configuration (and just happens to call the terraform binary, but can call anything else)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

The spacelift_stack_dependency_reference feature of Spacelift is nice, but once implemented, it would be completely dependent on Spacelift, and couldn’t be used with plain terraform (nor with Atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so right now, these three systems are somewhat independent and can be used w/o each other

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but you mentioned a good point, yes, it would be useful to define an output from a component in Atmos and let Atmos get the remote state w/o needing to create TF code for remote-state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the one hand, it would be very useful

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, it will couple Atmos with the internals of Terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this question comes up often, we will prob discuss that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that https://atmos.tools/integrations/spacelift/#spacelift-stack-dependencies is just another configuration section in Atmos YAML manifests

Spacelift Integration | atmos

Atmos natively supports Spacelift. This is accomplished using

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos just manages it, and has the functionality to find all the dependencies and dependent components in the stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the depends_on section is then used in a few different places now (and can be used in different systems later):

1. In the atmos describe dependents CLI command: https://atmos.tools/cli/commands/describe/dependents
2. In GitHub Actions, by calling the CLI command and then triggering the dependent Atmos components
3. In Spacelift, by converting those depends_on configs into Spacelift stack dependencies (https://docs.spacelift.io/concepts/stack/stack-dependencies), done by these modules: https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/tree/main/modules

atmos describe dependents | atmos

This command produces a list of Atmos components in Atmos stacks that depend on the provided Atmos component.

Stack dependencies - Spacelift Documentation

Collaborative Infrastructure For Modern Software Teams

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use this pattern with terraform remote state, but are moving away from that and will instead use SSM, which is more agnostic

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t like the dependency on Spacelift to manage input parameters

Matthew Reggler avatar
Matthew Reggler

Thought about this some more… Could this be managed by an extension to the spacelift-stack and spacelift-stack-from-atmos-config modules in your terraform-spacelift-cloud-infrastructure-automation repository, plus the addition of something like this to the component YAML?

components:
  terraform:
    component_b:
      settings:
        depends_on:
          1:
            component: "component_a"
        spacelift:
            references:
              1: 
                stack_dependency_component: component_a
                stack_dependency_tenant: core              # optional
                stack_dependency_stage: auto               # optional
                stack_dependency_environment: gbl          # optional
                output_var: component_a_output_name
                input_var: component_b_input_var_name

This seems pretty clean IMO. You already support a lot of Spacelift stack configuration via this interface (https://atmos.tools/integrations/spacelift/#stack-configuration), the tenant , stage, and environment config options would be optional, but would allow constructing a null label stack id target in any location.

Matthew Reggler avatar
Matthew Reggler

This keeps Atmos’ independence from Spacelift, by moving the feature development into your existing Spacelift repository modules, but still adds YAML-level output sharing to your portfolio.

Matthew Reggler avatar
Matthew Reggler

does that solve the coupling issue by avoiding re-using depends_on @Andriy Knysh (Cloud Posse)?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this could be solved by extending the terraform modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, we will only be implementing it with customer sponsorship

Matt Gowie avatar
Matt Gowie

I like this conversation as I’ve thought about this problem a good bit. It would be nice, but I know why atmos doesn’t want to solve it natively and I agree with that reasoning: Staying somewhat agnostic to the underlying tool and not creating a tight coupling of stack file configuration <> tool internals is the right way to go.

I’ve always thought about creating a smart terraform_remote_state mixin that would support component-to-component value sharing through explicit configuration in vars. We’ve talked about it internally a few times, but have never actually implemented it. It’s very likely possible, but probably painful to configure, and a high barrier to understanding for an outsider coming into the codebase.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, good discussion

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll think about that and might come up with something

2023-11-04

Release notes from atmos avatar
Release notes from atmos
11:24:34 PM

v1.50.0

what

Fix processing of Spacelift stack dependencies using settings.depends_on config
Implement double-dash -- handling for terraform and helmfile commands
Update docs
Update atmos terraform --help and atmos helmfile --help

why

Processing of Spacelift stack dependencies using settings.depends_on config had issues with cross-environment, cross-account dependencies

Double-dash -- can be used on the command line to signify the end of the options for Atmos and the start of the additional…


2023-11-06

Guus avatar

Hi, quick question regarding atmos terraform workspace <component> -s <stack>. If I understand correctly, the Atmos wrapper around the workspace command should calculate the workspace, init it with -reconfigure, and select it using terraform workspace select.

Locally this seems to work perfectly; however, in our CI/CD pipeline it fails with the following error after the terraform init happens:

Workspace "xxx" already exists
exit status 1

Error: Atmos exited with code 1.
Error: Process completed with exit code 1.

Any idea on what could cause this or what I’m doing wrong here?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What version of terraform?

Guus avatar

Locally: Terraform v1.6.3 (darwin_arm64)
CI: Terraform v1.6.2 (linux_amd64)

jose.amengual avatar
jose.amengual

are you using that new option that was added to terraform, the one that creates the workspace before selecting it, in your pipeline?

Guus avatar

I’m not adding a new option, in CI it’s just running the same command as I try locally, which is the atmos terraform workspace <component> -s <stack> command. Locally it has the correct plan output, but on CI/CD it returns the workspace exists error when running that command.

jose.amengual avatar
jose.amengual

are you running atmos on the Ci pipeline?

Guus avatar

yes

jose.amengual avatar
jose.amengual

the init works fine I guess?

jose.amengual avatar
jose.amengual

I remember having a similar issue but my init was not working

Guus avatar

Yes, that person seems to have the exact same issue.

jose.amengual avatar
jose.amengual

try that version then

jose.amengual avatar
jose.amengual

and report back

Guus avatar

Well, it either fails on init, or it fails just after, hard to tell because the workspace command is the only thing I’m running for now and it fails on that command.

Guus avatar

Executing a plan gives the same issue (because it will also do the init I guess).

jose.amengual avatar
jose.amengual

Try to add a terraform workspace select before atmos and see if it works
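As an aside, newer Terraform versions have a flag that avoids the create-vs-select failure entirely; a hypothetical CI step (component and workspace names illustrative) could pre-select the workspace before running Atmos:

```yaml
- name: Preselect Terraform workspace
  run: |
    cd components/terraform/<component>
    # -or-create selects the workspace, creating it first if it does not exist (Terraform >= 1.4)
    terraform workspace select -or-create <workspace>
```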

Nate avatar

I am running into an issue assuming a role needed to provision dns-primary in the dns account. Here is the error:

Error: Cannot assume IAM Role
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on providers.tf line 1, in provider "aws":
│    1: provider "aws" {
│ 
│ IAM Role (arn:aws:iam::yyyyyyyyyyyy:role/hct-core-gbl-dns-terraform) cannot be assumed.
│ 
│ There are a number of possible causes of this - the most common are:
│   * The credentials used in order to assume the role are invalid
│   * The credentials do not have appropriate permission to assume the role
│   * The role ARN is not valid
│ 
│ Error: operation error STS: AssumeRole, https response error StatusCode: 403, RequestID: cb414b0e-1fb7-4d2e-911a-de41a9db1a97,
│ api error AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/hct-gbl-root-admin/assumed-from-leapp is not authorized to
│ perform: sts:AssumeRole on resource: arn:aws:iam::yyyyyyyyyyyy:role/hct-core-gbl-dns-terraform

The strange part is that there is no role by the name “hct-core-gbl-dns-terraform” in the dns account. After some digging, I realized that the ARN of the role stored in the account_map output is different from the role ARN generated as part of the assume operation in the dns_primary component. Here is the output from the aws-team-roles component for the dns account. You can see that it does not contain the tenant “core”.

role_name_role_arn_map = {
  "admin" = "arn:aws:iam::yyyyyyyyyyyy:role/hct-gbl-dns-admin"
  "terraform" = "arn:aws:iam::yyyyyyyyyyyy:role/hct-gbl-dns-terraform"
}

Here is the output from the account_map component. You can see that the role ARN includes the tenant “core”.

terraform_roles = {
  "core-artifacts" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-artifacts-terraform"
  "core-audit" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-audit-terraform"
  "core-dns" = "arn:aws:iam::yyyyyyyyy:role/hct-core-gbl-dns-terraform"
  "core-identity" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-identity-terraform"
  "core-network" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-network-terraform"
  "core-root" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-root-admin"
  "core-security" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-security-terraform"
  "core-shared" = "arn:aws:iam::xxxxxxxxxx:role/hct-core-gbl-shared-terraform"
  "plat-dev" = "arn:aws:iam::xxxxxxxxxx:role/hct-plat-gbl-dev-terraform"
  "plat-prod" = "arn:aws:iam::xxxxxxxxxx:role/hct-plat-gbl-prod-terraform"
  "plat-qat" = "arn:aws:iam::xxxxxxxxxx:role/hct-plat-gbl-qat-terraform"
  "plat-staging" = "arn:aws:iam::xxxxxxxxxx:role/hct-plat-gbl-staging-terraform"
}

The roles created in the dns account do not include the tenant, whereas the assume operation is looking for a role that includes the tenant. This seems to be the pattern for output generated by the aws-teams and aws-team-roles components.

Any thoughts on what might be causing this discrepancy in role names?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Jeremy G (Cloud Posse)

Nate avatar

@Jeremy G (Cloud Posse) Any thoughts on the above?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Nate when you provisioned the aws-team-roles component, did you provide the tenant to it? It should be configured in the _default.yaml for the tenant, then the stack manifest must be imported into the top-level stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s easy to check if a component has all the required vars:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos describe component aws-team-roles -s <stack> will show you all the info for the component in the stack. Look at the vars section (you should see tenant in there)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Nate Andriy is right. The provisioned role name is based on the component ID, which requires both that you have set the tenant to some value (here it should be set to “core”), and that you have label_order set (normally in the top-level _default.yaml):

    label_order:
      - namespace
      - tenant
      - environment
      - stage
      - name
      - attributes
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can DM us your stack configs for review

Nate avatar

Thank you @Andriy Knysh (Cloud Posse) and @Jeremy G (Cloud Posse)! I am pretty sure I did – but I will double check later tonight. I can also share the stack configs at that time.

Nate avatar

I verified using the describe command above and the output does include the tenant. Sharing a zip file of my stacks folder with you shortly.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

If using Atmos, please share the generated .tfvars.json file for aws-team-roles in the dns account. It should be named core-gbl-dns-aws-team-roles.terraform.tfvars.json and be in the aws-team-roles directory after running atmos terraform plan ...

Nate avatar

{ "descriptor_formats": { "account_name": { "format": "%v-%v", "labels": ["tenant", "stage"] }, "stack": { "format": "%v-%v-%v-%v", "labels": ["namespace", "tenant", "environment", "stage"] } }, "enabled": true, "environment": "gbl", "namespace": "eg", "region": "us-east-1", "roles": { "admin": { "aws_saml_login_enabled": false, "denied_permission_sets": [], "denied_role_arns": [], "denied_teams": [], "enabled": true, "max_session_duration": 7200, "role_description": "Full administration of this account", "role_policy_arns": ["arn:aws:iam::aws:policy/AdministratorAccess"], "trusted_permission_sets": [], "trusted_role_arns": [ "arn:aws:iam::xxxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_" ], "trusted_teams": ["admin"] }, "template": { "aws_saml_login_enabled": false, "denied_permission_sets": [], "denied_role_arns": [], "denied_teams": [], "enabled": false, "max_session_duration": 7200, "role_description": "Template role, should not exist", "role_policy_arns": [], "trusted_permission_sets": [], "trusted_role_arns": [ "arn:aws:iam::xxxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_" ], "trusted_teams": [] }, "terraform": { "aws_saml_login_enabled": false, "denied_permission_sets": [], "denied_role_arns": [], "denied_teams": [], "enabled": true, "max_session_duration": 7200, "role_description": "Role for Terraform administration of this account", "role_policy_arns": ["arn:aws:iam::aws:policy/AdministratorAccess"], "trusted_permission_sets": [], "trusted_role_arns": [ "arn:aws:iam::xxxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_" ], "trusted_teams": ["admin", "devops"] } }, "stage": "dns", "tenant": "core" }

Nate avatar

Here it is @Jeremy G (Cloud Posse). Just replaced the account number with a generic one

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Nate The problem is that label_order is missing

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The .tfvars.json should include

"label_order": [
  "namespace",
  "tenant",
  "environment",
  "stage",
  "name",
  "attributes"
],

Likely, you neglected an import somewhere.

Nate avatar

I see. I thought this was optional and some sort of default order would prevail. I assume I can include this in “catalog/globals”. As this is a global change, do I need to re-provision the rest of the components as well?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

It is optional and a default order does prevail, but the default order does NOT include tenant. Everything would work OK if you were not using tenant anywhere, but you either have to use it everywhere or not use it anywhere.

Yes, it is a global change and you will need to reprovision everything that was incorrectly provisioned.
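A sketch of where that setting could live (file path hypothetical; per the convention mentioned above, it goes in a defaults manifest that every top-level stack imports):

```yaml
# e.g. orgs/<namespace>/_defaults.yaml
vars:
  label_order:
    - namespace
    - tenant
    - environment
    - stage
    - name
    - attributes
```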

Nate avatar

Ahh… gotcha! So the framework operates on the assumption that tenant is not used, and if one were to use a tenant, we need to provide overrides at the global level. I hope this does not change my account structure

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You can use all defaults, which exclude tenant everywhere, but you included tenant in our preferred but non-default configurations of account_name and stack descriptor formats. In the same place you set those, you should have also set label_order. I don’t know how you got the former but not the latter, but if you can point out how/why, we will see about fixing that.

Nate avatar

I don’t recall exactly - but I am sure it was an oversight on my part. I will let you know if I can recall how I got there. I am re-provisioning account, account-map, aws-teams now - will let you know how that goes.

Nate avatar

@Jeremy G (Cloud Posse) and @Andriy Knysh (Cloud Posse) Happy to report that re-provisioning went smoothly and provisioning of dns-primary is working fine now. Thanks so much for your help!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Nate please use code blocks in the future for proper formatting)

Nate avatar

I generally do, but missed on that last post - my bad. Will keep that in mind. Thanks @Erik Osterman (Cloud Posse)


2023-11-07

Tyler Rankin avatar
Tyler Rankin

Does Atmos include any way to identify “in-repo” components? We recently adopted github-action-atmos-affected-trigger-spacelift for our infrastructure monorepo, using direct triggers through spacectl. We define in-repo components in a catalog within the infrastructure monorepo, and the spacelift component makes use of context filters to split up our Spacelift admin stacks.

The action fails if we change a catalog file for an in-repo component. atmos describe affected correctly identifies that the component’s Spacelift stack needs to run, but it tries to trigger a Spacelift run at a particular commit SHA (from the infrastructure repo) that doesn’t exist in the repo that the Spacelift stack actually tracks, since that project is in its own repo. If there is a way to identify these in-repo stacks, I could filter the stack list before making the spacectl call.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have implemented this for Spacelift too, but ended up needing to use issue comments and trigger Spacelift based on those issue comments, I believe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Dan Miller (Cloud Posse) @Matt Calhoun

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using spacectl to trigger the stacks bypasses things like certain policies

Tyler Rankin avatar
Tyler Rankin

Ah I see how that might work better. Would you exclude in-repo stacks from the git_comment push policy as well?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

absolutely. A stack can be excluded by Spacelift policy (so that it won’t be triggered in Spacelift regardless of the comment) or ~excluded by Atmos (so that it’s not included in affected stacks)~

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

~with spacectl, I believe only excluding with atmos will work, since spacectl will trigger directly~

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Set settings.spacelift.workspace_enabled: false to disable any component https://atmos.tools/integrations/spacelift

Spacelift Integration | atmos

Atmos natively supports Spacelift. This is accomplished using

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

hm, actually reading through the action now, these are not actually excluded from the trigger. Right now the action does not include an option to exclude some stacks from affected, since we instead disable the stack directly in Spacelift.

I’d recommend creating a feature request if this is something that you’d like to see. Plus, it wouldn’t be too hard to filter for either excluded stacks or stacks with settings.spacelift.workspace_enabled: false

Tyler Rankin avatar
Tyler Rankin

Thanks @Dan Miller (Cloud Posse) - in this case we definitely want workspace_enabled. I think the comment route will allow us to work around the direct trigger of the non-existent commit. Before testing, just trying to think through the behavior if we have the comment push policy applied to the in-repo stacks. Actually, I believe Spacelift won’t open the PR proposed run on the in-repo stack because the PR does not originate in the stack’s project repo - so it likely won’t matter whether we attach the comment push policy to these stacks.

2023-11-08

RB avatar

is there an atmos validate stacks GitHub Action?

RB avatar

I’ve been using this

name: "validate"

on:
  workflow_dispatch: {}

  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  stacks:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v4

    - name: Setup atmos
      uses: cloudposse/github-action-setup-atmos@v1
      with:
        version: 1.50.0

    - name: Validate
      run: atmos validate stacks
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

not yet. We run this internally almost exactly as you have it

RB avatar

What am I missing?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)
- name: Run atmos validate
  id: atmos-validate
  run: |
    ln -s rootfs/usr/local/etc/atmos/atmos.yaml
    atmos validate stacks
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Formatting didn’t work on mobile, but we have to set the path to atmos.yaml config

RB avatar

oh interesting. I’ve been creating the atmos.yaml file directly in the repo root so atmos picks it up right away

RB avatar

thanks Dan!

np
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) would this work?

- name: Run atmos validate
  id: atmos-validate
  env:
    ATMOS_CLI_CONFIG_PATH: rootfs/usr/local/etc/atmos/atmos.yaml
  run: atmos validate stacks
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ATMOS_CLI_CONFIG_PATH can also be set globally for the workflow
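For example, at the top level of the workflow file (path taken from the snippet above):

```yaml
# applies to every step in every job of the workflow
env:
  ATMOS_CLI_CONFIG_PATH: rootfs/usr/local/etc/atmos/atmos.yaml
```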

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

yes that’d probably work too. We do that for the describe affected action https://github.com/cloudposse/github-action-atmos-affected-stacks/blob/main/action.yml#L99-L111

    - name: atmos affected stacks
      id: affected
      shell: bash
      env:
        ATMOS_CLI_CONFIG_PATH: ${{inputs.atmos-config-path}}
      run: |
        if [[ "${{ inputs.atmos-include-spacelift-admin-stacks }}" == "true" ]]; then
          atmos describe affected --file affected-stacks.json --verbose=true --repo-path "$GITHUB_WORKSPACE/main-branch" --include-spacelift-admin-stacks=true
        else
          atmos describe affected --file affected-stacks.json --verbose=true --repo-path "$GITHUB_WORKSPACE/main-branch"
        fi
        affected=$(jq -c '.' < affected-stacks.json)
        printf "%s" "affected=$affected" >> $GITHUB_OUTPUT

2023-11-09

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And some simple examples for Digital Ocean. https://github.com/leb4r/terraform-do-components

leb4r/terraform-do-components

Terraform modules that can be re-used in other projects. Built specifically for DigitalOcean.

Zain Zahran avatar
Zain Zahran

Hi, I’m trying to pass a list of strings up my hierarchy through import context. Is there a way to maintain the variable’s type as a list of strings when it gets to Terraform? (Example in thread)

Zain Zahran avatar
Zain Zahran

Example… /catalog/groups/dev/_defaults.yaml

import:
  - path: catalog/groups/_defaults
    context:
      foo:
      - "testing1"
      - "testing2"

/catalog/groups/_defaults.yaml

import:
  - path: catalog/interfaces/bar-example

components:
  terraform:
    bar-example:
      vars:
        foo: "{{ .foo }}"

In terraform, foo is defined as:

variable "foo" {
  description = "A list of strings"
  type        = list(string)
}

The error is something like:

│ Error: Invalid value for input variable
| ...
| list of string required.
╵

It works when I do not use context/go templating and hardcode the list. Like this:

components:
  terraform:
    bar-example:
      vars:
        foo:
        - "testing1"
        - "testing2"

When I describe the stacks in both scenarios here’s the difference I see: Not working

foo: '[testing1 testing2]'

Working:

foo:
- testing1
- testing2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try range

{{- range .foo }}
   - {{ . }}
{{- end }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
{{range pipeline}} T1 {{end}}
	The value of the pipeline must be an array, slice, map, or channel.
	If the value of the pipeline has length zero, nothing is output;
	otherwise, dot is set to the successive elements of the array,
	slice, or map and T1 is executed. If the value is a map and the
	keys are of basic type with a defined order, the elements will be
	visited in sorted key order.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
template package - text/template - Go Packages

Package template implements data-driven templates for generating textual output.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, Atmos Go templates in imports support all the Sprig functions https://masterminds.github.io/sprig

Sprig Function Documentation

Useful template functions for Go templates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can even do something like toJson https://masterminds.github.io/sprig/defaults.html (since JSON is a subset of YAML, JSON expressions are allowed in YAML)

Default Functions

Useful template functions for Go templates.
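For instance, a sketch of the earlier bar-example manifest using toJson instead of range (the rendered JSON array is valid YAML, so it parses back into a list):

```yaml
components:
  terraform:
    bar-example:
      vars:
        # renders to something like: foo: ["testing1","testing2"]
        foo: {{ toJson .foo }}
```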

Zain Zahran avatar
Zain Zahran

range did the trick. Thanks @Andriy Knysh (Cloud Posse)! A lot of helpful information. I didn’t know about the extended support for those Sprig functions.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once


2023-11-10

2023-11-13

2023-11-14

Bruce Edge avatar
Bruce Edge

Hi, I’m new here. I’m on a search for a configuration mgmt tool to manage cfg for IaC and associated deployed software. Can atmos be used solely for the configuration management aspect? Background: I’m looking to use Gruntwork’s live git-ops format repo with terragrunt. I particularly like their _envcommon structure to share IaC among multiple target deployments and abstract the cfg data for each. This appears to be incompatible with atmos, as it also wants to be the top level owner of all root TF modules. I still see value in atmos’ cfg-mgmt component, and I’m wondering if others have used atmos solely for cfg-mgmt, in conjunction with terragrunt, and/or perhaps spacelift. Or, perhaps have any recommendations for a (yaml based?) general purpose cfg-mgmt engine? This IaC tooling problem is complicated enough without each tool suite growing in scope to overlap (usually in some incompatible way) with other adjacent tools.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos does NOT want to be the top level owner of all root TF modules, it just uses them. Atmos is mostly used for configuration management to make it DRY and reusable for multi-environment and multi-account architectures. Atmos separates the configuration (stacks) from the implementation (terraform components) so the same components can be provisioned into different environments w/o modifying anything in the TF code. The TF code is pure terraform w/o any additional wrappers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use the same terraform components (root modules) with Atmos or w/o, this is not related. Or you can create your own root modules and use them with Atmos or w/o

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos is probably not ideally suited to wrap terragrunt, although others have successfully done this. The main reason to do so is to shoehorn an existing “legacy” project into atmos, and slowly move over to the atmos design patterns.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If starting a new project, I would not kick off with both atmos and terragrunt, and instead pick the tool that best suits your taste.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

regarding the live repo, it’s organized in a way that’s quite different from how we recommend organizing an infra repo. First of all, the terraform code & configuration is co-mingled in the directory structure. With atmos, you don’t need to do this. Instead keep your business logic (terraform code) entirely separate from your configuration (Stacks).

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Additionally, terragrunt relies extensively on code generation for terraform. With atmos, we do everything to avoid code generation while maintaining DRY code. Thus the live example will need to be modified first to work as vanilla terraform code and not terragrunt code.

Bruce Edge avatar
Bruce Edge

The terragrunt live is really a subset of the full configuration. I’m starting to see where it falls short, as it only handles the elements that match the AWS/region/stack structure. I suppose that avoiding the second layer code generation also avoids the inherent problems it creates regarding giant cached terraform directories. So many conflicting patterns… sigh. Will review the inheritance ref. Thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you need any help with Atmos, let us know

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Bruce Edge I’d be happy to jump on a call sometime and walk you through what a live infrastructure looks like on Atmos

Bruce Edge avatar
Bruce Edge

Very tempting. I need to spend some more time with the docs. If there’s any curated videos, I’m interested.

There’s one more elephant in the room. We need to move away from a person based access control to a git-ops role based access control for deployments.

  1. DevOps personnel should be able to provision deployment pipelines/tooling, but not the product stacks themselves.
  2. Product stack deployments should be permitted only after a merge commit to a git-ops repo is done, and the plan/test has passed. Does atmos span this domain as well? eg: provide some permission segmentation based on product vs pipeline/tooling assets?

Can it be leveraged to manage setup and cfg of cross-account RBAC for products with strict audit controls?

I realize these are the “can it do everything” questions, but it’s difficult to see the forest through the trees with these non-trivial toolkits, so I’m just going to ask.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are many diff things in the questions, I’ll try to answer step by step. With Atmos, we can leverage these two features:

https://atmos.tools/core-concepts/components/validation

https://atmos.tools/core-concepts/components/overrides/

Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.

Component Overrides | atmos

Use the ‘overrides’ pattern to modify component(s) configuration and behavior in the current scope.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can split Atmos stack manifests (stack config files) into Teams, each Team would manage a set of components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we could apply OPA policies to each component or per Team (OPA policies are inherited so they can be defined at any level: org, OU, region, account, base component(s), components, teams)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can create OPA policies to check if a user is allowed to provision a component, or if a team is allowed to provision it

# Check if the Team has permissions to provision components in an OU (tenant)
errors[message] {
    input.vars.tags.Team == "devs"
    input.vars.tenant == "corp"
    message = "'devs' team cannot provision components into 'corp' OU"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the validation would apply in all cases: 1) when you provision the components using the atmos CLI, or 2) in CI/CD systems (e.g. GitHub Actions) which would run the same atmos commands

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding

DevOps personnel should able to provision deployment pipelines/tooling, but not the product stacks themselves.
Product stack deployments should be permitted only after a merge commit to a git-ops repo is done, and the plan/test has passed.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is CI/CD and release engineering (not directly related to Atmos, but Atmos can help here). We use GitHub Actions with Atmos (https://github.com/cloudposse?q=atmos&type=all&language=&sort=), and yes, rules/configs can be added to the actions to define the release engineering patterns that you are describing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so using the combination of Atmos validation policies (at diff levels) and GitHub actions and repo rules/configs (e.g. when to trigger what and which GH team is allowed to do it, CODEOWNERS, etc.) would allow you to achieve that

Bruce Edge avatar
Bruce Edge

Again, thanks for the details, but damn, it’s hard to get one’s head around all of this.

1) I really like that the policies are checked regardless of “how” a deployment is run. For this to work, where are the policies defined such that they are consulted regardless of the mechanism used to attempt an action?

2) Is there an end to end example catalog? Maybe something like a reference example for an AWS/k8 app with multiple accounts/regions/stacks with OPA deployment guarding?

3) Looking at https://docs.cloudposse.com/github-actions/library/actions/atmos-terraform-apply/ Is there bootstrapping tooling used to create the roles/bucket/etc needed for this? (Maybe a quick-start guide for tire kicking)

4) In that sense, do you segregate the tooling by function, like:

• bootstrap/setup

• pipeline/deployer tooling creation

• product deployment

Bruce Edge avatar
Bruce Edge

I notice that helm support is prominent in atmos. Is this compatible with ArgoCD, or, if one wants to use ArgoCD, is the atmos helm support simply not used?

I could see these 2 variants being appropriate for different level target stacks, eg: ArgoCD for a dev stack, where one can automatically deploy new images, and atmos helm for production where a new deployment is required to update a component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all good questions. I’ll try to explain in simple terms what we do and offer:

• We use Atmos to configure and manage the stacks for all the infrastructure environments (orgs, OUs, regions, accounts, teams)

• We structure all the stacks per layers, e.g. networking, core, dns, cicd, etc. (which looks like what you are asking in “In that sense, do you segregate the tooling by function?”)

• Yes, Atmos validation OPA policies can be defined in the stack manifests in all those scopes (orgs, OUs, regions, accounts, teams, base components, components), and they are run every time any Atmos command is executed, e.g. atmos terraform apply <component> -s <stack> will run all the configured and deep-merged policies. It’s run on the command line, and in CI/CD, e.g. GitHub Actions, Spacelift etc. So those validation policies are universal and you can validate anything - Atmos stack manifests, terraform variables, user/team permissions, etc.

• We also use Atmos to configure and provision all system-level Helm releases (e.g. alb-controller, argocd, cert-manager etc.), see https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks. For this, we can use either helmfile/helm (and atmos helmfile apply commands) or deploy Helm releases using Terraform (and atmos terraform apply commands), see https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/argocd/main.tf#L104 for example. So both helmfile/helm and terraform can be used to provision system-level Helm releases

• For the app-level releases (non system-level), we use GitHub Actions (for CI) and ArgoCD (for CD). We deploy ArgoCD itself using Atmos + Helm Chart + Terraform helm-release component. To deploy apps, we use Helm Charts with diff environment configurations to make it DRY

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Bruce Edge it’s a lot of things to cover here, not easy to explain everything in a few sentences, you really need to schedule a call with @Erik Osterman (Cloud Posse) to go through all the details

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

GitOps is natively integrated into the solution, provided you use GitHub Actions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(it’s also open source / free)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Admittedly there is a lot to wrap one’s head around. It’s an enterprise-scale approach to terraform. Unfortunately, as we sell our refarch, we don’t provide a public reference example.

Bruce Edge avatar
Bruce Edge

Regarding the ref-arch for sale. We looked at the Gruntwork ref arch and it was structured in a manner that required that they also manage the AWS account provisioning, and was not well suited to an enterprise that already had an AWS account provisioning/mgmt team. Is your ref arch adoptable by teams that already have provisioned AWS accounts and want to adopt an opinionated framework for IaC deployment mgmt? Or, does it also require being integrated into the AWS root account level provisioning?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Is your ref arch adoptable by teams that already have provisioned AWS accounts and want to adopt an opinionated framework for IaC deployment mgmt?

Bruce Edge avatar
Bruce Edge

Conceptual question - I’m finding the more I dig into IaC frameworks/wrappers, the thing that surfaces as perhaps more important than terraform integration is enterprise level configuration mgmt. I’m also seeing that there’s overlap between IaC CD and cfg-mgmt tooling where one may need to integrate CD from, say spacelift, with cfg-mgmt from atmos.

Is the atmos primary focus on “terraform at scale”, or, “enterprise IaC cfg-mgmt”, which often happens to use terraform?

I’m asking this because I wrote a rudimentary cfg-mgmt that I no longer want to own, and re-hosting that may be the 1st decision I need to make.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes. You can start with an existing Org and accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos main focus is to manage infrastructure configurations at scale

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos is a framework that prescribes patterns and best practices to structure and organize components and stacks to design for organizational complexity and provision multi-account environments for complex organizations.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


We looked at the Gruntwork ref arch and it was structured in a manner that required that they also manage the AWS account provisionin
We also manage the account provisioning with Terraform; however, we’re working on à la carte solutions that don’t require that piece.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Though it’s not something related to atmos, it’s how our AWS components were implemented.

2023-11-15

jose.amengual avatar
jose.amengual

I’m using atmos to deploy an Org using the account component and the account-map component, but it looks like, for the account-map component to work, I have to have all the accounts defined in the stacks (accounts created by the account component)?

jose.amengual avatar
jose.amengual
Error: Error in function call
│ 
│   on main.tf line 15, in locals:
│   15:     for name, info in local.account_info_map : name => format(var.iam_role_arn_template_template, compact(
│   16:       [
│   17:         local.aws_partition,
│   18:         info.id,
│   19:         module.this.namespace,
│   20:         lookup(info, "tenant", ""),
│   21:         module.this.environment,
│   22:         info.stage
│   23:       ]
│   24:     )...)
│     ├────────────────
│     │ info.id is "523262733422"
│     │ info.stage is "networking"
│     │ local.aws_partition is "aws"
│     │ module.this.environment is "global"
│     │ module.this.namespace is "pepe"
│     │ var.iam_role_arn_template_template is "arn:%s:iam::%s:role/%s-%s-%s-%s"
jose.amengual avatar
jose.amengual

I wonder if the yaml config is reading the Org definition then trying to find the stack files for each account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you configure tenant?

jose.amengual avatar
jose.amengual

no, I do not use tenant

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

"arn:%s:iam::%s:role/%s-%s-%s-%s" - looks like it needs all 4 context vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

%s-%s-%s-%s - means namespace-tenant-environment-stage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

update the template

jose.amengual avatar
jose.amengual

I did

jose.amengual avatar
jose.amengual

let me paste the error with that

jose.amengual avatar
jose.amengual

now it is like this

iam_role_arn_template_template: "arn:%s:iam::%s:role/%s-%s-%s"
jose.amengual avatar
jose.amengual
Error: Error in function call
│ 
│   on main.tf line 54, in locals:
│   54:     for name, info in local.account_info_map : name => trimsuffix(format(var.profile_template, compact(
│   55:       [
│   56:         module.this.namespace,
│   57:         lookup(info, "tenant", ""),
│   58:         module.this.environment,
│   59:         info.stage, "~"
│   60:       ]
│   61:     )...), "-~")
│     ├────────────────
│     │ info.stage is "root"
│     │ module.this.environment is "global"
│     │ module.this.namespace is "pepe"
│     │ var.profile_template is "%s-%s-%s-%s-%s"
│ 
│ Call to function "format" failed: not enough arguments for "%s" at 12: need index 5 but have 4 total.
jose.amengual avatar
jose.amengual

would the account-map component try to deploy those roles?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not related to the account-map component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the template is wrong, or the args are wrong

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
[
│   56:         module.this.namespace,
│   57:         lookup(info, "tenant", ""),
│   58:         module.this.environment,
│   59:         info.stage, "~"
│
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

4 ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c you don’t use tenant

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

"%s-%s-%s-%s-%s" - the template wants 5

jose.amengual avatar
jose.amengual

the doc template is like this

iam_role_arn_template_template: "arn:%s:iam::%s:role/%s-%s-%s-%s-%%s"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s for the case when you use tenant

jose.amengual avatar
jose.amengual

now I have it like this :

 "arn:%s:iam::%s:role/%s-%s-%s"
jose.amengual avatar
jose.amengual

do I need the %%?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

"arn:%s:iam::%s:role/%s-%s-%s" wants 5 arguments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you provide only 4 in

     for name, info in local.account_info_map : name => trimsuffix(format(var.profile_template, compact(
│   55:       [
│   56:         module.this.namespace,
│   57:         lookup(info, "tenant", ""),
│   58:         module.this.environment,
│   59:         info.stage, "~"
│   60:       ]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c compact deletes lookup(info, "tenant", ""), since you don’t use tenant
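A minimal Go sketch of why the argument count drops; compact here mimics Terraform’s compact() built-in, and the values are the placeholders from this thread:

```go
package main

import "fmt"

// compact mimics Terraform's compact() function: it drops empty
// strings, which is why an unset tenant shrinks the argument list.
func compact(in []string) []string {
	out := make([]string, 0, len(in))
	for _, s := range in {
		if s != "" {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	// tenant is "" because lookup(info, "tenant", "") found nothing
	args := compact([]string{"pepe", "", "global", "root", "~"})
	fmt.Println(len(args)) // 4 values for a template that expects 5
}
```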

jose.amengual avatar
jose.amengual

in my stack I have

 namespace: pepe
  environment: global
  stage: root
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not about components and stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is plain terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
compact(
│   55:       [
│   56:         module.this.namespace,
│   57:         lookup(info, "tenant", ""),
│   58:         module.this.environment,
│   59:         info.stage, "~"
│   60:       ]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

will have 4 items, not 5

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

format("arn:%s:iam::%s:role/%s-%s-%s") fails b/c it needs 5 arguments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Call to function "format" failed: not enough arguments for "%s" at 12: need index 5 but have 4 total.
jose.amengual avatar
jose.amengual

I understand, I tried many different ways, let me do this again

jose.amengual avatar
jose.amengual

this all comes from : account_info_map = module.accounts.outputs.account_info_map

jose.amengual avatar
jose.amengual

mmmmm

jose.amengual avatar
jose.amengual

my map looks like this :

 "PePe-Org" = {
    "eks" = false
    "id" = "111111111"
    "stage" = "root"
    "tenant" = tostring(null)
  }
  "audit" = {
    "account_email_format" = "aws+%[email protected]"
    "eks" = false
    "id" = "22222222"
    "ou" = "security"
    "parent_ou" = "none"
    "stage" = "security"
    "tenant" = tostring(null)
  }
jose.amengual avatar
jose.amengual

and so on….

jose.amengual avatar
jose.amengual

the part of the code that is failing first:

iam_role_arn_templates = {
    for name, info in local.account_info_map : name => format(var.iam_role_arn_template_template, compact(
      [
        local.aws_partition,
        info.id,
        module.this.namespace,
        lookup(info, "tenant", ""),
        module.this.environment,
        info.stage
      ]
    )...)

  }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

change var.iam_role_arn_template_template from arn:%s:iam::%s:role/%s-%s-%s-%s to arn:%s:iam::%s:role/%s-%s-%s (b/c you don’t use tenant)

jose.amengual avatar
jose.amengual

ok, I’m pretty sure I tried that but , I will try again

jose.amengual avatar
jose.amengual
Error: Invalid function argument
│ 
│   on main.tf line 15, in locals:
│   15:     for name, info in local.account_info_map : name => format(var.iam_role_arn_template_template, compact(
│   16:       [
│   17:         local.aws_partition,
│   18:         info.id,
│   19:         module.this.namespace,
│   20:         lookup(info, "tenant", ""),
│   21:         module.this.environment,
│   22:         info.stage
│   23:       ]
│   24:     )...)
│ 
│ Invalid value for "args" parameter: too many arguments; only 4 used by format string.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what format string did you use?

jose.amengual avatar
jose.amengual

:hankey: I made a typo, I left it as "arn:%s:iam::%s:role/%s-%s"

jose.amengual avatar
jose.amengual

you edited the answer…….

jose.amengual avatar
jose.amengual

ok, I’m past line 15

jose.amengual avatar
jose.amengual

now here :

Error: Invalid function argument
│ 
│   on main.tf line 54, in locals:
│   54:     for name, info in local.account_info_map : name => trimsuffix(format(var.profile_template, compact(
│   55:       [
│   56:         module.this.namespace,
│   57:         lookup(info, "tenant", ""),
│   58:         module.this.environment,
│   59:         info.stage, "~"
│   60:       ]
│   61:     )...), "-~")
│ 
│ Invalid value for "args" parameter: too many arguments; only 3 used by format string.

`

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you use arn:%s:iam::%s:role/%s-%s-%s? (I see 5 in there)

jose.amengual avatar
jose.amengual

I did use that, you said to try that, I follow orders

jose.amengual avatar
jose.amengual

now it is complaining about the profile template

jose.amengual avatar
jose.amengual
profile_template: "%s-%s-%s" 
jose.amengual avatar
jose.amengual

so I changed it to this:

"%s-%s-%s-%s" 
jose.amengual avatar
jose.amengual

and now I’m here:

│ Error: Invalid function argument
│ 
│   on main.tf line 67, in locals:
│   66:     for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
│   67:       (local.legacy_terraform_uses_admin &&
│   68:         contains([
│   69:           var.root_account_account_name,
│   70:           var.identity_account_account_name
│   71:         ], name)
│   72:     ) ? "admin" : "terraform")
│     ├────────────────
│     │ local.legacy_terraform_uses_admin is true
│     │ var.identity_account_account_name is "identity"
│     │ var.root_account_account_name is "root"
│ 
│ Invalid value for "args" parameter: too many arguments; no verbs in format string.
jose.amengual avatar
jose.amengual

at least I’m moving down the code lines….

jose.amengual avatar
jose.amengual
  terraform_roles = {
    for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
      (local.legacy_terraform_uses_admin &&
        contains([
          var.root_account_account_name,
          var.identity_account_account_name
        ], name)
    ) ? "admin" : "terraform")
  }
jose.amengual avatar
jose.amengual

ok, this works:

  iam_role_arn_template_template: "arn:%s:iam::%s:role/%s-%s-%s-%%s"
        # `profile_template` is the template used to render AWS Profile names.
        profile_template: "%s-%s-%s-%s"          
jose.amengual avatar
jose.amengual

I was very confused about the %%

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s not correct and must be a bug

jose.amengual avatar
jose.amengual

I think that is because the code is building the format string, and the %% is there to escape it?

jose.amengual avatar
jose.amengual

so this is the issue:

jose.amengual avatar
jose.amengual

if I change it back to %s-%s-%s after I applied with four %%s the changes it wants to make are:

jose.amengual avatar
jose.amengual
iam_role_arn_templates         = {
      ~ PePe-Org = "arn:aws:iam::11111111:role/pepe-global-root-%s" -> "arn:aws:iam::11111111:role/pepe-global-root"
jose.amengual avatar
jose.amengual

( that is for all the accounts, I just pasted one line)

jose.amengual avatar
jose.amengual

the local for iam_role format is :

 iam_role_arn_templates = {
    for name, info in local.account_info_map : name => format(var.iam_role_arn_template_template, compact(
      [
        local.aws_partition,
        info.id,
        module.this.namespace,
        lookup(info, "tenant", ""),
        module.this.environment,
        info.stage
      ]
    )...)
jose.amengual avatar
jose.amengual

and we create the profiles with this:

 terraform_roles = {
    for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
      (local.legacy_terraform_uses_admin &&
        contains([
          var.root_account_account_name,
          var.identity_account_account_name
        ], name)
    ) ? "admin" : "terraform")
  }
jose.amengual avatar
jose.amengual

there is that conditional that is being evaluated INSIDE the format function

jose.amengual avatar
jose.amengual

in simple terms that will translate to

format(local.iam_role_arn_templates[name], admin) 
jose.amengual avatar
jose.amengual

so local.iam_role_arn_templates[name] needs to have a %s or %d to satisfy the format function

jose.amengual avatar
jose.amengual

and that is why this needs to be like that :

iam_role_arn_template_template: "arn:%s:iam::%s:role/%s-%s-%s-%%s"
jose.amengual avatar
jose.amengual

so it is adding the literal %s to the end of the role ARN
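This two-stage rendering can be sketched in Go, whose fmt verbs behave like Terraform’s format(); the account ID and names below are the placeholder values from this thread:

```go
package main

import "fmt"

// arnTemplate renders the template in two stages: %%s survives the
// first Sprintf as a literal %s, to be filled with the role name later.
func arnTemplate() (string, string) {
	tpl := "arn:%s:iam::%s:role/%s-%s-%s-%%s" // no tenant segment
	stage1 := fmt.Sprintf(tpl, "aws", "111111111", "pepe", "global", "root")
	stage2 := fmt.Sprintf(stage1, "terraform")
	return stage1, stage2
}

func main() {
	s1, s2 := arnTemplate()
	fmt.Println(s1) // arn:aws:iam::111111111:role/pepe-global-root-%s
	fmt.Println(s2) // arn:aws:iam::111111111:role/pepe-global-root-terraform
}
```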

jose.amengual avatar
jose.amengual

that is maybe something that should be in the readme

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I did not work on that, but yes, it’s probably not documented

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for finding this

1

2023-11-16

2023-11-20

Christian W avatar
Christian W

Hi everyone! I just learned about Atmos (it looks very interesting and well-architected! :clap:) and I’m trying to better understand the relationship between components and stacks.

One question I have is: How does targeting a specific component within a stack work with a potentially unknown/dynamic list of component instances?

Let’s say I have a “vpc”-component and I have an account (“tenant1-ue2-dev”) that contains multiple instances of that component, named e.g. “vpc-1”, “vpc-2” & “vpc-3”.

Would I now have to deploy them separately, like this?

atmos terraform deploy vpc-1 --stack tenant1-ue2-dev
atmos terraform deploy vpc-2 --stack tenant1-ue2-dev
atmos terraform deploy vpc-3 --stack tenant1-ue2-dev

What if these names are not really under my control? Another account might define just one VPC, or it might use different VPC-names. I’d like to avoid needing a huge list of these atmos terraform deploy {instance-x} --stack {stack-name} calls.

In other words, how can I ensure, all individual component instances are properly deployed to all accounts?

atmos terraform deploy | atmos

Use this command to execute terraform apply -auto-approve on an Atmos component in an Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos components provide configurations for the underlying Terraform components. You have to know the Atmos components (and their names) to provision many instances of the Terraform component (vpc) - otherwise, how would you even be able to execute any command to provision the components?


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Having said that, here’s what you can do:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. You can specify many Atmos components (e.g. vpc-1, vpc-2, vpc-3) and then use Atmos workflows to provision all of them in one step, see https://atmos.tools/core-concepts/workflows/
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.
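A minimal sketch of such a workflow file, assuming the component and stack names from the example above (the file name and workflow name are hypothetical):

```yaml
# workflows/vpc.yaml (hypothetical)
workflows:
  deploy-tenant1-vpcs:
    description: Provision every vpc instance in tenant1-ue2-dev
    steps:
      - command: terraform deploy vpc-1 --stack tenant1-ue2-dev
      - command: terraform deploy vpc-2 --stack tenant1-ue2-dev
      - command: terraform deploy vpc-3 --stack tenant1-ue2-dev
```

The whole set can then be provisioned in one step with something like atmos workflow deploy-tenant1-vpcs -f vpc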

Christian W avatar
Christian W

there would be just one “vpc” component (which is known), but the individual stacks might have 1 or more instances of this component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Atmos allows you to auto-generate component instances using Go templates in imports https://atmos.tools/core-concepts/stacks/imports#go-templates-in-imports
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    # Parameterize Atmos component name
    "eks-{{ .flavor }}/cluster":
      metadata:
        component: "test/test-component"
      # Parameterize variables
      vars:
        enabled: '{{ .enabled }}'
        # https://masterminds.github.io/sprig/defaults.html
        name: 'eks-{{ .flavor | default "foo" }}'
        # https://masterminds.github.io/sprig/os.html
        service_1_name: '{{ env "service_1_name" }}'
        service_2_name: '{{ expandenv "Service 2 name is $service_2_name" }}'
        tags:
          flavor: '{{ .flavor }}'

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


there would be just one “vpc” component (which is known), but the individual stacks might have 1 or more instances of this component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes. And then for each individual stack, you’ll provision all those Atmos components by executing

atmos terraform deploy vpc-xxx --stack <stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or place all the commands in a workflow

Christian W avatar
Christian W

thank you for your response! Would that mean that I need a separate workflow for every stack? Because if stack “tenant1-ue2-dev” has two “vpc”-component instances (“vpc-a”, “vpc-b”) and stack “tenant2-ue2-dev” has one “vpc”-component instance (“vpc-1”), I couldn’t define this in a generic way?

Or would you never call individual components/stacks and always use something like atmos describe affected and run terraform plan/apply on those?

Christian W avatar
Christian W

or is there a way to just call something like atmos terraform deploy vpc --stack <stack> and this will deploy all “vpc”-component-instances within that stack?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


or is there a way to just call something like atmos terraform deploy vpc --stack <stack> and this will deploy all “vpc”-component-instances within that stack?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, Atmos deploys Atmos components, not TF components/code

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

“in a generic way” is not easy to define, there are many diff possibilities

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if tenant1 has 3 VPCs and tenant2 has only 1, then you can create two diff workflow files for tenant1 and tenant2. Or you can create one workflow file (e.g. vpc.yaml) and create two diff workflows for tenant1 and tenant2 inside the file
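A sketch of what that single workflow file could look like, assuming the Atmos workflow schema (workflows, description, steps) and illustrative workflow/stack names:

```yaml
# e.g. stacks/workflows/vpc.yaml (path is illustrative)
workflows:
  deploy-tenant1-vpcs:
    description: Deploy all VPC instances for tenant1
    steps:
      - command: terraform deploy vpc-a -s tenant1-ue2-dev
      - command: terraform deploy vpc-b -s tenant1-ue2-dev
  deploy-tenant2-vpcs:
    description: Deploy the single VPC instance for tenant2
    steps:
      - command: terraform deploy vpc-1 -s tenant2-ue2-dev
```

Each workflow would then be invoked with something like atmos workflow deploy-tenant1-vpcs -f vpc.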

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Or would you never call individual components/stacks and always use something like atmos describe affected and run terraform plan/apply on those?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos describe affected will show you all affected components in all stacks, so if you use the command in CI/CD (e.g. GitHub actions), then you get a list of all affected components in all stacks and then run atmos terraform plan/apply on those
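As a rough illustration of that CI/CD step, here is a minimal Python sketch that turns the JSON from atmos describe affected --format json into one plan command per affected component/stack. The sample payload is hypothetical and trimmed to just the component, component_type, and stack fields:

```python
import json

# Hypothetical, trimmed sample of `atmos describe affected --format json` output
sample = """
[
  {"component": "vpc-a", "component_type": "terraform", "stack": "tenant1-ue2-dev"},
  {"component": "vpc-1", "component_type": "terraform", "stack": "tenant2-ue2-dev"}
]
"""


def plan_commands(affected_json: str) -> list[str]:
    """Build one `atmos terraform plan` command per affected terraform component."""
    return [
        f"atmos terraform plan {item['component']} -s {item['stack']}"
        for item in json.loads(affected_json)
        if item.get("component_type") == "terraform"
    ]


for cmd in plan_commands(sample):
    print(cmd)
```

In a real pipeline, the commands would be executed (or fanned out to runners) instead of printed.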

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have such GHAs

Christian W avatar
Christian W

thank you again for your response! Very much appreciated! I’m still struggling to see how that is done in a bigger environment (let’s say, with a different example, I’m setting up 20 accounts and each of them might have 0..N “s3-bucket” component-instances), but I’ll try to read more about atmos and play around with it to better understand it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, what you’re describing is an interesting case

Christian W avatar
Christian W

in other words, it’s not a usual case, right? typically, you’d have one component-instance per stack and maybe have that component accept a list/map-variable so that it can create multiple resources?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if each account can have 0..N instances of a component, then we can use 1) a workflow per account; or 2) try to auto-detect the components in each stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


in other words, it’s not a usual case, right? typically, you’d have one component-instance per stack and maybe have that component accept a list/map-variable so that it can create multiple resources?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depends

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we def have used many diff VPCs in each account, and many diff S3 buckets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


2) try to auto-detect the components in each stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe stacks | atmos

Use this command to show the fully deep-merged configuration for all stacks and the components in the stacks.
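The JSON from atmos describe stacks can be filtered to auto-detect all instances of a given Terraform component in a stack. A minimal Python sketch, assuming the documented output shape (a map of stacks, each with a components.terraform section); the stack and component names are illustrative:

```python
import json

# Hypothetical, trimmed sample of `atmos describe stacks --format json` output
sample = """
{
  "tenant1-ue2-dev": {
    "components": {
      "terraform": {
        "vpc-a": {"metadata": {"component": "vpc"}},
        "vpc-b": {"metadata": {"component": "vpc"}},
        "s3-bucket-logs": {"metadata": {"component": "s3-bucket"}}
      }
    }
  }
}
"""


def instances_of(describe_json: str, stack: str, tf_component: str) -> list[str]:
    """Return the Atmos components in `stack` that point at `tf_component`."""
    components = json.loads(describe_json)[stack]["components"]["terraform"]
    return sorted(
        name
        for name, cfg in components.items()
        if cfg.get("metadata", {}).get("component") == tf_component
    )


print(instances_of(sample, "tenant1-ue2-dev", "vpc"))
```

A wrapper like this (or an equivalent jq pipeline) could drive a loop of atmos terraform deploy calls without knowing the instance count up front.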

Christian W avatar
Christian W

okay, thank you very much!!!! I think that helps me

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can create Atmos custom commands:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Subcommands | atmos

Atmos can be easily extended to support any number of custom commands, what we call “subcommands”.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

# Custom CLI commands
commands:
  - name: list
    description: Execute 'atmos list' commands
    # subcommands
    commands:
      - name: stacks
        description: |
          List all Atmos stacks.
        steps:
          - atmos describe stacks --sections none | grep -e "^\S" | sed s/://g
      - name: components
        description: |
          List all Atmos components in all stacks or in a single stack.

          Example usage:
            atmos list components
            atmos list components -s plat-ue2-dev
            atmos list components --stack plat-uw2-prod
            atmos list components -s plat-ue2-dev --type abstract
            atmos list components -s plat-ue2-dev -t enabled
            atmos list components -s plat-ue2-dev -t disabled
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: false
          - name: type
            shorthand: t
            description: Component types - abstract, enabled, or disabled
            required: false
        steps:
          - >
            {{ if .Flags.stack }}
              {{ if eq .Flags.type "enabled" }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == true)' | jq -r .key
              {{ else if eq .Flags.type "disabled" }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == false)' | jq -r .key
              {{ else if eq .Flags.type "abstract" }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json | jq '.[].components.terraform | to_entries[] | select(.value.metadata.type == "abstract")' | jq -r .key
              {{ else }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json --sections none | jq ".[].components.terraform" | jq -s add | jq -r "keys[]"
              {{ end }}
            {{ else }}
              {{ if eq .Flags.type "enabled" }}
                atmos describe stacks --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == true)' | jq -r '[.key]' | jq -s 'add' | jq 'unique | sort' | jq -r "values[]"
              {{ else if eq .Flags.type "disabled" }}
                atmos describe stacks --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == false)' | jq -r '[.key]' | jq -s 'add' | jq 'unique | sort' | jq -r "values[]"
              {{ else if eq .Flags.type "abstract" }}
                atmos describe stacks --format json | jq '.[].components.terraform | to_entries[] | select(.value.metadata.type == "abstract")' | jq -r '[.key]' | jq -s 'add' | jq 'unique | sort' | jq -r "values[]"
              {{ else }}
                atmos describe stacks --format json --sections none | jq ".[].components.terraform" | jq -s add | jq -r "keys[]"
              {{ end }}
            {{ end }}

Christian W avatar
Christian W

and just maybe for clarification: I’m always deploying “a component into a stack” (with a tfstate-file per component/stack), I’m not deploying “an entire stack” (with one tfstate-file per stack), right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can combine the native Atmos commands, Atmos custom commands, and Atmos workflows to create the flow you want. Atmos native commands can be used in custom commands and workflows. Workflows can be used in custom commands, etc.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


and just maybe for clarification: I’m always deploying “a component into a stack” (with a tfstate-file per component/stack), I’m not deploying “an entire stack” (with one tfstate-file per stack), right?

Christian W avatar
Christian W

thank you - this sounds very powerful!!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos terraform apply/deploy <component> -s <stack> provisions one Atmos component (an instance of a TF component with specific settings) from the Atmos stack (the stack where the Atmos component is configured)

Christian W avatar
Christian W

okay, understood. I do not want to take up more of your time, so thank you very much!! I will now do some more reading (and hopefully understanding)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you need help, let us know

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i suggest you start by defining your VPC and S3 bucket components in diff tenants (each tenant having a diff number of VPCs and buckets) and playing with the Atmos commands:

atmos validate stacks
atmos describe component <component> -s <stack>
atmos validate component <component> -s <stack>
atmos terraform plan/apply/deploy <component> -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then you would think about how to automate all of that (e.g. adding Atmos workflows, custom commands, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we could def create Atmos custom commands to auto-detect all the related components for each tenant/region/account using native Atmos commands)

Christian W avatar
Christian W

just as an fyi, my use-case is the following:

We’re building an IaC repo for the AWS platform-team of an org. they are responsible for creating workload-accounts and for setting up some resources within those workload-accounts that must stay under the responsibility of the platform team.

one of the use cases is VPCs. Almost all workload-accounts will have just one VPC, but for reasons, some workload-accounts might require multiple VPCs.

another use case is IAM users for workload identities. Workload-accounts should not be able to create their own IAM users (because better options exist) but if they actually need one, they need to request it from the platform-team via ITSM. The platform team would then add the requested iam-user to their account. so there would be 0..n iam-users in the stack of that account.

Ideally, I’d like to invoke the deployment for either a specific workload-account/stack, or I would like to say “deploy the VPCs for stack X”, followed by “deploy the IAM-users for stack X” without needing to know if there are 0, 1, or N instances of those.

Christian W avatar
Christian W

another use case is optional components. the workload-account might, e.g. request a terraform state-backend within its workload account, which would be created by the platform-team (to make sure its properly set up, etc). so in this case, some workload-accounts/stacks might use the component, and others might not.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like you have a real enterprise-grade infra. Atmos supports all those use-cases (even if the advanced ones are not obvious at first sight). We can help with it if you decide to go with Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding the support for Teams (configuration, access control, permissions, etc.), take a look at these two Atmos features

https://atmos.tools/core-concepts/components/overrides#use-case-overrides-for-teams

https://atmos.tools/core-concepts/components/validation#opa-policy-examples

Component Overrides | atmos

Use the ‘overrides’ pattern to modify component(s) configuration and behavior in the current scope.
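For instance, a team-scoped manifest could override the command for everything in that team's scope (a sketch based on the overrides pattern; the file path is illustrative):

```yaml
# e.g. stacks/teams/devops/_defaults.yaml, imported only by that team's stacks
terraform:
  overrides:
    # Every component in this team's scope will be executed with OpenTofu
    command: tofu
```

Components managed by other teams keep the globally configured command.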

Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.

Hans D avatar

depending on how you actually execute the deployments, but if you do them by hand… we’re using a variation of the custom list command together with good old xargs to do bulk stuff (mainly validate and tf plan) . You might want to throw in some atmos workflow as well.

1

2023-11-21

Christian W avatar
Christian W

Thank you for the feedback on my question from yesterday! I’d have two follow-ups, after having done some more testing:

• How do you typically handle the manual inspection/approval of plans if there is one plan per component-instance per stack and therefore potentially 10s or even 100s of plan files?
• Do you struggle with execution times in larger environments, given that there might be 10s or 100s of separate terraform plan/apply-calls? Does atmos do anything in parallel here?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


How do you typically handle the manual inspection/approval of plans if there is one plan per component-instance per stack and therefore potentially 10s or even 100s of plan files?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s usually handled in CI/CD systems like GHA, Spacelift etc.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you open a PR and modify a lot of components, all of them will be triggered, and you have the ability to review the plans in the corresponding UI (GHA, Spacelift, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(so don’t open huge PRs)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Do you struggle with execution times in larger environments, given that there might be 10s or 100s of separate terraform plan/apply-calls?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we provision Auto-Scaling Groups for the GHA runner controller or Spacelift, so even if hundreds of stacks get triggered, the system adds new runners. They always execute all plans in parallel. This is always a compromise between the speed of operations with a large number of changes/updates and the cost

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(a human could not possibly handle hundreds of stacks running in parallel on the command line by either using Atmos or just plain Terraform. You need help from an IaC Management Platform)

2023-11-27

Adam Markovski avatar
Adam Markovski

Hi folks, I don’t know if this is the right place to ask this question, but here it is: We use Atmos with Spacelift and of course this component for stack configuration. We are currently upgrading the Terraform version and trying to specify it on a per-stack basis, sample:

components:
  terraform:
    code-artifact-repos:
      metadata:
        component: code-artifact
      settings:
        spacelift:
          workspace_enabled: true
          stack_destructor_enabled: false
          protect_from_deletion: true
          terraform_version: "1.5.7"

However, this is not picked up by Spacelift; it falls back to the default Terraform version for the Spacelift component. What are we missing?

cloudposse/terraform-spacelift-cloud-infrastructure-automation

Terraform module to provision Spacelift resources for cloud infrastructure automation

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Adam Markovski if you are using https://github.com/cloudposse/terraform-aws-components, the spacelift component (or a similar code), then here’s how the TF version is set https://github.com/cloudposse/terraform-aws-components/blob/main/modules/spacelift/admin-stack/child-stacks.tf#L115

cloudposse/terraform-aws-components
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but… it’s wrong and needs to be fixed (we are working on some improvements to the component and will fix it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

meanwhile, you can change it to

terraform_version                  = lookup(var.terraform_version_map, try(each.value.settings.spacelift.terraform_version, ""), var.terraform_version)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then this will work

settings:
  spacelift:
    terraform_version: "1.5.7"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but not quite

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you also need to configure var.terraform_version_map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

something like

terraform_version_map:
  "1": "1.5.0"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then

settings:
  spacelift:
    terraform_version: "1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or even

terraform_version_map:
  "1.5.0": "1.5.0"

settings:
  spacelift:
    terraform_version: "1.5.0"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in any case, you need to make this change first

terraform_version                  = lookup(var.terraform_version_map, try(each.value.settings.spacelift.terraform_version, ""), var.terraform_version)
Adam Markovski avatar
Adam Markovski

Got it, trying later tonight

Adam Markovski avatar
Adam Markovski

Thanks for the fast response

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we’ll fix the issue in the new release of the component, and then you could just vendor it if you are using Atmos vendoring)

1