#atmos (2024-02)

2024-02-01

Andy Wortman avatar
Andy Wortman

Question on how to handle stack-unique configuration files in atmos.

1
Andy Wortman avatar
Andy Wortman

The situation is that we have a terraform component, used by lots of stacks, but each stack has its own json configuration file for that component. These files are several hundred lines long, so it would be unwieldy to store in a variable. Currently, we store all these files inside the component, but this means if I change the file for one stack, atmos describe affected returns all stacks with that component, leading to a bunch of no-op plan and apply steps in our git automation.

I’d like to store the files elsewhere in the repo, and have each stack’s yaml configuration point to the file. When the file is changed, it should trigger a plan or apply of just that stack. Does anyone know of a good way to do this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman give me a few, I’ll show you how to do it

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe affected | atmos

This command produces a list of the affected Atmos components and stacks given two Git commits.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
file - if the Atmos component depends on an external file, and the file was changed (see affected.file below), the file attribute shows the modified file

folder - if the Atmos component depends on an external folder, and any file in the folder was changed (see affected.folder below), the folder attribute shows the modified folder
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
file - an external file on the local filesystem that the Atmos component depends on was changed.

Dependencies on external files (not in the component's folder) are defined using the file attribute in the settings.depends_on map. For example:
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    top-level-component3:
      metadata:
        component: "top-level-component1"
      settings:
        depends_on:
          1:
            file: "examples/tests/components/terraform/mixins/introspection.mixin.tf"
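Applied to the original question, each stack’s manifest can point at its own JSON config file, so changing one file marks only that stack as affected. A sketch (component name, var name, and file paths are hypothetical; `depends_on` also accepts a `folder` attribute for directory-level dependencies):

```yaml
# Hypothetical stack manifest: names and paths are placeholders for illustration
components:
  terraform:
    my-component:
      settings:
        depends_on:
          1:
            # changing this file marks only this stack as affected
            file: "stacks/config/my-component/prod.json"
      vars:
        # the component itself can load the same file,
        # e.g. via jsondecode(file(var.config_file)) in Terraform
        config_file: "stacks/config/my-component/prod.json"
```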
Andy Wortman avatar
Andy Wortman

oh, that is awesome!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how to specify deps on external files and folders in the YAML stack manifests using the settings.depends_on attribute, which is used in atmos describe affected

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you can also have dependencies on external (but local) terraform modules in your TF components - but in this case Atmos detects that automatically, no need to specify anything in YAML stack manifests

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
component.module - the Terraform component is affected because it uses a local Terraform module (not from the Terraform registry, but from the local filesystem), and that local module has been changed.
Andy Wortman avatar
Andy Wortman

Exactly what I was looking for. Thanks @Andriy Knysh (Cloud Posse)!

Dave avatar

Greetings.. running through a pared-down version of https://atmos.tools/design-patterns/organizational-structure-configuration

I was able to run atmos terraform deploy vpc-flow-logs-bucket -s org1-plat-ue2-prod without a problem

Then when I run atmos terraform deploy vpc -s org1-plat-ue2-prod I’m getting the following error:

╷
│ Error: stack name pattern '{namespace}-{tenant}-{environment}-{stage}' includes '{environment}', but environment is not provided
│
│   with module.vpc_flow_logs_bucket[0].data.utils_component_config.config[0],
│   on .terraform/modules/vpc_flow_logs_bucket/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│
╵
exit status 1

Here is the output of atmos describe component vpc --stack org1-plat-ue2-prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "vpc_flow_logs_bucket" {
  count = local.vpc_flow_logs_enabled ? 1 : 0

  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # Specify the Atmos component name (defined in YAML stack config files)
  # for which to get the remote state outputs
  component = "vpc-flow-logs-bucket"

  # Override the context variables to point to a different Atmos stack if the
  # `vpc-flow-logs-bucket-1` Atmos component is provisioned in another AWS account, OU or region
  stage       = try(coalesce(var.vpc_flow_logs_bucket_stage_name, module.this.stage), null)
  environment = try(coalesce(var.vpc_flow_logs_bucket_environment_name, module.this.environment), null)
  tenant      = try(coalesce(var.vpc_flow_logs_bucket_tenant_name, module.this.tenant), null)

  # `context` input is a way to provide the information about the stack (using the context
  # variables `namespace`, `tenant`, `environment`, and `stage` defined in the stack config)
  context = module.this.context
}
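If the bucket is provisioned in a different stack, the context overrides in the module block above can be driven from the stack manifest. A sketch (the var names come from the module block above; the values are hypothetical):

```yaml
components:
  terraform:
    vpc:
      vars:
        # point the remote-state lookup at the stack where the bucket lives
        # (illustrative values; omit these when the bucket is in the same stack)
        vpc_flow_logs_bucket_tenant_name: "plat"
        vpc_flow_logs_bucket_environment_name: "ue2"
        vpc_flow_logs_bucket_stage_name: "prod"
```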
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and try again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that code was taken from a larger component which allows provisioning one VPC flow logs bucket for many VPCs, and the example was not adjusted for the case where the bucket is deployed in the same stack as the VPC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we’ll update the code and the docs in the next release)

Dave avatar

Thanks! That led to the following error..

│ Error: Attempt to get attribute from null value
│
│   on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│    5:   log_destination      = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│     ├────────────────
│     │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1

after running: atmos terraform deploy vpc-flow-logs-bucket -s org1-plat-ue2-prod then atmos terraform deploy vpc -s org1-plat-ue2-prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dave this is another thing that we are going to add to the “Quick Start” in the next release. In fact, it’s documented here https://atmos.tools/core-concepts/components/remote-state

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in short, the remote-state module uses the utils provider to read Atmos components, and Terraform executes all providers from the component’s folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos.yaml does not exist in the component’s folder, hence the utils provider can’t find the remote state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are on the host, put atmos.yaml to /usr/local/etc/atmos/atmos.yaml

Dave avatar

Oh, I totally read that the other day, apologies for forgetting… best practice would then be to copy the atmos config there any time it is updated?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or just set the ENV var ATMOS_CLI_CONFIG_PATH (this is what geodesic does automatically)

Dave avatar

Oh, I totally did that…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

best practice would be to use a Docker container with the rootfs pattern

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

```
# CLI config is loaded from the following locations (from lowest to highest priority):
#   system dir ('/usr/local/etc/atmos' on Linux, '%LOCALAPPDATA%/atmos' on Windows)
#   home dir (~/.atmos)
#   current directory
#   ENV vars
#   Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star '**' is supported)
# https://en.wikipedia.org/wiki/Glob_(programming)

# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path',
# 'stacks.base_path' and 'workflows.base_path' are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path',
# 'stacks.base_path' and 'workflows.base_path' are considered paths relative to 'base_path'.
base_path: ""

components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
    apply_auto_approve: false
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
    deploy_run_init: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: true
  helmfile:
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_BASE_PATH' ENV var, or '--helmfile-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_USE_EKS' ENV var
    # If not specified, defaults to 'true'
    use_eks: true
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH' ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN' ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN' ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  file: "/dev/stdout"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  level: Info

# Custom CLI commands
commands:
  - name: tf
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: plan
        description: This command plans terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ENV_VAR_1
            value: ENV_VAR_1_value
          - key: ENV_VAR_2
            # 'valueCommand' is an external command to execute to get the value for the ENV var
            # Either 'value' or 'valueCommand' can be specified for the ENV var, but not both
            valueCommand: echo ENV_VAR_2_value
        # steps support Go templates
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
  - name: play
    description: This command plays games
    steps:
      - echo Playing...
    # subcommands
    commands:
      - name: hello
        description: This command says Hello world
        steps:
          - echo Hello world
      - name: ping
        description: This command plays ping-pong
        # If 'verbose' is set to 'true', atmos will output some info messages to the console before executing the command's steps
        # If 'verbose' is not defined, it implicitly defaults to 'false'
        verbose: true
        steps:
          - echo Playing ping-pong...
          - echo pong
  - name: show
    description: Execute 'show' commands
    # subcommands
    commands:
      - name: component
        description: Execute 'show component' command
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates and have access to {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
          - key: ATMOS_TENANT
            value: "{{ .ComponentConfig.vars.tenant }}"
          - key: ATMOS_STAGE
            value: "{{ .ComponentConfig.vars.stage }}"
          - key: ATMOS_ENVIRONMENT
            value: "{{ .ComponentConfig.vars.environment }}"
        # If a custom command defines a 'component_config' section with 'component' and 'stack', 'atmos' generates the config
        # for the component in the stack and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
        # exposing all the component sections (which are also shown by the 'atmos describe component' command)
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        # Steps support using Go templates and can access all configuration settings (e.g. {{ .ComponentConfig.xxx.yyy.zzz }})
        # Steps also have access to the ENV vars defined in the 'env' section of the 'command'
        steps:
          - 'echo Atmos component from argument: "{{ .Arguments.component }}"'
          - 'echo ATMOS_COMPONENT: "$ATMOS_COMPONENT"'
          - 'echo Atmos stack: "{{ .Flags.stack }}"'
          - 'echo Terraform component: "{{ .ComponentConfig.component }}"'
          - 'echo Backend S3 bucket: "{{ .ComponentConfig.backend.bucket }}"'
          - 'echo Terraform workspace: "{{ .ComponentConfig.workspace }}"'
          - 'echo Namespace: "{{ .ComponentConfig.vars.namespace }}"'
          - 'echo Tenant: "{{ .Compo…
```
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or (especially if you are on the host), use ATMOS_CLI_CONFIG_PATH to set the path to atmos.yaml to whatever location you like

Dave avatar

Yeah I have that set, I am using the container.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this is an annoying feature, but that’s how Terraform works with the providers)

Dave avatar

Still having the issue, I’m clearly missing something….

I have done the following (automatically set on my docker run command)
just set the ENV var ATMOS_CLI_CONFIG_PATH (this is what geodesic does automatically)

# Tried both of the following
 √ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/

 √ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/atmos.yaml

I have tried this

if you are on the host, put atmos.yaml to /usr/local/etc/atmos/atmos.yaml

ls -l /usr/local/etc/atmos/
total 4
-rwxr-xr-x 1 root root 1931 Feb  2 09:08 atmos.yaml

Does using my current atmos.yaml suffice, or is there something special about your atmos.yaml from here?

https://github.com/cloudposse/atmos/blob/default-atmos-yaml/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with

/usr/local/etc/atmos/atmos.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what error do you see?

Dave avatar


√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/atmos.yaml

╷
> │ Error: Attempt to get attribute from null value
> │
> │   on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
> │    5:   log_destination      = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
> │     ├────────────────
> │     │ module.vpc_flow_logs_bucket[0].outputs is null
> │
> │ This value is null, so it does not have any attributes.
> ╵
> exit status 1

√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/

╷
│ Error: Attempt to get attribute from null value
│
│   on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│    5:   log_destination      = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│     ├────────────────
│     │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


are you using the ENV variables, or is your atmos.yaml in /usr/local/etc/atmos/atmos.yaml?

Dave avatar

I’ve tried both, are they mutually exclusive?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using the ENV vars, you need to set 2 vars:

Initial Atmos configuration can be controlled by these ENV vars:

ATMOS_CLI_CONFIG_PATH - where to find atmos.yaml. Path to a folder where the atmos.yaml CLI config file is located
ATMOS_BASE_PATH - base path to components and stacks folders
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos.yaml is loaded from the following locations (from lowest to highest priority):

System dir (/usr/local/etc/atmos/atmos.yaml on Linux, %LOCALAPPDATA%/atmos/atmos.yaml on Windows)
Home dir (~/.atmos/atmos.yaml)
Current directory
ENV variables ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH
Dave avatar

Yes read that

Dave avatar

Those are both set.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


if Atmos sees ATMOS_CLI_CONFIG_PATH, it will not try to use `/usr/local/etc/atmos/atmos.yaml`

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s ATMOS_BASE_PATH?

Dave avatar
echo $ATMOS_CLI_CONFIG_PATH
/repotest/

echo $ATMOS_BASE_PATH
/repotest/
Dave avatar

Also tried

echo $ATMOS_CLI_CONFIG_PATH
/repotest

echo $ATMOS_BASE_PATH
/repotest
Dave avatar

Also tried

unset ATMOS_CLI_CONFIG_PATH
unset ATMOS_BASE_PATH

cp /repotest/atmos.yaml /usr/local/etc/atmos/
base_path: "/repotest" # atmos.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s review this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
For this to work for both the `atmos` CLI and the Terraform `utils` provider, we recommend doing one of the following:

- Put `atmos.yaml` at `/usr/local/etc/atmos/atmos.yaml` on local host and set the ENV var `ATMOS_BASE_PATH` to point to the absolute path of the root
  of the repo

- Put `atmos.yaml` into the home directory (`~/.atmos/atmos.yaml`) and set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of
  the repo

- Put `atmos.yaml` at a location in the file system and then set the ENV var `ATMOS_CLI_CONFIG_PATH` to point to that location. The ENV var must
  point to a folder without the `atmos.yaml` file name. For example, if `atmos.yaml` is at `/atmos/config/atmos.yaml`,
  set `ATMOS_CLI_CONFIG_PATH=/atmos/config`. Then set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of the repo

- When working in a Docker container, place `atmos.yaml` in the `rootfs` directory
  at [/rootfs/usr/local/etc/atmos/atmos.yaml](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml>)
  and then copy it into the container's file system in the [Dockerfile](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Dockerfile>)
  by executing the `COPY rootfs/ /` Docker command. Then in the Dockerfile, set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the
  root of the repo. Note that the [Atmos example](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start>)
  uses [Geodesic](<https://github.com/cloudposse/geodesic>) as the base Docker image. [Geodesic](<https://github.com/cloudposse/geodesic>) sets the ENV
  var `ATMOS_BASE_PATH` automatically to the absolute path of the root of the repo on local host
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

pick one of the methods

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that ATMOS_BASE_PATH must be an absolute path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is what geodesic is doing:

• In the Dockerfile, copies the rootfs to the container so we have /usr/local/etc/atmos/atmos.yaml in there
• Sets ATMOS_BASE_PATH to /localhost/...../infra - NOTE: this is an absolute path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(again, sorry that this is complicated, but Terraform executes providers from the component folder, and we don’t want to place atmos.yaml in every component’s folder)

Dave avatar

Are we sure PATHING had anything to do with it?

As soon as I set vpc_flow_logs_enabled: false it worked just fine.

Dave avatar

I went through each of the OPTIONS here

For this to work for both the `atmos` CLI and the Terraform `utils` provider, we recommend doing one of the following:

- Put `atmos.yaml` at `/usr/local/etc/atmos/atmos.yaml` on local host and set the ENV var `ATMOS_BASE_PATH` to point to the absolute path of the root
  of the repo

- Put `atmos.yaml` into the home directory (`~/.atmos/atmos.yaml`) and set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of
  the repo

- Put `atmos.yaml` at a location in the file system and then set the ENV var `ATMOS_CLI_CONFIG_PATH` to point to that location. The ENV var must
  point to a folder without the `atmos.yaml` file name. For example, if `atmos.yaml` is at `/atmos/config/atmos.yaml`,
  set `ATMOS_CLI_CONFIG_PATH=/atmos/config`. Then set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of the repo

- When working in a Docker container, place `atmos.yaml` in the `rootfs` directory
  at [/rootfs/usr/local/etc/atmos/atmos.yaml](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml>)
  and then copy it into the container's file system in the [Dockerfile](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Dockerfile>)
  by executing the `COPY rootfs/ /` Docker command. Then in the Dockerfile, set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the
  root of the repo. Note that the [Atmos example](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start>)
  uses [Geodesic](<https://github.com/cloudposse/geodesic>) as the base Docker image. [Geodesic](<https://github.com/cloudposse/geodesic>) sets the ENV
  var 

and got the same results every time.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it might be something else. You can DM me your setup and I’ll review

2024-02-02

johncblandii avatar
johncblandii

We’re still on 1.44 and have been in the weeds so didn’t see all of the new stuff, but I read the latest releases since then and major kudos on the latest work. This is looking phenomenal and we will upgrade soon.

1
1
2
Release notes from atmos avatar
Release notes from atmos
11:14:32 PM

v1.57.0

what

• Add default CLI configuration to Atmos code
• Update/improve examples and docs
• Update demo.tape

why

Add default CLI configuration to Atmos code - this is useful when executing Atmos CLI commands (e.g. on CI/CD) that do not require components and stacks

If atmos.yaml is not found in any of the searched locations, Atmos will use the default CLI configuration:

base_path: "."
components:
  terraform:
    base_path: components/terraform
    apply_auto_approve: false
    deploy_run_init:…


Dr.Gao avatar

Hello! When using Atmos GitHub Actions for Terraform drift detection, I saw an example config like the one below. How can I specify all components? How can I specify components in specific folders?

   select-components:
      runs-on: ubuntu-latest
      name: Select Components
      outputs:
        matrix: ${{ steps.components.outputs.matrix }}
      steps:
        - name: Selected Components
          id: components
          uses: cloudposse/github-action-atmos-terraform-select-components@v0
          with:
            jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
            debug: ${{ env.DEBUG_ENABLED }}
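To select every component regardless of the `actions_enabled` flag, one option is to drop the `select` filter from the jq query. This is an untested sketch based on the snippet above:

```yaml
        - name: Selected Components
          id: components
          uses: cloudposse/github-action-atmos-terraform-select-components@v0
          with:
            # same query as above, minus the actions_enabled select filter,
            # so every terraform component in every stack is matched
            jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | [$parent, .key] | join(",")'
            debug: ${{ env.DEBUG_ENABLED }}
```

For stacks under specific folders, a similar jq filter on `$parent` (the stack name, which typically mirrors the folder layout) such as `select($parent | startswith("org1"))` may work, though that is an assumption about your stack naming.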
Dr.Gao avatar

in the atmos terraform plan GitHub Actions, the docs on the website say: Within the "plan" job, the "component" and "stack" are hardcoded (foobar and plat-ue2-sandbox). In practice, these are usually derived from another action.

Dr.Gao avatar

Is there an example that practically uses components from “affected stacks”?

Dr.Gao avatar

I see it uses component as a key in the YAML a lot. Does that mean it supports configuring only one component? How do we configure multiple components in this case?
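For context on multiple components: the select-components jq query picks up any component whose stack settings enable it, so components are opted in per stack rather than listed in the workflow. A sketch (component names are hypothetical):

```yaml
# Hypothetical stack manifest: every component that sets this flag is picked
# up by the select-components action; add the block to as many components as needed
components:
  terraform:
    vpc:
      settings:
        github:
          actions_enabled: true
    vpc-flow-logs-bucket:
      settings:
        github:
          actions_enabled: true
```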

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Terraform Drift Detection | atmos

The Cloud Posse GitHub Action for “Atmos Terraform Drift Detection” and “Atmos Terraform Drift Remediation” define a scalable pattern for detecting and remediating Terraform drift from within GitHub using workflows and Issues. “Atmos Terraform Drift Detection” will determine drifted Terraform state by running Atmos Terraform Plan and creating GitHub Issues for any drifted component and stack. Furthermore, “Atmos Terraform Drift Remediation” will run Atmos Terraform Apply for any open Issue if called and close the given Issue. With these two actions, we can fully support drift detection for Terraform directly within the GitHub UI.

name: "Atmos GitOps Select Components"
description: "A GitHub Action to get list of selected components by jq query"
author: [email protected]
branding:
  icon: "file"
  color: "white"
inputs:
  select-filter:
    description: jq query that will be used to select atmos components
    required: false
    default: '.'
  head-ref:
    description: The head ref to checkout. If not provided, the head default branch is used.
    required: false
    default: ${{ github.sha }}
  atmos-gitops-config-path:
    description: The path to the atmos-gitops.yaml file
    required: false
    default: ./.github/config/atmos-gitops.yaml
  jq-version:
    description: The version of jq to install if install-jq is true
    required: false
    default: "1.6"
  debug:
    description: "Enable action debug mode. Default: 'false'"
    default: 'false'
    required: false
  nested-matrices-count:
    required: false
    description: 'Number of nested matrices that should be returned as the output (from 1 to 3)'
    default: "2"
outputs:
  selected-components:
    description: Selected GitOps components
    value: ${{ steps.selected-components.outputs.components }}
  has-selected-components:
    description: Whether there are selected components
    value: ${{ steps.selected-components.outputs.components != '[]' }}
  matrix:
    description: The selected components as matrix structure suitable for extending matrix size workaround (see README)
    value: ${{ steps.matrix.outputs.matrix }}

runs:
  using: "composite"
  steps:
    - uses: actions/checkout@v3
      with:
        ref: ${{ inputs.head-ref }}

    - name: Read Atmos GitOps config
      ## We have to reference cloudposse fork of <https://github.com/blablacar/action-config-levels>
      ## before <https://github.com/blablacar/action-config-levels/pull/16> would be merged
      uses: cloudposse/github-action-config-levels@nodejs20
      id: config
      with:
        output_properties: true
        patterns: |
          - ${{ inputs.atmos-gitops-config-path }}

    - name: Install Terraform
      uses: hashicorp/setup-terraform@v2
      with:
        terraform_version:  ${{ steps.config.outputs.terraform-version }}
        terraform_wrapper: false

    - name: Install Atmos
      uses: cloudposse/github-action-setup-atmos@v1
      env:
       ATMOS_CLI_CONFIG_PATH: ${{inputs.atmos-config-path}}
      with:
        atmos-version: ${{ steps.config.outputs.atmos-version }}
        install-wrapper: false

    - name: Install JQ
      uses: dcarbone/[email protected]
      with:
        version: ${{ inputs.jq-version }}

    - name: Filter Components
      id: selected-components
      shell: bash
      env:
        ATMOS_CLI_CONFIG_PATH:  ${{ steps.config.outputs.atmos-config-path }}
        JQUERY: |
          with_entries(.value |= (.components.terraform)) |             ## Deal with components type of terraform
          map_values(map_values(select(${{ inputs.select-filter }}))) | ## Filter components by enabled github actions
          map_values(select(. != {})) |                                 ## Skip stacks that have 0 selected components
          map_values(. | keys) |                                        ## Reduce to component names
          with_entries(                                                 ## Construct component object
            .key as $stack | 
            .value |= map({
              "component": ., 
              "stack": $stack, 
              "stack_slug": [$stack, .] | join("-")
            })
          ) | map(.) | flatten                                          ## Reduce to flat array
      run: |
        atmos describe stacks --format json | jq -ce "${JQUERY}" > components.json

        components=$(cat components.json)
        echo "Selected components: $components"
        printf "%s" "components=$components" >> $GITHUB_OUTPUT

    - uses: cloudposse/github-action-matrix-extended@v0
      id: matrix
      with:
        matrix: components.json
        sort-by: ${{ steps.config.outputs.sort-by }}
        group-by: ${{ steps.config.outputs.group-by }}
        nested-matrices-count: ${{ inputs.nested-matrices-count }}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which executes atmos describe stacks --format json, which returns all components in all stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe stacks | atmos

Use this command to show the fully deep-merged configuration for all stacks and the components in the stacks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


How can I specify all components?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the action returns all components, which is what you need for drift detection

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, to detect only the affected components (affected by the changes in a PR), you can use https://atmos.tools/integrations/github-actions/affected-stacks

Affected Stacks | atmos

Streamline Your Change Management Process

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Terraform Plan | atmos

The Cloud Posse GitHub Action for “Atmos Terraform Plan” simplifies provisioning Terraform from within GitHub using workflows. Understand precisely what to expect from running a terraform plan from directly within the GitHub UI for any Pull Request.

Dr.Gao avatar

Thanks very much! That is very helpful!

Dr.Gao avatar

Is Github action with Atmos production ready? @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are used in production

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need help or find any issues

Dr.Gao avatar

Thanks! Andriy!

Dr.Gao avatar

I will reach out again if we have any issues

2024-02-05

Guus avatar

Hi, we’re using Atmos + cloudposse components for a multi-account AWS organization setup (accounts for identity, dns, audit, …). Say we have a customer who is providing access to an AWS account within their own organization through a role we can assume from one of our own IAM roles. How would we be able to assume this role within our cloudposse setup, so we can still use atmos & cloudposse components and store terraform state (S3) and locking (DynamoDB) on our own account, while provisioning the actual infrastructure on the customer’s account?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not related to Atmos: since you want to use the same Terraform backend and have already configured the backend.s3 section, all state will be stored in the same backend (even for the external account). You are probably using different IAM roles to access the backend and the AWS resources

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to provide an IAM role in assume_role that Terraform will assume to access the external account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using just provider "aws" you can always provide that role in assume_role
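If you’re not using the iam_roles module, a direct sketch looks like this (the role ARN is a placeholder for whatever cross-account role the customer lets you assume):

```hcl
provider "aws" {
  region = var.region

  assume_role {
    # Placeholder ARN: the cross-account role in the customer's account
    role_arn = "arn:aws:iam::111111111111:role/customer-terraform-access"
  }
}
```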

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then the account-map component needs to be configured to return that IAM role for a specific context, e.g. for a diff Org, or diff tenant, or diff account - depending on how you model the external account (is it a separate Org, or a separate tenant, or just a separate account)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in Atmos manifests, you create those configurations for the new Org/tenant/account
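For example, a sketch of a stack manifest for the external account, modeled here as its own stage (all import paths and names are illustrative, not an existing catalog layout):

```yaml
# Hypothetical stack manifest for the external (customer) account
import:
  - orgs/acme/plat/_defaults

vars:
  tenant: plat
  environment: use1
  stage: customer-prod
```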

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can always model an external account as a separate Org, a separate tenant, or just a separate AWS account in the existing Org/tenant

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Guus avatar

Thank you for the detailed reply, I think I understand what you mean, I’m just not sure how I would make the account-map component return the external role to assume? How would I add the external AWS account (which is part of the customers AWS Org) within my own AWS Org setup configured using atmos + cloud components?

Guus avatar

So I would just see it as a separate external account, which I want to provision resources on using my existing atmos+cloudposse project. So adding it using the manifests and allowing it to be using the assume_role (terraform_role_arn) with an external role would perfectly fit my needs. I just don’t know how to set that up and can’t immediately find any examples either.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using the account-map component, there are no examples like that. To use an existing external account as a separate stage and to use an existing Terraform IAM role, you will need to modify the account-map component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  static_terraform_role  = local.account_map.terraform_roles[local.account_name]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. if stage=<new_stage> return the existing Terraform role ARN

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the account-map component needs to be modified before you configure the new functionality with Atmos, e.g. by providing it with a new input - a map of existing accounts to the Terraform roles to assume

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use that input in the component to add it to the outputs
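A rough sketch of what that modification could look like inside account-map (the variable name matches the Atmos example further down; the locals are only illustrative of the component's internals, not its actual code):

```hcl
# Sketch only: a new input mapping existing (brownfield) accounts to
# pre-existing Terraform role ARNs, merged over the computed roles
variable "existing_accounts_to_terraform_role_arns" {
  type        = map(string)
  default     = {}
  description = "Map of existing account names to Terraform IAM role ARNs to assume"
}

locals {
  # `local.terraform_roles` stands in for however the component currently
  # computes its per-account roles; the static map wins for brownfield accounts
  terraform_roles_merged = merge(
    local.terraform_roles,
    var.existing_accounts_to_terraform_role_arns
  )
}
```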

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then you can configure it with Atmos (which is just to configure that new input variable)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
   terraform:
     account-map:
       vars:
         existing_accounts_to_terraform_role_arns:
            acc1: arn1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what you are asking about is a brownfield environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#945 feat: use `account-map` component for brownfield env

what

• feat: use account-map component for brownfield env

why

• Allow brownfield environments to use the account-map component with existing accounts

references

• Related to PR #943

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#943 feat: use `account` component for brownfield env

what

• feat: use account component for brownfield env

why

• Allow brownfield environments to use the account component without managing org, org unit, or account creation • This will allow adding to the account outputs so account-map can ingest it without any changes

references

• Slack conversations in sweetops https://sweetops.slack.com/archives/C031919U8A0/p1702135734967949

tests

I tested with these toggled to true and false to ensure it worked as expected. When these are true, the resources are all created. When these are false, none of the resources are created and all the outputs are filled with existing account information with the ability to override using the yaml inputs.

        organization_enabled: true
        organizational_units_enabled: true
        accounts_enabled: true
Guus avatar

Interesting, thank you!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as you can see, the PRs are under discussion (which means there is no consensus yet on how to do this), so for now you need to make the changes to the components yourself, using any approach, and configure them with Atmos

Alex Soto avatar
Alex Soto

Hi, is there a document explaining why Atmos runs a reconfigure? As I look at example atmos.yaml files, the default appears to always reconfigure. Every time I run a plan, even for the same component and stack in succession, it’s constantly asking to migrate all workspaces

the one caveat is that I’m playing around right now and using local state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform init -reconfigure is used to be able to use a component in many stacks

Re-running init with an already-initialized backend will update the working directory to use the new backend settings. Either -reconfigure or -migrate-state must be supplied to update the backend configuration.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can always set it to false in atmos.yaml
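The relevant atmos.yaml setting looks like this (verify the key against the CLI Configuration docs for your Atmos version):

```yaml
components:
  terraform:
    # When false, Atmos runs `terraform init` without `-reconfigure`
    init_run_reconfigure: false
```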

Alex Soto avatar
Alex Soto

ok thanks, I’m still learning; I don’t quite understand why it needs to if it’s re-running in the same stack, unless it’s not detecting that it’s in the same stack and just running it always because the config var says true

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

-reconfigure was introduced (it’s configurable) for the cases when we use multiple Terraform backends per Org, per tenant, or per account. Then we can provision the same TF component into multiple stacks using diff TF backends for each stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and to make it configurable (and disable for the cases like yours), it was added to atmos.yaml config

Alex Soto avatar
Alex Soto

ok thx, my case is very simple right now as I’m bootstrapping. Hopefully warnings go away as I get more sophisticated infra in place.

Gabriel Tam avatar
Gabriel Tam

Hi, I was trying to deploy the https://github.com/cloudposse/terraform-aws-components/tree/main/modules/waf module, but I had a hard time figuring out how to use the and_statement, or_statement, or not_statement in the rules. I know I can do that with straight TF, but I can’t seem to be able to do that with the Cloud Posse module. Also, are Rule Groups not supported? Can someone please shed some light? Thank you in advance.

The following snippet is what I had, but I was only able to specify one statement.

byte_match_statement_rules:
  - name: "invalid-path"
    priority: 30
    action: block

    statement:
      field_to_match:
        uri_path:
          rule:
      positional_constraint: "STARTS_WITH"
      search_string: "/api/v3/test/"
      text_transformation:
        rule:
          priority: 0
          type: "NONE"

    visibility_config:
      # Defines and enables Amazon CloudWatch metrics and web request sample collection.
      cloudwatch_metrics_enabled: true
      metric_name: "uri_path"
      sampled_requests_enabled: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

the waf component may not have everything that the terraform-aws-waf module has. We use components as root modules for customer implementations, so it likely only has what we’ve required for a given use case. You can likely add anything that the module supports to the component

https://github.com/cloudposse/terraform-aws-waf

cloudposse/terraform-aws-waf
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

rule groups are supported by the managed_rule_group_statement_rules input: https://github.com/cloudposse/terraform-aws-waf/blob/main/variables.tf#L361

variable "managed_rule_group_statement_rules" {
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

@Andriy Knysh (Cloud Posse) do you have an example of using an and_statement or or_statement or not_statement?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i think those or and and statements are not supported by the module. PRs are welcome

Gabriel Tam avatar
Gabriel Tam

@Dan Miller (Cloud Posse), I think the rule groups are not supported in the terraform-aws-components WAF module. The managed rule groups are the ones that are managed by AWS, not the ones we create.

So it sounds like we’ll need to customize it to do both rule groups and multi statements then. @johncblandii

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    for_each = local.rule_group_reference_statement_rules
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabriel Tam are those not what you are describing?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "rule_group_reference_statement_rules" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "rule_group_reference_statement_rules" {
Gabriel Tam avatar
Gabriel Tam

Those are the rule-group reference rules that point to an existing rule group (ARN). I couldn’t find where I can create the rule groups. And the and, or, and not statements are needed for our use cases. Those are the ones we will need to create / customize.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you modify/update/improve it, your contribution to the WAF module would be greatly appreciated (taking into account that WAF is a complex thing, it would benefit many people)

johncblandii avatar
johncblandii

so it seems support for this resource is what @Gabriel Tam is referring to.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_rule_group
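For reference, a minimal sketch of that resource (names and values are illustrative, loosely adapted from Gabriel’s byte-match rule above; this is not something the component currently generates):

```hcl
# Hypothetical standalone WAFv2 rule group with a single byte-match rule
resource "aws_wafv2_rule_group" "example" {
  name     = "example-rule-group"
  scope    = "REGIONAL"
  capacity = 10

  rule {
    name     = "block-api-path"
    priority = 1

    action {
      block {}
    }

    statement {
      byte_match_statement {
        positional_constraint = "STARTS_WITH"
        search_string         = "/api/v3/test/"

        field_to_match {
          uri_path {}
        }

        text_transformation {
          priority = 0
          type     = "NONE"
        }
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "block-api-path"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example-rule-group"
    sampled_requests_enabled   = true
  }
}
```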

2024-02-06

2024-02-08

Release notes from atmos avatar
Release notes from atmos
12:34:39 AM

v1.58.0 what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a component and stack and printing error messages if the component or stack is not found

If a user executes any Atmos command that requires Atmos components and stacks, including just atmos (and including from a random folder not related to Atmos configuration), and the CLI config points to an Atmos stacks…

Release v1.58.0 · cloudposse/atmosattachment image

what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a compone…


2024-02-09

Release notes from atmos avatar
Release notes from atmos
06:24:25 PM

v1.59.0 what

Update intro of Atmos (https://atmos.tools/) Add page on Terraform limitations (https://atmos.tools/reference/terraform-limitations/) Add backend.tf.json to .gitignore for QuickStart Default to dark mode Stylize atmos brand

why

Make it more compelling Add missing context developers might lack without extensive terraform experience

Release v1.59.0 · cloudposse/atmosattachment image

what

Update intro of Atmos (https://atmos.tools/) Add page on Terraform limitations (https://atmos.tools/reference/terraform-limitations/) Add backend.tf.json to .gitignore for QuickStart Default …

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

Terraform Limitations | atmos

Terraform Limitations


Release notes from atmos avatar
Release notes from atmos
03:14:37 AM

v1.60.0 what

Fix an issue with the skip_if_missing attribute in Atmos imports with context Update docs titles and fix typos Update atmos version CLI command

why

The skip_if_missing attribute was introduced in Atmos release v1.58.0 and had some issues with checking Atmos imports if the imported manifests don’t exist

Docs had some typos

When executing the atmos version command, Atmos automatically checks for the latest…

Release v1.60.0 · cloudposse/atmosattachment image

what

Fix an issue with the skip_if_missing attribute in Atmos imports with context Update docs titles and fix typos Update atmos version CLI command

why

The skip_if_missing attribute was introd…

Release v1.58.0 · cloudposse/atmosattachment image

what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a compone…

2024-02-10

silopolis avatar
silopolis

This page alone deserves a conf to reveal all its gems and secrets!

https://atmos.tools/reference/terraform-limitations/

Overcoming Terraform Limitations with Atmos | atmos

Overcoming Terraform Limitations with Atmos

this2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Erik Osterman (Cloud Posse) created it, I enjoyed reading it, after reading it you will want to use Atmos :)

Overcoming Terraform Limitations with Atmos | atmos

Overcoming Terraform Limitations with Atmos

silopolis avatar
silopolis

You surely do! And you’re better equipped to do so with all the XP distilled in this page!

RB avatar

Is there a way to visualize the atmos stacks when using github actions to plan and apply? Or is it on the roadmap?

RB avatar

I know there is a native GitHub way to do it by clicking on actions, just wondering if there is another frontend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do not have any immediate plans for a front end

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Focused on adopting GitHub actions and GitHub enterprise functionality as much as possible right now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What specifically are you missing that a UI would solve? Visualizing atmos stacks is a bit broad.

RB avatar

I was thinking that it would be nice to have the ability to see which stacks have drifted visually

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We believe we’ve solved that. We open GitHub Issues. This way drift is actionable (it can be assigned to someone to remediate, and supports remediation from issues, when possible) and visual. We also create issues from failures.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem with existing UIs is that they are not actionable. They just show you that you have a bunch of drifted stacks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Pro tip, you can use projects with github issues.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

GitHub Issues can be synced to Jira, if you’re not working out of GitHub Issues directly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(note our actions support this out of the box)

2024-02-11

RB avatar

I tried using the component.yaml’s mixins key to copy over my local providers file and it failed.

Does that key only work with the source?

RB avatar

If so, how do i vendor from upstream and copy in my local mixin?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB are you asking about how to copy a local file to the component’s folder?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  mixins:
    # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
    # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
    # This mixin `uri` is relative to the current `vpc` folder
    - uri: ../../mixins/context.tf
      filename: context.tf
RB avatar


@RB are you asking about how to copy a local file to the component’s folder?
Yes

RB avatar

Oh i see, i can use the ../../ expression?

RB avatar

Hmm i tried this and i got an error last time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is it working for you?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Btw have you also experimented with the new vendor manifest?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What’s nice about that is you can use yaml anchors within the file to DRY up provider copying

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Vendoring | atmos

Use Atmos vendoring to make copies of 3rd-party components, stacks, and other artifacts in your own repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah yes, @Andriy Knysh (Cloud Posse) shows an example there:

Option 1

- component: "context"
      # Copy a local file into a local file with a different file name
      # This `source` is relative to the current folder
      source: "components/terraform/mixins/context.tf"
      targets:
        - "components/terraform/vpc/context.tf"
        - "components/terraform/alb/context.tf"
        - "components/terraform/ecs/context.tf"
        # etc...
      # Tags can be used to vendor component that have the specific tags
      # `atmos vendor pull --tags test`
      # Refer to <https://atmos.tools/cli/commands/vendor/pull>
      tags:
        - context
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since there can only be one source, there’s no way to do the copying from a different location in each - component definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I could see this improving, but it’s not a pattern we use right now, and we dissuade against it due to the impossibility of testing open-ended mixins.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re adding a terratest helper to do atmos testing for components and stacks

RB avatar

Ah that looks better than my current way which is to use a make target

i.e.

make vendor COMP=ecr
make vendor/edit COMP=ecr
.PHONY: vendor
vendor: ## Vendor the component
	@mkdir -p components/terraform/$(COMP)
	@sed 's,ecr,$(COMP),g' ./mixins/component.yaml > components/terraform/$(COMP)/component.yaml
	@atmos vendor pull -c $(COMP)

.PHONY: vendor/edit
vendor/edit: ## Vendor the component and edit the files
	@hcledit block rm 'variable.region' -f components/terraform/$(COMP)/variables.tf -u
	@cp ./mixins/providers.tf components/terraform/$(COMP)/
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On the one hand, make is nice because it understands modification times. Albeit, not in your implementation. But it’s over-optimizing for the most part to do that. Give the vendoring a shot.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In make though, I would do it like this: (functional looking pseudo code)

ALL_PROVIDERS = $(shell find . -type f -name 'providers.tf' -not -path './mixins/*')

$(ALL_PROVIDERS): mixins/providers.tf
	cp $< $@

.PHONY : providers
providers: $(ALL_PROVIDERS)
1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That would have only copied it if mixins/providers.tf is newer than the providers.tf inside of a component.

RB avatar

Thanks Erik. You got some make skills!

The other one is using hcledit to remove variable.region from variables.tf after vendoring, as I have moved that into the client’s providers.tf file

hcledit block rm 'variable.region' -f components/terraform/$(COMP)/variables.tf -u
RB avatar

Any chance vendoring can also include running a cli command via post_vendor key or similar ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since component-level generation (like Terragrunt, Terramate) is not something we subscribe to, it’s not yet something we can prioritize. I think we will inevitably support it, but for now recommend doing that in make, or like @Hans D does with Gotask

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can see supporting it, primarily for the reason making it easier for companies to migrate from other tools into atmos

RB avatar

no worries, for now I will look into the new vendoring yaml, and tie that back into a make target and i should be good to go

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, and also for very advanced vendoring requirements, there’s https://carvel.dev/vendir/docs/v0.39.x/vendir-spec/

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(which atmos vendoring is based on)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can add pre and post hooks to vendoring (no ETA, but in the near future)

Matthew Reggler avatar
Matthew Reggler

Hi, I’ve encountered the same issue here, not sure if this thread resolved the issue from within the ComponentVendorConfig?

I’m trying to replace the supplied providers.tf file with one that sources the iam_roles module from a private registry on vendor (to kill relative paths to the iam_roles module):

  mixins:
    - uri: ../../helpers/providers.registry.tf
      filename: providers.tf

I get the following error

Pulling the mixin '../../helpers/providers.registry.tf' for the component 'my-component' into '/localhost/path/to/components/terraform/my-component/providers.tf'

relative paths require a module with a pwd
Matthew Reggler avatar
Matthew Reggler

This is on Atmos v1.60.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make sure the relative path is correct

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  # mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # mixins are processed in the order they are declared in the list
  mixins:
    # <https://github.com/hashicorp/go-getter/issues/98>
    # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
    # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
    # This mixin `uri` is relative to the current `vpc` folder
    - uri: ../../mixins/context.tf
      filename: context.tf
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos vendor pull -c infra/vpc

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Pulling sources for the component 'infra/vpc' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref=1.372.0' into 'examples/tests/components/terraform/infra/vpc'
Pulling the mixin '.......mixins/context.tf' for the component 'infra/vpc' into 'examples/tests/components/terraform/infra/vpc/context.tf'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matthew Reggler if that still is not working for you, please DM me with your config and I’ll take a look

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, update Atmos to the latest

2024-02-12

2024-02-13

RB avatar

Im trying to run cloudposse/github-action-atmos-terraform-plan but Im getting this error

Run cloudposse/github-action-atmos-get-setting@v1
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON

Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
RB avatar

This is my yaml

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
    - name: Plan Atmos Component
      uses: cloudposse/github-action-atmos-terraform-plan@v1
      with:
        component: "github-oidc-role/cicd"
        stack: "gbl-prod"
        component-path: "component/terraform/github-oidc-role"
        terraform-plan-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        terraform-state-bucket: "org-state-bucket"
        terraform-state-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        aws-region: "us-east-1"
RB avatar

But if I run this, it works

    runs-on: ubuntu-latest
    steps:
    - uses: hashicorp/setup-terraform@v2
  
    - name: Setup atmos
      uses: cloudposse/github-action-setup-atmos@v1
      with:
        install-wrapper: true
  
    - name: Run atmos
      id: atmos
      run: atmos terraform plan github-oidc-role/cicd --stack=gbl-prod
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor Rodionov please take a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think these moved into the config.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
        component-path: "component/terraform/github-oidc-role"
        terraform-plan-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        terraform-state-bucket: "org-state-bucket"
        terraform-state-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        aws-region: "us-east-1"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(the aim being to keep config out of workflows so they are more easily distributed)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Matt Calhoun who has also been working on github-action-atmos-get-setting

RB avatar

Yes that makes sense. I was following the readme. I also tried only the component and the stack and received the same weird issue in github-action-atmos-get-setting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Igor Rodionov I think the readme may be out of date?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(it does have the part about the config up above)

Igor Rodionov avatar
Igor Rodionov

@RB have you created the config file >

Igor Rodionov avatar
Igor Rodionov

?

RB avatar

Oh boy no i did not. I thought i could give it that info as inputs to the workflow

RB avatar

If i create that config, wouldn’t i just be duplicating the atmos stack yaml in the gitops config?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov we should have a friendlier error message

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


If i create that config, wouldn’t i just be duplicating the atmos stack yaml in the gitops config?
We are moving most of the gitops config into the atmos stack config. And some of it will be moved into atmos.yaml

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(this was based on feedback we received, and it makes sense in hindsight)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That way the GHA should work more out-of-the-box

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The key thing we’re aiming for is that GHA workflows should not need to be edited.

1
Brett Au avatar
Brett Au

So we moved over to the .github/config/atmos-gitops.yaml pattern but we’re still having the same error

Run cloudposse/github-action-atmos-get-setting@v1
  with:
    component: s3/some-new-bucket
    stack: ue1-devops-prod-01
    settings-path: settings.github.actions_enabled
  env:
    ATMOS_CLI_CONFIG_PATH: 
  
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
    at JSON.parse (<anonymous>)
    at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/lib/settings.ts:28:1)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at processSingleSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/useCase/process-single-setting.ts:40:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/main.ts:30:1
Brett Au avatar
Brett Au

Our atmos-gitops.yaml file

  atmos-version: 1.45.3
  atmos-config-path: ./rootfs/usr/local/etc/atmos/
  terraform-state-bucket: bucket
  terraform-state-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
  terraform-plan-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
  terraform-apply-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
  terraform-version: 1.6.0
  aws-region: us-east-1
  enable-infracost: false
  sort-by: .stack_slug
  group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-") 
Brett Au avatar
Brett Au

I do notice that ATMOS_CLI_CONFIG_PATH is empty in the call to get-setting, not sure if that’s part of the issue

Brett Au avatar
Brett Au

Looking at this documentation too, it seems we may need to set up our atmos.yaml to output JSON so the settings GitHub action can correctly parse it https://atmos.tools/cli/configuration/

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor Rodionov can you please look at this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, heads up, @Brett Au - sorry to do this to you, very soon we’re moving the configuration into atmos.yml for consistency)

Igor Rodionov avatar
Igor Rodionov

Hello @Brett Au What version of cloudposse/github-action-atmos-terraform-plan are you using?

Brett Au avatar
Brett Au

Sorry getting back to this, @RB and I moved into the aws-team-roles and aws-teams modules and we didn’t want to pollute this issue with something custom we did.

I have re-tried this today with cloudposse/github-action-atmos-terraform-plan@v2

Brett Au avatar
Brett Au

I added the configuration into atmos.yaml

integrations:
  github:
    gitops:
      terraform-version: 1.8.1
      infracost-enabled: false
      artifact-storage:
        region: us-east-1
        bucket: bucket
        #table: cptest-core-ue2-auto-gitops-plan-storage
        role: arn:aws:iam::OMITTED:role/l360-gbl-identity-cicd
      role:
        plan: arn:aws:iam::OMITTED:role/l360-gbl-identity-cicd
        apply: arn:aws:iam::OMITTED:role/l360-gbl-identity-cicd
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
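
The group-by value is a jq expression applied to each affected stack’s stack_slug. As a rough illustration (a hypothetical Python sketch, not code from the action), here is what it computes:

```python
# Simulates the jq filter: .stack_slug | split("-") | [.[0], .[2]] | join("-")
# Illustrative sketch only; the real grouping happens inside the GitHub Action via jq.
def group_key(stack_slug: str) -> str:
    parts = stack_slug.split("-")          # e.g. ["ue1", "devops", "prod", "01"]
    return "-".join([parts[0], parts[2]])  # keep the 1st and 3rd segments

print(group_key("ue1-devops-prod-01"))  # ue1-prod
```

So a stack slug like ue1-devops-prod-01 is grouped under ue1-prod.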
Brett Au avatar
Brett Au

I still have atmos-gitops.yaml as it appears uses: cloudposse/github-action-atmos-affected-stacks@v3 still requires it.

I get the following error when running the plan github action

SyntaxError: Unexpected token 'p', "path: /hom"... is not valid JSON
 at JSON.parse (<anonymous>)
    at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/lib/settings.ts:28:1)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at processSingleSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/useCase/process-single-setting.ts:40:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/main.ts:30:1
Brett Au avatar
Brett Au
      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v2
        env:
          ATMOS_CLI_CONFIG_PATH: ${{ github.workspace }}
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}
          atmos-config-path: ${GITHUB_WORKSPACE}
          atmos-version: 1.70.0
Brett Au avatar
Brett Au

I should state that my manual github action (bash) is working just fine

      - name: Run atmos
        if: ${{ steps.shouldrun.outputs.status == 'true' }}
        id: atmos
        run: |
          export ATMOS_CLI_CONFIG_PATH=${GITHUB_WORKSPACE}
          export ATMOS_BASE_PATH=${GITHUB_WORKSPACE}
          atmos terraform plan ${{ matrix.component }} --stack=${{ matrix.stack }} -no-color

So I know the environment can properly run atmos, but it appears the output from the github-action-atmos-get-setting github action is not valid JSON, and the action is failing

Brett Au avatar
Brett Au

We do have a custom name pattern in atmos

  name_pattern: "{environment}-{stage}"
Brett Au avatar
Brett Au

Sorry for the tags, but just wanted to bubble up this old issue

@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) @Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I still have atmos-gitops.yaml as it appears uses: cloudposse/github-action-atmos-affected-stacks@v3 still requires it.
The atmos-gitops.yaml file has been entirely eliminated from all actions and replaced with the integrations.github section

Brett Au avatar
Brett Au

ACK I think I may have been on a v2 branch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hopefully an easy fix

Igor Rodionov avatar
Igor Rodionov

@Brett Au, have you succeeded in solving the issue?

RB avatar

This error is coming up now. I added some logs to it, and I am using the same integrations key in the YAML file.

https://github.com/cloudposse/github-action-matrix-extended/issues/9

Only notable difference is that a dynamodb lock table is not supplied.

RB avatar

I see the issue.

⨠ atmos describe config -f json | jq .
parse error: Invalid numeric literal at line 1, column 6

which fails because of these 2 lines in the output

⨠ atmos describe config -f json | head -2
Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos
Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos

This is because

# atmos.yaml
logs:
  level: Trace

I also tried setting ATMOS_LOGS_LEVEL and get the same error

⨠ export ATMOS_LOGS_LEVEL=Off
⨠ atmos describe config -f json | jq .
parse error: Invalid numeric literal at line 1, column 6
RB avatar

Options

  1. Set log level Off in atmos.yaml and set ATMOS_LOGS_LEVEL=Trace in geodesic
  2. or fix atmos to allow using an environment variable to turn off the Found ENV lines in the output
  3. or update to strip out Found ENV from the output

I think I’ll just do (1) for now, but I’d prefer (2), and (3) seems like a temp fix
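
The root cause generalizes: anything printed to stdout ahead of the JSON document makes the parser fail on the first token. A small Python sketch (illustrative only; the action itself is TypeScript, and the payload below is made up) of the failure and an option-3-style strip workaround:

```python
import json

# Log lines emitted on stdout ahead of the JSON payload (hypothetical payload)
raw = 'Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos\n{"basePath": "."}'

try:
    json.loads(raw)  # fails: the first token is 'F', which is not valid JSON
except json.JSONDecodeError:
    pass

# Workaround sketch: drop leading lines until one looks like the start of a JSON doc
lines = raw.splitlines()
while lines and not lines[0].lstrip().startswith(("{", "[")):
    lines.pop(0)
config = json.loads("\n".join(lines))
print(config["basePath"])  # .
```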
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, yes I think we have an issue tracking this. @Gabriela Campana (Cloud Posse) can you confirm we have an issue to fix the output from atmos that makes it impossible to use trace-level debugging together with automation. Cc @Andriy Knysh (Cloud Posse)

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All debug/trace output should be going to stderr, but might be missed in some places

1
Brian avatar

Hello, I am struggling to understand how atmos can provision resources into different AWS accounts in a multi-account AWS organization. For example, if I need to provision an IAM role in all my AWS accounts in my organization, how does atmos change provider configurations to gain access to my organization’s child accounts?

Brian avatar
Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


atmos change provider configurations to gain access to my organization’s child accounts
That’s up to the Terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TL;DR providers support variables. So use other modules or variable inputs to supply the values.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our guiding principle is for atmos to have no cloud-provider specific requirements

Brian avatar

But if you put the provider in the configuration and have to back out the component, won’t that inhibit the delete process, since terraform will be looking for the provider to do that delete?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


inhibit that delete process since terraform will be looking for the provider to do that delete?
It’s using state from something outside of the current component (root module), so it does not inhibit the delete.

Brian avatar

Okay awesome. Thanks for the info Erik.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know what you’re referring to, however, and we have encountered that when we made mistakes. But it’s not something we encounter anymore.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

some of our engineers will be more familiar than me on the specific implementations.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
module "always" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  # account_map must always be enabled, even for components that are disabled
  enabled = true

  context = module.this.context
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s one of those examples.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. labels cannot be disabled if we need to successfully destroy

Brian avatar

Okay this makes sense. Thanks Erik. Was banging my head against a wall there for a while.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as Erik mentioned, it’s up to the provider config

provider "aws" {
  region = var.region

  assume_role = var.assume_role
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

assume_role can be provided from other TF components (as in our examples), or from TF vars, or from Atmos stack manifests for diff Orgs/tenants/accounts
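
For example, a hypothetical Atmos stack manifest could set a different role per account, which the provider block above then consumes via var.assume_role (the component name, variable shape, and account ID below are illustrative):

```yaml
# Hypothetical per-account stack manifest: each account's stack supplies its own
# role ARN, and the component's aws provider assumes it via var.assume_role.
components:
  terraform:
    iam-role:
      vars:
        region: us-east-1
        assume_role: "arn:aws:iam::111111111111:role/org-dev-terraform"
```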

Release notes from atmos avatar
Release notes from atmos
12:14:32 AM

v1.61.0

what

• Update readme to be more consistent with atmos.tools
• Fix links
• Add features/benefits
• Add use-cases
• Add glossary

why

Better explain what atmos does and why

Release notes from atmos avatar
Release notes from atmos
01:54:35 AM

v1.62.0 Add atmos docs CLI command @aknysh (#537)

Release v1.62.0 · cloudposse/atmos

Add atmos docs CLI command @aknysh (#537)

what

Add atmos docs CLI command

why

Use this command to open the Atmos docs



Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey all! some notable updates to the atmos docs. First, you can now open the docs from the command line. Just run atmos docs

Here are some notable additions.

• Best Practices for Stacks: https://atmos.tools/core-concepts/stacks/#best-practices
• Best Practices for Components: https://atmos.tools/core-concepts/components/#best-practices
• Added an FAQ: https://atmos.tools/faq
• Challenges that led us to writing atmos: https://atmos.tools/reference/terraform-limitations

1
Release notes from atmos avatar
Release notes from atmos
06:34:35 AM

v1.63.0 Add integrations.github to atmos.yaml @aknysh (#538)



2024-02-14

Adam Markovski avatar
Adam Markovski

You guys are on fire with the Atmos changes

1
Adam Markovski avatar
Adam Markovski

Looks great

Dr.Gao avatar

Hello, I see you have this to support short forms of AWS regions and zones: https://github.com/cloudposse/terraform-aws-utils#introduction. Do you have something similar for GCP?

jose.amengual avatar
jose.amengual

CloudPosse does not have GCP modules

jose.amengual avatar
jose.amengual
Dr.Gao avatar

Thanks!

Dr.Gao avatar

This is very helpful!

1
Dr.Gao avatar

Hello, I need to use multiple modules from the GCP module collection. Can I configure multiple sources in component.yaml? It does not look like it supports that. I should not use vendor.yaml in my case since I am not pulling it for the entire infra, just for that component. I did see that cloudposse solves this issue by adding another module to the main module so it only configures one. E.g. if it needs to use both the efs and kms modules, instead of pulling two modules, it only needs to pull efs, since kms is also defined in efs’ main.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

component.yaml is used to pull all files for a component. If you are pulling two components, you can place them into separate folders and use 2 diff component.yaml files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or use one component.yaml and mixins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with mixins, you can pull anything from multiple sources (but just one by one)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
mixins:
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
      filename: context.tf
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf>
      version: 1.398.0
      filename: introspection.mixin.tf
Dr.Gao avatar

Thanks! I think component.yaml with mixins is the way I am going to try

Dr.Gao avatar

Since it is not pulling multiple components, it is multiple modules in one component

Dr.Gao avatar

This is really helpful, thanks @Andriy Knysh (Cloud Posse)

Dr.Gao avatar

How the mixins is structured when I have multiple modules from multiple components?

Dr.Gao avatar

Do I create multiple mixin config files for each component?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a list of mixins is part of spec at the same level as source

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make copies of 3rd-party components in your own repo.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:
    # Source 'uri' supports the following protocols: OCI (<https://opencontainers.org>), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in <https://github.com/hashicorp/go-getter>
    # In 'uri', Golang templates are supported  <https://pkg.go.dev/text/template>
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    # To vendor a module from a Git repo, use the following format: 'github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 1.398.0

    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    # <https://en.wikipedia.org/wiki/Glob_(programming)>
    # <https://github.com/bmatcuk/doublestar#patterns>
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"

    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    excluded_paths:
      - "**/context.tf"

  # Mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # All mixins are processed in the order they are declared in the list.
  mixins:
    # <https://github.com/hashicorp/go-getter/issues/98>
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
      filename: context.tf
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf>
      version: 1.398.0
      filename: introspection.mixin.tf
Dr.Gao avatar

Awesome!

Dr.Gao avatar

Is there other way to solve this issue?

2024-02-15

Dr.Gao avatar

Hello, for the label order described here https://github.com/cloudposse/terraform-null-label I see we can configure the label order as we would like. Does it work if the label order is {namespace}-{tenant}-{environment}-{stage} and the folder structure in stacks follows a different order, namespace/stage/tenant/environment? I think it works, but would like to double-check with the experts here. If it does work, is there any disadvantage of doing that?

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

short answer: yes, it will work


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

long answer:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the label order in the context (from null-label which is used in all components) is to uniquely and consistently name the cloud (AWS) resources, so your resource names/IDs will look like {namespace}-{tenant}-{environment}-{stage}-{name} (or in whatever order you want)
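
Conceptually, null-label joins the non-empty context labels in the configured order with a delimiter. A rough Python sketch of that behavior (illustrative; the real module is HCL with more normalization rules, and the context values below are made up):

```python
# Sketch of how terraform-null-label assembles a resource ID from label_order.
def build_id(context: dict, label_order: list[str], delimiter: str = "-") -> str:
    # Keep only labels that are set, in the configured order
    parts = [context[label] for label in label_order if context.get(label)]
    return delimiter.join(parts)

ctx = {"namespace": "acme", "tenant": "plat", "environment": "ue2",
       "stage": "prod", "name": "vpc"}
print(build_id(ctx, ["namespace", "tenant", "environment", "stage", "name"]))
# acme-plat-ue2-prod-vpc
```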

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, the Atmos manifests folder structure is for humans to organize the Atmos config and make it DRY. Atmos does not care about the stacks folder structure, it’s for people to config, organize and manage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what Atmos cares about is the context variables defined in the stack manifests - that’s how Atmos finds the stack and component in the stack when you execute commands like atmos terraform plan <component> -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Final Notes | atmos

Atmos provides unlimited flexibility in defining and configuring stacks and components in the stacks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos stack manifests can have arbitrary names and can be located in any sub-folder in the stacks directory. Atmos stack filesystem layout is for people to better organize the stacks and make the configurations DRY. Atmos (the CLI) does not care about the filesystem layout, all it cares about is how to find the stacks and the components in the stacks by using the context variables namespace, tenant, environment and stage
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as described in https://atmos.tools/design-patterns/, you can organize the stacks (Atmos manifests) in many different ways depending on your Organization/tenants/regions/accounts structure

Atmos Design Patterns | atmos

Atmos Design Patterns. Elements of Reusable Infrastructure Configuration

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Provision | atmos

Having configured the Terraform components, the Atmos components catalog, all the mixins and defaults, and the Atmos top-level stacks, we can now

Dr.Gao avatar

Thanks!

Dr.Gao avatar

I love your design !

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all of that was done to be able to separately and independently configure 3 diff things: 1) cloud resource names; 2) Atmos manifests folder structure (for people); 3) Atmos stack names (e.g. plat-ue2-prod)
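
To make (3) concrete: with a name_pattern of {tenant}-{environment}-{stage} (hypothetical fragment below), the context variables alone determine the stack name, independent of the file layout:

```yaml
# Hypothetical atmos.yaml fragment
stacks:
  name_pattern: "{tenant}-{environment}-{stage}"

# A manifest anywhere under stacks/ with these context vars
# resolves to the stack name "plat-ue2-prod"
vars:
  tenant: plat
  environment: ue2
  stage: prod
```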

Hans D avatar

Interesting to follow https://github.com/opentofu/opentofu/issues/685#issuecomment-1945123152: use of the tf lockfiles …

Comment on #685 Provide a way to disable provider dependency lock file

I was going to say “supply chain attacks”, but @Yantrio got there first. :)

The other solution is to just break down and start including the lock file and try to educate my users about another hoop to jump through for almost 0 benefit.

IMO, this is a pretty fundamental cybersecurity concept. It helps ensure that what you got last time is the same as you got this time. It helps make sure that nobody took over the upstream repo and messed with it. It also helps ensure that the configuration you ship to dev is the same you ship to prod (e.g., a 12-factor app).

The problem I usually have is the inverse — too many platforms to support, and not enough people updating the lock file beyond their own personal os/arch.

2024-02-16

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
03:35:32 PM

set the channel topic:

2024-02-18

2024-02-19

2024-02-22

Peter Dinsmore avatar
Peter Dinsmore

Hello, I am currently looking for tooling to optimize our Terraform environments. We previously used Terragrunt in our organization, but it has become too cumbersome over time and feels kind of “previous generation” tooling. We are in the process of setting up a PoC with Terramate, and it’s super neat. Especially the orchestration is powerful. We also looked at Spacelift, but I don’t see any reason to migrate to another CI/CD when we can get to a similar UX in GitHub Actions.

However, I just came across Atmos on Reddit and would love to understand how it compares to Terramate and Terragrunt.

MB avatar

You make a fair point, but from what I’ve seen, building your own IaC automation with GitHub Actions can get messy. First of all, lack of standardization (which creates silos and bottlenecks across projects), too much reliance on individual knowledge (so when someone leaves, there are always big gaps), and as things get more complex, it demands more and more resources, limiting what you can build on top of it. Not to mention that maintaining homegrown solutions in-house is a ton of work

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have Atmos handling different Terraform environments for many companies, starting with a simple case with one Org and just a few accounts, to multi-Org, multi-tenant, multi-OU, multi-account with hundreds of components deployed to thousands of stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we also have Atmos working with Spacelift (handling tens of thousands of resources and thousands of stacks in some cases across multiple Orgs/OUs/regions/accounts), and with GitHub Actions (we can give you a demo on how to use Atmos with GHA) (discussion on GHA vs Spacelift vs other CI/CD tools is a completely diff topic, all of them have their own pros and cons, including cost, usability, user experience, access control, audit, messiness :) , etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here are some docs describing the core concepts of Atmos:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need more explanation or help (it’s difficult to answer/describe everything in one go)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a real example of Spacelift stacks using Atmos:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are 1274 Spacelift stacks configured across many Orgs, each having many OUs with multiple accounts, deployed into many diff regions

(Definition: a Spacelift stack is an Atmos component provisioned into an Atmos stack, for example a vpc component can be provisioned multiple times into different Org/OU/account/region stacks, keeping the entire config reusable and DRY using the concepts like imports and inheritance:

https://atmos.tools/core-concepts/stacks/imports

https://atmos.tools/core-concepts/components/inheritance

https://atmos.tools/core-concepts/components/component-oriented-programming

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Component-Oriented Programming | atmos

Component-Oriented Programming is a reuse-based approach to defining,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(also take a look at why the tool is called Atmos https://atmos.tools/faq)

Atmos FAQ | atmos

Why is the tool called Atmos?

Peter Dinsmore avatar
Peter Dinsmore

Thanks for sharing! I took some time to read through the resources and decided to stick to Terramate. In the end, I don’t see any significant benefits why I should be using Atmos over Terramate.

As said, we don’t want to use Spacelift because we don’t need any of the features Spacelift offers.

Here’s a bit of feedback:

• Atmos feels really cumbersome to get started with

• The orchestration and order of execution in Terramate helps us better to manage environments and split those up into stacks

• Using yaml for configuration sounds like a step back for us. We like the ability to manage all IaC related config with HCL, otherwise we wouldn’t use Terraform in the first place

• We need code generation and like the approach quite a bit. We also don’t see any issues with tests anyways, thanks for helping us out here!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, no problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding “Using yaml for configuration sounds like a step back for us”

Note that in Atmos, YAML is used for configuration of stacks for diff environments. You still use plain Terraform for the components (which can be used with Atmos or without). There is a clear separation of concerns: code (terraform root modules) and configuration for diff environments (Atmos manifests).

Also, YAML is everywhere now (kubernetes, kustomize, helm, helmfile, etc.); it looks like it’s the modern way to define configurations (regardless of whether you like or hate YAML)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the end, there is no one perfect tool for everything. Terramate has its advantages (the code generation, change detection and testing are cool, also using plain HCL), as well as Atmos (these are different approaches to solve similar problems in diff ways)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We need code generation and like the approach quite a bit. @Peter Dinsmore Can you help me understand how you leverage code generation?

jose.amengual avatar
jose.amengual
Using yaml for configuration sounds like a step back for us. We like the ability to manage all IaC related config with HCL, otherwise we wouldn't use Terraform in the first place

Kubernetes uses yaml and is moving forward….

Peter Dinsmore avatar
Peter Dinsmore

@jose.amengual for Kubernetes yaml makes sense too. For Terraform, I don’t see why I should use a different configuration language if HCL is exactly built for that? E.g. in Terramate, I can describe the purpose (metadata) and orchestration behavior of a stack as HCL in a stack configuration stack.tm.hcl.

Also, the code generation is configured with HCL, which allows the use of Terraform functions inside the code generation. I don’t think YAML would be suitable for that.

Peter Dinsmore avatar
Peter Dinsmore

@Erik Osterman (Cloud Posse) all sorts of things, to mention a few:

• We generate native backend and provider configuration in stacks interpolating stack metadata. It’s super powerful.

• We use Terramate modules for generating templates based on e.g. provider version used (we render different attributes in resource configuration based on if we use the stable or beta google provider)

• We generate Kubernetes manifests using Terraform outputs

Peter Dinsmore avatar
Peter Dinsmore

Anyways, there’s a ton of different tools in the market. Just because we favor a different approach doesn’t mean that Atmos isn’t a great tool! Any contribution to the ecosystem is appreciated.

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Peter Dinsmore I appreciate that you shared all this. It’s easy to be surrounded by a lot of people who sing praises. The opportunity for improvement lies with constructive criticism.

1
Soren Martius avatar
Soren Martius

@Erik Osterman (Cloud Posse) we should catch up at some point

pv avatar

How do you apply all stacks in a pipeline? If I want my pipeline to run atmos terraform apply, can I do an all flag instead of listing the stack and component? This is with GitHub Actions

Brian avatar

Are you looking for workflows?

https://atmos.tools/core-concepts/workflows/

Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

Brian avatar

I am unaware of an atmos flag that just applies or plans all components defined for a stack. However, atmos workflows allow you to synchronously run applies and plans for one or more components.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv please help us to understand what you mean by “apply all stacks in a pipeline”. You probably should not plan/apply all stacks at once (there could be thousands of them), but check what components/stacks have changed and plan/apply only those

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Affected Stacks | atmos

Streamline Your Change Management Process

pv avatar

Yes, the workflow is the second-best option: just adding each plan/apply command there. But yes, “apply all stacks in a mono repo” is better phrased

pv avatar

Also what is the downside to running them all if there is only a change to one stack? Wouldn’t all the terraform show as no changes other than the new stack that you added?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the downside is it will take time and consume resources

pv avatar

Does it consume more resources than running regular terraform? Or is it running more in the background that consumes more resources?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I mean if you have hundreds or thousands of stacks, why trigger all of them if only one or a few are changed and should be planned/applied. Triggering all of them will take a lot of time and probably cost money

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(Atmos calls regular terraform for each plan/apply, there is no difference here, it does not do anything in the background)

pv avatar

I do not have hundreds of thousands of stacks. But none of this really answers my initial question so I’m assuming that means there is no way to run all stacks unless I create a workflow that includes an apply of each individual stack?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no option in Atmos to trigger everything (e.g. atmos terraform apply -all) (was not implemented, at least yet, for the reason that people usually have hundreds or thousands of stacks, and triggering all of them would be a waste of resources). Help us understand what exactly you want to do in the pipeline, and we would be able to offer a solution

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos workflow is one of the solutions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if you have a small static set of stacks and you know all of them, you can just execute atmos terraform apply ... sequentially in a script (or in a workflow)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the point is, it’s usually not feasible/practical to list all stacks in a workflow or in a script since there could be too many of them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

another solution:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos list stacks
   for each stack do: atmos list components -s <stack>
      for each component do: atmos terraform apply <component> -s <stack>
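Spelled out as a shell sketch (hedged: this assumes the `atmos list stacks` and `atmos list components` custom commands from the docs below are already installed in your atmos.yaml; the `apply_all` function name is ours, not an Atmos feature):

```shell
#!/usr/bin/env bash
# Sketch of a dynamic "apply everything" loop. Assumes the custom commands
# `atmos list stacks` and `atmos list components` (from the Atmos docs)
# have been added to atmos.yaml -- this is not a built-in Atmos feature.
set -eu

apply_all() {
  local stack component
  for stack in $(atmos list stacks); do
    for component in $(atmos list components -s "$stack"); do
      echo "Applying ${component} in stack ${stack}"
      atmos terraform apply "$component" -s "$stack" --auto-approve
    done
  done
}
```

Call `apply_all` from your pipeline script; drop `--auto-approve` if you want to confirm each plan interactively.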
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can find the atmos list stacks and atmos list components commands here, and add them to your atmos.yaml:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this solution is dynamic (you don’t have to know and hardcode all stacks and components in advance)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@pv I think maybe what you want is to apply only the affected stacks in the Pull Request?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(that solution is mentioned above)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv let us know if any of the above would help you to implement what you want (and let us know if you need help)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


But none of this really answers my initial question so I’m assuming that means there is no way to run all stacks unless I create a workflow that includes an apply of each individual stack?
Yes, the gist of it is we have not implemented plan-all and apply-all workflows because they have multiple problems.
• Plan-all never works if your root modules have any interdependencies. At best it gives you a false sense of what will happen. At worst, it just errors.
• Apply-all is practical for cold starts, but should never be used after that. And since it’s for a cold start, there are often other considerations. Therefore, atmos workflows have been how we address it.
• From a CI/CD perspective, neither plan-all nor apply-all should ever be used.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can go into more detail on any one of these. For example, there are alternative considerations for how to address apply-all in a safe, automated way in a CI/CD context, but that’s something solved in CI/CD and not in atmos.

pv avatar

@Andriy Knysh (Cloud Posse) I think what you sent me is probably the best fit for what I am thinking of. What I preferred about straight-up terraform in the past was just having my pipeline plan and apply, using separate repos for Landing Zones and Products infra. This requires adding extra steps and extra potential break points to the workflow, but I think this more dynamic approach may help with that.

2
pv avatar

@Erik Osterman (Cloud Posse) I get those concerns with how Atmos is set up, but with traditional TF, when you run plan and apply, it looks over just the changes in your terraform path in the repo and only applies what is reflected as new, changed, or deleted compared to the state. Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform? Also, with the first point, isn’t that the reason “depends_on” exists? Sure, you don’t see all the values until apply, but it’s faster than running one dependency and then the next one until all dependencies are accounted for

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


but with traditional TF, when you run plan and apply, it looks over just the changes in your terraform path in the repo and only applies what is reflected as new, changed, or deleted compared to the state.
We get that with our GitHub Actions.

  1. Describe Affected
  2. Run terraform plan on each affected stack
  3. Apply each change with GitHub Actions.

To be clear, this has the outcome you want. It’s just not implemented as a “plan-all” or “apply-all”. It’s implemented using GitHub Action matrices.
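As an illustrative sketch only (this is not Cloud Posse’s published actions; the job layout and the assumption that atmos and Terraform are already on the runner are ours), the matrix pattern looks roughly like this — `atmos describe affected` emits JSON objects with `component` and `stack` fields, which feed a matrix job:

```yaml
# Hypothetical workflow sketch -- not the official Cloud Posse GitHub Actions.
name: atmos-plan-affected
on: pull_request

jobs:
  affected:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.affected.outputs.matrix }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # describe affected compares two Git refs
      - id: affected
        run: echo "matrix=$(atmos describe affected --format json)" >> "$GITHUB_OUTPUT"

  plan:
    needs: affected
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include: ${{ fromJson(needs.affected.outputs.matrix) }}
    steps:
      - uses: actions/checkout@v4
      - run: atmos terraform plan "${{ matrix.component }}" -s "${{ matrix.stack }}"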
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform
@mike186 has some interesting numbers he likes to share about the minutes of terraform they run per month. In their case, it’s a combination of Atmos+Spacelift, but the same would be true of Atmos+GitHub Actions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

as far as I’m concerned, atmos is natively supported on spacelift while you need to use custom settings to use gruntworks :)

According to spacelift we have ~1200 stacks (100% passing) and used more than 1.7 million minutes of atmos worker time, including over 99k minutes of tracked runs, over the last month. These are pretty typical monthly stats for us. Given that we run spacelift exclusively with atmos, every single stack, I very much feel like atmos is so completely and transparently compatible with Spacelift that Spacelift doesn’t need a setting for atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Also with the first point, isn’t that reason “depends_on” exists?
So atmos is designed to work with any number of systems, including spacelift. Not every underlying system can implement all the capabilities of the configuration. In this case depends_on is currently utilized by our Spacelift implementation. We plan on adding GHA support for this soon, but cannot commit to when.
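For reference, the `depends_on` configuration itself lives in the stack manifest. A minimal sketch (the component and file names here are hypothetical, following the `top-level-component3` example from the docs quoted earlier):

```yaml
components:
  terraform:
    top-level-component3:
      settings:
        depends_on:
          1:
            # this component depends on an external config file
            file: "tests/fixtures/external-file.txt"
          2:
            # and on another Atmos component in the same stack
            component: "vpc"
```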

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sure you don’t see all the values until apply but its faster than running one dependency and then the next one until all dependencies are accounted for

Agree, so what we really want is to trigger downstream dependencies when upstream components are affected. This is supported today with Spacelift and Atmos.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding “Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform?”: to be clear, Atmos is configuration on top of terraform root modules. In the end, Atmos generates the varfiles/backend config and executes plain Terraform (so you’d have the same number of runs using just plain terraform in Spacelift or GHA). But this is not about Atmos vs. plain terraform; it’s about architecting your Terraform modules and splitting them into smaller parts to reduce complexity, plan/apply time, and blast radius. Once you do that correctly in terraform, you can configure the root modules with Atmos to be deployed into various environments (OUs, regions, accounts). And once you correctly design your Terraform root modules, you will save time and money when planning/applying them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv depending on what you decide to do, if you still want atmos terraform apply-all (and atmos terraform plan-all), I can implement those as Atmos custom commands and put them in the docs (so you could just copy them into your atmos.yaml). But as mentioned, the best way forward is to use the GHA to execute atmos describe affected to trigger just the affected stacks, then atmos describe dependents to trigger all the dependencies.

pv avatar

@Andriy Knysh (Cloud Posse) No need to create a custom command if it is against practice. I will start with the documentation you sent me and should be good to work with that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think what I prefer about straight up terraform in the past is just having my pipeline plan and apply

Aha, I think I misunderstood what you meant initially. Do you mean having larger root modules that have, for example, the entire landing zone defined?

pv avatar

“Agree, so what we really want is to trigger downstream dependencies when upstream components are affected. This is supported today with Spacelift and Atmos”

@Erik Osterman (Cloud Posse) but how is that more cost effective if you have to pay to use spacelift?

And yes larger root modules that are dynamic in nature. I have a Landing Zone that I like to maintain and modify based on the needs of the company. But I know atmos is intended to be more modular so I am pivoting from what I would normally do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s a larger calculus that will depend on what your needs are. Spacelift is clearly an enterprise solution for enterprise challenges. That’s why we created the GitHub Actions which are entirely self-hosted and have no SaaS option or tie-in.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The larger root modules are definitely convenient from a developer perspective. Terraform handles the DAG. The problem is they do not scale as the complexity grows. If that’s not a concern, then great: you can use them. It’s our experience, working in enterprise contexts, that these root modules grow and grow. The time to plan takes longer and longer, and they become more susceptible to transient errors. Additionally, the more that goes into a root module, the less reusable it is across organizations. Since Cloud Posse is primarily concerned with how to make infrastructure reusable, atmos is optimized for this use case.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The larger the root module, the harder it is to separate concerns, the harder it is to restrict what can change and when.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Every change risks changing everything every time it’s planned/applied, which means a huge blast radius.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the solution is to break it into smaller root modules, by lifecycle. But then the problem is as you say, the complexity is offloaded to the tooling that calls terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The first recommendation is to reduce the coupling between the layers, when possible, reducing how often those dependencies are triggered. Then implement CD to roll out the changes. So terraform is responsible for provisioning foundations, and CD is responsible for how to orchestrate those changes.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@pv also want to invite you to our weekly office hours. https://cloudposse.com/office-hours (we’ve run them for ~4-5 years and never missed a week)

pv avatar

Thanks @Erik Osterman (Cloud Posse) and @Andriy Knysh (Cloud Posse). I’ll review the docs you sent and look into the office hours. Appreciate all the attentive support

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anytime! We spend a lot of time thinking about these things, and really need to write up more documentation on “Well-Architected Terraform (according to Cloud Posse)” to make it easier to understand why we do the things we do. Especially when they go against a lot of norms that we no longer subscribe to.

Andrew Ochsner avatar
Andrew Ochsner

hey sorry to revive an old thread a little bit but what about atmos terraform validate --all Just thinkin out loud… i’ll probably go the custom command path or something

Andrew Ochsner avatar
Andrew Ochsner

you know what… nevermind…bad idea… custom commands will work in the pinch i’ve put myself in…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do have plans to add support for commands that can be applied to a graph. But no eta yet.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But for now, a custom command will get you something close to that fastest

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Hans D is this what you were also attempting? (per our other DM)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


atmos terraform validate --all

Shiv avatar

How does one wrap a custom Go binary around the atmos CLI? I have a Go binary to validate VPC connectivity when attaching a VPC to a transit gateway, and I would like to run it as part of terraform execution. Has anyone tried this use case?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


custom go binary … validate vpc connectivity when attaching vpc to transit gateway
It sounds like the implementation should be flipped around. Terraform should be using your Go code as a provider. Since it’s already in Go, that lift shouldn’t be too bad.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, something like what you want to do should be possible using atmos custom commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can create a custom command that first calls your command, then calls terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Custom commands have access to the config and accept standard command line parameters.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
commands:
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision vpc-transit-gateway
        description: This command provisions the transit gateway after first performing a connectivity check.
        
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "transit-gateway"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - bin/vpc-connectivity-check
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then you could simply run:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos terraform provision vpc-transit-gateway --stack prod
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Many tools instead implement things like before/after hooks. We may implement it at some point, but it’s never been something we’ve required at Cloud Posse, and we do a lot of Terraform. Granted, for these types of situations, our route is to go with a Terraform provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So that’s why we’ve written our utils and awsutils providers, any time we need an escape hatch. This is in line with how things should work in Terraform. You could say it’s our opinionated Best Practice.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As an aside, this sounds like a cool data source if you created one!

validate vpc connectivity when attaching vpc to transit gateway

Shiv avatar

This is great, thanks for validating this. I created it as a standalone Go binary that leverages AWS APIs to test the VPC connectivity. I will need to figure out how to make it a TF provider; I am fairly new to the world of Terraform, so I would have to look into it. Not sure if it’s too much of an ask, but have you come across any examples of turning a Go binary into a provider? I am guessing I need to leverage the TF SDK?

Shiv avatar

Are there any practical non-Atmos use cases where people take advantage of the custom command feature? Or is it specifically for atmos commands?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Well, I can!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We faced this same issue.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-provider-awsutils

Terraform provider to help with various AWS automation tasks (mostly all that stuff we cannot accomplish with the official AWS terraform provider)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We took the HashiCorp AWS provider, stripped everything out, and added in what we needed that was custom.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And, since you mention you’re new to Terraform, there’s also the local-exec provisioner (used with a null_resource), which is the escape hatch when you don’t have time to write a provider.
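A minimal sketch of that escape hatch, assuming a hypothetical `bin/vpc-connectivity-check` binary and a made-up `transit_gateway_attachment_id` variable (note that `local-exec` is a provisioner attached to a resource, not a standalone provider):

```hcl
# Hypothetical sketch: shell out to a local binary from Terraform using the
# local-exec provisioner on a null_resource (requires the hashicorp/null provider).
resource "null_resource" "vpc_connectivity_check" {
  # re-run the check whenever the transit gateway attachment changes
  triggers = {
    attachment_id = var.transit_gateway_attachment_id
  }

  provisioner "local-exec" {
    command = "bin/vpc-connectivity-check"
  }
}
```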

Shiv avatar


We took the HashiCorp AWS provider, stripped everything out, and added in what we needed that was custom.
This is exactly what I was thinking. Great .

1
Shiv avatar

I will try it out, thanks much for your help

1
Release notes from atmos avatar
Release notes from atmos
04:24:30 AM

v1.64.0 Create Landing page @osterman (#540)

Release v1.64.0 · cloudposse/atmos

Create Landing page @osterman (#540)

what

• Create a custom landing page
• Update other docs

why

Explain what Atmos does in a few easy steps


2024-02-23

2024-02-24

Release notes from atmos avatar
Release notes from atmos
02:34:32 PM

v1.64.1 Enhancements

Fix responsiveness of css @osterman (#543)

Release v1.64.1 · cloudposse/atmos

Enhancements

Fix responsiveness of css @osterman (#543)

what

Use % instead of vw, since the outer container is capped at a certain px size. Set a minimum height of the header in px, otherwis…


Fix responsiveness of css by osterman · Pull Request #543 · cloudposse/atmos

what

Use % instead of vw, since the outer container is capped at a certain px size. Set a minimum height of the header in px, otherwise if based on vh it can be impossibly small and gets eaten by…

RB avatar

Hi all. Just wondering, should website or github action changes trigger a release? I always figured these changes would get a no-release label since there aren’t any changes to atmos cli

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we deploy website on release

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe we can deploy it on merging to main

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, I think we should change that.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s deploy website always on merge to main

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Without a release.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I updated it in my current PR to do that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks (I did the same in your PR :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#544 Simplify install page

what

• Use tabs for each installation method
• Add new “installer” script installation method

why

• Make it easier to get started

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

btw, @RB can you share the command I should add to the install page for nixos?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or maybe @Jeremy White (Cloud Posse) if you’re around

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

found it.

nix-env -iA nixpkgs.atmos
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) good for re

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

approved

RB avatar

Thanks for the quick turnaround ! This should make the atmos cli releases easier to follow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, sorry about that!

np1

2024-02-25

2024-02-26

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-provider-utils
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matt Gowie @kevcube

cloudposse/terraform-provider-utils
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can we do the same for awsutils

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, @Erik Osterman (Cloud Posse) please put the PGP keys in 1pass

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It should be the same

RB avatar

I’m having some trouble using the upstream providers.tf to assume the -admin suffixed role instead of the -terraform suffixed role when running atmos commands locally.

RB avatar

When running locally, I imagine it should be using the human -<role> suffixed iam roles in delegated accounts

RB avatar

But for some reason, it’s thinking that my laptop is a terraform/spacelift/cicd user and trying to assume terraform instead. Is this expected? Should human users assume the -terraform role, or should they assume the same -<role> that they originally assumed?

i.e. if the primary role is identity-admin, then the role for ue1-dev should be the gbl-identity-dev-admin role instead of the gbl-identity-dev-terraform role, right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jeremy G (Cloud Posse) you implemented terraform dynamic roles for components, can you explain how to use it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB without using the dynamic roles that Jeremy implemented, yes it’s expected that all components assume the terraform role

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regardless of the initial role

RB avatar

Note that I deployed aws-teams and aws-team-roles using the readme yaml. I changed terraform to cicd in case I used tofu instead of terraform.

https://github.com/cloudposse/terraform-aws-components/tree/main/modules/aws-teams

https://github.com/cloudposse/terraform-aws-components/tree/main/modules/aws-team-roles

RB avatar


without using the dynamic roles that Jeremy implemented, yes it’s expected that all components assume the terraform role
regardless of the initial role
Oh interesting!

RB avatar

But then wouldn’t all AWS SSO roles have the same IAM permissions if they were all able to assume the -terraform roles in all child accounts?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you look at all components, the provider will use the terraform role by default https://github.com/cloudposse/terraform-aws-components/blob/main/modules/alb/providers.tf

provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all of this is configured in account-map/modules/iam-roles

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Jeremy will explain how to use/switch to the dynamic roles (meaning that instead of the TF role, account-map/modules/iam-roles should return the role you have already assumed when executing TF commands)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "terraform_dynamic_role_enabled" {
1
RB avatar

Thanks Andriy. That’s very helpful.

A separate issue I noticed is that I wanted to use cicd for the aws-team-roles and it looks like I have to use terraform because its currently hard coded in account-map

https://github.com/cloudposse/terraform-aws-components/blob/f32372b6a55797bdead5676bccf75ffc448e2687/modules/account-map/main.tf#L65-L90

  terraform_roles = {
    for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
      (local.legacy_terraform_uses_admin &&
        contains([
          var.root_account_account_name,
          var.identity_account_account_name
        ], name)
    ) ? "admin" : "terraform")
  }

  # legacy support for `aws` config profiles
  terraform_profiles = {
    for name, info in local.account_info_map : name => format(var.profile_template, compact(
      [
        module.this.namespace,
        lookup(info, "tenant", ""),
        module.this.environment,
        info.stage,
        ((local.legacy_terraform_uses_admin &&
          contains([
            var.root_account_account_name,
            var.identity_account_account_name
          ], name)
        ) ? "admin" : "terraform"),
      ]
    )...)
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The current version of account-map optionally supports having a “terraform” role and a “planner” role in each account. The former allows making all changes in the target account, while the latter is meant to provide read-only access. Roles which are allowed to assume the “terraform” role do so. Roles which cannot assume the “terraform” role but can assume the “plan” role assume the “plan” role. All other roles attempt to run Terraform using their existing role, without assuming a new role. You must have this feature enabled; it is disabled by default. Details are documented here.

The role names used to allow plan or apply access are configured in terraform_role_name_map and default to “planner” and “terraform”, but can be set to whatever you want.

A key principle remains that there is a uniform set of roles in all the accounts other than root and identity, and access is controlled by allowing (or not) access to those roles in those accounts. So if you want your cicd role to be able to run terraform apply, it should be allowed to assume the terraform (or “apply”) role in that account.

Note that this feature also requires support from tfstate-backend, which should be documented in the same document linked above regarding dynamic terraform roles.
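A hedged configuration sketch (variable names per our reading of the account-map component’s inputs; verify them against the version you have vendored before relying on this):

```yaml
components:
  terraform:
    account-map:
      vars:
        terraform_dynamic_role_enabled: true
        # map of access level -> role name; defaults are "planner" and "terraform"
        terraform_role_name_map:
          plan: planner
          apply: terraform
```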

variable "terraform_role_name_map" {
1
RB avatar

Thanks Jeremy for taking the time to write your response.

I cannot access the cloudposse docs link above. Is it behind a paywall? It says my account needs to be approved

i do see the reference to the role map. I didn’t realize there was an apply and plan role.

It does seem like the terraform role, in spite of the role map, is hard coded in account-map, but that’s an easy fix. For now, we’ll create the cicd role as an aws-team in identity and use the terraform role as aws-team-roles per account.

Andriy also mentioned the terraform_dynamic_role_enabled toggle that we can also optionally flip if we want to go that route. I think i like the idea that only admin can use the apply role and everyone else has to use the planner role provided there are few admins and everyone else goes through the cicd.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@RB wrote:
It does seem like the terraform role, in spite of the role map, is hard coded in account-map, but that’s an easy fix. For now, we’ll create the cicd role as an aws-team in identity and use the terraform role as aws-team-roles per account.
I’m not sure what you are talking about. The concept of a role used by default for people running Terraform is hard coded into our components, and referred to as the Terraform role, but the name of that role is configurable, as I pointed out previously.

RB avatar

Hi Jeremy. Sorry, I was referring to this code. I cannot change this string easily. Perhaps it needs to be exposed as an input? or maybe I’m using it incorrectly?

https://github.com/cloudposse/terraform-aws-components/blob/f32372b6a55797bdead5676bccf75ffc448e2687/modules/account-map/main.tf#L65-L90

  terraform_roles = {
    for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
      (local.legacy_terraform_uses_admin &&
        contains([
          var.root_account_account_name,
          var.identity_account_account_name
        ], name)
    ) ? "admin" : "terraform")
  }

  # legacy support for `aws` config profiles
  terraform_profiles = {
    for name, info in local.account_info_map : name => format(var.profile_template, compact(
      [
        module.this.namespace,
        lookup(info, "tenant", ""),
        module.this.environment,
        info.stage,
        ((local.legacy_terraform_uses_admin &&
          contains([
            var.root_account_account_name,
            var.identity_account_account_name
          ], name)
        ) ? "admin" : "terraform"),
      ]
    )...)
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

That code supports the legacy terraform_roles output for people not using dynamic Terraform roles. The newer, preferred, Dynamic Terraform Roles ignore that output, and use terraform_role_name_map and terraform_access_map instead.

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The existing providers.tf is not only an example of how to retrieve the role to assume, it is sufficient for nearly all our components to use without modification.

provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

1
RB avatar

Thank you very much!

2024-02-27

RB avatar

Regarding the tfstate-bucket component. Is there an alternative suggested account to deploy this bucket in? I don’t want to deploy it in root as i don’t want anyone to access root.

RB avatar

Could the corp (shared-services) account be a suitable alternative? What do you folks think?

RB avatar

(This is for a brownfield project so some of the accounts in this case already exist prior to starting)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re looking to change our strategy on this in v2 refarch

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Plan is to deploy one in each account, in a hierarchical fashion off of root.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Now, in your case, you don’t have access to root, you’d need to designate some other account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t personally know the scope of this change. Others on the team might.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could be corp or auto or artifacts maybe using some of our terms.

Hans D avatar

will impact some of the bootstrapping, as you need the new account to be available before you can move over to s3/dynamodb. But doing some split of privileged / less privileged sounds sane

Hans D avatar

personally I would not mix it with one of the “core” accounts, but use a dedicated one if you want to have central state.

Imran Hussain avatar
Imran Hussain

Hi, I have a quick question around templating: can it be used anywhere, or just in certain places? I tried using it like workspace_key_prefix: "infra-{{ .tenant }}-{{ .namespace }}-{{ .environment }}-{{ .stage }}-init" but it does not render the values and keeps them as-is when I do atmos describe stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Go templates are used just in imports https://atmos.tools/core-concepts/stacks/imports

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not supported everywhere in Atmos manifests, for the reason that Go templates need context variables, and in general it’s not possible to provide the context variables in advance because it could be a circular dependency: to get all the variables, we need to import everything and process it, so we don’t know all the vars yet

Imran Hussain avatar
Imran Hussain

Ok, so I can use templating or variable substitution in the vars: section and the import section in the stack file, but not in other places

Imran Hussain avatar
Imran Hussain

vars:
  environment: {{ .environment }}
  stage: "{{ .stage }}"

Imran Hussain avatar
Imran Hussain

and in the stack file

Imran Hussain avatar
Imran Hussain

import:
  #- path: "catalog/dvsa"
  - path: "mixins/project"
    context:
      tenant: "dvsa"
      stage: "{{ .stage }}"
      app_region: "dev"
      environment: "dev01"
      namespace: "mts"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can do it only in imports

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

meaning if you import a file that has workspace_key_prefix: "infra-{{ .tenant }}-{{ .namespace }}-{{ .environment }}-{{ .stage }}-init" and provide a context for it, it will work
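For example (file paths and context values here are illustrative, not Imran’s actual config), a stack that imports a mixin containing that workspace_key_prefix template would look like:

```yaml
# stacks/orgs/dvsa/dev.yaml (hypothetical path)
import:
  # this mixin contains the templated workspace_key_prefix shown above
  - path: "mixins/backend"
    context:
      tenant: "dvsa"
      namespace: "mts"
      environment: "dev01"
      stage: "dev"
# After template processing, the imported mixin renders:
#   workspace_key_prefix: "infra-dvsa-mts-dev01-dev-init"
```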

Imran Hussain avatar
Imran Hussain

Yep I get that. Thanks

Imran Hussain avatar
Imran Hussain

It just how I structure the code

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for the very first import with templates, you have to provide a context with all the vars defined

Imran Hussain avatar
Imran Hussain

to make it work the way I want to

Imran Hussain avatar
Imran Hussain

yep thats what I am doing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you can use https://atmos.tools/core-concepts/stacks/imports/#hierarchical-imports-with-context to propagate the context to all the child imports

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
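A sketch of hierarchical imports with context (file names are hypothetical): the context supplied at the top-level import is propagated to child imports, so the template variables only need to be provided once.

```yaml
# stacks/mixins/project.yaml -- imports further files itself;
# {{ .tenant }} and {{ .stage }} are available here and in its children
import:
  - path: "mixins/region/{{ .stage }}"

# stacks/orgs/dvsa/dev.yaml -- the top-level import provides the full context,
# which flows down through mixins/project to its own imports
import:
  - path: "mixins/project"
    context:
      tenant: "dvsa"
      stage: "dev"
```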

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you have specific questions, you can always DM me your code and I’ll try to help you

Imran Hussain avatar
Imran Hussain

yep, I am doing the same: I create a set of files to import and pass in the context at the top level, and this then gets propagated down since it has imports for the other files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, let’s review your question. Why do you need workspace_key_prefix with templates?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

usually, we have a Terraform workspace that includes namespace, tenant, environment and stage

Imran Hussain avatar
Imran Hussain

There is a naming convention to follow and was just trying to bake that in

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each component has a workspace_key_prefix, which is usually just the component name (added by Atmos automatically)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


There is a naming convention to follow and was just trying to bake that in

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok

Imran Hussain avatar
Imran Hussain

Ok I think I have the gist of it and a way forward thanks for the quick response

Matt Gowie avatar
Matt Gowie

Hey folks,

We have the following atmos config.

stacks:
  base_path: "stacks"
  included_paths:
    - "org/**/*"
  name_pattern: "{stage}"
  excluded_paths:
    - "**/_*.yaml" # Do not consider any file beginning with "_" as a stack file

We made a classic mistake: our application was simple and we didn’t have multi-region on this project, so we went ahead and set name_pattern to just {stage}. We are of course now doing disaster recovery work on this project and adding a 2nd region, and this becomes a problem: Atmos thinks both regions are the same stage, i.e. the VPC in ue1-dev can’t be differentiated from the VPC in uw2-dev.

A simple solution would be to prefix our component instance names in the dev/us-west-2.yaml with uw2-***** , but that feels rough. Are there any other suggestions or ways to migrate around this name pattern issue that we should try out?

Matt Gowie avatar
Matt Gowie

I should clarify: This is both a problem with workspace names AND with spacelift Stacks I believe. That ends up being our blocker here.

Hans D avatar

The Spacelift bits are quite easy to replace (my experience); the more concerning bit is the Terraform state, including the workspace bit (which is referenced in the Spacelift config)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s possible to rename all Atmos components (change names to reflect regions, or prefix/suffix with something) and still keep the same Terraform workspace_key_prefix (so TF will not attempt to destroy them). This means the existing resources will stay under the old workspace_key_prefix, and the new ones will be at a different workspace_key_prefix

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example, let’s say you had

    components:
      terraform:
        vpc:
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

deployed in us-east-2

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

now you want to change the Atmos component name for it and add another Atmos component for the other region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    components:
      terraform:
        vpc/ue2:
          vars: {}
          backend:
            s3:
              # Keep the existing TF workspace key prefix
              workspace_key_prefix: "vpc"
        vpc/uw2:
           vars: {}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matt Gowie not sure if this would help you, let me know

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(spacelift stacks will be recreated, but that should be ok)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and to keep the Spacelift stack names the same, there is another trick

    components:
      terraform:
        vpc/ue2:
          vars: {}
          settings:
            spacelift:
              workspace_enabled: true
              # `stack_name_pattern` overrides Spacelift stack names
              # Supported tokens: {namespace}, {tenant}, {environment}, {stage}, {component}
              stack_name_pattern: "{stage}-vpc"
Matt Gowie avatar
Matt Gowie

Ah both of those help to know about @Andriy Knysh (Cloud Posse) – Thanks! I’ll look into what we would want to do here considering…

Release notes from atmos avatar
Release notes from atmos
05:24:36 AM

v1.64.2 Add Atmos CLI command aliases @aknysh (#547)


Add Atmos CLI command aliases by aknysh · Pull Request #547 · cloudposse/atmosattachment image

what

Add Atmos CLI command aliases Update docs https://pr-547.atmos-docs.ue2.dev.plat.cloudposse.org/cli/configuration/

why An alias lets you create a shortcut name for an existing CLI command. A…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB :point_up: now you can do a tf with aliases


2024-02-28

Hans D avatar

@Andriy Knysh (Cloud Posse) not really a patch version with the added functionality …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, some issues with the auto releaser, it did it for some reason (w/o having a patch label). Prob it was confused by the three commits

Hans D avatar

given the other why-is-this-a-patch that had afaik only a single commit, it might need some further investigation if this keeps popping up

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, we had similar issues in other repos, your help is appreciated

Hans D avatar

and given that patch releases in our env only get picked up by the renovate bot once a week, it made it stand out more for me (given the release announcement, where I missed the renovate PR becoming available)… not a biggy, just triggered me.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s a misunderstanding on how release drafter works. The first PR that merges drafts an unpublished release. That might have been a patch. The next PR that merges updates the draft release. Before publishing, the person who clicks publish should review and make sure it makes sense. For example the version. Unlike our terraform modules, we do not automatically publish the releases for atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is nice because we can bundle 3-4 PRs into one release and get release notes for all of them.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I think it was our mistake on not reviewing the release before publishing.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was my mistake. Before, we always released it manually; now the autoreleaser tries to help by creating draft releases, which in this case should have been discarded (the fact that it created a patch release is still a bug)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Discarded or edited? I am on my phone, so maybe not seeing the obvious.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since it was a patch release, discarded. We need to review this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the autoreleaser was confused by the patch label on the first PR in the release https://github.com/cloudposse/atmos/pull/544

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s not confused though. It’s operating as designed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The key is it’s a draft. It’s not finalized.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That means subsequent PRs “merge” into that release.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But we can change the release numbering before publishing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, should we release 1.65.0 manually (for Atmos aliases)?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a minor hiccup in the grander scheme of things.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It will get resolved in the next release. Let’s just pay attention to the release number before manually clicking publish on the draft release.

2

pv avatar

I have a pipeline where I am running a atmos terraform plan and then apply with one resource and it deploys successfully:

- name: Atmos - Terraform Plan 
  run: 
    atmos terraform plan resource1 -s orgs/dir/fake/sandbox/us-central1/resource1

Then when I add another component from the same stack and attempt a plan, it wants to destroy the previous resource I created because it is “not in the configuration”:

- name: Atmos - Terraform Plan 
  run: 
    atmos terraform plan resource1 -s orgs/dir/fake/sandbox/us-central1/resource1
    atmos terraform plan resource2 -s orgs/dir/fake/sandbox/us-central1/resource2

Why is this happening and how can I resolve that? Resources are for GCP and pipeline is GHA

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It doesn’t look like our GHAs are being leveraged for Atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep in mind, CI/CD with Terraform is non trivial. We’ve spent considerable time developing these actions, so it’s hard to answer what’s wrong with the simple case of just calling Atmos via a run statement.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, I see explicit calling of stacks. We developed a GitHub Action to describe affected stacks in the PR. Using this action, you can use a matrix to plan all affected stacks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


it wants to destroy the previous resource I created because ot os (not in the configuration):
This sounds like a problem with the backend configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

By convention, we usually rely on Terraform workspaces. If the backend isn’t configured to use workspaces, then you would get that effect of overwriting the state.
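For reference, with the S3 backend for example, this workspace separation looks like the layout below (bucket and names here are illustrative): each component’s state lives under its workspace_key_prefix, with one workspace per Atmos stack beneath it.

```
s3://acme-tfstate/                    # state bucket (hypothetical name)
  vpc/                                # workspace_key_prefix (the component)
    ue1-dev/terraform.tfstate         # one Terraform workspace per Atmos stack
    uw2-dev/terraform.tfstate
```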

pv avatar

Ahh nvm, my TF state got screwed up above. It was the way I was running the atmos command: I pointed to the path rather than the stack name as it appears in atmos. I think we are all good

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok! great to hear

pv avatar

Followup: The issue was we had changed from terraform apply to terraform deploy. When switching the pipeline between those commands, you run into that issue. We destroyed the resource and ran them both as a deploy to resolve the issue

2024-02-29

Andrew Ochsner avatar
Andrew Ochsner

Hey just an FYI dropped a little PR to prevent atmos from crashing when using the azurerm backend and not providing a global key https://github.com/cloudposse/atmos/pull/548

#548 include some protection on the azurerm backend when global does not …

…exist

what

When using the azurerm backend, the logic assumes a global key is set and prepends that to the component key. However, when it doesn’t exist it causes atmos to crash. This checks if a global key is set and if not, then don’t prepend anything.

why

• prevent atmos from crashing • don’t require a global key

references

#95

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Andrew Ochsner, will review it (thanks for testing it on Azure, any help on Azure and GCP is greatly appreciated)

#548 include some protection on the azurerm backend when global does not …

…exist

what

When using the azurerm backend, the logic assumes a global key is set and prepends that to the component key. However, when it doesn’t exist it causes atmos to crash. This checks if a global key is set and if not, then don’t prepend anything.

why

• prevent atmos from crashing • don’t require a global key

references

#95

1
Andrew Ochsner avatar
Andrew Ochsner

if there’s a more idiomatic way of doing that in golang, lemme know… not my native programming language

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thank you @Andrew Ochsner for your contribution

Release notes from atmos avatar
Release notes from atmos
03:44:40 PM

v1.64.3 Enhancements

include some protection on the azurerm backend when global does not exist @aochsner (#548)

Release v1.64.3 · cloudposse/atmosattachment image

Enhancements

include some protection on the azurerm backend when global does not exist @aochsner (#548) what When using the azurerm backend, the logic assumes a global key is set and prepends…



Amit avatar

Hi, I am trying to integrate Atlantis with Atmos. I have generated the varfile, pushed it to a GitLab repo, and generated the atlantis.yaml file too, but I am getting an error while running the atlantis plan command. Any help would be appreciated

Error: Failed to read variables file
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual you are familiar with Atlantis (and Atmos), can you please take a look at ^

jose.amengual avatar
jose.amengual

are you pushing the files after you generate them or after the pr is created ?

Amit avatar

I am pushing file after i generate them

jose.amengual avatar
jose.amengual

and you see that push on your PR and the variable file too?

Amit avatar

Yes, I see

Amit avatar

even on the Atlantis server I checked: the repo Atlantis cloned has the varfile, it exists

jose.amengual avatar
jose.amengual

can you post the full command that Atlantis is running?

Amit avatar

Sure

jose.amengual avatar
jose.amengual

and maybe your github action code too

Amit avatar

I am using Gitlab

jose.amengual avatar
jose.amengual

ok, no problem

jose.amengual avatar
jose.amengual

the command and workflow config

Amit avatar

I used the command below to generate the varfile and then pushed that file into my GitLab repo

atmos terraform generate varfiles --file-template={component-path}/varfiles/{namespace}-{environment}-{component}.tfvars.json
Amit avatar

I have generated the atlantis.yaml file, and it is getting parsed too

version: 3
automerge: true
delete_source_branch_on_merge: true
parallel_plan: true
parallel_apply: true
allowed_regexp_prefixes:
  - dev/
  - staging/
  - prod/
projects:
  - name: test-uw1-root-tfstate-backend
    workspace: uw1-root
    workflow: workflow-1
    dir: /components/terraform/tfstate-backend
    terraform_version: v1.2
    delete_source_branch_on_merge: true
    autoplan:
      enabled: true
      when_modified:
        - "**/*.tf"
        - varfiles/$PROJECT_NAME.tfvars.json
    plan_requirements: []
    apply_requirements:
      - approved
workflows:
  workflow-1:
    apply:
      steps:
        - run: terraform apply $PLANFILE
    plan:
      steps:
        - run: terraform init -input=false
        - run: terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
        - run: terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json
Amit avatar

Atlantis command running from gitlab

running "terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json" in "/home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend": exit status 1: running "terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json" in "/home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend": 
Amit avatar

Error: Failed to read variables file

jose.amengual avatar
jose.amengual

the problem, I think, is that $PROJECT_NAME is not matching the varfile name

jose.amengual avatar
jose.amengual

if you search in the atlantis.yaml for one project name, do you have a corresponding varfiles/$PROJECT_NAME.tfvars.json?
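Jose’s suggestion can be automated with a small check (a hypothetical helper, not part of Atmos or Atlantis): for each project name in atlantis.yaml, verify the varfile Atlantis will look for actually exists on disk.

```python
import os

def missing_varfiles(project_names, varfiles_dir):
    """Return Atlantis project names that have no matching
    <name>.tfvars.json file in the varfiles directory."""
    existing = set(os.listdir(varfiles_dir))
    return [name for name in project_names
            if f"{name}.tfvars.json" not in existing]
```

Run against the single project above, a check like this would immediately flag the file-name mismatch that turns out to be the root cause later in this thread.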

Amit avatar

But if you look at the screenshot I shared, it shows the correct tfvars file after the error

Amit avatar

Yes, i have varfiles/test-uw1-root-tfstate-backend.tfvar.json file

jose.amengual avatar
jose.amengual

you looked inside this dir? /home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend

Amit avatar

Yes

jose.amengual avatar
jose.amengual

and the atlantis command you run?

Amit avatar
Atlantis plan
jose.amengual avatar
jose.amengual

ok, that will plan all projects

jose.amengual avatar
jose.amengual

can you try with one specific one?

jose.amengual avatar
jose.amengual

with -p project-name

Amit avatar

I have only one project

jose.amengual avatar
jose.amengual

can you show the generated atlantis.yaml?

Amit avatar

I just pinned it

jose.amengual avatar
jose.amengual

I’m checking and comparing against my setup

jose.amengual avatar
jose.amengual

seems to be the same

Amit avatar

I don’t know why it is not taking the varfile from the varfiles folder; something seems to be off

jose.amengual avatar
jose.amengual

mmmm what is this : uw1-root?

jose.amengual avatar
jose.amengual

ahhh that is your workspace

Amit avatar

It’s my atmos stack name and workspace

jose.amengual avatar
jose.amengual

so on the atlantis server you have /home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend/varfiles/test-uw1-root-tfstate-backend.tfvars.json

jose.amengual avatar
jose.amengual

correct?

Amit avatar

Let me confirm

Amit avatar

Got it

jose.amengual avatar
jose.amengual

you mean the file is there and it has content?

Amit avatar

It was an issue with the varfile I was generating with atmos: the file template doesn’t include stage, so it was generating a wrong file name

Amit avatar

The file is there; the name of that file is just not correct

jose.amengual avatar
jose.amengual

ahhhhhhhh

jose.amengual avatar
jose.amengual

ok, cool

Amit avatar

I was shaking my head for the last 4 hours lol

Amit avatar

Thanks @jose.amengual

jose.amengual avatar
jose.amengual

no problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TL;DR: is something missing from our atmos integration doc?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual thanks for the help

jose.amengual avatar
jose.amengual

no, docs are good, this was a mislabel situation

1
pv avatar

Hi, I am confused by how the yaml is intended to be configured for this component:

https://github.com/slalombuild/terraform-atmos-accelerator/blob/main/components/terraform/gcp/network/README.md#input_cloud_nat

I need to configure source_ip_ranges_to_nat = optional(list(string), ["ALL_IP_RANGES"]) but no matter how I configure it, the tfvars leave that configuration empty. How is this part meant to be written in the yaml? I’ve tried everything

pv avatar

For example when I do,

cloud_nat:
  subnetworks:
    - name: subnetworkname
      secondary_ip_range_names: ["name1", "name2"]

then the tf plan shows secondary_ip_range_names = []

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "cloud_router" {
  source  = "terraform-google-modules/cloud-router/google"
  version = "~> 6.0.0"
  count   = local.enabled && var.cloud_nat != null ? 1 : 0
  name    = module.this.id
  project = var.project_id
  region  = var.region
  network = module.network[0].network_name
}

module "cloud_nat" {
  count      = local.enabled && var.cloud_nat != null ? 1 : 0
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 5.0.0"
  project_id = var.project_id
  region     = var.region

  router = module.cloud_router[0].router.name

  name                               = module.this.id
  nat_ips                            = var.cloud_nat.nat_ips
  subnetworks                        = var.cloud_nat.subnetworks
  source_subnetwork_ip_ranges_to_nat = var.cloud_nat.source_subnetwork_ip_ranges_to_nat

  enable_dynamic_port_allocation      = var.cloud_nat.enable_dynamic_port_allocation
  enable_endpoint_independent_mapping = var.cloud_nat.enable_endpoint_independent_mapping
  icmp_idle_timeout_sec               = var.cloud_nat.icmp_idle_timeout_sec
  log_config_enable                   = var.cloud_nat.log_config_enable
  log_config_filter                   = var.cloud_nat.log_config_filter
  min_ports_per_vm                    = var.cloud_nat.min_ports_per_vm
  udp_idle_timeout_sec                = var.cloud_nat.udp_idle_timeout_sec
  tcp_established_idle_timeout_sec    = var.cloud_nat.tcp_established_idle_timeout_sec
  tcp_transitory_idle_timeout_sec     = var.cloud_nat.tcp_transitory_idle_timeout_sec
  tcp_time_wait_timeout_sec           = var.cloud_nat.tcp_time_wait_timeout_sec
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s defined in the variable, but is not used

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual

pv avatar

That variable is from the public module this component uses https://github.com/terraform-google-modules/terraform-google-cloud-nat

terraform-google-modules/terraform-google-cloud-nat

Creates and configures Cloud NAT

pv avatar

@Andriy Knysh (Cloud Posse) it actually is in variables. It is a part of the “subnetworks” variable

jose.amengual avatar
jose.amengual

but if it is not used in our root module (the SlalomBuild repo) in the instantiation of the Google module, then it will not be respected

jose.amengual avatar
jose.amengual

I can look at this tomorrow.

jose.amengual avatar
jose.amengual

@pv ping me tomorrow

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv your YAML config looks correct. Look at the varfile that Atmos generates to see if the variable is there. If not, something must be wrong with the stacks config

pv avatar

When I describe it, it shows the values but it is still not a part of the plan. Not sure what the issue is with that one

jose.amengual avatar
jose.amengual

we are not using secondary_ip_range_names

jose.amengual avatar
jose.amengual

wait

jose.amengual avatar
jose.amengual

is part of this variable

jose.amengual avatar
jose.amengual

you will need to figure out how to pass the value from YAML so it gets rendered correctly in JSON

jose.amengual avatar
jose.amengual
# enabled        = true
# namespace      = "test"
# environment    = "network"
# stage          = "uw2"
# label_key_case = "lower"
# project_id     = "platlive-nonprod"
# region         = "us-west2"

# routing_mode          = "GLOBAL"
# shared_vpc_host       = false
# service_project_names = []

# subnets = [
#   {
#     subnet_name               = "subnet-1"
#     subnet_ip                 = "10.1.0.0/16"
#     subnet_region             = "us-west2"
#     subnet_private_access     = true
#     subnet_flow_logs          = true
#     subnet_flow_logs_interval = "INTERVAL_5_SEC"
#     subnet_flow_logs_sampling = 0.5
#     subnet_flow_logs_metadata = "INCLUDE_ALL_METADATA"
#   },
#   {
#     subnet_name           = "subnet-2"
#     subnet_ip             = "10.2.0.0/16"
#     subnet_region         = "us-west2"
#     subnet_private_access = false
#     subnet_flow_logs      = false
#   }
# ]

# secondary_ranges = {
#   "subnet-1" = [
#     {
#       ip_cidr_range = "172.16.1.0/24"
#       range_name    = "pods-1"
#     },
#     {
#       ip_cidr_range = "192.168.1.0/24"
#       range_name    = "services-1"
#     }
#   ]

#   "subnet-2" = [{
#     ip_cidr_range = "172.16.2.0/24"
#     range_name    = "pods-2"
#     },
#     {
#       ip_cidr_range = "192.168.2.0/24"
#       range_name    = "services-2"
#     }
#   ]
# }

# routes = [
#   {
#     name              = "egress-internet"
#     destination_range = "0.0.0.0/0"
#     tags              = "egress-inet,internet"
#     next_hop_internet = "true"
#   }
# ]

# firewall_rules = [
#   {
#     name      = "test"
#     direction = "INGRESS"
#     ranges    = ["10.2.0.0/16"]
#     allow = [{
#       protocol = "TCP"
#     }]
#   }
# ]

# cloud_nat = {
#   subnetworks = [
#     {
#       name                    = "subnet-1"
#       source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
#     },
#     {
#       name                    = "subnet-2"
#       source_ip_ranges_to_nat = ["10.2.0.0/16", "172.16.2.0/24", "192.168.2.0/24"]
#     }
#   ]
# }

# peers = []

# private_connections = [
#   {
#     name          = "test-data"
#     prefix_start  = "10.3.0.0"
#     prefix_length = 16
#   }
# ]



jose.amengual avatar
jose.amengual

here is the example

Dr.Gao avatar

hello, I am seeing this issue while using cloudposse/github-action-pre-commit

Run cloudposse/github-action-pre-commit@v3
install pre-commit
/opt/hostedtoolcache/Python/3.10.13/x64/bin/pre-commit run --show-diff-on-failure --color=always --all-files
[INFO] Initializing environment for <https://github.com/antonbabenko/pre-commit-terraform>.
[INFO] Initializing environment for <https://github.com/pre-commit/mirrors-prettier>.
[INFO] Initializing environment for <https://github.com/pre-commit/mirrors-prettier:[email protected]>.
[INFO] Installing environment for <https://github.com/pre-commit/mirrors-prettier>.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
Terraform fmt............................................................Failed
- hook id: terraform_fmt
- files were modified by this hook
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
main.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
main.tf
variables.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
main.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
versions.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
Terraform docs...........................................................Failed
- hook id: terraform_docs
- exit code: 1
ERROR: terraform-docs is required by terraform_docs pre-commit hook but is not installed or in the system's PATH.
prettier.............................................(no files to check)Skipped
rebuild-adr-docs.........................................................Passed
pre-commit hook(s) made changes.
If you are seeing this message in CI, reproduce locally with: `pre-commit run --all-files`.
To run `pre-commit` as part of git workflow, use `pre-commit install`.
All changes made by hooks:

My config is as follows. It was working a month ago and I did not change anything, but it suddenly started failing with this error. Any idea how to debug it?

 # Install terraform-docs for pre-commit hook
      - name: Install terraform-docs
        shell: bash
        env:
          INSTALL_PATH: "${{ github.workspace }}/bin"
        run: |
          make init
          mkdir -p "${INSTALL_PATH}"
          make packages/install/terraform-docs
          echo "$INSTALL_PATH" >> $GITHUB_PATH
      # pre-commit prerequisites
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - uses: actions/setup-node@v3
        with:
          node-version: '16'

      # Install adr-tools for pre-commit hook
      - name: Install adr-tools
        shell: bash
        run: |
          wget https://github.com/npryce/adr-tools/archive/refs/tags/$ADR_TOOLS_VERSION.tar.gz
          tar xvzf $ADR_TOOLS_VERSION.tar.gz
          echo "adr-tools-$ADR_TOOLS_VERSION/src" >> $GITHUB_PATH

      # pre-commit checks: fmt + terraform-docs
      # We skip tf_validate as it requires an init
      # of all root modules, which is to be avoided.

      - uses: cloudposse/github-action-pre-commit@v3
        env:
          SKIP: tf_validate
        with:
          token: ${{ secrets.CCH_GITHUB_BOT_TOKEN }}
          git_user_name: ${{ env.GIT_USER_NAME }}
          git_user_email: ${{ env.GIT_USER_EMAIL }}
          extra_args: --all-files
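
One way to localize this kind of failure is a throwaway debug step right after the terraform-docs install. This is a diagnostic sketch, not part of the original workflow; the step name is made up, and it only prints where (or whether) the binary landed:

```yaml
      # Hypothetical debug step: place immediately after "Install terraform-docs"
      # to confirm the binary actually ends up on PATH before pre-commit runs.
      - name: Debug terraform-docs install
        shell: bash
        run: |
          echo "PATH=$PATH"
          ls -la "${{ github.workspace }}/bin" || true
          command -v terraform-docs || echo "terraform-docs not found on PATH"
          terraform-docs --version || true
```

If `command -v` fails here, the `make packages/install/terraform-docs` target (or the `$GITHUB_PATH` append) is the problem, not the pre-commit action.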
Hans D avatar

https://github.com/cloudposse/github-action-pre-commit/releases shows the last v3 release was in Nov 2022, so the action itself is unlikely to have changed. This fragment from your output:

Terraform docs...........................................................Failed
- hook id: terraform_docs
- exit code: 1
ERROR: terraform-docs is required by terraform_docs pre-commit hook but is not installed or in the system's PATH.

seems to be more related to the first section of your GHA workflow, where terraform-docs is installed:

 # Install terraform-docs for pre-commit hook
      - name: Install terraform-docs
        shell: bash
        env:
          INSTALL_PATH: "${{ github.workspace }}/bin"
        run: |
          make init
          mkdir -p "${INSTALL_PATH}"
          make packages/install/terraform-docs
          echo "$INSTALL_PATH" >> $GITHUB_PATH
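
If that `make packages/install/terraform-docs` target changed behavior upstream, one way to rule it out is a pinned direct install from the terraform-docs GitHub releases. This is a sketch to replace the step above; the version shown is an assumption, so pin whatever your hooks expect:

```yaml
      # Hypothetical alternative step: download a pinned terraform-docs release
      # directly instead of going through the Makefile target.
      - name: Install terraform-docs (pinned)
        shell: bash
        env:
          TFDOCS_VERSION: v0.16.0   # assumed version; adjust as needed
        run: |
          mkdir -p "${{ github.workspace }}/bin"
          curl -sSL "https://github.com/terraform-docs/terraform-docs/releases/download/${TFDOCS_VERSION}/terraform-docs-${TFDOCS_VERSION}-linux-amd64.tar.gz" \
            | tar -xz -C "${{ github.workspace }}/bin" terraform-docs
          echo "${{ github.workspace }}/bin" >> "$GITHUB_PATH"
```

Since the install is pinned, an upstream Makefile or latest-release change can no longer silently break the hook.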
Dr.Gao avatar

Thanks!
