#atmos (2024-02)

2024-02-01

Andy Wortman avatar
Andy Wortman

Question on how to handle stack-unique configuration files in atmos.

1
Andy Wortman avatar
Andy Wortman

The situation is that we have a terraform component, used by lots of stacks, but each stack has its own json configuration file for that component. These files are several hundred lines long, so it would be unwieldy to store in a variable. Currently, we store all these files inside the component, but this means if I change the file for one stack, atmos describe affected returns all stacks with that component, leading to a bunch of no-op plan and apply steps in our git automation.

I’d like to store the files elsewhere in the repo, and have each stack’s yaml configuration point to the file. When the file is changed, it should trigger a plan or apply of just that stack. Does anyone know of a good way to do this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman give me a few, I’ll show you how to do it

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe affected | atmos

This command produces a list of the affected Atmos components and stacks given two Git commits.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
file - if the Atmos component depends on an external file, and the file was changed (see affected.file below), the file attribute shows the modified file

folder - if the Atmos component depends on an external folder, and any file in the folder was changed (see affected.folder below), the folder attribute shows the modified folder
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
file - an external file on the local filesystem that the Atmos component depends on was changed.

Dependencies on external files (not in the component's folder) are defined using the file attribute in the settings.depends_on map. For example:
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    top-level-component3:
      metadata:
        component: "top-level-component1"
      settings:
        depends_on:
          1:
            file: "examples/tests/components/terraform/mixins/introspection.mixin.tf"
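Applied to the original question, each stack’s manifest can point the component at its own JSON file and declare that file as a dependency, so `atmos describe affected` flags only that stack when the file changes. A sketch (the component name, variable, and paths are hypothetical):

```yaml
components:
  terraform:
    my-component:
      vars:
        # the component reads its large JSON config from outside the component folder
        config_file: "config/stack-a/my-component.json"
      settings:
        depends_on:
          1:
            # changing this file marks only this stack as affected
            file: "config/stack-a/my-component.json"
```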
Andy Wortman avatar
Andy Wortman

oh, that is awesome!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how to specify deps on external files and folders in the YAML stack manifests using the settings.depends_on attribute, which is used in atmos describe affected

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you can also have dependencies on external (but local) terraform modules in your TF components - but in this case Atmos detects that automatically, no need to specify anything in YAML stack manifests

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
component.module - the Terraform component is affected because it uses a local Terraform module (not from the Terraform registry, but from the local filesystem), and that local module has been changed.
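For example, a component that sources a module from the local filesystem is tracked automatically, with no `settings.depends_on` entry. A sketch (the module path is hypothetical):

```hcl
# Atmos detects local (filesystem) module sources and reports the component
# as affected when files in the module change
module "label" {
  source  = "../../modules/label" # local path, not a registry source
  context = module.this.context
}
```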
Andy Wortman avatar
Andy Wortman

Exactly what I was looking for. Thanks @Andriy Knysh (Cloud Posse)!

Dave avatar

Greetings… running through a pared-down version of https://atmos.tools/design-patterns/organizational-structure-configuration

I was able to run atmos terraform deploy vpc-flow-logs-bucket -s org1-plat-ue2-prod without a problem

Then when I run atmos terraform deploy vpc -s org1-plat-ue2-prod I’m getting the following error:

╷
│ Error: stack name pattern '{namespace}-{tenant}-{environment}-{stage}' includes '{environment}', but environment is not provided
│
│   with module.vpc_flow_logs_bucket[0].data.utils_component_config.config[0],
│   on .terraform/modules/vpc_flow_logs_bucket/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│
╵
exit status 1

Here is the output of atmos describe component vpc --stack org1-plat-ue2-prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "vpc_flow_logs_bucket" {
  count = local.vpc_flow_logs_enabled ? 1 : 0

  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # Specify the Atmos component name (defined in YAML stack config files)
  # for which to get the remote state outputs
  component = "vpc-flow-logs-bucket"

  # Override the context variables to point to a different Atmos stack if the
  # `vpc-flow-logs-bucket-1` Atmos component is provisioned in another AWS account, OU or region
  stage       = try(coalesce(var.vpc_flow_logs_bucket_stage_name, module.this.stage), null)
  environment = try(coalesce(var.vpc_flow_logs_bucket_environment_name, module.this.environment), null)
  tenant      = try(coalesce(var.vpc_flow_logs_bucket_tenant_name, module.this.tenant), null)

  # `context` input is a way to provide the information about the stack (using the context
  # variables `namespace`, `tenant`, `environment`, and `stage` defined in the stack config)
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and try again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that code was taken from a larger component which allows provisioning one VPC flow logs bucket for many VPCs, and the example was not updated for the case where the bucket is deployed in the same stack as the VPC
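A same-stack version would simply drop the stage/environment/tenant overrides so the remote state is looked up in the current stack. A sketch (not necessarily the exact fix shipped in the release):

```hcl
module "vpc_flow_logs_bucket" {
  count = local.vpc_flow_logs_enabled ? 1 : 0

  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # Atmos component (defined in YAML stack config) whose remote state outputs we read
  component = "vpc-flow-logs-bucket"

  # No context overrides: the bucket is provisioned in the same stack as the VPC,
  # so the current stack context is enough
  context = module.this.context
}
```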

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we’ll update the code and the docs in the next release)

Dave avatar

Thanks! That led to the following error:

│ Error: Attempt to get attribute from null value
│
│   on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│    5:   log_destination      = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│     ├────────────────
│     │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1

after running: atmos terraform deploy vpc-flow-logs-bucket -s org1-plat-ue2-prod then atmos terraform deploy vpc -s org1-plat-ue2-prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dave this is another thing that we are going to add to the “Quick Start” in the next release. In fact, it’s documented here https://atmos.tools/core-concepts/components/remote-state

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in short, the remote-state module uses the utils provider to read Atmos components, and Terraform executes all providers from the component’s folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos.yaml does not exist in the component’s folder, hence the utils provider can’t find the remote state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are on the host, put atmos.yaml at /usr/local/etc/atmos/atmos.yaml

Dave avatar

Oh, I totally read that the other day, apologies for forgetting… best practice would then be to copy the atmos config there any time it is updated?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or just set the ENV var ATMOS_CLI_CONFIG_PATH (this is what geodesic does automatically)

Dave avatar

Oh, I totally did that…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

best practice would be to use a Docker container and the rootfs pattern

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

```
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir ('/usr/local/etc/atmos' on Linux, '%LOCALAPPDATA%/atmos' on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star '**' is supported)
# https://en.wikipedia.org/wiki/Glob_(programming)

# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path',
# 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path',
# 'stacks.base_path' and 'workflows.base_path' are considered paths relative to 'base_path'.
base_path: ""

components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
    apply_auto_approve: false
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
    deploy_run_init: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: true
  helmfile:
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_BASE_PATH' ENV var, or '--helmfile-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_USE_EKS' ENV var
    # If not specified, defaults to 'true'
    use_eks: true
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH' ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN' ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN' ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  file: "/dev/stdout"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  level: Info

# Custom CLI commands
commands:
  - name: tf
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: plan
        description: This command plans terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ENV_VAR_1
            value: ENV_VAR_1_value
          - key: ENV_VAR_2
            # 'valueCommand' is an external command to execute to get the value for the ENV var
            # Either 'value' or 'valueCommand' can be specified for the ENV var, but not both
            valueCommand: echo ENV_VAR_2_value
        # steps support Go templates
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
  - name: play
    description: This command plays games
    steps:
      - echo Playing...
    # subcommands
    commands:
      - name: hello
        description: This command says Hello world
        steps:
          - echo Hello world
      - name: ping
        description: This command plays ping-pong
        # If 'verbose' is set to 'true', atmos will output some info messages to the console before executing the command's steps
        # If 'verbose' is not defined, it implicitly defaults to 'false'
        verbose: true
        steps:
          - echo Playing ping-pong...
          - echo pong
  - name: show
    description: Execute 'show' commands
    # subcommands
    commands:
      - name: component
        description: Execute 'show component' command
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates and have access to {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
          - key: ATMOS_TENANT
            value: "{{ .ComponentConfig.vars.tenant }}"
          - key: ATMOS_STAGE
            value: "{{ .ComponentConfig.vars.stage }}"
          - key: ATMOS_ENVIRONMENT
            value: "{{ .ComponentConfig.vars.environment }}"
        # If a custom command defines a 'component_config' section with 'component' and 'stack',
        # 'atmos' generates the config for the component in the stack and makes it available
        # in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables, exposing all the component
        # sections (which are also shown by the 'atmos describe component' command)
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        # Steps support using Go templates and can access all configuration settings (e.g. {{ .ComponentConfig.xxx.yyy.zzz }})
        # Steps also have access to the ENV vars defined in the 'env' section of the 'command'
        steps:
          - 'echo Atmos component from argument: "{{ .Arguments.component }}"'
          - 'echo ATMOS_COMPONENT: "$ATMOS_COMPONENT"'
          - 'echo Atmos stack: "{{ .Flags.stack }}"'
          - 'echo Terraform component: "{{ .ComponentConfig.component }}"'
          - 'echo Backend S3 bucket: "{{ .ComponentConfig.backend.bucket }}"'
          - 'echo Terraform workspace: "{{ .ComponentConfig.workspace }}"'
          - 'echo Namespace: "{{ .ComponentConfig.vars.namespace }}"'
          - 'echo Tenant: "{{ .Compo…
```
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or (especially if you are on the host), use ATMOS_CLI_CONFIG_PATH to set the path to atmos.yaml to whatever location you like

Dave avatar

Yeah I have that set, I am using the container.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this is an annoying feature, but that’s how Terraform works with the providers)

Dave avatar

Still having the issue, I’m clearly missing something….

I have done the following (automatically set on my docker run command)
just set the ENV var ATMOS_CLI_CONFIG_PATH (this is what geodesic does automatically)

# Tried both of the following
 √ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/

 √ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/atmos.yaml

I have tried this

if you are on the host, put atmos.yaml to /usr/local/etc/atmos/atmos.yaml

ls -l /usr/local/etc/atmos/
total 4
-rwxr-xr-x 1 root root 1931 Feb  2 09:08 atmos.yaml

Does using my current atmos.yaml suffice, or is there something special about your atmos.yaml from here?

https://github.com/cloudposse/atmos/blob/default-atmos-yaml/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with

/usr/local/etc/atmos/atmos.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what error do you see?

Dave avatar


√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/atmos.yaml

╷
│ Error: Attempt to get attribute from null value
│
│   on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│    5:   log_destination      = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│     ├────────────────
│     │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1

√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/

╷
│ Error: Attempt to get attribute from null value
│
│   on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│    5:   log_destination      = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│     ├────────────────
│     │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


are you using the ENV variables, or is your atmos.yaml in /usr/local/etc/atmos/atmos.yaml?

Dave avatar

I’ve tried both, are they mutually exclusive?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using the ENV vars, you need to set 2 vars:

Initial Atmos configuration can be controlled by these ENV vars:

ATMOS_CLI_CONFIG_PATH - where to find atmos.yaml. Path to a folder where the atmos.yaml CLI config file is located
ATMOS_BASE_PATH - base path to components and stacks folders
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos.yaml is loaded from the following locations (from lowest to highest priority):

System dir (/usr/local/etc/atmos/atmos.yaml on Linux, %LOCALAPPDATA%/atmos/atmos.yaml on Windows)
Home dir (~/.atmos/atmos.yaml)
Current directory
ENV variables ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH
Dave avatar

Yes read that

Dave avatar

Those are both set.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


if Atmos sees ATMOS_CLI_CONFIG_PATH, it will not try to use `/usr/local/etc/atmos/atmos.yaml`

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s ATMOS_BASE_PATH?

Dave avatar
echo $ATMOS_CLI_CONFIG_PATH
/repotest/

echo $ATMOS_BASE_PATH
/repotest/
Dave avatar

Also tried

echo $ATMOS_CLI_CONFIG_PATH
/repotest

echo $ATMOS_BASE_PATH
/repotest
Dave avatar

Also tried

unset ATMOS_CLI_CONFIG_PATH
unset ATMOS_BASE_PATH

cp /repotest/atmos.yaml /usr/local/etc/atmos/
base_path: "/repotest" # atmos.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s review this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
For this to work for both the `atmos` CLI and the Terraform `utils` provider, we recommend doing one of the following:

- Put `atmos.yaml` at `/usr/local/etc/atmos/atmos.yaml` on local host and set the ENV var `ATMOS_BASE_PATH` to point to the absolute path of the root
  of the repo

- Put `atmos.yaml` into the home directory (`~/.atmos/atmos.yaml`) and set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of
  the repo

- Put `atmos.yaml` at a location in the file system and then set the ENV var `ATMOS_CLI_CONFIG_PATH` to point to that location. The ENV var must
  point to a folder without the `atmos.yaml` file name. For example, if `atmos.yaml` is at `/atmos/config/atmos.yaml`,
  set `ATMOS_CLI_CONFIG_PATH=/atmos/config`. Then set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of the repo

- When working in a Docker container, place `atmos.yaml` in the `rootfs` directory
  at [/rootfs/usr/local/etc/atmos/atmos.yaml](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml>)
  and then copy it into the container's file system in the [Dockerfile](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Dockerfile>)
  by executing the `COPY rootfs/ /` Docker command. Then in the Dockerfile, set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the
  root of the repo. Note that the [Atmos example](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start>)
  uses [Geodesic](<https://github.com/cloudposse/geodesic>) as the base Docker image. [Geodesic](<https://github.com/cloudposse/geodesic>) sets the ENV
  var `ATMOS_BASE_PATH` automatically to the absolute path of the root of the repo on local host
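For example, the third method looks like this with hypothetical paths (note that `ATMOS_CLI_CONFIG_PATH` points at the folder containing `atmos.yaml`, not at the file itself):

```shell
# Hypothetical repo at /repo, with atmos.yaml at /repo/atmos.yaml.
# ATMOS_CLI_CONFIG_PATH must be the FOLDER containing atmos.yaml, not the file path.
export ATMOS_CLI_CONFIG_PATH=/repo
# ATMOS_BASE_PATH must be the absolute path to the root of the repo
# (the components/ and stacks/ folders live under it)
export ATMOS_BASE_PATH=/repo

echo "config dir: $ATMOS_CLI_CONFIG_PATH"
echo "base path:  $ATMOS_BASE_PATH"
```

With both vars set, the Terraform `utils` provider (which runs from the component’s folder) can find the Atmos config too.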
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

pick one of the methods

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that ATMOS_BASE_PATH must be an absolute path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is what geodesic is doing:

• In the Dockerfile, copies the rootfs to the container so we have /usr/local/etc/atmos/atmos.yaml in there
• Sets ATMOS_BASE_PATH to /localhost/...../infra - NOTE: this is an absolute path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(again, sorry that this is complicated, but Terraform executes providers from the component folder, and we don’t want to place atmos.yaml in every component’s folder)

Dave avatar

Are we sure PATHING had anything to do with it?

As soon as I set vpc_flow_logs_enabled: false it worked just fine.

Dave avatar

I went through each of the OPTIONS here

For this to work for both the `atmos` CLI and the Terraform `utils` provider, we recommend doing one of the following:

- Put `atmos.yaml` at `/usr/local/etc/atmos/atmos.yaml` on local host and set the ENV var `ATMOS_BASE_PATH` to point to the absolute path of the root
  of the repo

- Put `atmos.yaml` into the home directory (`~/.atmos/atmos.yaml`) and set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of
  the repo

- Put `atmos.yaml` at a location in the file system and then set the ENV var `ATMOS_CLI_CONFIG_PATH` to point to that location. The ENV var must
  point to a folder without the `atmos.yaml` file name. For example, if `atmos.yaml` is at `/atmos/config/atmos.yaml`,
  set `ATMOS_CLI_CONFIG_PATH=/atmos/config`. Then set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of the repo

- When working in a Docker container, place `atmos.yaml` in the `rootfs` directory
  at [/rootfs/usr/local/etc/atmos/atmos.yaml](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml>)
  and then copy it into the container's file system in the [Dockerfile](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Dockerfile>)
  by executing the `COPY rootfs/ /` Docker command. Then in the Dockerfile, set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the
  root of the repo. Note that the [Atmos example](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start>)
  uses [Geodesic](<https://github.com/cloudposse/geodesic>) as the base Docker image. [Geodesic](<https://github.com/cloudposse/geodesic>) sets the ENV
  var 

and same results every time.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it might be something else. You can DM me your setup and I’ll review

2024-02-02

johncblandii avatar
johncblandii

We’re still on 1.44 and have been in the weeds so didn’t see all of the new stuff, but I read the latest releases since then and major kudos on the latest work. This is looking phenomenal and will upgrade soon.

1
1
2
Release notes from atmos avatar
Release notes from atmos
11:14:32 PM

v1.57.0

what

Add default CLI configuration to Atmos code Update/improve examples and docs Update demo.tape

why

Add default CLI configuration to Atmos code - this is useful when executing Atmos CLI commands (e.g. on CI/CD) that do not require components and stacks

If atmos.yaml is not found in any of the searched locations, Atmos will use the default CLI configuration:

base_path: "."
components:
  terraform:
    base_path: components/terraform
    apply_auto_approve: false
    deploy_run_init:…


Dr.Gao avatar

Hello! When using the Atmos GitHub Actions for Terraform drift detection, I saw an example config like the one below. How can I specify all components? How can I specify components in specific folders?

   select-components:
      runs-on: ubuntu-latest
      name: Select Components
      outputs:
        matrix: ${{ steps.components.outputs.matrix }}
      steps:
        - name: Selected Components
          id: components
          uses: cloudposse/github-action-atmos-terraform-select-components@v0
          with:
            jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
            debug: ${{ env.DEBUG_ENABLED }}
Dr.Gao avatar

in the atmos terraform plan with GitHub Actions, the docs on the website say Within the "plan" job, the "component" and "stack" are hardcoded (foobar and plat-ue2-sandbox). In practice, these are usually derived from another action.

Dr.Gao avatar

Is there an example that practically uses components from “affected stacks”?

Dr.Gao avatar

I see it is using component as a key in the yaml a lot, does that mean it supports configuring one component? How do we configure multiple components in this case?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Terraform Drift Detection | atmos

The Cloud Posse GitHub Action for “Atmos Terraform Drift Detection” and “Atmos Terraform Drift Remediation” define a scalable pattern for detecting and remediating Terraform drift from within GitHub using workflows and Issues. “Atmos Terraform Drift Detection” will determine drifted Terraform state by running Atmos Terraform Plan and creating GitHub Issues for any drifted component and stack. Furthermore, “Atmos Terraform Drift Remediation” will run Atmos Terraform Apply for any open Issue if called and close the given Issue. With these two actions, we can fully support drift detection for Terraform directly within the GitHub UI.

name: "Atmos GitOps Select Components"
description: "A GitHub Action to get list of selected components by jq query"
author: [email protected]
branding:
  icon: "file"
  color: "white"
inputs:
  select-filter:
    description: jq query that will be used to select atmos components
    required: false
    default: '.'
  head-ref:
    description: The head ref to checkout. If not provided, the head default branch is used.
    required: false
    default: ${{ github.sha }}
  atmos-gitops-config-path:
    description: The path to the atmos-gitops.yaml file
    required: false
    default: ./.github/config/atmos-gitops.yaml
  jq-version:
    description: The version of jq to install if install-jq is true
    required: false
    default: "1.6"
  debug:
    description: "Enable action debug mode. Default: 'false'"
    default: 'false'
    required: false
  nested-matrices-count:
    required: false
    description: 'Number of nested matrices that should be returned as the output (from 1 to 3)'
    default: "2"
outputs:
  selected-components:
    description: Selected GitOps components
    value: ${{ steps.selected-components.outputs.components }}
  has-selected-components:
    description: Whether there are selected components
    value: ${{ steps.selected-components.outputs.components != '[]' }}
  matrix:
    description: The selected components as matrix structure suitable for extending matrix size workaround (see README)
    value: ${{ steps.matrix.outputs.matrix }}

runs:
  using: "composite"
  steps:
    - uses: actions/checkout@v3
      with:
        ref: ${{ inputs.head-ref }}

    - name: Read Atmos GitOps config
      ## We have to reference cloudposse fork of <https://github.com/blablacar/action-config-levels>
      ## before <https://github.com/blablacar/action-config-levels/pull/16> would be merged
      uses: cloudposse/github-action-config-levels@nodejs20
      id: config
      with:
        output_properties: true
        patterns: |
          - ${{ inputs.atmos-gitops-config-path }}

    - name: Install Terraform
      uses: hashicorp/setup-terraform@v2
      with:
        terraform_version:  ${{ steps.config.outputs.terraform-version }}
        terraform_wrapper: false

    - name: Install Atmos
      uses: cloudposse/github-action-setup-atmos@v1
      env:
       ATMOS_CLI_CONFIG_PATH: ${{inputs.atmos-config-path}}
      with:
        atmos-version: ${{ steps.config.outputs.atmos-version }}
        install-wrapper: false

    - name: Install JQ
      uses: dcarbone/[email protected]
      with:
        version: ${{ inputs.jq-version }}

    - name: Filter Components
      id: selected-components
      shell: bash
      env:
        ATMOS_CLI_CONFIG_PATH:  ${{ steps.config.outputs.atmos-config-path }}
        JQUERY: |
          with_entries(.value |= (.components.terraform)) |             ## Deal with components type of terraform
          map_values(map_values(select(${{ inputs.select-filter }}))) | ## Filter components by enabled github actions
          map_values(select(. != {})) |                                 ## Skip stacks that have 0 selected components
          map_values(. | keys) |                                        ## Reduce to component names
          with_entries(                                                 ## Construct component object
            .key as $stack | 
            .value |= map({
              "component": ., 
              "stack": $stack, 
              "stack_slug": [$stack, .] | join("-")
            })
          ) | map(.) | flatten                                          ## Reduce to flat array
      run: |
        atmos describe stacks --format json | jq -ce "${JQUERY}" > components.json

        components=$(cat components.json)
        echo "Selected components: $components"
        printf "%s" "components=$components" >> $GITHUB_OUTPUT

    - uses: cloudposse/github-action-matrix-extended@v0
      id: matrix
      with:
        matrix: components.json
        sort-by: ${{ steps.config.outputs.sort-by }}
        group-by: ${{ steps.config.outputs.group-by }}
        nested-matrices-count: ${{ inputs.nested-matrices-count }}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which executes atmos describe stacks --format json, which returns all components in all stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe stacks | atmos

Use this command to show the fully deep-merged configuration for all stacks and the components in the stacks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


How can I specify all components?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the action returns all components, which is what you need for drift detection
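To narrow the selection to components in specific stacks/folders, the `jq-query` input from the question could be extended with an extra `select` on the stack key. A sketch (the `org1-plat` prefix is a hypothetical folder filter):

```yaml
# Based on the jq-query from the question; the added select(.key | startswith(...))
# keeps only stacks whose names start with the given prefix
jq-query: >-
  to_entries[]
  | select(.key | startswith("org1-plat"))
  | .key as $parent
  | .value.components.terraform
  | to_entries[]
  | select(.value.settings.github.actions_enabled // false)
  | [$parent, .key]
  | join(",")
```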

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, to detect only the affected components (affected by the changes in a PR), you can use https://atmos.tools/integrations/github-actions/affected-stacks

Affected Stacks | atmos

Streamline Your Change Management Process

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Terraform Plan | atmos

The Cloud Posse GitHub Action for “Atmos Terraform Plan” simplifies provisioning Terraform from within GitHub using workflows. Understand precisely what to expect from running a terraform plan from directly within the GitHub UI for any Pull Request.

Dr.Gao avatar

Thanks very much! That is very helpful!

Dr.Gao avatar

Is Github action with Atmos production ready? @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are used in production

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need help or find any issues

Dr.Gao avatar

Thanks! Andriy!

Dr.Gao avatar

I will reach out again if we have any issues

2024-02-05

Guus avatar

Hi, when using Atmos + cloudposse components to setup a multi-account AWS organization setup (accounts for identity, dns, audit, …). Say we have a customer who is providing access to an AWS account within their own organization through a role we can assume from one of our own IAM roles. How would we be able to assume this role within our cloudposse setup so we can still use atmos & cloudposse components and store terraform state (S3) and locking (DynamoDB) on our own account while provisioning the actual infrastructure on the customer’s account?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not related to Atmos, since you want to use the same Terraform backend and have already configured the backend.s3 section, so all state will be stored in the same backend (even for the external account). You are probably using different IAM roles to access the backend and the AWS resources

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to provide an IAM role in assume_role that Terraform will assume to access the external account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using just provider "aws" you can always provide that role in assume_role

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then the account-map component needs to be configured to return that IAM role for a specific context, e.g. for a diff Org, or diff tenant, or diff account - depending on how you model the external account (is it a separate Org, or a separate tenant, or just a separate account)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in Atmos manifests, you create those configurations for the new Org/tenant/account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can always model an external account as a separate Org, or tenant, or just a separate AWS account in the existing Org/tenant

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Guus avatar

Thank you for the detailed reply, I think I understand what you mean, I’m just not sure how I would make the account-map component return the external role to assume? How would I add the external AWS account (which is part of the customers AWS Org) within my own AWS Org setup configured using atmos + cloud components?

Guus avatar

So I would just see it as a separate external account, which I want to provision resources on using my existing atmos+cloudposse project. So adding it using the manifests and allowing it to be using the assume_role (terraform_role_arn) with an external role would perfectly fit my needs. I just don’t know how to set that up and can’t immediately find any examples either.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using the account-map component, there are no examples like that. To use an existing external account as a separate stage and to use an existing Terraform IAM role, you will need to modify the account-map component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  static_terraform_role  = local.account_map.terraform_roles[local.account_name]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. if stage=<new_stage> return the existing Terraform role ARN

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the account-map component needs to be modified before you configure the new functionality with Atmos, e.g. by providing it with a new input - a map of existing accounts to the Terraform roles to assume
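A hypothetical sketch of such an input (the variable name is illustrative, matching the example configuration later in the thread; the wiring into the component's outputs is up to you):

```hcl
# Hypothetical new input for the account-map component (brownfield accounts)
variable "existing_accounts_to_terraform_role_arns" {
  type        = map(string)
  default     = {}
  description = "Map of existing account names to the Terraform role ARNs to assume in them"
}
```

The component would then merge this map into its outputs so the iam-roles submodule resolves these ARNs for the matching accounts instead of the generated ones.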

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use that input in the component to add it to the outputs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then you can configure it with Atmos (which is just to configure that new input variable)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
   terraform:
     account-map:
       vars:
         existing_accounts_to_terraform_role_arns:
            acc1: arn1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what you are asking about is a brownfield environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#945 feat: use `account-map` component for brownfield env

what

• feat: use account-map component for brownfield env

why

• Allow brownfield environments to use the account-map component with existing accounts

references

• Related to PR #943

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#943 feat: use `account` component for brownfield env

what

• feat: use account component for brownfield env

why

• Allow brownfield environments to use the account component without managing org, org unit, or account creation • This will allow adding to the account outputs so account-map can ingest it without any changes

references

• Slack conversations in sweetops https://sweetops.slack.com/archives/C031919U8A0/p1702135734967949

tests

I tested with these toggled to true and false to ensure it worked as expected. When these are true, the resources are all created. When these are false, none of the resources are created and all the outputs are filled with existing account information with the ability to override using the yaml inputs.

        organization_enabled: true
        organizational_units_enabled: true
        accounts_enabled: true
Guus avatar

Interesting, thank you!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as you can see, the PRs are under discussion (which means there is no consensus yet on how to do this), so for now you need to make the changes to the components yourself, using any approach, and configure it with Atmos

Alex Soto avatar
Alex Soto

Hi, is there a document explaining why Atmos runs a reconfigure? As I look at example atmos.yaml files, the default appears to always reconfigure. Every time I run a plan, even for the same component and stack in succession, it constantly asks to migrate all workspaces

the one caveat is that I’m playing around right now and using local state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform init -reconfigure is used to be able to use a component in many stacks

Re-running init with an already-initialized backend will update the working directory to use the new backend settings. Either -reconfigure or -migrate-state must be supplied to update the backend configuration.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can always set it to false in atmos.yaml
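For reference, the relevant atmos.yaml setting looks like this (key name per the Atmos CLI configuration docs; verify against your Atmos version):

```yaml
components:
  terraform:
    # When false, Atmos runs plain `terraform init` instead of
    # `terraform init -reconfigure` before terraform commands
    init_run_reconfigure: false
```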

Alex Soto avatar
Alex Soto

ok thanks, I'm still learning; I don't quite understand why it needs to if it's re-running in the same stack, unless it's not detecting that it's the same stack and just always running it because the config var says true

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

-reconfigure was introduced (it’s configurable) for the cases when we use multiple Terraform backends per Org, per tenant, or per account. Then we can provision the same TF component into multiple stacks using diff TF backends for each stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and to make it configurable (and disable for the cases like yours), it was added to atmos.yaml config

Alex Soto avatar
Alex Soto

ok thx, my case is very simple right now as I’m bootstrapping. Hopefully warnings go away as I get more sophisticated infra in place.

Gabriel Tam avatar
Gabriel Tam

Hi, I was trying to deploy the https://github.com/cloudposse/terraform-aws-components/tree/main/modules/waf module, but I had a hard time figuring out how to use the and_statement, or_statement, or not_statement from the rules. I know I can do that with straight TF, but I can't seem to be able to do it with the Cloud Posse module. Also, are Rule Groups not supported? Can someone please shed some light? Thank you in advance.

The following snippet is what I had, but I was only able to specify one statement.

byte_match_statement_rules:
  - name: "invalid-path"
    priority: 30
    action: block

    statement:
      field_to_match:
        uri_path:
          rule:
      positional_constraint: "STARTS_WITH"
      search_string: "/api/v3/test/"
      text_transformation:
        rule:
          priority: 0
          type: "NONE"

    visibility_config:
      # Defines and enables Amazon CloudWatch metrics and web request sample collection.
      cloudwatch_metrics_enabled: true
      metric_name: "uri_path"
      sampled_requests_enabled: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

the waf component may not have everything that the terraform-aws-waf module has. We use components as root modules for customer implementations, so it likely only has what we’ve required for a given use case. You can likely add anything that the module supports to the component

https://github.com/cloudposse/terraform-aws-waf

cloudposse/terraform-aws-waf
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

rule groups are supported by the managed_rule_group_statement_rules input: https://github.com/cloudposse/terraform-aws-waf/blob/main/variables.tf#L361

variable "managed_rule_group_statement_rules" {
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

@Andriy Knysh (Cloud Posse) do you have an example of using an and_statement or or_statement or not_statement?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think those or and and statements are not supported by the module. PRs are welcome

Gabriel Tam avatar
Gabriel Tam

@Dan Miller (Cloud Posse), I think rule groups are not supported in the terraform-aws-components WAF module. Managed rule groups are the ones managed by AWS, not the ones we create.

So it sounds like we’ll need to customize it to do both rule groups and multi statements then. @johncblandii

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    for_each = local.rule_group_reference_statement_rules
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabriel Tam are those not what you are describing?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "rule_group_reference_statement_rules" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "rule_group_reference_statement_rules" {
Gabriel Tam avatar
Gabriel Tam

Those are the rule group referencing rules, which point to an existing rule group (ARN). I couldn't find where I can create the rule groups. And the and, or, and not statements are needed for our use cases. Those are the ones we will need to create / customize.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you modify/update/improve it, your contribution to the WAF module would be greatly appreciated (taking into account that WAF is a complex thing, it would benefit many people)

johncblandii avatar
johncblandii

so it seems support for this resource is what @Gabriel Tam is referring to.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_rule_group

2024-02-06

2024-02-08

Release notes from atmos avatar
Release notes from atmos
12:34:39 AM

v1.58.0 what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a component and stack and printing error messages if the component or stack is not found

If a user executes any Atmos command that requires Atmos components and stacks, including just atmos (and including from a random folder not related to Atmos configuration), and the CLI config points to an Atmos stacks…

Release v1.58.0 · cloudposse/atmosattachment image

what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a compone…

Release notes from atmos avatar
Release notes from atmos
12:54:27 AM

v1.58.0 what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a component and stack and printing error messages if the component or stack is not found

If a user executes any Atmos command that requires Atmos components and stacks, including just atmos (and including from a random folder not related to Atmos configuration), and the CLI config points to an Atmos stacks…

2024-02-09

Release notes from atmos avatar
Release notes from atmos
06:24:25 PM

v1.59.0 what

Update intro of Atmos (https://atmos.tools/) Add page on Terraform limitations (https://atmos.tools/reference/terraform-limitations/) Add backend.tf.json to .gitignore for QuickStart Default to dark mode Stylize atmos brand

why

Make it more compelling Add missing context developers might lack without extensive terraform experience

Release v1.59.0 · cloudposse/atmosattachment image

what

Update intro of Atmos (https://atmos.tools/) Add page on Terraform limitations (https://atmos.tools/reference/terraform-limitations/) Add backend.tf.json to .gitignore for QuickStart Default …

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

Terraform Limitations | atmos

Terraform Limitations

Release notes from atmos avatar
Release notes from atmos
06:34:34 PM

v1.59.0 what

Update intro of Atmos (https://atmos.tools/) Add page on Terraform limitations (https://atmos.tools/reference/terraform-limitations/) Add backend.tf.json to .gitignore for QuickStart Default to dark mode Stylize atmos brand

why

Make it more compelling Add missing context developers might lack without extensive terraform experience

Release notes from atmos avatar
Release notes from atmos
03:14:37 AM

v1.60.0 what

Fix an issue with the skip_if_missing attribute in Atmos imports with context Update docs titles and fix typos Update atmos version CLI command

why

The skip_if_missing attribute was introduced in Atmos release v1.58.0 and had some issues with checking Atmos imports if the imported manifests don’t exist

Docs had some typos

When executing the atmos version command, Atmos automatically checks for the latest…

Release v1.60.0 · cloudposse/atmosattachment image

what

Fix an issue with the skip_if_missing attribute in Atmos imports with context Update docs titles and fix typos Update atmos version CLI command

why

The skip_if_missing attribute was introd…

Release v1.58.0 · cloudposse/atmosattachment image

what

Improve Atmos UX and error handling

When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a compone…

2024-02-10

silopolis avatar
silopolis

This page alone deserves a conf to reveal all its gems and secrets!

https://atmos.tools/reference/terraform-limitations/

Overcoming Terraform Limitations with Atmos | atmos

Overcoming Terraform Limitations with Atmos

this2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Erik Osterman (Cloud Posse) created it, I enjoyed reading it, after reading it you will want to use Atmos :)

Overcoming Terraform Limitations with Atmos | atmos

Overcoming Terraform Limitations with Atmos

silopolis avatar
silopolis

You surely do! And you’re better equipped to do so with all the XP distilled in this page!

RB avatar

Is there a way to visualize the atmos stacks when using github actions to plan and apply? Or is it on the roadmap?

RB avatar

I know there is a native GitHub way to do it by clicking on actions, just wondering if there is another frontend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do not have any immediate plans for a front end

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Focused on adopting GitHub actions and GitHub enterprise functionality as much as possible right now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What specifically are you missing that a UI would solve? Visualizing atmos stacks is a bit broad.

RB avatar

I was thinking that it would be nice to have the ability to see which stacks have drifted visually

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We believe we’ve solved that. We open GitHub Issues. This way drift is actionable (it can be assigned to someone to remediate, and supports remediation from issues, when possible) and visual. We also create issues from failures.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem with existing UIs is that they are not actionable. They just show you that you have a bunch of drifted stacks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Pro tip, you can use projects with github issues.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

GitHub Issues can be synced to Jira, if you’re not tracking work in GitHub Issues.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(note our actions support this out of the box)

2024-02-11

RB avatar

I tried using the component.yaml’s mixins key to copy over my local providers file and it failed.

Does that key only work with the source?

RB avatar

If so, how do i vendor from upstream and copy in my local mixin?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB are you asking about how to copy a local file to the component’s folder?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  mixins:
    # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
    # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
    # This mixin `uri` is relative to the current `vpc` folder
    - uri: ../../mixins/context.tf
      filename: context.tf
RB avatar


@RB are you asking about how to copy a local file to the component’s folder?
Yes

RB avatar

Oh i see, i can use the ../../ expression?

RB avatar

Hmm i tried this and i got an error last time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is it working for you?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Btw have you also experimented with the new vendor manifest?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What’s nice about that is you can use yaml anchors within the file to DRY up provider copying

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Vendoring | atmos

Use Atmos vendoring to make copies of 3rd-party components, stacks, and other artifacts in your own repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah yes, @Andriy Knysh (Cloud Posse) shows an example there:

Option 1

- component: "context"
  # Copy a local file into a local file with a different file name
  # This `source` is relative to the current folder
  source: "components/terraform/mixins/context.tf"
  targets:
    - "components/terraform/vpc/context.tf"
    - "components/terraform/alb/context.tf"
    - "components/terraform/ecs/context.tf"
    # etc...
  # Tags can be used to vendor components that have the specific tags
  # `atmos vendor pull --tags test`
  # Refer to <https://atmos.tools/cli/commands/vendor/pull>
  tags:
    - context
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since there can only be one source, there’s no way to do the copying from a different location in each - component definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I could see this improving, but it’s not a pattern we use right now, and we dissuade against it because open-ended mixins are impossible to test.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re adding a terratest helper to do atmos testing for components and stacks

RB avatar

Ah that looks better than my current way which is to use a make target

i.e.

make vendor COMP=ecr
make vendor/edit COMP=ecr
.PHONY: vendor
vendor: ## Vendor the component
	@mkdir -p components/terraform/$(COMP)
	@sed 's,ecr,$(COMP),g' ./mixins/component.yaml > components/terraform/$(COMP)/component.yaml
	@atmos vendor pull -c $(COMP)

.PHONY: vendor/edit
vendor/edit: ## Vendor the component and edit the files
	@hcledit block rm 'variable.region' -f components/terraform/$(COMP)/variables.tf -u
	@cp ./mixins/providers.tf components/terraform/$(COMP)/
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On the one hand, make is nice because it understands modification times (albeit not in your implementation). But that’s over-optimizing for the most part. Give the vendoring a shot.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In make though, I would do it like this: (functional looking pseudo code)

ALL_PROVIDERS = $(shell find . -type f -name 'providers.tf')

$(ALL_PROVIDERS): mixins/providers.tf
  cp $< $@

.PHONY : providers
providers: $(ALL_PROVIDERS)
1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That would only copy it if mixins/providers.tf is newer than the providers.tf inside of a component.

RB avatar

Thanks Erik. You got some make skills!

The other one is using hcledit to remove variable.region from variables.tf, after vendoring, as I have moved that into the client’s providers.tf file

hcledit block rm 'variable.region' -f components/terraform/$(COMP)/variables.tf -u
RB avatar

Any chance vendoring can also include running a cli command via post_vendor key or similar ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since component-level generation (like Terragrunt, Terramate) is not something we subscribe to, it’s not yet something we can prioritize. I think we will inevitably support it, but for now recommend doing that in make, or like @Hans D does with go-task

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can see supporting it, primarily for the reason making it easier for companies to migrate from other tools into atmos

RB avatar

no worries, for now I will look into the new vendoring yaml, and tie that back into a make target and i should be good to go

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, and also for very advanced vendoring requirements, there’s https://carvel.dev/vendir/docs/v0.39.x/vendir-spec/

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(which atmos vendoring is based on)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can add pre and post hooks to vendoring (no ETA, but in the near future)

2024-02-12

2024-02-13

RB avatar

Im trying to run cloudposse/github-action-atmos-terraform-plan but Im getting this error

Run cloudposse/github-action-atmos-get-setting@v1
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON

Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
RB avatar

This is my yaml

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
    - name: Plan Atmos Component
      uses: cloudposse/github-action-atmos-terraform-plan@v1
      with:
        component: "github-oidc-role/cicd"
        stack: "gbl-prod"
        component-path: "component/terraform/github-oidc-role"
        terraform-plan-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        terraform-state-bucket: "org-state-bucket"
        terraform-state-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        aws-region: "us-east-1"
RB avatar

But if I run this, it works

    runs-on: ubuntu-latest
    steps:
    - uses: hashicorp/setup-terraform@v2
  
    - name: Setup atmos
      uses: cloudposse/github-action-setup-atmos@v1
      with:
        install-wrapper: true
  
    - name: Run atmos
      id: atmos
      run: atmos terraform plan github-oidc-role/cicd --stack=gbl-prod
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor Rodionov please take a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think these moved into the config.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
        component-path: "component/terraform/github-oidc-role"
        terraform-plan-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        terraform-state-bucket: "org-state-bucket"
        terraform-state-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
        aws-region: "us-east-1"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(the aim being to keep config out of workflows so they are more easily distributed)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Matt Calhoun who has also been working on github-action-atmos-get-setting

RB avatar

Yes that makes sense. I was following the readme. I also tried only the component and the stack and received the same weird issue in github-action-atmos-get-setting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Igor Rodionov I think the readme may be out of date?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(it does have the part about the config up above)

Igor Rodionov avatar
Igor Rodionov

@RB have you created the config file?

RB avatar

Oh boy no i did not. I thought i could give it that info as inputs to the workflow

RB avatar

If i create that config, wouldn’t i just be duplicating the atmos stack yaml in the gitops config?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov we should have a friendlier error message

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


If i create that config, wouldn’t i just be duplicating the atmos stack yaml in the gitops config?
We are moving most of the gitops config into the atmos stack config. And some of it will be moved into atmos.yaml

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(this was based on feedback we received, and it makes sense in hindsight)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That way the GHA should work more out-of-the-box

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The key thing we’re aiming for is that GHA workflows should not need to be edited.

1
Brett Au avatar
Brett Au

So we moved over to the .github/config/atmos-gitops.yaml pattern but are still getting the same error

Run cloudposse/github-action-atmos-get-setting@v1
  with:
    component: s3/some-new-bucket
    stack: ue1-devops-prod-01
    settings-path: settings.github.actions_enabled
  env:
    ATMOS_CLI_CONFIG_PATH: 
  
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
    at JSON.parse (<anonymous>)
    at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/lib/settings.ts:28:1)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at processSingleSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/useCase/process-single-setting.ts:40:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/main.ts:30:1
Brett Au avatar
Brett Au

Our atmos-gitops.yaml file

  atmos-version: 1.45.3
  atmos-config-path: ./rootfs/usr/local/etc/atmos/
  terraform-state-bucket: bucket
  terraform-state-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
  terraform-plan-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
  terraform-apply-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
  terraform-version: 1.6.0
  aws-region: us-east-1
  enable-infracost: false
  sort-by: .stack_slug
  group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-") 
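For illustration, the group-by expression above reduces a stack slug to its first and third segments. Assuming jq is installed locally, you can check what it produces for the stack in this thread:

```shell
# Feed a sample stack slug through the same jq filter used in group-by above
echo '{"stack_slug":"ue1-devops-prod-01"}' \
  | jq -r '.stack_slug | split("-") | [.[0], .[2]] | join("-")'
# -> ue1-prod
```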
Brett Au avatar
Brett Au

I do notice that ATMOS_CLI_CONFIG_PATH is empty in the call to get-setting; not sure if that’s part of the issue

Brett Au avatar
Brett Au

Looking at this documentation too, it seems we may need to set up our atmos.yaml to output JSON so the settings GitHub action can correctly parse it: https://atmos.tools/cli/configuration/

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor Rodionov can you please look at this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, heads up, @Brett Au - sorry to do this to you, very soon we’re moving the configuration into atmos.yml for consistency)

Igor Rodionov avatar
Igor Rodionov

Hello @Brett Au What version of cloudposse/github-action-atmos-terraform-plan are you using?

Brian avatar

Hello, I am struggling to understand how atmos can provision resources into different AWS accounts in a multi-account AWS organization. For example, if I need to provision an IAM role in all my AWS accounts in my organization, how does atmos change provider configurations to gain access to my organization’s child accounts?

Brian avatar
Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


atmos change provider configurations to gain access to my organization’s child accounts
That’s up to the Terraform code.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TL;DR providers support variables. So use other modules or variable inputs to supply the values.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our guiding principle is for atmos to have no cloud-provider specific requirements

Brian avatar

But if you put the provider in the configuration and have to back out the component, won’t that inhibit the delete process, since Terraform will be looking for the provider to do that delete?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


inhibit that delete process since terraform will be looking for the provider to do that delete?
It’s using state from something outside of the current component (root module), so it does not inhibit the delete.

Brian avatar

Okay awesome. Thanks for the info Erik.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know what you’re referring to, however, and we have encountered that when we made mistakes. But it’s not something we encounter anymore.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

some of our engineers will be more familiar than me on the specific implementations.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
module "always" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  # account_map must always be enabled, even for components that are disabled
  enabled = true

  context = module.this.context
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s one of those examples.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. we cannot have labels get disabled if we need to successfully destroy

Brian avatar

Okay this makes sense. Thanks Erik. Was banging my head against a wall there for a while.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as Erik mentioned, it’s up to the provider config

provider "aws" {
  region = var.region

  assume_role = var.assume_role
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

assume_role can be provided from other TF components (as in our examples), or from TF vars, or from Atmos stack manifests for diff Orgs/tenants/accounts
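As a minimal sketch of the stack-manifest approach (the file path, account ID, and role name here are hypothetical, not from the thread), an Atmos stack manifest could supply `assume_role` per account like this:

```yaml
# stacks/orgs/acme/plat/prod/us-east-2.yaml (hypothetical path and values)
components:
  terraform:
    iam-role:
      vars:
        region: us-east-2
        # The role in the child account that Terraform assumes for this stack;
        # a sibling stack manifest for another account would set a different ARN
        assume_role: "arn:aws:iam::111111111111:role/acme-plat-prod-terraform"
```

Because each stack manifest carries its own `assume_role` value, the same component code provisions into different child accounts without any provider changes.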

Release notes from atmos avatar
Release notes from atmos
12:14:32 AM

v1.61.0

what

• Update readme to be more consistent with atmos.tools
• Fix links
• Add features/benefits
• Add use-cases
• Add glossary

why

Better explain what atmos does and why

Release notes from atmos avatar
Release notes from atmos
01:54:35 AM

v1.62.0 Add atmos docs CLI command @aknysh (#537)

Release v1.62.0 · cloudposse/atmosattachment image

Add atmos docs CLI command @aknysh (#537) what

Add atmos docs CLI command

why

Use this command to open the Atmos docs

aknysh - Overview

aknysh has 264 repositories available. Follow their code on GitHub.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey all! Some notable updates to the atmos docs. First, you can now open the docs from the command line. Just run atmos docs

Here are some notable additions.

Best Practices for Stacks: https://atmos.tools/core-concepts/stacks/#best-practices
Best Practices for Components: https://atmos.tools/core-concepts/components/#best-practices
Added an FAQ: https://atmos.tools/faq
Challenges that led us to writing atmos: https://atmos.tools/reference/terraform-limitations

1
Release notes from atmos avatar
Release notes from atmos
06:34:35 AM

v1.63.0 Add integrations.github to atmos.yaml @aknysh (#538)

aknysh - Overview

aknysh has 264 repositories available. Follow their code on GitHub.


2024-02-14

Adam Markovski avatar
Adam Markovski

You guys are on fire with the Atmos changes

1
Adam Markovski avatar
Adam Markovski

Looks great

Dr.Gao avatar

Hello, I see you have this to support the short form of AWS regions and zones: https://github.com/cloudposse/terraform-aws-utils#introduction. Do you have something similar for GCP?

jose.amengual avatar
jose.amengual

CloudPosse does not have GCP modules

Dr.Gao avatar

Thanks!

Dr.Gao avatar

This is very helpful!

1
Dr.Gao avatar

Hello, I need to use multiple modules from the GCP module collection. Can I configure multiple sources in component.yaml? It does not look like it supports that. I should not use vendor.yaml in my case, since I am not pulling for the entire infra, just for that component. I did see that Cloud Posse solves this issue by adding one module to another so it only needs to configure one source; e.g. if a component needs both the efs and kms modules, it only needs to pull efs, since kms is also defined in efs’s main.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

component.yaml is used to pull all the files for a component. If you are pulling two components, you can place them into separate folders and use two different component.yaml files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or use one component.yaml and mixins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with mixins, you can pull anything from multiple sources (but just one by one)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
mixins:
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
      filename: context.tf
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf>
      version: 1.398.0
      filename: introspection.mixin.tf
Dr.Gao avatar

Thanks! I think component.yaml with mixins is the way I am going to try

Dr.Gao avatar

Since it is not pulling multiple components, it is multiple modules in one component

Dr.Gao avatar

This is really helpful, thanks @Andriy Knysh (Cloud Posse)

Dr.Gao avatar

How are the mixins structured when I have multiple modules from multiple components?

Dr.Gao avatar

Do I create multiple mixin config files for each component?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a list of mixins is part of the spec, at the same level as source

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make copies of 3rd-party components in your own repo.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:
    # Source 'uri' supports the following protocols: OCI (<https://opencontainers.org>), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in <https://github.com/hashicorp/go-getter>
    # In 'uri', Golang templates are supported  <https://pkg.go.dev/text/template>
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    # To vendor a module from a Git repo, use the following format: 'github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 1.398.0

    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    # <https://en.wikipedia.org/wiki/Glob_(programming)>
    # <https://github.com/bmatcuk/doublestar#patterns>
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"

    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    excluded_paths:
      - "**/context.tf"

  # Mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # All mixins are processed in the order they are declared in the list.
  mixins:
    # <https://github.com/hashicorp/go-getter/issues/98>
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
      filename: context.tf
    - uri: <https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf>
      version: 1.398.0
      filename: introspection.mixin.tf
Dr.Gao avatar

Awesome!

Dr.Gao avatar

Is there another way to solve this issue?

2024-02-15

Dr.Gao avatar

Hello, for the label order described here: https://github.com/cloudposse/terraform-null-label. I see we can configure the label order as we would like. Does it work if the label order is {namespace}-{tenant}-{environment}-{stage} while the folder structure in stacks follows a different order, namespace/stage/tenant/environment? I think it works, but I would like to double-check with the experts here. If it does work, is there any disadvantage to doing that?

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

short answer: yes, it will work

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

long answer:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the label order in the context (from null-label which is used in all components) is to uniquely and consistently name the cloud (AWS) resources, so your resource names/IDs will look like {namespace}-{tenant}-{environment}-{stage}-{name} (or in whatever order you want)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, the Atmos manifests folder structure is for humans to organize the Atmos config and make it DRY. Atmos does not care about the stacks folder structure, it’s for people to config, organize and manage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what Atmos cares about is the context variables defined in the stack manifests - that’s how Atmos finds the stack and component in the stack when you execute commands like atmos terraform plan <component> -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Final Notes | atmos

Atmos provides unlimited flexibility in defining and configuring stacks and components in the stacks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos stack manifests can have arbitrary names and can be located in any sub-folder in the stacks directory. Atmos stack filesystem layout is for people to better organize the stacks and make the configurations DRY. Atmos (the CLI) does not care about the filesystem layout, all it cares about is how to find the stacks and the components in the stacks by using the context variables namespace, tenant, environment and stage
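As a sketch of that point (the names and path here are hypothetical), a manifest buried anywhere under the stacks directory still resolves to the same stack, because only the context variables matter:

```yaml
# stacks/any/folder/you-like.yaml (Atmos ignores the path itself)
vars:
  namespace: acme
  tenant: plat
  environment: ue2
  stage: prod

components:
  terraform:
    vpc:
      vars: {}
```

With these context variables, `atmos terraform plan vpc -s plat-ue2-prod` (or whatever your stack name pattern in atmos.yaml produces) finds this component regardless of where the file lives.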
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as described in https://atmos.tools/design-patterns/, you can organize the stacks (Atmos manifests) in many different ways depending on your Organization/tenants/regions/accounts structure

Atmos Design Patterns | atmos

Atmos Design Patterns. Elements of Reusable Infrastructure Configuration

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Provision | atmos

Having configured the Terraform components, the Atmos components catalog, all the mixins and defaults, and the Atmos top-level stacks, we can now

Dr.Gao avatar

Thanks!

Dr.Gao avatar

I love your design !

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all of that was done to be able to separately and independently configure 3 diff things: 1) cloud resource names; 2) Atmos manifests folder structure (for people); 3) Atmos stack names (e.g. plat-ue2-prod)

Hans D avatar

Interesting to follow https://github.com/opentofu/opentofu/issues/685#issuecomment-1945123152: use of the tf lockfiles …

Comment on #685 Provide a way to disable provider dependency lock file

I was going to say “supply chain attacks”, but @Yantrio got there first. :)

The other solution is to just break down and start including the lock file and try to educate my users about another hoop to jump through for almost 0 benefit.

IMO, this is a pretty fundamental cybersecurity concept. It helps ensure that what you got last time is the same as you got this time. It helps make sure that nobody took over the upstream repo and messed with it. It also helps ensure that the configuration you ship to dev is the same you ship to prod (e.g., a 12-factor app).

The problem I usually have is the inverse — too many platforms to support, and not enough people updating the lock file beyond their own personal os/arch.

2024-02-16

2024-02-18

2024-02-19

2024-02-22

Peter Dinsmore avatar
Peter Dinsmore

Hello, I am currently looking for tooling to optimize our Terraform environments. We previously used Terragrunt in our organization, but it has become too cumbersome over time and feels kind of “previous generation” tooling. We are in the process of setting up a PoC with Terramate, and it’s super neat. Especially the orchestration is powerful. We also looked at Spacelift, but I don’t see any reason to migrate to another CI/CD when we can get to a similar UX in GitHub Actions.

However, I just came across Atmos on Reddit and would love to understand how it compares to Terramate and Terragrunt.

MB avatar

You make a fair point, but from what I’ve seen, building your own IaC automation with GitHub Actions can get messy. First of all, a lack of standardization (which creates silos and bottlenecks across projects); too much reliance on individual knowledge (so when someone leaves, there are always big gaps); and as things get more complex, it demands more and more resources, limiting what you can build on top of it. Also, maintaining homegrown solutions in-house is a ton of work.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have Atmos handling different Terraform environments for many companies, starting with a simple case with one Org and just a few accounts, to multi-Org, multi-tenant, multi-OU, multi-account with hundreds of components deployed to thousands of stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we also have Atmos working with Spacelift (handling tens of thousands of resources and thousands of stacks in some cases across multiple Orgs/OUs/regions/accounts), and with GitHub Actions (we can give you a demo on how to use Atmos with GHA) (discussion on GHA vs Spacelift vs other CI/CD tools is a completely diff topic, all of them have their own pros and cons, including cost, usability, user experience, access control, audit, messiness :) , etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here are some docs describing the core concepts of Atmos:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need more explanation or help (it’s difficult to answer/describe everything in one go)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a real example of Spacelift stacks using Atmos:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are 1274 Spacelift stacks configured across many Orgs, each having many OUs with multiple accounts, deployed into many diff regions

Definition: a Spacelift stack is an Atmos component provisioned into an Atmos stack. For example, a vpc component can be provisioned multiple times into different Org/OU/account/region stacks, keeping the entire config reusable and DRY using concepts like imports and inheritance:

https://atmos.tools/core-concepts/stacks/imports

https://atmos.tools/core-concepts/components/inheritance

https://atmos.tools/core-concepts/components/component-oriented-programming

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Component-Oriented Programming | atmos

Component-Oriented Programming is a reuse-based approach to defining,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(also take a look at why the tool is called Atmos https://atmos.tools/faq)

Atmos FAQ | atmos

Why is the tool called Atmos?

Peter Dinsmore avatar
Peter Dinsmore

Thanks for sharing! I took some time to read through the resources and decided to stick to Terramate. In the end, I don’t see any significant benefits why I should be using Atmos over Terramate.

As said, we don’t want to use Spacelift because we don’t need any of the features Spacelift offers.

Here’s a bit of feedback:

• Atmos feels really cumbersome to get started with

• The orchestration and order of execution in Terramate helps us better to manage environments and split those up into stacks

• Using yaml for configuration sounds like a step back for us. We like the ability to manage all IaC related config with HCL, otherwise we wouldn’t use Terraform in the first place

• We need code generation and like the approach quite a bit. We also don’t see any issues with tests anyways, thanks for helping us out here!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, no problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding “Using yaml for configuration sounds like a step back for us”

Note that in Atmos, YAML is used for configuration of stacks for diff environments. You still use plain Terraform for the components (which can be used with Atmos or without). There is a clear separation of concerns: code (terraform root modules) and configuration for diff environments (Atmos manifests).

Also, YAML is everywhere now (Kubernetes, kustomize, helm, helmfile, etc.); it looks like it’s the modern way to define configurations (regardless of whether you like or hate YAML)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the end, there is no one perfect tool for everything. Terramate has its advantages (the code generation, change detection and testing are cool, also using plain HCL), as well as Atmos (these are different approaches to solve similar problems in diff ways)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We need code generation and like the approach quite a bit. @Peter Dinsmore Can you help me understand how you leverage code generation?

jose.amengual avatar
jose.amengual
Using yaml for configuration sounds like a step back for us. We like the ability to manage all IaC related config with HCL, otherwise we wouldn't use Terraform in the first place

Kubernetes uses yaml and is moving forward….

Peter Dinsmore avatar
Peter Dinsmore

@jose.amengual for Kubernetes, yaml makes sense too. For Terraform, I don’t see why I should use a different configuration language when HCL is built exactly for that. E.g. in Terramate, I can describe the purpose (metadata) and orchestration behavior of a stack as HCL in a stack configuration stack.tm.hcl.

Also, the code generation is configured with HCL, which allows the use of Terraform functions inside the code generation. I don’t think YAML would be suitable for that.

Peter Dinsmore avatar
Peter Dinsmore

@Erik Osterman (Cloud Posse) all sorts of things, to mention a few:

• We generate native backend and provider configuration in stacks interpolating stack metadata. It’s super powerful.

• We use Terramate modules for generating templates based on e.g. provider version used (we render different attributes in resource configuration based on if we use the stable or beta google provider)

• We generate Kubernetes manifests using Terraform outputs

Peter Dinsmore avatar
Peter Dinsmore

Anyways, there’s a ton of different tools in the market. Just because we favor a different approach doesn’t mean that Atmos isn’t a great tool! Any contribution to the ecosystem is appreciated.

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Peter Dinsmore I appreciate that you shared all this. It’s easy to be surrounded by a lot of people who sing praises. The opportunity for improvement lies in constructive criticism.

1
Soren Martius avatar
Soren Martius

@Erik Osterman (Cloud Posse) we should catch up at some point

pv avatar

How do you apply all stacks in a pipeline? If I want my pipeline to run atmos terraform apply, can I do an all flag instead of listing the stack and component? This is with GitHub Actions

Brian avatar

Are you looking for workflows?

https://atmos.tools/core-concepts/workflows/

Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.
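A minimal workflow manifest might look like the sketch below (the filename, workflow name, component, and stack are hypothetical):

```yaml
# stacks/workflows/deploy.yaml (hypothetical filename)
workflows:
  deploy-networking:
    description: Plan and apply a fixed set of components, in order
    steps:
      # Each step is an atmos subcommand, run sequentially
      - command: terraform plan vpc -s plat-ue2-prod
      - command: terraform apply vpc -s plat-ue2-prod -auto-approve
```

You would then run it with `atmos workflow deploy-networking -f deploy`.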

Brian avatar

I am unaware of an atmos flag that just applies or plans all components defined for a stack. However, atmos workflows allow you to synchronously run applies and plans for one or more components.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv please help us to understand what you mean by “apply all stacks in a pipeline”. You probably should not plan/apply all stacks at once (there could be thousands of them), but check what components/stacks have changed and plan/apply only those

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Affected Stacks | atmos

Streamline Your Change Management Process

pv avatar

Yes, the workflow is the second-best option, just adding each plan/apply command there. But yes, “apply all stacks in a monorepo” is better phrased

pv avatar

Also what is the downside to running them all if there is only a change to one stack? Wouldn’t all the terraform show as no changes other than the new stack that you added?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the downside is it will take time and consume resources

pv avatar

Does it consume more resources than running regular terraform? Or is it running more in the background that consumes more resources?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i mean if you have hundreds or thousands of stacks, why trigger all of them if only one or a few have changed and should be planned/applied? Triggering all of them will take a lot of time and probably cost money

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(Atmos calls regular terraform for each plan/apply, there is no difference here, it does not do anything in the background)

pv avatar

I do not have hundreds or thousands of stacks. But none of this really answers my initial question, so I’m assuming that means there is no way to run all stacks unless I create a workflow that includes an apply of each individual stack?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no option in Atmos to trigger everything (e.g. atmos terraform apply -all) (was not implemented, at least yet, for the reason that people usually have hundreds or thousands of stacks, and triggering all of them would be a waste of resources). Help us understand what exactly you want to do in the pipeline, and we would be able to offer a solution

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos workflow is one of the solutions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if you have a small static set of stacks and you know all of them, you can just execute atmos terraform apply ... sequentially in a script (or in a workflow)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the point is, it’s usually not feasible/practical to list all stacks in a workflow or in a script since there could be too many of them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

another solution:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos list stacks | while read -r stack; do
  atmos list components -s "$stack" | while read -r component; do
    atmos terraform apply "$component" -s "$stack"
  done
done
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the atmos list stacks and atmos list components commands you can find here and add them to your atmos.yaml:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this solution is dynamic (you don’t have to know and hardcode all stacks and components in advance)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@pv I think maybe what you want is to apply only the affected stacks in the Pull Request?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(that solution is mentioned above)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv let us know if any of the above would help you to implement what you want (and let us know if you need help)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


But none of this really answers my initial question so I’m assuming that means there is no way to run all stacks unless I create a workflow that includes an apply of each individual stack?
Yes, the gist of it is we have not implemented plan-all and apply-all workflows because they have multiple problems.
• Plan-all never works if your root modules have any interdependencies. At best it gives you a false sense of what will happen. At worst, it just errors.
• Apply-all is practical for cold-starts, but should never be used after that. And since it’s for a cold-start, there are often other considerations. Therefore atmos workflows have been how we address it.
• From a CI/CD perspective, neither plan-all nor apply-all should ever be used.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can go into more detail on any one of these. For example, there are alternative considerations for how to address apply-all in a safe, automated way in a CI/CD context, but that’s something solved in CI/CD and not atmos.

pv avatar

@Andriy Knysh (Cloud Posse) I think what you sent me is probably the best fit for what I am thinking of. What I preferred about straight-up Terraform in the past was just having my pipeline plan and apply, using separate repos for Landing Zones and product infra. This requires extra steps and extra potential break points in the workflow, but I think this more dynamic approach may help with that.

1
pv avatar

@Erik Osterman (Cloud Posse) I get those concerns with how Atmos is set up, but with traditional TF, when you run plan and apply, it looks over just the changes in your terraform path in the repo and only applies what is new, changed, or deleted compared to the state. Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform? Also, with the first point, isn’t that the reason “depends_on” exists? Sure, you don’t see all the values until apply, but it’s faster than running one dependency and then the next until all dependencies are accounted for

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


But with traditional TF, when you run plan and apply, it looks over just any changes in your terraform path on the repo and only applies what is reflected as new, changed, or deleted compared to the state.
We get that with our GitHub Actions.

  1. Describe Affected
  2. Run terraform plan on each affected stack
  3. Apply each change with GitHub Actions. To be clear, this has the outcome you want. It’s just not implemented as a “plan-all” or “apply-all”. It’s implemented using GitHub Actions matrices.
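The matrix pattern described above can be sketched roughly like this (the job names and the jq shaping of the matrix are assumptions for illustration, not Cloud Posse’s published actions):

```yaml
# .github/workflows/atmos-plan.yaml (hypothetical)
on: pull_request

jobs:
  affected:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.affected.outputs.matrix }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # describe affected compares two Git refs
      - id: affected
        run: |
          # Build a job matrix from the affected components and stacks
          matrix=$(atmos describe affected --format json \
            | jq -c '{include: [.[] | {component: .component, stack: .stack}]}')
          echo "matrix=$matrix" >> "$GITHUB_OUTPUT"

  plan:
    needs: affected
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJson(needs.affected.outputs.matrix) }}
    steps:
      - uses: actions/checkout@v4
      - run: atmos terraform plan ${{ matrix.component }} -s ${{ matrix.stack }}
```

Each affected component/stack pair fans out into its own `plan` job, which is how the matrix approach avoids any plan-all.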
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform
@mike186 has some interesting numbers he likes to share about the minutes of terraform they run per month. In their case, it’s a combination of Atmos+Spacelift, but the same would be true of Atmos+GitHub Actions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

as far as I’m concerned, atmos is natively supported on spacelift while you need to use custom settings to use gruntworks :)

According to spacelift we have ~1200 stacks (100% passing) and used more than 1.7 million minutes of atmos worker time, including over 99k minutes of tracked runs, over the last month. These are pretty typical monthly stats for us. Given that we run spacelift exclusively with atmos, every single stack, I very much feel like atmos is so completely and transparently compatible with Spacelift that Spacelift doesn’t need a setting for atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Also with the first point, isn’t that reason “depends_on” exists?
So atmos is designed to work with any number of systems, including spacelift. Not every underlying system can implement all the capabilities of the configuration. In this case depends_on is currently utilized by our Spacelift implementation. We plan on adding GHA support for this soon, but cannot commit to when.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sure you don’t see all the values until apply but its faster than running one dependency and then the next one until all dependencies are accounted for
Agree, so what we really want is to trigger downstream dependencies when upstream components are affected. This is supported today with Spacelift and Atmos.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding “Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform?”: to be clear, Atmos is configuration on top of Terraform root modules; in the end, Atmos generates the varfiles/backend config and executes plain Terraform (so you’d have the same number of runs using just plain terraform in Spacelift or GHA). But this is not about Atmos vs plain terraform; it’s about architecting your Terraform modules and splitting them into smaller parts to reduce complexity, plan/apply time, and blast radius. Once you do that correctly in Terraform, you can configure the root modules with Atmos to be deployed into various environments (OUs, regions, accounts). And once you correctly design your Terraform root modules, you will save time and money when planning/applying them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv depending on what you decide to do, but if you still want atmos terraform apply-all (and atmos terraform plan-all), I can implement those as Atmos custom commands and put them in the docs (so you could just copy them into your atmos.yaml). But as mentioned, the best way forward is to use the GHA to execute atmos describe affected to trigger just the affected stacks, then atmos describe dependents to trigger all the dependencies

pv avatar

@Andriy Knysh (Cloud Posse) No need to create a custom command if it is against practice. I will start with the documentation you sent me and should be good to work with that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think what I prefer about straight up terraform in the past is just having my pipeline plan and apply

Aha, I think I misunderstood what you meant initially. Do you mean having larger root modules that have, for example, the entire landing zone defined?

pv avatar

“Agree, so what we really want it to trigger downstream dependencies when upstream components are affected. This is supported today with Spacelift and Atmos”

@Erik Osterman (Cloud Posse) but how is that more cost effective if you have to pay to use spacelift?

And yes larger root modules that are dynamic in nature. I have a Landing Zone that I like to maintain and modify based on the needs of the company. But I know atmos is intended to be more modular so I am pivoting from what I would normally do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s a larger calculus that will depend on what your needs are. Spacelift is clearly an enterprise solution for enterprise challenges. That’s why we created the GitHub Actions which are entirely self-hosted and have no SaaS option or tie-in.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The larger root modules are definitely convenient from a developer perspective: Terraform handles the DAG. The problem is they do not scale as the complexity grows. If that’s not a concern, great, you can use them. It’s our experience, working in enterprise contexts, that these root modules grow and grow. The time to plan takes longer and longer, and they become more susceptible to transient errors. Additionally, the more that goes into a root module, the less reusable it is across organizations. Since Cloud Posse is primarily concerned with making infrastructure reusable, atmos is optimized for this use case.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The larger the root module, the harder it is to separate concerns, the harder it is to restrict what can change and when.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Every change risks changing everything every time it’s planned and applied, which means a huge blast radius.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the solution is to break it into smaller root modules, by lifecycle. But then the problem is as you say, the complexity is offloaded to the tooling that calls terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The first recommendation is to reduce the coupling between the layers when possible, reducing how often those dependencies are triggered. Then implement CD to roll out the changes: Terraform is responsible for provisioning foundations, and CD is responsible for orchestrating those changes.
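Tying this back to the dependency syntax from earlier in the thread, the remaining coupling between layers is expressed declaratively in `settings.depends_on`. A hedged sketch (component names and file paths are made up):

```yaml
components:
  terraform:
    eks-cluster:
      settings:
        depends_on:
          1:
            file: "config/eks/prod-cluster.json"   # external file dependency
          2:
            component: "vpc"                       # upstream component dependency
```

With this in place, `atmos describe affected` reports the component when the file or the upstream component changes, and `atmos describe dependents` walks the graph in the other direction so CD can trigger downstream stacks.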

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@pv also want to invite you to our weekly office hours. https://cloudposse.com/office-hours (we’ve run them for ~4-5 years and never missed a week)

pv avatar

Thanks @Erik Osterman (Cloud Posse) and @Andriy Knysh (Cloud Posse). I’ll review the docs you sent and look into the office hours. Appreciate all the attentive support

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anytime! We spend a lot of time thinking about these things, and really need to write up more documentation on “Well-Architected Terraform (according to Cloud Posse)” to make it easier to understand why we do the things we do. Especially when they go against a lot of norms that we no longer subscribe to.

Shiv avatar

How does one wrap a custom Go binary around the atmos CLI? I have a Go binary to validate VPC connectivity when attaching a VPC to a transit gateway, and I would like to run it as part of Terraform execution. Has anyone tried this use case?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


custom go binary … validate vpc connectivity when attaching vpc to transit gateway
It sounds like the implementation should be flipped around. Terraform should be using your Go code as a provider. Since it’s already in Go, that lift shouldn’t be too bad.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, something like what you want to do should be possible using atmos custom commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can create a custom command that first calls your command, then calls terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Custom commands have access to the config and accept standard command line parameters.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
commands:
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision vpc-transit-gateway
        description: This command provisions the transit gateway after first performing a connectivity check.
        
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "transit-gateway"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - bin/vpc-connectivity-check
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then you could simply run:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos terraform provision vpc-transit-gateway --stack prod
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Many tools instead implement things like before/after hooks. We may implement them at some point, but it’s never been something we’ve required at Cloud Posse, and we do a lot of Terraform. Granted, for these types of situations, our route is to go with a Terraform provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So that’s why we’ve written our utils and awsutils providers: they are our escape hatch whenever we need one. This is in line with how things should work in Terraform. You could call it our opinionated best practice.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As an aside, this sounds like a cool data source if you created one!

validate vpc connectivity when attaching vpc to transit gateway

Shiv avatar

This is great, thanks for validating this. I created it as a standalone Go binary that leverages AWS APIs to test the VPC connectivity. I will need to figure out how to make it a TF provider; I am fairly new to the world of Terraform, so I would have to look into it. Not sure if it’s too much of an ask, but have you come across any examples of turning a Go binary into a provider? I am guessing I need to leverage the TF SDK?

Shiv avatar

Are there any practical use cases where people use the custom command feature for non-atmos commands? Or is it specifically for atmos commands?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Well, I can!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We faced this same issue.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-provider-awsutils

Terraform provider to help with various AWS automation tasks (mostly all that stuff we cannot accomplish with the official AWS terraform provider)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We took the HashiCorp AWS provider, stripped everything out, and added in what we needed that was custom.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And, since you mention you’re new to Terraform, there’s also the local-exec provisioner, which is the escape hatch when you don’t have time to write a provider.
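For completeness, that escape hatch is a `local-exec` provisioner on a `null_resource` (it is a provisioner, not a standalone provider). A rough sketch of how the connectivity check could gate the attachment; the variable names and binary path are hypothetical:

```hcl
# Run the connectivity-check binary before creating the attachment.
resource "null_resource" "vpc_connectivity_check" {
  triggers = {
    vpc_id = var.vpc_id
  }

  provisioner "local-exec" {
    command = "bin/vpc-connectivity-check --vpc-id ${var.vpc_id}"
  }
}

resource "aws_ec2_transit_gateway_vpc_attachment" "this" {
  transit_gateway_id = var.transit_gateway_id
  vpc_id             = var.vpc_id
  subnet_ids         = var.subnet_ids

  # Fail the apply before attaching if the check exits non-zero.
  depends_on = [null_resource.vpc_connectivity_check]
}
```

The trade-off versus a real provider or data source is that a provisioner only runs on create (or when `triggers` change), so it won’t re-validate on every plan.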

Shiv avatar


We took the HashiCorp AWS provider, stripped everything out, and added in what we needed that was custom.
This is exactly what I was thinking. Great .

1
Shiv avatar

I will try it out, thanks much for your help

1
Release notes from atmos avatar
Release notes from atmos
04:24:30 AM

v1.64.0 Create Landing page @osterman (#540)

Release v1.64.0 · cloudposse/atmos

Create Landing page @osterman (#540) what

Create a custom landing page Update other docs

why

Explain what Atmos does in a few easy steps


2024-02-23

2024-02-24

Release notes from atmos avatar
Release notes from atmos
02:34:32 PM

v1.64.1 Enhancements

Fix responsiveness of css @osterman (#543)

Release v1.64.1 · cloudposse/atmos

Enhancements

Fix responsiveness of css @osterman (#543) what

Use % instead of vw, since the outer container is capped at a certain px size. Set a minimum height of the header in px, otherwis…


Fix responsiveness of css by osterman · Pull Request #543 · cloudposse/atmos

what

Use % instead of vw, since the outer container is capped at a certain px size. Set a minimum height of the header in px, otherwise if based on vh it can be impossibly small and gets eaten by…

RB avatar

Hi all. Just wondering, should website or github action changes trigger a release? I always figured these changes would get a no-release label since there aren’t any changes to atmos cli

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we deploy website on release

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe we can deploy it on merging to main

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, I think we should change that.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s deploy website always on merge to main

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Without a release.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I updated it in my current PR to do that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks (I did the same in your PR :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#544 Simplify install page

what

• Use tabs for each installation method
• Add new “installer” script installation method

why

• Make it easier to get started

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

btw, @RB can you share the command I should add to the install page for nixos?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or maybe @Jeremy White (Cloud Posse) if you’re around

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

found it.

nix-env -iA nixpkgs.atmos
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) good for re

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

approved

RB avatar

Thanks for the quick turnaround ! This should make the atmos cli releases easier to follow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, sorry about that!

np
1

2024-02-25
