#atmos (2022-06)

2022-06-01

Zach Bridges avatar
Zach Bridges

huge shoutout to @Andriy Knysh (Cloud Posse), great help and awesome experience doing a PR for atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @Zach Bridges! @Andriy Knysh (Cloud Posse) is amazing indeed

Release notes from atmos avatar
Release notes from atmos
07:24:37 PM

v1.4.19
what: add the processing of ENV vars to the atmos workflow command
why: take into account the ATMOS_WORKFLOWS_BASE_PATH ENV var. While all steps in a workflow processed the ENV vars, the atmos workflow command did not, and the ATMOS_WORKFLOWS_BASE_PATH ENV var was not used.

Release v1.4.19 · cloudposse/atmos

azec avatar

Does anyone mind explaining again the purpose of /catalog/ set of YAML configs and what does the tool internally do with this? https://github.com/cloudposse/atmos/tree/master/examples/complete/stacks/catalog

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the catalog we store YAML config for the component, not top-level stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all those files in the catalog are imported into top-level stacks. This keeps the top-level stack configs DRY: instead of repeating the component configs in each stack, you just import them.
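
For example, a top-level stack might import catalog entries like this (an illustrative sketch; the names are hypothetical):

import:
  - catalog/terraform/vpc
  - catalog/terraform/test-component

vars:
  tenant: tenant1
  environment: ue2
  stage: dev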

azec avatar

So for the catalog YAML files, do you have to explicitly import them in the stack files, or do they get auto-imported by the tool?

azec avatar
import:
  - catalog/terraform/top-level-component1
  - catalog/terraform/test-component
  - catalog/terraform/test-component-override
  - catalog/terraform/test-component-override-2
  - catalog/terraform/test-component-override-3
  - catalog/terraform/vpc
  - catalog/terraform/tenant1-ue2-dev
  - catalog/helmfile/echo-server
  - catalog/helmfile/infra-server
  - catalog/helmfile/infra-server-override
azec avatar

Also, are you manually creating all those YAML files, or are you using some tooling to generate them from TF variables files ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

manually

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(but we chose YAML so that any tool can read and write them. right now we write them manually)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, we do it manually for now, but we can add some tooling (maybe to the atmos CLI) to help with that

azec avatar

got it ..

azec avatar

@Andriy Knysh (Cloud Posse), is there a way for the tool to generate the entire proposed directory structure in whatever directory it is run in?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the components and stacks folders can have any structure, depending on your requirements

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the tool supports any level of folder nesting for components and stacks

azec avatar
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star `**` is supported)
# <https://en.wikipedia.org/wiki/Glob_(programming)>

# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: "."

components:
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: false
  helmfile:
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_BASE_PATH` ENV var, or `--helmfile-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  included_paths:
    - "**/*"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
    - "globals/**/*"
    - "catalog/**/*"
    - "**/*globals*"
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using `ATMOS_WORKFLOWS_BASE_PATH` ENV var, or `--workflows-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "workflows"

logs:
  verbose: false
  colors: true

azec avatar

Is that something you would put in the <PROJECT_ROOT>/cli/ directory?

azec avatar

I am confused by that, because the recommended layout mentions a cli dir: https://atmos.tools/#recommended-layout

azec avatar

But the live example has that atmos.yaml file at the root level: https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml

azec avatar

Is there an environment variable that could be tapped (set in the console) to tell atmos where to find the CLI config for a specific project …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when using a container, we put it into /usr/local/etc/atmos/atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then copy it into the container file system like this https://github.com/cloudposse/atmos/blob/master/examples/complete/Dockerfile#L36

COPY rootfs/ /
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you put it in the root of the repo, you could run atmos commands from the root of the repo (only)
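
For example (an illustrative sketch of the two approaches mentioned above; paths and names are placeholders):

# option 1: put the CLI config where every process can see it
mkdir -p /usr/local/etc/atmos
cp atmos.yaml /usr/local/etc/atmos/atmos.yaml

# option 2: keep atmos.yaml in the repo root and run atmos from there
cd /path/to/repo && atmos terraform plan vpc -s ue2-dev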

2022-06-02

azec avatar

I have been able to run atmos terraform init and atmos terraform plan but it still uses local state files - it doesn’t start using s3 backend for state and DynamoDB.

azec avatar

I am not sure what I am doing wrong …

azec avatar

In my components/terraform/iam/user/backend.tf I have ..

terraform {
  backend "s3" {
    # Filled out by atmos from stacks/globals/globals.yaml
  }
}
azec avatar

In my stacks/globals/globals.yaml I have …

terraform:
  vars: {}

  backend_type: s3 # s3, remote, vault, static, azurerm, etc.
  backend:
    s3:
      encrypt: true
      bucket: "<REDACTED>"
      key: "terraform.tfstate"
      dynamodb_table: "<REDACTED>"
      acl: "bucket-owner-full-control"
      region: "us-west-2"
      role_arn: null

azec avatar

When I run atmos terraform init iam/user -s nbi-ops-uw2-devops … it respects the backend configuration in the backend.tf file, but it is not being fed those backend-related variables from atmos… so it prompts me to enter values …

azec avatar

I feel like the example is missing this piece ….

azec avatar

I also see that the majority of Cloud Posse root modules (listed in the Terraform registry) do not define any partial backends …

RB avatar

Re: @azec missing backend

RB avatar

We usually generate the backend

RB avatar

The example sets auto_generate_backend_file to false; try setting it to true in your atmos.yaml file

RB avatar

When you run an atmos terraform plan you’ll see a backend.tf.json file generated by atmos within the component

RB avatar

This JSON file will be interpreted as HCL by Terraform
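
To illustrate (a sketch; the values come from your stack config, these are placeholders): with auto_generate_backend_file: true, the generated backend.tf.json is roughly shaped like

{
  "terraform": {
    "backend": {
      "s3": {
        "encrypt": true,
        "bucket": "my-tfstate-bucket",
        "key": "terraform.tfstate",
        "dynamodb_table": "my-tfstate-lock-table",
        "region": "us-west-2"
      }
    }
  }
}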

Release notes from atmos avatar
Release notes from atmos
06:24:37 PM

v1.4.20
what: Update Terraform workspace calculation for the legacy Spacelift stack processor
why: The LegacyTransformStackConfigToSpaceliftStacks function in the Spacelift stack processor was used to transform the infrastructure stacks to Spacelift stacks using legacy code (and old versions of the terraform-yaml-stack-config module) that does not take into account the atmos.yaml CLI config - this is very old code that does not know anything about the atmos CLI config and it was maintained to support the old versions of…

Release v1.4.20 · cloudposse/atmos

2022-06-03

Nimesh Amin avatar
Nimesh Amin

dumb question: how do you force-unlock with atmos? With atmos terraform force-unlock ID -s <stack>, the ID gets treated as the component and it fails.

RB avatar
atmos terraform force-unlock <component> -s <stack> <ID>
RB avatar

cc: @Nimesh Amin

Nimesh Amin avatar
Nimesh Amin

thank you ! I think I just tried every combination except that!

RB avatar

remember, you can always cd into the component directory and run the native terraform force-unlock command there too (provided you have selected the appropriate workspace)
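
For example (hypothetical component and workspace names):

cd components/terraform/vpc
terraform workspace select ue2-dev
terraform force-unlock <LOCK_ID>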

RB avatar

atmos simply wraps the terraform command

RB avatar

you can see all the commands atmos runs by adding a --dry-run iirc

Nimesh Amin avatar
Nimesh Amin

oh! That’s very good to know!

Matt Gowie avatar
Matt Gowie

Other option is to run atmos terraform shell and then execute your tf commands from there. That’s my preferred workflow!

2022-06-15

Release notes from atmos avatar
Release notes from atmos
07:24:38 PM

v1.4.21
what: Update atmos docs and GitHub workflows. Use GitHub environments for deployment. Upgrade to Go version 1.18.
why: New CLI documentation sections (describe all the CLI commands that atmos supports). Use GitHub environments for deployment to take advantage of the GitHub deployment API and UI (and not comment on the PR with the deployment URL, to not pollute the PR with unnecessary comments). Go version 1.18 supports many new features, including generics and allowing the any keyword instead of interface{}, which…

Release v1.4.21 · cloudposse/atmos

2022-06-16

dalekurt avatar
dalekurt

Hey everyone — I’m doing some catch-up on Atmos and I have a question: would you recommend using Atmos for creating a reference architecture (in AWS), such as the AWS Org?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use it to create all resources including AWS org, OUs and accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos is just a glue between components (code/logic) and stacks (config)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whenever you use terraform, you can use atmos

dalekurt avatar
dalekurt

Thank you @Andriy Knysh (Cloud Posse) I’m reading through the documentation. I have a lot of catching up to do

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just assume different roles to provision regular components and root-level privileged components (e.g. accounts)
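
For example (a sketch, mirroring the account component config that appears later in this log): role_arn: null on the backend means Terraform uses the caller's credentials for state access instead of assuming the default backend role:

components:
  terraform:
    account:
      backend:
        s3:
          role_arn: null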

dalekurt avatar
dalekurt

Has it (Atmos) been discussed in any of the office hours, like an overview

dalekurt avatar
dalekurt

I would like to use it for setting up all the initial AWS accounts to get started.

dalekurt avatar
dalekurt

Thank you @Andy Miguel (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@dalekurt feel free to book some time and I can show you how to go about it

2022-06-17

Matt Gowie avatar
Matt Gowie

Hey @Andriy Knysh (Cloud Posse) — running into a Spacelift / stacks config error and wondering if you’ve seen it before or can point me in the right direction. I just quickly peeled back all the layers, and I’m at the point where I would want to crack open the provider / atmos to get more debug information from the golang code, but I of course don’t want to do that

Matt Gowie avatar
Matt Gowie

Here is my issue —

module.spacelift.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks: Reading...
╷
│ Error: Failed to find a match for the import '/mnt/workspace/source/components/spacelift/stacks/**/*.yaml' ('/mnt/workspace/source/components/spacelift/stacks' + '**/*.yaml')
│ 
│   with module.spacelift.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks,
│   on .terraform/modules/spacelift.spacelift_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│    1: data "utils_spacelift_stack_config" "spacelift_stacks" {
│ 
╵
Releasing state lock. This may take a few moments...

I’m setting up a new spacelift administration stack for an existing org using the in-progress upstreamed spacelift stack from here. My stack_config_path_template is the default of stacks/%s.yaml. I have an atmos.yaml at the root of my project with the following config:

# See full configuration options @ <https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml>

base_path: ./

components:
  terraform:
    base_path: "components"
    auto_generate_backend_file: true

stacks:
  base_path: "stacks"
  name_pattern: "{environment}"

logs:
  verbose: true
  colors: true

Locally, atmos picks up that config fine, and if I use atmos to plan my spacelift config all is good. Running in Spacelift, however… it seems to be not picking up the atmos config, considering it’s a pathing issue where it’s looking for the stacks directory at the root of the component instead of at the root of the repo.

what

• Upstreaming the spacelift component

why

• This update sets up catalog support for the spacelift admin stacks • Allows multiple admin stacks

references

https://github.com/cloudposse/infra-live

Matt Gowie avatar
Matt Gowie

Would appreciate your thoughts — Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for this to work in a container and Spacelift, we use atmos.yaml in /usr/local/etc/atmos - this works for all cases

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in atmos.yaml,

base_path: ""
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we set ENV var ATMOS_BASE_PATH for both geodesic and Spacelift:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. geodesic root: export ATMOS_BASE_PATH=$(pwd)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Spacelift .spacelift/config.yml file:

stack_defaults:
  before_init:
    - spacelift-configure-paths
    - spacelift-write-vars
    - spacelift-tf-workspace
  environment:
    ATMOS_BASE_PATH: /mnt/workspace/source

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the point of all of that is that /usr/local/etc/atmos/atmos.yaml is visible to all processes. In Spacelift, atmos does not see your atmos.yaml in the root of your repo since Spacelift clones the repo to /mnt/workspace/source

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all other combinations of the configs work in some case or another, but not in all cases and all systems. Use the one explained above

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Also in Spacelift we are using Docker image with atmos installed and assign it to each stack. The image is usually the same infrastructure image used with geodesic, pushed to ECR

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

For public workers, install atmos in one of the before init scripts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie are you using self hosted or public workers?

Matt Gowie avatar
Matt Gowie

Ah I’m using public workers — that’s likely my issue, as I’ve done the config you mentioned above with private workers in the past without issue. This is an older client of mine that I’ve already upgraded onto Atmos, and I’m now moving onto Spacelift. Their project is small, so enterprise / self-hosted workers don’t make sense sadly.

I’ll try setting ATMOS_BASE_PATH :thumbsup:

One question: You mentioned “For public workers, install atmos in one of the before init scripts” — Does atmos need to be installed for the atmos package that is used in the utils provider to work? I wouldn’t expect so. If I configure things correctly (using ATMOS_BASE_PATH or mounting my atmos.yaml to the public worker correctly), the atmos binary not being on the system will not create an issue, correct?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a project right now that requires we use publicly hosted workers. We will be working on it next with @Andriy Knysh (Cloud Posse)

Matt Gowie avatar
Matt Gowie

Ah cool

I’ve done public workers with Atmos / Spacelift before. Don’t remember this hang up, but could’ve been something I just fixed and kept moving on.

I’ll let you folks know if I run into any other hiccups.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Thanks Matt

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos needs to be installed in the init scripts because we use atmos to select terraform workspace and parse yaml config to write the variables for the component in the stack
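
Concretely, those are the two commands shown elsewhere in this log:

atmos terraform workspace "$ATMOS_COMPONENT" --stack="$ATMOS_STACK"
atmos terraform generate varfile "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" -f spacelift.auto.tfvars.json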

Matt Gowie avatar
Matt Gowie

Ah gotcha. I’ve just worked around that in the past with the following caveats:

  1. From what I’ve seen, Spacelift will select the correct workspace without the workspace script.

  2. The write variables script ensures that the stack runs with current vars in that commit (which is the proper way to do it for sure), but if you don’t run that script the vars that the admin stack sets will still be picked up and used just fine. This leads to having to re-run the stack after the admin stack runs… which is less than ideal but it does work.

Matt Gowie avatar
Matt Gowie

But after typing out #2… I think I will install Atmos via an init script and run the write vars script to help avoid that stale vars confusion.

Matt Gowie avatar
Matt Gowie

Public workers can use public images, right? I wonder if we should publish a public geodesic image with Atmos installed + the spacelift-* scripts that can be used with public workers… I might give that a shot.

Matt Gowie avatar
Matt Gowie

Answering my own question: They do.

The following small public image worked — https://github.com/masterpointio/spacelift-atmos-runner

Since the name was already generic enough, I ended up adding the following code to spacelift-configure-paths to be able to utilize my repo’s atmos.yaml: https://github.com/masterpointio/spacelift-atmos-runner/blob/main/rootfs/usr/local/bin/spacelift-configure-paths#L6-L10

That did the trick and I’m able to run projects without needing to specify environment variables or the like.

Matt Gowie avatar
Matt Gowie

If it’s of interest, that image is available at: public.ecr.aws/w1j9e4y3/spacelift-atmos-runner:latest

That said, I’m sure you folks would want to build + maintain your own, which is the smart move. If you go that route and I can help in anyway, let me know.

Matt Gowie avatar
Matt Gowie

Last thing, I have started to question why this whole process is necessary. Why doesn’t the spacelift-automation module handle all of this for us regardless of public or private workers?

Since we know that all stacks created by the spacelift-automation module are going to be atmos stacks, then why don’t we just bake those before_init scripts into the automation module by default?

Doing the curl pipe pattern (curl <URL> | bash) would enable us to run an arbitrary amount of setup on both public and private workers, so we could do the configure-paths, tf-workspace, and write-vars steps via one before_init command. Seems to me like that would reduce some of the complexity around this and avoid passing these script files around.
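
For example, in .spacelift/config.yml (a hypothetical sketch; the script URL is a placeholder):

stack_defaults:
  before_init:
    - curl -fsSL https://example.com/atmos-spacelift-setup.sh | bash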

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it should, but we haven’t had the chance to optimize it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this is what we want

2022-06-21

el avatar

Hi everyone :slightly_smiling_face: I’m struggling to set the AWS profile correctly here. I’ve looked through the docs and the Github repo and it’s still not clear what I’m missing. Thanks in advance for the help!

atmos helmfile template aws-load-balancer-controller --stack=uw2-sandbox

I have a uw2-sandbox.yaml file with this component:

  helmfile:
    aws-load-balancer-controller:
      vars:
        installed: true

and the profile is being set to an unexpected value, causing the update-kubeconfig command to fail:

Variables for the component 'aws-load-balancer-controller' in the stack 'uw2-sandbox':

environment: uw2
installed: true
region: us-west-2
stage: sandbox

Using AWS_PROFILE=--gbl-sandbox-helm

/usr/local/bin/aws --profile --gbl-sandbox-helm eks update-kubeconfig --name=--uw2-sandbox--eks-cluster --region=us-west-2 --kubeconfig=/dev/shm/uw2-sandbox-kubecfg

aws: error: argument --profile: expected one argument
el avatar

I have a feeling it’s tenant/namespace/etc weirdness since I’m not using a tenant and it looks like the profile and name values are missing some interpolated string

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

From your atmos.yaml remove the tenant tokens
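
i.e., given the patterns from the example atmos.yaml earlier in this log, drop {tenant} if you don’t use it (a sketch):

components:
  helmfile:
    # before: "{namespace}-{tenant}-gbl-{stage}-helm" yields "--gbl-sandbox-helm" when namespace/tenant are unset
    helm_aws_profile_pattern: "{namespace}-gbl-{stage}-helm"
    cluster_name_pattern: "{namespace}-{environment}-{stage}-eks-cluster"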

el avatar

thank you haha

el avatar

that’s what I get for copying and pasting. Thank you, @Andriy Knysh (Cloud Posse)!

dalekurt avatar
dalekurt

Hello I’m planning on using Atmos in a test deploy (to learn) .

I’m reading through the accounts module https://github.com/cloudposse/terraform-aws-components/tree/master/modules/account Just to confirm: using a dash (-) in the account names is not permitted?

Example:

components:
  terraform:
    account:
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        account_email_format: aws+lops-%[email protected]
        account_iam_user_access_to_billing: DENY
        organization_enabled: true
        aws_service_access_principals:
          - cloudtrail.amazonaws.com
          - guardduty.amazonaws.com
          - ipam.amazonaws.com
          - ram.amazonaws.com
          - securityhub.amazonaws.com
          - servicequotas.amazonaws.com
          - sso.amazonaws.com
          - securityhub.amazonaws.com
          - auditmanager.amazonaws.com
        enabled_policy_types:
          - SERVICE_CONTROL_POLICY
          - TAG_POLICY
        organization_config:
          root_account:
            name: core-root
            stage: root
            tags:
              eks: false
          accounts: []
          organization:
            service_control_policies:
              - DenyNonNitroInstances
          organizational_units:
            - name: core
              accounts:
                - name: core-artifacts
                  tenant: core
                  stage: artifacts
                  tags:
                    eks: false
                - name: core-audit
                  tenant: core
                  stage: audit
                  tags:
                    eks: false
                - name: core-auto
                  tenant: core
                  stage: auto
                  tags:
                    eks: true
                - name: core-corp
                  tenant: core
                  stage: corp
                  tags:
                    eks: true
                - name: core-dns
                  tenant: core
                  stage: dns
                  tags:
                    eks: false
                - name: core-identity
                  tenant: core
                  stage: identity
                  tags:
                    eks: false
                - name: core-demo
                  tenant: core
                  stage: demo
                  tags:
                    eks: false
                - name: core-network
                  tenant: core
                  stage: network
                  tags:
                    eks: false
                - name: core-public
                  tenant: core
                  stage: public
                  tags:
                    eks: false
                - name: core-security
                  tenant: core
                  stage: security
                  tags:
                    eks: false
              service_control_policies:
                - DenyLeavingOrganization
            - name: plat
              accounts:
                - name: plat-dev
                  tenant: plat
                  stage: dev
                  tags:
                    eks: true
                - name: plat-sandbox
                  tenant: plat
                  stage: sandbox
                  tags:
                    eks: true
                - name: plat-staging
                  tenant: plat
                  stage: staging
                  tags:
                    eks: true
                - name: plat-prod
                  tenant: plat
                  stage: prod
                  tags:
                    eks: true
              service_control_policies:
                - DenyLeavingOrganization
RB avatar

I believe dashes are acceptable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It’s acceptable (although we don’t recommend using it, because your resource IDs will include an additional dash)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if your resource names contain dashes, it’s impossible to delimit based on - and know what field is what

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so that’s why we recommend not to do it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, when provisioning accounts, definitely look at the plan before applying because deleting accounts is a major PIA

RB avatar

@Erik Osterman (Cloud Posse) we use the same - like in the above example for plat-prod in our cplive infra, no ?

                - name: plat-prod
                  tenant: plat
                  stage: prod
                  tags:
                    eks: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB something’s wrong with that, because plat is the tenant

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so in null label, it should be:

• stage: prod

• tenant: plat

• namespace: cplive

The account should be named after the ID, not the name.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

This is a confusing aspect of our current conventions. For customers NOT using tenant (which is a relatively recent addition to null-label), “account name” should not have a dash and is exactly the same as stage. For customers using tenant, the account name is tenant-stage and we have a special configuration in null-label

    descriptor_formats:
      account_name:
        format: "%v-%v"
        labels:
          - tenant
          - stage

that creates the account name from the tenant and stage labels. This leads to code like this

account_name = lookup(module.this.descriptors, "account_name", var.stage) 
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Other than this specific usage, we highly recommend not using hyphens in any of the null-label labels.

dalekurt avatar
dalekurt

Thank you all for that.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Added additional detail above

2022-06-22

el avatar

Hello again :wave: I’m struggling to use aws-vault with atmos, because the underlying atmos eks update-kubeconfig command uses --profile, which isn’t playing nicely with assuming a role through aws-vault that requires 2FA.

I know CloudPosse has moved on to using Leapp so I’m going to give that a try. In the meantime, is there anything obvious I might be missing to get aws-vault to play more nicely with atmos?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the command supports either profile or IAM role, see https://atmos.tools/cli/commands/aws-eks-update-kubeconfig

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t know if you are asking about the role

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you login with aws-vault first (using 2FA or not); it writes the temporary credentials to your file system, then you can use a profile or a role with atmos or aws commands
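
For example (illustrative profile, component and stack names):

# aws-vault puts temporary credentials in the environment; atmos and terraform just use them
aws-vault exec my-profile -- atmos terraform plan vpc -s uw2-sandbox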

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(i’m trying to say that atmos has nothing to do with how you login to your AWS account and does not know anything about that :slightly_smiling_face:). In the end, it’s just a wrapper and a glue between components and stacks, and it calls terraform commands

el avatar

Thanks Andriy. I’m using aws-vault first, and then I’m prompted for 2FA again when I pass in the same profile with atmos

el avatar

yeah that’s what I was hoping for / assuming

el avatar

it would work perfectly if there were no --profile parameter in the underlying aws command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


it would work perfectly if there were no --profile parameter in the underlying aws command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it also supports IAM role, see the doc above

el avatar

I see. I think the real issue is slightly different then. I’m running atmos helmfile template ... and under the hood it’s calling aws eks update-kubeconfig. Should I be passing --role-arn along with my atmos helmfile template ... command?

el avatar

or can I set up my kubeconfig ahead of time, so it’s not called when I run atmos helmfile template?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh, atmos helmfile still uses only the profile - we wanted to update it but did not get to it yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll try to update it to support role ARN in the next release (in a few days)

el avatar

ah very cool, thank you Andriy!

2022-06-23

el avatar

Hello yet again :wave: I encountered some unexpected behavior (and a misleading error message) with creating a Terraform component that has a variable named environment.

I’m planning on switching to using CloudPosse’s region/namespace/stage nomenclature soon, but didn’t expect this to fail in the meantime. Clearly there’s some variable shadowing going on. I can work around it, but wanted to paste it here anyway - and thanks for building such an awesome tool!

# uw2-sandbox.yaml

components:
  terraform:
    eks-iam:
      backend:
        s3:
          workspace_key_prefix: "eks-iam"
      vars:
        environment: "sandbox"

And the output:

$ atmos terraform plan eks-iam --stack=uw2-sandbox

Searched all stack files, but could not find config for the component 'eks-iam' in the stack 'uw2-sandbox'.
Check that all attributes in the stack name pattern '{environment}-{stage}' are defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@el a few observations here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
vars:
  environment: "sandbox"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t specify the context (tenant, environment (region), stage (account)) in all the components; we specify that in separate global YAML files and then import them into stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(but you know that, since you mentioned “I’m planning on switching to using CloudPosse’s region/namespace/stage nomenclature”)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where do you have the atmos.yaml file located?

el avatar

at the root level, next to components and stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

place it into /usr/local/etc/atmos/atmos.yaml - this works in all cases for all processes (atmos itself, and the TF utils provider, which is used to get the remote state of TF components and uses the atmos.yaml CLI config as well)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

set base_path to empty string

# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: ""
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then execute

export ATMOS_BASE_PATH=$(pwd)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this works in a Docker container (geodesic does it automatically if you use it)

el avatar

ah yeah right now I’m just using the atmos CLI on MacOS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform:
  eks-iam:
    backend:
      s3:
        workspace_key_prefix: "eks-iam"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no need to specify workspace_key_prefix, the latest atmos does it automatically (you can override it though)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it’s still not working

atmos terraform plan eks-iam --stack=uw2-sandbox
el avatar

sweet, thank you for the helpful tips

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

send me a DM with your code, I’ll review it

el avatar

thanks will give it a shot later and let you know how it goes. appreciate the help!

2022-06-24

azec avatar

Hi there! I’ve been using atmos successfully for 1 month now from my workstation to decouple Terraform modules from config parameters/variables. I am getting ready to try this within GitLab CI/CD. I am curious whether you have any recommendations or examples (even if they are from a different CI/CD system, e.g. GitHub Actions) of how you work with atmos from CI/CD pipelines? Do you use things like GNU Make for each infra repo, with tasks that use atmos, or something else? I have a non-root-modules Terraform repository and, for now, just 1 repository with live infra (similar to what CloudPosse presented in office hours multiple times). My live repository is broken down into folders for each AWS account, and each of those top-level folders then has the atmos-suggested structure.

RB avatar

Have you seen atmos workflows subcommand ?

RB avatar

We usually use github but we’ve worked with gitlab as a version control source. All of our terraform CICD is done using spacelift.

RB avatar

It’s technically also possible to set up atmos using other terraform automation tools

RB avatar

but to answer your question, we do not run atmos from any Makefile targets at the moment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@azec we use atmos with Spacelift (with both public/shared workers and private worker pool). We call atmos commands from Spacelift hooks. We can show you how to do it if you are interested

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding calling atmos from CI/CD pipelines, it should be similar

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you install atmos (I suppose in the pipeline container)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

something like

ATMOS_VERSION=1.4.21

# Using `registry.hub.docker.com/cloudposse/geodesic:latest-debian` as Spacelift runner image on public worker pool
apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*"

# If runner image is Alpine Linux
# apk add atmos@cloudposse~=${ATMOS_VERSION}

atmos version

# Copy the atmos CLI config file into the destination `/usr/local/etc/atmos` where all processes can see it
mkdir -p /usr/local/etc/atmos
cp /mnt/workspace/source/rootfs/usr/local/etc/atmos/atmos.yaml /usr/local/etc/atmos/atmos.yaml
cat /usr/local/etc/atmos/atmos.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you need to create some IAM role that your pipeline runners can assume with permissions to call into your AWS account(s)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the role needs a trust policy to allow the Ci/CD system to assume it (it could be an AWS account or another role)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we execute two more commands before terraform plan/apply

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
echo "Selecting Terraform workspace..."
echo "...with AWS_PROFILE=$AWS_PROFILE"
echo "...with AWS_CONFIG_FILE=$AWS_CONFIG_FILE"

atmos terraform workspace "$ATMOS_COMPONENT" --stack="$ATMOS_STACK"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  echo "Writing Stack variables to spacelift.auto.tfvars.json for Spacelift..."

  atmos terraform generate varfile "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" -f spacelift.auto.tfvars.json >/dev/null
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so basically, you need:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Install atmos on the image that your CICD runs on. Place atmos.yaml into /usr/local/etc/atmos/atmos.yaml on the image
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Configure IAM role for CICD to assume with permissions to call into your AWS account(s)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Call atmos terraform workspace and atmos terraform generate varfile for the component in the stack
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  4. Then, if your CICD executes just plain TF commands, it will run terraform plan/apply (that’s what Spacelift does). Or, instead of calling atmos terraform workspace and atmos terraform generate varfile, you just run atmos terraform plan (apply) <component> -s <stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  5. CICD will clone the repo with components and stacks into the container where atmos is installed and atmos.yaml placed into /usr/local/etc/atmos/atmos.yaml
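
Putting those steps together for GitLab specifically, a minimal .gitlab-ci.yml job might look like this (a hypothetical sketch, not a Cloud Posse reference; image, component and stack names are placeholders):

plan:
  image: registry.example.com/infrastructure:latest  # image with atmos + terraform installed, atmos.yaml in /usr/local/etc/atmos
  variables:
    ATMOS_BASE_PATH: $CI_PROJECT_DIR
  script:
    - atmos terraform plan vpc -s ue2-dev
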
azec avatar

Ok, those are all really great insights.

azec avatar

With Spacelift, you just simplify the CI/CD workflows - because it runs meaningful Terraform steps as tasks via webhooks ?

azec avatar

I am curious what your Spacelift workflow looks like for live infra repo.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have all the stacks in Spacelift so it shows you the complete picture of your infra

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I am curious what your Spacelift workflow looks like for live infra repo

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Spacelift workflow - in terms of how we handle changes in PRs? Or how we provision Spacelift stacks?

azec avatar

I was trying to ask how you handle changes in PRs and outcomes in Spacelift? The git flow…

azec avatar

Related to AWS IAM role assumption from CI/CD: we solved that by installing a GitLab OIDC IdP in IAM, and the Terraform power role has a trust policy with IdP audience and subject checks (using GitLab org:project:repo:branch filters).

azec avatar

I guess I need to read through Spacelift docs to really get better understanding of that. But it seems like it integrates well with GitLab as well.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

OIDC IdP is good

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding git flow:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
## Pull Request Workflow

1. Create a new branch & make changes
2. Create a new pull request (targeting the main branch)
3. View the modified resources directly in the pull request
4. View the Spacelift run (terraform plan) in Spacelift PRs tab for the stack (we provision a Rego policy to detect changes in PRs and trigger Spacelift proposed runs)
5. If the changes look good, merge the pull request
6. View the Spacelift run (terraform plan) in Spacelift Runs tab for the stack (we provision a Rego policy to detect merging to the main branch and trigger Spacelift tracked runs)
7. If the changes look good, confirm the Spacelift run (Confirm button) - Spacelift will run `terraform apply` on the stack
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

simplified version

azec avatar

That looks great. I want to be able to really simplify git flow as much as possible on the live infra repo.

azec avatar

I am going to zone out on the Spacelift docs throughout the next week.

azec avatar

I already went through section on GitLab VCS integration.

azec avatar

What is the relation between atmos stacks & workflows to Spacelift stacks ?

azec avatar

I guess it is hard to get the feel of this without trying …

azec avatar

But if you can answer just simplified version, that is good enough for me and thank you!

azec avatar

I have picked this naming schema with terraform null label CP module:

{namespace}-{tenant}-{environment}-{stage}--{name}--------{attributes}
Example:
nbi---------[cto]----[gbl]--------[devops]-[terraformer]-[cicd,...]

My stage for now is always the most simplified name of the AWS account alias, because the org chose to break out environments with individual AWS account isolation. I don’t expect that I will have classic dev|staging|prod stages within a single account. But it would be supported with the above schema.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in atmos, we have components (logic) and stacks (config)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos stacks define vars and other settings for all regions and accounts where you deploy the components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. we have vpc component and we have ue2-dev and ue2-prod atmos stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

now in Spacelift, we want to see the entries for all possible combinations of components in all infra (atmos) stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to simplify, Spacelift stacks are a Cartesian product of atmos components and atmos stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Spacelift stacks ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the names are constructed by using the context (tenant, environment/region, stage/account) + the component name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this way, Spacelift shows you everything deployed into your environments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in other words, a Spacelift stack is an atmos stack (e.g. ue2-dev) + a component deployed into that stack (e.g. vpc)
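
So, illustratively, two atmos stacks × two components yields four Spacelift stacks, named from the context plus the component name:

ue2-dev  + vpc -> ue2-dev-vpc
ue2-dev  + eks -> ue2-dev-eks
ue2-prod + vpc -> ue2-prod-vpc
ue2-prod + eks -> ue2-prod-eks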

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all of that is calculated and deployed to Spacelift using these components/modules/providers:

 - <https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift>
 - <https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation>
 - <https://github.com/cloudposse/terraform-provider-utils/tree/main/internal/spacelift>
 - <https://github.com/cloudposse/atmos/tree/master/pkg/spacelift>
azec avatar

Catching up from last week …

azec avatar

I still have a lot to read, but on the https://docs.cloudposse.com/reference/stacks/ page, in the most complete example at the bottom, I found:

terraform:
  first-component:
    settings:
      spacelift:
        ...
        # Which git branch triggers this workspace
        branch: develop
        ...

Does this assume that Spacelift can be configured to run plans in lower environments when changes are pushed to branches other than main ?

One of the git flows proposed by GitLab is to have a branch per environment, so this would match that. But it seems like you were successful configuring Spacelift to work with any changes to main, regardless of what environment is being changed on the feature branch.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each Spacelift stack can be assigned a GH branch

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the branches can be different

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but yes, we trigger a stack in two cases: pushes to a PR that changed the stack’s code, and merging to the default (e.g. main) branch

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can def provision the same stack twice (but with diff names) using diff GH branches

azec avatar

My infra-universal repo is broken down in this way :

infra-universal # this is git repo root
 |
 |--aws_account_1
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
 |--aws_account_2
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
 |--aws_account_3
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
 |--aws_account_4
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
azec avatar

Do I need to place https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift in each of my directories under components/terraform/spacelift … ? And then apply it manually from my workstation for each environment against Spacelift (using spacelift provider) ?

azec avatar

I think I got it. https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift actually references the https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation module. It is recommended to place the module from the 1st link into the live infra repo under components/terraform/spacelift. For me there would be 4x instances of that, so I assume I would have to link the corresponding 4x projects in Spacelift.

azec avatar

Once I have a Spacelift worker pool deployed (for each AWS account for a start), do I need to directly apply that 1st stack from components/terraform/spacelift (I know atmos can’t be used for this) against Spacelift?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t remember what you are using, but we usually provision as many Spacelift admin stacks (an admin stack manages and provisions regular stacks) as we have OUs in the org

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding this structure

--aws_account_1
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
 |--aws_account_2
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
 |--aws_account_3
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
 |--aws_account_4
     |
     |-- components
     |-- stacks
     |-- atmos.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the components folder we have terraform code for the components; they are “stateless”, meaning they can be deployed into any account/region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why there is only one components folder per repo

azec avatar

Hmmm .. I see … so this would be a beginner mistake then.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a structure we are using

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

catalog is for the components YAML config, and all files from the catalog are imported into top-level stacks in orgs

azec avatar

But could I still get by with configuring 1 root project in Spacelift for each of the target accounts, and then pointing them to components/terraform/spacelift/ of each individual directory in my structure above?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

mixins are some common vars/settings for each account and region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

orgs -> OUs (tenants) -> accounts -> regions
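
A sketch of that stacks layout (names are illustrative):

stacks/
  catalog/        # component YAML configs, imported into top-level stacks
  mixins/         # common vars/settings per account and per region
  orgs/
    acme/         # org
      core/       # OU (tenant)
        dev/      # account
          us-east-2.yaml   # region -> top-level stack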

azec avatar

Would this approach lead to a 4x charge from Spacelift in terms of licensing … ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, b/c Spacelift charges per worker or per minute (depending on the plan); it does not matter how many stacks you have

azec avatar

cool …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and yes, you can provision one admin stack (manually using TF or atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then that admin stack kicks off all other stacks (admin and regular)

azec avatar

From there, the addition of any new stacks to the repo is auto-detected, and they are provisioned as workspaces/stacks in Spacelift?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the point is, by separating components (logic) from stacks (config), you don’t have to repeat anything (like what you showed above, repeated 4 times)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you just configure the corresponding YAML stacks

azec avatar

Yes, right now I repeat the root-level terraform config for each of the AWS accounts/directories above. It is tedious and not very DRY, but it allows me to have them pinned to different versions of the backing TF modules.

azec avatar

For each component 4x …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t think you have/use 4 diff versions at the same time

azec avatar

I guess I leaned towards this approach to have more decoupling, but actually I may need to rework that to keep things simpler. I don’t mind destroying the resources I have already. There aren’t many.

azec avatar

It is like a greenfield project.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so in our structure, that could be done by placing 2-3 versions of the component into components under diff names, e.g. vpc-v1.0

azec avatar

yes …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then in YAML

components:
  terraform:
    vpc-defaults:
      metadata:
        type: abstract 
      vars:
        #default vars here 

    my-vpc-1:
      metadata:
        component: vpc-v1.0
        inherits:
          - vpc-defaults
      vars: .....

    my-vpc-2:
      metadata:
        component: vpc-v2.0
        inherits:
          - vpc-defaults
      vars: .....
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform plan my-vpc-1 -s xxxx
atmos terraform plan my-vpc-2 -s yyyy
1
azec avatar

Also, most of my provider configurations for each component are currently hardcoded, but that could be improved with some variables that, just like all other variables, can be passed down from stack configs… e.g.

provider "aws" {
  region              = var.region
  allowed_account_ids = ["<REDACTED>"]
  # Web Identity Role Federation only used in CI/CD
  assume_role_with_web_identity {
    # NOTE: Variables are not allowed in provider configurations block, so these values are hardcoded for all CI/CD purposes
    role_arn           = "<REDACTED>"
    session_name       = "<REDACTED>"
    duration           = "1h"
    # NOTE: Ensure this is substituted in CI/CD runtime (either within gnu make step or GitLab pipeline)
    # Hint: sed -i '' -e "s/GITLAB_OPENIDC_TOKEN/$CI_JOB_JWT_V2/g" providers.tf
    web_identity_token = "GITLAB_OPENIDC_TOKEN" # This is just placeholder
  }

  default_tags {
    tags = {
      "Owner"              = "<REDACTED>"
      "GitLab-Owners"      = "<REDACTED>"
      "Repository"         = "<REDACTED>"
      "VSAD"               = "<REDACTED>"
      "DataClassification" = ""
      "Application"        = ""
      "ProductPortfolio"   = ""
    }
  }
}
azec avatar

How do you solve AWS access and role assumption from:

  1. Spacelift public workers
  2. Spacelift private workers
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s a separate issue, and it’s different for public and private workers

azec avatar

I have a Terraform-power role in each account that can have expanding policies on it as time goes by … depending on what AWS services we end up needing to deploy.

azec avatar

Ok, I think I will try to get buy-in from my team on the Spacelift medium tier in the next month.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for public workers, we allow each stack to assume an IAM role https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role
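
A minimal sketch of that (the stack definition and role ARN are hypothetical; the IAM role’s trust policy must also allow Spacelift to assume it):

resource "spacelift_stack" "vpc" {
  name       = "vpc"              # hypothetical stack
  repository = "infrastructure"   # hypothetical repo
  branch     = "main"
}

# Attach an IAM role that this stack's runs will assume
resource "spacelift_aws_role" "vpc" {
  stack_id = spacelift_stack.vpc.id
  role_arn = "arn:aws:iam::111111111111:role/spacelift" # hypothetical role ARN
}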

azec avatar

And then try testing all this …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for private pools, since you deploy them in your VPC, use instance profiles with permissions to assume IAM roles into the AWS accounts

1
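
A minimal sketch of that setup (all names and ARNs are hypothetical):

# Role the worker EC2 instances run as
data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "worker" {
  name               = "spacelift-worker" # hypothetical
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

# Allow the worker role to assume the Terraform roles in the target accounts
resource "aws_iam_role_policy" "assume_targets" {
  name = "assume-terraform-roles"
  role = aws_iam_role.worker.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sts:AssumeRole"
      Resource = "arn:aws:iam::*:role/terraform" # hypothetical delegated role name
    }]
  })
}

# Attach to the worker instances via an instance profile
resource "aws_iam_instance_profile" "worker" {
  name = "spacelift-worker"
  role = aws_iam_role.worker.name
}
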
azec avatar

for private pools, probably worker pool per account/vpc instead of just 1 worker pool in some central account …

azec avatar

1-1 mappings between those, separate worker pool configurations for each Spacelift admin project/stack …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not really, it depends on many factors. We deploy one worker pool for all accounts and regions, or a worker pool per region, or separate worker pools for prod-related and non-prod-related stuff, depending on security, compliance, maintenance, cost and other requirements

azec avatar


for public workers, we allow each stack to assume an IAM role https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role
Is this done by one of those CP Terraform modules for Spacelift that you provided above?

1
azec avatar

Or does it have to be done separately?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, there are a lot of things that need to be put together. Ask questions if you have them; we have deployed all of those combinations and can help. Also ask in spacelift

azec avatar

thank you so much! this is very useful and you have helped me to get oriented in the right direction!

azec avatar

I’ve taken time to refactor my environment organization so I only have one components dir and one stacks dir under the repo root.

azec avatar

However, I started hitting this problem where the 1st environment persists the workspace name in components/terraform/<COMPONENT_DIR>/.terraform/environment. Then, when I run the same stack for a different AWS account, I get prompted to select a workspace.

azec avatar
Initializing the backend...

The currently selected workspace (nbi-cto-gbl-devops) does not exist.
  This is expected behavior when the selected workspace did not have an
  existing non-empty state. Please enter a number to select a workspace:

  1. default
  2. nbi-cto-gbl-innov8
azec avatar
Selecting a workspace when running Terraform in automation

Introduction When running Terraform CLI with multiple workspaces, the terraform init command will prompt to select a workspace, like so: $ terraform init Initializing the backend… Successfully …

azec avatar

Curious if you have a remedy for this @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you using atmos or plain terraform commands?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos selects the correct workspace every time you run atmos terraform plan/apply

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but in general, take a look at the TF_DATA_DIR ENV var

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can set it something like this

# Org-specific Terraform work directory
ENV TF_DATA_DIR=".terraform${TENANT:+-$TENANT}"
azec avatar

all atmos

azec avatar

is TENANT something atmos exports before it proceeds with selecting/creating the workspace?

azec avatar

At the root of the repo where atmos.yaml is, I also maintain a tf-cli-config.tfrc file. Currently the only config I have is:

plugin_cache_dir = "$HOME/.cache/terraform" # Must pre-exist on file system

In addition to that, I am using the direnv tool to configure env variables as soon as I enter the project root dir. Currently the content of the .envrc file that direnv respects is only:

export TF_CLI_CONFIG_FILE=$(pwd)/tf-cli-config.tfrc

so it plays with the above and tells TF where to look for the .tfrc file before anything runs.

direnv – unclutter your .profile

unclutter your .profile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Is TENANT something atmos exports before it proceeds with selecting/creating the workspace?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using tenants (var.tenant defined in YAML stack configs), atmos automatically includes tenant in all calculations (TF workspace, varfile, planfile, etc.)
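
For example, with context vars like these in the stack config (a minimal sketch; the values are hypothetical):

vars:
  tenant: tenant1
  environment: ue2
  stage: dev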

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF workspace will be in the format tenant1-ue2-dev in the example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for a diff tenant, it will be tenant2-ue2-dev etc.

azec avatar

I have all that in place regarding tenants and workspaces. But I guess it matters at what time TF_DATA_DIR from your example gets exported. In my scenario, I have the same tenant deployed in multiple AWS accounts, so binding the tenant var to the path of TF_DATA_DIR still doesn’t make sense.

But, since I am using desk, I added to my ~/.desk/desks/tf.sh (which is the config for my Terraform desk) a function which assumes a SAML-federated role into each account. When I call that function, it exports the AWS_ACCOUNT env var. In addition, I added

export TF_DATA_DIR=".terraform${AWS_ACCOUNT:+-$AWS_ACCOUNT}"

to the <PROJECT_ROOT>/.envrc file. So, whenever I call the function to assume another role / hop accounts, a new value for AWS_ACCOUNT gets exported. Then in <PROJECT_ROOT> I just do

direnv allow .

which reloads/recomputes all env variables from <PROJECT_ROOT>/.envrc and always gives me a new value for TF_DATA_DIR.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’re actually not using TF_DATA_DIR; it was just an example of how to handle it in a diff way. We just use atmos (in Spacelift as well), which calculates the TF workspace from the context (tenant, environment, stage)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you try https://www.leapp.cloud/ - we use it to assume a primary role into the identity account

Leapp - Make the switch, today.attachment image

Manage your Cloud credentials locally and improve your workflow with the only open-source desktop app you’ll ever need.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then each TF component has a providers.tf file with the following

provider "aws" {
  region = var.region

  profile = module.iam_roles.profiles_enabled ? coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name) : null

  dynamic "assume_role" {
    for_each = module.iam_roles.profiles_enabled ? [] : ["role"]
    content {
      role_arn = coalesce(var.import_role_arn, module.iam_roles.terraform_role_arn)
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

variable "import_profile_name" {
  type        = string
  default     = null
  description = "AWS Profile name to use when importing a resource"
}

variable "import_role_arn" {
  type        = string
  default     = null
  description = "IAM Role ARN to use when importing a resource"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which just means (and I’m trying to point you in that direction) that we don’t manually assume roles into each account when provisioning resources. We assume roles into the identity account, and then each component is configured with the correct role or profile to assume into the other accounts (dev, prod, staging, etc.), which is all done automatically by terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for this to work, the primary role(s) in the identity account have permissions to assume roles in the other infra accounts, while the delegated roles in the other accounts have a trust policy that allows the primary role from identity to assume them
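
For example, the trust side could look like this (a minimal sketch; the account IDs and role names are hypothetical):

# Delegated role in an infra account (e.g. dev) that trusts the primary role in identity
resource "aws_iam_role" "terraform" {
  name = "terraform"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:role/primary" } # identity account
    }]
  })
}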

azec avatar


we’re actually not using TF_DATA_DIR; it was just an example of how to handle it in a diff way. We just use atmos (in Spacelift as well), which calculates the TF workspace from the context (tenant, environment, stage)
Again, there is no problem with atmos. Now that I don’t have separate directories for each AWS account, each holding its own components/terraform/<COMPONENT_NAME> copies, what ends up happening (and only in the local workflow) is:

• running for AWS account A, atmos computes the workspace name correctly, and it gets saved to the components/terraform/<COMPONENT_NAME>/.terraform/environment file with the value of the computed workspace name my-workspace-something-A (note that this is when you don’t have any TF_DATA_DIR set, so the default is .terraform)

• after that, running for AWS account B, atmos again computes the workspace name correctly, but before it proceeds with selecting it, terraform init finds the above components/terraform/<COMPONENT_NAME>/.terraform/environment file with the value my-workspace-something-A inside. That confuses it, and then it causes the prompt that I have linked here

azec avatar

In CI/CD, this issue doesn’t exist because the .terraform dir is always ephemeral: one job run is for one specific AWS account. Then, for a new AWS account (also a new stack and workspace), on a new job and a new container, the checked-out source doesn’t come with a components/terraform/<COMPONENT_NAME>/.terraform dir, so atmos (internally, terraform init) doesn’t get confused about workspaces. It always computes them correctly, though.

azec avatar

I have these in atmos.yaml:

components:
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true

so I keep seeing the terraform init pre-run, no matter what other atmos terraform <command> I issue.

azec avatar

Thank you for pointing me to TF_DATA_DIR, it was very useful.

azec avatar

Here is what my new project organization looks like.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

from the issue above, looks like TF_DATA_DIR should be set for each account in your case (not tenant)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or you can run atmos terraform clean xxx -s yyy before switching the accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
> atmos terraform clean test/test-component-override -s tenant1-ue2-dev

Deleting '.terraform' folder
Deleting '.terraform.lock.hcl' file
Deleting terraform varfile: tenant1-ue2-dev-test-test-component-override.terraform.tfvars.json
Deleting terraform planfile: tenant1-ue2-dev-test-test-component-override.planfile
azec avatar

That is neat! I wasn’t aware of atmos terraform clean

azec avatar

Even better …

azec avatar

There might be a bug with atmos terraform clean when TF_DATA_DIR is set.

$ env | grep 'TF_DATA_DIR'
TF_DATA_DIR=.terraform-devops

$ atmos terraform clean ecr/private -s nbi-cto-uw2-devops

Deleting '.terraform' folder
Deleting '.terraform.lock.hcl' file
Deleting terraform varfile: nbi-cto-uw2-devops-ecr-private.terraform.tfvars.json
Deleting terraform planfile: nbi-cto-uw2-devops-ecr-private.planfile
Deleting 'backend.tf.json' file
Found ENV var TF_DATA_DIR=.terraform-devops
Do you want to delete the folder '.terraform-devops'? (only 'yes' will be accepted to approve)
Enter a value: yes
Deleting folder '.terraform-devops'

$ ls -lah components/terraform/ecr/private

drwxr-xr-x  12 zecam  staff   384B Jul  1 10:26 .
drwxr-xr-x   3 zecam  staff    96B Jun 28 13:18 ..
drwxr-xr-x   6 zecam  staff   192B Jun 30 15:16 .terraform-devops    <-- still present on FS
drwxr-xr-x   6 zecam  staff   192B Jun 30 15:32 .terraform-innov8
drwxr-xr-x   6 zecam  staff   192B Jun 30 15:10 .terraform-rkstr8
-rw-r--r--   1 zecam  staff   9.9K Jun 30 16:06 context.tf
...
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, we actually did not use TF_DATA_DIR much, so it was not tested 100%. I’ll look into that

azec avatar

I liked that it paired well with TF_DATA_DIR and figured out which specific dir needed to be deleted, but for some reason it didn’t actually delete it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

found an issue, will fix shortly

1
azec avatar


did you try https://www.leapp.cloud/ - we use it to assume a primary role into the identity account
I am looking into this. I am a fan of https://awsu.me/, with some custom-written plugins for SAML 2.0 Federation.

Leapp - Make the switch, today.attachment image

Manage your Cloud credentials locally and improve your workflow with the only open-source desktop app you’ll ever need.

AWSume: AWS Assume Made Awesome! | AWSume

Awsume - A cli that makes using AWS IAM credentials easy

azec avatar

So I might build a custom image on top of geodesic that includes our forks of awsume and our plugin for SAML 2.0

azec avatar

Based on the docs, Leapp supports only a few Identity Providers, none of them being the one we work with.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
1
azec avatar

thanks!

2022-06-27

Release notes from atmos avatar
Release notes from atmos
04:04:39 PM

v1.4.22 what Add ATMOS_CLI_CONFIG_PATH ENV var Detect more YAML stack misconfigurations Add functionality to define atmos custom CLI commands why

ATMOS_CLI_CONFIG_PATH ENV var allows specifying the location of atmos.yaml CLI config file. This is useful for CI/CD environments (e.g. Spacelift) where an infrastructure repository gets loaded into a custom path and atmos.yaml is not in the locations where atmos expects to find it (no need to copy atmos.yaml into /usr/local/etc/atmos/atmos.yaml)
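
For example (the path is hypothetical; per the release notes, this points atmos at the location of atmos.yaml):

export ATMOS_CLI_CONFIG_PATH=/path/to/infrastructure/config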

Detect…

Release v1.4.22 · cloudposse/atmos

what Add ATMOS_CLI_CONFIG_PATH ENV var Detect more YAML stack misconfigurations Add functionality to define atmos custom CLI commands why ATMOS_CLI_CONFIG_PATH ENV var allows specifying the loc…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is a very exciting release! (I think we should have bumped the major. @Andriy Knysh (Cloud Posse))

This adds the ability to add any number of commands/subcommands to atmos to further streamline your tooling under one interface.

Let’s say you wanted to add a new command called “provision”, you would do it like this:

- name: terraform
  description: Execute terraform commands
  # subcommands
  commands:
    - name: provision
      description: This command provisions terraform components
      arguments:
        - name: component
          description: Name of the component
      flags:
        - name: stack
          shorthand: s
          description: Name of the stack
          required: true
      # ENV var values support Go templates
      env:
        - key: ATMOS_COMPONENT
          value: "{{ .Arguments.component }}"
        - key: ATMOS_STACK
          value: "{{ .Flags.stack }}"
      steps:
        - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
        - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
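
With that config, the new subcommand is invoked like any built-in atmos command; something like this (the component and stack names here are hypothetical):

atmos terraform provision vpc -s tenant1-ue2-dev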

But other practical use-cases are:

• you use ansible and you want to call it from atmos

• you use the serverless framework, and want to streamline it

• you want to give developers a simple “env up” and “env down” command, this would be how.

Release v1.4.22 · cloudposse/atmos

what Add ATMOS_CLI_CONFIG_PATH ENV var Detect more YAML stack misconfigurations Add functionality to define atmos custom CLI commands why ATMOS_CLI_CONFIG_PATH ENV var allows specifying the loc…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll do a major release (let’s add that to the docs first)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

another useful thing to add would be hooks, e.g. before and after hooks for terraform plan/apply

1

2022-06-30
