#atmos (2022-06)
2022-06-01
![Zach Bridges avatar](https://avatars.slack-edge.com/2022-04-11/3376078893556_cdcada3ce1fe71328fc7_72.jpg)
huge shoutout to @Andriy Knysh (Cloud Posse), great help and awesome experience doing a PR for atmos
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
thanks @Zach Bridges! @Andriy Knysh (Cloud Posse) is amazing indeed
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.4.19
what: add the processing of ENV vars to the `atmos workflow` command
why: take into account the `ATMOS_WORKFLOWS_BASE_PATH` ENV var. While all steps in a workflow processed the ENV vars, the `atmos workflow` command did not, and the `ATMOS_WORKFLOWS_BASE_PATH` ENV var was not used.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Does anyone mind explaining again the purpose of the /catalog/ set of YAML configs, and what the tool does with them internally?
https://github.com/cloudposse/atmos/tree/master/examples/complete/stacks/catalog
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
in the catalog we store the YAML config for the components, not top-level stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
all those files in the catalog are imported into the top-level stacks. This makes the top-level stack configs DRY: instead of repeating the component configs in each stack, you just import them
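The pattern can be sketched with a hypothetical catalog entry and a top-level stack that imports it (file names and values here are illustrative, not taken from the actual repo):

```yaml
# stacks/catalog/terraform/vpc.yaml -- reusable component config (hypothetical)
components:
  terraform:
    vpc:
      vars:
        cidr_block: "10.0.0.0/16"
```

```yaml
# stacks/tenant1/ue2/dev.yaml -- top-level stack importing the catalog entry
import:
  - catalog/terraform/vpc
vars:
  stage: dev
```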
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
So for the catalog YAML files, do you have to also explicitly import them in the stack files, or do they get auto-imported by the tool?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Oh I see … https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/tenant1/ue2/dev.yaml#L4-L13
```yaml
- catalog/terraform/top-level-component1
- catalog/terraform/test-component
- catalog/terraform/test-component-override
- catalog/terraform/test-component-override-2
- catalog/terraform/test-component-override-3
- catalog/terraform/vpc
- catalog/terraform/tenant1-ue2-dev
- catalog/helmfile/echo-server
- catalog/helmfile/infra-server
- catalog/helmfile/infra-server-override
```
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Also, are you manually creating all those YAML files, or are you using some tooling to generate them from TF variables files ?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
manually
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
(but we chose YAML so that any tool can read and write them. right now we write them manually)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
yes, we do it manually for now, but we can add some tooling (maybe to the atmos CLI) to help with that
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
got it ..
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
@Andriy Knysh (Cloud Posse), is there a way for the tool to generate the entire proposed directory structure in any directory in which it is run?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the components and stacks folders can have any structure, depending on your requirements
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the tool supports any level of nested folders for components and stacks
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
and how about atmos.yaml: https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml ?
```yaml
# CLI config is loaded from the following locations (from lowest to highest priority):
#   system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
#   home dir (~/.atmos)
#   current directory
#   ENV vars
#   Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star `**` is supported)
# https://en.wikipedia.org/wiki/Glob_(programming)

# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`,
# `stacks.base_path` and `workflows.base_path` are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`,
# `stacks.base_path` and `workflows.base_path` are considered paths relative to `base_path`.
base_path: "."

components:
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: false
  helmfile:
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_BASE_PATH` ENV var, or `--helmfile-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  included_paths:
    - "**/*"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
    - "globals/**/*"
    - "catalog/**/*"
    - "**/*globals*"
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using `ATMOS_WORKFLOWS_BASE_PATH` ENV var, or `--workflows-dir` command-line argument
  # Supports both absolute and relative paths
  base_path: "workflows"

logs:
  verbose: false
  colors: true
```
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Is that something you would put in the <PROJECT_ROOT>/cli/ directory?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I am confused with that because recommended layout mentions cli dir: https://atmos.tools/#recommended-layout
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
But the live example has that atmos.yaml file at the root level: https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Is there an environment variable that could be set in the console to tell atmos where to find the CLI config for that specific project …
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
when using container, we put it into /usr/local/etc/atmos/atmos.yaml
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
then copy it into the container file system like this https://github.com/cloudposse/atmos/blob/master/examples/complete/Dockerfile#L36

```dockerfile
COPY rootfs/ /
```
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
if you put it in the root of the repo, you could run atmos commands from the root of the repo (only)
2022-06-02
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I have been able to run atmos terraform init and atmos terraform plan, but it still uses local state files; it doesn’t start using the S3 backend and DynamoDB for state.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I am not sure what I am doing wrong …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
In my components/terraform/iam/user/backend.tf I have:

```hcl
terraform {
  backend "s3" {
    # Filled out by atmos from stacks/globals/globals.yaml
  }
}
```
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
In my stacks/globals/globals.yaml I have:

```yaml
terraform:
  vars: {}
  backend_type: s3 # s3, remote, vault, static, azurerm, etc.
  backend:
    s3:
      encrypt: true
      bucket: "<REDACTED>"
      key: "terraform.tfstate"
      dynamodb_table: "<REDACTED>"
      acl: "bucket-owner-full-control"
      region: "us-west-2"
      role_arn: null
```
…
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
When I run atmos terraform init iam/user -s nbi-ops-uw2-devops, it respects the backend configuration in the backend.tf file, but it is not being fed those backend-related values from atmos, so it prompts me to enter them …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I feel like the example is missing this piece ….
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I also see that the majority of Cloud Posse root modules (listed in the Terraform registry) do not define any partial backends …
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Re: @azec missing backend
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
We usually generate the backend
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
See this option: `auto_generate_backend_file: false`
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
The example sets it to false, try setting it to true in your atmos.yaml file
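For example, the relevant fragment of atmos.yaml would look something like this (a minimal sketch assuming the example repo layout; only `auto_generate_backend_file` changes from the default example):

```yaml
components:
  terraform:
    base_path: "components/terraform"
    # Let atmos generate backend.tf.json for each component from the stack config
    auto_generate_backend_file: true
```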
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
When you run an atmos terraform plan, you’ll see a backend.tf.json file generated by atmos within the component
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
This JSON file will be interpreted as HCL by Terraform
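For illustration, a generated backend.tf.json would look roughly like this, based on the globals.yaml values quoted above (a sketch only; atmos’s exact output may differ):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "encrypt": true,
        "bucket": "<REDACTED>",
        "key": "terraform.tfstate",
        "dynamodb_table": "<REDACTED>",
        "acl": "bucket-owner-full-control",
        "region": "us-west-2"
      }
    }
  }
}
```

Since JSON is valid HCL input, Terraform reads this file exactly as if it were a hand-written backend block.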
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.4.20
what: Update the Terraform workspace calculation for the legacy Spacelift stack processor
why: The LegacyTransformStackConfigToSpaceliftStacks function in the Spacelift stack processor was used to transform the infrastructure stacks to Spacelift stacks using legacy code (and old versions of the terraform-yaml-stack-config module) that does not take into account the atmos.yaml CLI config - this is very old code that does not know anything about the atmos CLI config and was maintained to support the old versions of…
2022-06-03
![Nimesh Amin avatar](https://avatars.slack-edge.com/2022-03-01/3175287937013_6196d4e8ede5f6d560e9_72.png)
dumb question: How do you force-unlock with atmos? atmos terraform force-unlock ID -s <stack> ID becomes the component and fails.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
cc: @Nimesh Amin
![Nimesh Amin avatar](https://avatars.slack-edge.com/2022-03-01/3175287937013_6196d4e8ede5f6d560e9_72.png)
thank you ! I think I just tried every combination except that!
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
remember, you can always cd into the component directory and run the native terraform force-unlock command there too (provided you have selected the appropriate workspace)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
atmos simply wraps the terraform command
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
you can see all the commands atmos runs by adding a --dry-run, iirc
![Nimesh Amin avatar](https://avatars.slack-edge.com/2022-03-01/3175287937013_6196d4e8ede5f6d560e9_72.png)
oh! That’s very good to know!
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Other option is to run atmos terraform shell and then execute your tf commands from there. That’s my preferred workflow!
2022-06-07
2022-06-15
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.4.21
what: Update atmos docs and GitHub workflows. Use GitHub environments for deployment. Upgrade to Go version 1.18
why: New CLI documentation sections (describe all CLI commands that atmos supports). Use GitHub environments for deployment to take advantage of the GitHub deployment API and UI (and not comment on the PR with the deployment URL, to not pollute the PR with unnecessary comments). Go version 1.18 supports many new features, including generics and allowing the any keyword instead of interface{}, which…
2022-06-16
![dalekurt avatar](https://avatars.slack-edge.com/2022-06-16/3703363393968_abccd57f2124dd3b0f25_72.jpg)
Hey everyone — I’m doing some catch up on Atmos and I have a question: Would you recommend using Atmos for creating a reference architecture (in AWS), such as the AWS Org?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we use it to create all resources including AWS org, OUs and accounts
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
atmos is just the glue between components (code/logic) and stacks (config)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
whenever you use terraform, you can use atmos
![dalekurt avatar](https://avatars.slack-edge.com/2022-06-16/3703363393968_abccd57f2124dd3b0f25_72.jpg)
Thank you @Andriy Knysh (Cloud Posse) I’m reading through the documentation. I have a lot of catching up to do
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
just assume different roles to provision regular components and root-level privileged components (e.g. accounts)
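One way that role separation could be expressed in stack config is sketched below. This is purely illustrative: the component name, role name, and account ID are made up, and it assumes per-component backend overrides are deep-merged into the component's backend config:

```yaml
# Hypothetical stack snippet: a privileged root-level component
# assumes a different role than the regular components.
components:
  terraform:
    account:
      backend:
        s3:
          role_arn: "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
      vars:
        enabled: true
```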
![dalekurt avatar](https://avatars.slack-edge.com/2022-06-16/3703363393968_abccd57f2124dd3b0f25_72.jpg)
Has it (Atmos) been discussed in any of the office hours, like an overview?
![dalekurt avatar](https://avatars.slack-edge.com/2022-06-16/3703363393968_abccd57f2124dd3b0f25_72.jpg)
I would like to use it for setting up all the initial AWS accounts to get started.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@dalekurt feel free to book some time and I can show you how to go about it
2022-06-17
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Hey @Andriy Knysh (Cloud Posse) — Running into a Spacelift / stacks config error and wondering if you’ve seen it before or can point me in the right direction. I just quickly peeled back all the layers, and I’m at the point where I would want to crack open the provider / atmos to get more debug information from the Go code, but of course I don’t want to do that
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Here is my issue:

```
module.spacelift.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks: Reading...
╷
│ Error: Failed to find a match for the import '/mnt/workspace/source/components/spacelift/stacks/**/*.yaml' ('/mnt/workspace/source/components/spacelift/stacks' + '**/*.yaml')
│
│   with module.spacelift.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks,
│   on .terraform/modules/spacelift.spacelift_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│    1: data "utils_spacelift_stack_config" "spacelift_stacks" {
│
╵
Releasing state lock. This may take a few moments...
```
I’m setting up a new Spacelift administration stack for an existing org using the in-progress upstreamed spacelift stack from here. My stack_config_path_template is the default of stacks/%s.yaml. I have an atmos.yaml at the root of my project with the following config:

```yaml
# See full configuration options @ https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml
base_path: ./
components:
  terraform:
    base_path: "components"
    auto_generate_backend_file: true
stacks:
  base_path: "stacks"
  name_pattern: "{environment}"
logs:
  verbose: true
  colors: true
```
Locally, atmos picks up that config fine, and if I use atmos to plan my Spacelift config all is good. Running in Spacelift, however… And to me it seems it’s not picking up the atmos config, considering it’s a pathing issue where it’s looking for the stacks directory at the root of the component instead of at the root of the repo.
what
• Upstreaming the spacelift component
why
• This update sets up catalog support for the spacelift admin stacks
• Allows multiple admin stacks
references
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Would appreciate your thoughts — Thanks!
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
for this to work in a container and in Spacelift, we use atmos.yaml in /usr/local/etc/atmos - this works for all cases
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
in atmos.yaml, `base_path: ""`
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
then we set the ENV var ATMOS_BASE_PATH for both geodesic and Spacelift:
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- geodesic root: `export ATMOS_BASE_PATH=$(pwd)`
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- Spacelift .spacelift/config.yml file:

```yaml
stack_defaults:
  before_init:
    - spacelift-configure-paths
    - spacelift-write-vars
    - spacelift-tf-workspace
  environment:
    ATMOS_BASE_PATH: /mnt/workspace/source
```
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the point of all of that is that /usr/local/etc/atmos/atmos.yaml is visible to all processes. In Spacelift, atmos does not see your atmos.yaml in the root of your repo since Spacelift clones the repo to /mnt/workspace/source
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
all other combinations of the configs work in some case or another, but not in all cases and all systems. Use the one explained above
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
Also in Spacelift we are using Docker image with atmos installed and assign it to each stack. The image is usually the same infrastructure image used with geodesic, pushed to ECR
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
For public workers, install atmos in one of the before init scripts
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@Matt Gowie are you using self hosted or public workers?
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah, I’m using public workers — that’s likely my issue, as I’ve done the config you mentioned above with private workers in the past without issue. This is an older client of mine that I’ve already upgraded onto Atmos, and I’m now moving onto Spacelift. Their project is small, so enterprise / self-hosted workers sadly don’t make sense.
I’ll try setting ATMOS_BASE_PATH :thumbsup:
One question: You mentioned “For public workers, install atmos in one of the before init scripts” — Does atmos need to be installed for the atmos package used in the utils provider to work? I wouldn’t expect so. If I configure things correctly (using ATMOS_BASE_PATH or mounting my atmos.yaml to the public worker correctly), the atmos binary not being on the system will not create an issue, correct?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
We have a project right now that requires we use publicly hosted workers. We will be working on it next with @Andriy Knysh (Cloud Posse)
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah cool
I’ve done public workers with Atmos / Spacelift before. Don’t remember this hang up, but could’ve been something I just fixed and kept moving on.
I’ll let you folks know if I run into any other hiccups.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
Thanks Matt
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
atmos needs to be installed in the init scripts because we use atmos to select terraform workspace and parse yaml config to write the variables for the component in the stack
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah gotcha. I’ve just worked around that in the past with the following caveats:
1. From what I’ve seen, Spacelift will select the correct workspace without the workspace script.
2. The write-variables script ensures that the stack runs with the current vars in that commit (which is the proper way to do it, for sure), but if you don’t run that script, the vars that the admin stack sets will still be picked up and used just fine. This leads to having to re-run the stack after the admin stack runs… which is less than ideal, but it does work.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
But after typing out #2… I think I will install Atmos via an init script and run the write vars script to help avoid that stale vars confusion.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Public workers can use public images, right? I wonder if we should publish a public geodesic image with Atmos installed + the spacelift-* scripts that can be used with public workers… I might give that a shot.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Answering my own question: They do.
The following small public image worked — https://github.com/masterpointio/spacelift-atmos-runner
Since the name was already generic enough, I ended up adding the following code to spacelift-configure-paths to be able to utilize my repo’s atmos.yaml: https://github.com/masterpointio/spacelift-atmos-runner/blob/main/rootfs/usr/local/bin/spacelift-configure-paths#L6-L10
That did the trick and I’m able to run projects without needing to specify environment variables or the like.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
If it’s of interest, that image is available at: public.ecr.aws/w1j9e4y3/spacelift-atmos-runner:latest
That said, I’m sure you folks would want to build + maintain your own, which is the smart move. If you go that route and I can help in anyway, let me know.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Last thing, I have started to question why this whole process is necessary. Why doesn’t the spacelift-automation module handle all of this for us regardless of public or private workers?
Since we know that all stacks created by the spacelift-automation module are going to be atmos stacks, why don’t we just bake those before_init scripts into the automation module by default?
Doing the curl pipe pattern (curl <URL> | pipe) would enable us to run an arbitrary amount of setup on both public and private workers, so we could do the configure-paths, tf-workspace, and write-vars steps via one before_init command. Seems to me like that would reduce some of the complexity around this and avoid passing these script files around.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
I think it should, but we haven’t had the chance to optimize it
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
2022-06-18
2022-06-19
2022-06-21
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
Hi everyone :slightly_smiling_face: I’m struggling to set the AWS profile correctly here. I’ve looked through the docs and the Github repo and it’s still not clear what I’m missing. Thanks in advance for the help!
atmos helmfile template aws-load-balancer-controller --stack=uw2-sandbox
I have a uw2-sandbox.yaml
file with this component:
helmfile:
  aws-load-balancer-controller:
    vars:
      installed: true
and the profile is being set to an unexpected value, causing the update-kubeconfig
command to fail:
Variables for the component 'aws-load-balancer-controller' in the stack 'uw2-sandbox':
environment: uw2
installed: true
region: us-west-2
stage: sandbox
Using AWS_PROFILE=--gbl-sandbox-helm
/usr/local/bin/aws --profile --gbl-sandbox-helm eks update-kubeconfig --name=--uw2-sandbox--eks-cluster --region=us-west-2 --kubeconfig=/dev/shm/uw2-sandbox-kubecfg
aws: error: argument --profile: expected one argument
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
I have a feeling it’s tenant/namespace/etc weirdness since I’m not using a tenant and it looks like the profile
and name
values are missing some interpolated string
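A minimal sketch of what is likely happening, assuming a profile-name template of roughly this shape (the template itself is hypothetical): when the tenant/namespace tokens are unset, they interpolate to empty strings, producing the stray leading dashes seen in `--gbl-sandbox-helm`:

```python
# Hypothetical template sketch: unfilled context tokens collapse to empty
# strings, leaving stray leading dashes in the rendered profile name.
profile_template = "{namespace}-{tenant}-gbl-{stage}-helm"  # assumed shape

profile = profile_template.format(namespace="", tenant="", stage="sandbox")
print(profile)  # --gbl-sandbox-helm
```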
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
From your atmos.yaml remove the tenant tokens
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
thank you haha
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
that’s what I get for copying and pasting. Thank you, @Andriy Knysh (Cloud Posse)!
![dalekurt avatar](https://avatars.slack-edge.com/2022-06-16/3703363393968_abccd57f2124dd3b0f25_72.jpg)
Hello, I’m planning on using Atmos in a test deploy (to learn).
I’m reading through the accounts module https://github.com/cloudposse/terraform-aws-components/tree/master/modules/account
Just to confirm, using a dash -
in the account names is not permitted?
Example:
components:
  terraform:
    account:
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        account_email_format: aws+lops-%[email protected]
        account_iam_user_access_to_billing: DENY
        organization_enabled: true
        aws_service_access_principals:
          - cloudtrail.amazonaws.com
          - guardduty.amazonaws.com
          - ipam.amazonaws.com
          - ram.amazonaws.com
          - securityhub.amazonaws.com
          - servicequotas.amazonaws.com
          - sso.amazonaws.com
          - auditmanager.amazonaws.com
        enabled_policy_types:
          - SERVICE_CONTROL_POLICY
          - TAG_POLICY
        organization_config:
          root_account:
            name: core-root
            stage: root
            tags:
              eks: false
          accounts: []
          organization:
            service_control_policies:
              - DenyNonNitroInstances
          organizational_units:
            - name: core
              accounts:
                - name: core-artifacts
                  tenant: core
                  stage: artifacts
                  tags:
                    eks: false
                - name: core-audit
                  tenant: core
                  stage: audit
                  tags:
                    eks: false
                - name: core-auto
                  tenant: core
                  stage: auto
                  tags:
                    eks: true
                - name: core-corp
                  tenant: core
                  stage: corp
                  tags:
                    eks: true
                - name: core-dns
                  tenant: core
                  stage: dns
                  tags:
                    eks: false
                - name: core-identity
                  tenant: core
                  stage: identity
                  tags:
                    eks: false
                - name: core-demo
                  tenant: core
                  stage: demo
                  tags:
                    eks: false
                - name: core-network
                  tenant: core
                  stage: network
                  tags:
                    eks: false
                - name: core-public
                  tenant: core
                  stage: public
                  tags:
                    eks: false
                - name: core-security
                  tenant: core
                  stage: security
                  tags:
                    eks: false
              service_control_policies:
                - DenyLeavingOrganization
            - name: plat
              accounts:
                - name: plat-dev
                  tenant: plat
                  stage: dev
                  tags:
                    eks: true
                - name: plat-sandbox
                  tenant: plat
                  stage: sandbox
                  tags:
                    eks: true
                - name: plat-staging
                  tenant: plat
                  stage: staging
                  tags:
                    eks: true
                - name: plat-prod
                  tenant: plat
                  stage: prod
                  tags:
                    eks: true
              service_control_policies:
                - DenyLeavingOrganization
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
It’s acceptable (although we don’t recommend using it, because your resource IDs will include an additional dash)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
if your resource names contain dashes, it’s impossible to delimit based on -
and know which field is which
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
so that’s why we recommend not to do it
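The ambiguity is easy to demonstrate with a quick sketch (the resource IDs here are hypothetical):

```python
# Sketch: why dashes inside label values break ID parsing.
# With dash-free labels, namespace/stage/name are recoverable by splitting:
namespace, stage, name = "acme-prod-vpc".split("-")
print(namespace, stage, name)  # acme prod vpc

# With a dashed stage label like "plat-prod", a naive split yields four
# fields for three labels, so you can no longer tell which field is which:
parts = "acme-plat-prod-vpc".split("-")
print(parts)  # ['acme', 'plat', 'prod', 'vpc']
```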
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
also, when provisioning accounts, definitely look at the plan before applying because deleting accounts is a major PIA
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
@Erik Osterman (Cloud Posse) we use the same -
like in the above example for plat-prod
in our cplive infra, no ?
- name: plat-prod
  tenant: plat
  stage: prod
  tags:
    eks: true
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
so in null-label, it should be:
• stage: prod
• tenant: plat
• namespace: cplive
The account should be named after the ID, not the name.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@Jeremy G (Cloud Posse)
![Jeremy G (Cloud Posse) avatar](https://avatars.slack-edge.com/2020-07-04/1229022582372_22757dbc9ef96d371614_72.jpg)
This is a confusing aspect of our current conventions. For customers NOT using tenant
(which is a relatively recent addition to null-label
), “account name” should not have a dash and is exactly the same as stage
. For customers using tenant
the account name is tenant-stage
and we have a special configuration in null-label:
descriptor_formats:
  account_name:
    format: "%v-%v"
    labels:
      - tenant
      - stage
that creates the account name from the tenant
and stage
labels. This leads to code like this:
account_name = lookup(module.this.descriptors, "account_name", var.stage)
![Jeremy G (Cloud Posse) avatar](https://avatars.slack-edge.com/2020-07-04/1229022582372_22757dbc9ef96d371614_72.jpg)
Other than this specific usage, we highly recommend not using hyphens in any of the null-label
labels.
![dalekurt avatar](https://avatars.slack-edge.com/2022-06-16/3703363393968_abccd57f2124dd3b0f25_72.jpg)
Thank you all for that.
![Jeremy G (Cloud Posse) avatar](https://avatars.slack-edge.com/2020-07-04/1229022582372_22757dbc9ef96d371614_72.jpg)
Added additional detail above
2022-06-22
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
Hello again :wave: I’m struggling to use aws-vault
with atmos
because the atmos eks update-kubeconfig
command uses --profile
, which isn’t playing nicely with assuming a role through aws-vault
that requires 2FA.
I know CloudPosse has moved on to using Leapp so I’m going to give that a try. In the meantime, is there anything obvious I might be missing to get aws-vault
to play more nicely with atmos
?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the command supports either profile or IAM role, see https://atmos.tools/cli/commands/aws-eks-update-kubeconfig
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
i don’t know if you are asking about the role
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you login with aws-vault
first (using 2FA or not); it writes the temporary credentials to your file system, then you can use a profile or a role with atmos or aws commands
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
(I’m trying to say that atmos has nothing to do with how you log in to your AWS account and does not know anything about that :slightly_smiling_face: ). It’s just a wrapper and glue between components
and stacks
and it calls terraform commands in the end
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
Thanks Andriy. I’m using aws-vault
first, and then I’m prompted for 2FA again when I pass in the same profile with atmos
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
yeah that’s what I was hoping for / assuming
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
it would work perfectly if there were no --profile
parameter in the underlying aws command
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
it would work perfectly if there were no --profile
parameter in the underlying aws command
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
it also supports IAM role, see the doc above
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
I see. I think the real issue is slightly different then. I’m running atmos helmfile template ...
and under the hood it’s calling aws eks update-kubeconfig
. Should I be passing --role-arn
along with my atmos helmfile template ...
command?
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
or can I set up my kubeconfig ahead of time, so it’s not called when I run atmos helmfile template
?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
oh, atmos helmfile
still uses only the profile - we wanted to update it but did not get to it yet
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we’ll try to update it to support role ARN in the next release (in a few days)
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
ah very cool, thank you Andriy!
2022-06-23
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
Hello yet again :wave: I encountered some unexpected behavior (and a misleading error message) with creating a Terraform component that has a variable named environment
.
I’m planning on switching to using CloudPosse’s region/namespace/stage nomenclature soon, but didn’t expect this to fail in the meantime. Clearly there’s some variable shadowing going on. I can work around it, but wanted to paste it here anyway - and thanks for building such an awesome tool!
# uw2-sandbox.yaml
components:
  terraform:
    eks-iam:
      backend:
        s3:
          workspace_key_prefix: "eks-iam"
      vars:
        environment: "sandbox"
And the output:
$ atmos terraform plan eks-iam --stack=uw2-sandbox
Searched all stack files, but could not find config for the component 'eks-iam' in the stack 'uw2-sandbox'.
Check that all attributes in the stack name pattern '{environment}-{stage}' are defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
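A simplified sketch of the stack-name resolution that explains the error (the real logic lives in the atmos Go CLI; this is only illustrative):

```python
# Sketch: filling a stack name pattern like '{environment}-{stage}'
# from context variables.
def stack_name(pattern, context):
    for key, value in context.items():
        pattern = pattern.replace("{" + key + "}", value)
    return pattern

# With stack-level context vars, the derived name matches the stack file:
print(stack_name("{environment}-{stage}", {"environment": "uw2", "stage": "sandbox"}))
# uw2-sandbox

# Shadowing `environment` with a component var changes the derived name,
# so the lookup for 'uw2-sandbox' fails with the error shown above:
print(stack_name("{environment}-{stage}", {"environment": "sandbox", "stage": "sandbox"}))
# sandbox-sandbox
```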
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
@el a few observations here
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
vars:
  environment: "sandbox"
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we don’t specify the context (tenant, environment (region), stage (account)) in all the components, we specify that in separate YAML global files and then import them into stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
(but you know that since you mentioned “I’m planning on switching to using CloudPosse’s region/namespace/stage nomenclature”)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
where do you have atmos.yaml
file located?
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
at the root level, next to components
and stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
place it into /usr/local/etc/atmos/atmos.yaml
- this works in all cases for all processes (atmos itself and the TF utils
provider which is used to get the remote state of TF components and it uses atmos.yaml
CLI config as well)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
set base_path
to an empty string
# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: ""
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and then execute
export ATMOS_BASE_PATH=$(pwd)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this works in a Docker container (geodesic
does it automatically if you use it)
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
ah yeah right now I’m just using the atmos
CLI on MacOS
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
terraform:
  eks-iam:
    backend:
      s3:
        workspace_key_prefix: "eks-iam"
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
no need to specify workspace_key_prefix
, the latest atmos
does it automatically (you can override it though)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
if it’s still not working
atmos terraform plan eks-iam --stack=uw2-sandbox
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
sweet, thank you for the helpful tips
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
send me DM with your code, I’ll review it
![el avatar](https://avatars.slack-edge.com/2022-05-04/3480721233332_7a9a22090dd08ffc260f_72.png)
thanks will give it a shot later and let you know how it goes. appreciate the help!
2022-06-24
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Hi there!
I’ve been using atmos
now for 1 month successfully from my workstation to decouple Terraform modules from config parameters/variables.
I am getting ready to try this within GitLab CI/CD.
I am curious whether you have any recommendations or examples (even if they are from different CI/CD system, e.g. GitHub Actions) on how you work with atmos
from CI/CD pipelines?
Do you use things like GNU Make for each infra repo with tasks that use atmos
, or something else?
I have a non-root-modules Terraform repository and for now just 1 repository with live infra (similar to what CloudPosse presented in office hours multiple times). My live repository is broken down into folders for each AWS account, and each of those top-level folders then has the atmos-suggested structure.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Have you seen atmos workflows
subcommand ?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
We usually use github but we’ve worked with gitlab as a version control source. All of our terraform CICD is done using spacelift.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
It’s technically also possible to set up atmos
using other terraform automation tools
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
but to answer your question, we do not run atmos
from any Makefile targets at the moment
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
@azec we use atmos with Spacelift (with both public/shared workers and private worker pool). We call atmos commands from Spacelift hooks. We can show you how to do it if you are interested
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
regarding calling atmos from CI/CD pipelines, it should be similar
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you install atmos (I suppose in the pipeline container)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
something like
ATMOS_VERSION=1.4.21
# Using `registry.hub.docker.com/cloudposse/geodesic:latest-debian` as Spacelift runner image on public worker pool
apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*"
# If runner image is Alpine Linux
# apk add atmos@cloudposse~=${ATMOS_VERSION}
atmos version
# Copy the atmos CLI config file into the destination `/usr/local/etc/atmos` where all processes can see it
mkdir -p /usr/local/etc/atmos
cp /mnt/workspace/source/rootfs/usr/local/etc/atmos/atmos.yaml /usr/local/etc/atmos/atmos.yaml
cat /usr/local/etc/atmos/atmos.yaml
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
note that you need to create some IAM role that your pipeline runners can assume with permissions to call into your AWS account(s)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the role needs a trust policy to allow the CI/CD system to assume it (it could be an AWS account or another role)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
then we execute two more commands before terraform plan/apply
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
echo "Selecting Terraform workspace..."
echo "...with AWS_PROFILE=$AWS_PROFILE"
echo "...with AWS_CONFIG_FILE=$AWS_CONFIG_FILE"
atmos terraform workspace "$ATMOS_COMPONENT" --stack="$ATMOS_STACK"
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
echo "Writing Stack variables to spacelift.auto.tfvars.json for Spacelift..."
atmos terraform generate varfile "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" -f spacelift.auto.tfvars.json >/dev/null
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
so basically, you need:
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- Install atmos
on the image that your CICD runs. Place atmos.yaml
into /usr/local/etc/atmos/atmos.yaml
on the image
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- Configure IAM role for CICD to assume with permissions to call into your AWS account(s)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- Call atmos terraform workspace
and atmos terraform generate varfile
for the component in the stack
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- Then, if your CICD executes just plain TF commands, it will run terraform plan/apply
(that’s what Spacelift does). Or, instead of calling atmos terraform workspace
and atmos terraform generate varfile,
you just run atmos terraform plan (apply) <component> -s <stack>
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- CICD will clone the repo with components
and stacks
into the container where atmos
is installed and atmos.yaml
is placed into /usr/local/etc/atmos/atmos.yaml
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Ok, those are all really great insights.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
With Spacelift, you just simplify the CI/CD workflows - because it runs the meaningful Terraform steps as tasks via webhooks?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I am curious what your Spacelift workflow looks like for live infra repo.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we have all the stacks in Spacelift so it shows you the complete picture of your infra
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
I am curious what your Spacelift workflow looks like for live infra repo
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
Spacelift workflow - in terms of how we handle changes in PRs? Or how we provision Spacelift stacks?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I was trying to ask how you handle changes in PRs and outcomes in Spacelift? The git flow…
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Related to the AWS IAM role assume from CI/CD: we solved that by installing a GitLab OIDC IdP in IAM, and the Terraform power role has a trust policy with IdP audience and subject checks (using GitLab org:project:repo:branch
filters).
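For illustration, a trust policy of that shape might look roughly like the following (the account ID, project path, and branch are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:oidc-provider/gitlab.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "gitlab.com:sub": "project_path:my-group/my-repo:ref_type:branch:ref:main"
        }
      }
    }
  ]
}
```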
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I guess I need to read through Spacelift docs to really get better understanding of that. But it seems like it integrates well with GitLab as well.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
OIDC IdP is good
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
regarding git flow:
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
## Pull Request Workflow
1. Create a new branch & make changes
2. Create a new pull request (targeting the main branch)
3. View the modified resources directly in the pull request
4. View the Spacelift run (terraform plan) in Spacelift PRs tab for the stack (we provision a Rego policy to detect changes in PRs and trigger Spacelift proposed runs)
5. If the changes look good, merge the pull request
6. View the Spacelift run (terraform plan) in Spacelift Runs tab for the stack (we provision a Rego policy to detect merging to the main branch and trigger Spacelift tracked runs)
7. If the changes look good, confirm the Spacelift run (Confirm button) - Spacelift will run `terraform apply` on the stack
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
simplified version
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
That looks great. I want to be able to really simplify git flow as much as possible on the live infra repo.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I am going to zone out on the Spacelift docs throughout the next week.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I already went through section on GitLab VCS integration.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
What is the relation between atmos stacks & workflows and Spacelift stacks?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I guess it is hard to get the feel of this without trying …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
But if you can answer just simplified version, that is good enough for me and thank you!
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I have picked this naming schema with terraform null label CP module:
{namespace}-{tenant}-{environment}-{stage}--{name}--------{attributes}
Example:
nbi---------[cto]----[gbl]--------[devops]-[terraformer]-[cicd,...]
My stage
for now is always the most simplified name of the AWS account alias, because the org chose to break out environments with individual AWS account isolation. I don’t expect that I will have classic dev|staging|prod
stages within a single account. But it would be supported with the above schema.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
in atmos
, we have components
(logic) and stacks
(config)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
atmos stacks define vars and other settings for all regions and accounts where you deploy the components
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
e.g. we have vpc
component and we have ue2-dev
and ue2-prod
atmos stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
now in Spacelift, we want to see the entries for all possible combinations of components in all infra (atmos) stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
to simplify, Spacelift stacks are a Cartesian product of atmos components and atmos stacks
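That Cartesian product can be sketched as follows (the stack and component names are illustrative):

```python
from itertools import product

# Sketch: Spacelift stack names as the Cartesian product of atmos stacks
# and the components deployed into them.
atmos_stacks = ["ue2-dev", "ue2-prod"]
components = ["vpc", "eks"]

spacelift_stacks = [f"{stack}-{component}" for stack, component in product(atmos_stacks, components)]
print(spacelift_stacks)
# ['ue2-dev-vpc', 'ue2-dev-eks', 'ue2-prod-vpc', 'ue2-prod-eks']
```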
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
Spacelift stacks ^
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the names are constructed by using the context (tenant, environment/region, stage/account) + the component name
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this way, Spacelift shows you everything deployed into your environments
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
in other words, a Spacelift stack is an atmos stack (e.g. ue2-dev
) + a component deployed into the stack (e.g. vpc
)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
all of that is calculated and deployed to Spacelift using these components/modules/providers:
- <https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift>
- <https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation>
- <https://github.com/cloudposse/terraform-provider-utils/tree/main/internal/spacelift>
- <https://github.com/cloudposse/atmos/tree/master/pkg/spacelift>
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Catching up from last week …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I still have a lot to read, but on the https://docs.cloudposse.com/reference/stacks/ page, in the most complete example at the bottom, I found:
terraform:
  first-component:
    settings:
      spacelift:
        ...
        # Which git branch triggers this workspace
        branch: develop
        ...
Does this assume that Spacelift can be configured to run plans in lower environments when changes are pushed to branches other than main
?
One of the git flows proposed by GitLab is to have a branch per environment, so this would match that. But it seems like you were successful in configuring Spacelift to work with any changes to main
regardless of what environment is being changed on the feature branch.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
each Spacelift stack can be assigned a GH branch
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the branches can be different
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
but yes, we trigger a stack in two cases: pushes to a PR that changed the stack’s code, and merging to the default (e.g. main) branch
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you can definitely provision the same stack twice (but with different names) using different GH branches
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I am struggling to understand relationship of docs in https://github.com/cloudposse/terraform-aws-components/blob/master/modules/spacelift/docs/spacelift-overview.md and https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
My infra-universal
repo is broken down in this way:
infra-universal # this is git repo root
|
|--aws_account_1
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_2
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_3
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_4
|
|-- components
|-- stacks
|-- atmos.yaml
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Do I need to place https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift in each of my directories under components/terraform/spacelift … ? And then apply it manually from my workstation for each environment against Spacelift (using the spacelift provider)?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I think I got it.
https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift actually references https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation module.
It is recommended to place the module from the 1st link into the live infra repo under components/terraform/spacelift. For me there would be 4x instances of that, so I assume I would have to link the corresponding 4x projects in Spacelift.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Once I have Spacelift worker pools deployed (for each AWS account for a start), do I need to directly apply that 1st stack from components/terraform/spacelift (I know atmos can’t be used for this) against Spacelift?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
i don’t remember what you are using, but we usually provision as many Spacelift admin stacks (an admin stack manages and provisions regular stacks) as we have OUs in the org
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
regarding this structure
|-- aws_account_1
|   |-- components
|   |-- stacks
|   |-- atmos.yaml
|-- aws_account_2
|   |-- components
|   |-- stacks
|   |-- atmos.yaml
|-- aws_account_3
|   |-- components
|   |-- stacks
|   |-- atmos.yaml
|-- aws_account_4
|   |-- components
|   |-- stacks
|   |-- atmos.yaml
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
in the components folder we have terraform code for the components; they are “stateless”, meaning they can be deployed into any account/region
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
that’s why there is only one components folder per repo
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Hmmm .. I see … so this would be a beginner mistake then.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
regarding stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this is a structure we are using
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
catalog is for the components YAML config, and all files from the catalog are imported into top-level stacks in orgs
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
But could I still get by with configuring one root project in Spacelift for each of the target accounts, and then pointing them to components/terraform/spacelift/ of each individual directory in my structure above?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
mixins are some common vars/settings for each account and region
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
orgs -> OUs (tenants) -> accounts -> regions
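Putting the catalog, mixins, and orgs pieces together, a stacks folder following this convention might look like the sketch below (all file and org names are illustrative, not taken from the thread):

```
stacks/
|-- catalog/                  # reusable component YAML configs, imported (never targeted directly)
|   |-- vpc.yaml
|   |-- eks.yaml
|-- mixins/                   # common vars/settings shared per region or per stage
|   |-- region/us-east-2.yaml
|   |-- stage/dev.yaml
|-- orgs/
    |-- acme/                 # org
        |-- core/             # OU (tenant)
            |-- dev/          # account (stage)
                |-- us-east-2.yaml   # top-level stack: imports from catalog + mixins
```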
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Would this approach lead to 4x charge from Spacelift in terms of licensing … ?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
no, b/c Spacelift charges per workers or per minute (depending on the plan), does not matter how many stacks you have
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
cool …
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and yes, you can provision one admin stack (manually using TF or atmos)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and then that admin stack kicks off all other stacks (admin and regular)
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
From there, addition of any new stacks to the repo is auto-detected and they are provisioned as workspaces/stacks in Spacelift ?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the point is, by separating components (logic) from stacks (config), you don’t have to repeat anything (like you showed above, repeated 4 times)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you just configure the corresponding YAML stacks
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Yes, right now I repeat root-level terraform config for each of the AWS accounts/directories above. It is tedious, not very DRY, but allows me to have them pinned to different versions of backing TF modules.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
For each component 4x …
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
i don’t think you have/use 4 diff versions at the same time
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I guess I leaned towards this approach to have more decoupling, but actually I may need to rework that to keep things simpler. I don’t mind destroying the resources I already have; there aren’t many.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
It is like a greenfield project.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
so in our structure, that could be done by placing 2-3 versions of the component into components under diff names, e.g. vpc-v1.0
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
yes …
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and then in YAML
components:
  terraform:
    vpc-defaults:
      metadata:
        type: abstract
      vars:
        # default vars here
    my-vpc-1:
      metadata:
        component: vpc-v1.0
        inherits:
          - vpc-defaults
      vars: .....
    my-vpc-2:
      metadata:
        component: vpc-v2.0
        inherits:
          - vpc-defaults
      vars: .....
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
atmos terraform plan my-vpc-1 -s xxxx
atmos terraform plan my-vpc-2 -s yyyy
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Also, most of my provider configurations for each component are currently hardcoded, but that could be improved with some variables that, just like all other variables, can be passed down from stack configs, e.g.
provider "aws" {
  region              = var.region
  allowed_account_ids = ["<REDACTED>"]

  # Web Identity Role Federation only used in CI/CD
  assume_role_with_web_identity {
    # NOTE: Variables are not allowed in provider configuration blocks, so these values are hardcoded for all CI/CD purposes
    role_arn     = "<REDACTED>"
    session_name = "<REDACTED>"
    duration     = "1h"
    # NOTE: Ensure this is substituted in CI/CD runtime (either within gnu make step or GitLab pipeline)
    # Hint: sed -i '' -e "s/GITLAB_OPENIDC_TOKEN/$CI_JOB_JWT_V2/g" providers.tf
    web_identity_token = "GITLAB_OPENIDC_TOKEN" # This is just a placeholder
  }

  default_tags {
    tags = {
      "Owner"              = "<REDACTED>"
      "GitLab-Owners"      = "<REDACTED>"
      "Repository"         = "<REDACTED>"
      "VSAD"               = "<REDACTED>"
      "DataClassification" = ""
      "Application"        = ""
      "ProductPortfolio"   = ""
    }
  }
}
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
How do you solve AWS access and role assume from:
- Spacelift public workers
- Spacelift private workers
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
that’s a separate issue, different for public and private workers
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I have Terraform-power role in each account that can have expanding policies on it as time goes by … depending what AWS services we end up needing to deploy.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Ok, I think I will try to get buy-in on Spacelift medium tier in the next month from my team.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
for public workers, we allow each stack to assume an IAM role https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
And then try testing all this …
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
for private pools, since you deploy it in your VPC, use instance profiles with permissions to assume IAM roles into AWS accounts
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
for private pools, probably worker pool per account/vpc instead of just 1 worker pool in some central account …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
1-1 mappings between those, separate worker pool configurations for each Spacelift admin project/stack …
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
not really, depends on many factors. We deploy one worker pool for all accounts and regions, or a worker pool per region, or a worker pool for prod-related stuff and non-prod-related stuff - depending on security, compliance, maintenance, cost and other requirements
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
> for public workers, we allow each stack to assume an IAM role https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role
Is this done by one of those CP Terraform modules for Spacelift that you provided above ?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Or has to be done separately ?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
yes
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
so yes, it’s a lot of things that need to be put together, so ask questions if you have them, we deployed all of those combinations and can help, also ask in spacelift
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
thank you so much! this is very useful and you have helped me to get oriented in the right direction!
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I’ve taken time to refactor my environment organization so I only have one components and one stacks directory under the repo root.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
However, I started hitting this problem where the 1st environment persists the workspace value in components/terraform/<COMPONENT_DIR>/.terraform/environment.
Then, when I run the same stack for a different AWS account, I get prompted to select a workspace.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Initializing the backend...
The currently selected workspace (nbi-cto-gbl-devops) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. default
2. nbi-cto-gbl-innov8
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Basically this issue: https://support.hashicorp.com/hc/en-us/articles/360043550953-Selecting-a-workspace-when-running-Terraform-in-automation
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Curious if you have remedy for this @Andriy Knysh (Cloud Posse)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
are you using atmos or plain terraform commands?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
atmos selects the correct workspace every time you run atmos terraform plan/apply
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
but in general, take a look at the TF_DATA_DIR ENV var
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you can set it to something like this
# Org-specific Terraform work directory
ENV TF_DATA_DIR=".terraform${TENANT:+-$TENANT}"
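To see what that parameter expansion does (a minimal sketch; `TENANT` here stands for whatever your tooling exports), each tenant gets its own `.terraform-<tenant>` work directory, and the expression falls back to plain `.terraform` when `TENANT` is unset or empty, so the per-workspace marker files no longer collide:

```shell
#!/bin/sh
# ${TENANT:+-$TENANT} expands to "-$TENANT" only when TENANT is set and non-empty.
TENANT="tenant1"
export TF_DATA_DIR=".terraform${TENANT:+-$TENANT}"
echo "$TF_DATA_DIR"   # .terraform-tenant1

TENANT=""
export TF_DATA_DIR=".terraform${TENANT:+-$TENANT}"
echo "$TF_DATA_DIR"   # .terraform (fallback when TENANT is empty)
```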
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
all atmos
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
is TENANT something atmos exports before it proceeds with selecting/creating workspace ?
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
At the root of the repo, where atmos.yaml is, I also maintain a tf-cli-config.tfrc file.
Currently the only config I have in it is:
plugin_cache_dir = "$HOME/.cache/terraform" # Must pre-exist on the file system
In addition, I am using the direnv tool to configure env variables as soon as I enter the project root dir. Currently the content of the .envrc file that direnv respects is only:
export TF_CLI_CONFIG_FILE=$(pwd)/tf-cli-config.tfrc
so it plays with the above and tells TF where to look for the .tfrc file before anything runs.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
> Is TENANT something atmos exports before it proceeds with selecting/creating workspace ?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
if you are using tenants (var.tenant defined in YAML stack configs), atmos automatically includes tenant in all calculations (TF workspace, varfile, planfile, etc.)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
tenant: tenant1
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
the TF workspace will be in the format tenant1-ue2-dev in the example
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
for a diff tenant, it will be tenant2-ue2-dev etc.
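In other words, the workspace name is just the context fields joined with dashes, with the tenant prepended only when it is set. A rough sketch of the naming rule (not atmos's actual implementation):

```shell
#!/bin/sh
# Sketch: derive the TF workspace name from the stack context.
# With a tenant the result is "<tenant>-<environment>-<stage>";
# without one it degrades to "<environment>-<stage>".
tenant="tenant1"; environment="ue2"; stage="dev"
workspace="${tenant:+$tenant-}$environment-$stage"
echo "$workspace"   # tenant1-ue2-dev
```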
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I have all that in place regarding tenants and workspaces. But I guess it matters at what time the TF_DATA_DIR from your example gets exported.
In my scenario, I have the same tenant deployed in multiple AWS accounts, so binding the tenant var to the path of TF_DATA_DIR still doesn’t make sense.
But, since I am using desk, I added to my ~/.desk/desks/tf.sh (which is the config for my Terraform desk) a function which assumes a SAML-federated role in each account. When I call that function it exports the AWS_ACCOUNT env var. In addition I added
export TF_DATA_DIR=".terraform${AWS_ACCOUNT:+-$AWS_ACCOUNT}"
to the <PROJECT_ROOT>/.envrc file. So, whenever I call the function to assume another role / hop accounts, a new value for AWS_ACCOUNT gets exported. Then in <PROJECT_ROOT> I just do
direnv allow .
which reloads/recomputes all env variables from <PROJECT_ROOT>/.envrc and always gives me a new value for TF_DATA_DIR.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we’re actually not using TF_DATA_DIR; it was just an example of how to handle it in a diff way. We just use atmos (in Spacelift as well), which calculates the TF workspace from the context (tenant, environment, stage)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
did you try https://www.leapp.cloud/ - we use it to assume a primary role into identity account
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and then each TF component has a providers.tf file with the following
provider "aws" {
  region  = var.region
  profile = module.iam_roles.profiles_enabled ? coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name) : null

  dynamic "assume_role" {
    for_each = module.iam_roles.profiles_enabled ? [] : ["role"]
    content {
      role_arn = coalesce(var.import_role_arn, module.iam_roles.terraform_role_arn)
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

variable "import_profile_name" {
  type        = string
  default     = null
  description = "AWS Profile name to use when importing a resource"
}

variable "import_role_arn" {
  type        = string
  default     = null
  description = "IAM Role ARN to use when importing a resource"
}
variable "import_profile_name" {
type = string
default = null
description = "AWS Profile name to use when importing a resource"
}
variable "import_role_arn" {
type = string
default = null
description = "IAM Role ARN to use when importing a resource"
}
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
which just means (and i’m trying to point you in that direction) that we don’t manually assume roles into each account when provisioning resources. We assume a role into the identity account, and then each component is configured with the correct role or profile to assume into the other accounts (dev, prod, staging, etc.), which is all done automatically by terraform
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
for this to work, the primary role(s) in the identity account have permissions to assume roles in the other infra accounts, while the delegated roles in those accounts have a trust policy allowing the primary role from identity to assume them
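As a rough sketch of that trust relationship (the account ID and role names below are made up for illustration), the delegated role in each infra account would carry a trust policy like:

```hcl
# Hypothetical: delegated "terraform" role in an infra account, trusting the
# primary role in the identity account (111111111111) to assume it.
data "aws_iam_policy_document" "trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:role/primary-terraform"]
    }
  }
}

resource "aws_iam_role" "delegated_terraform" {
  name               = "delegated-terraform"
  assume_role_policy = data.aws_iam_policy_document.trust.json
}
```

The matching half lives on the primary role in the identity account: an IAM policy granting `sts:AssumeRole` on the delegated role ARNs.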
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
> we actually not using TF_DATA_DIR, it was just an example how to handle it in a diff way. We just use atmos (and in Spacelift as well) which just calculates TF workspace from the context (tenant, environment, stage)
Again, there is no problem with atmos. Now that I don’t have separate directories for each AWS account, each holding its own copy of components/terraform/<COMPONENT_NAME>, what ends up happening (and only in the local workflow) is:
• running for AWS account A, atmos computes the workspace name correctly, and it gets saved to the components/terraform/<COMPONENT_NAME>/.terraform/environment file with the value of the computed workspace name my-workspace-something-A (note that this is when you don’t have any TF_DATA_DIR set, so the default is .terraform)
• after that, running for AWS account B, atmos again computes the workspace name correctly, but before it proceeds with selecting it, the terraform init command finds the above components/terraform/<COMPONENT_NAME>/.terraform/environment with the value my-workspace-something-A inside. That confuses it, and it causes the prompt that I have linked here
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
In CI/CD, this issue doesn’t exist because that .terraform dir is always ephemeral within one job run for a specific AWS account.
Then for a new AWS account (also a new stack and workspace), on a new job and a new container, the checked-out source doesn’t come with a components/terraform/<COMPONENT_NAME>/.terraform dir.
So atmos (internally terraform init) doesn’t get confused about workspaces. It always does compute them right, though.
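The mechanics of this can be reproduced without terraform at all: the selected workspace is just a plain-text marker file under the data dir (default `.terraform/environment`), so clearing that file (or the whole data dir) before switching accounts is what removes the prompt. A minimal sketch:

```shell
#!/bin/sh
# Simulate the stale-workspace marker that terraform leaves behind.
mkdir -p .terraform
printf 'my-workspace-something-A' > .terraform/environment

# Before running against account B, the stale marker still points at A:
cat .terraform/environment   # my-workspace-something-A

# Deleting the marker (or the whole .terraform dir) avoids the
# interactive "select a workspace" prompt on the next init.
rm -f .terraform/environment
```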
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I have these in atmos.yaml:
components:
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true
, so I keep seeing a terraform init pre-run, no matter what other atmos terraform <command> I issue.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Thank you for pointing me to TF_DATA_DIR, it was very useful.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Here is what my new project organization looks like.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
from the issue above, it looks like TF_DATA_DIR should be set for each account in your case (not tenant)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
or you can run atmos terraform clean xxx -s yyy before switching the accounts
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
> atmos terraform clean test/test-component-override -s tenant1-ue2-de
Deleting '.terraform' folder
Deleting '.terraform.lock.hcl' file
Deleting terraform varfile: tenant1-ue2-dev-test-test-component-override.terraform.tfvars.json
Deleting terraform planfile: tenant1-ue2-dev-test-test-component-override.planfile
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
That is neat! I wasn’t aware of atmos terraform clean …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Even better …
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
There might be a bug with atmos terraform clean when TF_DATA_DIR is set.
$ env | grep 'TF_DATA_DIR'
TF_DATA_DIR=.terraform-devops
$ atmos terraform clean ecr/private -s nbi-cto-uw2-devops
Deleting '.terraform' folder
Deleting '.terraform.lock.hcl' file
Deleting terraform varfile: nbi-cto-uw2-devops-ecr-private.terraform.tfvars.json
Deleting terraform planfile: nbi-cto-uw2-devops-ecr-private.planfile
Deleting 'backend.tf.json' file
Found ENV var TF_DATA_DIR=.terraform-devops
Do you want to delete the folder '.terraform-devops'? (only 'yes' will be accepted to approve)
Enter a value: yes
Deleting folder '.terraform-devops'
$ ls -lah components/terraform/ecr/private
drwxr-xr-x 12 zecam staff 384B Jul 1 10:26 .
drwxr-xr-x 3 zecam staff 96B Jun 28 13:18 ..
drwxr-xr-x 6 zecam staff 192B Jun 30 15:16 .terraform-devops <-- still present on FS
drwxr-xr-x 6 zecam staff 192B Jun 30 15:32 .terraform-innov8
drwxr-xr-x 6 zecam staff 192B Jun 30 15:10 .terraform-rkstr8
-rw-r--r-- 1 zecam staff 9.9K Jun 30 16:06 context.tf
...
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
hmm, we actually did not use TF_DATA_DIR much, so it was not tested 100%; i’ll look into that
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
I liked that it paired well with TF_DATA_DIR and figured out which specific dir needed to be deleted, but for some reason it didn’t actually delete it.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
> did you try https://www.leapp.cloud/ - we use it to assume a primary role into identity account
I am looking into this. I am a fan of https://awsu.me/ , with some custom-written plugins for SAML 2.0 Federation.
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
So I might build a custom image on top of geodesic that includes our forks of awsume and our plugin for SAML 2.0
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
Based on docs, Leapp supports only a few Identity Providers, none of them being one we work with.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
@azec please use this new version https://github.com/cloudposse/atmos/releases/tag/v1.4.23
![azec avatar](https://avatars.slack-edge.com/2021-06-03/2135711472162_b509c10abc0535615548_72.jpg)
thanks!
2022-06-27
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.4.22
what: add ATMOS_CLI_CONFIG_PATH ENV var; detect more YAML stack misconfigurations; add functionality to define atmos custom CLI commands
why: the ATMOS_CLI_CONFIG_PATH ENV var allows specifying the location of the atmos.yaml CLI config file. This is useful for CI/CD environments (e.g. Spacelift) where an infrastructure repository gets loaded into a custom path and atmos.yaml is not in the locations where atmos expects to find it (no need to copy atmos.yaml into /usr/local/etc/atmos/atmos.yaml)
Detect…
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
this is a very exciting release! (I think we should have bumped the major. @Andriy Knysh (Cloud Posse))
This adds the ability to add any number of commands/subcommands to atmos to further streamline your tooling under one interface.
Let’s say you wanted to add a new command called “provision”, you would do it like this:
- name: terraform
  description: Execute terraform commands
  # subcommands
  commands:
    - name: provision
      description: This command provisions terraform components
      arguments:
        - name: component
          description: Name of the component
      flags:
        - name: stack
          shorthand: s
          description: Name of the stack
          required: true
      # ENV var values support Go templates
      env:
        - key: ATMOS_COMPONENT
          value: "{{ .Arguments.component }}"
        - key: ATMOS_STACK
          value: "{{ .Flags.stack }}"
      steps:
        - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
        - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
But other practical use-cases are:
• you use ansible and you want to call it from atmos
• you use the serverless framework and want to streamline it
• you want to give developers simple “env up” and “env down” commands; this would be how.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we’ll do a major release (let’s add that to the docs first)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
another useful thing to add would be hooks, e.g. before and after hooks for terraform plan/apply