#atmos (2022-06)
2022-06-01
huge shoutout to @Andriy Knysh (Cloud Posse), great help and awesome experience doing a PR for atmos
thanks @Zach Bridges! @Andriy Knysh (Cloud Posse) is amazing indeed
v1.4.19
what: add the processing of ENV vars to the atmos workflow command
why: take into account the ATMOS_WORKFLOWS_BASE_PATH ENV var. While all steps in a workflow processed the ENV vars, the atmos workflow command did not, and the ATMOS_WORKFLOWS_BASE_PATH ENV var was not used
Does anyone mind explaining again the purpose of /catalog/
set of YAML configs and what does the tool internally do with this?
https://github.com/cloudposse/atmos/tree/master/examples/complete/stacks/catalog
in the catalog
we store YAML config for the component, not top-level stacks
all those files in the catalog are imported into top-level stacks (to import the component configs into each stack and to make the top-level stack configs DRY not repeating component configs in each stack, but just importing them)
So for the catalog
YAML files, do you have to explicitly import them in stack files, or do they get auto-imported by the tool?
Oh I see … https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/tenant1/ue2/dev.yaml#L4-L13
- catalog/terraform/top-level-component1
- catalog/terraform/test-component
- catalog/terraform/test-component-override
- catalog/terraform/test-component-override-2
- catalog/terraform/test-component-override-3
- catalog/terraform/vpc
- catalog/terraform/tenant1-ue2-dev
- catalog/helmfile/echo-server
- catalog/helmfile/infra-server
- catalog/helmfile/infra-server-override
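For illustration, a hedged sketch of what one of those catalog entries, e.g. catalog/terraform/vpc, might hold - only the component's reusable defaults, with no stack-specific context (the variable shown is hypothetical):
components:
  terraform:
    vpc:
      vars:
        cidr_block: "10.0.0.0/16"
A top-level stack such as tenant1/ue2/dev.yaml then imports it (as in the list above) and overrides only what differs per environment.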
Also, are you manually creating all those YAML files, or are you using some tooling to generate them from TF variables files ?
manually
(but we chose YAML so that any tool can read and write them. right now we write them manually)
yes, we do it manually for now, but we can add some tooling (maybe to the atmos CLI) to help with that
got it ..
@Andriy Knysh (Cloud Posse), is there a way for the tool to generate the entire proposed directory structure in any directory in which it is run?
the components
and stacks
folders can have any structure, depending on your requirements
the tool supports any level of folder nesting for components and stacks
and how about atmos.yaml
: https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml
?
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star `**` is supported)
# <https://en.wikipedia.org/wiki/Glob_(programming)>
# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: "."
components:
terraform:
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
# Supports both absolute and relative paths
base_path: "components/terraform"
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
apply_auto_approve: false
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
deploy_run_init: true
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
init_run_reconfigure: true
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
auto_generate_backend_file: false
helmfile:
# Can also be set using `ATMOS_COMPONENTS_HELMFILE_BASE_PATH` ENV var, or `--helmfile-dir` command-line argument
# Supports both absolute and relative paths
base_path: "components/helmfile"
# Can also be set using `ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` ENV var
kubeconfig_path: "/dev/shm"
# Can also be set using `ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` ENV var
helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
# Can also be set using `ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` ENV var
cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
stacks:
# Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
# Supports both absolute and relative paths
base_path: "stacks"
# Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
included_paths:
- "**/*"
# Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
excluded_paths:
- "globals/**/*"
- "catalog/**/*"
- "**/*globals*"
# Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
name_pattern: "{tenant}-{environment}-{stage}"
workflows:
# Can also be set using `ATMOS_WORKFLOWS_BASE_PATH` ENV var, or `--workflows-dir` command-line arguments
# Supports both absolute and relative paths
base_path: "workflows"
logs:
verbose: false
colors: true
Is that something you would put in the <PROJECT_ROOT>/cli/
directory?
I am confused by that because the recommended layout mentions a cli dir: https://atmos.tools/#recommended-layout
But the live example has that atmos.yaml file
at the root level: https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml
Is there an environment variable that could be tapped (set in console) to tell atmos
where to find CLI config for that specific project …
when using container, we put it into /usr/local/etc/atmos/atmos.yaml
then copy it into the container file system like this https://github.com/cloudposse/atmos/blob/master/examples/complete/Dockerfile#L36
COPY rootfs/ /
if you put it in the root of the repo, you could run atmos commands from the root of the repo (only)
2022-06-02
I have been able to run atmos terraform init
and atmos terraform plan
but it still uses local state files - it doesn’t start using the S3 backend for state and DynamoDB.
I am not sure what I am doing wrong …
In my components/terraform/iam/user/backend.tf
I have ..
terraform {
backend "s3" {
# Filled out by atmos from stacks/globals/globals.yaml
}
}
In my stacks/globals/globals.yaml
I have …
terraform:
vars: {}
backend_type: s3 # s3, remote, vault, static, azurerm, etc.
backend:
s3:
encrypt: true
bucket: "<REDACTED>"
key: "terraform.tfstate"
dynamodb_table: "<REDACTED>"
acl: "bucket-owner-full-control"
region: "us-west-2"
role_arn: null
…
When I run atmos terraform init iam/user -s nbi-ops-uw2-devops
… it respects the backend configuration in the backend.tf
file but it is not being fed those backend-related variables from atmos… so it prompts me to enter values …
I feel like the example is missing this piece ….
I also see that the majority of Cloud Posse root modules (listed in the Terraform registry) are not defining any partial backends …
Re: @azec missing backend
We usually generate the backend
See this option
auto_generate_backend_file: false
The example sets it to false, try setting it to true in your atmos.yaml file
When you run an atmos terraform plan
you’ll see a backend.tf.json
file generated by atmos within the component
This json file will be interpreted as hcl by terraform
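For reference, a hedged sketch of what that generated backend.tf.json might look like, using the S3 settings from the globals.yaml above (atmos typically also injects a workspace_key_prefix derived from the component name, e.g. iam-user here; the exact contents may differ):
{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "<REDACTED>",
        "dynamodb_table": "<REDACTED>",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "us-west-2",
        "workspace_key_prefix": "iam-user"
      }
    }
  }
}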
v1.4.20
what: Update Terraform workspace calculation for the legacy Spacelift stack processor
why: The LegacyTransformStackConfigToSpaceliftStacks function in the Spacelift stack processor was used to transform the infrastructure stacks to Spacelift stacks using legacy code (and old versions of the terraform-yaml-stack-config module) that does not take into account the atmos.yaml CLI config - this is very old code that does not know anything about the atmos CLI config and it was maintained to support the old versions of…
2022-06-03
dumb question: How do you force-unlock with atmos? atmos terraform force-unlock ID -s <stack> ID becomes the component and fails.
cc: @Nimesh Amin
thank you ! I think I just tried every combination except that!
remember, you can always cd
into the component directory and run the native terraform force-unlock
command there too (provided you have selected the appropriate workspace)
atmos simply wraps the terraform command
you can see all the commands atmos runs by adding a --dry-run
iirc
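For illustration, a hedged sketch of both approaches mentioned above (component, stack, workspace and lock ID are placeholders):
# see the underlying terraform commands without executing them
atmos terraform plan <component> -s <stack> --dry-run

# or drop down to native terraform inside the component directory
cd components/terraform/<component>
terraform workspace select <tenant>-<environment>-<stage>
terraform force-unlock <LOCK_ID>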
oh! That’s very good to know!
Other option is to run atmos terraform shell
and then execute your tf commands from there. That’s my preferred workflow!
2022-06-07
2022-06-15
v1.4.21
what: Update atmos docs and GitHub workflows. Use GitHub environments for deployment. Upgrade to Go version 1.18.
why: New CLI documentation sections (describe all CLI commands that atmos supports). Use GitHub environments for deployment to take advantage of the GitHub deployment API and UI (and not comment on the PR with the deployment URL, to not pollute the PR with unnecessary comments). Go version 1.18 supports many new features including generics and allowing using the any keyword instead of interface{} which…
2022-06-16
Hey everyone — I’m doing some catching up on Atmos and I have a question: 1. Would you recommend using Atmos for creating a reference architecture (in AWS) such as the AWS Org?
we use it to create all resources including AWS org, OUs and accounts
atmos
is just a glue between components
(code/logic) and stacks
(config)
whenever you use terraform, you can use atmos
Thank you @Andriy Knysh (Cloud Posse) I’m reading through the documentation. I have a lot of catching up to do
just assume different roles to provision regular components and root-level privileged components (e.g. accounts)
Has it (Atmos) been discussed in any of the office hours, like an overview
I would like to use it for setting up all the initial AWS accounts to get started.
@dalekurt feel free to book some time and I can show you how to go about it
2022-06-17
Hey @Andriy Knysh (Cloud Posse) — Running into a spacelift / stacks config error and wondering if you’ve seen it before or can point me in the right direction. I just quickly peeled back all the layers and I’m at the point that I would want to crack open the provider / atmos to get more debug information from the golang code, but I of course don’t want to do that
Here is my issue —
module.spacelift.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks: Reading...
╷
│ Error: Failed to find a match for the import '/mnt/workspace/source/components/spacelift/stacks/**/*.yaml' ('/mnt/workspace/source/components/spacelift/stacks' + '**/*.yaml')
│
│ with module.spacelift.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks,
│ on .terraform/modules/spacelift.spacelift_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│ 1: data "utils_spacelift_stack_config" "spacelift_stacks" {
│
╵
Releasing state lock. This may take a few moments...
I’m setting up a new spacelift administration stack for an existing org using the in-progress upstreamed spacelift stack from here. My stack_config_path_template
is the default of stacks/%s.yaml
. I have an atmos.yaml
at the root of my project with the following config:
# See full configuration options @ <https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml>
base_path: ./
components:
terraform:
base_path: "components"
auto_generate_backend_file: true
stacks:
base_path: "stacks"
name_pattern: "{environment}"
logs:
verbose: true
colors: true
Locally, atmos picks up that config fine, and if I use atmos to plan my spacelift config all is good. Running in Spacelift, however… And to me it seems it’s not picking up the atmos config, considering it’s a pathing issue where it’s looking for the stacks directory at the root of the component instead of at the root of the repo.
what
• Upstreaming the spacelift component
why
• This update sets up catalog support for the spacelift admin stacks
• Allows multiple admin stacks
references
Would appreciate your thoughts — Thanks!
for this to work in a container and Spacelift, we use atmos.yaml
in /usr/local/etc/atmos
- this works for all cases
in atmos.yaml
,
base_path: ""
then we set ENV var ATMOS_BASE_PATH
for both geodesic and Spacelift:
- geodesic root:
export ATMOS_BASE_PATH=$(pwd)
- Spacelift
.spacelift/config.yml
file
```
stack_defaults:
  before_init:
    - spacelift-configure-paths
    - spacelift-write-vars
    - spacelift-tf-workspace
  environment:
    ATMOS_BASE_PATH: /mnt/workspace/source
```
the point of all of that is that /usr/local/etc/atmos/atmos.yaml
is visible to all processes. In Spacelift, atmos
does not see your atmos.yaml
in the root of your repo since Spacelift clones the repo to /mnt/workspace/source
all other combinations of the configs work in some cases, but not in all cases and on all systems. Use the one explained above
Also in Spacelift we are using Docker image with atmos installed and assign it to each stack. The image is usually the same infrastructure image used with geodesic, pushed to ECR
For public workers, install atmos in one of the before init scripts
@Matt Gowie are you using self hosted or public workers?
Ah I’m using public workers — that’s my issue likely as I’ve done the config you mentioned above with private workers in the past without issue. This is an older client of mine that I’ve already upgraded onto Atmos and I’m now moving onto Spacelift. Their project is small so enterprise / self hosted workers doesn’t make sense sadly.
I’ll try setting ATMOS_BASE_PATH
:thumbsup:
One question: You mentioned “For public workers, install atmos in one of the before init scripts” — Does atmos need to be installed for the atmos package that is used in the utils provider to work? I wouldn’t expect so. If I configure things correctly (using ATMOS_BASE_PATH
or mounting my atmos.yaml
to the public worker correctly), the atmos binary not being on the system will not create an issue, correct?
We have a project right now that requires we use public hosted workers. We will be working on it with @Andriy Knysh (Cloud Posse)
Ah cool
I’ve done public workers with Atmos / Spacelift before. Don’t remember this hang up, but could’ve been something I just fixed and kept moving on.
I’ll let you folks know if I run into any other hiccups.
Thanks Matt
atmos needs to be installed in the init scripts because we use atmos to select terraform workspace and parse yaml config to write the variables for the component in the stack
Ah gotcha. I’ve just worked around that in the past with the following caveats:
1. From what I’ve seen, Spacelift will select the correct workspace without the workspace script.
2. The write variables script ensures that the stack runs with current vars in that commit (which is the proper way to do it for sure), but if you don’t run that script the vars that the admin stack sets will still be picked up and used just fine. This leads to having to re-run the stack after the admin stack runs… which is less than ideal but it does work.
But after typing out #2… I think I will install Atmos via an init script and run the write vars script to help avoid that stale vars confusion.
Public workers can use public images, right? I wonder if we should publish a public geodesic image with Atmos installed + the spacelift-* scripts that can be used with public workers… I might give that a shot.
Answering my own question: They do.
The following small public image worked — https://github.com/masterpointio/spacelift-atmos-runner
Since the name was already generic enough, I ended up adding the following code to spacelift-configure-paths
to be able to utilize my repo’s atmos.yaml
: https://github.com/masterpointio/spacelift-atmos-runner/blob/main/rootfs/usr/local/bin/spacelift-configure-paths#L6-L10
That did the trick and I’m able to run projects without needing to specify environment variables or the like.
If it’s of interest, that image is available at: public.ecr.aws/w1j9e4y3/spacelift-atmos-runner:latest
That said, I’m sure you folks would want to build + maintain your own, which is the smart move. If you go that route and I can help in anyway, let me know.
Last thing, I have started to question why this whole process is necessary. Why doesn’t the spacelift-automation module handle all of this for us regardless of public or private workers?
Since we know that all stacks created by the spacelift-automation module are going to be atmos stacks, then why don’t we just bake those before_init
scripts into the automation module by default?
Doing the curl pipe pattern (curl <URL> | pipe
) would enable us to run an arbitrary amount of setup on both public and private workers so we could do the configure paths, tf workspace, and write vars steps via one before_init
command. Seems to me like that would reduce some of the complexity around this and avoid passing around these script files.
I think it should, but we haven’t had the chance to optimize it
2022-06-18
2022-06-19
2022-06-21
Hi everyone :slightly_smiling_face: I’m struggling to set the AWS profile correctly here. I’ve looked through the docs and the Github repo and it’s still not clear what I’m missing. Thanks in advance for the help!
atmos helmfile template aws-load-balancer-controller --stack=uw2-sandbox
I have a uw2-sandbox.yaml
file with this component:
helmfile:
aws-load-balancer-controller:
vars:
installed: true
and the profile is being set to an unexpected value, causing the update-kubeconfig
command to fail:
Variables for the component 'aws-load-balancer-controller' in the stack 'uw2-sandbox':
environment: uw2
installed: true
region: us-west-2
stage: sandbox
Using AWS_PROFILE=--gbl-sandbox-helm
/usr/local/bin/aws --profile --gbl-sandbox-helm eks update-kubeconfig --name=--uw2-sandbox--eks-cluster --region=us-west-2 --kubeconfig=/dev/shm/uw2-sandbox-kubecfg
aws: error: argument --profile: expected one argument
I have a feeling it’s tenant/namespace/etc weirdness since I’m not using a tenant and it looks like the profile
and name
values are missing some interpolated string
From your atmos.yaml remove the tenant tokens
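For illustration, a hedged sketch of the two patterns with the {tenant} token removed (drop {namespace} as well if you don't set one, otherwise the leading dash will remain):
components:
  helmfile:
    helm_aws_profile_pattern: "{namespace}-gbl-{stage}-helm"
    cluster_name_pattern: "{namespace}-{environment}-{stage}-eks-cluster"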
thank you haha
that’s what I get for copying and pasting. Thank you, @Andriy Knysh (Cloud Posse)!
Hello I’m planning on using Atmos in a test deploy (to learn) .
I’m reading through the accounts module https://github.com/cloudposse/terraform-aws-components/tree/master/modules/account
Just to confirm, using a dash -
for the account names is not permitted?
Example:
components:
terraform:
account:
backend:
s3:
role_arn: null
vars:
enabled: true
account_email_format: aws+lops-%[email protected]
account_iam_user_access_to_billing: DENY
organization_enabled: true
aws_service_access_principals:
- cloudtrail.amazonaws.com
- guardduty.amazonaws.com
- ipam.amazonaws.com
- ram.amazonaws.com
- securityhub.amazonaws.com
- servicequotas.amazonaws.com
- sso.amazonaws.com
- securityhub.amazonaws.com
- auditmanager.amazonaws.com
enabled_policy_types:
- SERVICE_CONTROL_POLICY
- TAG_POLICY
organization_config:
root_account:
name: core-root
stage: root
tags:
eks: false
accounts: []
organization:
service_control_policies:
- DenyNonNitroInstances
organizational_units:
- name: core
accounts:
- name: core-artifacts
tenant: core
stage: artifacts
tags:
eks: false
- name: core-audit
tenant: core
stage: audit
tags:
eks: false
- name: core-auto
tenant: core
stage: auto
tags:
eks: true
- name: core-corp
tenant: core
stage: corp
tags:
eks: true
- name: core-dns
tenant: core
stage: dns
tags:
eks: false
- name: core-identity
tenant: core
stage: identity
tags:
eks: false
- name: core-demo
tenant: core
stage: demo
tags:
eks: false
- name: core-network
tenant: core
stage: network
tags:
eks: false
- name: core-public
tenant: core
stage: public
tags:
eks: false
- name: core-security
tenant: core
stage: security
tags:
eks: false
service_control_policies:
- DenyLeavingOrganization
- name: plat
accounts:
- name: plat-dev
tenant: plat
stage: dev
tags:
eks: true
- name: plat-sandbox
tenant: plat
stage: sandbox
tags:
eks: true
- name: plat-staging
tenant: plat
stage: staging
tags:
eks: true
- name: plat-prod
tenant: plat
stage: prod
tags:
eks: true
service_control_policies:
- DenyLeavingOrganization
It’s acceptable (although we don’t recommend it because your resource IDs will include an additional dash)
if your resource names contain dashes, it’s impossible to delimit based on -
and know what field is what
so that’s why we recommend not to do it
also, when provisioning accounts, definitely look at the plan before applying because deleting accounts is a major PIA
@Erik Osterman (Cloud Posse) we use the same -
like in the above example for plat-prod
in our cplive infra, no ?
- name: plat-prod
tenant: plat
stage: prod
tags:
eks: true
so in null label, it should be:
• stage: prod
• tenant: plat
• namespace: cplive
the account should be named after the ID, not the name.
@Jeremy G (Cloud Posse)
This is a confusing aspect of our current conventions. For customers NOT using tenant
(which is a relatively recent addition to null-label
), “account name” should not have a dash and is exactly the same as stage
. For customers using tenant
the account name is tenant-stage
and we have a special configuration in null-label
descriptor_formats:
account_name:
format: "%v-%v"
labels:
- tenant
- stage
that creates the account name from the tenant
and stage
labels. This leads to code like this
account_name = lookup(module.this.descriptors, "account_name", var.stage)
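A hedged, worked illustration (the values are hypothetical): with tenant = "core" and stage = "auto", the descriptor above gives
module.this.descriptors["account_name"] = format("%v-%v", "core", "auto") # => "core-auto"
account_name = lookup(module.this.descriptors, "account_name", var.stage)  # => "core-auto"
while a customer without tenant (and without the descriptor) falls back to var.stage, e.g. "auto".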
Other than this specific usage, we highly recommend not using hyphens in any of the null-label
labels.
Thank you all for that.
Added additional detail above
2022-06-22
Hello again :wave: I’m struggling to use aws-vault
with atmos
because the atmos eks update-kubeconfig
command uses --profile
, which isn’t playing nicely with assuming a role through aws-vault
that requires 2FA.
I know CloudPosse has moved on to using Leapp so I’m going to give that a try. In the meantime, is there anything obvious I might be missing to get aws-vault
to play more nicely with atmos
?
the command supports either profile or IAM role, see https://atmos.tools/cli/commands/aws-eks-update-kubeconfig
i don’t know if you are asking about the role
you login with aws-vault
first (using 2FA or not, it writes the temporary credentials to your file system, then you can use a profile or a role with atmos or aws commands)
(i’m trying to say that atmos has nothing to do with how you login to your AWS account and does not know anything about that :slightly_smiling_face: ). In the end, it’s just a wrapper and a glue between components
and stacks
and it calls terraform commands in the end
Thanks Andriy. I’m using aws-vault
first, and then I’m prompted for 2FA again when I pass in the same profile with atmos
yeah that’s what I was hoping for / assuming
it would work perfectly if there were no --profile
parameter in the underlying aws command
it would work perfectly if there were no --profile
parameter in the underlying aws command
it also supports IAM role, see the doc above
I see. I think the real issue is slightly different then. I’m running atmos helmfile template ...
and under the hood it’s calling aws eks update-kubeconfig
. Should I be passing --role-arn
along with my atmos helmfile template ...
command?
or can I set up my kubeconfig ahead of time, so it’s not called when I run atmos helmfile template
?
oh, atmos helmfile
still uses only the profile - we wanted to update it but did not get to it yet
we’ll try to update it to support role ARN in the next release (in a few days)
ah very cool, thank you Andriy!
2022-06-23
Hello yet again :wave: I encountered some unexpected behavior (and a misleading error message) with creating a Terraform component that has a variable named environment
.
I’m planning on switching to using CloudPosse’s region/namespace/stage nomenclature soon, but didn’t expect this to fail in the meantime. Clearly there’s some variable shadowing going on. I can work around it, but wanted to paste it here anyway - and thanks for building such an awesome tool!
# uw2-sandbox.yaml
components:
terraform:
eks-iam:
backend:
s3:
workspace_key_prefix: "eks-iam"
vars:
environment: "sandbox"
And the output:
$ atmos terraform plan eks-iam --stack=uw2-sandbox
Searched all stack files, but could not find config for the component 'eks-iam' in the stack 'uw2-sandbox'.
Check that all attributes in the stack name pattern '{environment}-{stage}' are defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
@el a few observations here
vars:
environment: "sandbox"
we don’t specify the context (tenant, environment (region), stage (account)) in all the components, we specify that in separate YAML global files and then import them into stacks
(but you know that since you mentioned “I’m planning on switching to using CloudPosse’s region/namespace/stage nomenclature”)
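A minimal sketch of that layout, using the uw2/sandbox values from your output (the file names and import path are hypothetical):
# stacks/globals/uw2-globals.yaml
vars:
  environment: uw2

# stacks/uw2-sandbox.yaml
import:
  - globals/uw2-globals
vars:
  stage: sandbox
components:
  terraform:
    eks-iam:
      vars: {}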
where do you have atmos.yaml
file located?
at the root level, next to components
and stacks
place it into /usr/local/etc/atmos/atmos.yaml
- this works in all cases for all processes (atmos itself and the TF utils
provider which is used to get the remote state of TF components and it uses atmos.yaml
CLI config as well)
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
set base_path
to empty string
# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: ""
and then execute
export ATMOS_BASE_PATH=$(pwd)
this works in a Docker container (geodesic
does it automatically if you use it)
ah yeah right now I’m just using the atmos
CLI on MacOS
terraform:
eks-iam:
backend:
s3:
workspace_key_prefix: "eks-iam"
no need to specify workspace_key_prefix
, the latest atmos
does it automatically (you can override it though)
if it’s still not working
atmos terraform plan eks-iam --stack=uw2-sandbox
sweet, thank you for the helpful tips
send me DM with your code, I’ll review it
thanks will give it a shot later and let you know how it goes. appreciate the help!
2022-06-24
Hi there!
I’ve been using atmos
now for 1 month successfully from my workstation to decouple Terraform modules from config parameters/variables.
I am getting ready to try this within GitLab CI/CD.
I am curious whether you have any recommendations or examples (even if they are from different CI/CD system, e.g. GitHub Actions) on how you work with atmos
from CI/CD pipelines?
Do you use things like GNU Make for each infra repo with tasks that use atmos
, or something else?
I have a non-root-modules Terraform repository and for now just 1 repository with live infra (similar to what CloudPosse presented in office hours multiple times). My live repository is broken down into folders for each AWS account, and each of those top-level folders then has the atmos
-suggested structure.
Have you seen atmos workflows
subcommand ?
We usually use github but we’ve worked with gitlab as a version control source. All of our terraform CICD is done using spacelift.
It’s technically also possible to setup atmos
using other terraform automation tools
but to answer your question, we do not run atmos
from any Makefile targets at the moment
@azec we use atmos with Spacelift (with both public/shared workers and private worker pool). We call atmos commands from Spacelift hooks. We can show you how to do it if you are interested
regarding calling atmos from CI/CD pipelines, it should be similar
you install atmos (I suppose in the pipeline container)
something like
ATMOS_VERSION=1.4.21
# Using `registry.hub.docker.com/cloudposse/geodesic:latest-debian` as Spacelift runner image on public worker pool
apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*"
# If runner image is Alpine Linux
# apk add atmos@cloudposse~=${ATMOS_VERSION}
atmos version
# Copy the atmos CLI config file into the destination `/usr/local/etc/atmos` where all processes can see it
mkdir -p /usr/local/etc/atmos
cp /mnt/workspace/source/rootfs/usr/local/etc/atmos/atmos.yaml /usr/local/etc/atmos/atmos.yaml
cat /usr/local/etc/atmos/atmos.yaml
note that you need to create some IAM role that your pipeline runners can assume with permissions to call into your AWS account(s)
the role needs a trust policy to allow the Ci/CD system to assume it (it could be an AWS account or another role)
then we execute two more commands before terraform plan/apply
echo "Selecting Terraform workspace..."
echo "...with AWS_PROFILE=$AWS_PROFILE"
echo "...with AWS_CONFIG_FILE=$AWS_CONFIG_FILE"
atmos terraform workspace "$ATMOS_COMPONENT" --stack="$ATMOS_STACK"
echo "Writing Stack variables to spacelift.auto.tfvars.json for Spacelift..."
atmos terraform generate varfile "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" -f spacelift.auto.tfvars.json >/dev/null
so basically, you need:
1. Install atmos on the image that your CI/CD runs. Place atmos.yaml into /usr/local/etc/atmos/atmos.yaml on the image
2. Configure an IAM role for CI/CD to assume, with permissions to call into your AWS account(s)
3. Call atmos terraform workspace and atmos terraform generate varfile for the component in the stack
4. Then, if your CI/CD executes just plain TF commands, it will run terraform plan/apply (that's what Spacelift does). Or, instead of calling atmos terraform workspace and atmos terraform generate varfile, you just run atmos terraform plan (apply) <component> -s <stack>
5. CI/CD will clone the repo with components and stacks into the container where atmos is installed and atmos.yaml is placed into /usr/local/etc/atmos/atmos.yaml
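Since you're on GitLab, a hedged sketch of a minimal pipeline job following those steps (the image name, component and stack are hypothetical; the image is assumed to already have atmos installed and atmos.yaml placed in /usr/local/etc/atmos):
plan:
  image: registry.example.com/infra/geodesic-atmos:latest
  script:
    - atmos version
    - atmos terraform plan vpc -s tenant1-ue2-dev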
Ok, those are all really great insights.
With Spacelift, you just simplify the CI/CD workflows - because it runs meaningful Terraform steps as tasks via webhooks ?
I am curious what your Spacelift workflow looks like for live infra repo.
we have all the stacks in Spacelift so it shows you the complete picture of your infra
I am curious what your Spacelift workflow looks like for live infra repo
Spacelift workflow - in terms of how we handle changes in PRs? Or how we provision Spacelift stacks?
I was trying to ask how you handle changes in PRs and outcomes in Spacelift? The git flow…
Related to AWS IAM role assumption from CI/CD, we solved that by installing a GitLab OIDC IdP in IAM, and the Terraform power role has a trust policy with IdP audience and subject checks (using GitLab org:project:repo:branch
filters).
I guess I need to read through Spacelift docs to really get better understanding of that. But it seems like it integrates well with GitLab as well.
OIDC IdP is good
regarding git flow:
## Pull Request Workflow
1. Create a new branch & make changes
2. Create a new pull request (targeting the main branch)
3. View the modified resources directly in the pull request
4. View the Spacelift run (terraform plan) in Spacelift PRs tab for the stack (we provision a Rego policy to detect changes in PRs and trigger Spacelift proposed runs)
5. If the changes look good, merge the pull request
6. View the Spacelift run (terraform plan) in Spacelift Runs tab for the stack (we provision a Rego policy to detect merging to the main branch and trigger Spacelift tracked runs)
7. If the changes look good, confirm the Spacelift run (Confirm button) - Spacelift will run `terraform apply` on the stack
simplified version
That looks great. I want to be able to really simplify git flow as much as possible on the live infra repo.
I am going to zone out on the Spacelift docs throughout the next week.
I already went through section on GitLab VCS integration.
What is the relation of atmos stacks & workflows to Spacelift stacks?
I guess it is hard to get the feel of this without trying …
But if you can answer just simplified version, that is good enough for me and thank you!
I have picked this naming schema with terraform null label CP module:
{namespace}-{tenant}-{environment}-{stage}--{name}--------{attributes}
Example:
nbi---------[cto]----[gbl]--------[devops]-[terraformer]-[cicd,...]
My stage
for now is always the most simplified name of the AWS account alias, because the org chose to break out environments with individual AWS account isolation. I don’t expect that I will have classic dev|staging|prod
stages within a single account. But it would be supported with the above schema.
in atmos
, we have components
(logic) and stacks
(config)
atmos stacks define vars and other settings for all regions and accounts where you deploy the components
e.g. we have vpc
component and we have ue2-dev
and ue2-prod
atmos stacks
now in Spacelift, we want to see the entries for all possible combinations of components in all infra (atmos) stacks
to simplify, Spacelift stacks are a Cartesian product of atmos components and atmos stacks
Spacelift stacks ^
the names are constructed by using the context (tenant, environment/region, stage/account) + the component name
this way, Spacelift shows you everything deployed into your environments
in other words, a Spacelift stack is atmos stack (e.g. ue2-dev
) + component deployed into the stack (e.g. vpc
)
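A small illustration of that Cartesian product, using the example names above (the eks component is hypothetical):
atmos components: vpc, eks
atmos stacks:     ue2-dev, ue2-prod
Spacelift stacks: ue2-dev-vpc, ue2-dev-eks, ue2-prod-vpc, ue2-prod-eks
(with tenant in the context, the names get the tenant prefix, e.g. tenant1-ue2-dev-vpc)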
all of that is calculated and deployed to Spacelift using these components/modules/providers:
- <https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift>
- <https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation>
- <https://github.com/cloudposse/terraform-provider-utils/tree/main/internal/spacelift>
- <https://github.com/cloudposse/atmos/tree/master/pkg/spacelift>
Catching up from last week …
I still have a lot to read, but on the https://docs.cloudposse.com/reference/stacks/ page, in the most complete example at the bottom, I found:
terraform:
first-component:
settings:
spacelift:
...
# Which git branch triggers this workspace
branch: develop
...
Does this assume that Spacelift can be configured to run plans in lower environments when changes are pushed to branches other than main
?
One of the git flows proposed by GitLab is to have a branch per environment, so this would match that. But it seems like you were successful configuring Spacelift to work with any changes to main
regardless what environment is being changed on the feature branch.
each Spacelift stack can be assigned a GH branch
the branches can be different
but yes, we trigger a stack in two cases: pushes to a PR that changed the stack’s code, and merging to the default (e.g. main) branch
you can def provision the same stack twice (but with diff names) using diff GH branches
I am struggling to understand relationship of docs in https://github.com/cloudposse/terraform-aws-components/blob/master/modules/spacelift/docs/spacelift-overview.md and https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift
My infra-universal
repo is broken down in this way :
infra-universal # this is git repo root
|
|--aws_account_1
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_2
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_3
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_4
|
|-- components
|-- stacks
|-- atmos.yaml
Do I need to place https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift in each of my directories under components/terraform/spacelift
… ? And then apply it manually from my workstation for each environment against Spacelift (using spacelift provider) ?
I think I got it.
https://github.com/cloudposse/terraform-aws-components/tree/master/modules/spacelift actually references https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation module.
It is recommended to place the module from the 1st link into the live infra repo under components/terraform/spacelift
. For me there would be 4x instances of that, so I assume I would have to link corresponding 4x projects in Spacelift.
Once I have Spacelift worker pools deployed (for each AWS account for a start), do I need to directly apply that 1st stack from components/terraform/spacelift
(I know atmos can’t be used for this) against Spacelift ?
i don’t remember what you are using, but we usually provision as many Spacelift admin stacks (an admin stack manages and provisions regular stacks) as we have OUs in the org
regarding this structure
--aws_account_1
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_2
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_3
|
|-- components
|-- stacks
|-- atmos.yaml
|--aws_account_4
|
|-- components
|-- stacks
|-- atmos.yaml
in the components
folder we have terraform code for the components, they are “stateless” meaning they can be deployed into any account/region
that’s why there is only one components
folder per repo
Hmmm .. I see … so this would be a beginner mistake then.
regarding stacks
this is a structure we are using
catalog
is for the components YAML config, and all files from the catalog are imported into top-level stacks in orgs
But could I still get by with configuring 1 root project in Spacelift for each of the target accounts, and then pointing them to components/terraform/spacelift/
of each individual directory in my structure above?
mixins
are some common vars/settings for each account and region
orgs
-> OUs (tenants) -> accounts -> regions
Would this approach lead to 4x charge from Spacelift in terms of licensing … ?
no, b/c Spacelift charges per worker or per minute (depending on the plan); it does not matter how many stacks you have
cool …
and yes, you can provision one admin stack (manually using TF or atmos)
and then that admin stack kicks off all other stacks (admin and regular)
From there, addition of any new stacks to the repo is auto-detected and they are provisioned as workspaces/stacks in Spacelift ?
the point is, by separating components
(logic) from stacks (config), you don’t have to repeat anything (like you showed above, repeated 4 times)
you just configure the corresponding YAML stacks
Yes, right now I repeat root-level terraform config for each of the AWS accounts/directories above. It is tedious, not very DRY, but allows me to have them pinned to different versions of backing TF modules.
For each component 4x …
i don’t think you have/use 4 diff versions at the same time
I guess I leaned towards this approach to have more decoupling, but actually I may need to rework that to keep things simpler. I don’t mind destroying the resources I have already. There aren’t many.
It is like a greenfield project.
so in our structure, that could be done by placing 2-3 versions of the component into components
under diff names, e.g. vpc-v1.0
yes …
and then in YAML
components:
terraform:
vpc-defaults:
metadata:
type: abstract
vars:
#default vars here
my-vpc-1:
metadata:
component: vpc-v1.0
inherits:
- vpc-defaults
vars: .....
my-vpc-2:
metadata:
component: vpc-v2.0
inherits:
- vpc-defaults
vars: .....
atmos terraform plan my-vpc-1 -s xxxx
atmos terraform plan my-vpc-2 -s yyyy
Also, most of my provider configurations for each component are currently hardcoded, but that could be improved with some variables that, just like all other variables, can be passed down from stack configs…. e.g.
provider "aws" {
region = var.region
allowed_account_ids = ["<REDACTED>"]
# Web Identity Role Federation only used in CI/CD
assume_role_with_web_identity {
# NOTE: Variables are not allowed in provider configurations block, so these values are hardcoded for all CI/CD purposes
role_arn = "<REDACTED>"
session_name = "<REDACTED>"
duration = "1h"
# NOTE: Ensure this is substituted in CI/CD runtime (either within gnu make step or GitLab pipeline)
# Hint: sed -i '' -e "s/GITLAB_OPENIDC_TOKEN/$CI_JOB_JWT_V2/g" providers.tf
web_identity_token = "GITLAB_OPENIDC_TOKEN" # This is just placeholder
}
default_tags {
tags = {
"Owner" = "<REDACTED>"
"GitLab-Owners" = "<REDACTED>"
"Repository" = "<REDACTED>"
"VSAD" = "<REDACTED>"
"DataClassification" = ""
"Application" = ""
"ProductPortfolio" = ""
}
}
}
How do you solve AWS access and role assume from:
- Spacelift public workers
- Spacelift private workers
that’s a separate issue, different for public and private workers
I have Terraform-power role in each account that can have expanding policies on it as time goes by … depending what AWS services we end up needing to deploy.
Ok, I think I will try to get buy-in on Spacelift medium tier in the next month from my team.
for public workers, we allow each stack to assume an IAM role https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role
And then try testing all this …
for private pools, since you deploy it in your VPC, use instance profiles with permissions to assume IAM roles into AWS accounts
for private pools, probably worker pool per account/vpc instead of just 1 worker pool in some central account …
1-1 mappings between those, separate worker pool configurations for each Spacelift admin project/stack …
not really, depends on many factors. We deploy one worker pool for all accounts and regions, or a worker pool per region, or a worker pool for prod-related stuff and non-prod-related stuff - depending on security, compliance, maintenance, cost and other requirements
for public workers, we allow each stack to assume an IAM role https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role
Is this done by one of those CP Terraform modules for Spacelift that you provided above ?
Or does it have to be done separately?
yes
so yes, it’s a lot of things that need to be put together, so ask questions if you have them, we deployed all of those combinations and can help, also ask in spacelift
thank you so much! this is very useful and you have helped me to get oriented in the right direction!
I’ve taken time to refactor my environments organization so I only have 1 components and 1 stacks under the repo root dir.
However, I started running into this problem where the 1st environment persists the components/terraform/<COMPONENT_DIR>/.terraform/environment
value for the workspace.
Then when I run the same stack for a different AWS account, I get prompted to select a workspace.
Initializing the backend...
The currently selected workspace (nbi-cto-gbl-devops) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. default
2. nbi-cto-gbl-innov8
Basically this issue: https://support.hashicorp.com/hc/en-us/articles/360043550953-Selecting-a-workspace-when-running-Terraform-in-automation
Curious if you have a remedy for this @Andriy Knysh (Cloud Posse)
are you using atmos or plain terraform commands?
atmos
selects the correct workspace every time you run atmos terraform plan/apply
but in general, take a look at TF_DATA_DIR
ENV var
you can set it something like this
# Org-specific Terraform work directory
ENV TF_DATA_DIR=".terraform${TENANT:+-$TENANT}"
all atmos
is TENANT something atmos exports before it proceeds with selecting/creating workspace ?
At the root of the repo where atmos.yaml
is, I also maintain a tf-cli-config.tfrc
file.
Currently the only config I have is:
plugin_cache_dir = "$HOME/.cache/terraform" # Must pre-exist on file system
In addition to that I am using direnv tool for config of env variables as soon as I enter the project root dir. Currently the content of .envrc
file that direnv respects is only:
export TF_CLI_CONFIG_FILE=$(pwd)/tf-cli-config.tfrc
so it plays with the above and tells TF where to look for .tfrc
file before anything runs.
Is TENANT something atmos exports before it proceeds with selecting/creating workspace ?
if you are using tenants (var.tenant defined in YAML stack configs), atmos automatically includes tenant
in all calculations (TF workspace, varfile, planfile, etc.)
tenant: tenant1
TF workspace will be in the format tenant1-ue2-dev
in the example
for a diff tenant, it will be tenant2-ue2-dev
etc.
I have all that in place regarding tenant and workspaces. But I guess it matters at what time TF_DATA_DIR
from your example gets exported.
In my scenario, I have the same tenant deployed in multiple AWS accounts, so binding the tenant var to the path of TF_DATA_DIR
still doesn’t make sense.
But, since I am using desk, I added to my ~/.desk/desks/tf.sh
(which is the config for the Terraform desk) a function which assumes a SAML-federated role in each account. When I call that function it exports the AWS_ACCOUNT
env var. In addition I added
export TF_DATA_DIR=".terraform${AWS_ACCOUNT:+-$AWS_ACCOUNT}"
to the <PROJECT_ROOT>/.envrc
file. So , whenever I call function to assume another role/hop account , new value for AWS_ACCOUNT
gets exported. Then in the <PROJECT_ROOT>
I just do
direnv allow .
which reloads/recomputes all env variables from <PROJECT_ROOT>/.envrc
and always gives me a new value for TF_DATA_DIR
.
we’re actually not using TF_DATA_DIR
, it was just an example of how to handle it in a diff way. We just use atmos
(and in Spacelift as well) which just calculates TF workspace from the context (tenant, environment, stage)
did you try https://www.leapp.cloud/ - we use it to assume a primary role into identity account
and then each TF component has a providers.tf
file with the following
provider "aws" {
region = var.region
profile = module.iam_roles.profiles_enabled ? coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name) : null
dynamic "assume_role" {
for_each = module.iam_roles.profiles_enabled ? [] : ["role"]
content {
role_arn = coalesce(var.import_role_arn, module.iam_roles.terraform_role_arn)
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
variable "import_profile_name" {
type = string
default = null
description = "AWS Profile name to use when importing a resource"
}
variable "import_role_arn" {
type = string
default = null
description = "IAM Role ARN to use when importing a resource"
}
which just means (and i’m trying to point you in that direction) that we don’t manually assume roles into each account when provisioning resources. We assume roles into identity account, and then each component is configured with the correct role or profile to assume into other accounts (dev, prod, staging, etc.), which is all done automatically by terraform
for this to work, the primary role(s) in the identity account has permissions to assume roles in the other infra acounts, while the delegated roles in other accounts have a trust policy to allow the primary role from identity to assume it
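For illustration, a hedged sketch of such a trust policy on a delegated role in one of the member accounts (the account ID and role name are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<IDENTITY_ACCOUNT_ID>:role/<primary-role-name>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}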
we’re actually not using TF_DATA_DIR
, it was just an example of how to handle it in a diff way. We just use atmos
(and in Spacelift as well) which just calculates TF workspace from the context (tenant, environment, stage)
Again, there is no problem with atmos
. Now that I don’t have separate directories for each AWS account, each holding its own components/terraform/<COMPONENT_NAME>
copies, what ends up happening (and only in local workflow) is that:
• running for AWS account A, atmos
computes the workspace name correctly and it gets saved to the components/terraform/<COMPONENT_NAME>/.terraform/environment
file with the value of the computed workspace name my-workspace-something-A
(note that this is when you don't have any TF_DATA_DIR
set, so the default is .terraform)
• after that, running for AWS account B, atmos
computes the workspace name correctly again, but before it proceeds with selecting it, the terraform init
command finds the above components/terraform/<COMPONENT_NAME>/.terraform/environment
file with the value my-workspace-something-A
inside. That confuses it, and then it causes the prompt that I have linked here
In CI/CD, this issue doesn’t exist because that .terraform
dir is always ephemeral for a single job run for a specific AWS account.
Then for a new AWS account (also stack and workspace), on a new job and a new container, the checked-out source doesn't come with a components/terraform/<COMPONENT_NAME>/.terraform
dir.
So atmos
(internally terraform init
) doesn't get confused about workspaces. It always computes them right, though.
I have these in atmos.yaml
:
components:
terraform:
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
# Supports both absolute and relative paths
base_path: "components/terraform"
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
apply_auto_approve: false
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
deploy_run_init: true
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
init_run_reconfigure: true
# Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
auto_generate_backend_file: true
, so I keep seeing terraform init
pre-run , no matter what other atmos terraform <command>
I issue.
Thank you for pointing me to TF_DATA_DIR
, it was very useful.
Here is what my new project organization looks like.
from the issue above, looks like TF_DATA_DIR
should be set for each account in your case (not tenant)
or you can run atmos terraform clean xxx -s yyy
before switching the accounts
> atmos terraform clean test/test-component-override -s tenant1-ue2-dev
Deleting '.terraform' folder
Deleting '.terraform.lock.hcl' file
Deleting terraform varfile: tenant1-ue2-dev-test-test-component-override.terraform.tfvars.json
Deleting terraform planfile: tenant1-ue2-dev-test-test-component-override.planfile
That is neat! I wasn’t aware of atmos terraform clean
…
Even better …
There might be a bug with atmos terraform clean
when TF_DATA_DIR
is set.
$ env | grep 'TF_DATA_DIR'
TF_DATA_DIR=.terraform-devops
$ atmos terraform clean ecr/private -s nbi-cto-uw2-devops
Deleting '.terraform' folder
Deleting '.terraform.lock.hcl' file
Deleting terraform varfile: nbi-cto-uw2-devops-ecr-private.terraform.tfvars.json
Deleting terraform planfile: nbi-cto-uw2-devops-ecr-private.planfile
Deleting 'backend.tf.json' file
Found ENV var TF_DATA_DIR=.terraform-devops
Do you want to delete the folder '.terraform-devops'? (only 'yes' will be accepted to approve)
Enter a value: yes
Deleting folder '.terraform-devops'
$ ls -lah components/terraform/ecr/private
drwxr-xr-x 12 zecam staff 384B Jul 1 10:26 .
drwxr-xr-x 3 zecam staff 96B Jun 28 13:18 ..
drwxr-xr-x 6 zecam staff 192B Jun 30 15:16 .terraform-devops <-- still present on FS
drwxr-xr-x 6 zecam staff 192B Jun 30 15:32 .terraform-innov8
drwxr-xr-x 6 zecam staff 192B Jun 30 15:10 .terraform-rkstr8
-rw-r--r-- 1 zecam staff 9.9K Jun 30 16:06 context.tf
...
hmm, we actually did not use TF_DATA_DIR
much so it was not tested 100%, i’ll look into that
I liked that it paired well together with TF_DATA_DIR
at first and figured out which specific dir needs to be deleted, but for some reason it didn’t delete.
did you try https://www.leapp.cloud/ - we use it to assume a primary role into identity account
I am looking into this. I am a fan of https://awsu.me/, with some custom-written plugins for SAML 2.0 Federation.
So I might build a custom image on top of geodesic that includes our forks of awsume
and our plugin for SAML 2.0
Based on the docs, Leapp supports only a few Identity Providers, none of them being the one we work with.
@azec please use this new version https://github.com/cloudposse/atmos/releases/tag/v1.4.23
thanks!
2022-06-27
v1.4.22
what: Add ATMOS_CLI_CONFIG_PATH ENV var. Detect more YAML stack misconfigurations. Add functionality to define atmos custom CLI commands.
why: The ATMOS_CLI_CONFIG_PATH ENV var allows specifying the location of the atmos.yaml CLI config file. This is useful for CI/CD environments (e.g. Spacelift) where an infrastructure repository gets loaded into a custom path and atmos.yaml is not in the locations where atmos expects to find it (no need to copy atmos.yaml into /usr/local/etc/atmos/atmos.yaml).
Detect…
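For example, a hedged sketch of how this might be used in a CI/CD hook, assuming the variable points at the directory that holds atmos.yaml (the clone path is the Spacelift location mentioned earlier; component and stack names are placeholders):
export ATMOS_CLI_CONFIG_PATH=/mnt/workspace/source
atmos version
atmos terraform plan <component> -s <stack>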
this is a very exciting release! (I think we should have bumped the major. @Andriy Knysh (Cloud Posse))
This adds the ability to add any number of commands/subcommands to atmos to further streamline your tooling under one interface.
Let’s say you wanted to add a new command called “provision”, you would do it like this:
- name: terraform
description: Execute terraform commands
# subcommands
commands:
- name: provision
description: This command provisions terraform components
arguments:
- name: component
description: Name of the component
flags:
- name: stack
shorthand: s
description: Name of the stack
required: true
# ENV var values support Go templates
env:
- key: ATMOS_COMPONENT
value: "{{ .Arguments.component }}"
- key: ATMOS_STACK
value: "{{ .Flags.stack }}"
steps:
- atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
- atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
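With that config in place, the new subcommand would be invoked like any built-in one (component and stack names here are hypothetical):
atmos terraform provision vpc -s tenant1-ue2-dev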
But other practical use-cases are:
• you use ansible and you want to call it from atmos
• you use the serverless framework, and want to streamline it
• you want to give developers a simple “env up” and “env down” command, this would be how.
we’ll do a major release (let’s add that to the docs first)
another useful thing to add would be hooks, e.g. before
and after
hooks for terraform plan/apply