#atmos (2024-02)
2024-02-01
Question on how to handle stack-unique configuration files in atmos.
The situation is that we have a terraform component, used by lots of stacks, but each stack has its own JSON configuration file for that component. These files are several hundred lines long, so storing them in a variable would be unwieldy. Currently, we store all these files inside the component, but this means if I change the file for one stack, atmos describe affected returns all stacks with that component, leading to a bunch of no-op plan and apply steps in our git automation.
I’d like to store the files elsewhere in the repo, and have each stack’s yaml configuration point to the file. When the file is changed, it should trigger a plan or apply of just that stack. Does anyone know of a good way to do this?
This command produces a list of the affected Atmos components and stacks given two Git commits.
file - if the Atmos component depends on an external file, and the file was changed (see affected.file below), the file attribute shows the modified file
folder - if the Atmos component depends on an external folder, and any file in the folder was changed (see affected.folder below), the folder attribute shows the modified folder
file - an external file on the local filesystem that the Atmos component depends on was changed.
Dependencies on external files (not in the component's folder) are defined using the file attribute in the settings.depends_on map. For example:
components:
  terraform:
    top-level-component3:
      metadata:
        component: "top-level-component1"
      settings:
        depends_on:
          1:
            file: "examples/tests/components/terraform/mixins/introspection.mixin.tf"
oh, that is awesome!
this is how to specify deps on external files and folders in the YAML stack manifests using the settings.depends_on
attribute, which is used in atmos describe affected
note that you can also have dependencies on external (but local) terraform modules in your TF components - but in this case Atmos detects that automatically, no need to specify anything in YAML stack manifests
component.module - the Terraform component is affected because it uses a local Terraform module (not from the Terraform registry, but from the local filesystem), and that local module has been changed.
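Applied to the original question above, each stack's manifest can declare a dependency on its own external config file, so atmos describe affected flags only that stack when the file changes - a sketch (component name and file paths are hypothetical):

# stacks/org1/plat/prod.yaml (hypothetical)
components:
  terraform:
    my-component:
      settings:
        depends_on:
          1:
            # this stack's own JSON config, stored outside the component folder
            file: "config/my-component/org1-plat-prod.json"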
Exactly what I was looking for. Thanks @Andriy Knysh (Cloud Posse)!
Greetings.. running through a pared down version of https://atmos.tools/design-patterns/organizational-structure-configuration
I was able to run atmos terraform deploy vpc-flow-logs-bucket -s org1-plat-ue2-prod
without a problem
Then when I run atmos terraform deploy vpc -s org1-plat-ue2-prod
I’m getting the following error:
╷
│ Error: stack name pattern '{namespace}-{tenant}-{environment}-{stage}' includes '{environment}', but environment is not provided
│
│ with module.vpc_flow_logs_bucket[0].data.utils_component_config.config[0],
│ on .terraform/modules/vpc_flow_logs_bucket/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
exit status 1
Here is the output of atmos describe component vpc --stack org1-plat-ue2-prod
module "vpc_flow_logs_bucket" {
count = local.vpc_flow_logs_enabled ? 1 : 0
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.5.0"
# Specify the Atmos component name (defined in YAML stack config files)
# for which to get the remote state outputs
component = "vpc-flow-logs-bucket"
# Override the context variables to point to a different Atmos stack if the
# `vpc-flow-logs-bucket-1` Atmos component is provisioned in another AWS account, OU or region
stage = try(coalesce(var.vpc_flow_logs_bucket_stage_name, module.this.stage), null)
environment = try(coalesce(var.vpc_flow_logs_bucket_environment_name, module.this.environment), null)
tenant = try(coalesce(var.vpc_flow_logs_bucket_tenant_name, module.this.tenant), null)
# `context` input is a way to provide the information about the stack (using the context
# variables `namespace`, `tenant`, `environment`, and `stage` defined in the stack config)
context = module.this.context
}
and try again
that code was taken from a larger component which allows provisioning one VPC flow logs bucket for many VPCs, and the example was not updated for the case where the bucket is deployed in the same stack as the VPC
(we’ll update the code and the docs in the next release)
Thanks! That led to the following error:
│ Error: Attempt to get attribute from null value
│
│ on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│ 5: log_destination = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│ ├────────────────
│ │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1
after running: atmos terraform deploy vpc-flow-logs-bucket -s org1-plat-ue2-prod
then atmos terraform deploy vpc -s org1-plat-ue2-prod
@Dave this is another thing that we are going to add to the “Quick Start” in the next release. In fact, it’s documented here https://atmos.tools/core-concepts/components/remote-state
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,
in short, the remote-state module uses the utils provider to read Atmos components, and Terraform executes all providers from the component's folder
atmos.yaml does not exist in the component's folder, hence the utils provider can't find the remote state
if you are on the host, put atmos.yaml at /usr/local/etc/atmos/atmos.yaml
Oh, I totally read that the other day, apologies for forgetting… best practice would then be to copy the atmos config there any time it is updated?
or just set the ENV var ATMOS_CLI_CONFIG_PATH (this is what geodesic does automatically)
Oh, I totally did that…
best practice would be to use a Docker container and the rootfs pattern:
COPY rootfs/ /
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir ('/usr/local/etc/atmos' on Linux, '%LOCALAPPDATA%/atmos' on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star '**' is supported)
# https://en.wikipedia.org/wiki/Glob_(programming)

# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are considered paths relative to 'base_path'.
base_path: ""
components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
    apply_auto_approve: false
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
    deploy_run_init: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: true
  helmfile:
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_BASE_PATH' ENV var, or '--helmfile-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_USE_EKS' ENV var
    # If not specified, defaults to 'true'
    use_eks: true
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH' ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN' ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN' ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  file: "/dev/stdout"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  level: Info
# Custom CLI commands
commands:
  - name: tf
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: plan
        description: This command plans terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ENV_VAR_1
            value: ENV_VAR_1_value
          - key: ENV_VAR_2
            # 'valueCommand' is an external command to execute to get the value for the ENV var
            # Either 'value' or 'valueCommand' can be specified for the ENV var, but not both
            valueCommand: echo ENV_VAR_2_value
        # steps support Go templates
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
  - name: play
    description: This command plays games
    steps:
      - echo Playing…
    # subcommands
    commands:
      - name: hello
        description: This command says Hello world
        steps:
          - echo Hello world
      - name: ping
        description: This command plays ping-pong
        # If 'verbose' is set to 'true', atmos will output some info messages to the console before executing the command's steps
        # If 'verbose' is not defined, it implicitly defaults to 'false'
        verbose: true
        steps:
          - echo Playing ping-pong…
          - echo pong
  - name: show
    description: Execute 'show' commands
    # subcommands
    commands:
      - name: component
        description: Execute 'show component' command
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates and have access to {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
          - key: ATMOS_TENANT
            value: "{{ .ComponentConfig.vars.tenant }}"
          - key: ATMOS_STAGE
            value: "{{ .ComponentConfig.vars.stage }}"
          - key: ATMOS_ENVIRONMENT
            value: "{{ .ComponentConfig.vars.environment }}"
        # If a custom command defines 'component_config' section with 'component' and 'stack', 'atmos' generates the config for the component in the stack
        # and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
        # exposing all the component sections (which are also shown by 'atmos describe component' command)
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        # Steps support using Go templates and can access all configuration settings (e.g. {{ .ComponentConfig.xxx.yyy.zzz }})
        # Steps also have access to the ENV vars defined in the 'env' section of the 'command'
        steps:
          - 'echo Atmos component from argument: "{{ .Arguments.component }}"'
          - 'echo ATMOS_COMPONENT: "$ATMOS_COMPONENT"'
          - 'echo Atmos stack: "{{ .Flags.stack }}"'
          - 'echo Terraform component: "{{ .ComponentConfig.component }}"'
          - 'echo Backend S3 bucket: "{{ .ComponentConfig.backend.bucket }}"'
          - 'echo Terraform workspace: "{{ .ComponentConfig.workspace }}"'
          - 'echo Namespace: "{{ .ComponentConfig.vars.namespace }}"'
          - 'echo Tenant: "{{ .Compo…
or (especially if you are on the host), use ATMOS_CLI_CONFIG_PATH to set the path to atmos.yaml to whatever location you like
Yeah I have that set, I am using the container.
(this is an annoying feature, but that's how Terraform works with the providers)
Still having the issue, I’m clearly missing something….
I have done the following (automatically set on my docker run command)
just set the ENV var ATMOS_CLI_CONFIG_PATH
(this is what geodesic
does automatically)
# Tried both of the following
√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/
√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/atmos.yaml
I have tried this
if you are on the host, put atmos.yaml to /usr/local/etc/atmos/atmos.yaml
ls -l /usr/local/etc/atmos/
total 4
-rwxr-xr-x 1 root root 1931 Feb 2 09:08 atmos.yaml
Does using my current atmos.yaml suffice, or is there something special about your atmos.yaml from here?
with /usr/local/etc/atmos/atmos.yaml, what error do you see?
√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/atmos.yaml
╷
> │ Error: Attempt to get attribute from null value
> │
> │ on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
> │ 5: log_destination = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
> │ ├────────────────
> │ │ module.vpc_flow_logs_bucket[0].outputs is null
> │
> │ This value is null, so it does not have any attributes.
> ╵
> exit status 1
√ . [NLS] (HOST) _repotest ⨠ echo $ATMOS_CLI_CONFIG_PATH
/_repotest/
╷
│ Error: Attempt to get attribute from null value
│
│ on vpc-flow-logs.tf line 5, in resource "aws_flow_log" "default":
│ 5: log_destination = module.vpc_flow_logs_bucket[0].outputs.vpc_flow_logs_bucket_arn
│ ├────────────────
│ │ module.vpc_flow_logs_bucket[0].outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1
are you using the ENV variables, or is your atmos.yaml in /usr/local/etc/atmos/atmos.yaml?
I’ve tried both, are they mutually exclusive?
if you are using the ENV vars, you need to set 2 vars:
Initial Atmos configuration can be controlled by these ENV vars:
ATMOS_CLI_CONFIG_PATH - where to find atmos.yaml. Path to a folder where the atmos.yaml CLI config file is located
ATMOS_BASE_PATH - base path to components and stacks folders
atmos.yaml is loaded from the following locations (from lowest to highest priority):
System dir (/usr/local/etc/atmos/atmos.yaml on Linux, %LOCALAPPDATA%/atmos/atmos.yaml on Windows)
Home dir (~/.atmos/atmos.yaml)
Current directory
ENV variables ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH
Yes read that
Those are both set.
if Atmos sees ATMOS_CLI_CONFIG_PATH, it will not try to use /usr/local/etc/atmos/atmos.yaml
what's ATMOS_BASE_PATH?
echo $ATMOS_CLI_CONFIG_PATH
/repotest/
echo $ATMOS_BASE_PATH
/repotest/
Also tried
echo $ATMOS_CLI_CONFIG_PATH
/repotest
echo $ATMOS_BASE_PATH
/repotest
Also tried
unset ATMOS_CLI_CONFIG_PATH
unset ATMOS_BASE_PATH
cp /repotest/atmos.yaml /usr/local/etc/atmos/
base_path: "/repotest" # atmos.yaml
let’s review this:
For this to work for both the `atmos` CLI and the Terraform `utils` provider, we recommend doing one of the following:
- Put `atmos.yaml` at `/usr/local/etc/atmos/atmos.yaml` on local host and set the ENV var `ATMOS_BASE_PATH` to point to the absolute path of the root
of the repo
- Put `atmos.yaml` into the home directory (`~/.atmos/atmos.yaml`) and set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of
the repo
- Put `atmos.yaml` at a location in the file system and then set the ENV var `ATMOS_CLI_CONFIG_PATH` to point to that location. The ENV var must
point to a folder without the `atmos.yaml` file name. For example, if `atmos.yaml` is at `/atmos/config/atmos.yaml`,
set `ATMOS_CLI_CONFIG_PATH=/atmos/config`. Then set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the root of the repo
- When working in a Docker container, place `atmos.yaml` in the `rootfs` directory
at [/rootfs/usr/local/etc/atmos/atmos.yaml](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/rootfs/usr/local/etc/atmos/atmos.yaml>)
and then copy it into the container's file system in the [Dockerfile](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Dockerfile>)
by executing the `COPY rootfs/ /` Docker command. Then in the Dockerfile, set the ENV var `ATMOS_BASE_PATH` pointing to the absolute path of the
root of the repo. Note that the [Atmos example](<https://github.com/cloudposse/atmos/blob/master/examples/quick-start>)
uses [Geodesic](<https://github.com/cloudposse/geodesic>) as the base Docker image. [Geodesic](<https://github.com/cloudposse/geodesic>) sets the ENV
var `ATMOS_BASE_PATH` automatically to the absolute path of the root of the repo on local host
pick one of the methods
note that ATMOS_BASE_PATH must be an absolute path
this is what geodesic is doing:
• In the Dockerfile, copies the rootfs to the container, so we have /usr/local/etc/atmos/atmos.yaml in there
• Sets ATMOS_BASE_PATH to /localhost/...../infra - NOTE: this is an absolute path
(again, sorry that this is complicated, but Terraform executes providers from the component folder, and we don't want to place atmos.yaml in every component's folder)
Are we sure PATHING had anything to do with it?
As soon as I set vpc_flow_logs_enabled: false, it worked just fine.
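(For reference, that toggle lives in the stack manifest - a minimal sketch using the variable named above:)

components:
  terraform:
    vpc:
      vars:
        vpc_flow_logs_enabled: false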
I went through each of the OPTIONS here
and same results every time.
so it might be something else. You can DM me your setup and i’ll review
2024-02-02
We’re still on 1.44 and have been in the weeds so didn’t see all of the new stuff, but I read the latest releases since then and major kudos on the latest work. This is looking phenomenal and will upgrade soon.
v1.57.0
what
Add default CLI configuration to Atmos code
Update/improve examples and docs
Update demo.tape
why
Add default CLI configuration to Atmos code - this is useful when executing Atmos CLI commands (e.g. on CI/CD) that do not require components and stacks
If atmos.yaml is not found in any of the searched locations, Atmos will use the default CLI configuration:
base_path: "."
components:
  terraform:
    base_path: components/terraform
    apply_auto_approve: false
    deploy_run_init:…
Hello! When using atmos github actions for terraform drift detection, I saw an example config like below. How can I specify all components? How can I specify components in specific folders?
select-components:
  runs-on: ubuntu-latest
  name: Select Components
  outputs:
    matrix: ${{ steps.components.outputs.matrix }}
  steps:
    - name: Selected Components
      id: components
      uses: cloudposse/github-action-atmos-terraform-select-components@v0
      with:
        jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
        debug: ${{ env.DEBUG_ENABLED }}
in the atmos terraform plan with github actions, the docs on the website say Within the "plan" job, the "component" and "stack" are hardcoded (foobar and plat-ue2-sandbox). In practice, these are usually derived from another action.
Is there an example that practically uses components from "affected stacks"?
I see it is using component as a key in the yaml a lot - does that mean it supports configuring one component? How do we configure multiple components in this case?
@Dr.Gao if you look at this doc https://atmos.tools/integrations/github-actions/atmos-terraform-drift-detection, you can see that the action uses this GH action https://github.com/cloudposse/github-action-atmos-terraform-select-components/blob/main/action.yml
The Cloud Posse GitHub Action for “Atmos Terraform Drift Detection” and “Atmos Terraform Drift Remediation” define a scalable pattern for detecting and remediating Terraform drift from within GitHub using workflows and Issues. “Atmos Terraform Drift Detection” will determine drifted Terraform state by running Atmos Terraform Plan and creating GitHub Issues for any drifted component and stack. Furthermore, “Atmos Terraform Drift Remediation” will run Atmos Terraform Apply for any open Issue if called and close the given Issue. With these two actions, we can fully support drift detection for Terraform directly within the GitHub UI.
name: "Atmos GitOps Select Components"
description: "A GitHub Action to get list of selected components by jq query"
author: [email protected]
branding:
icon: "file"
color: "white"
inputs:
select-filter:
description: jq query that will be used to select atmos components
required: false
default: '.'
head-ref:
description: The head ref to checkout. If not provided, the head default branch is used.
required: false
default: ${{ github.sha }}
atmos-gitops-config-path:
description: The path to the atmos-gitops.yaml file
required: false
default: ./.github/config/atmos-gitops.yaml
jq-version:
description: The version of jq to install if install-jq is true
required: false
default: "1.6"
debug:
description: "Enable action debug mode. Default: 'false'"
default: 'false'
required: false
nested-matrices-count:
required: false
description: 'Number of nested matrices that should be returned as the output (from 1 to 3)'
default: "2"
outputs:
selected-components:
description: Selected GitOps components
value: ${{ steps.selected-components.outputs.components }}
has-selected-components:
description: Whether there are selected components
value: ${{ steps.selected-components.outputs.components != '[]' }}
matrix:
description: The selected components as matrix structure suitable for extending matrix size workaround (see README)
value: ${{ steps.matrix.outputs.matrix }}
runs:
using: "composite"
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.head-ref }}
- name: Read Atmos GitOps config
## We have to reference cloudposse fork of <https://github.com/blablacar/action-config-levels>
## before <https://github.com/blablacar/action-config-levels/pull/16> would be merged
uses: cloudposse/github-action-config-levels@nodejs20
id: config
with:
output_properties: true
patterns: |
- ${{ inputs.atmos-gitops-config-path }}
- name: Install Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: ${{ steps.config.outputs.terraform-version }}
terraform_wrapper: false
- name: Install Atmos
uses: cloudposse/github-action-setup-atmos@v1
env:
ATMOS_CLI_CONFIG_PATH: ${{inputs.atmos-config-path}}
with:
atmos-version: ${{ steps.config.outputs.atmos-version }}
install-wrapper: false
- name: Install JQ
uses: dcarbone/[email protected]
with:
version: ${{ inputs.jq-version }}
- name: Filter Components
id: selected-components
shell: bash
env:
ATMOS_CLI_CONFIG_PATH: ${{ steps.config.outputs.atmos-config-path }}
JQUERY: |
with_entries(.value |= (.components.terraform)) | ## Deal with components type of terraform
map_values(map_values(select(${{ inputs.select-filter }}))) | ## Filter components by enabled github actions
map_values(select(. != {})) | ## Skip stacks that have 0 selected components
map_values(. | keys) | ## Reduce to component names
with_entries( ## Construct component object
.key as $stack |
.value |= map({
"component": .,
"stack": $stack,
"stack_slug": [$stack, .] | join("-")
})
) | map(.) | flatten ## Reduce to flat array
run: |
atmos describe stacks --format json | jq -ce "${JQUERY}" > components.json
components=$(cat components.json)
echo "Selected components: $components"
printf "%s" "components=$components" >> $GITHUB_OUTPUT
- uses: cloudposse/github-action-matrix-extended@v0
id: matrix
with:
matrix: components.json
sort-by: ${{ steps.config.outputs.sort-by }}
group-by: ${{ steps.config.outputs.group-by }}
nested-matrices-count: ${{ inputs.nested-matrices-count }}
which executes atmos describe stacks --format json, which returns all components in all stacks
Use this command to show the fully deep-merged configuration for all stacks and the components in the stacks.
How can I specify all components?
the action returns all components, which is what you need for drift detection
on the other hand, to detect only the affected components (affected by the changes in a PR), you can use https://atmos.tools/integrations/github-actions/affected-stacks
Streamline Your Change Management Process
and then to plan the affected components, https://atmos.tools/integrations/github-actions/atmos-terraform-plan
The Cloud Posse GitHub Action for “Atmos Terraform Plan” simplifies provisioning Terraform from within GitHub using workflows. Understand precisely what to expect from running a terraform plan from directly within the GitHub UI for any Pull Request.
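For a rough picture of the wiring, here is a sketch of an affected-stacks plan workflow; the job layout and matrix shape are assumptions - check each action's README for the exact outputs and required inputs:

jobs:
  affected:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.affected.outputs.matrix }}
    steps:
      # determine the components/stacks affected by this PR
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v3

  plan:
    needs: affected
    if: ${{ needs.affected.outputs.matrix != '[]' }}
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJson(needs.affected.outputs.matrix) }}
    steps:
      # plan each affected component in its stack
      - uses: cloudposse/github-action-atmos-terraform-plan@v2
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}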
Thanks very much! That is very helpful!
Are the GitHub actions with Atmos production-ready? @Andriy Knysh (Cloud Posse)
they are used in production
let us know if you need help or find any issues
Thanks! Andriy!
I will reach out again if we have any issues
2024-02-05
Hi, when using Atmos + cloudposse components to set up a multi-account AWS organization (accounts for identity, dns, audit, …). Say we have a customer who is providing access to an AWS account within their own organization through a role we can assume from one of our own IAM roles. How would we be able to assume this role within our cloudposse setup so we can still use atmos & cloudposse components and store terraform state (S3) and locking (DynamoDB) on our own account while provisioning the actual infrastructure on the customer's account?
this is not related to Atmos, since you want to use the same Terraform backend and have already configured the backend.s3 section, so all state will be stored in the same backend (even for the external account). You are probably using different IAM roles to access the backend and the AWS resources
for AWS resources, see an example here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/aurora-mysql/providers.tf
provider "aws" {
region = var.region
# Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
profile = module.iam_roles.terraform_profile_name
dynamic "assume_role" {
# module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
for_each = compact([module.iam_roles.terraform_role_arn])
content {
role_arn = assume_role.value
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
you need to provide an IAM role in assume_role that Terraform will assume to access the external account
if you are using just provider "aws", you can always provide that role in assume_role
if you are using
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
then the account-map component needs to be configured to return that IAM role for a specific context, e.g. for a diff Org, or diff tenant, or diff account - depending on how you model the external account (is it a separate Org, or a separate tenant, or just a separate account)
then in Atmos manifests, you create those configurations for the new Org/tenant/account
so you can always model an external account as a separate Org, or tenant, or just a separate AWS account in the existing Org/tenant
see https://atmos.tools/design-patterns/organizational-structure-configuration for the explanation of that design pattern
Organizational Structure Configuration Atmos Design Pattern
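For example, a minimal stack-manifest sketch that models the customer account as its own tenant (all names hypothetical):

# stacks/orgs/acme/customer/prod/us-east-1.yaml (hypothetical)
vars:
  namespace: acme
  tenant: customer   # the external account modeled as a separate tenant
  environment: ue1
  stage: prod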
Thank you for the detailed reply, I think I understand what you mean, I'm just not sure how I would make the account-map component return the external role to assume? How would I add the external AWS account (which is part of the customer's AWS Org) within my own AWS Org setup configured using atmos + cloudposse components?
So I would just see it as a separate external account, which I want to provision resources on using my existing atmos+cloudposse project. So adding it using the manifests and allowing it to be using the assume_role (terraform_role_arn) with an external role would perfectly fit my needs. I just don’t know how to set that up and can’t immediately find any examples either.
if you are using the account-map component, there are no examples like that. To use an existing external account as a separate stage and to use an existing terraform IAM role, you will need to modify the account-map component
output "terraform_role_arn" {
some logic needs to be added here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/account-map/modules/iam-roles/main.tf#L43
static_terraform_role = local.account_map.terraform_roles[local.account_name]
e.g. if stage=<new_stage>, return the existing Terraform role ARN
the account-map component needs to be modified before you configure the new functionality with Atmos, e.g. by providing it with a new input - a map of existing accounts to the terraform roles to assume
then use that input in the component to add it to the outputs
and then you can configure it with Atmos (which is just to configure that new input variable)
for example:
components:
  terraform:
    account-map:
      vars:
        existing_accounts_to_terraform_role_arns:
          acc1: arn1
what you are asking about is a brownfield environment
see this PR https://github.com/cloudposse/terraform-aws-components/pull/945 which discusses that
what
• feat: use account-map component for brownfield env
why
• Allow brownfield environments to use the account-map component with existing accounts
references
• Related to PR #943
what
• feat: use account component for brownfield env
why
• Allow brownfield environments to use the account component without managing org, org unit, or account creation • This will allow adding to the account outputs so account-map can ingest it without any changes
references
• Slack conversations in sweetops https://sweetops.slack.com/archives/C031919U8A0/p1702135734967949
tests
I tested with these toggled to true and false to ensure it worked as expected. When these are true, the resources are all created. When these are false, none of the resources are created and all the outputs are filled with existing account information with the ability to override using the yaml inputs.
organization_enabled: true
organizational_units_enabled: true
accounts_enabled: true
Interesting, thank you!
as you can see, the PRs are under discussion (which means there is no consensus on how to do this, so you need to make changes to the components by yourself for now, using any approach, and configure it with Atmos)
Hi, is there a document explaining why Atmos runs a reconfigure? As I look at example atmos.yamls, the default appears to always reconfigure. Every time I run a plan, even for the same component and stack in succession, it's constantly asking to migrate all workspaces
the one caveat is that I’m playing around right now and using local state
terraform init -reconfigure is used to be able to use a component in many stacks
Re-running init with an already-initialized backend will update the working directory to use the new backend settings. Either -reconfigure or -migrate-state must be supplied to update the backend configuration.
you can always set it to false in atmos.yaml
ok thanks, I'm still learning; I don't quite understand why it needs to if it's re-running in the same stack, unless it's not detecting that it's in the same stack and just running it always because the config var says true
-reconfigure was introduced (it's configurable) for the cases when we use multiple Terraform backends per Org, per tenant, or per account. Then we can provision the same TF component into multiple stacks using diff TF backends for each stack
and to make it configurable (and disable it for cases like yours), it was added to the atmos.yaml config
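For a simple setup with a single backend, a minimal atmos.yaml sketch to turn it off:

components:
  terraform:
    # skip `terraform init -reconfigure`; fine when all stacks share one backend
    init_run_reconfigure: false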
ok thx, my case is very simple right now as I’m bootstrapping. Hopefully warnings go away as I get more sophisticated infra in place.
Hi, I was trying to deploy the https://github.com/cloudposse/terraform-aws-components/tree/main/modules/waf module, but I had a hard time figuring out how to use the and_statement or the or_statement or the not_statement from the rules. I know I can do that with straight TF, but I can't seem to be able to do that with the Cloudposse module. Also, are Rule Groups not supported? Can someone please shed some light? Thank you in advance.
The following snippet is what I had, but I was only able to specify one statement.
byte_match_statement_rules:
  - name: "invalid-path"
    priority: 30
    action: block
    statement:
      field_to_match:
        uri_path:
      rule:
        positional_constraint: "STARTS_WITH"
        search_string: "/api/v3/test/"
      text_transformation:
        rule:
          priority: 0
          type: "NONE"
    visibility_config:
      # Defines and enables Amazon CloudWatch metrics and web request sample collection.
      cloudwatch_metrics_enabled: true
      metric_name: "uri_path"
      sampled_requests_enabled: true
@Dan Miller (Cloud Posse)
the waf component may not have everything that the terraform-aws-waf module has. We use components as root modules for customer implementations, so it likely only has what we've required for a given use case. You can likely add anything that the module supports to the component
rule groups are supported by the managed_rule_group_statement_rules input: https://github.com/cloudposse/terraform-aws-waf/blob/main/variables.tf#L361
variable "managed_rule_group_statement_rules" {
@Andriy Knysh (Cloud Posse) do you have an example of using an and_statement or or_statement or not_statement?
I think those or and and statements are not supported by the module. PRs are welcome
@Dan Miller (Cloud Posse), I think the rule groups are not supported in the terraform-aws-components WAF module. The managed rule groups are the ones that are managed by AWS, but not the ones we create.
So it sounds like we’ll need to customize it to do both rule groups and multi statements then. @johncblandii
for_each = local.rule_group_reference_statement_rules
@Gabriel Tam are those not what you are describing?
variable "rule_group_reference_statement_rules" {
variable "rule_group_reference_statement_rules" {
Those are the rule-group-referencing rules that point to existing rule groups (arn). I couldn't find where I can create the rule groups. And the and, or, and not statements are needed for our use cases. Those are the ones we will need to create / customize.
if you modify/update/improve it, your contribution to the WAF module would be greatly appreciated (taking into account that WAF is a complex thing, it would benefit many people)
so it seems support for this resource is what @Gabriel Tam is referring to.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_rule_group
2024-02-06
2024-02-08
v1.58.0
what
Improve Atmos UX and error handling
When a user just types atmos terraform or atmos helmfile, Atmos will show the corresponding Terraform and Helmfile help instead of checking for a component and stack and printing error messages if the component or stack is not found
If a user executes any Atmos command that requires Atmos components and stacks, including just atmos (and including from a random folder not related to Atmos configuration), and the CLI config points to an Atmos stacks…
2024-02-09
v1.59.0
what
Update intro of Atmos (https://atmos.tools/)
Add page on Terraform limitations (https://atmos.tools/reference/terraform-limitations/)
Add backend.tf.json to .gitignore for QuickStart
Default to dark mode
Stylize atmos brand
why
Make it more compelling
Add missing context developers might lack without extensive terraform experience
Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.
Terraform Limitations
v1.60.0
what
Fix an issue with the skip_if_missing attribute in Atmos imports with context
Update docs titles and fix typos
Update atmos version CLI command
why
The skip_if_missing attribute was introduced in Atmos release v1.58.0 and had some issues with checking Atmos imports if the imported manifests don't exist
Docs had some typos
When executing the atmos version command, Atmos automatically checks for the latest…
2024-02-10
This page alone deserves a conf to reveal all its gems and secrets!
Overcoming Terraform Limitations with Atmos
@Erik Osterman (Cloud Posse) created it, I enjoyed reading it, after reading it you will want to use Atmos :)
Overcoming Terraform Limitations with Atmos
You surely do! And you’re better equipped to do so with all the XP distilled in this page!
Is there a way to visualize the atmos stacks when using github actions to plan and apply? Or is it on the roadmap?
I know there is a native GitHub way to do it by clicking on actions, just wondering if there is another frontend
We do not have any immediate plans for a front end
Focused on adopting GitHub actions and GitHub enterprise functionality as much as possible right now
What specifically are you missing that a UI would solve? Visualizing atmos stacks is a bit broad.
I was thinking that it would be nice to have the ability to see which stacks have drifted visually
We believe we've solved that. We open GitHub Issues. This way drift is actionable (it can be assigned to someone to remediate, and supports remediation from issues, when possible) and visual. We also create issues from failures.
The problem with existing UIs is they are not actionable. They just show you that you have a bunch of drifted stacks.
Pro tip, you can use projects with github issues.
Github Issues can be synced to jira, if not using github issues.
(note our actions support this out of the box)
2024-02-11
I tried using the component.yaml's mixins key to copy over my local providers file and it failed.
Does that key only work with the source?
If so, how do I vendor from upstream and copy in my local mixin?
@RB are you asking about how to copy a local file to the component’s folder?
mixins:
  # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
  # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
  # This mixin `uri` is relative to the current `vpc` folder
  - uri: ../../mixins/context.tf
    filename: context.tf
@RB are you asking about how to copy a local file to the component’s folder?
Yes
Oh I see, I can use the ../../ expression?
Hmm, I tried this and I got an error last time
Btw have you also experimented with the new vendor manifest?
What’s nice about that is you can use yaml anchors within the file to DRY up provider copying
Use Atmos vendoring to make copies of 3rd-party components, stacks, and other artifacts in your own repo.
Ah yes, @Andriy Knysh (Cloud Posse) shows an example there:
Option 1
- component: "context"
# Copy a local file into a local file with a different file name
# This `source` is relative to the current folder
source: "components/terraform/mixins/context.tf"
targets:
- "components/terraform/vpc/context.tf"
- "components/terraform/alb/context.tf"
- "components/terraform/ecs/context.tf"
# etc...
# Tags can be used to vendor component that have the specific tags
# `atmos vendor pull --tags test`
# Refer to <https://atmos.tools/cli/commands/vendor/pull>
tags:
- context
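Building on that, YAML anchors can DRY repeated fields across multiple - component entries in the vendor manifest - a sketch with hypothetical names and paths:

spec:
  sources:
    - component: "providers"
      # Copy the shared providers mixin into each component
      source: "components/terraform/mixins/providers.tf"
      targets:
        - "components/terraform/vpc/providers.tf"
        - "components/terraform/ecr/providers.tf"
      tags: &mixin-tags
        - mixins
    - component: "context"
      source: "components/terraform/mixins/context.tf"
      targets:
        - "components/terraform/vpc/context.tf"
        - "components/terraform/ecr/context.tf"
      # reuse the anchored tag list instead of repeating it
      tags: *mixin-tags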
Since there can only be one source, there's no way to do the copying from a different location in each `- component` definition
I could see this improving, but it's not a pattern we use right now, and we dissuade against it due to the impossibility of testing it with open-ended mixins.
We’re adding a terratest helper to do atmos testing for components and stacks
Ah, that looks better than my current way, which is to use a make target, i.e.
make vendor COMP=ecr
make vendor/edit COMP=ecr
.PHONY: vendor
vendor: ## Vendor the component
@mkdir -p components/terraform/$(COMP)
@sed 's,ecr,$(COMP),g' ./mixins/component.yaml > components/terraform/$(COMP)/component.yaml
@atmos vendor pull -c $(COMP)
.PHONY: vendor/edit
vendor/edit: ## Vendor the component and edit the files
@hcledit block rm 'variable.region' -f components/terraform/$(COMP)/variables.tf -u
@cp ./mixins/providers.tf components/terraform/$(COMP)/
On the one hand, make is nice because it understands modification times. Albeit, not in your implementation. But it's over-optimizing for the most part to do that. Give the vendoring a shot.
ALL_PROVIDERS = $(shell find . -type f -name 'providers.tf')

$(ALL_PROVIDERS): mixins/providers.tf
	cp $< $@

.PHONY: providers
providers: $(ALL_PROVIDERS)
That would have only copied it if the mixins/providers.tf is newer than the providers.tf inside of a component.
Thanks Erik. You got some make skills!
The other one is using hcledit to remove the variable.region from variables.tf, after vendoring, as I have moved that into the client's providers.tf file
hcledit block rm 'variable.region' -f components/terraform/$(COMP)/variables.tf -u
Any chance vendoring can also include running a cli command via a post_vendor key or similar?
Since component-level generation (like terragrunt, terramate) is not something we subscribe to, it's not yet something we can prioritize. I think we will inevitably support it, but for now recommend doing that in make, or like @Hans D does with Gotask
I can see supporting it, primarily for the reason making it easier for companies to migrate from other tools into atmos
no worries, for now I will look into the new vendoring yaml, and tie that back into a make target and i should be good to go
Oh, and also for very advanced vendoring requirements, there's https://carvel.dev/vendir/docs/v0.39.x/vendir-spec/
(which atmos vendoring is based on)
we can add pre and post hooks to vendoring (no ETA, but in the near future)
Hi, I’ve encountered the same issue here, not sure if this thread resolved the issue from within the ComponentVendorConfig?
I'm trying to replace the supplied providers.tf file with one that sources the iam_roles module from a private registry on vendor (to kill relative paths to the iam_roles module):
mixins:
  - uri: ../../helpers/providers.registry.tf
    filename: providers.tf
I get the following error
Pulling the mixin '../../helpers/providers.registry.tf' for the component 'my-component' into '/localhost/path/to/components/terraform/my-component/providers.tf'
relative paths require a module with a pwd
This is on Atmos v1.60.0
# mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
# mixins are processed in the order they are declared in the list
mixins:
  # <https://github.com/hashicorp/go-getter/issues/98>
  # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
  # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
  # This mixin `uri` is relative to the current `vpc` folder
  - uri: ../../mixins/context.tf
    filename: context.tf
atmos vendor pull -c infra/vpc
Pulling sources for the component 'infra/vpc' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref=1.372.0' into 'examples/tests/components/terraform/infra/vpc'
Pulling the mixin '../../mixins/context.tf' for the component 'infra/vpc' into 'examples/tests/components/terraform/infra/vpc/context.tf'
@Matthew Reggler if that still is not working for you, please DM me with your config and I’ll take a look
also, update Atmos to the latest
2024-02-12
2024-02-13
I'm trying to run cloudposse/github-action-atmos-terraform-plan but I'm getting this error
Run cloudposse/github-action-atmos-get-setting@v1
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
This is my yaml
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v1
        with:
          component: "github-oidc-role/cicd"
          stack: "gbl-prod"
          component-path: "component/terraform/github-oidc-role"
          terraform-plan-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
          terraform-state-bucket: "org-state-bucket"
          terraform-state-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
          aws-region: "us-east-1"
It looks like it’s failing in cloudposse/github-action-atmos-get-setting
But if I run this, it works
runs-on: ubuntu-latest
steps:
  - uses: hashicorp/setup-terraform@v2
  - name: Setup atmos
    uses: cloudposse/github-action-setup-atmos@v1
    with:
      install-wrapper: true
  - name: Run atmos
    id: atmos
    run: atmos terraform plan github-oidc-role/cicd --stack=gbl-prod
@Igor Rodionov please take a look
I think these moved into the config.
component-path: "component/terraform/github-oidc-role"
terraform-plan-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
terraform-state-bucket: "org-state-bucket"
terraform-state-role: "arn:aws:iam::123456789012:role/org-gbl-prod-gha-cicd"
aws-region: "us-east-1"
(the aim being to keep config out of workflows so they are more easily distributed)
cc @Matt Calhoun who has also been working on github-action-atmos-get-setting
Yes, that makes sense. I was following the readme. I also tried only the component and the stack and received the same weird issue in github-action-atmos-get-setting
@Dan Miller (Cloud Posse) @Igor Rodionov I think the readme may be out of date?
(it does have the part about the config up above)
@RB have you created the config file?
Oh boy no i did not. I thought i could give it that info as inputs to the workflow
If i create that config, wouldn’t i just be duplicating the atmos stack yaml in the gitops config?
@Igor Rodionov we should have a friendlier error message
If i create that config, wouldn’t i just be duplicating the atmos stack yaml in the gitops config?
We are moving most of the gitops config into the atmos stack config. And some of it will be moved into atmos.yaml
(this was based on feedback we received, and it makes sense in hindsight)
The key thing we’re aiming for is that GHA workflows should not need to be edited.
So we moved over to the .github/config/atmos-gitops.yaml pattern but are still having the same error
Run cloudposse/github-action-atmos-get-setting@v1
with:
  component: s3/some-new-bucket
  stack: ue1-devops-prod-01
  settings-path: settings.github.actions_enabled
env:
  ATMOS_CLI_CONFIG_PATH:
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
Error: SyntaxError: Unexpected token 'F', "
Found stac"... is not valid JSON
at JSON.parse (<anonymous>)
at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/lib/settings.ts:28:1)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at processSingleSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/useCase/process-single-setting.ts:40:1)
at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/main.ts:30:1
Our atmos-gitops.yaml file
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: bucket
terraform-state-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
terraform-plan-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
terraform-apply-role: arn:aws:iam::OMMITTED:role/org-gbl-prod-cicd
terraform-version: 1.6.0
aws-region: us-east-1
enable-infracost: false
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
I do notice that ATMOS_CLI_CONFIG_PATH is empty in the call to get-setting, not sure if that's part of the issue
Looking at this documentation too, it seems we may need to set up our atmos.yaml to output JSON so the settings github action can correctly parse it https://atmos.tools/cli/configuration/
Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
@Igor Rodionov can you please look at this
(also, heads up, @Brett Au - sorry to do this to you, but very soon we're moving the configuration into atmos.yaml for consistency)
Hello @Brett Au
What version of cloudposse/github-action-atmos-terraform-plan are you using?
Sorry getting back to this, @RB and I moved into the aws-team-roles and aws-teams modules and we didn’t want to pollute this issue with something custom we did.
I have re-tried this today with cloudposse/github-action-atmos-terraform-plan@v2
I added the configuration into atmos.yaml
integrations:
  github:
    gitops:
      terraform-version: 1.8.1
      infracost-enabled: false
      artifact-storage:
        region: us-east-1
        bucket: bucket
        #table: cptest-core-ue2-auto-gitops-plan-storage
        role: arn:aws:iam::OMITTED:role/l360-gbl-identity-cicd
      role:
        plan: arn:aws:iam::OMITTED:role/l360-gbl-identity-cicd
        apply: arn:aws:iam::OMITTED:role/l360-gbl-identity-cicd
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
I still have atmos-gitops.yaml, as it appears uses: cloudposse/github-action-atmos-affected-stacks@v3 still requires it.
I get the following error when running the plan github action
SyntaxError: Unexpected token 'p', "path: /hom"... is not valid JSON
at JSON.parse (<anonymous>)
at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/lib/settings.ts:28:1)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at processSingleSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/useCase/process-single-setting.ts:40:1)
at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v1/src/main.ts:30:1
- name: Plan Atmos Component
  uses: cloudposse/github-action-atmos-terraform-plan@v2
  env:
    ATMOS_CLI_CONFIG_PATH: ${{ github.workspace }}
  with:
    component: ${{ matrix.component }}
    stack: ${{ matrix.stack }}
    atmos-config-path: ${GITHUB_WORKSPACE}
    atmos-version: 1.70.0
I should state that my manual github action (bash) is working just fine
- name: Run atmos
  if: ${{ steps.shouldrun.outputs.status == 'true' }}
  id: atmos
  run: |
    export ATMOS_CLI_CONFIG_PATH=${GITHUB_WORKSPACE}
    export ATMOS_BASE_PATH=${GITHUB_WORKSPACE}
    atmos terraform plan ${{ matrix.component }} --stack=${{ matrix.stack }} -no-color
So I know the environment can properly run atmos, but it appears the output from the github-action-atmos-get-setting github action is not valid JSON and is failing
We do have a custom name pattern in atmos
name_pattern: "{environment}-{stage}"
Sorry for the tags, but just wanted to bubble up this old issue
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) @Igor Rodionov
I still have atmos-gitops.yaml as it appears uses: cloudposse/github-action-atmos-affected-stacks@v3
still requires it.
The atmos-gitops.yaml file has been entirely eliminated from all actions and replaced with the integrations.github section
Including for affected stacks.
https://github.com/cloudposse/github-action-atmos-affected-stacks?tab=readme-ov-file#config
ACK I think I may have been on a v2 branch
hopefully an easy fix
@Brett Au, have you succeeded in solving the issue?
This error is coming up now. I added some logs to it and using the same integrations key in the yaml file.
https://github.com/cloudposse/github-action-matrix-extended/issues/9
Only notable difference is that a dynamodb lock table is not supplied.
I see the issue.
⨠ atmos describe config -f json | jq .
parse error: Invalid numeric literal at line 1, column 6
which fails because of these 2 output lines:
⨠ atmos describe config -f json | head -2
Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos
Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos
This is because
# atmos.yaml
logs:
  level: Trace
I also tried setting ATMOS_LOGS_LEVEL and get the same error
⨠ export ATMOS_LOGS_LEVEL=Off
⨠ atmos describe config -f json | jq .
parse error: Invalid numeric literal at line 1, column 6
Options:
1. Set log level Off in atmos.yaml and set ATMOS_LOGS_LEVEL=Trace in geodesic
2. Fix atmos to allow using an environment variable to turn off the "Found ENV" lines in the output
3. Update to strip out "Found ENV" from the output
I think I'll just do (1) for now, but I'd prefer (2), and (3) seems like a temp fix
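Option (1) as a minimal atmos.yaml sketch:

logs:
  file: "/dev/stdout"
  # keep informational output quiet so piped JSON stays parseable;
  # override locally with ATMOS_LOGS_LEVEL=Trace when debugging
  level: Off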
Aha, yes, I think we have an issue tracking this. @Gabriela Campana (Cloud Posse) can you confirm we have an issue to fix the output from atmos that makes it impossible to use trace-level debugging together with automation. Cc @Andriy Knysh (Cloud Posse)
All debug/trace output should be going to stderr, but might be missed in some places
Hello, I am struggling to understand how atmos can provision resources into different AWS accounts in a multi-account AWS organization. For example, if I need to provision an IAM role in all my AWS accounts in my organization, how does atmos change provider configurations to gain access to my organization’s child accounts?
I'm following the Organization Design Pattern: https://atmos.tools/design-patterns/organizational-structure-configuration
Organizational Structure Configuration Atmos Design Pattern
atmos change provider configurations to gain access to my organization’s child accounts
That's up to Terraform.
Here’s how we do it https://github.com/cloudposse/terraform-aws-components/blob/main/modules/ecr/providers.tf
provider "aws" {
region = var.region
# Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
profile = module.iam_roles.terraform_profile_name
dynamic "assume_role" {
# module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
for_each = compact([module.iam_roles.terraform_role_arn])
content {
role_arn = assume_role.value
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
TL;DR providers support variables. So use other modules or variable inputs to supply the values.
Our guiding principle is for atmos to have no cloud-provider specific requirements
But if you put the provider in the configuration and have to back out the component, won't that inhibit the delete process, since terraform will be looking for the provider to do that delete?
inhibit that delete process since terraform will be looking for the provider to do that delete?
It’s using state from something outside of the current component (root module), so it does not inhibit the delete.
Okay awesome. Thanks for the info Erik.
I know what you’re referring to, however, and we have encountered that when we made mistakes. But it’s not something we encounter anymore.
some of our engineers will be more familiar than me with the specific implementations.
module "always" {
source = "cloudposse/label/null"
version = "0.25.0"
# account_map must always be enabled, even for components that are disabled
enabled = true
context = module.this.context
}
Here’s one of those examples.
E.g. labels cannot get disabled if we need to successfully destroy.
Okay this makes sense. Thanks Erik. Was banging my head against a wall there for a while.
as Erik mentioned, it’s up to the provider config
provider "aws" {
region = var.region
assume_role = var.assume_role
}
assume_role can be provided from other TF components (as in our examples), from TF vars, or from Atmos stack manifests for diff Orgs/tenants/accounts
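For example, a stack manifest could supply it per account (a hypothetical sketch; the component name and role ARN are made up):
components:
  terraform:
    iam-role:
      vars:
        region: us-east-2
        # hypothetical role in the target child account for Terraform to assume
        assume_role: "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"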
v1.61.0 what
Update readme to be more consistent with atmos.tools Fix links Add features/benefits Add use-cases Add glossary
why
Better explain what atmos does and why
v1.62.0
Add atmos docs CLI command @aknysh (#537)
what
Add atmos docs CLI command
why
Use this command to open the Atmos docs
Hey all! some notable updates to the atmos docs. First, you can now open the docs from the command line. Just run atmos docs
Here are some notable additions.
• Best Practices for Stacks: https://atmos.tools/core-concepts/stacks/#best-practices
• Best Practices for Components: https://atmos.tools/core-concepts/components/#best-practices
• Added an FAQ: https://atmos.tools/faq
• Challenges that led us to writing atmos: https://atmos.tools/reference/terraform-limitations
v1.63.0
Add integrations.github to atmos.yaml @aknysh (#538)
2024-02-14
Looks great
Hello, I see you have this to support short forms of AWS regions and zones: https://github.com/cloudposse/terraform-aws-utils#introduction. Do you have something similar for GCP?
CloudPosse does not have GCP modules
we used CP modules and created this: https://github.com/slalombuild/terraform-atmos-accelerator
Thanks!
Hello, I need to use multiple modules from the GCP module collection. Can I configure multiple sources in component.yaml? It does not look like it supports that. I should not use vendor.yaml in my case, since I am not pulling for the entire infra, just for that component. I did see that cloudposse solves this issue by adding another module to the main module so it only configures one source. E.g. if a component needs to use both the efs and kms modules, it only needs to pull efs, since kms is also defined in efs main.tf
component.yaml is used to pull all files for a component. If you are pulling two components, you can place them into separate folders and use 2 diff component.yaml files, or use one component.yaml and mixins. With mixins, you can pull anything from multiple sources (but just one by one):
mixins:
  - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
    filename: context.tf
  - uri: https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf
    version: 1.398.0
    filename: introspection.mixin.tf
Thanks! I think component.yaml with mixins is the way I am going to try, since it is not pulling multiple components, it is multiple modules in one component.
This is really helpful, thanks @Andriy Knysh (Cloud Posse)
How are the mixins structured when I have multiple modules from multiple components? Do I create multiple mixin config files for each component?
a list of mixins is part of spec, at the same level as source:
Use Component Vendoring to make copies of 3rd-party components in your own repo.
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:
    # Source 'uri' supports the following protocols: OCI (https://opencontainers.org), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter
    # In 'uri', Golang templates are supported https://pkg.go.dev/text/template
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    # To vendor a module from a Git repo, use the following format: 'github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 1.398.0
    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' supports POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    # https://en.wikipedia.org/wiki/Glob_(programming)
    # https://github.com/bmatcuk/doublestar#patterns
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' supports POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    excluded_paths:
      - "**/context.tf"
  # Mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # All mixins are processed in the order they are declared in the list.
  mixins:
    # https://github.com/hashicorp/go-getter/issues/98
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
      filename: context.tf
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf
      version: 1.398.0
      filename: introspection.mixin.tf
Awesome!
Is there another way to solve this issue?
2024-02-15
Hello, for the label order described here https://github.com/cloudposse/terraform-null-label: I see we can configure the label order as we would like. Does it work if the label order is {namespace}-{tenant}-{environment}-{stage} and the folder structure in stacks follows a different order, namespace/stage/tenant/environment? I think it works, but would like to double check with the experts here. If it does work, is there any disadvantage to doing that?
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
short answer: yes, it will work
long answer:
the label order in the context (from null-label, which is used in all components) is to uniquely and consistently name the cloud (AWS) resources, so your resource names/IDs will look like {namespace}-{tenant}-{environment}-{stage}-{name} (or in whatever order you want)
on the other hand, the Atmos manifests folder structure is for humans to organize the Atmos config and make it DRY. Atmos does not care about the stacks folder structure; it's for people to configure, organize and manage
what Atmos cares about is the context variables defined in the stack manifests - that's how Atmos finds the stack and the component in the stack when you execute commands like atmos terraform plan <component> -s <stack>
Atmos provides unlimited flexibility in defining and configuring stacks and components in the stacks.
Atmos stack manifests can have arbitrary names and can be located in any sub-folder in the stacks directory. Atmos stack filesystem layout is for people to better organize the stacks and make the configurations DRY. Atmos (the CLI) does not care about the filesystem layout, all it cares about is how to find the stacks and the components in the stacks by using the context variables namespace, tenant, environment and stage
as described in https://atmos.tools/design-patterns/, you can organize the stacks
(Atmos manifests) in many different ways depending on your Organization/tenants/regions/accounts structure
Atmos Design Patterns. Elements of Reusable Infrastructure Configuration
Having configured the Terraform components, the Atmos components catalog, all the mixins and defaults, and the Atmos top-level stacks, we can now
Thanks!
all of that was done to be able to separately and independently configure 3 diff things: 1) cloud resource names; 2) Atmos manifests folder structure (for people); 3) Atmos stack names (e.g. plat-ue2-prod)
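As a sketch of that separation, a stack manifest might set the following (values are illustrative; label_order only affects cloud resource names, not the folder layout or the stack name_pattern):
vars:
  namespace: acme
  tenant: plat
  environment: ue2
  stage: prod
  # null-label ordering used for cloud resource names/IDs only
  label_order: ["namespace", "tenant", "environment", "stage", "name"]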
Interesting to follow https://github.com/opentofu/opentofu/issues/685#issuecomment-1945123152: use of the tf lockfiles …
I was going to say “supply chain attacks”, but @Yantrio got there first. :)
The other solution is to just break down and start including the lock file and try to educate my users about another hoop to jump through for almost 0 benefit.
IMO, this is a pretty fundamental cybersecurity concept. It helps ensure that what you got last time is the same as you got this time. It helps make sure that nobody took over the upstream repo and messed with it. It also helps ensure that the configuration you ship to dev is the same you ship to prod (e.g., a 12-factor app).
The problem I usually have is the inverse — too many platforms to support, and not enough people updating the lock file beyond their own personal os/arch.
2024-02-16
set the channel topic:
2024-02-18
2024-02-19
2024-02-22
Hello, I am currently looking for tooling to optimize our Terraform environments. We previously used Terragrunt in our organization, but it has become too cumbersome over time and feels kind of “previous generation” tooling. We are in the process of setting up a PoC with Terramate, and it’s super neat. Especially the orchestration is powerful. We also looked at Spacelift, but I don’t see any reason to migrate to another CI/CD when we can get to a similar UX in GitHub Actions.
However, I just came across Atmos on Reddit and would love to understand how it compares to Terramate and Terragrunt.
You make a fair point, but from what I've seen, building your own IaC automation with GitHub Actions can get messy. First of all, lack of standardization (which creates silos and bottlenecks across projects); too much reliance on individual knowledge (so when someone leaves, there are always big gaps); and as things get more complex, it demands more and more resources, limiting what you can build on top of it. Also, maintaining homegrown solutions in-house is a ton of work.
we have Atmos handling different Terraform environments for many companies, starting with a simple case with one Org and just a few accounts, to multi-Org, multi-tenant, multi-OU, multi-account with hundreds of components deployed to thousands of stacks
we also have Atmos working with Spacelift (handling tens of thousands of resources and thousands of stacks in some cases across multiple Orgs/OUs/regions/accounts), and with GitHub Actions (we can give you a demo on how to use Atmos with GHA) (discussion on GHA vs Spacelift vs other CI/CD tools is a completely diff topic, all of them have their own pros and cons, including cost, usability, user experience, access control, audit, messiness :) , etc.)
here are some docs describing the core concepts of Atmos:
https://atmos.tools/quick-start/
https://atmos.tools/reference/terraform-limitations
https://atmos.tools/design-patterns/
https://atmos.tools/design-patterns/organizational-structure-configuration
https://atmos.tools/integrations/spacelift
https://atmos.tools/integrations/github-actions/
https://atmos.tools/cli/commands/workflow
https://atmos.tools/core-concepts/vendoring/
Atmos is the Ultimate Terraform Environment Configuration and Orchestration Tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.
Take 30 minutes to learn the most important Atmos concepts.
Overcoming Terraform Limitations with Atmos
Atmos Design Patterns. Elements of Reusable Infrastructure Configuration
Organizational Structure Configuration Atmos Design Pattern
let us know if you need more explanation or help (it’s difficult to answer/describe everything in one go)
this is a real example of Spacelift stacks using Atmos:
there are 1274 Spacelift stacks configured across many Orgs, each having many OUs with multiple accounts, deployed into many diff regions
Definition: a Spacelift stack is an Atmos component provisioned into an Atmos stack. For example, a vpc component can be provisioned multiple times into different Org/OU/account/region stacks, keeping the entire config reusable and DRY using concepts like imports and inheritance:
https://atmos.tools/core-concepts/stacks/imports
https://atmos.tools/core-concepts/components/inheritance
https://atmos.tools/core-concepts/components/component-oriented-programming
Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
Component Inheritance is one of the principles of Component-Oriented Programming (COP)
Component-Oriented Programming is a reuse-based approach to defining,
(also take a look at why the tool is called Atmos https://atmos.tools/faq)
Why is the tool called Atmos?
Thanks for sharing! I took some time to read through the resources and decided to stick to Terramate. In the end, I don’t see any significant benefits why I should be using Atmos over Terramate.
As said, we don’t want to use Spacelift because we don’t need any of the features Spacelift offers.
Here’s a bit of feedback:
• Atmos feels really cumbersome to get started with
• The orchestration and order of execution in Terramate helps us better to manage environments and split those up into stacks
• Using yaml for configuration sounds like a step back for us. We like the ability to manage all IaC related config with HCL, otherwise we wouldn’t use Terraform in the first place
• We need code generation and like the approach quite a bit. We also don’t see any issues with tests anyways, thanks for helping us out here!
ok, no problem
regarding “Using yaml for configuration sounds like a step back for us”
Note that in Atmos, YAML is used for configuration of stacks for diff environments. You still use plain Terraform for the components (which can be used with Atmos or without). There is a clear separation of concerns: code (terraform root modules) and configuration for diff environments (Atmos manifests).
Also, YAML is everywhere now (kubernetes, kustomize, helm, helmfile, etc.); it looks like it's the modern way to define configurations (regardless of whether you like or hate YAML)
in the end, there is no one perfect tool for everything. Terramate has its advantages (the code generation, change detection and testing are cool, also using plain HCL), as well as Atmos (these are different approaches to solve similar problems in diff ways)
We need code generation and like the approach quite a bit. @Peter Dinsmore Can you help me understand how you leverage code generation?
Using yaml for configuration sounds like a step back for us. We like the ability to manage all IaC related config with HCL, otherwise we wouldn't use Terraform in the first place
Kubernetes uses yaml and is moving forward….
@jose.amengual for Kubernetes yaml makes sense too. For Terraform, I don't see why I should use a different configuration language if HCL is exactly built for that? E.g. in Terramate, I can describe the purpose (metadata) and orchestration behavior of a stack as HCL in a stack configuration stack.tm.hcl.
Also, the code generation is configured with HCL, which allows the use of Terraform functions inside the code generation. I don’t think YAML would be suitable for that.
@Erik Osterman (Cloud Posse) all sorts of things, to mention a few:
• We generate native backend and provider configuration in stacks interpolating stack metadata. It’s super powerful.
• We use Terramate modules for generating templates based on e.g. provider version used (we render different attributes in resource configuration based on if we use the stable or beta google provider)
• We generate Kubernetes manifests using Terraform outputs
Anyways, there’s a ton of different tools in the market. Just because we favor a different approach doesn’t mean that Atmos isn’t a great tool! Any contribution to the ecosystem is appreciated.
@Peter Dinsmore I appreciate that you shared all this. It's easy to be surrounded by a lot of people who sing praises. The opportunity for improvement lies with constructive criticism.
@Erik Osterman (Cloud Posse) we should catch up at some point
How do you apply all stacks in a pipeline? If I want my pipeline to run atmos terraform apply, can I do an all flag instead of listing the stack and component? This is with GitHub Actions
Are you looking for workflows?
Workflows are a way of combining multiple commands into one executable unit of work.
I am unaware of an atmos flag that just applies or plans all components defined for a stack. However, atmos workflow allows you to synchronously run applies and plans for one or more components.
@pv please help us to understand what you mean by “apply all stacks in a pipeline”. You probably should not plan/apply all stacks at once (there could be thousands of them), but check what components/stacks have changed and plan/apply only those
see this for example https://atmos.tools/integrations/github-actions/affected-stacks
Streamline Your Change Management Process
Yes the workflow is the second best option of just adding each plan/apply command there but yes apply all stacks in a mono repo is better phrased
Also what is the downside to running them all if there is only a change to one stack? Wouldn’t all the terraform show as no changes other than the new stack that you added?
the downside is it will take time and consume resources
Does it consume more resources than running regular terraform? Or is it running more in the background that consumes more resources?
i mean if you have hundreds or thousands of stacks, why trigger all of them if only one or a few are changed and should be planned/applied. Triggering all of them will take a lot of time and probably cost money
(Atmos calls regular terraform for each plan/apply, there is no difference here, it does not do anything in the background)
I do not have hundreds or thousands of stacks. But none of this really answers my initial question, so I'm assuming that means there is no way to run all stacks unless I create a workflow that includes an apply of each individual stack?
there is no option in Atmos to trigger everything (e.g. atmos terraform apply -all) (was not implemented, at least yet, for the reason that people usually have hundreds or thousands of stacks, and triggering all of them would be a waste of resources). Help us understand what exactly you want to do in the pipeline, and we would be able to offer a solution
Atmos workflow is one of the solutions
but if you have a small static set of stacks and you know all of them, you can just execute atmos terraform apply ... sequentially in a script (or in a workflow)
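e.g. a minimal workflow sketch for a small, known set of stacks (the component and stack names here are placeholders):
# workflow manifest sketch; component/stack names are placeholders
workflows:
  apply-known-stacks:
    description: Apply a small, fixed set of components and stacks
    steps:
      - command: terraform deploy vpc -s plat-ue2-dev
      - command: terraform deploy vpc -s plat-ue2-prod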
the point is, it’s usually not feasible/practical to list all stacks in a workflow or in a script since there could be too many of them
another solution:
- atmos list stacks
- for each stack do: atmos list components -s <stack>
- for each component do: atmos terraform apply <component> -s <stack>
the atmos list stacks and atmos list components commands you can find here and add them to your atmos.yaml:
Atmos can be easily extended to support any number of custom CLI commands.
this solution is dynamic (you don’t have to know and hardcode all stacks and components in advance)
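a rough sketch of wiring that loop into a custom command (assumes the atmos list stacks and atmos list components custom commands from the linked page are defined, and uses terraform deploy as the non-interactive apply):
# atmos.yaml (sketch; assumes the list commands above exist)
commands:
  - name: terraform
    commands:
      - name: apply-all
        description: Apply every component in every stack (use with care)
        steps:
          - |
            for stack in $(atmos list stacks); do
              for component in $(atmos list components -s "$stack"); do
                # deploy = apply with auto-approve
                atmos terraform deploy "$component" -s "$stack"
              done
            done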
@pv I think maybe what you want is to apply only the affected stacks in the Pull Request?
(that solution is mentioned above)
@pv let us know if any of the above would help you to implement what you want (and let us know if you need help)
But none of this really answers my initial question so I’m assuming that means there is no way to run all stacks unless I create a workflow that includes an apply of each individual stack?
Yes, the gist of it is we have not implemented plan-all and apply-all workflows because they have multiple problems.
• Plan-all never works if your root modules have any interdependencies. At best it gives you a false sense of what will happen. At worst, it just errors.
• Apply-all is practical for cold-starts, but should never be used after that. And since it's for a cold-start, there are often other considerations. Therefore atmos workflows have been how we address it.
• From a CI/CD perspective, neither plan-all nor apply-all should ever be used.
I can go into more detail on any one of these. For example, there are alternative considerations for how to address apply-all in a safe, automated way in a CI/CD context, but that's something solved in CI/CD and not atmos.
@Andriy Knysh (Cloud Posse) I think what you sent me is probably the best for what I am thinking of. I think what I preferred about straight-up terraform in the past is just having my pipeline plan and apply, using separate repos for Landing Zones and Products infra. This requires you to add extra steps and extra potential break points to the workflow, but I think this more dynamic approach may help with that.
@Erik Osterman (Cloud Posse) I get those concerns with how Atmos is set up, but with traditional TF, when you run plan and apply, it looks over just any changes in your terraform path on the repo and only applies what is reflected as new, changed, or deleted compared to the state. Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform? Also, with the first point, isn't that the reason "depends_on" exists? Sure, you don't see all the values until apply, but it's faster than running one dependency and then the next one until all dependencies are accounted for
but with traditional TF, when you run plan and apply, it looks over just any changes in your terraform path on the repo and only applies what is reflected as new, changed, or deleted compared to the state.
We get that with our GitHub Actions:
- Describe affected
- Run terraform plan on each affected stack
- Apply each change with GitHub Actions
To be clear, this has the outcome you want. It's just not implemented as a "plan-all" or "apply-all". It's implemented using GitHub Action matrixes (see the sketch below).
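a rough sketch of that shape, reusing the actions mentioned earlier in this log (output names and exact wiring are from memory, so check each action's README before copying):
jobs:
  affected:
    runs-on: ubuntu-latest
    outputs:
      affected: ${{ steps.affected.outputs.affected }}
    steps:
      - uses: actions/checkout@v4
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v3
  plan:
    needs: affected
    if: ${{ needs.affected.outputs.affected != '[]' }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # each affected entry carries component and stack attributes
        include: ${{ fromJson(needs.affected.outputs.affected) }}
    steps:
      - uses: actions/checkout@v4
      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v2
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}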
Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform
@mike186 has some interesting numbers he likes to share about the minutes of terraform they run per month. In their case, it’s a combination of Atmos+Spacelift, but the same would be true of Atmos+GitHub Actions.
as far as I’m concerned, atmos is natively supported on spacelift while you need to use custom settings to use gruntworks :)
According to spacelift we have ~1200 stacks (100% passing) and used more than 1.7 million minutes of atmos worker time, including over 99k minutes of tracked runs, over the last month. These are pretty typical monthly stats for us. Given that we run spacelift exclusively with atmos, every single stack, I very much feel like atmos is so completely, transparently compatible with Spacelift that Spacelift doesn't need a setting for atmos.
Also with the first point, isn’t that reason “depends_on” exists?
So atmos is designed to work with any number of systems, including spacelift. Not every underlying system can implement all the capabilities of the configuration. In this case depends_on is currently utilized by our Spacelift implementation. We plan on adding GHA support for this soon, but cannot commit to when.
Sure you don't see all the values until apply but its faster than running one dependency and then the next one until all dependencies are accounted for
Agree, so what we really want is to trigger downstream dependencies when upstream components are affected. This is supported today with Spacelift and Atmos.
regarding "Have you ever done a time or cost analysis showing how much money you save doing the atmos approach over traditional terraform?": to be clear, Atmos is configuration on top of terraform root modules; in the end Atmos generates the varfiles/backend and executes plain Terraform (so you'd have the same number of runs using just plain terraform in Spacelift or GHA). But this is not about Atmos vs plain terraform, it's about architecting your Terraform modules and splitting them into smaller parts to reduce complexity, plan/apply time, and blast radius. Once you do it correctly in terraform, you can configure the root modules with Atmos to be deployed into various environments (OUs, regions, accounts). And once you correctly design your Terraform root modules, you will save time and money when planning/applying them
@pv depending on what you decide to do, but if you still want atmos terraform apply-all (and atmos terraform plan-all), I can implement those as Atmos custom commands and put them in the docs (so you could just copy them into your atmos.yaml). But as mentioned, the best way forward is to use the GHA to execute atmos describe affected to trigger just the affected stacks, then atmos describe dependents to trigger all the dependencies.
@Andriy Knysh (Cloud Posse) No need to create a custom command if it is against practice. I will start with the documentation you sent me and should be good to work with that
I think what I prefer about straight up terraform in the past is just having my pipeline plan and apply
Aha, I think I misunderstood what you meant initially. Do you mean having larger root modules that have, for example, the entire landing zone defined?
“Agree, so what we really want it to trigger downstream dependencies when upstream components are affected. This is supported today with Spacelift and Atmos”
@Erik Osterman (Cloud Posse) but how is that more cost effective if you have to pay to use spacelift?
And yes larger root modules that are dynamic in nature. I have a Landing Zone that I like to maintain and modify based on the needs of the company. But I know atmos is intended to be more modular so I am pivoting from what I would normally do
That’s a larger calculus that will depend on what your needs are. Spacelift is clearly an enterprise solution for enterprise challenges. That’s why we created the GitHub Actions which are entirely self-hosted and have no SaaS option or tie-in.
The larger root modules are definitely convenient from a developer perspective. Terraform handles the DAG. The problem is they do not scale as the complexity grows. This may not be a concern; then great - you can use them. It's our experience, working in enterprise contexts, that these root modules grow and grow. The time to plan takes longer and longer, and they are more susceptible to transient errors. Additionally, the more that goes into a root module, the less reusable it is across organizations. Since Cloud Posse is primarily concerned with how to make infrastructure re-usable, atmos is optimized for this use-case.
The larger the root module, the harder it is to separate concerns, the harder it is to restrict what can change and when.
Every change risks changing everything everytime it’s plan/applied, which means a huge blast radius.
So the solution is to break it into smaller root modules, by lifecycle. But then the problem is as you say, the complexity is offloaded to the tooling that calls terraform.
The first recommendation is to reduce the coupling between the layers, when possible, reducing how often those dependencies are triggered. Then implement CD to roll out the changes. So terraform is responsible for provisioning foundations, and CD is responsible for how to orchestrate those changes.
@pv please take a look at https://atmos.tools/reference/terraform-limitations
@pv also want to invite you to our weekly office hours. https://cloudposse.com/office-hours (we’ve run them for ~4-5 years and never missed a week)
Thanks @Erik Osterman (Cloud Posse) and @Andriy Knysh (Cloud Posse). I’ll review the docs you sent and look into the office hours. Appreciate all the attentive support
Anytime! We spend a lot of time thinking about these things, and really need to write up more documentation on "Well-Architected Terraform (according to Cloud Posse)" to make it easier to understand why we do the things we do. Especially when they go against a lot of norms that we no longer subscribe to.
hey sorry to revive an old thread a little bit but what about atmos terraform validate --all
Just thinkin out loud… i’ll probably go the custom command path or something
you know what… nevermind…bad idea… custom commands will work in the pinch i’ve put myself in…
We do have plans to add support for commands that can be applied to a graph. But no eta yet.
But for now, a custom command will get you something close to that fastest
@Hans D is this what you were also attempting? (per our other DM)
atmos terraform validate --all
How does one wrap a custom go binary around the atmos cli? I have a go binary to validate vpc connectivity when attaching a vpc to a transit gateway, and I would like to run it as part of terraform execution. Has anyone tried this use case?
custom go binary … validate vpc connectivity when attaching vpc to transit gateway
It sounds like the implementation should be flipped around. Terraform should be using your Go binary as a provider. Since it's already in Go, that lift shouldn't be too bad.
However, something like what you want to do should be possible using atmos custom commands.
You can create a custom command that first calls your command, then calls terraform.
Custom commands have access to the config and accept standard command line parameters.
Atmos can be easily extended to support any number of custom CLI commands.
commands:
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision vpc-transit-gateway
        description: This command provisions the transit gateway after first performing a connectivity check.
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "transit-gateway"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - bin/vpc-connectivity-check
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
then you could simply run:
atmos terraform provision vpc-transit-gateway --stack prod
Many tools instead implement things like before/after hooks. We may implement it at some point, but it’s never been something we’ve required at Cloud Posse, and we do a lot of Terraform. Granted, for these types of situations, our route is to go with a Terraform provider.
So that's why we've written our utils and awsutils providers, anytime we need an escape hatch. This is in line with how things should work in Terraform. You could say, our opinionated Best Practice.
As an aside, this sounds like a cool data source if you created one!
validate vpc connectivity when attaching vpc to transit gateway
This is great, thanks for validating this. I created it as a standalone go binary that leverages AWS APIs to test the vpc connectivity. I will need to figure out how to make it a tf provider; I am fairly new to the world of terraform, so I would have to look into it. Not sure if it's too much of an ask, but have you come across any examples of making a go binary into a provider? I am guessing I need to leverage the tf sdk?
Are there any practical use cases where people take advantage of the atmos custom command feature for non-atmos commands? Or is it specifically for atmos commands?
Well, I can!
We faced this same issue.
Terraform provider to help with various AWS automation tasks (mostly all that stuff we cannot accomplish with the official AWS terraform provider)
We took the HashiCorp AWS provider, stripped everything out, and added in what we needed that was custom.
And, since you mention you're new to Terraform, there's also the local-exec provisioner, which is the escape hatch when you don't have time to write a provider.
We took the HashiCorp AWS provider, stripped everything out, and added in what we needed that was custom.
This is exactly what I was thinking. Great.
v1.64.0 Create Landing page @osterman (#540)
what
Create a custom landing page Update other docs
why
Explain what Atmos does in a few easy steps
2024-02-23
2024-02-24
v1.64.1 Enhancements
Fix responsiveness of css @osterman (#543)
what
Use % instead of vw, since the outer container is capped at a certain px size. Set a minimum height of the header in px, otherwise if based on vh it can be impossibly small and gets eaten by…
Hi all. Just wondering, should website or github action changes trigger a release? I always figured these changes would get a no-release label since there aren't any changes to the atmos cli
we deploy website on release
maybe we can deploy it on merging to main
Without a release.
I updated it in my current PR to do that.
cc @Andriy Knysh (Cloud Posse)
thanks (I did the same in your PR :)
oh lol
what
• Use tabs for each installation method
• Add new "installer" script installation method
why
• Make it easier to get started
btw, @RB can you share the command I should add to the install page for nixos?
or maybe @Jeremy White (Cloud Posse) if you’re around
There are many ways to install Atmos.
@Andriy Knysh (Cloud Posse) good for review
approved
Thanks for the quick turnaround! This should make the atmos cli releases easier to follow.
2024-02-25
2024-02-26
FYI, the https://github.com/cloudposse/terraform-provider-utils PGP key has been added to the OpenTofu Registry to sign the provider
https://github.com/opentofu/registry/blob/main/keys/c/cloudposse/provider.asc
can we do the same for awsutils
yes, @Erik Osterman (Cloud Posse) please put the PGP keys in 1pass
It should be the same
I'm having some trouble using the upstream providers.tf to assume the -admin suffixed role instead of the -terraform suffixed role when running atmos commands locally.
When running locally, I imagine it should be using the human -<role> suffixed iam roles in delegated accounts
But for some reason, it's thinking that my laptop is a terraform/spacelift/cicd user and trying to assume terraform instead. Is this expected? Should human users assume the -terraform role, or should they assume the same -<role> that they originally assumed?
i.e. if the primary role is identity-admin then the role for ue1-dev should be the gbl-identity-dev-admin role, instead of the gbl-identity-dev-terraform role, right?
@Jeremy G (Cloud Posse) you implemented terraform dynamic roles for components, can you explain how to use it?
@RB without using the dynamic roles that Jeremy implemented, yes, it's expected that all components assume the terraform role regardless of the initial role
Note that I deployed aws-teams and aws-team-roles using the readme yaml. I changed terraform to cicd in case I used tofu instead of terraform.
• https://github.com/cloudposse/terraform-aws-components/tree/main/modules/aws-teams
• https://github.com/cloudposse/terraform-aws-components/tree/main/modules/aws-team-roles
without using the dynamic roles that Jeremy implemented, yes it’s expected that all components assume the terraform role
regardless of the initial role
Oh interesting!
But then wouldn't all AWS SSO roles have the same IAM permissions if they were all able to assume the -terraform roles in all child accounts?
if you look at all components, the provider will use the terraform role by default https://github.com/cloudposse/terraform-aws-components/blob/main/modules/alb/providers.tf
provider "aws" {
region = var.region
# Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
profile = module.iam_roles.terraform_profile_name
dynamic "assume_role" {
# module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
for_each = compact([module.iam_roles.terraform_role_arn])
content {
role_arn = assume_role.value
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
Jeremy will explain how to use/switch to the dynamic roles (meaning that instead of the TF role, account-map/modules/iam-roles should return the role you have already assumed when executing TF commands)
I know we have https://github.com/cloudposse/terraform-aws-components/blob/main/modules/account-map/variables.tf#L100, but Jeremy has more context and knowledge
variable "terraform_dynamic_role_enabled" {
Thanks Andriy. That’s very helpful.
A separate issue I noticed is that I wanted to use cicd for the aws-team-roles, and it looks like I have to use terraform because it's currently hard coded in account-map:
terraform_roles = {
  for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
    (local.legacy_terraform_uses_admin &&
      contains([
        var.root_account_account_name,
        var.identity_account_account_name
      ], name)
    ) ? "admin" : "terraform")
}

# legacy support for `aws` config profiles
terraform_profiles = {
  for name, info in local.account_info_map : name => format(var.profile_template, compact(
    [
      module.this.namespace,
      lookup(info, "tenant", ""),
      module.this.environment,
      info.stage,
      ((local.legacy_terraform_uses_admin &&
        contains([
          var.root_account_account_name,
          var.identity_account_account_name
        ], name)
      ) ? "admin" : "terraform"),
    ]
  )...)
}
The current version of account-map optionally supports having a "terraform" role and a "planner" role in each account. The former allows making all changes in the target account, while the latter is meant to provide read-only access. Roles which are allowed to assume the "terraform" role do so. Roles which cannot assume the "terraform" role but can assume the "plan" role assume the "plan" role. All other roles attempt to run Terraform using their existing role, without assuming a new role. You must have this feature enabled; it is disabled by default. Details are documented here.
The role names used to allow plan or apply access are configured in terraform_role_name_map and default to "planner" and "terraform", but can be set to whatever you want.
A key principle remains that there is a uniform set of roles in all the accounts other than root and identity, and access is controlled by allowing (or not) access to those roles in those accounts. So if you want your cicd role to be able to run terraform apply, it should be allowed to assume the terraform (or "apply") role in that account.
Note that this feature also requires support from tfstate-backend, which should be documented in the same document linked above regarding dynamic terraform roles.
variable "terraform_role_name_map" {
Thanks Jeremy for taking the time to write your response.
I cannot access the cloudposse docs link above. Is it behind a paywall? It says my account needs to be approved
i do see the reference to the role map. I didn't realize there was an apply and a plan role.
It does seem like the terraform role, in spite of the role map, is hard coded in account-map, but that's an easy fix. For now, we'll create the cicd role as an aws-team in identity and use the terraform role in aws-team-roles per account.
Andriy also mentioned the terraform_dynamic_role_enabled toggle that we can also optionally flip if we want to go that route. I think I like the idea that only admin can use the apply role and everyone else has to use the planner role, provided there are few admins and everyone else goes through the cicd.
@RB wrote:
It does seem like the terraform role, in spite of the role map, is hard coded in account-map, but that’s an easy fix. For now, we’ll create the cicd role as an aws-team in identity and use the terraform role as aws-team-roles per account.
I’m not sure what you are talking about. The concept of a role used by default for people running Terraform is hard coded into our components, and referred to as the Terraform role, but the name of that role is configurable, as I pointed out previously.
Hi Jeremy. Sorry, I was referring to this code. I cannot change this string easily. Perhaps it needs to be exposed as an input? or maybe I’m using it incorrectly?
terraform_roles = {
  for name, info in local.account_info_map : name => format(local.iam_role_arn_templates[name],
    (local.legacy_terraform_uses_admin &&
      contains([
        var.root_account_account_name,
        var.identity_account_account_name
      ], name)
    ) ? "admin" : "terraform")
}

# legacy support for `aws` config profiles
terraform_profiles = {
  for name, info in local.account_info_map : name => format(var.profile_template, compact(
    [
      module.this.namespace,
      lookup(info, "tenant", ""),
      module.this.environment,
      info.stage,
      ((local.legacy_terraform_uses_admin &&
        contains([
          var.root_account_account_name,
          var.identity_account_account_name
        ], name)
      ) ? "admin" : "terraform"),
    ]
  )...)
}
That code supports the legacy terraform_roles output for people not using dynamic Terraform roles. The newer, preferred Dynamic Terraform Roles ignore that output, and use terraform_role_name_map and terraform_access_map instead.
The existing providers.tf is not only an example of how to retrieve the role to assume, it is sufficient for nearly all our components to use without modification.
provider "aws" {
region = var.region
# Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
profile = module.iam_roles.terraform_profile_name
dynamic "assume_role" {
# module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
for_each = compact([module.iam_roles.terraform_role_arn])
content {
role_arn = assume_role.value
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
Thank you very much!
2024-02-27
Regarding the tfstate-bucket component: is there an alternative suggested account to deploy this bucket in? I don't want to deploy it in root, as I don't want anyone to access root. Could the corp (shared-services) account be a suitable alternative? What do you folks think?
(This is for a brownfield project, so some of the accounts in this case already existed prior to starting)
Plan is to deploy one in each account, in a hierarchical fashion off of root.
Now, in your case, since you don't have access to root, you'd need to designate some other account.
I don’t personally know the scope of this change. Others on the team might.
Could be corp or auto or artifacts maybe, using some of our terms.
will impact some of the bootstrapping, as you need the new account to be available before you can move over to s3/dynamodb. But doing some split of privileged, less privileged sounds sane
personally would not mix it with one of the "core" accounts, but a dedicated one if you want to have a central state.
Hi, I have a quick question around templating and the like: can this be used anywhere or just in certain places? I tried using it like below
workspace_key_prefix: "infra-{{ .tenant }}-{{ .namespace }}-{{ .environment }}-{{ .stage }}-init"
but it does not render the values and keeps them as is when I do an atmos describe stacks
Go templates are used just in imports https://atmos.tools/core-concepts/stacks/imports
Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
it's not supported everywhere in Atmos manifests, for the reason that Go templates need context variables, and in general it's not possible to provide the context variables in advance b/c it could be a circular dependency - to get all the variables, we need to import everything and process it, so we don't know all the vars yet
Ok, so I can use templating or variable substitution in the vars: section and the import section in the stack file, but not in other places
vars:
  environment: "{{ .environment }}"
  stage: "{{ .stage }}"
and in the stack file
import:
  #- path: "catalog/dvsa"
  - path: "mixins/project"
    context:
      tenant: "dvsa"
      stage: "{{ .stage }}"
      app_region: "dev"
      environment: "dev01"
      namespace: "mts"
you can do it only in imports, meaning if you import a file that has workspace_key_prefix: "infra-{{ .tenant }}-{{ .namespace }}-{{ .environment }}-{{ .stage }}-init" and provide a context for it, it will work
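concretely, something like this (a sketch; the mixin path is illustrative):
import:
  - path: "mixins/backend" # illustrative file containing the templated workspace_key_prefix above
    context:
      namespace: "mts"
      tenant: "dvsa"
      environment: "dev01"
      stage: "dev"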
Yep I get that. Thanks
It's just how I structure the code
for the very first import with templates, you have to provide a context
with all the vars defined
to make it work the way I want to
yep that's what I am doing
then you can use https://atmos.tools/core-concepts/stacks/imports/#hierarchical-imports-with-context to propagate the context
to all the child imports
Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
if you have specific questions, you can always DM me your code and I’ll try to help you
yep I am doing the same: I create a set of files to import and pass in the context at the top level, and this then gets propagated down, as it then has imports to import other files
also, let's review your question. Why do you need workspace_key_prefix with templates? Usually, we have a Terraform workspace that includes namespace, tenant, environment and stage
There is a naming convention to follow and was just trying to bake that in
each component has workspace_key_prefix, which is usually just the component name (added by Atmos automatically)
There is a naming convention to follow and was just trying to bake that in
ok
Ok, I think I have the gist of it and a way forward. Thanks for the quick response.
Hey folks,
We have the following atmos config.
stacks:
  base_path: "stacks"
  included_paths:
    - "org/**/*"
  name_pattern: "{stage}"
  excluded_paths:
    - "**/_*.yaml" # Do not consider any file beginning with "_" as a stack file
We made a classic mistake: our application was simple and we didn't have multi-region on this project, so we went ahead and set name_pattern to just {stage}. We are of course now doing some disaster recovery work on this project and adding a 2nd region, and this becomes a problem, as Atmos thinks both are the same stage, i.e. the VPC in ue1-dev can't be differentiated from the VPC in uw2-dev.
A simple solution would be to prefix our component instance names in the dev/us-west-2.yaml with uw2-*****, but that feels rough. Are there any other suggestions or ways to migrate this name pattern issue that we should try out?
I should clarify: This is both a problem with workspace names AND with spacelift Stacks I believe. That ends up being our blocker here.
The spacelift bits are quite easy to replace (my experience); the more concerning bit is the terraform state, including the workspace bit… (which is referenced in the spacelift config)
it's possible to rename all Atmos components (change names to reflect regions, or prefix/suffix with something) and still keep the same Terraform workspace_key_prefix (so TF will not attempt to destroy it). This means that the existing resources will stay under the old workspace_key_prefix, and the new ones will be at a diff workspace_key_prefix
for example, let’s say you had
components:
  terraform:
    vpc:
deployed in us-east-2
now you want to change the Atmos component name for it and add another Atmos component for the other region
components:
  terraform:
    vpc/ue2:
      vars: {}
      backend:
        s3:
          # Keep the existing TF workspace key prefix
          workspace_key_prefix: "vpc"
    vpc/uw2:
      vars: {}
@Matt Gowie not sure if this would help you, let me know
(spacelift stacks will be recreated, but that should be ok)
and to keep the Spacelift stack names the same, there is another trick
components:
  terraform:
    vpc/ue2:
      vars: {}
      settings:
        spacelift:
          workspace_enabled: true
          # `stack_name_pattern` overrides Spacelift stack names
          # Supported tokens: {namespace}, {tenant}, {environment}, {stage}, {component}
          stack_name_pattern: "{stage}-vpc"
Ah both of those help to know about @Andriy Knysh (Cloud Posse) – Thanks! I’ll look into what we would want to do here considering…
v1.64.2
Add Atmos CLI command aliases @aknysh (#547)
what
Add Atmos CLI command aliases; update docs https://atmos.tools/cli/configuration/#aliases
why
An alias lets you create a shortcut name for an existing CLI command.
@RB :point_up: now you can do a tf with aliases
2024-02-28
@Andriy Knysh (Cloud Posse) not really a patch version with the added functionality …
yes, some issues with the auto releaser; it did it for some reason (w/o having a patch label). Prob it was confused by the three commits
given the other why-is-this-a-patch that had afaik only a single commit, it might need some further investigation if this keeps popping up
yes, we had similar issues in other repos, your help is appreciated
and given that patch releases in our env only get picked up by the renovate bot once a week, it made it stand out more for me (given the release announcement, where I missed the renovate PR becoming available)… not a biggy, just triggered me.
It's a misunderstanding of how release drafter works. The first PR that merges drafts an unpublished release. That might have been a patch. The next PR that merges updates the draft release. Before publishing, the person who clicks publish should review it and make sure it makes sense, for example the version. Unlike our terraform modules, we do not automatically publish the releases for atmos.
This is nice because we can bundle 3-4 PRs into one release and get release notes for all of them.
So I think it was our mistake on not reviewing the release before publishing.
it was my mistake. Before we always released it manually, now the autoreleaser tries to help creating Drafts releases, which in this case should have been discarded (the fact that it created a patch release is still a bug)
Discarded or edited? I am on my phone, so maybe not seeing the obvious.
since it was a patch release, discarded. We need to review this
the autoreleaser was confused by the patch label on the first PR in the release https://github.com/cloudposse/atmos/pull/544
It’s not confused though. It’s operating as designed.
The key is it’s a draft. It’s not finalized.
That means subsequent PRs “merge” into that release.
But we can change the release numbering before publishing
ok, should we release 1.65.0 manually (for Atmos aliases)?
This is a minor hiccup in the grander scheme of things.
It will get resolved in the next release. Let’s just pay attention to the release number before manually clicking publish on the draft release.
I have a pipeline where I am running an atmos terraform plan and then apply with one resource, and it deploys successfully:
- name: Atmos - Terraform Plan
  run: |
    atmos terraform plan resource1 -s orgs/dir/fake/sandbox/us-central1/resource1
Then when I add another component from the same stack and attempt a plan, it wants to destroy the previous resource I created because it is not in the configuration:
- name: Atmos - Terraform Plan
  run: |
    atmos terraform plan resource1 -s orgs/dir/fake/sandbox/us-central1/resource1
    atmos terraform plan resource2 -s orgs/dir/fake/sandbox/us-central1/resource2
Why is this happening and how can I resolve that? Resources are for GCP and pipeline is GHA
It doesn’t look like our GHAs are being leveraged for Atmos
Keep in mind, CI/CD with Terraform is non-trivial. We’ve spent considerable time developing these actions, so it’s hard to answer what’s wrong with the simple case of just calling Atmos via a run statement.
Also, I see explicit calling of stacks. We developed a GitHub Action to describe affected stacks in the PR. Using this action, you can use a matrix to plan all affected stacks.
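For example, a rough sketch of the matrix approach (untested; the action’s exact inputs and output names may differ between versions, so check the cloudposse/github-action-atmos-affected-stacks README):
jobs:
  affected:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.affected.outputs.matrix }}
    steps:
      - uses: actions/checkout@v4
      - id: affected
        # assumption: the action exposes a `matrix` output suitable for a job matrix
        uses: cloudposse/github-action-atmos-affected-stacks@v3

  plan:
    needs: affected
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJson(needs.affected.outputs.matrix) }}
    steps:
      - uses: actions/checkout@v4
      # the `component` and `stack` keys come from `atmos describe affected` output
      - run: atmos terraform plan ${{ matrix.component }} -s ${{ matrix.stack }}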
it wants to destroy the previous resource I created because it is not in the configuration:
This sounds like a problem with the backend configuration.
By convention, we usually rely on Terraform workspaces. If the backend isn’t configured to use workspaces, then you would get that effect of overwriting the state.
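For reference, in Atmos the backend is configured in the stack manifests under terraform.backend, and Atmos generates backend.tf.json from it. A minimal sketch for GCS (bucket name and prefix are placeholders; the gcs backend stores each workspace’s state under the prefix automatically):
terraform:
  backend_type: gcs  # assumption: gcs, since the resources here are GCP
  backend:
    gcs:
      bucket: my-tfstate-bucket  # placeholder
      prefix: terraform/state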
Ahh nvm, my TF state got messed up above. It was the way I was running the atmos command: I pointed it at the path rather than the stack name as it appears in atmos. I think we are all good
Ok! great to hear
Followup: the issue was we had changed from terraform apply to terraform deploy. When switching the pipeline between those commands, you run into this. We destroyed the resource and re-ran both as a deploy to resolve it
2024-02-29
Hey, just an FYI: dropped a little PR to prevent atmos from crashing when using the azurerm backend and not providing a global key: https://github.com/cloudposse/atmos/pull/548
include some protection on the azurerm backend when global does not exist
what
When using the azurerm backend, the logic assumes a global key is set and prepends that to the component key. However, when it doesn’t exist, it causes atmos to crash. This checks if a global key is set and, if not, doesn’t prepend anything.
why
• prevent atmos from crashing
• don’t require a global key
references
thanks @Andrew Ochsner, will review it (thanks for testing it on Azure, any help on Azure and GCP is greatly appreciated)
if there’s a more idiomatic way of doing that in golang, lemme know… not my native programming language
thank you @Andrew Ochsner for your contribution
v1.64.3
Enhancements
include some protection on the azurerm backend when global does not exist @aochsner (#548)
Hi,
I am trying to integrate Atlantis with Atmos. I have generated a varfile, pushed it to the GitLab repo, and generated the atlantis.yaml file too, but I am getting an error while running the atlantis plan command.
Any help would be appreciated
Error: Failed to read variables file
@jose.amengual you are familiar with Atlantis (and Atmos), can you please take a look at ^
are you pushing the files after you generate them, or after the PR is created?
I am pushing the files after I generate them
and you see that push on your PR and the variable file too?
Yes, I see
even on the Atlantis server I checked that the varfile exists in the repo Atlantis cloned
can you post the full command that Atlantis is running?
Sure
and maybe your github action code too
I am using Gitlab
ok, no problem
the command and workflow config
I used the below command to generate the varfiles and then pushed them into my GitLab repo
atmos terraform generate varfiles --file-template={component-path}/varfiles/{namespace}-{environment}-{component}.tfvars.json
I have generated the atlantis.yaml file, and it is getting parsed too
version: 3
automerge: true
delete_source_branch_on_merge: true
parallel_plan: true
parallel_apply: true
allowed_regexp_prefixes:
  - dev/
  - staging/
  - prod/
projects:
  - name: test-uw1-root-tfstate-backend
    workspace: uw1-root
    workflow: workflow-1
    dir: /components/terraform/tfstate-backend
    terraform_version: v1.2
    delete_source_branch_on_merge: true
    autoplan:
      enabled: true
      when_modified:
        - "**/*.tf"
        - varfiles/$PROJECT_NAME.tfvars.json
    plan_requirements: []
    apply_requirements:
      - approved
workflows:
  workflow-1:
    apply:
      steps:
        - run: terraform apply $PLANFILE
    plan:
      steps:
        - run: terraform init -input=false
        - run: terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
        - run: terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json
The Atlantis command running from GitLab:
running "terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json" in "/home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend": exit status 1: running "terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json" in "/home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend":
Error: Failed to read variables file
the problem, I think, is that $PROJECT_NAME is not matching the varfile name
if you search the atlantis.yaml for the project name, do you have a corresponding varfiles/$PROJECT_NAME.tfvars.json?
But if you look at the screenshot I shared, it shows the correct tfvars file after the error
Yes, I have the varfiles/test-uw1-root-tfstate-backend.tfvars.json file
you looked inside this dir? /home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend
Yes
and the atlantis command you run?
Atlantis plan
ok, that will plan all projects
can you try with one specific one?
with -p project-name
I have only one project
can you show the generated atlantis.yaml?
I just pinned it
I’m checking and comparing against my setup
seems to be the same
I don’t know why it is not taking the varfile from the varfiles folder; something seems to be off
mmmm what is this: uw1-root ?
ahhh that is your workspace
It’s my atmos stack name and workspace
so on the atlantis server you have /home/atlantis/.atlantis/repos/amit/devops/atmos-example/10/uw1-root/components/terraform/tfstate-backend/varfiles/test-uw1-root-tfstate-backend.tfvars.json
correct?
Let me confirm
Got it
you mean the file is there and it has content?
It was an issue with the varfile I was generating with atmos: the file template didn’t include the stage, so it was generating the wrong filename. The file is there; the name of the file is just not correct.
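For anyone hitting the same thing: including the {stage} token in the file template should produce the expected name, assuming your stack name pattern includes the stage, e.g. atmos terraform generate varfiles --file-template={component-path}/varfiles/{namespace}-{environment}-{stage}-{component}.tfvars.json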
ahhhhhhhh
ok, cool
I was shaking my head for the last 4 hours, lol
Thanks @jose.amengual
no problem
TL;DR: is something missing from our atmos integration doc?
@jose.amengual thanks for the help
Hi, I am confused by how the yaml is intended to be configured for this component:
I need to configure source_ip_ranges_to_nat = optional(list(string), ["ALL_IP_RANGES"])
but no matter how I configure it, the tfvars leave that configuration empty. How is this part meant to be written in the yaml? I’ve tried everything
For example, when I do
cloud_nat:
  subnetworks:
    - name: subnetworkname
      secondary_ip_range_names: ["name1", "name2"]
then the tf plan shows
secondary_ip_range_names = []
i don’t see secondary_ip_range_names used in the code https://github.com/slalombuild/terraform-atmos-accelerator/blob/main/components/terraform/gcp/network/nat.tf
module "cloud_router" {
source = "terraform-google-modules/cloud-router/google"
version = "~> 6.0.0"
count = local.enabled && var.cloud_nat != null ? 1 : 0
name = module.this.id
project = var.project_id
region = var.region
network = module.network[0].network_name
}
module "cloud_nat" {
count = local.enabled && var.cloud_nat != null ? 1 : 0
source = "terraform-google-modules/cloud-nat/google"
version = "~> 5.0.0"
project_id = var.project_id
region = var.region
router = module.cloud_router[0].router.name
name = module.this.id
nat_ips = var.cloud_nat.nat_ips
subnetworks = var.cloud_nat.subnetworks
source_subnetwork_ip_ranges_to_nat = var.cloud_nat.source_subnetwork_ip_ranges_to_nat
enable_dynamic_port_allocation = var.cloud_nat.enable_dynamic_port_allocation
enable_endpoint_independent_mapping = var.cloud_nat.enable_endpoint_independent_mapping
icmp_idle_timeout_sec = var.cloud_nat.icmp_idle_timeout_sec
log_config_enable = var.cloud_nat.log_config_enable
log_config_filter = var.cloud_nat.log_config_filter
min_ports_per_vm = var.cloud_nat.min_ports_per_vm
udp_idle_timeout_sec = var.cloud_nat.udp_idle_timeout_sec
tcp_established_idle_timeout_sec = var.cloud_nat.tcp_established_idle_timeout_sec
tcp_transitory_idle_timeout_sec = var.cloud_nat.tcp_transitory_idle_timeout_sec
tcp_time_wait_timeout_sec = var.cloud_nat.tcp_time_wait_timeout_sec
}
it’s defined in the variable, but is not used
@jose.amengual
That variable is from the public module this component uses https://github.com/terraform-google-modules/terraform-google-cloud-nat
Creates and configures Cloud NAT
@Andriy Knysh (Cloud Posse) it actually is in variables. It is a part of the “subnetworks” variable
but if it is not used in the instantiation of the Google module in our root module (the SlalomBuild repo), then it will not be respected
I can look at this tomorrow.
@pv ping me tomorrow
@pv your YAML config looks correct. Look at the varfile that Atmos generates to see if the variable is there. If not, something must be wrong with the stacks config
When I describe it, it shows the values but it is still not a part of the plan. Not sure what the issue is with that one
in our instantiation https://github.com/slalombuild/terraform-atmos-accelerator/blob/main/components/terraform/gcp/network/nat.tf#L11
module "cloud_nat" {
we are not using secondary_ip_range_names
wait, it is part of this variable:
subnetworks = var.cloud_nat.subnetworks
you will need to figure out how to pass the value from the YAML so it gets rendered correctly in the JSON
these are some examples of how it should look: https://github.com/slalombuild/terraform-atmos-accelerator/blob/main/components/terraform/gcp/network/example-vars.auto.tfvars
# enabled = true
# namespace = "test"
# environment = "network"
# stage = "uw2"
# label_key_case = "lower"
# project_id = "platlive-nonprod"
# region = "us-west2"
# routing_mode = "GLOBAL"
# shared_vpc_host = false
# service_project_names = []
# subnets = [
#   {
#     subnet_name = "subnet-1"
#     subnet_ip = "10.1.0.0/16"
#     subnet_region = "us-west2"
#     subnet_private_access = true
#     subnet_flow_logs = true
#     subnet_flow_logs_interval = "INTERVAL_5_SEC"
#     subnet_flow_logs_sampling = 0.5
#     subnet_flow_logs_metadata = "INCLUDE_ALL_METADATA"
#   },
#   {
#     subnet_name = "subnet-2"
#     subnet_ip = "10.2.0.0/16"
#     subnet_region = "us-west2"
#     subnet_private_access = false
#     subnet_flow_logs = false
#   }
# ]
# secondary_ranges = {
#   "subnet-1" = [
#     {
#       ip_cidr_range = "172.16.1.0/24"
#       range_name = "pods-1"
#     },
#     {
#       ip_cidr_range = "192.168.1.0/24"
#       range_name = "services-1"
#     }
#   ]
#   "subnet-2" = [
#     {
#       ip_cidr_range = "172.16.2.0/24"
#       range_name = "pods-2"
#     },
#     {
#       ip_cidr_range = "192.168.2.0/24"
#       range_name = "services-2"
#     }
#   ]
# }
# routes = [
#   {
#     name = "egress-internet"
#     destination_range = "0.0.0.0/0"
#     tags = "egress-inet,internet"
#     next_hop_internet = "true"
#   }
# ]
# firewall_rules = [
#   {
#     name = "test"
#     direction = "INGRESS"
#     ranges = ["10.2.0.0/16"]
#     allow = [{
#       protocol = "TCP"
#     }]
#   }
# ]
# cloud_nat = {
#   subnetworks = [
#     {
#       name = "subnet-1"
#       source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
#     },
#     {
#       name = "subnet-2"
#       source_ip_ranges_to_nat = ["10.2.0.0/16", "172.16.2.0/24", "192.168.2.0/24"]
#     }
#   ]
# }
# peers = []
# private_connections = [
#   {
#     name = "test-data"
#     prefix_start = "10.3.0.0"
#     prefix_length = 16
#   }
# ]
here is the example
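For the YAML side, a rough translation of that cloud_nat block into an Atmos stack manifest would be (a sketch; the network component name is a placeholder):
components:
  terraform:
    network:  # placeholder component name
      vars:
        cloud_nat:
          subnetworks:
            - name: subnet-1
              source_ip_ranges_to_nat: ["ALL_IP_RANGES"]
            - name: subnet-2
              source_ip_ranges_to_nat: ["10.2.0.0/16", "172.16.2.0/24", "192.168.2.0/24"]
Also worth verifying against the module docs: secondary_ip_range_names generally only takes effect for a subnetwork when its source_ip_ranges_to_nat includes LIST_OF_SECONDARY_IP_RANGES.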
hello, I am seeing this issue while using cloudposse/github-action-pre-commit
Run cloudposse/github-action-pre-commit@v3
install pre-commit
/opt/hostedtoolcache/Python/3.10.13/x64/bin/pre-commit run --show-diff-on-failure --color=always --all-files
[INFO] Initializing environment for https://github.com/antonbabenko/pre-commit-terraform.
[INFO] Initializing environment for https://github.com/pre-commit/mirrors-prettier.
[INFO] Initializing environment for https://github.com/pre-commit/mirrors-prettier:[email protected].
[INFO] Installing environment for https://github.com/pre-commit/mirrors-prettier.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
Terraform fmt............................................................Failed
- hook id: terraform_fmt
- files were modified by this hook
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
main.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
main.tf
variables.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
main.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
versions.tf
/home/runner/work/_temp/f964d667-191a-4b39-8afa-169af08623eb/terraform-bin fmt
Terraform docs...........................................................Failed
- hook id: terraform_docs
- exit code: 1
ERROR: terraform-docs is required by terraform_docs pre-commit hook but is not installed or in the system's PATH.
prettier.............................................(no files to check)Skipped
rebuild-adr-docs.........................................................Passed
pre-commit hook(s) made changes.
If you are seeing this message in CI, reproduce locally with: `pre-commit run --all-files`.
To run `pre-commit` as part of git workflow, use `pre-commit install`.
All changes made by hooks:
My config is like this. It was working a month ago and I did not change anything; it suddenly started failing with this error. Any idea how to debug it?
# Install terraform-docs for pre-commit hook
- name: Install terraform-docs
  shell: bash
  env:
    INSTALL_PATH: "${{ github.workspace }}/bin"
  run: |
    make init
    mkdir -p "${INSTALL_PATH}"
    make packages/install/terraform-docs
    echo "$INSTALL_PATH" >> $GITHUB_PATH

# pre-commit prerequisites
- uses: actions/setup-python@v4
  with:
    python-version: '3.10'
- uses: actions/setup-node@v3
  with:
    node-version: '16'

# Install adr-tools for pre-commit hook
- name: Install adr-tools
  shell: bash
  run: |
    wget https://github.com/npryce/adr-tools/archive/refs/tags/$ADR_TOOLS_VERSION.tar.gz
    tar xvzf $ADR_TOOLS_VERSION.tar.gz
    echo "adr-tools-$ADR_TOOLS_VERSION/src" >> $GITHUB_PATH

# pre-commit checks: fmt + terraform-docs
# We skip tf_validate as it requires an init
# of all root modules, which is to be avoided.
- uses: cloudposse/github-action-pre-commit@v3
  env:
    SKIP: tf_validate
  with:
    token: ${{ secrets.CCH_GITHUB_BOT_TOKEN }}
    git_user_name: ${{ env.GIT_USER_NAME }}
    git_user_email: ${{ env.GIT_USER_EMAIL }}
    extra_args: --all-files
https://github.com/cloudposse/github-action-pre-commit/releases shows the last v3 action is from Nov 2022. A fragment from your output:
Terraform docs...........................................................Failed
- hook id: terraform_docs
- exit code: 1
ERROR: terraform-docs is required by terraform_docs pre-commit hook but is not installed or in the system's PATH.
seems to be more related to the first section of your GHA workflow:
# Install terraform-docs for pre-commit hook
- name: Install terraform-docs
  shell: bash
  env:
    INSTALL_PATH: "${{ github.workspace }}/bin"
  run: |
    make init
    mkdir -p "${INSTALL_PATH}"
    make packages/install/terraform-docs
    echo "$INSTALL_PATH" >> $GITHUB_PATH
Thanks!