#atmos (2022-04)
2022-04-01
So I’ve been thinking about how Atmos plays with larger infra deployments (read multiple accounts / envs).
Typically for AWS, most people have several accounts across their org:
• management account (e.g. root)
• identity
• audit
• infrastructure
  ◦ dev
  ◦ prd
• <org_unit> (e.g. SaaS)
  ◦ <team> (e.g. payments)
    ▪︎ dev
    ▪︎ prd
So far, I’ve had the stacks/ dir mirror this layout and then been using tenant to ensure unique statefile paths (e.g. tenant: saas-payments).
e.g.
stacks/
  gbl-root.yaml
  gbl-identity.yaml
  gbl-audit.yaml
  infra/
    gbl-dev.yaml
    ue1-dev.yaml
    gbl-prd.yaml
  saas/
    payments/
      gbl-dev.yaml
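(For illustration, a leaf stack file like stacks/saas/payments/gbl-dev.yaml might then look roughly like this; the namespace and the import are hypothetical, only the tenant value comes from above.)
import:
  - catalog/defaults        # hypothetical shared defaults

vars:
  namespace: acme           # hypothetical
  tenant: saas-payments
  environment: gbl
  stage: dev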
But also wondered what others have done so far / might suggest.
one thing noticeably absent is a catalog/ folder
this is where we centralize all the common patterns
import:
  - mgmt-uw2-globals
  - catalog/acm
  - catalog/vpc
  - catalog/aws-waf-acl
  - catalog/aws-backup/non-prod
  - catalog/eks/eks
  - catalog/eks/datadog-agent
  - catalog/eks/reloader
  - catalog/eks/ocean-controller
  - catalog/eks/cert-manager
  - catalog/eks/external-dns
  - catalog/eks/alb-controller
  - catalog/eks/alb-controller-ingress-group
  - catalog/eks/metrics-server
  - catalog/eks/aws-node-termination-handler
  - catalog/eks/sumologic
  - catalog/eks/sops-secrets-operator
  - catalog/documentdb/mgmt/sandbox
  - catalog/ses
  - catalog/dynamodb/mgmt/sandbox
  - catalog/s3/alb-access-logs
  - catalog/s3/cloudfront-logs
  - catalog/s3/s3-access-logs
  - catalog/elasticsearch/defaults
  - catalog/elasticache/elasticache-redis-defaults
  - catalog/msk/mgmt/sandbox
  - catalog/aurora/mgmt/sandbox
  - catalog/argocd/non-prod
  - catalog/okta-saml-apps/argocd/non-prod
  - catalog/spa-s3-cloudfront/mgmt/sandbox
  - catalog/bastion/bastion-ssm
  - catalog/maintenance-page
  - catalog/argo-workflows/non-prod

vars:
  stage: sandbox

terraform:
  vars: {}

helmfile:
  vars: {}

components:
  terraform:
    acm:
      vars:
        domain_name: uw2.sandbox.mgmt.acme.net
        zone_name: sandbox.mgmt.acme.net

    vpc:
      vars:
        cidr_block: 10.2.0.0/18

    eks:
      vars:
        # This cluster was created before `cloudposse/eks-cluster/aws` `v0.45.0`
        # See <https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/docs/migration-0.45.x%2B.md>
        backwards_compatibility_v0_45_0_enabled: true
        spotinst_oceans:
          main:
            max_group_size: 10
            min_group_size: 4
            desired_group_size: 5
        spotinst_ocean_vngs:
          fleet:
            max_group_size: 2
            min_group_size: 2
            instance_types: null
            preferred_spot_types: null
            security_groups: null
            subnet_ids: ['subnet-xxxxx', 'subnet-001955481398e8b77']
            kubelet_additional_options: --allowed-unsafe-sysctls=net.ipv6.conf.all.disable_ipv6,net.ipv6.conf.default.disable_ipv6
            kubernetes_labels:
              eks.acme.net/vng: fleet
            kubernetes_taints:
              net.ipv6.conf.all.disable_ipv6:
                value: 0
                effect: NoExecute
              net.ipv6.conf.default.disable_ipv6:
                value: 0
                effect: NoExecute
          fennec:
            max_group_size: 3
            min_group_size: 2
            instance_types: null
            preferred_spot_types: null
            security_groups: null
            subnet_ids: ['subnet-xxx', 'subnet-xxx']
            kubelet_additional_options: --allowed-unsafe-sysctls=net.ipv6.conf.all.disable_ipv6,net.ipv6.conf.default.disable_ipv6,net.ipv6.conf.all.forwarding,net.ipv6.conf.default.forwarding

    workflows:
      vars:
        template_referencing: ''
        workflow_namespaces:
          - argo-workflows-test
This is an example of how we import all the defaults we want, and then override what is different.
With this pattern, you can easily define as many accounts and combinations of components as you want, just by using imports from the catalog.
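(For context, a catalog entry is itself just a stack config fragment that holds the defaults; a hypothetical catalog/vpc.yaml could look like this, with made-up values.)
components:
  terraform:
    vpc:
      vars:
        # org-wide defaults; importing stacks override only what differs
        cidr_block: 10.0.0.0/18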
Oh yeah, sorry I should’ve included the catalog/ directory for completeness. But thanks, this is awesome.
One thing I noticed with my pattern is that the remote-state module doesn’t work because the stack_config_local_path is wrong when I’m working with a stack inside a subdirectory.
2022-04-04
v1.4.1
what: Add settings.spacelift.stack_name_pattern. Fix parsing YAML config and detection of stacks when the stack name (on the command line) is the same as the YAML config file name in a subfolder.
why: settings.spacelift.stack_name_pattern allows overriding Spacelift stack names. Supported tokens: {namespace}, {tenant}, {environment}, {stage}, {component}
components: terraform: "test/test-component-override-2": settings: spacelift: workspace_enabled: true #…
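(The snippet in the release preview is truncated; presumably it continues with the new setting itself. A hedged reconstruction using the tokens listed above, with an assumed pattern value:)
components:
  terraform:
    "test/test-component-override-2":
      settings:
        spacelift:
          workspace_enabled: true
          # assumed continuation of the truncated snippet
          stack_name_pattern: "{tenant}-{environment}-{stage}-{component}"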
v1.4.2
what: Add init_run_reconfigure CLI config. Update stack_name_pattern. Disable running terraform plan and terraform workspace on abstract components.
why: The init_run_reconfigure CLI config allows enabling/disabling the -reconfigure argument for terraform init when running it before other terraform commands. Don’t use the default stack_name_pattern because it used {tenant}, which is not available for all clients. Running terraform plan and terraform workspace on abstract components creates…
2022-04-05
2022-04-11
hey yall! hope everyone is doing well. I have run into problems with atmos reading really large templates. i tried to read through the code to see if i could fix it but atmos v0.22 is written in variant and is not that amenable to being changed (the problem lies in a dependency’s dependency).
as a workaround I am wondering if there is an alternative in recent versions of atmos to the atmos v0.22 command atmos stack config --config-type all
it looks like you can get the config per component but not per stack.
we are working on new atmos commands (in the new atmos, not in the variant version) to show all the config for all stacks and components
(we are not supporting the old atmos written in variant, if you are using it, please upgrade to the new one)
thanks so much for the quick reply!
v1.4.3
what: Add --dry-run command-line flag to all commands and workflows.
why: Helps debugging atmos commands and workflows. The --dry-run flag shows all the flow and commands without executing them and without writing files to the file system (e.g. varfiles and backend config are not written). The --dry-run flag shows all the workflow steps without executing them.
test: atmos terraform plan test/test-component-override -s=tenant1-ue2-dev --dry-run
Variables for the component…
2022-04-12
v1.4.4
what: Add atmos describe stacks command. Allow writing the result to a file by using the --file command-line flag. Allow formatting the result as YAML or JSON by using the --format command-line flag. Allow filtering of the result by using the command-line flags: stack, component-types, components, sections. Available component sections: backend, backend_type, deps, env, inheritance, metadata, remote_state_backend, remote_state_backend_type, settings, vars.
why: Command to show stack configs and all the…
@here this is awesome! you can now query your stack configurations
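(For a sense of the output: running something like atmos describe stacks --components=vpc --sections=vars,backend_type --format=yaml should emit a map keyed by stack name. A rough sketch, not actual output; the stack name and values below are made up.)
tenant1-ue2-dev:
  components:
    terraform:
      vpc:
        backend_type: s3
        vars:
          cidr_block: 10.2.0.0/18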
2022-04-13
is there a way to create state information for resources (like an aws org account) that already exist (and may have been created outside of TF)?
@Michael Dizon in atmos YAML config?
i can just set the variables and it’ll just work?
you can set any variables, if they are supported by your module
remote_state_backend, where you just set the variables for the backend as if they were loaded from the s3 backend
Yep, this static backend is probably what you want. We implemented it for this similar use-case where we couldn’t fully adopt an existing account into the organization, but we wanted to adopt it into our stack configurations.
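(A minimal sketch of that in a stack config; the component name and the values under static are hypothetical, only the remote_state_backend_type/remote_state_backend keys come from the thread above.)
components:
  terraform:
    account:                        # hypothetical component representing the existing resource
      remote_state_backend_type: static
      remote_state_backend:
        static:
          # values other components read as if they were this component's remote-state outputs
          account_id: "111111111111"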
would i be able to migrate from the static backend to s3 (tfstate-backend)?
you can always change the backend_type and backend settings for a component. If the state is in S3, change it to s3 and provide the bucket and DynamoDB config. If the state is not in S3 and you are using an already provisioned resource, use the static backend.
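(Switching to s3 would then look roughly like this in the stack or globals config; the bucket, table and region values are placeholders.)
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: acme-ue2-root-tfstate               # placeholder
      dynamodb_table: acme-ue2-root-tfstate-lock  # placeholder
      region: us-east-2
      encrypt: true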
awesome, i’ll give this a go in the morn
circling back on this, i was able to use terraform import to achieve my goal. i may have miscommunicated my intended needs
v1.4.5
what: Fix detection of component dependencies for imported YAML config files.
why: If a file was imported and did not contain a vars section (global or related to the component), the imported file was not included in the component dependencies (deps section), and a Spacelift label for the file was not created (if the imported file changed, Spacelift would not notice the change and would not run the stack). Update the dependencies logic to check for these sections: backend, backend_type, env,…
2022-04-14
v1.4.6
what: If auto_generate_backend_file is true (we are auto-generating backend files), remove backend.tf.json when executing the atmos terraform clean command.
why: Useful when using different backends for the same component in different stacks.
2022-04-15
v1.4.7
what: Take into account the init_run_reconfigure CLI config and --init-run-reconfigure command-line argument when running the atmos terraform init … command.
why: atmos terraform init must behave the same as all other commands that use the init_run_reconfigure CLI config and --init-run-reconfigure command-line argument.
2022-04-16
2022-04-18
v1.4.8
what: Add metadata.terraform_workspace_pattern.
why: We already have metadata.terraform_workspace, which we can use to specify/override the terraform workspace for a component. metadata.terraform_workspace_pattern introduces a pattern to override the terraform workspace for a component:
metadata:
  # Override Terraform workspace
  terraform_workspace: xxxxxxxxxxxxxxx
  terraform_workspace_pattern: "{tenant}-{environment}-{stage}-{component}"
The following tokens are supported…
2022-04-21
v1.4.9
what: Add atmos aws commands. Add atmos aws eks update-kubeconfig command.
why: Execute aws CLI commands using atmos context (atmos.yaml CLI config and component/stack configurations). Downloading kubeconfig from an EKS cluster using atmos. Allow using this functionality in Terraform by implementing a new data source in the terraform-provider-utils provider (this will be added in a PR in…
2022-04-22
v1.4.10
what: If functions that are used by Terraform providers throw errors, print errors to std.Error.
why: Terraform providers only see errors that are sent to std.Error.
Another Atmos in the terraform space — https://github.com/simplygenius/atmos
Just heard about it on a call with a company. Seems fairly dead and not all that interesting of a project, but it is funny that the name would get chosen twice in the TF space.
Breathe easier with terraform. Cloud system architectures made easy
Yes, it appears to be on life support.
I think the company behind it was acquired by CloudTruth
Ah do you know the CloudTruth folks? They’re who mentioned it to me.
Why we chose atmos name: https://www.space.com/terraforming-in-alien-universe
Terraforming a planet and building better worlds is not as easy as it seems.
“Automated Atmosphere Processor”
Was called atmos for short
I love part 2, Aliens. The rest of the series gets questionable from there.
i remember reading something about nuking the martian poles to create an atmosphere
atmos > nuke
v1.4.11
what: Support dashes - in the tenant, environment and stage names. In the examples, add a new stage test-1 and add tests for components, stacks, and spacelift to test having a dash in the stage name (the file name itself being without dashes).
why: The old atmos supported it (because it was filename-based, not logical stack name based). Some clients want to name tenants/environments/stages with dashes in the names (and some already have it, so we need to support that when converting from the old to…
2022-04-25
2022-04-29
Good morning all! We are trying to add a new mono repo for our SRE team to Spacelift with Atmos and it seems like Spacelift can’t find the stacks based on base_path
$ atmos describe config | jq '.Components.Terraform.base_path'
"/components/terraform"
$ atmos describe config | jq '.Stacks.base_path'
"/stacks"
Spacelift run error:
No stack config files found in the provided paths:
│ - /mnt/workspace/source/components/terraform/spacelift/stacks/**/*
Our directory structure is the same as the atmos example here where we have at the root of the repo:
components
stacks
atmos.yaml
Trying to see if there is another variable overriding stacks.base_path somewhere else.
for spacelift, try setting ATMOS_BASE_PATH=/mnt/workspace/source in .spacelift/config.yml
in the repo, we have:
# <https://docs.spacelift.io/concepts/runtime-configuration>
version: "1"

stack_defaults:
  before_init:
    - spacelift-configure-paths
    - spacelift-git-use-https
    - spacelift-write-vars
    - spacelift-tf-workspace

  before_plan:
    - spacelift-configure-paths

  before_apply:
    - spacelift-configure-paths

  environment:
    AWS_SDK_LOAD_CONFIG: true
    AWS_CONFIG_FILE: /etc/aws-config/aws-config-cicd
    AWS_PROFILE: eg-gbl-identity
    ATMOS_BASE_PATH: /mnt/workspace/source

stacks:
  infrastructure:
    before_init: []
    before_plan: []
    before_apply: []
the important part is ATMOS_BASE_PATH: /mnt/workspace/source
also, in rootfs/usr/local/etc/atmos/atmos.yaml (which ends up in usr/local/etc/atmos/atmos.yaml in the Docker image that Spacelift is using to execute commands), we have:
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star `**` is supported)
# <https://en.wikipedia.org/wiki/Glob_(programming)>

# Base path for components and stacks configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path` and `stacks.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path` and `stacks.base_path`
# are considered paths relative to `base_path`.
base_path: ""

components:
  # Settings for all terraform components
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true

  # Settings for all helmfile components
  helmfile:
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_BASE_PATH` ENV var, or `--helmfile-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` ENV var
    helm_aws_profile_pattern: "{namespace}-gbl-{stage}-helm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` ENV var
    cluster_name_pattern: "{namespace}-{environment}-{stage}-eks-cluster"

# Settings for all stacks
stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  included_paths:
    - "**/*"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
    - "globals/**/*"
    - "**/*globals*"
    - "catalog/**/*"
    # exclude workflows
    - "workflows/**/*"
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{environment}-{stage}"

# <https://github.com/cloudposse/atmos/releases/tag/v1.4.0>
workflows:
  # Can also be set using `ATMOS_WORKFLOWS_BASE_PATH` ENV var, or `--workflows-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  verbose: false
  colors: true
oh interesting, I was missing the workflow base path var .spacelift/config.yml
I was just using our platform_infrastructure repo as copy pasta, but we aren’t using the latest atmos there. Trying that now.
So essentially the rootfs atmos.yaml needs to be identical to the repo root atmos.yaml?
Now my error is Error: stack name pattern must be provided in 'stacks.name_pattern' config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable
In both of my atmos configs, stacks.name_pattern == "{tenant}-{environment}-{stage}". This error makes it seem like it’s not being set?
what’s your atmos command that is failing
are you using tenant?
if not, you can modify the pattern to only contain the environment and stage
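(i.e. in atmos.yaml, something like the sketch below; both patterns appear elsewhere in this thread, so pick the one matching the context variables you actually define.)
stacks:
  # if every stack defines tenant
  name_pattern: "{tenant}-{environment}-{stage}"
  # otherwise
  # name_pattern: "{environment}-{stage}"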
I’m not sure exactly which atmos command spacelift is doing here. This file stacks/globals/datadog_slo-globals-dev.yaml has the following vars.
vars:
  namespace: bd
  region: us-east-2
  tenant: datadog_slo
  environment: global
oh so you are using tenant, then I’d keep the pattern the same.
from that config the stack name should be datadog_slo-global-<stage>
but you’re missing the stage name
try this as an example
vars:
  namespace: bd
  region: us-east-2
  tenant: datadog_slo
  environment: global
  stage: banana
now the stack name will be datadog_slo-global-banana
usually we use stage to be the account name
so prod, dev, staging, sandbox, qa, or similar
I’m taking over a project from someone who has been out, and I don’t have much context on how these directory structures worked when getting this new project going
If you are using geodesic, you don’t need atmos.yaml in the root of the repo. Both geodesic and Spacelift should take it from the rootfs
We are using geodesic here. So I can remove the repo’s root atmos.yaml
Hmm. still getting the same error
╷
│ Error: stack name pattern must be provided in 'stacks.name_pattern' config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable
│
│ with module.spacelift.module.yaml_stack_config.data.utils_spacelift_stack_config.spacelift_stacks,
│ on .terraform/modules/spacelift.yaml_stack_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│ 1: data "utils_spacelift_stack_config" "spacelift_stacks" {
This platform_sre is an Administrative stack, if that gives more context, and currently the regular stacks are not shown.
it would be easier to test this locally before running it in Spacelift
get the atmos commands to work locally and then spacelift will be easier to troubleshoot
Good point. I do recall this working locally in geodesic, but it seems with my changes it is not working now.
√ . [bd-gbl-identity] / ⨠ atmos-local terraform plan -s datadog_slo-global-dev "datadog/sre-slo"
/localhost/git/bread/platform_sre /
Found ENV var ATMOS_STACKS_BASE_PATH=/localhost/git/bread/platform_sre/stacks
Could not find config for the component 'datadog/sre-slo' in the stack 'datadog_slo-global-dev'.
Check that all attributes in the stack name pattern '{tenant}-{environment}-{stage}' are defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
This is our directory tree of stacks
stacks
|-- datadog_slo
|   `-- global
|       `-- dev.yaml
|-- globals
|   |-- datadog_slo-globals-dev.yaml
|   `-- globals.yaml
`-- spacelift
    `-- global
        `-- dev.yaml
the file structure doesn’t matter, it’s all about the inputs in the yaml
is your component datadog/sre-slo in the datadog_slo-global-dev stack?
can you revert your atmos.yaml file and get it to work again ?
I reverted back on my branch that was working. and now I’m trying to do diffs and figure out what broke the local atmos run
atmos-local terraform plan -s datadog_slo-global-dev "datadog/sre-slo" seems to be working now.
my stacks/globals/datadog_slo-globals-dev.yaml is as follows:
vars:
  namespace: bd
  region: us-east-2
  tenant: datadog_slo
  environment: global

terraform:
  vars: {}

components:
  terraform:
    "datadog/sre-slo":
      vars:
        gh_org: getbread
okay so I figured out what is breaking my local atmos.
when I add the following to stacks/spacelift/global/dev.yaml
vars:
  namespace: bd
  region: us-east-2
  tenant: spacelift
  environment: global
  stage: banana

terraform:
  vars: {}
I get this error. I think I am misunderstanding the relationship between the files/variables.
√ . [bd-gbl-identity] / ⨠ atmos-local terraform plan -s datadog_slo-global-dev "datadog/sre-slo"
/localhost/git/bread/platform_sre /
Found ENV var ATMOS_STACKS_BASE_PATH=/localhost/git/bread/platform_sre/stacks
Could not find config for the component 'datadog/sre-slo' in the stack 'datadog_slo-global-dev'.
Check that all attributes in the stack name pattern '{tenant}-{environment}-{stage}' are defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
the file names are independent of the stacks themselves
i was suggesting to append your vars with stage: whatever, but i think you created a new file side by side that may have duplicated the existing file, which probably confused atmos
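(In other words, rather than a new side-by-side file, the existing stacks/globals/datadog_slo-globals-dev.yaml would just gain a stage. A sketch based on the suggestion above; stage: dev is assumed from the stack name datadog_slo-global-dev.)
vars:
  namespace: bd
  region: us-east-2
  tenant: datadog_slo
  environment: global
  stage: dev   # added so the pattern {tenant}-{environment}-{stage} resolves to datadog_slo-global-dev

terraform:
  vars: {}

components:
  terraform:
    "datadog/sre-slo":
      vars:
        gh_org: getbread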
Picking back up here on this issue. I have atmos-local running fine in geodesic. We are exec’d into the spacelift runner
bash-5.1$ pwd
/
bash-5.1$ find . -type f -executable | grep atmos
./mnt/workspace/source/atmos.yaml
./mnt/workspace/source/rootfs/etc/profile.d/atmos-local.sh
find: ./proc/tty/driver: Permission denied
./mnt/workspace/source/rootfs/usr/local/etc/atmos/atmos.yaml
find: ./root: Permission denied
bash-5.1$ cat /mnt/workspace/source/rootfs/etc/profile.d/atmos-local.sh
export LOCAL_PREFIX=/localhost${SRC_RELATIVE_PATH}
function atmos-local() {
  pushd $LOCAL_PREFIX
  ATMOS_STACKS_BASE_PATH=$LOCAL_PREFIX/stacks atmos $@
  popd
}
Looks like it’s a wrapper here? I can’t find the atmos binary anywhere in that image. How would we debug this when trying to run atmos locally in the spacelift runner? All we can see is that atmos is making its way into the provider here? Or is spacelift running a custom terraform command here?
github.com/cloudposse/atmos v1.4.11