#atmos (2024-08)
2024-08-01
I am currently working on enabling Go templates in my Atmos config, and I’m having some trouble excluding a Go template from processing by Atmos. In my manifest, I originally had this:
vars:
  username: "aws-sso-qventus-admin-latam-{{SessionName}}"
After enabling templates globally in atmos.yaml, I got this error:
template: all-atmos-sections:136: function "SessionName" not defined
I changed the definition to
vars:
  username: "aws-sso-qventus-admin-latam-{{`{{SessionName}}`}}"
but I’m still getting the same error. I tried the printf syntax as well:
vars:
  username: '{{ printf "aws-sso-qventus-admin-latam-{{SessionName}}" }}'
Same error. Anyone know what I’m missing?
Atmos 1.85.0, btw
@Andriy Knysh (Cloud Posse) is it related to the number of evaluations?
@Andy Wortman can you mess with https://atmos.tools/core-concepts/stacks/templates/#processing-pipelines
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    # Number of evaluations/passes to process `Go` templates
    # If not defined, `evaluations` is automatically set to `1`
    evaluations: 1
change evaluations from 2 to 1 for example
see if it makes a difference
Yup, that did it. Changed evaluations from 2 to 1, and now it’s working. Thanks @Erik Osterman (Cloud Posse)!
thanks Erik
Please note that with evaluations at 1, you'll be limited in other ways. Always some tradeoffs.
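For context, the number of passes explains the original error: the backtick escape only survives a single evaluation. A sketch of the working config, with the reasoning as comments:

```yaml
# atmos.yaml
templates:
  settings:
    enabled: true
    # Pass 1 renders {{`{{SessionName}}`}} to the literal string {{SessionName}}.
    # A second pass would then try to evaluate {{SessionName}} as a template
    # and fail with: function "SessionName" not defined
    evaluations: 1
```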
Is there a repository (yet) for common OPA policy patterns that are used with atmos?
If not, is there one planned?
Some potential examples
• Avoid s3 bucket policies in s3 components that contain wildcard actions
• Avoid s3 bucket policies in s3 components that allow cross-account access with NOT allowlisted account ids
• etc
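As a taste of what such a library entry might look like, here's a sketch of the first pattern using Atmos's OPA validation convention (`package atmos` with an `errors` set); the `policy_statements` variable shape and the file path are hypothetical:

```rego
# stacks/schemas/opa/s3-bucket.rego (hypothetical location)
package atmos

# Flag any bucket policy statement that uses a wildcard action
errors[message] {
    statement := input.vars.policy_statements[_]
    statement.actions[_] == "s3:*"
    message := "S3 bucket policies must not contain wildcard actions"
}
```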
I’d contribute to a community supported atmos opa library fwiw
Agreed, I am writing policies at the moment - if there was an open repo I would contribute!
nice idea, we will consider it, thanks
Yes, this is much needed
I’m heads down on the new public docs for refarch, but after that will put something up
Could be a good office hours discussion too
2024-08-05
i’m so used to using .yml , is there a reason atmos doesn’t find stacks named like that
it’s a historical reason, doesn’t make sense now, we’ll improve it to read both extensions
nice it’s nearly caused me to throw my machine out the window a couple of times from stacks not being found
yep, sorry for that. We have a task to add support for .yml extensions. We'll try to do it this week
i like the way atmos generated the tfvars from yaml but i’m wondering if I could disable the different workspace per component behaviour
i’m a real beginner here but it seems like there’s a lot of complexity to share things between each state file
if I could disable the different workspace per component behaviour
can you provide more details on what you want to do here?
i like the breaking up of the state file idea a bit because it scared me to have everything in one file
we have buckets for each account
but the consequences of splitting it up are quite painful
terraform-provider-utils needing an absolute path in there wasn’t ideal
and the number of modules in s3-buckets root module example
really complicated for me and the devs here.. i think it won’t go down well
i was hopeful that atmos would be fast because the statefile is much smaller
but the speed of a run is quite slow because of all this
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of cloudposse/template from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of cloudposse/awsutils from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Reusing previous version of cloudposse/utils from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Reusing previous version of hashicorp/http from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Using previously-installed hashicorp/aws v5.61.0
- Using previously-installed cloudposse/awsutils v0.19.1
- Using previously-installed hashicorp/external v2.3.3
- Using previously-installed cloudposse/utils v1.24.0
- Using previously-installed hashicorp/time v0.12.0
- Using previously-installed hashicorp/http v3.4.4
- Using previously-installed hashicorp/local v2.5.1
- Using previously-installed cloudposse/template v2.2.0
so in a roundabout way i'm saying i'm struggling to justify the splits to myself
account_map for instance seems to be mapping account ids to names or something. Couldn’t I just make a static list var of account_maps in the stack config and just share it between my different stacks?
a real world example I can think of is that I’ll want to build a vpc and after that I’ll need the details of the db subnet ids to build an rds instance somehow
ok I see what you are saying. Let me post a few links here how it could be done:
magic thank you
if you are using Cloud Posse architecture with account-map, and you don't want to provision the account-map component, you can use the Atmos static backend to provide static values (it's called brownfield configuration)
There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.
second, we provide support for remote state using two diff methods:
• using the remote-state Terraform module (which uses the terraform-provider-utils)
• using the Go templates in Atmos stack manifests and the atmos.Component template function
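For reference, the first method looks roughly like this inside the consuming component's Terraform code (a sketch; the version pin is a placeholder you'd set yourself):

```hcl
# Method 1: read the remote state of the "vpc" component in the same stack
# using the Cloud Posse remote-state module
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "x.x.x" # placeholder - pin to a real version

  component = "vpc"
  context   = module.this.context
}

# elsewhere in the component:
# vpc_id = module.vpc.outputs.vpc_id
```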
Read the remote state or configuration of any Atmos component
For example, you have a tgw component that requires the vpc_id output from the vpc component in the same stack:
components:
  terraform:
    tgw:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
by using the Go template function atmos.Component, you don't have to use the remote-state Terraform module
please review the links and let us know if you have any questions
@Andriy Knysh (Cloud Posse) i've had issues using those atmos functions with github actions. when i have gomplate enabled, it hangs
am i missing something
@Michael Dizon make sure your GH action downloads the latest Atmos version (the template function atmos.Component was added not so long ago)
let me know if that fixes the issue
hmm so i got atmos.Component to work on an output from a vpc module for vpc_id
12 seconds
vs sub-second
12 seconds is too much (it just calls terraform output). please share your config (you can DM me)
hey did you guys manage to make the time go down? I’m having a similar time
if you are using the atmos.Component function to get the component's outputs, it's just a wrapper on top of terraform output.
For example:
{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}
is the same as executing
atmos terraform output vpc -s <stack>
can you execute atmos terraform output on your component in the stack and see how long it takes?
if it takes that long, then Atmos can't help here b/c it executes the same terraform output command
let me know how long atmos terraform output takes for you
also FYI, Atmos caches the outputs of each function call for the same component in the same stack. So if you have many instances of this function for the vpc component in the same stack, like {{ (atmos.Component "vpc" .stack).outputs.vpc_id }}, Atmos will execute terraform output only once
but it will execute it once per stack, so for example, if you are using it in many diff top-level stacks, all of them will be executed when processing the Go templates - this is how the templates work
maybe the 12 seconds you mentioned is b/c the function is executed many times when Atmos processes all the components
this cmd:
atmos terraform output vpc -s <stack> <- takes 13s
but this one takes the same amount of time:
atmos list stacks <- takes 1m and 08s
yea, that's an issue
atmos terraform output vpc -s <stack> takes 13 s - this is too long, you prob need to review why it's taking so much time to run
and you prob have many of those in the Atmos stack manifests
I will check that, although it's probably bc of how I authenticate, but I will confirm
but is this why it takes 1m to run atmos list stacks? does it do the outputs as well on that cmd?
since you have Go templates in the files, the command processes all of them, b/c when you execute atmos list stacks you want to see the final values for all components in those stacks. Note that the command is custom and just calls atmos describe stacks, which processes all components in all stacks and all Go templates in order to show you the final values for all the components (and not the template strings)
are you using the atmos.Component function to get the outputs of the components (remote state), or are you using it to get other sections from the components?
I'm using it to get the outputs of the components; these are examples of how I use it:
vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_private_subnets: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'
also I get the same time when using the UI with the cmd atmos - does this also evaluate the templates?
the code is correct
one call to atmos.Component takes 13 seconds for you
you have two calls in one stack
if you have a few stacks, then it explains why you are seeing
atmos list stacks <- takes 1m and 08s
this is not easy to optimize in Atmos since we process all Go templates in the manifests when executing atmos describe stacks (to show you the final values and not the template strings) - but let me think about it.
atmos list stacks def has nothing to do with Go templates (should not), but it's how it's implemented - it just calls atmos describe stacks. Maybe we can implement a separate function for atmos list stacks that does not need to process Go templates and shows just the stack names
I have 5 stacks, so maybe it does make sense. maybe we can add a flag to skip that? and I can just add it to the custom cli cmd
also is there a way to optimize the calls to atmos.Component? like if 1 call takes 13s, but with that call I can get all of the outputs I need from it, is there a way to maybe save that into a variable and use it for the variables I pass in?
yes, we can add a flag to atmos describe stacks to skip template processing; we'd use it when we don't need to show the components, but just the stack names
like if 1 call takes 13s but with that call I can get all of the outputs I need from it is there a way to maybe saved that into a variable and use that on the variables I pass in?
Atmos already does that - if you use the functions for the same component in the same stack, Atmos caches the result and reuses it in the next calls, but only for the SAME component in the same stack
but for diff stacks, the result is diff, so it calls it again (as many times as you have stacks)
these two calls will execute terraform output just once for the same stack:
vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_private_subnets: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'
so you have 13 sec per stack multiplied by the number of stacks, giving you 1 min to execute atmos describe stacks
let me think about adding a flag to atmos describe stacks to disable template processing. Then you would update your custom command atmos list stacks to use that flag
@Miguel Zablah please take a look at the new Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.86.0
you can add the --process-templates=false flag to the atmos describe stacks command in your custom atmos list stacks command, and Atmos will not process templates, which should make the atmos list stacks command much faster
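For reference, such a custom command could look like this in atmos.yaml (a sketch; the grep/sed post-processing is one way to reduce the describe output to bare stack names):

```yaml
# atmos.yaml
commands:
  - name: list
    commands:
      - name: stacks
        description: List all Atmos stacks without processing Go templates
        steps:
          # --process-templates=false skips template evaluation;
          # --sections none drops the component details
          - atmos describe stacks --process-templates=false --sections none | grep -e "^\S" | sed s/://g
```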
Oh this is great!!! Thanks I will update and try it in a bit
@Andriy Knysh (Cloud Posse) sorry, I just tested it and it's awesome and crazy fast! this will save me sooo much time!! thanks a lot!
2024-08-06
I've been rummaging around in the Atmos docs for hours, and I'm coming up blank, sorry. I have a stack I'm deploying to dev in an Azure subscription. My name_pattern is {namespace}-{environment} and so my stack is called dev-wus3. I'd like to provision a version of the stack in the same Azure subscription per pull request via a pipeline so that PRs can be tested, and destroy the stack when the PR is closed. If I set the namespace to pr192, atmos throws an error because the pr192 stack doesn't exist.
I created a pr.yaml which imports the dev stack and overrides the namespace, and while it works - it's super hacky!
---
import:
  - development/westus3.yaml
vars:
  environment: wus3
  namespace: pr192
Thoughts on how best to achieve this? Thanks
you could potentially overwrite namespace at the component level for that, but that is not recommended
this is where the attributes variable of the null-label module comes in handy
if you do it at the vars level, all the stuff created in that stack will have dev-wus3-pr192
so it will look like
vars:
  environment: wus3
  attributes:
    - pr192
Won’t I need a conditional in each component to implement the attribute?
as long as all your components use the null-label module to set the name then you do not need to
you can pass the atmos variable at the cli level
if it needs to be set up at run time
Using an overrides block in my pr.yaml to set the namespace var per component could work, I guess. You said it's not recommended… what's the downside?
you are changing a stack name dynamically, which is a core definition in atmos
Struggling to find docs on how to use attributes - do you have a link?
null-label module docs, not atmos
How does the attribute get from CLI to vars: attributes?
that will be a TF_VAR, not atmos related actually
Ah, got it
2024-08-08
Is there a way to inject environment variables on each plan/apply using the stack yaml ?
use the env section, as globals or per component
the env section is a first-class section (map), same as the vars section - it participates in all the inheritance and deep-merging
components:
  terraform:
    test-component:
      vars:
        enabled: true
      env:
        TEST_ENV_VAR1: "val1"
        TEST_ENV_VAR2: "val2"
        TEST_ENV_VAR3: "val3"
the ENV vars will be set in the process that executes the Atmos commands
Oh man and i couldn’t find this in the doc ugh
Can that work globally too
globally too
same way as vars
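For instance, a global env section at the top of a stack manifest applies to every component in it and deep-merges with per-component sections (the variable names and values here are hypothetical):

```yaml
# Global env section in a stack manifest; applies to all components below
env:
  AWS_REGION: us-east-2        # hypothetical value
  TEST_ENV_VAR1: "global-val1"

components:
  terraform:
    test-component:
      env:
        # deep-merged with the global section;
        # overrides TEST_ENV_VAR1 for this component only
        TEST_ENV_VAR1: "component-val1"
```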
What about with atmos component interpretation
env:
  TF_AWS_DEFAULT_TAGS_repo: org/repo
  TF_AWS_DEFAULT_TAGS_component: {{ atmos.component }}
  TF_AWS_DEFAULT_TAGS_component_real: {{ atmos.component_real }}
works
the templates need to be quoted (to make it valid YAML); also, for template data, a dot . prefix needs to be used (otherwise it will be treated as a function):
TF_AWS_DEFAULT_TAGS_component: '{{ .atmos.component }}'
@Dave Nicoll could be an answer to your question too
New here, but is there something I need to add to the atmos.yaml file before I can use atmos.Component in my stack?
Whenever I use it to pass the output of component 1 into component 2, terraform screams that it's not legible
It literally passes '{{ (atmos.Component…}}' in as a string
The docs don't say much more than just how to use atmos.Component in the yaml
Go templates in Atmos stack manifests are not enabled by default (they are enabled in imports, but those serve a diff purpose)
in your atmos.yaml, add the following:
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    # Number of evaluations/passes to process `Go` templates
    # If not defined, `evaluations` is automatically set to `1`
    evaluations: 1
    # https://masterminds.github.io/sprig
    sprig:
      # Enable Sprig functions in `Go` templates in Atmos stack manifests
      enabled: true
    # https://docs.gomplate.ca
    # https://docs.gomplate.ca/functions
    gomplate:
      # Enable Gomplate functions and data sources in `Go` templates in Atmos stack manifests
      enabled: true
      # Timeout in seconds to execute the data sources
      timeout: 5
      datasources: {}
Thanks a lot, got me further
But now I'm running into the issue of retrieving aws creds to access the remote backend
I see there is documentation of envs to use in templates.settings which include aws profiles, but I've tried multiple cases:
1- I set it equal to {{ .Flags.stack }} since my stack names correspond to the aws profiles
2- hardcoded my profile name in, but still get the same error
3- I'm currently using aws sso for the profiles, so I tried inputting access and secret keys instead, but no difference
the error:
template: all-atmos-sections:293:31: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Please see https://www.terraform.io/docs/language/settings/backends/s3.html
for more information about providing credentials.
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I can see it works when I manually set the aws_profile value locally to match my profile name
I'm trying to set the profile name so atmos can find it via the atmos.yaml, but it won't work even when I set
env:
  AWS_PROFILE: abc
in templates.settings.env
I think you should let your pipeline pass the profile and then have atmos run the command, if you can
Atmos is not a cicd tool; usually the selection of profiles is determined by the naming convention in your module, by passing a descriptive name + account number
the atmos.Component template function executes terraform output on the provided component and stack. This means that the referenced component must be already provisioned using a backend
you should be able to execute atmos terraform output <component> -s <stack> and see the component outputs from the remote state. If this command does not execute, then you need to make sure you have defined the backend correctly and a role with permissions to access the backend
{{ (atmos.Component <component> .stack).outputs.xxx }} is just a wrapper on top of atmos terraform output <component> -s <stack>
please review this doc on how to configure TF backends in Atmos https://atmos.tools/quick-start/advanced/configure-terraform-backend/
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
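For reference, a component's backend is typically configured in the stack manifests along these lines (the bucket, region, role, and table names below are placeholders, not values from this thread):

```yaml
# stack manifest (sketch)
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "my-org-tfstate"                              # placeholder
      key: "terraform.tfstate"
      region: "us-east-2"                                   # placeholder
      role_arn: "arn:aws:iam::111111111111:role/tfstate"    # placeholder
      dynamodb_table: "my-org-tfstate-lock"                 # placeholder
      encrypt: true
```

The role named in role_arn is what atmos.Component assumes when it runs terraform output, so it must be reachable from the CI credentials as well.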
2024-08-09
2024-08-10
2024-08-12
Having trouble with stack templates in github actions. My config works when run locally, but when run in a github action, the go templates return "<no value>", and I haven't been able to figure out why.
I’ve verified that my local atmos version and the one installed by the github action workflow are both 1.85.0. Both are using the same atmos.yaml, which has templating enabled. Obviously, both are using the same terraform and atmos code as well. Where else should I be looking for differences?
@Andriy Knysh (Cloud Posse)
Here’s my template config from atmos.yaml:
templates:
  settings:
    enabled: true
    evaluations: 1
    delimiters: ["{{", "}}"]
    sprig:
      enabled: true
    gomplate:
      enabled: true
      timeout: 5
      datasources: {}
Variable I’m trying to pull:
rds_multitenant_sg_id: '{{ (atmos.Component "aioa-core/multitenant-rds" .atmos_stack).outputs.rds_sg_id }}'
Output from github action:
# aws_security_group_rule.db_ingress must be replaced
-/+ resource "aws_security_group_rule" "db_ingress" {
~ id = "//redacted//" -> (known after apply)
~ security_group_id = "//redacted//" -> "<no value>" # forces replacement
~ security_group_rule_id = "//redacted//" -> (known after apply)
Is this related? https://sweetops.slack.com/archives/C031919U8A0/p1722024240662429
Hello. Previous behavior for stack imports would fail if a value is missing; now it does not fail and it replaces my missing import values with "<no value>". I have Go templating enabled and gomplate disabled in my atmos.yaml. Is this new intended behavior? If so, is the solution to use schemas/opa to ensure stack files have the proper values?
it could happen if the GH action does not see the atmos.yaml (that you use when running locally, which works), so it does not see
templates:
  settings:
    enabled: true
or it points to another atmos.yaml where you have evaluations: 2
locally, try to set evaluations: 2 in atmos.yaml and run atmos describe component, and you will probably see <no value>
@Andy Wortman can you do that and let us know what output do you see?
I set evaluations: 2, but the local run still worked.
To Erik’s question, we’re not using import templates, as far as I can tell.
This component does import an abstract component, but all the templating is in the real component
maybe in your GH action you could run the https://atmos.tools/cli/commands/describe/config/ command and see the config that’s used?
Use this command to show the final (deep-merged) CLI configuration of all atmos.yaml file(s).
Here’s the templates section from the github runner:
"templates": {
  "settings": {
    "enabled": true,
    "sprig": {
      "enabled": true
    },
    "gomplate": {
      "enabled": true,
      "timeout": 5,
      "datasources": null
    },
    "delimiters": [
      "{{",
      "}}"
    ],
    "evaluations": 1
  }
},
thanks
looks ok
i don’t know why the issue is present in the GH action, we are investigating it, we’ll let you know once we figure it out
Thanks. I’ll keep digging on my side too.
cc @Dan Miller (Cloud Posse) , in case this is related to the thing you were looking into as well
I’m not sure unfortunately - this isn’t related to anything I was working on. It might’ve been @Ben Smith (Cloud Posse), but iirc he said that issue was an older version of Atmos. Which sounds like isn’t the issue here
@Andy Wortman please take a look at the new Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.86.0
you can use the env var ATMOS_LOGS_LEVEL=Trace in the GH action, and the atmos.Component function will log debug messages and the outputs section of the component
Nice!
we are testing it ourselves now; let us know if you get to it faster and find anything regarding the issue with templates in the GH action
Got the new version in place and added trace logging. I’m getting this error:
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: invalid character 'c' looking for beginning of value
The template I’m using to test is:
alb_arn: '{{ (atmos.Component "alb" "ue2-dev-common").outputs.alb.alb_arn }}'
@Andy Wortman what do you have in your atmos.yaml file in the logs section?
logs:
  # Can also be set using 'ATMOS_LOGS_FILE' ENV var, or '--logs-file' command-line argument
  # File or standard file descriptor to write logs to
  # Logs can be written to any file or any standard file descriptor, including `/dev/stdout`, `/dev/stderr` and `/dev/null`
  file: "/dev/stderr"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  # Can also be set using 'ATMOS_LOGS_LEVEL' ENV var, or '--logs-level' command-line argument
  level: Info
Check that you have file: "/dev/stderr" to log to stderr instead of stdout (which breaks the JSON output)
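Putting the two settings from this thread together, a minimal logs section for trace-debugging looks like this (the level can also come from the ATMOS_LOGS_LEVEL env var in the GH action):

```yaml
# atmos.yaml
logs:
  # log to stderr so stdout stays valid JSON for commands like `atmos describe stacks`
  file: "/dev/stderr"
  # Trace makes atmos.Component log its execution flow and outputs
  level: Trace
```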
also, we will release a new Atmos version today with more improvements to the atmos.Component function
Not much in the logs section:
logs:
  verbose: false
  colors: true
please update to the above code
Updated to atmos 1.86.1, updated the logs section. Getting a little more now:
Executing template function 'atmos.Component(alb, ue2-dev-common)'
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: invalid character 'c' looking for beginning of value
Error: Process completed with exit code 1.
Added atmos describe stacks -s ue2-dev-aioa to the action workflow. Getting basically the same thing:
Validating all YAML files in the 'stacks' folder and all subfolders
ProcessTmplWithDatasources(): processing template 'describe-stacks-all-sections'
ProcessTmplWithDatasources(): template 'describe-stacks-all-sections' - evaluation 1
Executing template function 'atmos.Component(alb, ue2-dev-common)'
Found stack manifests:
<...>
Found component 'alb' in the stack 'ue2-dev-common' in the stack manifest 'dev/common/ue2-dev-common'
ProcessTmplWithDatasources(): processing template 'all-atmos-sections'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: invalid character 'c' looking for beginning of value
Error: atmos exited with code 1.
Error: Process completed with exit code 1.
For completeness, this all still works locally. It only fails in the github action.
we’ll release a new atmos version today/tomorrow which can help to identify the issues
@Andy Wortman please try this version https://github.com/cloudposse/atmos/releases/tag/v1.86.2
Ran against 1.86.2. Some more logging about the process, but nothing further about the error:
Found component 'alb' in the stack 'ue2-dev-common' in the stack manifest 'dev/common/ue2-dev-common'
ProcessTmplWithDatasources(): processing template 'all-atmos-sections'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'
Writing the backend config to file:
components/terraform/alb/backend.tf.json
Wrote the backend config to file:
components/terraform/alb/backend.tf.json
Writing the provider overrides to file:
components/terraform/alb/providers_override.tf.json
Wrote the provider overrides to file:
components/terraform/alb/providers_override.tf.json
Executing 'terraform init alb -s ue2-dev-common'
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1
Error: atmos exited with code 1.
Error: Process completed with exit code 1.
we’ll be testing 1.86.2 today as well. You can DM me your YAML config for review @Andy Wortman
2024-08-13
All the docs for our AWS reference architecture are now public
https://sweetops.slack.com/archives/C04NBF4JYJV/p1723233360401039
New docs are live! https://docs.cloudposse.com/
You can now access all the docs without registration.
cc @jose.amengual
amazing
2024-08-14
is there a custom way to generate files? kind of like how the backend.tf.json is created? I'd like to create the version.tf files as well; they can be .json too
Not at this time. We support only backend and provider generation.
I could see the benefit of generating versions
yeah this can be an awesome thing to add
Add --process-templates flag to atmos describe stacks and atmos describe component commands. Update docs @aknysh (#669)
what
• Add logging to the template functions atmos.Component and atmos.GomplateDatasource
• Add --process-templates flag to atmos describe stacks and atmos describe component commands
• Update docs
why
• When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template functions atmos.Component and atmos.GomplateDatasource will log the execution flow and the results of template evaluation - useful for debugging
ATMOS_LOGS_LEVEL=Trace atmos terraform plan
Process Go templates in stack manifests and show the final values:
atmos describe component
Process Go templates in stack manifests and show the final values:
atmos describe component --process-templates=true
Do not process Go templates in stack manifests and show the template tokens in the output:
atmos describe component --process-templates=false
• For the atmos describe stacks command, use the --process-templates flag to see the stack configurations before and after the templates are processed. If the flag is not provided, it's set to true by default.
Process Go templates in stack manifests and show the final values:
atmos describe stacks
Process Go templates in stack manifests and show the final values:
atmos describe stacks --process-templates=true
Do not process Go templates in stack manifests and show the template tokens in the output:
atmos describe stacks --process-templates=false
The command atmos describe stacks --process-templates=false can also be used in Atmos custom commands that just list Atmos stacks and do not require template processing. This will significantly speed up the custom command execution. For example, the custom command atmos list stacks just outputs the top-level stack names and might not require template processing. It will execute much faster if implemented like this (using the --process-templates=false flag with the atmos describe stacks command):
- name: list
  commands:
why
• The affected step was missed when the plan example was updated
references
• closes https://github.com/orgs/cloudposse/discussions/18
Updated Documentation for GHA Versions @milldr (#657)
what
• Update documentation for Atmos GitHub Action version management
why
• New major releases for both actions
references
team, question wrt atmos inheritance from catalogs, i.e. mainly seeing it in var.tags. This is kind of new behavior: I thought the inline tag values take precedence over any catalog imports, correct? but the import values are taking over? anything to do with the atmos version, or new configs to be aware of?
not related to the catalog itself, related to any imports. In short, Atmos does the following:
• Reads all YAML config files and processes all imports (import order matters)
• Then for each section, it deep-merges the values in the following order of precedence: component values -> base component values -> global values
inline values (defined for a component in a stack) take precedence over the imported values
please see this doc https://atmos.tools/quick-start/advanced/provision/#stack-search-algorithm
Having configured the Terraform components, the Atmos components catalog, all the mixins and defaults, and the Atmos top-level stacks, we can now
here is the var section snippet when I describe the component
tags:
  final_value:
    compliance: na
    criticalsystem: "false"
    env: p1np
    managedby: cloudposse
    pillar: test1
    publiclyaccessible: pri3
    service: envoy-proxy
  name: tags
  stack_dependencies:
    - dependency_type: import
      stack_file: catalog/aws-irsa-role/defaults
      stack_file_section: components.terraform.vars
      variable_value:
        compliance: na
        criticalsystem: "false"
        managedby: cloudposse
        publiclyaccessible: pri3
        service: envoy-proxy
    - dependency_type: inline
      stack_file: orgs/test1/p-line/usw2/p1np/mm-experiment-store-recs-inference-mastermind
      stack_file_section: terraform.vars
      variable_value:
        compliance: gdpr
        criticalsystem: true
        publiclyaccessible: pri3
        service: mm-experiment-store-recs-inference-mastermind1
    - dependency_type: import
      stack_file: catalog/aws-irsa-role/p1np-defaults
      stack_file_section: terraform.vars
      variable_value:
        env: p1np
    - dependency_type: import
      stack_file: globals
      stack_file_section: vars
      variable_value:
        pillar: test1
here I would expect the final value for service to be ‘mm-experiment-store-recs-inference-mastermind1’ but I see ‘envoy-proxy’
well… I can't answer that question w/o seeing your config (it all depends on component config and imports). You can DM me and I'll take a look
ok let me do that
after reviewing this with @Alcp, here is the short answer (in case someone else needs it): component-level values (e.g. vars.tags) override the same global values, regardless of whether the component values are defined inline or imported
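A minimal illustration of that rule (the file paths and component name below are hypothetical):

```yaml
# stacks/catalog/defaults.yaml (imported) defines component-level tags:
# components:
#   terraform:
#     my-component:
#       vars:
#         tags:
#           service: envoy-proxy

# top-level stack manifest:
import:
  - catalog/defaults

components:
  terraform:
    my-component:
      vars:
        tags:
          # inline component-level value: this wins after deep-merging,
          # because component values take precedence over global values
          service: my-service
```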
Is there a good way to do repo separation for:
• Risky components such as root components
• Segmenting high-priv accounts from developer-friendly accounts
This would require sharing configurations between a root-level atmos repo and the segmented ones
Yes, this is possible, but we don’t have it well documented.
Good news is we have a project coming up with a customer to address this and multiple other enhancements. No ETA yet, but likely this year.
As an infrastructure repo gets larger and larger, it can become unwieldy and intimidating for teams. We want to make it easier to break it apart, while maintaining the benefits of the atmos framework.
Just spitballing… Is one way simply
• Spacelift: adding the stack configs to geodesic and then pulling down the geodesic container
• Github actions: use a deploy key to clone the root repo’s configs to get aws account, aws account map and other components. In either approach, it would be grabbing the root YAML, then combining and deep merging with the child account repo’s YAML, and then finally running the atmos workflow
2024-08-15
Improve logging for the template function atmos.Component
@aknysh (#672)
what
• Improve logging for the template function atmos.Component
• Update Golang to the latest version 1.23
why
• When the environment variable ATMOS_LOGS_LEVEL
is set to Trace
, the template functions atmos.Component
and atmos.GomplateDatasource
will log the execution flow and the results of template evaluation - useful for debugging
ATMOS_LOGS_LEVEL=Trace atmos terraform plan
This PR adds more debugging information and shows the results of the atmos.Component
execution, and shows if the result was found in the cache:
Found component 'template-functions-test' in the stack 'tenant1-ue2-prod' in the stack manifest 'orgs/cp/tenant1/prod/us-east-2'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1
Converting the variable 'test_list' with the value ["list_item_1", "list_item_2", "list_item_3"] from JSON to 'Go' data type
Converted the variable 'test_list' with the value ["list_item_1", "list_item_2", "list_item_3"] from JSON to 'Go' data type. Result: [list_item_1 list_item_2 list_item_3]
Converting the variable 'test_map' with the value {"a": 1, "b": 2, "c": 3} from JSON to 'Go' data type
Converted the variable 'test_map' with the value {"a": 1, "b": 2, "c": 3} from JSON to 'Go' data type. Result: map[a:1 b:2 c:3]
Converting the variable 'test_label_id' with the value "cp-ue2-prod-test" from JSON to 'Go' data type
Converted the variable 'test_label_id' with the value "cp-ue2-prod-test" from JSON to 'Go' data type. Result: cp-ue2-prod-test
Executed template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)'
'outputs' section:
  test_label_id: cp-ue2-prod-test
  test_list:
    - list_item_1
    - list_item_2
    - list_item_3
  test_map:
    a: 1
    b: 2
    c: 3
Executing template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)'
Found the result of the template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)' in the cache
'outputs' section:
  test_label_id: cp-ue2-prod-test
  test_list:
    - list_item_1
    - list_item_2
    - list_item_3
  test_map:
    a: 1
    b: 2
    c: 3
add install instructions for atmos on windows/scoop @dennisroche (#649)
what
add option/documentation for installing atmos using scoop.sh on windows.
scoop install atmos
scoop manifests will check GitHub releases and automatically update. no additional maintenance required for anyone :partying_face:.
why
needed an easy way for my team to install atmos
references
Fix docker build ATMOS_VERSION support v* tags @goruha (#671)
what
• Remove the v prefix from ATMOS_VERSION on docker build
why
• Cloudposse changed the tag template policy - now the tag is always prefixed with v. The tag is passed as the ATMOS_VERSION docker build argument
references
• https://github.com/cloudposse/atmos/actions/runs/10391667666/job/28775133282#step:3:4480
Thoughts on mixins per stack (i.e. per workspace per component) ?
I think it would help solve one-off migrations.tf Terraform state blocks per workspace
@Andriy Knysh (Cloud Posse) Can we accept https://github.com/cloudposse/atmos/issues/673 from triage?
Describe the Feature
I’d like to migrate existing terraform into atmos, without having to run terraform commands, by taking advantage of import blocks.
Usually this can be done easily without workspaces by adding a migrations.tf file in a terraform directory and applying.
This is impossible to do with atmos without manually copying and pasting a file into the directory, then applying, and then removing the migrations file. This works but prevents us from keeping the migration file in git and gives more incentive to run the import commands instead of using import blocks.
Expected Behavior
A method to allow a migrations file per stack.
Perhaps an optional migrations directory within each component terraform with the following convention.
<base component>/migrations/<stage>/<region>/<component>/*.tf
or
<base component>/migrations/<stage>-<region>-<component>.tf
or
<base component>/migrations/<workspace>.tf
Could be a directory of migration files.
Use Case
I recall having issues once with the eks component if there was a partial apply failure, which resulted in 8 resources created without them being stored in state. These had to be reimported manually. I used a script at the time. Using a method like this would also ease developer frustration.
Describe Ideal Solution
A migrations directory would need to pull a file or directory of files into the base component directory before the terraform workflow began.
At this point, you could also rename this to be a mixin directory which would allow users to change the terraform code per workspace if needed. The migrations would be one use case of that.
Alternatives Considered
No response
Additional Context
No response
2024-08-16
Any thoughts why atmos.Component is failing when using Geodesic 3.1.0 and atmos 1.86.1 ?
infrastructure ⨠ ATMOS_LOGS_LEVEL=Trace atmos dc jenkins-efs -s core-ue1-auto
Executing command:
atmos describe component jenkins-efs -s core-ue1-auto
invalid stack manifest 'mixins/automation/jenkins/ecs-service-tmpl.yaml'
template: mixins/automation/jenkins/ecs-service-tmpl.yaml:23:81: executing "mixins/automation/jenkins/ecs-service-tmpl.yaml" at <.stack>: invalid value; expected string
import:
  - path: catalog/terraform/efs/defaults
  - path: catalog/terraform/ecs-service/default
    context:
      stage: "{{ .stage }}"

components:
  terraform:
    jenkins-efs:
      metadata:
        component: efs
        inherits:
          - efs/defaults
      vars:
        name: jenkins-efs
        dns_name: jenkins-efs
        hostname_template: "%s-%s-%s-%s"
        provisioned_throughput_in_mibps: 10
        efs_backup_policy_enabled: true
        additional_security_group_rules:
          # ingress
          - cidr_blocks: []
            source_security_group_id: '{{ (atmos.Component "jenkins-ecs-service" .stack).outputs.ecs_service_sg_id }}'
            from_port: 2049
            protocol: TCP
            to_port: 2049
            type: "ingress"
            description: "Allow Local subnet to access"
    jenkins-ecs-service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/default
      settings:
        depends_on:
          alb:
            component: jenkins-alb
          cluster:
            component: common-ecs-cluster
      vars:
        alb_name: jenkins-alb
        use_lb: true
        ecs_cluster_name: common-ecs-cluster
        health_check_path: /
        health_check_port: 8080
        health_check_timeout: 5
        health_check_interval: 60
        health_check_healthy_threshold: 5
        health_check_unhealthy_threshold: 5
        health_check_matcher: 200,302,301,403
        name: jenkins
        s3_mirroring_enabled: false
        unauthenticated_paths:
          - "/*"
        unauthenticated_priority: 10
        task:
          task_cpu: 2048
          task_memory: 4096
          use_alb_security_group: true
        containers:
          service:
            cpu: 2048
            name: jenkins
            image: jenkins/jenkins:lts
            memory: 4096
            readonly_root_filesystem: false
            map_environment: {}
            map_secrets:
              NEW_RELIC_LICENSE_KEY: "{{ .stage }}/common/newrelic/license_key"
            port_mappings:
              - containerPort: 8080
                hostPort: 8080
                protocol: tcp
              - containerPort: 50000
                hostPort: 50000
                protocol: tcp
exit status 1
looks like it’s invalid YAML (although I can’t see any issues looking at the code above)
try to rename mixins/automation/jenkins/ecs-service-tmpl.yaml
to mixins/automation/jenkins/ecs-service-tmpl
and import it like this
import:
  - mixins/automation/jenkins/ecs-service-tmpl
if the file is not YAML, Atmos will not check YAML syntax
does it happen to you with atmos 1.86.1, or older versions as well?
if renaming the file does not help, then something is wrong with .stack - it does not get evaluated to a string
Just updated to 1.86.1 to use atmos.Component. It hasn’t happened before. I don’t think we are using .stack anywhere else.
please try the previous version again. 1.86.1 did not change anything (just added more debug messages)
if the code above is in one file, then the doc applies to your case
and you need to update
source_security_group_id: '{{ (atmos.Component "jenkins-ecs-service" .stack).outputs.ecs_service_sg_id }}'
to use double curly braces + backtick + double curly braces instead of just double curly braces
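For reference, the escaped form (as described in the Atmos template-escaping doc, and the same pattern shown earlier in the channel for `SessionName`) wraps the inner expression in a backtick-quoted raw string inside an outer template, so the first evaluation pass emits the inner template literally and a later pass evaluates it. A sketch of the corrected line:

```yaml
source_security_group_id: '{{`{{ (atmos.Component "jenkins-ecs-service" .stack).outputs.ecs_service_sg_id }}`}}'
```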
The same error when using atmos 1.86.0
atmos describe component jenkins-efs -s core-ue1-auto
invalid stack manifest 'mixins/automation/jenkins/ecs-service-tmpl.yaml'
template: mixins/automation/jenkins/ecs-service-tmpl.yaml:23:81: executing "mixins/automation/jenkins/ecs-service-tmpl.yaml" at <.stack>: invalid value; expected string
yes, should not be related to atmos version
review the doc
I see, thanks
I fixed the context, now it’s failing with different error
ATMOS_LOGS_LEVEL=Trace atmos dc jenkins-efs -s core-ue1-auto
Executing command:
atmos describe component jenkins-efs -s core-ue1-auto
exit status 137
it’s using a lot of memory
try to run it without the Trace log level
The same result. It doesn’t work even with more RAM and swap
how many atmos.Component functions are you using? (for different components in different stacks; Atmos caches the result for the same component/stack)
2024-08-17
Improve logging for the template function atmos.Component. Generate backend config and provider override files in the atmos.Component function @aknysh (#674)
what
• Improve logging for the template function atmos.Component
• Generate backend config and provider override files in atmos.Component
function
• Update docs
why
• Add more debugging information and fix issues with the initial implementation of the atmos.Component
function when the backend config file backend.tf.json
(if enabled in atmos.yaml
) and provider overrides file providers_override.tf.json
(if configured in the providers
section) were not generated, which prevented the function atmos.Component
from returning the outputs of the component when executing in GitHub actions
• When the environment variable ATMOS_LOGS_LEVEL
is set to Trace
, the template function atmos.Component
will log the execution flow and the results of template evaluation - useful for debugging
ATMOS_LOGS_LEVEL=Trace atmos describe component <component> -s <stack>
ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>
ATMOS_LOGS_LEVEL=Trace atmos terraform apply <component> -s <stack>
bump http context timeout for upload to atmos pro @mcalhoun (#656)
what
Increase the maximum http timeout when uploading to atmos pro
why
There are cases when there are a large number of stacks and a large number of workflows that this call can exceed 10 seconds
2024-08-19
https://atmos.tools/quick-start/advanced/configure-terraform-backend/#provision-terraform-s3-backend
In the above example, once I provision the component for the s3 backend, how do I manage the state of the generated stack? I don’t understand the lifecycle. I understand the stack created after this stack is provisioned, but not the stack for the s3 backend. can someone help?
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
this is the first step in the Cold Start.
You don’t have an S3 backend yet, so you can’t store any state in it.
So to provision the tfstate-backend component, you have to do it locally first (using a TF local state file), then add the backend config for the component (in Atmos, as described in the doc), then run atmos terraform apply again - Terraform will detect the new state backend and offer to migrate the state for the component from the local state to S3
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
See
Manual guide to setting up AWS Organization with SweetOps
with Atmos, you can do the following:
• Comment out the s3 backend config
• Run atmos terraform apply tfstate-backend -s <stack> to provision the component using the local state file
• Uncomment the s3 backend config in YAML
• Run atmos terraform apply tfstate-backend -s <stack> again - Terraform will ask you to migrate the state to S3
• After that, you will have your S3 backend configured and its own state in S3. All other components will be able to store their state in S3 since it’s already provisioned
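As a sketch of the toggle those steps refer to (bucket name, table name, and region here are illustrative, not from the thread), the backend config in the stack manifest looks roughly like this — comment the whole block out for the first local apply, then uncomment it and apply again to migrate:

```yaml
terraform:
  # Step 1: comment this backend out so the first apply uses local state.
  # Step 3: uncomment it and re-run `atmos terraform apply` to migrate to S3.
  backend_type: s3
  backend:
    s3:
      bucket: acme-uw2-root-tfstate            # illustrative bucket name
      dynamodb_table: acme-uw2-root-tfstate-lock
      region: us-west-2
      encrypt: true
```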
I’ll take note, thank you !
2024-08-20
Is there any way to stop atmos from processing ALL atmos.Component template functions every time you do a plan / apply?
Example
when I run:
atmos.exe terraform plan app/apigw --stack org-test-cc1 --skip-init
it retrieves all of the following:
Found component 'app/lambda-api' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
Found component 'app/s3-exports' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
Found component 'app/ecr' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
Found component 'app/s3-graphs' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
but only this one is required for the particular stack / plan I am running
Found component 'app/lambda-api' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
we will review it
thanks for reporting @Dave
Dammit, I was hoping I was doing something wrong
It also happens ( at least for me ) when you just run atmos from the cli to get the selection window:
awesome feature though!
you can share your YAML stacks with me (DM me), I will review the config and let you know how many calls to atmos.Component are configured
@Dave please try this release https://github.com/cloudposse/atmos/releases/tag/v1.87.0, it fixes the TUI borders and does not evaluate Go templates in Atmos stack manifests when showing the TUI components and stacks
2024-08-21
Quick question, what’s the best way to add flags to an existing Atmos command? I see in the docs that there is an example for overriding an existing Terraform apply with an auto-approve command, but if I wanted to add an additional flag for only showing variables when atmos describe component -light is run, would this be a good way to do it?
- name: describe
  description: "Shows the configuration for an Atmos component in an Atmos stack: atmos describe component <component> -s <stack>"
  commands:
    - name: component
      description: "Describe an Atmos component in an Atmos Stack"
      arguments:
        - name: component
          description: "Name of the component"
      flags:
        - name: stack
          shorthand: s
          description: Name of the stack
          required: true
        - name: light
          shorthand: l
          description: Only display component variables
          required: false
      component_config:
        component: "{{ .Arguments.component }}"
        stack: "{{ .Flags.stack }}"
      steps:
        - >
          {{ if .Flags.light }}
          atmos describe component {{ .Arguments.component }} -s {{ .Flags.stack }} | yq -r '.vars'
          {{ else }}
          atmos describe component {{ .Arguments.component }} -s {{ .Flags.stack }}
          {{ end }}
yes that should work
you can add a new atmos custom command or subcommand
and you also can override an existing command or subcommand (and call the native ones from it)
nit: i would call it -vars
if it’s just going to describe vars
I was inspired by the terraform plan -light
for this idea but agreed, that’s definitely more cohesive
Strange, the earlier config doesn’t override it, but I’ll try some other things
and then run atmos describe --help
to check if the command is shown
Fwiw, we’ve used the atmos list
namespace for these kinds of customizations
e.g. atmos list vars
I like that idea better than what I was thinking. Initially, I thought I would just introduce another command atmos describe component-values
, but that just adds another command when we have a good pattern going with our list commands
Yea, agreed. What I don’t like about overwriting built-in functionality is that it is confusing for end-users what is built-in/native vs extended. If you go to the atmos.tools docs, the customizations will not be documented.
Hello, I had an inquiry about the demo-helmfile example:
• Helmfile My understanding is that the stack file pulls in the catalog file and then changes it to be of the “real” type. Then I believe that the helmfile gets included via the key/value pair “component: nginx”. Some of the previous terminology may be off, but I think that’s the general idea.
My inquiry is: are the vars of the catalog entry supposed to be injected into the helmfile’s nginx release? How does the mapping to the nginx release work, and how are the vars picked up? Does it use the state-values-file? I noticed this empty values section in the helmfile. Is that related?
The values section is a mistake, I believe. It should be vars in the stack configurations.
What happens is it deep merges all the vars, in order of imports, then writes a temporary values file, which is passed to helmfile.
Please note, the current helmfile implementation is (unfortunately) very EKS specific
To get it to work for k3s, here’s how we did it:
https://github.com/cloudposse/atmos/blob/main/examples/demo-helmfile/atmos.yaml#L33-L37
- atmos validate stacks
# FIXME: `atmos helmfile apply` assumes EKS
#- atmos helmfile apply demo -s dev
- atmos helmfile generate varfile demo -s dev
- helmfile -f components/helmfile/nginx/helmfile.yaml apply --values dev-demo.helmfile.vars.yaml
But that also serves to show what atmos helmfile apply
does (more or less) under the hood
So those vars in the catalog file get added to the file that gets generated by atmos helmfile generate varfile... and are they not currently being used? I’ve used the {{ .Values.... }} type notation in a helmfile and gotten that to work, but I’m not seeing how the demo helmfile accesses such passed-through values. Thanks!!
Atmos uses helmfile --state-values-file=....
to point the helmfile to the generated values file
you can also execute atmos helmfile generate varfile <component> -s <stack>
to generate the value file and review it
Use this command to generate a varfile for a helmfile component in a stack.
The generated file is then used in the command helmfile diff/apply --state-values-file=
Atmos does not know anything about the {{ .Values }} templates in the helm/helmfile manifests. Helm itself will merge the values provided in the --state-values-file file with all the values defined in the Helm/Helmfile templates and value files
@Brennan I understand your point. That’s an oversight. The demo-helmfile example is not showing any values passed into the helm charts via atmos & helmfile.
I’ll create a task to update that.
Actually it does
image:
  tag: "latest"
service:
  type: ClusterIP
  port: 80
replicaCount: 1
ingress:
  enabled: true
  hostname: fbi.com
  paths:
    - /
  extraHosts:
    - name: '*.app.github.dev'
      path: /
    - name: 'localhost'
      path: /
readinessProbe:
  initialDelaySeconds: 1
  periodSeconds: 2
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3
persistence:
  enabled: false
extraVolumes:
  - name: custom-html
    configMap:
      name: custom-html
extraVolumeMounts:
  - name: custom-html
    mountPath: /app/index.html
    subPath: index.html
    readOnly: true
All of these are passed as the values to the nginx helm chart
The files in manifest/
are applied with the kustomize extension for helmfile
So this demo shows how to use atmos + k3s + helmfile + helm + kustomize all together
Update/improve Atmos UX @aknysh (#679)
what & why
• Improve error messages in the cases when the atmos.yaml CLI config file is not found, and if it’s found but points to an Atmos stacks directory that does not exist
• When executing the atmos command to show the terminal UI, do not evaluate the Go templates in Atmos stack manifests to make the command faster, since the TUI just shows the names of the components and stacks and does not need the components’ configurations
• Fix/restore the TUI borders for the commands atmos and atmos workflow around the selected columns. The BorderStyle functionality was changed in the latest releases of the charmbracelet/lipgloss library, preventing the borders around the selected column from showing
atmos
atmos workflow
Announce Cloud Posse’s Refarch Docs @osterman (#680)
what
• Announce cloud posse refarch docs
Add atmos pro stack locking @mcalhoun (#678)
what
• add the atmos pro lock and atmos pro unlock commands
2024-08-22
I have a dumb question:
I’ve read about schema validation: https://atmos.tools/core-concepts/validate/json-schema
Question: if i check variables in terraform with input validation, why would i use json schema for components? What would be the advantage (as i need to convert terraform variables into json schema for the most part)
How do i validate vendoring? This is mentioned but not explained.
Using Rego policies, you can do much more than just validating Terraform inputs.
You can validate relations between components and stacks, not only variables of one component
see this for Atmos Rego policy examples
you can check/validate such things as
# Check if the component has a `Team` tag
# Check if the Team has permissions to provision components in an OU (tenant)
# Check that the `app_config.hostname` variable is defined only once for the stack across all stack config files
# This policy checks that the 'bar' variable is not defined in any of the '_defaults.yaml' Atmos stack config files
# This policy checks that if the 'foo' variable is defined in the 'stack1.yaml' stack config file, it cannot be overridden in 'stack2.yaml'
# This policy shows an example on how to check the imported files in the stacks
which you would not be able to do in Terraform. You can use both Terraform inputs validation for TF vars and Atmos OPA policies for more advanced validation
ok. thx for the explanation
I think @Andriy Knysh (Cloud Posse) mostly answered it, but to add my 2c:
• Atmos OPA policies allow you to enforce different policies for the same components depending on where/how they are used. You can imagine using different policies in sandbox than for production, for teamA vs teamB, for compliance or out-of-scope environments. As you encourage reuse of components, it doesn’t necessarily mean the policies are the same.
• They are not mutually exclusive. Use tools like tfsec and conftest to enforce policies on the terraform code itself.
How do i validate vendoring? This is mentioned but not explained.
@Andriy Knysh (Cloud Posse)
@Stephan Helas currently we support OPA policies for Atmos stacks only, not for vendoring.
Can you provide more details on what you want to validate in vendoring (I suppose to validate the vendoring manifest files)? (we will review it after we understand all the details).
thanks
What about JSON schema validation of vendoring file?
not currently in Atmos, but we can add it
regarding vendoring : i was just wondering, because it is mentioned here:
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse)
https://atmos.tools/core-concepts/validate/
which can validate the schema of configurations such as stacks, workflows, and vendoring manifests
regarding OPA ( i never used it before).
How can I reuse rules for different components (for example, check for tags or check for a name pattern)?
Do I have to use `errors[message] {}` only, or can I use any OPA rule?
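The question above isn’t answered in the thread, so here is a minimal, hedged sketch of the pattern the Atmos OPA examples use: Atmos collects violations from the `errors` set, but the logic can be factored into ordinary helper rules that `errors` references (package name, paths, and the tag requirement below are illustrative, not from this thread):

```rego
package atmos

# Helper rule (reusable): true when the component is missing a Team tag.
no_team_tag {
    not input.vars.tags.Team
}

# Atmos reads violations from the `errors` set; delegate to helper rules here.
errors[message] {
    no_team_tag
    message := "All components must define a `Team` tag"
}
```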
2024-08-23
2024-08-24
Hi everyone! I have been excitedly exploring Atmos recently and have a few questions. Overall, it has been relatively smooth so far, but there are a few things I am unclear on:
• Sharing state between components
◦ I get the impression that template functions with atmos.component is the current preferred mechanism over remote-state. Is this correct?
▪︎ A section in the Share Data Between Components page titled “Template Functions vs Remote State” (or something similar) would be very helpful.
▪︎ If someone could distill their knowledge/thoughts on the tradeoffs I would be happy to make a PR to add a distilled version to the docs!
• What is Atmos Pro?
◦ Is this just an optional add-on? or will it be a subscription service?
▪︎ My initial thought was that it was a paid subscription service, but I was able to sign up, install the GH app, add a repo, and did not see anything related to pricing.
▪︎ Then again, I can’t find a public repo for it, which makes me think it is destined to be a subscription service, but it is just brand new.
◦ To share some perspective, when I saw this, it tarnished my excitement about Atmos a bit.
▪︎ I have used and trusted Cloud Posse TF modules for a long time and really enjoy the business model.
▪︎ Seeing Atmos Pro immediately made me think there must be some known painful parts of Atmos that will require me to add another subscription to fix. This didn’t follow the Cloud Posse ethos that I was used to, and I am left with a nagging concern now that:
• Atmos will have new exciting features directed towards Atmos Pro instead of Atmos
• I will eventually run into some pain points of Atmos, which is why Atmos Pro exists.
Share data between loosely-coupled components in Atmos
Hi @Ryan Ernst, all good questions
atmos.Component is one way to get a component’s remote state. The other way is using the remote-state Terraform module (with configuration in Atmos).
As always, they both have their pros and cons. Using them depends on your use-cases.
atmos.Component - you do everything in stack manifests in YAML; it works in almost all the cases. It’s a relatively new addition to Atmos, and we are improving it to be able to assume roles to use it in CI/CD (e.g. GH actions).
remote-state TF module - everything is done in TF (including role assumption to access AWS and the TF state backend). Faster Atmos stack processing (no need to execute terraform output for all atmos.Component functions in the stack manifests).
We can chat about the pros and cons of these two ways more, but the above is a short description.
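For reference, a sketch of the remote-state module approach mentioned above, using Cloud Posse’s terraform-yaml-stack-config remote-state submodule (the version pin and the `vpc` component name are illustrative; `module.this.context` assumes the standard Cloud Posse null-label context):

```hcl
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0" # illustrative version

  # Atmos component whose Terraform outputs we want to read;
  # the stack is derived from the context.
  component = "vpc"
  context   = module.this.context
}

# Elsewhere in the component: module.vpc.outputs.vpc_id
```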
Regarding Atmos Pro, it’s not related to the Atmos CLI, it’s a completely diff product (so no, it’s not about the pain points in Atmos). It’s about pain points in CI/CD.
It’s Cloud Posse’s new solution for CI/CD with GitHub Actions. It will run Atmos CLI commands and execute CI/CD workflows, and show the results in a UI.
It’s still in development.
@Erik Osterman (Cloud Posse) can provide more details on it
Thanks for the quick response!
My interpretation so far on remote-state vs atmos.component is that I would prefer atmos.component. atmos.component has less coupling with atmos, making my TF modules more portable.
From your comment it sounds like under the hood, atmos.component is using terraform output as a gomplate (I assume) data source, and this requires role assumption so that in CI/CD we have access to the needed state backends.
So eventually Atmos would create and manage these roles for me, but I could probably still use Github Actions now if I managed the roles myself? Or am I just unable to use GHA right now if I use atmos.component?
yes, exactly
For terraform output, the function uses a Go lib from HashiCorp, which executes terraform output in code (gomplate is not involved here)
we’ll add role assumption to Atmos to be able to use atmos.Component in GH actions. But if your action assumes the correct role to access the TF state backend, you can use the atmos.Component function now
My team is currently at Stage 8 right now so CI/CD is one of the things I am evaluating. We use Github Actions already, and already have GH OIDC set up for assuming roles, but we don’t have any workflows for IaC yet.
The rest of my questions might be more for @Erik Osterman (Cloud Posse) and what the experience would be like using Atmos with GH Actions with and without Pro.
Team expands and begins to struggle with Terraform
I am also trying to better understand how Atlantis fits. My current assumption around Atlantis is that it:
• Does not solve the problems that Atmos Pro does
• Adds the ability to plan/apply IaC changes from GH PR comments.
Thanks for the feedback, will get back later this weekend
I agree, that a chapter like:
“Template Functions vs Remote State”
Would be a good one. There are definitely tradeoffs. One that isn’t immediately obvious is that YAML stack configs that rely on templating will first need to evaluate all the template functions. If you rely on a lot of atmos.Component function calls, it will slow down your stacks because they must be computed in order to render the final configuration.
What is Atmos Pro?
Atmos Pro is an exciting new project we’re working on, though we haven’t made an official announcement yet. The goal is to enhance the GitHub Actions experience by overcoming some of the current limitations with matrixes and concurrency groups, which don’t behave quite like queues in GitHub Actions. We’re building this to help some of our customers using GitHub Enterprise scale their parallelism beyond what’s possible today.
For example, GitHub Matrixes are capped at 250 concurrent runs, which can be a roadblock for larger workflows. We’ve managed to break through this with our own solution, the https://github.com/cloudposse/github-action-matrix-extended, but then we encountered issues with the GitHub UI itself—it tends to crash when matrixes get too big! We’re still developing and refining Atmos Pro, and we plan to eventually roll it out more widely. Stay tuned for more updates!
GitHub Action that when used together with reusable workflows makes it easier to workaround the limit of 256 jobs in a matrix
Is this just an optional add-on? or will it be a subscription service?
Atmos Pro will be available as a SaaS offering once it’s ready for release, though the name may change. It’s not required but as I alluded to earlier, it solves key GitHub Actions limitations for better scaling with a GitHub App. We’re excited to bring this to the community soon—feel free to DM me for more details!
My current assumption around Atlantis is that it:
Atlantis is great, but it’s a much simpler platform. Atmos GitHub Actions have more or less feature parity with Atlantis today, and adds a ton of capabilities not available in Atlantis, such as drift detection, failure handling and better support for monorepos with parallelism. Plus, since it’s built with GitHub Actions, it works with your existing GHA infrastructure and you can customize the workflows. Bear in mind, that Atlantis was born in an era long before GHA.
I have used and trusted Cloud Posse TF modules for a long time and really enjoy the business model.
We appreciate that and the support! We continue to believe in Open Source as a great way to build a community and provide a transparent way of distributing software. We have literally hundreds of projects and we do our best to support them. Unfortunately, open source by itself is not a business model.
Seeing Atmos Pro immediately made me think there must be some known painful parts of Atmos that will require me to add another subscription to fix.
The painful parts we are fixing are not inherent to Atmos, but to GitHub Actions, when introducing large-scale concurrency between jobs which have dependencies and need approvals using GitHub Environment Protection Rules.
This didn’t follow the Cloud Posse ethos that I was used to, and I am left with a nagging concern now that:
Supporting open source costs real money, and while very few companies contribute financially, there needs to be a commercial driver behind it unless it’s under a nonprofit like CNCF/FSF, for it to succeed. We value every PR and contribution, but that doesn’t foot the bill. Consulting is a poor business model too, that scales linearly at best and is very susceptible to market conditions. In my view, we give away a ton of value for free, more than most SaaS businesses, but we also need to sustain our business somewhere along the way.
Atmos will have new exciting features directed towards Atmos Pro instead of Atmos
Almost certainly, there will be exciting features that are only possible with this offering. But only if you need to do those things. We’re still giving away all of our modules, atmos, and documentation for our reference architecture.
docs.cloudposse.com
I will eventually run into some pain points of Atmos, which is why Atmos Pro exists.
This is probably true. But I’m curious—at what point does it make sense to financially support the companies behind the open source who overall share your ethos?
@Erik Osterman (Cloud Posse) Thank you for the very thorough response! This all makes sense, and I appreciate you taking the time to address my questions.
I fully agree that Cloud Posse gives away an amazing amount of value for free! I have learned a lot from the conventions and reference architectures.
From your response, my main concern is definitely alleviated. We are not operating at an enterprise level and might have 10 concurrent jobs at any given time, nowhere near 256.
Again, thank you for the detailed response! I will continue my PoC with Atmos and will get back to you if I have any other questions!
Thanks @Ryan Ernst! I appreciate your message and speaking up.
Yep, if you can get away with a lower concurrency and do not need approval gates for ordered dependencies (e.g. approve plan A, then apply A, approve Plan B, then apply B, approve plan C, then apply C), you can use everything we put forth.
2024-08-25
2024-08-26
Hi everyone! I have a question:
Can I overwrite a config in CI?
for local development I use aws_profile
to authenticate to aws but on CI I will like to use OIDC but it looks like it’s looking for the profile when I use the cloudposse/github-action-atmos-affected-stacks@v4
with the atmos config for roles under ./rootfs/usr/local/etc/atmos/
is there a way to maybe set aws_profile
to null?
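One possible sketch (not verified; assumes Go templates are enabled in atmos.yaml and the Sprig env function is available, and ATMOS_AWS_PROFILE is a made-up variable name): resolve the profile from an environment variable, so CI using OIDC can simply leave it unset. Whether an empty profile behaves like an unset one may depend on how your providers/backend are generated.

```yaml
# hypothetical stack snippet; ATMOS_AWS_PROFILE is an assumed env var name
terraform:
  vars:
    # set locally for developer machines, left empty in CI (OIDC)
    aws_profile: '{{ env "ATMOS_AWS_PROFILE" }}'
```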
@Andriy Knysh (Cloud Posse)
@Igor Rodionov have you had the chance to review it?
We had a conversation with @Miguel Zablah a month ago. I showed him a workaround because we actually have poor support for AWS auth with profiles. So @Gabriela Campana (Cloud Posse), you can consider this request addressed
Thanks, Igor
2024-08-28
Hey guys, I found a potential bug in the atmos plan workflow for GitHub Actions.
You guys have this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L110
It will run atmos describe <component> -s <stack>,
but this requires authentication, and that only happens later, in this step:
https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L268
I fixed this by doing the authentication before the action runs, in the same job, which seems to do the trick, but maybe this is something that should be documented or fixed in the action.
For anyone that has this issue, here is my fix:
steps:
  - name: Checkout
    uses: actions/checkout@v4

  - name: Set AWS Region from YAML
    id: aws_region
    run: |
      aws_region=$(yq e '.integrations.github.gitops["artifact-storage"].region' ${{ env.ATMOS_CONFIG_PATH }}atmos.yaml)
      echo "aws-region=$aws_region" >> $GITHUB_OUTPUT

  - name: Set Terraform Plan Role from YAML
    id: aws_role
    run: |
      terraform_plan_role=$(yq e '.integrations.github.gitops.role.plan' ${{ env.ATMOS_CONFIG_PATH }}atmos.yaml)
      echo "terraform-plan-role=$terraform_plan_role" >> $GITHUB_OUTPUT

  - name: Setup AWS credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      aws-region: ${{ steps.aws_region.outputs.aws-region }}
      role-to-assume: ${{ steps.aws_role.outputs.terraform-plan-role }}
      role-session-name: "atmos-tf-plan"
      mask-aws-account-id: "no"

  - name: Plan Atmos Component
    uses: cloudposse/github-action-atmos-terraform-plan@v3
    with:
      component: ${{ matrix.component }}
      stack: ${{ matrix.stack }}
      atmos-config-path: ./rootfs/usr/local/etc/atmos/
      atmos-version: ${{ env.ATMOS_VERSION }}
@Igor Rodionov @Yonatan Koren
@Miguel Zablah thank you for looking into this — however AFAIK the root of the issue is that we run https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/85cbbaca9e11b1f932b5d49eebf0faa63832e980/action.yml#L111 prior to obtaining credentials. The cloudposse/github-action-atmos-get-setting
which i just mentioned is doing atmos describe component
(link). Template processing needs to read the terraform state because template functions such as atmos.Component
are able to read outputs from other components.
So, I have a PR to make --process-templates
conditional in cloudposse/github-action-atmos-get-setting
and it should be getting merged today.
Then, we will update cloudposse/github-action-atmos-terraform-plan
to move authentication earlier in the chain, as you’ve described, for cases where someone needs template processing, and also to add the option to disable it.
For further context this authentication requirement comes from https://github.com/cloudposse/atmos/releases/tag/v1.86.2
Admittedly, it’s unexpected for a patch release to introduce something as breaking as this. But actually 1.86.2
was fixing the feature not working in the first place, i.e. it was not reading from the state when it should have been. And when we released atmos 1.86.2
, some of our actions started breaking due to the authentication requirement I’ve described above.
I see thanks for the explanation and awesome that you already have a fix!!
I’m also working on adding the apply workflow and it looks like it has the same issue so if that one can have the same fix it will be awesome!
@Igor Rodionov if you’d like, you can take over this draft PR I made last week to move up authentication earlier in the steps: https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/86
what
• assume IAM role before running cloudposse/github-action-atmos-get-setting
why
As of atmos 1.86.2
, when atmos.Component
began actually retrieving the TF state, it broke cloudposse/github-action-atmos-affected-stacks
which we resolved as part of this release of the aforementioned action. We just had the action assume the IAM role, and that was it. However, in cases where this function is used, appropriate IAM credentials are now also a requirement for cloudposse/github-action-atmos-get-setting:
Run cloudposse/github-action-atmos-get-setting@v1 template: all-atmos-sections:163:26: executing "all-atmos-sections" at
: error calling Component: exit status 1
Error: error configuring S3 Backend: IAM Role (arn:aws:iam::xxxxxxxxxxxx:role/xxxx-core-gbl-root-tfstate) cannot be assumed.
There are a number of possible causes of this - the most common are:
- The credentials used in order to assume the role are invalid
- The credentials do not have appropriate permission to assume the role
- The role ARN is not valid
Error: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
references
But we should also add a new input to make template processing conditional after https://github.com/cloudposse/github-action-atmos-get-setting/pull/58 is merged — i.e. add this input to both github-action-atmos-terraform-plan
and github-action-atmos-terraform-apply
and relay it to cloudposse/github-action-atmos-get-setting
.
what
• Add process-templates
input as the value of the --process-templates
flag to pass to atmos describe component
.
• Add appropriate test cases.
why
• Some template functions such as atmos.Component
require authentication to remote backends. We should allow disabling this.
references
Run cloudposse/github-action-atmos-get-setting@v1 template: all-atmos-sections:163:26: executing “all-atmos-sections” at
: error calling Component: exit status 1
Error: error configuring S3 Backend: IAM Role (arn:aws:iam::xxxxxxxxxxxx:role/xxxx-core-gbl-root-tfstate) cannot be assumed.
There are a number of possible causes of this - the most common are:
- The credentials used in order to assume the role are invalid
- The credentials do not have appropriate permission to assume the role
- The role ARN is not valid
Error: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I’ve kept the default value of process-templates
as true
in order to ensure backwards compatibility.
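Once that input lands, usage might look like the sketch below (hypothetical; based on the process-templates input described in the PR above, with made-up component and stack names):

```yaml
# disable template processing so the settings lookup does not need
# AWS credentials (avoids the atmos.Component state-read described above)
- uses: cloudposse/github-action-atmos-get-setting@v1
  with:
    component: vpc            # hypothetical component
    stack: plat-ue2-dev       # hypothetical stack
    settings-path: settings.github.actions_enabled
    process-templates: "false"
```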
@Yonatan Koren I made a comment
pls address it and we will merge
@Igor Rodionov @Yonatan Koren On this draft PR: https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/86/files the if needs to change, since it’s expecting a value from the get-setting step that has not been run:
if: ${{ fromJson(steps.component.outputs.settings).enabled }}
@Igor Rodionov do we have to do anything to resolve this? i’m seeing it now in my gh workflow
@Michael Dizon I just merged the PR with the fix. Could you try it in a couple of minutes?
ok
hey! I’m trying to use the plan action for Terraform and it’s not generating any output. I’m getting:
Run cloudposse/github-action-atmos-get-setting@v1
with:
component: aws-to-github
stack: dev
settings-path: settings.github.actions_enabled
env:
AWS_ACCOUNT: << AWS ACCOUNT >>
ENVIRONMENT: dev
ATMOS_CLI_CONFIG_PATH: /home/runner/work/infra-identity/infra-identity/.github/config/atmos-gitops.yaml
result returned successfully
Run if [[ "false" == "true" ]]; then
if [[ "false" == "true" ]]; then
  STEP_SUMMARY_FILE=""
  if [[ "" == "true" ]]; then
    rm -f ${STEP_SUMMARY_FILE}
  fi
else
  STEP_SUMMARY_FILE=""
fi
if [ -f ${STEP_SUMMARY_FILE} ]; then
  echo "${STEP_SUMMARY_FILE} found"
  STEP_SUMMARY=$(cat ${STEP_SUMMARY_FILE} | jq -Rs .)
  echo "result=${STEP_SUMMARY}" >> $GITHUB_OUTPUT
  if [[ "false" == "false" ]]; then
    echo "Drift detection mode disabled"
    cat $STEP_SUMMARY_FILE >> $GITHUB_STEP_SUMMARY
  fi
else
  echo "${STEP_SUMMARY_FILE} not found"
  echo "result=\"\"" >> $GITHUB_OUTPUT
fi
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
  AWS_ACCOUNT: 166189870787
  ENVIRONMENT: dev
  ATMOS_CLI_CONFIG_PATH: /home/runner/work/infra-identity/infra-identity/.github/config/atmos-gitops.yaml
found
Drift detection mode disabled
my atmos-gitops.yaml file looks like this
integrations:
  github:
    gitops:
      terraform-version: 1.9.0
      infracost-enabled: false
      role:
        plan: arn:aws:iam::${AWS_ACCOUNT}:role/iam-manager-${ENVIRONMENT}
        apply: arn:aws:iam::${AWS_ACCOUNT}:role/iam-manager-${ENVIRONMENT}
my atmos.yaml file looks like
base_path: "./"
components:
  terraform:
    base_path: "resource/components/terraform/aws"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
stacks:
  base_path: "resource/stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "deploy/*/_defaults.yaml"
  name_pattern: "{environment}"
logs:
  file: "/dev/stderr"
  level: Info
settings:
  github:
    actions_enabled: true
# <https://pkg.go.dev/text/template>
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
... yadda yadda template stuff
any thoughts? I feel like I must be missing a flag somewhere - I tried setting settings.github.actions_enabled
to true but no luck
@Igor Rodionov
I put these in the orgs defaults.yaml file and something is happening
settings:
  enabled: true
  github:
    actions_enabled: true
alright I got it to try to plan, it’s failing on not finding the role
is there any way to pass in the role dynamically? I want the aws account and environment to be sourced from the environment the Github Action is run from
with the v2 plan action that is
I see how I would do it with v1
@Sara Jarjoura could you pls post me the whole github actions logs ?
sure - here you go
thank you
how come your AWS role looks like this?
arn:aws:iam::{{ .vars.account }}:role/iam-manager-{{ .vars.environment }}
did you fork the action?
no I didn’t I’m trying to source the role from variables
I’d like to grab the AWS Account and environment from the Github Action environment if possible
how I got there was adding this to my atmos.yaml
integrations:
  github:
    gitops:
      terraform-version: 1.9.0
      infracost-enabled: false
      artifact-storage:
        region: "{{ .vars.region }}"
      role:
        plan: "arn:aws:iam::{{ .vars.account }}:role/iam-manager-{{ .vars.environment }}"
        apply: "arn:aws:iam::{{ .vars.account }}:role/iam-manager-{{ .vars.environment }}"
ok
how am I supposed to do that?
with v2 I mean - with v1 I could pass these into the plan action
I see v1 is deprecated though
I can’t overwrite the atmos.yaml file since I can’t stop the plan action from doing a checkout
if I wanted to use envsubst for example to append the required info
actually I believe that integrations.github.gitops.* in atmos.yaml does not support any templating
ok, so there’s no way to use v2 for what I want to do?
would it be possible to make the checkout step optional so I could append some environment sourced variables to the atmos.yaml?
then I could do the checkout myself, insert fuzzy stuff here, then use the plan action
I know the workaround you can use
pls check the workaround that we use for testing
it solves the same problem - overriding atmos.yaml for test purposes
oh so I can use APPLY_ROLE ?
or PLAN_ROLE ?
only if you will specify them in your atmos.yaml
gotcha
ok let me try that
oh but won’t the checkout still run over this?
oh never mind - I see you’re setting the atmos path to something other than the original atmos file
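A rough sketch of that workaround (all file paths, template file names, and the PLAN_ROLE/APPLY_ROLE variable names here are hypothetical): keep a template copy of atmos.yaml with shell-style placeholders, render it with envsubst before the plan action runs, and point the action at the rendered config:

```yaml
# hypothetical pre-steps in the calling workflow
- name: Render CI atmos.yaml
  run: |
    # PLAN_ROLE / APPLY_ROLE are assumed env vars set from the GHA environment;
    # envsubst fills them into a CI-only copy of atmos.yaml
    mkdir -p ./ci
    envsubst < ./rootfs/usr/local/etc/atmos/atmos.yaml.tmpl > ./ci/atmos.yaml

- name: Plan Atmos Component
  uses: cloudposse/github-action-atmos-terraform-plan@v3
  with:
    atmos-config-path: ./ci/   # point the action at the rendered config
```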
@Igor Rodionov @Yonatan Koren I see you updated the apply as well; should we delete the second AWS authentication? https://github.com/cloudposse/github-action-atmos-terraform-apply/blob/579c9c99f8ebabc15fbffa5868e569017f73497e/action.yml#L181
- name: Configure State AWS Credentials
no, those are different authentications
one to pull plan file
another to apply changes
aah perfect! I missed this!
thanks!
is it possible to import a mixin for only one component?
@Andriy Knysh (Cloud Posse)
I’m reading up on this thread https://sweetops.slack.com/archives/C031919U8A0/p1718379641179009
Any thoughts about supporting imports at a component level rather than stack level? Example - I have 1 stack (a management group in azure) that has 3 subscriptions (eng, test, prod). And i have mixins for each “environment” eng/test/prod. I want to import the particular mixin w/ the particular subscription within the same stack
in my case I am using the mixin to allow a complex variable to be inherited by two environments but not by a third
this variable should only be passed into one component
it depends on the scope.
If you import a manifest with global variables, those global vars will be applied to all components in the stacks
If you import a manifest with component definitions (components.terraform.....
), then the config will be applied to the same components
ah ok - let me try that
I suggest you do the following:
• In catalog
, add a default component with all the default values
• Import the manifest only in the stacks (environments) where it’s needed
you can do this in 2 diff ways:
• In the catalog
, define the same Atmos component name as defined inline in the stacks. When you import, the parts from different manifests will be combined into the final values for the component
• Or, in the catalog
, define a default base abstract component with the default values, then import it, and inherit the component in the stack from the default component
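The second approach above can be sketched like this (a minimal illustration with made-up component and file names; metadata.type: abstract and metadata.inherits are standard Atmos stack-manifest mechanisms):

```yaml
# catalog/my-component/defaults.yaml - abstract base, never deployed directly
components:
  terraform:
    my-component/defaults:
      metadata:
        type: abstract
      vars:
        complex_var:
          foo: bar

# stacks/eng.yaml - import the catalog manifest only in the environments
# that should inherit the complex variable
import:
  - catalog/my-component/defaults
components:
  terraform:
    my-component:
      metadata:
        inherits:
          - my-component/defaults
```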
ooh nice
regarding
Any thoughts about supporting imports at a component level rather than stack level?
yes lovely - let me give that a go
it’s already supported by the 2 mechanisms I explained above
this is far clearer than the thread I read btw
let me know if you need help (you can DM me your config)
How do you folks integrate opensource scanning tools into atmos such as trivy/tfsec, checkov, tflint, etc ?
I couldn’t find a doc on this
True, we have not written up anything on this.
So at this time, it’s not something we’ve entertained as being built into atmos, but rather something you can do today with the standard github actions
Is there something you’re struggling with?
Yes! I’m unsure how to do this. I’d like to do it with reviewdog
I was hoping maybe you folks already had prior art for it
(even if not reviewdog, just the cli tool itself would be more than enough to start with)
Note, a lot of what reviewdog did, you don’t need it for anymore, as the underlying tools support the sarif format
I believe you are familiar with sarif
oh good to know. I can use the sarif output to write inline comments directly into a github pr ?
yup
without codeql ?
hrmm
Maybe not
But one sec
codeql is very expensive if you’re not open source
anyway, reviewdog is more of a second thought. The main issue I’m having is trying to bring in tflint and other SAST tools, but maybe we just need to manually add them to our GHA CI pipeline.
I’m lazy and wanted to see if I could copy and paste a quick atmos-friendly GHA if you folks had one already
I would look for an action like this https://github.com/Ayrx/sarif_to_github_annotations
That was just a quick google and it looks unmaintained
but basically something that takes the sarif format and creates annotations. Also, check the tools. They may already create annotations
hmm ok thank you!
tflint
was the tool I was thinking of that I recently saw supported sarif
-f, --format=[default|json|checkstyle|junit|compact|sarif] Output format
Anyways, the types of problems you may run into are:
• needing the tfvars files
• needing proper roles for terraform init
• properly supporting a mono repo
I think the solution will depend on which tool.
Ah ok thanks a lot. I will try and see what issues I hit. If I succeed, i’ll post back
Also, if you’re using dynamic backends, dynamic providers, that can complicate things
I want to document how to best do this, so happy to work with you on it
Yes. I was thinking I just need to run the atmos generate commands or the atmos terraform init command, then run the SAST scanners, then run the atmos terraform plan workflows
checkov and other tools also support scanning the planfile itself
so the plan can be saved to a planfile, then throw checkov and similar tools at it
then finally allow the apply
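That flow could be sketched as a step like the one below (hypothetical; the planfile name is made up, and it assumes checkov is installed on the runner and can scan a Terraform plan rendered to JSON):

```yaml
# hypothetical GHA step, run after the plan has been saved to a planfile
- name: Scan planfile with Checkov
  run: |
    # render the saved binary plan to JSON, then scan it before allowing apply
    terraform show -json dev-vpc.planfile > plan.json
    checkov -f plan.json
```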
Are you using our GitHub actions for atmos?
Yes
Then it sounds like where ever you have the plan storage action, you could easily integrate checks on the planfile
This one looks promising for sarif and trivy
GitHub Action to check for vulnerabilities in your container image
oh it uses codeql to upload the sarif again… ugh
I guess it’s better to just go with reviewdog for this since it supports the annotations out of the box
I tried adding annotations using reviewdog trivy. I can see the outputted annotations in json but cannot see the annotations in the pr itself.
I switched to the older reviewdog tfsec and it worked out of the box
https://github.com/nitrocode/reviewdog-trivy-test/pull/1/files
2024-08-29
Related to the above, one thing that came up is inline suppressions with trivy/tfsec and how to do that on a stack-input level instead of in the base-component.
One thing that gets tricky is since we’re using workspaces here in atmos, you can have one set of inputs/tfvars that triggers trivy/tfsec issues and another set of inputs that doesn’t, but the in-line suppression may have to be done in the terraform code itself which suppresses it for all inputs.
Two kinds of tfvars. The prod one doesn’t cause the issue.
# services-prod-some-bucket.tfvars
logging_enabled = true
The dev one does cause an issue.
# services-dev-some-bucket.tfvars
logging_enabled = false
so does that mean we have to do this in the base component which can impact all stacks/inputs of the base component ?
#trivy:ignore:aws-s3-enable-logging:exp:2024-03-10
resource "aws_s3_bucket"
A Simple and Comprehensive Vulnerability Scanner for Containers and other Artifacts, Suitable for CI
the above issue i believe impacts both tfsec and trivy
same issue with checkov
https://www.checkov.io/2.Basics/Suppressing%20and%20Skipping%20Policies.html
The suppression inline comments seem only applicable to the resource definition itself and not the input tfvar to the resource definition
Description
I use workspaces to reuse terraform root directories
For example, the terraform code to provision an s3 bucket is located here
components/terraform/s3/main.tf
I setup workspaces such as this
ue1-dev-my-bucket2.tfvars
ue1-prod-my-bucket3.tfvars
My ue1-dev-my-bucket2.tfvars
contains
kms_master_key_arn = ""
My ue1-prod-my-bucket3.tfvars
contains
kms_master_key_arn = "aws:kms"
Then I run trivy for ue1-prod and no issues.
When I run trivy for ue1-dev, I have one issue.
trivy config . --tf-vars=ue1-dev-my-bucket2.tfvars
2024-08-29T17:44:46-05:00 INFO [misconfig] Misconfiguration scanning is enabled
2024-08-29T17:44:47-05:00 INFO Detected config files num=3
cloudposse/s3-bucket/aws/main.tf (terraform)
Tests: 9 (SUCCESSES: 8, FAILURES: 1, EXCEPTIONS: 0) Failures: 1 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 1, CRITICAL: 0)
HIGH: Bucket does not encrypt data with a customer managed key.
Encryption using AWS keys provides protection for your S3 buckets. To increase control of the encryption and manage factors like rotation use customer managed keys.
See https://avd.aquasec.com/misconfig/avd-aws-0132
cloudposse/s3-bucket/aws/main.tf:80-94 via main.tf:3-21 (module.s3_bucket)
 80 ┌ resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
 81 │   count = local.enabled ? 1 : 0
 82 │
 83 │   bucket                = local.bucket_id
 84 │   expected_bucket_owner = var.expected_bucket_owner
 85 │
 86 │   rule {
 87 │     bucket_key_enabled = var.bucket_key_enabled
How do I only suppress this issue for ue1-dev-my-bucket2.tfvars
and not ue1-prod-my-bucket3.tfvars
?
I can technically do this but this will suppress for both my tfvars files.
#trivy:ignore:AVD-AWS-0132
module "s3_bucket" {
I cannot add the comment beside the input directly in the tfvars file. It seems it has to be before the module definition.
e.g.
#trivy:ignore:AVD-AWS-0132
kms_master_key_arn = ""
I looked into filtering using the ignore file which would essentially be the same thing as the module definition inline comment.
https://aquasecurity.github.io/trivy/test/docs/configuration/filtering/#by-finding-ids
I looked into filtering by open policy agent
https://aquasecurity.github.io/trivy/test/docs/configuration/filtering/#by-open-policy-agent
This has some promise but looking at the json output, I don’t see any way to ignore based on the inputs. Do I have the ability to see which individual inputs were passed in to the outputted json?
Here is my output json
{
  "SchemaVersion": 2,
  "CreatedAt": "2024-08-29T1807.025472-05:00",
  "ArtifactName": ".",
  "ArtifactType": "filesystem",
  "Metadata": { "ImageConfig": { "architecture": "", "created": "0001-01-01T0000Z", "os": "", "rootfs": { "type": "", "diff_ids": null }, "config": {} } },
  "Results": [
    {
      "Target": ".",
      "Class": "config",
      "Type": "terraform",
      "MisconfSummary": { "Successes": 3, "Failures": 0, "Exceptions": 0 }
    },
    {
      "Target": "cloudposse/s3-bucket/aws/cloudposse/iam-s3-user/aws/cloudposse/iam-system-user/aws/main.tf",
      "Class": "config",
      "Type": "terraform",
      "MisconfSummary": { "Successes": 1, "Failures": 0, "Exceptions": 0 }
    },
    {
      "Target": "cloudposse/s3-bucket/aws/main.tf",
      "Class": "config",
      "Type": "terraform",
      "MisconfSummary": { "Successes": 8, "Failures": 1, "Exceptions": 0 },
      "Misconfigurations": [
        {
          "Type": "Terraform Security Check",
          "ID": "AVD-AWS-0132",
          "AVDID": "AVD-AWS-0132",
          "Title": "S3 encryption should use Customer Managed Keys",
          "Description": "Encryption using AWS keys provides protection for your S3 buckets. To increase control of the encryption and manage factors like rotation use customer managed keys.",
          "Message": "Bucket does not encrypt data with a customer managed key.",
          "Query": "data..",
          "Resolution": "Enable encryption using customer managed keys",
          "Severity": "HIGH",
          "PrimaryURL": "https://avd.aquasec.com/misconfig/avd-aws-0132",
          "References": [
            "https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html",
            "https://avd.aquasec.com/misconfig/avd-aws-0132"
          ],
          "Status": "FAIL",
          "Layer": {},
          "CauseMetadata": {
            "Resource": "module.s3_bucket",
            "Provider": "AWS",
            "Service": "s3",
            "StartLine": 80,
            "EndLine": 94,
            "Code": { "Lines": [ "…per-line source/highlight data truncated…" ] }
          }
        }
      ]
    }
  ]
}
Hrmm that seems tricky to solve, and shows ultimately the limitations of tools like trivy, despite their comprehensive capabilities
We recognize the need for different stacks to have different policies which is why our OPA implementation supports that. Unfortunately only works on atmos configs and not HCL
Based on the issue, and the suggestion, “Ignoring by attributes” can you ignore a policy based on the value of a tag?
Hmm that may be a possibility.
We would either want to ignore on the value of a tag OR ignore on some other unique identifier like a name
Ok it worked. I can stack exclusions based on attributes directly on the module definition.
e.g.
#trivy:ignore:avd-aws-0132[name=my-dev-bucket]
#trivy:ignore:avd-aws-0132[name=my-xyz-bucket]
module "s3_bucket" {
source = "cloudposse/s3-bucket/aws"
It’s still not ideal since I have to change the component code which makes vendoring code from cloudposse/terraform-aws-components more difficult.
If I can get their ignore rego policy file to work based on tfvars or module arguments, that would be ideal because then we can remove all the inline comments and keep the cloudposse component up to date
This gives me Deja Vu. I recall complaining to bridgecrew that their solution will never work for open source, where different people have different policies. It’s for the same reason.
With a separate file for the ignore, per workspace, it would be suitable for open source, no?
I think checkov (bridgecrew/prisma) is a little behind trivy when it comes to that compatibility. Checkov doesn’t allow module level or attribute/argument level suppressions for example but trivy does.
Looks like this feature https://github.com/aquasecurity/trivy/issues/7180 in their upcoming 0.55.0 version will allow us to remove the inline comments from the module definition and keep the code in line with upstream.
That sounds great!
This will make integration into atmos natural
But wouldn’t work well with deep merging due to the list usage
Oh interesting, i think you were taking this a step further than me.
I was thinking we could do something like this (for now)
components/terraform/xyz/trivy/ignore-policies/<workspace>.rego
So then when trivy runs, it could run within the base component as workspace specific
trivy config . --ignore-file ./trivy/ignore-policies/ue1-dev-my-s3-bucket.rego
Or just base component specific
trivy config . --ignore-file ./trivy/ignore/policy.rego
Note that it would need to be in its own folder even if it were just one ignore policy: the other form of trivy policies (for adding new checks) requires a directory path to search recursively, and could therefore incorrectly pick up the ignore file.
Long term, it would be very nice to pass in the trivy ignores directly through the atmos stack yaml.
I imagine you could create the interface in atmos so it uses a map and then translate that into a list in the format that trivyignore understand
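For instance, a wrapper could expose a hypothetical map in the stack settings and render it into trivy's ignore format (an illustration only, not an existing Atmos feature; the settings.trivy key and ignore reasons are made up):

```yaml
# hypothetical atmos stack settings, deep-mergeable because it's a map
settings:
  trivy:
    ignores:
      avd-aws-0132: "dev buckets intentionally use AWS-managed keys"
      avd-aws-0089: "logging handled at the account level"
```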
2024-08-30
2024-08-31
Support atmos terraform apply --from-plan
with additional flags @duncanaf (#684)
what
• Change argument generation for atmos terraform apply
to make sure the plan-file arg is placed after any flags specified by the user
why
• Terraform is very picky about the order of flags and args, and requires all args (e.g. plan-file) to come after any flags (e.g. --parallelism=1
), or it crashes.
• atmos terraform apply
accepts a plan-file
arg, or generates one when --from-plan
is used. When this happens, it currently puts the plan-file arg first, before any additional flags specified by the user.
• This breaks when additional flags are specified, e.g. atmos terraform apply --from-plan -- -parallelism=1
. In this example, atmos tries to call terraform apply <planfile> -parallelism=1
and terraform crashes with Error: Too many command line arguments
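The corrected ordering can be illustrated with a tiny sketch (the flag and planfile names are hypothetical): terraform requires all flags before positional arguments, so the planfile must come last.

```shell
# sketch: build the apply command with flags first, planfile last,
# matching the ordering the fix above enforces
flags="-parallelism=1"
planfile="dev-vpc.planfile"   # assumed planfile name
cmd="terraform apply ${flags} ${planfile}"
echo "${cmd}"
```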