#atmos (2024-08)

2024-08-01

Andy Wortman avatar
Andy Wortman

I am currently working on enabling Go templates in my Atmos config, and I’m having some trouble excluding a Go template from processing by Atmos. In my manifest, I originally had this:

vars:
  username: "aws-sso-qventus-admin-latam-{{SessionName}}"

After enabling templates globally in atmos.yaml, I got this error: template: all-atmos-sections:136: function "SessionName" not defined

I changed the definition to

vars:
  username: "aws-sso-qventus-admin-latam-{{`{{SessionName}}`}}"

but I’m still getting the same error. I tried the printf syntax as well:

vars:
  username: '{{ printf "aws-sso-qventus-admin-latam-{{SessionName}}" }}'

Same error. Anyone know what I’m missing?

Andy Wortman avatar
Andy Wortman

Atmos 1.85.0, btw

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) is it related to the number of evaluations?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    # Number of evaluations/passes to process `Go` templates
    # If not defined, `evaluations` is automatically set to `1`
    evaluations: 1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

change evaluations from 2 to 1 for example

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

see if it makes a difference

Andy Wortman avatar
Andy Wortman

Yup, that did it. Changed evaluations from 2 to 1, and now it’s working. Thanks @Erik Osterman (Cloud Posse)!

1
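
For reference, a minimal sketch of the configuration that resolved this (assuming a single template pass; with `evaluations: 2` the escaped literal would be expanded again on the second pass and fail):

```yaml
# atmos.yaml -- a sketch, not the full file
templates:
  settings:
    enabled: true
    evaluations: 1   # one pass: the escaped literal below survives processing

# stack manifest
vars:
  # the backtick-quoted raw string renders as the literal `{{SessionName}}`
  username: "aws-sso-qventus-admin-latam-{{`{{SessionName}}`}}"
```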
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks Erik

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please note that with evaluations at 1, you’ll be limited in other ways. Always some tradeoffs.

1
RB avatar

Is there a repository (yet) for common OPA policy patterns that are used with Atmos?

RB avatar

If not, is there one planned ?

RB avatar

Some potential examples

• Avoid s3 bucket policies in s3 components that contain wildcard actions

• Avoid s3 bucket policies in s3 components that allow cross-account access with NOT allowlisted account ids

• etc
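
As a hedged sketch of how such shared policies could be wired into a component (Atmos supports OPA validation via the `settings.validation` section; the component, policy file name, and path here are hypothetical):

```yaml
components:
  terraform:
    s3-bucket:
      settings:
        validation:
          check-s3-bucket-policy:
            schema_type: opa
            # hypothetical path to a policy from a shared library
            schema_path: "s3-bucket/deny-wildcard-actions.rego"
            description: Reject bucket policies containing wildcard actions
```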

Michael avatar
Michael

I really like this idea. We have been looking into this recently

1
RB avatar

I’d contribute to a community supported atmos opa library fwiw

Brett Au avatar
Brett Au

Agreed I am currently writing policies at the moment, if there was an open repo I would contribute!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice idea, we will consider it, thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this is much needed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m heads down on the new public docs for refarch, but after that will put something up

1
Michael avatar
Michael

Could be a good office hours discussion too

2024-08-05

Giles Westwood avatar
Giles Westwood

i’m so used to using .yml, is there a reason atmos doesn’t find stacks named like that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a historical reason, doesn’t make sense now, we’ll improve it to read both extensions

Giles Westwood avatar
Giles Westwood

nice it’s nearly caused me to throw my machine out the window a couple of times from stacks not being found

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep, sorry for that. We have a task to add support for .yml extensions. We’ll try to do it this week

Giles Westwood avatar
Giles Westwood

i like the way atmos generated the tfvars from yaml but i’m wondering if I could disable the different workspace per component behaviour

Giles Westwood avatar
Giles Westwood

i’m a real beginner here but it seems like there’s a lot of complexity to share things between each state file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


if I could disable the different workspace per component behaviour

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you provide more details on what you want to do here?

Giles Westwood avatar
Giles Westwood

i like the breaking up of the state file idea a bit because it scared me to have everything in one file

Giles Westwood avatar
Giles Westwood

we have buckets for each account

Giles Westwood avatar
Giles Westwood

but the consequences of splitting it up are quite painful

Giles Westwood avatar
Giles Westwood

terraform-provider-utils needing an absolute path in there wasn’t ideal

Giles Westwood avatar
Giles Westwood

and the number of modules in s3-buckets root module example

Giles Westwood avatar
Giles Westwood

really complicated for me and the devs here.. i think it won’t go down well

Giles Westwood avatar
Giles Westwood

i was hopeful that atmos would be fast because the statefile is much smaller

Giles Westwood avatar
Giles Westwood

but the speed of a run is quite slow because of all this

Giles Westwood avatar
Giles Westwood
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of cloudposse/template from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of cloudposse/awsutils from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Reusing previous version of cloudposse/utils from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Reusing previous version of hashicorp/http from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Using previously-installed hashicorp/aws v5.61.0
- Using previously-installed cloudposse/awsutils v0.19.1
- Using previously-installed hashicorp/external v2.3.3
- Using previously-installed cloudposse/utils v1.24.0
- Using previously-installed hashicorp/time v0.12.0
- Using previously-installed hashicorp/http v3.4.4
- Using previously-installed hashicorp/local v2.5.1
- Using previously-installed cloudposse/template v2.2.0
Giles Westwood avatar
Giles Westwood

so in a roundabout way i’m saying i’m struggling to justify the splits to myself

Giles Westwood avatar
Giles Westwood

account_map for instance seems to be mapping account ids to names or something. Couldn’t I just make a static list var of account_maps in the stack config and just share it between my different stacks?

Giles Westwood avatar
Giles Westwood

a real world example I can think of is that I’ll want to build a vpc and after that I’ll need the details of the db subnet ids to build an rds instance somehow

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok I see what you are saying. Let me post a few links here on how it could be done:

Giles Westwood avatar
Giles Westwood

magic thank you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using the Cloud Posse architecture with account-map, and you don’t want to provision the account-map component, you can use the Atmos static backend to provide static values (it’s called brownfield configuration)

https://atmos.tools/core-concepts/components/terraform/brownfield/#hacking-remote-state-with-static-backends

Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

second, we provide support for remote state backend using two diff methods:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• using the remote-state Terraform module (which uses the terraform-provider-utils)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Using the Go templates in Atmos stack manifests and the atmos.Component template function

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos.Component | atmos

Read the remote state or configuration of any Atmos component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
For example, you have a tgw component that requires the vpc_id output from the vpc component in the same stack:

components:
    terraform:
      tgw:
        vars:
          vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

by using the Go template function atmos.Component, you don’t have to use the remote-state Terraform component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please review the links and let us know if you have any questions

Giles Westwood avatar
Giles Westwood

yea you’ve really helped thanks

Michael Dizon avatar
Michael Dizon

@Andriy Knysh (Cloud Posse) i’ve had issues using those atmos functions with github actions. when i have gomplate enabled, it hangs

Michael Dizon avatar
Michael Dizon

am i missing something

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Michael Dizon make sure your GH action downloads the latest Atmos version (the template function atmos.Component was added not so long ago)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if that fixes the issue

Michael Dizon avatar
Michael Dizon

@Andriy Knysh (Cloud Posse) that was it

1
Giles Westwood avatar
Giles Westwood

hmm so i got atmos.Component to work on an output from a vpc module for vpc_id

Giles Westwood avatar
Giles Westwood

12 seconds

Giles Westwood avatar
Giles Westwood

vs sub second

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

12 seconds is too much (it just calls terraform output). please share your config (you can DM me)

Miguel Zablah avatar
Miguel Zablah

hey did you guys manage to make the time go down? I’m having a similar time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using the atmos.Component function and get the component’s outputs, it’s just a wrapper on top of terraform output.

For example:

{{ (atmos.Component "vpc" .stack).outputs.vpc_id }} is the same as executing

atmos terraform output vpc -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you execute atmos terraform output on your component in the stack and see how long it takes?

if it takes that long, then Atmos can’t help here b/c it executes the same terraform output command internally

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know how long atmos terraform output takes for you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also FYI, Atmos caches the outputs of each function for the same component in the same stack. So if you have many instances of this function for the vpc component in the same stack {{ (atmos.Component "vpc" .stack).outputs.vpc_id }}, Atmos will execute terraform output only once

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but it will execute it once per stack, so for example, if you are using it in many diff top-level stacks, all of them will be executed when processing the Go templates - this is how the templates work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe the 12 seconds you mentioned is b/c the function is executed many times when Atmos processes all the components

Miguel Zablah avatar
Miguel Zablah

this cmd: atmos terraform output vpc -s <stack> <- takes 13s, but this one: atmos list stacks <- takes 1m and 08s

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, that’s an issue

atmos terraform output vpc -s <stack> takes 13 s - this is too long, you prob need to review why it’s taking so much time to run

and you prob have many of those in the Atmos stacks manifests

Miguel Zablah avatar
Miguel Zablah

I will check that, although it’s probably bc of how I authenticate, but I will confirm

Miguel Zablah avatar
Miguel Zablah

but is this why it takes 1m to run atmos list stacks? does it do the outputs as well on that cmd?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since you have Go templates in the files, the command processes all of them b/c when you execute atmos list stacks you want to see the final values for all components in those stacks. Note that the command is custom and just calls atmos describe stacks which processes all components in all stacks and all Go templates in order to show you the final values for all the components (and not the template strings)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you using the atmos.Component function to get the outputs of the components (remote state), or you are using it to get other sections from the components?

Miguel Zablah avatar
Miguel Zablah

I’m using it to get the outputs of the components; these are examples of how I use it:

vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_private_subnets: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'

also I get the same time when using the UI with the atmos cmd. does this also evaluate the templates?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the code is correct

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one call to atmos.Component takes 13 seconds for you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have two calls in one stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you have a few stacks, then it explains why you are seeing

atmos list stacks <- takes 1m and 08s
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not easy to optimize in Atmos since we process all Go templates in the manifests when executing atmos describe stacks (to show you the final values and not the template strings) - but let me think about it.

atmos list stacks def has nothing to do with Go templates (should not), but it’s how it’s implemented - it just calls atmos describe stacks. Maybe we can implement a separate function atmos list stacks that does not need to process Go templates and shows just the stack names

Miguel Zablah avatar
Miguel Zablah

I have 5 stacks, so maybe it does make sense. maybe we can add a flag to skip that? and I can just add it to the custom CLI cmd

Miguel Zablah avatar
Miguel Zablah

also, is there a way to optimize the calls to atmos.Component? like if 1 call takes 13s, but with that call I can get all of the outputs I need from it, is there a way to maybe save that into a variable and use it in the variables I pass in?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, we can add a flag to atmos describe stacks to skip template processing; we’d use it when we don’t need to show the components, but just stack names

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


like if 1 call takes 13s, but with that call I can get all of the outputs I need from it, is there a way to maybe save that into a variable and use it in the variables I pass in?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos already does that - if you use the function for the same component in the same stack, Atmos caches the result and reuses it in the next calls, but only for the SAME component in the same stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but for diff stacks, the result is diff, so it calls it again (as many times as you have stacks)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

these two calls will execute terraform output just once for the same stack

vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_private_subnets: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you have 13 sec per stack multiplied by the number of stacks, giving you 1 min to execute atmos describe stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me think about adding a flag to atmos describe stacks to disable template processing. Then you would update your custom command atmos list stacks to use that flag

Miguel Zablah avatar
Miguel Zablah

I see that makes sense niceeee!! and thanks that will be great!!

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah please take a look at the new Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.86.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add the --process-templates=false flag to the atmos describe stacks command in your custom atmos list stacks command, and Atmos will not process templates, which should make the atmos list stacks command much faster
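
A sketch of how the custom command could be updated (the layout is illustrative; the flag is the one introduced in v1.86.0):

```yaml
# atmos.yaml -- custom command sketch
commands:
  - name: list
    commands:
      - name: stacks
        description: List all Atmos stacks without processing Go templates
        steps:
          # skip template processing so no `terraform output` calls are made
          - >
            atmos describe stacks --process-templates=false --sections none
            | grep -e "^\S" | sed s/://g
```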

Miguel Zablah avatar
Miguel Zablah

Oh this is great!!! Thanks I will update and try it in a bit

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) sorry, I just tested it and it’s awesome and crazy fast! this will save me sooo much time!! thanks a lot!

1

2024-08-06

Dave Nicoll avatar
Dave Nicoll

I’ve been rummaging around in the Atmos docs for hours, and I’m coming up blank, sorry. I have a stack I’m deploying to dev in an Azure subscription. My name_pattern is {namespace}-{environment} and so my stack is called dev-wus3. I’d like to provision a version of the stack in the same Azure subscription per pull request via a pipeline so that PRs can be tested, and destroy the stack when the PR is closed. If I set the namespace to pr192 atmos throws an error because the pr192 stack doesn’t exist.

I created a pr.yaml which imports the dev stack, and overrides the namespace, and while it works - it’s super hacky!

---
import:
  - development/westus3.yaml
vars:
  environment: wus3
  namespace: pr192

Thoughts on how best to achieve this? Thanks

jose.amengual avatar
jose.amengual

you could potentially overwrite namespace at the component level for that but that is not recommended

jose.amengual avatar
jose.amengual

this is where the attributes variable of the null-label comes in handy

jose.amengual avatar
jose.amengual

if you do it at the vars level all the stuff created on that stack will have dev-wus3-pr192

jose.amengual avatar
jose.amengual

so it will look like

vars:
  environment: wus3
  attributes:
    - pr192
Dave Nicoll avatar
Dave Nicoll

Won’t I need a conditional in each component to implement the attribute?

jose.amengual avatar
jose.amengual

as long as all your components use the null-label module to set the name then you do not need to

jose.amengual avatar
jose.amengual

you can pass the atmos variable at the cli level

jose.amengual avatar
jose.amengual

if it needs to be set up at run time

Dave Nicoll avatar
Dave Nicoll

Using an overrides block in my pr.yaml to set the namespace var per component could work, I guess. You said it’s not recommended…what’s the downside?

jose.amengual avatar
jose.amengual

you are changing a stack name dynamically, which is a core definition in atmos

Dave Nicoll avatar
Dave Nicoll

Struggling to find docs on how to use attributes, do you have a link?

jose.amengual avatar
jose.amengual

null-label module docs, not atmos

Dave Nicoll avatar
Dave Nicoll

How does the attribute get from CLI to vars: attributes?

jose.amengual avatar
jose.amengual

that will be a TF_VAR

jose.amengual avatar
jose.amengual

not atmos related actually

Dave Nicoll avatar
Dave Nicoll

Ah, got it
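
Putting the thread together, the per-PR overlay could leave the namespace intact and add the PR identifier as a null-label attribute instead (a sketch based on the pr.yaml above):

```yaml
# pr.yaml -- sketch: keep namespace/environment as-is, append the PR id
import:
  - development/westus3.yaml
vars:
  attributes:
    - pr192   # null-label appends this to resource names, e.g. ...-pr192
```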

2024-08-08

RB avatar

Is there a way to inject environment variables on each plan/apply using the stack yaml ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use the env section as globals or per component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the env section is a first-class section (map), same as the vars section - it participates in all the inheritance and deep-merging

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    test-component:
      vars:
        enabled: true
      env:
        TEST_ENV_VAR1: "val1"
        TEST_ENV_VAR2: "val2"
        TEST_ENV_VAR3: "val3"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the ENV vars will be set in the process that executes the Atmos commands

RB avatar

Oh man and i couldn’t find this in the doc ugh

RB avatar

Can that work globally too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

globally too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

same way as vars
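
So, as a sketch, the same env map can be declared at the global (stack) level and deep-merged into every component, with component-level values overriding it (the variable names and values here are hypothetical):

```yaml
# global scope -- applies to all components in the stack
env:
  AWS_DEFAULT_REGION: us-east-2   # hypothetical value

components:
  terraform:
    test-component:
      env:
        # deep-merged with (and overriding) the global env section
        AWS_DEFAULT_REGION: us-west-2
```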

RB avatar

What about with atmos component interpretation

RB avatar
env:
  TF_AWS_DEFAULT_TAGS_repo: org/repo
  TF_AWS_DEFAULT_TAGS_component: {{ atmos.component }}
  TF_AWS_DEFAULT_TAGS_component_real: {{ atmos.component_real }}
    
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the templates need to be quoted (to make it a valid YAML), also for template data a dot . needs to be used (otherwise it will be a function)

TF_AWS_DEFAULT_TAGS_component: '{{ .atmos.component }}'
1
jose.amengual avatar
jose.amengual

@Dave Nicoll could be an answer to your question too

hamza-25 avatar
hamza-25

New here but is there something I need to add to the atmos.yaml file before i can use atmos.component in my stack?

Whenever i use it to use the output of component 1 in component 2, the terraform screams that its not legible

It literally passes ‘{{ (atmos.Component…}}’ in as a string

The docs don’t say much more than just how to use atmos.Component in the yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Go templates in Atmos stack manifests are not enabled by default (they are enabled in imports, but those serve a diff purpose)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your atmos.yaml , add the following

templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    # Number of evaluations/passes to process `Go` templates
    # If not defined, `evaluations` is automatically set to `1`
    evaluations: 1
    # https://masterminds.github.io/sprig
    sprig:
      # Enable Sprig functions in `Go` templates in Atmos stack manifests
      enabled: true
    # https://docs.gomplate.ca
    # https://docs.gomplate.ca/functions
    gomplate:
      # Enable Gomplate functions and data sources in `Go` templates in Atmos stack manifests
      enabled: true
      # Timeout in seconds to execute the data sources
      timeout: 5
      datasources: {}
hamza-25 avatar
hamza-25

Thanks a lot, got me further

But now im running into the issue of retrieving aws creds to access the remote backend

I see there is documentation of envs to use in templates.settings which include aws profiles but ive tried multiple cases

1- i set it equal to {{ .Flags.stack }} since my stack names correspond to the aws profiles

2- hardcoded my profile name in but still get the same error

3- im currently using aws sso for the profiles, so i tried inputting access and secret keys instead but no difference

hamza-25 avatar
hamza-25

the error

template: all-atmos-sections:293:31: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1

Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.

Please see https://www.terraform.io/docs/language/settings/backends/s3.html
for more information about providing credentials.

Error: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
hamza-25 avatar
hamza-25

I can see it works when i manually set the aws_profile value locally to match my profile name

Im trying to set the profile name so atmos can find it via the atmos.yaml but it wont work even when I set

env:
  AWS_PROFILE: abc

In templates.settings.env

jose.amengual avatar
jose.amengual

I think you should let your pipeline pass the profile, and then have atmos run the command, if you can

jose.amengual avatar
jose.amengual

Atmos is not a cicd tool, usually the selection of profiles is determined by the naming convention in your module by passing descriptive name + account number

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the atmos.Component template function executes terraform output on the provided component and stack. This means that the referenced component must already be provisioned and using a backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you should be able to execute atmos terraform output <component> -s <stack> and see the component outputs from the remote state. If this command does not execute, then you need to make sure you have defined the backend correctly and have a role with permissions to access the backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

{{ (atmos.Component <component> .stack).outputs.xxx }} is just a wrapper on top of atmos terraform output <component> -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please review this doc on how to configure TF backends in Atmos https://atmos.tools/quick-start/advanced/configure-terraform-backend/

Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
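
From that doc, the backend configuration lives in the `terraform.backend` section of the stack manifests; a hedged sketch (the bucket, table, region, and role values are placeholders):

```yaml
terraform:
  backend_type: s3
  backend:
    s3:
      encrypt: true
      bucket: "your-tfstate-bucket"                               # placeholder
      key: "terraform.tfstate"
      region: "us-east-2"                                         # placeholder
      dynamodb_table: "your-tfstate-lock-table"                   # placeholder
      role_arn: "arn:aws:iam::111111111111:role/tfstate-access"   # placeholder
```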

2024-08-09

2024-08-10

2024-08-12

Andy Wortman avatar
Andy Wortman

Having trouble with stack templates in github actions. My config works when run locally, but when run in a github action, the go templates return "<no value>", and I haven’t been able to figure out why.

I’ve verified that my local atmos version and the one installed by the github action workflow are both 1.85.0. Both are using the same atmos.yaml, which has templating enabled. Obviously, both are using the same terraform and atmos code as well. Where else should I be looking for differences?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andy Wortman avatar
Andy Wortman

Here’s my template config from atmos.yaml:

templates:
  settings:
    enabled: true
    evaluations: 1
    delimiters: ["{{", "}}"]
    sprig:
      enabled: true
    gomplate:
      enabled: true
      timeout: 5
      datasources: {}

Variable I’m trying to pull:

rds_multitenant_sg_id: '{{ (atmos.Component "aioa-core/multitenant-rds" .atmos_stack).outputs.rds_sg_id }}'

Output from github action:

  # aws_security_group_rule.db_ingress must be replaced
-/+ resource "aws_security_group_rule" "db_ingress" {
      ~ id                       = "//redacted//" -> (known after apply)
      ~ security_group_id        = "//redacted//" -> "<no value>" # forces replacement
      ~ security_group_rule_id   = "//redacted//" -> (known after apply)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hello. Previous behavior for stack imports would fail if a value is missing, now it does not fail and it replaces my missing import values with "<no value>". I have Go Templating enabled and gomplate disabled in my atmos.yaml. Is this new intended behavior? If so, is the solution to use schemas/opa to ensure stack files have the proper values?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it could happen if the GH action does not see the atmos.yaml (that you use when running locally, which works), so it does not see

templates:
  settings:
    enabled: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or it points to another atmos.yaml where you have evaluations: 2

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

locally, try to set evaluations: 2 in atmos.yaml and run atmos describe component, and you will probably see <no value>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman can you do that and let us know what output you see?

Andy Wortman avatar
Andy Wortman

I set evaluations: 2, but the local run still worked.

To Erik’s question, we’re not using import templates, as far as I can tell.

Andy Wortman avatar
Andy Wortman

This component does import an abstract component, but all the templating is in the real component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe in your GH action you could run the https://atmos.tools/cli/commands/describe/config/ command and see the config that’s used?

atmos describe config | atmos

Use this command to show the final (deep-merged) CLI configuration of all atmos.yaml file(s).

1
Andy Wortman avatar
Andy Wortman

Here’s the templates section from the github runner:

   "templates": {
      "settings": {
         "enabled": true,
         "sprig": {
            "enabled": true
         },
         "gomplate": {
            "enabled": true,
            "timeout": 5,
            "datasources": null
         },
         "delimiters": [
            "{{",
            "}}"
         ],
         "evaluations": 1
      }
   },
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t know why the issue is present in the GH action, we are investigating it, we’ll let you know once we figure it out

1
Andy Wortman avatar
Andy Wortman

Thanks. I’ll keep digging on my side too.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Dan Miller (Cloud Posse) , in case this is related to the thing you were looking into as well

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

I’m not sure unfortunately - this isn’t related to anything I was working on. It might’ve been @Ben Smith (Cloud Posse), but iirc he said that issue was an older version of Atmos, which sounds like it isn’t the issue here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman please take a look at the new Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.86.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use the env var ATMOS_LOGS_LEVEL=Trace in the GH action, and the atmos.Component function will log debug messages and the outputs section of the component

Andy Wortman avatar
Andy Wortman

Nice!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are testing it by ourselves now, let us know if you get to it faster and find anything regarding the issue with templates in GH action

1
Andy Wortman avatar
Andy Wortman

Got the new version in place and added trace logging. I’m getting this error:

template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: invalid character 'c' looking for beginning of value

The template I’m using to test is:

alb_arn: '{{ (atmos.Component "alb" "ue2-dev-common").outputs.alb.alb_arn }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman what do you have in your atmos.yaml file in the logs section

logs:
  # Can also be set using 'ATMOS_LOGS_FILE' ENV var, or '--logs-file' command-line argument
  # File or standard file descriptor to write logs to
  # Logs can be written to any file or any standard file descriptor, including `/dev/stdout`, `/dev/stderr` and `/dev/null`
  file: "/dev/stderr"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  # Can also be set using 'ATMOS_LOGS_LEVEL' ENV var, or '--logs-level' command-line argument
  level: Info
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Check that you have file: "/dev/stderr" to log to stderr instead of stdout (which breaks the JSON output)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, we will release a new Atmos version today with more improvements to the atmos.Component function

Andy Wortman avatar
Andy Wortman

Not much in the logs section:

logs:
  verbose: false
  colors: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please update to the above config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the one you have is very old and deprecated

1
Andy Wortman avatar
Andy Wortman

Updated to atmos 1.86.1, updated the logs section. Getting a little more now:

Executing template function 'atmos.Component(alb, ue2-dev-common)'
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: invalid character 'c' looking for beginning of value
Error: Process completed with exit code 1.
Andy Wortman avatar
Andy Wortman

Added atmos describe stacks -s ue2-dev-aioa to the action workflow. Getting basically the same thing:

Validating all YAML files in the 'stacks' folder and all subfolders
ProcessTmplWithDatasources(): processing template 'describe-stacks-all-sections'
ProcessTmplWithDatasources(): template 'describe-stacks-all-sections' - evaluation 1
Executing template function 'atmos.Component(alb, ue2-dev-common)'
Found stack manifests:
<...>
Found component 'alb' in the stack 'ue2-dev-common' in the stack manifest 'dev/common/ue2-dev-common'
ProcessTmplWithDatasources(): processing template 'all-atmos-sections'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: invalid character 'c' looking for beginning of value
Error: atmos exited with code 1.
Error: Process completed with exit code 1.

For completeness, this all still works locally. It only fails in the github action.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll release a new atmos version today/tomorrow which can help to identify the issues

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman please try this version https://github.com/cloudposse/atmos/releases/tag/v1.86.2

Andy Wortman avatar
Andy Wortman

Ran against 1.86.2. Some more logging about the process, but nothing further about the error:

Found component 'alb' in the stack 'ue2-dev-common' in the stack manifest 'dev/common/ue2-dev-common'
ProcessTmplWithDatasources(): processing template 'all-atmos-sections'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'
Writing the backend config to file:
components/terraform/alb/backend.tf.json
Wrote the backend config to file:
components/terraform/alb/backend.tf.json
Writing the provider overrides to file:
components/terraform/alb/providers_override.tf.json
Wrote the provider overrides to file:
components/terraform/alb/providers_override.tf.json
Executing 'terraform init alb -s ue2-dev-common'
template: describe-stacks-all-sections:41:21: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1
Error: atmos exited with code 1.
Error: Process completed with exit code 1.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll be testing 1.86.2 today as well. You can DM me your YAML config for review @Andy Wortman

2024-08-13

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All the docs for our AWS reference architecture are now public

https://sweetops.slack.com/archives/C04NBF4JYJV/p1723233360401039

New docs are live! https://docs.cloudposse.com/

You can now access all the docs without registration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @jose.amengual

New docs are live! https://docs.cloudposse.com/

You can now access all the docs without registration.

jose.amengual avatar
jose.amengual

amazing

2024-08-14

Miguel Zablah avatar
Miguel Zablah

is there a custom way to generate files? kind of like how the backend.tf.json is created? I would like to create the version.tf files as well; it can be .json as well

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not at this time. We support only backend and provider generation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I could see the benefit of generating versions

Miguel Zablah avatar
Miguel Zablah

yeah this can be an awesome thing to add

github3 avatar
github3
04:49:29 PM

Add --process-templates flag to atmos describe stacks and atmos describe component commands. Update docs @aknysh (#669) ## what

• Add logging to the template functions atmos.Component and atmos.GomplateDatasource
• Add --process-templates flag to atmos describe stacks and atmos describe component commands
• Update docs

why

• When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template functions atmos.Component and atmos.GomplateDatasource will log the execution flow and the results of template evaluation - useful for debugging

ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>

• Enable/disable processing of `Go` templates in Atmos stack manifests when executing the commands
• For the `atmos describe component <component> -s <stack>` command, use the `--process-templates` flag to see the component configuration before and after the templates are processed. If the flag is not provided, it's set to `true` by default

Process Go templates in stack manifests and show the final values

atmos describe component <component> -s <stack>

Process Go templates in stack manifests and show the final values

atmos describe component <component> -s <stack> --process-templates=true

Do not process Go templates in stack manifests and show the template tokens in the output

atmos describe component <component> -s <stack> --process-templates=false

• For atmos describe stacks command, use the --process-templates flag to see the stack configurations before and after the templates are processed. If the flag is not provided, it’s set to true by default

Process Go templates in stack manifests and show the final values

atmos describe stacks

Process Go templates in stack manifests and show the final values

atmos describe stacks --process-templates=true

Do not process Go templates in stack manifests and show the template tokens in the output

atmos describe stacks --process-templates=false

The command atmos describe stacks --process-templates=false can also be used in Atmos custom commands that just list Atmos stacks and do not require template processing. This will significantly speed up the custom command execution. For example, the custom command atmos list stacks just outputs the top-level stack names and might not require template processing. It will execute much faster if implemented like this (using the --process-templates=false flag with the atmos describe stacks command):

  - name: list
    commands:
      - name: stacks
        description: List all Atmos stacks.
        steps:
          - >
            atmos describe stacks --process-templates=false --sections none | grep -e "^\S" | sed s/://g

fix: Atmos Affected GitHub Action Documentation @milldr (#661) ## what
• Update affected-stacks job outputs and matrix integration

why

• The affected step was missed when the plan example was updated

references

• closes https://github.com/orgs/cloudposse/discussions/18

Updated Documentation for GHA Versions @milldr (#657) ## what
• Update documentation for Atmos GitHub Action version management

why

• New major releases for both actions

references

  1. https://github.com/cloudposse/github-action-atmos-terraform-plan/releases/tag/v3.0.0
  2. https://github.com/cloudposse/github-action-atmos-terraform-drift-detection/releases/tag/v2.0.0
party_parrot1
Alcp avatar

team, question wrt atmos inheritance from catalogs, i.e. mainly seeing it in var.tags. This is kind of new behavior: I thought the inline tag values take precedence over any catalog imports, correct? But the import values are taking over. Anything to do with the atmos version, or new configs to be aware of?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not related to the catalog itself, related to any imports. In short, Atmos does the following

• Reads all YAML config files and processes all imports (import order matters)

• Then for each section, it deep-merges the values in the following order of precedence:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Component values -> base component values -> global values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

inline values (defined for a component in a stack) take precedence over the imported values
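As a rough illustration of the deep-merge behavior (a sketch only, not Atmos's actual implementation - the tag names and values below are made up for the example):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge two dicts; keys in `override` win on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Lower-precedence sources are merged first, higher-precedence sources last.
global_vars    = {"tags": {"pillar": "test1", "managedby": "cloudposse"}}
component_vars = {"tags": {"service": "envoy-proxy", "managedby": "team-a"}}

final = deep_merge(global_vars, component_vars)

# component-level values win on conflicts; untouched global keys survive
assert final["tags"] == {
    "pillar": "test1",
    "managedby": "team-a",
    "service": "envoy-proxy",
}
```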

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Provision | atmos

Having configured the Terraform components, the Atmos components catalog, all the mixins and defaults, and the Atmos top-level stacks, we can now

Alcp avatar

here is the var section snippet when I describe the component

    tags:
      final_value:
        compliance: na
        criticalsystem: "false"
        env: p1np
        managedby: cloudposse
        pillar: test1
        publiclyaccessible: pri3
        service: envoy-proxy
      name: tags
      stack_dependencies:
      - dependency_type: import
        stack_file: catalog/aws-irsa-role/defaults
        stack_file_section: components.terraform.vars
        variable_value:
          compliance: na
          criticalsystem: "false"
          managedby: cloudposse
          publiclyaccessible: pri3
          service: envoy-proxy
      - dependency_type: inline
        stack_file: orgs/test1/p-line/usw2/p1np/mm-experiment-store-recs-inference-mastermind
        stack_file_section: terraform.vars
        variable_value:
          compliance: gdpr
          criticalsystem: true
          publiclyaccessible: pri3
          service: mm-experiment-store-recs-inference-mastermind1
      - dependency_type: import
        stack_file: catalog/aws-irsa-role/p1np-defaults
        stack_file_section: terraform.vars
        variable_value:
          env: p1np
      - dependency_type: import
        stack_file: globals
        stack_file_section: vars
        variable_value:
          pillar: test1
Alcp avatar

here I would expect the final value for service to be ‘mm-experiment-store-recs-inference-mastermind1’ but I see ‘envoy-proxy’

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well… I can’t answer that question w/o seeing your config (it all depends on component config and imports). You can DM me and I’ll take a look

Alcp avatar

ok let me do that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after reviewing this with @Alcp, here is the short answer (in case someone else would need it):

component-level values (e.g. vars.tags) override the same global values regardless of whether the component values are defined inline or imported

Alcp avatar

thanks @Andriy Knysh (Cloud Posse) that was very helpful

sweetops1
RB avatar

Is there a good way to do repo separation for

• Risky components such as root components

• Segment high priv accounts from developer friendly accounts This would require sharing configurations between a root level atmos repo and the segmented ones

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this is possible, but we don’t have it well documented.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good news is we have a project coming up with a customer to address this and multiple other enhancements. No ETA yet, but likely this year.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As an infrastructure repo gets larger and larger, it can become unwieldy and intimidating for teams. We want to make it easier to break it apart, while maintaining the benefits of the atmos framework.

2
RB avatar

Just spitballing… Is one way simply:

• Spacelift: adding the stack configs to geodesic and then pulling down the geodesic container

• Github actions: use a deploy key to clone the root repo’s configs to get aws account, aws account map and other components

In either approach, it would be grabbing the root yaml, then combining and deep merging with the child accounts repo yaml, and then finally the atmos workflow

2024-08-15

github3 avatar
github3
08:24:26 PM

Improve logging for the template function atmos.Component @aknysh (#672) ## what

• Improve logging for the template function atmos.Component
• Update Golang to the latest version 1.23

why

• When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template functions atmos.Component and atmos.GomplateDatasource will log the execution flow and the results of template evaluation - useful for debugging
ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>

This PR adds more debugging information and shows the results of the atmos.Component execution, and shows if the result was found in the cache:

Found component 'template-functions-test' in the stack 'tenant1-ue2-prod' in the stack manifest 'orgs/cp/tenant1/prod/us-east-2'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1

Converting the variable 'test_list' with the value [ "list_item_1", "list_item_2", "list_item_3" ] from JSON to 'Go' data type

Converted the variable 'test_list' with the value [ "list_item_1", "list_item_2", "list_item_3" ] from JSON to 'Go' data type
Result: [list_item_1 list_item_2 list_item_3]

Converting the variable 'test_map' with the value { "a": 1, "b": 2, "c": 3 } from JSON to 'Go' data type

Converted the variable 'test_map' with the value { "a": 1, "b": 2, "c": 3 } from JSON to 'Go' data type
Result: map[a:1 b:2 c:3]

Converting the variable 'test_label_id' with the value "cp-ue2-prod-test" from JSON to 'Go' data type

Converted the variable 'test_label_id' with the value "cp-ue2-prod-test" from JSON to 'Go' data type
Result: cp-ue2-prod-test

Executed template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)'

'outputs' section:
test_label_id: cp-ue2-prod-test
test_list:
- list_item_1
- list_item_2
- list_item_3
test_map:
  a: 1
  b: 2
  c: 3

Executing template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)'
Found the result of the template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)' in the cache

'outputs' section:
test_label_id: cp-ue2-prod-test
test_list:
- list_item_1
- list_item_2
- list_item_3
test_map:
  a: 1
  b: 2
  c: 3

add install instructions for atmos on windows/scoop @dennisroche (#649) ## what

add option/documentation for installing atmos using scoop.sh on windows.

scoop install atmos

scoop manifests will check GitHub releases and automatically update. no additional maintenance required for anyone :partying_face:.

why

needed an easy way for my team to install atmos

references

Fix docker build ATMOS_VERSION support v* tags @goruha (#671) ## what * Remove v for ATMOS_VERSION on docker build

why

• Cloudposse changed the tag template policy - now the tag is always prefixed with v, and the tag is passed as the ATMOS_VERSION docker build argument

references

https://github.com/cloudposse/atmos/actions/runs/10391667666/job/28775133282


1
RB avatar

Thoughts on mixins per stack (i.e. per workspace per component)?

I think it would help solve one-off migrations.tf terraform state blocks per workspace

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @RB , let us review this

1
1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) Can we accept https://github.com/cloudposse/atmos/issues/673 from triage?

#673 Allow terraform state migration blocks per stack/workspace

Describe the Feature

I’d like to migrate existing terraform into atmos, without having to run terraform commands, by taking advantage of import blocks.

Usually this can be done easily without workspaces by adding a migrations.tf file in a terraform directory and applying.

This is impossible to do with atmos without manually copying and pasting a file into the directory, then applying, and then removing the migrations file. This works but prevents us from keeping the migration file in git and gives more incentive to run the import commands instead of using import blocks.

Expected Behavior

A method to allow a migrations file per stack.

Perhaps an optional migrations directory within each component terraform with the following convention.

<base component>/migrations/<stage>/<region>/<component>/*.tf

or

<base component>/migrations/<stage>-<region>-<component>.tf

or

<base component>/migrations/<workspace>.tf


Could be a directory of migration files.

Use Case

I recall having issues once with the eks component where a partial apply failure resulted in 8 resources created without being stored in state. These had to be re-imported manually. I used a script at the time. Using a method like this would also ease developer frustration.

Describe Ideal Solution

A migrations directory would need to pull a file or directory of files into the base component directory before the terraform workflow began.

At this point, you could also rename this to be a mixin directory which would allow users to change the terraform code per workspace if needed. The migrations would be one use case of that.

Alternatives Considered

No response

Additional Context

No response

2024-08-16

Gheorghe Casian avatar
Gheorghe Casian

Any thoughts on why atmos.Component is failing when using Geodesic 3.1.0 and atmos 1.86.1?

infrastructure ⨠ ATMOS_LOGS_LEVEL=Trace atmos dc jenkins-efs -s core-ue1-auto

Executing command:
atmos describe component jenkins-efs -s core-ue1-auto
invalid stack manifest 'mixins/automation/jenkins/ecs-service-tmpl.yaml'
template: mixins/automation/jenkins/ecs-service-tmpl.yaml:23:81: executing "mixins/automation/jenkins/ecs-service-tmpl.yaml" at <.stack>: invalid value; expected string

import:
  - path: catalog/terraform/efs/defaults
  - path: catalog/terraform/ecs-service/default
    context:
      stage: "{{ .stage }}"

components:
  terraform:
    jenkins-efs:
      metadata:
        component: efs
        inherits:
          - efs/defaults
      vars:
        name: jenkins-efs
        dns_name: jenkins-efs
        hostname_template: "%s-%s-%s-%s"
        provisioned_throughput_in_mibps: 10
        efs_backup_policy_enabled: true
        additional_security_group_rules:
          # ingress
          - cidr_blocks: []
            source_security_group_id: '{{ (atmos.Component "jenkins-ecs-service" .stack).outputs.ecs_service_sg_id }}'
            from_port: 2049
            protocol: TCP
            to_port: 2049
            type: "ingress"
            description: "Allow Local subnet to access"
    jenkins-ecs-service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/default
      settings:
        depends_on:
          alb:
            component: jenkins-alb
          cluster:
            component: common-ecs-cluster
      vars:
        alb_name: jenkins-alb
        use_lb: true
        ecs_cluster_name: common-ecs-cluster
        health_check_path: /
        health_check_port: 8080
        health_check_timeout: 5
        health_check_interval: 60
        health_check_healthy_threshold: 5
        health_check_unhealthy_threshold: 5
        health_check_matcher: 200,302,301,403
        name: jenkins
        s3_mirroring_enabled: false
        unauthenticated_paths:
          - "/*"
        unauthenticated_priority: 10
        task:
          task_cpu: 2048
          task_memory: 4096
          use_alb_security_group: true
        containers:
          service:
            cpu: 2048
            name: jenkins
            image: jenkins/jenkins:lts
            memory: 4096
            readonly_root_filesystem: false

            map_environment: {}

            map_secrets:
              NEW_RELIC_LICENSE_KEY: "{{ .stage }}/common/newrelic/license_key"

            port_mappings:
              - containerPort: 8080
                hostPort: 8080
                protocol: tcp
              - containerPort: 50000
                hostPort: 50000
                protocol: tcp


exit status 1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like it’s invalid YAML (although I can’t see any issues looking at the code above)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to rename mixins/automation/jenkins/ecs-service-tmpl.yaml to mixins/automation/jenkins/ecs-service-tmpl and import it like this

import:
  - mixins/automation/jenkins/ecs-service-tmpl
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the file is not YAML, Atmos will not check YAML syntax

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

does it happen to you with atmos 1.86.1, or older versions as well?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if renaming the file does not help, then something is wrong with .stack - it does not get evaluated to a string

Gheorghe Casian avatar
Gheorghe Casian

Just updated to 1.86.1 to use atmos.Component. It hasn't happened before. I don't think we are using .stack anywhere else.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please try the previous version again. 1.86.1 did not change anything (just added more debug messages)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the code above is in one file, then the doc applies to your case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you need to update

source_security_group_id: '{{ (atmos.Component "jenkins-ecs-service" .stack).outputs.ecs_service_sg_id }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to use double curly braces + backtick + double curly braces instead of just double curly braces
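For example, the expression from above would become something like this (a sketch of the escaping technique: on the first template evaluation, the backtick-quoted raw string is emitted literally, leaving the inner expression to be evaluated on a later pass):

```yaml
source_security_group_id: '{{`{{ (atmos.Component "jenkins-ecs-service" .stack).outputs.ecs_service_sg_id }}`}}'
```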

Gheorghe Casian avatar
Gheorghe Casian

The same error when using atmos 1.86.0

atmos describe component jenkins-efs -s core-ue1-auto
invalid stack manifest 'mixins/automation/jenkins/ecs-service-tmpl.yaml'
template: mixins/automation/jenkins/ecs-service-tmpl.yaml:23:81: executing "mixins/automation/jenkins/ecs-service-tmpl.yaml" at <.stack>: invalid value; expected string
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, should not be related to atmos version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

review the doc

Gheorghe Casian avatar
Gheorghe Casian

I see, thanks

Gheorghe Casian avatar
Gheorghe Casian

I fixed the context, now it’s failing with a different error

ATMOS_LOGS_LEVEL=Trace atmos dc jenkins-efs -s core-ue1-auto

Executing command:
atmos describe component jenkins-efs -s core-ue1-auto
exit status 137
Gheorghe Casian avatar
Gheorghe Casian

it’s using a lot of memory

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to run it without the Trace log level

Gheorghe Casian avatar
Gheorghe Casian

Same result. It doesn’t work even with more RAM and swap

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how many atmos.Component function calls are you using? (for different components in different stacks; Atmos caches the result for the same component/stack)

2024-08-17

github3 avatar
github3
09:30:24 PM

Improve logging for the template function atmos.Component. Generate backend config and provider override files in atmos.Component function @aknysh (#674)

what

• Improve logging for the template function atmos.Component
• Generate backend config and provider override files in the atmos.Component function
• Update docs

why

• Add more debugging information and fix issues with the initial implementation of the atmos.Component function, where the backend config file backend.tf.json (if enabled in atmos.yaml) and the provider overrides file providers_override.tf.json (if configured in the providers section) were not generated, which prevented the atmos.Component function from returning the outputs of the component when executing in GitHub Actions
• When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template function atmos.Component will log the execution flow and the results of template evaluation - useful for debugging
ATMOS_LOGS_LEVEL=Trace atmos describe component <component> -s <stack>
ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>
ATMOS_LOGS_LEVEL=Trace atmos terraform apply <component> -s <stack>

bump http context timeout for upload to atmos pro @mcalhoun (#656)

what

Increase the maximum http timeout when uploading to atmos pro

why

There are cases, with a large number of stacks and a large number of workflows, where this call can exceed 10 seconds

1

2024-08-19

Junk avatar

https://atmos.tools/quick-start/advanced/configure-terraform-backend/#provision-terraform-s3-backend

In the above example, once I provision the component for the s3 backend, how do I manage the state of the generated stack? I don’t understand the lifecycle. I understand the stacks created after this one is provisioned, but not the stack for the s3 backend itself. Can someone help?

Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the first step in the Cold Start.

You don’t have an S3 backend yet, so you can’t store any state in it.

So to provision the tfstate-backend component, you have to do it locally first (using the TF local state file), then add the backend config for the component (in Atmos, as described in the doc), then run atmos terraform apply again - Terraform will detect the new state backend and offer to migrate the component's state from the local state to S3

Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

See

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with Atmos, you can do the following:

• Comment out the s3 backend config
• Run atmos terraform apply tfstate-backend -s <stack> to provision the component using the local state file
• Uncomment the s3 backend config in YAML
• Run atmos terraform apply tfstate-backend -s <stack> again - Terraform will ask you to migrate the state to S3
• After that, you will have your S3 backend configured, with its own state in S3. All other components will be able to store their state in S3 since it’s already provisioned
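The whole bootstrap can be sketched as a short command sequence (a sketch of the steps above; <stack> is a placeholder for your actual stack name):

```shell
# 1. With the s3 backend commented out in the stack YAML,
#    provision the backend component using a local state file
atmos terraform apply tfstate-backend -s <stack>

# 2. Uncomment the s3 backend config in the stack YAML, then apply again;
#    Terraform detects the new backend and prompts to migrate local state to S3
atmos terraform apply tfstate-backend -s <stack>
```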

Junk avatar

I’ll take note, thank you !

2024-08-20

Dave avatar

Is there any way to stop atmos from processing ALL atmos.Component template functions every time you do a plan / apply?

Example

when I run: atmos.exe terraform plan app/apigw --stack org-test-cc1 --skip-init

it retrieves all of the following:

Found component 'app/lambda-api' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
Found component 'app/s3-exports' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
Found component 'app/ecr' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'
Found component 'app/s3-graphs' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'

but only this one is required for the particular stack / plan I am running

Found component 'app/lambda-api' in the stack 'dhe-test-cc1' in the stack manifest '.../app-split'

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we will review it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for reporting @Dave

Dave avatar

Dammit, I was hoping I was doing something wrong

Dave avatar

It also happens ( at least for me ) when you just run atmos from the cli to get the selection window:

Dave avatar

awesome feature though!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can share your YAML stacks with me (DM me), and I will review the config and let you know how many calls to atmos.Component are configured

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dave please try this release https://github.com/cloudposse/atmos/releases/tag/v1.87.0, it fixes the TUI borders and does not evaluate Go templates in Atmos stack manifests when showing the TUI components and stacks

2024-08-21

Michael avatar
Michael

Quick question, what’s the best way to add flags to an existing Atmos command? I see in the docs that there is an example for overriding an existing Terraform apply with an auto-approve command, but if I wanted to add an additional flag for only showing variables when atmos describe component --light is run, would this be a good way to do it?

  - name: describe
    description: "Shows the configuration for an Atmos component in an Atmos stack: atmos describe component <component> -s <stack>"
    commands:
      - name: component
        description: "Describe an Atmos component in an Atmos Stack"
        arguments:
          - name: component
            description: "Name of the component"
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
          - name: light
            shorthand: l
            description: Only display component variables
            required: false
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        steps:
          - >
            {{ if .Flags.light }} 
            atmos describe component {{ .Arguments.component }} -s {{ .Flags.stack }} | yq -r '.vars'
            {{ else }}
            atmos describe component {{ .Arguments.component }} -s {{ .Flags.stack }}
            {{ end }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes that should work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add a new atmos custom command or subcommand

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you also can override an existing command or subcommand (and call the native ones from it)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the overridden ones take precedence

1
Michael avatar
Michael

Awesome, I’ll give it a try!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nit: i would call it -vars if it’s just going to describe vars

1
Michael avatar
Michael

I was inspired by the terraform plan -light for this idea but agreed, that’s definitely more cohesive

Michael avatar
Michael

Strange, the earlier config doesn’t override it, but I’ll try some other things

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

change the name to something else, e.g.

name: component2
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then run atmos describe --help to check if the command is shown

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Fwiw, we’ve used the atmos list namespace for these kinds of customizations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. atmos list vars

Michael avatar
Michael

I like that idea better than what I was thinking. Initially, I thought I would just introduce another command atmos describe component-values, but that just adds another command when we have a good pattern going with our list commands

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, agreed. What I don’t like about overwriting built-in functionality is that it is confusing for end-users what is built-in/native vs extended. If you go to the atmos.tools docs, the customizations will not be documented.

cricketsc avatar
cricketsc

Hello, I had an inquiry about the demo-helmfile example:

Catalog file

Stack file

Helmfile

My understanding is that the stack file pulls in the catalog file and then changes it to be of the “real” type. Then I believe that the helmfile gets included via the key/value pair “component: nginx”. Some of the previous terminology may be off, but I think that’s the general idea.

My question is: are the vars of the catalog entry supposed to be injected into the helmfile’s nginx release? How does the mapping to the nginx release work, and how are they picked up? Does it use the state-values-file? I noticed this empty values section in the helmfile. Is that related?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The values section is a mistake, I believe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It should be vars in the stack configurations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What happens is it deep merges all the vars, in order of imports, then writes a temporary values file, which is passed to helmfile.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please note, the current helmfile implementation is (unfortunately) very EKS specific

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  - atmos validate stacks
  # FIXME: `atmos helmfile apply` assumes EKS
  #- atmos helmfile apply demo -s dev
  - atmos helmfile generate varfile demo -s dev
  - helmfile -f components/helmfile/nginx/helmfile.yaml apply --values dev-demo.helmfile.vars.yaml
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But that also serves to show what atmos helmfile apply does (more or less) under the hood

cricketsc avatar
cricketsc

So those vars in the catalog file get added to the file generated by atmos helmfile generate varfile... and are they not currently being used? I’ve used the {{ .Values.... }} type notation in a helmfile and gotten that to work, but I’m not seeing how the demo helmfile accesses such passed-through values. Thanks!!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos uses helmfile --state-values-file=.... to point the helmfile to the generated values file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also execute atmos helmfile generate varfile <component> -s <stack> to generate the value file and review it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos helmfile generate varfile | atmos

Use this command to generate a varfile for a helmfile component in a stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the generated file is then used in the command helmfile diff/apply --state-values-file=

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos does not know anything about the {{ .Values }} templates in the helm/helmfile manifests. Helm itself will merge the values provided in the --state-values-file file with all the values defined in the Helm/Helmfile templates and value files
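To make that concrete, here is a minimal helmfile sketch (the `image_tag` key is an assumed entry in the Atmos-generated varfile, not from the demo) showing how values passed via --state-values-file surface in the helmfile template:

```yaml
# helmfile.yaml (sketch): values from the file passed with
# --state-values-file are exposed to the template as {{ .Values.* }}
releases:
  - name: nginx
    chart: bitnami/nginx
    values:
      - image:
          # `image_tag` is illustrative; it would come from the generated varfile
          tag: {{ .Values.image_tag | default "latest" }}
```

Running `atmos helmfile generate varfile <component> -s <stack>` first lets you inspect exactly which keys are available to `.Values`.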

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

~@cricketsc I understand your point. That’s an oversight. The demo-helmfile example is not showing any values passed into the helm charts via atmos & helmfile.~

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ll create a task to update that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Actually it does

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
        image:
          tag: "latest"
        service:
          type: ClusterIP
          port: 80
        replicaCount: 1
        ingress:
          enabled: true
          hostname: fbi.com
          paths:
            - /
          extraHosts:
            - name: '*.app.github.dev'
              path: /
            - name: 'localhost'
              path: /
        readinessProbe:
          initialDelaySeconds: 1
          periodSeconds: 2
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        persistence:
          enabled: false
        extraVolumes:
          - name: custom-html
            configMap:
              name: custom-html
        extraVolumeMounts:
          - name: custom-html
            mountPath: /app/index.html
            subPath: index.html
            readOnly: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All of these are passed as the values to the nginx helm chart

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The files in manifest/ are applied with the kustomize extension for helmfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this demo shows how to use atmos + k3s + helmfile + helm + kustomize all together

1
github3 avatar
github3
04:01:48 AM

Update/improve Atmos UX @aknysh (#679)

what & why

• Improve error messages in the cases when the atmos.yaml CLI config file is not found, and if it’s found but points to an Atmos stacks directory that does not exist

image

image

• When executing the atmos command to show the terminal UI, do not evaluate the Go templates in Atmos stack manifests, to make the command faster since the TUI just shows the names of the components and stacks and does not need the components’ configurations
• Fix/restore the TUI borders around the selected columns for the atmos and atmos workflow commands. The BorderStyle functionality was changed in the latest releases of the charmbracelet/lipgloss library, preventing the borders around the selected column from showing

atmos

image

atmos workflow

image

Announce Cloud Posse’s Refarch Docs @osterman (#680)

what

• Announce Cloud Posse’s refarch docs

Add atmos pro stack locking @mcalhoun (#678)

what

• add the atmos pro lock and atmos pro unlock commands

2024-08-22

Stephan Helas avatar
Stephan Helas

I have a dumb question:

I’ve read about schema validation: https://atmos.tools/core-concepts/validate/json-schema

Question: if I check variables in Terraform with input validation, why would I use JSON Schema for components? What would be the advantage (as I’d need to convert Terraform variables into JSON Schema for the most part)?

How do I validate vendoring? This is mentioned but not explained.

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Using Rego policies, you can do much more than just validating Terraform inputs.

You can validate relations between components and stacks, not only variables of one component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can check/validate such things as

# Check if the component has a `Team` tag

# Check if the Team has permissions to provision components in an OU (tenant)

# Check that the `app_config.hostname` variable is defined only once for the stack across all stack config files

# This policy checks that the 'bar' variable is not defined in any of the '_defaults.yaml' Atmos stack config files

# This policy checks that if the 'foo' variable is defined in the 'stack1.yaml' stack config file, it cannot be overridden in 'stack2.yaml'

# This policy shows an example on how to check the imported files in the stacks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which you would not be able to do in Terraform. You can use both Terraform input validation for TF vars and Atmos OPA policies for more advanced validation
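As a rough illustration of the first check above, an Atmos OPA policy is a Rego file whose errors rule collects validation messages. This is a sketch: the package name and errors rule follow the Atmos validation convention, while the tag path under `input.vars` is an assumption for illustration:

```rego
# Sketch of an Atmos OPA validation policy.
# Atmos passes the component configuration as `input` and reports
# every message collected by the `errors` rule as a validation failure.
package atmos

# Check that the component defines a `Team` tag
errors[message] {
    not input.vars.tags.Team
    message = "All components must define a `Team` tag"
}
```

The same policy file can be attached to many components via their stack settings, which is how rules get reused.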

Stephan Helas avatar
Stephan Helas

ok. thx for the explanation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think @Andriy Knysh (Cloud Posse) mostly answered it, but to add my 2c:
• Atmos OPA policies allow you to enforce different policies for the same components depending on where/how they are used. You can imagine using different policies in sandbox than in production, for teamA vs teamB, for compliance or out-of-scope environments. As you encourage reuse of components, it doesn’t necessarily mean the policies are the same.
• They are not mutually exclusive. Use tools like tfsec and conftest to enforce policies on the Terraform code itself.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


How do i validate vendoring? This is mentioned but not explained.
@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas currently we support OPA policies for Atmos stacks only, not for vendoring.

Can you provide more details on what you want to validate in vendoring (I suppose you want to validate the vendoring manifest files)? (We will review it after we understand all the details.)

thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What about JSON schema validation of vendoring file?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not currently in Atmos, but we can add it

Stephan Helas avatar
Stephan Helas

regarding vendoring : i was just wondering, because it is mentioned here:

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse)

https://atmos.tools/core-concepts/validate/

 which can validate the schema of configurations such as stacks, workflows, and vendoring manifests
Stephan Helas avatar
Stephan Helas

regarding OPA ( i never used it before).

How can I reuse rules for different components (for example, check for tags or check for a name pattern)?

Do I have to use `errors[message] {}` only, or can I use any OPA rule?

2024-08-23

2024-08-24

Ryan Ernst avatar
Ryan Ernst

Hi everyone! I have been excitedly exploring Atmos recently and have a few questions. Overall, it has been relatively smooth so far, but there are a few things I am unclear on:

• Sharing state between components
◦ I get the impression that template functions with atmos.component are the current preferred mechanism over remote-state. Is this correct?
▪︎ A section in the Share Data Between Components page titled “Template Functions vs Remote State” (or something similar) would be very helpful.
▪︎ If someone could distill their knowledge/thoughts on the tradeoffs, I would be happy to make a PR to add a distilled version to the docs!

• What is Atmos Pro?
◦ Is this just an optional add-on? Or will it be a subscription service?
▪︎ My initial thought was that it was a paid subscription service, but I was able to sign up, install the GH app, add a repo, and did not see anything related to pricing.
▪︎ Then again, I can’t find a public repo for it, which makes me think it is destined to be a subscription service, but it is just brand new.
◦ To share some perspective, when I saw this, it tarnished my excitement about Atmos a bit.
▪︎ I have used and trusted Cloud Posse TF modules for a long time and really enjoy the business model.
▪︎ Seeing Atmos Pro immediately made me think there must be some known painful parts of Atmos that will require me to add another subscription to fix. This didn’t follow the Cloud Posse ethos that I was used to, and I am left with a nagging concern now that:
• Atmos will have new exciting features directed towards Atmos Pro instead of Atmos
• I will eventually run into some pain points of Atmos, which is why Atmos Pro exists.

Share Data Between Components | atmos

Share data between loosely-coupled components in Atmos

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Hi @Ryan Ernst, all good questions

Share Data Between Components | atmos

Share data between loosely-coupled components in Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos.Component is one way to get a component remote state. The other way is using the remote-state Terraform module (with configuration in Atmos).

As always, they both have their pros and cons. Using them depends on your use-cases.

atmos.Component - you do everything in stack manifests in YAML, it works in almost all the cases. It’s a relatively new addition to Atmos, and we are improving it to be able to assume roles to use it in CI/CD (e.g. GH actions).

remote-state TF module - everything is done in TF (including role assumption to access AWS and the TF state backend). Faster Atmos stack processing (no need to execute terraform output for all atmos.Component functions in the stack manifests).

We can chat about the pros and cons of these two ways more, but the above is a short description.
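For reference, a minimal sketch of the atmos.Component approach in a stack manifest (the component name, the consuming component, and the output name are illustrative):

```yaml
# Stack manifest (sketch): read the `vpc_id` output of the `vpc`
# component in the same stack via the atmos.Component template function
components:
  terraform:
    app:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
```

Under the hood, each atmos.Component call runs terraform output for the referenced component, which is why heavy use of it slows down stack processing.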

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Regarding Atmos Pro, it’s not related to the Atmos CLI, it’s a completely diff product (so no, it’s not about the pain points in Atmos). It’s about pain points in CI/CD.

It’s Cloud Posse’s new solution for CI/CD with GitHub Actions. It will run Atmos CLI commands and execute CI/CD workflows, and show the results in a UI.

It’s still in development.

@Erik Osterman (Cloud Posse) can provide more details on it

Ryan Ernst avatar
Ryan Ernst

Thanks for the quick response!

My interpretation so far on remote-state vs atmos.component is that I would prefer atmos.component. atmos.component has less coupling with atmos, making my TF modules more portable.

From your comment it sounds like under the hood, atmos.component is using terraform output as a gomplate (I assume) data source, and this requires role assumption so that in CI/CD we have access to the needed state backends.

So eventually Atmos would create and manage these roles for me, but I could probably still use Github Actions now if I managed the roles myself? Or am I just unable to use GHA right now if I use atmos.component?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, exactly

For terraform output, the function uses a Go lib from HashiCorp, which executes terraform output in code (gomplate is not involved here)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll add role assumption to Atmos to be able to use atmos.Component in GH actions. But if your action assumes the correct role to access the TF state backend, you can use the atmos.Component function now

1
Ryan Ernst avatar
Ryan Ernst

My team is currently at Stage 8 right now so CI/CD is one of the things I am evaluating. We use Github Actions already, and already have GH OIDC set up for assuming roles, but we don’t have any workflows for IaC yet.

The rest of my questions might be more for @Erik Osterman (Cloud Posse) and what the experience would be like using Atmos with GH Actions with and without Pro.

Stage 8: Team Challenges | atmos

Team expands and begins to struggle with Terraform

Ryan Ernst avatar
Ryan Ernst

I am also trying to better understand how Atlantis fits. My current assumption around Atlantis is that it:

• Does not solve the problems that Atmos Pro does

• Adds the ability to plan/apply IaC changes from GH PR comments.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for the feedback, will get back later this weekend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I agree that a chapter like:
“Template Functions vs Remote State”
would be a good one. There are definitely tradeoffs. One that isn’t immediately obvious is that YAML stack configs relying on templating must first evaluate all the template functions. If you rely on a lot of atmos.Component function calls, it will slow down your stacks because they must be computed in order to render the final configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


What is Atmos Pro?
Atmos Pro is an exciting new project we’re working on, though we haven’t made an official announcement yet. The goal is to enhance the GitHub Actions experience by overcoming some of the current limitations with matrixes and concurrency groups, which don’t behave quite like queues in GitHub Actions. We’re building this to help some of our customers using GitHub Enterprise scale their parallelism beyond what’s possible today.

For example, GitHub matrixes are capped at 256 jobs per workflow run, which can be a roadblock for larger workflows. We’ve managed to break through this with our own solution, the https://github.com/cloudposse/github-action-matrix-extended, but then we encountered issues with the GitHub UI itself—it tends to crash when matrixes get too big! We’re still developing and refining Atmos Pro, and we plan to eventually roll it out more widely. Stay tuned for more updates!

cloudposse/github-action-matrix-extended

GitHub Action that when used together with reusable workflows makes it easier to workaround the limit of 256 jobs in a matrix

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Is this just an optional add-on? or will it be a subscription service?
Atmos Pro will be available as a SaaS offering once it’s ready for release, though the name may change. It’s not required but as I alluded to earlier, it solves key GitHub Actions limitations for better scaling with a GitHub App. We’re excited to bring this to the community soon—feel free to DM me for more details!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


My current assumption around Atlantis is that it:
Atlantis is great, but it’s a much simpler platform. Atmos GitHub Actions have more or less feature parity with Atlantis today, and adds a ton of capabilities not available in Atlantis, such as drift detection, failure handling and better support for monorepos with parallelism. Plus, since it’s built with GitHub Actions, it works with your existing GHA infrastructure and you can customize the workflows. Bear in mind, that Atlantis was born in an era long before GHA.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I have used and trusted Cloud Posse TF modules for a long time and really enjoy the business model.
We appreciate that and the support! We continue to believe in Open Source as a great way to build a community and provide a transparent way of distributing software. We have literally hundreds of projects and we do our best to support them. Unfortunately, open source by itself is not a business model.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Seeing Atmos Pro immediately made me think there must be some known painful parts of Atmos that will require me to add another subscription to fix.
The painful parts we are fixing are not inherent to Atmos, but to GitHub Actions, when introducing large-scale concurrency between jobs which have dependencies and need approvals using GitHub Environment Protection Rules.
This didn’t follow the Cloud Posse ethos that I was used to, and I am left with a nagging concern now that:
Supporting open source costs real money, and while very few companies contribute financially, there needs to be a commercial driver behind it unless it’s under a nonprofit like CNCF/FSF, for it to succeed. We value every PR and contribution, but that doesn’t foot the bill. Consulting is a poor business model too, that scales linearly at best and is very susceptible to market conditions. In my view, we give away a ton of value for free, more than most SaaS businesses, but we also need to sustain our business somewhere along the way.
Atmos will have new exciting features directed towards Atmos Pro instead of Atmos
Almost certainly, there will be exciting features that are only possible to do with this offering. But only if you need to do those things. We’re still giving away all of our modules, atmos, and documentation for our reference architecture.

docs.cloudposse.com
I will eventually run into some pain points of Atmos, which is why Atmos Pro exists.
This is probably true. But I’m curious—at what point does it make sense to financially support the companies behind the open source who overall share your ethos?

1
Ryan Ernst avatar
Ryan Ernst

@Erik Osterman (Cloud Posse) Thank you for the very thorough response! This all makes sense, and I appreciate you taking the time to address my questions.

I fully agree that Cloud Posse gives away an amazing amount of value for free! I have learned a lot from the conventions and reference architectures.

From your response, my main concern is definitely alleviated. We are not operating at an enterprise level and might have 10 concurrent jobs at any given time, nowhere near 256.

Again, thank you for the detailed response! I will continue my PoC with Atmos and will get back to you if I have any other questions!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Ryan Ernst! I appreciate your message and speaking up.

Yep, if you can get away with a lower concurrency and do not need approval gates for ordered dependencies (e.g. approve plan A, then apply A, approve Plan B, then apply B, approve plan C, then apply C), you can use everything we put forth.

1

2024-08-25

2024-08-26

Miguel Zablah avatar
Miguel Zablah

Hi everyone! I have a question: can I overwrite a config in CI? For local development I use aws_profile to authenticate to AWS, but in CI I would like to use OIDC. However, it looks like it’s looking for the profile when I use cloudposse/github-action-atmos-affected-stacks@v4 with the atmos config for roles under ./rootfs/usr/local/etc/atmos/

is there a way to maybe set aws_profile to null?

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor Rodionov can you please review the GH action?

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov have you had the chance to review it?

Igor Rodionov avatar
Igor Rodionov

We had a conversation with @Miguel Zablah a month ago. I showed him a workaround because, actually, we have poor support for AWS auth with profiles. So @Gabriela Campana (Cloud Posse) you can consider this request addressed

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Thanks, Igor

2024-08-28

Miguel Zablah avatar
Miguel Zablah

Hey guys I found a potential bug on the atmos plan workflow for github actions,

You guys have this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L110

It will run atmos describe <component> -s <stack>, but this requires authentication, which only happens later, in this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L268

I have fixed this by doing the authentication before the action runs, in the same job. That looks to do the trick, but maybe it’s something that should be documented or fixed in the action

Miguel Zablah avatar
Miguel Zablah

For anyone who has this issue, here is my fix:

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set AWS Region from YAML
        id: aws_region
        run: |
          aws_region=$(yq e '.integrations.github.gitops["artifact-storage"].region' ${{ env.ATMOS_CONFIG_PATH }}atmos.yaml)
          echo "aws-region=$aws_region" >> $GITHUB_OUTPUT

      - name: Set Terraform Plan Role from YAML
        id: aws_role
        run: |
          terraform_plan_role=$(yq e '.integrations.github.gitops.role.plan' ${{ env.ATMOS_CONFIG_PATH }}atmos.yaml)
          echo "terraform-plan-role=$terraform_plan_role" >> $GITHUB_OUTPUT

      - name: Setup AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: ${{ steps.aws_region.outputs.aws-region }}
          role-to-assume: ${{ steps.aws_role.outputs.terraform-plan-role }}
          role-session-name: "atmos-tf-plan"
          mask-aws-account-id: "no"

      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v3
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}
          atmos-config-path: ./rootfs/usr/local/etc/atmos/
          atmos-version: ${{ env.ATMOS_VERSION }}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov @Yonatan Koren

Yonatan Koren avatar
Yonatan Koren

@Miguel Zablah thank you for looking into this — however AFAIK the root of the issue is that we run https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/85cbbaca9e11b1f932b5d49eebf0faa63832e980/action.yml#L111 prior to obtaining credentials. The cloudposse/github-action-atmos-get-setting which I just mentioned is doing atmos describe component (link). Template processing needs to read the Terraform state because template functions such as atmos.Component are able to read outputs from other components.

So, I have a PR to make --process-templates conditional in cloudposse/github-action-atmos-get-setting, and it should be getting merged today.

Then, we will update cloudposse/github-action-atmos-terraform-plan to move authentication earlier in the chain, as you’ve described, for those who need template processing, and also add the option to disable it.

Yonatan Koren avatar
Yonatan Koren

For further context this authentication requirement comes from https://github.com/cloudposse/atmos/releases/tag/v1.86.2

Admittedly, it’s unexpected for a patch release to introduce something as breaking as this. But actually 1.86.2 was fixing the feature not working in the first place, i.e. it was not reading from the state when it should have been. And when we released atmos 1.86.2, some of our actions started breaking due to the authentication requirement I’ve described above.

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah I will fix the action in an hour

1
Miguel Zablah avatar
Miguel Zablah

I see thanks for the explanation and awesome that you already have a fix!!

I’m also working on adding the apply workflow and it looks like it has the same issue so if that one can have the same fix it will be awesome!

1
Yonatan Koren avatar
Yonatan Koren

@Igor Rodionov if you’d like, you can take over this draft PR I made last week to move up authentication earlier in the steps: https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/86

#86 fix: assume IAM role before running `cloudposse/github-action-atmos-get-setting`

what

• assume IAM role before running cloudposse/github-action-atmos-get-setting

why

As of atmos 1.86.2, when atmos.Component began actually retrieving the TF state, it broke cloudposse/github-action-atmos-affected-stacks, which we resolved as part of this release of the aforementioned action: we just had the action assume the IAM role, and that was it. However, in cases where this function is used, appropriate IAM credentials are also a requirement for cloudposse/github-action-atmos-get-setting:

Run cloudposse/github-action-atmos-get-setting@v1 template: all-atmos-sections:163:26: executing “all-atmos-sections” at : error calling Component: exit status 1

Error: error configuring S3 Backend: IAM Role (arn:aws:iam::xxxxxxxxxxxx:role/xxxx-core-gbl-root-tfstate) cannot be assumed.

There are a number of possible causes of this - the most common are:

  • The credentials used in order to assume the role are invalid
  • The credentials do not have appropriate permission to assume the role
  • The role ARN is not valid

Error: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors

references

https://github.com/cloudposse/atmos/releases/tag/v1.86.2

Yonatan Koren avatar
Yonatan Koren

But we should also add a new input to make template processing conditional after https://github.com/cloudposse/github-action-atmos-get-setting/pull/58 is merged — i.e. add this input to both github-action-atmos-terraform-plan and github-action-atmos-terraform-apply and relay it to cloudposse/github-action-atmos-get-setting.

#58 feature: add `process-templates` input

what

• Add process-templates input as the value of the --process-templates flag to pass to atmos describe component.
• Add appropriate test cases.

why

• Some template functions such as atmos.Component require authentication to remote backends. We should allow disabling this.

references

Run cloudposse/github-action-atmos-get-setting@v1 template: all-atmos-sections:163:26: executing “all-atmos-sections” at : error calling Component: exit status 1

Error: error configuring S3 Backend: IAM Role (arn:aws:iam::xxxxxxxxxxxx:role/xxxx-core-gbl-root-tfstate) cannot be assumed.

There are a number of possible causes of this - the most common are:

  • The credentials used in order to assume the role are invalid
  • The credentials do not have appropriate permission to assume the role
  • The role ARN is not valid

Error: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I’ve kept the default value of process-templates as true in order to ensure backwards compatibility.
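Once merged, disabling template processing would look something like this in a workflow step. This is a sketch based on the input described in the PR above; the component, stack, and settings-path values are illustrative:

```yaml
# Workflow step (sketch): read a component setting without evaluating
# Go templates, so no backend credentials are required at this point
- name: Get component setting
  uses: cloudposse/github-action-atmos-get-setting@v1
  with:
    component: my-component   # illustrative
    stack: dev                # illustrative
    settings-path: settings.github.actions_enabled
    process-templates: "false"
```

With process-templates left at its default of true, the step behaves exactly as before the change.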

Igor Rodionov avatar
Igor Rodionov

@Yonatan Koren I made a comment

Igor Rodionov avatar
Igor Rodionov

pls address it and we will merge

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov @Yonatan Koren On this draft PR: https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/86/files the if condition needs to change, since it’s expecting a value from the get-settings step that has not been run:

if: ${{ fromJson(steps.component.outputs.settings).enabled }}
1
Miguel Zablah avatar
Miguel Zablah

sorry if this was a known issue, I just saw it hehe

1
Michael Dizon avatar
Michael Dizon

@Igor Rodionov do we have to do anything to resolve this? i’m seeing it now in my gh workflow

Igor Rodionov avatar
Igor Rodionov

@Michael Dizon I just merged the PR with the fix. Could you try it in a couple of minutes

Michael Dizon avatar
Michael Dizon

ok

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hey! I’m trying to use the plan action for Terraform and it’s not generating any output. I’m getting:

Run cloudposse/github-action-atmos-get-setting@v1
  with:
    component: aws-to-github
    stack: dev
    settings-path: settings.github.actions_enabled
  env:
    AWS_ACCOUNT: << AWS ACCOUNT >>
    ENVIRONMENT: dev
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/infra-identity/infra-identity/.github/config/atmos-gitops.yaml
result returned successfully
Run if [[ "false" == "true" ]]; then
  if [[ "false" == "true" ]]; then
    STEP_SUMMARY_FILE=""
  
    if [[ "" == "true" ]]; then
      rm -f ${STEP_SUMMARY_FILE}
    fi
  
  else
    STEP_SUMMARY_FILE=""
  fi
  
  
  if [ -f ${STEP_SUMMARY_FILE} ]; then
    echo "${STEP_SUMMARY_FILE} found"
  
    STEP_SUMMARY=$(cat ${STEP_SUMMARY_FILE} | jq -Rs .)
    echo "result=${STEP_SUMMARY}" >> $GITHUB_OUTPUT
  
    if [[ "false" == "false" ]]; then
      echo "Drift detection mode disabled"
      cat $STEP_SUMMARY_FILE >> $GITHUB_STEP_SUMMARY
    fi
  else 
    echo "${STEP_SUMMARY_FILE} not found"
    echo "result=\"\"" >> $GITHUB_OUTPUT
  fi
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    AWS_ACCOUNT: 166189870787
    ENVIRONMENT: dev
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/infra-identity/infra-identity/.github/config/atmos-gitops.yaml
 found
Drift detection mode disabled

my atmos-gitops.yaml file looks like this

integrations:
  github:
    gitops:
      terraform-version: 1.9.0
      infracost-enabled: false
      role:
        plan: arn:aws:iam::${AWS_ACCOUNT}:role/iam-manager-${ENVIRONMENT}
        apply: arn:aws:iam::${AWS_ACCOUNT}:role/iam-manager-${ENVIRONMENT}

my atmos.yaml file looks like

base_path: "./"

components:
  terraform:
    base_path: "resource/components/terraform/aws"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true

stacks:
  base_path: "resource/stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "deploy/*/_defaults.yaml"
  name_pattern: "{environment}"

logs:
  file: "/dev/stderr"
  level: Info

settings:
  github:
    actions_enabled: true

# <https://pkg.go.dev/text/template>
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
... yadda yadda template stuff

any thoughts? I feel like I must be missing a flag somewhere - I tried setting settings.github.actions_enabled to true but no luck

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov

hey! I’m trying to use the plan action for Terraform and it’s not generating any output I’m getting

Run cloudposse/github-action-atmos-get-setting@v1
  with:
    component: aws-to-github
    stack: dev
    settings-path: settings.github.actions_enabled
  env:
    AWS_ACCOUNT: << AWS ACCOUNT >>
    ENVIRONMENT: dev
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/infra-identity/infra-identity/.github/config/atmos-gitops.yaml
result returned successfully
Run if [[ "false" == "true" ]]; then
  if [[ "false" == "true" ]]; then
    STEP_SUMMARY_FILE=""
  
    if [[ "" == "true" ]]; then
      rm -f ${STEP_SUMMARY_FILE}
    fi
  
  else
    STEP_SUMMARY_FILE=""
  fi
  
  
  if [ -f ${STEP_SUMMARY_FILE} ]; then
    echo "${STEP_SUMMARY_FILE} found"
  
    STEP_SUMMARY=$(cat ${STEP_SUMMARY_FILE} | jq -Rs .)
    echo "result=${STEP_SUMMARY}" >> $GITHUB_OUTPUT
  
    if [[ "false" == "false" ]]; then
      echo "Drift detection mode disabled"
      cat $STEP_SUMMARY_FILE >> $GITHUB_STEP_SUMMARY
    fi
  else 
    echo "${STEP_SUMMARY_FILE} not found"
    echo "result=\"\"" >> $GITHUB_OUTPUT
  fi
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    AWS_ACCOUNT: 166189870787
    ENVIRONMENT: dev
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/infra-identity/infra-identity/.github/config/atmos-gitops.yaml
 found
Drift detection mode disabled

my atmos-gitops.yaml file looks like this

integrations:
  github:
    gitops:
      terraform-version: 1.9.0
      infracost-enabled: false
      role:
        plan: arn:aws:iam::${AWS_ACCOUNT}:role/iam-manager-${ENVIRONMENT}
        apply: arn:aws:iam::${AWS_ACCOUNT}:role/iam-manager-${ENVIRONMENT}

my atmos.yaml file looks like

base_path: "./"

components:
  terraform:
    base_path: "resource/components/terraform/aws"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true

stacks:
  base_path: "resource/stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "deploy/*/_defaults.yaml"
  name_pattern: "{environment}"

logs:
  file: "/dev/stderr"
  level: Info

settings:
  github:
    actions_enabled: true

# https://pkg.go.dev/text/template
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
... yadda yadda template stuff

any thoughts? I feel like I must be missing a flag somewhere - I tried setting settings.github.actions_enabled to true but no luck

Sara Jarjoura avatar
Sara Jarjoura

I put these in the orgs defaults.yaml file and something is happening

settings:
  enabled: true
  github:
    actions_enabled: true
Sara Jarjoura avatar
Sara Jarjoura

alright I got it to try to plan, it’s failing on not finding the role

is there any way to pass in the role dynamically? I want the aws account and environment to be sourced from the environment the Github Action is run from

Sara Jarjoura avatar
Sara Jarjoura

with the v2 plan action that is

Sara Jarjoura avatar
Sara Jarjoura

I see how I would do it with v1

Igor Rodionov avatar
Igor Rodionov

@Sara Jarjoura could you pls post me the whole github actions logs ?

Sara Jarjoura avatar
Sara Jarjoura

sure - here you go

Sara Jarjoura avatar
Sara Jarjoura

thank you

Igor Rodionov avatar
Igor Rodionov

how come your AWS role looks like this?

Igor Rodionov avatar
Igor Rodionov
arn:aws:iam::{{ .vars.account }}:role/iam-manager-{{ .vars.environment }}
Igor Rodionov avatar
Igor Rodionov

did you fork the action?

Sara Jarjoura avatar
Sara Jarjoura

no I didn’t I’m trying to source the role from variables

Sara Jarjoura avatar
Sara Jarjoura

I’d like to grab the AWS Account and environment from the Github Action environment if possible

Sara Jarjoura avatar
Sara Jarjoura

how I got there was adding this to my atmos.yaml

integrations:
  github:
    gitops:
      terraform-version: 1.9.0
      infracost-enabled: false
      artifact-storage:
        region: "{{ .vars.region }}"
      role:
        plan: "arn:aws:iam::{{ .vars.account }}:role/iam-manager-{{ .vars.environment }}"
        apply: "arn:aws:iam::{{ .vars.account }}:role/iam-manager-{{ .vars.environment }}"
Igor Rodionov avatar
Igor Rodionov

ok

Sara Jarjoura avatar
Sara Jarjoura

how am I supposed to do that?

Sara Jarjoura avatar
Sara Jarjoura

with v2 I mean - with v1 I could pass these into the plan action

Sara Jarjoura avatar
Sara Jarjoura

I see v1 is deprecated though

Sara Jarjoura avatar
Sara Jarjoura

I can’t overwrite the atmos.yaml file since I can’t stop the plan action from doing a checkout

Sara Jarjoura avatar
Sara Jarjoura

if I wanted to use envsubst for example to append the required info

Igor Rodionov avatar
Igor Rodionov

actually I believe that atmos.yaml integrations.github.gitops.* does not support any templating

Sara Jarjoura avatar
Sara Jarjoura

ok, so there’s no way to use v2 for what I want to do?

Sara Jarjoura avatar
Sara Jarjoura

would it be possible to make the checkout step optional so I could append some environment sourced variables to the atmos.yaml?

Sara Jarjoura avatar
Sara Jarjoura

then I could do the checkout myself, insert fuzzy stuff here, then use the plan action

Igor Rodionov avatar
Igor Rodionov

I know the workaround you can use

Igor Rodionov avatar
Igor Rodionov

pls check workaround that we use for testing

Igor Rodionov avatar
Igor Rodionov

it solves the same problem - overriding atmos.yaml for test purposes

Sara Jarjoura avatar
Sara Jarjoura

oh so I can use APPLY_ROLE ?

Sara Jarjoura avatar
Sara Jarjoura

or PLAN_ROLE ?

Igor Rodionov avatar
Igor Rodionov

only if you will specify them in your atmos.yaml

Sara Jarjoura avatar
Sara Jarjoura

gotcha

Sara Jarjoura avatar
Sara Jarjoura

ok let me try that

Sara Jarjoura avatar
Sara Jarjoura

oh but won’t the checkout still run over this?

Sara Jarjoura avatar
Sara Jarjoura

oh never mind - I see you’re setting the atmos path to something other than the original atmos file

1
Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov @Yonatan Koren I see you update the apply as well should we delete the second aws authentication? https://github.com/cloudposse/github-action-atmos-terraform-apply/blob/579c9c99f8ebabc15fbffa5868e569017f73497e/action.yml#L181

    - name: Configure State AWS Credentials
Igor Rodionov avatar
Igor Rodionov

no, those are different authentications

Igor Rodionov avatar
Igor Rodionov

one to pull plan file

Igor Rodionov avatar
Igor Rodionov

another to apply changes

Miguel Zablah avatar
Miguel Zablah

aah perfect! I missed this!

Miguel Zablah avatar
Miguel Zablah

thanks!

Sara Jarjoura avatar
Sara Jarjoura

is it possible to import a mixin for only one component?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Sara Jarjoura avatar
Sara Jarjoura

Any thoughts about supporting imports at a component level rather than stack level? Example - I have 1 stack (a management group in Azure) that has 3 subscriptions (eng, test, prod), and I have mixins for each “environment” (eng/test/prod). I want to import the particular mixin with the particular subscription within the same stack

Sara Jarjoura avatar
Sara Jarjoura

in my case I am using the mixin to allow a complex variable to be inherited by two environments but not by a third

Sara Jarjoura avatar
Sara Jarjoura

this variable should only be passed into one component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it depends on the scope. If you import a manifest with global variables, those global vars will be applied to all components in the stacks. If you import a manifest with component definitions (components.terraform.....), then the config will be applied only to those components
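To illustrate the scoping difference, here is a hypothetical mixin manifest (the file name and the `logging_config` variable are invented for illustration). Defining the config under `components.terraform.<name>` scopes it to that one component instead of applying it globally:

```yaml
# mixins/eng-logging.yaml (hypothetical)

# Global scope: if these vars were defined at the top level, every component
# in any stack that imports this file would receive them:
#
# vars:
#   logging_config:
#     enabled: true

# Component scope: this config merges only into the `s3-bucket` component
components:
  terraform:
    s3-bucket:
      vars:
        logging_config:
          enabled: true
          retention_days: 30
```

A stack that imports this file gets `logging_config` only on `s3-bucket`; other components in the stack are unaffected.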

Sara Jarjoura avatar
Sara Jarjoura

ah ok - let me try that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I suggest you do the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• In catalog, add a default component with all the default values

• Import the manifest only in the stacks (environments) where it’s needed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can do this in 2 diff ways:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• In the catalog, define the same Atmos component name as defined inline in the stacks. When you import it, the parts from the different manifests will be combined into the final values for the component

• Or, in the catalog, define a default base abstract component with the default values, then import it, and inherit the component in the stack from the default component
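A sketch of the second mechanism (an abstract base component plus inheritance); the file paths and component names here are placeholders, not from the thread:

```yaml
# catalog/s3-bucket/defaults.yaml (hypothetical path)
components:
  terraform:
    s3-bucket/defaults:
      metadata:
        # Abstract components cannot be provisioned directly;
        # they exist only to supply default values
        type: abstract
      vars:
        enabled: true
        versioning_enabled: true

---
# stacks/eng.yaml (hypothetical): import the catalog file, then inherit from it
import:
  - catalog/s3-bucket/defaults

components:
  terraform:
    my-bucket:
      metadata:
        component: s3-bucket        # the Terraform component folder
        inherits:
          - s3-bucket/defaults      # pull in the abstract defaults
      vars:
        name: my-bucket
```

Atmos deep-merges the inherited values with anything set inline, so only the overrides need to appear in the stack.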

Sara Jarjoura avatar
Sara Jarjoura

ooh nice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding
Any thoughts about supporting imports at a component level rather than stack level?

Sara Jarjoura avatar
Sara Jarjoura

yes lovely - let me give that a go

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s already supported by the 2 mechanisms I explained above

Sara Jarjoura avatar
Sara Jarjoura

this is far clearer than the thread I read btw

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if you need help (you can DM me your config)

Sara Jarjoura avatar
Sara Jarjoura

awesome - super appreciated!

sweetops1
RB avatar

How do you folks integrate opensource scanning tools into atmos such as trivy/tfsec, checkov, tflint, etc ?

RB avatar

I couldn’t find a doc on this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

True, we have not written up anything on this.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So at this time, it’s not something we’ve entertained as being built into atmos, but rather something you can do today with the standard github actions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is there something you’re struggling with?

RB avatar

Yes! I’m unsure how to do this. I’d like to do it with reviewdog

RB avatar

I was hoping maybe you folks already had prior art for it

RB avatar

(even if not reviewdog, just the cli tool itself would be more than enough to start with)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note: a lot of what reviewdog did isn’t needed anymore, as the underlying tools support the SARIF format

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe you are familiar with sarif

RB avatar

oh good to know. I can use the sarif output to write inline comments directly into a github pr ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup

RB avatar

without codeql ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrmm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe not

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But one sec

RB avatar

codeql is very expensive if you’re not open source

RB avatar

anyway, reviewdog is more of a second thought. the main issue im having is trying to bring in tflint and other SAST tools, but maybe we just need to manually add them to our GHA CI pipeline.

RB avatar

im lazy and wanted to see if i can copy and paste a quick atmos-friendly gha if you folks had one already

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would look for an action like this https://github.com/Ayrx/sarif_to_github_annotations

Ayrx/sarif_to_github_annotations
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That was just a quick google and it looks unmaintained

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but basically something that takes the SARIF format and creates annotations. Also, check the tools. They may already create annotations

RB avatar

hmm ok thank you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

tflint was the tool I was thinking of that I recently saw supported sarif

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
-f, --format=[default|json|checkstyle|junit|compact|sarif]    Output format
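For example, a workflow step could have tflint emit SARIF and hand it off for annotation (the step layout and paths here are illustrative, not from the thread; note that `github/codeql-action/upload-sarif` requires code scanning to be available on the repo, which the thread flags as a cost concern):

```yaml
# Hypothetical GitHub Actions steps
- name: Run tflint with SARIF output
  run: tflint --chdir=components/terraform/s3-bucket --format=sarif > tflint.sarif
  # tflint exits non-zero on findings; continue so the upload step still runs
  continue-on-error: true

- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: tflint.sarif
```

If code scanning isn’t available, a SARIF-to-annotations action can replace the upload step.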
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyways, the types of problems you may run into are:

• needing the tfvars files

• needing proper roles for terraform init

• properly supporting a mono repo

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think the solution will depend on which tool.

RB avatar

Ah ok thanks a lot. I will try and see what issues I hit. If I succeed, i’ll post back

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, if you’re using dynamic backends, dynamic providers, that can complicate things

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I want to document how to best do this, so happy to work with you on it

1
RB avatar

Yes. I was thinking I just need to run the atmos generate commands or an atmos terraform init command, then run the SAST scanners, then run the atmos terraform plan workflows

RB avatar

checkov and other tools also support scanning the planfile itself

RB avatar

so the plan can be saved to a planfile, then throw checkov and similar tools at it

RB avatar

then finally allow the apply
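Strung together, that sequence might look roughly like this (the component and stack names are placeholders, and the exact flags are an assumption, not tested config):

```yaml
# Hypothetical CI steps: plan -> scan the planfile -> apply
- name: Plan to a planfile
  run: atmos terraform plan s3-bucket -s ue1-dev -- -out=plan.tfplan

- name: Scan the planfile
  run: |
    cd components/terraform/s3-bucket
    # Convert the binary planfile to JSON, which checkov can scan directly
    terraform show -json plan.tfplan > plan.json
    checkov --framework terraform_plan -f plan.json

- name: Apply   # only reached if the scan step passed
  run: atmos terraform apply s3-bucket -s ue1-dev --from-plan
```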

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using our GitHub actions for atmos?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then it sounds like wherever you have the plan storage action, you could easily integrate checks on the planfile

1
RB avatar

This one looks promising for sarif and trivy

https://github.com/crazy-max/ghaction-container-scan

crazy-max/ghaction-container-scan

GitHub Action to check for vulnerabilities in your container image

RB avatar

oh it uses codeql to upload the sarif again… ugh

RB avatar

I guess it’s better to just go with reviewdog for this since it supports the annotations out of the box

https://github.com/reviewdog/action-trivy

1
RB avatar

I tried adding annotations using reviewdog trivy. I can see the outputted annotations in json but cannot see the annotations in the pr itself.

I switched to the older reviewdog tfsec and it worked out of the box

https://github.com/nitrocode/reviewdog-trivy-test/pull/1/files

2024-08-29

RB avatar

Related to the above, one thing that came up is inline suppressions with trivy/tfsec and how to do that on a stack-input level instead of in the base-component.

RB avatar

https://aquasecurity.github.io/trivy/v0.54/docs/scanner/misconfiguration/#skipping-resources-by-inline-comments

One thing that gets tricky is since we’re using workspaces here in atmos, you can have one set of inputs/tfvars that triggers trivy/tfsec issues and another set of inputs that doesn’t, but the in-line suppression may have to be done in the terraform code itself which suppresses it for all inputs.

Two kinds of tfvars: the prod one doesn’t cause the issue.

# services-prod-some-bucket.tfvars
logging_enabled = true

The dev one does cause an issue.

# services-dev-some-bucket.tfvars
logging_enabled = false

so does that mean we have to do this in the base component which can impact all stacks/inputs of the base component ?

#trivy:ignore:aws-s3-enable-logging:exp:2024-03-10
resource "aws_s3_bucket"
Overview - Trivy

A Simple and Comprehensive Vulnerability Scanner for Containers and other Artifacts, Suitable for CI

RB avatar

the above issue I believe impacts both tfsec and trivy

same issue with checkov

https://www.checkov.io/2.Basics/Suppressing%20and%20Skipping%20Policies.html

RB avatar

The suppression inline comments seem only applicable to the resource definition itself and not the input tfvar to the resource definition

RB avatar
#7422 Inline terraform suppression rule for specific tfvar input instead of on the resource definition

Description

I use workspaces to reuse terraform root directories

For example, the terraform code to provision an s3 bucket is located here

components/terraform/s3/main.tf

I setup workspaces such as this

ue1-dev-my-bucket2.tfvars
ue1-prod-my-bucket3.tfvars

My ue1-dev-my-bucket2.tfvars contains

kms_master_key_arn = ""

My ue1-prod-my-bucket3.tfvars contains

kms_master_key_arn = "aws:kms"

Then I run trivy for ue1-prod and no issues.

When I run trivy for ue1-dev, I have one issue.

trivy config . --tf-vars=ue1-dev-my-bucket2.tfvars
2024-08-29T17:44:46-05:00 INFO [misconfig] Misconfiguration scanning is enabled
2024-08-29T17:44:47-05:00 INFO Detected config files num=3

cloudposse/s3-bucket/aws/main.tf (terraform)

Tests: 9 (SUCCESSES: 8, FAILURES: 1, EXCEPTIONS: 0) Failures: 1 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 1, CRITICAL: 0)

HIGH: Bucket does not encrypt data with a customer managed key.
═════════════════════════════════════════════════════════════════
Encryption using AWS keys provides protection for your S3 buckets. To increase control of the encryption and manage factors like rotation use customer managed keys.

See https://avd.aquasec.com/misconfig/avd-aws-0132
─────────────────────────────────────────────────────────────────
 cloudposse/s3-bucket/aws/main.tf:80-94 via main.tf:3-21 (module.s3_bucket)
─────────────────────────────────────────────────────────────────
  80 ┌ resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
  81 │   count = local.enabled ? 1 : 0
  82 │
  83 │   bucket                = local.bucket_id
  84 │   expected_bucket_owner = var.expected_bucket_owner
  85 │
  86 │   rule {
  87 │     bucket_key_enabled = var.bucket_key_enabled

How do I only suppress this issue for ue1-dev-my-bucket2.tfvars and not ue1-prod-my-bucket3.tfvars ?


I can technically do this but this will suppress for both my tfvars files.

#trivy:ignore:AVD-AWS-0132
module "s3_bucket" {

I cannot add the comment beside the input directly in the tfvars file. It seems it has to be placed before the module definition.

e.g.

#trivy:ignore:AVD-AWS-0132
kms_master_key_arn = ""


I looked into filtering using the ignore file which would essentially be the same thing as the module definition inline comment.

https://aquasecurity.github.io/trivy/test/docs/configuration/filtering/#by-finding-ids

I looked into filtering by open policy agent

https://aquasecurity.github.io/trivy/test/docs/configuration/filtering/#by-open-policy-agent

This has some promise but looking at the json output, I don’t see any way to ignore based on the inputs. Do I have the ability to see which individual inputs were passed in to the outputted json?

Here is my output json

{
  "SchemaVersion": 2,
  "CreatedAt": "2024-08-29T1807.025472-05:00",
  "ArtifactName": ".",
  "ArtifactType": "filesystem",
  "Metadata": {
    "ImageConfig": {
      "architecture": "",
      "created": "0001-01-01T0000Z",
      "os": "",
      "rootfs": { "type": "", "diff_ids": null },
      "config": {}
    }
  },
  "Results": [
    {
      "Target": ".",
      "Class": "config",
      "Type": "terraform",
      "MisconfSummary": { "Successes": 3, "Failures": 0, "Exceptions": 0 }
    },
    {
      "Target": "cloudposse/s3-bucket/aws/cloudposse/iam-s3-user/aws/cloudposse/iam-system-user/aws/main.tf",
      "Class": "config",
      "Type": "terraform",
      "MisconfSummary": { "Successes": 1, "Failures": 0, "Exceptions": 0 }
    },
    {
      "Target": "cloudposse/s3-bucket/aws/main.tf",
      "Class": "config",
      "Type": "terraform",
      "MisconfSummary": { "Successes": 8, "Failures": 1, "Exceptions": 0 },
      "Misconfigurations": [
        {
          "Type": "Terraform Security Check",
          "ID": "AVD-AWS-0132",
          "AVDID": "AVD-AWS-0132",
          "Title": "S3 encryption should use Customer Managed Keys",
          "Description": "Encryption using AWS keys provides protection for your S3 buckets. To increase control of the encryption and manage factors like rotation use customer managed keys.",
          "Message": "Bucket does not encrypt data with a customer managed key.",
          "Query": "data..",
          "Resolution": "Enable encryption using customer managed keys",
          "Severity": "HIGH",
          "PrimaryURL": "https://avd.aquasec.com/misconfig/avd-aws-0132",
          "References": [
            "https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html",
            "https://avd.aquasec.com/misconfig/avd-aws-0132"
          ],
          "Status": "FAIL",
          "Layer": {},
          "CauseMetadata": {
            "Resource": "module.s3_bucket",
            "Provider": "AWS",
            "Service": "s3",
            "StartLine": 80,
            "EndLine": 94,
            "Code": {
              "Lines": [
                { "Number": 80, "Content": "resource \"aws_s3_bucket_server_side_encryption_configuration\" \"default\" {", "IsCause": true, "Annotation": "", "Truncated": false, "FirstCause": true, "LastCause": false },
                { "Number": 81, "Content": "  count = local.enabled ? 1 : 0", "IsCause": true, "Annotation": "", "Truncated": false, "FirstCause": false, "LastCause": false },
                { "Number": 82, "Content": "", "IsCause": true, "Annotation": "", "Truncated": false, "FirstCause": false, "LastCause": false },
                { "Number": 83, "Content": "  bucket = local.bucket_id", "IsCause": true, "Annotation": "", "Truncated": false, "FirstCause": false, "LastCause": false },
                { "Number": 84, "Content": "  expected_bucket_owner = var.expected_bucket_owner", "IsCause": true, "Annotation": "", "Truncated": false, "FirstCause": false, "LastCause": false },
                { "Number": 85, "Content": "", "IsCause": true, "Annotation": "", "Truncated": false, "FirstCause": false, "LastCause"…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmm that seems tricky to solve, and shows ultimately the limitations of tools like trivy, despite their comprehensive capabilities

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We recognize the need for different stacks to have different policies, which is why our OPA implementation supports that. Unfortunately, it only works on atmos configs and not HCL

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Based on the issue, and the “Ignoring by attributes” suggestion, can you ignore a policy based on the value of a tag?

RB avatar

Hmm that may be a possibility.

We would either want to ignore on the value of a tag OR ignore on some other unique identifier like a name

RB avatar

Ok it worked. I can stack exclusions based on attributes directly on the module definition.

e.g.

#trivy:ignore:avd-aws-0132[name=my-dev-bucket]
#trivy:ignore:avd-aws-0132[name=my-xyz-bucket]
module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"

It’s still not ideal since I have to change the component code which makes vendoring code from cloudposse/terraform-aws-components more difficult.

RB avatar

If I can get their ignore rego policy file to work based on tfvars or module arguments, that would be ideal because then we can remove all the inline comments and keep the cloudposse component up to date

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This gives me Deja Vu. I recall complaining to bridgecrew that their solution will never work for open source, where different people have different policies. It’s for the same reason.

RB avatar

With a separate file for the ignore, per workspace, it would be suitable for open source, no?

I think checkov (bridgecrew/prisma) is a little behind trivy when it comes to that compatibility. Checkov doesn’t allow module level or attribute/argument level suppressions for example but trivy does.

Looks like this feature https://github.com/aquasecurity/trivy/issues/7180 in their upcoming 0.55.0 version will allow us to remove the inline comments from the module definition and keep the code in line with upstream.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That sounds great!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This will make integration into atmos natural

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But it wouldn’t work well with deep merging, due to the list usage

RB avatar

Oh interesting, I think you were taking this a step further than me.

I was thinking we could do something like this (for now)

components/terraform/xyz/trivy/ignore-policies/<workspace>.rego

So then when trivy runs, it could run within the base component as workspace specific

trivy config . --ignore-file ./trivy/ignore-policies/ue1-dev-my-s3-bucket.rego

Or just base component specific

trivy config . --ignore-file ./trivy/ignore/policy.rego

Note that it would need to be in its own folder even if there were just one ignore policy: the other form of trivy policies (custom checks) takes a directory path that is searched recursively, so it could incorrectly pick up the ignore file.

RB avatar

Long term, it would be very nice to pass in the trivy ignores directly through the atmos stack yaml.

I imagine you could create the interface in atmos so it uses a map, and then translate that into a list in the format that trivyignore understands
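As a purely hypothetical sketch of that interface (this is not an existing Atmos feature), the stack manifest could carry the ignores as a map keyed by check ID:

```yaml
# Hypothetical stack manifest shape - not implemented in atmos today
components:
  terraform:
    s3-bucket:
      settings:
        trivy:
          ignores:
            avd-aws-0132:
              reason: "dev bucket intentionally uses SSE-S3, not a CMK"
              expires: "2025-01-01"
```

Keying by check ID matters for the deep-merge concern above: Atmos deep-merges maps across imports, while lists are typically replaced, so a map would survive layered stack configs where a plain `.trivyignore`-style list would not.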

2024-08-30

2024-08-31

github3 avatar
github3
01:33:09 AM

Support atmos terraform apply --from-plan with additional flags @duncanaf (#684)

what

• Change argument generation for atmos terraform apply to make sure the plan-file arg is placed after any flags specified by the user

why

• Terraform is very picky about the order of flags and args, and requires all args (e.g. the plan-file) to come after any flags (e.g. --parallelism=1), or it crashes.

• atmos terraform apply accepts a plan-file arg, or generates one when --from-plan is used. When this happens, it currently puts the plan-file arg first, before any additional flags specified by the user.

• This breaks when additional flags are specified, e.g. atmos terraform apply --from-plan -- -parallelism=1. In this example, atmos tries to call terraform apply <planfile> -parallelism=1 and terraform crashes with Error: Too many command line arguments
