#atmos (2024-04)

2024-04-01

Ryan avatar

Hopefully a small question this Monday morning. I’m trying to get atmos.exe working on Win11 with our current version (v1.4.25), and it looks like it wants to fire up, but fails to find terraform in %PATH%. I dropped it in PATH and even updated PATH with terraform.exe; not sure where atmos is searching for that path. See what I mean here -

C:\path\>
Executing command:
terraform init -reconfigure
exec: "terraform": executable file not found in %PATH%


C:\path\>terraform
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe we’ve predominantly tested it on WSL2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using WSL?

Ryan avatar

No, but I can redo it in WSL; kinda hacking away at improving the dev environment this morning.

Ryan avatar

I’ll come back in a bit after I give wsl a shot

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan I’m not sure why Atmos can’t see the Terraform binary in the PATH on Windows (we’ll investigate that). It spawns a separate process to execute TF and other binaries. For now, maybe you can try something like this:

terraform:
  command: <path to TF binary>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the command attribute allows you to select any binary to execute (instead of relying on it to be in PATH), and it also allows you to set the binary for OpenTofu instead of Terraform (if you want to use it)
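
For instance, assuming you want to use OpenTofu and its `tofu` binary is installed (the path and names here are illustrative, not from the thread):

```yaml
terraform:
  # Use the OpenTofu binary instead of Terraform.
  # A full path (e.g. on Windows, C:\tools\tofu.exe) also works
  # if the binary is not on PATH.
  command: tofu
```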

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this config

terraform:
  command: <path to TF binary>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be set globally (per Org, tenant, account), or even per component (if, for example, you want to use different TF or OpenTofu binaries for different components)

components:
  terraform:
    my-component:
      command: <TF or OpenTofu binary>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if that works for you and how it can be improved

Ryan avatar

Will play around with it today, I appreciate your responses. We’re in Geodesic now and only really leverage atmos, so I’d like to make it easier for my team to access directly from VS Code in the native OS.

Ryan avatar

Ok, I’ll probably have to talk to you gentlemen Friday about risk. The issue is we’re on 1.4.x and I was pulling that atmos version; when I grabbed the latest atmos it was fine. I could tell it was a separate process too, but I’m like, uhhh, where the heck is that little guy going for env.

Ryan avatar

I don’t know the risk involved in using the updated exe; we are very narrow in our atmos usage, specifically around terraform and the name structuring

Ryan avatar

I would think that’s all just atmos YAML, but the updated exe is like 4x the size of the one I was using previously.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the latest Atmos version works OK on Windows?

Ryan avatar

Yea, but now it’s complaining about my structure, so new issues. It’s OK though

Ryan avatar

the terraform piece no longer complained when I updated

Ryan avatar

sorry, I didn’t want to present a bug before I checked that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no problem at all. We can help you with any issues and with the structure

Ryan avatar

Yea, it all works happy now, besides that it cannot find the stack based on the atmos.yaml stacks: config. I’ll read deeper on this. From what I can see, my stacksbaseabsolutepaths seems to be pointing at my components /../.. directory. Something’s fishy with my atmos.yaml.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can DM us the file, we’ll review

alla2 avatar

Hi, everyone. I’m evaluating Atmos for the company to enhance Terraform. In Atmos everything revolves around a library of components, which is understandably where the majority of reusable modules should be stored. But I don’t understand (and the docs don’t help) whether I can define a single resource using plain Terraform syntax at the deepest level of a stack without creating a component for it.

For the following file tree:

.
├── atmos.yaml
├── components
│   └── terraform
│       └── aws-vpc
│           ├── main.tf
│           ├── outputs.tf
│           └── versions.tf
├── stacks
│   ├── aws
│   │   ├── _defaults.yaml
│   │   └── general_12345678901
│   │       ├── core
│   │       │   ├── _defaults.yaml
│   │       │   ├── eks.tf
│   │       │   └── us-east-2.yaml
│   │       └── _defaults.yaml
│   └── catalog
│       └── aws-vpc
│           └── defaults.yaml
└── vendor.yaml

I’d like to just drop eks.tf (a YAML version of HCL is also fine) into stacks/aws/general_12345678901/core and expect Atmos to include it in the deployment. Is that possible?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm… let me see if I can help.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


if I can define a single resource using plain terraform syntax on the deepest level of a stack without creating a component for it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So let’s take a step back. What we call a “component” in atmos is something that is deployable. For terraform, that’s a root module. In terraform, it’s impossible to deploy anything outside of a root module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we’re confusing 2 concepts here. And it happens easily because many different vendors define “stacks” differently.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cloud Posse defines a stack as YAML configuration that specifies how to deploy components (e.g. root modules).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you have a file like eks.tf, you will need to stick that in a root module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In atmos, we keep the configuration (stacks) separate from the components (e.g. terraform root modules).

So inside the stacks/ folder you would never see terraform code.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All terraform code is typically stored in components/terraform/<something>

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this is quite different from tools like terragrunt/terramate that I believe combine terraform code with configuration.

alla2 avatar

so if I need to deploy a single resource to a single environment only, I’d need to create a wrapper module in components and use it in the stack?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Perhaps… That said, I’d have to understand more about what you’re trying to do. For example, at Cloud Posse, we would never “drop in” an EKS cluster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

An EKS cluster has its own lifecycle, disjoint from a VPC and applications. It deserves its own component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Many times, users will want to extend some functionality of a component. In that case, it’s great to use the native Terraform _override.tf pattern.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For example, if there’s already an eks.tf file in a component, and you want to tweak how it behaves, you could create an eks_override.tf file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Override Files - Configuration Language | Terraform | HashiCorp Developer

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.

alla2 avatar

an EKS cluster is a bad example. But a lighter resource like an ECR registry, which does not require too many input parameters to configure, is a better one. Having a module for every AWS resource would be infeasible to maintain with a small team. I checked out the library of TF components from Cloud Posse and it looks impressive, but it probably requires an orchestrated effort of the entire team to support. The AWS provider changes very frequently, and you have to keep up to update the modules. Another point is that our org comes from a very chaotic tf organization and is trying to implement a better structure. Replacing everything with modules in a single sweep is not realistic. So we would like to start using Atmos with our existing tf code organization, gradually swapping out individual resources with modules. In our use case, environments (AWS accounts) do not have much in common. Having a module for a resource that is only used in a single place does not make much sense

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@alla2 you don’t need to replace everything. If you want a new component, just put it into the components/terraform/<new-component> folder (it could be one TF file like main.tf, or the standard layout like variables.tf, main.tf, outputs.tf, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, in stacks, you provide the configuration for your new component (TF root module)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and in stacks, you can make it DRY by specifying the common config for all accounts and environments in stacks/catalog/<my-component>/defaults.yaml
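
As a sketch, such a catalog entry could look like the following (the component name and variables are hypothetical):

```yaml
# stacks/catalog/my-component/defaults.yaml
components:
  terraform:
    my-component:
      metadata:
        # abstract components are not deployed directly;
        # they only serve as base configuration to inherit from
        type: abstract
      vars:
        enabled: true
```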

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then in the top-level stacks you can override or provide specific values for your component for each account, region, tenant etc.
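
A top-level stack can then import the catalog defaults and override just what differs (the path and values below are made up for illustration):

```yaml
# stacks/orgs/acme/plat/dev/us-east-2.yaml
import:
  - catalog/my-component/defaults
components:
  terraform:
    my-component:
      vars:
        # environment-specific override of the catalog defaults
        instance_count: 2
```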

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is to separate the code (TF components) from the configuration (stacks) (separation of concerns)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so your component (TF code) is completely generic and reusable and does not know/care about where it will be deployed - the deployment configs are specified in stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


So we would like to start using Atmos with our existing tf code organization gradually swapping out individual resources with modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s easily done - put your TF code in components/terraform and configure it in stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in the future, you can create a repo with your TF modules, and in your TF components you would just instantiate your modules making your components very small

alla2 avatar

Thanks, Andriy and Erik, for your input and the options you provided. I’ll have another look. I agree that having a module for a logical group of resources is the right thing to do. But it usually only works when you already have a good code organization and a somewhat large codebase. I’d argue that managing everything with modules is not productive at the start, as you spend a lot of time creating modules which you don’t even know if you can reuse later. Even within a mature org, parts of the configuration can be defined as plain resources, as they are only useful in this particular combination for this particular environment and don’t demand a separate module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so I know where you’re coming from, and if you’re stuck in that situation - other tools might be better suited for dynamically constructing terraform “on the fly”. Right now, atmos is not optimized for that use-case.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I’d argue that managing everything with modules is not productive at the start as you spend a lot of time creating modules which you don’t even know if you can reuse later.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is true… There’s a significant upfront investment. Our belief, though, is that once you’ve made that investment, you almost don’t need to write any HCL code anymore.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Everything becomes configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(We’ve of course spent considerable time building that library of code, which is why we have so much)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You might have already come across this, but here’s how we think of components for AWS: https://github.com/cloudposse/terraform-aws-components

These are all the “building blocks” we use in our commercial engagements

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

alla2 avatar

yeah, I was referring to it above, impressive piece of work. Right now we’re mostly using another module library, which I guess should also work fine with Atmos.

Terraform AWS modules

Collection of Terraform AWS modules supported by the community

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, though keep in mind those are child modules, and not root modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, it would be interesting to show a reference architecture using those child modules. It would be easy to vendor them in with atmos vendor, and atmos can automatically generate the provider configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let us know if you need some guidance…

Roy avatar

Hey, I would like to add some follow-up to this discussion under the @alla2 message. As far as I can see, atmos doesn’t provide any option to pass output from one module as an input to another (with optional conditionals and iteration). It can only be achieved by instrumenting the module’s code by context with the remote backend module, isn’t it? P.S. It probably could also be achieved with some scripting in workflows, but that would be more of a workaround, so I’m not taking it into account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, we have intentionally not implemented it in atmos since we want as much as possible to rely on native Terraform functionality. Terraform can already read remote state either by the remote state data source or any of the plethora of other data source look ups. We see that as a better approach because it is not something that will tie you to “atmos”. It will work no matter how you run terraform, and we see that as a win for interoperability.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To make this easier we have a module to simplify remote state look ups that is context aware. This is what @Roy refers to.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Roy avatar

@Erik Osterman (Cloud Posse) okay, because I was hoping there is a possibility to have implementation-agnostic (thus context-agnostic) building blocks (that could be vendored from e.g. Cloud Posse, Gruntwork, Azure Verified Modules, etc.) and to glue things together with business logic using higher-level tooling (i.e. Atmos), without the use of any intermediate wrapper modules and without instrumenting the low-level blocks. But it would require capabilities to connect outputs to inputs in a declarative way with some basic logic support (conditionals, iteration). Of course, for more sophisticated logic some “logic modules” could be written, but after all it would still be a totally flat structure of modules with orchestration on the config management layer, i.e. Atmos. For now with Atmos, if I want to, e.g., use the output from one module in a for_each for another module, I need an additional module that: 1. grabs the output from remote state, and 2. implements this for_each for the second module. Am I right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So what I tried to say, was we take that implementation agnostic approach. By not building anything natively into atmos it’s agnostic to how you run terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are also building some other things (open source) but cannot discuss here. DM me for details.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

These are terraform mechanisms to further improve what you are trying to do across multiple clouds, ideally suited for enterprise.

Roy avatar

For now I have a 3-layer layout of modules. The first is a set of context-agnostic building blocks (like Gruntwork modules). The second is the orchestrator layer that carries the context with use of an external config store and implements business logic. The last is the environment layer that just calls the second with exact input. I’m looking for some solution that could eliminate the 2nd and 3rd layers, and Atmos deals with the 3rd in a way that I love

Roy avatar

now I’m looking at layer 2 and trying to figure out how to deal with it :P

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmm, I think there might have been a punctuation problem in the previous message. You want to eliminate layers 2 and 3?

Roy avatar

yes, I’m wondering about eliminating layer 2 and 3

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Do you mean calling “child” modules as “root” modules by dynamically configuring the backend? Thus being able to use any module as a root module. Then needing to solve the issue of passing dynamic values to those child modules, without the child modules being aware of its context or how it got those values.

Roy avatar

sometimes it would require “logic modules” that would act as a procedure in a general-purpose lang (some generalisation of locals), but after that it would be as simple as

resource_component:
  vars:
    var1: something
    var2: something
roles_logic_component:
  vars:
    domain_to_query: example.com
{{ for role in roles_logic_component.output.roles }}
{{ role.name}}-role-component:
  name: {{ role.name }}
  scope: {{ resource_component.output.id }}
  permissions: some-permissions
Roy avatar

Of course it would require building a dependency tree at the tool level, which is a duplication of tf responsibility, but… it would work not only for tf

Roy avatar

As you can see, the context is carried by the tool itself then

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A forthcoming version of atmos will support all of these: https://docs.gomplate.ca/datasources/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Will that solve what you want to do?

Roy avatar

depends on final implementation, but sounds like it would

Roy avatar

did you foresee the use case I described?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It makes sense, although it’s not a use case we are actively looking to solve for ourselves, as we predominantly want nice and neat components that are documented and tested. The more dynamic and meta things become, the harder to document, test and explain how it works.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

correct, templating can very quickly become difficult to understand and read. Having said that, we’ll release support for https://docs.gomplate.ca/functions/ in Atmos manifests this week

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(Sprig is already supported https://masterminds.github.io/sprig/)

Sprig Function Documentation

Useful template functions for Go templates.
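
As a minimal, hypothetical sketch of what a Sprig function in a stack manifest looks like (the component and variable names are made up; the Go template is rendered before the YAML is processed):

```yaml
components:
  terraform:
    my-component:
      vars:
        # Sprig's `upper` string function, evaluated at template-render time
        team: '{{ "platform" | upper }}'   # renders as "PLATFORM"
```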

Roy avatar

then what is, for you, the recommended way of gluing things together? Let’s take the resource <-> roles example from my para-yaml file.

Roy avatar

do you have some ETA for datasources?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

recommended way of gluing things together? We take a different approach to gluing things together, which is the approach you want to eliminate. We design components to work together, using data sources. This keeps configuration lighter. We are wary of re-inventing HCL in YAML.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep modules generic and reusable (e.g. github.com/cloudposse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then create opinionated root modules that combine functionality. Root modules can use data sources to look up and glue things together.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Segment root modules by life cycle. E.g. a VPC is disjoint from an EKS cluster which is disjoint from the lifecycle of the applications running on the cluster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way we segment root modules is demonstrated by our terraform-aws-components repo.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ETA for gomplate - we’ll try to release it this week (this will include the datasources)

Roy avatar

but it probably won’t have such a wonderful UX as the {{ component.output }} reference? :D

2024-04-02

Release notes from atmos avatar
Release notes from atmos
07:34:38 PM

v1.67.0 Add Terraform Cloud backend. Add/update docs @aknysh (#572)

Release v1.67.0 · cloudposse/atmos

Add Terraform Cloud backend. Add/update docs @aknysh (#572) what

Add Terraform Cloud backend Add docs:

Terraform Workspaces in Atmos Terraform Backends. Describes how to configure backends for AW…

Add Terraform Cloud backend. Add/update docs by aknysh · Pull Request #572 · cloudposse/atmos

what

Add Terraform Cloud backend Add docs:

Terraform Workspaces in Atmos Terraform Backends. Describes how to configure backends for AWS S3, Azure Blob Storage, Google Cloud Storage, and Terrafor…

Shiv avatar

So I am examining the output of atmos describe component <component-name> --stack <stack_name>

I am trying to understand the output of the command.

  1. What’s the difference between deps and deps_all?
  2. What does the imports section mean? I see catalog/account-dns and quite a lot of files under the catalog dir
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

imports, deps and deps_all are not related to the component, but rather to the stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
imports - a list of all imports in the Atmos stack (this shows all imports in the stack, related to the component and not)

deps_all - a list of all component stack dependencies (stack manifests where the component settings are defined, either inline or via imports)

deps - a list of component stack dependencies where the final values of all component configurations are defined (after the deep-merging and processing all the inheritance chains and all the base components)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are added to describe component as an additional info that might be useful in some cases (e.g. to find all manifests where any config for the component is defined)

Shiv avatar

Thanks much @Andriy Knysh (Cloud Posse). My main intent in understanding these things relates to a project I’m trying to work on: creating a single file which will serve as a registry for bootstrapping new accounts with foundational infra (such as account DNS, region DNS, VPC, compute layer, preferably storage, and some global components needed for each account; each product gets an account). The registry file, I think, will be a stack in atmos with values in it, plus a template file, preferably, to create all the files and put them in the path that atmos needs. That way, if I need to bring up an account, all I need is to add the contents to the registry file and run a CLI tool to generate the files needed for atmos

Shiv avatar

Preferably have the account-related metadata config stored in JSON in S3 to start with (relational DB in the future), and an API for the platform team so devs can use a platform CLI to get the values to manipulate their configs, etc.

Shiv avatar

Does it make sense? Or am I approaching this wrong?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it does make sense. In fact, we will be working on atmos generate functionality to generate different things (projects, components, stacks, files, etc.) from templates, and what you are describing is exactly what the new Atmos functionality will do cc: @Erik Osterman (Cloud Posse)

Shiv avatar

Interesting! Including the metadata store with an api?

Shiv avatar

When is the generate feature set to release ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Interesting! Including the metadata store with an api?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll be leveraging the beautiful Go lib

https://docs.gomplate.ca/

https://github.com/hairyhenderson/gomplate

(embedded in Atmos)

so all these Datasources will be supported

https://docs.gomplate.ca/datasources/

and functions

https://docs.gomplate.ca/functions/

gomplate - gomplate documentation

gomplate documentation

hairyhenderson/gomplate

A flexible commandline tool for template rendering. Supports lots of local and remote datasources.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a multi-phase process, phase #1 will be to add gomplate support to Atmos stack manifests. Currently https://masterminds.github.io/sprig/ is supported, soon (this or next week) gomplate will be supported as well

Sprig Function Documentation

Useful template functions for Go templates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Shiv avatar

This looks nice. What do you think about how we could implement the metadata config store with atmos in the picture? For storing environment-specific metadata: say I want to look at logs for one environment, atmos somewhere does the deep merge and has that information. (Other examples could be anything folks need that information for without looking at the tfstate or logging into the AWS account: cluster URL, the SQS queue used by the service (non-secrets).) Is this possible? If so, a general idea would help. Thanks again, Andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll be implementing that by using https://docs.gomplate.ca/datasources/ - whatever is supported by gomplate will be supported by Atmos (and that’s already a lot, including S3, GCP, HTTP, etc.). So at least in the first version it will not be “anything”, because it’s not possible to create a Swiss Army knife tool that does everything. Maybe later we might consider adding plugin functionality so you could create your own plugin to read from whatever source you need (we have not considered it yet)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Shiv if you are asking how you could do it now, you can do it in Terraform using the many providers that are already supported (https://registry.terraform.io/browse/providers). Create Terraform components using the providers you need (to get the metadata), and use Atmos to configure and provision them

2024-04-03

2024-04-05

Roy avatar

hmmm

> atmos describe stacks
the stack name pattern '{tenant}-{environment}-{stage}' specifies 'tenant', but the stack 'catalog/component/_defaults' does not have a tenant defined in the stack file 'catalog/component/_defaults'

how do I avoid including catalog items when describing the stacks? All components have the type: abstract metadata attribute.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can do one of the following:

• exclude all _defaults.yaml files
• exclude the entire catalog folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure CLI | atmos

In the previous step, we’ve decided on the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depending on the structure of your stacks folder, you can use included_paths to include everything and then excluded_paths to exclude some paths, or just included_paths to only include what’s needed

Roy avatar

oooh, now I see, thanks!

Shiv avatar

What are some recommended patterns to add/enforce tagging as part of workflows? So if a component is not tagged per standards, a “do not apply” sort of thing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this is where the OPA policies come into play.

@Andriy Knysh (Cloud Posse) might have an example somewhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since you’ve probably become familiar with atmos describe (per other question), anything in that output can be used to enforce a policy.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See:

# Check if the component has a `Team` tag

here: https://atmos.tools/core-concepts/components/validation#opa-policy-examples

Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.
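
Attaching a policy to a component is done under the component’s settings in the stack config; a sketch along the lines of the linked docs (the component and file names here are hypothetical):

```yaml
components:
  terraform:
    my-component:
      settings:
        validation:
          check-my-component-tags:
            schema_type: opa
            # Rego policy that checks, e.g., that vars.tags contains a `Team` tag
            schema_path: validate-my-component.rego
            description: Enforce the required tags before allowing an apply
```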

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure Validation | atmos

Atmos supports Atmos Manifests Validation and Atmos Components Validation

Shiv avatar

I see, so even if the stack’s vars section is missing the necessary tags, can the OPA policies add the tags during runtime? Don’t think so, yes? Because there is no state for atmos to have that metadata context? Because the devs could be adding incorrect tags, and then tags pretty much become useless if they don’t fit the company’s tagging scheme

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


opa policies can add the tags during the runtime

Policies are simply about enforcement, but don’t modify the state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, since with atmos, you get inheritance, it’s super easy to ensure proper tagging across the entire infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those tags can be set at a level above where the devs operate. They basically don’t even need to be aware of it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Based on where a component configuration gets imported, it will inherit all the context. This is assuming you’re adopting the terraform-null-label conventions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Without using something like null-label it will be hard to enforce consistency in terraform across an enterprise.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, there’s still an alternative approach. Since we support provider config generation, it’s possible to automatically set the required tags on any component, also based on the whole inheritance model.

Shiv avatar

Enforcement makes sense. Yes, we have context.tf in all our components. What do you mean by
tags are set at a level above where the devs operate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes… let me try to explain that another way.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we can establish tags in the _defaults.yaml files that get imported into the stacks. Stacks are at different levels.
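
For example, the tags might live in an org- or tenant-level defaults file (a sketch; the file path and tag names here are hypothetical):

```yaml
# stacks/orgs/acme/_defaults.yaml (sketch)
vars:
  tags:
    Team: platform
    CostCenter: "1234"
```

Any stack importing this file inherits the tags, and the terraform-null-label / context.tf pattern merges them into every resource’s tags.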

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What makes it hard to describe, is we don’t know how you organize your infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our reference architecture (not public), we organize everything in a very hierarchical manner. By setting tags in the appropriate place in the hierarchy, those tags naturally flow through to the component based on where it’s deployed in the stack.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So when I say “above where the devs operate”, I just mean higher up in the hierarchy of configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So at each level of that hierarchy, we can establish the required tags. Developers don’t need to be aware of that; so long as they use the context.tf pattern, they’ll get the right tags.

Shiv avatar

I was also thinking, say we have a tagging scheme for the company, and the tagging scheme purely accounts for the cost model for teams. So it goes something like domains -> subdomains -> service repos, and probably a product tag as well.

My thought is I just want to ask for information once. If we ask for it in a standard place such as Backstage, I would want to be able to reuse that data where we need it. So the tagging scheme would inform the data that we want to ask for.

I am thinking a Terraform provider, if we need one, as a way to grab information from Backstage or something similar to that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Possibly, but now you have configuration sprawl. It’s an interesting use-case with backstage. I’d be open to more brainstorming.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

DM me and we can do a zoom sometime.

2024-04-08

Release notes from atmos avatar
Release notes from atmos
08:04:37 PM

v1.68.0 Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578)

Release v1.68.0 · cloudposse/atmosattachment image

Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578) Breaking changes If you used Go templates in Atmos stack manifest before, they were enabled by default. Startin…

aknysh - Overview

aknysh has 266 repositories available. Follow their code on GitHub.

Andrew Ochsner avatar
Andrew Ochsner

is there an easy way to have atmos describe stacks --stacks <stackname> skip abstract components? or have any jq handy cause i just suck at jq

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos list components -s plat-ue2-dev -t enabled will skip abstract (and you can modify the command to improve or add more filters)
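
If you do want the jq route against `atmos describe stacks` JSON output, something like this works (a sketch; it assumes the `stack -> components -> terraform -> <component> -> metadata.type` shape, and uses an inline sample heredoc in place of a live Atmos call):

```shell
# Keep only non-abstract terraform components in each stack.
# The heredoc stands in for `atmos describe stacks --format json`.
cat <<'EOF' | jq 'map_values(.components.terraform |= with_entries(select(.value.metadata.type != "abstract")))'
{
  "plat-ue2-dev": {
    "components": {
      "terraform": {
        "vpc": {"metadata": {"type": "real"}},
        "vpc/defaults": {"metadata": {"type": "abstract"}}
      }
    }
  }
}
EOF
```

The same filter pipes straight onto the real `atmos describe stacks --stacks <stackname> --format json` output.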

Andrew Ochsner avatar
Andrew Ochsner

ahhh huzzah tysm

2024-04-09

Justin avatar

Hi all,

I’m working through a project right now where I need to collect the security group ID for a security group

vendored: https://github.com/cloudposse/terraform-aws-security-group

/components/terraform/networking/security_group/v2.2.0

vendored: https://github.com/cloudposse/terraform-aws-ecs-web-app

/components/terraform/ecs/web/v2.0.1

For my stack configuration, I’d like to create two security groups, and then provide the IDs to the ecs-web-app.

components:
  terraform:
    security_group/1:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8080
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: "Apipa example group 1"
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All output traffic"
    security_group/2:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8009
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: "Apipa example group 1"
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All output traffic"
    web/1:
      vars:
        name: sampleapp
        vpc_id: <reference remote state from core stack>
        ecs_security_group_ids:
          - remote_state_reference_security_group/1
          - remote_state_reference_security_group/2

If the security group and ecs modules have been vendored in, what is the best practice to get the CloudPosse remote_state file into place and configured so that I can reference each security group ID in the creation of my web/1 stack? Same with the VPC created in a completely different stack.

My thinking is that I’d like to keep everything versioned and vendored from separate repositories that have their own tests / QA passes performed on them unique to Terraform, or vendored in from CloudPosse / Terraform.

I’m missing the connection of how to fetch the remote state from each security group and reference the ids in the web1 component.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good question. Cool that you’re kicking the tires.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So vendoring child modules and using them as root modules, would be more of an advanced use-case.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way Cloud Posse typically uses and advises for the use of vendoring is with “root” modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s interesting, because what you want to do, I think, is very similar to another recent thread we have. Let me dig it up.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Read the thread here, to get on the same page. I’d be curious if there’s some overlap with what you’re trying to do.
I’m missing the connection of how to fetch the remote state from each security group and reference the ids in the web1 component
This is why I think what you’re trying to do is related.

https://sweetops.slack.com/archives/C031919U8A0/p1712062674605219?thread_ts=1711983108.729749&cid=C031919U8A0

@Erik Osterman (Cloud Posse) okey, because I was hoping if there is possibility to have implementation-agnostic (then context-agnostic) building blocks (that could be vendored from i.e. Cloud Posse ,Gruntwork, Azure Verified Modules, etc.) and to glue things together with business logic with use of higher-level tooling (i.e. Atmos), without usage of any intermediate wrapper modules and without instrumenting the low-level blocks. But it would require capabilities to connect outputs to inputs in declarative way with some basic logic support (conditionals, iteration). Of course for more sophisticated logic some “logic modules” could be written, but after all it would be still the totally flat structure of modules with orchestration on the config management layer, i.e. Atmos. For now with Atmos if I want to, i.e. use output from one module in for_each for other module, I need to have additional module that: 1. grab the output from remote state and 2. implements this for_each for the second module. Am I right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TL;DR: we probably don’t support in Atmos (today) what you’re trying to do. That’s because we take a different approach.

You can see how we write components here. https://github.com/cloudposse/terraform-aws-components

The gist of it is, we’ve historically taken a very deliberate and different approach from tools like Terragrunt (and now Terramate), which are optimized for Terraform code generation.

There are a lot of reasons for this, but chief amongst them is we haven’t needed to, despite the insane amounts of terraform we write.

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our approach, we write context-aware, purpose-built, and highly reusable components with Terraform. In Terraform, it’s so easy to look up remote state (like “from each security group and reference the ids in the web1 component”), so we don’t do that in Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Instead, we write our components so they know how to look up that remote state based on declarative keys.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if we look at your example here:

    web/1:
      vars:
        name: sampleapp
        vpc_id: <reference remote state from core stack>
        ecs_security_group_ids:
          - remote_state_reference_security_group/1
          - remote_state_reference_security_group/2

The “web” component should be aware how to look up security groups, e.g. by tag.
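
For example, instead of receiving IDs through stack config, the component can resolve them itself at plan time (a sketch; it assumes AWS and groups tagged via the null-label convention, and the tag key/value shown are hypothetical):

```hcl
# Look up the security groups by tag rather than passing their IDs
# in through Atmos stack configuration.
data "aws_security_groups" "web" {
  filter {
    name   = "tag:Component"
    values = ["sampleapp-web"]
  }
}

# Then reference data.aws_security_groups.web.ids wherever the module
# expects `ecs_security_group_ids`.
```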

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Child modules shouldn’t know how to do this, because they are by design less opinionated, so they work for anyone, even those who do not subscribe to the “Cloud Posse” way of doing Terraform with Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos is opinionated, in that it follows our guidelines for how to write Terraform for maximum reusability without code generation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I wouldn’t write it off that we’ll never support it in Atmos; it’s just that there are other things we’re working on right now before we consider how to do code generation, a pattern we don’t ourselves use right now.

Justin avatar

Yeah, I think my ultimate goal is to eliminate rewriting the same structure over and over again for a technology. So if I have a CloudPosse module that knows how to write a subnet very well, it already has the context capabilities and can be reused anywhere. I’d like to be able to take one product, say a Kubernetes stack, and build subnets and whatnot by reference, gluing the pieces together like what you linked in the previous article. The modules can be opinionated enough that they meet our needs, or we can write our own more opinionated root modules and then reuse them across all of our business needs.

Then a new product release can just tie those things together via the catalog and tweaked as needed on a stack by stack basis.

Justin avatar

So with this in mind and just to check my comprehension of the current state:

• Vendor in root modules that can be reused over and over again.

• Create my child modules that reference those root modules, which are more opinionated, and build core catalogs off of the child modules, which can have a baseline configuration in the catalog and then be tweaked on a per-stack basis.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your component you just need to add this part of code (in remote-state.tf ) using the remote-state module:

module "vpc_flow_logs_bucket" {
  count = local.vpc_flow_logs_enabled ? 1 : 0

  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # Specify the Atmos component name (defined in YAML stack config files) 
  # for which to get the remote state outputs
  component = var.vpc_flow_logs_bucket_component_name

  # Override the context variables to point to a different Atmos stack if the 
  # `vpc-flow-logs-bucket-1` Atmos component is provisioned in another AWS account, OU or region
  stage       = try(coalesce(var.vpc_flow_logs_bucket_stage_name, module.this.stage), null)
  tenant      = try(coalesce(var.vpc_flow_logs_bucket_tenant_name, module.this.tenant), null)
  environment = try(coalesce(var.vpc_flow_logs_bucket_environment_name, module.this.environment), null)

  # `context` input is a way to provide the information about the stack (using the context
  # variables `namespace`, `tenant`, `environment`, `stage` defined in the stack config)
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and configure the variables in Atmos
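
The stack-side wiring for the snippet above would look roughly like this (a sketch; the component name follows the example, and the bucket component name comes from the comment in the code):

```yaml
components:
  terraform:
    vpc:
      vars:
        vpc_flow_logs_enabled: true
        # Atmos component whose remote state the module reads
        vpc_flow_logs_bucket_component_name: vpc-flow-logs-bucket-1
        # Optional overrides if the bucket is provisioned in another stack
        # vpc_flow_logs_bucket_stage_name: audit
```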

Justin avatar

yeah, I was reading through this, and my disconnect has been trying to incorporate this with root modules that don’t already have this in place.

So, in my example, I want to build multiple subnets by declaring the same subnet module vendored from CloudPosse. If I don’t need a child module, I don’t want to create one. However, it sounds like what I need to do is build these child modules with the remote-state configuration, with the exports listed that I’ll need in other stacks.

Justin avatar

I’ve only been working with atmos for a couple of days, so my apologies if I missed something obvious in that document.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want us to review your config, DM me your repo and we’ll try to provide some ideas

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Other things that work well with vendoring in this sort of situation include using _override.tf files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can vendor in the upstream child module and essentially do monkey patching using override files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Monkey patch

In computer programming, monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. It is used to extend or modify the runtime code of dynamic languages such as Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, and Lisp without altering the original source code.

Override Files - Configuration Language | Terraform | HashiCorp Developerattachment image

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(what we want to avoid doing is reimplementing HCL in YAML)

Justin avatar

Hrm, that’s an interesting thought. Currently what I’m thinking is that I need to evaluate a polyrepo structure for my highly opinionated modules which build the solution we’re looking for. There I can run through assertion testing, which was added recently, and pull passing versions into my “monorepo”, which helps control different stacks/solutions and builds the catalog/mixins for those opinionated repositories.

So I can still use the CloudPosse subnet module should I need it in a root module by simply calling and pinning it, and then build my YAML catalogs around those vendored root modules. That brings fewer imports into the control repository and fewer things to maintain in vendor files. I just need to make sure that we are consistent and clean with versioned repos of our own that we’re vendoring in.

Justin avatar

That would then allow me to bring in the remote_state configuration that Andriy referenced to build out all of my “core” services and export anything that would need to be referenced in other modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, I think that might work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If it would be helpful, happy to do a zoom and “white board” on some ideas.

Justin avatar

I really appreciate that, thank you. I’m going to take some time and rework a personal project I’ve been working on to learn Atmos and will report back once I have it a bit more fleshed out. Really appreciate your time and conversation today, thank you so much.

2
Andrew Ochsner avatar
Andrew Ochsner

any recommendations on how to pass output from 1 workflow command to another workflow command? right now just thinking of writing out/reading from a file…. just curious if there are other mechanisms that aren’t as kludgy

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heh, wow. A lot of similar requests to this lately. If not workflows steps, then state between components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think using an intermediary file is your best bet right now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For advanced workflows, you may want to consider something like Go task. You can still call gotask as a custom command from atmos.
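
A custom command wrapping Task might look like this in `atmos.yaml` (a sketch; the command name and task name are hypothetical):

```yaml
# atmos.yaml
commands:
  - name: bootstrap
    description: Run the `bootstrap` Taskfile target via go-task
    steps:
      - task bootstrap
```

Then `atmos bootstrap` shells out to Task, with all of Task’s variable and dependency handling available.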

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have improvements planned (not started) for workflows. If you could share your use-case, it could help inform future implementations.

Andrew Ochsner avatar
Andrew Ochsner

yeah, I mean ultimately I need to run a shell command as part of my cold start (Azure) based on an ID that got generated by Terraform… long term I won’t have this, probably, because there’s another provider I can use to skip the shell command and just do it all in Terraform, but that’s going through corp approvals

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, then avoid the yak shaving for an elegant solution and go with an intermediary command.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or, a variation of the above. Ensure the “ID that got generated by terraform” is an output of the component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then in your workflow, just call atmos terraform output.... to retrieve the output.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can use that in $(atmos terraform output ....) to use it inline within your workflow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g.

workflows:
  foobar:
    steps:
      - command: echo "Hello $(atmos terraform output ....)"
Andrew Ochsner avatar
Andrew Ochsner

Yep that’s true. Haven’t quite discovered how complex this morphs into but best to start small and simple

Andrew Ochsner avatar
Andrew Ochsner

Thanks

Release notes from atmos avatar
Release notes from atmos
05:54:33 AM

v1.69.0 Restore Terraform workspace selection side effect In Atmos v1.55 (PR #515) we switched to using the TF_WORKSPACE environment variable for selecting Terraform workspaces when issuing Terraform commands….

Release v1.69.0 Restore Terraform workspace selection side effect · cloudposse/atmosattachment image

In Atmos v1.55 (PR #515) we switched to using the TF_WORKSPACE environment variable for selecting Terraform workspaces when issuing Terraform commands. This had the unintended consequence that the …

use TF_WORKSPACE to select and/or create the active workspace by mcalhoun · Pull Request #515 · cloudposse/atmosattachment image

what Use the TF_WORKSPACE environment variable to select and/or create the terraform workspace. why This is a better solution than the existing one of running two external terraform workspace selec…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heads up @RB this rolls back what we implemented in #515 with TF_WORKSPACE, due to it behaving differently

RB avatar

Ah yes no worries, i saw the issue that Jeremy raised. Thanks for the ping

1

2024-04-10

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
08:28:07 PM

added an integration to this channel: Linear Asks

1
Chris King-Parra avatar
Chris King-Parra

What’s the recommended approach to set up data dependencies between stacks (use the output of one stack as the input for another stack)? Data blocks?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your component where you want to read a remote state of another component, add

module "xxxxxx" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # Name of the Atmos component whose remote state to read
  component = "xxxxxx"
  context   = module.this.context
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is one approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re working on refining another approach as well, but it’s not published. It’s also a native-Terraform approach.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s important to address the heart of it: we try to leverage native terraform everywhere possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since terraform supports this use-case natively, we haven’t invested in solving it in atmos because that forces vendor lock-in

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We try to not force vendor lock-in

2

2024-04-11

Release notes from atmos avatar
Release notes from atmos
03:44:32 PM

v1.68.0 Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578)

Release v1.68.0 · cloudposse/atmosattachment image

Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578) Breaking changes If you used Go templates in Atmos stack manifest before, they were enabled by default. Startin…

aknysh - Overview

aknysh has 266 repositories available. Follow their code on GitHub.

2024-04-12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For anyone else working on GHA with Atmos https://sweetops.slack.com/archives/CB6GHNLG0/p1712912684845849

:wave: What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)

RB avatar

What is a good SCP for the identity account? Is it only IAM roles that belong here, perhaps?

2024-04-13

Ben avatar

hi all, i’m new to atmos and trying to follow along with the quick-start guide. i got a weird issue where it seems like {{ .Version }} isn’t rendered when running atmos vendor pull:

❯ atmos vendor pull
Processing vendor config file 'vendor.yaml'
Pulling sources for the component 'vpc' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}' into 'components/terraform/vpc'
error downloading 'https://github.com/cloudposse/terraform-aws-components.git?ref=%7B%7B.Version%7D%7D': /usr/bin/git exited with 1: error: pathspec '{{.Version}}' did not match any file(s) known to git

it works when I replace the templated version with a real one.

i’ve got the stock config from https://atmos.tools/quick-start/vendor-components and running atmos 1.69.0:

❯ atmos version

 █████  ████████ ███    ███  ██████  ███████
██   ██    ██    ████  ████ ██    ██ ██
███████    ██    ██ ████ ██ ██    ██ ███████
██   ██    ██    ██  ██  ██ ██    ██      ██
██   ██    ██    ██      ██  ██████  ███████


👽 Atmos 1.69.0 on darwin/arm64
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ben in Atmos 1.68.0 we added gomplate to Atmos Go templates, and added the enabled flag to enable/disable templating, but the flag affected not only Atmos stack manifests, but all other places where templates are used (including vendoring and imports with templates), see https://atmos.tools/core-concepts/stacks/templating

Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll release a new Atmos release today which fixes that issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for now, either use Atmos 1.67.0, or add

templates:
  settings:
    enabled: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to your atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

today’s release 1.70.0 will fix that so you don’t need to do it

Ben avatar

thanks! enabling templating in the config solved the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks. That flag should not affect templating in vendoring and imports, so it’s a bug which will be fixed in 1.70.0

1
dropinboy avatar
dropinboy

Hi All :wave: ,

I’m just getting started with Atmos and wanted to check if nested variable interpolation is possible. My example is creating an ECR repo where I want the name to have a prefix. I’ve put prefix: in a _defaults.yaml file and used {{ .vars.prefix }} in a stack/stage.yaml, and it doesn’t work.

vars:
  prefix: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}"

repository_name: "{{ .vars.prefix }}-storybook"

How are others doing this resource prefixing? Thank you

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently Atmos does one pass at Go template processing, your example requires two passes. It’s not currently supported, but we’ll consider implementing it in the near future

dropinboy avatar
dropinboy

Thank you for the quick response (not expected on a Saturday). I believe it’s helpful/convenient.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for now consider repeating the template (not very DRY, but will work)
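
In other words, inline the full template wherever the prefix is needed, since rendering is single-pass (a sketch based on the variables above):

```yaml
vars:
  repository_name: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}-storybook"
```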

dropinboy avatar
dropinboy

yes, that’ll work, thank you

dropinboy avatar
dropinboy

BTW really enjoying using Atmos (as a current Terragrunt user)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thank you

Release notes from atmos avatar
Release notes from atmos
03:14:34 AM

v1.70.0 Add gomplate datasources to Go templates in Atmos stack manifests. Update docs @aknysh (#582)

Release v1.70.0 · cloudposse/atmosattachment image

Add gomplate datasources to Go templates in Atmos stack manifests. Update docs @aknysh (#582) what

Add gomplate datasources to Go templates in Atmos stack manifests Fix an issue with enabling/…

aknysh - Overview

aknysh has 266 repositories available. Follow their code on GitHub.

2024-04-15

dropinboy avatar
dropinboy

Hi, is there a way to use the file path of the stack YAML file as the workspace_key_prefix value? Looking in the S3 bucket, the tfstate file prefix appears to be stored as bucket_name/component_name/stack_name-component_name. Currently the repo structure is Stacks/app_name/env_name/stack_name and I would prefer them to be aligned, but maybe they don’t need to be. Thank you

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) would know the specifics of this, but one word of caution - this couples more of the filesystem to the terraform state, making it more cumbersome to reorganize stacks in the future

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s really nice to be able to move stack files around, and it doesn’t affect your terraform state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(It was a design consideration that everything be derived from configuration rather than filesystem location)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, now with some of the go template/gomplate stuff, it could be possible.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you add something like this in your stack manifests (where exactly depends on your import/inheritance chain), it will work as you describe

vars:
  stage: staging
  tags:
    stack_file: '{{ .atmos_stack_file }}'
    
import:
  - catalog/myapp

terraform:
  backend_type: s3
  backend:
    s3:
      workspace_key_prefix: '{{ .atmos_stack_file }}'

components:
  terraform:
    myapp:
      vars:
        location: Los Angeles
        lang: en
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note that var.tags.stack_file is entirely optional and has no bearing on the workspace_key_prefix. I just added it as an example of how you could also tag every resource with the stack file that manages it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For this to work, make sure you have the following enabled in your atmos.yaml

# <https://pkg.go.dev/text/template>
templates:
  settings:
    enabled: true
    # <https://masterminds.github.io/sprig>
    sprig:
      enabled: true
    # <https://docs.gomplate.ca>
    gomplate:
      enabled: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or you might get an error like this:

❯ atmos describe stacks          
invalid stack manifest 'deploy/staging.yaml'
yaml: invalid map key: map[interface {}]interface {}{".atmos_stack_file":interface {}(nil)}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the templates support all values that are returned from atmos describe component <component> -s <stack>. So you can also do something like

workspace_key_prefix: '{{ .vars.app_name }}/{{ .vars.environment }}...'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh, and if you configure the settings section with some metadata, you could use it in the templates as well, e.g. (just an example)

workspace_key_prefix: '{{ .settings.app_name }}/{{ .settings.workspace_key }}...'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the settings section is a free-form map; you can add any properties to it, including embedded maps/objects. Then it can be used anywhere in the stack manifests in Go templates, Sprig Functions, Gomplate Functions and Gomplate Datasources

https://atmos.tools/core-concepts/stacks/templating

Sprig Function Documentation

Useful template functions for Go templates.

Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

dropinboy avatar
dropinboy

thank you both for the responses. I see the value in decoupling the path of the state file from the file system; I’ll give that more thought. And thank you for pointing out the template support for atmos describe component ... values.

2024-04-16

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hi,

I'm currently using terragrunt and want to migrate to atmos. One very convenient thing about terragrunt is that I can simply overwrite the terraform module git repo urls with a local path (https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-source-map). This allows me to develop tf modules using terragrunt in an efficient way.

Is there anything like it in atmos? If not, what is the best way to develop tf modules while using atmos?

toka avatar

Hey guys, I’m looking to adopt Atmos for the whole organisation I’m currently working/building for.

I’m trying to plan the monorepo structure and how to organise things to avoid the pain of re-organising later on. I can see the tool is pretty flexible, so right now I’m looking at https://atmos.tools/design-patterns/organizational-structure-configuration for some convention guidance. Looking at the atmos cli terraform commands, I understand that org, tenant, region and environment/stage/account are somehow merged into one

 atmos terraform apply vpc-flow-logs-bucket -s org1-plat-ue2-dev

but it’s a bit hard to grasp how to approach the structure, when my goal is to deploy the same infrastructure in every region - for the most part, in multi-cloud setup. Any tips?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey @toka this is the exact use-case we solve in our reference architecture and our customers use. I’ll take a stab at answering it.

1
toka avatar

Right now I was thinking should I split the code between different orgs directories, or between different tenants, or both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the first thing to understand is how we achieve this at the terraform layer. It’s important that your terraform is written to a) support a parameterized region in [providers.tf](http://providers.tf), b) leverage something like our terraform-null-label module, and c) use some field to connote the region.

We use env in the terraform-null-label to connote the region, and stage to connote where it’s in the lifecycle (e.g. dev, staging, prod, etc.)
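Concretely, a stack for a dev account in us-east-2 might set something like this (all values here are illustrative):

```yaml
vars:
  namespace: acme   # the organization
  tenant: plat      # the OU
  environment: ue2  # connotes the region (us-east-2)
  stage: dev        # connotes the lifecycle
```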

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So basically, going “multi-region” requires first solving how you name your resources, using a naming convention.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you already familiar with our cloudposse/terraform-null-label module?

toka avatar

Regarding naming, I have a naming module implemented already; all module resources are named by sourcing the naming module first

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Right now I was thinking should I split the code between different orgs directories, or between different tenants, or both
So, “code” might be a bit ambiguous here since “code” could refer to the stacks (YAML) or maybe the terraform (HCL).

We recommend organizing the stacks the way you describe but organizing the terraform much the way you organize any software application’s code. So for example stick all “EKS” related root module components into components/terraform/eks/<blah>

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, I’d need to know more about the naming module to answer specifically, but if it works similarly to null-label, you should be good. To be clear, atmos does not know anything about null-label; it’s just that most of our documentation assumes its usage.

https://masterpoint.io/updates/terraform-null-label/

terraform-null-label: the why and how it should be used | Masterpoint Consultingattachment image

A post highlighting one of our favorite terraform modules: terraform-null-label. We dive into what it is, why it’s great, and some potential use cases in …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s how we organize it in our commercial reference architecture

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

How we organize components is largely the same as this (which is very different from how stacks are organized): https://github.com/cloudposse/terraform-aws-components/tree/main/modules

Note, components don’t have to live in a separate repo. Ours do because this is how we distribute them as open source.

In general, components should be organized by “service” (e.g. EKS vs. ECS), or “app1” and “app2”, and if multi-cloud, by provider, so aws/eks and azure/aks

toka avatar


but if it works similar to null label, you should be good
Yes, it works very similarly, almost the same, though the Cloud Posse implementation seems to be better since you can inherit with

 # Here's the important bit
    context = module.public_alb_label.context

avoiding code duplication - in my TF codebase, I need to instantiate the naming module for each and every resource.
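As a sketch of what that inheritance looks like with cloudposse/terraform-null-label (the module and label names here are illustrative):

```hcl
module "public_alb_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  name    = "alb"
  context = module.this.context
}

module "alb_logs_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  name = "alb-logs"
  # Here's the important bit: inherit namespace/tenant/environment/stage/tags
  # from the parent label instead of re-declaring them
  context = module.public_alb_label.context
}
```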

So, “code” might be a bit ambiguous here since “code” could refer to the stacks (YAML)
I’m referring to the stacks/YAML file structure. For the HCL TF files, I’m planning to re-use all the TF modules that exist now, such as common_vpc , common_compute_instance , common_iam etc.

1
toka avatar

I think my need is to not duplicate stack configs across regions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, so far I think we’re on the same page.

Now, use inheritance to set vars at each level of your directory structure.

We use the convention of a file called _defaults.yaml that we stash at each level. This then sets the namespace (At the root of the directory), the tenant (aka OU) at the OU level, the stage in the stage folder, the env in the region folder, and so on. Each default file can import the parent default.
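A sketch of that convention (org, tenant, and stage names here are illustrative):

```yaml
# stacks/orgs/acme/plat/dev/_defaults.yaml (illustrative)
# Imports the tenant-level defaults, which in turn import the org-level
# defaults that set `namespace: acme`
import:
  - orgs/acme/plat/_defaults
vars:
  stage: dev
```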

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I think my need is to not duplicate stack configs across regions
Then we use the “catalog” directory to define the baseline configuration for “something”

• Something could be a service

• Something could be a region

• Something could be an application

• Something could be a layer

What “something” is will depend on how you want to logically organize your configuration for maximum DRYness and reusability
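For example, a catalog entry can define the baseline for a component once, and every stack that needs it imports it and overrides only the delta (paths and values here are illustrative):

```yaml
# stacks/catalog/vpc/defaults.yaml (illustrative)
components:
  terraform:
    vpc:
      vars:
        cidr_block: 10.0.0.0/16
        nat_gateway_enabled: true
---
# stacks/orgs/acme/plat/dev/us-east-2.yaml imports it and overrides only the delta
import:
  - catalog/vpc/defaults
components:
  terraform:
    vpc:
      vars:
        cidr_block: 10.8.0.0/16
```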

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s a quick demo of how we organize it in our refarch

toka avatar

What “something” is will depend on how you want to logically organize your configuration for maximum DRYness and reusability
Ok Erik, I think now I get it - this seems to be it. This is very lit, very promising. I needed to see that example to understand that, through the catalog, I can set a baseline for regions, for services, or for a layer, and not only for a TF component, which is how I saw it initially

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, exactly! It’s more than just for individual TF components, which makes it very powerful. You can define the configuration for how a set of components behave when in a “region” or in an OU.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@toka please take a look at https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks, which is described in Atmos Quick Start

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

As Erik mentioned, the defaults go to _defaults.yaml at every scope (org, tenant, account, region)

https://github.com/cloudposse/atmos/blob/master/examples/quick-start/stacks/orgs/acme/_defaults.yaml

vars:
  namespace: acme

terraform:
  vars:
    tags:
      # <https://atmos.tools/core-concepts/stacks/templating>
      atmos_component: "{{ .atmos_component }}"
      atmos_stack: "{{ .atmos_stack }}"
      atmos_manifest: "{{ .atmos_stack_file }}"
      terraform_workspace: "{{ .workspace }}"
      # Examples of using the Sprig and Gomplate functions
      # <https://masterminds.github.io/sprig/os.html>
      provisioned_by_user: '{{ env "USER" }}'
      # <https://docs.gomplate.ca/functions/strings>
      atmos_component_description: "{{ strings.Title .atmos_component }} component {{ .vars.name | strings.Quote }} provisioned in the stack {{ .atmos_stack | strings.Quote }}"

  # Terraform backend configuration
  # <https://atmos.tools/core-concepts/components/terraform-backends>
  # <https://developer.hashicorp.com/terraform/language/settings/backends/configuration>
  #  backend_type: cloud  # s3, cloud
  #  backend:
  #    # AWS S3 backend
  #    s3:
  #      acl: "bucket-owner-full-control"
  #      encrypt: true
  #      bucket: "your-s3-bucket-name"
  #      dynamodb_table: "your-dynamodb-table-name"
  #      key: "terraform.tfstate"
  #      region: "us-east-2"
  #      role_arn: "arn:aws:iam::<your account ID>:role/<IAM Role with permissions to access the Terraform backend>"
  #    # Terraform Cloud backend
  #    # <https://developer.hashicorp.com/terraform/cli/cloud/settings>
  #    cloud:
  #      organization: "your-org"
  #      hostname: "app.terraform.io"
  #      workspaces:
  #        # The token `{terraform_workspace}` will be automatically replaced with the
  #        # Terraform workspace for each Atmos component
  #        name: "{terraform_workspace}"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the mixins are the catalog/defaults/baseline for all defaults for tenants, accounts and regions

https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/mixins

(the mixins are imported into the top-level stacks to make them DRY)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the catalog is the catalog/baseline for the components config

https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/catalog

(the component catalog is imported into mixins if you use the same component in many tenants/accounts/regions, or is imported into the top-level stacks directly)

1
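For instance, a region mixin that pulls in component baselines might look like this (paths and values are illustrative):

```yaml
# stacks/mixins/region/us-east-2.yaml (illustrative)
import:
  - catalog/vpc/defaults
vars:
  environment: ue2
  region: us-east-2
```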
toka avatar

Thank you for your help! I understand you put a lot of work into the documentation and examples and I appreciate it a lot, but I won’t pretend it’s easy to switch the thinking

toka avatar

I’ll try my best to adopt atmos and prove its value; hopefully some day we’ll have some resources to contribute back to the project. Wish me luck

1
1
Andrew Ochsner avatar
Andrew Ochsner

I also got a lot of value out of the design patterns documentation. https://atmos.tools/design-patterns/

Atmos Design Patterns | atmos

Atmos Design Patterns. Elements of Reusable Infrastructure Configuration

2
Dave avatar

What are the best practices for handling the scenario where you haven’t used a tenant in your naming scheme, to overcome null errors?

label_order: ["namespace", "stage","environment", "name", "attributes"] 

To date I have just been removing the code, realizing this is only a temporary solution.

Example:

{ "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },

https://github.com/cloudposse/terraform-aws-components/blob/add978eb5cf2c24a4de2ba080367bf0fdc97847d/modules/ecs-service/main.tf

  map_environment = lookup(each.value, "map_environment", null) != null ? merge(
    { for k, v in local.env_map_subst : split(",", k)[1] => v if split(",", k)[0] == each.key },
    { "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
    { "RUNTIME_ENV" = format("%s-%s-%s", var.namespace, var.tenant, var.stage) },
    { "CLUSTER_NAME" = module.ecs_cluster.outputs.cluster_name },
    var.datadog_agent_sidecar_enabled ? {
      "DD_DOGSTATSD_PORT"      = 8125,
      "DD_TRACING_ENABLED"     = "true",
      "DD_SERVICE_NAME"        = var.name,
      "DD_ENV"                 = var.stage,
      "DD_PROFILING_EXPORTERS" = "agent"
    } : {},
    lookup(each.value, "map_environment", null)
  ) : null

ERROR: Null Value for Tenant

╷
│ Error: Error in function call
│
│   on main.tf line 197, in module "container_definition":
│  197:     { "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
│     ├────────────────
│     │ while calling format(format, args...)
│     │ var.environment is "cc1"
│     │ var.namespace is "dhe"
│     │ var.stage is "dev"
│     │ var.tenant is null
│
│ Call to function "format" failed: unsupported value for "%s" at 3: null value cannot be formatted.
╵

```hcl
# Generic non company specific locals
locals {
  enabled = module.this.enabled
  s3_mirroring_enabled = local.enabled && try(length(var.s3_mirror_name) > 0, false)
  service_container = lookup(var.containers, "service")
  # Get the first containerPort in var.container["service"]["port_mappings"]
  container_port = try(lookup(local.service_container, "port_mappings")[0].containerPort, null)
  assign_public_ip = lookup(local.task, "assign_public_ip", false)
  container_definition = concat(
    [for container in module.container_definition : container.json_map_object],
    [for container in module.datadog_container_definition : container.json_map_object],
    var.datadog_log_method_is_firelens ? [for container in module.datadog_fluent_bit_container_definition : container.json_map_object] : [],
  )
  kinesis_kms_id = try(one(data.aws_kms_alias.selected[*].id), null)
  use_alb_security_group = local.is_alb ? lookup(local.task, "use_alb_security_group", true) : false
  task_definition_s3_key = format("%s/%s/task-definition.json", module.ecs_cluster.outputs.cluster_name, module.this.id)
  task_definition_use_s3 = local.enabled && local.s3_mirroring_enabled && contains(flatten(data.aws_s3_objects.mirror[*].keys), local.task_definition_s3_key)
  task_definition_s3_objects = flatten(data.aws_s3_objects.mirror[*].keys)
  task_definition_s3 = try(jsondecode(data.aws_s3_object.task_definition[0].body), {})
  task_s3 = local.task_definition_use_s3 ? {
    launch_type  = try(local.task_definition_s3.requiresCompatibilities[0], null)
    network_mode = lookup(local.task_definition_s3, "networkMode", null)
    task_memory  = try(tonumber(lookup(local.task_definition_s3, "memory")), null)
    task_cpu     = try(tonumber(lookup(local.task_definition_s3, "cpu")), null)
  } : {}
  task = merge(var.task, local.task_s3)
  efs_component_volumes      = lookup(local.task, "efs_component_volumes", [])
  efs_component_map          = { for efs in local.efs_component_volumes : efs["name"] => efs }
  efs_component_remote_state = { for efs in local.efs_component_volumes : efs["name"] => module.efs[efs["name"]].outputs }
  efs_component_merged = [
    for efs_volume_name, efs_component_output in local.efs_component_remote_state : {
      host_path = local.efs_component_map[efs_volume_name].host_path
      name      = efs_volume_name
      # again this is a hardcoded array because AWS does not support multiple configurations per volume
      efs_volume_configuration = [{
        file_system_id          = efs_component_output.efs_id
        root_directory          = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].root_directory
        transit_encryption      = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].transit_encryption
        transit_encryption_port = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].transit_encryption_port
        authorization_config    = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].authorization_config
      }]
    }
  ]
  efs_volumes = concat(lookup(local.task, "efs_volumes", []), local.efs_component_merged)
}

data "aws_s3_objects" "mirror" {
  count  = local.s3_mirroring_enabled ? 1 : 0
  bucket = lookup(module.s3[0].outputs, "bucket_id", null)
  prefix = format("%s/%s", module.ecs_cluster.outputs.cluster_name, module.this.id)
}

data "aws_s3_object" "task_definition" {
  count  = local.task_definition_use_s3 ? 1 : 0
  bucket = lookup(module.s3[0].outputs, "bucket_id", null)
  key    = try(element(local.task_definition_s3_objects, index(local.task_definition_s3_objects, local.task_definition_s3_key)), null)
}

module "logs" {
  source  = "cloudposse/cloudwatch-logs/aws"
  version = "0.6.8"
  # if we are using datadog firelens we don't need to create a log group
  count = local.enabled && (!var.datadog_agent_sidecar_enabled || !var.datadog_log_method_is_firelens) ? 1 : 0
  stream_names      = lookup(var.logs, "stream_names", [])
  retention_in_days = lookup(var.logs, "retention_in_days", 90)
  principals = merge({ Service = ["ecs.amazonaws.com", "ecs-tasks.amazonaws.com"] }, lookup(var.logs, "principals", {}))
  additional_permissions = concat(["logs:CreateLogStream", "logs:DeleteLogStream"], lookup(var.logs, "additional_permissions", []))
  context = module.this.context
}

module "roles_to_principals" {
  source   = "../account-map/modules/roles-to-principals"
  context  = module.this.context
  role_map = {}
}

locals {
  container_chamber = {
    for name, result in data.aws_ssm_parameters_by_path.default :
    name => { for key, value in zipmap(result.names, result.values) : element(reverse(split("/", key)), 0) => value }
  }
  container_aliases = { for name, settings in var.containers : settings["name"] => name if local.enabled }
  container_s3 = { for item in lookup(local.task_definition_s3, "containerDefinitions", []) : local.container_aliases[item.name] => { container_definition = item } }
  containers_priority_terraform = { for name, settings in var.containers : name => merge(local.container_chamber[name], lookup(local.container_s3, name, {}), settings) if local.enabled }
  containers_priority_s3 = { for name, settings in var.containers : name => merge(settings, local.container_chamber[name], lookup(local.container_s3, name, {})) if local.enabled }
}

data "aws_ssm_parameters_by_path" "default" {
  for_each = { for k, v in var.containers : k => v if local.enabled }
  path     = format("/%s/%s/%s", var.chamber_service, var.name, each.key)
}

locals {
  containers_envs = merge([
    for name, settings in var.containers : { for k, v in lookup(settings, "map_environment", {}) : "${name},${k}" => v if local.enabled }
  ]...)
}

data "template_file" "envs" {
  for_each = { for k, v in local.containers_envs : k => v if local.enabled }
  template = replace(each.value, "$$", "$")
  vars = {
    stage         = module.this.stage
    namespace     = module.this.namespace
    name          = module.this.name
    full_domain   = local.full_domain
    vanity_domain = var.vanity_domain
    # service_domain uses whatever the current service is (public/private)
    service_domain         = local.domain_no_service_name
    service_domain_public  = local.public_domain_no_service_name
    service_domain_private = local.private_domain_no_service_name
  }
}

locals {
  env_map_subst = { for k, v in data.template_file.envs : k => v.rendered }
  map_secrets = { for k, v in local.containers_priority_terraform : k => lookup(v, "map_secrets", null) != null ? zipmap(
    keys(lookup(v, "map_secrets", null)),
    formatlist("%s/%s", format("arn:aws:ssm:%s:%s:parameter", var.region, module.roles_to_principals.full_account_map[format("%s-%s", var.tenant, var.stage)]), values(lookup(v, "map_secrets", null)))
  ) : null }
}

module "container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.61.1"
  for_each = { for k, v in local.containers_priority_terraform : k => v if local.enabled }
  container_name = each.value["name"]
  container_image = lookup(each.value, "ecr_image", null) != null ? format(
    "%s.dkr.ecr.%s.amazonaws.com/%s",
    module.roles_to_principals.full_account_map[var.ecr_stage_name],
    coalesce(var.ecr_region, var.region),
    lookup(local.containers_priority_s3[each.key], "ecr_image", null)
  ) : lookup(local.containers_priority_s3[each.key], "image")
  container_memory             = each.value["memory"]
  container_memory_reservation = each.value["memory_reservation"]
  container_cpu                = each.value["cpu"]
  essential                    = each.value["essential"]
  readonly_root_filesystem     = each.value["readonly_root_filesystem"…
```

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, so you’re trying to pull down some Cloud Posse managed components, which assume you’re using tenant


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s a good question, I’m afraid right now that’s not possible without forking the components. Our components are designed/optimized to work with our commercial reference architecture. We give them away in the event they are useful for others, but they are not as generic as our child modules (e.g. https://github.com/cloudposse)

Cloud Posse

DevOps Accelerator for Funded Startups & Enterprises Hire Us! https://cloudposse.com/services

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We intend to generalize this in future versions of our refarch, but for now that’s not supported.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I can tell you how to work around it!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we’ve seen more and more similar requests to these, and I can really identify with this request as something we could/should support via the atmos vendor command.

One of our design goals in atmos is to avoid code generation / manipulation as much as possible. This ensures future compatibility with terraform.

So while we don’t support it the way terragrunt does, we support it the way terraform already supports it. :smiley:

That’s using the [_override.tf](http://_override.tf) pattern.

We like this approach because it keeps code as vanilla as possible while sticking with features native to Terraform.

https://developer.hashicorp.com/terraform/language/files/override

So, let’s say you have a [main.tf](http://main.tf) with something like this (from the terragrunt docs you linked)

module "example" {
  source = "github.com/org/modules.git//example"
  // other parameters
}

To do what you want to do in native terraform, create a file called [main_override.tf](http://main_override.tf)

module "example" {
  source = "/local/path/to/modules//example"
}

You don’t need to duplicate the rest of the definition, only the parts you want to “override”, like the source

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In this case, you would create an overrides that alters the map_environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This way you can preserve all the rest of the functionality, without forking.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In this case, you would use [main_override.tf](http://main_override.tf) to replace the local

locals {

  map_secrets = ....
}
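Applied to the APP_ENV error above, that could look something like this (the local name and the join/compact approach are illustrative, not the component's actual schema):

```hcl
# main_override.tf (hypothetical sketch)
# Per Terraform's override-file merge rules, a value defined here replaces
# the same-named value in the vendored component, name by name.
locals {
  app_env = join("-", compact([
    var.namespace,
    var.tenant == null ? "" : var.tenant, # skip the null tenant instead of formatting it
    var.environment,
    var.stage,
  ]))
}
```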
Dave avatar

Ah k, and the override is there so you can pull updates without getting it overwritten? From time to time you might need to update the override in order to match the new schema?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, since the overrides are in a file named ....[_override.tf](http://_override.tf) they will never get overwritten by our upstream. This is a “contract” we have to ensure vendoring + overrides always works.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In other words, we will not have files named ....[_override.tf](http://_override.tf) in our upstream components repo.

1
Dave avatar

k, thanks for the fast answer!

1

2024-04-17

RB avatar

Besides renaming an account, what can be done if the account name is too long, causing module.this.id values to hit the max character restrictions for AWS resources?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So there’s no strict enforcement on AWS or in atmos on what those names are. It’s purely self-inflicted

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s not clear why you cannot change the account name in stack configuration

RB avatar

If we make the stage shorter, then new account names have to be learned when targeting a stack from the command line.

I did think maybe using stage aliases would help with this problem and wrote up https://github.com/cloudposse/atmos/issues/581

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, yes, I recall that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this is an interesting use case.

RB avatar

oh i didnt see you commented there. I was also opening this thread to see alternatives but ill read up the comments now, sorry

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For example, a recent customer didn’t like that we abbreviated the stage name for production to prod to conserve length. If we had support for aliases, the full-length name could still be used

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Short term, I don’t see any solution besides using an abbreviation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Longer term, there’s something else we’re about to release that could be where this is solved.

this1
RB avatar

If we go with short term of an abbreviation, that would be the codified naming convention.

Another alternative is dropping namespace from the null label for accounts that hit these constraints… but then we have inconsistency.

Both short terms are hard to back out from

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, it predominantly affects specific resources in AWS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You only need the exceptions in those cases.

1
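
One hedged sketch of such an exception, assuming the component uses the standard cloudposse null-label context (null-label exposes an id_length_limit input that truncates the generated ID; the component name below is hypothetical):

components:
  terraform:
    my-long-named-component:
      vars:
        # Truncate module.this.id to fit resources with short name limits
        # (null-label requires a minimum of 6 when set; 0 means no limit)
        id_length_limit: 32
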
RB avatar

Are relative path imports for catalogs supported in Atmos YAML?

import:
  # relative path from stacks/
  - path: catalog/services/echo-server/resources/*.yaml
  # relative path from catalog itself
  - path: ./resources/*.yaml
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have an issue on our roadmap to implement this, but no ETA

RB avatar
#293 Feature Request: Relative Path Imports

Describe the Feature

Instead of requiring the full path relative to the top level, support paths relative to the current file.

Today, we have to write:

# orgs/acme/mgmt/auto/network.yaml
import:
  - orgs/acme/mgmt/auto/_defaults

Instead, we should be able to write:

# orgs/acme/mgmt/auto/network.yaml
import:
  - _defaults

Expected Behavior

Whereby _defaults resolves to orgs/acme/mgmt/auto/_defaults.yaml;

Use Case

Make configurations easier to templatize without knowing the full path to the current working file.

Describe Ideal Solution

Basically, take the dirname('orgs/acme/mgmt/auto/network.yaml') and concatenate it with /_default.yaml

Alternatives Considered

Today, we’re using cookiecutter when generating the files:

import:
  - orgs/{{ cookiecutter.namespace }}/mgmt/auto/_defaults
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently, all paths are relative to the “stacks” folder
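
For example, even a manifest that itself lives under catalog/ still has to spell out the full path from stacks/ (paths here are illustrative):

# stacks/catalog/services/echo-server/defaults.yaml
import:
  # resolved relative to stacks/, not relative to this file
  - catalog/services/echo-server/resources/defaults
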

RB avatar

ya, but I was trying to import relative to another catalog

RB avatar

you’re right, both are relative

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you are talking about paths relative to the current manifest file. That would save you from typing the path prefix (e.g. catalog/xxx), but at the same time it would make the manifest non-portable (always some compromises/tradeoffs)

1

2024-04-19

pv avatar

Is there anything that changed on the Atmos side regarding GHA pipelines? I have my workflow file that runs certain files that have commands for plan and apply. Last week, I was able to run multiple plan and apply commands in one file. Now, my workflow hangs in GHA. The only fix I have is to run one command at a time which takes a lot more time for me to deploy things

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey @pv - this is the first report we’ve heard of that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I wonder if it could be a concurrency setting somewhere?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using self-hosted runners?

pv avatar

@Erik Osterman (Cloud Posse) No, in this setup it is not self hosted

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov any ideas on this one?

Igor Rodionov avatar
Igor Rodionov

@pv can you share the logs of the failed GHA run?

pv avatar

There are no logs of value. At the plan or deploy stage, it just gets stuck and runs forever until the workflow is cancelled

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using the version of the GHA that uses atmos.yaml, or the “gitops” config?

Igor Rodionov avatar
Igor Rodionov

weird

pv avatar

We have an atmos.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would you happen to be leveraging -lock-timeout anywhere?

pv avatar

@Erik Osterman (Cloud Posse) Nope, just checked the yaml and don’t see that value anywhere

pv avatar
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir ('/usr/local/etc/atmos' on Linux, '%LOCALAPPDATA%/atmos' on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star '**' is supported)
# <https://en.wikipedia.org/wiki/Glob_(programming)>

# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path'
# and 'workflows.base_path' are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path'
# and 'workflows.base_path' are considered paths relative to 'base_path'.
base_path: ""

components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
    apply_auto_approve: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
    deploy_run_init: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: true

stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  file: "/dev/stdout"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  level: Trace

# Custom CLI commands
commands: []

# Integrations
integrations: {}

# Validation schemas (for validating atmos stacks and components)
schemas:
  # <https://json-schema.org>
  jsonschema:
    # Can also be set using 'ATMOS_SCHEMAS_JSONSCHEMA_BASE_PATH' ENV var, or '--schemas-jsonschema-dir' command-line arguments
    # Supports both absolute and relative paths
    base_path: "stacks/schemas/jsonschema"
  # <https://www.openpolicyagent.org>
  opa:
    # Can also be set using 'ATMOS_SCHEMAS_OPA_BASE_PATH' ENV var, or '--schemas-opa-dir' command-line arguments
    # Supports both absolute and relative paths
    base_path: "stacks/schemas/opa"
  # JSON Schema to validate Atmos manifests
  # <https://atmos.tools/reference/schemas/>
  # <https://atmos.tools/cli/commands/validate/stacks/>
  # <https://atmos.tools/quick-start/configure-validation/>
  # <https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json>
  # <https://json-schema.org/draft/2020-12/release-notes>
  atmos:
    # Can also be set using 'ATMOS_SCHEMAS_ATMOS_MANIFEST' ENV var, or '--schemas-atmos-manifest' command-line arguments
    # Supports both absolute and relative paths (relative to the `base_path` setting in `atmos.yaml`)
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Your integrations section is empty, which tells me it’s not configured for the current version of the actions
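
For context, the current version of the actions reads its settings from an integrations.github.gitops section in atmos.yaml. A rough, hedged sketch of the shape - field names here are from memory and all values are illustrative; the actions’ own migration guides are authoritative:

integrations:
  github:
    gitops:
      terraform-version: 1.5.2
      infracost-enabled: false
      artifact-storage:
        region: us-east-2
        bucket: acme-gitops-artifacts                 # illustrative
        table: acme-gitops-plan-storage               # illustrative
        role: arn:aws:iam::111111111111:role/gitops   # illustrative
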

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am afk, but if you check the actions themselves, they have a migration guide

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov @Gabriela Campana (Cloud Posse) we should error if the integration configuration is absent

1
pv avatar

What integration do I need? The documentation only shows the Atlantis integration, and I can’t find a list of the other integrations

pv avatar

Thanks! It would be nice to have the list of integrations linked in the docs integrations page as well

1
pv avatar

Most of these arguments seem to be related to AWS, so I don’t know if this will actually solve my issue. I am using GCP. The only thing I could really use from this is setting the Atmos version, but we already have that set to latest in our pipeline

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov any action item here?

2024-04-22

2024-04-23

Ryan avatar

Hey all, I’m confused about the best way to handle this, though I’m sure I’ll learn as I work with Cloud Posse more. In a few of the small modules I’ve made, I build one or two main object variables with everything tied back to that master object var. Looking at terraform-aws-network-firewall, the variables are more of an any type, and I have to figure out the YAML data structures for input. Both ways make sense, especially because network firewalls have a lot of settings to wrangle. I had most of my module built for network-firewall, but based on Erik’s suggestion I figured I’d give your module a try. The examples are helping, but GPT is definitely helping me more than I’m helping myself with TF-to-YAML conversions lol.

1
Ryan avatar

Morning btw everyone.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmm so basically, are you trying to figure out the shape of the YAML for the component?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ideally, our components all have a working example to make that easier.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In this case, you’re referring to our child module terraform-aws-network-firewall

Ryan avatar

yea, but network_firewall has a LOT of variables

Ryan avatar

yea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here’s an example of how to configure terraform-aws-network-firewall in Atmos:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:

    # <https://catalog.workshops.aws/networkfirewall/en-US/intro>
    # <https://d1.awsstatic.com/events/aws-reinforce-2022/NIS308_Deploying-AWS-Network-Firewall-at-scale-athenahealths-journey.pdf>
    network-firewall:
      metadata:
        component: "network-firewall"
      vars:
        enabled: true
        name: "network-firewall"

        # The name of a VPC component where the Network Firewall is provisioned
        vpc_component_name: "vpc"

        all_traffic_cidr_block: "0.0.0.0/0"

        delete_protection: false
        firewall_policy_change_protection: false
        subnet_change_protection: false

        # Logging config
        logging_enabled: true
        flow_logs_bucket_component_name: "network-firewall-logs-bucket-flow"
        alert_logs_bucket_component_name: "network-firewall-logs-bucket-alert"

        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateless-default-actions.html>
        # <https://docs.aws.amazon.com/network-firewall/latest/APIReference/API_FirewallPolicy.html>
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-action.html#rule-action-stateless>
        stateless_default_actions:
          - "aws:forward_to_sfe"
        stateless_fragment_default_actions:
          - "aws:forward_to_sfe"
        stateless_custom_actions: [ ]

        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html>
        # <https://github.com/aws-samples/aws-network-firewall-strict-rule-ordering-terraform>
        policy_stateful_engine_options_rule_order: "STRICT_ORDER"

        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-default-actions.html>
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-default-rule-evaluation-order>
        # <https://docs.aws.amazon.com/network-firewall/latest/APIReference/API_FirewallPolicy.html>
        stateful_default_actions:
          - "aws:alert_established"
        #  - "aws:alert_strict"
        #  - "aws:drop_established"
        #  - "aws:drop_strict"

        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-groups.html>
        # Map of arbitrary rule group names to rule group configs
        rule_group_config:
          stateful-inspection:
            # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-group-managing.html#nwfw-rule-group-capacity>
            # For stateful rules, `capacity` means the max number of rules in the rule group
            capacity: 1000
            name: "stateful-inspection"
            description: "Stateful inspection of packets"
            type: "STATEFUL"

            rule_group:
              rule_variables:
                port_sets: [ ]
                ip_sets:
                  - key: "APPS_CIDR"
                    definition:
                      - "10.96.0.0/10"
                  - key: "SCANNER"
                    definition:
                      - "10.80.40.0/32"
                  - key: "CIDR_1"
                    definition:
                      - "10.32.0.0/12"
                  - key: "CIDR_2"
                    definition:
                      - "10.64.0.0/12"
                  # bad actors today on 8 blacklists
                  - key: "BLACKLIST"
                    definition:
                      - "193.142.146.35/32"
                      - "69.40.195.236/32"
                      - "125.17.153.207/32"
                      - "185.220.101.4/32"

              stateful_rule_options:
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html>
                # All the stateful rule groups are provided to the rule engine as Suricata compatible strings
                # Suricata can evaluate stateful rule groups by using the default rule group ordering method,
                # or you can set an exact order using the strict ordering method.
                # The settings for your rule groups must match the settings for the firewall policy that they belong to.
                # With strict ordering, the rule groups are evaluated by order of priority, starting from the lowest number,
                # and the rules in each rule group are processed in the order in which they're defined.
                rule_order: "STRICT_ORDER"

              # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-how-to-provide-rules.html>
              rules_source:

                # Suricata rules for the rule group
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html>
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html>
                # <https://github.com/aws-samples/aws-network-firewall-terraform/blob/main/firewall.tf#L66>
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-suricata.html>
                # <https://coralogix.com/blog/writing-effective-suricata-rules-for-the-sta/>
                # <https://suricata.readthedocs.io/en/suricata-6.0.10/rules/intro.html>
                # <https://suricata.readthedocs.io/en/suricata-6.0.0/rules/header-keywords.html>
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-action.html>
                # <https://yaml-multiline.info>
                #
                # With Strict evaluation order, the rules in each rule group are processed in the order in which they're defined
                #
                # Pass – Discontinue inspection of the matching packet and permit it to go to its intended destination
                #
                # Drop or Alert– Evaluate the packet against all rules with drop or alert action settings.
                # If the firewall has alert logging configured, send a message to the firewall's alert logs for each matching rule.
                # The first log entry for the packet will be for the first rule that matched the packet.
                # After all rules have been evaluated, handle the packet according to the action setting in the first rule that matched the packet.
                # If the first rule has a drop action, block the packet. If it has an alert action, continue evaluation.
                #
                # Reject – Drop traffic that matches the conditions of the stateful rule and send a TCP reset packet back to sender of the packet.
                # A TCP reset packet is a packet with no payload and a RST bit contained in the TCP header flags.
                # Reject is available only for TCP traffic. This option doesn't support FTP and IMAP protocols.
                rules_string: |
                  alert ip $BLACKLIST any <> any any ( msg:"Alert on blacklisted traffic"; sid:100; rev:1; )
                  drop ip $BLACKLIST any <> any any ( msg:"Blocked blacklisted traffic"; sid:200; rev:1; )

                  pass ip $SCANNER any -> any any ( msg: "Allow scanner"; sid:300; rev:1; )

                  alert ip $APPS_CIDR any -> $CIDR_1 any ( msg:"Alert on APPS_CIDR to CIDR_1 traffic"; sid:400; rev:1; )
                  drop ip $APPS_CIDR any -> $CIDR_1 any ( msg:"Blocked APPS_CIDR to CIDR_1 traffic"; sid:410; rev:1; )

                  alert ip $APPS_CIDR any -> $CIDR_2 any ( msg:"Alert on APPS_CIDR to CIDR_2 traffic"; sid:500; rev:1; )
                  drop ip $APPS_CIDR any -> $CIDR_2 any ( msg:"Blocked APPS_CIDR to CIDR_2 traffic"; sid:510; rev:1; )
Ryan avatar

nice, ty Andriy; give me a few to read, I’m split between a call and this.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Make sure you are using our component and not the child module

2
Ryan avatar

ok, maybe I pulled the wrong module

Ryan avatar

that’s much closer in the example; I was trying to figure out the YAML objects myself

Ryan avatar

ooo likely interested in the zscaler module but with govcloud configs btw

Ryan avatar

I’m still very much learning Terraform, so if there are better ways to handle inputs like this, I’m all ears on learning it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

rule_group_config is a very complex variable with a LOT of different combinations of data types. That’s why it’s set to any - this simplifies the variable, but complicates figuring out how to define it (either in plain Terraform or in Atmos stack manifests in YAML)

1
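
In Terraform terms, the tradeoff looks roughly like this (a minimal sketch, not the module’s actual variable definitions):

# Typed: validated at plan time and self-documenting, but rigid
variable "ip_sets" {
  type = list(object({
    key        = string
    definition = list(string)
  }))
  default = []
}

# `any`: accepts deeply nested, heterogeneous config like `rule_group_config`,
# but the expected shape must come from docs and examples instead of the type system
variable "rule_group_config" {
  type    = any
  default = {}
}
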
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the example in the component repo (and the one I posted above) should help you get started (that’s a working example)

1
Ryan avatar

yea, I started laughing out loud when I realized what I was doing was futile, and then seeing how you handled it

Ryan avatar

it was complex either way, but your way is much better; I appreciate the help this am

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need any help

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you vendor the https://github.com/cloudposse/terraform-aws-components/tree/main/modules/network-firewall component and use the YAML above to configure the Atmos component, you should be able to provision it. Pay attention to the VPC component and the remote state (the VPC needs to be provisioned already)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
        # The name of a VPC component where the Network Firewall is provisioned
        vpc_component_name: "vpc"
Ryan avatar

I might need help, but I need to struggle through this a bit and learn; I’ll come back in a few hours.

Ryan avatar

and thank you, idk if I’ll have to modify based on the dependency?

Ryan avatar

it’s a thought I hadn’t had

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Vendor Components | atmos

In the previous steps, we’ve configured the repository and decided to provision the vpc-flow-logs-bucket and vpc Terraform

Ryan avatar

yea, we’re on remote state under globals I believe, but we still need to have an env review

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in short, you provision the vpc component first (e.g. atmos terraform apply vpc -s <stack> )

Ryan avatar

yea, I know it has a vpc dependency, and I was kinda hacking vpc.id or vpc.arn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in other components that need to get the remote state from the vpc component (e.g. to get the VPC ID), you define the remote-state Terraform module and configure a variable in Atmos to specify the name of the component
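
A minimal sketch of that pattern (the module version and output name are illustrative; the component’s own remote-state.tf is authoritative):

module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "x.x.x" # pin to the version your components already use

  # the Atmos component whose remote state to read, set via a variable in YAML
  component = var.vpc_component_name
  context   = module.this.context
}

# elsewhere in the component:
# vpc_id = module.vpc.outputs.vpc_id
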

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


i was kinda hacking vpc.id

Ryan avatar

yea, I’ve done that for my own modules in and out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can def hack that for testing and to start faster with the network firewall

Ryan avatar

yea, that’s basically where I am now in testing

Ryan avatar

vpc_id and subnet_id as statics

Ryan avatar

they’re probably not going to care if my modules interact properly btw; I have an unreasonable deadline of fitting 5 weeks of work into 2 1/2 weeks

Ryan avatar

I’ve made most of the work needed, but I’m down to this piece

Ryan avatar

I can always try it in dev bc I believe we’re using vpc, but unsure of the integration

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i hear you. The network firewall is def not an easy thing

Ryan avatar

yea, when I got to the rules after saying it was easy, I’m like oh…………….

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anyway, adding remote state for vpc is much simpler than network firewall, and you can do it later

Ryan avatar

good lesson on data structures though

Ryan avatar

ok cool, I appreciate it

Ryan avatar

so a few thoughts - I had to edit providers.tf to include root_account_environment_name under the iam_roles module - then the plan fired up, but per the plan results local.vpc_outputs and the shared config profile do not exist. I could probably cut out the reliance on the buckets and create them myself, as annoying as that would be; otherwise I think I see defaults I can configure in remote-state.tf and just point at the resources. I appreciate the integration btw, I hadn’t really seen TF talk across resources that way.

Ryan avatar

going to take a walk and get some fresh air.

Ryan avatar

appreciate your team’s help on this.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure if I follow…. need more context.

Ryan avatar

Sorry I should’ve provided context, I tried network-firewall out in the environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to disable reading remote state for the log buckets

        # Logging config
        logging_enabled: true
        flow_logs_bucket_component_name: "network-firewall-logs-bucket-flow"
        alert_logs_bucket_component_name: "network-firewall-logs-bucket-alert"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

set

logging_enabled: false
Ryan avatar

finally seeing green, Andriy, but I’m unsure of my hacking consequences lol. I had to modify providers.tf as follows -

provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  # profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source                        = "../account-map/modules/iam-roles"
  root_account_environment_name = var.root_account_environment_name
  context                       = module.this.context
}

variable "root_account_environment_name" {
  type        = string
  description = "Global Environment Name"
}
Ryan avatar

I also went into remote-state.tf and created a defaults = { vpc_id } and adjusted firewall_subnet_ids in main.tf to a subnet map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what it does is it reads the terraform role to assume

Ryan avatar

not my final solution

Ryan avatar

I was kinda hacking it together to get it to show me green text

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as long as you have it, and it can read it, and the role has the required permissions, it should be fine

Ryan avatar

I could probably pipe those back to variables in the YAML; I’m trying not to get too hacky as I’m still learning the Atmos structure

Ryan avatar

I’m going to run my first deploy now and see how she goes; half of what sucks about working with this resource is the coffee break each time you destroy/recreate

Ryan avatar

mmmmm

Ryan avatar

thanks again Andriy and Erik, will come back again soon

1
Ryan avatar

hey Andriy - do you have any examples of multiple rule groups, or combined stateful/stateless examples? I’m swapping back and forth between the TF code and the YAML right now trying to get multiple rules going. The Suricata parsing of the IP sets was very cool btw; I was trying to think of ways to deal with the massive rule lists

Ryan avatar

just the YAML example would be much appreciated

Ryan avatar

I’ll dig around otherwise; most of what I have works though, just trying to get the syntax correct for more rules

Ryan avatar

this is what I’m struggling with, but I’m going to take a break for a bit. I went back and forth between main.tf and the vars to be sure it was configuring correctly, but my YAML attempts have failed thus far for a stateless rule example -

              rules_source:
                stateless_rules_and_custom_actions:
                  stateless_rule:
                  - priority: 1
                    rule_definition:
                      actions:
                        - "aws:drop"
                      match_attributes:
                        protocols:
                        - "1"
                        source: 
                        - address_definition: $SOURCE
                        destination: 
                        - address_definition: $DST

Planned Result:

      + rules_source {
              + stateless_rules_and_custom_actions {
                  + stateless_rule {
                      + priority = 1

                      + rule_definition {
                          + actions = [
                              + "aws:drop",
                            ]

                          + match_attributes {
                              + protocols = [
                                  + 1,
                                ]
                            }
                        }
                    }
                }
            }
Ryan avatar

I feel like I’m giving it source and destination, but my spacing or something is off

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan I think it should be done like this

              rules_source:
                stateless_rules_and_custom_actions:
                  stateless_rule:
                  - priority: 1
                    rule_definition:
                      actions:
                        - "aws:drop"
                      match_attributes:
                        protocols:
                        - "1"
                        source: 
                        - $SOURCE
                        destination: 
                        - $DST
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
            dynamic "stateless_rule" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
                    dynamic "destination" {
                      for_each = lookup(stateless_rule.value.rule_definition.match_attributes, "destination", [])
                      content {
                        address_definition = destination.value
                      }
                    }
Ryan avatar

yea, that is exactly what I was referencing, but I am still terrible at going from TF to YAML

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if it works for you

Ryan avatar

I’m glad I was referencing the right place

Ryan avatar

ok, I see now, and yea it’s planning

Ryan avatar

for some reason I thought I had to define address_definition, but looking now I was wrong

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I understand. It’s just how it’s implemented in the TF module. We could have easily parsed address_definition in TF

content {
                        address_definition = destination.value.address_definition
                      }

and then you’d have to do

                        source: 
                        - address_definition: $SOURCE
                        destination: 
                        - address_definition: $DST
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

network firewall is complicated

Ryan avatar

yea, that’s what I was figuring in my head after you pasted and I went oh

Ryan avatar

and I appreciate the complication of it; it’s good practice for me

1
Andrew Chemis avatar
Andrew Chemis

Hello - I was wondering if there are any updates on the refactoring of the account and account-map modules to enable brownfield / Control Tower deployments https://sweetops.slack.com/archives/C031919U8A0/p1702136079102269?thread_ts=1702135734.967949&cid=C031919U8A0 . I see 2 related PRs that look like they will never be approved. Don’t know if it matters, but this particular project is using TF Cloud as a backend.

Until then, what is the suggested workaround to enable using those modules in an existing organization - it seems to be creating a remote-state module that represents the output of account-map? Does anyone have an example of what this is supposed to look like? And if I go with this approach, how will this change when the refactored modules become available?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, Cloud Posse has not been able to prioritize this work over other work. Note, this relates more to refarch than Atmos.

Well, those are precisely the components that won’t work well in brownfield, at least the brownfield we plan to address.

However, our plan for next quarter (2024) is to refactor our components for à la carte deployment in brownfield settings. E.g., an enterprise account management team issues your product team 3 accounts from a centrally managed Control Tower. You want to deploy Cloud Posse components and solutions in those accounts, without any sort of shared S3 bucket for state, no account map, no account management.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB has had success doing this, I believe, and may have more guidance.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

At this point, we would recommend you use the code from the associated branches submitted by RB

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We haven’t merged/incorporated it, because we (Cloud Posse) are responsible for LTS support and have no way to test the changes.

Also, since our plans involve significant changes to these components, we don’t want to bite off more than we can chew.

Andrew Chemis avatar
Andrew Chemis

Excellent - thank you. This is useful

pv avatar

How would I pass this variable in YAML?

variable "database_encryption" {
  description = "Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: \"ENCRYPTED\"; \"DECRYPTED\". key_name is the name of a CloudKMS key."
  type        = list(object({ state = string, key_name = string }))

  default = [{
    state    = "DECRYPTED"
    key_name = ""
  }]
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

list of objects

vars:
  database_encryption:
    - state: DECRYPTED
      key_name: xxx
1
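The YAML above corresponds to this Terraform value (an illustrative sketch; note the variable’s description says the valid state values are “ENCRYPTED” and “DECRYPTED”, and the key_name here is a placeholder):

```hcl
# Equivalent HCL value for the database_encryption variable (illustrative)
database_encryption = [
  {
    state    = "DECRYPTED"
    key_name = "xxx"
  }
]
```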
pv avatar

That worked, thanks!

2024-04-24

Stephan Helas avatar
Stephan Helas

Hello,

I’m trying to pass values from settings into vars. This only works after components are processed. What I mean by that is:

This works:

import:
  - accounts/_defaults

settings:
  account: '0'

vars:
  tenant: account
  environment: test
  stage: '0'
  tags:
    account: '{{ .settings.account }}'

This is not:

import:
  - accounts/_defaults

settings:
  account: '0'

vars:
  tenant: account
  environment: test
  stage: '{{ .settings.account }}'

Is there a way to pass settings to vars before the components are processed?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So there are a few observations here. And… while we can directly address what you’re trying to do, I think there could be an XY problem (xyproblem.info).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When you say it’s not working, what’s the error or the observed behavior?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…is it a syntax error?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(note that templating right now is a single pass; it’s not re-entrant. We plan to make this configurable, e.g. make it 3 passes, but it won’t be infinitely recursive)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas it’s not working not b/c of any errors in the template, and not b/c it’s multi-pass (it’s a single pass)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not working b/c you are overriding the stage variable, which is a context var which Atmos uses to find components and stacks in stack manifests

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using namespace, tenant, environment and stage as context vars, and in your atmos.yaml you have stacks.name_pattern, those context vars must be provided in stack manifests and known before any processing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

But it looks like what you are trying to do is override the context vars. Here’s how to do it:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

look at stacks.name_pattern and stacks.name_template

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your case/example, you can do this in atmos.yaml:

stacks:
  name_template: "{{.settings.tenant}}-{{.settings.region}}-{{.settings.account}}"
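For contrast, the fixed-token stacks.name_pattern form (visible in error messages elsewhere in this thread) looks like this; name_template is the more flexible Go-template alternative:

```yaml
stacks:
  # each token is replaced with the corresponding context variable
  name_pattern: "{tenant}-{environment}-{stage}"
```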
RB avatar

I noticed in Geodesic that the env var ATMOS_BASE_PATH is set correctly. I didn’t see ATMOS_CLI_CONFIG_PATH, and since that is unset, atmos cannot fully understand the stack YAML.

1
RB avatar

Is this something I need to manually set in the Dockerfile, or is this a bug in Geodesic?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we usually place atmos.yaml at rootfs/usr/local/etc/atmos/atmos.yaml in the repo, and then in the Dockerfile:

COPY rootfs/ /
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which becomes /usr/local/etc/atmos/atmos.yaml in the container, and that’s a location which Atmos always checks

RB avatar

ohhhh i see

RB avatar

thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s how we usually do it, but you can place atmos.yaml in any folder and then use the ENV var ATMOS_CLI_CONFIG_PATH (if that’s what you want)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t use the ENV var, Atmos searches for atmos.yaml in these locations. https://atmos.tools/cli/configuration#configuration-file-atmosyaml

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

RB avatar

My problem was that I want to run atmos both outside and inside Geodesic, so I have my atmos.yaml file in the repo root. I copied the above-linked atmos.sh script and added the following to it:

export ATMOS_CLI_CONFIG_PATH="${ATMOS_BASE_PATH}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did it work?

RB avatar


if you don’t use the ENV var, Atmos searches for atmos.yaml in these locations. https://atmos.tools/cli/configuration#configuration-file-atmosyaml

This is odd, because those locations include “Current directory (./atmos.yaml)”, and that is where my file is located, yet I get an error unless I also set ATMOS_CLI_CONFIG_PATH.

⨠ unset ATMOS_CLI_CONFIG_PATH
⨠ atmos version | grep -i atmos
Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos
👽 Atmos v1.70.0 on linux/arm64
⨠ atmos terraform plan service/s3 --stack ue1-dev

...

module.iam_roles.module.account_map.data.utils_component_config.config[0]: Reading...

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error:
│ Searched all stack YAML files, but could not find config for the component 'account-map' in the stack 'core-gbl-root'.
│ Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
│ Are the component and stack names correct? Did you forget an import?
│
│
│   with module.iam_roles.module.account_map.data.utils_component_config.config[0],
│   on .terraform/modules/iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {

RB avatar

Seems like a bug. Should I write it up?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

No, this is not a bug. The error is from the remote-state module, see https://atmos.tools/core-concepts/components/remote-state#caveats

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for the remote-state to work, atmos.yaml needs to be in a “global” directory (e.g. /usr/local/etc/atmos/atmos.yaml)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(all of that is b/c Terraform executes providers in the component folder, so any relative paths will not work, and atmos.yaml placed in one component folder will not affect another component)

RB avatar

so basically I should always have ATMOS_CLI_CONFIG_PATH set

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we always have it in /usr/local/etc/atmos/atmos.yaml (where both Atmos binary and the Terraform utils provider for remote state can find it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and yes, you can use the ENV var

RB avatar

so then the alternative is

COPY atmos.yaml /usr/local/etc/atmos/atmos.yaml
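Putting the two options from this thread side by side as a hedged Dockerfile sketch (paths are illustrative):

```dockerfile
# Option 1: bake atmos.yaml into the global location Atmos always checks
COPY atmos.yaml /usr/local/etc/atmos/atmos.yaml

# Option 2 (alternative): keep atmos.yaml elsewhere and point the ENV var
# at the directory containing it (as the atmos.sh snippet above does)
# COPY atmos.yaml /workspace/atmos.yaml
# ENV ATMOS_CLI_CONFIG_PATH=/workspace
```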
RB avatar

I suppose that’s easier than overwriting the atmos.sh file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can try that (there are many different ways of doing it: copying, using ENV vars). The idea is to have atmos.yaml (or a pointer to it in the ENV var) at a path where all involved binaries can find it.

1
RB avatar

kk thanks andriy!

RB avatar

How does the components/terraform/account-map/account-info/acme-gbl-root.sh get used by other scripts?

RB avatar
#!/bin/bash

# This script is automatically generated by `atmos terraform account-map`.
# Do not modify this script directly. Instead, modify the template file.
# Path: components/terraform/account-map/account-info.tftmpl

# CAUTION: this script is appended to other scripts,
# so it must not destroy variables like `functions`.
# On the other hand, this script is repeated for each
# organization, so it must destroy/override variables
# like `accounts` and `account_roles`.


functions+=(namespace)
function namespace() {
  echo ${namespace}
}

functions+=("source-profile")
function source-profile() {
  echo ${source_profile}
}


declare -A accounts

# root account included
accounts=(
  %{ for k, v in account_info_map ~}
  ["${k}"]="${v.id}"
  %{ endfor ~}
)

declare -A account_profiles

# root account included
account_profiles=(
  %{ for k, v in account_profiles ~}
  ["${k}"]="${v}"
  %{ endfor ~}
)

declare -A account_roles

account_roles=(
  %{ for k, v in account_role_map ~}
  ["${k}"]="${v}"
  %{ endfor ~}
)

functions+=("account-names")
function _account-names() {
  printf "%s\n" "$${!accounts[@]}" | sort
}
function account-names() {
  printf "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}-}%s\n" $(_account-names)
}

functions+=("account-ids")
function account-ids() {
  for name in $(_account-names); do
    printf "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}-}%s = %s\n" "$name" "$${accounts[$name]}"
  done
}

functions+=("account-roles")
function _account-roles() {
  printf "%s\n" "$${!account_roles[@]}" | sort
}
function account-roles() {
	for role in $(_account-roles); do
  	printf "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}: }%s -> $${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}-}%s\n" $role "$${account_roles[$role]}"
	done
}

########### non-template helpers ###########

functions+=("account-profile")
function account-profile() {
  printf "%s\n" "$${account_profiles[$1]}"
}

functions+=("account-id")
function account-id() {
	local id="$${accounts[$1]}"
	if [[ -n $id ]]; then
		echo "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}: }$id"
	else
		echo "Account $1 not found" >&2
		exit 1
	fi
}

functions+=("account-for-role")
function account-for-role() {
	local account="$${account_roles[$1]}"
	if [[ -n $account ]]; then
		echo "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}: }$account"
	else
		echo "Account $1 not found" >&2
		exit 1
	fi
}

function account_info_main() {
  if printf '%s\0' "$${functions[@]}" | grep -Fxqz -- "$1"; then
	  "$@"
  else
    fns=$(printf '%s\n' "$${functions[@]}" | sort | uniq)
    usage=$${fns//$'\n'/ | }
    echo "Usage: $0 [ $usage ]"
    exit 99
  fi
}

if ! command -v main >/dev/null; then
  function main() {
    account_info_main "$@"
  }
fi

# If this script is being sourced, do not execute main
(return 0 2>/dev/null) && sourced=1 || main "$@"
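To answer the “how does it get used” question concretely, here is a minimal, self-contained sketch (with fake account data) of the registration-and-dispatch pattern the generated script relies on; it can be run directly or sourced so its functions are callable from other scripts:

```shell
#!/bin/bash
# Minimal sketch of the dispatch pattern used by the generated account-info
# script: each helper registers its name in the `functions` array, and
# account_info_main dispatches only to registered names. Account data below
# is fake, for illustration only.

declare -A accounts=( ["root"]="111111111111" ["dev"]="222222222222" )

functions=()

functions+=("account-id")
function account-id() {
  echo "${accounts[$1]}"
}

functions+=("account-names")
function account-names() {
  printf '%s\n' "${!accounts[@]}" | sort
}

function account_info_main() {
  # Dispatch only if $1 is a registered function name; otherwise print usage.
  if printf '%s\0' "${functions[@]}" | grep -Fxqz -- "$1"; then
    "$@"
  else
    echo "Usage: $0 [ account-id | account-names ]"
    return 99
  fi
}

account_info_main account-id root   # prints 111111111111
account_info_main account-names     # prints dev, then root
```

Because the generated acme-gbl-root.sh only defines main when it doesn’t already exist, and its last line skips main entirely when the file is sourced, another script can source it and call the helpers (account-id, account-names, …) directly without triggering dispatch.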

RB avatar

It’s a handy script, just kind of painful to run manually since it’s so far deep into the components dir.

⨠ components/terraform/account-map/account-info/acme-gbl-root.sh "account-id" root
1234567890

Also curious how I can integrate it; otherwise I might gitignore it for now or copy it into /usr/local/bin in Geodesic.
