#atmos (2024-04)

2024-04-01

Ryan avatar

Hopefully a small question this Monday morning. I’m trying to get atmos.exe functioning on Win11 with our current version (v1.4.25), and it looks like she wants to fire up, but fails to find terraform in the %PATH%. I dropped it in the PATH and even updated PATH with terraform.exe; not sure where atmos is searching for that path. See what I mean here -

C:\path\>
Executing command:
terraform init -reconfigure
exec: "terraform": executable file not found in %PATH%


C:\path\>terraform
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe we’ve predominantly tested it on WSL2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using WSL?

Ryan avatar

No but I can redo in wsl, kinda hacking away at improving the dev environment this morning.

Ryan avatar

I’ll come back in a bit after I give wsl a shot

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan I’m not sure why Atmos can’t see the Terraform binary in the PATH on Windows (we’ll investigate that). It spawns a separate process to execute TF and other binaries. For now, maybe you can try something like this:

terraform:
  command: <path to TF binary>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the command attribute allows you to select any binary to execute (instead of relying on it being in PATH), and it also allows you to set the binary for OpenTofu instead of Terraform (if you want to use it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this config

terraform:
  command: <path to TF binary>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be set globally (per Org, tenant, account), or even per component (if, for example, you want to use different TF or OpenTofu binaries for different components)

components:
  terraform:
    my-component:
      command: <TF or OpenTofu binary>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if that works for you and how it can be improved

Ryan avatar

Will play around with it today, I appreciate your responses. We’re in geodesic now and only really leverage atmos, so I’d like to try to make it easier for my team to access directly from vscode in the native OS.

Ryan avatar

Ok, I’ll probably have to talk to you gentlemen Friday about risk. The issue is we’re on 1.4.x and I was pulling that atmos version; when I grabbed the latest atmos it was fine. I could tell it was a separate process too, but I’m like, uhhh, where the heck is that little guy going for env.

Ryan avatar

I don’t know the risk involved in using the updated exe; we are very narrow in our atmos usage, specifically around terraform and the name structuring

Ryan avatar

i would think that’s all just atmos yaml, but the updated exe is like 4x the size of the one I was using previously.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the latest Atmos version works OK on Windows?

Ryan avatar

yea but now it’s complaining about my structure, so new issues. it’s ok though

Ryan avatar

the terraform piece no longer complained when i updated

Ryan avatar

sorry, i didn’t want to present a bug before i checked that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no problem at all. We can help you with any issues and with the structure

Ryan avatar

Yea, it all works happy now besides that it cannot find the stack based on the atmos.yaml stacks: config. I’ll read deeper on this. From what I can see, my stacksBaseAbsolutePaths seems to be pointing at my components /../.. directory. Something’s fishy with my atmos.yaml.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can DM us the file, we’ll review

alla2 avatar

Hi, everyone. I’m evaluating Atmos for the company to enhance Terraform. In Atmos everything revolves around a library of components, which is understandably where the majority of reusable modules should be stored. But I don’t understand (and the docs don’t help) whether I can define a single resource using plain Terraform syntax at the deepest level of a stack without creating a component for it.

For the following file tree:

.
├── atmos.yaml
├── components
│   └── terraform
│       └── aws-vpc
│           ├── main.tf
│           ├── outputs.tf
│           ├── versions.tf
├── stacks
│   ├── aws
│   │   ├── _defaults.yaml
│   │   └── general_12345678901
│   │       ├── core
│   │       │   ├── _defaults.yaml
│   │       │   ├── eks.tf
│   │       │   └── us-east-2.yaml
│   │       └── _defaults.yaml
│   ├── catalog
│   │   └── aws-vpc
│   │       └── defaults.yaml
└── vendor.yaml

I’d like to just drop eks.tf (a YAML version of HCL is also fine) into stacks/aws/general_12345678901/core and expect Atmos to include it in the deployment. Is it possible?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm… let me see if I can help.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


if I can define a single resource using plain terraform syntax on the deepest level of a stack without creating a component for it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So let’s take a step back. What we call a “component” in atmos is something that is deployable. For terraform, that’s a root module. In terraform, it’s impossible to deploy anything outside of a root module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we’re confusing 2 concepts here. And it happens easily because many different vendors define “stacks” differently.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cloud Posse defines a stack as YAML configuration that specifies how to deploy components (e.g. root modules).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you have a file like eks.tf, you will need to stick that in a root module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In atmos, we keep the configuration (stacks) separate from the components (e.g. terraform root modules).

So inside the stacks/ folder you would never see terraform code.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All terraform code is typically stored in components/terraform/<something>
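As a minimal sketch of that separation (file paths follow the tree posted earlier in the thread; the variable name and value are illustrative), a stack manifest only configures the component and contains no Terraform code:

```yaml
# stacks/aws/general_12345678901/core/us-east-2.yaml (hypothetical sketch)
import:
  - catalog/aws-vpc/defaults   # shared defaults from stacks/catalog/aws-vpc/defaults.yaml

components:
  terraform:
    aws-vpc:                   # must match components/terraform/aws-vpc
      vars:
        cidr_block: "10.0.0.0/16"   # illustrative variable
```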

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this is quite different from tools like terragrunt/terramate that I believe combine terraform code with configuration.

alla2 avatar

so if I need to deploy a single resource to a single environment only, I’d need to create a wrapper module in components and use in the stack?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Perhaps… That said, I’d have to understand more about what you’re trying to do. For example, at Cloud Posse, we would never “drop in” an EKS cluster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

An EKS cluster has its own lifecycle, disjoint from a VPC and applications. It deserves its own component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Many times, users will want to extend some functionality of a component. In that case, it’s great to use the native Terraform _override.tf pattern.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For example, if there’s already an eks.tf file in a component, and you want to tweak how it behaves, you could create an eks_override.tf file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Override Files - Configuration Language | Terraform | HashiCorp Developer

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.
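A hedged sketch of the override pattern (the resource type, address, and attribute are illustrative, not taken from any particular component):

```hcl
# eks_override.tf -- hypothetical override file. Terraform merges the
# attributes below over the resource with the same address declared in
# eks.tf; any attributes not listed here keep their original values.
resource "aws_eks_cluster" "this" {
  version = "1.29" # overrides only this one argument
}
```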

alla2 avatar

an EKS cluster is a bad example. But a lighter resource like an ECR registry, which does not require too many input parameters to configure, is a better one. Having a module for every AWS resource would be infeasible to maintain with a small team. I checked out the library of TF components from Cloud Posse and it looks impressive, but it probably requires an orchestrated effort by the entire team to support. The AWS provider changes very frequently and you have to keep up to update the modules. Another point is that our org comes from a very chaotic TF organization and is trying to implement a better structure. Replacing everything with modules in a single sweep is not realistic. So we would like to start using Atmos with our existing TF code organization, gradually swapping out individual resources with modules. In our use case, environments (AWS accounts) do not have much in common. Having a module for a resource that is only used in a single place does not make much sense

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@alla2 you don’t need to replace everything. If you want a new component, just put it into the components/terraform/<new-component> folder (it could be one TF file like main.tf, or the standard layout like variables.tf, main.tf, outputs.tf, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, in stacks, you provide the configuration for your new component (TF root module)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and in stacks, you can make it DRY by specifying the common config for all accounts and environments in stacks/catalog/<my-component>/defaults.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then in the top-level stacks you can override or provide specific values for your component for each account, region, tenant etc.
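A minimal sketch of that layering, using a hypothetical ecr component (all file paths, variables, and values are illustrative):

```yaml
# stacks/catalog/ecr/defaults.yaml -- shared defaults for all stacks
components:
  terraform:
    ecr:
      vars:
        image_tag_mutability: "IMMUTABLE"
---
# stacks/orgs/acme/plat/prod/us-east-2.yaml -- a top-level stack that
# imports the defaults and overrides/extends values per environment
import:
  - catalog/ecr/defaults

components:
  terraform:
    ecr:
      vars:
        scan_on_push: true   # deep-merged over the catalog defaults
```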

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is to separate the code (TF components) from the configuration (stacks) (separation of concerns)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so your component (TF code) is completely generic and reusable and does not know/care about where it will be deployed - the deployment configs are specified in stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


So we would like to start using Atmos with our existing tf code organization gradually swapping out individual resources with modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s easily done - put your TF code in components/terraform and configure it in stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in the future, you can create a repo with your TF modules, and in your TF components you would just instantiate your modules making your components very small

alla2 avatar

Thanks, Andriy and Erik, for your input and the options you provided. I’ll have another look. I agree that having a module for a logical group of resources is the right thing to do. But it usually only works when you already have good code organization and a somewhat large codebase. I’d argue that managing everything with modules is not productive at the start, as you spend a lot of time creating modules which you don’t even know if you can reuse later. Even within a mature org, parts of the configuration can be defined as plain resources, as they are only useful in this particular combination for this particular environment and do not demand a separate module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so I know where you’re coming from, and if you’re stuck in that situation - other tools might be better suited for dynamically constructing terraform “on the fly”. Right now, atmos is not optimized for that use-case.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I’d argue that managing everything with modules is not productive at the start as you spend a lot of time creating modules which you don’t even know if you can reuse later.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is true… There’s a significant up-front investment. Our belief, though, is that once you’ve made that investment, you almost don’t need to write any HCL code any more.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Everything becomes configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(We’ve of course spent considerable time building that library of code, which is why we have so much)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You might have already come across this, but here’s how we think of components for AWS: https://github.com/cloudposse/terraform-aws-components

These are all the “building blocks” we use in our commercial engagements

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

alla2 avatar

yeah, I was referring to it above, impressive piece of work. Right now we’re mostly using another modules library which I guess should also work fine with Atmos.

Terraform AWS modules

Collection of Terraform AWS modules supported by the community

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, though keep in mind those are child modules, and not root modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, it would be interesting to show a reference architecture using those child modules. It would be easy to vendor them in with atmos vendor, and atmos can automatically generate the provider configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let us know if you need some guidance…

Roy avatar

Hey, I would like to add some follow-up to this discussion under the @alla2 message. As far as I can see, atmos doesn’t provide any option to pass output from one module as an input to another (with optional conditionals and iteration). It can only be achieved by instrumenting the module’s code with context via the remote backend module, is that right? P.S. It could probably also be achieved with some scripting in workflows, but that would be more of a workaround, so I’m not taking it into account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, we have intentionally not implemented it in atmos since we want to rely as much as possible on native Terraform functionality. Terraform can already read remote state, either via the remote state data source or any of the plethora of other data source lookups. We see that as a better approach because it is not something that will tie you to “atmos”. It will work no matter how you run terraform, and we see that as a win for interoperability.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To make this easier, we have a module that simplifies remote state lookups and is context-aware. This is what @Roy refers to.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State module is used when we need to get the outputs of a Terraform component,
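Usage looks roughly like this (the module source follows the linked docs; the version, component name, and output name are illustrative):

```hcl
# Read the outputs of the "vpc" component provisioned in the same stack.
# Version pin and output names are illustrative assumptions.
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component = "vpc"
  context   = module.this.context # terraform-null-label context
}

# Then reference the remote component's outputs, e.g.:
#   module.vpc.outputs.vpc_id
```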

Roy avatar

@Erik Osterman (Cloud Posse) okay, because I was hoping there is a possibility to have implementation-agnostic (and therefore context-agnostic) building blocks (that could be vendored from, e.g., Cloud Posse, Gruntwork, Azure Verified Modules, etc.) and to glue things together with business logic using higher-level tooling (i.e. Atmos), without any intermediate wrapper modules and without instrumenting the low-level blocks. But it would require the capability to connect outputs to inputs in a declarative way with some basic logic support (conditionals, iteration). Of course, for more sophisticated logic some “logic modules” could be written, but after all it would still be a totally flat structure of modules with orchestration on the config management layer, i.e. Atmos. For now with Atmos, if I want to, e.g., use output from one module in a for_each for another module, I need an additional module that: 1. grabs the output from remote state and 2. implements this for_each for the second module. Am I right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So what I tried to say, was we take that implementation agnostic approach. By not building anything natively into atmos it’s agnostic to how you run terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are also building some other things (open source) but cannot discuss here. DM me for details.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

These are terraform mechanisms to further improve what you are trying to do across multiple clouds, ideally suited for enterprise.

Roy avatar

For now I have a 3-layer layout of modules. The first is a set of context-agnostic building blocks (like Gruntwork modules). The second is the orchestrator layer that carries the context using an external config store and implements business logic. The last is the environment layer that just calls the second with exact inputs. I’m looking for some solution that could eliminate layers 2 and 3, and Atmos deals with layer 3 in a way that I love

Roy avatar

now I’m looking at layer 2 and trying to figure out how to deal with it :P

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmm, I think there might have been a punctuation problem in the previous message. You want to eliminate layers 2 and 3?

Roy avatar

yes, I’m wondering about eliminating layer 2 and 3

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Do you mean calling “child” modules as “root” modules by dynamically configuring the backend? Thus being able to use any module as a root module. Then needing to solve the issue of passing dynamic values to those child modules, without the child modules being aware of their context or how they got those values.

Roy avatar

sometimes it would require “logic modules” that would act like a procedure in a general-purpose language (some generalisation of locals), but after that it would be as simple as

resource_component:
  vars:
    var1: something
    var2: something
roles_logic_component:
  vars:
    domain_to_query: example.com
{{ for role in roles_logic_component.output.roles }}
{{ role.name}}-role-component:
  name: {{ role.name }}
  scope: {{ resource_component.output.id }}
  permissions: some-permissions
Roy avatar

Of course it would require building a dependency tree at the tool level, which duplicates TF’s responsibility, but… it would be not only for TF

Roy avatar

As you can see, the context is carried by the tool itself then

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A forthcoming version of atmos will support all of these: https://docs.gomplate.ca/datasources/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Will that solve what you want to do?

Roy avatar

depends on final implementation, but sounds like it would

Roy avatar

did you foresee the use case I described?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It makes sense, although it’s not a use case we are actively looking to solve for ourselves, as we predominantly want nice and neat components that are documented and tested. The more dynamic and meta things become, the harder to document, test and explain how it works.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

correct, templating can become difficult to understand and read very fast. Having said that, we’ll release support for https://docs.gomplate.ca/functions/ in Atmos manifests this week

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(Sprig is already supported https://masterminds.github.io/sprig/)

Sprig Function Documentation

Useful template functions for Go templates.

Roy avatar

then what is, for you, the recommended way of gluing things together? Let’s take the resource <-> roles example from my para-YAML file.

Roy avatar

do You have some ETA for datasources?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

recommended way of gluing things together?

We take a different approach to gluing things together, which is the approach you want to eliminate. We design components to work together, using data sources. This keeps configuration lighter. We are wary of re-inventing HCL in YAML.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep modules generic and reusable (e.g. github.com/cloudposse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then create opinionated root modules that combine functionality. Root modules can use data sources to look up and glue things together.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Segment root modules by life cycle. E.g. a VPC is disjoint from an EKS cluster which is disjoint from the lifecycle of the applications running on the cluster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way we segment root modules, is demonstrated by our terraform-aws-components repo.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ETA for gomplate - we’ll try to release it this week (this will include the datasources)

Roy avatar

but probably it won’t have such wonderful UX as {{ component.output }} reference? :D

2024-04-02

Release notes from atmos avatar
Release notes from atmos
07:34:38 PM

v1.67.0 Add Terraform Cloud backend. Add/update docs @aknysh (#572)

Release v1.67.0 · cloudposse/atmos

Add Terraform Cloud backend. Add/update docs @aknysh (#572) what

Add Terraform Cloud backend Add docs:

Terraform Workspaces in Atmos Terraform Backends. Describes how to configure backends for AW…


Add Terraform Cloud backend. Add/update docs by aknysh · Pull Request #572 · cloudposse/atmos

what

Add Terraform Cloud backend Add docs:

Terraform Workspaces in Atmos Terraform Backends. Describes how to configure backends for AWS S3, Azure Blob Storage, Google Cloud Storage, and Terrafor…


Shiv avatar

So I am examining the output of atmos describe component <component-name> --stack <stack-name>

I am trying to understand the output of the command.

  1. What’s the difference between deps and deps_all?
  2. What does the imports section mean? I see catalog/account-dns and quite a lot of files under the catalog dir
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

imports, deps and deps_all are not related to the component, but rather to the stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
imports - a list of all imports in the Atmos stack (this shows all imports in the stack, related to the component and not)

deps_all - a list of all component stack dependencies (stack manifests where the component settings are defined, either inline or via imports)

deps - a list of component stack dependencies where the final values of all component configurations are defined (after the deep-merging and processing all the inheritance chains and all the base components)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are added to describe component as additional info that might be useful in some cases (e.g. to find all manifests where any config for the component is defined)

Shiv avatar

Thanks much @Andriy Knysh (Cloud Posse). My main intent in understanding these things is related to a project I’m trying to work on: creating a single file which will serve as a registry for bootstrapping new accounts with foundational infra (such as account DNS, region DNS, VPC, compute layer, preferably storage, and some global components needed for each account; each product gets an account). So the registry file, I think, will be a stack in the Atmos file with values in it, and preferably a template file to create all the files and put them in the path that Atmos needs. This way, if I need to bring up an account, all I need is to add the contents to the registry file and run a CLI tool to generate the files needed for Atmos

Shiv avatar

Preferably have the account-related metadata config stored as JSON in S3 to start with (relational DB in the future), and an API for the platform team so devs can use the platform CLI to get the values to manipulate their configs, etc.

Shiv avatar

Does it make sense? Or am I approaching this wrong?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it does make sense. In fact, we will be working on atmos generate functionality to generate different things (projects, components, stacks, files, etc.) from templates, and what you are explaining is exactly what the new Atmos functionality will do cc: @Erik Osterman (Cloud Posse)

Shiv avatar

Interesting! Including the metadata store with an api?

Shiv avatar

When is the generate feature set to release?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Interesting! Including the metadata store with an api?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll be leveraging the beautiful Go lib

https://docs.gomplate.ca/

https://github.com/hairyhenderson/gomplate

(embedded in Atmos)

so all these Datasources will be supported

https://docs.gomplate.ca/datasources/

and functions

https://docs.gomplate.ca/functions/

gomplate - gomplate documentation

gomplate documentation

hairyhenderson/gomplate

A flexible commandline tool for template rendering. Supports lots of local and remote datasources.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a multi-phase process, phase #1 will be to add gomplate support to Atmos stack manifests. Currently https://masterminds.github.io/sprig/ is supported, soon (this or next week) gomplate will be supported as well

Sprig Function Documentation

Useful template functions for Go templates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.
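A hedged sketch of what that templating looks like today, using an import with context plus a Sprig function (all file paths, the context variable, and values are illustrative):

```yaml
# stacks/orgs/acme/dev/us-east-2.yaml -- pass context into a templated import
import:
  - path: catalog/eks/defaults
    context:
      flavor: dev
---
# stacks/catalog/eks/defaults.yaml -- consumes the context via Go templates;
# Sprig's `default` supplies a fallback when no context value is given
components:
  terraform:
    eks:
      vars:
        name: 'eks-{{ .flavor | default "sandbox" }}'
```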

Shiv avatar

This looks nice. What do you think about how we could implement the metadata config store with Atmos in the picture? For storing environment-specific metadata: say I want to look at logs for one environment; Atmos somewhere does the deep merge and has the information. (Other examples could be anything where folks need that information but don’t need to look at the tfstate or log into the AWS account: cluster URL, SQS used by the service (non-secrets).) Is this possible? If so, a general idea would help. Thanks again Andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll be implementing that by using https://docs.gomplate.ca/datasources/ - whatever is supported by gomplate will be supported by Atmos (and that’s already a lot, including S3, GCP, HTTP, etc.). So at least in the first version it will not be “anything”, because it’s not possible to create a Swiss Army knife tool that does everything. Maybe later we might consider adding plugin functionality so you could create your own plugin to read from whatever source you need (we have not considered it yet)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Shiv if you are asking how you could do it now, you can do it in Terraform using the many providers that are already supported by Terraform (https://registry.terraform.io/browse/providers). Create Terraform components using providers you need (to get the metadata), and use Atmos to configure and provision them

2024-04-05

Roy avatar

hmmm

> atmos describe stacks
the stack name pattern '{tenant}-{environment}-{stage}' specifies 'tenant', but the stack 'catalog/component/_defaults' does not have a tenant defined in the stack file 'catalog/component/_defaults'

how do I avoid including catalog items when describing the stacks? All components have the type: abstract metadata attribute.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can do one of the following:

• exclude all _defaults.yaml files
• exclude the entire catalog folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure CLI | atmos

In the previous step, we’ve decided on the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depending on the structure of your stacks folder, you can use included_paths to include everything and then excluded_paths to exclude some paths, or just included_paths to only include what’s needed

Roy avatar

oooh, now I see, thanks!

Shiv avatar

What are some recommended patterns to add / enforce tagging as part of workflows? So if a component is not tagged per standards, a “do not apply” sort of thing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this is where the OPA policies come into play.

@Andriy Knysh (Cloud Posse) might have an example somewhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since you’ve probably become familiar with atmos describe (per other question), anything in that output can be used to enforce a policy.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See:

# Check if the component has a `Team` tag

here: https://atmos.tools/core-concepts/components/validation#opa-policy-examples

Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.
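A hedged sketch of such a policy, modeled on the linked examples (the package name and `errors` rule shape follow the Atmos validation docs; the tag key is illustrative):

```rego
# Hypothetical Atmos OPA validation policy: fail the component when its
# vars do not define a `Team` tag.
package atmos

errors[message] {
    not input.vars.tags.Team
    message = "every component must define a 'Team' tag in vars.tags"
}
```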

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure Validation | atmos

Atmos supports Atmos Manifests Validation and Atmos Components Validation

Shiv avatar

I see, so even if the stack’s vars section is missing the necessary tags, can the OPA policies add the tags at runtime? I don’t think so, yes? Because there is no state for Atmos to have that metadata context? Because the devs could be adding incorrect tags, and then tags pretty much become useless if they don’t fit the company’s tagging scheme

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


opa policies can add the tags during the runtime

Policies are simply about enforcement, but don’t modify the state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, since with atmos, you get inheritance, it’s super easy to ensure proper tagging across the entire infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those tags can be set at a level above where the devs operate. They basically don’t even need to be aware of it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Based on where a component configuration gets imported, it will inherit all the context. This is assuming you’re adopting the terraform-null-label conventions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Without using something like null-label it will be hard to enforce consistency in terraform across an enterprise.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, there’s still an alternative approach. Since we support provider config generation, it’s possible to automatically set the required tags on any component, also based on the whole inheritance model.

Shiv avatar

Enforcement makes sense. Yes, we have context.tf in all our components. What do you mean by
tags are set at a level above where the devs operate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes… let me try to explain that another way.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we can establish tags in the _defaults.yaml files that get imported into the stacks. Stacks exist at different levels
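For example (the path and tag values here are hypothetical, just to show the idea):

```yaml
# stacks/orgs/acme/_defaults.yaml (hypothetical path)
# Tags set once at this level flow down to every stack that imports this file.
vars:
  tags:
    Team: platform
    CostCenter: "1234"
```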

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What makes it hard to describe, is we don’t know how you organize your infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our reference architecture (not public), we organize everything in a very hierarchical manner. By setting tags in the appropriate place in the hierarchy, those tags naturally flow through to the component based on where it’s deployed in the stack.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So when I say “above where the devs operate”, I just mean higher up in the hierarchy of configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So at each level of that hierarchy, we can establish the required tags. Developers don’t need to be aware of that; so long as they use the context.tf pattern, they’ll get the right tags.

Shiv avatar

I was also thinking, say we have a tagging scheme for the company, and the tagging scheme purely accounts for the cost model for teams. So it goes something like domains -> subdomains -> service repos, and probably a product tag as well.

My thought is I just want to ask for information once. If we ask for it in a standard place such as Backstage, I’d want to be able to reuse that data wherever we need it. So the tagging scheme would inform the data that we want to ask for.

I’m thinking a Terraform provider is what we’d need as a way to grab information from Backstage or something similar to that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Possibly, but now you have configuration sprawl. It’s an interesting use-case with backstage. I’d be open to more brainstorming.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

DM me and we can do a zoom sometime.

2024-04-08

Release notes from atmos avatar
Release notes from atmos
08:04:37 PM

v1.68.0 Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578)

Release v1.68.0 · cloudposse/atmos

Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578) Breaking changes If you used Go templates in Atmos stack manifest before, they were enabled by default. Startin…


Andrew Ochsner avatar
Andrew Ochsner

is there an easy way to have atmos describe stacks --stacks <stackname> skip abstract components? or have any jq handy cause i just suck at jq

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos list components -s plat-ue2-dev -t enabled will skip abstract (and you can modify the command to improve or add more filters)
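Since the question also asked for jq: assuming the JSON from `atmos describe stacks --format json` nests components under `<stack>.components.terraform.<name>`, with abstract components marked `metadata.type: abstract` (the exact shape is an assumption here), a filter like this drops them:

```shell
# Sample shaped like `atmos describe stacks --format json` output (assumed);
# the jq filter keeps only the non-abstract terraform components per stack.
echo '{
  "plat-ue2-dev": {
    "components": {
      "terraform": {
        "vpc":          {"metadata": {"type": "real"}},
        "vpc/defaults": {"metadata": {"type": "abstract"}}
      }
    }
  }
}' | jq '.[].components.terraform
         | with_entries(select(.value.metadata.type != "abstract"))
         | keys'
```

Swap the `echo` for the real command, e.g. `atmos describe stacks --stacks <stackname> --format json | jq '…'`.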

Andrew Ochsner avatar
Andrew Ochsner

ahhh huzzah tysm

2024-04-09

Justin avatar

Hi all,

I’m working through a project right now where I need to collect the security group ID for a security group

vendored: https://github.com/cloudposse/terraform-aws-security-group

/components/terraform/networking/security_group/v2.2.0

vendored: https://github.com/cloudposse/terraform-aws-ecs-web-app

/components/terraform/ecs/web/v2.0.1

For my stack configuration, I’d like to create two security groups, and then provide the IDs to the ecs-web-app.

components:
  terraform:
    security_group/1:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8080
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: "Apipa example group 1"
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All outbound traffic"
    security_group/2:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8009
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: "Apipa example group 2"
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All outbound traffic"
    web/1:
      vars:
        name: sampleapp
        vpc_id: <reference remote state from core stack>
        ecs_security_group_ids:
          - remote_state_reference_security_group/1
          - remote_state_reference_security_group/2

If the security group and ecs modules have been vendored in, what is the best practice to get the CloudPosse remote_state file into place and configured so that I can reference each security group ID in the creation of my web/1 stack? Same with the VPC created in a completely different stack.

My thinking is that I’d like to keep everything versioned and vendored from separate repositories that have their own tests / QA passes performed on them unique to Terraform, or vendored in from CloudPosse / Terraform.

I’m missing the connection of how to fetch the remote state from each security group and reference the ids in the web1 component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good question. Cool that you’re kicking the tires.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So vendoring child modules and using them as root modules, would be more of an advanced use-case.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way Cloud Posse typically uses and advises for the use of vendoring is with “root” modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s interesting, because what you want to do is, I think, very similar to another recent thread we had. Let me dig it up.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Read the thread here, to get on the same page. I’d be curious if there’s some overlap with what you’re trying to do.
I’m missing the connection of how to fetch the remote state from each security group and reference the ids in the web1 component
This is why I think what you’re trying to do is related.

https://sweetops.slack.com/archives/C031919U8A0/p1712062674605219?thread_ts=1711983108.729749&cid=C031919U8A0

@Erik Osterman (Cloud Posse) okay, because I was hoping there is a possibility to have implementation-agnostic (and therefore context-agnostic) building blocks (that could be vendored from e.g. Cloud Posse, Gruntwork, Azure Verified Modules, etc.) and to glue things together with business logic using higher-level tooling (i.e. Atmos), without any intermediate wrapper modules and without instrumenting the low-level blocks. But it would require capabilities to connect outputs to inputs in a declarative way with some basic logic support (conditionals, iteration). Of course, for more sophisticated logic some “logic modules” could be written, but after all it would still be a totally flat structure of modules with orchestration on the config-management layer, i.e. Atmos. For now with Atmos, if I want to, e.g., use the output from one module in a for_each for another module, I need an additional module that: 1. grabs the output from remote state, and 2. implements the for_each for the second module. Am I right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TL;DR: we probably don’t support in Atmos (today) what you’re trying to do. That’s because we take a different approach.

You can see how we write components here. https://github.com/cloudposse/terraform-aws-components

The gist of it is, we’ve historically taken a very deliberate and different approach from tools like Terragrunt (and now Terramate), which are optimized for Terraform code generation.

There are a lot of reasons for this, but chief amongst them is we haven’t needed to, despite the insane amounts of terraform we write.

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our approach, we write context-aware, purpose-built, and highly reusable components with Terraform. In Terraform it’s so easy to look up remote state, like “from each security group and reference the ids in the web1 component”, so we don’t do that in Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Instead, we write our components so they know how to look up that remote state based on declarative keys.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if we look at your example here:

    web/1:
      vars:
        name: sampleapp
        vpc_id: <reference remote state from core stack>
        ecs_security_group_ids:
          - remote_state_reference_security_group/1
          - remote_state_reference_security_group/2

The “web” component should be aware how to look up security groups, e.g. by tag.
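As a sketch of that idea (not the actual Cloud Posse component code; the tag names are illustrative), a tag-based lookup inside the `web` component might look like:

```hcl
# Hypothetical lookup inside the `web` component: discover security group IDs
# by tag instead of wiring them through the stack config.
data "aws_security_groups" "ecs" {
  tags = {
    Stage     = var.stage
    Component = "security_group"
  }
}

# data.aws_security_groups.ecs.ids can then feed the ECS service's
# security group list.
```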

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Child modules shouldn’t know how to do this, because they are by design less opinionated, so they work for anyone, even those who do not subscribe to the “Cloud Posse” way of doing Terraform with Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos is opinionated, in that it follows our guidelines for how to write Terraform for maximum reusability without code generation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I wouldn’t write it off that we’ll never support it in Atmos; there are just other things we’re working on right now before we consider how to do code generation, a pattern we don’t ourselves use right now.

Justin avatar

Yeah, I think my ultimate goal is to eliminate rewriting the same structure over and over again for a technology. So if I have a Cloud Posse module that knows how to build a subnet very well, already has the context capabilities, and can be reused anywhere, I’d like to be able to take one product, say a Kubernetes stack, and build subnets and whatnot by reference, gluing the pieces together like what you linked in the previous article. The modules can be opinionated enough that they meet our needs… or we can write our own, more opinionated root modules, and then reuse them across all of our business needs.

Then a new product release can just tie those things together via the catalog and tweaked as needed on a stack by stack basis.

Justin avatar

So with this in mind and just to check my comprehension of the current state:

• Vendor in root modules that can be reused over and over again.

• Create my own, more opinionated child modules that reference those root modules; build core catalogs off of the child modules, with a baseline configuration in the catalog, then tweak on a per-stack basis.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your component, you just need to add this code (in remote-state.tf) using the remote-state module:

module "vpc_flow_logs_bucket" {
  count = local.vpc_flow_logs_enabled ? 1 : 0

  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # Specify the Atmos component name (defined in YAML stack config files) 
  # for which to get the remote state outputs
  component = var.vpc_flow_logs_bucket_component_name

  # Override the context variables to point to a different Atmos stack if the 
  # `vpc-flow-logs-bucket-1` Atmos component is provisioned in another AWS account, OU or region
  stage       = try(coalesce(var.vpc_flow_logs_bucket_stage_name, module.this.stage), null)
  tenant      = try(coalesce(var.vpc_flow_logs_bucket_tenant_name, module.this.tenant), null)
  environment = try(coalesce(var.vpc_flow_logs_bucket_environment_name, module.this.environment), null)

  # `context` input is a way to provide the information about the stack (using the context
  # variables `namespace`, `tenant`, `environment`, `stage` defined in the stack config)
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and configure the variables in Atmos
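For instance, a stack config feeding that module might look like this (the variable names follow the remote-state example above; the component and values are hypothetical):

```yaml
# Stack config sketch: point the consuming component at the Atmos component
# whose remote state it should read.
components:
  terraform:
    vpc:
      vars:
        vpc_flow_logs_bucket_component_name: vpc-flow-logs-bucket-1
        # Only needed when the bucket lives in a different stack:
        vpc_flow_logs_bucket_stage_name: prod
```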

Justin avatar

yeah, I was reading through this, and my disconnect has been trying to incorporate this with root modules that don’t already have this in place.

So, in my example, I want to build multiple subnets by declaring the same subnet module vendored from CloudPosse. If I don’t need a child module, I don’t want to create one. However, it sounds like what I need to do is build these child modules with the remote state configuration, with the exports listed that I’ll need in other stacks.

Justin avatar

I’ve only been working with atmos for a couple of days, so my apologies if I missed something obvious in that document.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want us to review your config, DM me your repo and we’ll try to provide some ideas

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Other things that work well with vendoring in this sort of situation include using _override.tf files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can vendor in the upstream child module and essentially do monkey patching using override files
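For example, a hypothetical `versions_override.tf` dropped next to the vendored module (Terraform merges any `override.tf` or `*_override.tf` file into the configuration in the same directory):

```hcl
# versions_override.tf (hypothetical) — placed alongside the vendored module.
# Terraform merges this into the module's existing configuration, letting you
# patch settings without editing the vendored source files.
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}
```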

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Monkey patch

In computer programming, monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. It is used to extend or modify the runtime code of dynamic languages such as Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, and Lisp without altering the original source code.

Override Files - Configuration Language | Terraform | HashiCorp Developer

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(what we want to avoid doing is reimplementing HCL in YAML)

Justin avatar

Hrm, that’s an interesting thought. Currently what I’m thinking is that I need to evaluate a polyrepo structure for my highly opinionated modules, which build the solution we’re looking for. There I can run through assertion testing (which was added recently) and pull passing versions into my “monorepo”, which helps control different stacks/solutions and build the catalog/mixins for those opinionated repositories.

So I can still use the CloudPosse subnet module should I need it in a root module by simply calling and pinning it, and then build my YAML catalogs around those vendored root modules. That brings fewer imports into the control repository and fewer things to maintain in vendor files… I just need to make sure that we are consistent and clean with the versioned repos of our own that we’re vendoring in.

Justin avatar

That would then allow me to bring in the remote_state configuration that Andriy referenced to build out all of my “core” services and export anything that would need to be referenced in other modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, I think that might work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If it would be helpful, happy to do a zoom and “white board” on some ideas.

Justin avatar

I really appreciate that, thank you. I’m going to take some time and rework a personal project I’ve been working on to learn Atmos and will report back once I have it a bit more fleshed out. Really appreciate your time and conversation today, thank you so much.

1
Andrew Ochsner avatar
Andrew Ochsner

any recommendations on how to pass output from 1 workflow command to another workflow command? right now just thinking of writing out/reading from a file…. just curious if there are other mechanisms that aren’t as kludgy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heh, wow. A lot of similar requests to this lately. If not workflow steps, then state between components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think using an intermediary file is your best bet right now.
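As a sketch of the intermediary-file approach (the workflow name, component, and script are hypothetical; step syntax follows the Atmos workflow schema):

```yaml
# Hypothetical workflow passing a value between steps via a scratch file
workflows:
  cold-start:
    description: "Capture a Terraform output in one step, consume it in the next"
    steps:
      - type: shell
        command: atmos terraform output myapp -s plat-ue2-dev > /tmp/myapp-id
      - type: shell
        command: ./register.sh "$(cat /tmp/myapp-id)"
```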

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For advanced workflows, you may want to consider something like Go task. You can still call gotask as a custom command from atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have improvements planned (not started) for workflows. If you could share your use-case, it could help inform future implementations.

Andrew Ochsner avatar
Andrew Ochsner

yeah i mean ultimately i need to run a shell command as part of my cold start (azure) based on an ID that got generated by terraform… long term i won’t have this, probably, because there’s another provider i can use to skip the shell command and just do it all in terraform, but that’s going through corp approvals

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, then avoid the yak shaving for an elegant solution and go with an intermediary command.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or, a variation of the above. Ensure the “ID that got generated by terraform” is an output of the component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then in your workflow, just call atmos terraform output.... to retrieve the output.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can wrap that in $(atmos terraform output ....) to use it inline within your workflow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g.

workflows:
  foobar:
    steps:
      - type: shell
        command: echo "Hello $(atmos terraform output ....)"
Andrew Ochsner avatar
Andrew Ochsner

Yep that’s true. Haven’t quite discovered how complex this morphs into but best to start small and simple

Andrew Ochsner avatar
Andrew Ochsner

Thanks

Release notes from atmos avatar
Release notes from atmos
05:54:33 AM

v1.69.0 Restore Terraform workspace selection side effect In Atmos v1.55 (PR #515) we switched to using the TF_WORKSPACE environment variable for selecting Terraform workspaces when issuing Terraform commands….

Release v1.69.0 Restore Terraform workspace selection side effect · cloudposse/atmos

In Atmos v1.55 (PR #515) we switched to using the TF_WORKSPACE environment variable for selecting Terraform workspaces when issuing Terraform commands. This had the unintended consequence that the …

use TF_WORKSPACE to select and/or create the active workspace by mcalhoun · Pull Request #515 · cloudposse/atmos

what Use the TF_WORKSPACE environment variable to select and/or create the terraform workspace. why This is a better solution than the existing one of running two external terraform workspace selec…


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heads up @RB this rolls back what we implemented in #515 with TF_WORKSPACE, due to it behaving differently

RB avatar

Ah yes no worries, i saw the issue that Jeremy raised. Thanks for the ping

1

2024-04-10

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
08:28:07 PM

added an integration to this channel: Linear Asks

Chris King-Parra avatar
Chris King-Parra

What’s the recommended approach to set up data dependencies between stacks (use the output of one stack as the input for another stack)? Data blocks?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your component where you want to read a remote state of another component, add

module "xxxxxx" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is one approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re working on refining another approach as well, but it’s not published. It’s also a native-Terraform approach.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s important to address the heart of it: we try to leverage native terraform everywhere possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since Terraform supports this use-case natively, we haven’t invested in solving it in Atmos, because that would force vendor lock-in

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We try to not force vendor lock-in

2

2024-04-11

Release notes from atmos avatar
Release notes from atmos
03:44:32 PM

v1.68.0 Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (<a class=”issue-link js-issue-link” data-error-text=”Failed to load title” data-id=”2230185566” data-permission-text=”Title is private” data-url=”https://github.com/cloudposse/atmos/issues/578“…

Release v1.68.0 · cloudposse/atmosattachment image

Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578) Breaking changes If you used Go templates in Atmos stack manifest before, they were enabled by default. Startin…

aknysh - Overview

aknysh has 266 repositories available. Follow their code on GitHub.

2024-04-12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For anyone else working on GHA with Atmos https://sweetops.slack.com/archives/CB6GHNLG0/p1712912684845849

:wave: What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)

RB avatar

What is a good SCP for the identity account? Is it only IAM roles that belong there, perhaps?

2024-04-13

Ben avatar

hi all, i’m new to atmos and am trying to follow along with the quick-start guide. i’ve got a weird issue where it seems like {{ .Version }} isn’t rendered when running atmos vendor pull:

❯ atmos vendor pull
Processing vendor config file 'vendor.yaml'
Pulling sources for the component 'vpc' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}' into 'components/terraform/vpc'
error downloading '<https://github.com/cloudposse/terraform-aws-components.git?ref=%7B%7B.Version%7D%7D>': /usr/bin/git exited with 1: error: pathspec '{{.Version}}' did not match any file(s) known to git

it works when I replace the templated version with a real one.

i’ve got the stock config from https://atmos.tools/quick-start/vendor-components and running atmos 1.69.0:

❯ atmos version

 █████  ████████ ███    ███  ██████  ███████
██   ██    ██    ████  ████ ██    ██ ██
███████    ██    ██ ████ ██ ██    ██ ███████
██   ██    ██    ██  ██  ██ ██    ██      ██
██   ██    ██    ██      ██  ██████  ███████


👽 Atmos 1.69.0 on darwin/arm64
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ben in Atmos 1.68.0 we added gomplate to Atmos Go templates, and added the enabled flag to enable/disable templating, but the flag affected not only Atmos stack manifests, but all other places where templates are used (including vendoring and imports with templates), see https://atmos.tools/core-concepts/stacks/templating

Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll release a new Atmos release today which fixes that issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for now, either use Atmos 1.67.0, or add

templates:
  settings:
    enabled: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to your atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

today’s release 1.70.0 will fix that so you don’t need to do it

Ben avatar

thanks! enabling templating in the config solved the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks. That flag should not affect templating in vendoring and imports, so it’s a bug which will be fixed in 1.70.0

1
dropinboy avatar
dropinboy

Hi All :wave: ,

I’m just getting started with Atmos and wanted to check if nested variable interpolation is possible? My example is creating an ECR repo and want the name to have a prefix. I’ve put the prefix: in a _defaults.yaml file and used {{ .vars.prefix }} in a stack/stage.yaml and it doesn’t work.

vars:
  prefix: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}"

repository_name: "{{ .vars.prefix }}-storybook"

How are others doing this resource prefixing? Thank you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently Atmos does one pass of Go template processing, and your example requires two passes. It’s not currently supported, but we’ll consider implementing it in the near future

dropinboy avatar
dropinboy

Thank you for the quick response (not expected on a Saturday). I believe it’s helpful/convenient.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for now consider repeating the template (not very DRY, but will work)
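Concretely, using the variables from the question, the repetition would look something like:

```yaml
# Workaround sketch: inline the full template everywhere it's needed,
# instead of referencing the intermediate `prefix` variable
vars:
  prefix: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}"

repository_name: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}-storybook"
```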

dropinboy avatar
dropinboy

yes, that’ll work, thank you

dropinboy avatar
dropinboy

BTW really enjoying using Atmos (as a current Terragrunt user)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thank you

Release notes from atmos avatar
Release notes from atmos
03:14:34 AM

v1.70.0 Add gomplate datasources to Go templates in Atmos stack manifests. Update docs @aknysh (#582)

Release v1.70.0 · cloudposse/atmos

Add gomplate datasources to Go templates in Atmos stack manifests. Update docs @aknysh (#582) what

Add gomplate datasources to Go templates in Atmos stack manifests Fix an issue with enabling/…
