#atmos (2024-04)
2024-04-01
Hopefully a small question this Monday morning. I’m trying to get atmos.exe functioning on Win11 with our current version (v1.4.25), and it looks like she wants to fire up, but fails to find terraform in the %PATH%. I dropped it in the path and even updated PATH with terraform.exe; not sure where atmos is searching for that path. See what I mean here -
C:\path\>
Executing command:
terraform init -reconfigure
exec: "terraform": executable file not found in %PATH%
C:\path\>terraform
Usage: terraform [global options] <subcommand> [args]
The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.
I believe we’ve predominantly tested it on WSL2
Are you using WSL?
No but I can redo in wsl, kinda hacking away at improving the dev environment this morning.
I’ll come back in a bit after I give wsl a shot
@Ryan I’m not sure why Atmos can’t see the Terraform binary in the PATH on Windows (we’ll investigate that). It spawns a separate process to execute TF and other binaries. For now, maybe you can try something like this:
terraform:
command: <path to TF binary>
the command attribute allows you to select any binary to execute (instead of relying on it being in PATH), and it also allows you to set the binary to OpenTofu instead of Terraform (if you want to use it)
this config
terraform:
command: <path to TF binary>
can be set globally (per org, tenant, account), or even per component (if, for example, you want to use different TF or OpenTofu binaries for different components)
components:
terraform:
my-component:
command: <TF or OpenTofu binary>
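For example, in a stack manifest (the Windows path below is just a hypothetical location; point it at wherever your terraform.exe lives):
terraform:
  command: 'C:\tools\terraform\terraform.exe'
components:
  terraform:
    my-component:
      # or per component, e.g. to run just this one with OpenTofu
      command: tofu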
let us know if that works for you and how it can be improved
Will play around with it today, I appreciate your responses. We’re in Geodesic now and only really leverage atmos, so I’d like to try to make it easier for my team to access directly from VS Code in the native OS.
Ok, I’ll probably have to talk to you gentlemen Friday about risk. The issue is we’re on 1.4.x and I was pulling that atmos version; when I grabbed the latest atmos it was fine. I could tell it was a separate process too, but I’m like uhhh where the heck is that little guy going for env.
I don’t know the risk involved in using the updated exe; we are very narrow in our atmos usage, specifically around terraform and the name structuring.
I would think that’s all just atmos yaml, but the updated exe is like 4x the size of the one I was using previously.
so the latest Atmos version works OK on Windows?
yea but now it’s complaining about my structure, so new issues; it’s ok though
the terraform piece no longer complained when I updated
sorry, I didn’t want to present a bug before I checked that
no problem at all. We can help you with any issues and with the structure
Yea it all works happy now, besides that it cannot find the stack based on the atmos.yaml stacks: config. I’ll read deeper on this. From what I can see, my stacksBaseAbsolutePath seems to be pointing at my components /../.. directory. Something’s fishy with my atmos.yaml.
you can DM us the file, we’ll review
Hi, everyone. I’m evaluating Atmos for the company to enhance Terraform. In Atmos everything revolves around a library of components, which is understandably where the majority of reusable modules should be stored. But I don’t understand (and the docs don’t help) whether I can define a single resource using plain terraform syntax at the deepest level of a stack without creating a component for it.
For the following file tree:
.
├── atmos.yaml
├── components
│ └── terraform
│ └── aws-vpc
│ ├── main.tf
│ ├── outputs.tf
│ ├── versions.tf
├── stacks
│ ├── aws
│ │ ├── _defaults.yaml
│ │ └── general_12345678901
│ │ ├── core
│ │ │ ├── _defaults.yaml
│ │ │ ├── eks.tf
│ │ │ └── us-east-2.yaml
│ │ └── _defaults.yaml
│ ├── catalog
│ │ └── aws-vpc
│ │ └── defaults.yaml
└── vendor.yaml
I’d like to just drop eks.tf (a YAML version of HCL is also fine) into stacks/aws/general_12345678901/core and expect Atmos to include it in the deployment. Is it possible?
Hrm… let me see if I can help.
if I can define a single resource using plain terraform syntax on the deepest level of a stack without creating a component for it.
So let’s take a step back. What we call a “component” in atmos
is something that is deployable. For terraform, that’s a root module. In terraform, it’s impossible to deploy anything outside of a root module.
So we’re confusing 2 concepts here. And it happens easily because many different vendors define “stacks” differently.
Cloud Posse defines a stack as YAML configuration that specifies how to deploy components (e.g. root modules).
So if you have a file like eks.tf, you will need to stick that in a root module.
In atmos, we keep the configuration (stacks) separate from the components (e.g. terraform root modules).
So inside the stacks/
folder you would never see terraform code.
All terraform code is typically stored in components/terraform/<something>
So this is quite different from tools like terragrunt/terramate that I believe combine terraform code with configuration.
so if I need to deploy a single resource to a single environment only, I’d need to create a wrapper module in components
and use in the stack?
Perhaps… That said, I’d have to understand more about what you’re trying to do. For example, at Cloud Posse, we would never “drop in” an EKS cluster.
An EKS cluster has its own lifecycle, disjoint from a VPC and applications. It deserves its own component.
Many times, users will want to extend some functionality of a component. In that case, it’s great to use the native Terraform _override.tf pattern.
For example, if there’s already an eks.tf
file in a component, and you want to tweak how it behaves, you could create an eks_override.tf
file
An EKS cluster is a bad example. But a lighter resource like an ECR registry, which does not require too many input parameters to configure, is a better one. Having a module for every AWS resource would be infeasible to maintain with a small team. I checked out the library of TF components from Cloudposse and it looks impressive, but it probably requires an orchestrated effort of the entire team to support. The AWS provider changes very frequently and you have to keep up to update the modules. Another point is that our org comes from a very chaotic tf organization and is trying to implement a better structure. Replacing everything with modules in a single sweep is not realistic. So we would like to start using Atmos with our existing tf code organization, gradually swapping out individual resources with modules. In our use case environments (AWS accounts) do not have much in common. Having a module for a resource that is only used in a single place does not make much sense.
@alla2 you don’t need to replace everything. If you want a new component, just put it into the components/terraform/<new-component> folder (it could be one TF file like main.tf, or the standard layout like variables.tf, main.tf, outputs.tf, etc.)
then, in stacks, you provide the configuration for your new component (TF root module)
and in stacks, you can make it DRY by specifying the common config for all accounts and environments in stacks/catalog/<my-component>/defaults.yaml
and then in the top-level stacks you can override or provide specific values for your component for each account, region, tenant etc.
this is to separate the code (TF components) from the configuration (stacks) (separation of concerns)
so your component (TF code) is completely generic and reusable and does not know/care about where it will be deployed - the deployment configs are specified in stacks
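a minimal sketch of that layering (component names and values made up):
# stacks/catalog/my-component/defaults.yaml - the DRY baseline
components:
  terraform:
    my-component:
      vars:
        instance_type: t3.small
# stacks/orgs/acme/plat/dev/us-east-2.yaml - a top-level stack
import:
  - catalog/my-component
components:
  terraform:
    my-component:
      vars:
        # deep-merged over the catalog default
        instance_type: t3.large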
So we would like to start using Atmos with our existing tf code organization gradually swapping out individual resources with modules
that’s easily done - put your TF code in components/terraform
and configure it in stacks
then in the future, you can create a repo with your TF modules, and in your TF components you would just instantiate your modules making your components very small
Thanks, Andriy and Erik, for your input and the options you provided. I’ll have another look. I agree that having a module for a logical group of resources is the right thing to do. But it usually only works when you already have a good code organization and a somewhat large codebase. I’d argue that managing everything with modules is not productive at the start, as you spend a lot of time creating modules which you don’t even know if you can reuse later. Even within a mature org, parts of the configuration can be defined as plain resources, as they are only useful in this particular combination for this particular environment and do not demand a separate module.
Yes, so I know where you’re coming from, and if you’re stuck in that situation - other tools might be better suited for dynamically constructing terraform “on the fly”. Right now, atmos
is not optimized for that use-case.
I’d argue that managing everything with modules is not productive at the start as you spend a lot of time creating modules which you don’t even know if you can reuse later.
This is true… There’s a significant up front investment. Our belief though, is once you’ve created that investment, you almost don’t need to write any HCL code any more.
Everything becomes configuration.
(We’ve of course spent considerable time building that library of code, which is why we have so much)
You might have already come across this, but here’s how we think of components for AWS: https://github.com/cloudposse/terraform-aws-components
These are all the “building blocks” we use in our commercial engagements
yeah, I was referring to it above, impressive piece of work. Right now we’re mostly using another modules library which I guess should also work fine with Atmos.
Collection of Terraform AWS modules supported by the community
Yes, though keep in mind those are child modules, and not root modules.
That said, it would be interesting to show a reference architecture using those child modules. It would be easy to vendor them in with atmos vendor
, and atmos can automatically generate the provider configuration.
Hey, I would like to add some follow-up to this discussion under the @alla2 message. As far as I can see, atmos doesn’t provide any option to pass the output from one module as an input to another (with optional conditionals and iteration). It can only be achieved by instrumenting the module’s code with context via the remote backend module, isn’t it?
P.S. It could probably also be achieved with some scripting in workflows, but that would be more of a workaround, so I’m not taking it into account.
Yes, we have intentionally not implemented it in atmos since we want as much as possible to rely on native Terraform functionality. Terraform can already read remote state either by the remote state data source or any of the plethora of other data source look ups. We see that as a better approach because it is not something that will tie you to “atmos”. It will work no matter how you run terraform, and we see that as a win for interoperability.
To make this easier we have a module to simplify remote state lookups that is context aware. This is what @Roy refers to.
@Erik Osterman (Cloud Posse) okay, because I was hoping there is a possibility to have implementation-agnostic (then context-agnostic) building blocks (that could be vendored from, e.g., Cloud Posse, Gruntwork, Azure Verified Modules, etc.) and to glue things together with business logic using higher-level tooling (i.e. Atmos), without the use of any intermediate wrapper modules and without instrumenting the low-level blocks. But it would require capabilities to connect outputs to inputs in a declarative way with some basic logic support (conditionals, iteration). Of course, for more sophisticated logic some “logic modules” could be written, but after all it would still be a totally flat structure of modules with orchestration on the config management layer, i.e. Atmos.
For now with Atmos, if I want to, e.g., use the output from one module in a for_each for another module, I need an additional module that: 1. grabs the output from remote state and 2. implements this for_each for the second module. Am I right?
So what I tried to say was: we take that implementation-agnostic approach. By not building anything natively into atmos, it’s agnostic to how you run terraform.
We are also building some other things (open source) but cannot discuss here. DM me for details.
These are terraform mechanisms to further improve what you are trying to do across multiple clouds, ideally suited for enterprise.
For now I have a 3-layer layout of modules. The first is a set of context-agnostic building blocks (like Gruntwork modules). The second is the orchestrator layer that carries the context using an external config store and implements business logic. The last is the environment layer that just calls the second with exact inputs. I’m looking for some solution that could eliminate layers 2 and 3, and Atmos deals with layer 3 in a way that I love.
now I’m looking at layer 2 and trying to figure out how to deal with it :P
Hrmmm, I think there might have been a punctuation problem in the previous message. You want to eliminate layers 2 and 3?
yes, I’m wondering about eliminating layer 2 and 3
Do you mean calling “child” modules as “root” modules by dynamically configuring the backend? Thus being able to use any module as a root module. Then needing to solve the issue of passing dynamic values to those child modules, without the child modules being aware of its context or how it got those values.
sometimes it would require “logic modules” that would act as a procedure in a general-purpose lang (some generalisation of locals), but after that it would be as simple as
resource_component:
  vars:
    var1: something
    var2: something
roles_logic_component:
  vars:
    domain_to_query: example.com
{{ for role in roles_logic_component.output.roles }}
{{ role.name }}-role-component:
  name: {{ role.name }}
  scope: {{ resource_component.output.id }}
  permissions: some-permissions
Of course it would require building a dependency tree at the tool level, which duplicates tf’s responsibility, but.. it would be not only for tf.
As you can see, the context is carried by the tool itself then.
A forthcoming version of atmos will support all of these: https://docs.gomplate.ca/datasources/
Will that solve what you want to do?
depends on final implementation, but sounds like it would
did you foresee this use case I described?
It makes sense, although it’s not a use case we are actively looking to solve for ourselves, as we predominantly want nice and neat components that are documented and tested. The more dynamic and meta things become, the harder to document, test and explain how it works.
correct, templating can become difficult to understand and read very fast. Having said that, we’ll release support for https://docs.gomplate.ca/functions/ in Atmos manifests this week
(Sprig is already supported https://masterminds.github.io/sprig/)
then what is, for you, the recommended way of gluing things together? Let’s take the resource <-> roles example from my para-YAML file.
do you have an ETA for datasources?
recommended way of gluing things together?
We take a different approach to gluing things together, which is the approach you want to eliminate. We design components to work together, using data sources. This keeps configuration lighter. We are wary of re-inventing HCL in YAML.
Keep modules generic and reusable (e.g. github.com/cloudposse)
Then create opinionated root modules that combine functionality. Root modules can use data sources to look up and glue things together.
Segment root modules by lifecycle. E.g. a VPC is disjoint from an EKS cluster, which is disjoint from the lifecycle of the applications running on the cluster.
The way we segment root modules is demonstrated by our terraform-aws-components repo.
ETA for gomplate
- we’ll try to release it this week (this will include the datasources)
but it probably won’t have such a wonderful UX as a {{ component.output }} reference? :D
@Roy Sprague maybe you have some thoughts on this one https://github.com/cloudposse/atmos/issues/598
Describe the Feature
This is a similar idea to what Terragrunt does with their “Remote Terraform Configurations” feature: https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/#remote-terraform-configurations
The idea would be that you could provide a URL to a given root module and use that to create a component instance instead of having that component available locally in the atmos project repo.
The benefit here is that you don’t need to vendor in the code for that root module. Vendoring is great when you’re going to make changes to a configuration, BUT if you’re not making any changes then it just creates large PRs that are hard to review and doesn’t provide much value.
Another benefit: I have team members that strongly dislike creating root modules that are simply slim wrappers of a single child module because then we’re in the game of maintaining a very slim wrapper. @kevcube can speak to that if there is interest to understand more there.
Expected Behavior
Today all non-custom root module usage is done through vendoring in Atmos, so no similar expected behavior AFAIK.
Use Case
Help avoid vendoring in code that you’re not changing and therefore not polluting the atmos project with additional code that is unchanged.
Describe Ideal Solution
I’m envisioning this would work like the following, with $COMPONENT_NAME.metadata.url being the only change to the schema. Maybe we also need a version attribute as well, but TBD.
components:
  terraform:
    s3-bucket:
      metadata:
        url: https://github.com/cloudposse/terraform-aws-components/tree/1.431.0/modules/s3-bucket
      vars:
        ...
Running atmos against this configuration would result in atmos cloning that root module down to a local temporary cache and then using that cloned root module as the source to run terraform or tofu against.
Alternatives Considered
None.
Additional Context
None.
@Roy Sprague this is now supported!
https://sweetops.slack.com/archives/C031919U8A0/p1718495222745629
Morning,
@Erik Osterman (Cloud Posse) one good thing (among many others!) that I find using vendoring is that I can simply vendor versioned context.tf files into every component. Also I can change a component locally before pushing the change, if I use a local version of the component.
is there a way to cover those use cases using the metadata.url option?
@Stephan Helas can you provide a configuration example of what you would ideally be able to do?
what I’m trying to say is: I like the idea of using the url metadata, but I don’t know how to develop components with this configuration
I think my confusion stems from which part of this thread you are referring to. It’s partially my fault for re-awakening it months later. So we just added the ability to pass outputs between components, which was something @Roy Sprague suggested above (with a different syntax), and we came around to the idea, and recently released it. I am very bullish about this now.
Since we don’t support a URL in the metadata of component configuration in a stack, I am scratching my head.
sorry, that was my fault, you’re totally right. The topic of remote state vs passing outputs is much more important. My coworkers struggle to grasp the concept of remote-state as a module, so I will make time to look into the new approach right away.
the metadata.url is nice to have and will be put into my backlog for later
2024-04-02
v1.67.0
Add Terraform Cloud backend. Add/update docs @aknysh (#572)
what
Add Terraform Cloud backend
Add docs: Terraform Workspaces in Atmos, Terraform Backends (describes how to configure backends for AWS S3, Azure Blob Storage, Google Cloud Storage, and Terraform Cloud)
So I am examining the output of atmos describe component <component-name> --stack <stack_name>
I am trying to understand the output of the command:
- what’s the difference between deps and deps_all
- what does the imports section mean? I see catalog/account-dns and quite a lot of files under the catalog dir
imports, deps and deps_all are not related to the component, but rather to the stack
imports - a list of all imports in the Atmos stack (this shows all imports in the stack, related to the component and not)
deps_all - a list of all component stack dependencies (stack manifests where the component settings are defined, either inline or via imports)
deps - a list of component stack dependencies where the final values of all component configurations are defined (after the deep-merging and processing all the inheritance chains and all the base components)
Use this command to describe the complete configuration for an Atmos component in an Atmos stack.
they are added to describe component as additional info that might be useful in some cases (e.g. to find all manifests where any config for the component is defined)
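for example, the relevant part of the describe component output might look roughly like this (paths made up; note that deps is a subset of deps_all):
imports:
  - catalog/account-dns
  - catalog/vpc
  - orgs/acme/_defaults
deps:
  - catalog/vpc
  - orgs/acme/plat/dev/us-east-2
deps_all:
  - catalog/vpc
  - orgs/acme/_defaults
  - orgs/acme/plat/dev/us-east-2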
Thanks much @Andriy Knysh (Cloud Posse). My main intent in understanding these things relates to a project I’m trying to work on: creating a single file which will serve as a registry for bootstrapping new accounts with foundational infra (such as account dns, region dns, vpc, compute layer, preferably storage, and some global components needed for each account (each product gets an account)). So the registry file, I think, will be an Atmos stack with values in it, plus a template file, preferably, to create all the files and put them in the paths that Atmos needs. This way, if I need to bring up an account, all I need to do is add the contents to the registry file and run a CLI tool to generate the files needed for Atmos.
Preferably, the account-related metadata config would be stored as JSON in S3 to start with (a relational DB in the future), with an API for the platform team so devs can use a platform CLI to get the values to manipulate their configs, etc.
Does it make sense? Or am I approaching this wrong?
it does make sense. In fact, we will be working on atmos generate functionality to generate different things (projects, components, stacks, files, etc.) from templates, and what you are explaining is exactly what the new Atmos functionality will do cc: @Erik Osterman (Cloud Posse)
Interesting! Including the metadata store with an api?
When is the generate feature set to release ?
Interesting! Including the metadata store with an api?
we’ll be leveraging the beautiful Go lib
https://github.com/hairyhenderson/gomplate
(embedded in Atmos)
so all these Datasources will be supported
https://docs.gomplate.ca/datasources/
and functions
this is a multi-phase process; phase #1 will be to add gomplate support to Atmos stack manifests. Currently https://masterminds.github.io/sprig/ is supported; soon (this or next week) gomplate will be supported as well
This looks nice. What do you think about how we could implement the metadata config store with Atmos in the picture? For storing environment-specific metadata, say I want to look at logs for one environment; Atmos somewhere does the deep merge and has the information. (Other examples could be anything where folks need that information but don’t need to look at the tfstate or log into the AWS account: cluster URL, SQS used by the service (non-secrets).) Is this possible? If so, a general idea would help. Thanks again Andriy
we’ll be implementing that by using https://docs.gomplate.ca/datasources/ - whatever is supported by gomplate will be supported by Atmos (and that’s already a lot, including S3, GCP, HTTP, etc.). So at least in the first version it will not be “anything”, b/c it’s not possible to create a swiss-army-knife tool that does everything. Maybe later we might consider adding plugin functionality so you could create your own plugin to read from whatever source you need (we have not considered it yet)
@Shiv if you are asking how you could do it now, you can do it in Terraform using the many providers that are already supported by Terraform (https://registry.terraform.io/browse/providers). Create Terraform components using the providers you need (to get the metadata), and use Atmos to configure and provision them.
2024-04-03
2024-04-05
hmmm
> atmos describe stacks
the stack name pattern '{tenant}-{environment}-{stage}' specifies 'tenant', but the stack 'catalog/component/_defaults' does not have a tenant defined in the stack file 'catalog/component/_defaults'
how do I avoid including catalog items when describing the stacks? All components have the type: abstract meta attribute.
you can do one of the following:
• exclude all _defaults.yaml files
• exclude the entire catalog folder
stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
depending on the structure of your stacks folder, you can use included_paths to include everything and then excluded_paths to exclude some paths, or just included_paths to include only what’s needed
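or, to exclude the whole catalog folder instead of just the _defaults files:
stacks:
  excluded_paths:
    - "catalog/**/*"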
What are some recommended patterns to add / enforce tagging as part of workflows? So if a component is not tagged per standards, do not apply, sort of thing.
So this is where the OPA policies come into play.
@Andriy Knysh (Cloud Posse) might have an example somewhere.
Since you’ve probably become familiar with atmos describe (per the other question), anything in that output can be used to enforce a policy.
See:
# Check if the component has a `Team` tag
here: https://atmos.tools/core-concepts/components/validation#opa-policy-examples
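attaching an OPA policy to a component looks roughly like this (component and policy file names are made up; see the validation docs above for the exact schema):
components:
  terraform:
    my-component:
      settings:
        validation:
          check-tags:
            schema_type: opa
            # 'schema_path' is relative to 'schemas.opa.base_path' in atmos.yaml
            schema_path: "my-component/validate-tags.rego"
            description: Ensure the component defines the required tags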
I see, so even if the stack’s vars section is missing the necessary tags, can the OPA policies add the tags at runtime? Don’t think so, yes? Because there is no state for Atmos to have that metadata context? Because the devs could be adding incorrect tags, and then tags pretty much become useless if they don’t fit the company’s tagging scheme.
opa policies can add the tags during the runtime
Policies are simply about enforcement, but don’t modify the state.
However, since with atmos, you get inheritance, it’s super easy to ensure proper tagging across the entire infrastructure.
those tags can be set at a level above where the devs operate. They basically don’t even need to be aware of it.
Based on where a component configuration gets imported, it will inherit all the context. This is assuming you’re adopting the terraform-null-label
conventions.
Without using something like null-label
it will be hard to enforce consistency in terraform across an enterprise.
However, there’s still an alternative approach. Since we support provider config generation, it’s possible to automatically set the required tags on any component, also based on the whole inheritance model.
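for example, a rough sketch of that idea using the providers section (assuming the AWS provider and its default_tags; tag values made up):
terraform:
  providers:
    aws:
      default_tags:
        tags:
          Team: platform
          ManagedBy: atmos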
Enforcement makes sense. Yes, we have context.tf in all our components. What do you mean by
tags are set at a level above where the devs operate
Yes… let me try to explain that another way.
So we can establish tags in the _defaults.yaml files that get imported into the stacks. Stacks exist at different levels.
What makes it hard to describe, is we don’t know how you organize your infrastructure.
In our reference architecture (not public), we organize everything in a very hierarchical manner. By setting tags in the appropriate place of the hierarchy, those tags naturally flow through to the component based on where it’s deployed in the stack.
So when I say “above where the devs operate”, I just mean higher up in the hierarchy of configuration.
Here’s a more advanced hierarchy https://atmos.tools/design-patterns/organizational-structure-configuration
Organizational Structure Configuration Atmos Design Pattern
So at each level of that hierarchy, we can establish the required tags. Developers don’t need to be aware of that; so long as they use the context.tf pattern, they’ll get the right tags.
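a minimal sketch (org name and tags made up): tags set once in an org-level _defaults.yaml flow down to every stack that imports it, and context.tf applies them to every resource:
# stacks/orgs/acme/_defaults.yaml
terraform:
  vars:
    tags:
      Namespace: acme
      CostCenter: platform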
I was also thinking, say we have a tagging scheme for the company, and the tagging scheme purely accounts for the cost model for teams. So it goes something like domains -> subdomains -> service repos, and probably a product tag as well.
My thought is I just want to ask for information once. If we ask for it in a standard place such as Backstage, I would want to be able to reuse that data where we need it. So the tagging scheme would inform the data that we want to ask for.
I am thinking a Terraform provider is what we need as a way to grab information from Backstage or something similar.
Possibly, but now you have configuration sprawl. It’s an interesting use-case with backstage. I’d be open to more brainstorming.
DM me and we can do a zoom sometime.
2024-04-08
v1.68.0
Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (#578)
Breaking changes
If you used Go templates in Atmos stack manifests before, they were enabled by default. Starting with this release, templating must be explicitly enabled in atmos.yaml (templates.settings.enabled).
what
Add gomplate templating engine to Atmos stack manifests
Update docs: https://pr-578.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/templating/
is there an easy way to have atmos describe stacks --stacks <stackname> skip abstract components? or does anyone have any jq handy, cause I just suck at jq
take a look at the custom command here: https://github.com/cloudposse/atmos/blob/master/examples/quick-start/atmos.yaml#L177 (- name: components)
atmos list components -s plat-ue2-dev -t enabled
will skip abstract components (and you can modify the command to improve or add more filters)
ahhh huzzah tysm
2024-04-09
Hi all,
I’m working through a project right now where I need to collect the security group ID for a security group.
vendored https://github.com/cloudposse/terraform-aws-security-group into /components/terraform/networking/security_group/v2.2.0
vendored https://github.com/cloudposse/terraform-aws-ecs-web-app into /components/terraform/ecs/web/v2.0.1
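(For reference, pulling those in via vendor.yaml might look roughly like this for the first one; the schema follows the Atmos vendoring docs, and the version/targets are assumptions matching the paths above:)
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor-config
spec:
  sources:
    - component: "security_group"
      source: "github.com/cloudposse/terraform-aws-security-group.git///?ref={{.Version}}"
      version: "2.2.0"
      targets:
        - "components/terraform/networking/security_group/v2.2.0"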
For my stack configuration, I’d like to create two security groups, and then provide the IDs to the ecs-web-app.
components:
  terraform:
    security_group/1:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8080
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: Apipa example group 1
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All output traffic"
    security_group/2:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8009
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: Apipa example group 2
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All output traffic"
    web/1:
      vars:
        name: sampleapp
        vpc_id: <reference remote state from core stack>
        ecs_security_group_ids:
          - remote_state_reference_security_group/1
          - remote_state_reference_security_group/2
If the security group and ecs modules have been vendored in, what is the best practice for getting the CloudPosse remote_state file into place and configured so that I can reference each security group ID when creating my web/1 stack? Same with the VPC created in a completely different stack.
My thinking is that I’d like to keep everything versioned and vendored from separate repositories that have their own tests / QA passes performed on them, unique to Terraform, or vendored in from CloudPosse / Terraform.
I’m missing the connection of how to fetch the remote state from each security group and reference the IDs in the web1 component.
Good question. Cool that you’re kicking the tires.
So vendoring child modules and using them as root modules, would be more of an advanced use-case.
The way Cloud Posse typically uses and advises for the use of vendoring is with “root” modules.
It’s interesting, because what you want to do is, I think, very similar to another recent thread we had. Let me dig it up.
Read the thread here, to get on the same page. I’d be curious if there’s some overlap with what you’re trying to do.
I’m missing the connection of how to fetch the remote state from each security group and reference the ids in the web1
component
This is why I think what you’re trying to do is related.
@Erik Osterman (Cloud Posse) okay, because I was hoping there is a possibility to have implementation-agnostic (then context-agnostic) building blocks (that could be vendored from, e.g., Cloud Posse, Gruntwork, Azure Verified Modules, etc.) and to glue things together with business logic using higher-level tooling (i.e. Atmos), without the use of any intermediate wrapper modules and without instrumenting the low-level blocks. But it would require capabilities to connect outputs to inputs in a declarative way with some basic logic support (conditionals, iteration). Of course, for more sophisticated logic some “logic modules” could be written, but after all it would still be a totally flat structure of modules with orchestration on the config management layer, i.e. Atmos.
For now with Atmos, if I want to, e.g., use the output from one module in a for_each for another module, I need an additional module that: 1. grabs the output from remote state and 2. implements this for_each for the second module. Am I right?
TL;DR: we probably don’t support in Atmos (today) what you’re trying to do. That’s because we take a different approach.
You can see how we write components here. https://github.com/cloudposse/terraform-aws-components
The gist of it is, we’ve historically taken a very deliberate and different approach from tools like terragrunt (and now terramate), which are optimized for terraform code generation.
There are a lot of reasons for this, but chief amongst them is we haven’t needed to, despite the insane amounts of terraform we write.
In our approach, we write context-aware, purpose-built and highly re-usable components with Terraform. In terraform, it’s so easy to look up remote state, like “from each security group and reference the ids in the web1 component”, so we don’t do that in atmos.
Instead, we write our components so they know how to look up that remote state based on declarative keys.
So if we look at your example here:
web/1:
  vars:
    name: sampleapp
    vpc_id: <reference remote state from core stack>
    ecs_security_group_ids:
      - remote_state_reference_security_group/1
      - remote_state_reference_security_group/2
The “web” component should know how to look up security groups, e.g. by tag.
Child modules shouldn’t know how to do this. Because they are by design less opinionated, they work for anyone, even those who do not subscribe to the “Cloud Posse” way of doing Terraform with Atmos.
Atmos is opinionated, in that it follows our guidelines for how to write terraform for maximum reusability without code generation.
I wouldn’t write it off that we’ll never support it in atmos; it’s just that there are other things we’re working on right now before we consider how to do code generation, a pattern we don’t ourselves use right now.
Yeah, I think my ultimate goal is to eliminate rewriting the same structure over and over again for a technology. So if I have a CloudPosse module that knows how to write a subnet very well, it already has the context capabilities and can be reused anywhere. I’d like to be able to take one product, say a Kubernetes stack, and build subnets and whatnot by reference, gluing the pieces together like what you linked in the previous article. The modules can be opinionated enough that they meet our needs… or we can write our own, more opinionated root modules, and then reuse them across all of our business needs.
Then a new product release can just tie those things together via the catalog and be tweaked as needed on a stack-by-stack basis.
So with this in mind, and just to check my comprehension of the current state:
• Vendor in root modules that can be reused over and over again.
• Create my child modules that reference those root modules, which are more opinionated, and build core catalogs off of the child modules, which can have a baseline configuration in the catalog and then be tweaked on a stack basis.
in your component you just need to add this bit of code (in remote-state.tf) using the remote-state module:
module "vpc_flow_logs_bucket" {
count = local.vpc_flow_logs_enabled ? 1 : 0
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.5.0"
# Specify the Atmos component name (defined in YAML stack config files)
# for which to get the remote state outputs
component = var.vpc_flow_logs_bucket_component_name
# Override the context variables to point to a different Atmos stack if the
# `vpc-flow-logs-bucket-1` Atmos component is provisioned in another AWS account, OU or region
stage = try(coalesce(var.vpc_flow_logs_bucket_stage_name, module.this.stage), null)
tenant = try(coalesce(var.vpc_flow_logs_bucket_tenant_name, module.this.tenant), null)
environment = try(coalesce(var.vpc_flow_logs_bucket_environment_name, module.this.environment), null)
# `context` input is a way to provide the information about the stack (using the context
# variables `namespace`, `tenant`, `environment`, `stage` defined in the stack config)
context = module.this.context
}
and configure the variables in Atmos
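e.g., hypothetically, on the Atmos side (variable names taken from the snippet above; component names made up):
components:
  terraform:
    vpc-flow-logs:
      vars:
        vpc_flow_logs_bucket_component_name: vpc-flow-logs-bucket
        # optionally point at a different account/region:
        # vpc_flow_logs_bucket_stage_name: prod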
yeah, I was reading through this, and my disconnect has been trying to incorporate this with root modules that don’t already have this in place.
So, in my example, I want to build multiple subnets by declaring the same subnet module vendored from CloudPosse. If I don’t need a child module, I don’t want to create one. However, it sounds like what I need to do is build these child modules with the remote state configuration, with the exports listed that I’ll need in other stacks.
I’ve only been working with atmos for a couple of days, so my apologies if I missed something obvious in that document.
if you want us to review your config, DM me your repo and we’ll try to provide some ideas
Other things that work well with vendoring in this sort of situation include using _override.tf files
You can vendor in the upstream child module and essentially do monkey patching using override files
https://en.wikipedia.org/wiki/Monkey_patch https://developer.hashicorp.com/terraform/language/files/override
(what we want to avoid doing is reimplementing HCL in YAML)
Hrm, that’s an interesting thought. Currently what I’m thinking is that I need to evaluate a polyrepo structure for my highly opinionated modules which build the solution we’re looking for. There I can run through assertion testing, which was added recently, and pull passing versions into my “monorepo”, which helps control different stacks/solutions, and build the catalog/mixins for those opinionated repositories.
So I can still use the CloudPosse subnet module should I need it in a root module, by simply calling and pinning it, and then build my YAML catalogs around those vendored root modules. That brings fewer imports into the control repository and fewer things to maintain in vendor files… I just need to make sure that we are consistent and clean with versioned repos of our own that we’re vendoring in.
That would then allow me to bring in the remote_state configuration that Andriy referenced to build out all of my “core” services and export anything that would need to be referenced in other modules.
Yep, I think that might work.
If it would be helpful, happy to do a zoom and “white board” on some ideas.
I really appreciate that, thank you. I’m going to take some time and rework a personal project I’ve been working on to learn Atmos and will report back once I have it a bit more fleshed out. Really appreciate your time and conversation today, thank you so much.
any recommendations on how to pass output from 1 workflow command to another workflow command? right now just thinking of writing out/reading from a file…. just curious if there are other mechanisms that aren’t as kludgy
heh, wow. A lot of similar requests like this lately. If not workflow steps, then state between components.
I think using an intermediary file is your best bet right now.
For advanced workflows, you may want to consider something like Go Task. You can still call go-task as a custom command from atmos.
We have improvements planned (not started) for workflows. If you could share your use-case, it could help inform future implementations.
yeah I mean ultimately I need to run a shell command as part of my cold start (Azure) based on an ID that got generated by terraform… long term I won’t have this, probably, because there’s another provider I can use to skip the shell command and just do it all in terraform, but it’s going through corp approvals
Ok, then avoid the yak shaving for an elegant solution and go with an intermediary command.
Or, a variation of the above. Ensure the “ID that got generated by terraform” is an output of the component
Then in your workflow, just call atmos terraform output....
to retrieve the output.
You can use that in $(atmos terraform output ....)
to use it inline within your workflow
e.g.
workflows:
  foobar:
    steps:
      - command: echo "Hello $(atmos terraform output ....)"
        type: shell
Yep that’s true. Haven’t quite discovered how complex this morphs into but best to start small and simple
Thanks
v1.69.0
Restore Terraform workspace selection side effect
In Atmos v1.55 (PR #515) we switched to using the TF_WORKSPACE environment variable for selecting Terraform workspaces when issuing Terraform commands. This had the unintended consequence that the workspace was no longer selected (and created, if needed) as a side effect, so this release restores the previous behavior.
heads up @RB this rolls back what we implemented in #515 and TF_WORKSPACE, due to it behaving differently
2024-04-10
What’s the recommended approach to set up data dependencies between stacks (use the output of one stack as the input for another stack)? Data blocks?
please see this doc https://atmos.tools/core-concepts/components/remote-state
in your component where you want to read the remote state of another component, add
module "xxxxxx" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"
  # name of the Atmos component whose outputs you want to read
  # (see the full example above for the other inputs)
  component = "<other-component>"
  context   = module.this.context
}
is one approach
We’re working on refining another approach as well, but it’s not published. It’s also a native-terraform approach.
It’s important to address the heart of it: we try to leverage native terraform everywhere possible
Since terraform supports this use-case natively, we haven’t invested in solving it in atmos, because that would force vendor lock-in.
2024-04-11
2024-04-12
For anyone else working on GHA with Atmos https://sweetops.slack.com/archives/CB6GHNLG0/p1712912684845849
What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)
What is a good SCP for the identity account? Is it only iam roles that belong there, perhaps?
2024-04-13
hi all, I’m new to atmos and trying to follow along with the quick-start guide.
I got a weird issue where it seems like {{ .Version }} isn’t rendered when running atmos vendor pull:
❯ atmos vendor pull
Processing vendor config file 'vendor.yaml'
Pulling sources for the component 'vpc' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}' into 'components/terraform/vpc'
error downloading '<https://github.com/cloudposse/terraform-aws-components.git?ref=%7B%7B.Version%7D%7D>': /usr/bin/git exited with 1: error: pathspec '{{.Version}}' did not match any file(s) known to git
it works when I replace the templated version with a real one.
I’ve got the stock config from https://atmos.tools/quick-start/vendor-components and am running atmos 1.69.0:
❯ atmos version
█████ ████████ ███ ███ ██████ ███████
██ ██ ██ ████ ████ ██ ██ ██
███████ ██ ██ ████ ██ ██ ██ ███████
██ ██ ██ ██ ██ ██ ██ ██ ██
██ ██ ██ ██ ██ ██████ ███████
👽 Atmos 1.69.0 on darwin/arm64
@Ben in Atmos 1.68.0 we added gomplate to Atmos Go templates, and added the enabled flag to enable/disable templating, but the flag affected not only Atmos stack manifests but also all other places where templates are used (including vendoring and imports with templates), see https://atmos.tools/core-concepts/stacks/templating
we’ll release a new Atmos release today which fixes that issue
for now, either use Atmos 1.67.0, or add
templates:
  settings:
    enabled: true
to your atmos.yaml
today’s release 1.70.0 will fix that so you don’t need to do it
thanks! enabling templating in the config solved the issue
thanks. That flag should not affect templating in vendoring and imports, so it’s a bug, which will be fixed in 1.70.0
Hi All :wave:,
I’m just getting started with Atmos and wanted to check if nested variable interpolation is possible. My example is creating an ECR repo, and I want the name to have a prefix.
I’ve put the prefix: in a _defaults.yaml file and used {{ .vars.prefix }} in a stack/stage.yaml, and it doesn’t work.
vars:
  prefix: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}"
  repository_name: "{{ .vars.prefix }}-storybook"
How are others doing this resource prefixing? Thank you
currently Atmos does one pass of Go template processing; your example requires two passes. It’s not currently supported, but we’ll consider implementing it in the near future
Thank you for the quick response (not expected on a Saturday). I believe it’s helpful/convenient.
for now, consider repeating the template (not very DRY, but it will work)
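i.e. something like this, repeating the same template instead of referencing .vars.prefix:
vars:
  prefix: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}"
  repository_name: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}-storybook"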
yes, that’ll work, thank you
thank you
v1.70.0
Add gomplate datasources to Go templates in Atmos stack manifests. Update docs @aknysh (#582)
what
Add gomplate datasources to Go templates in Atmos stack manifests
Fix an issue with enabling/disabling templating in vendoring and imports
2024-04-15
Hi,
Is there a way to use the file path of the stack .yaml file as the workspace_key_prefix value?
Looking in the S3 bucket, the tfstate file prefix appears to be stored as bucket_name/component_name/stack_name-component_name.
Currently the repo structure is Stacks/app_name/env_name/stack_name, and I would prefer them to be aligned, but maybe they don’t need to be.
Thank you
@Andriy Knysh (Cloud Posse) would know the specifics of this, but one word of caution - this couples more of the filesystem to the terraform state, making it more cumbersome to reorganize stacks in the future
It’s really nice to be able to move stack files around, and it doesn’t affect your terraform state.
(It was a design consideration that configuration be derived from configuration rather than filesystem location)
That said, now with some of the go template/gomplate stuff, it could be possible.
So if you add something like this in your stack manifests (where exactly depends on your import/inheritance chain), it will work as you describe
vars:
  stage: staging
  tags:
    stack_file: '{{ .atmos_stack_file }}'
import:
  - catalog/myapp
terraform:
  backend_type: s3
  backend:
    s3:
      workspace_key_prefix: '{{ .atmos_stack_file }}'
components:
  terraform:
    myapp:
      vars:
        location: Los Angeles
        lang: en
Note that var.tags.stack_file is entirely optional and has no bearing on the workspace_key_prefix. I just added it as an example of how you could also tag every resource with the stack file that manages it.
For this to work, make sure you have the following enabled in your atmos.yaml:
# https://pkg.go.dev/text/template
templates:
  settings:
    enabled: true
    # https://masterminds.github.io/sprig
    sprig:
      enabled: true
    # https://docs.gomplate.ca
    gomplate:
      enabled: true
Or you might get an error like this:
❯ atmos describe stacks
invalid stack manifest 'deploy/staging.yaml'
yaml: invalid map key: map[interface {}]interface {}{".atmos_stack_file":interface {}(nil)}
the templates support all values that are returned from atmos describe component <component> -s <stack>. So you can also do something like
workspace_key_prefix: '{{ .vars.app_name }}/{{ .vars.environment }}...'
oh, if you configure the settings section with some metadata, you could use it in the templates as well, e.g. (just an example)
workspace_key_prefix: '{{ .settings.app_name }}/{{ .settings.workspace_key }}...'
the settings section is a free-form map; you can add any properties to it, including embedded maps/objects. Then it can be used anywhere in the stack manifests in Go templates, Sprig Functions, Gomplate Functions and Gomplate Datasources
thank you both for the response. I see the value in disconnecting the path of the state file from the file system; I’ll give that more thought. And thank you for informing me about the template support of atmos describe component ... values.
2024-04-16
Hi,
I’m currently using terragrunt and want to migrate to atmos. One very convenient thing about terragrunt is that I can simply overwrite the terraform module git repo URLs with a local path (https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-source-map). This allows me to develop tf modules using terragrunt in an efficient way.
Is there anything like it in atmos? If not, what is the best way to develop tf modules while using atmos?
Hey guys, I’m looking to adopt Atmos for the whole organisation I’m currently working/building for.
I’m trying to plan the monorepo structure and how to organise things to avoid the pain of re-organising later on. I see the tool is pretty flexible, so right now I’m looking at https://atmos.tools/design-patterns/organizational-structure-configuration to find some convention guidance. Looking at the atmos cli terraform commands, I understand that org, tenant, region and environment/stage/account are somehow merged into one
atmos terraform apply vpc-flow-logs-bucket -s org1-plat-ue2-dev
but it’s a bit hard to grasp how to approach the structure when my goal is to deploy the same infrastructure in every region, for the most part, in a multi-cloud setup. Any tips?
Hey @toka this is the exact use-case we solve in our reference architecture and our customers use. I’ll take a stab at answering it.
Right now I was thinking: should I split the code between different orgs directories, or between different tenants, or both
So the first thing to understand is how we achieve this at the terraform layer. It’s important that your terraform is written to a) support a parameterized region in providers.tf, b) leverage something like our terraform-null-label module, c) use some field to connote the region.
We use env in the terraform-null-label to connote the region, and stage to connote where it’s in the lifecycle (e.g. dev, staging, prod, etc.)
So basically, going “multi-region” requires first solving how you name your resources, using a naming convention.
Are you already familiar with our cloudposse/terraform-null-label
module?
Regarding naming, I have a naming module implemented already; all module resources are named by sourcing the naming module first
Right now I was thinking: should I split the code between different orgs directories, or between different tenants, or both
So, “code” might be a bit ambiguous here, since “code” could refer to the stacks (YAML) or to the terraform (HCL).
We recommend organizing the stacks the way you describe, but organizing the terraform much the way you organize any software application’s code. So, for example, stick all “EKS”-related root module components into components/terraform/eks/<blah>
Ok, I’d need to know more about the naming module to answer specifically, but if it works similar to null label, you should be good. To be clear, atmos does not know anything about null-label; it’s just that most of our documentation assumes its usage.
Here’s how we organize it in our commercial reference architecture
How we organize components is largely the same as this (which is very different from how stacks are organized): https://github.com/cloudposse/terraform-aws-components/tree/main/modules
Note, components don’t have to live in a separate repo. Ours do because this is how we distribute them as open source.
In general, components should be organized by "service" (e.g. EKS vs ECS), or "app1" and "app2", and if multi-cloud, by provider, so aws/eks and azure/aks
but if it works similar to null label, you should be good
Yes, it works very similarly, almost the same, though the Cloud Posse implementation seems to be better since you can inherit with

# Here's the important bit
context = module.public_alb_label.context

avoiding code duplication - in my TF codebase, I need to instantiate the naming module for each and every resource.
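For comparison, a minimal sketch of that inheritance with cloudposse/terraform-null-label (the label names and values are illustrative):

module "public_alb_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "acme"
  environment = "ue2"
  stage       = "dev"
  name        = "alb"
}

module "target_group_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  name       = "tg"
  attributes = ["public"]

  # Here's the important bit: inherit namespace/environment/stage from the parent label
  context = module.public_alb_label.context
}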
So, "code" might be a bit ambiguous here since "code" could refer to the stacks (YAML). I'm referring to the stacks/YAML file structure. For the HCL TF files, I'm planning to re-use all existing TF modules, such as common_vpc, common_compute_instance, common_iam, etc.
I think my need is to not duplicate stack configs across regions
Ok, so far I think we’re on the same page.
Now, use inheritance to set vars at each level of your directory structure.
We use the convention of a file called _defaults.yaml that we stash at each level. This then sets the namespace (at the root of the directory), the tenant (aka OU) at the OU level, the stage in the stage folder, the env in the region folder, and so on. Each defaults file can import the parent defaults, as sketched below.
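A minimal sketch of that convention, assuming the quick-start directory layout (paths and values are illustrative):

# stacks/orgs/acme/_defaults.yaml
vars:
  namespace: acme

# stacks/orgs/acme/plat/_defaults.yaml
import:
  - orgs/acme/_defaults
vars:
  tenant: plat

# stacks/orgs/acme/plat/dev/_defaults.yaml
import:
  - orgs/acme/plat/_defaults
vars:
  stage: dev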
I think my need is to not duplicate stack configs across regions
Then we use the "catalog" directory to define the baseline configuration for "something"
• Something could be a service
• Something could be a region
• Something could be an application
• Something could be a layer
What "something" is will depend on how you want to logically organize your configuration for maximum DRYness and reusability
Here’s a quick demo of how we organize it in our refarch
What "something" is will depend on how you want to logically organize your configuration for maximum DRYness and reusability Ok Erik, I think now I get it; this seems to be it. This is very lit, very promising. I needed to see that example in order to understand that through the catalog I can set a baseline for regions, for services or a layer, and not only for a TF component, which is how I saw it initially
Yes, exactly! It’s more than just for individual TF components, which makes it very powerful. You can define the configuration for how a set of components behave when in a “region” or in an OU.
@toka please take a look at https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks, which is described in Atmos Quick Start
As Erik mentioned, the defaults go into _defaults.yaml at every scope (org, tenant, account, region)
https://github.com/cloudposse/atmos/blob/master/examples/quick-start/stacks/orgs/acme/_defaults.yaml
vars:
  namespace: acme

terraform:
  vars:
    tags:
      # <https://atmos.tools/core-concepts/stacks/templating>
      atmos_component: "{{ .atmos_component }}"
      atmos_stack: "{{ .atmos_stack }}"
      atmos_manifest: "{{ .atmos_stack_file }}"
      terraform_workspace: "{{ .workspace }}"
      # Examples of using the Sprig and Gomplate functions
      # <https://masterminds.github.io/sprig/os.html>
      provisioned_by_user: '{{ env "USER" }}'
      # <https://docs.gomplate.ca/functions/strings>
      atmos_component_description: "{{ strings.Title .atmos_component }} component {{ .vars.name | strings.Quote }} provisioned in the stack {{ .atmos_stack | strings.Quote }}"

  # Terraform backend configuration
  # <https://atmos.tools/core-concepts/components/terraform-backends>
  # <https://developer.hashicorp.com/terraform/language/settings/backends/configuration>
  # backend_type: cloud # s3, cloud
  # backend:
  #   # AWS S3 backend
  #   s3:
  #     acl: "bucket-owner-full-control"
  #     encrypt: true
  #     bucket: "your-s3-bucket-name"
  #     dynamodb_table: "your-dynamodb-table-name"
  #     key: "terraform.tfstate"
  #     region: "us-east-2"
  #     role_arn: "arn:aws:iam::<your account ID>:role/<IAM Role with permissions to access the Terraform backend>"
  #   # Terraform Cloud backend
  #   # <https://developer.hashicorp.com/terraform/cli/cloud/settings>
  #   cloud:
  #     organization: "your-org"
  #     hostname: "app.terraform.io"
  #     workspaces:
  #       # The token `{terraform_workspace}` will be automatically replaced with the
  #       # Terraform workspace for each Atmos component
  #       name: "{terraform_workspace}"
the mixins are the catalog/defaults/baseline for all defaults for tenants, accounts and regions
https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/mixins
(the mixins are imported into the top-level stacks to make them DRY)
the catalog is the catalog/baseline for the components config
https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/catalog
(the component catalog is imported into mixins if you use the same component in many tenants/accounts/regions, or is imported into the top-level stacks directly)
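A minimal sketch of that flow (paths and values are illustrative): the catalog holds the component baseline, and a mixin imports it where it's used:

# stacks/catalog/vpc.yaml - baseline config for the vpc component
components:
  terraform:
    vpc:
      vars:
        enabled: true

# stacks/mixins/region/us-east-2.yaml - region mixin importing the baseline
import:
  - catalog/vpc
vars:
  environment: ue2
  region: us-east-2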
so the baseline for regions is in https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/mixins/region
the baseline for tenants is in https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/mixins/tenant
the baseline for accounts is in https://github.com/cloudposse/atmos/tree/master/examples/quick-start/stacks/mixins/stage
etc.
Thank you for your help! I understand you put a lot of work into the documentation and examples, and I appreciate it a lot, but I won't pretend it's easy to switch the thinking.
I'll try my best to adopt atmos and prove its value; hopefully some day we'll have some resources to contribute back to the project. Wish me luck
I also got a lot of value out of the design patterns documentation. https://atmos.tools/design-patterns/
What are best practices for handling the scenario when you haven’t used a tenant in your naming scheme to overcome NULL errors?
label_order: ["namespace", "stage", "environment", "name", "attributes"]
To date I have just been removing the code, realizing this is only a temporary solution
EXAMPLE:
{ "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
map_environment = lookup(each.value, "map_environment", null) != null ? merge(
{ for k, v in local.env_map_subst : split(",", k)[1] => v if split(",", k)[0] == each.key },
{ "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
{ "RUNTIME_ENV" = format("%s-%s-%s", var.namespace, var.tenant, var.stage) },
{ "CLUSTER_NAME" = module.ecs_cluster.outputs.cluster_name },
var.datadog_agent_sidecar_enabled ? {
"DD_DOGSTATSD_PORT" = 8125,
"DD_TRACING_ENABLED" = "true",
"DD_SERVICE_NAME" = var.name,
"DD_ENV" = var.stage,
"DD_PROFILING_EXPORTERS" = "agent"
} : {},
lookup(each.value, "map_environment", null)
) : null
ERROR: Null Value for Tenant
╷
│ Error: Error in function call
│
│ on main.tf line 197, in module "container_definition":
│ 197: { "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
│ ├────────────────
│ │ while calling format(format, args...)
│ │ var.environment is "cc1"
│ │ var.namespace is "dhe"
│ │ var.stage is "dev"
│ │ var.tenant is null
│
│ Call to function "format" failed: unsupported value for "%s" at 3: null value cannot be formatted.
╵
Generic, non-company-specific locals:

locals {
  enabled              = module.this.enabled
  s3_mirroring_enabled = local.enabled && try(length(var.s3_mirror_name) > 0, false)
  service_container    = lookup(var.containers, "service")
  # Get the first containerPort in var.containers["service"]["port_mappings"]
  container_port   = try(lookup(local.service_container, "port_mappings")[0].containerPort, null)
  assign_public_ip = lookup(local.task, "assign_public_ip", false)
  container_definition = concat(
    [for container in module.container_definition : container.json_map_object],
    [for container in module.datadog_container_definition : container.json_map_object],
    var.datadog_log_method_is_firelens ? [for container in module.datadog_fluent_bit_container_definition : container.json_map_object] : [],
  )
  kinesis_kms_id             = try(one(data.aws_kms_alias.selected[*].id), null)
  use_alb_security_group     = local.is_alb ? lookup(local.task, "use_alb_security_group", true) : false
  task_definition_s3_key     = format("%s/%s/task-definition.json", module.ecs_cluster.outputs.cluster_name, module.this.id)
  task_definition_use_s3     = local.enabled && local.s3_mirroring_enabled && contains(flatten(data.aws_s3_objects.mirror[*].keys), local.task_definition_s3_key)
  task_definition_s3_objects = flatten(data.aws_s3_objects.mirror[*].keys)
  task_definition_s3         = try(jsondecode(data.aws_s3_object.task_definition[0].body), {})
  task_s3 = local.task_definition_use_s3 ? {
    launch_type  = try(local.task_definition_s3.requiresCompatibilities[0], null)
    network_mode = lookup(local.task_definition_s3, "networkMode", null)
    task_memory  = try(tonumber(lookup(local.task_definition_s3, "memory")), null)
    task_cpu     = try(tonumber(lookup(local.task_definition_s3, "cpu")), null)
  } : {}
  task = merge(var.task, local.task_s3)
  efs_component_volumes      = lookup(local.task, "efs_component_volumes", [])
  efs_component_map          = { for efs in local.efs_component_volumes : efs["name"] => efs }
  efs_component_remote_state = { for efs in local.efs_component_volumes : efs["name"] => module.efs[efs["name"]].outputs }
  efs_component_merged = [for efs_volume_name, efs_component_output in local.efs_component_remote_state : {
    host_path = local.efs_component_map[efs_volume_name].host_path
    name      = efs_volume_name
    # again this is a hardcoded array because AWS does not support multiple configurations per volume
    efs_volume_configuration = [{
      file_system_id          = efs_component_output.efs_id
      root_directory          = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].root_directory
      transit_encryption      = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].transit_encryption
      transit_encryption_port = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].transit_encryption_port
      authorization_config    = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].authorization_config
    }]
  }]
  efs_volumes = concat(lookup(local.task, "efs_volumes", []), local.efs_component_merged)
}

data "aws_s3_objects" "mirror" {
  count  = local.s3_mirroring_enabled ? 1 : 0
  bucket = lookup(module.s3[0].outputs, "bucket_id", null)
  prefix = format("%s/%s", module.ecs_cluster.outputs.cluster_name, module.this.id)
}

data "aws_s3_object" "task_definition" {
  count  = local.task_definition_use_s3 ? 1 : 0
  bucket = lookup(module.s3[0].outputs, "bucket_id", null)
  key    = try(element(local.task_definition_s3_objects, index(local.task_definition_s3_objects, local.task_definition_s3_key)), null)
}

module "logs" {
  source  = "cloudposse/cloudwatch-logs/aws"
  version = "0.6.8"
  # if we are using datadog firelens we don't need to create a log group
  count                  = local.enabled && (!var.datadog_agent_sidecar_enabled || !var.datadog_log_method_is_firelens) ? 1 : 0
  stream_names           = lookup(var.logs, "stream_names", [])
  retention_in_days      = lookup(var.logs, "retention_in_days", 90)
  principals             = merge({ Service = ["ecs.amazonaws.com", "ecs-tasks.amazonaws.com"] }, lookup(var.logs, "principals", {}))
  additional_permissions = concat(["logs:CreateLogStream", "logs:DeleteLogStream"], lookup(var.logs, "additional_permissions", []))
  context                = module.this.context
}

module "roles_to_principals" {
  source   = "../account-map/modules/roles-to-principals"
  context  = module.this.context
  role_map = {}
}

locals {
  container_chamber             = { for name, result in data.aws_ssm_parameters_by_path.default : name => { for key, value in zipmap(result.names, result.values) : element(reverse(split("/", key)), 0) => value } }
  container_aliases             = { for name, settings in var.containers : settings["name"] => name if local.enabled }
  container_s3                  = { for item in lookup(local.task_definition_s3, "containerDefinitions", []) : local.container_aliases[item.name] => { container_definition = item } }
  containers_priority_terraform = { for name, settings in var.containers : name => merge(local.container_chamber[name], lookup(local.container_s3, name, {}), settings) if local.enabled }
  containers_priority_s3        = { for name, settings in var.containers : name => merge(settings, local.container_chamber[name], lookup(local.container_s3, name, {})) if local.enabled }
}

data "aws_ssm_parameters_by_path" "default" {
  for_each = { for k, v in var.containers : k => v if local.enabled }
  path     = format("/%s/%s/%s", var.chamber_service, var.name, each.key)
}

locals {
  containers_envs = merge([for name, settings in var.containers : { for k, v in lookup(settings, "map_environment", {}) : "${name},${k}" => v if local.enabled }]...)
}

data "template_file" "envs" {
  for_each = { for k, v in local.containers_envs : k => v if local.enabled }
  template = replace(each.value, "$$", "$")
  vars = {
    stage         = module.this.stage
    namespace     = module.this.namespace
    name          = module.this.name
    full_domain   = local.full_domain
    vanity_domain = var.vanity_domain
    # service_domain uses whatever the current service is (public/private)
    service_domain         = local.domain_no_service_name
    service_domain_public  = local.public_domain_no_service_name
    service_domain_private = local.private_domain_no_service_name
  }
}

locals {
  env_map_subst = { for k, v in data.template_file.envs : k => v.rendered }
  map_secrets = { for k, v in local.containers_priority_terraform : k => lookup(v, "map_secrets", null) != null ? zipmap(
    keys(lookup(v, "map_secrets", null)),
    formatlist("%s/%s", format("arn:aws:ssm:%s:%s:parameter", var.region, module.roles_to_principals.full_account_map[format("%s-%s", var.tenant, var.stage)]), values(lookup(v, "map_secrets", null)))
  ) : null }
}

module "container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.61.1"

  for_each = { for k, v in local.containers_priority_terraform : k => v if local.enabled }

  container_name = each.value["name"]
  container_image = lookup(each.value, "ecr_image", null) != null ? format(
    "%s.dkr.ecr.%s.amazonaws.com/%s",
    module.roles_to_principals.full_account_map[var.ecr_stage_name],
    coalesce(var.ecr_region, var.region),
    lookup(local.containers_priority_s3[each.key], "ecr_image", null)
  ) : lookup(local.containers_priority_s3[each.key], "image")
  container_memory             = each.value["memory"]
  container_memory_reservation = each.value["memory_reservation"]
  container_cpu                = each.value["cpu"]
  essential                    = each.value["essential"]
  readonly_root_filesystem     = each.value["readonly_root_filesystem"…
Aha, so you're trying to pull down some Cloud Posse managed components, which assume you're using tenant.
It’s a good question, I’m afraid right now that’s not possible without forking the components. Our components are designed/optimized to work with our commercial reference architecture. We give them away in the event they are useful for others, but they are not as generic as our child modules (e.g. https://github.com/cloudposse)
We intend to generalize this in future versions of our refarch, but for now that’s not supported.
But I can tell you how to work around it!
Use the monkey patching approach described here
So we've seen more and more similar requests to these, and I can really identify with this request as something we could/should support via the atmos vendor command.
One of our design goals in atmos is to avoid code generation / manipulation as much as possible. This ensures future compatibility with terraform.
So while we don't support it the way terragrunt does, we support it the way terraform already supports it. :smiley:
That's using the _override.tf pattern.
We like this approach because it keeps code as vanilla as possible while sticking with features native to Terraform.
https://developer.hashicorp.com/terraform/language/files/override
So, let's say you have a main.tf with something like this (from the terragrunt docs you linked):
module "example" {
source = "github.com/org/modules.git//example"
// other parameters
}
To do what you want to do in native terraform, create a file called main_override.tf:
module "example" {
source = "/local/path/to/modules//example"
}
You don't need to duplicate the rest of the definition, only the parts you want to "override", like the source.
In this case, you would create an override that alters the map_environment.
This way you can preserve all the rest of the functionality, without forking.
In this case, you would use main_override.tf to replace the local:
locals {
map_secrets = ....
}
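For example, a hedged sketch of that idea applied to the null-tenant error above (not the exact upstream schema; note that an override replaces the whole argument, here the entire map_environment expression):

# main_override.tf
module "container_definition" {
  map_environment = {
    # join() + compact() drop empty parts, so a null tenant no longer
    # breaks format("%s-%s-%s-%s", ...)
    "APP_ENV" = join("-", compact([var.namespace, coalesce(var.tenant, ""), var.environment, var.stage]))
  }
}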
Ah k, and the override is there so you can pull updates without getting them overwritten? From time to time you might need to update the override in order to match the new schema?
Yes, since the overrides are in a file named ...._override.tf, they will never get overwritten by our upstream. This is a "contract" we have to ensure vendoring + overrides always works. In other words, we will not have files named ...._override.tf in our upstream components repo.
2024-04-17
Besides renaming an account, what can be done if the account name is too long, causing module.this.id to hit max character restrictions for AWS resources?
So there’s no strict enforcement on AWS or in atmos on what those names are. It’s purely self-inflicted
It’s not clear why you cannot change the account name in stack configuration
If we make the stage shorter, then new account names have to be learned when targeting a stack from the command line.
I did think maybe using stage aliases would help with this problem and wrote up https://github.com/cloudposse/atmos/issues/581
Aha, yes, I recall that
I think this is an interesting use case.
oh, I didn't see you commented there. I was also opening this thread to see alternatives, but I'll read up on the comments now, sorry
For example, a recent customer didn't like that we abbreviated the stage name for production to prod to conserve length. If we had support for aliases, the full name could still be used.
Short term, I don’t see any solution besides using an abbreviation.
Longer term, there’s something else we’re about to release that could be where this is solved.
If we go with the short-term option of an abbreviation, that becomes the codified naming convention.
Another alternative is dropping the namespace from the null label for accounts that hit these restrictions… but then we have inconsistency.
Both short-term options are hard to back out of.
Also, it predominantly affects specific resources in AWS.
Are relative path imports for catalogs supported in Atmos YAML manifests?

import:
  # relative path from stacks/
  - path: catalog/services/echo-server/resources/*.yaml
  # relative path from the catalog itself
  - path: ./resources/*.yaml
We have an issue on our roadmap to implement this, but no ETA
Ah ok found the ticket https://github.com/cloudposse/atmos/issues/293
Describe the Feature
Instead of requiring the full path relative to the top level, support paths relative to the current file.
Today, we have to write:
# orgs/acme/mgmt/auto/network.yaml
import:
  - orgs/acme/mgmt/auto/_defaults
Instead, we should be able to write:
# orgs/acme/mgmt/auto/network.yaml
import:
  - _defaults
Expected Behavior
Whereby _defaults resolves to orgs/acme/mgmt/auto/_defaults.yaml
Use Case
Make configurations easier to templatize without knowing the full path to the current working file.
Describe Ideal Solution
Basically, take the dirname('orgs/acme/mgmt/auto/network.yaml') and concatenate it with /_defaults.yaml
Alternatives Considered
Today, we’re using cookie-cutter when generating the files:
import:
- orgs/{{ cookiecutter.namespace }}/mgmt/auto/_defaults
ty
currently, all paths are relative to the “stacks” folder
ya, but I was trying to import relative to another catalog
you're right, both are relative
you are talking about paths relative to the current manifest file. This will save you from typing the path prefix (e.g. catalog/xxx), but at the same time it will make the manifest non-portable (always some compromises/tradeoffs)
2024-04-19
Is there anything that changed on the Atmos side regarding GHA pipelines? I have my workflow file that runs certain files that have commands for plan and apply. Last week, I was able to run multiple plan and apply commands in one file. Now, my workflow hangs in GHA. The only fix I have is to run one command at a time which takes a lot more time for me to deploy things
Hey @pv - this is the first report we hear of that.
I wonder if it could be a concurrency setting somewhere?
Are you using self-hosted runners?
@Erik Osterman (Cloud Posse) No, in this setup it is not self hosted
@Igor Rodionov any ideas on this one?
@pv can you share the logs of GHA failed run ?
There are no logs of value. On the plan or deploy stage, it just gets stuck and runs forever until the workflow is cancelled
Are you using the version of the GHA that uses atmos.yml or the "gitops" config?
weird
We have an atmos.yaml
Would you happen to be leveraging -lock-timeout anywhere?
@Erik Osterman (Cloud Posse) Nope, just checked the yaml and don’t see that value anywhere
# CLI config is loaded from the following locations (from lowest to highest priority):
#   system dir ('/usr/local/etc/atmos' on Linux, '%LOCALAPPDATA%/atmos' on Windows)
#   home dir (~/.atmos)
#   current directory
#   ENV vars
#   Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star '**' is supported)
# <https://en.wikipedia.org/wiki/Glob_(programming)>

# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path'
# and 'workflows.base_path' are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path'
# and 'workflows.base_path' are considered paths relative to 'base_path'.
base_path: ""

components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
    apply_auto_approve: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
    deploy_run_init: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: true

stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  file: "/dev/stdout"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  level: Trace

# Custom CLI commands
commands: []

# Integrations
integrations: {}

# Validation schemas (for validating atmos stacks and components)
schemas:
  # <https://json-schema.org>
  jsonschema:
    # Can also be set using 'ATMOS_SCHEMAS_JSONSCHEMA_BASE_PATH' ENV var, or '--schemas-jsonschema-dir' command-line arguments
    # Supports both absolute and relative paths
    base_path: "stacks/schemas/jsonschema"
  # <https://www.openpolicyagent.org>
  opa:
    # Can also be set using 'ATMOS_SCHEMAS_OPA_BASE_PATH' ENV var, or '--schemas-opa-dir' command-line arguments
    # Supports both absolute and relative paths
    base_path: "stacks/schemas/opa"
  # JSON Schema to validate Atmos manifests
  # <https://atmos.tools/reference/schemas/>
  # <https://atmos.tools/cli/commands/validate/stacks/>
  # <https://atmos.tools/quick-start/configure-validation/>
  # <https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json>
  # <https://json-schema.org/draft/2020-12/release-notes>
  atmos:
    # Can also be set using 'ATMOS_SCHEMAS_ATMOS_MANIFEST' ENV var, or '--schemas-atmos-manifest' command-line arguments
    # Supports both absolute and relative paths (relative to the `base_path` setting in `atmos.yaml`)
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
Your integrations section is empty, which tells me it’s not configured for the current version of the actions
I am afk, but if you check the actions themselves, they have a migration guide
@Igor Rodionov @Gabriela Campana (Cloud Posse) we should error if the integration configuration is absent
What integration do I need? The documentation only shows the run Atlantis integration and I can’t find any list of other integrations
Thanks! It would be nice to have the list of integrations linked in the docs integrations page as well
Most of these arguments seem to be related to AWS so I don’t know if this will actually solve my issue. I am using GCP. The only thing I could really use from this is setting the Atmos version but we already have that set as latest in our pipeline
@Igor Rodionov any action item here?
@Erik Osterman (Cloud Posse) @Igor Rodionov I did some digging and I found the issue. When deploying each project I need to pass a GHA variable for the project ID which is secured. When running this workflow, it freezes because each apply command is asked for the variable that is stored as a GHA var. So my question is how do I fix the workflow to apply the GHA variable for each plan and apply command?
Let me put this in my own words, reading between the lines.
In a GitHub repo, you've configured a variable (or a secret) that contains some sort of a "project id".
When you run the GitHub Action Workflow (not an atmos workflow), it freezes during the atmos terraform apply because terraform apply is prompting for an unset variable (e.g. var.project_id). The value you want to pass to Terraform (e.g. var.project_id) is available in the repository.
We might not be passing -input=false in our action. I'll check. But that would explain why it's hanging.
To solve the variable passing, are you using the TF_VAR_project_id format, which corresponds to var.project_id in Terraform?
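A minimal sketch of that wiring (the step, component, stack and secret names are illustrative):

# .github/workflows/deploy.yaml (excerpt)
- name: atmos plan
  run: atmos terraform plan my-component -s my-stack
  env:
    # Terraform reads any TF_VAR_* env var as the corresponding input variable
    TF_VAR_project_id: ${{ secrets.PROJECT_ID }}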
We do set -input=false in the plan, which is good. Checking apply:
https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L208
And we do set -input=false in apply. So I'm not sure why it would be hung on input.
https://github.com/cloudposse/github-action-atmos-terraform-apply/blob/main/action.yml#L302
If you could share any more details, or logs that would be great. And if it’s sensitive, feel free to DM me
@Erik Osterman (Cloud Posse) I don’t know what logs I can give when the job is stuck. When I reduce the workflow file to one command, it works. When I add more than one, it hangs. When I run locally, with all commands enabled, it asks for the billing ID for each command. I am passing it as an env var as GH recommends and this was all working a month ago just fine. We have made no changes to the pipeline and the only change to this atmos workflow was adding additional plans and applies
Understood… that’s frustrating
I know it’s no consolation, but none of our customers are reporting any issues with this, so we don’t have any way to reproduce it. What happens if you pin to an earlier commit of the actions? For example, pin to a commit that corresponds to the last known date it was working
Then we could look at what changed between then and now.
2024-04-23
Hey All, I'm confused as to the best way to handle this, though I'm sure I'll learn as I get into working with Cloud Posse more. In a few of the small modules I've made, I build a main or a couple of main object variables, with everything tied back to that master object var. In looking at terraform-aws-network-firewall, the variables are more of an any type and I have to figure out the YAML data structures for input. Both ways make sense, especially because network firewalls have kind of a lot of settings to wrangle. I had most of my module built for network-firewall, but based on Erik's suggestion I figured I'd give your module a try. The examples are helping, but GPT is definitely helping me more than I'm helping myself in TF to YAML conversions lol.
Morning btw everyone.
Hrmmm so basically, are you trying to figure out the shape of the YAML for the component?
Ideally, our components all have a working example to make that easier.
In this case, you’re referring to our child module terraform-aws-network-firewall
yea, but network-firewall has a LOT of variables
yea
here's an example of how to configure terraform-aws-network-firewall in Atmos:
components:
  terraform:
    # <https://catalog.workshops.aws/networkfirewall/en-US/intro>
    # <https://d1.awsstatic.com/events/aws-reinforce-2022/NIS308_Deploying-AWS-Network-Firewall-at-scale-athenahealths-journey.pdf>
    network-firewall:
      metadata:
        component: "network-firewall"
      vars:
        enabled: true
        name: "network-firewall"
        # The name of a VPC component where the Network Firewall is provisioned
        vpc_component_name: "vpc"
        all_traffic_cidr_block: "0.0.0.0/0"
        delete_protection: false
        firewall_policy_change_protection: false
        subnet_change_protection: false
        # Logging config
        logging_enabled: true
        flow_logs_bucket_component_name: "network-firewall-logs-bucket-flow"
        alert_logs_bucket_component_name: "network-firewall-logs-bucket-alert"
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateless-default-actions.html>
        # <https://docs.aws.amazon.com/network-firewall/latest/APIReference/API_FirewallPolicy.html>
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-action.html#rule-action-stateless>
        stateless_default_actions:
          - "aws:forward_to_sfe"
        stateless_fragment_default_actions:
          - "aws:forward_to_sfe"
        stateless_custom_actions: []
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html>
        # <https://github.com/aws-samples/aws-network-firewall-strict-rule-ordering-terraform>
        policy_stateful_engine_options_rule_order: "STRICT_ORDER"
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-default-actions.html>
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-default-rule-evaluation-order>
        # <https://docs.aws.amazon.com/network-firewall/latest/APIReference/API_FirewallPolicy.html>
        stateful_default_actions:
          - "aws:alert_established"
          # - "aws:alert_strict"
          # - "aws:drop_established"
          # - "aws:drop_strict"
        # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-groups.html>
        # Map of arbitrary rule group names to rule group configs
        rule_group_config:
          stateful-inspection:
            # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-group-managing.html#nwfw-rule-group-capacity>
            # For stateful rules, `capacity` means the max number of rules in the rule group
            capacity: 1000
            name: "stateful-inspection"
            description: "Stateful inspection of packets"
            type: "STATEFUL"
            rule_group:
              rule_variables:
                port_sets: []
                ip_sets:
                  - key: "APPS_CIDR"
                    definition:
                      - "10.96.0.0/10"
                  - key: "SCANNER"
                    definition:
                      - "10.80.40.0/32"
                  - key: "CIDR_1"
                    definition:
                      - "10.32.0.0/12"
                  - key: "CIDR_2"
                    definition:
                      - "10.64.0.0/12"
                  # bad actors today on 8 blacklists
                  - key: "BLACKLIST"
                    definition:
                      - "193.142.146.35/32"
                      - "69.40.195.236/32"
                      - "125.17.153.207/32"
                      - "185.220.101.4/32"
              stateful_rule_options:
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html>
                # All the stateful rule groups are provided to the rule engine as Suricata compatible strings.
                # Suricata can evaluate stateful rule groups by using the default rule group ordering method,
                # or you can set an exact order using the strict ordering method.
                # The settings for your rule groups must match the settings for the firewall policy that they belong to.
                # With strict ordering, the rule groups are evaluated by order of priority, starting from the lowest number,
                # and the rules in each rule group are processed in the order in which they're defined.
                rule_order: "STRICT_ORDER"
              # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-how-to-provide-rules.html>
              rules_source:
                # Suricata rules for the rule group
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html>
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html>
                # <https://github.com/aws-samples/aws-network-firewall-terraform/blob/main/firewall.tf#L66>
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-suricata.html>
                # <https://coralogix.com/blog/writing-effective-suricata-rules-for-the-sta/>
                # <https://suricata.readthedocs.io/en/suricata-6.0.10/rules/intro.html>
                # <https://suricata.readthedocs.io/en/suricata-6.0.0/rules/header-keywords.html>
                # <https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-action.html>
                # <https://yaml-multiline.info>
                #
                # With Strict evaluation order, the rules in each rule group are processed in the order in which they're defined.
                #
                # Pass - Discontinue inspection of the matching packet and permit it to go to its intended destination.
                #
                # Drop or Alert - Evaluate the packet against all rules with drop or alert action settings.
                # If the firewall has alert logging configured, send a message to the firewall's alert logs for each matching rule.
                # The first log entry for the packet will be for the first rule that matched the packet.
                # After all rules have been evaluated, handle the packet according to the action setting in the first rule that matched the packet.
                # If the first rule has a drop action, block the packet. If it has an alert action, continue evaluation.
                #
                # Reject - Drop traffic that matches the conditions of the stateful rule and send a TCP reset packet back to the sender of the packet.
                # A TCP reset packet is a packet with no payload and a RST bit contained in the TCP header flags.
                # Reject is available only for TCP traffic. This option doesn't support FTP and IMAP protocols.
                rules_string: |
                  alert ip $BLACKLIST any <> any any ( msg:"Alert on blacklisted traffic"; sid:100; rev:1; )
                  drop ip $BLACKLIST any <> any any ( msg:"Blocked blacklisted traffic"; sid:200; rev:1; )
                  pass ip $SCANNER any -> any any ( msg: "Allow scanner"; sid:300; rev:1; )
                  alert ip $APPS_CIDR any -> $CIDR_1 any ( msg:"Alert on APPS_CIDR to CIDR_1 traffic"; sid:400; rev:1; )
                  drop ip $APPS_CIDR any -> $CIDR_1 any ( msg:"Blocked APPS_CIDR to CIDR_1 traffic"; sid:410; rev:1; )
                  alert ip $APPS_CIDR any -> $CIDR_2 any ( msg:"Alert on APPS_CIDR to CIDR_2 traffic"; sid:500; rev:1; )
                  drop ip $APPS_CIDR any -> $CIDR_2 any ( msg:"Blocked APPS_CIDR to CIDR_2 traffic"; sid:510; rev:1; )
nice, ty andriy. give me a few to read, I'm split between a call and this.
Make sure you are using our component and not the child module
ok, maybe I pulled the wrong module
that's much closer in the example; I was trying to figure out the YAML objects myself
ooo, likely interested in the zscaler module but with govcloud configs btw
I'm still very much learning terraform, so if there are better ways to handle inputs like this, I'm all ears on learning it.
rule_group_config is a very complex variable with a LOT of diff combinations of data types. That's why it's set to any - this simplifies the variable, but complicates figuring out how to define it (either in plain Terraform or in Atmos stack manifests in YAML)
the example in the component repo (and the one I posted above) should help you to start (that’s a working example)
yea, I started laughing out loud when I realized what I was doing was futile and then seeing how you handled it
it was complex either way, but your way is much better. I appreciate the help this a.m.
if you vendor the https://github.com/cloudposse/terraform-aws-components/tree/main/modules/network-firewall component and use the YAML above to configure the Atmos component, you should be able to provision. Pay attention to the VPC component and the remote-state (this needs to be already provisioned):
# The name of a VPC component where the Network Firewall is provisioned
vpc_component_name: "vpc"
i might need help, but I need to struggle through this a bit and learn; I'll come back in a few hours.
and thank you. idk if I'll have to modify based on the dependency?
it's a thought I hadn't had
please review the Atmos remote state docs (and ask questions if any)
https://atmos.tools/core-concepts/components/remote-state
https://atmos.tools/quick-start/vendor-components#remote-state-notes
The Terraform Component Remote State is used when we need to get the outputs of an Terraform component,
In the previous steps, we’ve configured the repository and decided to provision the vpc-flow-logs-bucket and vpc Terraform
yea, we're on remote state under globals I believe, but we still need to have an env review
in short, you provision the vpc component first (e.g. atmos terraform apply vpc -s <stack>)
yea, I know it has a vpc dependency and I was kinda hacking vpc.id or vpc.arn
then in other components that need to get the remote state from the vpc component (e.g. to get the VPC ID), you define the remote-state Terraform module and configure a variable in Atmos to specify the name of the component
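A minimal sketch of that remote-state module, following the Atmos remote-state docs linked above (the version pin is illustrative):

# components/terraform/network-firewall/remote-state.tf
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  # the Atmos component whose outputs we want to read, e.g. "vpc"
  component = var.vpc_component_name

  context = module.this.context
}

# then reference e.g. module.vpc.outputs.vpc_id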
i was kinda hacking vpc.id
yea, I've done that for my own modules in and out
you can def hack that for testing and to start faster with the network firewall
yea, that's basically where I am now in testing
vpc_id and subnet_id as statics
they're probably not going to care if my modules interact properly btw; I have an unreasonable deadline of making 5 weeks of work fit in 2 1/2 weeks
I've made most of the work needed, but I'm down to this piece
I can always try it in dev bc I believe we're using vpc, but I'm unsure of the integration
i hear you. The network firewall is def not an easy thing
yea, when I got to the rules after saying it was easy, I'm like oh…
anyway, adding remote state for vpc is much simpler than the network firewall, and you can do it later
good lesson on data structures though
ok cool, i appreciate it
so a few thoughts - I had to edit providers.tf to include root_account_environment_name under module iam_roles; then the plan fired up. local.vpc_outputs and the shared config profile do not exist per the plan results. I could probably cut out reliance on the buckets and make them myself, as annoying as that would be; otherwise, I think I see defaults I can maybe configure in remote-state.tf and just point at resources. I appreciate the integration btw, I hadn't really seen tf talk across resources that way.
going to take a walk and get some fresh air.
appreciate your teams help on this.
Not sure if I follow… need more context.
Sorry I should’ve provided context, I tried network-firewall out in the environment
to disable reading remote state for the log buckets
# Logging config
logging_enabled: true
flow_logs_bucket_component_name: "network-firewall-logs-bucket-flow"
alert_logs_bucket_component_name: "network-firewall-logs-bucket-alert"
set logging_enabled: false
Ty
finally seeing green, andriy, but I'm unsure of my hacking consequences lol. I had to modify providers.tf as follows -
provider "aws" {
region = var.region
# Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
#profile = module.iam_roles.terraform_profile_name
dynamic "assume_role" {
# module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
for_each = compact([module.iam_roles.terraform_role_arn])
content {
role_arn = assume_role.value
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
root_account_environment_name = var.root_account_environment_name
context = module.this.context
}
variable "root_account_environment_name" {
type = string
description = "Global Environment Name"
}
I also went into remote-state.tf and created a defaults = { vpc_id } and adjusted main.tf firewall_subnet_ids to a subnet map
what it does is it reads the terraform role to assume
not my final solution
i was kinda hacking it together to get it to show me green text
as long as you have it, and it can read it, and the role has the required permission, should be fine
I could probably pipe those back to variables in the yaml; I'm trying not to get too hacky as I'm still learning the atmos structure
I'm going to run my first deploy now and see how she goes. half of what sucks about working with this resource is the coffee break each time you destroy/recreate
mmmmm
hey andriy - do you have any examples of multiple rule groups, or stateful/stateless combo examples? I'm swapping back and forth between the tf code and yaml right now trying to get multiple rules going. the Suricata parse of IP sets was very cool btw; I was trying to think of ways to deal with the massive rule lists
just the yaml example would be much appreciated
I'll dig around otherwise; most of what I have works, though, just trying to get the syntax correct for more rules
this is what I'm struggling with, but I'm going to take a break for a bit. I went back and forth between main.tf and the vars to be sure it was configuring correctly, but my yaml attempts have failed thus far for a stateless rule example -
rules_source:
  stateless_rules_and_custom_actions:
    stateless_rule:
      - priority: 1
        rule_definition:
          actions:
            - "aws:drop"
          match_attributes:
            protocols:
              - "1"
            source:
              - address_definition: $SOURCE
            destination:
              - address_definition: $DST
Planned Result:
+ rules_source {
    + stateless_rules_and_custom_actions {
        + stateless_rule {
            + priority = 1
            + rule_definition {
                + actions = [
                    + "aws:drop",
                  ]
                + match_attributes {
                    + protocols = [
                        + 1,
                      ]
                  }
              }
          }
      }
  }
I feel like I'm giving it source and destination, but my spacing or something is off
@Ryan I think it should be done like this
rules_source:
  stateless_rules_and_custom_actions:
    stateless_rule:
      - priority: 1
        rule_definition:
          actions:
            - "aws:drop"
          match_attributes:
            protocols:
              - "1"
            source:
              - $SOURCE
            destination:
              - $DST
look at how it’s done in Terraform https://github.com/cloudposse/terraform-aws-network-firewall/blob/main/main.tf#L163
dynamic "stateless_rule" {
dynamic "destination" {
for_each = lookup(stateless_rule.value.rule_definition.match_attributes, "destination", [])
content {
address_definition = destination.value
}
}
yea, that is exactly what I was referencing, but I am still terrible going from tf to yaml
let me know if it works for you
I'm glad I was referencing the right place
ok, I see now, and yea, it's planning
for some reason I thought I had to define address_definition, but looking now, I was wrong
I understand. It's just how it's implemented in the TF module. We could have easily parsed address_definition in TF:

content {
  address_definition = destination.value.address_definition
}
and then you'd have to do:

source:
  - address_definition: $SOURCE
destination:
  - address_definition: $DST
network firewall is complicated
yea, that's what I was figuring in my head after you pasted it and I went oh
Hello - was wondering if there are any updates on the refactoring of the account and account-map modules to enable brownfield / Control Tower deployments: https://sweetops.slack.com/archives/C031919U8A0/p1702136079102269?thread_ts=1702135734.967949&cid=C031919U8A0 . I see 2 related PRs that look like they will never be approved. Don't know if it matters, but this particular project is using TF Cloud as a backend.
Until then, what is the suggested workaround to enable using those modules in an existing organization? It seems to be creating a remote-state module that represents the output of account-map? Does anyone have an example of what this is supposed to look like? And if I go this approach, how will this change when the refactored modules become available?
Well, those are precisely the components that won’t work well in brownfield, at least the brownfield we plan to address.
However, our plan for next quarter (2024) is to refactor our components for a la carte deployment in brownfield settings. E.g. an enterprise account management team issues your product team 3 accounts from a centrally managed Control Tower. You want to deploy Cloud Posse components and solutions in those accounts, without any sort of shared S3 bucket for state, no account map, no account management.
Unfortunately, Cloud Posse has not been able to prioritize this work over other work. Note, this relates more to refarch than Atmos.
@RB has success I believe doing this, and may have more guidance.
@Andrew Chemis see these PRs. I was able to do it successfully.
https://github.com/cloudposse/terraform-aws-components/pull/945 (open)
https://github.com/cloudposse/terraform-aws-components/pull/943 (closed see comments)
At this point, we would recommend you use the code from the associated branches submitted by RB
We haven't merged/incorporated it because we (Cloud Posse) are responsible for LTS support and have no way to test the changes.
Also, since our plans involve significant changes to these components, we don't want to bite off more than we can chew.
Excellent - thank you. This is useful
How would I pass this variable in a yaml?
variable "database_encryption" {
description = "Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: \"ENCRYPTED\"; \"DECRYPTED\". key_name is the name of a CloudKMS key."
type = list(object({ state = string, key_name = string }))
default = [{
state = "DECRYPTED"
key_name = ""
}]
}
a list of objects:

vars:
  database_encryption:
    - state: DECRYPTED
      key_name: xxx
That worked, thanks!
2024-04-24
Hello,
I'm trying to pass values from settings into vars. This only works after components are processed. What I mean by that is:
this is working:
import:
  - accounts/_defaults

settings:
  account: '0'

vars:
  tenant: account
  environment: test
  stage: '0'
  tags:
    account: '{{ .settings.account }}'
This is not:

import:
  - accounts/_defaults

settings:
  account: '0'

vars:
  tenant: account
  environment: test
  stage: '{{ .settings.account }}'
Is there a way to pass settings to vars before the components are processed?
So there are a few observations here. And…. while we can directly address what you’re trying to do, I think there could be an xyproblem.info
When you say it’s not working, what’s the error or the observed behavior?
…is it a syntax error?
(note, that templating right now is a single pass, it’s not re-entrant. We plan to make this configurable, e.g. make it 3 passes, but it won’t be infinitely recursive)
@Stephan Helas it's not working not b/c of any errors in the template, and not b/c it's multi-pass (it's a single pass)
it's not working b/c you are overriding the stage variable, which is a context var that Atmos uses to find components and stacks in stack manifests
if you are using namespace, tenant, environment and stage as context vars, and in your atmos.yaml you have stacks.name_pattern, those context vars must be provided in stack manifests and known before any processing
but, what you are trying to do, it looks like you want to override the context vars. Here’s how to do it:
look at stacks.name_pattern and stacks.name_template
in your case/example, you can do this in atmos.yaml:

stacks:
  name_template: "{{.settings.tenant}}-{{.settings.region}}-{{.settings.account}}"
what I tried to do is to use settings instead of vars as the source of truth. if I have general-purpose variables, I don't always want them as inputs for tf modules, but I also don't want to duplicate this information. I'll try adjusting the name_template; I think that is the solution to this.
I noticed in geodesic that the ATMOS_BASE_PATH env var is set correctly. I didn't see ATMOS_CLI_CONFIG_PATH, and since that is unset, atmos cannot fully understand the stack yaml.
Is this something I need to manually set in the Dockerfile? or is this a bug in geodesic?
I don’t see the ATMOS_CLI_CONFIG_PATH defined in geodesic
we usually place atmos.yaml in rootfs/usr/local/etc/atmos/atmos.yaml in the repo, and then in the Dockerfile:
COPY rootfs/ /
which becomes /usr/local/etc/atmos/atmos.yaml in the container, and that’s a location which Atmos always checks
ohhhh i see
thanks!
that’s how we usually do it, but you can place atmos.yaml in any folder and then use the ENV var ATMOS_CLI_CONFIG_PATH (if that’s what you want)
if you don’t use the ENV var, Atmos searches for atmos.yaml in these locations: https://atmos.tools/cli/configuration#configuration-file-atmosyaml
my problem was that i want to run atmos from outside geodesic and inside geodesic. I have my atmos.yaml file in the repo root, so I copied the above linked atmos.sh script and added the following to it:
export ATMOS_CLI_CONFIG_PATH="${ATMOS_BASE_PATH}"
did it work?
yes
if you don’t use the ENV var, Atmos searches for atmos.yaml in these locations. https://atmos.tools/cli/configuration#configuration-file-atmosyaml
this is odd because it says Current directory (./atmos.yaml) and that is where my file is located. I get an error unless I also set ATMOS_CLI_CONFIG_PATH.
⨠ unset ATMOS_CLI_CONFIG_PATH
⨠ atmos version | grep -i atmos
Found ENV var ATMOS_BASE_PATH=/localhost/git/work/atmos
👽 Atmos v1.70.0 on linux/arm64
⨠ atmos terraform plan service/s3 --stack ue1-dev
...
module.iam_roles.module.account_map.data.utils_component_config.config[0]: Reading...
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Error:
│ Searched all stack YAML files, but could not find config for the component 'account-map' in the stack 'core-gbl-root'.
│ Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
│ Are the component and stack names correct? Did you forget an import?
│
│
│ with module.iam_roles.module.account_map.data.utils_component_config.config[0],
│ on .terraform/modules/iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
Seems like a bug. Should I write it up?
no, this is not a bug. The error is from the remote state module, see https://atmos.tools/core-concepts/components/remote-state#caveats
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.
for the remote-state to work, atmos.yaml needs to be in a “global” directory (e.g. /usr/local/etc/atmos/atmos.yaml)
booo
lol
(all of that is b/c Terraform executes providers in the component folder, so any relative paths will not work, and atmos.yaml placed in one component folder will not affect another component)
so basically i should always have the ATMOS_CLI_CONFIG_PATH set
we always have it in /usr/local/etc/atmos/atmos.yaml (where both the Atmos binary and the Terraform utils provider for remote state can find it)
and yes, you can use the ENV var
so then the alternative is
COPY atmos.yaml /usr/local/etc/atmos/atmos.yaml
I suppose that’s easier than overwriting the atmos.sh file
you can try that (there are many diff ways of doing it: copying, using ENV vars). The idea is to have atmos.yaml (or a pointer to it in the ENV var) at some path where all involved binaries can find it
kk thanks andriy!
How does the components/terraform/account-map/account-info/acme-gbl-root.sh script get used by other scripts?
#!/bin/bash
# This script is automatically generated by `atmos terraform account-map`.
# Do not modify this script directly. Instead, modify the template file.
# Path: components/terraform/account-map/account-info.tftmpl

# CAUTION: this script is appended to other scripts,
# so it must not destroy variables like `functions`.
# On the other hand, this script is repeated for each
# organization, so it must destroy/override variables
# like `accounts` and `account_roles`.

functions+=(namespace)
function namespace() {
  echo ${namespace}
}

functions+=("source-profile")
function source-profile() {
  echo ${source_profile}
}

declare -A accounts
# root account included
accounts=(
%{ for k, v in account_info_map ~}
  ["${k}"]="${v.id}"
%{ endfor ~}
)

declare -A account_profiles
# root account included
account_profiles=(
%{ for k, v in account_profiles ~}
  ["${k}"]="${v}"
%{ endfor ~}
)

declare -A account_roles
account_roles=(
%{ for k, v in account_role_map ~}
  ["${k}"]="${v}"
%{ endfor ~}
)

functions+=("account-names")
function _account-names() {
  printf "%s\n" "$${!accounts[@]}" | sort
}
function account-names() {
  printf "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}-}%s\n" $(_account-names)
}

functions+=("account-ids")
function account-ids() {
  for name in $(_account-names); do
    printf "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}-}%s = %s\n" "$name" "$${accounts[$name]}"
  done
}

functions+=("account-roles")
function _account-roles() {
  printf "%s\n" "$${!account_roles[@]}" | sort
}
function account-roles() {
  for role in $(_account-roles); do
    printf "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}: }%s -> $${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}-}%s\n" $role "$${account_roles[$role]}"
  done
}

########### non-template helpers ###########

functions+=("account-profile")
function account-profile() {
  printf "%s\n" "$${account_profiles[$1]}"
}

functions+=("account-id")
function account-id() {
  local id="$${accounts[$1]}"
  if [[ -n $id ]]; then
    echo "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}: }$id"
  else
    echo "Account $1 not found" >&2
    exit 1
  fi
}

functions+=("account-for-role")
function account-for-role() {
  local account="$${account_roles[$1]}"
  if [[ -n $account ]]; then
    echo "$${CONFIG_NAMESPACE:+$${CONFIG_NAMESPACE}: }$account"
  else
    echo "Account $1 not found" >&2
    exit 1
  fi
}

function account_info_main() {
  if printf '%s\0' "$${functions[@]}" | grep -Fxqz -- "$1"; then
    "$@"
  else
    fns=$(printf '%s\n' "$${functions[@]}" | sort | uniq)
    usage=$${fns//$'\n'/ | }
    echo "Usage: $0 [ $usage ]"
    exit 99
  fi
}

if ! command -v main >/dev/null; then
  function main() {
    account_info_main "$@"
  }
fi

# If this script is being sourced, do not execute main
(return 0 2>/dev/null) && sourced=1 || main "$@"
Looks like it got added here: https://github.com/cloudposse/terraform-aws-components/pull/496
It’s a handy script, just kind of painful to run manually since it’s so deep in the components dir.
⨠ components/terraform/account-map/account-info/acme-gbl-root.sh "account-id" root
1234567890
Also curious how I can integrate it; otherwise i might gitignore it for now or copy it into /usr/local/bin in geodesic
We use this script with our aws-config script to generate a local AWS config:
https://github.com/cloudposse/terraform-aws-components/blob/main/rootfs/usr/local/bin/aws-config#L47
account_sources=("$ATMOS_BASE_PATH/"components/terraform/account-map/account-info/*sh)
Hi Dan! I was searching for this and didn’t find it. Thank you very much.
No problem!
2024-04-25
Is there a way to tag the name of the components, stacks, and stage in a different scheme (e.g. mycompany-automation-stack)?
how about using Go templates, see https://atmos.tools/core-concepts/stacks/templating#use-cases
Atmos supports Go templates in stack manifests.
templates can be used in the tag names and tag values:
terraform:
  vars:
    tags:
      "mycompany-{{ .vars.stage }}-stack": "{{ .atmos_stack }}"
Bingo, that’s it. Thanks
hi team,
We have a default.yaml parent config and a default-use2.yaml child config that inherits default.
default.yaml has a var principal: [user1,user2]
default-use2.yaml has a var principal: [user3,user4]
The application stack app1-stack.yaml inherits default-use2.yaml.
The value for principal is [user3,user4], which is expected behaviour, but if we wanted to append the lists (ex: [user1,user2,user3,user4]) rather than override, do we have a way to do it?
currently, lists are replaced, not merged (for various reasons, one of which is that it’s not possible to know in general how to handle the same values, and whether or not to delete items if one of the lists does not have them)
in the next Atmos release (prob next week), we’ll add configuration (global or per component, in settings) to specify how to deal with merging of lists and to allow it
right now, you can use mixins (abstract base components) and inheritance to achieve what you want. (Note that type: abstract makes those mixins non-deployable so nobody can provision them by mistake; they are just blueprints). For example:
components:
  terraform:
    principals-mixing-1:
      metadata:
        type: abstract
      vars:
        principal:
          - user1
          - user2
          - user3
          - user4
    principals-mixing-2:
      metadata:
        type: abstract
      vars:
        principal:
          - user1
          - user2
    my-component-1:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-1
      vars: # other vars
    my-component-2:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-2
      vars: # other vars
you can put principals-mixing-1 and principals-mixing-2 into the catalog (e.g. catalog/mixins), and then just import them into the stacks. For example:
import:
  - catalog/mixins/principals-mixing-1
  - catalog/mixins/principals-mixing-2

components:
  terraform:
    my-component-1:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-1
      vars: # other vars
    my-component-2:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-2
      vars: # other vars
this will make those mixins reusable (you can import them into many stacks where you have the components that inherit from them), and the config DRY
@Andriy Knysh (Cloud Posse) thanks for the immediate support and details. Even in this stack file, var principal is not merging, correct? my-component-1’s principal would just be [user1,user2,user3,user4], correct?
import:
  - catalog/mixins/principals-mixing-1
  - catalog/mixins/principals-mixing-2

components:
  terraform:
    my-component-1:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-1
      vars:
        principal: [user5,user6]
    my-component-2:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-2
      vars: # other vars
would this give me final var principal to [user1,user2,user3,user4,user5,user6]?
no, the lists will not be merged (as I mentioned, we’ll add support for that). But if you create a map instead of a list, it will be merged (for this you will need to update your terraform variable to be a map type)
Ok thanks
you can use a map type and not change the terraform variable at all, using Go templates. For example:
components:
  terraform:
    my-component-1:
      metadata:
        component: # Point to the Terraform component
        inherits:
          - principals-mixing-1
      vars:
        principal:
        {{ range $key, $value := .settings.principal }}
          - {{ $value }}
        {{ end }}
if you define principal in settings as a map, it will be inherited by the derived components and deep-merged
then, using Go templates, you can convert the already deep-merged map into a list
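For illustration, here is a sketch of what that could look like (the map shape is arbitrary; the keys just need to be unique so the deep-merge can combine entries across mixins):
# catalog/mixins/principals-mixing-1 (sketch): principal kept in settings as a map,
# so Atmos deep-merges it across imports/inheritance, unlike a list
components:
  terraform:
    principals-mixing-1:
      metadata:
        type: abstract
      settings:
        principal:
          user1: user1
          user2: user2
The template shown above would then render the merged settings.principal map back into the list variable the component expects.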
for templating, see https://atmos.tools/core-concepts/stacks/templating
Thanks @Andriy Knysh (Cloud Posse) will follow up.. appreciate your help
Hi,
if i define a local backend,
terraform:
  backend_type: local
i’ll get a schema validation error, which i fixed with this:
▶ git diff stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
diff --git a/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json b/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
index 959c7f2..f06fbfc 100644
--- a/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
+++ b/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
@@ -437,6 +437,7 @@
"backend_type": {
"type": "string",
"enum": [
+ "local",
"s3",
"remote",
"vault",
Thanks for reporting - will get that fixed
I am so curious, are you planning to use the new backend encryption functionality of opentofu, to encrypt the state locally and commit it?
not yet. first i need to stop hating ibm for buying out hashicorp. until now i’ve never looked into opentofu
@Stephan Helas thanks for this. The change to the schema will be in the next Atmos release
Haha, I confess I didn’t realize how much animosity there was out there for IBM!
2024-04-26
Hi,
is there an easy way to check which components of which stacks are in sync with the infrastructure? In terragrunt this was possible using run-all and terraform state list
or terraform plan -destroy
. is there something similar where i can get an overview which components in all stacks are deployed?
this is not currently supported as one command. We are working on adding this functionality to Atmos to show you the difference b/w all the components/stacks defined for the infra and what is actually in the Terraform state
for now, the atmos describe stacks command shows all the stacks and all the components in each stack
Use this command to show the fully deep-merged configuration for all stacks and the components in the stacks.
the command accepts many filters to filter the result
then, you can use a shell script and jq/yq to loop through all the stacks and components and run a TF command on each one (e.g. terraform state list)
you can also create a custom Atmos command to do so, see https://atmos.tools/core-concepts/custom-commands/
Atmos can be easily extended to support any number of custom CLI commands.
and you’ll be able to execute your custom command like this: atmos <my-command> <params>
this requires you to write some scripts, but as mentioned, we are working on adding more functionality to Atmos to list and show the diff b/w what is configured in Atmos manifests and what is actually deployed
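A rough sketch of such a custom command (the command name and jq filter are mine, not from the docs; whether multi-word terraform subcommands like state list pass through cleanly should be verified against your Atmos version):
# atmos.yaml (fragment, illustrative)
commands:
  - name: state-overview
    description: Run 'terraform state list' for every component in every stack
    steps:
      - |
        atmos describe stacks --format json \
          | jq -r 'to_entries[] | .key as $stack | (.value.components.terraform // {}) | keys[] | "\($stack) \(.)"' \
          | while read -r stack component; do
              echo "=== ${stack}/${component} ==="
              atmos terraform state list "${component}" -s "${stack}" || true
            done
It would then run as atmos state-overview.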
ok. i understand. i need to look into custom commands.
but one thing: if i list components using atmos list components, i get real and abstract components back, even with --type=real
is this normal?
and also show “stale” resources (e.g. those that were deployed in the past, but then removed from the stacks, and are still in the TF state (or deployed))
for a custom list command with filters for component type, see https://atmos.tools/core-concepts/custom-commands/#list-atmos-stacks-and-components
(you can copy it and definitely improve it b/c it’s just a shell script with some jq/yq)
so i’d create some shell script and then call it by using a custom command, right?
yes
ok
and the shell script can be external to atmos.yaml (in some folder) and you just call it from the custom command. Or the shell script can be just embedded in the command itself in YAML
let us know if you need any help on this
and i could vendor this script from a git repo?
yes, you can vendor anything
yeah, thought so pretty cool
see https://atmos.tools/core-concepts/custom-commands/#advanced-examples for some examples of custom commands
which components of which stacks are in sync with the infrastructure?
Also, to be clear, we do support this today. Just not in the CLI. That’s because in practice, doing it on the command line is not how teams will succeed.
We call this “drift detection” and our GitHub Actions for Atmos support that today out of the box.
oh, yeah - i forgot about that. unfortunately i’m forced to use bitbucket without pipelining for the time being. but maybe i can adapt the github action somehow
Ahah, bitbucket
BitBucket / GitLab / GitHub / etc are polarizing. Teams pick them and their tradeoffs for various reasons. Often we speak to teams who are just looking for a reason to switch, something to justify it. Maybe this can be your golden ticket.
thanks @Erik Osterman (Cloud Posse), yes it’s called drift detection and it’s done in CI/CD (we support it out of the box using GHA), and it’s not practical to do it on the command line. @Stephan Helas you need to consider that.
Whatever I mentioned above was for command line and to detect old/stale/non-used-anymore workspaces and resources (not drift detection on the existing resources) - same thing you are describing in the other thread with an old/stale TF workspace.
You should not do drift detection on the command line.
if you have hundreds or thousands of stacks, it’s not even feasible for people to do and review.
ok, do you have a tip for me how to handle drift detection if we don’t use github (in fact it’s bitbucket)?
The problem with BitBucket and GitLab, et al, is they lack a marketplace like the GitHub Actions marketplace. So it’s not straightforward to distribute reusable actions.
The gist of it is that you’ll want to use atmos describe affected in concert with atmos terraform plan and atmos terraform apply. However, there’s quite a lot of work to implementing a proper CI/CD workflow for Terraform/OpenTofu.
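To make that concrete, here is a very rough sketch of an affected-plan step in bitbucket-pipelines.yml (the image name is hypothetical, flags should be checked against your Atmos version, and this is not a supported integration):
# bitbucket-pipelines.yml (sketch)
pipelines:
  pull-requests:
    '**':
      - step:
          name: Plan affected components
          image: my-org/atmos-runner:latest   # assumed to contain atmos, terraform and jq
          script:
            # describe affected compares the working branch against a target ref,
            # so the target ref must be available locally (deep clone)
            - git fetch origin main
            - atmos describe affected --format json --file affected.json
            - |
              jq -r '.[] | "\(.component) \(.stack)"' affected.json \
                | while read -r component stack; do
                    atmos terraform plan "${component}" -s "${stack}"
                  done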
sometimes (more often than not) when i switch components in a stack, i get a tf workspace message which i don’t understand. it looks like this:
▶ kn_atmos_vault terraform destroy wms-wms -s wms-it03-test -- -auto-approve
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
...
...
No changes. No objects need to be destroyed.
Either you have not created any objects yet or the existing objects were already deleted outside of Terraform.
Destroy complete! Resources: 0 destroyed.
now the switch
▶ kn_atmos_vault terraform destroy wms-base -s wms-it03-test -- -auto-approve
Initializing the backend...
The currently selected workspace (wms-apt01-test-wms-base) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. default
2. wms-it03-test-wms-base
3. wms-it03-test-wms-wms
Enter a value: 1
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
it will work, but why is this happening?
kn_atmos_vault is just a shell script to get aws credentials from vault.
this is what Terraform does when you use the same workspace for diff configurations
you can clean it before you switch the components, e.g. by using the atmos terraform clean command: https://atmos.tools/cli/commands/terraform/clean
Use this command to delete the .terraform folder, the folder that the TF_DATA_DIR ENV var points to, the .terraform.lock.hcl file, and the varfile.
ah, so it’s because i used the component with a different stack before. ok, understood.
did you override TF workspaces for the component? (if you used the default Atmos behavior to always calculate TF workspaces for the components, you would not see that behavior)
as far as i remember, i only overwrite the Bucket key_prefix.
can i define the workspace names for the components to include the stack name?
you can override TF workspaces per component: https://atmos.tools/core-concepts/components/terraform-workspaces
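For example, a per-component override in a stack manifest might look like this (a sketch; the pattern string is whatever naming you need, and the supported tokens should be checked in the docs linked above):
components:
  terraform:
    wms-base:
      metadata:
        # Atmos computes the TF workspace from this pattern instead of the
        # default one derived from the stack context
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}-wms-base"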
ah, ok - so i do in fact overwrite the workspace, but for the remote state of a component in a different stack. apart from that i don’t
so, if i use tf shell, it looks like this:
▶ kn_atmos_vault terraform shell wms-base -s wms-it03-test
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Reusing previous version of cloudposse/utils from the dependency lock file
- Using previously-installed hashicorp/aws v5.46.0
- Using previously-installed hashicorp/local v2.5.1
- Using previously-installed hashicorp/external v2.3.3
- Using previously-installed cloudposse/utils v1.21.0
Terraform has been successfully initialized!
▶ tf workspace list
default
* wms-it03-test-wms-base
wms-it03-test-wms-wms
▶ kn_atmos_vault terraform shell wms-base -s wms-apt01-test
Initializing the backend...
The currently selected workspace (wms-it03-test-wms-base) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. default
2. wms-apt01-test-wms-base
3. wms-apt01-test-wms-wms
Enter a value: 1
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Reusing previous version of cloudposse/utils from the dependency lock file
- Using previously-installed cloudposse/utils v1.21.0
- Using previously-installed hashicorp/local v2.5.1
- Using previously-installed hashicorp/aws v5.46.0
- Using previously-installed hashicorp/external v2.3.3
Terraform has been successfully initialized!
Switched to workspace "wms-apt01-test-wms-base".
▶ tf workspace list
default
* wms-apt01-test-wms-base
wms-apt01-test-wms-wms
so, it seems to create workspaces using <stack>-<component>, but it clashes
if i ‘clean’ before the switch to another stack, it works. so for me it seems that tf tries to switch from a no-longer-existing workspace to another one, and that’s why it is complaining
the currently selected workspace is from the old stack, and therefore doesn’t exist (because of the workspace name template)
you prob need to delete the old/stale workspace from the state
ok, but the state is in the s3 bucket, right?
if you are using the s3 backend, then yes
in the bucket, the folders are workspace_key_prefixes, then the subfolders are workspaces
but terraform clean is not tf but atmos, right? so i can only delete something local
yes, atmos terraform clean deletes local files only. We don’t go to your TF state to delete things
yeah, i think i’ll just live with it for the moment. in a ci pipeline this should not be a problem, right?
i’m not sure about that, you have to test. You still should prob delete the stale TF workspace from the bucket
I’d like to use the cloudposse context, keep the name empty, set an attribute for the existing cluster name, and an attribute for the service name
This should create something like this
{namespace}-{fixed_region}-{account_name}-{eks_cluster_name}-{service_name}
acme-ue1-somelongdevaccount-somelongclustername-servicename
acme-ue1-somelongdevaccount-somelongclustername-titan-echo-server
acme-ue1-somelongdevaccount-somelongclustername-bigpharmaceuticalorg
etc
However, I’m hitting the hard AWS 64-char limit on IAM role names.
Expansion
• 25 = cloudposse context
  ◦ 5 = namespace is 4 chars + 1 for delimiter
  ◦ 16 = account names (stage) are 12 to 15 chars + 1 for delimiter
  ◦ 4 = fixed region is 3 chars + 1 for delimiter
  ◦ 0 = tenant
• 20 = existing eks clusters have 20 char non-standard names
• 25 = service-names can be between 15 and 25 chars
Options
1. Trim the name using context’s id_length_limit
   a. not a good option because we can get multiple conflicting roles
2. Reduce the name of the account
   a. This is too painful to do
3. Reduce the name of the eks clusters by using shorter aliases in some kind of generic alias map
   ◦ Is it possible using go-templating to create a generic map of aliases to shorten the cluster name? (see the sketch after this list)
4. Drop some or all of the cloudposse context
5. Any other option?
I’m leaning towards option 3 or 4 unless there is a better alternative
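If option 3 is attractive, something along these lines might work (purely a sketch: the alias map, component name and attribute wiring are all made up, and it assumes Go templating is enabled for stack manifests):
settings:
  # hypothetical alias map: long cluster names -> short codes
  eks_cluster_aliases:
    somelongclustername: "slc"
components:
  terraform:
    titan-echo-server: # hypothetical component
      vars:
        # `index` is a Go template builtin for map lookups; the short alias
        # keeps the null-label id under the 64-char limit
        attributes:
          - '{{ index .settings.eks_cluster_aliases "somelongclustername" }}'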
I recall load balancer names and target groups have a lower max of 32 as well.
I believe the length limit does not do a naive truncation.
https://github.com/cloudposse/terraform-null-label/blob/main/main.tf#L166
id = local.id_length_limit != 0 && length(local.id_full) > local.id_length_limit ? local.id_short : local.id_full
It uses a hashing strategy
Hmm interesting, but even so, I think it’s less than ideal
Haha, but the ideal is capped at 64 characters
yes, i wish amazon would lift the restrictions
maybe then aliasing account names and aliasing the eks cluster names may be a viable solution via go templating
How would another naming convention give you N dimensions of variation, and somehow not hit the 64 character limit, and if such a convention exists, why is it not possible with null label?
Atmos could still use full length names
The truncation happens in provisioning, so users don’t use the hashed ids
The users use the iam role names which need to be codified in argocd later on
If the roles are trimmed with a hashing strategy then the names would be less predictable by humans
What about making an exception for those and using a uuid()? or more precisely https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/uuid
I saw atmos supports go templating for ssm params. Does atmos’ go templating also support generic aws data sources? For instance, can I retrieve a vpc id using an hcl-like data source and use it as an input to a component?
currently Atmos supports the Gomplate datasources: https://docs.gomplate.ca/datasources/
what you are saying is to have remote-state in Atmos templates, e.g.:
components:
  terraform:
    my-component:
      vars:
        vpc_id: "{{ AtmosTerraformOutput(<params>).vpc_id }}"
Yes that would be nifty!
we discussed that and will implement the AtmosTerraformOutput function (no ETA yet)
I seem to be hitting an odd issue. I just provisioned the tfstate-backend, migrated the state, set the terraform.backend.s3.role_arn (delegated role) to the role output from the tfstate-backend, and copied that role into terraform.remote_state_backend.s3.role_arn, and I’m running atmos terraform plan aws-teams --stack gbl-identity. I checked the roles in between and the role assumption works - even the terraform init works, which shows the role assumption works… but this error seems to show that atmos is trying to use the primary role instead of the delegated role to access the s3 bucket for each remote state.
│ Error: Error loading state error
│
│ with module.iam_roles.module.account_map.data.terraform_remote_state.data_source[0],
│ on .terraform/modules/iam_roles.account_map/modules/remote-state/data-source.tf line 91, in data "terraform_remote_state" "data_source":
│ 91: backend = local.ds_backend
error loading the remote state: Unable to list objects in S3 bucket ... with prefix "account-map/": operation error S3: ListObjectsV2, https response error StatusCode: 403
Ah I figured it out, kind of. I can set privileged = false and then it will use the role_arn in the terraform_remote_state data source to get the information properly from the s3 bucket…
I did this a bit backwards. I reused an existing s3 bucket for the state, provisioned aws-teams using an Admin SSO role with role_arn: null for both s3 and remote_state_backend.
Now I have provisioned the tfstate-backend which gives me IAM roles to assume the appropriate tfstate role cross-account.
But now when I run AWS_PROFILE=admin-identity atmos terraform apply aws-teams --stack gbl-identity, it throws an error because the Admin SSO role cannot update the bucket directly.
I thought maybe the tfstate-backend would have a bucket policy to allow the PrincipalArn of my SSO permission set Admin to access it directly (similar to SuperAdmin), but that doesn’t seem to be the case.
I’m at a bit of a loss.
- On one hand, I want to update the s3 tfstate bucket policy to allow my SuperAdmin-esque Admin role in identity
- On the other hand, I want to set privileged = false for all the remote state in aws-teams (and other root components)
Both options seem incorrect
Final error, since the s3 bucket is not in the identity account:
⨠ AWS_PROFILE=admin-identity atmos terraform plan aws-teams --stack gbl-identity
Initializing the backend...
Initializing modules...
╷
│ Error: Failed to get existing workspaces: Unable to list objects in S3 bucket
I did see the following here
For convenience, the component automatically grants access to the backend to the user deploying it. This is helpful because it allows that user, presumably SuperAdmin, to deploy the normal components that expect the user does not have direct access to Terraform state, without requiring custom configuration. However, you may want to explicitly grant SuperAdmin access to the backend in the allowed_principal_arns configuration, to ensure that SuperAdmin can always access the backend, even if the component is later updated by the root-admin role.
and since I’m using SSO, I do have this set in the backend
allowed_permission_sets:
  identity: ["Admin"]
which allows SSO Admin from identity to assume it… but i still get Error loading state error
Here is the tfstate-backend component yaml:
tfstate-backend:
  vars:
    enabled: true
    name: tfstate
    enable_server_side_encryption: true
    force_destroy: false
    prevent_unencrypted_uploads: true
    access_roles:
      default:
        write_enabled: true
        allowed_roles:
          identity:
            - admin
            - dev
            - cicd
            - terraform
            - readonly
          devops-prod-01: ["admin"]
        denied_roles: {}
        allowed_permission_sets:
          identity: ["Admin"]
        denied_permission_sets: {}
        allowed_principal_arns: []
        denied_principal_arns: []
cc: @Andriy Knysh (Cloud Posse) if you have a sec
@RB not sure if you solved the issue, but take a look at https://github.com/cloudposse/terraform-aws-components/blob/main/modules/aws-teams/providers.tf
provider "aws" {
region = var.region
# aws-teams, since it creates the initial SAML login roles,
# must be run as SuperAdmin, and cannot use "profile" instead of "role_arn"
# even if the components are generally using profiles.
# Note the role_arn is the ARN of the OrganizationAccountAccessRole, not the SAML role.
dynamic "assume_role" {
for_each = var.import_role_arn == null ? (module.iam_roles.org_role_arn != null ? [true] : []) : ["import"]
content {
role_arn = coalesce(var.import_role_arn, module.iam_roles.org_role_arn)
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
privileged = true
context = module.this.context
}
variable "import_role_arn" {
type = string
default = null
description = "IAM Role ARN to use when importing a resource"
}
this component (if you are using it directly w/o any changes) will use the OrganizationAccountAccessRole role and needs to be provisioned as SuperAdmin
@Jeremy G (Cloud Posse) maybe you can provide more info
Oh interesting, thank you. How is the OrganizationAccountAccessRole created? Is it created in the same account as the tfstate bucket?
If i deploy aws-teams in identity, how does it get access to the tfstate bucket if the tfstate bucket is in the corp account (for example)?
So far the only way I’ve been able to connect it is by updating the tfstate bucket policy manually (since the tfstate-backend module doesn’t support adding additional principals directly)
regarding OrganizationAccountAccessRole: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html
Access the accounts that are part of your organization in AWS Organizations. Use the root user or an AWS Identity and Access Management (IAM) role to access the resources of a member account as a user in the organization’s management account.
The role is created in all accounts automatically by AWS, but only if the account is part of the Org
if the account is not part of the Org, or was just invited to the Org, you need to create that role
I see this role exists. However it looks like the trust policy needs to be updated in order to assume it per account.
Since my root tfstate is not in the root account, that’s probably the “root” issue. The remote state error is thrown when I use my superuser (admin sso) in root to deploy identity, and my tfstate-backend is in corp (because there is a rule to have fewer root resources and users).
I suppose then updating the bucket policy for the tfstate backend is a must, or i’d have to create a new s3 tfstate bucket for root.
2024-04-27
2024-04-28
I know in the past we ran into default_tags issues with the aws v4 provider. In the v5 provider, default_tags seems to be fixed.
Is it acceptable to update the upstream components to use this feature so we can start dropping the tags = module.this.tags for each resource?
We’ve run into problems when trying to implement default_tags. There are still lots of open issues on it.
https://github.com/hashicorp/terraform-provider-aws/issues?q=is%3Aissue+is%3Aopen+default_tags
Version 5.0 of the HashiCorp Terraform AWS provider brings improvements to default tags, allowing practitioners to set tags at the provider level.
I put this PR in to help start the conversation
https://github.com/cloudposse/terraform-aws-components/pull/1025
what
• feat: add default_tags
why
• Consistent tagging without needing to try
• Take advantage of the fixed default_tags in the aws v5 provider
references
• https://www.hashicorp.com/blog/terraform-aws-provider-5-0-adds-updates-to-default-tags
• https://sweetops.slack.com/archives/C031919U8A0/p1714328473235309
This should already be possible with atmos provider generation
Hmm, i didn’t know about this feature. Does this mean we should start removing the providers.tf files?
So this would be done with something like this?
https://atmos.tools/core-concepts/components/terraform-providers/
components:
  terraform:
    vpc:
      providers:
        aws:
          default_tags:
            tags: {{ .tags }}
It’s complementary. No generalized rule. Relying heavily on it will make escaping atmos harder, but if that’s not a concern then go all in.
Your example is close, but taking a templated approach rather than an inherited approach. The .tags is nothing we provide. Instead, try to manage the provider just like you manage vars, in a hierarchical, inherited manner.
Also, keep in mind if you do stick with the template approach, the go templating works like in helm. So since tags is a map, you would need to format the output accordingly for YAML or serialize it to a JSON string using one of the functions provided
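For instance, assuming the providers section deep-merges hierarchically the way vars do (the path and tag keys here are illustrative), the inherited approach could be set once at the org level and picked up by every component:
# e.g. stacks/orgs/acme/_defaults.yaml (illustrative path)
terraform:
  providers:
    aws:
      default_tags:
        tags:
          ManagedBy: atmos
          Team: platform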
Thanks Erik. So it sounds like the provider generated will override only parts of my existing aws provider so I do not need to generate the entire aws provider within the yaml. I can simply append my existing aws provider using the provider overrides generated from the atmos yaml.
Your example is close, but taking a templated approach rather than an inherited approach. The .tags is nothing we provide. Instead, try to manage the provider just like you manage vars, in a hierarchical, inherited manner.
Hmm, oh you mean the context tags. So if I wanted the context tags (which are individually of string type), then I’d need to construct a new map, merge that with the tags (which are already a map type) that I’m supplying, and then pass that in to providers.aws.default_tags.tags
Devils advocate question…
Wouldn’t it just be easier to hard code this change across the providers without the yaml logic ?
So if we were just trying to solve that one specific problem, it would be. We didn’t implement this provider generation functionality for this specific use-case, although it incidentally solves it.
We implemented the provider generation primarily for a different provider (the forthcoming “context” provider), but as usual, we like to try to solve the general problem and not the specific use case.
As it relates to Cloud Posse managed components, none of them should have the problem that default_tags solves, so if you’re seeing a problem somewhere - we should fix that instead.
If you’re writing “in house” components, and cannot enforce usage of “null label” style conventions, then it makes more sense.
Ah I see.
One issue I also see is if we use modules outside of cloudposse’s like aws-ia or terraform-aws-components, sometimes the resources are not tagged. I’ve seen it with cloudposse too but not as often.
For those cases, default_tags would be nice to nip the issue in the bud, which then renders all the tags inputs redundant unless there were resource-specific input tags such as security_group_tags or target_group_tags, etc.
Yes, in such a situation it definitely makes sense to make use of default_tags in your components
2024-04-29
2024-04-30
Hey Guys
I have a question if someone can help with the answer. I have a few modules developed in a separate repository and I’m pulling them down to the atmos repo dynamically while running the pipeline, using the vendor pull command. But when I bump up the version, atmos is unable to consider that as a change in a component and the atmos describe affected command gives me an empty response. Any idea what I’m missing here? Below is my code snippet - vendor.yaml.
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "account_scp"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/account_scp"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - account_scp
(please use code fences when posting code)
Atmos compares what changed between commits, no terraform code (components) or stack configurations changed.
That said, I understand your use-case, and it makes sense.
Also, I think this might work better in the forthcoming release.
Okay, then may I know what the main purpose of vendoring is? is it just to pull down third-party components? if so, what if we want to use the latest version of the third-party modules/components and have that change apply to all stacks in the repo?
You can make a component depend on the vendor config
This command produces a list of Atmos components in Atmos stacks that depend on the provided Atmos component.
How we use vendoring at Cloud Posse is similar, but different. We use it to create an immutable copy of dependencies in the current repo. Think of it as a snapshot of the desired state.
While terragrunt, for example, is optimized for pulling it down “just in time”, which is not the same.
That said, with the latest PR, I think this might be possible.
what
• Update the atmos describe affected command
• Add the --clone-target-ref flag to the atmos describe affected command
• Update docs. Add “Remote State Backend” doc
• https://pr-590.atmos-docs.ue2.dev.plat.cloudposse.org/cli/commands/describe/affected/
• https://pr-590.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/components/remote-state-backend/
why
• “Remote State Backend” doc describes how to override Terraform Backend Configuration to access components remote state, and how to do Brownfield development in Atmos
• Simplify the atmos describe affected command to not force it to clone the remote target reference (branch or tag), but instead just check it out (assuming the target reference is already on the local file system)
• Add the --clone-target-ref flag to the atmos describe affected command for backwards compatibility. If the flag is passed, the command behaves as the old version (clones the target reference first from the remote origin)
breaking changes
If the atmos describe affected command was used in a GitHub Action (similar to https://github.com/cloudposse/github-action-atmos-affected-stacks), and the action performed a shallow Git clone (instead of a deep clone), it will break with an error that the target reference (branch) does not exist on the file system. There are a few ways to fix it:
• Use the flag --clone-target-ref=true to force the command to clone the target reference from the remote origin (this flag is added for backwards compatibility):
  atmos describe affected --clone-target-ref=true
• Update the GitHub Action to perform a deep clone instead of a shallow clone:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0
• Perform a clone of the target branch into a separate directory and use the --repo-path=<dir> command line parameter to specify the path to the already cloned target repository (refer to https://atmos.tools/cli/commands/describe/affected#flags)
description
The atmos describe affected command uses two different Git commits to produce a list of affected Atmos components and stacks.
For the first commit, the command assumes that the current repo root is a Git checkout. An error will be thrown if the current repo is not a Git repository (the .git/ folder does not exist or is configured incorrectly).
The second commit can be specified on the command line by using the --ref (Git References) or --sha (commit SHA) flags. The --sha takes precedence over the --ref flag.
How does it work?
The command performs the following:
• If the --repo-path flag is passed, the command uses it as the path to the already cloned target repo with which to compare the current working branch. In this case, the command will not clone and checkout the target reference, but instead will use the already cloned one to compare the current branch with. In this case, the --ref, --sha, --ssh-key and --ssh-key-password flags are not used, and an error will be thrown if the --repo-path flag and any of the --ref, --sha, --ssh-key or --ssh-key-password flags are provided at the same time
• Otherwise, if the --clone-target-ref=true flag is specified, the command clones (into a temp directory) the remote target with which to compare the current working branch. If the --ref flag or the commit SHA flag --sha are provided, the command uses them to clone and checkout the remote target. Otherwise, the HEAD of the remote origin is used (refs/remotes/origin/HEAD Git ref, usually the main branch)
• Otherwise (if the --repo-path and --clone-target-ref=true flags are not passed), the command does not clone anything from the remote origin, but instead just copies the current repo into a temp directory and checks out the target reference with which to compare the current working branch. If the --ref flag or the commit SHA flag --sha are provided, the command uses them to check out. Otherwise, the HEAD of the remote origin is used (refs/remotes/origin/HEAD Git ref, usually the main branch). This requires that the target reference is already cloned by Git, and the information about it exists in the .git directory (in case of using a non-default branch as the target, a Git deep clone needs to be executed instead of a shallow clone). This is the recommended way to execute the atmos describe affected command since it allows working with private repositories without providing the SSH credentials (--ssh-key and --ssh-key-password flags), since in this case Atmos does not access the remote origin and instead just checks out the target reference (which is already on the local file system)
• The command deep-merges all stack configurations from both sources: the current working branch and the target reference
• The command searches for changes in the component directories
• The command compares each stack manifest section of the stack configurations from both sources looking for differences
• And finally, the command outputs a JSON or YAML document consisting of a list of the affected components and stacks and what caused it to be affected
@Andriy Knysh (Cloud Posse) will this forth coming version of atmos work better for his use case?
Basically,
atmos vendor pull ...
atmos describe affected ...
yes, and some more info:
would you mind giving me some reference on the point you made above “You can make a component depend on the vendor config”
if you have a terraform component that uses some local TF modules (in some folder on your filesystem), Atmos detects the dependency on that module automatically b/c it knows all the Terraform metadata about your TF component and knows all local modules it’s using
would you mind giving me some reference on the point you made above
It’s in the link shared.
components:
  terraform:
    account_scp:
      metadata:
        component: "account_scp"
      settings:
        depends_on:
          1:
            file: relative/path/to/vendor.yaml
if you have some file or folder that your TF code uses (e.g. loads some policy definitions), you can add depends_on on that file or folder, and then atmos describe affected will check if that file or folder changed and include the component/stack in the affected list
components:
  terraform:
    top-level-component:
      metadata:
        component: "top-level-component"
      settings:
        depends_on:
          1:
            file: "examples/tests/components/terraform/mixins/introspection.mixin.tf"
          2:
            folder: "examples/tests/components/helmfile/infra/infra-server"
what if we have more than one component in vendor.yaml? will atmos think that all components changed even if we change the version for a single component?
same thing as Erik mentioned above
what if we have more than one component in vendor.yaml? will atmos think that all components changed?
Yes, but you can also use the component.yaml instead, to avoid this.
Also, the forthcoming release (ETA today or tomorrow) should be able to detect affected files if you do:
atmos vendor pull ...
atmos describe affected ...
See https://atmos.tools/cli/commands/vendor/pull/#vendoring-using-componentyaml-manifest for the component.yaml
Use this command to pull sources and mixins from remote repositories for Terraform and Helmfile components and stacks.
I would probably wait and test the new release and see if it works.
If it doesn’t work, it’s probably a minor tweak we can make to get it to work for you.
But, when I bump up the version atmos is unable to consider that as a change in a component and the atmos describe affected commands gives me an empty response
also @Kubhera, if you change only the TF code of the component, then atmos describe affected will not detect any changes b/c it detects changes to the stack manifests (e.g. vars), not the source code
so, for example, in vendor.yaml you bumped up the version, and atmos vendor pull downloads that new version, but if there are no changes to the config (vars, settings, file/folder deps), then nothing will be affected
I understand that when the component source code changes, atmos marks all the stacks which have this component defined as affected.
so to work around that, add depends_on on the folder where you download your component to:
settings:
  depends_on:
    1:
      folder: "xxxxx"
then use
atmos vendor pull
atmos describe affected
components:
  terraform:
    account_scp:
      vars:
        enabled: true
        enable_lambda_public_url: true
        regions_in_use: ["us-east-2"]
let me know if that works for you
above is the catalog for a component. can I add the depends_on here?
components:
  terraform:
    account_scp:
      settings:
        depends_on:
          1:
            folder: "xxxxx"
      vars:
        enabled: true
        enable_lambda_public_url: true
        regions_in_use: ["us-east-2"]
also note that the component will be included in describe affected only if the component source changes in the new version
the folder attribute may not be useful here for me; i need to depend on the component defined inside the vendor.yaml
let’s start from the beginning :slightly_smiling_face: You have a TF component in some repo, you vendor it with atmos vendor pull, you configure it in an Atmos stack manifest
Basically, here the component source code is not part of the components directory, hence not being tracked by the same git repo where atmos exists
if only the source code of the component changes w/o affecting the interface (inputs/outputs), then it means that no resources were affected
if the interface changes, then it will be considered affected
then how do we publish the new change in the component to the stacks?
what I’m saying is this:
components:
  terraform:
    account_scp:
is it your Atmos component?
suppose I change the name of an existing resource or add a new resource to the component which doesn’t require any input
yes
ok, now point it to the TF component:
components:
  terraform:
    account_scp:
      metadata:
        component: <component folder in components/terraform>
import:
  - catalog/account-scp/defaults
  - catalog/s3/defaults

vars:
  tenant: "8964a414-e76d-449b-97c1-7f1295ef7358"

components:
  terraform:
    account_scp:
      vars: # these are input vars for account_scp component
        account_id: "643633022299"
        environment_type: "NonProd"
        regions_in_use: ["us-east-1","us-east-2","us-west-1","us-west-2"]
ok pointed
now, vendor your component with atmos vendor pull into components/terraform/<my-component>
yes got it and that is what I’m doing
Atmos knows that your Atmos component points to the TF component in components/terraform/<my-component> - it always checks if the component source changed
if it does not work for you, then something else is not configured correctly
how does it check? it is not part of the same repo… vendor pull downloads it when i execute the command
Not sure if I’m catching your point
see above is my stack
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "account_scp"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/account_scp"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - account_scp
    - component: "self_service_repo"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/self-service-repo.git///?ref={{.Version}}
      version: "1.0.1"
      targets:
        - "components/terraform/self_service_repo"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - self_service_repo
    - component: "global_general"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/global-general.git///?ref={{.Version}}
      version: "1.0.1"
      targets:
        - "components/terraform/global_general"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - global_general
    - component: "cloudtrail"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/cloudtrail.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/cloudtrail"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - cloudtrail
this one is vendor.yaml
targets:
  - "components/terraform/account_scp"
now when I change the account_scp version in the vendor.yaml and commit, it should affect the stacks with this component, right?
atmos vendor pull downloads the new version into components/terraform/account_scp
correct
then atmos describe affected account_scp -s <stack> checks all Atmos stack manifests (YAML config files) AND it checks the component folder components/terraform/account_scp for any changes
yes, but the component folder gets created only after the vendor pull command? may I know against what reference it checks for the change?
for example: in one of the components I had a few IAM roles being deployed. later I updated the component with a new IAM role resource which doesn’t accept any input. so I bumped up the version of that component in vendor.yaml to include my new IAM role changes. but describe affected couldn’t detect the change until I added a new variable globally in the catalog/defaults for that component.
hope you got my use case
ok, I understand that this is not an issue with your config. This could be an issue with Git
what I described above works - Atmos checks the component folder for any changes
@Andriy Knysh (Cloud Posse) is it only checking the commits? If so this will not work.
but… your changes should be committed, b/c Atmos just executes Git commands (using the go-git library)
see, basically my components are not part of the same git repository as my atmos repo; they are being vendored dynamically.
Yeah
if that is the case I’m unable to understand the purpose of vendoring
if I have to have the components in the same repo under the components folder, then why do I have to use the vendor feature?
vendoring is a diff feature from describe affected; your use case tries to combine them. Let me think about it
basically I want to keep my component development separate in other git repos and always point to the stable version through the vendor.yaml
i do not want to have the components and atmos stuff like stacks, workflows etc. in the same repo
(@Kubhera committing vendored files is a bit like the “tabs vs spaces” debate. Depending on the language and framework, different, conflicting “Best Practices” exist - for example, between Golang and Node. For context, see all the articles debating it: https://www.google.com/search?q=vendoring+commit+or+not+to+commit&sourceid=chrome&ie=UTF-8#ip=1)
i want to maintain my components in different repos, like how terraform or you guys develop and publish modules
@Kubhera what you want to do is also possible
I just want to make sure we’re first on the same page.
Okay
if you can help me understand or suggest a proper workaround to achieve what I’m looking for, that would be of great help.
So in our practice, all of our components are distributed here: https://github.com/cloudposse/terraform-aws-components. We early on learned, that customers preferred having a local copy. That’s why we adopted the approach we do.
do not want to have the components and atmos stuff like stacks, workflowws etc.. int he same repo
So the components are in a different repo. We just happen to commit them.
I understand you don’t want to do that. It makes perfect sense. It’s annoying sometimes to review vendored files.
So let’s say there are 2 ways to do vendoring:
1. vendor & commit
2. vendor just-in-time
We want to help you solve (2), so you don’t have to commit & push the files.
yes exactly
There’s what we can do now, and there’s what we are open to adding support to improve the DX.
So in your CI, try this:
1. Wait for https://github.com/cloudposse/atmos/pull/590 to merge & release. Hopefully today. Then update your pipeline like this:
atmos vendor pull ...
git commit -a --message 'temporary commit'
atmos describe affected ...
This should allow the git diff to work the way it’s implemented.
Note, you do not need to push the “temporary commit”; just discard it.
since I’m using the affected command to run the workflow against only the affected stacks, this is a problem; if I had to run the workflow against all stacks every time, then there absolutely wouldn’t be any issues.
Step 5 could be
git revert HEAD~1..HEAD
ok
yes, this is how describe affected currently works
Use this command to show a list of the affected Atmos components and stacks given two Git commits
OK, can I have multiple vendor files? one per component? then we can add a file dependency in the component definition under catalogs.
@Kubhera your use-case is interesting (dynamic vendoring), thank you, we’ll review it and prob can add more functionality
can I have multiple vendor files? one per component? then we can add a file dependency in the component definition under catalogs.
See component.yaml: https://sweetops.slack.com/archives/C031919U8A0/p1714485451239789?thread_ts=1714484118.739959&cid=C031919U8A0
See https://atmos.tools/cli/commands/vendor/pull/#vendoring-using-componentyaml-manifest for the component.yaml
You can also have many vendor.yaml files, but the component.yaml in your use-case makes more sense.
why can’t atmos just keep track of the version in vendor.yaml? if there is a change in the version in the recent commit, it should mark all the related stacks as affected. this would be really helpful
would you mind elaborating more on how to achieve it with component.yaml
@Kubhera stick the component.yaml in each component folder. Point to the upstream component.
Since the component.yaml is in the component folder, atmos describe affected will see that the component changed any time the component.yaml changes.
That’s the exact outcome you want
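A minimal component.yaml sketch for the account_scp component, adapted from the vendor.yaml above (the schema follows the Atmos component vendoring docs; the globs are illustrative):
# components/terraform/account_scp/component.yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: account_scp-vendor-config
  description: Source config for vendoring the account_scp component
spec:
  source:
    # same source as in vendor.yaml; {{.Version}} is substituted by Atmos
    uri: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
    version: "1.0.0"
    excluded_paths:
      - "**/tests"
      - "**/.gitlab-ci.yml"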
sounds good let me try it out and revert
@Andriy Knysh (Cloud Posse) any settings he should be aware of to make sure that .yaml changes in component folders are considered?
We want to support this use-case, so thanks for bearing with us.
by default, all files are considered
but you can also explicitly include and/or exclude
# Only include the files that match the 'included_paths' patterns
# If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
# 'included_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
# https://en.wikipedia.org/wiki/Glob_(programming)
# https://github.com/bmatcuk/doublestar#patterns
included_paths:
- "**/*.tf"
- "**/*.tfvars"
- "**/*.md"
# Exclude the files that match any of the 'excluded_paths' patterns
# Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
# 'excluded_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
excluded_paths:
- "**/context.tf"
we were on terragrunt and now I’m migrating things to atmos for my org
Yup, I’m aware of that
thank you, let us know if that works for you
Sure, catch you guys later with the outcome
Note that if you use the verbose flag:
atmos describe affected --verbose=true
it will show you all the affected files
Changed files:
cmd/atlantis_generate_repo_config.go
cmd/describe_affected.go
examples/quick-start/Dockerfile
examples/quick-start/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
examples/tests/atmos.yaml
which should include your component.yaml file in components/terraform/<component>/component.yaml
@Kubhera maybe you have some thoughts to add to https://github.com/cloudposse/atmos/issues/598
Describe the Feature
This is a similar idea to what Terragrunt does with their “Remote Terraform Configurations” feature: https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/#remote-terraform-configurations
The idea would be that you could provide a URL to a given root module and use that to create a component instance instead of having that component available locally in the atmos project repo.
The benefit here is that you don’t need to vendor in the code for that root module. Vendoring is great when you’re going to make changes to a configuration, BUT if you’re not making any changes then it just creates large PRs that are hard to review and doesn’t provide much value.
Another benefit: I have team members that strongly dislike creating root modules that are simply slim wrappers of a single child module because then we’re in the game of maintaining a very slim wrapper. @kevcube can speak to that if there is interest to understand more there.
Expected Behavior
Today all non-custom root module usage is done through vendoring in Atmos, so no similar expected behavior AFAIK.
Use Case
Help avoid vendoring in code that you’re not changing and therefore not polluting the atmos project with additional code that is unchanged.
Describe Ideal Solution
I’m envisioning this would work like the following, with $COMPONENT_NAME.metadata.url being the only change to the schema. Maybe we also need a version attribute as well, but TBD.
components:
  terraform:
    s3-bucket:
      metadata:
        url: https://github.com/cloudposse/terraform-aws-components/tree/1.431.0/modules/s3-bucket
      vars:
        ...
Running atmos against this configuration would result in atmos cloning that root module down into a local temporary cache and then using the cloned root module as the source to run terraform or tofu against.
Alternatives Considered
None.
Additional Context
None.
How do I pass
variable "service_project_names" {
  description = "list of service projects to connect with host vpc to share the network"
  type        = list(string)
  default     = []
}
in YAML? When I try to use service_project_names: ["PROJECT_NAME"], my run does not show any changes to be made to my plan.
That looks correct.
When you do the run, can you see the deep-merged yaml by turning the atmos log level to Trace in the atmos.yaml? Or can you see the input in the generated tfvars json?
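In atmos.yaml that would be something along these lines (the file destination shown is just one common choice):
logs:
  file: /dev/stderr   # where atmos writes its log output
  level: Trace        # most verbose level; shows the deep-merged configuration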
I see it like this in the run for variables being used:
- service_project_names:
- project1
- project2
if you can see it in the deep-merged atmos yaml and you can see it in the outputted tfvars.json file for the terraform workspace, then the issue may be with the Terraform code
try adding this
output "service_project_names" { value = var.service_project_names }
and see if the output matches your input when you run a plan/apply
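For example, with hypothetical component and stack names:
atmos terraform plan my-component -s my-stack
then compare the service_project_names output in the plan against what you set in the stack YAML.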
The output shows that it is = [], so it is not passing the value from my yaml for some reason, but the pipeline shows that the var is declared.
did you verify the tfvars json output?
You should have something like this:
vars:
  service_project_names:
    - project1
    - project2
But nowhere should you have this:
vars:
  - service_project_names:
      - project1
      - project2
(The leading dash makes vars a list instead of a map, so the variable never gets set.)
Also, if you think that list can get long, be careful: avoid creating factories inside of Terraform, and instead build smaller, more nimble components. This is not a rule but a consideration.
https://atmos.tools/core-concepts/components/#best-practices
Components are opinionated building blocks of infrastructure as code that solve one specific problem or use-case.
All good, the variable was nested in the wrong spot but there was also an issue where a resource needed to use a beta provider. Fixed and working now though
v1.71.0
Update atmos describe affected command. Update docs @aknysh (#590)
what
- Update atmos describe affected command
- Add --clone-target-ref flag to the atmos describe affected command
- Update docs. Add "Remote State Backend" doc