#atmos (2024-06)
2024-06-03
Hey, I’m using atmos
with Azure, and I noticed the github-action-atmos-terraform-plan
and github-action-atmos-terraform-apply
github actions are AWS specific. Would a PR be welcome to allow them to work with Azure too?
So the irony there is these were actually developed for a customer using them primarily on Azure
They support Azure, but our docs might be lacking.
Any PRs improving that would be welcome.
@Matt Calhoun do we have any azure docs that weren’t published?
Oh really? Ha! Looking at the action.yml
I don’t see a way to bypass the AWS steps and instead authenticate with Azure… https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/2313b46bbad5da8a93177cd6d443f565e229906e/action.yml#L169
- name: Configure Plan AWS Credentials
Hrmmm
Maybe I’m looking at the wrong actions?
No, so I think the story goes a little differently. They are using custom workflows on their end that basically do what the actions do. These actions were developed afterwards to do the same thing, but as you noticed, ended up AWS-specific. I’m pretty sure if you look at the plan file storage action you will see specific logic for Azure. That’s the action they are using.
A GitHub action that enables securely storing and retrieving Terraform plan files
Here you can see the docs as they relate to this action for Azure
So what we need to do is do the same thing for these other actions. If you wouldn’t mind opening a PR for that, that would be amazing.
Also, we have a PR open that’s updating the docs on the website which are out of date
what
• Update github actions documentation
why
• Document the latest gitops
references
• DEV-491: Update Atmos.tools documentation for GitHub Actions to use atmos.yaml
I’ll check it out, thanks @Erik Osterman (Cloud Posse). We’ll PR any changes, of course. @Zack Ferguson
2024-06-05
Update atmos vendor pull
and atmos validate stacks
commands. Add --include-dependents
flag to atmos describe affected
command @aknysh (#616)
what
• Update atmos vendor pull
command
• Update atmos validate stacks
command
• Add --include-dependents
flag to atmos describe affected
command
• Update docs
• https://atmos.tools/cli/commands/describe/affected/
why
• When executing atmos vendor pull
, Atmos creates a temp directory to clone the remote repo into.
Atmos uses go-getter
to download the sources into the temp directory. When cloning from the root of a repo w/o using modules (sub-paths), go-getter
does the following:
• If the destination directory does not exist, it creates it and runs `git init`
• If the destination directory exists, it should be an already initialized Git repository (otherwise an error will be thrown)
For more details, refer to
• [hashicorp/go-getter#114](https://github.com/hashicorp/go-getter/issues/114)
• [https://github.com/hashicorp/go-getter?tab=readme-ov-file#subdirectories](https://github.com/hashicorp/go-getter?tab=readme-ov-file#subdirectories)
• Don't check for duplicate abstract components in the same stack from different stack manifests. Abstract components are never provisioned and serve as blueprints for real components. This is an update (follow-up) to the previous PRs:
• [#608](https://github.com/cloudposse/atmos/pull/608)
• [#611](https://github.com/cloudposse/atmos/pull/611)
• The `--include-dependents` flag allows including dependencies for the affected components
If the command-line flag --include-dependents=true
is passed to the atmos describe affected
command, and there are other components that depend on the affected components in the stack, the command will include a dependents
property (list) for each affected component. The dependents
property is hierarchical - each component in the list will also contain a dependents
property if that component has dependent components as well.
For example, suppose that we have the following configuration for the Atmos components component-1
, component-2
and component-3
in the stack plat-ue2-dev
:
components:
terraform:
component-1:
metadata:
component: "terraform-component-1"
vars: {}
component-2:
metadata:
component: "terraform-component-2"
vars: {}
settings:
depends_on:
1:
component: "component-1"
component-3:
metadata:
component: "terraform-component-3"
vars: {}
settings:
depends_on:
1:
component: "component-2"
In the above configuration, component-3
depends on component-2
, whereas component-2
depends on component-1
.
If all the components are affected (modified) in the current working branch, the atmos describe affected --include-dependents=true
command will produce the following result:
[
{
"component": "component-1",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-component-1",
"included_in_dependents": false,
"dependents": [
{
"component": "component-2",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-component-2",
"dependents": [
{
"component": "component-3",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-component-3"
}
]
}
]
},
{
"component": "component-2",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-component-2",
"included_in_dependents": true,
"dependents": [
{
"component": "component-3",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-component-3"
}
]
},
{
"component": "component-3",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-component-3",
"included_in_dependents": true
}
]
The component-1
component does not depend on any other component, and therefore it has the included_in_dependents
attribute set to false
. The component-2
and component-3
components depend on other components and are included in the dependents
property of the other components, and hence the included_in_dependents
attribute is set to true
.
When processing the above output, you might decide to not plan/apply the component-2
and component-3
components since they are in the dependents
property of the component-1
component. Instead, you might just trigger component-1
and then component-2
and component-3
in the order of dependencies.
Hey all, wondering if there is a way to set a workspace pattern override at the top level configuration in a stack file. Similar to this idea in the docs (https://atmos.tools/core-concepts/components/terraform-workspaces/#terraform-workspace-override-in-atmos) but that’s for an individual, if there’s a more efficient way to apply it to all in that stack YAML file.
For context, the reason is that the resources is managed via Spacelift and we have a stack_name_pattern
(https://github.com/cloudposse/atmos/releases/tag/v1.4.1) override (which is on the top level) for this particular file but unless there is a workspace override, the naming becomes inconsistent.
@Andriy Knysh (Cloud Posse)
@Yangci Ou you can update the stacks.name_pattern
in atmos.yaml
to make it the same as you did for Spacelift using settings.stack_name_pattern
Use the atmos.yaml
configuration file to control the behavior of the atmos
CLI.
(there is no setting for that in the stack manifests because Atmos needs to know the stack name pattern before being able to process the stack manifest files, to be able to find the stack and component in the stack when executing CLI commands) - that’s why it’s in atmos.yaml
only
stacks.name_pattern
in atmos.yaml
is used in these cases:
• it’s the pattern for Atmos stack names
• it’s the pattern for Terraform workspaces (by default, but you can override TF workspaces as well if needed, see https://atmos.tools/core-concepts/components/terraform-workspaces#terraform-workspace-override-in-atmos)
Terraform Workspaces.
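As a reference for the above, a minimal sketch of how `stacks.name_pattern` is set in `atmos.yaml` (the context variables shown are illustrative):

```yaml
# atmos.yaml (CLI configuration, not a stack manifest)
stacks:
  # defines which context variables make up the Atmos stack name, and in
  # which order; by default this also serves as the Terraform workspace pattern
  name_pattern: "{tenant}-{environment}-{stage}"
```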
Hey @Andriy Knysh (Cloud Posse) - I did take a look at the name_pattern
in atmos.yaml
but that sets it for the entire project. I’m more looking for what the TF workspace override is doing, but that’s doing it for the individual component
That’s what I want, but I’d like it for that particular entire .yaml stack
At the same time, the name_pattern
for this project is already set; it’s just these couple of components in this .yaml stack file that need overriding
I’m thinking of something that does the same thing as terraform_workspace_pattern
in that docs/pic, but on the terraform.settings
level instead of the components.terraform.resource123.metadata
level
currently, TF workspace can be overridden in the metadata
section per component (not globally and not per file) (but it can be added to Atmos)
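For reference, the per-component override described in the linked docs looks roughly like this (the component name and pattern are illustrative):

```yaml
components:
  terraform:
    my-component:
      metadata:
        # overrides the Terraform workspace for this one component only;
        # `metadata` is not inherited, so it can't be applied per file
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}-{component}"
```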
@Yangci Ou just curious, why do you want to override the TF workspaces for all components in a stack file, but don’t override the Atmos stack name?
@Andriy Knysh (Cloud Posse) it’s because we already have a name_pattern
set for the stacks in the atmos.yaml
. For all stacks and the components in them, all of the name patterns are as consistent between the workspace name to the Spacelift stack names.
But our situation calls for a special case where we’re creating a new .yaml file for an Atmos stack, but we want all the components that live in it to have a different Spacelift stack name, which in turn means we want a different workspace name as well (hence overriding the TF workspace)
so essentially: these components are in the same Atmos stack as others, but we want different Spacelift name (which is implemented per file via stack_name_pattern
) and looking to do the same for the TF workspace (which you mentioned only done per component)
@Yangci Ou i understand what you want to do. While currently you can override the TF workspace per component only (since metadata
section is not inherited), we’ll consider adding a global settings
to be able to do it on a set of components (we improve/update Atmos often when seeing real-life use-cases like this one)
having said that, I’d like to review with you what you are doing with overriding the Spacelift stack names for the components in a stack manifest file. I’ll put a wall of text below for you to better understand the difference between an Atmos stack (logical construct) and a stack manifest file (physical file where the components for the stack are defined)
let’s say you have an Atmos stack plat-use2-prod
(the stack name is defined by the context variables tenant
, environment
and stage
, and by the stacks.name_pattern
in atmos.yaml
(so Atmos knows which context variables are part of the stack name and in which order)
the components in the stack plat-use2-prod
can be defined inline or imported in a single manifest file, e.g. orgs/acme/plat/prod/us-est-2.yaml
- it’s called a top-level stack manifest
or the logical stack plat-use2-prod
can be split into multiple top-level stack manifest files. e.g.
orgs/acme/plat/prod/us-east-2.yaml
orgs/acme/plat/prod/us-east-2-extras.yaml
a set of components for the Atmos stack plat-use2-prod
are defined (inline or imported) in orgs/acme/plat/prod/us-east-2.yaml
manifest file, while another set of components for the same Atmos stack plat-use2-prod
are defined (inline or via imports) in the second stack manifest file orgs/acme/plat/prod/us-east-2-extras.yaml
this pattern is called “Partial Stack Configuration” https://atmos.tools/design-patterns/partial-stack-configuration
Partial Stack Configuration Atmos Design Pattern
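A minimal sketch of that split (file names from the example above; the imports and component names are illustrative):

```yaml
# orgs/acme/plat/prod/us-east-2.yaml -- part of the plat-use2-prod stack
import:
  - orgs/acme/plat/prod/_defaults
components:
  terraform:
    vpc: {}
```

```yaml
# orgs/acme/plat/prod/us-east-2-extras.yaml -- same logical stack, second file
import:
  - orgs/acme/plat/prod/_defaults
components:
  terraform:
    monitoring: {}
```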
while it’s a powerful pattern to use when you want to split your stack manifest files into smaller parts to manage them easily, you need to pay attention to the following:
• Regardless of how many stack manifest files you have, it’s still the same (logical) Atmos stack plat-use2-prod
(just split into a few physical files)
• Any global config you defined in any of the files and any of the imported files will be applied to ALL components in the Atmos stack plat-use2-prod,
not only to the components in one manifest file. For example, if you define something like this inline or via imports
settings:
spacelift:
workspace_enabled: true
# `stack_name_pattern` overrides Spacelift stack names
stack_name_pattern: "{tenant}-{environment}-{stage}-{component}"
the settings.spacelift
section is now a global section for the entire Atmos stack plat-use2-prod
, and all components defined in the two stack manifest files orgs/acme/plat/prod/us-east-2.yaml
and `orgs/acme/plat/prod/us-east-2-extras.yaml` will get that global config, and the
`stack_name_pattern` will be applied to all of them
if you need to make sure that you provide or override some config just for a specific stack manifest file, Atmos supports the overrides
section https://atmos.tools/design-patterns/component-overrides
Component Overrides Atmos Design Pattern
it allows you to create a new separate scope per FILE (not per Atmos stack), and only the components in that file (defined inline or imported) will get the config defined in the overrides
section. The other stack manifest files (even for the same logical stack) will not be affected. On the other hand, any config specific to a component (added into components.terraform.<my-component>
section) will be applied only to that component in the stack and will not affect anything else (as we can currently do with the metadata
section)
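A hedged sketch of an `overrides` section scoped to a single manifest file (the pattern and file name are illustrative):

```yaml
# orgs/acme/plat/prod/us-east-2-extras.yaml
# `overrides` applies only to components defined inline or imported in THIS
# file, not to the rest of the logical Atmos stack
overrides:
  settings:
    spacelift:
      stack_name_pattern: "{tenant}-{environment}-{stage}-extras-{component}"
```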
@Yangci Ou let me know if it makes sense and describes the use-case you want to implement
Hey Andriy! Just got around to reading this, yes this 100% makes sense and was a good explanation of the use-cases. Super helpful on the clear distinction of an Atmos logical stack and the physical stack manifest files. The partial stack configuration use case is basically what’s being implemented here with the idea of overrides.
Thanks for taking the time to provide that detailed and thorough explanation, reading up more examples on the docs for those links you provided and the organization/structures of those components looks very effective and clean
thanks. Having said that, we’ll consider adding the ability to override terraform workspace for a group of components, not only for one component at a time
hi, i’ve modified atmos schema.json to support “assume_role” in s3 backend. Maybe you want to add it to the schema definition:
M stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
@@ -498,6 +498,14 @@
       "region": {
         "type": "string"
       },
+      "assume_role": {
+        "role_arn": {
+          "type": [
+            "string",
+            "null"
+          ]
+        }
+      },
       "role_arn": {
         "type": [
           "string",
@Andriy Knysh (Cloud Posse)
Big thumbs up on this! Saw the recent updates on atmos validate and I was looking into using it in our CI/CD pipeline as a check - was looking at the results and saw bunch of errors for the invalid assume_role
because of the manifest
Fix an issue with the component_info
output from the atmos describe component
command. Add assume_role
property to Atmos JSON Schema S3 backend @aknysh (#621)
what
• Fix an issue with the component_info
output from the atmos describe component
command
• Add assume_role
property to Atmos JSON Schema S3 backend
why
• The issue with the component_info
output from the atmos describe component
command was introduced in the previous PRs (different order of execution when evaluating Go templates in Atmos stack manifests)
• Support the recommended assume_role
property in S3 backends. Assuming an IAM Role can be configured in two ways. The preferred way is to use the argument assume_role
, the other, which is deprecated, is with arguments at the top level (e.g. role_arn
)
references
• https://developer.hashicorp.com/terraform/language/settings/backends/s3#assume-role-configuration
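In an Atmos stack manifest, the S3 backend section would carry the recommended property roughly like this (a sketch; the bucket, key, and role ARN are placeholders):

```yaml
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "acme-ue2-root-tfstate"
      key: "terraform.tfstate"
      region: "us-east-2"
      encrypt: true
      # preferred over the deprecated top-level `role_arn`
      assume_role:
        role_arn: "arn:aws:iam::111111111111:role/tfstate-backend-access"
```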
Update github actions documentation @goruha (#606)
what
• Update github actions documentation
why
• Document the latest gitops
references
• DEV-491: Update Atmos.tools documentation for GitHub Actions to use atmos.yaml
Fix tests @goruha (#619)
what
• Fix tests
• Fix documentation
why
• Because branch master
renamed to main
Go auto release workflows @goruha (#586)
What
• Use go auto-release workflow cloudposse/.github/.github/workflows/shared-go-auto-release.yml@main
• Remove .goreleaser.yml. Now will use https://github.com/cloudposse/.github/blob/main/.github/goreleaser.yml
• Drop auto-release.yaml. Now will use https://github.com/cloudposse/.github/blob/main/.github/auto-release.yml and https://github.com/cloudposse/.github/blob/main/.github/auto-release-hotfix.yml
Why
• Consolidate go releases workflow pattern
• Closes #579
2024-06-06
2024-06-07
2024-06-08
2024-06-09
It would be nice to not have to redefine every modules’ inputs in the component. This github comment on using a provider function to decode_tfvars
and then pass it into modules would prevent having to define each var in each component
What do you folks think
Are you just proxying the module as a component? then there’s another way.
Kind of. I think this approach could be used for any component even a component that composes multiple modules
What is the other way?
This is what I was thinking of https://atmos.tools/core-concepts/components/vendoring/#vendoring-modules-as-components
Use Component Vendoring to make copies of 3rd-party components in your own repo.
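Vendoring a third-party module as a component is driven by a `component.yaml` placed in the component directory; a sketch assuming a hypothetical VPC module (the URI and version are placeholders):

```yaml
# components/terraform/vpc/component.yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-vendor-config
spec:
  source:
    # pulls the module source into the component folder via `atmos vendor pull`
    uri: github.com/terraform-aws-modules/terraform-aws-vpc.git//.?ref={{.Version}}
    version: "5.0.0"
```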
even a component that composes multiple modules
Inevitably, users want “glue” to compose stateful root modules from multiple child modules. Do you think there’s a way without something more HCL based like what terragrunt and terramate do?
I want to be cognizant of reinventing HCL in YAML (not the end goal)
decode_tfvars
and then pass it into modules would prevent having to define each var in each component
I have a mental block for how this would work. This allows for dynamic loading of tfvar
files, which is nice, but the values still need to be passed. Also, if done at the component level, this method bypasses variable validation, which is very helpful for users. It’s a good thing to declare the variables, because it’s a documented interface.
without it, it’s like type any
and that’s anything but helpful to understand what is accepted.
Many times the module inputs are copied and pasted into the component’s variables.tf, so if the component uses 3 modules, each of their vars, including types, descriptions, validations, etc., is copied over. This also means that if something changes in the module, it introduces drift, and so the component also needs to be updated.
We can side step this by allowing vars to pass through from yaml to tfvars with atmos and then decode using decode_tfvars
. That should avoid having to copy and paste the module vars into component vars and reduce the redundant validation checks in the component and defer that to the upstream (or local) modules
Or maybe I’m mistaken here. I haven’t tested it out. It seems promising tho and may allow less module to component variable drift
So I get the gist of what you’re saying, but the function you’re referring to is more like the YAML decode method, so I don’t see the end solution coming together. The fact that the function exists is no different from any of the other decoding methods, such as those for JSON or YAML. Am I missing something?
If you could mock something up, either through YAML or something else, that might help me connect the dots.
I’m on a flight at the moment but i will mock something up this week to illustrate it
2024-06-10
2024-06-11
I’ve been wrangling with the best way to implement resource groups in Azure. Initially I created a storage component that provisions storage resources in a resource group (also created by the storage component). This worked great - the component would (by default) create a resource group for each component, or, I could provide a resource group name and it would re-use an existing resource group.
Then I realized because I’m deploying atmos component changes in parallel using a matrix in my pipeline, the ordering of resource creation is important, and reusing another resource group can fail if it hasn’t already been created.
So I experimented with using a data source in the storage component to check if a resource group is missing and create it if it is. But then de-provisioning blows up when destroying the storage component that created the resource group, because it still contains other resources.
I’ve since created a resourcegroups
component which accepts a list of resource group names from the stack yaml, and provisions them, but again - if this component isn’t deployed first by the pipeline, deploying other components will fail. Is that the right way to do this?
Has anyone else solved this conundrum? (Thanks)
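One way to make that ordering explicit is the `settings.depends_on` pattern from the release notes earlier in this channel, so that `atmos describe affected --include-dependents` (and whatever drives the pipeline matrix) can order the deploys; a sketch using the component names from this thread (the rest is illustrative):

```yaml
components:
  terraform:
    resourcegroups:
      vars:
        resource_group_names:
          - "rg-storage"
    storage:
      settings:
        # declares that `storage` depends on `resourcegroups`,
        # so the pipeline can provision the resource groups first
        depends_on:
          1:
            component: "resourcegroups"
      vars:
        resource_group_name: "rg-storage"
```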
@Andriy Knysh (Cloud Posse) do you have any clever ideas on how to have like a prioritized output of components?
We’re implementing ordered dependencies with GitHub Actions, but due to technical limitations of GitHub matrices, jobs, and concurrency groups, there’s no way to do it without also introducing a GitHub App.
We thought that may be the case. Are you able to share your timeline? Need any help with that?
as Erik mentioned, this is a multi-phase process. First step we added in this PR https://github.com/cloudposse/atmos/pull/616 - atmos describe affected --include-dependents=true
. Second step is to use that functionality in GitHub Actions and GH App
Thanks @Andriy Knysh (Cloud Posse). In Azure, management of resource dependencies seems more fundamental - so hopefully this approach solves the problem. For now, I think I’ll create the resource group within the component, and adjust the component input vars to accept multiple things to create. I.e. the storage component creates a storage resource group, and then provisions n storage accounts within it. And if another storage resource group is required, have another storage component in the stack with a different config.
i’ve generally organized my components into their own resource groups. That kinda logically makes sense too but YMMV
Sharing some Atmos + Cloud Posse component love
Thanks Matt!
2024-06-12
Hi everyone,
I am trying to configure gcs backend for GCP deployment. Does remote-state
module support gcs as backend ?
It should work. To our understanding, it just doesn’t work with local state backends.
Also, due to popular demand (and a change of heart), we are implementing the ability to directly pass state (terraform outputs) using atmos without relying on terraform data sources. This should be released soonish < 2 weeks.
@Amit the remote-state
module is not currently configured to use the GCS backend (but it can be updated to support it). https://github.com/cloudposse/terraform-yaml-stack-config/blob/main/modules/remote-state/data-source.tf
locals {
  data_source_backends   = ["remote", "s3", "azurerm"]
  is_data_source_backend = contains(local.data_source_backends, local.backend_type)

  remote_workspace = var.workspace != null ? var.workspace : local.workspace

  ds_backend   = local.is_data_source_backend ? local.backend_type : "local"
  ds_workspace = local.ds_backend == "local" ? null : local.remote_workspace

  ds_configurations = {
    local = {
      path = "${path.module}/dummy-remote-state.json"
    }

    remote = local.ds_backend != "remote" ? null : {
      organization = local.backend.organization
      workspaces = {
        name = local.remote_workspace
      }
    }

    s3 = local.ds_backend != "s3" ? null : {
      encrypt        = local.backend.encrypt
      bucket         = local.backend.bucket
      key            = local.backend.key
      dynamodb_table = local.backend.dynamodb_table
      region         = local.backend.region

      # NOTE: component types
      # Privileged components are those that require elevated (root-level) permissions to provision and access their remote state.
      # For example: `tfstate-backend`, `account`, `account-map`, `account-settings`, `iam-primary`.
      # Privileged components are usually provisioned during cold-start (when we don't have any IAM roles provisioned yet) by using admin user credentials.
      # To access the remote state of privileged components, the caller needs to have permissions to access the backend and the remote state without assuming roles.
      # Regular components, on the other hand, don't require root-level permissions; they are provisioned, and their remote state is accessed, by assuming IAM roles (or using profiles).
      # For example: `vpc`, `eks`, `rds`

      # NOTE: global `backend` config
      # The global `backend` config should be declared in a global YAML stack config file (e.g. `globals.yaml`)
      # where all stacks can import it and have access to it (note that the global `backend` config is organization-wide and will not change after cold-start).
      # The global `backend` config in the global config file should always have the `role_arn` or `profile` specified (added after the cold-start).

      # NOTE: components `backend` config
      # The `backend` portion for each individual component should be declared in a catalog file (e.g. `stacks/catalog/<component>.yaml`)
      # along with all the default values for a component.
      # The `privileged` attribute should always be declared in the `backend` portion for each individual component in the catalog.
      # Top-level stacks where a component is provisioned import the component's catalog (the default values and the component's backend config portion) and can override the default values.

      # NOTE: `cold-start`
      # During cold-start we don't have any IAM roles provisioned yet, so we use admin user credentials to provision the privileged components.
      # The `privileged` attribute for the privileged components should be set to `true` in the components' catalog,
      # and the privileged components should be provisioned using admin user credentials.

      # NOTE: after `cold-start`
      # After the privileged components (including the primary IAM roles) are provisioned, we update the global `backend` config in the global config file
      # to add the IAM role or profile to access the backend (after this, the global `backend` config should never change).
      # For some privileged components we can change the `privileged` attribute in the YAML config from `true` to `false`
      # to allow the regular components to access their remote state (e.g. we set the `privileged` attribute to `false` in the `account-map` component
      # since we use `account-map` in almost all regular components).
      # For each regular component, set the `privileged` attribute to `false` in the component's portion of the `backend` config (in `stacks/catalog/<component>.yaml`)

      # Advantages:
      # The global `backend` config is specified just once in the global config file, the IAM role or profile is added to it after the cold start,
      # and after that the global `backend` config never changes.
      # We can make a component privileged or not at any time by just updating its `privileged` attribute in the component's catalog file.
      # We can change a component's `backend` portion at any time without touching/affecting the backend configs of all other components (e.g. when we add a new
      # component, we don't touch the `globals.yaml` file at all, and we don't update the component's `role_arn` and `profile` settings).

      # Use the role to access the remote state if the component is not privileged and `role_arn` is specified
      role_arn = !coalesce(try(local.backend.privileged, null), var.privileged) && contains(keys(local.backend), "role_arn") ? local.backend.role_arn : null

      # Use the profile to access the remote state if the component is not privileged and `profile` is specified
      profile = !coalesce(try(local.backend.privileged, null), var.privileged) && contains(keys(local.backend), "profile") ? local.backend.profile : null

      workspace_key_prefix = local.workspace_key_prefix
    }

    azurerm = local.ds_backend != "azurerm" ? null : {
      resource_group_name  = local.backend.resource_group_name
      storage_account_name = local.backend.storage_account_name
      container_name       = local.backend.container_name
      key                  = local.backend.key
    }
  } # ds_configurations
}

data "terraform_remote_state" "data_source" {
  count = var.bypass ? 0 : 1

  backend   = local.ds_backend
  workspace = local.ds_workspace
  config    = local.ds_configurations[local.ds_backend]
  defaults  = var.defaults
}
what you can do is fork the repo, add the GCP backend to the map, use your branch as the source in your terraform code, test it, then open a PR in the Cloud Posse repo. This would be the fastest way to test it
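If someone wants to try that, the fork change would be roughly along these lines (an untested sketch: `bucket` and `prefix` are the standard attributes of the `gcs` backend for `terraform_remote_state`, and the `local.backend.*` references are assumptions mirroring the existing `s3`/`azurerm` blocks):

```hcl
locals {
  # add "gcs" to the supported data source backends
  data_source_backends = ["remote", "s3", "azurerm", "gcs"]

  ds_configurations = {
    # ... existing local/remote/s3/azurerm entries stay as-is ...

    # GCS backend config for `terraform_remote_state`
    gcs = local.ds_backend != "gcs" ? null : {
      bucket = local.backend.bucket
      prefix = local.backend.prefix
    }
  }
}
```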
@Amit you can now try this instead: https://sweetops.slack.com/archives/C031919U8A0/p1718495222745629
Add --include-settings
flag to atmos describe affected
command @aknysh (#624)
what
• Add --include-settings
flag to atmos describe affected
command
• Update docs
why
• If the --include-settings=true
flag is passed, atmos describe affected
will include the settings
section for each affected component in the stack. The settings
section is a free-form map used to pass configuration information to Atmos integrations. Having the settings
section in the output will allow the integrations to parse it and detect settings for the corresponding integration
Fix goreleaser @goruha (#623)
what
• Added atmos
specific goreleaser
why
• Set the atmos version when building
2024-06-13
question: how can i use object in vars:
this is working
component:
server:
vars:
hc_ssh_keys:
shelas:
name: foo
filename: bar
but this is not
settings:
ssh_keys:
shelas:
name: foo
filename: bar
component:
server:
vars:
hc_ssh_keys: '{{ .settings.ssh_keys }}'
The indenting is weird. Not sure if that’s just a formatting issue. Have you run atmos validate? Also make sure you have defined a schema.
its just written from my head
the problem is, that i can only define values, not objects
There’s nothing special you need to do to pass the parameters for an object. It works the same as a map.
Can you elaborate on what in particular is not working? in terraform? is the deep merging not working as you expected?
so, this is my resource in the component
resource "hcloud_ssh_key" "this" {
  for_each = var.hc_ssh_keys

  name       = each.value.name
  public_key = each.value.filename
  labels     = var.tags
}
this is my stack
components:
  terraform:
    server:
      metadata:
        component: '{{ .settings.version }}'
        inherits:
          - 'server/_defaults'
      vars:
        hc_ssh_keys:
          shelas:
            name: [email protected]
            filename: '¨/.ssh/id_xxxxx.pub'
plan looks like this
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # hcloud_ssh_key.this["shelas"] will be created
  + resource "hcloud_ssh_key" "this" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {
          + "Managed-By" = "Terraform"
          + "foo"        = "bar"
        }
      + name        = "[email protected]"
      + public_key  = "¨/.ssh/id_xxxxx.pub"
    }

Plan: 1 to add, 0 to change, 0 to destroy.
but, if i change it to .settings:
settings:
  stage: test
  hc_ssh_keys:
    shelas:
      name: [email protected]
      filename: '¨/.ssh/id_xxxxx.pub'
components:
  terraform:
    server:
      metadata:
        component: '{{ .settings.version }}'
        inherits:
          - 'server/_defaults'
      vars:
        hc_ssh_keys: '{{ .settings.hc_ssh_keys}}'
i’ll get this error:
You can apply this plan to save these new output values to the Terraform state, without changing any real
infrastructure.
╷
│ Error: Invalid value for input variable
│
│ on isa-zoom-test-server-server.terraform.tfvars.json line 3:
│ 3: "hc_ssh_keys": "map[shelas:map[filename:¨/.ssh/id_isa_hetzner.pub name:[email protected]]]",
│
│ The given value is not suitable for var.hc_ssh_keys declared at variables.tf:7,1-23: map of object required.
╵
It appears to me that settings is being confused with variables.
if i look into tfvars.json, in the first example, i got this:
{
  "hc_ssh_keys": {
    "shelas": {
      "filename": "¨/.ssh/id_isa_hetzner.pub",
      "name": "[email protected]"
    }
but in the second, i’ve got this:
{
  "datacenter": "fsn1-dc14",
  "hc_ssh_keys": "map[shelas:map[filename:¨/.ssh/id_isa_hetzner.pub name:[email protected]]]",
  "namespace": "default",
Settings block relates to integrations. Think “integration settings” for spacelift.
ok
Settings for GitHub actions, settings for Atlantis
But I can probably guide you to what you want to accomplish. Let’s take a step back. What is the goal you were hoping to achieve by putting the settings at the top?
i try to configure a map containing ssh keys to be rolled out by the server component. i’ll then reference some of those keys to some server instances by using the name.
this map should be defined “further up” in my stack
I think what you’re looking for then is the multiple inheritance pattern
Are you familiar with the design patterns page on Atmos docs?
yeah, more or less
the whole component looks like this
server:
  metadata:
    component: '{{ .settings.version }}'
    inherits:
      - 'server/_defaults'
  vars:
    network_component: network
    datacenter: '{{ .settings.datacenter }}'
    network_zone: '{{ .settings.network_zone }}'
    server_type: '{{ .settings.server_type }}'
    hc_ssh_keys:
      shelas:
        name: [email protected]
        filename: '¨/.ssh/id_isa_hetzner.pub'
    servers:
      web1:
        image: 'debian-12'
        ssh_keys:
          - [email protected]
        public_ipv4_enabled: true
        public_primary_ip: false
i’d like to provision one or more ssh keys, and then attach one or more to a server
in that pattern, you’ll basically describe two abstract classes of component configurations. In each one of those you’ll put your vars. Then using inherits you pull from one of those. Overall, we discourage using templating as much as possible, when template-free approaches exist
but i’d like to have the list of ssh keys outside the component
You’re taking a templated approach in the examples provided
ok. i think, i don’t understand. is templating good or bad?
With multiple inheritance, you can have the values live outside of the direct component configuration. For example, we almost never use templating in our reference architecture. That’s not to say templating is bad; there are definitely times when it makes sense. Simple templating, such as templating scalar values like you are doing, is the lesser evil.
@Stephan Helas Go templates can’t generate maps of objects by default, you can use a few techniques:
- use
range
inside range
(see this for example https://docs.dynatrace.com/docs/manage/configuration-as-code/monaco/guides/configuration-as-code-advanced-use-case)
Effectively utilize Go templating in projects with Dynatrace Configuration as Code via Monaco.
- Use Sprig
toJson
function (since JSON is a subset of YAML) https://masterminds.github.io/sprig/defaults.html
Useful template functions for Go templates.
vars:
  hc_ssh_keys: '{{ toJson .settings.hc_ssh_keys}}'
Yep!
can you point me to the design pattern to change my design as you described?
Atmos essentially takes a two-phase approach. This is typical with tools that use templates. The first phase treats the file as plain, unstructured text. Then it processes the templates, which results in a new file. It loads that resultant file as YAML, and then proceeds to the next file.
or a third way, use gomplate
data.ToYAML
https://docs.gomplate.ca/functions/data/#datatoyaml
gomplate documentation
But Andriy, can’t this just be done with multiple inheritance as well?
vars:
  hc_ssh_keys: '{{ data.ToYAML .settings.hc_ssh_keys}}'
Then you have no templates and it’s a lot easier than string manipulation
Also, we have the new setting to control how lists are deep merged
can’t just be done with multiple inheritance as well?
yes, it can be refactored. All the above methods (range
, toJson
, data.ToYAML
) are if you don’t want to refactor and just want to use a map of objects
the data.ToYAML generates this:
'{{ data.ToYAML .settings.hc_ssh_keys}}'
{
  "datacenter": "fsn1-dc14",
  "hc_ssh_keys": "shelas: filename: ¨/.ssh/id_xxxx.pub name: [email protected] ",
  "namespace": "default",
yes, this is b/c Go templates are string-based, they don’t know anything about the shape of your data type
in Helm
, they introduced toYAML
and indent
functions for that
toJson
and data.ToYAML
return a string representation of the data
you can always use range
{{- range ....
Sprig has indent
as well https://masterminds.github.io/sprig/strings.html
Useful template functions for Go templates.
you have to experiment with using toJson/toYAML
and indent
(or just use range
)
i tried, but i don’t get the syntax right
server_type: '{{ .settings.server_type }}'
hc_ssh_keys:
'{{- range $i, $e := .itemList}}'
'{{$i}}. {{$e.name}} is priced at {{$e.price}}. {{$e.description}}'
'{{- end}}'
servers:
web1:
will not run
yaml: line 48: could not find expected ‘:’
review YAML multi-line syntax https://yaml-multiline.info/
Find the right syntax for your YAML multiline strings.
example: |\n
··Several lines of text,
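For example, a YAML block scalar lets the whole template live inside one string value, so the YAML parser never sees the template lines as mapping keys (an illustrative sketch, not a tested solution from this thread):

```yaml
vars:
  hc_ssh_keys: |-
    {{- range $name, $key := .settings.hc_ssh_keys }}
    {{ $name }}:
      name: {{ $key.name }}
      filename: {{ $key.filename }}
    {{- end }}
```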
all of this is jumping through many hoops. Since Go templates are string-based, they don’t know anything about the shape of the data. That’s why in YAML files with Go templates, you have to “create” the correct “shape”.
Note that if you changed the Terraform var type to string
, then in the template you would just use toJson
, and then in the TF code you would just do
locals {
  hc_ssh_keys = jsondecode(var.hc_ssh_keys)
}

resource "hcloud_ssh_key" "this" {
  for_each = local.hc_ssh_keys

  name       = each.value.name
  public_key = each.value.filename
  labels     = var.tags
}
oh yeah, that sounds good
vars:
  hc_ssh_keys: '{{ toJson .settings.hc_ssh_keys}}'
toJson leaves me with an empty string
data.ToYAML worked for you, you can use it, and then in TF use yamlencode
yamldecode
hm, ok. i’ll tinker a bit with this.
oh, this seems to do the trick
hc_ssh_keys: '{{ data.ToYAML .settings.hc_ssh_keys | toJson }}'
and then yamldecode. not really an elegant solution
so, for reference, this works:
encode to string in atmos
hc_ssh_keys: '{{ data.ToYAML .settings.hc_ssh_keys | toJson }}'
decode in terraform
locals {
  hc_ssh_keys = yamldecode(jsondecode(var.hc_ssh_keys))
}
does it work w/o | toJson
?
Hi :wave: I’m super new to Atmos so excuse the question, but I couldnt seem to find an answer from searching and my assumptions may be wrong.
From my understanding Atmos works by defining name_pattern: "{tenant}-{stage}-{environment}"
, and essentially each stack needs to supply these vars in order for it to work.
This is done by adding these to a relevant yaml in the stack/mixin.
vars:
  environment: west-us-2
  stage: dev
  tenant: plat
My question is whether or not we can define these values outside of the vars:
key?
The Terraform modules I’m using do not have these variables defined, which is leading to a bunch of Value for undeclared variable
whenever I run a Terraform plan/apply.
If I can move these outside of the vars:
key, then they wont be passed into the generated .tfvars.json
and I wont get the error from Terraform.
Thanks!
@Stuart Martin yes, you can define those context variables in other Atmos sections, e.g. settings
(in which case they will not be variables anymore)
Use the atmos.yaml
configuration file to control the behavior of the atmos
CLI.
Look for the description of stacks.name_pattern
and stacks.name_template
in short, you can define those in e.g. settings.context
(context
is just an example here, you can name it as you want)
settings:
  context:
    environment: usw2
    stage: dev
    tenant: plat
and in atmos.yaml
define stacks.name_template
like this:
stacks:
  name_template: "{{.settings.context.tenant}}-{{.settings.context.environment}}-{{.settings.context.stage}}"
you can use any names for the context, e.g. if you want to use account
instead of stage
, you do:
settings:
  context:
    environment: usw2
    account: dev
    tenant: plat

stacks:
  name_template: "{{.settings.context.tenant}}-{{.settings.context.environment}}-{{.settings.context.account}}"
(settings
is a free-form map, you can put anything in it, including other maps like context
)
Amazing! This is exactly what I was looking/hoping for
Is there an example or a GitHub demo repo that plumbs atmos into argocd? Would like to try out how argocd can execute atmos plan/apply. Would it be a substitute for a spacelift-like system, or is Argo CD just for helmfile?
So to be clear, we are using Atmos to deploy Argo CD itself, and Argo CD itself is defined as a terraform component. Argo CD does not drive Atmos.
2024-06-14
Any thoughts about supporting imports at a component level rather than stack level? Example - I have 1 stack (a management group in azure) that has 3 subscriptions (eng, test, prod). And i have mixins for each “environment” eng/test/prod. I want to import the particular mixin w/ the particular subscription within the same stack
@Andrew Ochsner thanks for the question. We constantly improve Atmos from user feedback like yours. We’ll think about “imports for components”. Having said that, you can achieve what you want by using base components, inheritance and
metadata:
  inherits:
    - base-component-1
    - base-component-2
here are a few docs that can help you:
ah that’s a good call
<https://atmos.tools/design-patterns/abstract-component/>
<https://atmos.tools/design-patterns/component-catalog/>
<https://atmos.tools/design-patterns/component-catalog-with-mixins/>
<https://atmos.tools/design-patterns/component-catalog-template/>
i mean i guess by definition they aren’t part of the same stack, just the same yaml file… because environment makes up part of my stack name
can you show some config that you want to create, we can prob point you in the right direction
well i keep going in circles in my own head… whether it makes more sense for the “management group stack” to contain “subscriptions” or for “subscription stacks” to contain both the subscription itself and the resources in that subscription….
please show your YAML config (you can DM me). I’m not very familiar with Azure, so will have to look at the code to understand the patterns
yep can do one sec
welp actually that might be harder than i think… i’ll try to get back to you today
ok here’s an example:
mg-vdi:
  import: # management group is global region global environment..
    - mixins/region/global
    - mixins/environment/global
    - catalog/subscription/default
  components:
    terraform:
      subscription/vdi:
        metadata:
          component: subscription
          inherits:
            - subscription/default
        vars:
          name_base: vdi
        # the subscription needs the mixins/environment/eng & mixins/region/eastus values
      subscription/poc:
        metadata:
          component: subscription
          inherits:
            - subscription/default
        vars:
          name_base: poc
        # the subscription needs the mixins/environment/test & mixins/region/eastus values
i can only think of copy/pasting the values in the mixins into the vars of the subscription components (can have base components for the different options but still the approach being the same)… is there any way not to duplicate that sort of thing
i haven’t noodled around if templating would solve this but generally i know that adds another layer of complexity
what context variables define your stacks (stack name pattern), and where do you have them?
from the code above, it looks like you want to have components belonging to two diff top-level stacks but defined in one stack manifest
tenant-namespace-environment-stage… the environment == region (and defined in the mixins) & stage == environment (and defined in the mixins)
the two components belong to two diff top-level stacks?
yeah, so what’s unique here is there are 2 “types” of stacks… management group stacks and subscription stacks…. and the twist is that management group stacks contain the subscriptions that get provisioned in those management groups…
no… 2 components (subscriptions) belong to 1 top level stack (vdi management group in this case)…
i think short term, i can create component specific mixins to override the top level management group values… again i think this only will happen in subscription components
ok, you should not do it like that. You should have manifests for your top-level stacks, then import the required mixins into them, then import the defaults for the components
this way, you don’t have to specify the context variables (which define your stack names) in the components. They will get them automatically from the corresponding imports in the top-level stack manifests
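To sketch that pattern (hypothetical file paths and mixin names, just to illustrate the shape described above):

```yaml
# stacks/orgs/acme/eng/eastus.yaml  (a top-level stack manifest)
import:
  - mixins/region/eastus      # provides e.g. `environment`
  - mixins/environment/eng    # provides e.g. `stage`
  - catalog/subscription/default

components:
  terraform:
    subscription/vdi:
      metadata:
        component: subscription
        inherits:
          - subscription/default
      vars:
        name_base: vdi
```

The component itself never sets `stage`/`environment`; it picks them up from the stack’s imports.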
what’s the diff b/w mixins/environment/global
, mixins/environment/eng
and mixins/environment/test
?
yeah so effectively have component specific mixins for region & environment… that makes sense… just the values are the same as the top level mixins, just scoped to the component
the pattern here is to define your stacks by importing the global settings (e.g. from the mixins). Then in each stack, import the components that you want to provision in the stacks (and you can override some vars inline). The components should NOT know where they are provisioned, they get the context variables from the stacks
mixins/environment/eng.yaml
vars:
  stage: eng
  syslevel: eng
  tags:
    SysLevel: eng
so i’d also need to create catalog/subscription/mixins/environment/eng.yaml
components:
  terraform:
    subscription:
      vars:
        syslevel: eng
        tags:
          SysLevel: eng
but the context of the management group != the context of the subscription we are provisioning here… i don’t want to override the stage/environment of the mgmt group w/ the imports of the subscription (which is what i’ve got now that doesn’t work)
i think short term, i can create component specific mixins to override the top level management group values… again i think this only will happen in subscription components
my comment “ok, you should not do it like that.” was related to the previous messages
note that if you import global variables, they will affect all components in the stack
i think this is the best way to do it
components:
  terraform:
    subscription:
      metadata:
        type: abstract
      vars:
        syslevel: eng
        tags:
          SysLevel: eng
even if you repeat the vars there
a better way would be to create these components in the corresponding stacks
subscription/vdi:
  metadata:
    component: subscription
    inherits:
      - subscription/default
  vars:
    name_base: vdi
  # the subscription needs the mixins/environment/eng & mixins/region/eastus values
subscription/poc:
  metadata:
    component: subscription
    inherits:
      - subscription/default
  vars:
    name_base: poc
  # the subscription needs the mixins/environment/test & mixins/region/eastus values
and not override the context variables
they are separate components, just inheriting the default values from the common base component
as I described above, each component should belong to a specific top-level stack; we should not override the context variables for components in a stack
well, the alternative approach would be to put the subscription component in the “subscription” stack… i played w/ that but it’s a bit messy because the backends are different then…
appreciate the back and forth btw
Newbie here,
After setting an account structure using the account
component on the core
tenant, I’m trying to create an EKS cluster on one of the accounts that have been created. I’m getting the following error:
│ Error: Reference to undeclared module
│
│ on main.tf line 7, in locals:
│ 7: this_account_name = module.iam_roles.current_account_account_name
│
│ No module call named "iam_roles" is declared in the root module.
Seems like the eks/cluster
component is referencing iam_roles (a sub module of account-info) without actually adding it as a module in the TF eks/cluster module
What am I missing?
Something I noticed, in the examples the vendor.yaml
file is looking like this:
- component: eks
  source: "github.com/cloudposse/terraform-aws-components.git//modules/eks/cluster?ref={{.Version}}"
  version: "1.463.0"
  targets:
    - "components/terraform/eks/cluster"
  included_paths:
    - "**/*.tf"
    # If the component's folder has the `modules` sub-folder, it needs to be explicitly defined
    - "**/modules/**"
  excluded_paths:
    - "**/providers.tf"
  # Tags can be used to vendor components that have the specific tags
  # `atmos vendor pull --tags networking,storage`
  # Refer to <https://atmos.tools/cli/commands/vendor/pull>
  tags:
    - eks
(a modification I’ve made for EKS) The providers file is excluded
Why is that? I see that the iam_roles
module is there?
@Dan Miller (Cloud Posse)
The terraform-aws-components
repository is a collection of opinionated root modules. These are all designed around a set of opinionated choices we have made around designing a fully functional AWS architecture. As such, many of these components are intertwined with each other.
Primarily, [providers.tf](http://providers.tf)
for almost every component relies on the account-map
component. That component establishes the foundation for planning and applying Terraform across many accounts in the AWS Organization and much more. That includes the iam_roles
submodule, which in particular selects the correct IAM role to use to plan/apply terraform in the given account
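For reference, the pattern looks roughly like this (a simplified sketch of the providers.tf convention in terraform-aws-components, not an exact copy of any one component):

```hcl
# providers.tf (simplified sketch)
module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

provider "aws" {
  region = var.region

  assume_role {
    # The iam-roles submodule (from account-map) selects the IAM role
    # to use for plan/apply in the current account
    role_arn = module.iam_roles.terraform_role_arn
  }
}
```

So if `providers.tf` is excluded when vendoring, the `module "iam_roles"` call disappears while other files still reference it, which produces the `No module call named "iam_roles"` error above.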
2024-06-15
Introduce Atmos Go
template functions. Add atmos.Component
function to read values from other Atmos components including outputs (remote state) @aknysh (#628)
what
• Introduce Atmos Go
template functions
• Add atmos.Component
template function to read values from other Atmos components including outputs (remote state)
• Update docs
• Atmos Template Functions
• atmos.Component
function
why
• Atmos now supports custom Go
template functions similar to Sprig and Gomplate
In `Go` templates in Atmos stack manifests, you can use the following functions and datasources:
• [Go `text/template` functions](https://pkg.go.dev/text/template#hdr-Functions)
• [Sprig Functions](https://masterminds.github.io/sprig/)
• [Gomplate Functions](https://docs.gomplate.ca/functions/)
• [Gomplate Datasources](https://docs.gomplate.ca/datasources/)
• [Atmos Template Functions](https://pr-628.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/template-functions/)
description
The atmos.Component
template function allows you to read any Atmos section or any attribute from a section for an Atmos component in a stack, and use it in Go
templates in Atmos component configurations.
Usage
{{ (atmos.Component "<component>" "<stack>").<section>.<attribute> }}
Arguments
• component
- Atmos component name
• stack
- Atmos stack name
• section
- Atmos section name. Any section returned by the CLI command atmos describe component
can be used. A special outputs
section is also supported to get the outputs (remote state) of Terraform/OpenTofu components.
*NOTE:* Using the `outputs` section in the `atmos.Component` command is an alternative way to read the outputs (remote state) of a component in a stack directly in Atmos stack manifests instead of using the `remote-state` module and configuring Terraform/OpenTofu components to use the `remote-state` module as described in [Component Remote State](https://atmos.tools/core-concepts/components/remote-state)
• `attribute` - attribute name (field) from the `section`. `attribute` is optional, you can use the `section` itself if it's a simple type (e.g. `string`). Any number of attributes can be chained using the dot (`.`) notation. For example, if the first two attributes are maps, you can chain them and get a field from the last map:
{{ (atmos.Component "<component>" "<stack>").<section>.<attribute1>.<attribute2>.<field1> }}
Specifying Atmos stack
stack
is the second argument of the atmos.Component
function, and it can be specified in a few different ways:
• Hardcoded stack name. Use it if you want to get an output from a component from a different (well-known and static) stack. For example, you have a tgw
component in a stack plat-ue2-dev
that requires the vpc_id
output from the vpc
component from the stack plat-ue2-prod
:
components:
  terraform:
    tgw:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" "plat-ue2-prod").outputs.vpc_id }}'
• Use the .stack
(or .atmos_stack
) template identifier to specify the same stack as the current component (for which the atmos.Component
function is executed):
{{ (atmos.Component "<component>" .stack).<section>.<attribute> }}
{{ (atmos.Component "<component>" .atmos_stack).<section>.<attribute> }}
For example, you have a `tgw` component that requires the `vpc_id` output from the `vpc` component in the same stack:
components:
  terraform:
    tgw:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
• Use the printf
template function to construct stack names using static strings and dynamic identifiers:
{{ (atmos.Component "<component>" (printf "%s-%s-%s" .vars.tenant .vars.environment .vars.stage)).<section>.<attribute> }}
{{ (atmos.Component "<component>" (printf "plat-%s-prod" .vars.environment)).<section>.<attribute> }}
{{ (atmos.Component "<component>" (printf "%s-%s-%s" .settings.context.tenant .settings.context.region .settings.context.account)).<section>.<attribute> }}
For example, you have a `tgw` component deployed in the stack `plat-ue2-dev`. The `tgw` component requires the `vpc_id` output from the `vpc` component from the same environment (`ue2`) and same stage (`dev`), but from a different tenant `net` (instead of `plat`):
components:
  terraform:
    tgw:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" (printf "net-%s-%s" .vars.environment .vars.stage)).outputs.vpc_id }}'
*NOTE:* By using the `printf "%s-%s-%s"` function, you are constructing stack names using the stack context variables/identifiers. For more information on Atmos stack names and how to define them, refer to `stacks.name_pattern` and `stacks.name_template` sections in [`atmos.yaml` CLI config file](https://atmos.tools/cli/configuration/#stacks)
Examples
The following configurations show different ways of using the atmos.Component
template function to read values from different Atmos sections directly in Atmos stack manifests, including the outputs of other (already provisioned) components.
# Global `settings` section
# It will be added and deep-merged to the `settings` section of all components
settings:
  test: true

components:
  terraform:
    test:
      metadata:
        # Point to the Terraform/OpenTofu component
        component: "test"
      vars:
        name: "test"

    test1:
      metadata:
        # Point to the Terraform/OpenTofu component
        component: "test1"
      vars:
        name: "test1"

    test2:
      metadata:
        # Point to the Terraform/OpenTofu component
        component: "test2"
      vars:
        name: "test2"
        # Use the `atmos.Component` function to get the outputs of the Atmos component `test1`
        # The `test1` component must be already provisioned and its outputs stored in the Terraform/OpenTofu state
        # Atmos will execute `terraform output` on the `test1` component in the same stack to read its outputs
        test1_id: '{{ (atmos.Component "test1" .stack).outputs.test1_id }}'
        tags:
          # Get the `settings.test` field from the `test` component in the same stack
          test: '{{ (atmos.Component "test" .stack).settings.test }}'
          # Get the `metadata.component` field from the `test` component in the same stack
          test_terraform_component: '{{ (atmos.Component "test" .stack).metadata.component }}'
          # Get the `vars.name` field from the `test1` component in the same stack
          test1_name: '{{ (atmos.Component "test1" .stack).vars.name }}'
[docs] Integration GHA fix version compatibility table @goruha (#626)
what
• [docs] Integration GHA fix version compatibility table
why
• Table in tip box looks ugly
This is actually incredible, output sharing from component to component in YAML has been my dream feature since I started using Atmos
Introduce Atmos Go
template functions. Add atmos.Component
function to read values from other Atmos components including outputs (remote state) @aknysh (#628)
what
• Introduce Atmos Go
template functions
• Add atmos.Component
template function to read values from other Atmos components including outputs (remote state)
• Update docs
• Atmos Template Functions
• atmos.Component
function
why
• Atmos now supports custom Go
template functions similar to Sprig and Gomplate
In `Go` templates in Atmos stack manifests, you can use the following functions and datasources:
• [Go `text/template` functions](https://pkg.go.dev/text/template#hdr-Functions)
• [Sprig Functions](https://masterminds.github.io/sprig/)
• [Gomplate Functions](https://docs.gomplate.ca/functions/)
• [Gomplate Datasources](https://docs.gomplate.ca/datasources/)
• [Atmos Template Functions](https://pr-628.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/template-functions/)
description
The atmos.Component
template function allows you to read any Atmos section or any attribute from a section for an Atmos component in a stack, and use it in Go
templates in Atmos component configurations.
Usage
{{ (atmos.Component "<component>" "<stack>").<section>.<attribute> }}
Arguments
• component
- Atmos component name
• stack
- Atmos stack name
• section
- Atmos section name. Any section returned by the CLI command atmos describe component
can be used. A special outputs
section is also supported to get the outputs (remote state) of Terraform/OpenTofu components.
*NOTE:* Using the `outputs` section in the `atmos.Component` command is an alternative way to read the outputs (remote state) of a component in a stack directly in Atmos stack manifests, instead of using the `remote-state` module and configuring Terraform/OpenTofu components to use the `remote-state` module as described in [Component Remote State](https://atmos.tools/core-concepts/components/remote-state)
• `attribute` - attribute name (field) from the `section`. `attribute` is optional; you can use the `section` itself if it's a simple type (e.g. `string`). Any number of attributes can be chained using the dot (`.`) notation. For example, if the first two attributes are maps, you can chain them and get a field from the last map:
{{ (atmos.Component "<component>" "<stack>").<section>.<attribute1>.<attribute2>.<field1> }}
Specifying Atmos stack
stack
is the second argument of the atmos.Component
function, and it can be specified in a few different ways:
• Hardcoded stack name. Use it if you want to get an output from a component from a different (well-known and static) stack. For example, you have a tgw
component in a stack plat-ue2-dev
that requires the vpc_id
output from the vpc
component from the stack plat-ue2-prod
:
components:
terraform:
tgw:
vars:
vpc_id: '{{ (atmos.Component "vpc" "plat-ue2-prod").outputs.vpc_id }}'
• Use the .stack
(or .atmos_stack
) template identifier to specify the same stack as the current component (for which the atmos.Component
function is executed):
{{ (atmos.Component "<component>" .stack).<section>.<attribute> }}
{{ (atmos.Component "<component>" .atmos_stack).<section>.<attribute> }}
For example, you have a `tgw` component that requires the `vpc_id` output from the `vpc` component in the same stack:
components:
terraform:
tgw:
vars:
vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
• Use the printf
template function to construct stack names using static strings and dynamic identifiers:
{{ (atmos.Component "<component>" (printf "%s-%s-%s" .vars.tenant .vars.environment .vars.stage)).<section>.<attribute> }}
{{ (atmos.Component "<component>" (printf "plat-%s-prod" .vars.environment)).<section>.<attribute> }}
{{ (atmos.Component "<component>" (printf "%s-%s-%s" .settings.context.tenant .settings.context.region .settings.context.account)).<section>.<attribute> }}
For example, you have a `tgw` component deployed in the stack `plat-ue2-dev`. The `tgw` component requires the `vpc_id` output from the `vpc` component from the same environment (`ue2`) and same stage (`dev`), but from a different tenant `net` (instead of `plat`):
components:
terraform:
tgw:
vars:
vpc_id: '{{ (atmos.Component "vpc" (printf "net-%s-%s" .vars.environment .vars.stage)).outputs.vpc_id }}'
*NOTE:* By using the `printf "%s-%s-%s"` function, you are constructing stack names using the stack context variables/identifiers. For more information on Atmos stack names and how to define them, refer to `stacks.name_pattern` and `stacks.name_template` sections in [`atmos.yaml` CLI config file](https://atmos.tools/cli/configuration/#stacks)
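As an illustrative sketch (the exact identifiers depend on your own stack naming taxonomy), a `name_pattern` that makes the `printf`-constructed stack names above resolve to names like `plat-ue2-dev` could look like this in `atmos.yaml`:

```yaml
# atmos.yaml (fragment), hypothetical example; use the context identifiers your project defines
stacks:
  # Stack names are built from the `tenant`, `environment` and `stage` context variables,
  # e.g. `plat-ue2-dev`, matching the `printf "%s-%s-%s"` construction above
  name_pattern: "{tenant}-{environment}-{stage}"
```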
Examples
The following configurations show different ways of using the atmos.Component
template function to read values from different Atmos sections directly in Atmos stack manifests, including the outputs of other (already provisioned) components.
# Global `settings` section
# It will be added and deep-merged to the `settings` section of all components
settings:
test: true
components:
terraform:
test:
metadata:
# Point to the Terraform/OpenTofu component
component: "test"
vars:
name: "test"
test1:
metadata:
# Point to the Terraform/OpenTofu component
component: "test1"
vars:
name: "test1"
test2:
metadata:
# Point to the Terraform/OpenTofu component
component: "test2"
vars:
name: "test2"
# Use the `atmos.Component` function to get the outputs of the Atmos component `test1`
# The `test1` component must be already provisioned and its outputs stored in the Terraform/OpenTofu state
# Atmos will execute `terraform output` on the `test1` component in the same stack to read its outputs
test1_id: '{{ (atmos.Component "test1" .stack).outputs.test1_id }}'
tags:
# Get the `settings.test` field from the `test` component in the same stack
test: '{{ (atmos.Component "test" .stack).settings.test }}'
# Get the `metadata.component` field from the `test` component in the same stack
test_terraform_component: '{{ (atmos.Component "test" .stack).metadata.component }}'
# Get the `vars.name` field from the `test1` component in the same stack
test1_name: '{{ (atmos.Component "test1" .stack).vars.name }}'
[docs] Integration GHA fix version compatibility table @goruha (#626)
what
• [docs] Integration GHA fix version compatibility table
why
• Table in tip box looks ugly
2024-06-16
2024-06-17
hey all - working with the ec2-instance module, specifically cloudposse/security-group/aws - my issue is that I’m trying to configure the rule as a security group ingress versus cidr_block. Now, I see online that source_security_group_id can be configured, but it is not liking that in my testing. I can work around this otherwise, but figured I’d ask here first.
Sometimes components don’t expose all the parameters of the child module
This is not “by design”; it just wasn’t a requirement for the original use case we implemented it for. If this turns out to be the case, just open a PR to add the missing variable.
If you share an example of your stack configurations it will be easier to assist
yea, i looked back at the code and it doesn’t look exposed, which is what i kinda figured.
Note, when this happens you should get a warning from terraform about a variable being passed that was not declared
2024-06-18
Hi all. I did not find a proper issue or conversation about this. I noticed a difference between the atmos generate backends
and atmos generate backend %component%
behaviors: templating is not respected by backends
. Is it intentional?
That would not be intentional
@Aleksandr Rozhkov templating was implemented recently (in the past few months). Since atmos generate backends
is not used often, it was not tested with templating. We’ll look into that
(@Aleksandr Rozhkov I’m curious, are you using this strategy to integrate with another system?)
I’m building an internal POC to choose a toolset for TF state orchestration. Using backends instead of a backend per component reduces the initial steps in my workflow.
and templating is needed, to delimit state objects in the same storage.
vanilla TF with scoped states + some orchestration of state dependencies + GH PR-driven UI (GH Actions)
@Erik Osterman (Cloud Posse) POC: check some in-house implementation and compare with possible SaaS usage.
no “other systems” except GH Actions
@Aleksandr Rozhkov the Go template processing in atmos generate backends
is fixed in https://github.com/cloudposse/atmos/releases/tag/v1.82.0
@Aleksandr Rozhkov were you able to validate?
@Andriy Knysh (Cloud Posse) Thanks. It works now.
@Erik Osterman (Cloud Posse) I was busy checking all the Cloud Posse GitHub Actions usable for me. PS: we are multi-cloud with GCP as the main cloud, and I use it for the backend. A lot of opinionated configurations.
2024-06-19
2024-06-22
Add --upload
flag to atmos describe affected
command @aknysh (#631)
what
• Add --upload
flag to atmos describe affected
command
• Update docs
• https://atmos.tools/cli/commands/describe/affected/
why
If the --upload=true
command-line flag is passed, Atmos will upload the affected components and stacks to a specified HTTP endpoint.
The endpoint can process the affected components and their dependencies in a CI/CD pipeline (e.g. execute terraform apply
on all the affected components in the stacks and all the dependencies).
Atmos will perform an HTTP POST request to the URL ${ATMOS_PRO_BASE_URL}/${ATMOS_PRO_ENDPOINT}
, where the base URL is defined by the ATMOS_PRO_BASE_URL
environment variable, and the URL path is defined by the ATMOS_PRO_ENDPOINT
environment variable.
An Authorization header Authorization: Bearer $ATMOS_PRO_TOKEN
will be added to the HTTP request (if the ATMOS_PRO_TOKEN
environment variable is set) to provide credentials to authenticate with the server.
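As a sketch of how this might be wired up in CI (the endpoint values below are placeholders, and the step assumes Atmos is already installed), a GitHub Actions step could set the environment variables and invoke the upload:

```yaml
# Hypothetical GitHub Actions step; endpoint values are placeholders
- name: Upload affected components and stacks
  env:
    ATMOS_PRO_BASE_URL: https://pro.example.com     # assumption: your server's base URL
    ATMOS_PRO_ENDPOINT: api/v1/affected-stacks      # assumption: your server's URL path
    ATMOS_PRO_TOKEN: ${{ secrets.ATMOS_PRO_TOKEN }} # optional; adds the Authorization header
  run: atmos describe affected --upload=true
```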
NOTE: If the --upload=true
command-line flag is passed, the --include-dependencies
and --include-settings
flags are automatically set to true
, so the affected components will be uploaded with their dependencies and settings (if they are configured in Atmos stack manifests).
The payload of the HTTP POST request will be a JSON object with the following schema:
{
"base_sha": "6746ba4df9e87690c33297fe740011e5ccefc1f9",
"head_sha": "5360d911d9bac669095eee1ca1888c3ef5291084",
"repo_url": "https://github.com/cloudposse/atmos",
"repo_host": "github.com",
"repo_name": "atmos",
"repo_owner": "cloudposse",
"stacks": [
{
"component": "vpc",
"component_type": "terraform",
"component_path": "examples/quick-start/components/terraform/vpc",
"stack": "plat-ue2-dev",
"stack_slug": "plat-ue2-dev-vpc",
"affected": "stack.vars",
"included_in_dependents": false,
"dependents": [],
"settings": {}
}
]
}
where:
• base_sha
- the Git commit SHA of the base branch against which the changes in the current commit are compared
• head_sha
- the SHA of the current Git commit
• repo_url
- the URL of the current repository
• repo_host
- the host of the current repository
• repo_name
- the name of the current repository
• repo_owner
- the owner of the current repository
• stacks
- a list of affected components and stacks with their dependencies
2024-06-23
Hi, I’m testing out atmos v1.81 template functions release: How do I get array element for this one?
- '{{ (atmos.Component "aws-vpc" .stack).outputs.private_subnets }}'
It is important to realize that all you are doing is manipulating a text file using templating, not YAML
It is similar to Helm and Helmfile
If you share your component configuration in the stack, I can provide an example.
vars:
  my_subnets: '{{ toJson ((atmos.Component "vpc" .stack).outputs.private_subnets) }}'
Remember that YAML is a superset of JSON, so that should work for you
or use range
{{ range (atmos.Component "vpc" .stack).outputs.private_subnets }}
- {{ . }}
{{ end }}
Thanks!
@prwnd9 did you get it working?
2024-06-25
Update atmos describe affected
command @aknysh (#635)
what
• Update atmos describe affected
command
• If --upload=true
flag is passed, include dependents
for all dependents
(even an empty list), and include the settings
section for all the dependent components
why
• Make the API schema consistent on the server that processes the result of atmos describe affected --upload=true
command
New docs are live! This is a massive update that includes:
• New demos (including with k3s and localstack)
• Simple quick start - https://atmos.tools/quick-start/simple/
• Devcontainer - https://github.com/codespaces/new?hide_repo_select=true&ref=main&repo=cloudposse/atmos&skip_quickstart=true
• Localstack - https://github.com/cloudposse/atmos/tree/main/examples/demo-localstack
• The “Atmos Mindset” https://atmos.tools/quick-start/mindset
2024-06-27
Auto completion for zsh devcontainer @osterman (#639)
what
• Add autocompletion while you type
why
• Better DX (less typing)
Add docker image @osterman (#627)
what
• Add a docker image for Atmos
• Bundle typical dependencies
• Multi-architecture build for ARM64 and AMD64
why
• Make it easier to get up and running with Atmos
Introduce License Check @osterman (#638) ## what
Check for approved licenses
why
Avoid accidentally introducing code that is non-permissively licensed
Fix Codespace url @osterman (#637) ## what
• Update to Codespace URL for main
branch
why
• It was pointed to an older reorg
branch
Reorganize Documentation For a Better Learning Journey @osterman (#612)
what
• Rename top menu items to “Learn” and “Reference”
• Move community to the left, remove discussions and add contributing
• Introduce sidebar sections, so that content is further left-justified
• Consolidate Terraform content into one section to tell a better story about how to use Terraform with Atmos
why
• Reorganize Atmos Docs to better help developers on their learning journey
Note
• Dependency Review action is failing. Might be something new with the action.
actions/dependency-review-action#786
:rocket: Enhancements
Don’t copy unix sockets in atmos describe affected
command @aknysh (#640)
what
• Don’t copy unix sockets when executing atmos describe affected
command
• Fix some links (left over after renaming examples/quick-start
to examples/quick-start-advanced
)
why
• Sockets are not regular files, and if someone uses tools like git-fsmonitor and executes atmos describe affected
command, the following error will be thrown:
open .git/fsmonitor--daemon.ipc: operation not supported on socket
How does one verify the pr is approved by a codeowner before allowing atmos apply
within a PR ?
I was looking at this action to see
https://github.com/cloudposse/github-action-validate-codeowners
Is it possible
• to create a new check for codeowner approval validation
• then the gha for apply can check the other action’s result
  ◦ if it’s green, then it applies
  ◦ if it’s red, the apply should fail
I’m unsure if one gha can check the result of another gha
How does one prevent atmos apply
of components and stacks outside of the atmos PR’s file changes?
For example, if I only changed an s3
bucket in ue1-dev
, i shouldn’t be able to deploy an eks
cluster in uw2-prod
@RB are you asking about how to prevent users or machines from running apply
on other components not related to the changes in a PR?
Yes!
so that is not related to the PR itself?
it’s related to the PR cause the apply is done on the PR itself, no ? other than drift detection
in a GH action? The apply
will be done only on the affected components. The action should not attempt to apply components that are not affected
is it what you are asking?
or you asking how to prevent it in general if the action tries to apply some other components (for any reason, e.g. misconfig or wrong code)?
So if we do atmos apply <component> -s <stack>
and it’s of a component and stack outside of the PR’s changes, it will be applied, no ?
@Andriy Knysh (Cloud Posse)
@RB Atmos (the CLI) does not know anything about PRs, CI/CD and other external systems. It means you can always execute any command, e.g. atmos terraform apply <component> -s <stack>
on the command line. Maybe I’m not understanding your question or use-cases completely, please provide the details
I meant that a comment can be added on the PR to apply other stacks, in order to target specific stacks. I’d like the comment on the PR to only be able to apply the affected stacks
w/o implementation details, the idea would be to do the following in the PR (or in the comment handling action)
1. Execute `atmos describe affected` to get a list of affected components in the stacks (https://atmos.tools/cli/commands/describe/affected/)
2. Detect the component and stack from the comment in the command that the user wants to execute
3. If the provided component and stack is not in the list of the affected components/stacks, show an error to the user
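As a sketch of those steps (assuming `jq` is available on the runner, and that `COMPONENT` and `STACK` are hypothetical environment variables holding the values parsed from the PR comment), a workflow step could gate the apply like this:

```yaml
# Hypothetical comment-handler step; COMPONENT and STACK are parsed from the PR comment
- name: Verify the requested component/stack is affected
  run: |
    atmos describe affected --format json > affected.json
    # Fail the step (non-zero exit from `jq -e`) if the requested pair is not in the list
    jq -e --arg c "$COMPONENT" --arg s "$STACK" \
      'map(select(.component == $c and .stack == $s)) | length > 0' affected.json
```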
Also regarding the drift detection on github issues, how does one only allow specific users (such as codeowners) to allow commenting to apply those stacks ?
Great question - you would update the workflow itself
Checks if the github actor is a member of a specific GitHub team. If
I just did a quick google for that action, haven’t used it
You would put that in the drift remediation workflow
Thanks for sending, I’ll check it out
out of curiosity, is this problem solved already for clients ?
There’s no one way to solve it.
Are you on github enterprise?
i believe so
With GHE, you can use environment protection rules.
You would add this to the workflow, then use protection rules. Those can be as advanced so you would like.
environment: something
This is the most native way in GitHub to control who can do what.
We are leaning mostly on leveraging GitHub for as much as we can, which is why right now, we don’t prioritize a custom solution in our GitHub Actions, when GitHub already supports it.
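As a sketch, restricting who can run the apply job then comes down to attaching a protected environment to it (the environment name and apply command below are placeholders):

```yaml
# Hypothetical job; protection rules (required reviewers, allowed branches, etc.)
# are configured on the `production` environment in the repository settings
jobs:
  atmos-apply:
    runs-on: ubuntu-latest
    environment: production  # GitHub pauses the job until the environment's rules pass
    steps:
      - uses: actions/checkout@v4
      - run: atmos terraform apply vpc -s plat-ue2-prod  # placeholder command
```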
Oh very cool! I’ll check that out, ty
When using the atmos.Component
template function for retrieving a few component outputs, I’ve noticed a slowdown when running a terraform plan
or terraform apply
or even just running atmos
itself; the startup is quite delayed. Just wondering if that is expected, since I guess the more outputs that are pulled this way, the more state lookups are needed?
it should be the same speed as executing terraform output ...
command (which is not super fast). Atmos uses https://github.com/hashicorp/terraform-exec to execute terraform output
programmatically, but the lib executes the same regular terraform
CLI commands via Go.
Ah okay, thanks
Keep us posted.
But @Andriy Knysh (Cloud Posse) is right, that this is expected, since it’s basically a subprocess. Right now, we don’t do any caching.
It would be interesting to learn how you use it, and if you are retrieving exactly the same component outputs in multiple places
Caching would be nice. For me, I’m just passing things like role ARNs or instance profiles from an iam component to something else
Cool, this is a use-case we anticipated would be simplified using atmos.Component
, so glad you are using it. Makes sense it would benefit from caching. I think @Andriy Knysh (Cloud Posse) will be adding it related to other work he is doing to optimize data sources.
Awesome! I’ve been using it on the GCP side for creating multiple Cloud Build triggers and a repo connection, passing the repo connection id to multiple build trigger components, as well as a few GCP project IDs between components. I don’t think the remote-state module you guys have supports GCP yet, so it’s great to have this new option