#atmos (2024-07)
2024-07-01
Hi, we are starting to switch to Atmos from a custom terragrunt config to manage terraform, since it resolves a lot of the custom stuff we need to do. Thanks for this awesome tool! I just have 2 questions:
- When creating the initial S3 backend and DynamoDB table, do you guys use the tfstate-backend module outside of Atmos, or create this manually? I was following the advanced tutorial and it looks good, but I don’t see any docs around this (although maybe I missed something)
- Is there a global env to get the git commit SHA so I can set it as a tag?
Yes, so in this case I recommend a hierarchical backend architecture.
We use the following workflow to initialize the root tfstate backend.
init/tfstate:
  description: Provision Terraform State Backend for initial deployment.
  steps:
    - command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack core-use1-root --auto-generate-backend-file=false
    - command: until aws s3 ls acme-core-use1-root-tfstate; do sleep 5; done
      type: shell
    - command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack core-use1-root --init-run-reconfigure=false
Then you can initialize any number of additional backends, with the state of each of those backends stored in the root backend.
As for a global env to get the git commit number to set as a tag: in the stack configurations, you can use any Sprig or Gomplate function.
Useful template functions for Go templates.
In addition to command-line arguments, gomplate supports the use of configuration files to control its behaviour. Using a file for configuration can be useful especially when rendering templates that use multiple datasources, plugins, nested templates, etc… In situations where teams share templates, it can be helpful to commit config files into the team’s source control system. By default, gomplate will look for a file .gomplate.yaml in the current working directory, but this path can be altered with the --config command-line argument, or the GOMPLATE_CONFIG environment variable.
So if you are running in a GHA, you can read the environment variables using the env function. Or if you want to read it from the .git/HEAD file, you can use a file data source.
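For example, in a stack manifest (a minimal sketch; the component and tag names here are made up, and GITHUB_SHA is the commit SHA that GitHub Actions sets in its default environment):

```yaml
components:
  terraform:
    my-component:   # hypothetical component name
      vars:
        tags:
          # env.Getenv is a Gomplate function; in GitHub Actions,
          # GITHUB_SHA holds the commit SHA of the triggering event
          git_commit: '{{ env.Getenv "GITHUB_SHA" }}'
```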
Ah, I see. And then do you just download tfstate-backend as a vendored component but not actually use it or even define it in stack/catalog? Or will you still do this to move its state later to the bucket it created?
Also, thanks for the info on the templates. I will look into this so I can add it; looks like this will do the trick.
@Erik Osterman (Cloud Posse) this command in the workflow:
  - command: until aws s3 ls acme-core-use1-root-tfstate; do sleep 5; done
    type: shell
does not work for me, so I changed it to the default wait from the AWS CLI:
- command: aws s3api wait bucket-exists --bucket acme-core-use1-root-tfstate
Is there a way to also use templates here? Like if I want to use a profile on this command?
Yes, see https://atmos.tools/core-concepts/custom-commands/ for examples
Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
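As a sketch of what that could look like (the command name, argument, and flag below are hypothetical, not from the thread; the shape follows the custom-commands schema):

```yaml
# atmos.yaml (fragment)
commands:
  - name: s3-wait   # hypothetical custom command
    description: Wait until an S3 bucket exists
    arguments:
      - name: bucket
        description: Name of the S3 bucket to wait for
    flags:
      - name: profile
        shorthand: p
        description: AWS profile to use
        required: false
    steps:
      # Go templates give access to the parsed arguments and flags
      - aws s3api wait bucket-exists --bucket {{ .Arguments.bucket }} --profile {{ .Flags.profile }}
```

It would then be invoked as something like `atmos s3-wait acme-core-use1-root-tfstate --profile my-profile`.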
I see I will try it out thanks!
2024-07-02
2024-07-03
@here
For a limited time, we’re offering the complete Cloud Posse reference architecture as a perk for a monthly GitHub Enterprise Sponsorship.
Please feel free to contact me if you’d like to learn more.
This is ideal for savvy teams who want a head start. The feedback we’ve received is excellent.
2024-07-04
2024-07-05
2024-07-08
does atmos have a validation to check the version?
atmos version
?
Use this command to get the Atmos CLI version
I mean, to set it like a .terraform-version file
or to validate that we are using the correct atmos version
do you want to validate it from the command line, or from CI/CD?
I would like to validate this on the command line. For CI/CD I guess I can just create a workflow to run it, but a teammate asked if we have a way to pin the atmos version like we do for terraform, with a file or something like that
@Andriy Knysh (Cloud Posse)
@Miguel Zablah I guess we need more details on what you want to do here. By validating the Atmos version, do you mean you want to install a particular Atmos version in your CI/CD workflow?
Sounds like maybe you’re looking for a version manager, like https://github.com/tofuutils/tenv
OpenTofu / Terraform / Terragrunt and Atmos version manager
@Miguel Zablah if you were asking about version management, tenv looks very nice for that
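If all that’s needed is a CI guard rather than a full version manager, a check can also be rolled by hand: pin the version in a file and compare it with the installed CLI (a sketch; `.atmos-version` is not an official Atmos convention, just mirroring `.terraform-version`, and the version numbers are placeholders):

```shell
# Hypothetical pre-flight check for CI: fail fast on a version mismatch.
required="1.81.0"    # in practice: $(cat .atmos-version)
installed="1.81.0"   # in practice: parsed from `atmos version` output
if [ "$required" = "$installed" ]; then
  echo "atmos version OK ($installed)"
else
  echo "atmos version mismatch: want $required, have $installed" >&2
  exit 1
fi
```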
Oh yeah this sounds like what I was looking for
2024-07-09
Are there any examples of Atmos that are not the reference architecture? I’m trying to find out if it’s a good fit for our use case, which is simply managing Datadog resources
Have you taken a look at some of these examples: https://github.com/cloudposse/atmos
Terraform Orchestration Tool for DevOps. Keep environment configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.
Er… I meant to link to https://github.com/cloudposse/atmos/tree/main/examples
2024-07-10
Hello,
Would it be possible to make the interactive mode optional? Like using a parameter to start it, or an env variable to prevent it?
what Atmos command do you want to prevent the interactive mode for?
almost all commands don’t use a TUI; it’s just a few that will open a TUI when called w/o parameters, e.g.
atmos
atmos workflow
w/o parameters, they would not execute at all, hence we start an interactive TUI for the user to select the parameters for the commands
Also, we have an open task to fix interactive mode being used for things like help. It was not supposed to work that way
I mean basically, if you just type $> atmos, it’s different behavior from almost all other command-line utilities (aptitude comes to mind).
Just an idea, not too important
Yes, agree - it should not load interactive mode.
(I also find it very annoying when it does that)
Yes, the TUI is definitely not going away, and we want to invest more into it
It’s just that things like help menus gain nothing by being interactive. And when mistyping a command, it goes into an interactive menu rather than just emitting the help output, which is awkward.
we will overhaul atmos help and add an atmos man command soon
2024-07-11
Add Atmos Pro integration to atmos.yaml. Add caching to atmos.Component template function. Implement atmos.GomplateDatasource template function @aknysh (#647)
## what
• Add Atmos Pro integration to atmos.yaml
• Add caching to the atmos.Component template function
• Implement the atmos.GomplateDatasource template function
• Update docs for atmos.Component and atmos.GomplateDatasource
## why
• Add Atmos Pro integration to atmos.yaml. This is in addition to the functionality added in “Add --upload flag to atmos describe affected command”. If the Atmos Pro configuration is present in the integrations.pro section in atmos.yaml, it will be added to the config section when executing the atmos describe affected --upload=true command for further processing on the server:
{
  "base_sha": "6746ba4df9e87690c33297fe740011e5ccefc1f9",
  "head_sha": "5360d911d9bac669095eee1ca1888c3ef5291084",
  "owner": "cloudposse",
  "repo": "atmos",
  "config": {
    "timeout": 3,
    "events": {
      "pull_request": [
        {
          "on": ["open", "synchronize", "reopen"],
          "workflow": "atmos-plan.yml",
          "dispatch_only_top_level_stacks": true
        },
        {
          "on": ["merged"],
          "workflow": "atmos-apply.yaml"
        }
      ],
      "release": []
    }
  },
  "stacks": [
    {
      "component": "vpc",
      "component_type": "terraform",
      "component_path": "components/terraform/vpc",
      "stack": "plat-ue2-dev",
      "stack_slug": "plat-ue2-dev-vpc",
      "affected": "stack.vars",
      "included_in_dependents": false,
      "dependents": []
    }
  ]
}
• Add caching to the atmos.Component template function
Atmos caches (in memory) the results of `atmos.Component` template function execution. If you call the function for the same component in a stack more than once, the first call will produce the result and cache it, and all the consecutive calls will just use the cached data. This is useful when you use the `atmos.Component` function for the same component in a stack in multiple places in Atmos stack manifests. It will speed up the function execution and stack processing.
For example:
components:
  terraform:
    test2:
      vars:
        tags:
          test: '{{ (atmos.Component "test" .stack).outputs.id }}'
          test2: '{{ (atmos.Component "test" .stack).outputs.id }}'
          test3: '{{ (atmos.Component "test" .stack).outputs.id }}'
In the example, the `test2` Atmos component uses the outputs (remote state) of the `test` Atmos component from the same stack. The template function `{{ atmos.Component "test" .stack }}` is executed three times (once for each tag).
After the first execution, Atmos caches the result in memory (all the component sections, including the `outputs`), and reuses it in the next two calls to the function. The caching makes the stack processing about three times faster in this particular example. In a production environment where many components are used, the speedup can be even more significant.
• Implement the atmos.GomplateDatasource template function
The `atmos.GomplateDatasource` template function wraps the [Gomplate Datasources](https://atmos.tools/core-concepts/stacks/templates/datasources) and caches the results, allowing executing the same datasource many times without calling the external endpoint multiple times. It speeds up the datasource execution and stack processing, and can eliminate other issues with calling an external endpoint, e.g. timeouts and rate limiting.
*Usage*
{{ (atmos.GomplateDatasource "<alias>").<attribute> }}
*Caching the result of `atmos.GomplateDatasource` function*
Atmos caches (in memory) the results of `atmos.GomplateDatasource` template function execution. If you execute the function for the same datasource alias more than once, the first execution will call the external endpoint, produce the result and cache it. All the consecutive calls will just use the cached data. This is useful when you use the `atmos.GomplateDatasource` function for the same datasource alias in multiple places in Atmos stack manifests. It will speed up the function execution and stack processing.
For example:
settings:
  templates:
    settings:
      gomplate:
        timeout: 5
        datasources:
          ip:
            url: "https://api.ipify.org?format=json"
            headers:
              accept:
                - "application/json"
components:
  terraform:
    test:
      vars:
        tags:
          test1: '{{ (datasource "ip").ip }}'
          test2: '{{ (atmos.GomplateDatasource "ip").ip }}'
          test3: '{{ (atmos.GomplateDatasource "ip").ip }}'
          test4: '{{ (atmos.GomplateDatasource "ip").ip }}'
In the example, we define a `gomplate` datasource `ip` and specify an external endpoint in the `url` parameter.
We use the [Gomplate `datasource`](https://docs.gomplate.ca/datasources/) function in the tag `test1`, and the `atmos.GomplateDatasource` wrapper for the same datasource alias `ip` in the other tags. The `atmos.GomplateDatasource` wrapper will call the same external endpoint, but will cache the result and reuse it between the datasource invocations.
When processing the component `test` from the above example, Atmos does the following:
• Executes the `{{ (datasource "ip").ip }}` template. It calls the external endpoint using the HTTP protocol and assigns the `ip` attribute from the result to the tag `test1`
• Executes the `{{ (atmos.GomplateDatasource "ip").ip }}` template. It calls the external endpoint again, caches the result in memory, and assigns the `ip` attribute from the result to the tag `test2`
• Executes the `{{ (atmos.GomplateDatasource "ip").ip }}` two more times for the tags `test3` and `test4`. It detects that the result for the same datasource alias `ip` is already present in the memory cache and reuses it without calling the external endpoint two more times
The datasource result caching makes the stack processing much faster and significantly reduces the load on external endpoints, preventing such issues as timeouts and rate limiting.
Can confirm the speed-up is significant with the atmos.Component caching now, thanks @Andriy Knysh (Cloud Posse)!
What is Atmos Pro?
@Erik Osterman (Cloud Posse) @Matt Calhoun
We’re working on something that enhances GitHub Actions for continuous delivery
Does this mean there will be a free version and a paid version?
It doesn’t change anything about how it works today. We are building new features that are not possible today in GHA, and incorporating those into the Atmos Pro GitHub App.
For now, we’re working on enterprise-level features that work in concert with GitHub Enterprise. We’ll have more details in the coming months.
cool, looking forward to it
thanks for sharing!
2024-07-12
2024-07-14
2024-07-15
Who do I ping to get https://github.com/cloudposse/terratest-helpers/pull/23 released?
Previous dependency versions have vulnerabilities that the PR fixes, e.g. https://discuss.hashicorp.com/t/hcsec-2024-13-hashicorp-go-getter-vulnerable-to-code-execution-on-git-update-via-git-config-manipulation/68081
Bulletin ID: HCSEC-2024-13 Affected Products / Versions: go-getter up to 1.7.4; fixed in go-getter 1.7.5. Publication Date: June 24, 2024 Summary HashiCorp’s go-getter library can be coerced into executing Git update on an existing maliciously modified Git Configuration, potentially leading to arbitrary code execution. This vulnerability, CVE-2024-6257, was fixed in go-getter 1.7.5. Background HashiCorp’s go-getter is a library for Go for downloading files or directories from various sourc…
This PR contains the following updates:
Release Notes
gruntwork-io/terratest (github.com/gruntwork-io/terratest)
Modules affected
• packer
• aws
• helm
• azure
Description
• Added support for Packer 1.10 • Fixed error checking in Azure • Fixed tests and documentation for helm, aws, packer modules
Related links
• https://github.com/gruntwork-io/terratest/pull/1395 • https://github.com/gruntwork-io/terratest/pull/1419
Description
• Updated github.com/hashicorp/go-getter from 1.7.4 to 1.7.5.
Related links
• https://github.com/gruntwork-io/terratest/pull/1415
Description
• Updated golang.org/x/net from 0.17.0 to 0.23.0.
Related links
• https://github.com/gruntwork-io/terratest/pull/1402
Configuration
Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
Automerge: Disabled by config. Please merge this manually once you are satisfied.
Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
Ignore: Close this PR and you won’t be reminded about this update again.
• If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.
@Matt Calhoun @Igor Rodionov @Jeremy G (Cloud Posse)
@Andriy Knysh (Cloud Posse)
Since the tests passed, I just released it as a patch release. @Matt Calhoun is out today.
How would one go about running a terraform import under Atmos? Something like atmos terraform "import resource resource-id" component --s stack? (this didn’t work)
@Andriy Knysh (Cloud Posse)
thank you!
2024-07-16
Does atmos.Component allow for a list dependency, or only strings?
I tried to do this:
vpc_cidr_blocks: '{{ (atmos.Component "vpc" .stack).outputs.vpc_private_cidr_blocks }}'
but I get this:
vpc_cidr_blocks: "[10.100.0.0/24 10.100.1.0/24]"
I would like to have this as a normal list, not a string:
vpc_cidr_blocks: [10.0.0.0/24, 10.0.1.0/24]
2024-07-17
Hello,
I’m using atmos.Component in my stack manifest file and I’m trying to fetch a list of subnet IDs. I can’t seem to find the correct syntax.
This:
subnet_ids: |
  {{- range (atmos.Component "vpc" .stack).outputs.private_subnet_ids }}
  - {{ . }}
  {{- end }}
produces this in the tfvars.json file:
"subnet_ids": "- subnet-0x8ae0b62b933ab72\n- subnet-08551c9314d89f695\n- subnet-04f5a5e154ec39aa3\n",
This:
subnet_ids:
  '{{- range (atmos.Component "vpc" .stack).outputs.private_subnet_ids }}
  - {{ . }}
  {{- end }}'
produces this in the tfvars.json file:
"subnet_ids": " - subnet-0f8ae0b634933ab72 - subnet-08551c9314d89f695 - subnet-0423a4e154ec39aa3",
And this:
subnet_ids: '{{ (atmos.Component "vpc" .stack).outputs.private_subnet_ids | toJson}}'
produces this in the tfvars.json file:
"subnet_ids": "[\"subnet-0f8ae2362b933ab72\",\"subnet-08551c9314d89f695\",\"subnet-0423a4e154ec39aa3\"]",
Terraform output:
private_subnet_ids = [
"subnet-0f8a23b62b933ab72",
"subnet-0855123314d89f695",
"subnet-04f5a2354ec39aa3",
]
What’s the correct way to fetch the list of subnet IDs?
This is not the first time people have asked the same question. It’s not about “the correct way to fetch the list of subnet IDs”; it’s about how to use list data types, Go templates, and YAML together to correctly construct valid YAML files (since by using Go templates, you are just manipulating text files). We’ll test that and add some docs in the next Atmos release
So what I’m trying to accomplish won’t work?
it will work, let us find the correct way and we’ll show it here
@Patrick McDonald are you familiar with Helm by any chance?
With the information shared above, I’m trying to move up a level with processing a datasource. If I start with a datasource of:
tags:
  - tag1:foo
  - tag2:bar
  - tag3:123
and a template of:
components:
  terraform:
    monitors:
      vars:
        datasource_import: {{ (datasource "global_tags").tags }}
        test:
        {{- range (datasource "global_tags").tags }}
          - {{ . }}
        {{- end }}
I can run gomplate -d global_tags.yaml -f _template.tmpl and get an output of:
components:
  terraform:
    monitors:
      vars:
        datasource_import: [tag1:foo tag2:bar tag3:123]
        test:
          - tag1:foo
          - tag2:bar
          - tag3:123
I’m having trouble translating that for use in an Atmos stack, however. I can do a stack such as:
import:
  - catalog/rick/*.yaml
vars:
  environment: test
  stage: rick
components:
  terraform:
    monitors:
      vars:
        datasource: '{{ (datasource "global_tags").tags }}'
And get the output:
vars:
  datasource: '[tag1:foo tag2:bar tag3:123]'
But without putting that in quotes I get errors, and I cannot range over the datasource to get a list.
Any guidance on how using that datasource might work?
Hi, I’m testing out the Atmos v1.81 template functions release: how do I get an array element for this one?
- '{{ (atmos.Component "aws-vpc" .stack).outputs.private_subnets }}'
@Andriy Knysh (Cloud Posse) I think we need to add a chapter about this, since it’s very easily misunderstood
Are single quotes required? It’s treating the list as a string.
subnet_ids: '{{ toJson (atmos.Component "vpc" .stack).outputs.private_subnet_ids }}'
subnet_ids: '["subnet-0f8ae0b62b933ab72","subnet-08551c9314d89f695","subnet-04f5a4e154ec39aa3"]'
So we have two different things here:
- The quotes are not needed for Go templates (if you use the quotes, the result will be quoted as well)
- But without the quotes, it’s not valid YAML
It’s a combination of Go templates and YAML. We’ll check how to do it correctly and let you know
( I realize I didn’t read your post carefully enough, @Patrick McDonald)
But w/o using the quotes, it’s not a valid YAML
Well, that’s only because validation is happening before rendering instead of after rendering. I think we have a task to fix that, and move validation after rendering
It’s not about validation (we are not validating anything yet); the YAML Go library we are using does not like that YAML. We are reading the files as YAML before rendering the templates
And we need to read the files as YAML to be able to parse them and get all the sections in order to get the final values for all the sections which are actually used in the Go templates.
So we read the files, parse the YAML, deep-merge all the sections, then use the final values as the context for all the Go templates.
It’s not the other way around: we don’t read the files and then process the templates, because the context for the templates is the final values from all the files and sections.
It’s reading all the files first, then deep-merging all the sections, then finding the final values for the component in the stack, which are the context for all the Go templates.
If we evaluated the Go templates before reading the files as YAML and processing the stacks, then the context would have to be provided separately (we don’t know how, and it would not work anyway, because we want to use the final values for the component in the stack, which we can only know after processing everything).
we’ll look into that
@Patrick McDonald did you find a way to do this?
Okay, I think I got it. This works for me:
vpc_private_subnets: ['{{ range $index, $element := (atmos.Component "vpc" .stack).outputs.vpc_private_subnets }}{{ if $index }}, {{ end }}{{ $element }}{{ end }}']
2024-07-18
Update atmos describe affected and atmos terraform commands @aknysh (#654)
## what
• Update the atmos describe affected command
• Update the atmos terraform command
• Allow Gomplate, Sprig and Atmos template functions in imports in Atmos stack manifests
## why
• The atmos describe affected command had an issue with calculating the included_in_dependents field for all combinations of the affected components with their dependencies. Now it’s correctly calculated for all affected components
• In the atmos describe affected command, if the Git config core.untrackedCache is enabled, it breaks the command execution. We disable this option if it is set
• The atmos terraform command now respects the TF_WORKSPACE environment variable. If the environment variable is set by the caller, Atmos will not calculate and set a Terraform workspace for the component in the stack, but will instead let Terraform use the workspace provided in TF_WORKSPACE
• Allow Gomplate, Sprig and Atmos template functions in imports in Atmos stack manifests. All functions are now allowed in Atmos stack manifests and in import templates
Hi all - In my atmos.yaml file, where I set terraform_workspace_pattern, I want to force all values to be lowercase. Is that possible?
terraform:
  metadata:
    terraform_workspace_pattern: "{tenant}-{environment}-{stage}-ws"
@Andriy Knysh (Cloud Posse)
@Andrew Chemis the terraform_workspace_pattern attribute does not go in atmos.yaml; it should be defined in component stack manifests in the metadata section, e.g.
components:
  terraform:
    my-component:
      vars: {}
      env: {}
      metadata:
        # Override Terraform workspace
        # Note that by default, the Terraform workspace is generated from the context, e.g. `<environment>-<stage>`
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}-{component}"
I want to force all values to be lowercase. Is that possible?
if you want to force it at the component level, you can use Go templates and Gomplate functions: https://docs.gomplate.ca/functions/strings/
strings.Abbrev Abbreviates a string using … (ellipses). Takes an optional offset from the beginning of the string, and a maximum final width (including added ellipses). Also see strings.Trunc. Added in gomplate v2.6.0 Usage strings.Abbrev [offset] width inputinput | strings.Abbrev [offset] widthArguments name description offset (optional) offset from the start of the string. Must be 4 or greater for ellipses to be added. Defaults to 0 width (required) the desired maximum final width of the string, including ellipses input (required) the input string to abbreviate Examples $ gomplate -i ‘{{ “foobarbazquxquux” | strings.
for example:
components:
  terraform:
    my-component:
      vars: {}
      env: {}
      metadata:
        # Override Terraform workspace
        terraform_workspace_pattern: '{{ strings.ToLower "{tenant}-{environment}-{stage}-{component}" }}'
refer to the templating doc for more details: https://atmos.tools/core-concepts/stacks/templates/
let me know if you have questions or need more help on that
Ah, I was in my root manifest file and said the wrong thing . Exactly what I was looking for - thanks!
@Andriy Knysh (Cloud Posse) hmm, just now trying this, I’m getting the error:
Error: Error loading state: Failed to retrieve workspace {{ strings.ToLower "SharedServices-uw2-prod-vpc-ws" }}: invalid value for workspace
In the new backend config it’s creating:
"workspaces": {
  "name": "{{ strings.ToLower \"SharedServices-uw2-prod-vpc-ws\" }}"
}
I have it enabled in my atmos.yaml:
templates:
  settings:
    enabled: true
    sprig:
      enabled: false
    gomplate:
      enabled: true
      timeout: 5
and in my stack manifest:
components:
  terraform:
    vpc:
      metadata:
        terraform_workspace_pattern: '{{ strings.ToLower "{tenant}-{environment}-{stage}-{component}-ws" }}'
Any thoughts? Thanks again!
@Andrew Chemis I’m sorry I showed you that, but Atmos does not support templates in terraform_workspace_pattern; it currently supports templates in these sections: https://atmos.tools/core-concepts/stacks/templates/#atmos-sections-supporting-go-templates
templates are not supported in the entire metadata
section for other reasons (related to inheritance). We will add templates to terraform_workspace_pattern
in the next Atmos release
why do you have the tenant SharedServices
in upper-case? Can you make it lower-case?
brownfield deployment that uses control tower and account factory, so a few accounts don’t match the naming convention. I’ll just adjust the casing
@Andriy Knysh (Cloud Posse) Sorry for repeated annoyances :sweat_smile:. When I try this config,
terraform:
backend_type: cloud
backend:
cloud:
organization: "ac_atmos_testing"
hostname: "app.terraform.io"
workspaces:
name: "{terraform_workspace}-ws"
with a -ws at the end, I get the issue Workspace "sharedservices-uw2-prod-ws" doesn't exist.
I think this isn’t working because the stack manifest is trying to create the workspace using a different naming convention. I don’t want to change how I’m naming my stacks to include ws at the end.
I know i’m not using Atmos properly, but when I change workspaces:name to anything except {terraform_workspace}, it simply errors out.
workspaces:
name: "null"
Workspace "sharedservices-uw2-prod" doesn't exist.
So the only way to customize a workspace name is a component-level override? At the end of the day, what I’m trying to do is set a default workspace convention that differs from the stack name. In my current environment, I will never reuse components in my organization. Adding {component} to my workspace is noise and complicates permission management. But then, for a weird reason, we want to append -ws to the end of the workspace name.
Desired stack name: {tenant}-{environment}-{stage}
desired workspace name: {tenant}-{environment}-{stage}-ws
In our env, {tenant} is basically the account name, so we’re also probably using null-label wrong
with a -ws at the end, I get the issue Workspace "sharedservices-uw2-prod-ws" doesn't exist.
that’s because the selected TF workspace is what’s replaced in the {terraform_workspace} token
to understand how Atmos creates TF workspaces for components, please refer to this doc https://atmos.tools/core-concepts/components/terraform/workspaces/
whatever workspace is calculated for a component in a stack (automatically by using the stack name, or by overriding it in metadata.terraform_workspace_pattern), the token {terraform_workspace} gets replaced with it
in the above case,
workspaces:
name: "{terraform_workspace}-ws"
is not the same as the selected TF workspace (it adds -ws)
execute atmos describe component <component> -s <stack> to see what TF workspace is used; it should be in the workspace field in the output
in any case, this will always be wrong
workspaces:
name: "{terraform_workspace}-ws"
b/c the selected TF workspace will be inserted instead of the {terraform_workspace} token. Adding -ws will make the workspace in TF Cloud different from the selected TF workspace
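The mismatch described above can be illustrated with a small Go sketch. This is a hypothetical stand-in for the token substitution Atmos performs, not its actual implementation; the function name `backendWorkspaceName` is invented for the example:

```go
package main

import (
	"fmt"
	"strings"
)

// backendWorkspaceName sketches the substitution: the SELECTED Terraform
// workspace replaces the {terraform_workspace} token in the backend config.
func backendWorkspaceName(pattern, selectedWorkspace string) string {
	return strings.ReplaceAll(pattern, "{terraform_workspace}", selectedWorkspace)
}

func main() {
	selected := "sharedservices-uw2-prod" // the workspace selected for the component

	// Appending -ws makes the TF Cloud workspace name differ from the
	// selected workspace, which is why Terraform reports it doesn't exist.
	fmt.Println(backendWorkspaceName("{terraform_workspace}-ws", selected))
	// Using the bare token keeps the two names in sync.
	fmt.Println(backendWorkspaceName("{terraform_workspace}", selected))
}
```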
use this instead
terraform:
backend_type: cloud
backend:
cloud:
organization: "ac_atmos_testing"
hostname: "app.terraform.io"
workspaces:
name: "{terraform_workspace}"
and then you can add -ws at the end for a component:
components:
terraform:
vpc:
metadata:
terraform_workspace_pattern: '{tenant}-{environment}-{stage}-ws'
2024-07-19
Does atmos have features for importing existing resources / moving state?
It does not. The back story is how you work with state largely depends on the backend (e.g. TFE, AWS/S3, GCP, Azure, etc). It’s hard to do a generic implementation.
got it
but it supports all standard Terraform commands for state management atmos terraform state list/mv <component> -s <stack>
also terraform import
atmos terraform import <component> -s <stack> ADDRESS ID
The import or move blocks should work too…
Actually come to think of it, if you added those blocks, it would only work for a single workspace and not all workspaces.
Like Andriy said, you can use the atmos terraform import command. If you have multiple imports, you can also use the shell to run the raw commands
atmos terraform shell <component> -s <stack>
terraform import ADDRESS ID
2024-07-20
2024-07-23
Using vars in imports with v1.85
I’m playing with the new version today and trying to use .atmos_stack in an import. In the following config collection, stack is the var and stack_test is hard-coded to the same value. In the output, stack processes fine, showing it’s a string. Then stack2 works fine, changing the hard-coded value to upper. But I cannot change stack to upper, and with the two enabled outputs, _enabled doesn’t process properly, even though the same syntax works fine with the hard-coded value in _enabled2.
Am I trying to do magic again, or is there something I need to learn about the handling of .atmos_stack when going down this path?
Manifest
=-=-=-=
import:
- "catalog/rick/rmq_reboot_status.tmpl"
Import level 1
=-=-=-=
import:
- path: "catalog/rick/group_rmq.tmpl"
context:
settings:
stack: "{{ .atmos_stack }}"
stack_test: "test-rick"
applies_to: "^test"
Import level 2
=-=-=-=
import:
- path: "catalog/rick/_default.tmpl"
context:
group:
not: important_really
Base Template
=-=-=-=
components:
terraform:
monitors:
vars:
settings:
_enabled: {{ .settings.stack | regexp.Match .settings.applies_to }}
_enabled2: {{ .settings.stack_test | regexp.Match .settings.applies_to }}
applies_to: {{ .settings.applies_to }}
stack: "The stack value is {{ .settings.stack }} of kind {{ kindOf .settings.stack }} and type {{ typeOf .settings.stack }}"
stack2: {{ upper .settings.stack_test }}
Outputs
=-=-=-=
vars:
environment: test
monitors:
group:
not: important_really
settings:
_enabled: false
_enabled2: true
applies_to: ^test
stack: The stack value is test-rick of kind string and type string
stack2: TEST-RICK
stage: rick
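To double-check the regexp behavior at play here, a minimal Go equivalent (regexp.MatchString is the underlying stdlib call; the pattern and values come from the example above, and the helper name is invented for the sketch):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesAppliesTo mirrors the `regexp.Match .settings.applies_to` call
// in the template above: does `value` match the `pattern` regex?
func matchesAppliesTo(pattern, value string) bool {
	m, err := regexp.MatchString(pattern, value)
	if err != nil {
		return false
	}
	return m
}

func main() {
	// The hard-coded value matches the ^test pattern...
	fmt.Println(matchesAppliesTo("^test", "test-rick"))
	// ...but a still-unrendered template literal does not, which is one way
	// _enabled can come out false while _enabled2 is true.
	fmt.Println(matchesAppliesTo("^test", "{{ .atmos_stack }}"))
}
```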
And because it’s usually you, I called out @Andriy Knysh (Cloud Posse)
we have two diff ways of using Go templates, and they are executed at completely diff times:
- The templates in imports use the static context; the vars in the context should be static b/c we need to process all the imports first, before we find the final values for the component in the stacks
- The templates in other sections in Atmos stack manifests. These are processed as the last step in the processing pipeline, and they use the final values for the component in the stacks
In short, you can’t use Go templates in imports with random context or context from the component in the stack. The context in imports is static
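The two evaluation passes can be sketched in plain Go text/template terms. This is a simplified illustration of the pipeline order, not Atmos internals:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes tmplText against data and returns the result.
func render(tmplText string, data map[string]any) string {
	var buf bytes.Buffer
	t := template.Must(template.New("t").Parse(tmplText))
	if err := t.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Pass 1: templates in imports, evaluated against the STATIC context only.
	// Final component values do not exist yet at this point.
	staticCtx := map[string]any{"env_name": "myenv"}
	fmt.Println(render(`environment: {{ .env_name }}`, staticCtx))

	// Pass 2: templates in other stack sections, evaluated as the LAST step
	// against the final, deep-merged values for the component in the stack.
	finalCtx := map[string]any{"atmos_stack": "test-rick"}
	fmt.Println(render(`stack: {{ .atmos_stack }}`, finalCtx))
}
```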
refer to this for more details https://atmos.tools/core-concepts/stacks/templates/#template-evaluations
So the same stuff we covered. I misunderstood the change notes as potentially unlocking something that they just don’t.
• “Allow Gomplate, Sprig and Atmos template functions in imports in Atmos stack manifests”
the release allows the functions in imports (so you can use all Gomplate, Sprig and Atmos template functions)
template data (context) is not the same as template functions
this is template data
stack: "{{ .atmos_stack }}"
it starts with a dot
this is template functions
atmos.Component
strings.Lower
Gotcha, thank you. I managed to repeat my misunderstanding by working on a different problem. Sorry to interrupt your day.
not a problem
we can improve Atmos in many ways, but using templates in imports can’t be changed - the reason being that we need to import everything first in order to find the component in the stacks and then find the final vars and config for it. We can’t use the final values in imports (in templates)
I just gotta get hit with the clue stick a few more times. Quite a few of the problems we’re dealing with in our re-work of Datadog monitor management can benefit from keeping Terraform dumber - so I keep repeating mistakes with Atmos’ features.
I’ll get there.
2024-07-26
The atmos opa integration is really nice. I previously wrote a few opa policies on raw terraform code and coupled it with conftest unit tests with coverage. Is there an atmosy way of integrating conftests or unit tests for the atmos opa policies?
How do you folks currently do it ?
some people have OPA policy files in stacks/schemas/opa, and for each policy file there is a test file ending with _test.rego
does atmos run the tests automagically ?
can atmos also run conftest to collect coverage ?
and then in GHA, something like
jobs:
opa-tests:
steps:
- name: Setup OPA
uses: open-policy-agent/setup-opa
with:
version: latest
- name: Run OPA Tests
run: |
for policy in stacks/schemas/opa/*; do
opa test "${policy}" -v
done
(Atmos does not embed conftest or other binaries, but you can execute any of them on the command line or from GHA, or from Atmos workflows, or from Atmos custom commands)
ah ok got it. we’ll use a similar action and run conftest
out of curiosity, any customers using conftest to test their atmos opa policies yet ?
btw, thanks a lot Andriy for your insight and guidance here
i’m not aware of people using conftest, let us know if you know anybody. BTW, conftest looks like a very nice framework to write tests
Hello. Previous behavior for stack imports would fail if a value is missing; now it does not fail and replaces my missing import values with "<no value>". I have Go templating enabled and gomplate disabled in my atmos.yaml. Is this new intended behavior? If so, is the solution to use schemas/opa to ensure stack files have the proper values?
not related to OPA
please read this doc that explains your use-case
Templates in imports and templates in stack manifests are diff things, and they get evaluated at diff stages of the stack processing pipeline
since you have templates in imports and have enabled templates in stack manifests, Atmos tries to evaluate the templates in imports
@Andriy Knysh (Cloud Posse) Thank you for your response.
Ultimately, I’m trying to raise an error for any missing keys when rendering map keys from context, rather than assigning them <no value>. https://stackoverflow.com/questions/49933684/prevent-no-value-being-inserted-by-golang-text-template-library
In Go templating the setting is missingkey=zero, which seems to be linked to ignore_missing_template_values:
import:
- path: "<path_to_atmos_manifest>"
context: {}
ignore_missing_template_values: false
However, it seems like this setting is being ignored completely for me. I verified I’m on the latest atmos version, 1.85.0
Example stack
import:
- path: catalog/defaults/mycatalog
context: {}
ignore_missing_template_values: false
vars:
environment: "myenv"
Example component
components:
terraform:
mycomponent:
vars:
FOO: "{{ .foo }}"
Shouldn’t this give me an error when I describe the stack, since I do not pass .foo?
atmos describe component mycomponent -s myenv
I found that the quotations DO matter in this context…
If I remove the "",
components:
terraform:
mycomponent:
vars:
FOO: {{ .foo }}
Then I’ll get the error I was expecting:
invalid stack manifest 'catalog/defaults/mycatalog.yaml'
yaml: invalid map key: map[interface {}]interface {}{".foo":interface {}(nil)}
Interesting- I hadn’t faced this before.
Hi. Given my last message I am still facing an issue…
Say I have the following files setup as such:
myenv.yaml
import:
- path: "catalog/templates/stacks/stack_template"
context:
foo: "my_string"
env_name: "myenv"
stack_template.yaml
import:
- path: "catalog/templates/components/component_template"
context:
foo: "{{`{{ .foo }}`}}"
vars:
environment: "{{ .env_name }}"
component_template.yaml
components:
terraform:
mycomponent:
vars:
foo: {{ .foo }}
Since foo is declared, it appears normally when I describe the component:
MacBook-Pro:atmos zainzahran$ atmos describe component mycomponent -s myenv | grep foo
foo:
name: foo
foo: my_string
Now, if I remove foo from context in myenv.yaml
MacBook-Pro:atmos zainzahran$ atmos describe component mycomponent -s myenv | grep foo
invalid stack manifest 'catalog/templates/components/component_template.yaml'
yaml: invalid map key: map[interface {}]interface {}{".foo":interface {}(nil)}
^^This is what I expect and what I got.
Now, I noticed some abnormal behavior with missing contexts…
Say I change my component_template.yaml by adding -testing to foo
components:
terraform:
mycomponent:
vars:
foo: {{ .foo }}-testing
The result
MacBook-Pro:atmos zainzahran$ atmos describe component mycomponent -s myenv | grep foo
invalid stack manifest 'catalog/templates/components/component_template.yaml'
yaml: line 4: did not find expected key
That should be normal; however, if I move testing- to the front of that string, like so
components:
terraform:
mycomponent:
vars:
foo: testing-{{ .foo }}
The result is
MacBook-Pro:atmos zainzahran$ atmos describe component mycomponent -s myenv | grep foo
foo:
name: foo
foo: testing-<no value>
Lastly, I tried wrapping them in quotes, but it always results in
components:
terraform:
mycomponent:
vars:
foo: "testing-{{ .foo }}"
Still gives me
MacBook-Pro:atmos zainzahran$ atmos describe component mycomponent -s myenv | grep foo
foo:
name: foo
foo: testing-<no value>
Additionally, adding ignore_missing_template_values: true does not error if there is a <no value> present when importing.
Any help with this would be appreciated.
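The <no value> results observed above match Go’s standard text/template handling of missing map keys. A minimal sketch showing the default option versus missingkey=error (the helper name is invented for the example):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderWith executes the template with the given missingkey option,
// returning the rendered output or the error string.
func renderWith(option, tmplText string, data map[string]any) string {
	t := template.Must(template.New("t").Option(option).Parse(tmplText))
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "error: " + err.Error()
	}
	return buf.String()
}

func main() {
	data := map[string]any{"env_name": "myenv"} // note: no "foo" key

	// Default behavior for maps: a missing key silently renders as <no value>.
	fmt.Println(renderWith("missingkey=default", `foo: testing-{{ .foo }}`, data))

	// missingkey=error surfaces the problem instead of inserting <no value>.
	fmt.Println(renderWith("missingkey=error", `foo: testing-{{ .foo }}`, data))
}
```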
@Zain Zahran this is the same question as https://sweetops.slack.com/archives/C031919U8A0/p1722420585991999
Hi,
i’m trying to use the import path option as described here https://atmos.tools/core-concepts/stacks/imports/#imports-schema
the problem is, i can’t figure out how to use a Go template to set or not set a variable:
so this will work
import:
- path: "<some_path>"
context:
settings:
my_value: null
but this gives me a string instead of null:
import:
- path: "<some_path>"
context:
settings:
my_value: '{{ or .settings.some_value nil}}'
and this won’t run - compile error
import:
- path: "<some_path>"
context:
settings:
'{{ if .settings.some_value}}'
my_value: '{{ .settings.some_value }}'
'{{ else }}'
my_value: null
'{{ end }}'
note that the templates in imports are completely different templates than in other sections. Templates in imports are evaluated first (in order to be able to import all the files and process all the configs), and they use the static context defined in the context var
this means that this is not correct
import:
- path: "catalog/templates/components/component_template"
context:
foo: "{{`{{ .foo }}`}}"
b/c you are trying to define context using a template variable from the same context. context in imports should have static variables, and not use templates that use other Atmos sections, for the reason explained in the other thread - we need to import all files first, and for that we need to know all the template variables
please take a look at this doc: https://atmos.tools/core-concepts/stacks/templates/#template-evaluations
it explains why the templates in imports are not the same as templates in other Atmos sections, and why they should be treated differently
@Zain Zahran I can help you more, please DM me your setup and I’ll take a look
2024-07-29
2024-07-30
How to prevent developers from modifying component stack yaml that should not be modified ?
For example, aws-team-roles is a high-privilege component where only ops should be able to modify the terraform code and the stack yaml.
The catalog is defined here, stacks/catalog/aws-team-roles, and imported here, stacks/org/stage-1/global.yaml. These 2 files have codeowners for ops teams; however, the global.yaml also imports files for IAM roles such as IRSAs for services. This file could technically be modified to misconfigure aws-team-roles or modify it in a way that adds additional perms
import:
- stacks/catalog/services/irsa.yaml
# last imports
- stacks/catalog/aws-team-roles
We could use (atmos or non-atmos) OPA for stacks/catalog/services/irsa.yaml, but there must be a better way to go about this
Using Atmos OPA policies you can restrict which files may configure components. Then use CODEOWNERS with repository/org rulesets to prevent those files from being modified without approval from specific groups
but doesn’t atmos OPA work exclusively on deep merged yaml ?
i only want to block developers from contributing high priv components in their stack yaml
Got it. Yes, in that case you probably need something that operates at the HCL level.
The other thing you can do, is have different GitHub environments which have different OIDC scopes
Then, using a more privileged environment you could require additional reviews
or leverage the environment protection rules we previously mentioned
The hcl level is easy to gate with codeowners, you must mean the yaml level, no?
Hmm, yes, perhaps diff roles and reviewers would work; codeowners cannot be dynamically set tho
i only want to block developers from contributing high priv components in their stack yaml
There are 2 things at play here.
A) Developer tries to change the configuration for privileged components like aws-teams or aws-team-roles. For this, you can use Atmos OPA to restrict where those configurations can be defined, combined with CODEOWNERS on the paths for rego policy files and protected stack configurations.
B) Attacker decides to just create a new privileged component and copy aws-teams to foobar. To protect against this, you may need another level of defense.
(B) is what I’m talking about using either GitHub environments or HCL-level policies
Ah I see. B I’m not too worried about at the moment. You’re right, that is another concern for sure, though. More to think about here… Maybe we can just codeowner all of components/terraform to ops to gate new terraform from entering.
I’m more concerned with A. We can easily protect the catalog with codeowners but we cannot easily protect the deep merged yaml catalogs that are imported which can alter the component prior to it getting into the root stack
With atmos opa, we can only run policies on the final deep merged output, but then we would need to know what specifically we are gating against.
We essentially want zero changes to component X’s yaml inputs, at any level of the hierarchy, and I’m probably too new, but I don’t see an easy way to do this with opa
unless we took an md5 of all the yaml inputs of components X, Y, Z and if the md5 changed, it would disable the ability to merge the PR
With atmos opa, we can only run policies on the final deep merged output, but then we would need to know what specifically we are gating against.
This is wrong. We built the OPA for a client specifically for this exact use case. It’s true that it’s deep merged, but we include the file it came from as part of the metadata, so you can reject untrusted files.
I’m looking for an example
Here’s where you can see the data structure
Use this command to describe the complete configuration for an Atmos component in an Atmos stack.
ohhhhh very cool
i didn’t realize that, ty
so you want sources.vars.stack_dependencies.stack_file, and only allow that to be certain paths
perfect, so we can allowlist the sources, use codeowners on each source, and then verify the sources using OPA with atmos
Yep, that’s it
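The agreed approach can be sketched outside OPA as plain Go: allowlist the stack files that may configure a privileged component and reject any other source. The allowlist paths and function name here are illustrative; the real check would be a rego policy over sources.vars.*.stack_dependencies.stack_file:

```go
package main

import (
	"fmt"
	"strings"
)

// trustedPrefixes is a hypothetical set of stack-file path prefixes that are
// permitted to configure privileged components (the real list lives in rego).
var trustedPrefixes = []string{
	"stacks/catalog/aws-team-roles",
	"stacks/org/stage-1/",
}

// isTrustedSource checks one stack_file value against the allowlist - the
// same test an OPA rule over the component's sources metadata would perform.
func isTrustedSource(stackFile string) bool {
	for _, prefix := range trustedPrefixes {
		if strings.HasPrefix(stackFile, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isTrustedSource("stacks/org/stage-1/global.yaml"))    // trusted path
	fmt.Println(isTrustedSource("stacks/catalog/services/irsa.yaml")) // untrusted: reject
}
```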
2024-07-31
Hi,
i’m trying to use the import path option as described here https://atmos.tools/core-concepts/stacks/imports/#imports-schema
the problem is, i can’t figure out how to use a Go template to set or not set a variable:
so this will work
import:
- path: "<some_path>"
context:
settings:
my_value: null
but this gives me a string instead of null:
import:
- path: "<some_path>"
context:
settings:
my_value: '{{ or .settings.some_value nil}}'
and this won’t run - compile error
import:
- path: "<some_path>"
context:
settings:
'{{ if .settings.some_value}}'
my_value: '{{ .settings.some_value }}'
'{{ else }}'
my_value: null
'{{ end }}'
please note that Go templates in imports are different from Go templates in stack manifests
please see this doc https://atmos.tools/core-concepts/stacks/templates/#template-evaluations
templates in imports are evaluated as the very first step, and they require a static context which should not contain templates depending on other sections (those are evaluated as the very last step in the stack processing pipeline)
this is not a technical limitation, it’s fundamental (which means it’s not a bug and we can’t fix it).
in order to process a component in a stack, we need to import all manifest files (b/c we can have config for the component in any of the manifests), but to import all of the manifest files we need to know all the template values.
In other words, we can’t use template values in imports if those values depend on the final value from the component in the stack (which we can know only after we import and process everything)
ok.
you can use templates in two places at the same time: in imports and in stack manifests (please see the doc above). But those are completely diff templates, they use completely diff contexts, and they are evaluated at diff stages