#atmos (2023-03)
2023-03-01
How can I plan or deploy the account module in the gbl region under the mgmt or core tenant in the example within the atmos repo, when the stack name pattern is formatted as '{tenant}-{environment}-{stage}'?
atmos terraform plan account -s mgmt-gbl-root
accounts are provisioned in the root account
you need to create a YAML config file in `stacks/catalog/account/defaults.yaml`:

components:
  terraform:
    account:
      vars: ...

then in your top-level stack for `mgmt-gbl-root` you need to import that
assuming this folder structure (and it can be anything suitable to your needs):

stacks/
  catalog/
    account/
      defaults.yaml
  orgs/
    org1/
      mgmt/
        root/
          global-region.yaml
in the `global-region.yaml` file, import the account Atmos component:

import:
  - catalog/account/defaults
then execute
atmos terraform plan account -s mgmt-gbl-root
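Putting it together, a minimal sketch of what `stacks/orgs/org1/mgmt/root/global-region.yaml` could look like so the stack resolves to `mgmt-gbl-root` under the `{tenant}-{environment}-{stage}` name pattern (the context vars shown here are assumptions, not taken from the thread):

import:
  - catalog/account/defaults

vars:
  tenant: mgmt
  environment: gbl
  stage: root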
excellent, i’ll give that a go when i get back to my desk
(also, we did launch the refarch channel, which is more focused on questions like these related to our terraform components)
2023-03-06
Hi guys, I ran into some strange behavior with stack imports.
I have 4 stacks:
• `./atmos/stacks/backend.yaml` - with some vars
• `./atmos/stacks/base.yaml` - with some vars and an import:
  import:
    - backend.yaml
• `./environments/advisor/asa/backendoverlay.yaml` - with some vars
• `./environments/advisor/asa/newbase.yaml` - with some vars and an import:
  import:
    - base
The idea is to overwrite common-layer settings with environment settings - this is why the stacks live in different folders. All 4 stacks are available in `atmos describe stacks`. But when I try to modify newbase to overlay the backend in `./environments/advisor/asa/newbase.yaml`:
import:
  - base
  - backendoverlay
I see this error:
no matches found for the import 'backendoverlay' in the file '/home/viacheslav/work/repos/helmfile-atmos/environments/advisor/asa/base.yaml'
Error: failed to find a match for the import '/home/viacheslav/work/repos/helmfile-atmos/atmos/stacks/backendoverlay.yaml' ('/home/viacheslav/work/repos/helmfile-atmos/atmos/stacks' + 'backendoverlay.yaml')
Why does Atmos look in only one path during import, but in both during describe? Can I import anything from `environments/advisor/asa/`?
atmos.yaml for stacks looks like:

stacks:
  base_path: "atmos/stacks"
  included_paths:
    - "../../environments/advisor/**/*"
    - "**/*"
  excluded_paths:
    - "../../environments/advisor/**/atmos.yaml"
    - "../../environments/advisor/**/secrets.yaml"
    - "../../environments/advisor/**/versions.yaml"
    - "../../environments/advisor/**/stateValues.yaml"
  name_pattern: "{stage}"
ATMOS_BASE_PATH points to the root of the repo (which contains atmos/stacks and environments/advisor folders)
@Andriy Knysh (Cloud Posse)
all stacks must be under the base_path - this is the root directory for all stacks
# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are considered paths relative to 'base_path'.
base_path: ""
the `stacks` and `components` folders must be under `base_path`
the path to a stack is calculated as `base_path` (if present) + `stacks.base_path` + the import
the path to a terraform component is calculated as `base_path` (if present) + `components.terraform.base_path`
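A minimal sketch of how those paths compose (the concrete paths are illustrative, not from this thread):

# atmos.yaml
base_path: ""  # or set via the ATMOS_BASE_PATH ENV var

stacks:
  base_path: "stacks"

components:
  terraform:
    base_path: "components/terraform"

# With this config, the import 'catalog/account/defaults' in a stack file resolves to
#   <base_path>/stacks/catalog/account/defaults.yaml
# and the Terraform component 'account' resolves to
#   <base_path>/components/terraform/account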
@Andriy Knysh (Cloud Posse) Thanks, got it. So if I need to use an "environments" folder, or "components", or anything else to import stacks from, then I need to set `stacks.base_path` equal to the repository root to access all stacks in nested folders, right? I was just confused that I can describe and apply stacks that live outside the `stacks.base_path`, but can't import them. Thanks!
Is there a way to define a stack dependency within the stack yaml?
we have very, very limited support for that in Spacelift, not for manual deployment
for that, you can use workflows
Workflows are a way of combining multiple commands into one executable unit of work.
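A minimal sketch of a workflow definition (the workflow, component, and stack names are hypothetical; the file lives under the configured `workflows.base_path`):

workflows:
  deploy-networking:
    description: Plan and apply the networking components as one unit of work
    steps:
      - command: terraform plan vpc -s plat-ue1-dev
      - command: terraform apply vpc -s plat-ue1-dev -auto-approve

It would then be run with something like `atmos workflow deploy-networking -f <workflow-file>`.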
my fault…specifically for spacelift
btw, we are working on improvements to it using the latest features that Spacelift added to their TF provider and UI
so I would not use the old thing that we have (and it’s very limited anyway, and does not work in all cases)
is that something coming out soon’ish?
they added this https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack_dependency a few months ago
yup…that’s exactly what i was looking at
you can use it on your own now
but we don’t have support for it yet anywhere (we are working on it, don’t know about ETA)
good deal. under a large microscope so can’t lend time right now, but if i get a window I’ll look at adding support
(…manually on this end)
is support for Spaces already in as well, @Andriy Knysh (Cloud Posse)?
yes, spaces are working (I’d say not perfectly, some issues come up periodically). @RB was working on them
ok, sweet. I don’t know how we’d use them just yet, but I’m sure we’ll look into that soon
• This fixes an issue w/ the new administrative GIT_PUSH policy when using spaces (unless otherwise specified, it will be created in the legacy space, which is a problem for anyone using spaces!)
Hi, I was looking at the atmos documentation and I noticed the command `atmos vendor diff` is not documented. Also some other things:
I was having a very hard time finding something like a complete reference page for component.yaml - something similar to the GitHub Actions reference page that shows every option and lets you click through to more relevant docs.
I was specifically looking for documentation on the mixins section of component.yaml, and couldn't find any, so I just copied from other places in our codebase
`atmos vendor pull` is
ok, i never even dug further, I just saw it here and looked for docs
Use Component Vendoring to make a copy of 3rd-party components in your own repo.
also some hyperlinks in the docs point you to the docs dir in the atmos repo, which is nearly empty. i'll try to find an example
@Andriy Knysh (Cloud Posse) yes I saw that page, but wasn’t sure if those options were exhaustive, and also they don’t really explain themselves. one single example wasn’t enough for me to feel that i have a full understanding of what that file is doing
example of component.yaml
https://github.com/cloudposse/atmos/blob/master/examples/complete/components/terraform/infra/vpc/component.yaml
# 'vpc' component vendoring config
# 'component.yaml' in the component folder is processed by the 'atmos' commands
# 'atmos vendor pull -c infra/vpc' or 'atmos vendor pull --component infra/vpc'

apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-vendor-config
  description: Source and mixins config for vendoring of 'vpc' component
spec:
  source:
    # Source 'uri' supports the following protocols: Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in <https://github.com/hashicorp/go-getter>
    # In 'uri', Golang templates are supported <https://pkg.go.dev/text/template>
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}
    version: 1.91.0
    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star `**` is supported)
    # <https://en.wikipedia.org/wiki/Glob_(programming)>
    # <https://github.com/bmatcuk/doublestar#patterns>
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
  # mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # mixins are processed in the order they are declared in the list
  mixins:
    # <https://github.com/hashicorp/go-getter/issues/98>
    # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
    # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
    # This mixin `uri` is relative to the current `vpc` folder
    - uri: ../../mixins/context.tf
      filename: context.tf
(and yes, the docs still need improvements, but it takes an enormous amount of time to describe every single feature; we've spent months on the docs already, and they're still not 100% ready except for some sections)
Yeah I get it - documentation is my least favorite part of shipping a product. Just wanted to share my experience
check `component.yaml` in the examples here https://github.com/cloudposse/atmos/tree/master/examples/complete/components/terraform/infra - it describes some diff use cases (but the schema is the same as shown here https://atmos.tools/core-concepts/components/vendoring)
I would recommend for `vendor diff` to at least add stub documentation explaining what you told me, or just disable the command via a flag in the CLI
And finally one other piece of feedback which I will start a new thread for
I feel one of the core parts of the atmos experience is reusing open source code, which is great, but things are "vendored" in using a git clone at a point in time, with changes made via mixins; otherwise a `vendor pull` will silently overwrite your changes - this makes it very hard to upstream. I know any change to atmos in this area is likely a large architectural change, but if components were somehow vendored in as git submodules, or in some way that would enable us to use existing tooling to maintain changes against upstream, that would help to foster upstreaming/contribution
overall I like opportunities to reduce SLOC that I am maintaining in infra projects, which is why i use OSS modules as much as possible, but vendoring in a component can quickly add 1k+ lines to my PR that I worry my team won’t have time to properly review
well, thanks for the feedback. atmos (at least currently) is not a tool to upstream and downstream changes to TF components. It has just one simple command, `vendor pull`, to get a component the first time w/o copying it manually
this is just a very small part of the whole vendoring/upstream/downstream process
the larger part is how to keep the components in sync, always up to date and always tested, and if a user changes something, how to make sure that it works for all other people
this is a very complicated process (and not related to atmos)
we are discussing this internally
and we need a lot of things to be implemented before this is ready for prime time
do you find people using `atmos vendor` commands in CI to make sure that their component library matches the remote?
e.g. automatic testing of all components, all new changes, etc. (including on platforms like Spacelift)
see the above, this is not the main part, the more important part is how to keep hundreds of components up to date and tested
in our experience, people get a component and change it, then they want to upstream, and then everything is different and not tested
we need a process for this
with hundreds of components, something always changes somewhere
I have not seen a single infra yet where the same component was exactly the same (well, except for very simple components where there is nothing to change)
we would be happy to add additional features to atmos to help with all of this, once we figure out the process and all the details
great thank you. yeah as I’ve been learning to use it, i have had some pain points and other desires for the tool, and I thought it would be best to communicate those up to the maintainers
yes thank you for all the feedback
and third, would you consider decoupling the terraform command (apply, plan, …) from the `atmos terraform` command?
I would rather do `atmos terraform -c component -s stack plan`, because then it feels more like atmos will explicitly pass everything after the stack to terraform. When it is `atmos terraform plan -c .. -s ..`, then I wonder what atmos is doing under the hood, and whether they truly support every terraform command (including future commands if atmos is not updated)
a minor UX nitpick that I have learned to live with but was confusing at first
it especially feels weird to do `atmos terraform apply -c component -s stack -auto-approve` (I only just now learned about atmos deploy), because I am passing flags so far removed from the `terraform` that I assume `apply` is going to atmos instead, and I wonder if it will forward my `-auto-approve`
we try to support all terraform commands transparently
including the future ones
maybe something will not work, but we review it on a case by case basis
yes and terraform’s CLI has been very good about compatibility so far so I assume it will be alright. My concern is mainly the ordering of the commands. And I am familiar with it now, but as a beginner it was counterintuitive. It would probably be more disruptive to change it at this stage.
i would say it’s a lot of work and disruption :)
Also regarding stacks - is there any plan to give the ability at the atmos level to retrieve outputs from one stack to be passed as inputs to another stack? I prefer to never use remote_state references in native terraform, I come from a terragrunt world where dependencies’ outputs can be transformed into inputs easily, and that’s something I’m missing
currently we support this abstraction over remote state https://atmos.tools/core-concepts/components/remote-state
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,
no need to use native TF data sources with a ton of configs
and yes, something like you mentioned can be added as well - just need to figure out the interface first and how all of that would fit together (e.g. do we read the real state, or just use vars from the stack config, which will not work in all cases anyway)
(this is much simpler in principle than the vendoring/upstreaming/downstreaming thing)
hey all :wave: is there a way to specify Terraform provider/version information in a stack file? In particular I'd like to configure the TF Kubernetes provider with the correct context without having to pass it in as a variable (e.g. generating a `providers.tf.json` the same way atmos can generate `backend.tf.json`).
alternatively I guess I could use the `KUBE_CTX` variable in my atmos config
generating `providers.tf.json` is not currently supported, but you can use regular variables in `provider.tf` and define them in the stack config YAML
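For example, a sketch of the stack-config side (the component and variable names here are hypothetical; `provider.tf` would declare a matching `variable "kube_context"` and feed it to the Kubernetes provider's `config_context`):

components:
  terraform:
    my-k8s-component:
      vars:
        kube_context: my-cluster-context  # hypothetical variable consumed by provider.tf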
gotcha, thanks! that’s what I’m doing currently
2023-03-07
2023-03-09
Cross posting here: https://sweetops.slack.com/archives/CB6GHNLG0/p1678373778144549
Hi folks, I have a question about Atmos, as I was going over the concepts and trying to map them to what's been said in various office hours over the past year or so.
As per the Atmos docs you have a `components/terraform` directory, and the full list points to https://github.com/cloudposse/terraform-aws-components. Now my question is:
what was the context for keeping all those modules local instead of consuming them from your private/public registry? The child modules do point to the registry
2023-03-14
atmos terraform output -s path/to/stack component -json
doesn't work; atmos seemingly interprets the `-js` and looks for a stack called "on":
Searched all stack YAML files, but could not find config for the component 'component' in the stack 'on'.
I was only able to get this to work with:
atmos terraform "output -json" -s path/to/stack component
Oh I guess this has already been raised https://sweetops.slack.com/archives/C031919U8A0/p1676290032840729
Hi,
I want to extract some JSON properties from the terraform output, but I'm having some trouble with it. When I try to save the terraform output to a file:
atmos terraform "output -json > output.json" main -s security --skip-init
I get an error message, because Atmos recognizes it as a single terraform command, not as a command and argument:
│ Error: Unexpected argument
│
│ The output command expects exactly one argument with the name of an output variable or no arguments to show all outputs.
The alternative option also doesn't fit my requirements:
atmos terraform "output -json" main -s security --skip-init > output.json
because the output contains not only terraform outputs, but also atmos logs like:
...
Executing command:
/usr/bin/terraform output -json
...
Is there a way to pass the `> output.json` redirection to terraform in Atmos, or maybe turn off Atmos' stdout for a specific workflow step? Does Atmos have a native way to allow this?
The final goal is to read the service principal password created in terraform and call the `az login` command to switch the user before running the next step.
yes, we know about the issues. Those are diff issues; the one in GH is to redirect only TF outputs to a file using `>`, for which we need to add log levels to atmos
we’ll fix those
@kevcube try atmos terraform output -s path/to/stack component --skip-init --args --json
2023-03-15
hi, could someone please tell me what tool he's using for authenticating? https://youtu.be/0dFEixCK0Wk?t=2525
ah. lol i tried so many iterations on google. that was not one i thought of. thanks!!
Learn how to use Leapp to supply AWS credentials to tools used within Geodesic.
v1.31.0
what
• Fix an issue when the terraform components were in subfolders in the components/terraform folder deeper than the second level. For example: https://user-images.githubusercontent.com/7356997/224600584-d77e3fe6-a7a4-4d6d-a691-7cd2a5603963.png
• Add atmos describe dependants command
• Update docs
why
• Fix an issue when the terraform components were in subfolders in the components/terraform folder deeper than the second level. The…
2023-03-17
`my-ecs-service` inherits `ecs-service/with-lb`, which inherits `ecs-service/default`.
This doesn't work unless in `my-ecs-service` I also add an inherits line for `ecs-service/default`. It would seem if I inherit something, I should also inherit what it inherits.
Is this the expected functionality? Could this apply the deep merge on inheritance like it does for other parts?
NOTE: I get that this is an array, and YAML arrays aren't merged, so I'm asking if something else could be done here.
YAML example:
components:
  terraform:
    my-ecs-service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/with-lb
---
components:
  terraform:
    ecs-service/with-lb:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/default
        type: abstract
---
components:
  terraform:
    ecs-service/default:
      metadata:
        component: ecs-service
        type: abstract
Yes, this makes sense. Like you said, the inherits key is a list, and lists are stomped on during a deep merge. I don't believe there is any other way to do this for now other than to list all the inherits
I believe @Andriy Knysh (Cloud Posse) is working on something for this
cc: @Matt Calhoun
@johncblandii from the 5 types of inheritance (see https://www.simplilearn.com/tutorials/cpp-tutorial/types-of-inheritance-in-cpp)
Explore the different types of inheritance in C++, such as single, multiple, multilevel, hierarchical, and hybrid inheritance, with examples. Read on!
Atmos currently supports two (Single and Multiple) https://atmos.tools/core-concepts/components/inheritance
Component Inheritance is one of the principles of Component-Oriented Programming (COP)
in your example, it’s “Hierarchical Multilevel Inheritance”, and I’m working on it right now (you will be able to use it next week)
with what we have now (“Multiple Single-Level Inheritance”), this
my-ecs-service inherits ecs-service/with-lb inherits ecs-service/default
can be modeled like this:
my-ecs-service:
  inherits:
    - ecs-service/default
    - ecs-service/with-lb
# The order is important, the items are processed in the order they are defined in the list, see <https://atmos.tools/core-concepts/components/inheritance>
(yes, you need to specify all inherited components in the list since Single-Level inheritance is not transitive)
as usual, your timing is impeccable @Andriy Knysh (Cloud Posse).
Next week would be perfect. We’re getting a lot of new people into doing stacks and I’d like to avoid confusion on their behalf.
v1.32.0 what Add Hierarchical Inheritance and Multilevel Inheritance for Atmos Components Update docs https://atmos.tools/core-concepts/components/inheritance why Allow creating complex hierarchical stack configurations and make them DRY In Hierarchical Inheritance, every component can act as a base component for one or more child (derived) components, and each child component can inherit from one of more base…
I saw that. Need to dig in
oh snap…*
in import lines?
A quick question about `atmos vendor pull` authentication. Currently go-getter supports basic and header authentication for http(s) sources https://github.com/hashicorp/go-getter/blob/main/README.md#http-http. But I didn't find anything about it in the Atmos vendor utils: https://github.com/cloudposse/atmos/blob/c1679524cf66d241e0426672bfadbee6447aed69/internal/exec/vendor_utils.go#L196
Does Atmos support any kind of authentication for vendors? In my case I need to authenticate to JFrog Artifactory (both basic and header auth are supported) to download a zip with a component.
basic auth should be supported if you add `username:password@` to the hostname in the URL - but that's prob a bad idea since you will have the username/pass in the repo in YAML
headers are not supported
in any case, the problem here is where to store the secrets. Maybe we could use ENV vars. This is a separate project that requires consideration (designing first, e.g. where those secrets come from, etc.)
@Viacheslav this was one reason we chose to use go-getter, but as @Andriy Knysh (Cloud Posse) alluded to, we're not sure of the most practical way to pass secrets that is not opinionated on the platform.
If you propose some suggestions, we’ll take that into account.
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) thanks guys! Passing on secrets is always a controversial topic.
Our component.yaml already supports go templates. Maybe can read an ENV?
If not, maybe something easy to add if it solves this use-case
Hi @Erik Osterman (Cloud Posse) / @Andriy Knysh (Cloud Posse) JFrog Artifactory under the hood supports the native authentication mechanism for their private terraform registries implementation.
Option 1:
After running `terraform login <some terraform private registry>`, a credentials file is created under `~/.terraform.d/credentials.tfrc.json`.
If Atmos can look for this path on component initialisation, it might solve the problem for any terraform registry use case, as long as it works with native terraform registry authentication.
Reference: ==> Terraform Credentials Storage
Option 2 (might be the preferred):
Atmos will fetch the following environment variable in case of private registries, e.g. `TF_TOKEN_cloudposse_jfrog_io=<terraform-private-registry-token>`; terraform translates the URI to cloudposse.jfrog.io.
Reference ==> Environment Variable Credentials
I’ve created this PR
The terraform login command can be used to automatically obtain and save an API token for Terraform Cloud, Terraform Enterprise, or any other host that offers Terraform services.
Learn to use the CLI configuration file to customize your CLI settings, including credentials, plugin caching, provider installation methods, etc.
Describe the Feature
This feature is an enhancement for Atmos Components to be able to fetch terraform modules from private registries based on https.
Expected Behavior
Ability to fetch terraform modules from private terraform registries
Use Case
Many companies use private registries and repositories to:
• Decouple from vendor servers.
• Speed up their pipeline builds.
• Avoid breaking routine work when problems appear with vendor servers.
• Improve security.
• etc…
E.g:
• Docker
• Apt
• Rpm
• Npm
• Maven
• Helm
• Terraform
• etc…
Describe Ideal Solution
Option 1:
After running `terraform login <some terraform private registry>`, a credentials file is created under `~/.terraform.d/credentials.tfrc.json`.
If Atmos can look for this path on component initialization, it might solve the problem for any terraform registry use case, as long as it works with native terraform registry authentication.
Reference: ==> Terraform Credentials Storage
Option 2 (might be the preferred):
Atmos will fetch the following environment variable in case of private registries: `TF_TOKEN_cloudposse_jfrog_io=<terraform-private-registry-token>`; terraform translates the URI to cloudposse.jfrog.io.
Reference: ==> Environment Variables Credentials
Alternatives Considered
No response
Additional Context
No response
So, replacing values in a stack YAML is shown at https://atmos.tools/core-concepts/stacks/imports, but it isn't working for me on v1.31.0. Do we have to use path/context on the import for it to work? (meaning it can't read the values without being passed those values)
yaml:
...
map_environment:
  AWS_ENV: "{{ .stage }}"
map_secrets:
  NEW_RELIC_LICENSE_KEY: "/{{ .stage }}/newrelic/license_key"
...
describe component:
map_environment:
  APP_NAME: report-generator
  AWS_ENV: '{{ .stage }}'
  NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: true
  NEW_RELIC_ENABLED: true
map_secrets:
  NEW_RELIC_LICENSE_KEY: /{{ .stage }}/newrelic/license_key
Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
well, it doesn’t look like path/context works either.
any thoughts @Andriy Knysh (Cloud Posse)?
Go templates are supported only in imports, and you have to provide all the values for the templates in the `context` for each import. If you don't provide the values, Atmos does not know anything about how to get them from any other place
import:
  - path: catalog/terraform/ecs-service/default
    context:
      stage: whatever
ok, looks good
what’s the issue?
AWS_ENV: '{{ .stage }}'
^ generated value from describe component
w/o seeing the whole solution it's not possible to understand it
this is a working example https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/catalog/terraform/eks_cluster_tmpl.yaml
components:
  terraform:
    # Parameterize Atmos component name
    "eks-{{ .flavor }}/cluster":
      metadata:
        component: "test/test-component"
      # Parameterize variables
      vars:
        enabled: "{{ .enabled }}"
        name: "eks-{{ .flavor }}"
        service_1_name: "{{ .service_1_name }}"
        service_2_name: "{{ .service_2_name }}"
        tags:
          flavor: "{{ .flavor }}"
imports look like this:
plat-ue1-dev -> mixins/services/all -> mixins/services/service1/all -> mixins/services/service1/app1 -> ecs-service/default
ok…checking the link
import:
  - path: mixins/region/us-west-2
  - path: orgs/cp/tenant1/test1/_defaults
  # This import with the provided context will dynamically generate
  # a new Atmos component `eks-blue/cluster` in the current stack
  - path: catalog/terraform/eks_cluster_tmpl
    context:
      flavor: "blue"
      enabled: true
      service_1_name: "blue-service-1"
      service_2_name: "blue-service-2"
  # This import with the provided context will dynamically generate
  # a new Atmos component `eks-green/cluster` in the current stack
  - path: catalog/terraform/eks_cluster_tmpl
    context:
      flavor: "green"
      enabled: false
      service_1_name: "green-service-1"
      service_2_name: "green-service-2"
yeah, pretty much what i have. maybe it is too many imports down?
or maybe if an import of mine imports the same file it is a problem
you have to provide the context to each import in the chain
the point is, Atmos takes an import like
path: "catalog/terraform/eks_cluster_tmpl_hierarchical"
and the context for the import
context:
  # Context variables for the EKS component
  flavor: "blue"
  enabled: true
  service_1_name: "blue-service-1"
  service_2_name: "blue-service-2"
  # Context variables for the hierarchical imports
  # `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
  tenant: "tenant1"
  region: "us-west-1"
  environment: "uw1"
  stage: "test1"
and just calls Go template functions on the imported file (it's the template), providing the context as the data for the template
and can it use the global `vars` to pass them in?
or would they be available without context?
backstory: all of the ECS services have similar values, but they differ by stage
goal: create the env var at the `ecs-service/default` level instead of baking it into Terraform
I understand what you are talking about, and I was thinking about that for some time now
instead of providing the (hardcoded) context to every import, you are talking about some global “context” to be used for all possible imports
i may be missing something, but that does not work in principle (I need to think about it more)
we already have that global context, it's the `vars` section
but… to get that final `vars` section, Atmos needs to process all imports and all inheritance chains
now, if we want to use the global context before Atmos processes all imports with Go templates, then it's a chicken-and-egg problem: to get the final `vars`, it needs to process all the imports (including imports with templates); but to process all the imports with templates, it already needs the final `vars` (global context)
that’s why it’s not supported yet, we need to think about it
having said that, from what you want “backstory: all of the ECS services have similar values, but they differ by stage” - we’ve implemented many use cases like that already, can help you with it (if you send your full solution or add me to the repo)
Exactly what I’m talking about….a global context. I get why it’s problematic for sure which is why I wanted to ask so I know the boundaries when teaching this internally.
Hopefully we’re getting rolling very soon which will open up the repo for a full review.
I'd definitely love to get feedback
(the global context derived from the final `vars` is a chicken-and-egg problem as I see it)
we can help reviewing it
“backstory: all of the ECS services have similar values, but they differ by stage” - there are a few ways of doing it in Atmos w/o inventing new features
ok sweet. definitely look forward to it
if you have any links to relevant implementation, i can pick the pieces up on how we can do it
the biggest part for this will come soon and I’ll probably just use the path/context approach for those ssm params
2023-03-18
context: all non-prod (6-8 accounts) uses 1 value and prod has a different value.
For the Go template imports, is there a good way to provide a default value?
I’d like to not have to copy the value 6-8 times and would prefer to provide a default value then just override in prod the 1 time.
I have to copy this N times and tweak for prod right now:
- path: "ssm-parameters-tmpl"
  context:
    license_path: nonprod.xml
I could prob do abstract versions then just inherit the default and override specifics. A simpler version in 1 file would be ideal.
possible options (in the tmpl file):

context:
  defaults:
    license_path: nonprod.xml

…or inline…

"/{{ .stage }}/license/path":
  value: "{{ .license_path | nonprod.xml }}"
  description: iText license path
  overwrite: true
  type: String
The difference with the former is it allows a DRY solution in case a value is needed multiple times. Support for both would be ideal.
Thoughts @Andriy Knysh (Cloud Posse)?
imports with Go templates don’t support default values (as of now)
but all of that can be done w/o using Go templates
right. just curious if any patterns emerged
or if it would make sense to do so
there are other patterns that can be used (w/o using Go templates in the imports)
but if you want to use them, you can provide default values in the templates: https://stackoverflow.com/questions/44532017/how-can-i-add-a-default-value-to-a-go-text-template
I want to create a golang template with a default value that is used if a parameter is not supplied, but if I try to use the or function in my template, it gives me this error:
template: t220:
whatever Go templates support (and they support a lot of features), can be used
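For instance, a sketch of the inline option above using the built-in `or` template function (this assumes a context key that isn't provided evaluates as empty, so the fallback kicks in):

"/{{ .stage }}/license/path":
  value: '{{ or .license_path "nonprod.xml" }}'
  description: iText license path
  overwrite: true
  type: String

An import that omits `license_path` from its context then falls back to nonprod.xml, and the one prod stack can override it.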
oh, we can use `or` in here?
ohhhh snap!
if you provide default values in the templates, that means you don't have to specify them in context every time
my man
going to toy with this a bit
Learn the syntax for Go’s text/template package, the basis of the template processing engines in Nomad, Consul, and Vault.
you can use or, and, if, else etc.
so basically we can do these like helmfiles now
the only thing you need to keep in mind is that after the template is processed, the result should be a valid YAML
`{{ if eq .stage "prod" }}` works
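e.g., building on that, a hypothetical one-liner for the license example above (file names are illustrative):

license_path: '{{ if eq .stage "prod" }}prod.xml{{ else }}nonprod.xml{{ end }}'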
nick cage
2023-03-20
v1.32.0
what
• Add Hierarchical Inheritance and Multilevel Inheritance for Atmos Components
• Update docs https://atmos.tools/core-concepts/components/inheritance
why
• Allow creating complex hierarchical stack configurations and make them DRY. In Hierarchical Inheritance, every component can act as a base component for one or more child (derived) components, and each child component can inherit from one or more base…
2023-03-21
v1.32.1
what
• Update atmos describe affected command
why
• Handle atmos absolute base path when working with the cloned remote repo in the describe affected command. Absolute base path can be set in the base_path attribute in atmos.yaml, or using the ENV var ATMOS_BASE_PATH (as it's done in geodesic). If the atmos base path is absolute, find the relative path between the local repo path and the atmos base path. This relative path (the difference) is then used to join with the remote (cloned) repo path
is there a plan to add in graphviz or similar support to map out component connections, @Andriy Knysh (Cloud Posse)?
it’s a complicated topic, but yes we were thinking about a user interface to help with visualizing the whole infra
absolutely complicated. good to know it is a potential offering
Especially now with multiple levels of multiple inheritance, sheesh
Oh, snap, looks like you already implemented the graphviz for this?!
2023-03-22
v1.32.2
what & why
• Converted 'Get Help' nav bar item to a CTA button
• More user-friendly 404 page
2023-03-24
Hey Everybody, is there a way to declare an outside file path as a component dependency? I'm using a somewhat eccentric terraform provider that lets me point all of my actual configuration at a relative file path that falls outside of my atmos directories. When I run `atmos describe affected` on a PR made to that outside directory, the change doesn't get picked up as impacting my stack, and my workflow doesn't identify that a new plan is needed
Thanks for bringing this over! Still wrestling with it
@jbdunson what dependencies are those? Terraform component deps?
note that `describe affected` takes into account all dependencies from YAML stack configs (all sections) and the component terraform folder
anything changed in the component folder will make the component affected
Atmos does not know anything about random files somewhere in the file system, and it can’t know if your terraform code depends on some files outside of the component folder
if you place your files in a subfolder in the component folder, that should work
Great! thanks for the clarity :)
@Andriy Knysh (Cloud Posse) is the subfolder pickup a recent change? I’m using atmos v1.26.0
Currently my component looks like:
• component
  ◦ main.tf
  ◦ versions.tf
  ◦ providers.tf
  ◦ non_tf_subfolder
    ▪︎ subfolder
      • file1.txt
      • file2.txt
it was added some time ago
When I make a change to file1 or file2, describe affected doesn't seem to pick it up. Is it because I have additional subfolders?
it should check everything in the component folder
if you run `atmos describe affected --verbose=true`, do you see those file changes in the output?
also note, you have to not only change a file, you have to commit it
it’s all about git
I do - the path to file.txt is outputted but the “Affected components and stacks:” array is blank
atmos uses go-git to get a list of changed files
show the output from the command
we’ll test this use case. If it’s not working, we’ll fix it in the next release. For now, if you put the files into the component folder (not in a subfolder), it should work ok
apologies for the delay, hope this helps ^
yes, thanks, we’ll test it (and cut a new release with a fix if not working now)
cool, thank you for looking into it! Appreciate the support
@jbdunson please use this release https://github.com/cloudposse/atmos/releases/tag/v1.32.4
Awesome - will give it a go, thanks for the quick turnaround @Andriy Knysh (Cloud Posse)
@Andriy Knysh (Cloud Posse) can confirm the feature works well with our use case!
v1.32.3
what & why
• Use consolidated search index between atmos.tools and docs.cloudposse.com #351
• Fix panic on "index out of range" in terraform two-words commands (if atmos component is not provided on t…
2023-03-25
v1.32.4
what
• Update atmos describe affected command
why
• Check if not only the files in the component folder itself have changed, but also the files in all sub-folders at any level
test
• For example, if we have the policies sub-folder in the component folder components/terraform/top-level-component1, and we have some files in the sub-folder (e.g. components/terraform/top-level-component1/policies/policy1.rego), and if the files changed, atmos describe affected would mark all Atmos components that use…
2023-03-27
2023-03-28
Is there a good way to read the component name of a stack from within that component? For example, I need the equivalent of TF_VAR_spacelift_stack_id from here, but in a local run
This article describes the environment in which each workload (run, task) is executed
atmos describe component xxx -s yyy
But I am looking to reference that component name inside the terraform code.
well, terraform code is supposed to be generic and not related to the configuration (separation of logic and config)
but you can use remote-state to get the remote state of any atmos component
yeah, but in this situation I need to alter the spacelift stack from within the spacelift stack, to add GCP credentials. Feature request for atmos: expose some informational environment variables like spacelift does in that link
and I need to run from local first, because I am privileged in GCP, so I can't rely entirely on spacelift's variables in this case
@RB thanks, that’s exactly what I’m talking about.
so I have this
block_device_mappings:
  - device_name: "/dev/sda1"
    no_device: false
    virtual_name: null
    ebs:
      volume_size: 20
      delete_on_termination: true
      encrypted: true
      volume_type: "gp2"
      iops: null
      kms_key_id: null
      snapshot_id: null
in a `type: abstract` component, and then I use it like so:
asg/pepe:
  metadata:
    component: asg
    type: real
    inherits:
      - asg/pepe/defaults
  vars:
    name: "pepe"
    enabled: true
    block_device_mappings:
      - device_name: /dev/sda1
        ebs:
          volume_size: 200
but then if I describe the component, the block mapping ends up like:
block_device_mappings:
  - device_name: /dev/sda1
    ebs:
      volume_size: 200
all the other options removed
I guess this is because it is a map?
@Andriy Knysh (Cloud Posse)
Cannot deep merge lists
it's a list; we don't merge lists for many reasons. In this case you need to copy all the settings from one component to the other
Maybe can use go templates as a workaround
Atmos supports those natively
what other type does not do deep merge?
Lists are the only thing
It’s because what does it mean? Do you append the items? Do you prepend the items? Do you merge on index in the list?
There’s no one way to do it. But with maps and scalars it’s straightforward.
yes, it is hard
you guys should not allow lists ever again as input vars
lol
So it's not that it's hard - it's easy to implement any one of those algorithms. But someone is going to want the behavior of appending. Someone is going to want prepending. And another is going to want deep merging on index.
in all the latest terraform modules we used maps everywhere for all vars
this asg module is not the latest, not updated
the asg module has this:
variable "block_device_mappings" {
description = "Specify volumes to attach to the instance besides the volumes specified by the AMI"
type = list(object({
device_name = string
no_device = bool
virtual_name = string
ebs = object({
delete_on_termination = bool
encrypted = bool
iops = number
kms_key_id = string
snapshot_id = string
volume_size = number
volume_type = string
})
}))
`
how do you go about changing that to a map?
do you have an example of some of the other modules you guys updated?
I think I found something
variable "block_device_mappings" {
description = "Specify volumes to attach to the instance besides the volumes specified by the AMI"
type = map(object({
device_name = string
no_device = bool
virtual_name = string
ebs = object({
delete_on_termination = bool
encrypted = bool
iops = number
kms_key_id = string
snapshot_id = string
volume_size = number
volume_type = string
})
}))
default = {}
}
locals {
block_device_map = { for bdm in var.block_device_mappings : bdm.device_name => bdm }
}
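With a map-typed variable, the stack config from earlier in this thread could key the mappings by device name, and Atmos would deep-merge the override into the abstract defaults instead of stomping them (a sketch reusing the asg/pepe names from above; the exact shape depends on how your wrapper consumes the map):

asg/pepe:
  vars:
    block_device_mappings:
      "/dev/sda1":
        ebs:
          volume_size: 200  # deep-merged with the remaining ebs settings from the abstract component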
yes, it looks good and you can use it (but to modify the public module to use it would take some effort to maintain backwards compatibility)
I have a wrapper already so I’m good
2023-03-29
Use professional tools to find colors, generate uniform shades and create your palette in minutes.
lol
In the Atmos "Hierarchical Layout" there seems to be a lot of assumption about the way we organize our OUs and accounts. I assume this is because it has been a working strategy for Cloud Posse.
However, it seems to be making it much more difficult to adopt into our tooling.
E.g. the hierarchical layout assumes that the accounts living directly under each OU are only separate stages of a single account. This is because the stage variable from the name_pattern is tied to the stack living directly under an OU (tenant).
You can change the name_pattern, but it won't break the overall assumption that stacks actually cannot be per-account. The assumption is more strict than that, because we're limited to the following variables in the name_pattern:
• namespace
• tenant
• stage
• environment
Case: Sandbox accounts. What if we wanted to provision defaults for sandbox accounts for our developers? These sandbox accounts might live in a Sandbox OU (tenant), but they aren't necessarily separate stages of one another, at all. There is no feasible strategy with the name_pattern without breaking the behavior of other stacks.
One option could be to combine our account name and region into the environment variable (possibly without side-effects?) like so: sandbox-account-1-use1.yaml
But then we would be left with several directories where nesting would be better organized, like so: sandbox-account-1/use1.yaml
I can only think that we should have an additional variable in the name_pattern, for example name, to truly identify the account.
I hope I've missed something and Atmos does have the flexibility for this. Any advice would be much appreciated!
@Zach B all those context variables (namespace, tenant, environment, stage) are optional; you can use all of them, or just two, or even just one
regarding the structure of stacks:
- The folder structure is for humans to organize the stack config (so you understand where the config for each org, OU, account, region is). Atmos does not care about the folder structure or how you organize the files in the folders
- Atmos cares about context (namespace, tenant, environment, stage) - Atmos stack names are constructed from the context variables, which must be defined in the stack config files
see https://atmos.tools/quick-start/create-atmos-stacks for more info
In the previous step, we’ve configured the Terraform components and described how they can be copied into the repository.
Right - We've been using these context variables for a while now with Cloud Posse modules and the null label.
I did eventually realize the directory structure is irrelevant. Thanks for clarifying.
I think, as I pointed out to Erik in the thread in the other channel, I had a case where there actually weren't enough context variables that atmos uses to be specific enough for our hierarchy.
I see you are using 4 variables
Atmos supports 4
(granted, the names could be not perfect for your case, e.g. you call the namespace something else)
and to make use of 4 context vars, you need to update `stacks.name_pattern` in atmos.yaml
here’s a working example of using all 4 context vars
stacks:
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{namespace}-{tenant}-{environment}-{stage}"
do you need more than 4?
Right - At the moment what I've done is combine our OU names and account names into the single tenant variable, so that we could support separate accounts under the same OU that aren't necessarily directly related. Such as workloads-data and workloads-jam.
I think more than 4 would eliminate the need to do what I have done here, yes.
yes it would
but that will require a lot of redesigning, including the label module
Makes sense
I'll see how far this gets me. Luckily, we don't actually use the tenant variable to name any of our resources, so this appears to work for us.
At the moment what I've done is combine our OU names and account names into the single tenant variable, so that we could support separate accounts under the same OU
@Andriy Knysh (Cloud Posse) don’t we do this as well for disambiguation?
e.g. there could be multiple “prod” accounts, across multiple OUs
Of course another strategy could be to ensure each distinguished account does live in its own OU or sub-OU, but that is certainly unnecessary to support tooling.
as a side note, if you’re operating in AWS make sure you’re acutely aware of resource name length limits. Make sure to try to keep each context parameter as short as possible.
Right, we do make use of id_length_limit - You've really put together a lot of necessary configuration!
@Zach B also, if `ecme` is used in all stacks, you don't need to include it, and you have one more context var to use
e.g. if a company operates under just one namespace, we don't include it in the stack names b/c it makes all names longer and is not necessary
only if we need to use multiple Orgs, we use namespace
That makes sense. And that is the case for us (a single namespace throughout).
We have been including it in our resource names by default. One idea was that it would add a little bit of additional uniqueness to resources that required uniqueness, such as S3 buckets.
Although there is a good consideration for dropping it completely.
yes
also, tenant just roughly corresponds to an OU
Are there any examples of IAM policies in atmos? This is usually a tricky one.
if, let's say, your sandbox account is not in any OU, you can still create a virtual tenant and use it
it’s not about the Org structure per se, it’s more about naming conventions (how all the resources get their names/IDs)
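A hypothetical sketch of that "virtual tenant" idea in a sandbox stack's `_defaults.yaml` (the names are illustrative, following the name patterns discussed above):

vars:
  # `sandbox` is not a real OU - it's a virtual tenant used only for naming and stack addressing
  tenant: sandbox
  stage: sandbox-account-1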
Atmos as a CLI does not care about IAM and access, it just cares about configurations (and making them DRY)
all those IAM roles are Terraform concerns
So essentially, a component. E.g. an ECS task execution role/policy. These policies can be very granular and change from ECS task to ECS task.
I assume the best way would still be to create a component for it that is used by an atmos stack.
yes, all of that is component’s concerns. In Atmos, you create config for the component
@Zach B regarding using the namespace: 1) you can (and should) use it in the label module for it to be part of the resource names/IDs; but 2) you should not use it for Atmos stack names b/c it's the same Org and the namespace is the same
I’m saying that those two things are configured separately
terraform:
  vars:
    label_order:
      - namespace
      - tenant
      - environment
      - stage
      - name
      - attributes
    descriptor_formats:
      account_name:
        format: "%v-%v"
        labels:
          - tenant
          - stage
this is how to configure the label module
and the Atmos stack pattern is configured in atmos.yaml, which can be the same as the label_order above, or completely different (e.g. not using namespace in the stack name pattern)
Thanks @Andriy Knysh (Cloud Posse). It seemed like passing the namespace in through atmos defaults was the simplest way to get it down into the components to eventually be used by the label module, though.
I’m doing my best to understand this part.
@Zach B has joined the channel
Is it possible for atmos to generate backend files when the backend uses blocks? E.g.:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      prefix = "my-app-"
    }
  }
}
How can the `workspaces` block be included with YAML?:
---
terraform:
  backend_type: remote
  backend:
    remote:
      organization: company
      hostname: app.terraform.io
Yes, I believe the YAML is simply a YAML representation of the HCL data structure (e.g. HCL -> JSON -> YAML)
Note, workspaces are managed by atmos. You can overwrite it, but that will lose some of the convenience that atmos provides.
@Andriy Knysh (Cloud Posse) I don't see any parameter in the atmos.yaml config to manage the workspace format: https://atmos.tools/cli/configuration
Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
What I was experiencing was that atmos was creating a new workspace, but storing the state locally. It appeared I needed to define the remote state for Terraform Cloud in some way.
I believe I needed atmos to generate the backend files for the components at least, but it could not generate the “workspaces” block from YAML.
`workspaces.prefix` is not supported (we did not test the remote backend much, we mostly use s3). Please open an issue in the atmos repo and we'll review it
Thanks for clarifying
note that you can add `backend.tf` or `backend.tf.json` manually for each component and configure atmos to not generate backend files - this will work for you w/o waiting for a fix
also, you can override the auto-generated TF workspace per component in the `metadata` section
"test/test-component-override-3":
metadata:
# Terraform component. Must exist in `components/terraform` folder.
# If not specified, it's assumed that this component `test/test-component-override-3` is also a Terraform component
# in `components/terraform/test/test-component-override-3` folder
component: "test/test-component"
# Override Terraform workspace
# Note that by default, Terraform workspace is generated from the context, e.g. `<tenant>-<environment>-<stage>`
terraform_workspace: test-component-override-3-workspace
It seems like this would require changing the backend.tf each time a different stack uses that component. Because each stack’s component usage will correspond to a different workspace, but each component only has one backend.tf at a given time. (I think)
The S3 backend has

backend:
  s3:
    workspace_key_prefix: infra-vpc
and that allows you to use a static backend file
if you use

workspaces {
  prefix = "my-app-"
}
for remote backend (in the manually created backend file), it should do the same, no?
Yes that should do the same
@Zach B also (sorry if I misled you), whatever you put into the backend.remote section in YAML will be generated into the backend.tf.json file
terraform:
  backend_type: remote
  backend:
    remote:
      organization: company
      hostname: app.terraform.io
      workspaces:
        prefix: "my-app-"
will work and will be in the generated backend file
so remote should work in the same way as s3: the backend file is auto-generated, TF workspaces are auto-generated, and the workspace prefix is used
@Andriy Knysh (Cloud Posse) I tried that, it did not work, at least for “workspaces.name”
I assume it is supported for “workspaces.prefix” instead?
Also, Terraform Cloud recommends using the “cloud” block for backend configuration rather than the “backend remote” block if you are using Terraform Cloud for state management. I’m assuming atmos does not support generating this “cloud” block and will only generate a “backend” block?
I tried that, it did not work, at least for “workspaces.name”
i mean if you configure it in YAML and generate the backend, it will end up in the backend.tf.json file (all blocks, all maps, all lists - they are converted from YAML verbatim). Whether a generated block works with TF Cloud needs to be tested
Ahh I wonder if my issue is that atmos was converting from YAML to HCL. Maybe if I try YAML to JSON this will produce better results.
It appeared that by default, atmos was generating backend.tf rather than backend.tf.json.
Also, Terraform Cloud recommends using the “cloud” block for backend configuration rather than the “backend remote” block if you are using Terraform Cloud for state management. I’m assuming atmos does not support generating this “cloud” block and will only generate a “backend” block?
the cloud block is not supported (when we implemented it, TF did not have that block yet)
so atmos does the following
// generateComponentBackendConfig generates backend config for components
func generateComponentBackendConfig(backendType string, backendConfig map[any]any) map[string]any {
    return map[string]any{
        "terraform": map[string]any{
            "backend": map[string]any{
                backendType: backendConfig,
            },
        },
    }
}
It appeared that by default, atmos was generating backend.tf rather than backend.tf.json
I would say it’s the other way around but I don’t know what you are doing
atmos terraform plan/apply … always generates the backend file in JSON
but this command https://atmos.tools/cli/commands/terraform/generate-backends can generate it in JSON, TF backend block, and HCL
(Sorry, for some reason it’s not letting me do Slack replies)
Anyway, when I tried this with the “name” property rather than the “prefix” property, and ran “atmos terraform generate backend….” Atmos generated a backend.tf file without the “workspaces” block.
I'm going to give it another go.
terraform:
  backend_type: remote
  backend:
    remote:
      organization: company
      hostname: app.terraform.io
      workspaces:
        prefix: "my-app-"
This also generates the backend.tf file?
^ supposed to be a reply to:
but this command https://atmos.tools/cli/commands/terraform/generate-backends can generate it in JSON, TF backend block, and HCL
Ignore the last 2 messages. Apparently I’m having trouble with Slack for mobile.
ok, this looks like a classic example of https://xyproblem.info/
Asking about your attempted solution rather than your actual problem
Let me explain a few ways of using backends in atmos:
1. You can manually create the backend files in the component folders (for any type of backend). With a TF workspace prefix, it will work for all stacks. It will not work only if your backend requires a separate role (e.g. AWS IAM role) for TF to assume for different accounts (in which case we always generate backends dynamically, including the roles for TF to assume)
I think it would be better described as “I don’t know how to do multiple things in Atmos, and we’ve encountered those multiple things I don’t know how to do while trying to solve a single problem” - but yes, essentially.
2. You can configure any backend in YAML (using the backend section) and then call https://atmos.tools/cli/commands/terraform/generate-backends to auto-generate ALL the backends for all the components at once (and then you can commit the files)
3. You can configure any backend in YAML (using the backend section) and then, when calling atmos terraform plan/apply <component> -s <stack>, the backend file for the component in the stack will be generated automatically on the fly
#3 I believe is only true if components.terraform.auto_generate_backend_file is set to true in atmos.yaml?
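(For reference, a minimal sketch of the enabling side of that toggle, mirroring the sketch earlier in this thread:)
# atmos.yaml (sketch)
components:
  terraform:
    auto_generate_backend_file: true   # generate backend.tf.json on the fly during plan/apply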
- Regarding the HCL format for the auto-generated backend files: I think there is a bug in atmos b/c of some restrictions of the HCL Golang library it's using, so for the HCL format complex maps (maps inside of maps) are not generated correctly, but a simple map is ok, e.g. for the s3 backend (I think there is an open issue for this, we'll have to take a look and see if it can be fixed). So don't use HCL, use JSON - everything with JSON is ok
so if you use JSON format, you can use any of the #1,2,3 ways of working with the backends for any type of backend (including remote)
Thanks a lot for clarifying all of that. That is the way that I understood it as well up to this point. I am just looking to confirm some behavior at the moment.
Currently, it still appears atmos is generating a backend.tf file by default during atmos terraform generate backends, but it seems like you think it should be backend.tf.json by default?
With components.terraform.auto_generate_backend_file set to true - the backend file sometimes does not generate during a plan/apply and I receive a Terraform error about needing to run terraform init first. When it does generate the backend file, it is backend.tf.json by default.
So - it appears we have some strange behavior and potentially some fixes that need to be made.
JSON is used by default in atmos terraform plan/apply <component> -s <stack>
I guess that would leave me wondering why it wouldn’t be used by default in atmos terraform generate backend
atmos terraform generate backend also generates a backend for a single component in JSON
you are talking about https://atmos.tools/cli/commands/terraform/generate-backends/ - this uses HCL by default, but you can use the flag --format json
and to answer your question why HCL is the default and not JSON: b/c we did not know about that bug with complex maps in HCL
With components.terraform.auto_generate_backend_file set to true - the backend file sometimes does not generate during a plan/apply and I receive a Terraform error about needing to run terraform init first. When it does generate the backend file, it is backend.tf.json by default.
this ^ we’ve never seen before
anyway, if you want to generate all backends for all components for the remote backend type, use atmos terraform generate backends --format json
Thanks a lot. Makes sense.
One note about atmos generate backend (singular): it appears too closely tied to the s3 backend, maybe? It expects the workspace_key_prefix property in your backend YAML, and will error if it is not present. For remote backends, this property does not exist.
atmos terraform generate backend vpc --stack acme-workloads-data-test-use1
{
  "terraform": {
    "backend": {
      "remote": {
        "hostname": "app.terraform.io",
        "organization": "ACME",
        "workspaces": {
          "prefix": "acme"
        }
      }
    }
  }
}
Backend config for the 'vpc' component is missing 'workspace_key_prefix'
yes, you're right about atmos generate backend (singular) - it's tied to the s3 backend (we'll fix it). The rest, atmos terraform generate backends --format json and atmos terraform plan/apply, work with any backends
@Zach B thank you for all the testing, you pointed out a few issues that we need to fix: support the cloud block (any type of block, not only backend); untie the atmos generate backend (singular) command from the s3 backend. Since we mostly use the s3 backend, those issues were not visible and not tested
@Andriy Knysh (Cloud Posse) Thank you too, a lot. I think I'm about to point out a much deeper issue, though, with remote backends in atmos that use the workspaces block in the backend file. Currently looking into it.
You said atmos automatically manages workspaces and creates the workspace name, I think.
This, I believe, makes it incompatible with the following:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "acme"
    workspaces {
      name = "workspace-name" # will get error "Error loading state: workspaces not supported", because the workspace is already set by atmos?
    }
  }
}
yes, this one too. We need to look into remote backends, this is a separate task
did you get the error, or do you think you'll get it?
Executing command:
/usr/local/bin/terraform workspace new acme-workloads-data-test-use1
^ this is one reason things can break with remote backends. prefix is supported with workspaces in the backend config, but of course a lot of people prefer workspaces being created automatically, without requiring any user interaction.
I got the error myself.
It appears terraform workspace new might set the TF_WORKSPACE ENV VAR, which is where things collide when using a remote backend config with a workspace name
no, it does not set ENV vars
I think TF cloud workspaces are different things from TF workspaces, no?
I personally used TF cloud 2-3 years ago, so I don't have a good understanding of how it works now
I think a potential fix would be not to run terraform workspace new depending on the backend type? (Might not be that simple)
E.g. workspaces in terraform cloud can be automatically generated if they don’t already exist just by seeing the backend config for the first time.
But I think the issue is that atmos provides a way to create workspaces using your naming conventions without requiring you to hardcode them. That would be another problem to solve if you dropped terraform workspace new for certain backend types.
if you ask the thing that knows a lot
You know what’s funny, I asked Chat GPT about atmos last week
It recommended it, haha
so are you sure that this error Error loading state: workspaces not supported is about TF workspaces, and not about your TF cloud account (maybe you don't have workspaces enabled with your plan)?
I’m 95% sure. I have at least 20-30 workspaces in that account lol.
And have been using it for about a year now.
You know what’s funny, I asked Chat GPT about atmos last week
what question did you ask?
“What do you think about atmos? (A terraform tool built by CloudPosse)”
“Atmos is a Terraform module generator tool built by CloudPosse that aims to simplify the process of creating reusable Terraform modules. The tool offers a number of features such as scaffolding, testing, and publishing modules to the Terraform Registry. I think it’s a great tool for those who are looking to create reusable Terraform modules with ease. The tool’s ability to generate a skeleton for a new module, including a test harness, makes it easier to get started with building modules. Additionally, the tool’s integration with Terraform Cloud makes it easy to automate the process of testing and publishing modules. Overall, if you’re looking to create reusable Terraform modules, I think atmos is definitely worth checking out.”
well, the answer looks like a mix of common statements which could be applied to any such tool (and some of those don't apply to our atmos). You give that kind of answer when you don't know the exact answer. The chat has a lot to learn yet
lol..
but the AI gave a nice explanation of what atmos is
ATMOS (Automated Testing Management & Orchestration System)
To be clear, TF Cloud Workspaces != Terraform Open Source Workspaces. I am pretty sure Open Source Workspaces are incompatible/do not work with TFC Workspaces.
Looks like there’s a workaround though
(to be fair to ChatGPT, our atmos.tools docs were not available when they built the first language model)
we'll add a setting in atmos.yaml to enable/disable TF workspaces auto-generation, and fix a few issues in atmos with the remote backend
2023-03-31
trying to take a look at atmos for our environment… we’re already using many cloudposse Terraform modules for our new deployments so looking to see how easily I can migrate things over to using atmos
let us know if you need any help
take a look at https://atmos.tools/category/quick-start
Thanks @Andriy Knysh (Cloud Posse), I'm reading through the docs on there now. We currently use the tfstate-backend module to store our state files remotely in S3 and provide locking.
yes, the module is a good start to use Atmos (we have the config for it, let us know if you need help)
@Andriy Knysh (Cloud Posse) let me get your thoughts on this… I had a bit of a hack using context.tf to match our expected naming… So I'm tweaking the label_order to produce the name following {namespace}-{environment}-{name}-{attributes}-{stage}… {stage} in our case is the region, either use1 or use2, and {environment} is our dev, qa, uat, preview or prod. We generate the {stage} using the terraform-aws-utils module… Any thoughts on a cleaner/simpler way to accomplish the same? This requires me to include the label_order hack in every module to remain consistent
yes, include label_order at the top level in the stack configs
also update stacks.name_pattern in atmos.yaml
oh okay… so I can simply move that up into atmos then… I would add the label_order and label_as_tags (we get rid of all but name tags) to the *.tfvars we ran, and then the stage was set inside main.tf based on the region being deployed… I guess moving to atmos we'd just have the stack imports for us-east-1 or us-east-2 and set stage in there, and no longer need the utils module to generate the short name from the region long name
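(A minimal sketch of what such a region import might look like - the path and the region variable name are illustrative assumptions, not from this thread:)
# stacks/mixins/region/us-east-1.yaml   (hypothetical path)
vars:
  stage: use1        # short region code, replacing the terraform-aws-utils lookup
  region: us-east-1  # assumed AWS region variable name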
yes, in the company-wide defaults, you can add
# orgs/_defaults.yaml
#
# COMPANY-WIDE DEFAULTS
#
terraform:
  vars:
    label_order:
      - namespace
      - tenant
      - environment
      - stage
      - name
      - attributes
    descriptor_formats:
      account_name:
        format: "%v-%v"
        labels:
          - tenant
          - stage
      stack:
        format: "%v-%v-%v-%v"
        labels:
          - namespace
          - tenant
          - environment
          - stage
  backend_type: s3
and update label_order according to your requirements
ah okay… that makes sense now, seeing an example; it also looks easier to read than the example that shows the stacks.name_pattern as {tenant}-{environment}-{stage}, which, if I follow your example there, would just mean it would be equivalent to {namespace}-{tenant}-{environment}-{stage}?
we don't use any *.tfvars files for many reasons; we have all the vars in the stack configs:
- No need to have the vars in many different places
- Atmos does not see the vars in those files
- If all the vars are in YAML, the following Atmos commands will show them (including the sources of all vars): atmos describe component…, atmos describe stacks
- If all vars are in the stack configs, you can use the Atmos validation with JSONSchema and OPA to validate the vars and the relations b/w the vars https://atmos.tools/core-concepts/components/validation
And, lastly, if there are .tfvars files not managed by atmos, they could take precedence, leading to unpredictable behavior.
This last issue was the real kicker that led us to officially not recommend them. Users were getting very confusing results.
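(As an aside, the describe commands mentioned above take the usual component/stack arguments; the names here are placeholders:)
atmos describe component <component> -s <stack>
atmos describe stacks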
yeah I thought about that as well @Erik Osterman (Cloud Posse)…
I've kept my interactions as an independent open-source guy, but I think I may have seen that our parent company has engaged you guys at some point/level… I'm trying to make changes with my boss's support while fitting it in without disruption to what is already there. Getting things under Terraform at any level has been a big win thus far, but it's led things towards Terraliths, as you called them the other week in office hours
We already have accounts set up and running things under them… That would present some challenges to a full multi-account deployment, but I've also said it could be done