#atmos (2024-05)
2024-05-01
Hey all, my team is working through updating some of our atmos configuration, and we’re looking for some guidance around when to vendor. We’re considering adding some logic to our GitHub Actions that would pull components for affected stacks, allowing us to keep the code outside of the repository. One win here would be less to review on pull requests as we vendor new versions into different dev/stage/prod stages. However, is it a better play to vendor in as we develop and then commit the changes to the atmos repo?
Great question! I think we could/should add some guidance to our docs.
Let me take a stab at that now, and present some options.
First, make sure you’re familiar with the .gitattributes file, which lets you flag certain files/paths as auto-generated, collapsing them in your Pull Requests and reducing the eye strain.
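For example, a minimal .gitattributes sketch (the vendored paths are assumptions; adjust them to your repo layout) that marks vendored code as generated so GitHub collapses its diffs in PRs:

# Collapse diffs for vendored code in GitHub Pull Requests
components/terraform/** linguist-generated=true
vendor/** linguist-generated=true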
Option 1: By default, Cloud Posse (in our engagements and refarch) vendors everything into the repository.
Pros
• an immutable record of components / less reliance on remote repositories
• super easy to test changes without a “fork bomb” and ensuing “PR storm” as you update multiple repos
• super easy to diverge when you want to
• super easy to detect changes and what’s affected
• much faster than cloning all dependencies
• super easy to “grep” (search) the repo to find where something is defined
• No need to dereference a bunch of URLs just to find where something is defined
• Easier for newcomers to understand what is going on
Cons
• Reviewing PRs containing tons of vendored files sucks
• …? I struggle to see them
Option 2: Vendoring components (or anything for that matter, which is supported by atmos) can be done “just in time”, more or less like terraform init does for providers and modules.
Pros:
• only things that change or are different are in the local repo
• PRs don’t contain a bunch of duplicated files
• It’s more “DRY” (but I’d argue this is not really any more DRY than committing them; not in practice, because vendoring is completely automated)
Cons
• It’s slower to run, because everything must first be downloaded
• It’s not immutable. Remote refs can change, including tags
• Remote sources can go away, or suffer transient errors
• It’s harder to understand what something is doing, when you have to dereference dozens of URLs to look at the code
• Cannot just do a “code search” (grep) through the repo to see where something is defined
• In order to determine what is affected, you have to clone everything, which is slower
• If you want to test out some change, you have to fork it and create a branch with your changes, then update your pinning
• If you want to diverge, you also have to fork it, or vendor it in locally
Option 3: Hybrid - might make sense in some circumstances. Vendor & commit 3rd-party dependencies you do not control, and for everything else permit remote dependencies and vendor JIT.
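As a sketch of the hybrid approach (the component name, source, and version below are illustrative, not from this thread), a vendor.yaml entry that pins a third-party dependency to an exact release before vendoring and committing it:

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example-vendor-config
spec:
  sources:
    # Third-party code we do not control: vendor & commit, pinned to an exact ref
    - component: "vpc"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}"
      version: "1.398.0"
      targets:
        - "components/terraform/vpc"

Pinning to a commit SHA instead of a tag additionally guards against a tag being moved.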
I didn’t even give thought to someone making a cheeky ref update to a tag
Alright, this is supremely helpful, thanks a ton for the analysis. I’ll bring these points back to my team for discussion.
Great, would love to hear the feedback.
Also, not sure if this is just a coincidence, but check out this thread
Hey Guys
I have a question if someone can help with the answer. I have a few modules developed in a separate repository, and I’m pulling them down to the atmos repo dynamically while running the pipeline, using the vendor pull command. But when I bump up the version, atmos is unable to consider that as a change in a component, and the atmos describe affected command gives me an empty response. Any idea what I’m missing here? Below is my code snippet - vendor.yaml.
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "account_scp"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/account_scp"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - account_scp
I really like the point of this making the configuration set immutable as well. We’re locked in with exactly what we have committed to the repository.
A lot of our “best practices” are inspired by the tools we’ve cut our teeth on. In this case, our recommendation is inspired by ArgoCD (~Flux) - at least the way we’ve implemented and used it.
Yeah, I was working with vendoring different versions today into their own component paths (component/1.2.1, component/1.2.2), and it took me a moment to realize this was changing the workspace prefix in the S3 bucket where the state for the stack was being stored.
it took me a moment to realize this was changing the workspace prefix in the S3 bucket where the state for the stack was being stored.
This is configurable
So now I’m vendoring different versions into different environment paths to keep the component names the same as things are promoted. (1.2.1 => component/prod, 1.2.2 => component/dev)
This is a great way to support multiple concurrent versions, just make sure you configure the workspace key prefix in the atmos settings for the component.
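A rough sketch of that setting (the paths here are hypothetical; the full docs example appears later in this thread):

components:
  terraform:
    component/prod:
      metadata:
        component: vendored/component/1.2.1
      backend:
        s3:
          # keep the state location stable no matter where the code lives on disk
          workspace_key_prefix: component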
So now I’m vendoring different versions into different environment paths to keep the component names the same as things are promoted
This is another way. Think of them as release channels (e.g. alpha, beta, stable, etc)
Either way, it’s great to fix the component path in the state bucket, so you don’t encounter problems if you reorganize how you store components on disk.
So then my configs look sort of like,
components:
  terraform:
    egress_vpc/vpc:
      metadata:
        component: vendored/networking/vpc/dev
        inherits:
          - vpc/dev
    egress_vpc:
      metadata:
        component: vendored/networking/egress_vpc/dev
      settings:
        depends_on:
          1:
            component: egress_vpc/vpc
      vars:
        enabled: true
        vpc_configuration: egress_vpc/vpc
@Andriy Knysh (Cloud Posse) do we have docs on how to “pin” the component path in the state backend?
Note, I would personally invert the paths for release channels.
• vendored/dev/networking….
• vendored/prod/….
Since over time, some components might get deprecated and removed, or others never progress past dev.
Yeah, that’s a very good point and something we’re still working through.
We also want to get to the point where we have “release waves”… so, release a change to a “canary” group, and then roll out to groupA, groupB, groupC, etc….
Version numbers would honestly help with that a bit more, though. How could I pin the workspace key prefix for a client if I did have the component version in the path?
Ahh, looks like towards the bottom of the page here: https://atmos.tools/quick-start/configure-terraform-backend/
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
Yea, I think our example could be better….
But this:
# Atmos component `vpc`
vpc:
  metadata:
    # Point to the Terraform component in `components/terraform/vpc/1.2.3`
    component: vpc/1.2.3
  # Define variables specific to this `vpc/2` component
  vars:
    name: vpc
    ipv4_primary_cidr_block: 10.10.0.0/18
  # Optional backend configuration for the component
  backend:
    s3:
      # by default, this is the relative path in `components/terraform`, so it would be `vpc/1.2.3`
      # here we fix it to `vpc`
      workspace_key_prefix: vpc
Yeah, just hugely beneficial, thank you so much for the help.
Our pleasure!
I have the remote-state stuff working that you were able to guide me through the other week…many thanks as well there, I think that’s really going to help us level up our IaC game.
Please feel free to add any thoughts to this https://github.com/cloudposse/atmos/issues/598
Describe the Feature
This is a similar idea to what Terragrunt does with their “Remote Terraform Configurations” feature: https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/#remote-terraform-configurations
The idea would be that you could provide a URL to a given root module and use that to create a component instance instead of having that component available locally in the atmos project repo.
The benefit here is that you don’t need to vendor in the code for that root module. Vendoring is great when you’re going to make changes to a configuration, BUT if you’re not making any changes then it just creates large PRs that are hard to review and doesn’t provide much value.
Another benefit: I have team members that strongly dislike creating root modules that are simply slim wrappers of a single child module because then we’re in the game of maintaining a very slim wrapper. @kevcube can speak to that if there is interest to understand more there.
Expected Behavior
Today all non-custom root module usage is done through vendoring in Atmos, so no similar expected behavior AFAIK.
Use Case
Help avoid vendoring in code that you’re not changing and therefore not polluting the atmos project with additional code that is unchanged.
Describe Ideal Solution
I’m envisioning this would work like the following, with $COMPONENT_NAME.metadata.url being the only change to the schema. Maybe we also need a version attribute as well, but TBD.
components:
  terraform:
    s3-bucket:
      metadata:
        url: https://github.com/cloudposse/terraform-aws-components/tree/1.431.0/modules/s3-bucket
      vars:
        ...
Running atmos against this configuration would result in atmos cloning that root module down locally into a temporary cache and then using that cloned root module as the source to run terraform or tofu against.
Alternatives Considered
None.
Additional Context
None.
2024-05-02
Hey Guys,
I have a use case where my component in atmos has just a terraform null_resource to execute a python script based on a few triggers. However, is there any way I can still manage this similar to a component, but not through terraform (null resource)? Can I use something like the custom CLI commands that atmos supports to do this? Any input on this use case would be really appreciated.
Yes, this is exactly one of the use cases for custom subcommands. This way you can call it with atmos to make it feel more integrated and documented (e.g. with atmos help). You can then automate it with atmos workflows. Alternatively, you can skip the subcommand and only use a workflow. Up to you.
You can also access the stack configuration from custom commands
any examples on how to access stack configuration through custom commands?
Have you seen this? https://atmos.tools/core-concepts/custom-commands/#show-component-info
Atmos can be easily extended to support any number of custom CLI commands.
Instead of running echo, just run your Python script instead.
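A minimal sketch of such a custom command in atmos.yaml (the command name and script path are made up):

commands:
  - name: run-triggers
    description: "Run the Python script previously wrapped in a null_resource"
    steps:
      - python3 scripts/run_triggers.py

It then shows up in atmos help and can be invoked as atmos run-triggers, or called from a workflow step.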
@Gabriela Campana (Cloud Posse) we need a task to add example command invocation for each example custom command. cc @Andriy Knysh (Cloud Posse)
E.g. the docs only show how to define it, not how to call it. Of course, it can be inferred, but “as a new user I want to quickly see how to run the command because it will help me connect the dots”
@Kubhera we’ll improve the docs. For an example on “any examples on how to access stack configuration through custom commands?”, please see this custom command (as Erik mentioned above):
Atmos can be easily extended to support any number of custom CLI commands.
2024-05-03
What do you folks think of allowing the s3 module or component to have an option to add a random-string suffix, to avoid the high cost of unauthorized s3 denied-access requests?
Or is there a gomplate way of generating a random id and passing it in the yaml to attributes ?
Or would gomplate, if a random function exists, generate a new random string upon each atmos run?
I suppose uuid does exist here
Unfortunately, you cannot effectively use the uuid function in gomplate for this, because it will cause permadrift
Just to confirm: aesthetically, the automatic checksum hash is not an option you are considering? …because it solves exactly this problem with the bucket length, but will chop and checksum the tail end
Sorry, I didn’t read carefully. You are not asking about controlling the length.
Regarding the post you linked, we discussed this on office hours. @matt mentioned that Jeff Barr, in response to the post, said AWS is not going to charge for unauthorized requests and they are making changes for that. To be clear, they refunded the person who made the unfortunate discovery, and are making more permanent fixes to prevent this in the future.
Oh I didn’t realize that Jeff Barr responded with that. That’s great news. Thanks Erik. Then I suppose it’s a non-issue once they make changes for it
Is there a doc available that points to Jeff Barr’s response? That may calm some nerves
Thank you to everyone who brought this article to our attention. We agree that customers should not have to pay for unauthorized requests that they did not initiate. We’ll have more to share on exactly how we’ll help prevent these charges shortly.
@RB here’s the documentation on OpenTofu cc @Matt Gowie https://github.com/cloudposse/atmos/pull/594
what
• Document how to use OpenTofu together with Atmos
why
• OpenTofu is a stable alternative to HashiCorp Terraform
• Closes #542
2024-05-04
Hi @Erik Osterman (Cloud Posse), I have an interesting use case. I have a stack with approximately 10 components, and all of these components depend on the output of a single component which is deployed first as part of my stack. What I’m doing right now is reading the output of that component using the remote-state feature of atmos. However, when I execute the workflow that runs all these components sequentially, even when there is a change for only a single component (this is the current design I came up with), it reads the state file of that component every single time for each component, and that adds extra time to my pipeline execution. Imagine if I have to deploy 100 affected stacks. Is there any way to mimic something similar to having a global variable in the stack file and referring to it all over the stack wherever it is needed? Basically, what I’m looking for is: read the output of a component once per stack, and use it in all the other dependent components.
Haha, @Kubhera my ears are ringing from the number of times this is coming up in questions.
TL;DR: we don’t support this today
Design-wise, we’ve really wanted to stick with vanilla Terraform as much as possible. Our argument is that the more we stick into atmos that depends on Terraform behaviors, the more end-users are “vendor locked” into using it; also, we don’t want to re-invent “terraform” inside of Atmos and YAML. We want to use Terraform for what it’s good at, and atmos for where Terraform is lacking: configuration. However, that’s a self-imposed constraint, and it seems like one users are not so concerned about, and a feature frequently asked for, albeit for different use cases. This channel has many such requests.
So we’ve been discussing a way of doing that. It’s tricky because there are a dozen types of backends. We don’t want to reimplement the functionality to read the raw state in atmos, but instead make terraform output a datasource in atmos.
We’ve recently introduced datasources into atmos, so this will slot in nicely there. Also, take a look at the other data sources, and see if one will work for you.
Atmos supports Go templates in stack manifests.
Here are the data sources we support today: https://docs.gomplate.ca/datasources/
gomplate documentation
So if you write your outputs to SSM, for example, atmos can read those.
Alternatively, since you’re using a workflow, you can do this workaround.
file - Files can be read in any of the supported formats, including by piping through standard input (Stdin). Directories are also supported.
So in one of your steps, save the terraform output as a JSON blob to a file.
Then define a datasource to read that file. All the values in the file will be available then in the stack configuration.
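A rough sketch of that workaround (the file path and datasource name are made up for illustration):

# atmos.yaml - register the JSON blob as a gomplate datasource
templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        shared-outputs:
          url: "file:///tmp/shared-outputs.json"

A stack manifest can then reference values from it, e.g. vpc_id: '{{ (datasource "shared-outputs").vpc_id }}', so the state is read once per pipeline run and reused by every dependent component.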
As always, atmos is a swiss army knife, so there’s a way to do it. Just maybe not optimized yet for your use-case.
What about switching from terraform remote state to aws data sources instead for each component?
This way we don’t have to depend on the output of another component to deploy a dependent component.
We are planning to switch to using a configurable kv-store model, so that things work better in brownfield.
Using data sources assumes too much: a) that a data source exists for the type of look-up on all resources, and data sources are not consistent across providers; b) they frequently don’t support returning all values of a given type, or error if not found; c) it’s complicated when what you need to look up is in multiple different accounts; d) it’s messy if you depend on resources cross-cloud. The “kv store” pattern allows one to choose from any number of backends for configuration, such as DynamoDB, S3, SSM, etc. We’re building modules for multiple mainstream clouds (as part of an engagement), so expect to see aws, artifactory, gcp, and azure kv-store implementations.
In a brownfield environment, the obligation would be to populate the “well known path” in whatever backend you choose for the kv-store with the values you need. However, @RB, since you are doing a lot of brownfield, I’d like to learn if you think this could work for you.
Hmmm, a kv-store model would then require hosting dynamodb/s3/ssm. It sounds like a generic cross-platform remote-state.
You would still have the dependency issue, unfortunately, where component X needs to be deployed (or redeployed) for component Y to get the correct information, even if it’s simply adding an extra output to X, whereas if using a data source, the extra output may already be available.
Since the remote-state uses the context which includes all the contextual tags, I’m not sure I fully understand why it would be difficult to use data sources.
For example, a vpc created by cloudposse’s component should have the Namespace, Stage, Environment, Name, etc. added to the vpc resource as well as the subnets. Using just those tags, a data source for the VPC and subnets can retrieve the vpc id, private subnets, and public subnets, which is mostly why the remote state for the vpc component is used.
Just chiming in…the big difference here, IMO, is that in order to use a data source you have to be able to authenticate to the account where the resource is defined. So imagine a component where you need to read some piece of data (like CIDR subnet blocks) for all of the VPCs in your organization. Now you need a ton of providers for that component just to be able to read a list of all your CIDR blocks across accounts and regions. Compare that with the k/v store pattern, where you just read the data directly in the k/v store in your local account and region where you’re deploying the component.
Hi Matt. Interesting use-case and agreed, if you’re deploying in us-west-2 and need to retrieve a resource in another region like us-east-1, then creating lots of providers is not scalable. That does seem like an edge case compared to same-region/same-account resources.
However, you could have both implementations depending on the use-case:
1. The kv-store for the use-case you described, where you have resources in different regions/accounts
 • Pros: works for cross-region/cross-account without additional providers
 • Cons: requires chaining applies for components that rely on other components’ outputs
2. The data source method for resources that are in the same region and account
 • Pros: works for same region, same account without additional providers; no need for chaining applies for components
 • Cons: does not work well with cross-region/cross-account, as additional providers are needed
It doesn’t have to be one or the other.
We can use (1) for those edge cases and (2) for the common case of retrieving items like VPCs, EKS, etc via data source.
Which components affect you the most with this, @RB ?
Pretty much anything that currently depends on the vpc component for now, but I’m sure I’ll hit it more with other common remote states too, like eks.
Yea, maybe we could support that use-case for a finite number of common data sources, and support a specific type of look-up method, for example by tags and resource type.
It would really save me a lot of time; anybody’s help in this regard would be really appreciated.
Thanks a ton in advance!!!
2024-05-05
If a component’s enabled flag is set to false, it should delete an existing component’s infra, but what if you did not want the component to be acted on at all? Would a new metadata.enabled flag be acceptable? This way it wouldn’t even create the workspace or run terraform commands. Atmos should just early-exit.
Thoughts on this?
Wouldn’t that be an abstract component?
It doesn’t have to be. It could be a non abstract or abstract component.
# non-abstract or real
components:
  terraform:
    vpc:
      metadata:
        # atmos refuses to run commands on it because this is false
        enabled: false
      vars:
        # if metadata.enabled is true or omitted, this enabled flag will enable all terraform resources
        enabled: true
@RB this sounds like a good idea. Currently we can use metadata.type: abstract to make the component non-deployable and just a blueprint for derived components. But you are saying that you’d like to “disable” any component (abstract or real). There could def be use-cases for that. We’ll create a ticket
Yes please! Thank you very much!
@Gabriela Campana (Cloud Posse) please create a ticket to add metadata.enabled to Atmos manifests
2024-05-06
Hey all, happy Monday! I hope I have a quick question and I’m missing something obvious. I need to configure a provider for a component that requires credentials stored in a GitHub secret. I’m missing how to get that data out of the secret and available for Atmos to use when building the provider_override.tf.json file in the component directory.
providers:
  provider_name:
    alias: "example"
    host: "https://example.com"
    account_id: ${{ env.example_account_id }}
    client_id: ${{ env.example_client_id }}
    client_secret: ${{ env.example_client_secret }}
Is there some documentation or capability in Atmos to parse our YAML files and replace variable placeholders with content from a secret store?
Wondering if I should place this provider in the main.tf of the module and pass in the values via variables, or if there is a better way to do this.
Hi @Justin, I have the same problem. My current workaround is to use sops in the component to decrypt the secrets, but you should be able to use other providers as well.
So, I think what you have should work provided: a) you’re using lower-case environment variables (since your example uses lower case); b) you have templating enabled; c) you’re aware of this warning
https://atmos.tools/core-concepts/stacks/templating/#atmos-sections-supporting-go-templates
Atmos supports Go templates in stack manifests.
Oh, I didn’t look closely enough. Your syntax is wrong. Use Go Template syntax.
provisioned_by_user: '{{ env "USER" }}'

providers:
  provider_name:
    alias: "example"
    host: "https://example.com"
    account_id: '{{ env "example_account_id" }}'
    client_id: '{{ env "example_client_id" }}'
    client_secret: '{{ env "example_client_secret" }}'
if your envs were upper case, you would need to retrieve them that way too
client_secret: '{{ env "EXAMPLE_CLIENT_SECRET" }}'
Atmos supports Sprig and Gomplate functions in templates in stacks manifests. Both engines have specific syntax to get ENV variables
https://masterminds.github.io/sprig/os.html
https://docs.gomplate.ca/functions/env/
Also, make sure that templating is enabled in atmos.yaml
https://atmos.tools/core-concepts/stacks/templating#configuration
Useful template functions for Go templates.
gomplate documentation
Atmos supports Go templates in stack manifests.
@Andriy Knysh (Cloud Posse) what do you think about importing the docs for these functions? Since they are not in our docs, they are not searchable or discoverable without knowing to go to another site
I think it’s a good idea; we need to improve our docs. How do we “import” it?
Hi, I don’t know if it’s me or a bug: if I use uppercase letters in tenant, the remote state provider will downcase it and then not find the stack. I’ve simply renamed the stack, but wanted to let you know.
I suspect it does this because I think the null-label convention used to be to always downcase, but I think that option is now configurable.
@Andriy Knysh (Cloud Posse)
@Stephan Helas I think it’s not related to the remote-state provider (it does not change the case on its own); as Erik mentioned, it’s the null-label module that always downcases. In that case, if you are using upper case for any of the context variables, it will not work with Atmos commands
variable "label_value_case" {
you can set it to none and test. Let us know if it’s working
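A minimal sketch of that override in a stack manifest, assuming the component exposes null-label’s label_value_case variable (accepted values: lower, title, upper, none):

vars:
  tenant: KN
  # pass through to null-label so label values keep their original case
  label_value_case: none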
Second thing: I’m not 100% sure, but I believe the remote-state provider ignores stacks.name_template and only looks for stacks.name_pattern. @Andriy Knysh (Cloud Posse) worth double checking
@Stephan Helas I would like to see your config to better understand what you are doing and the issues you are facing. You can DM me your repo (or part of it with the relevant config) and I’ll take a look and help you with any issues
Since stacks.name_template is a new feature, there could be some issues in Atmos, which we can def fix. It was tested (and is used now) in some configs, but we def did not spend as much time on it as on the other Atmos features. Without seeing what you are doing, it’s difficult to tell if there are any issues or if it’s just a misconfig
I’ll try to make a small demo. Essentially, what happens is this:
atmos.yaml
stacks:
  base_path: 'stacks'
  included_paths:
    - 'bootstrap/**/*'
    - 'KN/**/*'
  excluded_paths:
    - '**/_defaults.yaml'
  name_template: '{{.settings.tenant}}-{{.settings.instance}}-{{.settings.stage}}'
remote-state.tf
module "vpc" {
source = "...."
component = var.vpc_component
context = module.this.context
}
atmos output:
Terraform has been successfully initialized!
module.wms_vpc.data.utils_component_config.config[0]: Reading...
module.base.data.aws_availability_zones.available: Reading...
module.base.data.aws_availability_zones.available: Read complete after 0s [id=eu-central-1]
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Error: stack name pattern must be provided in 'stacks.name_pattern' CLI config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable
│
│ with module.wms_vpc.data.utils_component_config.config[0],
│ on .terraform/modules/wms_vpc/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
after changing the remote state, it works:
module "vpc" {
source = "..."
component = var.wms_vpc_component
context = module.this.context
env = {
ATMOS_STACKS_NAME_PATTERN = "{tenant}-{environment}-{stage}"
}
tenant = var.tags["atmos:tenant"]
environment = var.tags["atmos:instance"]
stage = var.tags["atmos:stage"]
}
Aha, @Andriy Knysh (Cloud Posse) looks like the provider is out of date, or not supporting the name_template
the latest version of the utils provider uses the Atmos code that supports name_template
https://github.com/cloudposse/terraform-provider-utils/releases/tag/1.22.0
make sure it’s downloaded by terraform
also please make sure that your atmos.yaml is in a location where both the Atmos binary and the utils provider can find it, see https://atmos.tools/core-concepts/components/remote-state#caveats
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,
if all is ok and still does not work, ping me, I’ll review your config
I did a terraform clean and added the provider version, but it did not work. The module version used is 1.22.
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
- base in ../../../../../../poc/modules/wms-base
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for base.this...
- base.this in .terraform/modules/base.this
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for this...
- this in .terraform/modules/this
Downloading git::ssh://..../aws-atmos-modules.git?ref=v0.1.0 for wms_vpc...
- wms_vpc in .terraform/modules/wms_vpc/remote-state
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for wms_vpc.always...
- wms_vpc.always in .terraform/modules/wms_vpc.always
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding cloudposse/utils versions matching "!= 1.4.0, >= 1.7.1, ~> 1.22, < 2.0.0"...
- Finding hashicorp/local versions matching ">= 1.3.0, ~> 2.4"...
- Finding hashicorp/aws versions matching "~> 5.37"...
- Finding hashicorp/external versions matching ">= 2.0.0"...
- Installing cloudposse/utils v1.22.0...
- Installed cloudposse/utils v1.22.0 (self-signed, key ID 7B22D099488F3D11)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/aws v5.48.0...
@Andriy Knysh (Cloud Posse) looks like he’s using the correct version
yes looks like it
I don’t know what the issue is, and we never tested a config like that with the remote-state module; will have to review in more detail
@Stephan Helas if you could DM me your config (or the relevant part of it), that would save some time to test and figure out any issues. If not, I’ll recreate something similar. Thank you
yes, will do. will take till tomorrow.
Hi @Andriy Knysh (Cloud Posse),
I tried to build a simple hello-world using the local backend, but I can’t get it to work. I’ve created two components, vpc and hello-world. vpc will simply output vpc_id; hello-world should then use the output and just output it again.
https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-remote
after terraform apply vpc I have a local state, containing the output.
▶ atmos terraform apply vpc -s org-acme-test
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/null v3.2.2
Terraform has been successfully initialized!
module.vpc.null_resource.name: Refreshing state... [id=8866214659539567934]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no
differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
vpc_id = "vpc-123456789"
but the outputs of remote_state are null (I don’t know why)
▶ atmos terraform plan hello-world -s org-acme-test
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Reusing previous version of cloudposse/utils from the dependency lock file
- Using previously-installed hashicorp/null v3.2.2
- Using previously-installed hashicorp/local v2.5.1
- Using previously-installed hashicorp/external v2.3.3
- Using previously-installed cloudposse/utils v1.22.0
Terraform has been successfully initialized!
module.vpc.data.utils_component_config.config[0]: Reading...
module.vpc.data.utils_component_config.config[0]: Read complete after 0s [id=e69aca3dce4ce3047ed5ff092291e5c4c02ee685]
module.vpc.data.terraform_remote_state.data_source[0]: Reading...
module.vpc.data.terraform_remote_state.data_source[0]: Read complete after 0s
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
+ create
Terraform planned the following actions, but then encountered a problem:
# module.base.null_resource.name will be created
+ resource "null_resource" "name" {
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ remote_vpc = {
+ backend = {}
+ backend_type = "local"
+ outputs = null
+ remote_workspace_name = null
+ s3_workspace_name = null
+ workspace_name = "org-acme-test-vpc"
}
+ tags = {
+ "atmos:component" = "hello-world"
+ "atmos:component_version" = "hello-world/v0.1.0"
+ "atmos:manifest" = "org/acme/hello-world"
+ "atmos:stack" = "org-acme-test"
+ "atmos:workspace" = "org-acme-test-hello-world"
+ environment = "acme"
+ stage = "test"
+ tenant = "org"
}
╷
│ Error: Attempt to get attribute from null value
│
│ on main.tf line 5, in module "base":
│ 5: vpc_id = module.vpc.outputs.vpc_id
│ ├────────────────
│ │ module.vpc.outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1
@Andriy Knysh (Cloud Posse) have we tested remote-state with the local backend?
I don’t think we have (will check if the remote-state module supports that). I’ll check Stephan’s repo as well
@Stephan Helas unfortunately, the module https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state does not support local remote state. Looks like you are the first person who tried to use it with the local backend
you can do any of the following:
• Use the s3 backend (the local backend is just for testing, not for real infra)
• Use data "terraform_remote_state" directly (https://developer.hashicorp.com/terraform/language/settings/backends/local). This is not an Atmos way of doing things, but you can test with it
• Use remote state of type static (https://atmos.tools/core-concepts/components/remote-state-backend), as sketched below
Terraform can store the state remotely, making it easier to version and work with in a team.
Atmos supports configuring Terraform Backends to define where
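For the static option, a rough sketch based on the linked docs (the component name and output value are illustrative): the outputs are declared directly in the stack, so consumers get them without reading any real backend:

components:
  terraform:
    vpc:
      remote_state_backend_type: static
      remote_state_backend:
        static:
          # returned as `outputs` to anything reading this component's remote state
          vpc_id: "vpc-123456789"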
Now you can use https://sweetops.slack.com/archives/C031919U8A0/p1718495222745629 with local backends.
Note this is different from the remote-state module; this is configured natively using stacks.
2024-05-07
2024-05-08
2024-05-09
I’d like to solicit more feedback on remote sources for components so we arrive at the best implementation.
v1.72.0
Update gomplate datasources. Add env and evaluations sections to Go template configurations @aknysh (#599)
what
Update gomplate datasources Add env section to Go template configurations A…
Is there a way to list only real components?
atmos list components --real
i have this ugly workaround for now
✗ atmos describe stacks | yq e '. | to_entries | .[].value.components.terraform | with_entries(select(.value.metadata.type != "abstract")) | keys' | grep -v '\[\]' | sort | uniq
- aws-team-roles
- aws-teams
- account
- account-map
- aws-team-roles
- github-oidc-provider
- vpc
Atmos can be easily extended to support any number of custom CLI commands.
I think we should add native support, but you could imagine due to how much configuration is managed by Atmos that the number of possible ways of filtering is quite staggering. So, for now, the recommendation is to create a custom command to view the data the way you want to view it. Literally think about it like creating a view.
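For instance, a custom command sketch (the name is made up) that wraps the yq filter from above into a reusable “view”:

commands:
  - name: list-real-components
    description: "List only real (non-abstract) components across all stacks"
    steps:
      - atmos describe stacks | yq e '. | to_entries | .[].value.components.terraform | with_entries(select(.value.metadata.type != "abstract")) | keys' | grep -v '\[\]' | sort -u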
@RB what do you think about adding a --jq parameter, like in the gh cli, as a stop-gap measure?
Thanks for sharing Erik. I did not realize there was already documentation for this and it was already in my atmos.yaml (at least the list stacks command)
Yes, the --jq or --query (jmespath) would be handy so you do not need to depend on jq. I don’t know how much it would help me personally, since I already have jq installed.
2024-05-10
Hi, is there a JSON manifest for vendor.yaml?
Since inherits will not merge metadata, I need to define the component version for every component in every stack.
I’m trying to dynamically name a component using catalog templates, but so far it is not working. With this approach I try to version my component but still be able to use multiple instances of it.
import high-level component in stack:
import:
  - catalog/account/v0_1_0
  - mixins/region/eu-central-1
use import for the high-level component:
import:
  - path: catalog/components/account-vpc/v0_1_0
    context:
      name: 'vpc/{{ .setting.region }}'
use component template:
components:
  terraform:
    '{{ .name }}':
      metadata:
        component: account-vpc/v0.1.0
my component ends up being:
atmos list components -s accounts-sandbox-5
global
vpc/{{ .setting.region }}
I’m probably mistaken here, but is the problem that .setting.region is not getting replaced by the region, or that the list subcommand is not interpreting the region?
Or should it be changed to .settings.region or .vars.region instead?
Or what was the exact result you were expecting? Because if you change the metadata.component to a versioned component, you wouldn’t see that result in the atmos list components command, since that only returns the names of the real planable components and not their real component dir names (which are located in the metadata).
The way I understand atmos, .settings gets the same treatment as .vars; the difference is that .vars will be converted to terraform inputs for components. That’s the reason why I try to use .settings.
So, the documentation states that metadata can be used in templating, but this is not working for me (templating is enabled).
atmos.yaml
stacks:
  base_path: 'stacks'
  included_paths:
    - 'bootstrap/**/*'
    - 'KN/**/*'
  excluded_paths:
    - '**/_defaults.yaml'
  name_template: '{{.settings.tenant}}-{{.settings.instance}}-{{.settings.stage}}'

templates:
  settings:
    enabled: true
    evaluations: 1
    sprig:
      enabled: true
    gomplate:
      enabled: true
stacks/foo.yaml
settings:
  tenant: accounts
  instance: sandbox
  stage: 5
  component: account-vpc/v0.2.0

components:
  terraform:
    vpc/eu-central-1:
      metadata:
        component: '{{ .settings.component }}'
atmos list components -s accounts-sandbox-5
global
vpc/eu-central-1
plan doesn’t start
▶ atmos terraform plan vpc/eu-central-1 --stack accounts-sandbox-5
'vpc/eu-central-1' points to the Terraform component '{{ .settings.component }}', but it does not exist in 'components/terraform'
setting
templates:
  settings:
    evaluations: 2
makes no difference
ah I see, interesting. I haven’t used templates.settings.xyz yet. I did notice that in your stacks/foo.yaml, you omit the templates portion, so should
stacks/foo.yaml
settings:
  tenant: accounts
  instance: sandbox
  stage: 5
  component: account-vpc/v0.2.0
be
templates:
  settings:
    tenant: accounts
    instance: sandbox
    stage: 5
    component: account-vpc/v0.2.0
you mean like this?
settings:
  tenant: accounts
  instance: sandbox
  stage: 5

templates:
  settings:
    component: account-vpc/v0.2.0

components:
  terraform:
    vpc/eu-central-1:
      metadata:
        component: '{{ .settings.component }}'
it’s the same result for me:
you may have better luck with a yaml anchor
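For example, a sketch using an anchor (note that YAML anchors only work within a single manifest file, so this won’t help across imports):

components:
  terraform:
    vpc/eu-central-1:
      metadata:
        component: &component_version account-vpc/v0.2.0
    vpc/eu-west-1:
      metadata:
        component: *component_version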
but my goal is to use multiple instances of components with different versions. if i inherit something, only .vars and .settings are merged.
I’ve created a sample repo: https://github.com/astephanh/atmos-hello-world
hello-world-settings (https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-settings) is working, while hello-world-settings-template (https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-settings-template) is not.
the only difference I’ve made is using a template for the component path
diff --color -r hello-world-settings/stacks/org/acme/hello-world.yaml hello-world-settings-template/stacks/org/acme/hello-world.yaml
7a8
> component: v0.1.0
13c14
< component: hello-world/v0.1.0
---
> component: 'hello-world/{{ .settings.component }}'
Here is the section of the documentation:
https://atmos.tools/core-concepts/stacks/templating#atmos-sections-supporting-go-templates
components:
  terraform:
    vpc:
      metadata:
        component: "{{ .settings.component }}"
I’m not sure. Perhaps it’s a bug?
Maybe @Andriy Knysh (Cloud Posse) may know more here
@Stephan Helas this looks wrong (wrong indent)
settings:
  tenant: accounts
  instance: sandbox
  stage: 5

templates:
  settings:
    component: account-vpc/v0.2.0
should be
settings:
  tenant: accounts
  instance: sandbox
  stage: 5
  component: xxxx
  templates:
    settings:
      evaluations: 2

components:
  terraform:
    vpc:
      metadata:
        component: "{{ .settings.component }}"
{{ .settings.component }} refers to the `component` in the `settings` section
I'm not sure what you are trying to achieve, or just playing/testing, let me know and we'll try to help
settings.templates.settings should be used. It might look “wrong”, but we’ll be adding more features to templates (e.g. specifying some templates to use with other Atmos commands), so in the near future it might look like this
settings:
  templates:
    settings:
      evaluations: 2
      gomplate:
        datasources: {}
      definitions:
        readme: {}    # template config to generate READMEs
        component: {} # template config to generate components
        stack: {}     # template config to generate stacks
I was following the documentation here: https://atmos.tools/core-concepts/stacks/templating#atmos-sections-supporting-go-templates.
I’m trying to reuse components in stacks without the need to define the component version every time. For this I use .settings.component, like here:
repo: https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-template
settings:
  component: v0.1.0
  templates:
    settings:
      enabled: true
      evaluations: 2

vars:
  tenant: org
  environment: acme
  stage: test

components:
  terraform:
    hello-world/1:
      metadata:
        component: 'hello-world/{{ .settings.component }}'
      vars:
        lang: de
        location: hh
        region: hh
result:
atmos terraform plan hello-world/1 -s org-acme-test
'hello-world/1' points to the Terraform component '{{ .settings.component }}', but it does not exist in 'components/terraform/hello-world'
you need to enable templating in atmos.yaml
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    # Number of evaluations/passes to process `Go` templates
    # If not defined, `evaluations` is automatically set to `1`
    evaluations: 2
Atmos supports Go templates in stack manifests.
remove this from settings in the stack manifest
templates:
  settings:
    enabled: true
    evaluations: 2
enabling templates in stack manifests is not currently supported (we might add it in future versions)
Templating in Atmos can also be configured in the settings.templates.settings section in stack manifests.
The settings.templates.settings section can be defined globally per organization, tenant, account, or per component. Atmos deep-merges the configurations from all scopes into the final result using inheritance.
The schema is the same as templates.settings in the atmos.yaml CLI config file, except the following settings are not supported in the settings.templates.settings section:
settings.templates.settings.enabled
settings.templates.settings.sprig.enabled
settings.templates.settings.gomplate.enabled
settings.templates.settings.evaluations
settings.templates.settings.delimiters
These settings are not supported for the following reasons:
You can't disable templating in the stack manifests which are being processed by Atmos as Go templates
If you define the delimiters in the settings.templates.settings section in stack manifests, the Go templating engine will think that the delimiters specify the beginning and the end of template strings, will try to evaluate it, which will result in an error
please see these examples in the Quick Start
https://github.com/cloudposse/atmos/blob/master/examples/quick-start/atmos.yaml#L246
ok. did that, but still the same error:
https://github.com/astephanh/atmos-hello-world/commit/2b9706fde6f6bc6276ccfb40d3a62069b8b99434
How does one provision a VPC with database-specific subnets, like terraform-aws-modules/vpc?
Is it better to provision a new VPC instead ?
or create a new component for dynamic-subnets and attach a new suite of public/private subnets to an existing VPC?
it allows you to create “named” subnets (e.g. database), and then use their names (via remote state and outputs) to deploy resources into them
That looks promising. Thanks Andriy!
Is it possible then to create public and private subnets and additionally create a named subnet for databases in the same vpc component instantiation?
But if we did do multiple subnets in the same vpc, each component that reads from the remote state would only read the public and private subnet outputs. How would we get the remote state to show different outputs and then use them differently for components that depend on the vpc component? I imagine we’d need to update each component, right?
see the PR description, the module outputs maps of public and private subnets, you can use it from remote state
named_private_subnets_map = {
  "backend" = tolist([
    "subnet-0393680d8ea3dd70f",
    "subnet-06764c7316567eacc",
  ])
  "db" = tolist([
    "subnet-0a7c4b117b2105a69",
    "subnet-074fd7ad2b902bec2",
  ])
  "services" = tolist([
    "subnet-02c63d0c0c2f84bf5",
    "subnet-0f6d042c659cc1346",
  ])
}
named_public_subnets_map = {
  "backend" = tolist([
    "subnet-03e27e41e0b818080",
    "subnet-00155e6b64925ba51",
  ])
  "db" = tolist([
    "subnet-04e5d57b1e2035c7c",
    "subnet-0a326693cfee8e68d",
  ])
  "services" = tolist([
    "subnet-05647fc1f31a30896",
    "subnet-01cc440339718014e",
  ])
}
module.vpc.outputs.named_private_subnets_map["db"] - gives a list of private subnets named "db", assuming that `module.vpc` is the `remote-state` module
we use that for Network Firewall, since it always requires a dedicated subnet in a VPC just for itself
Fantastic. Thank you. I’ll give this a try
Hmm the vpc component may be out of date because i don’t see the input exposed
module "subnets" {
Ref network firewall usage https://github.com/cloudposse/terraform-aws-components/blob/a9252bc79d5fed90edec6ddb147bd61287de0620/modules/network-firewall/main.tf#L26
firewall_subnet_ids = local.vpc_outputs.named_private_subnets_map[var.firewall_subnet_name]
oh, the component is missing the inputs for the module
variable "subnets_per_az_count" {
type = number
description = <<-EOT
The number of subnet of each type (public or private) to provision per Availability Zone.
EOT
default = 1
nullable = false
validation {
condition = var.subnets_per_az_count > 0
# Validation error messages must be on a single line, among other restrictions.
# See <https://github.com/hashicorp/terraform/issues/24123>
error_message = "The `subnets_per_az` value must be greater than 0."
}
}
variable "subnets_per_az_names" {
type = list(string)
description = <<-EOT
The subnet names of each type (public or private) to provision per Availability Zone.
This variable is optional.
If a list of names is provided, the list items will be used as keys in the outputs `named_private_subnets_map`, `named_public_subnets_map`,
`named_private_route_table_ids_map` and `named_public_route_table_ids_map`
EOT
default = ["common"]
nullable = false
}
what
• add named subnets to vpc component
why
• allow using named subnets for databases
references
I’m not sure what’s happening. Looks like a lot of the checks failed because they took 6 hours and maybe timed out?
I added the question here too
Could i get some help with the failing checks in this pr ?
https://github.com/cloudposse/terraform-aws-components/pull/1032
2024-05-11
2024-05-13
for those, who use tenv: https://github.com/tofuutils/tenv/pull/129
Thanks @Stephan Helas!
(small typo in PR)
support atmoskj from
yeah was late in the night
I think they talk about sign-off here https://github.com/tofuutils/tenv?tab=readme-ov-file#signature-support
ok, i signed it off. one more thing i need to read about….
2024-05-14
question… I know this has been covered here, but I don’t think I can find/search the history without it getting chopped off going to Free Slack…. Any guidance around how to look up resources/ids from prereq stacks? Is it always just use the remote-state data lookup? Or is it just easier/preferred to do a regular data lookup like one would with any resources? Guidance on when to use one or the other? In my environment (Azure) we split up terraform state across the subscriptions that hold the resources, so there’s not like a single blob storage that holds all of the state files…
So, first and foremost, we are starting to come around to the idea of doing what the other tooling in the Terraform ecosystem does: directly making outputs available from one component as inputs to another.
For the longest time, we’ve been adamant about not doing this, as it forces reliance on the tooling. We always say Atmos lets you eject, and Terraform handles remote state just fine, so why should the tool do it?
But this is a self-imposed limitation, and how we’ve implemented our reference architecture, and it seems many would prefer just to do it in the tooling layer.
Therefore we are discussing how to best support this concept in atmos, such as with the newly released Atmos data sources.
So, you have a few options.
- The remote state implementation, which works very well once you have it set up, but it takes some configuration to get it working.
- Use data sources and lookups as you would normally, however, not all information is available via data source lookups, not all providers have sufficient data sources, etc.
- Consider using a KV store for your cloud platform. For an enterprise customer, for example, we implemented a kv store for artifactory and plan to do the same for GCP, Azure, and AWS.
https://github.com/cloudposse/terraform-artifactory-kv-store
We recommend the KV store approach b/c it’s provider agnostic and cloud agnostic. The same pattern can be used however makes best sense.
Different systems can read/write from it and populate it. It’s not restricted to the information provided by data sources, so it’s more generalizable.
Thanks. Is any of that guidance/pattern/tradeoffs documented anywhere? Would hate to lose this useful info to the Slack ether
I know. It reallllly hurt to downgrade.
We also have https://archive.sweetops.com, but Slack also changed how exports work, so they are now limited to 90 days too. That did not use to be the case.
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
Is any of that guidance/pattern/tradeoffs documented anywhere? Would hate to lose this useful info to the Slack ether
You’re absolutely right. It would be beneficial to post this and the trade offs.
Also, here’s a recent thread on the topic as well. https://sweetops.slack.com/archives/C031919U8A0/p1714829174944729
@Erik Osterman (Cloud Posse) regarding the slack downgrade - maybe it’s worthwhile moving to some other solution? self host mattermost or something..?
(but yeah, this means you guys have to pay for all the community using it. kinda sucks)
We would, but it’s so disruptive to how any community functions. Imagine getting even a few hundred people to move let alone thousands. It’s basically a reset or starting over.
Also, none of the systems support proper imports. At best, they mock the user who sent the message as an app, and embed the date as part of the message: [2024-03-01 01:02:03] I said blah.
This is at least why we haven’t pulled the trigger. Here’s what I think would actually happen: We move to the new platform, and 2 months later Slack finally announces a plan for communities.
This is now supported in native Stack configurations without modification of components.
https://sweetops.slack.com/archives/C031919U8A0/p1718495222745629
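If that refers to the atmos.Component template function, reading another component’s outputs in a stack manifest looks roughly like this (the component and output names are illustrative):

vars:
  vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'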
2024-05-15
2024-05-16
Hey everyone, I figured I would come in here and ask before I start hacking around. We’re leveraging the VPC module to build our pub/priv subnets, and I need to modify the default route on those public subnets away from the IGW to a Cisco device. I’m guessing this is outside the scope of the module, but I figured I would ask. Hope you guys are having a good week.
Noting I think I could build the change on top of it? idk. Just brainstorming based on new reqs this week.
We’ve accomplished this I know previously, since we’ve implemented centralized organizational egress over the transit gateway
I don’t know the specifics. Would need to defer to @Andriy Knysh (Cloud Posse) or @Jeremy G (Cloud Posse)
Yea, we’re on Transit, but then we also have Cisco’s SD-WAN product as the interchange, which makes it tougher to understand.
Yea that’s cool, we’re literally just in planning phases.
I believe we implemented something similar to https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/inspection-deploy[…]els-with-AWS-network-firewall-ra.pdf?did=wp_card&trk=wp_card
Yea I REALLY wanted that subnet firewall design
instead now I have to route to ip i think
ty for that diagram
im imagining doing some ugly static vars in to fix this lol
I think it can be hard to do fully dynamically
we have implemented the network architecture depicted in the diagrams https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/inspection-deployment-models-with-AWS-network-firewall-ra.pdf, but those are completely separate/new Terraform components to do all the routing b/w diff VPCs (ingress, egress, inspection). The components are configured and deployed by Atmos. Since it’s complex and custom-made for specific use-cases (even in the diagrams in the PDF there are many diff use-cases), we have not open-sourced it
No I wasn’t expecting open source, I was moreso just brainstorming on how to handle modifying those route tables for my case
it’s prob not possible to make such a component able to define all possible networking configurations in such complex architectures as ingress/egress/inspection VPCs and TGWs
yea i was kinda like uhhhhh when they said they wanted to go this direction
def preferred network fw
yep, I know what you feel
anyway, regarding “I was moreso just brainstorming on how to handle modifying those route tables for my case”
what we did: we implemented all of that in Terraform (as a few components), including defining all the VPCs (using our vpc components), and the following:
• Route tables (they are not default)
• All TGW routes b/w all those ingress/egress/inspection VPCs
• All EC2 subnet routes b/w all those ingress/egress/inspection VPCs
that’s why the Terraform components are custom, we did not try to make it universal for all possible combinations of such network architectures (which would be a very complex task)
@Ryan I hope this will help you
I recommend not trying to make a universal component, but do what you need in Terraform and then deploy it with Atmos
thats prob most of what im doing now, i def do not dig into root/child cloudposse structure besides what was previously deployed
are you going to use the network firewall and the inspection VPC?
asking because network firewall complicates the design even further, e.g. it requires a separate dedicated subnet in the VPC where it is deployed
No we currently have a pretty flat network design but everything routes through a network acct
Besides the igws the vpc module deploys
Well somewhere in that module stack lol
that’s why, if you already have VPCs with subnets and definitely can’t destroy them to add another subnet, you can use a separate inspection VPC with a dedicated Firewall subnet and route traffic through it
That would make cost lower too
I think
see this PR https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/174 where we added the ability to deploy multiple “named” subnets in a VPC per AZ
it was done to support Network Firewall’s dedicated subnet
@Ryan if you have questions, feel free to ask. It’s not possible to explain everything at once since it’s too many moving parts, but we’ll be able to answer questions one by one on diff parts of that network architecture
@Ryan We recommend using terraform-aws-dynamic-subnets to deploy and configure the VPC’s public and private subnets. (We have a bunch of similar modules, but we are trying to consolidate them down to just this one, so it has the most features and best support.) It has so many options that even I have forgotten them all, and I wrote most of them. The options I would recommend considering first:
• If you do not want a route to the internet gateway, you do not have to supply the IGW ID, and no route to it will be created. You can then add your own route, using the aws_route resource, to the route tables specified by the public_route_table_ids module output.
• You can suppress the creation and modification of route tables altogether by setting public_route_table_enabled to false, and then you can create and configure the route tables yourself (see the sketch below).
If neither of those suit you, you can probably figure out yet another way. I suggest reviewing the code as the best documentation.
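As a rough sketch of that second option in an Atmos stack manifest (the component name is hypothetical, and it assumes the component forwards public_route_table_enabled straight through to the terraform-aws-dynamic-subnets module):
components:
  terraform:
    vpc:
      vars:
        # Hypothetical wiring: skip route table creation/modification so the
        # route tables can be created and configured outside the module.
        public_route_table_enabled: false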
yes, that’s what we did: use the dynamic-subnets component to create subnets and disable some routes, then used the aws_route resource to create subnet routes, and the aws_ec2_transit_gateway_route resource to create TGW routes
appreciate it gentlemen
I created the following SCP for the identity account, which allows me to manage IAM roles, assume IAM roles, and blocks all other access in this account.
data "aws_iam_policy_document" "default" {
statement {
sid = "DenyAllExcept"
effect = "Deny"
resources = ["*"]
not_actions = [
# The identity account should be able to create and manage IAM roles
# policy_sentry query action-table --service iam | grep -i role
# policy_sentry query action-table --service iam --access-level read
"iam:AddRoleToInstanceProfile",
"iam:AttachRolePolicy",
"iam:CreateRole",
"iam:CreateServiceLinkedRole",
"iam:DeleteRole",
"iam:DeleteRolePermissionsBoundary",
"iam:DeleteRolePolicy",
"iam:DeleteServiceLinkedRole",
"iam:DetachRolePolicy",
"iam:Generate*",
"iam:Get*",
"iam:List*",
"iam:PassRole",
"iam:PutRolePermissionsBoundary",
"iam:PutRolePolicy",
"iam:Simulate*",
"iam:RemoveRoleFromInstanceProfile",
"iam:TagRole",
"iam:UntagRole",
"iam:UpdateAssumeRolePolicy",
"iam:UpdateRole",
"iam:UpdateRoleDescription",
# Also need to be able to assume roles into this account as this will be the primary ingress
"sts:AssumeRole",
]
condition {
test = "StringNotLike"
variable = "aws:PrincipalArn"
values = [
# "arn:aws:iam::*:role/this-is-an-exempt-role",
"arn:aws:iam::*:root",
]
}
}
}
Hope that helps anyone else secure this account. It was way too easy to accidentally create resources here when incorrectly assuming a secondary role.
@Dan Miller (Cloud Posse) @Jeremy G (Cloud Posse)
We can probably further tune it too, cause we probably don’t need AddRoleToInstanceProfile or RemoveRoleFromInstanceProfile
2024-05-17
Heya! I’m following the Quick Start docs in Atmos, and I am enjoying it very much so far; it’s hard not to skip steps and go all in. I have run into a small issue though with provisioning. I’m at the step to configure a TF backend, and it feels like the docs are skipping some steps between here: https://atmos.tools/quick-start/configure-terraform-backend#provision-terraform-s3-backend and https://atmos.tools/quick-start/configure-terraform-backend#configure-terraform-s3-backend. The first section describes that Atmos has the capability to provision itself a backend using the tfstate-backend component, but I can’t find a good doc on how to actually use it; I tried and I see some errors I’ll post in the thread here. The next step assumes provisioning is complete. I’m happy to open a PR with the missing instructions if I figure it out.
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵
╷
│ Error: Unreadable module directory
│
│ The directory could not be read for module "assume_role" at iam.tf:30.
╵
Hrm… that’s odd. I don’t believe symlinks are used anywhere in the quick start. Are you using any symbolic links with your project (e.g. ln -s)?
sometimes people symlink their home directory to google drive or dropbox, or onedrive
Not consciously, as in I did not run any symlinking
we’re using the latest version of the components: (vendor.yaml)
- component: "tfstate-backend"
source: "github.com/cloudposse/terraform-aws-components.git//modules/tfstate-backend?ref={{.Version}}"
version: "1.433.0"
targets:
- "components/terraform/tfstate-backend"
included_paths:
- "**/*.tf"
excluded_paths:
- "**/providers.tf"
tags:
- core
To be clear, this is not an atmos error. This is terraform
│ Unable to evaluate directory symlink:
Atmos does not create any symlinks as part of vendoring.
What operating system are you on?
OSX
Hrm…
Im running atmos in Docker though
Are you using it in geodesic or a custom image?
ARG GEODESIC_VERSION=2.11.2
ARG GEODESIC_OS=debian
ARG ATMOS_VERSION=1.72.0
ARG TF_VERSION=1.8.3
custom image based on cloudposse/geodesic:${GEODESIC_VERSION}-${GEODESIC_OS}
Aha, ok, Geodesic does have symlinks. So that’s probably where we have a problem.
@Jeremy G (Cloud Posse) or @Jeremy White (Cloud Posse) any ideas on this one?
Thanks so far!
Our cold start procedure has changed over time, and you may be looking at an older version. In the current cold start, you need to have in your repository the source code and stack configuration for:
• account
• account-map
• account-settings
• aws-teams
• aws-team-roles
• tfstate-backend
before you start running Terraform.
Then, to start, you run
atmos terraform apply tfstate-backend -var=access_roles_enabled=false --stack core-usw2-root --auto-generate-backend-file=false
(assuming you use our defaults, where the org root is core-root, the primary region is us-west-2, and the region abbreviation scheme is short).
Then, if you have not already configured the Atmos stacks backend, you configure it now to use the S3 bucket and Role ARN output by the component.
Then, you move the tfstate-backend state to S3 by running
atmos terraform apply tfstate-backend -var=access_roles_enabled=false --stack core-usw2-root --init-run-reconfigure=false
Later, after provisioning all the other components, you come back and run
atmos terraform apply tfstate-backend --stack core-usw2-root
@Marvin de Bruin Please post links to the documentation you are looking at.
@Jeremy G (Cloud Posse) Thanks for looking into this.
These are all the docs I’m looking at, apart from some google-fu results that didn’t bring me far
Main: https://atmos.tools/quick-start/ TF backend docs: https://atmos.tools/quick-start/configure-terraform-backend#terraform-s3-backend provisioning section: https://atmos.tools/quick-start/configure-terraform-backend#provision-terraform-s3-backend (this step is where I’m currently blocked, but I haven’t followed your steps yet)
I’ve also been looking at the tutorials, but they come with outdated components https://atmos.tools/tutorials/first-aws-environment (repo: https://github.com/cloudposse/tutorials) The component: https://github.com/cloudposse/tutorials/tree/main/03-first-aws-environment/components/terraform/tfstate-backend
I’ve also looked at https://github.com/cloudposse/terraform-aws-components/tree/main/modules/tfstate-backend which also does not mention the cold start dependencies
I’ve now added all the components you mentioned, pulled them down (atmos vendor pull), and gave them a minimal config (based on the usage docs of each component). I’m still seeing that Terraform symlink issue though when I run the command you mentioned (I modified it slightly):
⨠ atmos terraform apply tfstate-backend -var=access_roles_enabled=false --stack core-gbl-root --auto-generate-backend-file=false
Initializing the backend...
Initializing modules...
- assume_role in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map/modules: no such file or directory
╵
╷
│ Error: Unreadable module directory
│
│ The directory could not be read for module "assume_role" at iam.tf:30.
╵
exit status 1
Btw, I’m not expecting a response over the weekend, I’m just very excited working on this. Have a great weekend.
@Marvin de Bruin I don’t know where or why you have a symlink. Our standard directory structure looks like this:
interesting, I indeed do not have that modules folder
You do not need to have components/terraform exactly, but you do need to have all the components in the same directory. In particular, all the modules reference ../account-map/
And these are in the components pulled down by the vendor pull script; I also see them in the repo. I think I might have a misconfig in my vendors file
I might have missed a step, but is the recommended way to manually modify the vendors.yaml, or to do this automatically somehow (like with NPM for JavaScript, or Composer for PHP)?
Our vendoring system has quirks and kinks still to be worked out. The glob patterns do not quite work as expected, for example. I’d expect **/*.tf to pull in all the *.tf files, in all the subdirectories, but it does not do that if you do not include the **/modules/** as well.
account-map is a special case because of the sub-modules
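To make that concrete, here is a hedged vendor.yaml entry for account-map, modeled on the tfstate-backend entry earlier in this thread (same version; the extra glob is the fix discussed here):
- component: "account-map"
  source: "github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref={{.Version}}"
  version: "1.433.0"
  targets:
    - "components/terraform/account-map"
  included_paths:
    - "**/*.tf"
    # Without this, the account-map sub-modules are not vendored, which
    # produces the "Unreadable module directory" errors shown above.
    - "**/modules/**"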
Ah! This was indeed the cause of the blockage! Thank you very much! Is there a repo somewhere with all the default vendor configs for each module? It could be a fun project to create an installer make module add x
@Erik Osterman (Cloud Posse)
Let’s move future refarch questions to refarch
Re: installer for components, we plan to make it easier to do via CLI.
Re: vendor configs, those are part of the typical jumpstart engagement and generated based on the type of architecture you need.
2024-05-18
2024-05-19
2024-05-20
2024-05-22
Go and Sprig functions
From the doc examples this ought to be pretty straight forward, but the errors suggest otherwise:
components:
terraform:
downtimes:
vars:
config-testing: true
datadog_downtime_configuration:
"{{ strings.ToLower .id }}":
enabled: {{or (index . "enabled") "true"}}
monitor_tags: "{{ .monitor_tags }}"
scope: {{or (index . "scope") "'*'"}}
{{ if hasKey . "recurrence" }}
recurrence: "{{ .recurrence }}"
{{- end }}
timezone: {{or (index . "timezone") "UTC"}}
display_timezone: {{or (index . "display_timezone") "UTC"}}
message: "{{ .message }}"
I get yaml:7: function "strings" not defined when using strings.ToLower.
Well, and now I went to show the error for hasKey and it’s working. I updated my atmos version, so perhaps that solved that.
So just the Go function issue if there’s an obvious answer, please.
you prob need to enable gomplate in atmos.yaml
see <https://atmos.tools/core-concepts/stacks/templating#stack-manifest-templating-configuration>
sprig:
# Enable Sprig functions in `Go` templates in Atmos stack manifests
enabled: true
# <https://docs.gomplate.ca>
# <https://docs.gomplate.ca/functions>
gomplate:
# Enable Gomplate functions and datasources in `Go` templates in Atmos stack manifests
enabled: true
Well that’s embarrassing. I should read the entire page.
Thank you and sorry for being a bad user.
not a problem
let us know if it works for you
Sprig seems to be working by default.
Gomplate does not.
Adding templates.settings (each of the 3 enable options) doesn’t impact the behavior of either one. As in I can’t disable Sprig either.
Should we expect templates.settings to be shown when doing an atmos describe component? Not seeing my change reflected in the output.
And to confirm I should be able to get away with a pretty simple addition such as:
templates:
settings:
enabled: true
sprig:
enabled: false
gomplate:
enabled: true
I presume? This was an attempt to stop Sprig from working.
Am on v1.72.0
@RickA those are settings in atmos.yaml, not in stack manifests
Yessir.
did you put them in your atmos.yaml?
Yes, atmos.yaml looks like this:
base_path: "."
components:
terraform:
base_path: "./components/terraform"
stacks:
base_path: "./stacks"
included_paths: "**/*"
excluded_paths:
- "catalog/**/"
name_pattern: "{environment}-{stage}"
logs:
verbose: false
templates:
settings:
enabled: true
sprig:
enabled: false
gomplate:
enabled: true
and you see the same error yaml:7: function "strings" not defined?
That is correct.
please see the Quick Start example
https://github.com/cloudposse/atmos/blob/master/examples/quick-start/atmos.yaml#L259
it has the gomplate strings function working
if you still can’t find the issue, you can DM me your config and I’ll take a look
Roger. Thank you.
Am back in the template world again trying to use gomplate functions. A teammate revealed that they work in our environment, which is more than I figured out 2 months ago. So the issue is they do not work within our templates.
Stack files configuration in play: my-stack.yaml, import.yaml, template.tmpl
• my-stack will import the file import.yaml
• import is configured to import template.tmpl with context
• template.tmpl is where I want to use gomplate functions
import.yaml
import:
- path: "catalog/rick/_template.tmpl"
context:
id: "rmq_monitor_poc"
name: "[poc] RabbitMQ reboot status"
tags:
- "foo:bar"
template.tmpl
components:
terraform:
monitors:
vars:
monitors:
"{{ lower .id }}":
name: "{{ .name }}"
tags:
{{- range .tags }}
- {{ . }}
{{- end }}
strings_template_1: "{{ strings.Title .atmos_component }} component"
Error
invalid stack manifest '_template.tmpl'
template: _template.tmpl:12: function "strings" not defined
If I move strings_template_1 up to import.yaml then it works.
Hopefully that all makes sense. What’s needed to use gomplate functions at the level I’m after? Working with v1.72.0 mostly, but did test with v1.83.1 too just in case. Same result.
@Andriy Knysh (Cloud Posse)
@RickA for historical reasons, in such cases as you explained (templates in imports and in the imported stack manifests), gomplate functions are not supported in the imports (but Sprig functions are). We’ll review that and improve in the next release
Appreciate you confirming that. Thank you.
With the information shared above I’m trying to move up a level with processing a datasource. If I start with a datasource of:
tags:
- tag1:foo
- tag2:bar
- tag3:123
and a template of:
components:
terraform:
monitors:
vars:
datasource_import: {{ (datasource "global_tags").tags }}
test:
{{- range (datasource "global_tags").tags }}
- {{ . }}
{{- end }}
I can run gomplate -d global_tags.yaml -f _template.tmpl and get an output of:
components:
terraform:
monitors:
vars:
datasource_import: [tag1:foo tag2:bar tag3:123]
test:
- tag1:foo
- tag2:bar
- tag3:123
I’m having trouble translating that to use in an Atmos stack however. I can do a stack such as:
import:
- catalog/rick/*.yaml
vars:
environment: test
stage: rick
components:
terraform:
monitors:
vars:
datasource: '{{ (datasource "global_tags").tags }}'
And get the output:
vars:
datasource: '[tag1:foo tag2:bar tag3:123]'
But without putting that in quotes I get errors, and I cannot do range over the datasource to get a list.
Any guidance on how using that datasource might work?
it’s a combination of YAML and Go templates…
datasource: {{ (datasource "global_tags").tags }}
you can use the toJson function https://masterminds.github.io/sprig/defaults.html
datasource: '{{ toJson (datasource "global_tags").tags }}'
also, if you are using datasource "global_tags" many times in the stack manifest, recommend you use atmos.GomplateDatasource instead
it will cache the first invocation result and re-use it in the next invocations (which can speed it up a lot)
I saw that, thank you. Will be considering it depending on how things progress.
(it’s the same as the datasource with the same alias; it calls the datasource, and caches the result)
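A hedged sketch of the swap, following the atmos.GomplateDatasource function mentioned above and reusing this thread’s alias and tags attribute:
components:
  terraform:
    monitors:
      vars:
        # Same alias as the plain `datasource` call; the first invocation is
        # cached and re-used by subsequent calls in the stack manifest.
        datasource: '{{ toJson (atmos.GomplateDatasource "global_tags").tags }}'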
I’m sorry, Andriy, but I am not understanding the reference to toJson at all. It seems like I have to use gomplate functions in the stack (not template) file, but have to use sprig functions in a template file. I haven’t been able to use both in the same spot.
I can’t get Atmos to play nicely with anything other than a simple string. Would it be possible to see an example using any sort of multi-value type? I haven’t located one in the repo, docs, or in Slack.
I’ve done it a few different ways in a gomplate playground, but attempting to mimic what I can see the datasource outputting from Atmos isn’t working out the same.
@RickA Gomplate has a toJson function as well, https://docs.gomplate.ca/functions/data/#datatojson
so you can use
'{{ data.ToJSON (datasource "global_tags").tags }}'
regarding your questions about multi-value types: Go templates work with strings only (Atmos does not and cannot change anything about that). So anything in a template will be converted to a string using the Go String() function. If the data is a list, it will be converted to the Go representation of the list
the fact that you are doing that in a YAML document is not relevant to Go templates. So you have to “shape” the data into the type that YAML supports
in this case, you can convert the data to JSON (since JSON is a subset of YAML), or convert it to YAML, or use the Go range function
gomplate -d global_tags.yaml -f _template.tmpl
components:
terraform:
monitors:
vars:
datasource_import: [tag1:foo tag2:bar tag3:123]
test:
- tag1:foo
- tag2:bar
- tag3:123
I’m confused by the statement that gomplates work with strings only. Iterating over {{- range (datasource "global_tags").tags }} worked using a yaml file with a list.
what exactly did not work for you in this expression
'{{ toJson (datasource "global_tags").tags }}'
I’m confused by the statement that gomplates work with strings only
i did not say that
I said that Go templates work with strings only, not Gomplate
Indeed. My mistake.
Gomplate is a lib on top of Go templates
So gomplate supports it, but Go templating does not, and Atmos is using Go templating?
what exactly ?
But can support gomplate functions, which still means I can’t do what I want.
btw, this is not correct
datasource_import: [tag1:foo tag2:bar tag3:123]
It’s just the output of datasource_import: {{ (datasource "global_tags").tags }}
even if Gomplate outputted it for you, it’s not a correct data type, it’s just a string representation of the list in Go (this is just a string, not an object, not an array, not a map)
it’s not a correct data type in YAML, nor in JSON, nor in Terraform
Just a reference var while working through this to show me what’s in the datasource.
The important bit to me was that the var test which iterated over the list was successful.
this
{{- range (datasource "global_tags").tags }}
should be the same as using the toJson function - the function will print a JSON object, which is a correct data type in YAML
even if you don’t see this in the output
test:
- tag1:foo
- tag2:bar
- tag3:123
you will see this
test: [ {"tag1": "foo"}, {"tag2": "bar"}, {"tag3": 123} ]
which in YAML is exactly the same as the YAML list (list of maps)
test:
- tag1:foo
- tag2:bar
- tag3:123
because JSON is a subset of YAML (so any JSON object or array is correct in YAML)
so you see that the data types in both cases are correct (YAML list and JSON list), but the “shape” of the data is different. That’s why I mentioned many times that when working with Go templates in YAML files, you have to “shape” complex types (maps, objects, lists of strings, lists of maps, etc.)
So given this template:
components:
terraform:
monitors:
vars:
monitors:
"{{ lower .id }}":
name: "{{ .name }}"
tags:
{{- range .tags }}
- {{ . }}
{{- end }}
{{ if hasKey . "datasource" }}
{{- range .datasource }}
- {{ . }}
{{- end }}
{{- end }}
Is the line containing data.toJson supposed to work? (edit: darn fingers)
Assuming I’m feeding that line as .datasource via an import context.
note again, that by using Go templates you are constructing your YAML files - the result of a template execution should be a valid YAML file
this is not a valid YAML file
components:
terraform:
monitors:
vars:
datasource_import: [tag1:foo tag2:bar tag3:123]
even though Gomplate printed it out for you
I understand that’s a string.
I’m expecting this output:
vars:
environment: test
monitors:
rmq_monitor_poc:
name: '[poc] RabbitMQ reboot status'
tags:
- tag:poc
stage: rick
workspace: test-rick
With a longer list of tags that is fed from a datasource file.
I understand that’s a string.
well, it’s not only a string, it’s actually NOT a valid YAML file
a valid YAML file would be this
components:
terraform:
monitors:
vars:
datasource_import: "[tag1:foo tag2:bar tag3:123]"
I really don’t care about that line. It was a troubleshooting var trying to see what was happening.
i don’t understand this
vars:
monitors:
"{{ lower .id }}":
name: "{{ .name }}"
tags:
{{- range .tags }}
- {{ . }}
{{- end }}
{{ if hasKey . "datasource" }}
{{- range .datasource }}
- {{ . }}
{{- end }}
{{- end }}
not sure what you want to achieve, but the above template looks like it has too many errors
for example, what’s this
{{ .name }}
in what context is the name variable present?
This particular project is for Datadog monitors. So the name is the literal name of the monitor for their API. Unique for each run of the template.
so, in this case, you need to look at this doc:
b/c Atmos will evaluate the template, since it has no idea that the template is for an external system (datadog) and is not intended to be processed by Atmos
Atmos should evaluate the template. The name is coming from our config. I’m not trying to pass the var to Datadog.
in that case, this is not correct:
name: "{{ .name }}"
Atmos does not have any context with the var name in it
It works.
What is more correct?
import:
- path: "catalog/rick/_template.tmpl"
context:
id: "RmQ_monitor_POC"
name: "[poc] RabbitMQ reboot status"
tags:
- "tag:poc"
datasource: '{{ data.toJson (datasource .global_tags).tags }}'
oh ok, you provide the context in the import, I see
Our current terraform project that manages monitors is pretty cumbersome. It didn’t scale well.
An Ansible evaluation was completed. Now a few of us want to see if we can do similar work using Atmos config files instead. We were able to do downtimes effectively with it. Now doing monitors means needing more challenging solutions to reach some of our improvement goals of the rework.
Which is how we’re running into the datasource topic. There are scenarios where being able to pull in lists as needed might be helpful. With tags being our first poc scenario.
So that template works with the context we feed it. Except the part where I want to pull a datasourced list in the context and feed it to the template.
try this
import:
- path: "catalog/rick/_template.tmpl"
context:
id: "RmQ_monitor_POC"
name: "[poc] RabbitMQ reboot status"
tags:
- "tag:poc"
datasource: '{{ (datasource .global_tags).tags }}'
you have two issues here, let me explain:
1. datasource: '{{ data.toJson (datasource .global_tags).tags }}' - data.toJson produces a string, and then you do range on the string (which is not working, of course)
2. When import with templates is used, Atmos evaluates the import first, sending the provided context. So in this case, context.datasource is not evaluated b/c imports are evaluated first, and only then the templates in stack manifests. Import with templates requires a static context (not a context with other templates), b/c it’s very complicated to evaluate all of that correctly, and also b/c we need to import everything in order to later evaluate all the templates for a component in the stack. You need to redesign this part
also, this datasource .global_tags is another template variable inside the datasource; this should be a static alias
datasource: '{{ (datasource .global_tags).tags }}'
will work in stack manifests, but not in imports (assuming .global_tags is in the context, which it is not). So it should be something like
datasource: '{{ (datasource .vars.global_tags).tags }}'
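Putting those pieces together, a hedged sketch of a working shape: the alias is defined statically in atmos.yaml (the file path is hypothetical), and the datasource call lives in a stack manifest rather than in an import’s context:
# atmos.yaml
templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        global_tags:
          url: "./config/global_tags.yaml"  # hypothetical path

# stack manifest (not an import's context)
components:
  terraform:
    monitors:
      vars:
        datasource: '{{ toJson (datasource "global_tags").tags }}'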
please provide more details on what you want to achieve and we’ll figure out how to do it better
Please allow me a little time to absorb those messages and play with the test files.
Thank you for your help and patience with me.
doc on how Atmos processes templates in imports and in the imported manifests https://atmos.tools/core-concepts/stacks/templates/#excluding-templates-in-imports-from-processing-by-atmos
Wanted to follow up and say we’re trying to pivot on our path. Seems like what you mentioned to Patrick in his thread is going to be the education we require as well.
Is it possible to reference anything like .atmos_stack or a .vars.whatever in a context? If I’m reading the phases properly, it feels like none of that can be passed to an import.
Unless you can suggest otherwise, it feels like we’re going to want to handle variance in our template within Terraform. I’m thinking vars in Atmos that tell Terraform how to grab data fed to it by other means, so it can further customize the config passed to it before it’s sent to Datadog.
in imports, you can use only one context - the static context provided in the context field
the reason is simple (although not easy to grasp) - before you can use anything from an Atmos component in the templates, Atmos CLI needs to process all imports and process all the inheritance to get the final values for all the sections
w/o importing everything, we can’t calculate the final values
that’s why the final values can’t be used in imports
but only the static context
I think that makes sense. It’s not obvious, but with what you’ve shared I can imagine the challenge.
it’s not a technical challenge, it’s a fundamental restriction
to get the final values, we need to import everything and deep-merge all the sections
if we used the final values in the imports, that would be a circular dep
The templating wouldn’t allow us to do helpers as you would in the Helm world, would it? Step out of Atmos a level for some magic.
I’m in a spot where I can potentially use Atmos to give static values as instructions for the template to make decisions off of. I’d just benefit from being able to manage some of the content of those decisions in a better organized fashion.
That’s why the datasource looked promising. I want to manage a file with particular data that gets injected. Then it can be subjected to Atmos’ override logic that we all know and love.
@RickA we’ll help you with any particular question and implementation. Just keep in mind that in imports you can use only the static context, which is not the same as the context of the entire Atmos component.
I understand this is an annoying restriction (maybe it can be improved for some particular cases, but not in general, since it’s a fundamental restriction, not an implementation detail).
What you are trying to do could be prob done in a few diff ways
I’m not annoyed. Just trying to get a better understanding of the intentions of Atmos. You guys have a ton of information available, but it’s also a ton of information to try to understand. And as always a user base is going to push the limits - so if anything I’m the annoying one.
I have a variable that I need to pass that is formatted like this in HCL:
variable "firewall_rules" {
description = "value"
type = list(object({
name = string,
description = optional(string, null),
direction = string, # The input Value must be uppercase
priority = optional(number, 1000),
ranges = list(string),
source_tags = optional(list(string), null),
source_service_accounts = optional(list(string), null),
target_tags = optional(list(string), null),
target_service_accounts = optional(list(string), null),
allow = optional(list(object({
protocol = string # The input Value must be uppercase
ports = optional(list(string), [])
})), []),
deny = optional(list(object({
protocol = string # The input Value must be uppercase
ports = optional(list(string), [])
})), []),
log_config = optional(object({
metadata = string
}), null)
}))
default = []
}
And I cannot get the value to translate properly. How do I fix my yaml file?
firewall_rules:
- name: "RULE_NAME"
description: "DESCRIPTION"
direction: "DIRECTION"
priority: <NUMBER>
ranges: ["<IP_RANGE>"]
- allow:
protocol: "PROTOCOL_TYPE"
ports: ["PORT1", "PORT2"]
- allow: needs two indents to the right, since it’s a property of the object with the name - name: "RULE_NAME"
firewall_rules:
- name: "RULE_NAME"
description: "DESCRIPTION"
direction: "DIRECTION"
priority: <NUMBER>
ranges: ["<IP_RANGE>"]
allow:
- protocol: "PROTOCOL_TYPE"
ports: ["PORT1", "PORT2"]
allow is a list of objects, so in YAML it should be expressed as shown above
v1.73.0
Allow Go templates in metadata.component section. Add components.terraform.command section to atmos.yaml. Document OpenTofu support @aknysh (#604)
2024-05-23
Anyone facing issues with Affected Stacks, seems like it is unable to get the correct componentPath, runs previously successful are failing now: Previously:
Run set +e
set +e
TERRAFORM_OUTPUT_FILE="./terraform-${GITHUB_RUN_ID}-output.txt"
tfcmt \
--config /home/runner/work/_actions/cloudposse/github-action-atmos-terraform-plan/v2/config/summary.yaml \
-owner "Org" \
-repo "aws_infra_atmos" \
-var "target:ops-logging-deploy-org_vpc_logs-bucket" \
-var "component:org_vpc_logs-bucket" \
-var "componentPath:components/terraform/s3-bucket" \
Now:
Run set +e
set +e
TERRAFORM_OUTPUT_FILE="./terraform-${GITHUB_RUN_ID}-output.txt"
tfcmt \
--config /home/runner/work/_actions/cloudposse/github-action-atmos-terraform-plan/v2/config/summary.yaml \
-owner "Org" \
-repo "aws_infra_atmos" \
-var "target:ops-logging-deploy-org_vpc_logs-bucket" \
-var "component:org_vpc_logs-bucket" \
-var "componentPath:components/terraform/" \
Which eventually leads to failure. This is my step snippet:
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v2
with:
component: ${{ matrix.component }}
stack: ${{ matrix.stack }}
atmos-config-path: /home/runner/work/_temp/atmos-config
runs previously successful are failing now
Are you pinning the version of atmos?
yes:
jobs:
atmos-affected:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- id: affected
uses: cloudposse/github-action-atmos-affected-stacks@v3
with:
atmos-config-path: .
atmos-version: 1.63.0
nested-matrices-count: 1
Can you please share the actual failure?
Error:
The file /home/runner/work/aws_infra_atmos/aws_infra_atmos/components/terraform/.terraform.lock.hcl does not exist.
First I thought it could have been: https://github.com/cloudposse/github-action-atmos-get-setting which was released 2 days ago and is part of the atmos-plan actions, but I don’t see anything that would lead to this.
Also thought maybe .terraform.lock.hcl isn’t supposed to be there, but in previous successful runs it was included; the only change being the path, which would be something like: /home/runner/work/aws_infra_atmos/aws_infra_atmos/components/terraform/<component_name>/.terraform.lock.hcl
Oh, so I had to pin the atmos version for the atmos plan action as well; this fixed it:
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v2
with:
component: ${{ matrix.component }}
stack: ${{ matrix.stack }}
atmos-config-path: /home/runner/work/_temp/atmos-config
atmos-version: 1.63.0
Strange though, as this was working fine before
Thanks for the help @Erik Osterman (Cloud Posse)
It is an annoyance that the version needs to be pinned in more than one place.
We should resolve that
We’ve added the documentation for how to leverage opentofu
v1.74.0
Update Atmos logs. Update docs @aknysh (#605)
what
Update Atmos logs. Make the logs respect the standard file descriptors like /dev/stderr. Update docs
https://atmos.tools/cli/configuration/#logs
2024-05-24
Is it possible to define multiple aws providers in atmos yaml, to be used by a single component? I’m thinking of something like the below, but that obviously won’t work because of the duplicate aws: keys
terraform:
providers:
aws:
region: us-west-2
assume_role:
role_arn: "role_1"
aws:
alias: "account_2"
region: us-west-2
assume_role:
role_arn: "role_2"
this is an interesting use-case which we did not consider before, but it can be implemented. Let us discuss it internally
Well, interestingly enough, we figured out how to get this working. It turns out the terraform provider block can take a list:
terraform:
providers:
aws:
- region: us-west-2
assume_role:
role_arn: "role-1"
- region: us-west-2
alias: "account-2"
assume_role:
role_arn: "role-2"
Terraform has no issue with this, as long as the aliased provider is defined in the component. It overrides the config just fine.
oh nice, thanks for the info (we’ll update docs to describe this use-case)
2024-05-25
We added a new <File/> component to the docs, so it’s easier to identify files from terminal output.
https://atmos.tools/quick-start/add-custom-commands
v1.75.0
Improve atmos validate stacks and atmos describe affected commands @aknysh (#608)
what
Improve atmos validate stacks and atmos describe affected commands. Update docs.
/github subscribe cloudposse/atmos releases
:white_check_mark: Subscribed to cloudposse/atmos. This channel will receive notifications for issues, pulls, commits, releases, deployments
/github subscribe list features
Subscribed to the following repository
https://github.com/cloudposse/atmos | cloudposse/atmos
issues, pulls, commits, releases, deployments
/github unsubscribe cloudposse/atmos issues
This channel will receive notifications from cloudposse/atmos for: pulls, commits, releases, deployments
/github unsubscribe cloudposse/atmos pulls commits deployments
This channel will receive notifications from cloudposse/atmos for: releases
2024-05-26
Are there plans to document integration atmos into argocd? I’d absolutely love the ability to auto sync stacks and manually sync stacks to apply terraform
Don’t get me wrong. The current drift detection and applying in the pr is handy.
As atmos gets closer to the helm for terraform (sprig/gomplate, deep merge yaml, component vendoring, etc), argocd seems just.
What do you folks think?
Integration with cello would be nifty
Or crossplane, terranetes, weaveworks flux tf controller, etc
IMO, atmos solves the problem of everything up to kubernetes and possibly installing something like Crossplane and ArgoCD>
But beyond that, other tools are probably better suited.
absolutely love the ability to auto sync stacks and manually sync stacks to apply terraform
This is the part I don’t get. I mean, I get why it’s nice when everything is as expected. In fact, this is trivial to do in atmos today, and with github actions. What’s non-trivial to do is everything we do to make sure you have controls and the ability to review plans before apply.
E.g. what’s non-trivial is ensuring that some destructive operation doesn’t happen, or that one change must happen before the other in a coordinated rollout.
Today, with about 5 minutes of effort, you can “auto sync stacks” to apply terraform, because YOLO!
Conceptually, I’m for Crossplane/ACK/etc. But not for anything stateful, like buckets, queues, databases, etc. If you’re deploying an IAM role, it’s not a big deal to restore (but InfoSec might think so). If you’re changing a security group, whoops, just add the rule back. Again, not a big deal. But managing something like a database with a Custom Resource really gives me chills.
We went down this road before with in-cluster Prometheus/Grafana. It was super easy. It worked well. Up and running fast. Everything worked when it worked. And then, your cluster gets hosed. You need to check Grafana, but you can’t because it was evicted or something. So then people say, “well, if your cluster is hosed you have bigger problems”. Which is what I don’t buy. When your cluster is hosed, that’s exactly when you need Grafana and Prometheus to be working.
My point is, the Crossplanes of the world are like this. They are perfect when everything is perfect, and then there’s no breakglass when you need it. Similarly, deploying everything is a breeze with Crossplane until that new hire accidentally blows away something that was not ephemeral because there was no inspection of the plan.
It’s also why Account Factories for Terraform is such a joke. I would be terrified of making changes when you cannot review them prior to rolling them out en masse across the enterprise organization. That requires 10x skill and precision.
So, all that is to say, I want it to work. I want to make atmos improve that experience, e.g. with an Atmos controller. However, I have a mental block seeing how it could actually work.
(oh, and for some reason, for the last 2 years, we’re seeing 3:1 more interest in ECS over EKS)
Thanks for the background on that. I can see what you mean.
But it seems like cello and others do allow for seeing the plan prior to syncing provided you dont set the stack to auto sync. Wouldn’t a manual sync with the tf plan shown in argocd be the best of both worlds?
Also very cool that ecs is getting more interest. I’ve always thought it’s less complex and more straightforward to use than eks so devs can focus more on the apps than esoteric annotations
I have to look into cello. As for manual syncs on Argo, I don’t believe there would be a way to inspect the plan. Plus as a developer, I would prefer to see that on PR, and not switch to another system.
Hmm, but if the terraform plan can be seen in argocd, a link could show up in the github pr to the plan in argocd, much like spacelift has a link to the plan. Wouldn’t that be the same, or similar to, what a developer would currently experience with the atmos gh action?
ArgoCD wouldn’t do anything but submit the manifest containing the Custom Resource to Kubernetes, that would then be handled by an operator. At best, Argo could show what about the manifest is changing, but what terraform does is far removed from that.
Ah true. I thought maybe it was possible to see the plan in argocd but perhaps it’s not…
It does seem like some of the controllers allow for manual approval https://terranetes.appvia.io/terranetes-controller/developer/provision/#approving-a-plan but it’s unclear how that can be applied using argocd.
I guess gh gitops is easiest for now. Thanks for considering
2024-05-27
Add Atmos manifest lists merge strategies @aknysh (#609)
what
• Add Atmos manifest lists merge strategies
• Update docs
• Settings section in CLI Configuration
why
• Allow using the following list merge strategies in Atmos stack manifests:
• `replace` - Most recent list imported wins (the default behavior).
• `append` - The sequence of lists is appended in the same order as imports.
• `merge` - The items in the destination list are deep-merged with the items in the source list. The items in the source list take precedence. The items are processed starting from the first up to the length of the source list (the remaining items are not processed). If the source and destination lists have the same length, all items in the destination lists are deep-merged with all items in the source list.
The list merging strategies are configured in the atmos.yaml CLI config file, in the settings.list_merge_strategy section
settings:
# `list_merge_strategy` specifies how lists are merged in Atmos stack manifests.
# Can also be set using 'ATMOS_SETTINGS_LIST_MERGE_STRATEGY' environment variable, or '--settings-list-merge-strategy' command-line argument
# The following strategies are supported:
# `replace`: Most recent list imported wins (the default behavior).
# `append`: The sequence of lists is appended in the same order as imports.
# `merge`: The items in the destination list are deep-merged with the items in the source list.
# The items in the source list take precedence.
# The items are processed starting from the first up to the length of the source list (the remaining items are not processed).
# If the source and destination lists have the same length, all items in the destination lists are
# deep-merged with all items in the source list.
list_merge_strategy: replace
:tada: If you’ve ever wanted to merge lists in atmos, this release is for you.
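As a quick illustration of the difference between the strategies (file names and values here are made up for the example):
# catalog/base.yaml (hypothetical)
components:
  terraform:
    vpc:
      vars:
        availability_zones:
          - us-east-2a
          - us-east-2b

# stack manifest that imports catalog/base.yaml and redefines the list
components:
  terraform:
    vpc:
      vars:
        availability_zones:
          - us-east-2c

# With the default `replace`, the final list is [us-east-2c].
# With `append`, the lists are concatenated in import order:
# [us-east-2a, us-east-2b, us-east-2c].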
Please note, this is only implemented as a global default. It’s not possible to merge lists in some contexts, append or replace in others.
We have set the default behavior to be exactly the same as before, which is to replace.
Oh very nice.
Does that mean we can enable this for inherits?
@Andriy Knysh (Cloud Posse)
@RB it’s enabled globally for all stack manifest YAML, so inherits should work as well
2024-05-29
Update atmos validate stacks command @aknysh (#611)
what
• Update atmos validate stacks command
• Improve stack validation error messages
why
• When checking for misconfiguration and duplication of components in stacks, throw errors only if the duplicate component configurations in the same stack are different (this will allow importing the base default/abstract components into many stack manifest files)
• The atmos validate stacks command checks the following:
• All YAML manifest files for YAML errors and inconsistencies
• All imports: if they are configured correctly, have valid data types, and point to existing manifest files
• Schema: if all sections in all YAML manifest files are correctly configured and have valid data types
• Misconfiguration and duplication of components in stacks. If the same Atmos component in the same Atmos stack is defined in more than one stack manifest file, and the component configurations are different, an error message will be displayed similar to the following:
The Atmos component 'vpc' in the stack 'plat-ue2-dev' is defined in more than one
top-level stack manifest file: orgs/acme/plat/dev/us-east-2-extras, orgs/acme/plat/dev/us-east-2.
The component configurations in the stack manifests are different.
To check and compare the component configurations in the stack manifests, run the following commands:
- atmos describe component vpc -s orgs/acme/plat/dev/us-east-2-extras
- atmos describe component vpc -s orgs/acme/plat/dev/us-east-2
You can use the '--file' flag to write the results of the above commands to files
(refer to <https://atmos.tools/cli/commands/describe/component>).
You can then use the Linux 'diff' command to compare the files line by line and show the differences
(refer to <https://man7.org/linux/man-pages/man1/diff.1.html>)
When searching for the component 'vpc' in the stack 'plat-ue2-dev', Atmos can't decide which
stack manifest file to use to get configuration for the component. This is a stack misconfiguration.
Consider the following solutions to fix the issue:
- Ensure that the same instance of the Atmos 'vpc' component in the stack 'plat-ue2-dev'
is only defined once (in one YAML stack manifest file)
- When defining multiple instances of the same component in the stack,
ensure each has a unique name
- Use multiple-inheritance to combine multiple configurations together
(refer to <https://atmos.tools/core-concepts/components/inheritance>)
notes
• This is an improvement of the previous release https://github.com/cloudposse/atmos/releases/tag/v1.75.0. The previous release introduced overly strict checking and disallowed the case where the same component in the same stack was just imported into two or more stack manifest files (this type of configuration is acceptable, since the component config is always the same in the stack manifests because it’s just imported and not modified)