#atmos (2024-05)

2024-05-01

SlackBot avatar
SlackBot
01:22:21 PM

removed an integration from this channel: Linear Asks

Justin avatar

Hey all, my team is working through updating some of our atmos configuration, and we’re looking for guidance around when to vendor. We’re considering adding logic to our GitHub Actions that would pull components for affected stacks, allowing us to keep the code outside of the repository. One win here would be less to review on pull requests as we vendor new versions into different dev/staging/prod stages. However, is it a better play to vendor as we develop and then commit the changes to the atmos repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great question! I think we could/should add some guidance to our docs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let me take a stab at that now and present some options.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

First, make sure you’re familiar with the .gitattributes file, which lets you flag certain files/paths as auto-generated; that collapses them in your Pull Requests and reduces eye strain.
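
For example, a sketch (paths are illustrative; point them at wherever you vendor):

components/terraform/** linguist-generated=true
vendor/** linguist-generated=true

GitHub collapses files marked linguist-generated in PR diffs, so vendored updates don’t drown out the hand-written changes.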

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Option 1: By default, Cloud Posse (in our engagements and refarch) vendors everything into the repositories.

Pros

• an immutable record of components / less reliance on the remote repositories

• super easy to test changes without a “fork bomb” and the ensuing “PR storm” as you update multiple repos

• super easy to diverge when you want to

• super easy to detect changes and what’s affected

• much faster than cloning all dependencies

• super easy to “grep” (search) the repo to find where something is defined

• No need to dereference a bunch of URLs just to find where something is defined

• Easier for newcomers to understand what is going on

Cons

• Reviewing PRs containing tons of vendored files sucks

• …? I struggle to see them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Option 2: Vendoring components (or anything, for that matter, which is supported by atmos) can be done “just in time”, more or less like terraform init for providers and modules.
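
For example, a CI job can vendor just in time before planning. A minimal sketch (the component, stack, and file names are illustrative):

# Detect what changed, then vendor and plan only that
atmos describe affected --format json > affected.json
atmos vendor pull --component vpc
atmos terraform plan vpc -s plat-ue2-dev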

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Pros:

• only things that change or are different are in the local repo

• PRs don’t contain a bunch of duplicated files

• It’s more “DRY” (though I’d argue it’s not really any more DRY than committing them in practice, because vendoring is completely automated)

Cons

• It’s slower to run, because everything must first be downloaded

• It’s not immutable. Remote refs can change, including tags

• Remote sources can go away, or suffer transient errors

• It’s harder to understand what something is doing, when you have to dereference dozens of URLs to look at the code

• Cannot just do a “code search” (grep) through the repo to see where something is defined

• In order to determine what is affected, you have to clone everything which is slower

• If you want to test out some change, you have to fork it and create a branch with your changes, then update your pinning

• If you want to diverge, you also have to fork it, or vendor it in locally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Option 3: Hybrid - might make sense in some circumstances. Vendor & commit 3rd-party dependencies you do not control, and for everything else permit remote dependencies and vendor JIT.

Justin avatar

I didn’t even give thought to someone making a cheeky ref update to a tag

Justin avatar

Alright, this is supremely helpful, thanks a ton for the analysis. I’ll bring these points back to my team for discussion.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great, would love to hear the feedback.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, not sure if this is just a coincidence, but check out this thread

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey Guys

I have a question if someone can help with the answer. I have a few modules developed in a separate repository and I’m pulling them down to the atmos repo dynamically while running the pipeline, using the vendor pull command. But when I bump up the version, atmos is unable to consider that as a change in a component, and the atmos describe affected command gives me an empty response. Any idea what I’m missing here? Below is my code snippet - vendor.yaml.

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "account_scp"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/account_scp"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - account_scp
Justin avatar

I really like the point of this making the configuration set immutable as well. We’re locked in with exactly what we have committed to the repository.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A lot of our “best practices” are inspired by the tools we’ve cut our teeth on. In this case, our recommendation is inspired by ArgoCD (and Flux), at least the way we’ve implemented and used it.

Justin avatar

Yeah, I was working with vendoring in different versions today to their own component path (component/1.2.1, component/1.2.2) and it took me a moment to realize this was changing the workspace prefix in the S3 bucket where the state for the stack was being stored.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


it took me a moment to realize this was changing the workspace prefix in the S3 bucket where the state for the stack was being stored.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is configurable

Justin avatar

So now I’m vendoring in different versions to different environment paths to keep the component names the same as things are promoted. (1.2.1 => component/prod, 1.2.2 => component/dev)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a great way to support multiple concurrent versions, just make sure you configure the workspace key prefix in the atmos settings for the component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


So now I’m vendoring in different versions to different environment paths to keep the component names the same as things are promoted
This is another way. Think of them as release channels (e.g. alpha, beta, stable, etc)
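
As a sketch, release channels map naturally onto vendor.yaml by giving each pinned version its own target (paths and versions are illustrative):

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: networking-vendor-config
spec:
  sources:
    - component: "vpc-prod"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}"
      version: "1.2.1"
      targets:
        - "components/terraform/vendored/networking/vpc/prod"
    - component: "vpc-dev"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}"
      version: "1.2.2"
      targets:
        - "components/terraform/vendored/networking/vpc/dev"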

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Either way, it’s great to fix the component path in the state bucket, so you don’t encounter problems if you reorganize how you store components on disk.

Justin avatar

So then my configs look sort of like,

components:
  terraform:
    egress_vpc/vpc:
      metadata:
        component: vendored/networking/vpc/dev
        inherits:
          - vpc/dev

    egress_vpc:
      metadata:
        component: vendored/networking/egress_vpc/dev
      settings:
        depends_on:
          1:
            component: egress_vpc/vpc
      vars:
        enabled: true
        vpc_configuration: egress_vpc/vpc
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) do we have docs on how to “pin” the component path in the state backend?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, I would personally invert the paths for release channels:

• vendored/dev/networking….

• vendored/prod/….

Since over time, some components might get deprecated and removed, and others might never progress past dev.

Justin avatar

Yeah, that’s a very good point and something we’re still working through.

We also want to get to the point where we have “release waves”…..so, release a change to a “canary” group, and then roll out to groupA, groupB, groupC, etc….

Justin avatar

Version numbers would honestly help with that a bit more, though. How could I pin the workspace key prefix for a client if I did have the component version in the path?

Justin avatar

Ahh, looks like towards the bottom of the page here: https://atmos.tools/quick-start/configure-terraform-backend/

Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I think our example could be better….

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
    # Atmos component `vpc`
    vpc:
      metadata:
        # Point to the Terraform component in `components/terraform/vpc/1.2.3`
        component: vpc/1.2.3
      # Define variables specific to this `vpc` component
      vars:
        name: vpc
        ipv4_primary_cidr_block: 10.10.0.0/18
      # Optional backend configuration for the component
      backend:
        s3:
          # by default, this is the relative path in `components/terraform`, so it would be `vpc/1.2.3`
          # here we fix it to `vpc`
          workspace_key_prefix: vpc
Justin avatar

I can have my cake and eat it too. Delicious.

Justin avatar

Yeah, just hugely beneficial, thank you so much for the help.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our pleasure!

Justin avatar

I have the remote-state stuff working that you were able to guide me through the other week…many thanks as well there, I think that’s really going to help us level up our IaC game.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please feel free to add any thoughts to this https://github.com/cloudposse/atmos/issues/598

#598 Remote sources for components

Describe the Feature

This is a similar idea to what Terragrunt does with their “Remote Terraform Configurations” feature: https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/#remote-terraform-configurations

The idea would be that you could provide a URL to a given root module and use that to create a component instance instead of having that component available locally in the atmos project repo.

The benefit here is that you don’t need to vendor in the code for that root module. Vendoring is great when you’re going to make changes to a configuration, BUT if you’re not making any changes then it just creates large PRs that are hard to review and doesn’t provide much value.

Another benefit: I have team members that strongly dislike creating root modules that are simply slim wrappers of a single child module because then we’re in the game of maintaining a very slim wrapper. @kevcube can speak to that if there is interest to understand more there.

Expected Behavior

Today all non-custom root module usage is done through vendoring in Atmos, so no similar expected behavior AFAIK.

Use Case

Help avoid vendoring in code that you’re not changing and therefore not polluting the atmos project with additional code that is unchanged.

Describe Ideal Solution

I’m envisioning this would work like the following with the $COMPONENT_NAME.metadata.url being the only change to the schema. Maybe we also need a version attribute as well, but TBD.

components:
  terraform:
    s3-bucket:
      metadata:
        url: https://github.com/cloudposse/terraform-aws-components/tree/1.431.0/modules/s3-bucket
      vars:
        ...

Running atmos against this configuration would result in atmos cloning that root module locally into a temporary cache and then using that cloned root module as the source to run terraform or tofu against.

Alternatives Considered

None.

Additional Context

None.

2024-05-02

Kubhera avatar
Kubhera

Hey Guys,

I have a use case where my component in atmos has just a Terraform null_resource to execute a Python script based on a few triggers. However, is there any way I can still manage this similar to a component, but not through Terraform (null_resource)? Can I use something like the custom CLI commands that atmos supports to do this? Any input on this use case would be really appreciated.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this is exactly one of the use cases for custom subcommands. This way you can call it with atmos to make it feel more integrated and documented (e.g. with atmos help). You can then automate it with atmos workflows. Alternatively, you can skip the subcommand and only use a workflow. Up to you.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also access the stack configuration from custom commands
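
Roughly like this, following the pattern from the custom commands docs (a sketch; the command name and script are illustrative):

commands:
  - name: run-script
    description: Run the component's Python script with its stack configuration
    arguments:
      - name: component
        description: Name of the component
    flags:
      - name: stack
        shorthand: s
        description: Name of the stack
        required: true
    # Makes the fully-resolved component config available as {{ .ComponentConfig.* }}
    component_config:
      component: "{{ .Arguments.component }}"
      stack: "{{ .Flags.stack }}"
    steps:
      - python3 scripts/run.py --stage {{ .ComponentConfig.vars.stage }}

You’d then invoke it like any built-in command, e.g. atmos run-script account_scp -s plat-ue2-dev.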

Kubhera avatar
Kubhera

any examples on how to access stack configuration through custom commands?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Instead of running echo, just run your Python script.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse) we need a task to add example command invocation for each example custom command. cc @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. the docs only show how to define it, not how to call it. Of course, it can be inferred, but “as a new user I want to quickly see how to run the command because it will help me connect the dots”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kubhera we’ll improve the docs. For an example on “any examples on how to access stack configuration through custom commands?”, please see this custom command (as Erik mentioned above):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

2024-05-03

RB avatar

What do you folks think of allowing the s3 module or component to have an option to add a random-string suffix, to avoid the high cost of unauthorized S3 access denials?

Or is there a gomplate way of generating a random id and passing it in the yaml to attributes?

RB avatar

Or would gomplate, if a random function exists, generate a new random string upon each atmos run?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately cannot effectively use the uuid function in gomplate for this because it will cause permadrift

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Just to confirm: is the automatic checksum hash not an option you are considering, aesthetically? …because it solves exactly this problem with the bucket name length, but will chop and checksum the tail end

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry, I didn’t read carefully. You are not asking about controlling the length.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Regarding the post you linked, we discussed this on office hours. @matt mentioned that Jeff Barr, in response to the post, said AWS is not going to charge for unauthorized requests and that they are making changes for that. To be clear, they refunded the person who made the unfortunate discovery, and are making more permanent fixes to prevent this in the future.

RB avatar

Oh, I didn’t realize that Jeff Barr responded with that. That’s great news. Thanks Erik. Then I suppose it’s a non-issue once they make changes for it

RB avatar

Is there a doc available that points to Jeff Barr’s response? That may calm some nerves

RB avatar
Jeff Barr :cloud: (@jeffbarr) on X

Thank you to everyone who brought this article to our attention. We agree that customers should not have to pay for unauthorized requests that they did not initiate. We’ll have more to share on exactly how we’ll help prevent these charges shortly.

#AWS #S3

How an empty S3…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB here’s the documentation on OpenTofu cc @Matt Gowie https://github.com/cloudposse/atmos/pull/594

#594 Document OpenTofu Support

what

• Document how to use OpenTofu together with Atmos

why

• OpenTofu is a stable alternative to HashiCorp Terraform

• Closes #542


2024-05-04

Kubhera avatar
Kubhera

Hi @Erik Osterman (Cloud Posse), I have an interesting use case: a stack with approximately 10 components, all of which depend on the output of a single component that is deployed first as part of my stack. What I’m doing right now is reading that output using the remote-state feature of atmos. However, when I execute the workflow that runs all of these components sequentially (this is the current design I came up with), even when only a single component has changed, it reads the state file of that component every single time for each component, and that adds extra time to my pipeline execution. Imagine if I have to deploy 100 affected stacks. Is there any way to mimic something like a global variable in the stack file and refer to it all over the stack wherever it is needed? Basically, what I’m looking for is: read the output of a component once per stack, and use it in all the other dependent components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haha, @Kubhera my ears are ringing from the number of times this is coming up in questions.

TL;DR: we don’t support this today

Design-wise, we’ve really wanted to stick with vanilla Terraform as much as possible. Our argument is that the more we stick into atmos that depends on Terraform behaviors, the more end-users are “vendor locked” into using it; also, we don’t want to re-invent “terraform” inside of Atmos and YAML. We want to use Terraform for what it’s good at, and atmos for where Terraform is lacking: configuration. However, that’s a self-imposed constraint, and it seems to be one users are not so concerned about; the feature is frequently asked for, albeit for different use cases. This channel has many such requests.

So we’ve been discussing a way of doing that. It’s tricky because there are a dozen types of backends. We don’t want to reimplement the functionality in atmos to read the raw state, but instead make terraform output a datasource in atmos.

We’ve recently introduced datasources into atmos, so this will slot in nicely there. Also, take a look at the other data sources, and see if one will work for you.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here are the data sources we support today: https://docs.gomplate.ca/datasources/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you write your outputs to SSM, for example, atmos can read those.
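
For example, with gomplate’s AWS Systems Manager Parameter Store datasource (a sketch; the parameter path is illustrative):

# atmos.yaml
templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        vpc_id:
          url: "aws+smp:///platform/dev/vpc_id"

# stack manifest
vars:
  vpc_id: '{{ (datasource "vpc_id").Value }}'

The aws+smp datasource returns the parameter object, so .Value extracts the stored string.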

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Alternatively, since you’re using a workflow, you can do this workaround.
file: Files can be read in any of the supported formats, including by piping through standard input (Stdin). Directories are also supported.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So in one of your steps, save the terraform output as a json blob to a file.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then define a datasource to read that file. All the values in the file will then be available in the stack configuration.
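
Putting the workaround together, a sketch (the file path, workflow, component, and stack names are illustrative, as is the -- passthrough of terraform’s -json flag):

# workflow manifest: persist one component's outputs once per stack
workflows:
  deploy-all:
    steps:
      - type: shell
        name: save-outputs
        command: atmos terraform output shared -s plat-ue2-dev -- -json > /tmp/shared-outputs.json

# atmos.yaml: expose the file as a datasource
templates:
  settings:
    gomplate:
      datasources:
        shared:
          url: "file:///tmp/shared-outputs.json"

Subsequent steps can then reference values like '{{ (datasource "shared").vpc_id.value }}' in the stack configuration, instead of each component re-reading the remote state.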

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As always, atmos is a swiss army knife, so there’s a way to do it. Just maybe not optimized yet for your use-case.

RB avatar

What about switching from terraform remote state to aws data sources instead for each component?

This way we don’t have to depend on the output of another component to deploy a dependent component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are planning to switch to using a configurable kv-store model, so that things work better in brownfield.

Using data sources assumes too much:

a) that a data source exists for the type of look-up on all resources, and it is not consistent across providers

b) they frequently don’t support returning all values of a given type, or error if not found

c) it is complicated when what you need to look up is in multiple different accounts

d) it is messy if you depend on resources cross-cloud

The “kv” store pattern allows one to choose from any number of backends for configuration, such as DynamoDB, S3, SSM, etc. We’re building modules for multiple mainstream clouds (as part of an engagement), so expect to see aws, artifactory, gcp, and azure kv-store implementations.

In a brownfield environment, the obligation would be to populate the “well known path” in whatever backend you choose for the kv-store with the values you need. However, @RB, since you are doing a lot of brownfield, I’d like to learn if you think this could work for you.

RB avatar

Hmmm, a kv-store model would then require hosting dynamodb/s3/ssm. It sounds like a generic cross-platform remote-state.

You would still have the dependency issue, unfortunately, where component X needs to be deployed (or redeployed) for component Y to get the correct information, even if it’s simply adding an extra output to X, whereas with a data source the extra output may already be available.

RB avatar

Since the remote-state uses the context which includes all the contextual tags, I’m not sure I fully understand why it would be difficult to use data sources.

For example, a vpc created by cloudposse’s component, should have the Namespace, Stage, Environment, Name, etc added to the vpc resource as well as the subnets. Using just those tags, a data source for the VPC and subnets can retrieve the vpc id, private subnets, and public subnets which is mostly why the remote state for the vpc component is used.

Matt Calhoun avatar
Matt Calhoun

Just chiming in…the big difference here, IMO, is that in order to use a data source you have to be able to authenticate to the account where the resource is defined. So imagine a component where you need to read some piece of data (like CIDR subnet blocks) for all of the VPCs in your organization. Now you need a ton of providers for that component just to be able to read a list of all your CIDR blocks across accounts and regions. Compare that with the k/v store pattern, where you just read the data directly in the k/v store in your local account and region where you’re deploying the component.

RB avatar

Hi Matt. Interesting use-case and agreed, if you’re deploying in us-west-2 and need to retrieve a resource in another region like us-east-1, then creating lots of providers is not scalable. That does seem like an edge case compared to same-region/same-account resources.

However, you could have both implementations depending on the use-case

  1. the kv-store for the use-case you described, where you have resources in different regions/accounts
     a. Pros
        i. works for cross-region/cross-account without additional providers
     b. Cons
        i. requires chaining applies for components that rely on other components’ outputs
  2. the data source method for resources that are in the same region and account
     a. Pros
        i. works for same region, same account without additional providers
        ii. no need for chaining applies for components
     b. Cons
        i. does not work well with cross-region/cross-account as additional providers are needed

It doesn’t have to be one or the other. We can use (1) for those edge cases and (2) for the common case of retrieving items like VPCs, EKS, etc via data source.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Which components affect you the most with this, @RB ?

RB avatar

Pretty much anything that currently depends on the vpc component for now, but I’m sure I’ll hit it more with other common remote states too, like eks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, maybe we could support that use-case for a finite number of common data sources, and support a specific type of look up method, for example by tags and resource type.

Kubhera avatar
Kubhera

It would really save me a lot of time; anybody’s help in this regard would be really appreciated.

Kubhera avatar
Kubhera

Thanks a ton in advance !!!

2024-05-05

RB avatar

If a component’s enabled flag is set to false, it should delete an existing component’s infra. But what if you did not want the component to be acted on at all? Would a new metadata.enabled flag be acceptable? This way it wouldn’t even create the workspace or run terraform commands; Atmos should just exit early.

RB avatar

Thoughts on this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wouldn’t that be an abstract component?

RB avatar

It doesn’t have to be. It could be a non-abstract (real) or abstract component.

# non-abstract or real
components:
  terraform:
    vpc:
      metadata:
        # atmos refuses to run commands on it because this is false
        enabled: false
      vars:
        # if metadata.enabled is true or omitted, this enabled flag will enable all terraform resources
        enabled: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB this sounds like a good idea. Currently we can use metadata.type: abstract to make the component non-deployable and just a blueprint for derived components. But you are saying that you’d like to “disable” any component (abstract or real). There could def be use-cases for that. We’ll create a ticket

RB avatar

Yes please! Thank you very much!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabriela Campana (Cloud Posse) please create a ticket to add metadata.enabled to Atmos manifests


2024-05-06

Justin avatar

Hey all, happy Monday! I hope this is a quick question and I’m missing something obvious. I need to configure a provider for a component that requires credentials stored in a GitHub secret. I’m missing how to get that data out of the secret and available for Atmos to use when building the provider_override.tf.json file in the component directory.

providers:
  provider_name:
    alias: "example"
    host: "<https://example.com>"
    account_id: ${{ env.example_account_id }}
    client_id: ${{ env.example_client_id }}
    client_secret: ${{ env.example_client_secret }}

Is there some documentation or capability in Atmos to parse our YAML files and replace variable placeholders with content from a secret store?

Justin avatar

Wondering if I should place this provider in the main.tf of the module and pass in the values via variables, or if there is a better way to do this.

Stephan Helas avatar
Stephan Helas

Hi @Justin

I have the same problem. My current workaround is to use sops in the component to decrypt the secrets, but you should be able to use other providers as well.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So, I think what you have should work provided:

a) you’re using lower-case environment variables (since your example uses lower case)

b) you have templating enabled

c) you’re aware of this warning

https://atmos.tools/core-concepts/stacks/templating/#atmos-sections-supporting-go-templates

Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, I didn’t look closely enough. Your syntax is wrong. Use Go Template syntax.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
provisioned_by_user: '{{ env "USER" }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
providers:
  provider_name:
    alias: "example"
    host: "<https://example.com>"
    account_id: '{{ env "example_account_id" }}'
    client_id: '{{ env "example_client_id" }}'
    client_secret: '{{ env "example_client_secret" }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if your envs were upper case, you would need to retrieve them that way too

   client_secret: '{{ env "EXAMPLE_CLIENT_SECRET" }}'
Justin avatar

Amazing, I will give this a shot, thank you.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos supports Sprig and Gomplate functions in templates in stack manifests. Both engines have specific syntax to get ENV variables

https://masterminds.github.io/sprig/os.html

https://docs.gomplate.ca/functions/env/

Also, make sure that templating is enabled in atmos.yaml

https://atmos.tools/core-concepts/stacks/templating#configuration

OS Functions

Useful template functions for Go templates.

Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.
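
For example (a sketch), the same lookup in each engine’s syntax:

# Sprig
provisioned_by_user: '{{ env "USER" }}'
# Gomplate
provisioned_by_user: '{{ getenv "USER" }}'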

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) what do you think about importing the docs for these functions? Since they are not in our docs, they are not searchable or discoverable without knowing to go to another site

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i think it’s a good idea, we need to improve our docs. How do we “import” it?

Stephan Helas avatar
Stephan Helas

Hi,

I don’t know if it’s me or a bug: if I use uppercase letters in tenant, the remote state provider will downcase it and then not find the stack. I’ve simply renamed the stack, but wanted to let you know.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect it does this because the null-label convention used to be to always downcase, but I think that option is now configurable.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas I think it’s not related to the remote-state provider (it does not change the case on its own); as Erik mentioned, it’s the null-label module that always downcases. In that case, if you are using upper case for any of the context variables, it will not work with Atmos commands

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can set it to none and test. Let us know if it’s working
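
If it’s the null-label casing, the knob to experiment with is label_value_case (a sketch; set it globally or per component):

vars:
  label_value_case: none   # allowed: lower, title, upper, none; the default is lower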

Stephan Helas avatar
Stephan Helas

Second thing, I’m not 100% sure, but I believe the remote-state provider ignores stacks.name_template and only looks for stacks.name_pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) worth double checking

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas I would like to see your config to better understand what you are doing and the issues you are facing. You can DM me your repo (or part of it with the relevant config) and I’ll take a look and help you with any issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Since stacks.name_template is a new feature, there could be some issues in Atmos, which we can def fix. It was tested (and is used now) in some configs, but we def did not spend as much time on it as on the other Atmos features. Without seeing what you are doing, it’s difficult to tell if there are any issues or if it’s just a misconfig

Stephan Helas avatar
Stephan Helas

I’ll try to make a small demo. Essentially, what happens is this:

atmos.yaml

stacks:
  base_path: 'stacks'
  included_paths:
    - 'bootstrap/**/*'
    - 'KN/**/*'
  excluded_paths:
    - '**/_defaults.yaml'
  name_template: '{{.settings.tenant}}-{{.settings.instance}}-{{.settings.stage}}'

remote-state.tf

module "vpc" {

  source = "...."

  component = var.vpc_component
  context   = module.this.context
}

atmos output:

 Terraform has been successfully initialized!
module.wms_vpc.data.utils_component_config.config[0]: Reading...
module.base.data.aws_availability_zones.available: Reading...
module.base.data.aws_availability_zones.available: Read complete after 0s [id=eu-central-1]

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: stack name pattern must be provided in 'stacks.name_pattern' CLI config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable
│
│   with module.wms_vpc.data.utils_component_config.config[0],
│   on .terraform/modules/wms_vpc/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│

after changing the remote state config, it works:

module "vpc" {

  source = "..."

  component = var.wms_vpc_component
  context   = module.this.context

  env = {
    ATMOS_STACKS_NAME_PATTERN = "{tenant}-{environment}-{stage}"
  }

  tenant      = var.tags["atmos:tenant"]
  environment = var.tags["atmos:instance"]
  stage       = var.tags["atmos:stage"]

}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, @Andriy Knysh (Cloud Posse) looks like the provider is out of date, or not supporting the name_template

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the latest version of the utils provider uses the Atmos code that supports name_template

https://github.com/cloudposse/terraform-provider-utils/releases/tag/1.22.0

make sure it’s downloaded by terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also please make sure that your atmos.yaml is in a location where both the Atmos binary and the utils provider can find it, see https://atmos.tools/core-concepts/components/remote-state#caveats

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if all is ok and still does not work, ping me, I’ll review your config

Stephan Helas avatar
Stephan Helas

I did a terraform clean and added the provider version, but it did not work. The module version used is 1.22.

 Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
- base in ../../../../../../poc/modules/wms-base
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for base.this...
- base.this in .terraform/modules/base.this
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for this...
- this in .terraform/modules/this
Downloading git::ssh://..../aws-atmos-modules.git?ref=v0.1.0 for wms_vpc...
- wms_vpc in .terraform/modules/wms_vpc/remote-state
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for wms_vpc.always...
- wms_vpc.always in .terraform/modules/wms_vpc.always

Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding cloudposse/utils versions matching "!= 1.4.0, >= 1.7.1, ~> 1.22, < 2.0.0"...
- Finding hashicorp/local versions matching ">= 1.3.0, ~> 2.4"...
- Finding hashicorp/aws versions matching "~> 5.37"...
- Finding hashicorp/external versions matching ">= 2.0.0"...
- Installing cloudposse/utils v1.22.0...
- Installed cloudposse/utils v1.22.0 (self-signed, key ID 7B22D099488F3D11)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/aws v5.48.0...
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) looks like he’s using the correct version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes looks like it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I don’t know what the issue is, and we never tested a config like that with the remote-state module; will have to review in more detail

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas if you could DM me your config (or the relevant part of it), that would save some time to test and figure out any issues. If not, I’ll recreate something similar. Thank you

Stephan Helas avatar
Stephan Helas

yes, will do. will take till tomorrow.

Stephan Helas avatar
Stephan Helas

Hi @Andriy Knysh (Cloud Posse),

I tried to build a simple hello-world using the local backend, but I can’t get it to work. I’ve created two components, vpc and hello-world. vpc simply outputs vpc_id; hello-world should then use that output and just output it again.

https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-remote

after terraform apply vpc I have a local state containing the output.

 ▶ atmos terraform apply vpc  -s org-acme-test

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/null v3.2.2

Terraform has been successfully initialized!
module.vpc.null_resource.name: Refreshing state... [id=8866214659539567934]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no
differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

vpc_id = "vpc-123456789"

but the outputs of remote_state are null (I don’t know why)

▶ atmos terraform plan hello-world -s org-acme-test

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Reusing previous version of cloudposse/utils from the dependency lock file
- Using previously-installed hashicorp/null v3.2.2
- Using previously-installed hashicorp/local v2.5.1
- Using previously-installed hashicorp/external v2.3.3
- Using previously-installed cloudposse/utils v1.22.0

Terraform has been successfully initialized!
module.vpc.data.utils_component_config.config[0]: Reading...
module.vpc.data.utils_component_config.config[0]: Read complete after 0s [id=e69aca3dce4ce3047ed5ff092291e5c4c02ee685]
module.vpc.data.terraform_remote_state.data_source[0]: Reading...
module.vpc.data.terraform_remote_state.data_source[0]: Read complete after 0s

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform planned the following actions, but then encountered a problem:

  # module.base.null_resource.name will be created
  + resource "null_resource" "name" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + remote_vpc = {
      + backend               = {}
      + backend_type          = "local"
      + outputs               = null
      + remote_workspace_name = null
      + s3_workspace_name     = null
      + workspace_name        = "org-acme-test-vpc"
    }
  + tags       = {
      + "atmos:component"         = "hello-world"
      + "atmos:component_version" = "hello-world/v0.1.0"
      + "atmos:manifest"          = "org/acme/hello-world"
      + "atmos:stack"             = "org-acme-test"
      + "atmos:workspace"         = "org-acme-test-hello-world"
      + environment               = "acme"
      + stage                     = "test"
      + tenant                    = "org"
    }
╷
│ Error: Attempt to get attribute from null value
│
│   on main.tf line 5, in module "base":
│    5:   vpc_id = module.vpc.outputs.vpc_id
│     ├────────────────
│     │ module.vpc.outputs is null
│
│ This value is null, so it does not have any attributes.
╵
exit status 1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) have we tested remote-state with the local backend?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I don’t think we have (will check if the remote-state module supports that). I’ll check Stephan’s repo as well

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas unfortunately, the module https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state does not support local remote state. Looks like you are the first person that has tried to use it with the local backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can do any of the following:

  1. Use s3 backend (local backend is just for testing, not for real infra)
  2. Use data "terraform_remote_state" directly (https://developer.hashicorp.com/terraform/language/settings/backends/local). This is not the Atmos way of doing things, but you can test with it
  3. Use remote state of type static https://atmos.tools/core-concepts/components/remote-state-backend
Backend Type: local | Terraform | HashiCorp Developer

Terraform can store the state remotely, making it easier to version and work with in a team.

Remote State Backend | atmos

Atmos supports configuring Terraform Backends to define where

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note this is different from the remote-state module. This is configured natively using stacks.
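
A sketch of option 3 for this hello-world case (the output values are illustrative):

components:
  terraform:
    vpc:
      remote_state_backend_type: static
      remote_state_backend:
        static:
          vpc_id: "vpc-123456789"

With that, the remote-state lookup for vpc returns the static vpc_id instead of reading a real backend.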

2024-05-07

2024-05-08

2024-05-09

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’d like to solicit more feedback on remote sources for components so we arrive at the best implementation.

https://github.com/cloudposse/atmos/issues/598

#598 Remote sources for components


Release notes from atmos avatar
Release notes from atmos
07:34:34 PM

v1.72.0 Update gomplate datasources. Add env and evaluations sections to Go template configurations @aknysh (#599)

Release v1.72.0 · cloudposse/atmos

Update gomplate datasources. Add env and evaluations sections to Go template configurations @aknysh (#599) what

Update gomplate datasources Add env section to Go template configurations A…

RB avatar

Is there a way to list only real components?

atmos list components --real
RB avatar

i have this ugly workaround for now

✗ atmos describe stacks | yq e '. | to_entries | .[].value.components.terraform | with_entries(select(.value.metadata.type != "abstract")) | keys' | grep -v '\[\]' | sort | uniq
- aws-team-roles
- aws-teams
- account
- account-map
- aws-team-roles
- github-oidc-provider
- vpc
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we should add native support, but given how much configuration is managed by Atmos, you can imagine that the number of possible ways of filtering is quite staggering. So, for now, the recommendation is to create a custom command to view the data the way you want to view it. Literally think about it like creating a view.
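
For example, RB’s pipeline above could be wrapped as a custom command in atmos.yaml (a sketch reusing the same yq filter):

commands:
  - name: list-real-components
    description: List non-abstract (real) Terraform components across all stacks
    steps:
      - atmos describe stacks | yq e '. | to_entries | .[].value.components.terraform | with_entries(select(.value.metadata.type != "abstract")) | keys' | grep -v '\[\]' | sort | uniq

Then atmos list-real-components behaves like a built-in “view”.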

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB what do you think about adding a --jq parameter like in the gh cli as a stop-gap measure?

RB avatar

Thanks for sharing, Erik. I did not realize there was already documentation for this and that it was already in my atmos.yaml (at least the list stacks command)

RB avatar

Yes, the --jq or --query (jmespath) would be handy so you do not need to depend on jq

RB avatar

I don’t know how much it would help me personally since I already have jq installed

2024-05-10

Stephan Helas avatar
Stephan Helas

Hi,

is there a JSON manifest for vendor.yaml?

Stephan Helas avatar
Stephan Helas

Since inheritance will not merge metadata, I need to define the component version for every component in every stack.

I’m trying to dynamically name a component using catalog templates, but so far it is not working. With this approach I’m trying to version my component but still be able to use multiple instances of it.

import highlevel component in stack:

 import:
  - catalog/account/v0_1_0
  - mixins/region/eu-central-1

use import for the highlevel component:

import:
  - path: catalog/components/account-vpc/v0_1_0
    context:
      name: 'vpc/{{ .setting.region }}'

use component template:

components:
  terraform:
    '{{ .name }}':
      metadata:
        component: account-vpc/v0.1.0

my component ends up being:

atmos list components -s accounts-sandbox-5
global
vpc/{{ .setting.region }}
RB avatar

I’m probably mistaken here but is the problem that .setting.region is not getting replaced by the region or that the list subcommand is not interpreting the region?

Or should it be changed to .settings.region or .vars.region instead?

Or what was the exact result you were expecting? Because if you change the metadata.component to a versioned component, you wouldn’t see that result in the atmos list components command, since that only returns the names of the real plannable components and not their real component dir names (which are located in the metadata).

Stephan Helas avatar
Stephan Helas

The way I understand atmos, .settings gets the same treatment as .vars; the difference is that .vars will be converted to Terraform inputs for components. That’s the reason why I try to use .settings.

So, the documentation states that metadata can be used in templating, but this is not working for me (templating is enabled).

atmos.yaml

stacks:
  base_path: 'stacks'
  included_paths:
    - 'bootstrap/**/*'
    - 'KN/**/*'
  excluded_paths:
    - '**/_defaults.yaml'
  name_template: '{{.settings.tenant}}-{{.settings.instance}}-{{.settings.stage}}'

templates:
  settings:
    enabled: true
    evaluations: 1
    sprig:
      enabled: true
    gomplate:
      enabled: true

stacks/foo.yaml

settings:
  tenant: accounts
  instance: sandbox
  stage: 5
  component: account-vpc/v0.2.0

components:
  terraform:
    vpc/eu-central-1:
      metadata:
        component: '{{ .settings.component }}'
atmos list components -s accounts-sandbox-5
global
vpc/eu-central-1

plan doesn’t start

▶ atmos terraform plan vpc/eu-central-1  --stack accounts-sandbox-5
'vpc/eu-central-1' points to the Terraform component '{{ .settings.component }}', but it does not exist in 'components/terraform'
Stephan Helas avatar
Stephan Helas

setting

templates:
  settings:
    evaluations: 2 

makes no difference

RB avatar

ah I see, interesting. I haven’t used templates.settings.xyz yet. I did notice that in your stacks/foo.yaml, you omit the templates portion, so should

stacks/foo.yaml

settings:
  tenant: accounts
  instance: sandbox
  stage: 5
  component: account-vpc/v0.2.0

be

templates:
  settings:
    tenant: accounts
    instance: sandbox
    stage: 5
    component: account-vpc/v0.2.0
Stephan Helas avatar
Stephan Helas

you mean like this?

settings:
  tenant: accounts
  instance: sandbox
  stage: 5

templates:
  settings:
     component: account-vpc/v0.2.0

components:
  terraform:
    vpc/eu-central-1:
      metadata:
        component: '{{ .settings.component }}'

it’s the same result for me.
RB avatar

you may have better luck with a yaml anchor

Stephan Helas avatar
Stephan Helas

But my goal is to use multiple instances of components with different versions, and if I inherit something, only .vars and .settings are merged.

I’ve created a sample repo: https://github.com/astephanh/atmos-hello-world

[hello-world-settings](https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-settings) is working, while [hello-world-settings-template](https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-settings-template) is not.

the only difference I’ve made is using a template for the component path

diff --color -r hello-world-settings/stacks/org/acme/hello-world.yaml hello-world-settings-template/stacks/org/acme/hello-world.yaml
7a8
>   component: v0.1.0
13c14
<         component: hello-world/v0.1.0
---
>         component: 'hello-world/{{ .settings.component }}'
Stephan Helas avatar
Stephan Helas

Here is the section of the documentation:

https://atmos.tools/core-concepts/stacks/templating#atmos-sections-supporting-go-templates

component:
  terraform:
    vpc:
       metadata:
        component: "{{ .settings.component }}"
RB avatar

I’m not sure. Perhaps it’s a bug?

Maybe @Andriy Knysh (Cloud Posse) may know more here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas this looks wrong (wrong indent)

settings:
  tenant: accounts
  instance: sandbox
  stage: 5

templates:
  settings:
     component: account-vpc/v0.2.0
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should be

settings:
  tenant: accounts
  instance: sandbox
  stage: 5
  component: xxxx
  templates:
    settings:
      evaluations: 2

component:
  terraform:
    vpc:
      metadata:
        component: "{{ .settings.component }}"

{{ .settings.component }} refers to the `component` in the `settings` section

I'm not sure what you are trying to achieve, or if you are just playing/testing; let me know and we'll try to help
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

settings.templates.settings should be used. It might look “wrong”, but we’ll be adding more features to templates (e.g. specify some templates to use with other Atmos commands), so in the near future it might look like this

settings:
   templates:
     settings:
       evaluations: 2
       gomplate:
         datasources: {}
     definitions:
       readme: {} # template config to generate READMEs
       component: {} # template config to generate components
       stack: {} # template config to generate stacks
Stephan Helas avatar
Stephan Helas

I was following the documentation here: https://atmos.tools/core-concepts/stacks/templating#atmos-sections-supporting-go-templates.

I’m trying to reuse components in stacks without the need to define the component version every time. For this I use .settings.component, like here:

repo: https://github.com/astephanh/atmos-hello-world/tree/main/hello-world-template

https://github.com/astephanh/atmos-hello-world/blob/main/hello-world-template/stacks/org/acme/hello-world.yaml

settings:
  component: v0.1.0
  templates:
    settings:
      enabled: true
      evaluations: 2

vars:
  tenant: org
  environment: acme
  stage: test

components:
  terraform:
    hello-world/1:
      metadata:
        component: 'hello-world/{{ .settings.component }}'
      vars:
        lang: de
        location: hh
        region: hh

result:

atmos terraform plan hello-world/1 -s org-acme-test
'hello-world/1' points to the Terraform component '{{ .settings.component }}', but it does not exist in 'components/terraform/hello-world'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to enable templating in atmos.yaml

templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    # Number of evaluations/passes to process `Go` templates
    # If not defined, `evaluations` is automatically set to `1`
    evaluations: 2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

remove this from settings in the stack manifest

  templates:
    settings:
      enabled: true
      evaluations: 2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

enabling templates in stack manifests is not currently supported (we might add it in future versions)

Templating in Atmos can also be configured in the settings.templates.settings section in stack manifests.

The settings.templates.settings section can be defined globally per organization, tenant, account, or per component. Atmos deep-merges the configurations from all scopes into the final result using inheritance.

The schema is the same as templates.settings in the atmos.yaml CLI config file, except the following settings are not supported in the settings.templates.settings section:

settings.templates.settings.enabled
settings.templates.settings.sprig.enabled
settings.templates.settings.gomplate.enabled
settings.templates.settings.evaluations
settings.templates.settings.delimiters
These settings are not supported for the following reasons:

You can't disable templating in the stack manifests which are being processed by Atmos as Go templates

If you define the delimiters in the settings.templates.settings section in stack manifests, the Go templating engine will think that the delimiters specify the beginning and the end of template strings, will try to evaluate it, which will result in an error
RB avatar

How does one provision a VPC with database-specific subnets, like terraform-aws-modules/vpc?

Is it better to provision a new VPC instead?

RB avatar

or create a new component for dynamic-subnets and attach a new suite of public/private subnets to an existing VPC?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it allows you to create “named” subnets (e.g. database), and then use their names (using remote state and outputs) to deploy resources into them

RB avatar

That looks promising. Thanks Andriy!

Is it possible then to create public and private subnets and additionally create a named subnet for databases in the same vpc component instantiation?

But if we did do multiple subnets in the same vpc, each component that reads from the remote state would only read the public and private subnet outputs. How would we get the remote state to show different outputs and then use them differently for components that depend on the vpc component? I imagine we’d need to update each component, right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

see the PR description; the module outputs maps of public and private subnets, which you can use from remote state

named_private_subnets_map = {
  "backend" = tolist([
    "subnet-0393680d8ea3dd70f",
    "subnet-06764c7316567eacc",
  ])
  "db" = tolist([
    "subnet-0a7c4b117b2105a69",
    "subnet-074fd7ad2b902bec2",
  ])
  "services" = tolist([
    "subnet-02c63d0c0c2f84bf5",
    "subnet-0f6d042c659cc1346",
  ])
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
named_public_subnets_map = {
  "backend" = tolist([
    "subnet-03e27e41e0b818080",
    "subnet-00155e6b64925ba51",
  ])
  "db" = tolist([
    "subnet-04e5d57b1e2035c7c",
    "subnet-0a326693cfee8e68d",
  ])
  "services" = tolist([
    "subnet-05647fc1f31a30896",
    "subnet-01cc440339718014e",
  ])
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module.vpc.outputs.named_private_subnets_map["db"] - gives a list of private subnets named "db", assuming that `module.vpc` is the `remote-state` module
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use that for Network Firewall since it always requires a dedicated subnet in a VPC just for itself

RB avatar

Fantastic. Thank you. I’ll give this a try

RB avatar

module "subnets" {
  firewall_subnet_ids = local.vpc_outputs.named_private_subnets_map[var.firewall_subnet_name]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh, the component is missing the inputs for the module

variable "subnets_per_az_count" {
  type        = number
  description = <<-EOT
    The number of subnets of each type (public or private) to provision per Availability Zone.
    EOT
  default     = 1
  nullable    = false
  validation {
    condition = var.subnets_per_az_count > 0
    # Validation error messages must be on a single line, among other restrictions.
    # See <https://github.com/hashicorp/terraform/issues/24123>
    error_message = "The `subnets_per_az_count` value must be greater than 0."
  }
}

variable "subnets_per_az_names" {
  type = list(string)

  description = <<-EOT
    The subnet names of each type (public or private) to provision per Availability Zone.
    This variable is optional.
    If a list of names is provided, the list items will be used as keys in the outputs `named_private_subnets_map`, `named_public_subnets_map`,
    `named_private_route_table_ids_map` and `named_public_route_table_ids_map`
    EOT
  default     = ["common"]
  nullable    = false
}
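
A minimal sketch of how those inputs could be set from an Atmos stack manifest, assuming the vpc component passes them through to the module (the component name and values are illustrative, not from this thread):

components:
  terraform:
    vpc:
      vars:
        subnets_per_az_count: 1
        # each name becomes a key in `named_private_subnets_map` and `named_public_subnets_map`
        subnets_per_az_names: ["common", "firewall"]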
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB we’d appreciate it if you opened a PR and added it

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @RB, please format the TF code

1
RB avatar

I’m not sure what’s happening. Looks like a lot of the checks failed because they took 6 hours and maybe timed out?

2024-05-11

2024-05-13

Stephan Helas avatar
Stephan Helas
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Stephan Helas!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(small typo in PR)
support atmoskj from

Stephan Helas avatar
Stephan Helas

yeah was late in the night

Stephan Helas avatar
Stephan Helas

fixed it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Stephan Helas avatar
Stephan Helas

ok, i signed it off. one more thing i need to read about….

2024-05-14

Andrew Ochsner avatar
Andrew Ochsner

question… i know this has been covered here but I don’t think I can find/search the history w/o it getting chopped off going to Free Slack…. any guidance around how to look up resources/ids from prereq stacks? Is it always just use the remote-state data lookup? Or is it just easier/preferred to do a regular data lookup like one would w/ any resources? Guidance on when to use one or the other? In my environment (Azure) we split up terraform state across the subscriptions that hold the resources, so there’s not like a single blob storage that holds all of the state files…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So, first and foremost, we are starting to come around to the idea of doing what the other tooling in the Terraform ecosystem does: directly making outputs available from one component as inputs to another.

For the longest time, we’ve been adamant about not doing this, as it forces reliance on the tooling. We always say Atmos lets you eject and Terraform handles remote state just fine, so why should the tool do it?

But this is a self-imposed limitation, and how we’ve implemented our reference architecture, and it seems many would prefer just to do it in the tooling layer.

Therefore we are discussing how to best support this concept in atmos, such as with the newly released Atmos data sources.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So, you have a few options.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  1. The remote state implementation, which works very well once you have it set up, but it takes some configuration to get it working.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  2. Use data sources and lookups as you would normally; however, not all information is available via data source lookups, not all providers have sufficient data sources, etc.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  3. Consider using a KV store for your cloud platform. For an enterprise customer, for example, we implemented a KV store for Artifactory and plan to do the same for GCP, Azure, and AWS.

https://github.com/cloudposse/terraform-artifactory-kv-store

cloudposse/terraform-artifactory-kv-store
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We recommend the KV store approach b/c it’s provider agnostic and cloud agnostic. The same pattern can be applied wherever it makes the most sense.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Different systems can read/write from it and populate it. It’s not restricted to the information provided by data sources, so it’s more generalizable.

Andrew Ochsner avatar
Andrew Ochsner

Thanks. Is any of that guidance/pattern/tradeoffs documented anywhere? Would hate to lose this useful info to the Slack ether

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know. It reallllly hurt to downgrade.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also have https://archive.sweetops.com, but Slack also changed how exports work, so they are now limited to 90 days too. That did not use to be the case.

SweetOps Slack Archive

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Is any of that guidance/pattern/tradeoffs documented anywhere? Would hate to lose this useful info to the Slack ether
You’re absolutely right. It would be beneficial to post this and the trade-offs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, here’s a recent thread on the topic as well. https://sweetops.slack.com/archives/C031919U8A0/p1714829174944729

Hi @Erik Osterman (Cloud Posse), I have an interesting use case where I have a stack with approximately 10 components, and all of these components depend on the output of a single component which is deployed first as part of my stack. What I’m doing right now is reading the output of that component using the remote-state feature of Atmos. However, when I execute the workflow which has commands to execute all these components sequentially, even when there is a change for only a single component (this is the current design I came up with), it reads the state file of that component every single time for each component, and that adds extra time to my pipeline execution. Imagine if I have to deploy 100 stacks which are affected. Is there any way to mimic this feature with something similar to having a global variable in the stack file, and refer to it all over the stack wherever it is needed? Basically what I’m looking for is: read the output of a component once per stack, and use it in all the other dependent components.

1
Marat Bakeev avatar
Marat Bakeev

@Erik Osterman (Cloud Posse) regarding the slack downgrade - maybe it’s worthwhile moving to some other solution? self host mattermost or something..?

Marat Bakeev avatar
Marat Bakeev

(but yeah, this means you guys have to pay for all the community using it. kinda sucks)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We would, but it’s so disruptive to how any community functions. Imagine getting even a few hundred people to move let alone thousands. It’s basically a reset or starting over.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, none of the systems support proper imports. At best, it mocks the user who sent the message as an app, and embeds the date as part of the message: [2024-03-01 01:02:03] I said blah.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is at least why we haven’t pulled the trigger. Here’s what I think would actually happen: We move to the new platform, and 2 months later Slack finally announces a plan for communities.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is now supported in native Stack configurations without modification of components.

https://sweetops.slack.com/archives/C031919U8A0/p1718495222745629

2024-05-15

2024-05-16

Ryan avatar

Hey everyone, I figured I would come in here and ask before I start hacking around. We’re leveraging the VPC module to build our pub/priv subnets, and I need to modify the default route on those public subnets away from the IGW to a Cisco device. I’m guessing this is outside the scope of the module, but I figured I would ask. Hope you guys are having a good week.

Ryan avatar

Noting I think I could build the change on top of it? idk. Just brainstorming based on new reqs this week.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know we’ve accomplished this previously, since we’ve implemented centralized organizational egress over the transit gateway

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t know the specifics. Would need to defer to @Andriy Knysh (Cloud Posse) or @Jeremy G (Cloud Posse)

Ryan avatar

Yea we’re on Transit, but then we also have Cisco’s SD-WAN product as the interchange, to make it tougher to understand.

Ryan avatar

Yea that’s cool, we’re literally just in planning phases.

Ryan avatar

Yea I REALLY wanted that subnet firewall design

Ryan avatar

instead now I have to route to ip i think

Ryan avatar

ty for that diagram

Ryan avatar

I’m imagining doing some ugly static vars to fix this lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it can be hard to do fully dynamically

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have implemented the network architecture depicted in the diagrams https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/inspection-deployment-models-with-AWS-network-firewall-ra.pdf, but those are completely separate/new Terraform components to do all the routing b/w diff VPCs (ingress, egress, inspection). The components are configured and deployed by Atmos. Since they’re complex and custom-made for specific use-cases (even the diagrams in the PDF show many diff use-cases), we have not open-sourced them

Ryan avatar

No I wasn’t expecting open source, I was moreso just brainstorming on how to handle modifying those route tables for my case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s prob not possible to make such a component able to define all possible networking configurations in such complex architectures as using ingress/egress/inspection VPCs and TGWs

Ryan avatar

yea i was kinda like uhhhhh when they said they wanted to go this direction

Ryan avatar

def preferred network fw

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep, I know how you feel

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anyway, regarding “I was moreso just brainstorming on how to handle modifying those route tables for my case”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what we did: we implemented all of that in Terraform (as a few components), including defining all the VPCs (using our vpc components), and the following:

• Route tables (they are not default)

• All TGW routes b/w all those ingress/egress/inspection VPCs

• All EC2 subnet routes b/w all those ingress/egress/inspection VPCs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why the Terraform components are custom; we did not try to make them universal for all possible combinations of such network architectures (which would be a very complex task)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan I hope this will help you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I recommend not trying to make a universal component, but do what you need in Terraform and then deploy it with Atmos

Ryan avatar

thats prob most of what im doing now, i def do not dig into root/child cloudposse structure besides what was previously deployed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you going to use the network firewall and the inspection VPC?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

asking because network firewall complicates the design even further, e.g. it requires a separate dedicated subnet in the VPC where it’s deployed

Ryan avatar

No we currently have a pretty flat network design but everything routes through a network acct

Ryan avatar

Besides the igws the vpc module deploys

Ryan avatar

Well somewhere in that module stack lol

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why, if you already have VPCs with subnets and definitely can’t destroy them to add another subnet, you can use a separate inspection VPC with a dedicated Firewall subnet and route traffic through it

1
Ryan avatar

That would make cost lower too

Ryan avatar

I think

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

see this PR https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/174 where we added the ability to deploy multiple “named” subnets in a VPC per AZ

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was done to support Network Firewall’s dedicated subnet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan if you have questions, feel free to ask. It’s not possible to explain everything at once since it’s too many moving parts, but we’ll be able to answer questions one by one on diff parts of that network architecture

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Ryan We recommend using terraform-aws-dynamic-subnets to deploy and configure the VPC’s public and private subnets. (We have a bunch of similar modules, but we are trying to consolidate them down to just this one, so it has the most features and best support.) It has so many options that even I have forgotten them all, and I wrote most of them. The options I would recommend considering first:

• If you do not want a route to the internet gateway, you do not have to supply the IGW ID, and no route to it will be created. You can then add your own route, using the aws_route resource, to the route tables specified by the public_route_table_ids module output.

• You can suppress the creation and modification of route tables altogether by setting public_route_table_enabled to false, and then you can create and configure the route tables yourself.

If neither of those suits you, you can probably figure out yet another way. I suggest reviewing the code as the best documentation.

cloudposse/terraform-aws-dynamic-subnets
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, that’s what we did: used the dynamic-subnets component to create subnets and disable some routes, then used the aws_route resource to create subnet routes, and the aws_ec2_transit_gateway_route resource to create TGW routes

Ryan avatar

appreciate it gentlemen

RB avatar

I created the following SCP for the identity account which allows me to manage iam roles, assume iam roles, and blocks all other access in this account.

data "aws_iam_policy_document" "default" {
  statement {
    sid       = "DenyAllExcept"
    effect    = "Deny"
    resources = ["*"]
    not_actions = [
      # The identity account should be able to create and manage IAM roles
      # policy_sentry query action-table --service iam | grep -i role
      # policy_sentry query action-table --service iam --access-level read
      "iam:AddRoleToInstanceProfile",
      "iam:AttachRolePolicy",
      "iam:CreateRole",
      "iam:CreateServiceLinkedRole",
      "iam:DeleteRole",
      "iam:DeleteRolePermissionsBoundary",
      "iam:DeleteRolePolicy",
      "iam:DeleteServiceLinkedRole",
      "iam:DetachRolePolicy",
      "iam:Generate*",
      "iam:Get*",
      "iam:List*",
      "iam:PassRole",
      "iam:PutRolePermissionsBoundary",
      "iam:PutRolePolicy",
      "iam:Simulate*",
      "iam:RemoveRoleFromInstanceProfile",
      "iam:TagRole",
      "iam:UntagRole",
      "iam:UpdateAssumeRolePolicy",
      "iam:UpdateRole",
      "iam:UpdateRoleDescription",
      # Also need to be able to assume roles into this account as this will be the primary ingress
      "sts:AssumeRole",
    ]

    condition {
      test     = "StringNotLike"
      variable = "aws:PrincipalArn"
      values = [
        # "arn:aws:iam::*:role/this-is-an-exempt-role",
        "arn:aws:iam::*:root",
      ]
    }
  }
}

Hope that helps anyone else secure this account. It was way too easy to accidentally create resources here when incorrectly assuming a secondary role.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Jeremy G (Cloud Posse)

I created the following SCP for the identity account which allows me to manage iam roles, assume iam roles, and blocks all other access in this account. …

RB avatar

We can probably further tune it too cause we probably don’t need AddRoleToInstanceProfile or RemoveRoleFromInstanceProfile

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Also should allow

        "sts:SetSourceIdentity",
        "sts:TagSession",
1

2024-05-17

Marvin de Bruin avatar
Marvin de Bruin

Heya! I’m following the Quick Start docs in Atmos, and I am enjoying it very much so far; it’s hard not to skip steps and go all in. I have run into a small issue with provisioning though. I’m at configuring a TF backend, and it feels like the docs are skipping some steps between here: https://atmos.tools/quick-start/configure-terraform-backend#provision-terraform-s3-backend and https://atmos.tools/quick-start/configure-terraform-backend#configure-terraform-s3-backend. The first section describes that Atmos has the capability to provision a backend for itself using the tfstate-backend component, but I can’t find a good doc on how to actually use it; I tried, and I’ll post the errors I see in the thread here. The next step assumes provisioning is complete. I’m happy to open a PR with the missing instructions if I figure it out.

1
Marvin de Bruin avatar
Marvin de Bruin
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵

╷
│ Error: Unreadable module directory
│
│ The directory  could not be read for module "assume_role" at iam.tf:30.
╵
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm… that’s odd. I don’t believe symlinks are used anywhere in the quick start. Are you using any symbolic links with your project (e.g. ln -s)?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sometimes people symlink their home directory to google drive or dropbox, or onedrive

Marvin de Bruin avatar
Marvin de Bruin

Not consciously, as in I did not run any symlinking

Marvin de Bruin avatar
Marvin de Bruin

we’re using the latest version of the components: (vendor.yaml)

    - component: "tfstate-backend"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/tfstate-backend?ref={{.Version}}"
      version: "1.433.0"
      targets:
        - "components/terraform/tfstate-backend"
      included_paths:
        - "**/*.tf"
      excluded_paths:
        - "**/providers.tf"
      tags:
        - core
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To be clear, this is not an atmos error. This is terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

│ Unable to evaluate directory symlink:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos does not create any symlinks as part of vendoring.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What operating system are you on?

Marvin de Bruin avatar
Marvin de Bruin

OSX

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm…

Marvin de Bruin avatar
Marvin de Bruin

Im running atmos in Docker though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using it in geodesic or a custom image?

Marvin de Bruin avatar
Marvin de Bruin
ARG GEODESIC_VERSION=2.11.2
ARG GEODESIC_OS=debian
ARG ATMOS_VERSION=1.72.0
ARG TF_VERSION=1.8.3
1
Marvin de Bruin avatar
Marvin de Bruin

custom image based on cloudposse/geodesic:${GEODESIC_VERSION}-${GEODESIC_OS}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, ok, Geodesic does have symlinks. So that’s probably where we have a problem.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) or @Jeremy White (Cloud Posse) any ideas on this one?

Marvin de Bruin avatar
Marvin de Bruin

Thanks so far!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Our cold start procedure has changed over time, and you may be looking at an older version. In the current cold start, you need to have in your repository the source code and stack configuration for:

• account

• account-map

• account-settings

• aws-teams

• aws-team-roles

• tfstate-backend

before you start running Terraform.

Then, to start, you run

atmos terraform apply tfstate-backend -var=access_roles_enabled=false --stack core-usw2-root --auto-generate-backend-file=false

(assuming you use our defaults, where the org root is core-root, the primary region is us-west-2, and the region abbreviation scheme is short).

Then, if you have not already configured the Atmos stacks backend, you configure it now to use the S3 bucket and Role ARN output by the component.
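
(For reference, a minimal sketch of that stacks backend configuration; the bucket, table, and role values are placeholders for the tfstate-backend outputs, not values from this thread:)

terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "<bucket name output by tfstate-backend>"
      dynamodb_table: "<DynamoDB table output by tfstate-backend>"
      role_arn: "<access role ARN output by tfstate-backend>"
      region: "us-west-2"
      key: "terraform.tfstate"
      encrypt: true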

Then, you move the tfstate-backend state to S3 by running

atmos terraform apply tfstate-backend -var=access_roles_enabled=false --stack core-usw2-root --init-run-reconfigure=false

Later, after provisioning all the other components, you come back and run

atmos terraform apply tfstate-backend --stack core-usw2-root
1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Marvin de Bruin Please post links to the documentation you are looking at.

Marvin de Bruin avatar
Marvin de Bruin

@Jeremy G (Cloud Posse) Thanks for looking in to this.

These are all the docs Im looking at apart from some google-fu results that didnt bring me far

Main: https://atmos.tools/quick-start/ TF backend docs: https://atmos.tools/quick-start/configure-terraform-backend#terraform-s3-backend provisioning section: https://atmos.tools/quick-start/configure-terraform-backend#provision-terraform-s3-backend (this step is where Im currently blocked, but I havent followed your steps yet)

I’ve also been looking at the tutorials, but they come with outdated components https://atmos.tools/tutorials/first-aws-environment (repo: https://github.com/cloudposse/tutorials) The component: https://github.com/cloudposse/tutorials/tree/main/03-first-aws-environment/components/terraform/tfstate-backend

I’ve also looked at https://github.com/cloudposse/terraform-aws-components/tree/main/modules/tfstate-backend which also does not mention the cold start dependencies

Marvin de Bruin avatar
Marvin de Bruin

I’ve now added all the components you mentioned, pulled them down (atmos vendor pull) and gave them a minimal config (based on the usage docs of each component). I’m still seeing that Terraform symlink issue though when I run the command you mentioned (I modified it slightly):

 ⨠ atmos terraform apply tfstate-backend -var=access_roles_enabled=false --stack core-gbl-root --auto-generate-backend-file=false

Initializing the backend...
Initializing modules...
- assume_role in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map/modules: no such file or directory
╵

╷
│ Error: Unreadable module directory
│
│ The directory  could not be read for module "assume_role" at iam.tf:30.
╵

exit status 1
Marvin de Bruin avatar
Marvin de Bruin

Btw, I’m not expecting a response over the weekend, I’m just very excited working on this. Have a great weekend.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Marvin de Bruin I don’t know where or why you have a symlink. Our standard directory structure looks like this:

Marvin de Bruin avatar
Marvin de Bruin

interesting, I indeed do not have that modules folder

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You do not need to have components/terraform exactly, but you do need to have all the components in the same directory. In particular, all the modules reference ../account-map/

Marvin de Bruin avatar
Marvin de Bruin

And these are in the components pulled down by the vendor pull script, I also see them in the repo. I think I might have a misconfig in my vendors file

Marvin de Bruin avatar
Marvin de Bruin

I might have missed a step, but is the recommended way to manually modify the vendors.yaml, or to do this automatically somehow (like with npm for JavaScript, or Composer for PHP)?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Vendor config for account-map

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Our vendoring system has quirks and kinks still to be worked out. The glob patterns do not quite work as expected, for example. I’d expect **/*.tf to pull in all the *.tf files, in all the subdirectories, but it does not do that if you do not include the **/modules/** as well.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

account-map is a special case because of the sub-modules
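
(A hedged sketch of the workaround for the glob quirk described above, mirroring the tfstate-backend vendor entry posted earlier in this thread; the version and paths are illustrative:)

    - component: "account-map"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref={{.Version}}"
      version: "1.433.0"
      targets:
        - "components/terraform/account-map"
      included_paths:
        - "**/*.tf"
        # without this, `**/*.tf` does not pick up the *.tf files in sub-modules
        - "**/modules/**"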

Marvin de Bruin avatar
Marvin de Bruin

Ah! This was indeed the cause of the blockage! Thank you very much! Is there a repo somewhere with all the default vendor configs for each module? It could be a fun project to create an installer, like make module add x

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s move future refarch questions to refarch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Re: installer for components, we plan to make it easier to do via CLI.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Re: vendor configs, those are part of the typical jumpstart engagement and generated based on the type of architecture you need.

1

2024-05-18

2024-05-19

2024-05-20

2024-05-22

RickA avatar

Go and Sprig functions

1
RickA avatar

From the doc examples this ought to be pretty straightforward, but the errors suggest otherwise:

components:
  terraform:
    downtimes:
      vars:
        config-testing: true
        datadog_downtime_configuration:
          "{{ strings.ToLower .id }}":
            enabled: {{or (index . "enabled") "true"}}
            monitor_tags: "{{ .monitor_tags }}"
            scope: {{or (index . "scope") "'*'"}}
            {{ if hasKey . "recurrence" }}
            recurrence: "{{ .recurrence }}"
            {{- end }}
            timezone: {{or (index . "timezone") "UTC"}}
            display_timezone: {{or (index . "display_timezone") "UTC"}}
            message: "{{ .message }}"

I get yaml:7: function "strings" not defined when using strings.ToLower.

Well and now I went to show the error for hasKey and it’s working. I updated my atmos version so perhaps that solved that.

So just the Go function issue if there’s an obvious answer, please.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you prob need to enable gomplate in atmos.yaml

see <https://atmos.tools/core-concepts/stacks/templating#stack-manifest-templating-configuration>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
sprig:
  # Enable Sprig functions in `Go` templates in Atmos stack manifests
  enabled: true
# <https://docs.gomplate.ca>
# <https://docs.gomplate.ca/functions>
gomplate:
  # Enable Gomplate functions and datasources in `Go` templates in Atmos stack manifests
  enabled: true
RickA avatar

Well that’s embarrassing. I should read the entire page.

Thank you and sorry for being a bad user.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not a problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if it works for you

RickA avatar

Sprig seems to be working by default. Gomplate does not. Adding templates.settings (each of the 3 enable options) doesn’t impact the behavior of either one. As in I can’t disable Sprig either.

Should we expect templates.settings to be shown when doing an atmos describe component? Not seeing my change reflected in the output.

And to confirm I should be able to get away with a pretty simple addition such as:

templates:
  settings:
    enabled: true
    sprig:
      enabled: false
    gomplate:
      enabled: true

I presume? This was an attempt to stop Sprig from working.

Am on v1.72.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RickA those are settings in atmos.yaml, not in stack manifests

RickA avatar

Yessir.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you put them in your atmos.yaml ?

RickA avatar

Yes, atmos.yaml looks like this:

base_path: "."
components:
  terraform:
    base_path: "./components/terraform"
stacks:
  base_path: "./stacks"
  included_paths: "**/*"
  excluded_paths:
    - "catalog/**/"
  name_pattern: "{environment}-{stage}"
logs:
  verbose: false
templates:
  settings:
    enabled: true
    sprig:
      enabled: false
    gomplate:
      enabled: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you see the same error yaml:7: function "strings" not defined?

RickA avatar

That is correct.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you still can’t find the issue, you can DM me your config and I’ll take a look

RickA avatar

Roger. Thank you.

RickA avatar

Am back in the template world again trying to use gomplate functions. A teammate revealed that they work in our environment, which is more than I figured out 2 months ago. So the issue is they do not work within our templates.

Stack files configuration in play: my-stack.yaml, import.yaml, template.tmpl

• my-stack.yaml will import the file import.yaml

• import.yaml is configured to import template.tmpl with context

• template.tmpl is where I want to use gomplate functions

import.yaml

import:
  - path: "catalog/rick/_template.tmpl"
    context:
      id: "rmq_monitor_poc"
      name: "[poc] RabbitMQ reboot status"
      tags:
        - "foo:bar"

template.tmpl

components:
  terraform:
    monitors:
      vars:
        monitors:
          "{{ lower .id }}":
            name: "{{ .name }}"
            tags:
            {{- range .tags }}
            - {{ . }}
            {{- end }}
            strings_template_1: "{{ strings.Title .atmos_component }} component"

Error

invalid stack manifest '_template.tmpl'
template: _template.tmpl:12: function "strings" not defined

If I move strings_template_1 up to import.yaml then it works.

Hopefully that all makes sense. What’s needed to use gomplate functions at the level I’m after? Working with v1.72.0 mostly, but did test with v1.83.1 too just in case. Same result.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RickA for historical reasons, in such cases as you explained (templates in imports and in the imported stack manifests), gomplate functions are not supported in the imports (but Sprig functions are). We’ll review that and improve it in the next release
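
(One possible workaround, sketched here rather than taken from the thread: since Sprig functions do work in imports and imported templates, the gomplate-only call in the template above could be swapped for Sprig’s equivalent title function:)

components:
  terraform:
    monitors:
      vars:
        monitors:
          "{{ lower .id }}":
            # Sprig's `title` in place of the gomplate-only `strings.Title`
            strings_template_1: "{{ title .atmos_component }} component"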

RickA avatar

Appreciate you confirming that. Thank you.

RickA avatar

With the information shared above I’m trying to move up a level with processing a datasource. If I start with a datasource of:

tags:
  - tag1:foo
  - tag2:bar
  - tag3:123

and a template of:

components:
  terraform:
    monitors:
      vars:
        datasource_import: {{ (datasource "global_tags").tags }}
        test:
          {{- range (datasource "global_tags").tags }}
          - {{ . }}
          {{- end }}

I can run gomplate -d global_tags.yaml -f _template.tmpl and get an output of:

components:
  terraform:
    monitors:
      vars:
        datasource_import: [tag1:foo tag2:bar tag3:123]
        test:
          - tag1:foo
          - tag2:bar
          - tag3:123

I’m having trouble translating that to use in an Atmos stack however. I can do a stack such as:

import:
  - catalog/rick/*.yaml

vars:
  environment: test
  stage: rick

components:
  terraform:
    monitors:
      vars:
        datasource: '{{ (datasource "global_tags").tags }}'

And get the output:

vars:
  datasource: '[tag1:foo tag2:bar tag3:123]'

But without putting that in quotes I get errors and I cannot do range over the datasource to get a list.

Any guidance on how using that datasource might work?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a combination of YAML and Go templates…

datasource: {{ (datasource "global_tags").tags }}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

This is not a valid YAML ^

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Default Functions

Useful template functions for Go templates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
datasource: '{{ toJson (datasource "global_tags").tags }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, if you are using datasource "global_tags" many times in the stack manifest, I recommend you use atmos.GomplateDatasource instead

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will cache the first invocation result and re-use it in the next invocations (which can speed it up a lot)
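
(A hedged sketch of that, assuming the alias is registered under templates.settings.gomplate.datasources in atmos.yaml; the file path is illustrative:)

# atmos.yaml
templates:
  settings:
    gomplate:
      enabled: true
      datasources:
        global_tags:
          url: "file:///stacks/catalog/global_tags.yaml"

# stack manifest: the first call reads the file, later calls reuse the cached result
tags: '{{ toJson (atmos.GomplateDatasource "global_tags").tags }}'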

RickA avatar

I saw that, thank you. Will be considering it depending on how things progress.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it’s the same as the datasource with the same alias, it calls the datasource, and caches the result)

1
RickA avatar

I’m sorry, Andriy, but I am not understanding the reference to toJson at all. It seems like I have to use gomplate functions in the stack (not template) file, but have to use sprig function in a template file. I haven’t been able to use both in the same spot.

I can’t get Atmos to play nicely with anything other than a simple string. Would it be possible to see an example using any sort of multi-value type? I haven’t located one in the repo, docs, or in Slack.

I’ve done it a few different ways in a gomplate playground, but attempting to mimic what I can see the datasource outputting from Atmos isn’t working out the same.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RickA Gomplate has a toJson function as well, https://docs.gomplate.ca/functions/data/#datatojson

data functions - gomplate documentation

A collection of functions that retrieve, parse, and convert structured data. datasource Alias: ds Parses a given datasource (provided by the –datasource/-d argument or defineDatasource). If the alias is undefined, but is a valid URL, datasource will dynamically read from that URL. See Datasources for (much!) more information. Added in gomplate v0.5.0 Usage datasource alias [subpath]Arguments name description alias (required) the datasource alias (or a URL for dynamic use) subpath (optional) the subpath to use, if supported by the datasource Examples person.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can use

'{{ data.ToJSON (datasource "global_tags").tags }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding your questions about multi-value types: Go templates work with strings only (Atmos does not and cannot change anything about that). So anything in a template will be converted to a string using the Go String() function. If the data is a list, it will be converted to the Go representation of the list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the fact that you are doing that in a YAML document is not relevant to Go templates. So you have to “shape” the data into the type that YAML supports

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, you can convert the data to JSON (since JSON is a subset of YAML), or convert it to YAML, or use the Go range function

RickA avatar
gomplate -d global_tags.yaml -f _template.tmpl
components:
  terraform:
    monitors:
      vars:
        datasource_import: [tag1:foo tag2:bar tag3:123]
        test:
          - tag1:foo
          - tag2:bar
          - tag3:123

I’m confused by the statement that gomplates work with strings only. Iterating over {{- range (datasource "global_tags").tags }} worked using a yaml file with a list.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what exactly did not work for you in this expression

'{{ toJson (datasource "global_tags").tags }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I’m confused by the statement that gomplates work with strings only

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i did not say that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I said that Go templates work with strings only, not Gomplate

RickA avatar

Indeed. My mistake.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Gomplate is a lib on top of Go templates

RickA avatar

So gomplate supports it, but Go templating does not, and Atmos is using Go templating?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what exactly ?

RickA avatar

But can support gomplate functions, which still means I can’t do what I want.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

btw, this is not correct

datasource_import: [tag1:foo tag2:bar tag3:123]
RickA avatar

It’s just the output of datasource_import: {{ (datasource "global_tags").tags }}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even if Gomplate outputted it for you, it’s not a correct data type, it’s just a string representation of the list in Go (this is just a string, not an object, not an array, not a map)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not a correct data type in YAML nor in JSON, nor in Terraform

RickA avatar

Just a reference var while working through this to show me what’s in the datasource.

The important bit to me was that the var test which iterated over the list was successful.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this

{{- range (datasource "global_tags").tags }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should be the same as using the toJson function - the function will print a JSON object which is a correct data type in YAML

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even if you don’t see this in the output

        test:
          - tag1:foo
          - tag2:bar
          - tag3:123
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you will see this

test: [ {"tag1": "foo"}, {"tag2": "bar"}, {"tag3": 123} ]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which in YAML is exactly the same as the YAML list (list of maps)

        test:
          - tag1:foo
          - tag2:bar
          - tag3:123
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because JSON is a subset of YAML (so any JSON object or array is correct in YAML)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you see that the data types in both cases are correct (YAML list and JSON list), but the “shape” of the data is different. That’s why I mentioned many times that when working with Go templates in YAML files, you have to “shape” complex types (maps, objects, lists of strings, lists of maps, etc.)

RickA avatar

So given this template:

components:
  terraform:
    monitors:
      vars:
        monitors:
          "{{ lower .id }}":
            name: "{{ .name }}"
            tags:
            {{- range .tags }}
            - {{ . }}
            {{- end }}
            {{ if hasKey . "datasource" }}
            {{- range .datasource }}
            - {{ . }}
            {{- end }}
            {{- end }}

Is the line containing data.toJson supposed to work? (edit: darn fingers)

RickA avatar

Assuming I’m feeding that line as .datasource via an import context.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note again, that by using Go templates you are constructing your YAML files - the result of a template execution should be a valid YAML file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not a valid YAML file

components:
  terraform:
    monitors:
      vars:
        datasource_import: [tag1:foo tag2:bar tag3:123]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even though Gomplate printed it out for you

RickA avatar

I understand that’s a string.

I’m expecting this output:

vars:
  environment: test
  monitors:
    rmq_monitor_poc:
      name: '[poc] RabbitMQ reboot status'
      tags:
      - tag:poc
  stage: rick
workspace: test-rick

With a longer list of tags that is fed from a datasource file.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I understand that’s a string.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, it’s not only a string, it’s actually NOT a valid YAML file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a valid YAML file would be this

components:
  terraform:
    monitors:
      vars:
        datasource_import: "[tag1:foo tag2:bar tag3:123]"
RickA avatar

I really don’t care about that line. It was a troubleshooting var trying to see what was happening.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t understand this

      vars:
        monitors:
          "{{ lower .id }}":
            name: "{{ .name }}"
            tags:
            {{- range .tags }}
            - {{ . }}
            {{- end }}
            {{ if hasKey . "datasource" }}
            {{- range .datasource }}
            - {{ . }}
            {{- end }}
            {{- end }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure what you want to achieve, but the above template looks like it has too many errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example, what’s this

{{ .name }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in what context is the name variable present?

RickA avatar

This particular project is for Datadog monitors. So the name is the literal name of the monitor for their API. Unique for each run of the template.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so, in this case, you need to look at this doc:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c Atmos will evaluate the template, since it has no idea that the template is for an external system (Datadog) and is not intended to be processed by Atmos

RickA avatar

Atmos should evaluate the template. The name is coming from our config. I’m not trying to pass the var to Datadog.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in that case, this is not correct:

name: "{{ .name }}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos does not have any context with the var name in it

RickA avatar

It works.

What is more correct?

RickA avatar
import:
  - path: "catalog/rick/_template.tmpl"
    context:
      id: "RmQ_monitor_POC"
      name: "[poc] RabbitMQ reboot status"
      tags:
        - "tag:poc"
      datasource: '{{ data.toJson (datasource .global_tags).tags }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh ok, you provide the context in the import, I see

RickA avatar

Our current terraform project that manages monitors is pretty cumbersome. It didn’t scale well.

An Ansible evaluation was completed. Now a few of us want to see if we can do similar work using Atmos config files instead. We were able to do downtimes effectively with it. Now doing monitors means needing more challenging solutions to reach some of our improvement goals of the rework.

Which is how we’re running into the datasource topic. There are scenarios where being able to pull in lists as needed might be helpful. With tags being our first poc scenario.

RickA avatar

So that template works with the context we feed it. Except the part where I want to pull a datasourced list in the context and feed it to the template.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try this

import:
  - path: "catalog/rick/_template.tmpl"
    context:
      id: "RmQ_monitor_POC"
      name: "[poc] RabbitMQ reboot status"
      tags:
        - "tag:poc"
      datasource: '{{ (datasource .global_tags).tags }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have two issues here, let me explain:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1. datasource: '{{ data.toJson (datasource .global_tags).tags }}' - data.toJson produces a string, and then you do range on the string (which of course does not work)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. When import with templates is used, Atmos evaluates the import first, sending the provided context. So in this case, context.datasource is not evaluated b/c imports are evaluated first, and only then the templates in stack manifests. Imports with templates require a static context (not a context with other templates, b/c it’s very complicated to evaluate all of that correctly, and also b/c we need to import everything in order to later evaluate all the templates for a component in the stack). You need to redesign this part
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, this datasource .global_tags is another template variable inside the datasource

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this should be a static alias

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

datasource: '{{ (datasource .global_tags).tags }}' will work in stack manifests, but not in imports (assuming .global_tags is in the context, which it is not). So it should be something like

datasource: '{{ (datasource .vars.global_tags).tags }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please provide more details on what you want to achieve and we’ll figure out how to do it better

RickA avatar

Please allow me a little time to absorb those messages and play with the test files.

Thank you for your help and patience with me.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

doc on how Atmos processes templates in imports and in the imported manifests https://atmos.tools/core-concepts/stacks/templates/#excluding-templates-in-imports-from-processing-by-atmos

RickA avatar

Wanted to follow up and say we’re trying to pivot on our path. Seems like what you mentioned to Patrick in his thread is going to be the education we require as well.

Is it possible to reference anything like .atmos_stack or a .vars.whatever in a context? If I’m reading the phases properly it feels like none of that can be passed to an import.

Unless you can suggest otherwise, it feels like we’re going to want to handle variance in our template within Terraform. I’m thinking vars in Atmos that tell Terraform how to grab data fed to it by other means, so it can further customize the config before it’s sent to Datadog.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in imports, you can use only one context - the static context provided in the context field

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the reason is simple (although not easy to grasp) - before you can use anything from an Atmos component in the templates, the Atmos CLI needs to process all imports and all the inheritance to get the final values for all the sections

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

w/o importing everything, we can’t calculate the final values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why the final values can’t be used in imports

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but only the static context

RickA avatar

I think that makes sense. It’s not obvious, but with what you’ve shared I can imagine the challenge.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not a technical challenge, it’s a fundamental restriction

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to get the final values, we need to import everything and deep-merge all the sections

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if we used the final values in the imports, that would be a circular dep

RickA avatar

The templating wouldn’t allow us to do helpers as you would in the Helm world, would it? Step out of Atmos a level for some magic.

RickA avatar

I’m in a spot where I can potentially use Atmos to give static values as instructions for the template to make decisions off of. I’d just benefit from being able to manage some of the content of those decisions in a better organized fashion.

That’s why the datasource looked promising. I want to manage a file with particular data that gets injected. Then it can be subjected to Atmos’ override logic that we all know and love.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RickA we’ll help you with any particular question and implementation. Just keep in mind that in imports you can use only the static context, which is not the same as the context of the entire Atmos component. I understand this is an annoying restriction (maybe it can be improved for some particular cases, but not in general, since it’s a fundamental restriction, not an implementation detail). What you are trying to do could prob be done in a few diff ways

RickA avatar

I’m not annoyed. Just trying to get a better understanding of the intentions of Atmos. You guys have a ton of information available, but it’s also a ton of information to try to understand. And as always a user base is going to push the limits - so if anything I’m the annoying one.

pv avatar

I have a variable that I need to pass that is formatted like this in HCL:

variable "firewall_rules" {
  description = "value"
  type = list(object({
    name                    = string,
    description             = optional(string, null),
    direction               = string, # The input Value must be uppercase
    priority                = optional(number, 1000),
    ranges                  = list(string),
    source_tags             = optional(list(string), null),
    source_service_accounts = optional(list(string), null),
    target_tags             = optional(list(string), null),
    target_service_accounts = optional(list(string), null),
    allow = optional(list(object({
      protocol = string # The input Value must be uppercase
      ports    = optional(list(string), [])
    })), []),
    deny = optional(list(object({
      protocol = string # The input Value must be uppercase
      ports    = optional(list(string), [])
    })), []),
    log_config = optional(object({
      metadata = string
    }), null)
  }))
  default = []
}

And I cannot get the value to translate properly. How do I fix my yaml file?

firewall_rules:
          - name: "RULE_NAME"
            description: "DESCRIPTION"
            direction: "DIRECTION"
            priority: <NUMBER>      
            ranges: ["<IP_RANGE>"]
          - allow:
              protocol: "PROTOCOL_TYPE"
              ports: ["PORT1", "PORT2"]
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

- allow: needs two indents to the right since it’s a property of the object with the name - name: "RULE_NAME"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
firewall_rules:
          - name: "RULE_NAME"
            description: "DESCRIPTION"
            direction: "DIRECTION"
            priority: <NUMBER>      
            ranges: ["<IP_RANGE>"]
            allow:
              - protocol: "PROTOCOL_TYPE"
                ports: ["PORT1", "PORT2"]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

allow is a list of objects, so in YAML it should be expressed as shown above

1
Release notes from atmos avatar
Release notes from atmos
06:34:38 PM

v1.73.0 Allow Go templates in metadata.component section. Add components.terraform.command section to atmos.yaml. Document OpenTofu support @aknysh (#604)

Release v1.73.0 · cloudposse/atmos


2024-05-23

Dhruv Tiwari avatar
Dhruv Tiwari

Anyone facing issues with Affected Stacks? Seems like it is unable to get the correct componentPath; runs previously successful are failing now. Previously:

Run set +e
  set +e
  
  TERRAFORM_OUTPUT_FILE="./terraform-${GITHUB_RUN_ID}-output.txt"
  
  tfcmt \
  --config /home/runner/work/_actions/cloudposse/github-action-atmos-terraform-plan/v2/config/summary.yaml \
  -owner "Org" \
  -repo "aws_infra_atmos" \
  -var "target:ops-logging-deploy-org_vpc_logs-bucket" \
  -var "component:org_vpc_logs-bucket" \
  -var "componentPath:components/terraform/s3-bucket" \

Now:

Run set +e
  set +e
  
  TERRAFORM_OUTPUT_FILE="./terraform-${GITHUB_RUN_ID}-output.txt"
  
  tfcmt \
  --config /home/runner/work/_actions/cloudposse/github-action-atmos-terraform-plan/v2/config/summary.yaml \
  -owner "Org" \
  -repo "aws_infra_atmos" \
  -var "target:ops-logging-deploy-org_vpc_logs-bucket" \
  -var "component:org_vpc_logs-bucket" \
  -var "componentPath:components/terraform/" \

Which eventually leads to failure. This is my step snippet:

      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v2
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}
          atmos-config-path: /home/runner/work/_temp/atmos-config
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


runs previously successful are failing now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you pinning the version of atmos?

Dhruv Tiwari avatar
Dhruv Tiwari

yes:

jobs:
  atmos-affected:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v3
        with:
          atmos-config-path: .
          atmos-version: 1.63.0
          nested-matrices-count: 1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you please share the actual failure?

Dhruv Tiwari avatar
Dhruv Tiwari

Error:

The file /home/runner/work/aws_infra_atmos/aws_infra_atmos/components/terraform/.terraform.lock.hcl does not exist.

First I thought it could have been https://github.com/cloudposse/github-action-atmos-get-setting, which was released 2 days ago and is part of the atmos plan actions, but I don’t see anything there that would lead to this. I also thought maybe .terraform.lock.hcl isn’t supposed to be there, but it was included in previous successful runs; the only change is the path, which used to be something like /home/runner/work/aws_infra_atmos/aws_infra_atmos/components/terraform/<component_name>/.terraform.lock.hcl

Dhruv Tiwari avatar
Dhruv Tiwari

Oh, so I had to pin the atmos version for the atmos plan action as well; this fixed it:

      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v2
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}
          atmos-config-path: /home/runner/work/_temp/atmos-config
          atmos-version: 1.63.0

Strange though, as this was working fine before

Dhruv Tiwari avatar
Dhruv Tiwari

Thanks for the help @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It is an annoyance that the version needs to be pinned in more than one place.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We should resolve that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve added documentation for how to leverage OpenTofu

https://atmos.tools/integrations/opentofu

OpenTofu Integration | atmos

Atmos natively supports OpenTofu.
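A minimal sketch of what that looks like, assuming only what the v1.73.0 release notes above describe (the new components.terraform.command section in atmos.yaml); see the linked docs for the authoritative syntax:

components:
  terraform:
    # assumes the `tofu` binary is available on the PATH;
    # Atmos will invoke it instead of `terraform`
    command: tofu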

Release notes from atmos avatar
Release notes from atmos
10:24:39 PM

v1.74.0 Update Atmos logs. Update docs @aknysh (#605)

Release v1.74.0 · cloudposse/atmos

what

Update Atmos logs: make the logs respect the standard file descriptors like /dev/stderr. Update docs.

https://atmos.tools/cli/configuration/#logs
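For reference, a minimal sketch of the logs settings this release touches, assuming the schema shown in the linked configuration docs (the values here are illustrative):

logs:
  # where Atmos writes log output; standard file descriptors
  # like /dev/stderr and /dev/stdout are respected
  file: /dev/stderr
  # assumed levels include Trace, Debug, Info, Warning, Off
  level: Info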


2024-05-24

Andy Wortman avatar
Andy Wortman

Is it possible to define multiple aws providers in atmos yaml, to be used by a single component? I’m thinking of something like the below, but that obviously won’t work because of the duplicate aws: keys

terraform:
  providers:
    aws:
      region: us-west-2
      assume_role:
        role_arn: "role_1"
    aws:
      alias: "account_2"
      region: us-west-2
      assume_role:
        role_arn: "role_2"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is an interesting use-case which we did not consider before, but it can be implemented. Let us discuss it internally

Andy Wortman avatar
Andy Wortman

Well, interestingly enough, we figured out how to get this working. It turns out the terraform provider block can take a list:

terraform:
  providers:
    aws:
      - region: us-west-2
        assume_role:
          role_arn: "role-1"
      - region: us-west-2
        alias: "account-2"
        assume_role:
          role_arn: "role-2"
Andy Wortman avatar
Andy Wortman

Terraform has no issue with this, as long as the aliased provider is defined in the component. It overrides the config just fine.
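For completeness, a hedged sketch of the component side (the resource and names are purely illustrative): the aliased provider still has to be declared in the component’s Terraform code, and resources opt into it via the provider meta-argument.

# Declared in the component; Atmos supplies/overrides its configuration
# from the providers list in the stack manifest above
provider "aws" {
  alias = "account-2"
}

# Illustrative resource created through the aliased provider
resource "aws_s3_bucket" "replica" {
  provider = aws.account-2
  bucket   = "example-replica-bucket"
}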

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh nice, thanks for the info (we’ll update docs to describe this use-case)


2024-05-25

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We added a new <File/> component to the docs, so it’s easier to identify files from terminal output. https://atmos.tools/quick-start/add-custom-commands

Add Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Release notes from atmos avatar
Release notes from atmos
03:34:34 PM

v1.75.0 Improve atmos validate stacks and atmos describe affected commands @aknysh (#608)

Release v1.75.0 · cloudposse/atmos

what

Improve atmos validate stacks and atmos describe affected commands. Update docs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:42:22 PM

/github subscribe cloudposse/atmos releases

 avatar
04:42:22 PM

:white_check_mark: Subscribed to cloudposse/atmos. This channel will receive notifications for issues, pulls, commits, releases, deployments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:43:19 PM

/github subscribe list features

 avatar
04:43:19 PM

Subscribed to the following repository

https://github.com/cloudposse/atmos | cloudposse/atmos issues, pulls, commits, releases, deployments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:44:11 PM

/github unsubscribe cloudposse/atmos issues

 avatar
04:44:11 PM

This channel will receive notifications from cloudposse/atmos for: pulls, commits, releases, deployments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:44:29 PM

/github unsubscribe cloudposse/atmos pulls commits deployments

 avatar
04:44:29 PM

This channel will receive notifications from cloudposse/atmos for: releases

2024-05-26

RB avatar

Are there plans to document integrating atmos into argocd? I’d absolutely love the ability to auto sync stacks and manually sync stacks to apply terraform

RB avatar

Don’t get me wrong. The current drift detection and applying in the pr is handy.

As atmos gets closer to being the helm for terraform (sprig/gomplate, deep-merge yaml, component vendoring, etc), argocd seems like a natural fit.

What do you folks think?

RB avatar

Integration with cello would be nifty

https://cello-proj.github.io/cello/

RB avatar

Or crossplane, terranetes, weaveworks flux tf controller, etc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

IMO, atmos solves the problem of everything up to kubernetes, and possibly installing something like Crossplane and ArgoCD.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But beyond that, other tools are probably better suited.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

absolutely love the ability to auto sync stacks and manually sync stacks to apply terraform

This is the part I don’t get. I mean, I get why it’s nice when everything is as expected. In fact, this is trivial to do in atmos today with github actions. What’s non-trivial is everything we do to make sure you have controls and the ability to review plans before apply.

E.g. what’s non-trivial is ensuring that some destructive operation doesn’t happen, or that one change happens before another in a coordinated rollout.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Today, with about 5 minutes of effort, you can “auto sync stacks” to apply terraform, because YOLO!
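For illustration, a sketch of that “YOLO” auto-apply, assuming cloudposse/github-action-atmos-terraform-apply accepts the same inputs as the plan action shown earlier in this channel (verify against the action’s README before using it):

      - name: Apply Atmos Component
        uses: cloudposse/github-action-atmos-terraform-apply@v2
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}
          atmos-config-path: /home/runner/work/_temp/atmos-config
          atmos-version: 1.63.0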

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Conceptually, I’m for Crossplane/ACK/etc. But not for anything stateful, like buckets, queues, databases, etc. If you’re deploying an IAM role, it’s not a big deal to restore (but InfoSec might think so). If you’re changing a security group, whoops, just add the rule back. Again, not a big deal. But managing something like a database with a Custom Resource really gives me chills.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We went down this road before with in-cluster Prometheus/Grafana. It was super easy. It worked well. Up and running fast. Everything worked when it worked. And then, your cluster gets hosed. You need to check Grafana, but you can’t because it was evicted or something. So then people say, “well, if your cluster is hosed you have bigger problems”. Which is what I don’t buy. When your cluster is hosed, that’s exactly when you need Grafana and Prometheus to be working.

My point is, the Crossplanes of the world are like this. They are perfect when everything is perfect, and then there’s no breakglass when you need it. Similarly, deploying everything is a breeze with Crossplane until that new hire accidentally blows away something that was not ephemeral, because there was no inspection of the plan.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s also why Account Factories for Terraform is such a joke. I would be terrified of making changes when you cannot review them prior to rolling them out en masse across the enterprise organization. That requires 10x skill and precision.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So, all that is to say, I want it to work. I want atmos to improve that experience, e.g. with an Atmos controller. However, I have a mental block seeing how it could actually work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(oh, and for some reason, for the last 2 years, we’re seeing 3:1 more interest in ECS over EKS)

RB avatar

Thanks for the background on that. I can see what you mean.

But it seems like cello and others do allow for seeing the plan prior to syncing, provided you don’t set the stack to auto sync. Wouldn’t a manual sync with the tf plan shown in argocd be the best of both worlds?

RB avatar

Also very cool that ecs is getting more interest. I’ve always thought it’s less complex and more straightforward to use than eks, so devs can focus more on the apps than on esoteric annotations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have to look into cello. As for manual syncs on Argo, I don’t believe there would be a way to inspect the plan. Plus, as a developer, I would prefer to see that on the PR and not switch to another system.

RB avatar

Hmm, but if the terraform plan can be seen in argocd, a link could show up in the github pr to the plan in argocd, much like spacelift has a link to the plan. Wouldn’t that be the same as, or similar to, what a developer would currently experience with the atmos gh action?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ArgoCD wouldn’t do anything but submit the manifest containing the Custom Resource to Kubernetes, that would then be handled by an operator. At best, Argo could show what about the manifest is changing, but what terraform does is far removed from that.

RB avatar

Ah true. I thought maybe it was possible to see the plan in argocd but perhaps it’s not…

It does seem like some of the controllers allow for manual approval https://terranetes.appvia.io/terranetes-controller/developer/provision/#approving-a-plan but it’s unclear how that can be applied using argocd.

I guess gh gitops is easiest for now. Thanks for considering

2024-05-27

github3 avatar
github3
04:23:43 PM

Add Atmos manifest lists merge strategies @aknysh (#609)

what

• Add Atmos manifest lists merge strategies
• Update docs
• Settings section in CLI Configuration

why

• Allow using the following list merge strategies in Atmos stack manifests:

• `replace` - Most recent list imported wins (the default behavior).
• `append` - The sequence of lists is appended in the same order as imports.
• `merge` - The items in the destination list are deep-merged with the items in the source list. The items in the source list take precedence. The items are processed starting from the first up to the length of the source list (the remaining items are not processed). If the source and destination lists have the same length, all items in the destination lists are deep-merged with all items in the source list.

The list merge strategies are configured in the atmos.yaml CLI config file, in the settings.list_merge_strategy section

settings:
  # `list_merge_strategy` specifies how lists are merged in Atmos stack manifests.
  # Can also be set using 'ATMOS_SETTINGS_LIST_MERGE_STRATEGY' environment variable, or '--settings-list-merge-strategy' command-line argument
  # The following strategies are supported:
  # `replace`: Most recent list imported wins (the default behavior).
  # `append`:  The sequence of lists is appended in the same order as imports.
  # `merge`:   The items in the destination list are deep-merged with the items in the source list.
  #            The items in the source list take precedence.
  #            The items are processed starting from the first up to the length of the source list (the remaining items are not processed).
  #            If the source and destination lists have the same length, all items in the destination lists are
  #            deep-merged with all items in the source list.
  list_merge_strategy: replace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:tada: If you’ve ever wanted to merge lists in atmos, this release is for you.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please note, this is only implemented as a global default. It’s not possible to merge lists in some contexts and append or replace in others.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have set the default behavior to be exactly the same as before, which is to replace.
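A hedged sketch of how the three strategies play out, with illustrative file names and values, based strictly on the descriptions above:

# mixins/base.yaml (illustrative)
vars:
  subnets:
    - name: a
      cidr: 10.0.1.0/24
    - name: b
      cidr: 10.0.2.0/24

# stacks/dev.yaml (illustrative) imports mixins/base and sets:
vars:
  subnets:
    - cidr: 10.0.99.0/24

# replace (default): subnets == [{cidr: 10.0.99.0/24}]
# append:            subnets == the two base items followed by {cidr: 10.0.99.0/24}
# merge:             per the description above, item 0 is deep-merged
#                    ({name: a, cidr: 10.0.99.0/24}) and the remaining
#                    destination item ({name: b, cidr: 10.0.2.0/24}) is left as-is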

RB avatar

Oh very nice.

Does that mean we can enable this for inherits?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB it’s enabled globally for all stack manifest YAML, so inherits should work as well


2024-05-29

github3 avatar
github3
04:13:22 PM

Update atmos validate stacks command @aknysh (#611)

what

• Update atmos validate stacks command
• Improve stack validation error messages

why

• When checking for misconfiguration and duplication of components in stacks, throw errors only if the duplicate component configurations in the same stack are different (this will allow importing the base default/abstract components into many stack manifest files)
• The atmos validate stacks command checks the following:

• All YAML manifest files for YAML errors and inconsistencies
• All imports: if they are configured correctly, have valid data types, and point to existing manifest files
• Schema: if all sections in all YAML manifest files are correctly configured and have valid data types
• Misconfiguration and duplication of components in stacks. If the same Atmos component in the same Atmos stack is defined in more than one stack manifest file, and the component configurations are different, an error message will be displayed similar to the following:
        The Atmos component 'vpc' in the stack 'plat-ue2-dev' is defined in more than one
        top-level stack manifest file: orgs/acme/plat/dev/us-east-2-extras, orgs/acme/plat/dev/us-east-2.
        
        The component configurations in the stack manifests are different.
        
        To check and compare the component configurations in the stack manifests, run the following commands:
        - atmos describe component vpc -s orgs/acme/plat/dev/us-east-2-extras
        - atmos describe component vpc -s orgs/acme/plat/dev/us-east-2
        
        You can use the '--file' flag to write the results of the above commands to files
        (refer to <https://atmos.tools/cli/commands/describe/component>).
        
        You can then use the Linux 'diff' command to compare the files line by line and show the differences
        (refer to <https://man7.org/linux/man-pages/man1/diff.1.html>)
        
        When searching for the component 'vpc' in the stack 'plat-ue2-dev', Atmos can't decide which
        stack manifest file to use to get configuration for the component. This is a stack misconfiguration.
        
        Consider the following solutions to fix the issue:
        
          - Ensure that the same instance of the Atmos 'vpc' component in the stack 'plat-ue2-dev'
            is only defined once (in one YAML stack manifest file)
          
          - When defining multiple instances of the same component in the stack,
            ensure each has a unique name
          
          - Use multiple-inheritance to combine multiple configurations together
            (refer to <https://atmos.tools/core-concepts/components/inheritance>)   
        

notes

• This is an improvement of the previous release https://github.com/cloudposse/atmos/releases/tag/v1.75.0. The previous release introduced too-strict checking and rejected the case where the same component in the same stack was just imported into two or more stack manifest files (this type of configuration is acceptable since the component config is identical across the stack manifests: it’s just imported and not modified)
