#atmos (2024-05)

2024-05-01


Justin avatar

Hey all, my team is working through updating some of our atmos configuration, and we’re looking for some guidance on when to vendor. We’re considering adding some logic to our GitHub Actions that would pull components for affected stacks, allowing us to keep the code outside of the repository. Some wins here would be less to review on pull requests as we vendor new versions into different dev/staging/prod stages. However, is it a better play to vendor in as we develop and then commit the changes to the atmos repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great question! I think we could/should add some guidance to our docs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let me take a stab at that now, and present some options.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

First, make sure you’re familiar with the .gitattributes file, which lets you flag certain files/paths as auto-generated. That collapses them in your Pull Requests and reduces eye strain.
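
For example, a minimal sketch (the vendored path is illustrative; point it at wherever you vendor components):

    # Mark vendored components as generated so GitHub collapses them in PR diffs
    components/terraform/vendored/** linguist-generated=true

GitHub still shows the files, just collapsed by default, so reviewers expand only what they care about.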

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Option 1: By default, Cloud Posse (in our engagements and refarch) vendors everything into the repositories.

Pros

• an immutable record of components / less reliance on the remote repositories

• super easy to test changes without a “fork bomb” and the ensuing “PR storm” as you update multiple repos

• super easy to diverge when you want to

• super easy to detect changes and what’s affected

• much faster than cloning all dependencies

• super easy to “grep” (search) the repo to find where something is defined

• No need to dereference a bunch of URLs just to find where something is defined

• Easier for newcomers to understand what is going on

Cons

• Reviewing PRs containing tons of vendored files sucks

• …? I struggle to see them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Option 2: Vendoring components (or anything else, for that matter, which is supported by atmos) can be done “just in time”, more or less like terraform init does for providers and modules.
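
For example, a CI job could vendor right before planning. A rough sketch (the workflow context is illustrative):

    # GitHub Actions step (sketch)
    - name: Vendor components just in time
      run: |
        # Pull everything declared in vendor.yaml
        atmos vendor pull
        # ...or pull a single component:
        # atmos vendor pull -c vpc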

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Pros:

• only things that change or are different are in the local repo

• PRs don’t contain a bunch of duplicated files

• It’s more “DRY” (but I’d argue this is not really any more DRY than committing them; not in practice, because vendoring is completely automated)

Cons

• It’s slower to run, because everything must first be downloaded

• It’s not immutable. Remote refs can change, including tags

• Remote sources can go away, or suffer transient errors

• It’s harder to understand what something is doing, when you have to dereference dozens of URLs to look at the code

• Cannot just do a “code search” (grep) through the repo to see where something is defined

• In order to determine what is affected, you have to clone everything which is slower

• If you want to test out some change, you have to fork it and create a branch with your changes, then update your pinning

• If you want to diverge, you also have to fork it, or vendor it in locally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Option 3: Hybrid - might make sense in some circumstances. Vendor & commit 3rd-party dependencies you do not control, and for everything else permit remote dependencies and vendor JIT.

Justin avatar

I didn’t even give thought to someone making a cheeky ref update to a tag

Justin avatar

Alright, this is supremely helpful, thanks a ton for the analysis. I’ll bring these points back to my team for discussion.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great, would love to hear the feedback.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, not sure if this is just a coincidence, but check out this thread

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey Guys

I have a question if someone can help with the answer. I have a few modules developed in a separate repository, and I pull them down to the atmos repo dynamically while running the pipeline, using the vendor pull command. But when I bump up the version, atmos does not consider that a change to the component, and the atmos describe affected command gives me an empty response. Any idea what I’m missing here? Below is my code snippet, vendor.yaml.

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "account_scp"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/account_scp"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - account_scp
Justin avatar

I really like the point of this making the configuration set immutable as well. We’re locked in with exactly what we have committed to the repository.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A lot of our “best practices” are inspired by the tools we’ve cut our teeth on. In this case, our recommendation is inspired by ArgoCD (~Flux), at least the way we’ve implemented and used it.

Justin avatar

Yeah, I was working with vendoring different versions today into their own component paths (component/1.2.1, component/1.2.2), and it took me a moment to realize this was changing the workspace prefix in the S3 bucket where the state for the stack was being stored.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


it took me a moment to realize this was changing the workspace prefix in the S3 bucket where the state for the stack was being stored.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is configurable

Justin avatar

So now I am vendoring different versions to different environment paths to keep the component names the same as things are promoted. (1.2.1 => component/prod, 1.2.2 => component/dev)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a great way to support multiple concurrent versions, just make sure you configure the workspace key prefix in the atmos settings for the component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


So now I am vendoring different versions to different environment paths to keep the component names the same as things are promoted
This is another way. Think of them as release channels (e.g. alpha, beta, stable, etc.)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Either way, it’s great to fix the component path in the state bucket, so you don’t encounter problems if you reorganize how you store components on disk.

Justin avatar

So then my configs look sort of like,

components:
  terraform:
    egress_vpc/vpc:
      metadata:
        component: vendored/networking/vpc/dev
        inherits:
          - vpc/dev

    egress_vpc:
      metadata:
        component: vendored/networking/egress_vpc/dev
      settings:
        depends_on:
          1:
            component: egress_vpc/vpc
      vars:
        enabled: true
        vpc_configuration: egress_vpc/vpc
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) do we have docs on how to “pin” the component path in the state backend?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, I would personally invert the paths for release channels.

• vendored/dev/networking….

• vendored/prod/….

Since over time, some components might get deprecated and removed, or others might never progress past dev.

Justin avatar

Yeah, that’s a very good point and something we’re still working through.

We also want to get to the point where we have “release waves”… so, release a change to a “canary” group, and then roll it out to groupA, groupB, groupC, etc.

Justin avatar

And version numbers would honestly help with that a bit more. How could I pin the workspace key prefix for a client if I did have the component version in the path?

Justin avatar

Ahh, looks like towards the bottom of the page here: https://atmos.tools/quick-start/configure-terraform-backend/

Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I think our example could be better….

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
    # Atmos component `vpc`
    vpc:
      metadata:
        # Point to the Terraform component in `components/terraform/vpc/1.2.3`
        component: vpc/1.2.3
      # Define variables specific to this `vpc` component
      vars:
        name: vpc
        ipv4_primary_cidr_block: 10.10.0.0/18
      # Optional backend configuration for the component
      backend:
        s3:
          # by default, this is the relative path in `components/terraform`, so it would be `vpc/1.2.3`
          # here we fix it to `vpc`
          workspace_key_prefix: vpc
Justin avatar

I can have my cake and eat it too. Delicious.

Justin avatar

Yeah, just hugely beneficial, thank you so much for the help.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our pleasure!

Justin avatar

I have the remote-state stuff working that you guided me through the other week… many thanks there as well. I think that’s really going to help us level up our IaC game.


2024-05-02

Kubhera avatar
Kubhera

Hey Guys,

I have a use case where my component in atmos has just a terraform null_resource to execute a Python script based on a few triggers. However, is there any way I can still manage this similarly to a component, but not through terraform (null_resource)? Can I use something like the custom CLI commands that atmos supports to do this? Any input on this use case would be really appreciated.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this is exactly one of the use cases for custom subcommands. This way you can call it with atmos to make it feel more integrated and documented (e.g. with atmos help). You can then automate it with atmos workflows. Alternatively, you can skip the subcommand and only use a workflow. Up to you.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also access the stack configuration from custom commands

Kubhera avatar
Kubhera

any examples on how to access stack configuration through custom commands?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Instead of running echo, just run your Python script.
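
An untested sketch of what that could look like in atmos.yaml (the command name, component, script path, and region var are all illustrative):

    commands:
      - name: run-triggers
        description: "Run the Python script the null_resource used to wrap"
        flags:
          - name: stack
            shorthand: s
            description: "Name of the stack"
            required: true
        # Makes the component's stack configuration available to the steps
        component_config:
          component: account_scp
          stack: "{{ .Flags.stack }}"
        steps:
          # Pass values from the stack configuration to the script
          - python3 scripts/triggers.py --region {{ .ComponentConfig.vars.region }}

Then you invoke it like any other atmos command, e.g. atmos run-triggers -s <stack>, and can sequence it from an atmos workflow.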

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse) we need a task to add example command invocation for each example custom command. cc @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. the docs only show how to define it, not how to call it. Of course, it can be inferred, but “as a new user I want to quickly see how to run the command because it will help me connect the dots”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kubhera we’ll improve the docs. For an example of how to access stack configuration through custom commands, please see this custom command (as Erik mentioned above):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands.

2024-05-03

RB avatar

What do you folks think of allowing the s3 module or component to have an option to add a random-string suffix, to avoid the high cost of unauthorized, denied S3 requests?

Or is there a gomplate way of generating a random id and passing it in the YAML to attributes?

RB avatar

Or would gomplate, if a random function exists, generate a new random string upon each atmos run?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, you cannot effectively use the uuid function in gomplate for this, because it will cause permadrift.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Just to confirm, is the automatic checksum hash not an option you are considering, aesthetically? …because it solves exactly this problem with bucket name length, but will chop and checksum the tail end.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry, I didn’t read carefully. You are not asking about controlling the length.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Regarding the post you linked, we discussed this on office hours. @matt mentioned that Jeff Barr, in response to the post, said AWS is not going to charge for unauthorized requests and that they are making changes for that. To be clear, they refunded the person who made the unfortunate discovery, and are making more permanent fixes to prevent this in the future.

RB avatar

Oh, I didn’t realize that Jeff Barr responded with that. That’s great news. Thanks Erik. Then I suppose it’s a non-issue once they make changes for it.

RB avatar

Is there a doc available that points to Jeff Barr’s response? That may calm some nerves.

RB avatar
Jeff Barr :cloud: (@jeffbarr) on X

Thank you to everyone who brought this article to our attention. We agree that customers should not have to pay for unauthorized requests that they did not initiate. We’ll have more to share on exactly how we’ll help prevent these charges shortly.

#AWS #S3

How an empty S3…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB here’s the documentation on OpenTofu cc @Matt Gowie https://github.com/cloudposse/atmos/pull/594

#594 Document OpenTofu Support

what

• Document how to use OpenTofu together with Atmos

why

• OpenTofu is a stable alternative to HashiCorp Terraform

• Closes #542


2024-05-04

Kubhera avatar
Kubhera

Hi @Erik Osterman (Cloud Posse), I have an interesting use case: a stack with approximately 10 components, where all of these components depend on the output of a single component that is deployed first as part of my stack. What I’m doing right now is reading the output of that component using the remote-state feature of atmos. However, when I execute the workflow that runs all these components sequentially (even when only a single component has changed; this is the current design I came up with), it reads the state file of that component every single time, for each component, and that adds extra time to my pipeline execution. Imagine if I have to deploy 100 affected stacks. Is there any way to mimic something like a global variable in the stack file, and refer to it all over the stack wherever it is needed? Basically, what I’m looking for is: read the output of a component once per stack, and use it in all other dependent components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haha, @Kubhera my ears are ringing from the number of times this is coming up in questions.

TL;DR: we don’t support this today

Design-wise, we’ve really wanted to stick with vanilla Terraform as much as possible. Our argument is that the more we put into atmos that depends on Terraform behaviors, the more end-users are “vendor locked” into using it; also, we don’t want to re-invent “terraform” inside of Atmos and YAML. We want to use Terraform for what it’s good at, and atmos for where Terraform is lacking: configuration. However, that’s a self-imposed constraint, and it seems like one users are not so concerned about; it’s also a frequently requested feature, albeit for different use cases. This channel has many such requests.

So we’ve been discussing a way of doing that. It’s tricky because there are a dozen types of backends. We don’t want to reimplement the functionality to read the raw state in atmos, but instead want to make terraform output a datasource in atmos.

We’ve recently introduced datasources into atmos, so this will slot in nicely there. Also, take a look at the other data sources, and see if one will work for you.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here are the data sources we support today: https://docs.gomplate.ca/datasources/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you write your outputs to SSM, for example, atmos can read those.
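
E.g. something like this in atmos.yaml (a sketch; the datasource name and parameter path are illustrative):

    templates:
      settings:
        enabled: true
        gomplate:
          enabled: true
          datasources:
            # gomplate's aws+smp scheme reads from SSM Parameter Store
            vpc_outputs:
              url: "aws+smp:///platform/dev/vpc_id"

Then in a stack manifest you can template it in, e.g. vpc_id: '{{ (datasource "vpc_outputs").Value }}' (SSM parameters come back as objects, so you read .Value).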

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Alternatively, since you’re using a workflow, you can do this workaround.
file datasource | gomplate

Files can be read in any of the supported formats, including by piping through standard input (stdin). Directories are also supported.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So in one of your steps, save the terraform output as a json blob to a file.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then define a datasource to read that file. All the values in the file will then be available in the stack configuration.
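
Putting the two steps together, roughly (untested; the stack, component, and file names are illustrative):

    # Workflow: dump the shared component's outputs once, then apply dependents
    workflows:
      deploy:
        description: "Read shared outputs once, reuse them in every dependent component"
        steps:
          - type: shell
            command: atmos terraform output shared -s plat-ue2-dev -- -json > /tmp/shared-outputs.json
          - command: terraform apply component-a -s plat-ue2-dev
          - command: terraform apply component-b -s plat-ue2-dev

    # atmos.yaml: expose the file as a datasource
    templates:
      settings:
        enabled: true
        gomplate:
          enabled: true
          datasources:
            shared:
              url: "file:///tmp/shared-outputs.json"

Since terraform output -json wraps each output in an object, you’d reference a value like '{{ (datasource "shared").vpc_id.value }}' in the stack config.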

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As always, atmos is a swiss army knife, so there’s a way to do it. It’s just maybe not optimized yet for your use-case.

RB avatar

What about switching from terraform remote state to AWS data sources instead for each component?

This way we don’t have to depend on the output of another component to deploy a dependent component.

Kubhera avatar
Kubhera

It would really save me a lot of time; anybody’s help in this regard would be really appreciated.

Kubhera avatar
Kubhera

Thanks a ton in advance !!!

2024-05-05

RB avatar

If a component’s enabled flag is set to false, it should delete the existing component’s infra, but what if you did not want the component to be acted on at all? Would a new metadata.enabled flag be acceptable? This way it wouldn’t even create the workspace or run terraform commands; atmos would just exit early.
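
Purely hypothetical sketch of what that might look like in a stack manifest (this flag doesn’t exist today):

    components:
      terraform:
        vpc:
          metadata:
            enabled: false   # hypothetical: skip the component entirely (no workspace, no terraform runs)
          vars:
            enabled: true    # existing convention: drives create/destroy inside Terraform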

RB avatar

Thoughts on this?
