#atmos (2024-10)
2024-10-01
hey, I’m debugging some templates and I noticed that atmos is not updating the error correctly. Does it cache the template result? If so, is there a way to clear it?
@Andriy Knysh (Cloud Posse)
@Miguel Zablah Atmos caches the results of the atmos.Component
functions only for the same component and stack, and only for one command execution. If you run atmos terraform ...
again, it will not use the cached results anymore
Thanks, it looks like I had an error in another file, which is why I thought it might be a cache. Thanks!
I’m having an issue: I’m using atmos to auto-generate the backend and I’m getting Error: Backend configuration changed. I’m not quite sure why, as I don’t see Atmos generating the backend.
hmm, atmos terraform generate backend is not creating backend files. Any tips?
run atmos validate stacks to see if you have invalid yaml
all successful
and make sure the base_path is correct, and atmos cli can find the atmos.yaml
for backends to be auto-generated, you need to configure the following:
- In atmos.yaml:
components:
  terraform:
    # Can also be set using the `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var,
    # or the `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true
• Configure the backend in YAML https://atmos.tools/quick-start/advanced/configure-terraform-backend/#configure-terraform-s3-backend-with-atmos
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
you need to have this config
terraform:
backend_type: s3
backend:
s3:
acl: "bucket-owner-full-control"
encrypt: true
bucket: "your-s3-bucket-name"
dynamodb_table: "your-dynamodb-table-name"
key: "terraform.tfstate"
region: "your-aws-region"
role_arn: "arn:aws:iam::<your account ID>:role/<IAM Role with permissions to access the Terraform backend>"
in the defaults for the Org (e.g. in stacks/orgs/acme/_defaults.yaml
) if you have one backend for the entire Org
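With auto_generate_backend_file: true, Atmos writes a backend.tf.json into the component folder on the next terraform command. A rough sketch of what it generates (the values are the placeholders from the YAML above, and workspace_key_prefix is typically derived from the component name):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "your-s3-bucket-name",
        "dynamodb_table": "your-dynamodb-table-name",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "your-aws-region",
        "role_arn": "arn:aws:iam::<your account ID>:role/<IAM Role>",
        "workspace_key_prefix": "your-component"
      }
    }
  }
}
```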
What is strange is that it was working. This is the error:
atmos terraform generate backend wazuh -s dev --logs-level debug
template: all-atmos-sections35: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init -migrate-state". If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
try to remove the .terraform
folder and run it again
Same error with the .terraform folder removed
Ahh ran with log_level trace, and found the issue
what was the issue?
The component had references to vars in other components
removed .terraform in those other components
TY for trace
I should make a script to just clean-slate all those .terraform directories
Ya, we should have that as a built-in command.
@Dennis DeMarco for now, instead of a script add a custom command to atmos
Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
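For example, the clean-slate script Dennis described could be sketched as a custom command in atmos.yaml like this (the command name and the exact step are assumptions, not a built-in):

```yaml
commands:
  - name: clean
    description: "Clean-slate all .terraform directories under components/terraform"
    steps:
      - find components/terraform -type d -name '.terraform' -prune -exec rm -rf {} +
```

It would then show up in atmos help and run as atmos clean.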
2024-10-02
Hi! I’m having an issue with this workflow:
cloudposse/github-action-atmos-affected-stacks
where it will do a plan for all stacks even when I have some disabled stacks in the CI/CD, and since one of the components depends on another, it fails.
I get the same error when running this locally:
atmos describe affected --include-settings=false --verbose=true
is there a way to skip a stack or mark it as ignore?
this is the error:
template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference
it’s complaining about this concat I do:
'{{ concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets | default (list)) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets | default (list)) | toRawJson }}'
but this works when vpc is applied; since this stack is not being used at the moment, it fails
any idea how to fix this?
@Igor Rodionov
@Andriy Knysh (Cloud Posse)
@Miguel Zablah by “disabled component” do you mean you set enabled: false
for it?
Yes
i see the issue. We’ll review and update it in the next Atmos release
That will be awesome since this is a blocker for us now
@Andriy Knysh (Cloud Posse) any ETA on this?
Hi @Miguel Zablah We don’t have an ETA yet. But should be soon
we’ll try to do it in the next few days
thanks!!
2024-10-03
Good morning, I’m new here but this looks like a great community. I’ve been using various Cloud Posse modules for terraform for a while but am now trying to set up a new AWS account from scratch to learn the patterns for the higher-level setup. I’ve run into a problem and am hoping for some help. I feel like it’s probably just a setting somewhere but for the life of me I can’t find it.
So I have been working through the Cold Start and have gotten through the account setup successfully but running the account-map
commands is resulting in errors. I’ll walk through the steps I’ve tried in case my tweaks have confused the root issue… For reference, I am using all the latest versions of the various components mentioned and pulled them in again just before posting this.
- When I first ran the
atmos terraform deploy account-map -s core-gbl-root
command, I got an error that it was unable to find a stack file in the /stacks/orgs
folder. That was fine, as I wasn’t using that folder, but from the error message it was clear that it was using a default atmos.yaml
(this one) that includes orgs/**/*
in the include_paths
and not the one that I have been using on my machine. I’ve spent a long time trying to get it to use my local yaml and finally gave up and just added an empty file in the orgs
folder to get past that error. Then I get to a new error…
- Now if I run the plan for account-map, I get what looks like a correct full plan and then a new error at the end:
╷
│ Error:
│ Could not find the component 'account' in the stack 'core-gbl-root'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?
│
│   with module.accounts.data.utils_component_config.config[0],
│   on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
╵
exit status 1
If I run atmos validate component account -s core-gbl-root
I get successful validations, and the same when validating account-map.
I’ve tried deleting the .terraform
folders from both the accounts
and account-map
components and re-ran the applies but got the same thing.
I’ve run with both Debug and Trace logs and am not seeing anything that points to where this error may be coming from.
I’ve been at this for hours yesterday and a few more hours this morning and decided it was time to seek some help.
Thanks for any advice!
@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) @Dan Miller (Cloud Posse)
I think it could be related to the path of your atmos.yaml
file, since you deal with remote-state
and terraform-provider-utils
Terraform provider. Check this: https://atmos.tools/quick-start/simple/configure-cli/#config-file-location
I thought it might be that but it is clearly registering my atmos.yaml
file, as it is in the main directory of my repo, and I’ve found atmos to respond to settings there (such as the include/exclude paths and log levels), but the other terraform providers aren’t picking it up. (EDIT! Just saw the bit at the bottom… exploring now)
@Drew Fulton Terraform executes the provider code from the component directory (e.g. components/terraform/my-component
). We don’t want to put atmos.yaml
into each component directory, so we put it into one of the known places (as described in the doc) or we can use ENV vars, so both Atmos binary and the provider binary can find it
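A minimal sketch of the ENV-var approach, assuming /path/to/repo is the repo root (both the Atmos binary and the utils provider honor these variables):

```shell
# assumption: /path/to/repo is the directory containing atmos.yaml
export ATMOS_CLI_CONFIG_PATH=/path/to/repo  # where to find atmos.yaml
export ATMOS_BASE_PATH=/path/to/repo        # base path for stacks/ and components/
# now both `atmos` and the utils provider resolve the same config, e.g.:
# atmos terraform plan account-map -s core-gbl-root
```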
Got it! Is there a best practice for selecting where to put it to prevent duplication?
for Docker containers (geodesic
is one of them), we put it in rootfs/usr/local/etc/atmos/atmos.yaml
in the repo, and then in the Dockerfile we do
COPY rootfs/ /
so inside the container, it will be in usr/local/etc/atmos/atmos.yaml
, which is a known path for Atmos and the provider
hi all, I’m brand new to atmos, and I was quite mind-blown that I hadn’t discovered this tool earlier. While still finding my way through the documentation, I have a question about components and stacks.
If I, as an ops engineer, were to create and standardise my own components in a repository that stores all the standard “libraries” of components, is it advisable for the stacks and atmos.yaml to live in a separate repository?
Meaning a developer would only need to declare the various components inside a stacks folder in their own project’s repository, i.e. only write yaml files and not need to deal with or write terraform code.
During execution we would then have a github workflow that clones the core component repository into the project repository and completes the infra-related deployment. Is that something that’s supported?
Yes, centrally defined components is best for larger organizations that might have multiple infrastructure repositories
However, it can also be burdensome when iterating quickly to have to push all changes via an upstream git component registry
yeah, thanks for the input. We’re not a large organization, so there will be some trade-offs we’ll have to consider if we try to limit the devs to only working with yaml. Still in the early stages of exploring the capabilities of atmos.
2024-10-04
atlantis integration question, I have this config in atmos.yaml
project_templates:
project-1:
# generate a project entry for each component in every stack
name: "{tenant}-{stage}-{environment}-{component}"
workspace: "{workspace}"
dir: "./components/terraform/{component}"
terraform_version: v1.6.3
delete_source_branch_on_merge: true
plan_requirements: [undiverged]
apply_requirements: [mergeable,undiverged]
autoplan:
enabled: true
when_modified:
- '**/*.tf'
- "varfiles/{tenant}-{stage}-{environment}-{component}.tfvars.json"
- "backends/{tenant}-{stage}-{environment}-{component}.tf"
the plan_requirements field doesn’t seem to have any effect on the generated atlantis.yaml
there are a lot of recently added options that atmos might not recognize
although
plan_requirements: [undiverged]
apply_requirements: [mergeable,undiverged]
I think they can only be declared in the server side repo config repo.yaml
mmm no, they can can be declared in the atlantis.yaml
FYI, it is not recommended to allow users to override workflows, so it is much safer to configure workflows and repo options in the server-side config and keep the atlantis.yaml
as simple as possible
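A sketch of what that server-side config could look like (the repo id is hypothetical); plan_requirements, apply_requirements and allowed_overrides are supported keys in Atlantis’ server-side repos.yaml:

```yaml
# repos.yaml on the Atlantis server
repos:
  - id: github.com/your-org/*
    plan_requirements: [undiverged]
    apply_requirements: [mergeable, undiverged]
    # empty list: nothing may be overridden from the repo-level atlantis.yaml
    allowed_overrides: []
```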
Yeah, it makes sense to have these values in the server-side repo config… I was trying some test scenarios and encountered the issue. A related question: for the ‘when_modified’ field in atlantis.yaml, I was trying to find some documentation about how it determines the modified files. Is it determined just by whether the file is modified in the PR?
files modified in the PR, yes
it’s a regex used for autoplan
@jose.amengual another issue I am running into with atlantis. I have a question about the ‘when_modified’ field in the repo-level config: we are using dynamic repo config generation and, in the same way, generating the var files for the project. The var files are not committed to the git repo, and since they’re not committed I believe when_modified causes an issue by not planning the project. Removing the when_modified field from the config doesn’t seem to help because of the default values. Do we have a way to ignore this field and just plan the projects, regardless of the modified files in the project?
2024-10-05
trying to create a super simple example of atmos + cloudposse/modules to see if they will work for our needs. I’m using the s3-bucket component, but when I do a plan on it, it prints out the terraform plan and then shows
│ Error: failed to find a match for the import '/opt/test/components/terraform/s3-bucket/stacks/orgs/**/*.yaml' ('/opt/test/components/terraform/s3-bucket/stacks/orgs' + '**/*.yaml')
I can’t make heads or tails of this error: there is no stacks/orgs
under the s3-bucket module when I pulled it in with atmos vendor pull.
Thanks in advance
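Not a confirmed fix, but the error path ('/opt/test/components/terraform/s3-bucket/stacks/orgs/...') suggests the stack include paths are being resolved relative to the component directory, i.e. the base path isn’t being picked up from there. One hedged guess, assuming /opt/test is the repo root:

```yaml
# atmos.yaml (assumption: /opt/test is the repo root containing stacks/ and components/)
base_path: "/opt/test"
# or leave base_path empty and export ATMOS_BASE_PATH=/opt/test instead
```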
please DM me your config, i’ll review it
2024-10-06
Continuing from my understanding of the design pattern, based on this screenshot, can I ask a few questions:
- The infrastructure repository holds all the atmos components, stacks, modules ?
- The application repository only needs to provide the taskdef.json for deployment into ECS?
- If there is additional infrastructure the application needs, for example s3, dynamodb, etc., is the approach to have the developer open a PR to the infrastructure repository first with the necessary “stack” information, prior to performing any application-type deployment?
Yes, they would open a PR and add the necessary components to the stack configuration.
…and prior to performing any application deployment that depends on it.
Cool, I think I’m beginning to get a clearer understanding of the Atmos mindset… thanks!!
2024-10-07
2024-10-08
Hey, I’m trying to execute a plan and I’m getting the following output:
% atmos terraform plan keycloak_sg -s deploy/dev/us-east-1
Variables for the component 'keycloak_sg' in the stack 'deploy/dev/us-east-1':
aws_account_profile: [redacted]
cloud_provider: aws
environment: dev
region: us-east-1
team: [redacted]
tfstate_bucket: [redacted]
vpc_cidr_blocks:
- 172.80.0.0/16
- 172.81.0.0/16
vpc_id: [redacted]
Writing the variables to file:
components/terraform/sg/-keycloak_sg.terraform.tfvars.json
Using ENV vars:
TF_IN_AUTOMATION=true
Executing command:
/opt/homebrew/bin/tofu init -reconfigure
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.70.0
OpenTofu has been successfully initialized!
Command info:
Terraform binary: tofu
Terraform command: plan
Arguments and flags: []
Component: keycloak_sg
Terraform component: sg
Stack: deploy/dev/us-east-1
Working dir: components/terraform/sg
Executing command:
/opt/homebrew/bin/tofu workspace select -keycloak_sg
Usage: tofu [global options] workspace select NAME
Select a different OpenTofu workspace.
Options:
-or-create=false Create the OpenTofu workspace if it doesn't exist.
-var 'foo=bar' Set a value for one of the input variables in the root
module of the configuration. Use this option more than
once to set more than one variable.
-var-file=filename Load variable values from the given file, in addition
to the default files terraform.tfvars and *.auto.tfvars.
Use this option more than once to include more than one
variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg
Executing command:
/opt/homebrew/bin/tofu workspace new -keycloak_sg
Usage: tofu [global options] workspace new [OPTIONS] NAME
Create a new OpenTofu workspace.
Options:
-lock=false Don't hold a state lock during the operation. This is
dangerous if others might concurrently run commands
against the same workspace.
-lock-timeout=0s Duration to retry a state lock.
-state=path Copy an existing state file into the new workspace.
-var 'foo=bar' Set a value for one of the input variables in the root
module of the configuration. Use this option more than
once to set more than one variable.
-var-file=filename Load variable values from the given file, in addition
to the default files terraform.tfvars and *.auto.tfvars.
Use this option more than once to include more than one
variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg
exit status 1
goroutine 1 [running]:
runtime/debug.Stack()
runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x105c70460, 0x14000b306e0})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x105c70460, 0x14000b306e0})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
github.com/cloudposse/atmos/cmd.init.func17(0x10750ef60, {0x14000853480, 0x4, 0x4})
github.com/cloudposse/atmos/cmd/terraform.go:33 +0x150
github.com/spf13/cobra.(*Command).execute(0x10750ef60, {0x14000853480, 0x4, 0x4})
github.com/spf13/[email protected]/command.go:989 +0x81c
github.com/spf13/cobra.(*Command).ExecuteC(0x10750ec80)
github.com/spf13/[email protected]/command.go:1117 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
github.com/cloudposse/atmos/cmd/root.go:88 +0x214
main.main()
github.com/cloudposse/atmos/main.go:9 +0x1c
I’m not really sure what I can do since the error message suggests an underlying tofu/terraform error and not an atmos one. I bet my stack/component has something wrong but I’m not entirely sure why. The atmos.yaml is the same of the previous message I sent here yesterday. I’d appreciate any pointers
I just now tried something that seemed to work: since I’m using the “{stage}” name pattern inside atmos.yaml, I set “vars.stage: dev” inside my dev.yaml stack file and it seemed to do the trick. Is this the correct pattern? Thanks!
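For reference, a minimal sketch of that fix. The workspace name came out as -keycloak_sg, which suggests the {stage} context variable in the name pattern was empty until it was set in the stack manifest (the file path below is an assumption):

```yaml
# stacks/deploy/dev/dev.yaml (hypothetical path)
vars:
  stage: dev
```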
@Andriy Knysh (Cloud Posse)
2024-10-09
Hey, on a setup like this:
import:
- catalog/keycloak/defaults
components:
terraform:
keycloak_route53_zones:
vars:
zones:
"[redacted]":
comment: "zone made for the keycloak sso"
keycloak_acm:
vars:
domain_name: [redacted]
zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id }}'
My keycloak_acm component is failing to actually get the output of the one above it. Am I doing this fundamentally wrong? The defaults.yaml being imported looks like this:
components:
terraform:
keycloak_route53_zones:
backend:
s3:
workspace_key_prefix: keycloak-route53-zones
metadata:
component: route53-zones
keycloak_acm:
backend:
s3:
workspace_key_prefix: keycloak-acm
metadata:
component: acm
depends_on:
- keycloak_route53_zones
Can you confirm that you see a valid zone_id
as a terraform output of keycloak_route53_zones
and please make sure the component is provisioned
atmos.Component calls terraform output, so it must be in the state already
atmos terraform output keycloak_route53_zones -s aws-dev-us-east-1
gives me
zone_id = {
"redacted" = "redacted"
}
which is redacted but it is the zone subdomain as key and the id as value
aha, so it’s returning a map for the zone id
ah!
so I guess it should be
zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id.value }}'
.value is not correct here
you should get the value by the map key using Go templates
{{ index .YourMap "yourKey" }}
zone_id: '{{ index (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id "<key>" }}'
where the key is the "redacted" in your example
It didn’t seem to work, and for this case I just passed the zone id directly (the actual value from the output). The error was that terraform complained it should be less than 32 characters, which means terraform treated my go template as a literal string and not a template. I ran it with atmos terraform apply component -s stack
after a successful plan creation
I passed the value directly because I don’t think the zone id will change in the future, but for other components this might be an issue for me. Maybe I should run it differently? I set the value exactly as suggested by Andriy
please send me your yaml config, i’ll take a look
Sure!
2024-10-10
Hello, me again……. component settings are arbitrary keys?
settings:
pepe:
does-not-like-sushi: true
can I do that?
yes
settings is a free form map
it’s the same as vars, but free-form; it participates in all the inheritance, meaning you can define it globally, per stack, per base component, and per component, and then everything gets deep-merged into the final map
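A small sketch of that deep-merge (the keys are made up): a globally defined settings map and a per-component one combine into the final map.

```yaml
# global (e.g. an org-level _defaults.yaml; the path is hypothetical)
settings:
  pepe:
    does-not-like-sushi: true

# per-component, in a stack manifest; deep-merged over the global settings
components:
  terraform:
    my-component:
      settings:
        pepe:
          likes-tacos: true
# final settings for my-component:
#   pepe: { does-not-like-sushi: true, likes-tacos: true }
```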
Is there a way to inherit metadata for all components without having to create an abstract
component? Something that all components should have
imagine if this were added to something like atmos.yaml and CODEOWNERS only allowed very few people to modify it
like having Sec rules for all components
no, metadata is not inherited, it’s per component
ok
because in metadata you specify all the base components for your component
Hi :wave: calling for help with configuring the gcs backend. I’m bootstrapping a GCP organization. I have a module that created a seed project with the initial bits, including a gcs bucket that I would like to use for storing tfstate files. I ran atmos configured with a local tf backend for the init.
Now I’d like to move my backend from local to the bucket and go on from there. I’ve added the bucket configuration to _defaults.yaml
for my org:
backend_type: gcs
backend:
gcs:
bucket: "bucket_name"
Unfortunately atmos says that this bucket doesn’t exist, even though I have copied a test file into the bucket
╷
│ Error: Error inspecting states in the "local" backend:
│ querying Cloud Storage failed: storage: bucket doesn't exist
Note that atmos never uses this backend. We generate a backend.tf.json file used by terraform
Could it be a GCP permissions issue
I am an organisation admin, have full access to the bucket, and besides, I’ve tested access to the bucket itself with the cli.
Based on your screenshot the YAML is invalid.
the hint is it’s trying to use a “local” backend type.
Configure Terraform Backends.
terraform:
backend_type: gcs
backend:
gcs:
bucket: "tf-state"
Note in your YAML above, the whitespace is off
Terraform is attempting to use this: https://developer.hashicorp.com/terraform/language/backend/local
which indicates that the backend type is not getting set
Terraform can store the state remotely, making it easier to version and work with in a team.
Thanks Erik. I’ve double checked again everything.
I’ve tried to delete the backend.tf.json
file, disable auto_generate_backend_file,
and after that added the backend configuration directly into the module - same result.
That got me thinking something is not right with auth into GCP. Re-logged again with gcloud auth application-default login
and now the state is migrated into the bucket. Honestly no idea; I’ve logged in multiple times today already, even before posting here
maybe this is the kind of tax for moving the whole workflow to atmos
can I vendor all the components using vendor.yaml? or do I have to list every component that I want to vendor?
@jose.amengual from where do you want to vendor all the components?
from my iac repo
to another repo ( using atmos)
let me check
I did this :
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
name: iac-vendoring
description: Atmos vendoring manifest for Atmos-iac repo
spec:
# Import other vendor manifests, if necessary
imports: []
sources:
- source: "github.com/jamengual/pepe-iac.git/"
#version: "main"
targets:
- "./"
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/sandbox/**"
I was able to vendor components just fine
but not sandbox stack files for some reason
you need to add another item for the stacks folder separately
that’s how the glob lib that we are using works
we don’t like it and will revisit it, but currently that’s what it is
you mean something like :
sources:
- source: "github.com/jamengua/pepe-iac.git/"
#version: "main"
targets:
- "./"
included_paths:
- "**/components/**"
- "**/*.md"
- source: "github.com/jamengual/pepe-iac.git/"
#version: "main"
targets:
- "./stacks/sandbox/"
included_paths:
- "stacks/sandbox/*.yaml"
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
- "**/stacks/sandbox/**"
try this ^
so this pulled all the stacks
- "**/stacks/**"
but it looks like I can’t pull a specific subfolder
hmm, so this "**/stacks/sandbox/**"
does not work?
no
i guess you can use
- "**/stacks/**"
- "**/stacks/sandbox/**"
and exclude the other stacks in excluded_paths:
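i.e. something like this (the excluded folder names are placeholders for your other stacks):

```yaml
sources:
  - source: "github.com/jamengual/pepe-iac.git/"
    targets:
      - "./"
    included_paths:
      - "**/stacks/**"
    excluded_paths:
      # assumption: sibling stack folders to skip
      - "**/stacks/production/**"
      - "**/stacks/staging/**"
```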
I could do that….but it makes it not very DRY
can you verbose the pull command?
atmos vendor pull --verbose
atmos vendor pull --verbose
Error: unknown flag: --verbose
Usage:
atmos vendor pull [flags]
Flags:
-c, --component string Only vendor the specified component: atmos vendor pull --component <component>
--dry-run atmos vendor pull --component <component> --dry-run
-h, --help help for pull
-s, --stack string Only vendor the specified stack: atmos vendor pull --stack <stack>
--tags string Only vendor the components that have the specified tags: atmos vendor pull --tags=dev,test
-t, --type string atmos vendor pull --component <component> --type=terraform|helmfile (default "terraform")
Global Flags:
--logs-file string The file to write Atmos logs to. Logs can be written to any file or any standard file descriptor, including '/dev/stdout', '/dev/stderr' and '/dev/null' (default "/dev/stdout")
--logs-level string Logs level. Supported log levels are Trace, Debug, Info, Warning, Off. If the log level is set to Off, Atmos will not log any messages (default "Info")
--redirect-stderr string File descriptor to redirect 'stderr' to. Errors can be redirected to any file or any standard file descriptor (including '/dev/null'): atmos <command> --redirect-stderr /dev/stdout
unknown flag: --verbose
goroutine 1 [running]:
runtime/debug.Stack()
runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x10524c0c0, 0x140003edf10})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x10524c0c0, 0x140003edf10})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
main.main()
github.com/cloudposse/atmos/main.go:11 +0x24
Atmos 1.88.1 on darwin/arm64
oh sorry, try
ATMOS_LOGS_LEVEL=Trace atmos vendor pull
@jose.amengual also check out this example for using YAML anchors in vendor.yaml to DRY it up
https://github.com/cloudposse/atmos/blob/main/examples/demo-component-versions/vendor.yaml#L10-L20
- &library
source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
version: "main"
targets:
- "components/terraform/{{ .Component }}/{{.Version}}"
included_paths:
- "**/*.tf"
- "**/*.tfvars"
- "**/*.md"
tags:
- demo
could you pass an ENV variable as a value inside of the yaml?
like
excluded_paths:
- "**/production/**"
- "${EXCLUDE_PATHS}"
- "${EXCLUDE_PATH_2}"
- "${EXCLUDE_PATH_3}"
currently not, env vars in vendor.yaml are not supported
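One possible workaround (an assumption, not an Atmos feature): keep the file as a template and render the env vars yourself before running atmos vendor pull, e.g. with sed:

```shell
# create a minimal template; in a real repo this would be checked in as vendor.yaml.tmpl
cat > vendor.yaml.tmpl <<'EOF'
excluded_paths:
  - "**/production/**"
  - "${EXCLUDE_PATHS}"
EOF

EXCLUDE_PATHS='**/staging/**'
# substitute the placeholder and write the real vendor.yaml
sed "s|\${EXCLUDE_PATHS}|${EXCLUDE_PATHS}|" vendor.yaml.tmpl > vendor.yaml
cat vendor.yaml
# then: atmos vendor pull
```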
ok, Thanks guys
2024-10-11
what’s the best way to distinguish between custom components and vendored components from cloudposse?
I was thinking
Option 1) a unique namespace via a separate dir
e.g.
# cloudposse components
components/terraform
# internal components
components/terraform/internal
Option 2) a unique namespace via a prefix
# upstream component
components/terraform/ecr
# internal component
components/terraform/internal-ecr
Option 3) Unique key in component.yaml
and enforce this file in all components
Option 4) Vendor internal components from an internal repo
This way the source
will contain the other org
instead of cloudposse
so that can be used to distinguish
We’re actually looking into something similar to this right now related to our refarch stack configs
For stack configs we’ve settled on stacks/vendor/cloudposse
For components, I would maybe suggest
components/vendor/cloudposse
The alternative is a top-level folder like vendor/cloudposse
which could contain stacks and components.
That’s for stack configs; what about terraform components? Or do you think it would be for both stack configs and terraform components?
What our team does is this:
Vendored: components/terraform/vendor/{provider, e.g. cloudposse}
Internal: components/terraform/{cloudProvider, e.g. aws}/{componentName, e.g. vpc}
That’s for stack configs. What about Terraform components?
https://sweetops.slack.com/archives/C031919U8A0/p1728683594309549?thread_ts=1728675666.706109&cid=C031919U8A0
https://sweetops.slack.com/archives/C031919U8A0/p1728683609704999?thread_ts=1728675666.706109&cid=C031919U8A0
For components, I would maybe suggest
components/vendor/cloudposse
The alternative is a top-level folder like vendor/cloudposse
@burnzy is this working for you? https://github.com/cloudposse/terraform-yaml-stack-config/pull/95 @Jeremy G (Cloud Posse) is looking into something similar and we’ll likely get this merged. Sorry it fell through the cracks.
what
Simple change to add support for GCS backends
why
Allows GCP users (users with gcs backends) to make use of this remote-state module for sharing data between components.
references
• https://developer.hashicorp.com/terraform/language/settings/backends/gcs
• https://atmos.tools/core-concepts/share-data/#using-remote-state
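For context, the change lets the remote-state module read components whose Atmos backend config looks roughly like this (bucket and prefix values are placeholders):

```yaml
terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "your-tfstate-bucket"
      prefix: "terraform/state"
```

This mirrors the existing `s3` backend configuration shape, just with the GCS backend's `bucket` and `prefix` settings.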
is it possible to vendor pull from a different repo?
I have my vendor.yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
name: iac-vendoring
description: Atmos vendoring manifest for Atmos-iac repo
spec:
imports: []
sources:
- source: "https://x-access-token:${{ secrets.TOKEN }}@github.com/PEPE/pepe-iac.git"
#version: "main"
targets:
- "./"
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
excluded_paths:
- "**/production/**"
I tried git://, then tried SSH, and this is running from a reusable action
I tried this locally and it works, but locally I have an SSH config
if I run git clone with that URL it clones just fine
I’m hitting this error on the go-git library now
// https://github.com/go-git/go-git/blob/master/worktree.go
func (w *Worktree) getModuleStatus() (Status, error) {
// ...
if w.r.ModulesPath == "" {
return nil, ErrModuleNotInitialized
}
if !filepath.IsAbs(w.r.ModulesPath) {
return nil, errors.New("relative paths require a module with a pwd")
}
// ...
}
```
package git

import (
"context"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"runtime"
"strings"
"github.com/go-git/go-billy/v5"
"github.com/go-git/go-billy/v5/util"
"github.com/go-git/go-git/v5/config"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/go-git/go-git/v5/plumbing/format/index"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/merkletrie"
"github.com/go-git/go-git/v5/utils/sync"
)
var (
ErrWorktreeNotClean = errors.New("worktree is not clean")
ErrSubmoduleNotFound = errors.New("submodule not found")
ErrUnstagedChanges = errors.New("worktree contains unstaged changes")
ErrGitModulesSymlink = errors.New(gitmodulesFile + " is a symlink")
ErrNonFastForwardUpdate = errors.New("non-fast-forward update")
ErrRestoreWorktreeOnlyNotSupported = errors.New("worktree only is not supported")
)
// Worktree represents a git worktree.
type Worktree struct {
// Filesystem underlying filesystem.
Filesystem billy.Filesystem
// External excludes not found in the repository .gitignore
Excludes []gitignore.Pattern

r *Repository
}
// Pull incorporates changes from a remote repository into the current branch.
// Returns nil if the operation is successful, NoErrAlreadyUpToDate if there are
// no changes to be fetched, or an error.
//
// Pull only supports merges where the can be resolved as a fast-forward.
func (w *Worktree) Pull(o *PullOptions) error {
return w.PullContext(context.Background(), o)
}
// PullContext incorporates changes from a remote repository into the current
// branch. Returns nil if the operation is successful, NoErrAlreadyUpToDate if
// there are no changes to be fetched, or an error.
//
// Pull only supports merges where the can be resolved as a fast-forward.
//
// The provided Context must be non-nil. If the context expires before the
// operation is complete, an error is returned. The context only affects the
// transport operations.
func (w *Worktree) PullContext(ctx context.Context, o *PullOptions) error {
if err := o.Validate(); err != nil {
return err
}
remote, err := w.r.Remote(o.RemoteName)
if err != nil {
return err
}
fetchHead, err := remote.fetch(ctx, &FetchOptions{
RemoteName: o.RemoteName,
RemoteURL: o.RemoteURL,
Depth: o.Depth,
Auth: o.Auth,
Progress: o.Progress,
Force: o.Force,
InsecureSkipTLS: o.InsecureSkipTLS,
CABundle: o.CABundle,
ProxyOptions: o.ProxyOptions,
})
updated := true
if err == NoErrAlreadyUpToDate {
updated = false
} else if err != nil {
return err
}
ref, err := storer.ResolveReference(fetchHead, o.ReferenceName)
if err != nil {
return err
}
head, err := w.r.Head()
if err == nil {
// if we don't have a shallows list, just ignore it
shallowList, _ := w.r.Storer.Shallow()
var earliestShallow *plumbing.Hash
if len(shallowList) > 0 {
earliestShallow = &shallowList[0]
}
headAheadOfRef, err := isFastForward(w.r.Storer, ref.Hash(), head.Hash(), earliestShallow)
if err != nil {
return err
}
if !updated && headAheadOfRef {
return NoErrAlreadyUpToDate
}
ff, err := isFastForward(w.r.Storer, head.Hash(), ref.Hash(), earliestShallow)
if err != nil {
return err
}
if !ff {
return ErrNonFastForwardUpdate
}
}
if err != nil && err != plumbing.ErrReferenceNotFound {
return err
}
if err := w.updateHEAD(ref.Hash()); err != nil {
return err
}
if err := w.Reset(&ResetOptions{
Mode: MergeReset,
Commit: ref.Hash(),
}); err != nil {
return err
}
if o.RecurseSubmodules != NoRecurseSubmodules {
return w.updateSubmodules(&SubmoduleUpdateOptions{
RecurseSubmodules: o.RecurseSubmodules,
Auth: o.Auth,
})
}
return nil
}
func (w *Worktree) updateSubmodules(o *SubmoduleUpdateOptions) error {
s, err := w.Submodules()
if err != nil {
return err
}
o.Init = true
return s.Update(o)
}
// Checkout switch branches or restore working tree files.
func (w *Worktree) Checkout(opts *CheckoutOptions) error {
if err := opts.Validate(); err != nil {
return err
}
if opts.Create {
if err := w.createBranch(opts); err != nil {
return err
}
}
c, err := w.getCommitFromCheckoutOptions(opts)
if err != nil {
return err
}
ro := &ResetOptions{Commit: c, Mode: MergeReset}
if opts.Force {
ro.Mode = HardReset
} else if opts.Keep {
ro.Mode = SoftReset
}
if !opts.Hash.IsZero() && !opts.Create {
err = w.setHEADToCommit(opts.Hash)
} else {
err = w.setHEADToBranch(opts.Branch, c)
}
if err != nil {
return err
}
if len(opts.SparseCheckoutDirectories) > 0 {
return w.ResetSparsely(ro, opts.SparseCheckoutDirectories)
}
return w.Reset(ro)
}
func (w *Worktree) createBranch(opts *CheckoutOptions) error {
if err := opts.Branch.Validate(); err != nil {
return err
}
_, err := w.r.Storer.Reference(opts.Branch)
if err == nil {
return fmt.Errorf("a branch named %q already exists", opts.Branch)
}
if err != plumbing.ErrReferenceNotFound {
return err
}
if opts.Hash.IsZero() {
ref, err := w.r.Head()
if err != nil {
return err
}
opts.Hash = ref.Hash()
}
return w.r.Storer.SetReference(
plumbing.NewHashReference(opts.Branch, opts.Hash),
)
}
func (w *Worktree) getCommitFromCheckoutOptions(opts *CheckoutOptions) (plumbing.Hash, error) {
hash := opts.Hash
if hash.IsZero() {
b, err := w.r.Reference(opts.Branch, true)
if err != nil {
return plumbing.ZeroHash, err
}
hash = b.Hash()
}
o, err := w.r.Object(plumbing.AnyObject, hash)
if err != nil {
return plumbing.ZeroHash, err
}
switch o := o.(type) {
case *object.Tag:
if o.TargetType != plumbing.CommitObject {
return plumbing.ZeroHash, fmt.Errorf("%w: tag target %q", object.ErrUnsupportedObject, o.TargetType)
}
return o.Target, nil
case *object.Commit:
return o.Hash, nil
}
return plumbing.ZeroHash, fmt.Errorf("%w: %q", object.ErrUnsupportedObject, o.Type())
}
func (w *Worktree) setHEADToCommit(commit plumbing.Hash) error {
head := plumbing.NewHashReference(plumbing.HEAD, commit)
return w.r.Storer.SetReference(head)
}
func (w *Worktree) setHEADToBranch(branch plumbing.ReferenceName, commit plumbing.Hash) error {
target, err := w.r.Storer.Reference(branch)
if err != nil {
return err
}
var head *plumbing.Reference
if target.Name().IsBranch() {
head = plumbing.NewSymbolicReference(plumbing.HEAD, target.Name())
} else {
head = plumbing.NewHashReference(plumbing.HEAD, commit)
}
return w.r.Storer.SetReference(head)
}
func (w *Worktree) ResetSparsely(opts *ResetOptions, dirs []string) error {
if err := opts.Validate(w.r); err != nil {
return err
}
if opts.Mode == MergeReset {
unstaged, err := w.containsUnstagedChanges()
if err != nil {
return err
}
if unstaged {
return ErrUnstagedChanges
}
}
if err := w.setHEADCommit(opts.Commit); err != nil {
return err
}
if opts.Mode == SoftReset {
return nil
}
t, err := w.r.getTreeFromCommitHash(opts.Commit)
if err != nil {
return err
}
if opts.Mode == MixedReset || opts.Mode == MergeReset || opts.Mode =…
```
I don’t know if this is because vendor sources over HTTP are not compatible with git?
2024-10-12
Improve error stack trace. Add --stack
flag to atmos describe affected
command. Improve atmos.Component
template function @aknysh (#714)
what
• Improve error stack trace
• Add --stack
flag to atmos describe affected
command
• Improve atmos.Component
template function
why
• On any error in the CLI, print the Go stack trace only when the Atmos log level is Trace - improves the user experience
• The --stack
flag in the atmos describe affected
command allows filtering the results by the specific stack only:
atmos describe affected --stack plat-ue2-prod
Affected components and stacks:
[
  {
    "component": "vpc",
    "component_type": "terraform",
    "component_path": "examples/quick-start-advanced/components/terraform/vpc",
    "stack": "plat-ue2-prod",
    "stack_slug": "plat-ue2-prod-vpc",
    "affected": "stack.vars"
  }
]
• In the atmos.Component
template function, don’t execute terraform output
on disabled and abstract components. The disabled components (when enabled: false
) don’t produce any terraform outputs. The abstract components are not meant to be provisioned (they are just blueprints for other components with default values), and they don’t have any outputs.
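For reference, the atmos.Component template function is typically called in stack manifests along these lines (component, stack, and output names here are illustrative):

```yaml
components:
  terraform:
    my-app:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
```

With this change, if `vpc` were disabled or abstract in the target stack, Atmos would skip running `terraform output` for it rather than failing.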
Summary by CodeRabbit
Release Notes
• New Features
• Added a --stack
flag to the atmos describe affected
command for filtering results by stack.
• Enhanced error handling across various commands to include configuration context in error logs.
• Documentation
• Updated documentation for the atmos describe affected
command to reflect the new --stack
flag.
• Revised “Atlantis Integration” documentation to highlight support for Terraform Pull Request Automation.
• Dependency Updates
• Upgraded several dependencies, including Atmos version from 1.88.0
to 1.89.0
and Terraform version from 1.9.5
to 1.9.7
.
Correct outdated ‘myapp’ references in simple tutorial @jasonwashburn (#707)
what
Corrects several (presumably outdated) references to a ‘myapp’ component rather than the correct ‘station’ component in the simple tutorial.
Also corrects the provided example repository hyperlink to refer to the correct weather example ‘quick-start-simple’ used in the tutorial rather than ‘demo-stacks’
why
Appears that the ‘myapp’ references were likely just missed during a refactor of the simple tutorial. Fixing them alleviates confusion/friction for new users following the tutorial. Attempting to use the examples/references as-is results in various errors as there is no ‘myapp’ component defined.
references
Also closes #664
Summary by CodeRabbit
• New Features
• Renamed the component from myapp
to station
in the configuration.
• Updated provisioning commands in documentation to reflect the new component name.
• Documentation
• Revised “Deploy Everything” document to replace myapp
with station
.
• Enhanced “Simple Atmos Tutorial” with updated example link and clarified instructional content.
Fix incorrect terraform flag in simple tutorial workflow example @jasonwashburn (#709)
what
Fixes inconsistencies in the simple-tutorial extra credit section on workflows that prevent successful execution when following along.
why
As written, the tutorial results in two errors, one due to an incorrect terraform flag, and one due to a mismatch between the defined workflow name, and the provided command in the tutorial to execute it.
references
Closes #708
Fix typos @NathanBaulch (#703)
Just thought I’d contribute some typo fixes that I stumbled on. Nothing controversial (hopefully).
Use the following command to get a quick summary of the specific corrections made:
git diff HEAD^! --word-diff-regex='\w+' -U0 \
  | grep -E '\[-.+-\]\{\+.+\+\}' \
  | sed -r 's/.*\[-(.+)-\]\{\+(.+)\+\}.*/\1 \2/' \
  | sort | uniq -c | sort -n
FWIW, the top typos are:
• usign
• accross
• overriden
• propogate
• verions
• combinatino
• compoenents
• conffig
• conventionss
• defind
Fix version command in simple tutorial @jasonwashburn (#705)
what
• Corrects incorrect atmos --version
command to atmos version
in simple tutorial docs.
why
• Documentation is incorrect.
references
closes #704
docs: add installation guides for asdf and Mise @mtweeman (#699)
what
Docs for installing Atmos via asdf or Mise
why
Atmos can now be installed via asdf and Mise, but installation guides were not yet included on the website. This PR fills that gap.
references
Use Latest Atmos GitHub Workflows Examples with RemoteFile
Component @milldr (#695)
what
• Created the RemoteFile component
• Replaced all hard-coded files with RemoteFile calls
why
• These workflows quickly get out of date. We already have these publicly available on cloudposse/docs
, so we should fetch the latest pattern instead
references
• SweetOps slack thread
Update Documentation and Comments for Atmos Setup Action @RoseSecurity (#692)
what
• Updates comment to reflect action defaults
• Fixes atmos-version
input
why
• Fixes input variables to match acceptable action variables
references
How long does it take to get the Linux x64 package?