#atmos (2023-02)
2023-02-01
v1.26.0

what
• Update the atmos describe affected command
• Add the --repo-path flag to the atmos describe affected command
• Update docs: https://atmos.tools/cli/commands/describe/affected

why
• The --repo-path flag specifies the filesystem path to the already cloned target repository with which to compare the current branch, and it can be used when working with private repositories

The atmos describe affected command produces a list of the affected Atmos components and stacks given two Git commits.
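For illustration, a hedged sketch of how the new flag might be used with a private repository (the clone URL and local path below are made up for the example):

# clone the target repo yourself (e.g. with credentials for a private repo),
# then point atmos at the existing clone instead of letting it clone the remote
git clone git@github.com:acme/infra.git /tmp/infra
atmos describe affected --repo-path /tmp/infra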
2023-02-03
v1.26.1

what
• Update Go templates in stack configs

why
• If a context is not provided (as if using import with context), don't process Go templates in the imported configurations (templating can be used for Datadog, ArgoCD etc. w/o requiring Atmos to process them)
2023-02-09
v1.27.0

what && why
• Algolia search integration for Atmos docs
Hi, I'm using Atmos in a test deployment to improve my understanding of the cloudposse terraform modules, and have run into a problem when backfilling the access_roles for the tfstate-backend after the bucket has been created (and post-deployment of account + account-map).
Although I can view the outputs of the account-map module from a local command (atmos terraform output account-map -s core-gbl-root), attempts to access this data via component remote state are failing.
atmos terraform plan tfstate-backend -s core-ue1-root
...
│ Error: Attempt to get attribute from null value
│
│ on ../account-map/modules/roles-to-principals/outputs.tf line 12, in output "full_account_map":
│ 12: value = module.account_map.outputs.full_account_map
│ ├────────────────
│ │ module.account_map.outputs is null
│
│ This value is null, so it does not have any attributes.
I can't quite see how the tenant, env, and stage passed to the remote-state module in roles-to-principals differ from those used on the command line, so is there configuration elsewhere that I've missed?
show how you use the remote-state module
context must be provided to it
context = module.this.context
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.
module "account_map" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.3.1"
component = "account-map"
privileged = var.privileged
tenant = var.global_tenant_name
environment = var.global_environment_name
stage = var.global_stage_name
context = module.always.context
}
The remote state being read is the one defined in a vendored component, the child module of account-map.
I have a remote_state_backend configured in a _defaults.yaml (for now):
remote_state_backend:
  s3:
    bucket: "pd-core-ue1-root-tfstate"
    dynamodb_table: "pd-core-ue1-root-tfstate-lock"
    key: "terraform.tfstate"
    role_arn: "arn:aws:iam::root_acc_id:role/seed_role"
I thought that was sufficient to hook things up
are these set correctly?
tenant = var.global_tenant_name
environment = var.global_environment_name
stage = var.global_stage_name
roles-to-principals defaults them to core, gbl, and root, and I'm not deviating from the standard, so that should work
it's not easy to figure out what's wrong; you can zip the project up and DM it to me
Seemed to have solved this; it was a problem with the s3 remote state of the account-map component. I made a stub component to test remote state for various deployed components, and each had non-null outputs except account-map.
A complete delete and clean of the directory and a redeploy fixed the problem, cheers for the help yesterday
super, glad you solved it. I looked at your code yesterday but didn't find anything obvious, that's why I wanted to recheck it today
2023-02-10
v1.28.0

what
• Add component_path to the outputs of the atmos describe affected command

why
• Output the path to the terraform or helmfile component
• Useful when we want to run certain workflows (e.g. in GitHub Actions) for specific components and need to know what the component path is for each
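For illustration, a hedged sketch of consuming the new field in a script. It assumes the JSON list is the only thing the command writes to stdout; the jq filter and the field layout beyond component_path are assumptions, not taken from the release note:

# print "<component> <path-to-its-code>" for every affected component
atmos describe affected | jq -r '.[] | "\(.component) \(.component_path)"'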
2023-02-13
Hi,
I want to extract some JSON properties from terraform output, but am having trouble with it. When I try to save the terraform output to a file:
atmos terraform "output -json > output.json" main -s security --skip-init
I get an error message, because Atmos recognizes it as a single terraform command, not as a command plus arguments:
│ Error: Unexpected argument
│
│ The output command expects exactly one argument with the name of an output variable or no arguments to show all outputs.
The alternative option also doesn't fit my requirements:
atmos terraform "output -json" main -s security --skip-init > output.json
because the output contains not only the terraform outputs but also atmos logs like
...
Executing command:
/usr/bin/terraform output -json
...
Is there a way to pass the > output.json redirection to terraform through Atmos, or maybe turn off Atmos stdout output for a specific workflow step? Does Atmos have a native way to do this?
The final goal is to read the service principal password created in terraform and call the az login command to switch users before running the next step.
@Andriy Knysh (Cloud Posse) all atmos output should be to stderr IMO
I believe atmos output can be squelched with log levels, but we should still handle this use case as pipelining commands is common Unix behavior
all atmos output should be to stderr IMO
I think no, b/c we have many situations where we don’t want anything in stderr, e.g. in GH actions
and we even have a PR to disable stderr completely
what
• Add the --redirect-stderr flag to atmos commands

why
• The flag specifies the file descriptor to redirect stderr to when atmos executes subcommands like atmos terraform and atmos helmfile
• Errors can be redirected to any file or any standard file descriptor (including /dev/null and /dev/stdout)
• Useful in GitHub Actions to prevent failures when any error sent to stderr makes the action fail
examples
atmos terraform plan test/test-component-override-2 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform plan test/test-component-override -s tenant1-ue2-dev --redirect-stderr ./errors.txt
atmos terraform apply test/test-component-override-2 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform apply test/test-component-override -s tenant1-ue2-dev --redirect-stderr ./errors.txt
atmos terraform destroy test/test-component-override-2 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform destroy test/test-component-override -s tenant1-ue2-dev --redirect-stderr /dev/null
atmos terraform workspace test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr /dev/null
atmos terraform workspace test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform workspace test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr ./errors.txt
test
atmos terraform plan test/test-component-override-3 -s tenant1-ue2-dev
Executing command:
/usr/local/bin/terraform workspace select test-component-override-3-workspace
# This is an error generated by terraform and sent to `stderr`
Workspace "test-component-override-3-workspace" doesn't exist.
You can create this workspace with the "new" subcommand.
Executing command:
/usr/local/bin/terraform workspace new test-component-override-3-workspace
Created and switched to workspace "test-component-override-3-workspace"!
atmos terraform plan test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr /dev/null
Executing command:
/usr/local/bin/terraform workspace select test-component-override-3-workspace
Executing command:
/usr/local/bin/terraform workspace new test-component-override-3-workspace
Created and switched to workspace "test-component-override-3-workspace"!
b/c we had issues with GH action seeing errors in stderr
Disabling stderr is a good feature when we want to show only the relevant information about a process and handle process errors separately, or even ignore them. But currently Atmos logs all of its service information to stdout along with the terraform output, and there is no way to split the terraform output from the atmos output (without writing an ugly wrapper that looks for the first curly brace, etc.), am I right?
I see these kinds of entries in my file:
Found stack config files:
Variables for the component 'main' in the stack 'security':
Writing the variables to file:
Command info:
Executing command:
along with the output from terraform when I redirect stdout to the file, but stderr is empty. So disabling stderr is not a magic bullet in this case.
@Viacheslav atmos terraform ... currently does not support piping only the terraform output into a file, b/c atmos always prints some info. We need to add LOG levels, with one LOG level that completely suppresses all output from atmos
currently, you can do the following:
atmos terraform shell main -s security
terraform output > file.json
So stderr disabling is not a magic bullet in this case
yes, this is not for this case; it's mostly for GitHub Actions, to redirect stderr from other binaries when we don't want errors in stderr in GH actions. I was trying to point out that we don't need to redirect all atmos output to stderr; we need to introduce LOG levels
@Viacheslav for manual one-time execution, this should work for you
atmos terraform shell main -s security
terraform output > file.json
it will not work in some cases in CICD. We’ll try to implement log levels in atmos soon
@Andriy Knysh (Cloud Posse) Thank you! atmos terraform shell is enough for testing purposes right now.
Log levels would be a very welcome feature; we hope to see them in an upcoming release :)
I'm experiencing this problem today, hoping to use terraform output > out.json, but it has atmos output in it. I have used --redirect-stderr but still have some atmos output in my out.json. I will try the shell command above.
Also, Terragrunt outputs only to stderr but can be run in CI, so I agree with @Erik Osterman (Cloud Posse) that it makes sense to send all atmos output to stderr
(it will not work in all cases, e.g. some GH actions)
I am getting a really strange error here
and when the process exited, I got kicked out into a subshell inside the component directory. @Andriy Knysh (Cloud Posse) do you see what I did wrong?
reordered the command to more closely match the above suggestion and still same error
@Viacheslav for manual one-time execution, this should work for you
atmos terraform shell main -s security
terraform output > file.json
those are two diff commands
first, you need to execute
atmos terraform shell <component> -s <stack>
then in the new shell you can execute any native TF command w/o using atmos syntax
Ohhh I see
This command starts a new SHELL configured with the environment for an Atmos component in a stack, allowing execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.
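For reference, a hedged sketch of the two-step flow suggested above (the component and stack names are the ones used in this thread; exiting at the end is just the normal way to leave the subshell):

atmos terraform shell main -s security   # start a subshell configured for the component in the stack
terraform output -json > output.json     # plain terraform, so no atmos log lines end up in the file
exit                                     # leave the component subshell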
@Viacheslav @kevcube please see this release https://github.com/cloudposse/atmos/releases/tag/v1.33.0
v1.29.0

what
• Add the --redirect-stderr flag to atmos commands
• If the --redirect-stderr flag is not passed, always redirect stderr to stdout for the terraform workspace select command

why
• The flag specifies the file descriptor to redirect stderr to when atmos executes subcommands like atmos terraform and atmos helmfile
• Errors can be redirected to any file or any standard file descriptor (including /dev/null and /dev/stdout)
• Useful in GitHub Actions to prevent failures when any error sent to stderr makes the action fail
2023-02-16
I've got a bit stuck with how the aws-sso component provisions identity account permission sets, can anyone spot what I've missed here?
Following earlier conversations in this channel, for my atmos experiments I'm using the {tenant}-{stage} format for account names in account, along with a global descriptor_formats addition (described here link to comment), resulting in account_map outputs like the following:
terraform_roles = {
  "core-auto"     = "arn:aws:iam::{acc_id}:role/eg-core-gbl-auto-terraform"
  "core-identity" = "arn:aws:iam::{acc_id}:role/eg-core-gbl-identity-admin"
  ...
}
While this has worked well elsewhere, in aws-sso the policy-Identity-role-RoleAccess.tf uses stage alone to perform a full_account_map lookup, which is an invalid index ("identity" vs "core-identity").
My solution was to implement a lookup like that described by @Jeremy G (Cloud Posse) in the link comment above:
locals {
  # add this local to convert from "identity" to "core-identity"
  account_name = lookup(module.this.descriptors, "account_name", var.iam_primary_roles_stage_name)

  # edit this lookup to use my new local
  identity_account = module.account_map.outputs.full_account_map[local.account_name]
}

module "role_prefix" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  stage   = var.iam_primary_roles_stage_name
  context = module.this.context
}
The alternative, setting var.iam_primary_roles_stage_name to the full "core-identity" string, results in a bad role name later on, as it is passed to "stage" in the role_prefix null-label module (you end up with ARNs like arn:aws:iam::{acc_id}:role/eg-core-gbl-core-identity-admin).
Is adding module.this.descriptors lookups to solve these kinds of issues the correct move here?
# This file generates a permission set for each role specified in var.target_identity_roles
# which is named "Identity<Role>RoleAccess" and grants access to only that role,
# plus ViewOnly access because it is difficult to navigate without any access at all.

locals {
  identity_account = module.account_map.outputs.full_account_map[var.iam_primary_roles_stage_name]
}

module "role_prefix" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  stage   = var.iam_primary_roles_stage_name
  context = module.this.context
}

data "aws_iam_policy_document" "assume_identity_role" {
  for_each = local.enabled ? var.identity_roles_accessible : []

  statement {
    sid    = "RoleAssumeRole"
    effect = "Allow"
    actions = [
      "sts:AssumeRole",
      "sts:TagSession",
    ]
    resources = [
      format("arn:${local.aws_partition}:iam::%s:role/%s-%s", local.identity_account, module.role_prefix.id, each.value)
    ]

    /* For future reference, this tag-based restriction also works, based on
       the fact that we always tag our IAM roles with the "Name" tag.
       This could be used to control access based on some other tag, like "Category",
       so is left here as an example.

    condition {
      test     = "ForAllValues:StringEquals"
      variable = "iam:ResourceTag/Name" # "Name" is the Tag Key
      values   = [format("%s-%s", module.role_prefix.id, each.value)]
    }
    resources = [
      # This allows/restricts access to only IAM roles, not users or SSO roles
      format("arn:aws:iam::%s:role/*", local.identity_account)
    ]
    */
  }
}

locals {
  identity_access_permission_sets = [for role in var.identity_roles_accessible : {
    name               = format("Identity%sRoleAccess", title(role)),
    description        = "Allow user to assume %s role in Identity account, which allows access to other accounts",
    relay_state        = "",
    session_duration   = "",
    tags               = {},
    inline_policy      = data.aws_iam_policy_document.assume_identity_role[role].json
    policy_attachments = ["arn:${local.aws_partition}:iam::aws:policy/job-function/ViewOnlyAccess"]
  }]
}
Sorry @Matthew Reggler (cc @Erik Osterman (Cloud Posse)), the published aws-sso component is old and has not been updated. The good news is that yes,
account_name = lookup(module.this.descriptors, "account_name", var.iam_primary_roles_stage_name)
is the way to move forward.
@Linda Pham (Cloud Posse) we should get this component updated this week then.
Heads up, some more updates went out to aws-sso
I think there may be more coming
Hi everyone. Are there any examples of directory structure that would be recommended for multi-cloud, multi-tenant environments using Atmos + Terragrunt + Terraform? I am interested in learning more about using Atmos to deploy multi-tenant, multi-cloud environments across AWS and GCP . Dev, Test, Prod can have multiple tenants that are almost identical but can run in separate cloud accounts across clouds. For example, for Prod we may have 2 tenants on aws and 5 tenants on GCP. The number of tenants is under 20, not in the 100s. (We are already using Terragrunt and looking for a better way to refactor our code to keep it DRY and easy to maintain across multiple environments/tenants).
I will say terragrunt is not needed in this case if you use atmos, and I do not think it is even compatible
managing remote state with Atmos is trivial so no need for it
Here is the refarch repo structure for atmos
https://github.com/cloudposse/atmos/tree/master/examples/complete
And atmos.tools documentation.
The stacks/orgs directory contains the root stacks, stacks/catalog contains a YAML file per terraform root dir that gets turned into tfvars, and the components/terraform directory houses all of the terraform root dirs
Pepe is right. Both atmos and terragrunt are terraform wrappers and result in a different structure of your code. I do not think it’s possible to combine them.
Why are you trying to combine them? What’s your use case?
There are other alternative layouts proposed here https://atmos.tools/core-concepts/stacks/catalogs
Catalogs are how to organize all Stack configurations for easy imports.
@Igor I’d be happy to do a screenshare and give you some more ideas
Great, this is what I was looking for! We are already using Terragrunt, and we like it. However, over time the directory structure got messy and inconsistent across teams/clouds. So we are in the process of refactoring it. I was hoping to introduce Atmos without removing Terragrunt as an additional layer.
It should be possible. That’s probably the right move as a first step to reduce risk.
Part of our motivation in writing atmos, though, was to ensure it works with vanilla terraform. I can see why you might want to keep using terragrunt if you're using a lot of complex terragrunt.hcl files.
There's a way to specify the terraform command (which you would want to override) in the atmos.yaml
@Andriy Knysh (Cloud Posse) seems like we didn’t document it here: https://atmos.tools/cli/configuration#components
Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
I believe you would add something like this to the atmos.yaml
components:
  terraform:
    command: /usr/bin/terragrunt
Also, since you are using terragrunt, you’ll probably want to disable the backend autogeneration in atmos
That would certainly simplify the migration! I would like to reduce the effort of initial refactoring by not having to refactor all Terragrunt code and just move the directories/update the links/etc.
components:
  terraform:
    auto_generate_backend_file: false
Yes, agree
Keep us posted on your progress. If you run into any gotchas, let us know or open an issue. Chances are there’s an easy fix on our side.
I’d love to write up a page for “How to migrate from Terragrunt to Atmos”
Igor, we need to write more documentation on different atmos topics. if you want us to review your work, you can add us to your repo or DM the repo structure
we designed infra using atmos for many different use cases, would help you with reviewing your design as well
Sounds good! I would love to see some documentation about how to migrate from Terragrunt to Atmos, especially if it's possible to keep the original Terragrunt code and even allow adding to it. Over time DevOps engineers will get more comfortable with the new Atmos approach, but keeping the old approach (Terragrunt) would simplify the migration. Again, only if it makes sense, since we don't want to create a monster. Our life is already complicated by the fact that we need to work across aws/gcp/azure.
let's start with your repo. I'm not super familiar with terragrunt, so I'll need something real to look at and start with; then we can write a doc on migrating from terragrunt to atmos, or on how to use them together
also, just a few notes on multi-cloud, multi-org design (I'm planning to write a doc about that as well)
components folder:
components/
  terraform/
    aws/
      vpc/
      vpc-log-bucket/
    gcp/
      component-1/
      component-2/
    azure/
      component-1/
      component-2/
separate the logic (code) by the cloud since the code is different
stacks folder:
stacks/
  catalog/
    aws/
      vpc/
        base.yaml
      vpc-log-bucket/
        base.yaml
    gcp/
      component-1/
        base.yaml
      component-2/
        base.yaml
    azure/
      component-1/
        base.yaml
      component-2/
        base.yaml
  mixins/
    aws/
      region/
      stage/
    gcp/
      region/
      stage/
    azure/
      region/
      stage/
  orgs/
    org1/
      core/
        dev/
        prod/
        staging/
      plat/
        dev/
        prod/
        staging/
    org2/
      core/
        dev/
        prod/
        staging/
      plat/
        dev/
        prod/
        staging/
then let's take, for example, the org1-core-ue1-dev stack:
in the file stacks/orgs/org1/core/dev/us-east-1.yaml, add this:
import:
  # <Import the components from the catalog required for this stack>
  - catalog/aws/vpc/base
  - catalog/gcp/component-1/base

components:
  terraform:
    vpc:
      metadata:
        component: aws/vpc # Point to the AWS VPC Terraform component
      vars: {} # Override the default vars in `base.yaml` if needed
    gcp-component-1:
      metadata:
        component: gcp/component-1 # Point to the GCP Terraform component
      vars: {} # Override the default vars in `base.yaml` if needed
just an idea ^
this shows you how you can create different Atmos components and point them to the TF code for diff clouds
of course, you can create diff orgs or diff tenants dedicated only to a specific cloud. For example, your org1 could work only with AWS, while org2 could work only with GCP, etc.
in which case, you would import the component and create Atmos components in the root stacks only for the specific cloud
similar with tenants inside one org
Yes, that would be the more common use case - tenants that are limited to one cloud. The main reason for multi-cloud on our side is that companies have a strong preference for working with a specific cloud provider. I also see it as an industry trend. So usually tenants are not replicated across clouds.
this is completely flexible and you can structure it in diff folders and sub-folders (both the components and stacks folders)
anything can be in any subfolder at any level
just be consistent
we’ve accumulated a lot of patterns and best practices already, just need to document all of that
Yes, the structure of stacks above looks great; it can consistently scale across many tenants/environments.
Just one more question - how do Terraform workspaces relate to this structure? Is it a separate workspace per tenant or per environment?
how do Terraform workspaces relate to this structure? Is it in separate folder per tenant or per environment?
TF workspaces are done automatically by atmos, taking into account the stack name (e.g. org1-core-ue1-dev) and the Atmos component name (e.g. vpc or gcp-component-1)
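A hedged sketch of what that looks like in practice (the component and stack names reuse the example above; the exact workspace name atmos derives is an assumption, since it depends on the configured name pattern):

atmos terraform workspace vpc -s org1-core-ue1-dev
# atmos calculates the workspace from the stack (and, when they differ, the component) name
# and runs terraform workspace select (or new), as seen in the v1.29 test output earlier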
note that @Igor already has tfstate, so atmos handles workspaces but won't know what to do about the state you have managed by terragrunt. So either you'll want to disable atmos usage of workspaces for state, or migrate the state. We do not have a process to migrate that state.
Igor let me know when you get to workspaces, I’ll show you how to override them per stack/component to keep the current ones
2023-02-17
2023-02-19
2023-02-20
Hi everyone, can somebody advise me please?
I have a helmfile component helmfile in the ./components/helmfile/ directory. The component contains a helmfile.d folder inside, as well as environments and bases directories; there is no helmfile.yaml, and this structure works well with standalone helmfile execution.
In Atmos I have a stack mid with some variables. When I execute atmos helmfile template helmfile -s mid, the varfile ./components/helmfile/mid-helmfile.helmfile.vars.yaml is created, but I get the following error:
Executing command:
/usr/local/bin/helmfile --state-values-file mid-helmfile.helmfile.vars.yaml template
in helmfile.d/00-cluster-addons.yaml: environment values file matching "mid-helmfile.helmfile.vars.yaml" does not exist in "."
When I move this file from the root of the component to the ./components/helmfile/helmfile/helmfile.d/ directory, it works well. But I don't get how to tell Atmos that it should use a state values file located in the root of the component folder, where it was generated, and not in helmfile.d/.
I know that I can run the separate command atmos helmfile generate varfile with the --file option, but it would be nice if I could set the default destination of the state values file, or change the path where helmfile looks for it (the root of the component folder, not helmfile.d).
Thanks!
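For reference, a hedged sketch of the explicit-varfile workaround mentioned above (the exact invocation and the target path inside helmfile.d are assumptions beyond the --file option named in the message):

# write the generated varfile into the folder where helmfile actually looks for it
atmos helmfile generate varfile helmfile -s mid --file ./components/helmfile/helmfile.d/mid-helmfile.helmfile.vars.yaml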
please show the Atmos component YAML definition in your stack
@Andriy Knysh (Cloud Posse) Sure, this is my mid.yaml stack file:
components:
  helmfile:
    helmfile:
      vars:
        provider: azure

vars:
  stage: mid
also atmos.yaml contains this component configuration:
base_path: "."
components:
terraform:
base_path: "components/terraform"
apply_auto_approve: false
deploy_run_init: true
init_run_reconfigure: true
auto_generate_backend_file: true
helmfile:
base_path: "components/helmfile"
kubeconfig_path: "~/.kube"
use_eks: false
cluster_name_pattern: "{stage}-cluster"
stacks:
base_path: "stacks"
included_paths:
- "**/*"
name_pattern: "{stage}"
workflows:
base_path: "workflows"
logs:
verbose: true
colors: true
the name helmfile is not a good name for the component, try to name it something meaningful (e.g. what the component does)
try this:
components:
  helmfile:
    my-component:
      metadata:
        component: my-component/helmfile.d # Point to the helmfile component code (on disk)
      vars:
        provider: azure
The metadata.component attribute tells Atmos where to find the component's folder. In this folder Atmos will generate the varfile.
So if your helmfile component works from my-component/helmfile.d, you can point Atmos to it (note that for Helmfile to work, a helmfile.yaml should be in the same folder, or a similar file which Helmfile expects as the starting point)
@Viacheslav
@Andriy Knysh (Cloud Posse) mm, got it! Thanks :)
@Andriy Knysh (Cloud Posse) I have an additional question - is it possible to use an absolute path when running atmos helmfile apply?
So under the hood, instead of
helmfile -e asa -l app=kubeflow-core --state-values-file=somestatefile.yaml apply
be able to run:
helmfile -e asa -l app=kubeflow-core --state-values-file=${PWD}/somestatefile.yaml apply
The idea is to keep the root of the repo as the working directory, because in the case of helmfile.d, helmfile goes into this folder and looks for values there instead of looking in the root folder. It is easy to fix by adding a PWD prefix, but is there something in Atmos that helps do this?
not currently. atmos always uses the component’s folder to write files to and as the working dir
2023-02-21
2023-02-23
2023-02-24
v1.30.0

what
• Update and enhance the Atlantis Integration
• Update and overhaul the Atlantis Integration docs: https://atmos.tools/integrations/atlantis/
• Add --format and --file flags to the atmos describe component command

why
• Allow configuring the Atlantis Integration in the settings.atlantis section in the YAML stack configs (instead of, or in addition to, configuring it in integrations.atlantis in atmos.yaml)

Atmos natively supports Atlantis for Terraform Pull Request Automation.
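For illustration, a hedged sketch of the new describe component flags (the component and stack names are reused from earlier examples in this archive; the output file name is made up):

# write the fully merged configuration of a component in a stack to a YAML file
atmos describe component test/test-component-override -s tenant1-ue2-dev --format yaml --file component.yaml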