#atmos (2023-01)
2023-01-04
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.19.1 what Add Sources of Component Variables to atmos describe component command Update docs why The atmos describe component command outputs the final deep-merged component configuration in YAML format. The output contains the following sections: atmos_component - Atmos component name atmos_stack - Atmos stack name backend - Terraform backend configuration backend_type - Terraform backend type command - the binary to execute when provisioning the component (e.g. terraform, terraform-1, helmfile)…
2023-01-06
![Viacheslav avatar](https://avatars.slack-edge.com/2023-02-21/4846260547329_aab47a8274143e053527_72.jpg)
Hi all! I have an atmos workflow that creates two stacks with terraform. Let's suppose stack1 creates an Azure resource group with a randomly generated name, and stack2 creates a storage account that needs to point to that resource group. In order to create the storage account, I need to use a data source for the resource group in stack2, but I don't know the name of that resource group to import.
Is there any native way to read outputs from stack1 and pass them as variables to stack2? I want to avoid using complex bash commands between applying the two stacks. Maybe there are other suitable approaches to solve this problem following the philosophy of atmos?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
will this https://pr-280.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/components/remote-state/ help?
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
(not related to workflows, but it’s a way to get the remote state of one component and use it as inputs for another component)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Yes, this is the way you would want to do it
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
(note this is a link to an open PR updating the documentation; we’re going to merge it soon, and the link above will break)
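The pattern the linked docs describe can be sketched roughly like this; the module version and the `vpc` component/stack names below are illustrative, not taken from the thread:

```hcl
# Sketch only: instantiate the cloudposse remote-state module in the
# consuming component and read another component's outputs through it.
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.4.1" # illustrative version

  # The Atmos component and stack whose outputs we want to read
  component = "vpc"
  stack     = "ue2-dev"

  context = module.this.context
}

# The dependency's outputs are then available as, e.g.:
# module.vpc.outputs.vpc_id
```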
![Viacheslav avatar](https://avatars.slack-edge.com/2023-02-21/4846260547329_aab47a8274143e053527_72.jpg)
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) thanks a lot, it looks suitable for my project
![Viacheslav avatar](https://avatars.slack-edge.com/2023-02-21/4846260547329_aab47a8274143e053527_72.jpg)
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) For some reason it does not work as I expected; I'd be glad to get any advice.
I created resources with component main and stack prerequisites.
In the second stack mid, with a reference to the same component main, I enabled the remote-state module following the guide and passed stack=prerequisites and component=main as inputs, so I expected to get the outputs from the remote backend of the previous stack, but got module.remote_state[0].outputs is null instead.
Investigating the state file, I found that module.remote_state[0] with name "config" contains all input vars of the stack prerequisites, which is correct I think. But "data_source" looks like:
```json
{
  "module": "module.remote_state[0]",
  "mode": "data",
  "type": "terraform_remote_state",
  "name": "data_source",
  "provider": "provider[\"terraform.io/builtin/terraform\"]",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "backend": "local",
        "config": {
          "value": {
            "path": ".terraform/modules/remote_state/modules/remote-state/dummy-remote-state.json"
          },
          "type": [
            "object",
            {
              "path": "string"
            }
          ]
        },
        "defaults": null,
        "outputs": {
          "value": {},
          "type": [
            "object",
            {}
          ]
        },
        "workspace": null
      },
      "sensitive_attributes": []
    }
  ]
},
```
For some reason the outputs of the component created in stack prerequisites can't be read by the remote-state module.
P.S. component main uses a remote backend in an Azure storage account, which is defined in the backend.tf file, so I do not define the backend in atmos.yaml. But here I see that remote-state supports only s3 and tf cloud. Can that be the reason why the module returns null?
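As a stopgap until the module supports azurerm, the dependency's outputs can be read directly with Terraform's builtin terraform_remote_state data source; the resource group, storage account, container, and key names below are placeholders:

```hcl
# Workaround sketch (all names are placeholders): read the prerequisites
# stack's state directly from the Azure storage backend, bypassing the module.
data "terraform_remote_state" "prerequisites" {
  backend = "azurerm"

  config = {
    resource_group_name  = "tfstate-rg"       # placeholder
    storage_account_name = "tfstatestorage"   # placeholder
    container_name       = "tfstate"
    key                  = "prerequisites.terraform.tfstate"
  }
}

# e.g. data.terraform_remote_state.prerequisites.outputs.resource_group_name
```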
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this module https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state supports only s3 and remote backends
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we are not using Azure and can’t currently test it
![Viacheslav avatar](https://avatars.slack-edge.com/2023-02-21/4846260547329_aab47a8274143e053527_72.jpg)
Thanks for update, got it. Do you plan to add Azure as a backend for remote state in the near future?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
No, this would be a community driven feature. We would accept a PR adding support.
![Viacheslav avatar](https://avatars.slack-edge.com/2023-02-21/4846260547329_aab47a8274143e053527_72.jpg)
@Erik Osterman (Cloud Posse) Thanks, here is my PR to enable azurerm support: https://github.com/cloudposse/terraform-yaml-stack-config/pull/61
what
• Allow using the azurerm backend to read state from
why
• Currently only the s3 and remote backends are supported
references
2023-01-07
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.20.0 what & why
Add “Quick Start” doc https://atmos.tools/category/quick-start/
Add “Component Remote State” docs https://atmos.tools/core-concepts/components/remote-state/
Add “CLI Commands Cheat Sheet” doc https://atmos.tools/cli/cheatsheet/
Update/improve “Atmos…
2023-01-09
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Hi, I see some action in this repo https://github.com/cloudposse/github-action-setup-atmos is this action ready to use? @matt
A GitHub Actions that installs the Atmos CLI
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
This one is ready
A GitHub Actions that installs the Atmos CLI
2023-01-10
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.21.0 what
Avoid exiting early when missing configuration file
Make the --workflow-template flag optional for the atmos atlantis generate repo-config command. Update docs
Update atmos describe stacks and atmos describe affected commands
why
Do not immediately exit when a configuration file is not found. This allows, for example, the version command to be run without a configuration file. The commands requiring a configuration file still exit when it is missing.
2023-01-12
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Hello all, I have (what I hope is) a fairly basic question about Atmos that I haven’t been able to answer with the documentation. Is there a way to configure components in a stack, such that you can take outputs from one component and feed them in as inputs to another component?
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
As an example, let’s assume I have a stack that creates a VPC and a set of EC2 instances – could I take a set of subnet ids from the “VPC component” and pass it into the components that create the EC2 instances?
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Imports don’t seem like the right answer since that appears to be primarily static configuration
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
also take a look at https://atmos.tools/category/quick-start/, especially https://atmos.tools/quick-start/create-components
Take 20 minutes to learn the most important atmos concepts.
In the previous steps, we’ve configured the repository, and decided to provision the vpc-flow-logs-bucket and vpc Terraform
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Oh, I see . One thing that’s not immediately clear is how you indicate the order in which the components have to run. Is there some way to create a dependency tree out of them?
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I see no indication in here how the stack determines which TF modules to run first – is it just based on the definition order?
In the previous step, we’ve configured the Terraform components and described how they can be copied into the repository.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you decide what to deploy first. In stacks, you just configure the components that CAN be deployed into the stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you execute the atmos terraform apply <component> --stack <stack> command
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
so you deploy the dependencies first
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
then deploy the components that use the remote state from the dependencies
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you can also use workflows if you want to deploy in particular order https://atmos.tools/core-concepts/workflows/
Workflows are a way of combining multiple commands into one executable unit of work.
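The ordering described above can be captured in a workflow; this is a rough sketch (the workflow, component, and stack names are hypothetical):

```yaml
# Hypothetical example: deploy the dependency first, then its consumer.
workflows:
  deploy-storage:
    description: "Deploy the resource group, then the storage account"
    steps:
      - command: terraform apply resource-group -s prerequisites -auto-approve
        name: deploy-rg
      - command: terraform apply storage-account -s mid -auto-approve
        name: deploy-storage
```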
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Hmm, that’s unfortunate. I’d really like a “deploy this whole tree of dependencies” where I can clearly tell each component what other components it depends on
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
that’s what the workflows are for
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Workflows appear to require you to create a linear list rather than a dependency tree, where things can work in parallel
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
oh, graphs are complicated and we don’t support anything like that
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
it would convolute the stacks and components
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
In a previous role, I wrote something like that to orchestrate CloudFormation stacks
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
it’s def possible, but will create a lot of new issues and will make everything much more complicated
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Each component would indicate what outputs it requires from each other component and it would automatically resolve the dependency tree
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Yeah, I understand
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
It can be done for a particular case, but to make all of that generic is not an easy task
![Geoffrey Hichborn avatar](https://secure.gravatar.com/avatar/c500620dcdbef186ab90ef1c7e541a77.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
When you are looking at deploying literally hundreds of components, having things that don’t depend on each other deployed in parallel is a nice benefit
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we are planning on extending Atmos workflows to add a parallel section to deploy a collection of components in parallel
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this is much easier to understand, reason about, and use than complex dependencies expressed in YAML for different components in the same or different stacks
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.22.0 what Update atmos workflow command to allow restarting workflows from a named step Update workflow docs https://atmos.tools/core-concepts/workflows/ why Useful when you want to restart a workflow from a particular step. Each workflow step can be given an arbitrary name (step's identifier) using the name attribute. For example: workflows: test-1: description: "Test workflow" steps: - command: echo Command…
2023-01-13
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.23.0 what Update atmos terraform apply and atmos terraform deploy commands Add --planfile flag to atmos terraform apply and atmos terraform deploy commands Improve docs why
Support two ways of specifying a planfile for atmos terraform apply and atmos terraform deploy commands:
atmos terraform apply and atmos terraform deploy commands support the --planfile flag to specify the path to a planfile. The --planfile flag should be used instead of the planfile argument in the native terraform apply…
2023-01-17
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.24.0 what & why
Update atmos describe affected command
Add affected Spacelift stack to each item in the output list (if the Spacelift workspace is enabled in settings.spacelift.workspace_enabled for the component in the stack). This takes into account settings.spacelift.stack_name and settings.spacelift.stack_name_pattern settings which override the auto-generated Spacelift stack names
atmos describe affected
[
{
"component": "test/test-component-override-2",
"component_type":…
2023-01-19
![Nimesh Amin avatar](https://avatars.slack-edge.com/2022-03-01/3175287937013_6196d4e8ede5f6d560e9_72.png)
hi! When I try to run atmos terraform shell <component> -s <env> I get: “can’t find a shell to execute” at the very end. I remember this working at some point, but now I’m second guessing myself
This is all within the docker container. Am I forgetting to set some env?
2023-01-20
![Russell Sherman avatar](https://avatars.slack-edge.com/2022-11-02/4336208605536_5b81e95c6772a7a54ebc_72.jpg)
it appears that the atmos helmfile implementation does not take into account the possibility of an eks cluster name including attributes
adding attributes to the cluster_name_pattern does not appear to have any effect and there does not appear to be a way to override or specify the cluster name
am i missing something?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
did you set the name pattern in atmos.yaml?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
all these tokens are supported
```go
// ReplaceContextTokens replaces context tokens in the provided pattern and returns a string with all the tokens replaced
func ReplaceContextTokens(context Context, pattern string) string {
	r := strings.NewReplacer(
		"{base-component}", context.BaseComponent,
		"{component}", context.Component,
		"{component-path}", context.ComponentPath,
		"{namespace}", context.Namespace,
		"{environment}", context.Environment,
		"{region}", context.Region,
		"{tenant}", context.Tenant,
		"{stage}", context.Stage,
		"{workspace}", context.Workspace,
		"{attributes}", strings.Join(context.Attributes, "-"),
	)
	return r.Replace(pattern)
}
```
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
so if you set, for example:
cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster"
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and add
```yaml
attributes:
  - blue
```
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
to the resources, then the cluster name will be something like this
ns-core-ue2-prod-blue-eks-cluster
![Russell Sherman avatar](https://avatars.slack-edge.com/2022-11-02/4336208605536_5b81e95c6772a7a54ebc_72.jpg)
using atmos 1.24.0
in atmos.yaml i have
```yaml
helmfile:
  use_eks: true
  cluster_name_pattern: "{namespace}-{stage}-{attributes}"
```
in my stack config i have
```yaml
components:
  helmfile:
    echo-server:
      vars:
        attributes:
          - "20230115"
```
in the execution log i see
```txt
Writing the variables to file:
/conf/components/helmfile/echo-server/palolo-labs-us-west-2-echo-server.helmfile.vars.yaml
Using AWS_PROFILE=default
Downloading kubeconfig from the cluster 'palolo-labs-' and saving it to /dev/shm/palolo-labs-us-west-2-kubecfg
Executing command:
/usr/local/bin/aws --profile default eks update-kubeconfig --name=palolo-labs- --region=us-west-2 --kubeconfig=/dev/shm/palolo-labs-us-west-2-kubecfg
An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: palolo-labs-.
exit status 254
```
here it does not appear that the attribute is being appended to the cluster name and so the cluster is not found
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
it’s not easy to understand what’s going on from the output above, you can zip up the project (all files and atmos.yaml) and DM me, I’ll check it
![Russell Sherman avatar](https://avatars.slack-edge.com/2022-11-02/4336208605536_5b81e95c6772a7a54ebc_72.jpg)
much appreciated..
2023-01-25
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
How do you guys deploy lambdas with atmos? I'm trying to decide on the best approach when I have a monorepo with atmos infra and another repo for the lambda code, and I'm trying to avoid 2 PRs to create/deploy the lambda function
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this is not related to atmos; this is about how you handle two repos, it's a terraform thing
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
and in general, you can use two (maybe more) diff methods to handle that:
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
yes, it's not purely atmos related, but I was wondering if you guys handle any part of a lambda infrastructure with atmos and what the user flow looks like
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- For the lambda source repo, create a pipeline (e.g. GH actions) to compile the code, zip it, and store in, let’s say, S3 bucket. Then in the infra repo, use the ZIP artifact URL to download it and use for the lambda, e.g. https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/blob/master/main.tf#L99
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
- In the infra repo Terraform code, download the Lambda source(s) using, for example, https://github.com/hashicorp/terraform-provider-http. Then zip it up and create an archive on disk, then use it for Lambda. Similar to https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/blob/master/examples/complete/fixtures.us-east-2.tfvars#L31 https://github.com/cloudposse/terraform-aws-lambda-function/blob/main/examples/complete/main.tf#L46
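The second approach can be sketched like this; the function name, role, runtime, and source path are all placeholders, not taken from the linked examples:

```hcl
# Illustrative sketch (all names are placeholders): zip the downloaded
# source on disk, then point the Lambda function at the archive.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda-src" # placeholder path to the fetched source
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "this" {
  function_name    = "example"               # placeholder
  role             = aws_iam_role.lambda.arn # assumed to be defined elsewhere
  handler          = "index.handler"
  runtime          = "nodejs18.x"
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
}
```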
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
> trying to avoid 2 PRs
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
in the best case scenario, you will need a PR in the lambda source repo (for the changes), and a pipeline run (e.g. Spacelift or GH action) in the infra repo to detect the source changes (or an artifact changes) and redeploy
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
yes, I could avoid using 2 PRs, but no matter what, a user will have to interact with two different repos
2023-01-31
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.25.0 what Update Atmos Stack imports Update docs https://atmos.tools/core-concepts/stacks/imports/ Add imports schema with context Allow using Go templates in imported configurations why Auto-generate Atmos components from templates Using imports with context and parameterized config files will help to make the stack configurations extremely DRY, and is very useful when creating stacks and components for <a…