#atmos (2023-04)
2023-04-04
2023-04-06
hello all we are currently deploying AWS components (such as VPC/EC2 etc) in a stack from the atmos CLI, something like this:
atmos terraform plan vpc -s <id-env-region>
is there a way to deploy all components in that same stack with a single CLI command?
otherwise, our CI/CD system has to define some sort of list and loop over all components
you can use workflows https://atmos.tools/core-concepts/workflows/
Workflows are a way of combining multiple commands into one executable unit of work.
you can create reusable workflows and then provide atmos stack on the command line
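for reference, a minimal sketch of such a workflow (file path, workflow name, and component names here are hypothetical):
# stacks/workflows/deploy.yaml (hypothetical path)
workflows:
  deploy-all:
    description: Plan all components in the given stack
    steps:
      - command: terraform plan vpc
      - command: terraform plan ec2
assuming the file is named deploy.yaml, you would then run it against a stack with something like atmos workflow deploy-all -f deploy -s <id-env-region>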
perfect, thank you, this looks exactly like what we are looking for. apologies for the newbie question but we are all picking up atmos for the first time and enjoying the ride!
thanks for using it, let us know if you need any help
awesome. thank you Andriy
watch out with this guy @Andriy Knysh (Cloud Posse) he is nice but not as nice as me
(we work together)
We’re also using workflows for this. So we define all the steps to bring up a layer of your stack.
(or destroy)
and they are sequential, they do not run in parallel right?
not yet, we have a task to add parallel steps
I prefer sequential
2023-04-07
v1.32.5
what && why: Add docs for the new Atmos GitHub Actions
references: https://github.com/cloudposse/github-action-atmos-affected-stacks https://github.com/cloudposse/github-action-atmos-component-updater
wild thought/idea incoming. :wink:
are there plans for dynamically referencing remote state without updating a TF module to specifically call out remote state? Many times we just need the ARN anyway, but sometimes we need the name or similar.
Potential example:
map_environment:
  APP_NAME: echoserver
  S3_BUCKET: "{{ remote-state.s3-access-logs.bucket_id }}"
Just like context.tf, we could use remote_state (or similar) as a variable in a generic stacks-state.tf or whatever (since components already have a remote-state.tf) copied to a component. Then it looks up state, takes the values it requested, and uses them. (The biggest caveat is how to use them; we don't want coalesce(remote.blah, var.blah) littered on every possible value.)
So another thought is one component Atmos calls to get the remote values; Atmos then takes those values and passes them to the expected component as it normally does (tfvars). That component just loops over all options provided, returns them to Atmos via TF outputs, Atmos replaces references in YAML, and creates the tfvars.
This is slower since you have to plan/apply 2 projects, but it gives us an easy path to referencing outputs without writing TF to reference them.
atmos terraform plan xyz -s my-stack
-> runs terraform plan remote-state -s my-stack
-> processes outputs -> creates tfvars file/backend json -> runs terraform plan xyz -s my-stack
If no remote-state references exist, it does what it does now.
@Andriy Knysh (Cloud Posse)
as a side note, Go templates could help with basic ARN generation as well.
if we set context or something, we could potentially provide all necessary values and have Atmos generate that ARN so we don't have to do that in TF
a bit more context: https://sweetops.slack.com/archives/G014YEKDH4K/p1680713411920479
cc @Jeremy G (Cloud Posse) (this is an off-shoot of that thought/discussion)
I don’t like the idea of Atmos generating an ARN rather than using Terraform to do it. Also, Atmos is not Terraform and does not directly have access to remote state.
You might be able to refine our current remote state Terraform implementation with a more generic remote state “mixin”, but unless you want to run through a separate Terraform apply to read the values, I don’t think Atmos can wire the outputs to the inputs for you. You might come up with some kind of “convention over configuration” where an input has a specific naming format and the mixin is configured to place a value with that name as a key in an output map and then end up with something like
locals {
  var_name = try(module.remote[var.component]["var_name"], var.var_name)
}
I would need to see some fully worked-out examples to see if that is really simplifying anything. I don’t think it would be possible to satisfy the constraints of a mixin, which is that the component should work without any code modification whether or not the mixin is present.
right. definitely a complex scenario. if it shows up as a need more often, i might take a rough stab at a POC
One important design decision is atmos isn't bound to AWS. We want to keep it platform agnostic. Our recommendation is to make the components support references and have them dereference those inputs. That's how we currently do everything.
If atmos starts reading terraform remote state, we're treading down a slippery slope. There's all the configuration of backends to access remote state, and a dozen types of remote backends.
I could imagine atmos running a command that output some settings and those settings are accessible
yes. Also, Atmos does not know anything about roles, credentials, etc. Underlying tools (e.g. terraform) do. So Atmos can't access AWS or any other clouds/platforms on its own
That’s a solid point @Erik Osterman (Cloud Posse).
@Andriy Knysh (Cloud Posse) that’s why I was suggesting a TF component that does it, but I get how other providers may be problematic for that.
What about reading things like name, etc from other stacks to reference them?
That’s all YAML and agnostic
That’s another chicken/egg problem for sure so definitely not an easy thing.
Just random thoughts.
this is done using the remote-state module
The Terraform Component Remote State module is used when we need to get the outputs of a Terraform component
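for context, the usual consumption pattern with that module looks roughly like this (component name and version pin are illustrative):
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "x.x.x" # pin to a real release

  component = "vpc" # the Atmos component whose outputs we want

  context = module.this.context
}
# the other component's outputs are then read as e.g. module.vpc.outputs.vpc_id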
Yeah, just thinking through ways not to define a bunch of those and doing something more dynamic
2023-04-08
v1.33.0
what: Update Atmos logs. Add logs.file and logs.level to atmos.yaml. Update docs: https://atmos.tools/cli/configuration
why: Allow specifying a file for Atmos to write logs to, and a log level to control the amount of Atmos logging. Atmos logs are configured in the logs section of atmos.yaml. Supported log levels: Trace, Debug, Info, Warning, Off (Off is the default and is used if logs.level is not set).
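based on the release notes, the new settings look roughly like this in atmos.yaml:
logs:
  file: "/dev/stdout"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  level: Info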
2023-04-10
2023-04-11
quick question about atmos vendor. my component.yaml looks like this
uri: github.com/cloudposse/terraform-aws-ec2-instance.git/?ref={{.Version}}
version: 0.47.1
but when I pull, I get this error:
subdir "%253Fref=0.47.1" not found
how should the url be formatted?
what happens if you remove the /
uri: github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}}
version: 0.47.1
@Andriy Knysh (Cloud Posse) interesting usage of vendor: @Michael Dizon is using it to vendor modules and not components
Since we support generating the backend in atmos, really any module we have can be a component
so when i remove the /, I get this error:
error downloading 'https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
ah interesting. so presently, the intent is for this to point to the terraform-aws-components repo?
no, it can point to any endpoint supported by go-getter
As implemented, we usually use it for components in our monorepo
what about
uri: github.com/cloudposse/terraform-aws-ec2-instance.git//?ref={{.Version}}
version: 0.47.1
Pulling sources for the component 'ec2-instance' from 'github.com/cloudposse/terraform-aws-ec2-instance.git//?ref=0.47.1' and writing to '/Users/mdizon/Code/xxx/xxx-terraform/components/terraform/ec2-instance'
error downloading 'https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
formatted it like this and got the same result
github.com/cloudposse/terraform-aws-ec2-instance?ref={{.Version}}
I can confirm this appears to be a bug.
erik@Eriks-MacBook-Pro /tmp % export ATMOS_LOGS_LEVEL=Trace
erik@Eriks-MacBook-Pro /tmp % atmos vendor pull --component bar
Pulling sources for the component 'bar' from 'github.com/cloudposse/terraform-aws-ec2-instance.git//?ref=0.47.1' and writing to 'components/terraform/bar'
error downloading 'https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
ah
@Andriy Knysh (Cloud Posse) https://github.com/hashicorp/go-getter/issues/114
All of these give the same error:
go-getter "git://github.com/kelseyhightower/hashiconf-eu-2016.git" "dest"
2018/09/08 21:16:38 Error downloading: error downloading 'git://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git
go-getter "git::http://github.com/kelseyhightower/hashiconf-eu-2016.git" "dest"
2018/09/08 21:17:09 Error downloading: error downloading 'http://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git
go-getter "git::https://github.com/kelseyhightower/hashiconf-eu-2016.git" "dest"
2018/09/08 21:17:14 Error downloading: error downloading 'https://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git
go-getter [email protected]:kelseyhightower/hashiconf-eu-2016.git "dest"
2018/09/08 21:17:53 Error downloading: error downloading 'ssh://[email protected]/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git
This also occurs on nomad. What is the problem? I’m using the latest version.
interesting.. this worked:
uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
oh, interesting
I was just going to say, I was able to do //test to vendor the tests, so I thought it was a problem with the root only
but that makes sense that /// works
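putting the fix together, a complete component.yaml for vendoring a plain module would look roughly like this (metadata values are illustrative):
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: ec2-instance-vendor-config
spec:
  source:
    # the trailing /// tells go-getter to vendor the repo root
    uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
    version: 0.47.1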
If you get this working, could I trouble you to open a doc here on how to vendor modules?
Atmos natively supports the concept of "vendoring", which is making a copy of the 3rd-party components in your own repo. Our implementation is primarily inspired by the excellent tool by VMware Tanzu, called vendir. While atmos does not call vendir, it functions and supports a configuration that is very similar.
After defining the component.yaml configuration, the remote component can be downloaded by running the following command:
atmos vendor pull -c components/terraform/vpc
Schema: component.yaml
To vendor a component, create a component.yaml file stored inside of the components/_type_/_name_/ folder (e.g. components/terraform/vpc/).
The schema of a component.yaml file is as follows:
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:
    # 'uri' supports all protocols (local files, Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP),
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter
    # In 'uri', Golang templates are supported https://pkg.go.dev/text/template
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 0.194.0
    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    # https://en.wikipedia.org/wiki/Glob_(programming)
    # https://github.com/bmatcuk/doublestar#patterns
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    excluded_paths:
      - "**/context.tf"
  # Mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # All mixins are processed in the order they are declared in the list.
  mixins:
    # https://github.com/hashicorp/go-getter/issues/98
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
      filename: context.tf
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf
      version: 0.194.0
      filename: introspection.mixin.tf
sure no problem!
@Erik Osterman (Cloud Posse) https://github.com/cloudposse/atmos/pull/364
re-reviewed
ok, in a meeting right now. will update after i come out
@Erik Osterman (Cloud Posse) committed your suggestions. learned something new today, didn’t know github could do that!
Thanks for the contributions!
2023-04-12
v1.33.1
what: Updated documentation and examples with instructions on vendoring components (and modules as components) from GitHub repos
why: appending /// to the end of a GitHub URI is not obvious
2023-04-13
This explains how any module can now be used as a component without needing to write a component that wraps it.
Basically any module can be used as a root module within the atmos framework.
v1.33.2
what: Rename atmos describe dependants to atmos describe dependents. Add an alias to still allow atmos describe dependants.
why: There are two acceptable ways to spell "dependent", depending (no pun intended) on whether you are speaking American English (dependent) or British English (dependant).
2023-04-14
v1.34.0
what: Update the env stack config section. Allow using null to unset an ENV var.
why: If a variable is set to null, it will not be set as an ENV var in the executing process (it will just be skipped). Setting it to null overrides all other values set in the stack configs for the component. This is useful if an ENV var is set globally in top-level stacks for the entire configuration, but needs to be unset for some specific components.
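from the release notes, usage looks roughly like this in a stack config (the component name here is illustrative):
components:
  terraform:
    my-component:
      env:
        # TEST_ENV_VAR4 may be set globally in a top-level stack;
        # null unsets it for this component only
        TEST_ENV_VAR4: null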
2023-04-15
2023-04-19
Can Atmos + Terraform be used to create new AWS accounts in an organization and provision resources into that new account as a single operation? I’m assuming the account creation is asynchronous, but the Terraform resource docs don’t specify.
atmos may make this possible but it’d require a lot of additional logic, two steps makes more sense
because the account ID won’t be known, and you need to instantiate a new provider to work on that second account
so there will necessarily be multiple calls to “terraform” which I don’t think atmos is capable of today
unless it has a terragrunt style “apply-all” command that i don’t know about
I see that atmos has workflows which allow sequential commands, but I don’t see anything that would enforce the “wait” or dependency ordering. I think remote outputs and some dynamic stuff in the providers can handle finding the account id. I’m mainly just wondering on the orchestration part.
The two-steps approach makes sense, but how is this typically done in a CI/CD environment? I basically want to add a new account to some config, submit a PR, then have it created and a baseline config applied
I’ve never fully automated that. Usually just have a common “account-baseline” module that we add to the new account after creation
could be done in CI/CD in two steps still
Yeah it does look like there is a status check API on the create account operation: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/organizations/describe-create-account-status.html
Probably not worth optimizing with full automation. Using the CLI to poll the status and then run the baseline tf apply as a second step seems reasonable
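for what it's worth, the polling itself would be a single CLI call (the request id below is a placeholder; the real one is returned by the create-account call):
aws organizations describe-create-account-status \
  --create-account-request-id car-EXAMPLEREQUESTID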
well, the terraform resource for adding an account to an org shouldn't be "created" until this status check returns true anyway, so i don't think you'll need that CLI bit
That’s what I was wondering, but haven’t found confirmation of that
yeah. in my experience, AWS account creation happens really fast, like maybe a few seconds
that’s what makes me think it is not waiting for the full completion
as i’ve seen in most guides it can take several minutes
And the API docs seem to indicate it’s a background process that could take minutes: https://docs.aws.amazon.com/organizations/latest/APIReference/API_CreateAccount.html
Actually looking at the TF provider code, it does look like it is calling and waiting on that status endpoint. So maybe it’s just fast the majority of the time, but could take a while
Yes, we do this
In our commercial reference architecture (for sale), we leverage workflows to bring everything up
This is our account factory https://docs.cloudposse.com/components/catalog/aws/account/
This component is responsible for provisioning the full account hierarchy along with Organizational Units (OUs). It includes the ability to associate Service Control Policies (SCPs) to the Organization, each Organizational Unit and account.
Thanks, @Erik Osterman (Cloud Posse). Sounds like I'm on the right track. Do you all ever use nested OUs? Unless I'm missing something, that account factory only supports one level deep. I'm curious if that is an intentional best practice
yes & no. We don’t recommend them b/c of resource naming and disambiguation.
architecturally I like it, but in practice it’s problematic if you subscribe to our opinionated naming conventions.
ah fair, that makes sense
in our convention:
org = namespace
ou = tenant
account = stage
region = environment
resource = name
so imagine an org called acme, with an ou called plat (for platform), with an account called prod, with resources in us-east-2 (we use use2), and a cluster (eks)
the resource becomes acme-plat-prod-use2-eks
and then all the sub-resources associated with the cluster
now introduce sub-OUs, and you add 4-5 characters, and now you're definitely hitting limits on resource names for many systems.
That’s helpful, thanks!
Hi, to jump on this, one option I've considered is to use a convention along the lines of:
namespace = org
tenant = "smallest" ou
stage = account
environment = region
name = resource
taking your example: an org called acme, with an ou called plat, with child ou's called foo and bar, with an account called prod in each ou, with resources in us-east-2, and a resource (eks):
resource in plat-prod: acme-plat-prod-use2-eks
resource in (plat-)foo-prod: acme-foo-prod-use2-eks
resource in (plat-)bar-prod: acme-bar-prod-use2-eks
There is a loss of visibility inherent here (which can only be partially mitigated by adding an "ou-path" tag/attribute/etc to each null-label), and some tweaking of the account module is required (add support for the parent_id property in the YAML for an optional second layer of aws_organizations_organizational_unit resources).
However, I don't think this would actually break anything in account-map or elsewhere; tenant still refers to an ou, just with an additional constraint.
Have you ever considered something like this? Or is “tenant=ou” by convention or technical limitation more immutable than I have considered here?
I suppose you would be in trouble if you reused an ou name across different branches of your aws org hierarchy, but it should be fairly easy to guard against
Yeah, we would have that issue if we have multiple OUs with child OUs based on environment, like dev and prod
(disregard my most recent messages, deleted)
I am jumping on a call - will get back
Agreed Austin, that wouldn't work in the model I have described here. However, the refarch's current OU hierarchy model is entirely flat, with (by necessity) large, low-resolution OUs, so env-based child OUs wouldn't be viable in that model either.
There's a tension here between the refarch's flat structure (which I otherwise love) and a desire for nested OUs, admittedly in an approach that adds an irritating uniqueness constraint on the name of "child OUs".
The other option, acme-plat_foo-prod-use2-eks, is my least favourite, as not only does it have the length issues Erik mentioned, but it feels like an abuse of the tenant keyword.
In my hand-waved model, the level-1 OU would be a broad project category, the level-2 OU would be a specific project, and each level-2 OU would have a dev, stag and prod account trio.
This would let you add governance controls and shared atmos-yaml configuration with slightly more granularity than in the refarch, with the downside that "projects" (or whatever the equivalent keyword is in your organisation's internal parlance) would need to be either intentionally uniquely named, or assigned a random id-phrase.
on reflection, I think you can mostly get away with hierarchical OUs, provided you a) use the most specific OU as the tenant, and b) ensure that the most-specific OU is always unique across all parent OUs
Maybe there's some way to create a rego policy to enforce this. Since atmos supports rego, this would then be maintainable.
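a rough sketch of such a policy, assuming Atmos's OPA validation convention (package atmos with errors rules) and a hypothetical allow-list of unique OU names:
package atmos

# hypothetical registry of most-specific OU names, each unique across all parent OUs
allowed_tenants := {"plat", "foo", "bar"}

errors[message] {
    not allowed_tenants[input.vars.tenant]
    message := sprintf("tenant '%s' is not a registered unique OU name", [input.vars.tenant])
}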
So with the standard model, to apply different SCPs between dev and prod, you would have to do that on each explicit account in the Atmos stack, correct? To me, the only real benefit of OU nesting is being able to take advantage of the policy inheritance
Keep in mind it’s all IAC and with atmos, you get imports
So there’s really no benefit to the nesting.
With imports it’s all still DRY.
@here Hi, everyone. i am trying to apply resources through the bitbucket pipeline using atmos, using atmos workflows to plan and apply multiple components at a time, and i am just wondering: is there a way i can feed only the output of the affected components (atmos describe affected) into atmos apply, instead of running the whole workflow every time? cheers.
Awesome to hear you’re building out those pipelines.
We did build atmos describe affected for this purpose, but have not implemented the apply-affected command you describe.
Mostly b/c gitops best practices for terraform involve reviewing the plan before apply, and that would require tighter integration with your build process
you could implement this probably quite easily with some jq foo and xargs
Then add your own subcommand for it https://atmos.tools/core-concepts/subcommands/
Atmos can be easily extended to support any number of custom commands, what we call “subcommands”.
Ahh, i see. Thank you so much for the quick response
Fwiw, we’re implementing GitHub Actions support
Awesome
you could implement this probably quite easily with some jq
run atmos describe affected, use jq to select the components and stacks, then in a loop you can call atmos terraform apply
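a rough sketch of that loop, assuming the JSON array written by atmos describe affected carries component and stack fields:
# write the affected components to a file
atmos describe affected --file affected.json
# iterate over each affected component/stack pair and apply it
jq -r '.[] | "\(.component) \(.stack)"' affected.json |
while read -r component stack; do
  atmos terraform apply "$component" -s "$stack" -auto-approve
done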
sounds good and i am working on that now. Thanks so much
by custom command we mean two possible things:
- You can add that script as a workflow step using the type shell https://atmos.tools/core-concepts/workflows/
- You can add it as a custom atmos command https://atmos.tools/core-concepts/subcommands/
The first one sounds cool to use
@Andriy Knysh (Cloud Posse) do not trust this guy, he works with that other guy that is not as nice as me
yea, and note that you can create a custom atmos command using any complex script (including other atmos commands including other custom commands), and then you can use that custom command in a workflow step of type atmos
how does the atmos GitHub action do this today?
atmos custom commands can be as simple as calling a shell script
# Custom CLI commands
commands:
  - name: aws
    description: Execute AWS commands
    commands:
      - name: assume-role
        description: Execute 'aws assume-role' command
        steps:
          - set-aws-assume-role-credentials
or as complex as this example
- name: manage-s3-assets
  description: Pushes or syncs assets to the given s3 bucket
  verbose: true
  flags:
    - name: environment
      shorthand: e
      description: Environment
      required: true
    - name: stage
      shorthand: s
      description: Stage
      required: true
    - name: mode
    - name: from
      description: From Path
      required: true
    - name: datasource
      description: Target
      required: true
    - name: datasource_path
      description: Target path
      required: true
  component_config:
    component: "deploy"
    stack: "xxx-{{ .Flags.environment }}-{{ .Flags.stage }}"
  env:
    - key: AWS_REGION
      value: '{{ .ComponentConfig.settings.standard_env.AWS_REGION }}'
    - key: S3_CMD
      value: '{{ index .ComponentConfig.settings.aws_commands.s3_cmd .Flags.mode }}'
    - key: BUCKET_NAME
      value: "xx-{{ .Flags.environment }}-{{ .Flags.stage }}-{{ .Flags.datasource }}"
    - key: AWS_PROFILE
      value: "xx-gbl-{{ .Flags.stage }}-terraform"
  steps:
    - aws s3 ${S3_CMD} {{ .Flags.from }} s3://${BUCKET_NAME}/${DATA_SOURCE_PATH}
we are going to create a custom command so people can run this in any pipeline, and once we move to github actions we will definitely use your actions.
we are stuck with bitbucket internally, but for external repos we can use github (after approval)
2023-04-20
2023-04-21
Hi everyone,
I'm currently playing around with Atmos, and as far as I understand, the component modules are composed in a way that provides a naming convention within context.tf, and remote-state.tf for fetching outputs of other components.
Since I haven't seen any terraform data source definition in the vpc module by cloudposse, I wonder how you face a scenario where you need to use an existing vpc instead of creating one with atmos components.
Are mixins the right solution for such cases?
I do not use the remote-state module cloudposse uses, but I do data lookups, so instead of remote state I will have something like:
data "aws_vpc" "main" {
  tags = {
    Name = format("%s-%s-%s", var.namespace, var.environment, var.vpc_name)
  }
}
because, as you said, context.tf maintains the naming convention, so it is easy to find any resource
and if it is a resource you did not create, you can still craft a data lookup to find the existing vpc based on tags and such
Thanks @jose.amengual for your response. I'm talking about a scenario where we have a VPC that was created outside of the cloudposse context, and you are required to use that existing VPC or any other existing resource.
CloudPosse recommends using their own aws components or writing your own stacks as long as you follow their conventions; then we can also use as many data sources as we need, as you mentioned, and develop our own building blocks.
There are some differences here: startup companies would much appreciate your intervention in all of their aws account management and so on.
Within enterprise companies it is the exact opposite: they want you to follow their rules and their policies, they provide you part of the infrastructure, and you should use it according to their guidelines as long as it complies with the requirements of the product you deploy on their tenant.
I was more curious to know how the cloudposse team faces such scenarios and keeps using their aws terraform components.
Nothing about atmos requires using context or remote state. We use those as our best practices for terraform, but atmos doesn’t enforce it.
Atmos is used by massive enterprises
With conventions far different from what we do in our reference architecture, which is why everything is configurable via the CLI config
We recommend companies use what serves them best. We provide components we use, but users of atmos, enterprises and startups alike are not required to use our components and the tool works with any vanilla terraform or the components we provide. You can use it with any cloud, GCP, azure, AWS, etc.
JFYI, the components that we open source are designed to be internally consistent, and won’t work as well when trying to leverage them in existing environments that have different assumptions.
Got it, thanks for your clarification Erik. We actually already did a huge project with Atmos for provisioning environments on Azure for enterprise companies; it works pretty well and we are happy with our outcome.
We aligned it to our needs by developing a super module which wires all the connections between modules. We were influenced by the Azure Cloud Adoption Framework module, but since we had special requirements we decided to develop our own super module and plug it into Atmos.
The thing is that now we should do the same for aws infrastructure provisioning, and we know that cloudposse has their own aws terraform components. Therefore, we thought about giving them a try and using them, instead of reinventing the wheel with another intermediate layer between Atmos and modules for our needs, but it looks like we cannot avoid it.
Terraform supermodule for the Terraform platform engineering for Azure
2023-04-22
2023-04-23
2023-04-24
Hi All, is there a way to bypass atmos and use a terraform module with plain terraform commands, without stacks?
I am trying to use the cloudposse/components/aws//modules/cognito module using a plain terraform plan command
atmos does the following: 1) allows you to organize component/stack configurations in a hierarchical and DRY way; 2) generates TF varfiles and backend configs from the stack config
you can use any module/component w/o atmos, you just need to cd into the component folder and provide all the required variables. Then use plain TF commands
@Andriy Knysh (Cloud Posse) I think what he wants to do is akin to our atmos terraform shell
This command starts a new SHELL configured with the environment for an Atmos component in a stack to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.
If you run atmos terraform shell, it will set up your environment so you can run a "plain terraform plan command"
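for example (component and stack names here are illustrative):
atmos terraform shell cognito -s plat-use2-dev
# inside the spawned shell, plain terraform commands work as-is:
terraform plan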
yes, terraform shell can be used, but only if you have atmos.yaml defined
the error above says that atmos.yaml does not exist (https://atmos.tools/quick-start/configure-cli)
so if you want Atmos to generate all the varfiles for the component, you need to: 1) create atmos.yaml (https://atmos.tools/quick-start/configure-cli); 2) configure Atmos components and stacks (https://atmos.tools/quick-start/create-components, https://atmos.tools/quick-start/create-atmos-stacks)
on the other hand, if you want to use a TF module/component w/o Atmos (just TF code), then you cd into the folder and provide all the variables and backend config, then use plain TF commands
this is a working example (which we are using for testing) https://github.com/cloudposse/atmos/tree/master/examples/complete with everything defined (atmos.yaml, components, stacks, etc.)
Okay, will explore this more. Our organisation is using terragrunt, so I wanted to explore how to use cloudposse modules with it. Thank you.
if you need help with using the components, let us know. Also, we can help with integrating Atmos if you want to explore it
Aha, so in that case, wouldn’t it be more about overriding the terraform command? We haven’t tried it, but others have mentioned they are using terragrunt.
but it gives me the following error:
Error:
│ 'atmos.yaml' CLI config files not found in any of the searched paths: system dir, home dir, current dir, ENV vars.
│ You can download a sample config and adapt it to your requirements from https://raw.githubusercontent.com/cloudposse/atmos/master/examples/complete/atmos.yaml
│
│ with module.website_cognito_setup.module.iam_roles.module.account_map.data.utils_component_config.config,
│ on .terraform/modules/website_cognito_setup.iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
Aha, so when you use the Cloud Posse developed components, they will frequently leverage the stack configurations. Under the hood they use our other open source child modules.
Our Cloud Posse components frequently use a terraform data provider which reads remote state and looks up configuration from the stacks. That’s why you’re getting this error.
To be clear, it doesn't require atmos, but it requires stack configurations. If that's not what you want, then you'll need to fork it.
Oh Okay. Thanks a lot. Will try that approach
2023-04-25
Hi @everyone, i am having an issue while running atmos describe affected --verbose=true in bitbucket pipelines. the repo was already cloned into the container in that particular step of the pipeline, but it is giving errors like the ones below. Thank you
Executing command:
/usr/bin/atmos describe affected --file affected.json --verbose=true --repo-path $BITBUCKET_CLONE_DIR
the target remote repo is not a Git repository. Check that it was initialized and has '.git' folder: repository does not exist
exit status 1
--------------
/usr/bin/atmos describe affected --file affected.json --verbose=true --sha $BITBUCKET_COMMIT
Cloning repo 'http://bitbucket.org/**********/********' into the temp dir '/tmp/168236982171'
Checking out the HEAD of the default branch ...
authentication required
exit status 1
-------------------
Executing command:
/usr/bin/atmos describe affected --file affected.json --verbose=true --ref "refs/heads/$BITBUCKET_BRANCH"
Cloning repo 'http://bitbucket.org/*********/********' into the temp dir '/tmp/16823701031153'
Checking out Git ref '"refs/heads/$BITBUCKET_BRANCH"' ...
authentication required
exit status 1
-------------------
does it have a '.git' folder? The command atmos describe affected --file affected.json --verbose=true --repo-path $BITBUCKET_CLONE_DIR will not work if it's not a valid git folder
yes, i verified it locally and reinitialized using git init as well
and it is correctly pointing to the right HEAD as well
we never tested it with Bitbucket, might be an issue with that. We are using go-git
Go lib to do all things with Git
or might be not related to Bitbucket
what if you just copy the whole repo into a temp dir (including the .git folder), then use the command atmos describe affected --file affected.json --verbose=true --repo-path xxxx?
manually copy, on your computer
Ahhh, i will do that and see, how it goes. Thank you
yes, just clone the entire folder into a temp folder on your computer
then use it in --repo-path
argument
at least you’ll know that this part works
sounds good and i will post the outcome, once it’s done
it is giving me the same kind of error, but the directory has a .git folder
the target remote repo is not a Git repository. Check that it was initialized and has '.git' folder: repository does not exist
cat .git/HEAD
ref: refs/heads/feature/testing-pipeline
for some reason, https://github.com/go-git/go-git does not like that .git folder (this part of the error, repository does not exist, is from go-git)
A highly extensible Git implementation in pure Go.
https://stackoverflow.com/questions/8927070/git-error-conq-repository-does-not-exist (Bitbucket users complain)
I'm getting the following errors in Git using BitBucket: conq: repository does not exist. fatal: The remote end hung up unexpectedly. How do I rectify this issue? I've carried out the following: …
please review it
(these issues are difficult to answer since Atmos does not do anything with Git and its folders; go-git does. Whatever the lib does, we don't control it in Atmos code, and go-git is far from perfect)
Ahhh, Got it. Thanks so much for the quick response
So we fixed the repository does not exist error; now it is giving us authentication required
the two commands before atmos describe affected are git status and git config -l, and they both work right before the atmos command
so it is strange that atmos does not seem to use the credentials in the same shell/run/pipeline
atmos uses go-git, whatever it does
if we run this locally against the same repo it works so there is definitely something missing in the pipeline
there are a lot of discussions about that for go-git, e.g. https://github.com/go-git/go-git/issues/116
I am cloning a private library, and it comes with a username and password, but this doesn't seem to work and I got an authentication required error
options := git.CloneOptions{
    URL:               getGitURL(r.repo, username, password), // https://username:[email protected]/owner/repo.git
    Progress:          os.Stdout,
    SingleBranch:      true,
    Depth:             1,
    RecurseSubmodules: git.DefaultSubmoduleRecursionDepth,
}
_, err := git.CloneContext(ctx, memory.NewStorage(), fs, &options)
that def works in GitHub actions (we are using atmos describe affected in the actions)
in github we have 0 issues
but in a container, it does not for a private repo
that's go-git issues, which can prob be solved by using
repo, err := git.PlainClone(pathToRepo, false, &git.CloneOptions{
    Auth: &http.BasicAuth{
        Username: "yourUsername",
        Password: personalAccessToken,
    },
    URL:      "https://github.com/go-git/go-git",
    Progress: os.Stdout,
})
but it does not look like a good solution if you need to specify your username and a PAT (as the command’s arguments?)
yes, I think there is a setting that we might not have access to allow the runners to authenticate using ssh
this worked atmos describe affected --ref refs/heads/main --ssh-key ${BITBUCKET_SSH_KEY_FILE} --verbose=true
v1.34.1
what: Add the ExecuteDescribeStacks function to the pkg package and wrap the same function from the internal package. Add tests.
why: We need to use ExecuteDescribeStacks in the terraform utils provider, but all code in the internal package is not visible to calling code. The internal package is used to reduce the public API surface: packages within an internal/ directory are therefore said to be internal packages.
references: https://go.dev/doc/go1.4#internalpackages
2023-04-26
2023-04-27
v1.34.2
what: Add workspace to the outputs of the atmos describe stacks command
why: We often need to know the terraform workspace for each component (taking into account that Terraform workspaces can be overridden per component, so they are not always the same as the stack names)
test:
tenant1-ue2-dev:
  components:
    terraform:
      top-level-component1:
        workspace: tenant1-ue2-dev
      test/test-component-override-3:
        workspace: test-component-override-3-workspace