#atmos (2023-04)

2023-04-04

2023-04-06

Andrew Eells avatar
Andrew Eells

hello all :wave: we are currently deploying AWS components (such as VPC, EC2, etc.) in a stack from the atmos CLI, something like this:
atmos terraform plan vpc -s <id-env-region>
is there a way to deploy all components in that same stack with a single CLI command? otherwise, our CI/CD system has to define some sort of list and loop over all components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can create reusable workflows and then provide atmos stack on the command line
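For example, a minimal workflow that applies several components in one go might look roughly like this (the workflow name, file name, and component list are illustrative, not from the thread):

```yaml
# stacks/workflows/deploy.yaml (hypothetical file name)
workflows:
  deploy-all:
    description: Apply all components in a stack, in order
    steps:
      - command: terraform apply vpc -auto-approve
      - command: terraform apply ec2-instance -auto-approve
```

The stack is then supplied on the command line, e.g. `atmos workflow deploy-all -f deploy -s <id-env-region>`.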

Andrew Eells avatar
Andrew Eells

perfect, thank you, this looks exactly like what we are looking for. apologies for the newbie question but we are all picking up atmos for the first time and enjoying the ride!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for using it, let us know if you need any help

Andrew Eells avatar
Andrew Eells

awesome. thank you Andriy

jose.amengual avatar
jose.amengual

watch out with this guy @Andriy Knysh (Cloud Posse) he is nice but not as nice as me

2
1
jose.amengual avatar
jose.amengual

(we work together)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re also using workflows for this. So we define all the steps to bring up a layer of your stack.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(or destroy)

jose.amengual avatar
jose.amengual

and they are sequential, they do not run in parallel right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not yet, we have a task to add parallel steps

jose.amengual avatar
jose.amengual

I prefer sequential

2023-04-07

Release notes from atmos avatar
Release notes from atmos
05:54:38 PM
GitHub - cloudposse/github-action-atmos-affected-stacks: A composite workflow that runs the atmos describe affected command

GitHub - cloudposse/github-action-atmos-component-updater: GitHub Action that can be used as a workflow to automatically update components in an infrastructure repository, via Pull Requests, according to the versions in the component sources

johncblandii avatar
johncblandii

wild thought/idea incoming. :wink:

are there plans for dynamically referencing remote state without updating a TF module to specifically call out remote state? Many times we just need the ARN anyway, but sometimes we need the name or similar.

Potential example:

map_environment:
  APP_NAME: echoserver
  S3_BUCKET: "{{ remote-state.s3-access-logs.bucket_id }}"

Just like context.tf, we could use remote_state (or similar) as a variable in a generic stacks-state.tf or whatever (since components already have a remote-state.tf) copied to a component. Then it looks up state, takes the values it requested, and uses them (the biggest caveat is how to use them; we don't want coalesce(remote.blah, var.blah) littered on every possible value).

So another thought is one component Atmos calls to get the remote values; Atmos then takes those values and passes them to the expected component as it normally does (tfvars). That component just loops over all options provided, returns them to Atmos via TF outputs, Atmos replaces the references in the YAML, and creates the tfvars.

This slows down since you have to plan/apply 2 projects, but it allows us an easy path to referencing outputs without writing TF to reference them.

atmos terraform plan xyz -s my-stack -> runs terraform plan remote-state -s my-stack -> processes outputs -> creates tfvars file/backend json -> runs terraform plan xyz -s my-stack

if no remote-state references exist, it does what it does now.

1
johncblandii avatar
johncblandii

@Andriy Knysh (Cloud Posse)

johncblandii avatar
johncblandii

as a side note, Go templates could help with basic ARN generation as well.

if we set context or something we could potentially provide all necessary values and Atmos gen that ARN so we don’t have to do that in TF

johncblandii avatar
johncblandii

cc @Jeremy G (Cloud Posse) (this is an off-shoot of that thought/discussion)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I don’t like the idea of Atmos generating an ARN rather than using Terraform to do it. Also, Atmos is not Terraform and does not directly have access to remote state.

You might be able to refine our current remote state Terraform implementation with a more generic remote state “mixin”, but unless you want to run through a separate Terraform apply to read the values, I don’t think Atmos can wire the outputs to the inputs for you. You might come up with some kind of “convention over configuration” where an input has a specific naming format and the mixin is configured to place a value with that name as a key in an output map and then end up with something like

locals {
  var_name = try(module.remote[var.component]["var_name"], var.var_name)
}

I would need to see some fully worked-out examples to see if that is really simplifying anything. I don’t think it would be possible to satisfy the constraints of a mixin, which is that the component should work without any code modification whether or not the mixin is present.

johncblandii avatar
johncblandii

right. definitely a complex scenario. if it shows up as a need more often, i might take a rough stab at a POC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One important design decision is atmos isn’t bound to AWS. We want to keep it platform agnostic. Our recommendation is to make the components support references and have it dereference those inputs. That’s how we currently do everything

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If atmos starts reading terraform remote state we’re treading down a slippery slope. There’s all the configuration of backends to access remote state, and a dozen types of remote backends

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I could imagine atmos running a command that output some settings and those settings are accessible

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes. Also, Atmos does not know anything about roles, credentials etc. Underlying tools (e.g. terraform) do. So Atmos can’t access AWS or any other clouds/platforms on its own

johncblandii avatar
johncblandii

That’s a solid point @Erik Osterman (Cloud Posse).

johncblandii avatar
johncblandii

@Andriy Knysh (Cloud Posse) that’s why I was suggesting a TF component that does it, but I get how other providers may be problematic for that.

johncblandii avatar
johncblandii

What about reading things like name, etc from other stacks to reference them?

That’s all YAML and agnostic

johncblandii avatar
johncblandii

That’s another chicken/egg problem for sure so definitely not an easy thing.

Just random thoughts.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is done using the remote-state module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.

johncblandii avatar
johncblandii

Yeah, just thinking through ways not to define a bunch of those and doing something more dynamic

1

2023-04-08

Release notes from atmos avatar
Release notes from atmos
01:54:36 PM

v1.33.0 what Update Atmos logs Add logs.file and logs.level to atmos.yaml Update docs https://atmos.tools/cli/configuration why Allow specifying a file for Atmos to write logs to Allow specifying a log level to control the amount of Atmos logging Logs Atmos logs are configured in the logs section: logs: file: “/dev/stdout” # Supported log levels: Trace, Debug, Info, Warning, Off (Off is the default and is used if logs.level is not…

Release v1.33.0 · cloudposse/atmos

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
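Per the release notes above, the new settings would go in the logs section of atmos.yaml, roughly like this:

```yaml
# atmos.yaml
logs:
  file: "/dev/stdout"  # or a file path for Atmos to write logs to
  level: Info          # Trace, Debug, Info, Warning, or Off (the default)
```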

2023-04-10

2023-04-11

Michael Dizon avatar
Michael Dizon

quick question about atmos vendor. my component.yaml looks like this

uri: github.com/cloudposse/terraform-aws-ec2-instance.git/?ref={{.Version}}
version: 0.47.1

but when I pull, I get this error:

subdir "%253Fref=0.47.1" not found

how should the url be formatted?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what happens if you remove the /

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
uri: github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}}
version: 0.47.1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make a copy of 3rd-party components in your own repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) interesting usage of vendor @Michael Dizon is using it to vendor modules and not components

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since we support generating the backend in atmos, really any module we have can be a component

Michael Dizon avatar
Michael Dizon

so when i remove the / I get this error:

error downloading 'https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
Michael Dizon avatar
Michael Dizon

ah interesting. so presently, the intent is for this to point to the terraform-aws-components repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no, it can point to any endpoint supported by go-getter

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As implemented, we usually use it for components in our monorepo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what about

uri: github.com/cloudposse/terraform-aws-ec2-instance.git//?ref={{.Version}}
version: 0.47.1
Michael Dizon avatar
Michael Dizon
Pulling sources for the component 'ec2-instance' from 'github.com/cloudposse/terraform-aws-ec2-instance.git//?ref=0.47.1' and writing to '/Users/mdizon/Code/xxx/xxx-terraform/components/terraform/ec2-instance'

error downloading 'https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
Michael Dizon avatar
Michael Dizon

formatted it like this and got the same result

github.com/cloudposse/terraform-aws-ec2-instance?ref={{.Version}}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can confirm this appears to be a bug.

erik@Eriks-MacBook-Pro /tmp % export ATMOS_LOGS_LEVEL=Trace    
erik@Eriks-MacBook-Pro /tmp % atmos vendor pull --component bar
Pulling sources for the component 'bar' from 'github.com/cloudposse/terraform-aws-ec2-instance.git//?ref=0.47.1' and writing to 'components/terraform/bar'

error downloading 'https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
Michael Dizon avatar
Michael Dizon

ah

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#114 Using git always gives error 128

All of these give the same error:

go-getter "git://github.com/kelseyhightower/hashiconf-eu-2016.git" "dest"
2018/09/08 21:16:38 Error downloading: error downloading 'git://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git

go-getter git://github.com/kelseyhightower/hashiconf-eu-2016.git "dest"
2018/09/08 21:16:49 Error downloading: error downloading 'git://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git

go-getter "git::http://github.com/kelseyhightower/hashiconf-eu-2016.git" "dest"
2018/09/08 21:17:09 Error downloading: error downloading 'http://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git

go-getter "git::https://github.com/kelseyhightower/hashiconf-eu-2016.git" "dest"
2018/09/08 21:17:14 Error downloading: error downloading 'https://github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git

go-getter git@github.com:kelseyhightower/hashiconf-eu-2016.git "dest"
2018/09/08 21:17:53 Error downloading: error downloading 'ssh://git@github.com/kelseyhightower/hashiconf-eu-2016.git': /usr/bin/git exited with 128: fatal: Not a git repository (or any of the parent directories): .git

This also occurs on nomad. What is the problem? I’m using the latest version.

Michael Dizon avatar
Michael Dizon

interesting.. this worked:

uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, interesting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was just going to say, I was able to do //test to vendor the tests, so I thought it was a problem with the root only

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but that makes sense that /// works
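Putting the workaround together, a component.yaml that vendors a plain module from the repo root might look like this (note the trailing /// before ?ref, per Michael's finding; the metadata names are illustrative):

```yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: ec2-instance-vendor-config
spec:
  source:
    # the trailing '///' selects the repository root as the subdirectory
    uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
    version: 0.47.1
```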

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you get this working, could I trouble you to open a doc here on how to vendor modules?

https://github.com/cloudposse/atmos/blob/master/website/docs/core-concepts/components/component-vendoring.md


title: Component Vendoring
sidebar_position: 3
sidebar_label: Vendoring
description: Use Component Vendoring to make a copy of 3rd-party components in your own repo.
id: vendoring

Atmos natively supports the concept of “vendoring”, which is making a copy of the 3rd party components in your own repo. Our implementation is primarily inspired by the excellent tool by VMware Tanzu, called vendir. While atmos does not call vendir, it functions and supports a configuration that is very similar.

After defining the component.yaml configuration, the remote component can be downloaded by running the following command:

atmos vendor pull -c components/terraform/vpc

Schema: component.yaml

To vendor a component, create a component.yaml file stored inside of the components/<type>/<name>/ folder (e.g. components/terraform/vpc/).

The schema of a component.yaml file is as follows:

apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:

    # 'uri' supports all protocols (local files, Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP),
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter
    # In 'uri', Golang templates are supported: https://pkg.go.dev/text/template
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 0.194.0

    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'

    # 'included_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    # https://en.wikipedia.org/wiki/Glob_(programming)
    # https://github.com/bmatcuk/doublestar#patterns
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"

    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' support POSIX-style Globs for file names/paths (double-star/globstar `**` is supported)
    excluded_paths:
      - "**/context.tf"

  # Mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # All mixins are processed in the order they are declared in the list.
  mixins:
    # https://github.com/hashicorp/go-getter/issues/98
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
      filename: context.tf
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf
      version: 0.194.0
      filename: introspection.mixin.tf
Michael Dizon avatar
Michael Dizon

sure no problem!

Michael Dizon avatar
Michael Dizon

tomorrow

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

re-reviewed

Michael Dizon avatar
Michael Dizon

ok, in a meeting right now. will update after i come out

Michael Dizon avatar
Michael Dizon

@Erik Osterman (Cloud Posse) committed your suggestions. learned something new today, didn’t know github could do that!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for the contributions!

Michael Dizon avatar
Michael Dizon

dude thanks for the amazing tools and modules!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make a copy of 3rd-party components in your own repo.

1

2023-04-12

Release notes from atmos avatar
Release notes from atmos
10:54:38 PM

v1.33.1 Update documentation and examples for vendoring modules as components (

…#364)

  • Update documentation and examples to include instructions on vendoring components from github

  • update: removed update from component.yaml

  • update: provide more thorough instructions

  • Update…

Release v1.33.1 · cloudposse/atmos

Workflow automation tool for DevOps. Keep configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile. - Release v1.33.1 · cloudposse/atmos

Update documentation and examples for vendoring modules as components by mikedizon · Pull Request #364 · cloudposse/atmos

what Updated docs and examples with instructions on vendoring components from github repos why appending /// to the end of a github uri is not obvious

2023-04-13

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The v1.33.1 release explains how any module can now be used as a component without needing to write a component that wraps it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically any module can be used as a root module within the atmos framework.

1
Release notes from atmos avatar
Release notes from atmos
09:14:37 PM

v1.33.2 what Rename atmos describe dependants to atmos describe dependents Add alias to allow atmos describe dependants why There are two acceptable ways to spell dependent depending (no pun intended) on whether you are speaking American English (dependent) or British English (dependant)

Release v1.33.2 · cloudposse/atmos

2023-04-14

Release notes from atmos avatar
Release notes from atmos
09:34:37 PM

v1.34.0 what Update env stack config section Allow using null to unset the ENV var why If it’s set to null, it will not be set as ENV var in the executing process (will be just skipped) Setting it to null will override all other values set in the stack configs for the component This is useful if an ENV var is set globally in top-level stacks for the entire configuration, but needs to be unset for some specific components test Set TEST_ENV_VAR4 to some value components: terraform:…

Release v1.34.0 · cloudposse/atmos

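A sketch of how the null unset described above might be used (the component name and the "global-value" setting are illustrative; TEST_ENV_VAR4 is the variable used in the release notes):

```yaml
# Top-level stack config sets the ENV var globally
env:
  TEST_ENV_VAR4: "global-value"

components:
  terraform:
    my-component:
      env:
        # null overrides the global value, so the ENV var is
        # simply not set for this component
        TEST_ENV_VAR4: null
```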

2023-04-15

2023-04-19

Austin Blythe avatar
Austin Blythe

Can Atmos + Terraform be used to create new AWS accounts in an organization and provision resources into that new account as a single operation? I’m assuming the account creation is asynchronous, but the Terraform resource docs don’t specify.

kevcube avatar
kevcube

atmos may make this possible but it’d require a lot of additional logic, two steps makes more sense

kevcube avatar
kevcube

because the account ID won’t be known, and you need to instantiate a new provider to work on that second account

kevcube avatar
kevcube

so there will necessarily be multiple calls to “terraform” which I don’t think atmos is capable of today

kevcube avatar
kevcube

unless it has a terragrunt style “apply-all” command that i don’t know about

Austin Blythe avatar
Austin Blythe

I see that atmos has workflows which allow sequential commands, but I don’t see anything that would enforce the “wait” or dependency ordering. I think remote outputs and some dynamic stuff in the providers can handle finding the account id. I’m mainly just wondering on the orchestration part.

The two-steps approach makes sense, but how is this typically done in a CI/CD environment? I basically want to add a new account to some config, submit a PR, then have it created and a baseline config applied

kevcube avatar
kevcube

I’ve never fully automated that. Usually just have a common “account-baseline” module that we add to the new account after creation

kevcube avatar
kevcube

could be done in CI/CD in two steps still

Austin Blythe avatar
Austin Blythe

Yeah it does look like there is a status check API on the create account operation: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/organizations/describe-create-account-status.html

Probably not worth optimizing with full automation. Using the CLI to poll the status and then run the baseline tf apply as a second step seems reasonable

kevcube avatar
kevcube

well, the terraform resource for adding an account to an org shouldn’t be “created” until this status check returns true anyway, so i don’t think you’ll need that CLI bit

Austin Blythe avatar
Austin Blythe

That’s what I was wondering, but haven’t found confirmation of that

kevcube avatar
kevcube

yeah. in my experience, AWS account creation happens really fast, like maybe a few seconds

Austin Blythe avatar
Austin Blythe

that’s what makes me think it is not waiting for the full completion

Austin Blythe avatar
Austin Blythe

as i’ve seen in most guides it can take several minutes

Austin Blythe avatar
Austin Blythe

And the API docs seem to indicate it’s a background process that could take minutes: https://docs.aws.amazon.com/organizations/latest/APIReference/API_CreateAccount.html

kevcube avatar
kevcube

Oh, that’s probably why I’ve perceived it as only being seconds.

1
Austin Blythe avatar
Austin Blythe

Actually looking at the TF provider code, it does look like it is calling and waiting on that status endpoint. So maybe it’s just fast the majority of the time, but could take a while

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, we do this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

2
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our commercial reference architecture (for sale), we leverage workflows to bring everything up

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Workflows can call other workflows

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
account | The Cloud Posse Developer Hub

This component is responsible for provisioning the full account hierarchy along with Organizational Units (OUs). It includes the ability to associate Service Control Policies (SCPs) to the Organization, each Organizational Unit and account.

Austin Blythe avatar
Austin Blythe

Thanks, @Erik Osterman (Cloud Posse). Sounds like I’m on the right track. Do you all ever use nested OUs? Unless I’m missing something, that account factory only supports one level deep. I’m curious if that is an intentional best practice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes & no. We don’t recommend them b/c of resource naming and disambiguation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

architecturally I like it, but in practice it’s problematic if you subscribe to our opinionated naming conventions.

Austin Blythe avatar
Austin Blythe

ah fair, that makes sense

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in our convention: org = namespace, ou = tenant, account = stage, region = environment, resource = name

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so imagine an org called acme with an ou called plat (For platform), with an account called prod with resources in us-east-2 (we use use2), and a cluster (eks)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the resource becomes acme-plat-prod-use2-eks and then all the sub-resources associated with the cluster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

now introduce sub-OUs, and you add 4-5 characters, and now you’re definitely hitting limits on resource names for many systems.

Austin Blythe avatar
Austin Blythe

That’s helpful, thanks!

Matthew Reggler avatar
Matthew Reggler

Hi, to jump on this, one option I’ve considered is to use a convention along the lines of

namespace = org, tenant = “smallest” ou, stage = account, environment = region, name = resource

taking your example: an org called acme with an ou called plat, with child ou’s called foo and bar, with an account called prod in each ou, with resources in us-east-2, and a resource (eks):

resource in plat-prod: acme-plat-prod-use2-eks
resource in (plat-)foo-prod: acme-foo-prod-use2-eks
resource in (plat-)bar-prod: acme-bar-prod-use2-eks

There is a loss of visibility inherent here (which can only be partially mitigated by adding an “ou-path” tag/attribute/etc to each null-label), and some tweaking of the account module is required (add support for the parent_id property in the YAML for an optional second layer of aws_organizations_organizational_unit resources).

However, I don’t think this would actually break anything in account-map or elsewhere, tenant still refers to an ou, just with an additional constraint.

Have you ever considered something like this? Or is “tenant=ou” by convention or technical limitation more immutable than I have considered here?

Matthew Reggler avatar
Matthew Reggler

I suppose you would be in trouble if you reused an ou name across different branches of your aws org hierarchy, but that should be fairly easy to guard against

Austin Blythe avatar
Austin Blythe

Yeah, we would have that issue if we have multiple OUs with child OUs based on environment, like dev and prod

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(disregard my most recent messages, deleted)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am jumping on a call - will get back

Matthew Reggler avatar
Matthew Reggler

Agreed Austin, that wouldn’t work in the model I have described here. However, the refarch’s current OU hierarchy model is entirely flat, with (by necessity) large, low-resolution OUs, so env-based child OUs wouldn’t be viable in that model either.

There’s a tension here between the refarch’s flat structure (which I otherwise love) and a desire for nested OUs; admittedly, my attempt is an approach that adds an irritating uniqueness constraint on the names of “child OUs”.

The other option, acme-plat_foo-prod-use2-eks, is my least favourite, as not only does it have the length issues Erik mentioned, it also feels like an abuse of the tenant keyword.

Matthew Reggler avatar
Matthew Reggler

In my hand-waved model the level-1 OU would be a broad project category, the level-2 OU would be a specific project, and each level-2 OU would have a dev, stag and prod account trio.

This would let you add governance controls and shared atmos-yaml configuration with slightly more granularity than in the refarch, with the downside that “projects” (or whatever the equivalent keyword is in your organisation’s internal parlance) would need to be either intentionally uniquely named or assigned a random id-phrase.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

on reflection, I think you can mostly get away with hierarchical OUs, provided you a) use the most specific OU as the tenant, and b) ensure that the most-specific OU is always unique across all parent OUs

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe there’s some way to create a Rego policy to enforce this. Since atmos supports Rego, this would then be maintainable.

1
Austin Blythe avatar
Austin Blythe

So with the standard model, to apply different SCPs between dev and prod, you would have to do that on each explicit account in the Atmos stack, correct? To me, the only real benefit of OU nesting is being able to take advantage of policy inheritance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep in mind it’s all IAC and with atmos, you get imports

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So there’s really no benefit to the nesting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With imports it’s all still DRY.

girishmaddineni1998 avatar
girishmaddineni1998

@here Hi, everyone. I am trying to apply resources through a Bitbucket pipeline using atmos, using atmos workflows to plan and apply multiple components at a time. I am just wondering: is there a way to apply only the affected components (from atmos describe affected) instead of running the whole workflow every time? cheers.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Awesome to hear you’re building out those pipelines.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We did build atmos describe affected for this purpose, but have not implemented the apply-affected command you describe.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Mostly b/c gitops best practices for terraform involve reviewing the plan before apply, and that would require tighter integration with your build process

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you could implement this probably quite easily with some jq foo and xargs

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then add your own subcommand for it https://atmos.tools/core-concepts/subcommands/

Atmos Subcommands | atmos

Atmos can be easily extended to support any number of custom commands, what we call “subcommands”.

girishmaddineni1998 avatar
girishmaddineni1998

Ahh, I see. Thank you so much for the quick response

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Fwiw, we’re implementing GitHub Actions support

girishmaddineni1998 avatar
girishmaddineni1998

Awesome

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


you could implement this probably quite easily with some jq

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

run atmos describe affected, use jq to select the components and stacks, then in a loop you can call atmos terraform apply

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you can add all of that into a custom command

1
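
Putting those steps together, here is a minimal sketch of such a script (the component/stack values in the fallback sample and the -auto-approve flag are assumptions; drop the echo to actually run the applies):

```shell
#!/usr/bin/env bash
# Sketch: apply only the components reported by `atmos describe affected`.
set -eu

if command -v atmos >/dev/null 2>&1; then
  atmos describe affected --file affected.json
else
  # Fallback sample so the rest of the sketch is runnable without atmos;
  # the real file is a JSON array of objects with .component and .stack keys.
  echo '[{"component":"vpc","stack":"plat-ue2-dev"}]' > affected.json
fi

# Select the component/stack pairs with jq, then loop over them.
jq -r '.[] | "\(.component) \(.stack)"' affected.json |
while read -r component stack; do
  # Drop the echo to actually apply.
  echo "atmos terraform apply ${component} -s ${stack} -auto-approve"
done
```

Wrapped in a custom command or workflow step, this gives the apply-affected behavior described above.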
girishmaddineni1998 avatar
girishmaddineni1998

Sounds good, and I am working on that now. Thanks so much

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

by custom command we mean two possible things:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
this1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. You can add that script as a workflow step using the type shell https://atmos.tools/core-concepts/workflows/
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

girishmaddineni1998 avatar
girishmaddineni1998

The first one sounds cool to use

jose.amengual avatar
jose.amengual

@Andriy Knysh (Cloud Posse) do not trust this guy, he works with that other guy that is not as nice as me

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, and note that you can create a custom atmos command using any complex script (including other atmos commands and even other custom commands), and then you can use that custom command in a workflow step of type atmos

jose.amengual avatar
jose.amengual

how does the atmos GitHub action do this today?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in a shell loop

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos custom commands can be as simple as calling a shell script

# Custom CLI commands
commands:
  - name: aws
    description: Execute AWS commands
    commands:
      - name: assume-role
        description: Execute 'aws assume-role' command
        steps:
          - set-aws-assume-role-credentials
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or as complex as this example

  - name: manage-s3-assets
    description: Pushes or syncs assets to the given s3 bucket
    verbose: true
    flags:
    - name: environment
      shorthand: e
      description: Environment
      required: true
    - name: stage
      shorthand: s
      description: Stage
      required: true
    - name: mode
    - name: from
      description: From Path
      required: true
    - name: datasource
      description: Target
      required: true
    - name: datasource_path
      description: Target path
      required: true
    component_config:
      component: "deploy"
      stack: "xxx-{{ .Flags.environment }}-{{ .Flags.stage }}"
    env:
    - key: AWS_REGION
      value: '{{ .ComponentConfig.settings.standard_env.AWS_REGION }}'
    - key: S3_CMD
      value: '{{ index .ComponentConfig.settings.aws_commands.s3_cmd .Flags.mode}}'
    - key: BUCKET_NAME
      value: "xx-{{ .Flags.environment }}-{{ .Flags.stage }}-{{ .Flags.datasource }}"
    - key: AWS_PROFILE
      value: "xx-gbl-{{ .Flags.stage }}-terraform"
    steps:
    - aws s3 ${S3_CMD} {{ .Flags.from }} s3://${BUCKET_NAME}/${DATA_SOURCE_PATH}
jose.amengual avatar
jose.amengual

we are going to create a custom command so people can run this in any pipeline, and once we move to GitHub Actions we will definitely use your actions.

this1
jose.amengual avatar
jose.amengual

we are stuck with bitbucket internally, but for external repos we can use github (after approval)

1

2023-04-20

2023-04-21

Amos avatar

Hi everyone, I’m currently playing around with Atmos, and as far as I understand, the component modules are composed in a way that provides a naming convention via [context.tf](http://context.tf) and [remote-state.tf](http://remote-state.tf) for fetching outputs of other components.

Since I haven’t seen any terraform data source definition in the vpc module by cloudposse, I wonder how you handle a scenario where you need to use an existing vpc instead of creating one with atmos components.

Are mixins the right solution for such cases?

jose.amengual avatar
jose.amengual

I do not use the remote-state module cloudposse uses; I do data lookups instead, so in place of remote state I will have something like:

data "aws_vpc" "main" {
  tags = {
    Name = format("%s-%s-%s", var.namespace, var.environment, var.vpc_name)
  }
}
jose.amengual avatar
jose.amengual

because, as you said, [context.tf](http://context.tf) maintains the naming convention, so it’s easy to find any resource

jose.amengual avatar
jose.amengual

now if it’s a resource you did not create, you can still craft a data lookup to find the existing vpc based on tags and such
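
For example, a hedged sketch of such a lookup (the tag name and value below are assumptions about how the pre-existing VPC happens to be tagged):

```hcl
# Hypothetical: find a VPC created outside of atmos by whatever tags it does have
data "aws_vpc" "existing" {
  filter {
    name   = "tag:Name"
    values = ["legacy-vpc"]
  }
}

# then reference data.aws_vpc.existing.id wherever a vpc_id input is expected
```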

Amos avatar

Thanks @jose.amengual for your response. I’m talking about a scenario where we have a VPC that was created outside of the cloudposse context, and you are required to use that existing VPC, or any other existing resource.

CloudPosse recommends using their own aws components, or writing your own stacks as long as you follow their conventions; then we can also use as many data sources as we need, as you mentioned, and develop our own building blocks.

There are some differences here: startup companies would much appreciate your intervention in all of their aws account management and so on.

Within enterprise companies it is the exact opposite: they want you to follow their rules and their policies; they would like to provide you part of the infrastructure, and you should use it according to their guidelines, as long as it complies with the requirements of the product you deploy on their tenant.

I was more curious to know how the cloudposse team handles such scenarios and keeps using their aws terraform components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nothing about atmos requires using context or remote state. We use those as our best practices for terraform, but atmos doesn’t enforce it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos is used by massive enterprises

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With conventions far different from what we do in our reference architecture, which is why everything is configurable via the CLI config

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We recommend companies use what serves them best. We provide components we use, but users of atmos, enterprises and startups alike are not required to use our components and the tool works with any vanilla terraform or the components we provide. You can use it with any cloud, GCP, azure, AWS, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

JFYI, the components that we open source are designed to be internally consistent, and won’t work as well when trying to leverage them in existing environments that have different assumptions.

Amos avatar

Got it, thanks for the clarification Erik. We actually already did a huge project with Atmos, provisioning environments on Azure for enterprise companies; it works pretty well and we are happy with the outcome.

We aligned it to our needs by developing a super module which wires up all the connections between modules. We were influenced by the Azure Cloud Adoption Framework module, but since we had special requirements we decided to develop our own super module and plug it into Atmos.

The thing is that now we should do the same for aws infrastructure provisioning, and we know that cloudposse has its own aws terraform components. Therefore, we thought about giving them a try instead of reinventing the wheel with another intermediate layer between Atmos and the modules, but it looks like we cannot avoid it.

aztfmod/terraform-azurerm-caf

Terraform supermodule for the Terraform platform engineering for Azure

2023-04-22

2023-04-23

2023-04-24

Abhijit Shingate avatar
Abhijit Shingate

Hi All, is there a way to bypass atmos and use a terraform module with plain terraform commands, without stacks??

Abhijit Shingate avatar
Abhijit Shingate

I am trying to use the cloudposse/components/aws//modules/cognito module using a plain terraform plan command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos does the following: 1) allows you to organize component/stack configurations in a hierarchical and DRY way; 2) generates TF varfiles and backend configs from the stack config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use any module/component w/o atmos, you just need to cd into the component folder and provide all the required variables. Then use plain TF commands
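
As an illustration only (the component path, variable names, and values below are assumptions, not from any real repo):

```shell
# Hypothetical component folder; in a real repo it already exists
mkdir -p components/terraform/vpc
cd components/terraform/vpc

# Provide the variables that atmos would otherwise generate as a varfile
cat > dev.auto.tfvars <<'EOF'
namespace   = "acme"
environment = "ue2"
stage       = "dev"
EOF

# With variables (and a backend config, if any) in place,
# plain TF commands work, no atmos required:
# terraform init
# terraform plan
```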

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) I think what he wants to do is akin to our atmos terraform shell

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos terraform shell | atmos

This command starts a new SHELL configured with the environment for an Atmos component in a stack to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you run atmos terraform shell, it will set up your environment so you can run a “plain terraform plan command”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, terraform shell can be used

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but only if you have atmos.yaml defined

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the error above says that atmos.yaml does not exist (https://atmos.tools/quick-start/configure-cli)

Configure CLI | atmos

In the previous step, we’ve decided on the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so if you want Atmos to generate all the varfiles for the component, you need to: 1) create atmos.yaml (https://atmos.tools/quick-start/configure-cli); 2) configure Atmos components and stacks (https://atmos.tools/quick-start/create-components, https://atmos.tools/quick-start/create-atmos-stacks)

Create Components | atmos

In the previous steps, we’ve configured the repository, and decided to provision the vpc-flow-logs-bucket and vpc Terraform

Create Atmos Stacks | atmos

In the previous step, we’ve configured the Terraform components and described how they can be copied into the repository.
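
For reference, a minimal atmos.yaml sketch along those lines (the paths and name pattern are assumptions; adapt them to your repository layout):

```yaml
base_path: "."
components:
  terraform:
    base_path: "components/terraform"
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{tenant}-{environment}-{stage}"
```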

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, if you want to use a TF module/component w/o Atmos (just TF code), then you cd into the folder and provide all the variables and backend config, then use plain TF commands

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a working example (which we are using for testing) https://github.com/cloudposse/atmos/tree/master/examples/complete with everything defined (atmos.yaml, components, stacks, etc.)

Abhijit Shingate avatar
Abhijit Shingate

Okay, will explore this more. Our organisation is using terragrunt, so we wanted to explore how to use cloudposse modules with it. Thank you.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you need help with using the components, let us know. Also, we can help with integrating Atmos if you want to explore it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, so in that case, wouldn’t it be more about overriding the terraform command? We haven’t tried it, but others have mentioned they are using terragrunt.

Abhijit Shingate avatar
Abhijit Shingate

but it gives me the following error

Abhijit Shingate avatar
Abhijit Shingate
Error: 
│ 'atmos.yaml' CLI config files not found in any of the searched paths: system dir, home dir, current dir, ENV vars.
│ You can download a sample config and adapt it to your requirements from <https://raw.githubusercontent.com/cloudposse/atmos/master/examples/complete/atmos.yaml>
│ 
│   with module.website_cognito_setup.module.iam_roles.module.account_map.data.utils_component_config.config,
│   on .terraform/modules/website_cognito_setup.iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, so when you use the Cloud Posse developed components, they will frequently leverage the stack configurations. Under the hood they use our other open source child modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our Cloud Posse components frequently use a terraform data provider which reads remote state and looks up configuration from the stacks. That’s why you’re getting this error.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To be clear, it doesn’t require atmos, but it requires stack configurations. If that’s not what you want, then you’ll need to fork it.

Abhijit Shingate avatar
Abhijit Shingate

Oh Okay. Thanks a lot. Will try that approach

jose.amengual avatar
jose.amengual
1
1

2023-04-25

girishmaddineni1998 avatar
girishmaddineni1998

Hi @everyone, I am having an issue while running atmos describe affected --verbose=true in bitbucket pipelines. The repo was already cloned into the container in that particular step of the pipeline, but it is giving errors like the ones below. Thank you

Executing command:

/usr/bin/atmos describe affected --file affected.json --verbose=true --repo-path $BITBUCKET_CLONE_DIR

the target remote repo is not a Git repository. Check that it was initialized and has '.git' folder: repository does not exist

exit status 1

--------------
/usr/bin/atmos describe affected --file affected.json --verbose=true --sha $BITBUCKET_COMMIT

Cloning repo '<http://bitbucket.org/**********/********>' into the temp dir '/tmp/168236982171'
Checking out the HEAD of the default branch ...

authentication required

exit status 1
-------------------
Executing command:

/usr/bin/atmos describe affected --file affected.json --verbose=true --ref "refs/heads/$BITBUCKET_BRANCH"

Cloning repo '<http://bitbucket.org/*********/********>' into the temp dir '/tmp/16823701031153'

Checking out Git ref '"refs/heads/$BITBUCKET_BRANCH"' ...

authentication required

exit status 1
-------------------
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

does it have ‘.git’ folder? In the command atmos describe affected --file affected.json --verbose=true --repo-path $BITBUCKET_CLONE_DIR

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will not work if it’s not a valid git folder

girishmaddineni1998 avatar
girishmaddineni1998

yes, I verified it locally and reinitialized it using git init as well

girishmaddineni1998 avatar
girishmaddineni1998

and it is correctly pointing to the right HEAD as well

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we never tested it with Bitbucket, so it might be an issue with that. We are using the go-git Go lib to do all things with Git

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or might be not related to Bitbucket

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what if you just copy the whole repo into a temp dir (including the .git folder), then use the command atmos describe affected --file affected.json --verbose=true --repo-path xxxx ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

manually copy, on your computer

girishmaddineni1998 avatar
girishmaddineni1998

Ahhh, I will do that and see how it goes. Thank you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, just clone the entire folder into a temp folder on your computer

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use it in --repo-path argument

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

at least you’ll know that this part works

girishmaddineni1998 avatar
girishmaddineni1998

Sounds good, and I will post the outcome once it’s done

girishmaddineni1998 avatar
girishmaddineni1998

it is giving me the same kind of error, but the directory has a .git folder

the target remote repo is not a Git repository. Check that it was initialized and has '.git' folder: repository does not exist
girishmaddineni1998 avatar
girishmaddineni1998
cat .git/HEAD     
ref: refs/heads/feature/testing-pipeline
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for some reason, https://github.com/go-git/go-git does not like that .git folder (this part of the error repository does not exist is from go-git)

go-git/go-git

A highly extensible Git implementation in pure Go.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Git error: conq: repository does not exist

I’m getting the following errors in Git using BitBucket:

conq: repository does not exist. fatal: The remote end hung up unexpectedly How do I rectify this issue? I’ve carried out the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please review it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(these issues are difficult to answer, since Atmos does not do anything with Git and its folders; go-git does. Whatever the lib does, we don’t control it in Atmos code, and go-git is far from perfect)

girishmaddineni1998 avatar
girishmaddineni1998

Ahhh, Got it. Thanks so much for the quick response

jose.amengual avatar
jose.amengual

So we fixed the repository does not exist error

jose.amengual avatar
jose.amengual

now it is giving us authentication required

jose.amengual avatar
jose.amengual

the two commands before atmos describe affected are git status and git config -l, and they both work right before the atmos command

jose.amengual avatar
jose.amengual

so it is strange that atmos does not seem to use the credentials in the same shell/run/pipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos uses go-git

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whatever it does

jose.amengual avatar
jose.amengual

if we run this locally against the same repo, it works, so there is definitely something missing in the pipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are a lot of discussions about that for go-git https://github.com/go-git/go-git/issues/116

#116 authentication required

I am cloning a private library

And comes with a username and password

But this doesn’t seem to work and got authentication required error

	options := git.CloneOptions{
		URL:               getGitURL(r.repo, username, password), // <https://username:[email protected]/owner/repo.git>
		Progress:          os.Stdout,
		SingleBranch:      true,
		Depth:             1,
		RecurseSubmodules: git.DefaultSubmoduleRecursionDepth,
	}

	_, err := git.CloneContext(ctx, memory.NewStorage(), fs, &options)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that def works in GitHub actions (we are using atmos describe affected in the actions)

jose.amengual avatar
jose.amengual

in github we have 0 issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but in a container, it does not work for a private repo

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s a go-git issue, which can probably be solved by using

repo, err := git.PlainClone(pathToRepo, false, &git.CloneOptions{
			Auth: &http.BasicAuth{
				Username: "yourUsername",
				Password: personalAccessToken},
			URL:      "<https://github.com/go-git/go-git>",
			Progress: os.Stdout,
		})
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but it does not look like a good solution if you need to specify your username and a PAT (as the command’s arguments?)

jose.amengual avatar
jose.amengual

yes, I think there is a setting we might not have access to that would allow the runners to authenticate using ssh

jose.amengual avatar
jose.amengual

this worked atmos describe affected --ref refs/heads/main --ssh-key ${BITBUCKET_SSH_KEY_FILE} --verbose=true

1
Release notes from atmos avatar
Release notes from atmos
03:34:36 AM

v1.34.1 what: Add ExecuteDescribeStacks function to the pkg package and wrap the same function from the internal package; add tests. why: We need to use ExecuteDescribeStacks in the terraform utils provider, but all code in the internal package is not visible to the calling code. The internal package is used to reduce the public API surface; packages within an internal/ directory are therefore said to be internal packages. references: https://go.dev/doc/go1.4#internalpackages

Release v1.34.1 · cloudposse/atmosattachment image

what

Add ExecuteDescribeStacks function to pkg package and wrap the same function from the internal package Add tests

why

We need to use the ExecuteDescribeStacks in the terraform utils provider…

2023-04-26

2023-04-27

Release notes from atmos avatar
Release notes from atmos
02:34:38 PM

v1.34.2 what: Add workspace to the outputs of the atmos describe stacks command. why: We often need to know the terraform workspace for each component (taking into account that Terraform workspaces can be overridden per component, so they are not always the same as the stack names). test:

tenant1-ue2-dev:
  components:
    terraform:
      top-level-component1:
        workspace: tenant1-ue2-dev
      test/test-component-override-3:
        workspace: test-component-override-3-workspace
Release v1.34.2 · cloudposse/atmosattachment image

what

Add workspace to the outputs of atmos describe stacks command

why

We often need to know the terraform workspace for each component (taking into account that Terraform workspaces can be over…
