#geodesic (2021-06)

geodesic https://github.com/cloudposse/geodesic

Discussions related to https://github.com/cloudposse/geodesic

Archive: https://archive.sweetops.com/geodesic/

2021-06-02

Brian Ojeda
# stacks/wf.yaml
workflows:
  plan-all:
    description: Run 'terraform plan' and 'helmfile diff' on all components for all stacks
    steps:
      - job: terraform plan vpc
        stack: ue2-dev
      - job: terraform plan eks
        stack: ue2-dev
      - job: helmfile diff nginx-ingress
        stack: ue2-dev
      - job: terraform plan vpc
        stack: ue2-staging
      - job: terraform plan eks
        stack: ue2-staging

Should it be possible to run all jobs for a given workflow without passing the stack arg? e.g. run all defined jobs for all stacks

atmos workflow plan-all -f wf
Matt Gowie

Not supported today AFAIK, but might be a good GH issue / feature request.

Brian Ojeda

Okay.

Brian Ojeda

I think I will be submitting MRs for several of CP’s projects. I want to confirm with my new employer that I am allowed to contribute to open source projects.


2021-06-04

Brian Ojeda

Are there any examples of using atmos with helm/helm charts? Something similar to the terraform and atmos tutorial?

Brian Ojeda

Not looking for anything formal or polished (like the terraform + atmos tutorial). Maybe something that is on a WIP branch of some project?

Erik Osterman (Cloud Posse)

I don’t think we have a doc on it

Erik Osterman (Cloud Posse)

However, all the atmos commands have a --help flag

Erik Osterman (Cloud Posse)

And the subcommands typically follow the format of the command itself

Brian Ojeda

Okay. I will look at it. I already read the atmos source to learn the atmos+terraform abstraction. I can do the same for helm too.
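
For what it’s worth, the helmfile subcommands appear to mirror the terraform ones, so commands along these lines should work (component and stack names borrowed from the workflow above):

atmos helmfile diff nginx-ingress --stack ue2-dev
atmos helmfile apply nginx-ingress --stack ue2-dev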

2021-06-10

Neeraj Mittal

how do you suggest mounting the Keybase filesystem within the geodesic container?

Erik Osterman (Cloud Posse)

Haven’t tried it (after the Keybase acquisition by Zoom, its longevity is in question and we don’t recommend it anymore)

Neeraj Mittal

any good alternatives?

Erik Osterman (Cloud Posse)

What problem are you wanting to solve? :-)

Neeraj Mittal

I work from multiple systems, so I keep some secrets in Keybase; that’s one use case, and another is sharing secrets with the team

Neeraj Mittal

is my approach of using Keybase for this purpose right?

Erik Osterman (Cloud Posse)

What about using SSM instead?

Erik Osterman (Cloud Posse)

…with chamber

Neeraj Mittal

and restrict access to teams (roles) for specific paths

Neeraj Mittal

nice approach
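
For reference, a minimal sketch of the chamber + SSM approach suggested above (the service name myapp and the key names are hypothetical; chamber stores each value as an SSM parameter under /myapp/...):

chamber write myapp db_password 's3cr3t'   # stored at /myapp/db_password in SSM Parameter Store
chamber read myapp db_password             # read it back
chamber exec myapp -- terraform plan       # run a command with the service's secrets exported as env vars

Restricting access per team then comes down to scoping an IAM policy’s ssm:GetParameter* actions to arn:aws:ssm:<region>:<account>:parameter/myapp/*.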

Alan Cox

our authentication to AWS is managed through AWS SSO. our credentials have three parts:
• aws_access_key_id
• aws_secret_access_key
• aws_session_token
all the auth examples i’ve seen for geodesic and atmos rely on aws-vault, and (from what i can tell) aws-vault only supports authentication with aws_access_key_id and aws_secret_access_key.

is it possible to use cloudposse’s tools in this case?

Matt Gowie

@Jeremy G (Cloud Posse) can likely point you in the right direction when he’s got a minute.

Jeremy G (Cloud Posse)

Use AWS CLI v2, which has direct support for AWS SSO. Sign in from within Geodesic.

Jeremy G (Cloud Posse)

AWS CLI v2 is the default in Debian-based Geodesic starting with version 0.146.0

Jeremy G (Cloud Posse)

@Alan Cox Set up your $HOME/.aws/config file for AWS SSO as directed by AWS, then from inside Geodesic you can run aws sso login. Make sure you have AWS_PROFILE set first.
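
A sketch of that setup, with placeholder org/account values:

# $HOME/.aws/config
[profile acme-identity]
sso_start_url  = https://acme.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 123456789012
sso_role_name  = AdministratorAccess
region         = us-east-2

# then, inside Geodesic:
export AWS_PROFILE=acme-identity
aws sso login
aws sts get-caller-identity   # confirm the session works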

Alan Cox

awesome. thanks!

2021-06-11

Alan Cox

is geodesic expected to keep aws-vault credentials in between sessions?

localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list # as i expected, returns one profile that has no credentials and no sessions
geodesic $> aws-vault add wispmaster.root # Added credentials to profile "wispmaster.root" in vault
geodesic $> aws-vault list # as i expected, returns one profile that has credentials but no sessions
geodesic $> exit
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list # not what i expect ... returns one profile that has no credentials and no sessions

i would think that geodesic would maintain the aws-vault credentials from one session to the next.

2021-06-12

Erik Osterman (Cloud Posse)

It mounts your home directory for caching

Erik Osterman (Cloud Posse)

I would explore the .aws folder to see what’s there

Erik Osterman (Cloud Posse)

Geodesic is just a Docker image. However you run the container determines the behavior. So I would think about it in terms of how you would accomplish it with Docker rather than how we do it in geodesic; it would be identical. We bind-mount volumes into the container, and so long as all the paths are correct it will work as expected
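
As a concrete sketch of that (assuming, per the above, that Geodesic bind-mounts your host home directory): aws-vault’s file backend writes to ~/.awsvault/keys by default, and AWS_VAULT_FILE_DIR overrides that location, so the vault persists as long as it lives under the mounted home:

export AWS_VAULT_BACKEND=file
export AWS_VAULT_FILE_DIR="$HOME/.awsvault/keys"   # the default; must resolve to a bind-mounted path to persist
aws-vault add wispmaster.root
aws-vault list   # in a new session, the credentials should still be listed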

2021-06-17

Markus Muehlberger

I think I’ve finally wrapped my head around the stack-based approach to SweetOps. The only thing I’m missing is the recommended way of having different module versions of a component (e.g. for prod and dev) in the infrastructure repo; in a March thread, I read that this is possible. My understanding is that components don’t get imported on the fly with Terraform but are synced regularly with vendir (once usable).

I want to avoid provisioning a component that I want to use in the dev/staging account in production because stacks trigger automatically.

Once again, thanks for all the amazing work you do around that topic!

Cody Halovich

hey folks, how can I get an s3 backend configured using atmos? My stack file looks like the following. I do not have a provider or a terraform block defined in the tf files.

Erik Osterman (Cloud Posse)

Just run:

Erik Osterman (Cloud Posse)
atmos terraform backend generate component tfstate-backend --stack ue1-root
Erik Osterman (Cloud Posse)

replace tfstate-backend with the name of your component

Erik Osterman (Cloud Posse)

run that for every component

Erik Osterman (Cloud Posse)

we commit the backend.tf.json file to VCS
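
The generated file is ordinary Terraform JSON, roughly of this shape (bucket, table, and key names here are illustrative):

{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "acme-ue1-root-tfstate",
        "dynamodb_table": "acme-ue1-root-tfstate-lock",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "us-east-1",
        "workspace_key_prefix": "tfstate-backend"
      }
    }
  }
}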

Erik Osterman (Cloud Posse)

also, remember you can always run:

atmos terraform help

for a list of commands

Cody Halovich

beautiful, i knew there had to be something i was missing. Thanks Erik!

Cody Halovich

@Erik Osterman (Cloud Posse) do you run a backend generate for every environment every time you want to switch from dev to staging, for example?

Cody Halovich

i want to work on vpc dev, ok i run generate, have it write to the component directory. now i want to promote to staging, so i have to generate/copy a new backend config?

Erik Osterman (Cloud Posse)

Nope. We never need to generate the file again. We use the workspace to separate environments.

Cody Halovich

ok, i see the way the tfstate is structured, so you just change the key in each component’s s3 backend configuration

Cody Halovich

and the bucket is essentially namespaced by the environment/stack
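
Concretely, Terraform’s s3 backend stores the default workspace at the configured key and every other workspace under workspace_key_prefix/<workspace>/<key>, so one bucket and one committed backend config serve all environments (bucket and workspace names illustrative):

s3://acme-tfstate/terraform.tfstate              # workspace "default"
s3://acme-tfstate/vpc/ue2-dev/terraform.tfstate  # workspace "ue2-dev", with workspace_key_prefix "vpc"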

Cody Halovich

thanks again, no need to respond if my above findings are correct, I know you’re busy and i think i’ve got this all figured out. vendir and variant are awesome, and you guys wrapping them into geodesic makes quite a useful tool!

Erik Osterman (Cloud Posse)

you got it

Cody Halovich

beauty, yeah i’ve actually got all this going fine now, thanks again for your help!

Cody Halovich

my components and my stacks are now in their own repos; they are brought into the container by vendir. each time i make a change i just commit to their repos and do vendir sync to bring the latest in
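
A minimal vendir.yml for that layout might look like this (repo URLs and paths are placeholders):

apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
  - path: components/terraform
    contents:
      - path: .
        git:
          url: https://github.com/acme/infra-components
          ref: origin/main
  - path: stacks
    contents:
      - path: .
        git:
          url: https://github.com/acme/infra-stacks
          ref: origin/main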

Erik Osterman (Cloud Posse)

when you rerun vendir, though, doesn’t it want to delete the backend file?

Erik Osterman (Cloud Posse)
Feature Ignore Paths by joelholmes · Pull Request #64 · vmware-tanzu/carvel-vendir

Issue: #37 Created a function that will copy existing files listed in the destination directory to the staging area prior to deletion preserving the content. I felt that pulling the content into th…

Cody Halovich

the backends are committed to their repos

Cody Halovich

the backend.tf.json

Cody Halovich

and they’re all s3 backed, just to be explicit

Cody Halovich
terraform:
  vars:
    stage: "dev"
    namespace: "gfca"
    region: "ca-central-1"

components:
  terraform:
    setup:
      name: "terraform"
    vpc:
      backend:
        s3:
          bucket: "gfca-dev-terraform-state"
          key: "state/vpc"
          workspace_key_prefix: "vpc"
      vars:
        name: "vpc"
        availability_zones: ['ca-central-1a', 'ca-central-1b']
Cody Halovich

whenever I run with atmos terraform apply -s ^^THISONE^^, it gives me local state. if i add terraform { backend "s3" } to my tf files, it won’t honor the backend configuration and prompts me for a bucket name and a key.

Cody Halovich

i’ve run the tfstate project and copypasta’d the backend.tf into each module at this point as a workaround. The atmos docs say I can provide it inline in the stacks file; it would be great if somebody could help determine why that’s not working for me.
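
For the record, the likely missing step is the one Erik described earlier: Terraform only reads backend settings from .tf/.tf.json files, so the inline stack config has to be materialized once per component (stack name below is a placeholder):

atmos terraform backend generate component vpc --stack <your-stack>   # writes backend.tf.json into the component directory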
