#geodesic (2021-06)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2021-06-02
# stacks/wf.yaml
workflows:
  plan-all:
    description: Run 'terraform plan' and 'helmfile diff' on all components for all stacks
    steps:
      - job: terraform plan vpc
        stack: ue2-dev
      - job: terraform plan eks
        stack: ue2-dev
      - job: helmfile diff nginx-ingress
        stack: ue2-dev
      - job: terraform plan vpc
        stack: ue2-staging
      - job: terraform plan eks
        stack: ue2-staging
Should it be possible to run all jobs for a given workflow without passing the stack
arg? e.g. run all defined jobs for all stacks
atmos workflow plan-all -f wf
Not supported today AFAIK, but might be a good GH issue / feature request.
Okay.
I think I will be submitting MRs for several of CP's projects. I want to confirm with my new employer that I am allowed to contribute to open source projects.
2021-06-04
Are there any examples of using atmos with helm/helmchart? Something similar to the terraform and atmos tut?
Not looking for anything formal or polished (like the terraform + atmos tut), maybe just something on a WIP branch of some project?
I don’t think we have a doc on it
And the atmos subcommands typically follow the format of the underlying command itself
Okay. I will look at it. I already read the atmos source to learn the atmos+terraform abstraction. I can do the same for helm too.
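For anyone searching later, a rough sketch of what that looks like in practice (the component and stack names are borrowed from the workflow above, not from any official doc):
# atmos wraps helmfile the same way it wraps terraform:
#   atmos helmfile <helmfile-subcommand> <component> --stack <stack>
atmos helmfile diff nginx-ingress --stack ue2-dev
atmos helmfile sync nginx-ingress --stack ue2-dev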
2021-06-10
how do you suggest mounting the keybase filesystem within the geodesic container?
Haven't tried it (after the Keybase acquisition by Zoom, its longevity is in question and we don't recommend it anymore)
any good alternatives?
What problem are you wanting to solve? :-)
I work from multiple systems, so I keep some secrets in keybase for that; the other use case is sharing secrets with the team
is using keybase for that purpose the right approach?
What about using SSM instead?
…with chamber
nice approach
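A rough sketch of the chamber-on-SSM flow (the service and key names below are made up):
# write a shared secret into SSM Parameter Store under a "service" namespace
chamber write myapp db_password 'hunter2'
# read it back from any machine or teammate with access to the same AWS account
chamber read myapp db_password
# or export every secret in the service into a command's environment
chamber exec myapp -- terraform plan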
our authentication to AWS is managed through AWS SSO. our credentials have three parts:
• aws_access_key_id
• aws_secret_access_key
• aws_session_token
all the auth examples i've seen for geodesic and atmos rely on aws-vault, and (from what i can tell) aws-vault only supports authentication with aws_access_key_id and aws_secret_access_key.
is it possible to use cloudposse's tools in this case?
@Jeremy G (Cloud Posse) can likely point you in the right direction when he’s got a minute.
Use AWS CLI v2 which has direct support for AWS SSO. Sign in from within Geodesic.
AWS CLI v2 is the default in Debian-based Geodesic starting with version 0.146.0
@Alan Cox Set up your $HOME/.aws/config file for AWS SSO as directed by AWS, then from inside Geodesic you can run aws sso login. Make sure you have AWS_PROFILE set first.
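For reference, a minimal sketch of the pieces involved; the profile name, start URL, and account ID below are placeholders:
# add an SSO profile to $HOME/.aws/config (placeholder values)
cat >> "$HOME/.aws/config" <<'EOF'
[profile example-sso]
sso_start_url  = https://example.awsapps.com/start
sso_region     = us-east-2
sso_account_id = 111111111111
sso_role_name  = AdministratorAccess
region         = us-east-2
EOF

# then, inside Geodesic:
export AWS_PROFILE=example-sso
aws sso login                  # opens the browser-based SSO flow
aws sts get-caller-identity    # confirm the short-lived credentials work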
awesome. thanks!
2021-06-11
is geodesic expected to keep aws-vault credentials in between sessions?
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list # as i expected, returns one profile that has no credentials and no sessions
geodesic $> aws-vault add wispmaster.root # "Added credentials to profile "wispmaster.root" in vault
geodesic $> aws-vault list # as i expected, returns one profile that has credentials but no sessions
geodesic $> exit
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list # not what i expect ... returns one profile that has no credentials and no sessions
i would think that geodesic would maintain the aws-vault credentials from one session to the next.
2021-06-12
It mounts your home directory for caching
I would explore the .aws folder to see what’s there
Geodesic is just a Docker image. How you run the container determines the behavior. So I would think about it more in terms of how you would accomplish it with Docker rather than how we do it in geodesic. It would be identical. We bind mount volumes into the container, and so long as all the paths are correct it will work as expected
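One way to check, sketched under the assumption that the file backend writes to $HOME/.awsvault/keys by default and that the host home directory is mounted at /localhost (the usual geodesic convention):
# inside geodesic: see where the file backend actually stored the credentials
export AWS_VAULT_BACKEND=file
ls -la "${AWS_VAULT_FILE_DIR:-$HOME/.awsvault/keys}"
# if that path lives only in the container's filesystem, it vanishes when the
# container exits; pointing the backend at a bind-mounted path persists it
export AWS_VAULT_FILE_DIR=/localhost/.awsvault/keys
aws-vault add wispmaster.root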
2021-06-17
I think I've finally wrapped my head around the stack-based approach to SweetOps. The only thing I'm missing is the recommended way of having different versions of a component (e.g. for prod and dev) in the infrastructure repo; in a March thread, I read that this is possible. My understanding is that components don't get imported on the fly with Terraform but are synced regularly with vendir (once usable).
I want to avoid provisioning a component that I want to use in the dev/staging account in production because stacks trigger automatically.
Once again, thanks for all the amazing work you do around that topic!
hey folks, how can I get an s3 backend configured using atmos? My stack file looks like the following. I do not have a provider or a terraform block defined in the tf files.
Just run:
atmos terraform backend generate component tfstate-backend --stack ue1-root
replace tfstate-backend with the name of your component
run that for every component
we commit the backend.tf.json file to VCS
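For completeness, a sketch of the one-time setup described above (the component list and the components/terraform/<name> layout are assumptions):
# run once per component; atmos renders the backend settings from the stack config
for component in tfstate-backend vpc eks; do
  atmos terraform backend generate component "$component" --stack ue1-root
done
# each run writes backend.tf.json into that component's directory; commit the results
git add components/terraform/*/backend.tf.json
git commit -m "Add generated S3 backend configs"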
also, remember you can always run atmos terraform help for a list of commands
@Erik Osterman (Cloud Posse) you run a backend generate for every environment every time you want to switch from dev to staging, for example?
i want to work on vpc dev, ok i run generate, have it write to the component directory. now i want to promote to staging, so i have to generate/copy a new backend config?
Nope. We never need to generate the file again. We use the workspace to separate environments.
ok, i see the way the tfstate is structured, so you just change the key in each component's s3 backend configuration
and the bucket is essentially namespaced by the environment/stack
thanks again, no need to respond if my above findings are correct, I know you're busy and i think i've got this all figured out. vendir and variant are awesome, and you guys wrapping them into geodesic makes quite a useful tool!
you got it
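To make the workspace idea concrete, a sketch of how the state ends up laid out in S3 (the bucket name is illustrative, and this assumes key = "terraform.tfstate" with workspace_key_prefix = "vpc"):
# each stack maps to its own terraform workspace, so one backend.tf.json per
# component yields one state object per stack under the same bucket:
#   s3://acme-tfstate/vpc/ue2-dev/terraform.tfstate
#   s3://acme-tfstate/vpc/ue2-staging/terraform.tfstate
atmos terraform plan vpc --stack ue2-dev       # selects/creates a workspace for this stack
atmos terraform plan vpc --stack ue2-staging   # same backend config, different workspace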
beauty, yeah i've actually got all this going fine now, thanks again for your help!
my components and my stacks are now in their own repos; they are brought into the container by vendir. each time i make a change i just commit to their repos and run vendir sync to bring the latest in
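A minimal sketch of that layout (the repo URLs and refs here are hypothetical):
# pin components and stacks to tagged releases in vendir.yml, then sync
cat > vendir.yml <<'EOF'
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: components/terraform
  contents:
  - path: .
    git:
      url: https://github.com/example-org/infra-components
      ref: v0.1.0
- path: stacks
  contents:
  - path: .
    git:
      url: https://github.com/example-org/infra-stacks
      ref: v0.1.0
EOF
vendir sync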
when you rerun vendir, though, doesn’t it want to delete the backend file?
I’m waiting on this PR https://github.com/vmware-tanzu/carvel-vendir/pull/64
the backend.tf.json files are committed to their repos
and they’re all s3 backed, just to be explicit
terraform:
  vars:
    stage: "dev"
    namespace: "gfca"
    region: "ca-central-1"

components:
  terraform:
    setup:
      name: "terraform"
    vpc:
      backend:
        s3:
          bucket: "gfca-dev-terraform-state"
          key: "state/vpc"
          workspace_key_prefix: "vpc"
      vars:
        name: "vpc"
        availability_zones: ['ca-central-1a', 'ca-central-1b']
whenever I run atmos terraform apply -s ^^THISONE^^, it gives me local state. if i add terraform { backend "s3" } to my tf files, it won't honor the backend configuration and prompts me for a bucket name and a key.
i've run the tfstate project and copy-pasted the backend.tf into each module at this point as a workaround. The atmos docs say I can provide it inline in the stacks file; it would be great if somebody could help determine why that's not working for me.
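If it helps anyone hitting the same thing: the backend settings in the stack file are what atmos terraform backend generate renders into backend.tf.json, and Terraform itself still needs that generated block (or an equivalent backend config) before it will use S3 instead of local state. A sketch, using a placeholder for the stack name:
# render backend.tf.json into the vpc component from the stack's backend settings
atmos terraform backend generate component vpc --stack <this-stack>
# then plan/apply as usual; terraform init will offer to migrate any existing local state
atmos terraform plan vpc --stack <this-stack>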