It mounts your home directory for caching
I would explore the .aws folder to see what’s there
Geodesic is just a Docker image. How you run the container determines the behavior. So I would think about it in terms of how you would accomplish it with Docker rather than how we do it in Geodesic; it would be identical. We bind-mount volumes into the container, and as long as all the paths are correct it will work as expected.
is geodesic expected to keep aws-vault credentials in between sessions?
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list
# as i expected, returns one profile that has no credentials and no sessions
geodesic $> aws-vault add wispmaster.root
# "Added credentials to profile "wispmaster.root" in vault"
geodesic $> aws-vault list
# as i expected, returns one profile that has credentials but no sessions
geodesic $> exit
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list
# not what i expect ... returns one profile that has no credentials and no sessions
i would think that geodesic would maintain the aws-vault credentials from one session to the next.
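A sketch of one way the file backend could be made to persist, assuming (as Geodesic does by default) that your host home directory is bind-mounted into the container at `/localhost`. With `AWS_VAULT_BACKEND=file`, aws-vault writes its encrypted store under `~/.awsvault` inside the container, which is ephemeral unless it points at a mounted path. The directory names here are illustrative; adjust to your actual mount layout.

```shell
# Inside the Geodesic shell, redirect aws-vault's file store onto the
# host-home bind mount so it survives container exits.
mkdir -p /localhost/.awsvault            # lives on the host, not the container
ln -sfn /localhost/.awsvault "$HOME/.awsvault"

export AWS_VAULT_BACKEND=file
# Any credentials added now land on the host filesystem:
#   aws-vault add wispmaster.root
# and should still be listed by `aws-vault list` in the next session.
```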
how do you suggest loading file system from keybase within geodesic container?
Haven’t tried it (after the Keybase acquisition by Zoom, its longevity is in question and we don’t recommend it anymore)
any good alternatives?
What problem are you wanting to solve? :-)
I work from multiple systems, so I keep some secrets in Keybase. That’s one use; another is sharing secrets with the team.
is my approach to use keybase for the purpose right?
What about using SSM instead?
and restrict access to teams (roles) for specific paths
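A sketch of the SSM approach: store team secrets as `SecureString` parameters under a per-team path, then scope the team role’s IAM policy to that path prefix. All names and paths below are hypothetical placeholders.

```shell
# Write a secret under a team-scoped path (KMS-encrypted at rest):
aws ssm put-parameter \
  --name /secrets/platform-team/db-password \
  --type SecureString \
  --value 'example-value'

# Read it back from any machine whose role is allowed:
aws ssm get-parameter \
  --name /secrets/platform-team/db-password \
  --with-decryption \
  --query Parameter.Value --output text

# Restrict the team's role to its own path via an IAM policy:
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ssm:GetParameter", "ssm:GetParametersByPath"],
    "Resource": "arn:aws:ssm:*:*:parameter/secrets/platform-team/*"
  }]
}
EOF
```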
our authentication to AWS is managed through AWS SSO. our credentials have three parts:
all the auth examples i’ve seen for atmos rely on aws-vault, and (from what i can tell) aws-vault only supports authentication with
is it possible to use cloudposse’s tools in this case?
@Jeremy (Cloud Posse) can likely point you in the right direction when he’s got a minute.
Use AWS CLI v2 which has direct support for AWS SSO. Sign in from within Geodesic.
AWS CLI v2 is the default in Debian-based Geodesic starting with version 0.146.0
Set up your $HOME/.aws/config file for AWS SSO as directed by AWS, then from inside Geodesic you can run aws sso login. Make sure you have AWS_PROFILE set first.
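A minimal sketch of what that config and login flow look like, following the AWS CLI v2 SSO docs. The start URL, account ID, and role name below are placeholders for your own values.

```shell
# Add an SSO profile to ~/.aws/config (values are placeholders):
cat >> "$HOME/.aws/config" <<'EOF'
[profile my-sso]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111111111111
sso_role_name  = AdministratorAccess
region         = us-east-1
EOF

# Inside Geodesic:
export AWS_PROFILE=my-sso
aws sso login                  # opens the SSO device-authorization flow
aws sts get-caller-identity    # verify the session works
```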
Are there any examples of using atmos with helm/helmchart? Something similar to the terraform and atmos tut?
Not looking for anything formal or polished (like the terraform + atmos tut). Maybe something that is on a WIP branch of some project?
I don’t think we have a doc on it
However, all the atmos commands have a --help flag
And the subcommands typically follow the format of the command itself
Okay. I will look at it. I already read the atmos source to learn the atmos+terraform abstraction. I can do the same for helm too.
# stacks/wf.yaml
workflows:
  plan-all:
    description: Run 'terraform plan' and 'helmfile diff' on all components for all stacks
    steps:
      - job: terraform plan vpc
        stack: ue2-dev
      - job: terraform plan eks
        stack: ue2-dev
      - job: helmfile diff nginx-ingress
        stack: ue2-dev
      - job: terraform plan vpc
        stack: ue2-staging
      - job: terraform plan eks
        stack: ue2-staging
Should it be possible to run all jobs for a given workflow without passing the stack arg? e.g. run all defined jobs for all stacks:
atmos workflow plan-all -f wf
Not supported today AFAIK, but might be a good GH issue / feature request.
I think I will be submitting MRs for several of CP’s projects. I want to confirm with my new employer that I am allowed to contribute to open source projects.