#geodesic (2019-04)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2019-04-01
There is 1 event this week
April 3rd, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853
2019-04-02
Hello,
I’m rebuilding some packages of the cloudposse/packages container and getting this error: mktemp: failed to create directory via template '../../tmp/tmp.XXXXXXXXXX': No such file or directory
I could see that the problem is related to this file included by the Makefile: /packages/tasks/Makefile.apk
… export APK_TMP_DIR := $(realpath $(shell mktemp -p ../../tmp -d)) …
As I wasn’t interested in apk packages, I just emptied that file (echo > /packages/tasks/Makefile.apk) so the build wouldn’t error out, but I think this problem is worth mentioning, just to check whether I’m doing something wrong
/packages # echo "1.14.0" > /packages/vendor/kubectl/VERSION
/packages # make -C install kubectl
make: Entering directory '/packages/install'
make[1]: Entering directory '/packages/vendor/kubectl'
mktemp: failed to create directory via template '../../tmp/tmp.XXXXXXXXXX': No such file or directory
curl --retry 3 --retry-delay 5 --fail -sSL -o /packages/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl && chmod +x /packages/bin/kubectl
make[1]: Leaving directory '/packages/vendor/kubectl'
make: Leaving directory '/packages/install'
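For anyone hitting this before a fix lands, a workaround sketch: the Makefile's `mktemp -p ../../tmp -d` fails because `../../tmp` does not exist yet, so create it first. The paths below are stand-ins, not the repo's actual layout.

```shell
# The Makefile runs `mktemp -p ../../tmp -d` before ../../tmp exists;
# creating the directory first avoids the "No such file or directory" error.
# TMP_BASE stands in for the repo's relative ../../tmp directory.
TMP_BASE="$(mktemp -d)/tmp"
mkdir -p "$TMP_BASE"                                 # the missing step
APK_TMP_DIR="$(realpath "$(mktemp -p "$TMP_BASE" -d)")"
echo "$APK_TMP_DIR"
```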
Thanks for reporting
@Pablo Costa can you open an issue for that?
Sure. It’s done.
I’ll have @Maxim Mironenko (Cloud Posse) look into it
for now, maybe use an earlier version of cloudposse/packages
?
(btw, if you’re using alpine, then we suggest using apk add instead of the make-based installer)
Thanks Erik, btw what is the policy for package updates? For example helm is at 2.11 (2.13 available) and terraform is at 0.11.11 (0.11.13 available)
oh, open policy!
open a PR and we’ll approve it, usually within a couple of hours
only reason our versions lag is (we or you) didn’t submit a PR to update it
since we can version pin everything, we’re very open to keeping things current.
Here are the slides from the “Los Angeles Kubernetes Meetup” where we presented on Geodesic.
Geodesic is a cloud automation shell. It’s the superset of all other tools including (terraform, terragrunt, chamber, aws-vault, aws-okta, kops, gomplate, helm, helmfile, aws cli, variant, etc) that we use to automate workflows. You can think of it like a swiss army knife for creating and building
Is there a VOD of the talk by any chance?
2019-04-03
April 3rd, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853
I’m going to hang out in this zoom for a little bit in case anyone has any questions.
Oooh it was later today doh
Couldn’t make the 6:30 one!
(fwiw, we had daylight savings in the US on March 10)
2019-04-04
So should I adjust my calendar invites from 6:30 PM to 7:30PM?
I am not sure :-)
2019-04-05
hey hey
does anyone know why kops is still 1.10.* in geodesic?
No reason - just haven’t received your PR :-)
haha
We are almost out of the build and migrate madness
We’ve released our version of shared vpc for kops
We are testing 1.12
rather than 1.11.0
which file??
2019-04-08
There is 1 event this week
April 10th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853
So if I wanted to get started trying to use geodesic, where is the best place to start? I see lots of information about and references to geodesic and reference architectures, just wondering if there is a certain order to follow. Any info would be helpful, thanks
@oscarsullivan_old recently went through this, made some docs https://github.com/osulli/geodesic-getting-started
A getting-started guide for Cloud Posse’s Geodesic. - osulli/geodesic-getting-started
it’s on my list to investigate using geodesic, i just haven’t gotten there yet~
Awesome thanks so much! I’ll give that a look. Really appreciate that
The 20-foot view is: you make a Dockerfile based on geodesic with some specific stuff in it, and then you use the resultant container as your “shell” for the given environment you built it for, if my understanding is correct.
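That mental model can be sketched as a minimal Dockerfile (the base tag and env values here are illustrative, not a recommendation):

```Dockerfile
# Sketch: a company-specific shell image built on Geodesic.
FROM cloudposse/geodesic:latest

# Illustrative settings for the target environment
ENV BANNER="acme-staging"
ENV AWS_DEFAULT_REGION="us-west-2"

# Bake in environment-specific config/tooling as needed, e.g.:
# COPY conf/ /conf/
```

You then build this image and run the resulting container as your interactive shell for that environment.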
Thats helpful to keep in mind
I’m going to document my journey as a noob with this and see if I can come out with some notes/docs for setting this up
Yep AMA on Geodesic, will try to answer
also tune in on Wednesday and happy to answer and demo then
find time in #geodesic
@oscarsullivan_old https://sweetops.slack.com/archives/CB84E9V54/p1554739203014100
There is 1 event this week
correct?
ty
That’s local to your time
Awesome!
BST
was my next question haha
I’ll try and revisit my PR
but maybe read this instead of the master branch readme
What Update the guides with clearer examples Add Example project that I actually use with Geodesic and Terraform Why Several more weeks worth of experience using the tools Some clear errors in t…
it was written 2 weeks later I think
(which is pretty significant at my rate)
I also share a TF project that I literally use
Awesome
that shows you HOW to use geodesic
thanks for all the info
how to leverage it
and have one tf project for say your API that can be used for dev/staging/prod without duping files etc
np, catch you wednesday
see you then!
Thanks @oscarsullivan_old !!
Note that https://github.com/cloudposse/reference-architectures though a bit hard in itself to grasp, shows how Geodesic was designed to be used. Reference Architectures will actually generate your Geodesic Docker container source repos for you.
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
@Jeremy G (Cloud Posse) a nice feature for reference-architectures would be to just setup the geodesic modules and not affect any AWS accounts
I give it my existing account IDs and any other key details like root account ID
and it generates the geodesic module repos
reference-architectures is very prescriptive, and its primary purpose is to solve the cold-start problem and get you up and running on AWS quickly, starting from nothing. Importing your existing infrastructure and making sense of it automatically is an entirely different concept and a whole other project.
I pretty much agree with Jeremy
it’s a realllllly hard problem to “generalize” a solution where we are not in control of the configuration
(maybe one day!)
No right, I get that. But ref arch already does this in linear steps, right? It builds the modules then sets up the accounts, or is it all intertwined?
It is linear
it’s certainly possible (just not a priority yet)
That makes sense. Perhaps you could point me to the file that triggers it… although I imagine it is the Makefile
2019-04-09
i’ve noticed the build-harness has a lot of cloudposse references… is forking this repo also recommended when getting set up with geodesic? it seems to be a dependency and affects commands like readme generation and such.
@Josh Larsen yes/no - I think it’s a great idea to fork it for your own org’s needs
that said, there’s no easy way to take advantage of your fork in our repos
You’d have to fork it to use your fork of the readme generator, for example. The readme generator references CP a lot
2019-04-10
If you add sops to the geodesic
RUN wget https://github.com/mozilla/sops/releases/download/3.2.0/sops-3.2.0.linux -O /usr/bin/sops && chmod +x /usr/bin/sops
and set the env vars
RUN curl https://keybase.io/user/pgp_keys.asc | gpg --import
ENV SOPS_KMS_ARN="arn:aws:kms:xx-xxx-x:xxxx:key/xxx-xxx-xxx-xxx-xxxx"
ENV SOPS_PGP_FP="APGPKEY,ANOTHERPGPKEY,ETC"
You can use sops encryption before pushing files into storage so they’re only unencrypted within the container during use
I believe we provide a sops package
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
(so in fact, sops ships with geodesic)
You know… Didn’t know that
it’s pretty good; we use it alongside sopstool
for keeping our tls certs secure in storage
also, if our sops package is not current, feel free to submit a PR against cloudposse/packages to update it.
April 10th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853
2019-04-11
Question I forgot I had yesterday: I know that Geodesic opens a port (range?) for handy kubectl port-forwards. How do I use it, and what version was it released in? (in the last month, or older?)
Yes, that’s supported - it’s been in the container for years
there’s an env inside the container that contains the port number
what We port map a random port into the geodesic container This port is what should be used for proxying kubectl proxy --port=${KUBERNETES_API_PORT} --address=0.0.0.0 --accept-hosts='.*' wh…
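Parameterized, the command from that issue looks like the sketch below. KUBERNETES_API_PORT is assumed to be exported by the Geodesic wrapper; the 8001 fallback (kubectl's default proxy port) and the echo are only so the snippet stands alone outside the container.

```shell
# Use the random host port the Geodesic wrapper mapped into the container;
# fall back to 8001 when the variable is not set (e.g. outside Geodesic).
PORT="${KUBERNETES_API_PORT:-8001}"
CMD="kubectl proxy --port=${PORT} --address=0.0.0.0 --accept-hosts='.*'"
echo "$CMD"   # inside Geodesic you would run this command directly
```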
That said, I think we should rename the port to be more generic
so it can be repurposed/overloaded
Thanks, I was looking for $GEODESIC*
Yea, that would make more sense
I think we should rename it to GEODESIC_PORT
that’s exactly what I thought it was
what’s confusing is in the wrapper
we call it one thing (GEODESIC_PORT) and I guess we end up renaming it in the container
If you open an issue on geodesic, we’ll track it and fix that
Will do, cheers!
Infrastructure as code, pipelines as code, and now we even have code as code! =P In this talk, we show you how we build and deploy applications with Terraform using GitOps with Codefresh. Cloud Posse is a power user of Terraform and have written over 140 Terraform modules. We’ll share how we handl
Here’s how we used geodesic
with Codefresh to achieve GitOps with terraform on Codefresh
What happened with Atlantis?
We’re still using it, but customers have asked us to use codefresh instead
in the end, I think we were able to reproduce a lot of what atlantis does. Still need better locking mechanisms and support of CODEOWNERS
for blocking apply
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
Here’s what it looks like https://github.com/cloudposse/testing.cloudposse.co/pull/75
what Demo of adding a new user bucket why GitOps rocks! =)
2019-04-12
2019-04-13
how does geodesic handle issues of uid/gid when mounting dirs from the host?
could someone point me to the code where that happens?
It does nothing with uid/gid mapping
2019-04-14
Geodesic installs a wrapper script to run the container, which launches the container with something like (but not exactly, and with other options) docker run -it --privileged --volume=$HOME=/localhost. The shell inside the Geodesic container is run as root. The permissions mapping is handled by Docker. I use docker-machine on my Mac, so everything Docker runs runs as me, and the root user inside the Docker container has the same permissions as I do on the host. Files created on the host from Geodesic are owned by me, and files I cannot read cannot be read from inside Geodesic.
I think the behavior is slightly different on linux, where UID/GID within the container are preserved on the host machine, but what @Jeremy G (Cloud Posse) describes is correct on OSX.
This has actually caused a problem with us on our build server. The Jenkins user can no longer clean up some of the workspaces because files end up getting owned by root after unit testing and such =( On my backlog to fix that.
Hrmmm okay I can help address that. So when we run geodesic containers in a headless fashion (e.g. cicd or Atlantis), we never use the wrapper script. We always run it in the “native” way for that platform. So for ECS it’s a task and for Kubernetes it’s a pod in a deployment. We never mount localhost, which is only ever recommended for local development and not the general workflow.
If localhost is not mounted the host machine is always insulated from these kinds of errors.
Keep in mind we always build an image that uses geodesic as its base and we don’t run geodesic directly
Ah, I meant the more generic problem of root files inside a container against a local mount becoming root owned files on the host operating system. This isn’t even using geodesic. Though, a lot of our engineers here do use linux as their day-to-day O/S so it would affect them as well.
On linux, host permissions for processes running in Docker containers should be managed with user namespaces: https://docs.docker.com/engine/security/userns-remap/
Linux namespaces provide isolation for running processes, limiting their access to system resources without the running process being aware of the limitations. For more information on Linux namespaces, see Linux…
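Per the Docker docs linked above, the remap is configured in the daemon config on the Linux host; a minimal sketch is below. The value "default" tells dockerd to create and use the dockremap user, and the daemon must be restarted for it to take effect. The file lives at /etc/docker/daemon.json on the host.

```json
{
  "userns-remap": "default"
}
```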
The awkward thing though is I think we want to be able to be root in the geodesic container while developing locally
but we want files owned by the host user
user namespaces give you that: it maps whatever UIDs you user in Docker to whatever you want on the host
Yeah, ubuntu runs as root and sets files as root:root
2019-04-15
ah ok, that explains what i’m seeing
on macos it just works, but on our linux workstations the host mounted files are written with root:root
Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on…
I was hoping they supported uid/gid mapping in bind-mount (by now), but it’s not supported
we could technically drop to a non-privileged user in geodesic, but haven’t optimized for that.
There are no events this week
2019-04-16
Hello, everyone. I am trying to follow the cold-start procedure described here https://docs.cloudposse.com/reference-architectures/cold-start/ but it seems that it is outdated; right now I have an issue trying to create users. Could anybody help me?
@Eugene Korekin sorry for the troubles!
yes, the cold-start docs are quite out of date and refer to an older implementation.
our current process is being kept up to date here: https://github.com/cloudposse/reference-architectures
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
thanks, Eric! I’ll take a look
@ericthortonjohnson could you please tell me what would be the best approach if I already created the master aws account in aws orgs? I think I can’t remove it
@Eugene Korekin
I used the older implementation to create it
soooo you have a couple of options.
did you create those AWS accounts by hand or using terraform?
I used the existing account as the base for the procedure described in the old cold start, the master account referring to the existing account was created in aws orgs in the process
ok, so option (1) is to continue going down the path you were. there’s nothing wrong with that per se.
so the master account was created via terraform, but it already has some ec2 instances etc
so there’s no one way to bring up this infrastructure. the idea is that geodesic is just a runtime environment.
what you do inside of it is entirely open-ended.
I’m stuck on that path, as the user creation step requires some content present in SSM and I can’t find how to create that content with the old procedure
hrmmm so most likely, the user creation stuff will then need to use older versions of the modules that pre-date the SSM dependencies
i can’t say what version that would be.
to start fresh though, would involve the following:
- password reset on each account
here is the error I have
- login to each account, accept t&c
- terraform destroy accounts
then you have a clean base
is there a way to somehow import existing accounts?
they are already in use
you can import existing accounts, however @Jeremy G (Cloud Posse) recently went through this with one of our current clients and if every parameter doesn’t match 100% it wants to recreate them
(jumping on a )
I see, is there a way to only create a new account (let’s say ‘testing’) and leave the master one as it is (including all the existing users)?
yes, you can probably do that
use the accounts_enabled flag to only select testing
e.g. get your feet wet with the system
there’s a lot of moving pieces, so getting your hands dirty with one account would be a good idea
so, do I just need to skip ‘make root’ and start right from ‘make children’?
I’ve just tried it, and it doesn’t seem to work, the make command doesn’t provide any output
You need to set up your configs/root.tfvars with all the right configuration (hopefully it’s reasonably self-explanatory), including accounts_enabled limited to the accounts you want to create. Then set yourself up with AWS admin credentials in your “root” AWS account and run make root. This creates the children accounts and sets up roles and network CIDRs and so forth and creates a bootstrap user you will use for the rest of the configuration.
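For example, limiting the run to a single child account might look like this in configs/root.tfvars (a sketch; the exact variable names can vary between reference-architectures versions):

```hcl
# configs/root.tfvars (sketch): only bring up the "testing" child account
# on this pass, leaving the other children for later.
accounts_enabled = ["testing"]
```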
but it won’t change anything inside the root account in the process, right?
It will definitely change things inside the root account. The root account is where your users are created and roles for the children account are created and DNS entries for children DNS subdomains are created.
I don’t want to change any of the existing users, won’t it be possible to proceed without that?
in other words, would it be possible to use the existing user entries in the master account and reuse them in the children ones?
so our reference architectures don’t provide that level of configurability b/c the number of permutations is insurmountable
however, all of our terraform modules are composed of other modules
you can pick and choose exactly what you want to use
2019-04-17
Office Hours Today from 11:30 AM to 12:20 PM at https://zoom.us/j/684901853
(PST)
2019-04-22
There are no events this week
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
2019-04-23
question: let’s say my terraform-root-modules is a private repo… how can i get my github keys into geodesic so terraform init will be able to retrieve the modules?
@Josh Larsen you have a few options
are you running this under CI/CD or as a human?
here’s how we do it with #codefresh https://github.com/cloudposse/testing.cloudposse.co/blob/master/codefresh/terraform/pipeline.yml#L99-L103
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
then we add the ssh public key as a deploy key to the root modules repo
ideally i’d like to do both
but let’s start with human for now
i’m thinking something like cat /localhost/.ssh/id_rsa | ssh-add - would do it, but would just have to do that every time i started the shell. or is there a better way?
By default, Geodesic will start an ssh-agent and add ~/.ssh/id_rsa at startup. You can also write your own scripts to be run at startup. See https://github.com/cloudposse/geodesic/pull/422
what In addition to some small cleanups and additions, provide a capability for users to customize Geodesic at runtime. why Because people vary, their computers vary, what they are trying to accomp…
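As a sketch of what such a startup script could do (the hook location depends on your Geodesic version — see the PR above for the exact mechanism — and the key filename here is hypothetical):

```shell
# Hypothetical user customization run at shell startup: add an extra deploy
# key to the ssh-agent if it exists. id_rsa_github is an illustrative name,
# not a path Geodesic itself knows about.
KEY="${HOME}/.ssh/id_rsa_github"
if [ -f "$KEY" ]; then
  ssh-add "$KEY" 2>/dev/null || true   # ignore failures (e.g. no agent)
fi
```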
are you on a mac or linux?
on linux, we mount your ssh agent socket into the container
we can’t do that on a mac
the other option is to store your ssh key in SSM
i’m on mac… but i think i get the idea. thanks.
2019-04-24
I was experimenting today with terragrunt vs vanilla terraform init and wanted to point out a few important differences between these two similar methods:
terragrunt { source = "<blueprint source>" }
and
terraform init -from-module=<blueprint source>
terragrunt allows the use of:
- override.tf [1] files in the CWD
- “mix-in” files in the CWD (*.tf files that do not match blueprint filenames), useful for adding one-off resources to a blueprint
- “upstage” files in the CWD (*.tf files that do match a blueprint filename; these replace the contents of a blueprint source file of the same name), useful for removing blueprint resources
This is due to the fact that terragrunt init creates a tmp dir, clones the SOURCE to the tmp dir, and then copies/overwrites (aka “upstages”) all files in the CWD to the tmp dir (overwriting any duplicates).
Whereas terraform init -from-module= requires that the CWD contains zero files matching *.tf or *.tf.json. This prevents all the above techniques: overrides, mix-ins, and upstage files.
Has anyone thought about how to support/implement the override, mix-in and upstage patterns without the use of terragrunt?
[1] https://www.terraform.io/docs/configuration-0-11/override.html
Terraform loads all configuration files within a directory and appends them together. Terraform also has a concept of overrides, a way to create files that are loaded last and merged into your configuration, rather than appended.
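One way to approximate terragrunt's upstage behavior without terragrunt is a small wrapper that does the same copy dance before running terraform init. A sketch (directories here are stand-ins for the cloned source and your working directory):

```shell
# Emulate terragrunt's staging: copy the blueprint into a tmp dir, then
# overlay the CWD's *.tf files on top so same-named files replace the
# blueprint's and extra files become mix-ins.
WORK="$(mktemp -d)"       # where terraform would actually run
BLUEPRINT="$(mktemp -d)"  # stand-in for the cloned remote source
CWD_DIR="$(mktemp -d)"    # stand-in for your working directory
printf 'resource "null_resource" "from_blueprint" {}\n' > "$BLUEPRINT/main.tf"
printf '# local upstage wins\n' > "$CWD_DIR/main.tf"
cp -a "$BLUEPRINT/." "$WORK/"   # 1. stage the blueprint
cp -a "$CWD_DIR/." "$WORK/"     # 2. overlay local files (duplicates overwrite)
grep 'upstage wins' "$WORK/main.tf"
# ...then: (cd "$WORK" && terraform init && terraform plan)
```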
See our overrides strategy
We support that too, only it’s explicit
is that the overrides/ directory?
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
Yes
Not “ideal” imo, but identical strategy
We copy to “.” (the current directory) rather than to a “cache” folder
Terraform just does a basic check for any .tf file
IMO that should be optional
Perhaps a -force-copy arg
I’m trying to grok the two init calls to terraform there
the first one I’m guessing makes use of TF_CLI_ARGS_init, and the second one does what?
loads any new modules that were included as a result of the new files being cp’ed?
It’s self-mutating code
At first you init with no overrides which pulls down the source
So we need to re-init
yup ok got it
The second one would fail if we tried to init again from a remote module
So we null it out
We could also unset it from the env
Both ok
Sitting down for early dinner
ok thanks for the explainer
2019-04-25
Hi- if I have started the “quick start” with example.io, had an issue and ran make clean (after terraform had created some things), how can I most easily “reset”?
I want to run terraform destroy
essentially
I am getting, for example-
* module.staging.aws_organizations_account.default: 1 error(s) occurred:
* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
status code: 400, request id: 5b93365b-67b0-11e9-808b-3fc6f3a60880
* module.dev.aws_organizations_account.default: 1 error(s) occurred:
* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
status code: 400, request id: 5b9335cd-67b0-11e9-b64d-9b693972455f
* module.prod.aws_organizations_account.default: 1 error(s) occurred:
* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
status code: 400, request id: 5b92998f-67b0-11e9-b23d-dd34fe4bf6c3
* module.audit.aws_organizations_account.default: 1 error(s) occurred:
* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
status code: 400, request id: 5b93362c-67b0-11e9-a70f-ffba4d75ead7
@Mike Pfaffroth I recommend starting here instead: https://github.com/cloudposse/reference-architectures
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
@Erik Osterman (Cloud Posse) I am running that (I’m in the make root stage)
so terraform destroy will fail when trying to delete the accounts
you can destroy the accounts after jumping through a lot of hoops (password reset, login, accept t&c, etc - for each child account)
right- I guess what I was trying to say is- is that process documented anywhere?
I know I’m in a bad position, I am trying to find a similar checklist
aha! yes, good question
no, we have not documented it. @Jeremy G (Cloud Posse) would it be relatively quick for you to open an issue against reference-architectures that describes the process you took? It’s been a long enough time that I can’t recall every step.
i just remember doing the password reset on every child account so that I could log in
then i think there was some kind of terms & conditions I had to accept (checkbox/click)
and after that, I think it’s sufficient enough to retry the terraform destroy
that said, I don’t think it’s possible to reuse the same email address on round 2
@oscarsullivan_old or @Jan or @Josh Larsen might have some more recent recollection
2019-04-26
@Erik Osterman (Cloud Posse) I did not delete any accounts. Not only are there lots of hoops to go through, but also deleting accounts take a long time (90 days, I think) and the account email address remains permanently associated with the “deleted” account, and therefore cannot be used on a new account.
@Mike Pfaffroth What I recommend is to reuse the created accounts. Run make root up to the point where terraform complains that it cannot create the accounts, then import the accounts into Terraform using terraform import.
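The module addresses for those imports match the errors quoted above; the account IDs are placeholders you must replace with the real 12-digit IDs from your AWS Organization. This sketch only prints the commands rather than running them:

```shell
# Print the terraform import commands for adopting existing child accounts.
# Replace each <ACCOUNT_ID_*> placeholder with the real 12-digit account ID.
for env in staging dev prod audit; do
  echo terraform import "module.${env}.aws_organizations_account.default" "<ACCOUNT_ID_${env}>"
done
```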
interesting… I will try that approach. Appreciate the tip.
hm… even on a brand new account I am not able to create these- it always dies when trying to create the others:
* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
status code: 400, request id: 7de7c003-6846-11e9-9002-752f3d4639b5
I changed the email address as well, just to make sure it wasn’t trying to create ones that were already there
I am the root account:
➜ reference-architectures git:(master) ✗ aws sts get-caller-identity
{
"UserId": "1redacted7",
"Account": "1redacted7",
"Arn": "arn:aws:iam::1redacted7:root"
}
is there a special permissions I need? or is there a documentation process for signing up for an account in the right way within an organization?
Sounds like maybe you’re not provisioning from the top-level payer account
this is the “root” account
If you are already in a subaccount, AWS does not let you create more subaccounts
so if I understand it correctly I need to sign up for a brand new account, and then geodesic creates organizations and IAM users inside that account for each environment @Erik Osterman (Cloud Posse)?
yes, more or less. you can technically use an existing root level account too, but you run the risk of conflicting resources.
Our reference-architectures are what we use in our consulting to stand up new accounts for our customers. We have a very specific focus.
yup- totally understood. just wanted to make sure I understood how it worked. Thanks for your help!
2019-04-29
There are no events this week
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)