#geodesic (2019-06)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2019-06-03
There are no events this week
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
2019-06-05
I get this error while trying to create tfstate-backend: Error configuring the backend “s3”: Not a valid region: eu-north-1. Is eu-north-1 not allowed?
Current Terraform Version: terraform 0.11.10. Use-cases: AWS has just publicly announced the availability of the eu-north-1 (Stockholm) region: https://aws.amazon.com/blogs/aws/now-open-aws-europe-sto…
office hours starting now: https://zoom.us/j/684901853
2019-06-06
Hi folks, it’s been a while! I’ve got a tiny question regarding geodesic and direnv: I’d like to automate the execution of chamber to fetch a stored GitHub token and private key after assuming a role. I thought that having a .envrc file in /conf that does that would be a good idea, but it seems that direnv is not running after assume-role. Any pointers on how to achieve that?
Hrmmm it should definitely operate even after assume role
Are you running a current version of geodesic?
Ohhhhhhh here’s what maybe is happening. You want it to rerun after assume role, however it runs only once
Exactly!
You would need to flush the direnv cache so it triggers again
I forget how to do that
What would be the easiest way to run a post-assume-role command? It doesn’t need to be direnv; I would just want to execute some shell commands. Is there any way?
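For anyone hitting the same thing, the kind of .envrc being discussed might look roughly like this (an untested sketch; the chamber service and key names are hypothetical). Note that direnv evaluates it when you enter the directory, so after assume-role you would still need to force a re-evaluation, e.g. with direnv reload:
# /conf/.envrc (hypothetical) - fetch secrets via chamber once a role is assumed
export GITHUB_TOKEN="$(chamber read github token -q)"              # -q prints only the value
export GITHUB_PRIVATE_KEY="$(chamber read github private_key -q)"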
2019-06-10
There are no events this week
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
@Erik Osterman (Cloud Posse) quick question:
Is this https://docs.cloudposse.com/reference-architectures/cold-start/ still pretty much up to date?
it looks like it may be out of date?
It’s mostly out of date
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
2019-06-11
In the cold start instructions, accounts are provisioned, but in the process an e-mail account like [[email protected]] is needed. Is there a workaround? We want to use our general department e-mail address.
Use plus addressing (e.g. ops+dev@example.com and ops+prod@example.com both deliver to the ops@example.com mailbox). By default the reference architectures in the repo above do that. See root.tfvars
Each AWS account requires a unique email address because that is how AWS identifies an account.
How can we use geodesic with, for example, an mgmt VPC that is connected to a staging VPC and a prod VPC? We use Bitbucket Server throughout the organization. How does this work with the different accounts? Are there examples of custom (Terraform) modules?
Think of geodesic as just a preconfigured shell with all the tools required for cloud automation
What you describe is a configuration not a tool
So you would add the configuration to geodesic and run it
This is where our root modules come in
Those provide blueprints for typical configurations like the ones you described
@JeroenK in https://github.com/cloudposse/terraform-root-modules, there are a few examples of VPC peering:
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
Cross-account VPC peering: https://github.com/cloudposse/terraform-root-modules/tree/master/aws/vpc-peering
Kops - legacy account (created manually) VPC peering: https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops-legacy-account-vpc-peering
EKS - backing services (where you run things like RDS, ElastiCache etc.) VPC peering: https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks-backing-services-peering/main.tf
as @Erik Osterman (Cloud Posse) mentioned, geodesic has nothing to do with configuration (code, data, settings); it’s a cloud automation shell with many tools inside, used to secure access to AWS (assume role or enterprise auth like Okta) and to orchestrate cloud operations
configuration usually consists of code (terraform, helm, helmfile, etc.) and data (variables, ENV variables, other settings)
for code, we use a module hierarchy: root modules (a catalog of module invocations to provision entire infrastructure) - infrastructure modules (e.g. RDS, EKS, ECS - these are usually combinations of other low-level modules) - low-level modules (usually to provision one or a few AWS resources, e.g. IAM role, S3 bucket with permissions, VPC with subnets, etc.)
all those modules are usually “identity-less”, meaning they don’t care where and how they will be provisioned; all configuration is provided from TF variables, ENV variables, SSM param store, Vault, etc.
to directly answer your question, what we do is this:
- Create low-level modules (e.g. VPC, IAM, S3, etc.)
- Create infrastructure modules (e.g. EKS, ECS, RDS, Aurora), using the low-level modules
- Create a reusable catalog of module invocations (we call it root modules) that uses all the other modules from above
- Provide configuration to the modules (usually using TF vars from files or the Dockerfile, ENV vars, and SSM param store using chamber - depends on the use case and whether the data are secrets or not)
- And finally, from geodesic, log in to the AWS account (by assuming an IAM role); all configuration gets populated from the sources described in #4, and you provision infrastructure for the particular account using the root module invocations (which, once inside the geodesic shell for the particular AWS account, already know how and where they will be provisioned since they got all the configuration)
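As a rough illustration of that last step (the module name and paths below are placeholders, not a prescribed layout):
# inside the geodesic shell for the target account
assume-role                              # log in by assuming the IAM role for this account
cd /conf/eks                             # a root-module invocation baked into the image (example name)
terraform init                           # configuration comes from TF vars, ENV vars, .envrc, etc.
chamber exec eks -- terraform plan       # chamber injects secrets from SSM Parameter Store as ENV vars
chamber exec eks -- terraform apply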
@Erik Osterman (Cloud Posse) do you have any docs or advice for upgrading to the most recent geodesic with terraform 0.12, with the purpose of upgrading to 0.12 wholly? i just noticed when i do make deps now, terraform says the directory is not totally empty (before it would just ignore the envrc/tfvars). also, should i be concerned that it may distort my remote state file?
@Josh Larsen - we ran into this too
it’s aggravating.
I can give you a temporary workaround (haven’t tested it), but I think it should work
basically, run terraform init blah and it should init the files to the blah folder
ok, but that might mess with the tfstate pathing… the new state file for /account-dns might change to /blah/account-dns, no?
then set export TF_DATA_DIR=$(pwd)/.terraform
oh
i see what you mean.
guess i could copy it all up one folder after init, just clunky
for now, I suggest overloading the deps target until we have a cleaner workaround, e.g. doing the extra copy step
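For reference, the workaround sketched out above would look roughly like this (untested; blah and the module source are placeholders):
cd /conf/account-dns
terraform init -from-module=<module-source> blah    # 0.12 refuses to init into a non-empty directory
export TF_DATA_DIR=$(pwd)/.terraform                # keep the .terraform data dir at the original path
cp -a blah/. .                                      # or do the extra copy step back up a level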
ok, then it’s safe to assume geodesic is not really fully in line with 0.12 quite yet?
It’s fair to say our strategy of terraform init -from-module=.... does not work as-is with 0.12
ok, fair enough. we will try working around it. i do like that adding the version to .envrc changes the terraform version. nifty.
yea, happy with that part
so there’s a -force-copy arg now, but I wish it applied force to the “right” copy operation
so all the terraform commands support specifying the path; that path can be added to the TF_CLI* envs
2019-06-12
Public #office-hours starting now! Join us on Zoom if you have any questions. https://zoom.us/j/684901853
is it possible to make changes and not have to rebuild the shell every time?
Use /localhost
Also, we have this PR pending for docs: https://github.com/cloudposse/docs/pull/460
what Document workflow for developing terraform modules locally why Existing documentation does not cover the workflow
Amazing
Thanks so much, that provided a ton of clarity
(@Jeremy G (Cloud Posse) )
When I follow these instructions I get an error:
Error copying source module: error downloading `file:///Users/justin/infrastructure/terraform-root-modules/aws/vpc` : source path error: stat /Users/justin/infrastructure/terraform-root-modules/aws/vpc: no such file or directory
I followed the exact folder structures and everything
Somewhere that is referenced
As a convenience, Geodesic mounts your home directory into the Geodesic container and creates a symbolic link so that you can reach your home directory using the same absolute path inside Geodesic that you would use on your workstation. This means that as long as you do your development in directories under your home directory (and on the same disk device), your workstation's absolute paths to your development files will work inside Geodesic just as well as outside it.
Sorry I must be missing something
Haven’t tested that myself
Mapping of Home directory was added in Geodesic 0.94.0 https://github.com/cloudposse/geodesic/releases/tag/0.94.0
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
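If it helps to debug, a couple of quick checks from inside the geodesic shell (assuming the home-directory mapping described above is in place; the second path is just the one from the error):
ls -ld /localhost                                                  # your host home directory, mounted by geodesic
ls /Users/justin/infrastructure/terraform-root-modules/aws/vpc     # the host absolute path should also resolve on >= 0.94.0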
2019-06-13
Sorry, another noob question:
How do I get domain resolution to work for the member accounts? Let’s say app.dev.example.com in the dev account, just being a static S3 site
I have been digging around the root modules trying to figure this out and so far no luck
so a few things are going on
first you need to delegate dev.example.com to the dev account
the account-dns root module handles creating the zone and is invoked in each child account
then the root-dns module delegates the DNS to each child account
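A quick way to sanity-check that delegation from your workstation (example.com here stands in for the real zone):
dig +short NS example.com          # should list the root account's Route53 name servers
dig +short NS dev.example.com      # should list the dev account's Route53 name servers
dig +short app.dev.example.com     # should resolve once the A record exists in the dev zone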
So I went through the setup of the reference architectures. I have the root account with the NS records set for the dev account. In the dev account the NS records are set up as well, and then I created an A record in the dev account to point to the bucket
@Erik Osterman (Cloud Posse) would the original hosted zone I had setup for the root domain be interfering with it?
@jober are you using https://github.com/cloudposse/reference-architectures ?
@Erik Osterman (Cloud Posse) yes
Everything is working as far as the account shells and such. Just having the issue with Route53. I have a suspicion that the original hosted zone setup on the root account is affecting the reference-architecture setup
I moved the registrar to point to the new name servers and moved over any legacy record sets; still no luck
Got it to work
Great job!
What was it in the end?
Forgot to update the registrar to the new nameservers
knew it was going to be a noob mistake, thanks for the patience
2019-06-17
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
2019-06-19
was there something specific to fix this assume-role (win10/wsl/ubuntu18lts)?
all good; found the file from the last time I updated geodesic. ENV ASSUME_ROLE_INTERACTIVE=false ftw
How are you supposed to use the legacy s3 storage? https://github.com/cloudposse/geodesic/commit/4170a58766fa925800c4293886b32da8d254bff9
I tried adding the following to the Dockerfile:
ENV TF_BUCKET_PREFIX=
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
getting the feeling I’ll have to clear TF_BUCKET_PREFIX in the .envrc of every folder, as it still populates it with path depth I don’t want
- [direnv] use new TF bucket prefix method TF_BUCKET_PREFIX_FORMAT selects the format to use for setting the TF remote state bucket prefix/key: the original $(basename $(pwd)) leaf-only form…
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
yup; works. I was trying to cheat and use the .envrc file in a folder higher up (i.e. /conf/frankfurt/nginx/; I put the file in frankfurt) to set it to use TF11 while I migrate some of the easier bits in my control first. Because it changes the env var when use terraform is initialised, it was screwing with what I expected
hrmmm
something like that should work, but maybe there’s a bug somewhere in what we have
it’s just because the old one was root-based, so it gave no truck about /{this folder}/nginx. I got around the region issue using workspaces
then it was fixed recently
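For reference, the per-folder .envrc override being described might look something like this (a sketch only; use terraform is the direnv helper referenced above, and the exact directives depend on your geodesic version):
# /conf/frankfurt/nginx/.envrc (hypothetical)
use terraform                                    # geodesic's direnv helper for terraform projects
export TF_BUCKET_PREFIX_FORMAT="basename-pwd"    # keep the original leaf-only state key
export TF_BUCKET_PREFIX="nginx"                  # or pin the prefix explicitly for this folder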
Is there a way to run multiple geodesics at the same time? It always seems to boot into whichever is running first
So you would like multiple sessions of the same image?
I think we could add an option for that
Right now it gives the Docker container the name of the image so it doesn’t work with concurrent sessions
It always execs into the running image if one is found
timezone diff
So I have root.xxx and prod.xxx. If I make all on root, it boots into that container; if I then do the same on prod, I end up in root’s container. Ideally I should be able to have both open.
That’s not right! Have you installed the wrapper lately?
Try reinstalling it
i tend to use make all habitually. seemed odd tbh
geodesic’s up to date (hence all the “oh fudge” over that assume-role thing i’d been avoiding that breaks in WSL). I’ll dig deeper if it’s not expected to do that, as it’s probably something stupid
I think this is what you want
we have that in many dockerfiles
#office-hours starting now! https://zoom.us/j/684901853
Have a demo of using Codefresh for ETL
question regarding geodesic in CI/CD / automated environments. looking at https://github.com/cloudposse/testing.cloudposse.co/blob/master/codefresh/terraform/pipeline.yml i think i’m missing how the assume-role actually gets executed. as far as i can tell, there’s no way to set up aws-vault to be completely non-interactive (it always asks for the passphrase prompt). so, in a sentence: how are roles getting assumed in CI/CD environments?
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
aws-vault is for humans
in the CI/CD context, the credentials are provided via alternative means
For example, one way is to update a Codefresh shared secret with temporary credentials
E.g. if you don’t like the idea of long-lived creds stored in codefresh, this is one way
#!/bin/bash
set -e

eval "$(aws-vault exec cpco-testing-admin --assume-role-ttl=1h --session-ttl=12h -- sh -c 'export -p')"

output="/dev/shm/codefresh.yaml"

cat <<__EOF__ >$output
apiVersion: "v1"
kind: "context"
owner: "account"
metadata:
  name: "aws-assume-role"
spec:
  type: "secret"
  data:
    AWS_SESSION_TOKEN: "${AWS_SESSION_TOKEN}"
    AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
    AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
    AWS_SECURITY_TOKEN: "${AWS_SECURITY_TOKEN}"
    AWS_PROFILE: "default"
    AWS_DEFAULT_PROFILE: "default"
    AWS_VAULT_SERVER_ENABLED: "false"
__EOF__

codefresh auth create-context --api-key $CF_API_KEY
codefresh patch context -f $output

rm -f ${output}
how are you able to use aws-vault without the manual passphrase input in that script?
Set the AWS_VAULT_FILE_PASSPHRASE env var
oh wow thanks! been looking all over and never found that
i ended up writing a little tool, since working with aws-vault in ci pipelines was a bit too clunky for my tastes: https://github.com/BetterWorks/go-assume. it’s a quick and dirty script i threw together this afternoon but it works
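For anyone else searching, the non-interactive setup mentioned above boils down to something like this (the file backend and the profile name are assumptions for illustration; the passphrase should come from a CI secret, never hard-coded):
export AWS_VAULT_BACKEND=file                              # encrypted file store instead of an OS keychain
export AWS_VAULT_FILE_PASSPHRASE="${VAULT_PASSPHRASE}"     # injected by the CI system
aws-vault exec cpco-testing-admin -- terraform plan        # runs without prompting for a passphrase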
@dustinvb has joined the channel
2019-06-20
2019-06-21
2019-06-24
There are no events this week
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Jul 03, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
2019-06-25
Hey everyone, following the quick start docs at https://docs.cloudposse.com/geodesic/module/quickstart/ and i’m running into:
docker run -e CLUSTER_NAME \
  -e DOCKER_IMAGE=cloudposse/${CLUSTER_NAME} \
  -e DOCKER_TAG=dev \
  cloudposse/geodesic:latest -c new-project | tar -xv -C .
docker: invalid reference format.
See 'docker run --help'.
@sweetops the quick start docs are out of date and not functional. Use the github.com/cloudposse/reference-architectures instead
ah okay. thanks Erik!
Also, archives are here: https://archive.sweetops.com/geodesic/
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
If you get stuck, maybe some nuggets in there.
@dalekurt has been recently working through these
So, I pulled the repo, edited configs/root.tfvars, and exported the aws account’s root master keys to ENV vars. I’m getting:
terraform init -from-module=modules/root accounts/root
Copying configuration from "modules/root"...
Error: Target directory does not exist
Cannot initialize non-existent directory accounts/root.
make: *** [root/init] Error 1
Not sure. @Jeremy G (Cloud Posse) provisioned these this week. Any ideas?
oh, i was running tf 0.12
aha, yes, not updated for 0.12
yeah, that’s my bad haha
Yes, you need to have terraform version 0.11 installed on your workstation.
I will be pushing some updates to the Reference Architecture sometime in the next few days.
The main thing is updating the baseline version of Geodesic, and fixing the race condition in making the Docker images. Currently, Terraform often tries to build the Docker images before all the files are in place.
The other big things are to update Kubernetes to 1.12.9, switch from kube-dns to coredns, and to pin the versions of terraform and helm installed in the Docker images.
@Jeremy G (Cloud Posse) I’m guessing this is the race condition you mentioned?
Error: Error applying plan:
1 error occurred:
* module.account.module.docker_build.null_resource.docker_build: Error running command 'docker build -t root.blvd.co -f Dockerfile .': exit status 1. Output:
#2 [internal] load .dockerignore
#2 digest: sha256:c8c62ec01c2e58b7ca35e6a8231270186f80ab4c83633dace3b2a61f6e9dc939
#2 name: "[internal] load .dockerignore"
#2 started: 2019-06-25 19:16:05.8271816 +0000 UTC
#2 completed: 2019-06-25 19:16:05.8272689 +0000 UTC
#2 duration: 87.3µs
#2 started: 2019-06-25 19:16:05.8274642 +0000 UTC
#2 completed: 2019-06-25 19:16:05.8712445 +0000 UTC
#2 duration: 43.7803ms
#2 transferring context: 2B 0.0s done
#1 [internal] load build definition from Dockerfile
#1 digest: sha256:045540caaa44e0ec4d861b43e9328ac90843e9d94c485db1703c3e559ed7dc07
#1 name: "[internal] load build definition from Dockerfile"
#1 started: 2019-06-25 19:16:05.8264853 +0000 UTC
#1 completed: 2019-06-25 19:16:05.8265771 +0000 UTC
#1 duration: 91.8µs
#1 started: 2019-06-25 19:16:05.8272773 +0000 UTC
#1 completed: 2019-06-25 19:16:05.8602995 +0000 UTC
#1 duration: 33.0222ms
#1 transferring dockerfile: 2B 0.0s done
failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount930443153/Dockerfile: no such file or directory
@sweetops Yes, that is the race condition. You can just run make root again. When it comes time to make the children, the make children command is safe to run multiple times, but to save time, I recommend you make each child one at a time. Or you can wait a couple of days for the next release of the reference architecture.
Okay. I’ve still got some conceptual work to do on my end so I’ll probably just hold.
Since you are waiting on it, I will make an effort to get the release out today.
oh, cool. I mean, no rush really, I don’t want to divert your focus for your day haha.
No worries, it’s one of the things I’m currently working on for a new client.
awesome. I appreciate the help.
@Jeremy G (Cloud Posse) Question for you: when spinning these accounts up, I want to rename the dev account to sandbox. Is that as simple as s/dev/sandbox/ in accounts_enabled[] in root.tfvars, renaming dev.tfvars, and then stage=sandbox in that file?
Honestly I’m not sure. I think it would be best to copy rather than rename /configs/dev.tfvars -> /configs/sandbox.tfvars and then customize what you want installed in the sandbox. Keep in mind that by default the dev environment does NOT include a Kubernetes cluster.
Yes, you also need to change stage = "dev" to stage = "sandbox" inside sandbox.tfvars and replace dev with sandbox in accounts_enabled[] in root.tfvars
I expect that is all you need to do, but I’m not positive.
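In shell terms, the rename described above would be roughly (untested; paths per the repo layout mentioned in this thread):
cp configs/dev.tfvars configs/sandbox.tfvars
sed -i 's/stage *= *"dev"/stage = "sandbox"/' configs/sandbox.tfvars
# then edit configs/root.tfvars and replace "dev" with "sandbox" in accounts_enabled[]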
Also keep in mind that the “stage” name shows up as a part of nearly every label there is, so we try to keep it short in order to avoid running into issues with names getting too long. So I suggest you pick a 3 or 4 letter name instead of a 7 letter name like “sandbox”.
@sweetops We have pushed out a new reference-architecture release for you. Skimped a tiny bit on the testing, so please let me know if you find any issues. https://github.com/cloudposse/reference-architectures/releases/tag/0.14.0
oh awesome. pulling now
Ran into some terraform errors
I was afraid of that. Please paste in this thread
Okay, sending you a log of the run. It’s a bit verbose so I’ll send as a file.
Sent you the full log, here’s the actual errors, for this thread:
I got the log, that’s not actually a Terraform error. Your AWS access key is lacking permissions.
oh, crap you’re right
oohh, i’m in the new account waiting period on this new root account I spun up.
okay, fixed that.
BTW, how did you get out of the waiting period so quickly?
Not Terraform. You need to set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to static (not session) keys with a lot of privileges. Typically they are the root keys of the root account.
yeah, this new aws root account was in the ‘waiting period’, I fixed that now
should have checked that after I spun the account up heh
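In other words, before running make root you want plain static credentials in the environment, along these lines (values are placeholders):
export AWS_ACCESS_KEY_ID=AKIA...        # static root-account access key
export AWS_SECRET_ACCESS_KEY=...        # matching secret key
unset AWS_SESSION_TOKEN                 # make sure no temporary session credentials linger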
so, will failing where it did cause any problems, or will make root pick up where it left off?
It is safe to run make root again, but I added a make root/init-resume just for this sort of thing.
okay, I’ll give make root/init-resume a go then
After make root/init-resume (but not after make root) you need to run make root/provision
okay, init-resume finished super fast
Yes, it’s mainly to get you to a viable docker image. I now realize you were already past that. So make root/provision
okay
running root/provision
When that finishes, that will be the equivalent of having run make root successfully and you can proceed from there.
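To recap the sequence discussed in this thread (targets as named above; the resume path is only needed after a failure partway through):
make root               # provision the root account (safe to re-run after a failure)
make root/init-resume   # resume a failed init without rebuilding everything
make root/provision     # required after init-resume (make root runs it for you)
make children           # then provision the child accounts (also safe to re-run)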
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Jun 26, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-06-26
Hi guys, how do I upgrade Ansible to 2.8.1 on Geodesic 0.112.0?
I have tried apk add ansible, apk add --upgrade ansible, apk add ansible-2.8.1 and apk add ansible-2.8.1-r0 (https://pkgs.alpinelinux.org/package/edge/main/x86/ansible)
Also pip isn’t in the image by default, so I figure it is not pip that installs ansible
Bumps ansible from 2.7.10 to 2.8.1. Commits See full diff in compare view Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a…
so it is pip
❌ . (none) ~ ➤ pip install
bash: pip: command not found
but why isn’t it in my shell, especially when it isn’t removed in https://github.com/cloudposse/geodesic/blob/master/Dockerfile
ohhh it’s a different stage of the build FROM alpine:3.9.3 as python
dang it
Solution for your Dockerfile:
apk add py-pip
pip install --upgrade ansible==2.8.1
Erik, Jeremy, thanks for the help yesterday getting the reference architecture up and running. I was able to finish things up this morning and have it all built. Really impressive stuff. Going through it all this morning trying to get a firm grasp on how it all works.
#office-hours starting now https://zoom.us/j/508587304
2019-06-27
Aha, for that we have https://github.com/cloudposse/packages/tree/master/vendor/assume-role
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages
Easily assume AWS roles in your terminal. Contribute to remind101/assume-role development by creating an account on GitHub.
probably about the same.