#geodesic (2018-07)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2018-07-05

@jonathan.olson has joined the channel
2018-07-20

@Andriy Knysh (Cloud Posse) has joined the channel

@Sebastian Nemeth @Yoann if you have questions about geodesic
and https://docs.cloudposse.com/reference-architectures, please ask them in this channel

Thanks

btw, welcome

@Sebastian Nemeth has joined the channel

@Yoann has joined the channel

@tamsky has joined the channel
2018-07-21

Thanks very much! My q was that each module sets up a hosted zone e.g. root.company.com and prod.company.com… how and where would we set up say, CNAME records that map company.com to prod.company.com?

I am afk so cannot really share any details easily, but you are correct. One geodesic module per AWS account. One delegated DNS zone per account. The root account is responsible for delegating the zones, but the zones must first be created.

testing.cloudposse.co - Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS

If you look for account-dns that is where the sub account provisions a zone

root.cloudposse.co - Example Terraform Reference Architecture for Geodesic Module Parent (“Root”) Organization in AWS.

If you look for root-dns that is where the delegation happens

@Sebastian Nemeth take a look https://docs.cloudposse.com/reference-architectures/

So for a cold start, there are a number of steps that must be followed in order to get everything linked up.

Holy shit guys, it’s Saturday!

V. much appreciate your response on the weekend. Was not expecting that!

here are the steps we perform to provision DNS on all accounts:
- In https://github.com/cloudposse/root.cloudposse.co, we first provision the parent zone (https://github.com/cloudposse/terraform-root-modules/blob/master/aws/root-dns/parent.tf) and the root zone (https://github.com/cloudposse/terraform-root-modules/blob/master/aws/root-dns/root.tf)
root.cloudposse.co - Example Terraform Reference Architecture for Geodesic Module Parent (“Root”) Organization in AWS.
terraform-root-modules - Collection of Terraform root module invocations for provisioning reference architectures

Haha! We’re mostly around in some capacity. :)

- Then we provision all other accounts (testing, staging, prod, etc.)


@Andriy Knysh (Cloud Posse) has provisioned this stack dozens of times

- Then we come back to root and provision zone delegation (name servers)


^ that’s a high-level description of how we provision DNS on all accounts

more details are in the docs https://docs.cloudposse.com/reference-architectures/cold-start/
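The delegation pattern described above can be sketched in Terraform. This is a simplified illustration, not the actual terraform-root-modules code; resource names are placeholders, and in practice the parent and sub zones live in different accounts and Terraform states, with the name servers passed between them via outputs (which is why the cold start ordering matters):

```hcl
# Parent zone, provisioned in the root account.
resource "aws_route53_zone" "parent" {
  name = "cloudposse.co"
}

# Delegated zone, provisioned in the sub account (e.g. testing).
resource "aws_route53_zone" "testing" {
  name = "testing.cloudposse.co"
}

# Back in the root account: an NS record in the parent zone pointing at
# the sub zone's name servers completes the delegation.
resource "aws_route53_record" "testing_ns" {
  zone_id = aws_route53_zone.parent.zone_id
  name    = "testing.cloudposse.co"
  type    = "NS"
  ttl     = 300
  records = aws_route53_zone.testing.name_servers
}
```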

please ask more questions

Okay, so parent zone = company.com…? So then what’s root hosted zone for?
I mean, I get that each module has its own zone and the parent delegates to the modules… but then why a separate root hosted zone?

One more consideration I don’t know if we call out in our docs: you should have an infrastructure domain that is separate from your branded “vanity” domain

The infrastructure domains are for service discovery and should be canonical, while vanity domains are how you expose services publicly.

what’s diff between parent and root?

good question @Sebastian Nemeth

I’ll try

we use [xxxxx.cloudposse.co](http://xxxxx.cloudposse.co) on all accounts, so it’s mostly for:

- To have consistent naming

Yes, for root account not strictly required, but we do it for consistency. To date, we haven’t had to provision any sub domains off of the root account since all it handles is identity. But if for some reason we did need to, it would be on that sub domain.

- Parent zone [company.com](http://company.com) is just the DNS name, but we name our AWS accounts root, prod, staging, etc.

so mostly to have the same naming on all accounts, and also the same naming on all AWS resources (e.g. roles cp-root-admin, cp-testing-admin)

and AWS profiles

okay makes sense… so the logical hierarchy is…
parent
- root
- audit
- prod
- staging
- testing

etc

for DNS yes

from the point of view of AWS organization:

so then, is there ever a use case for actually using root’s dns records?

root (Organization)
- audit
- prod
- staging
- testing

mmhmmm

okay, cool. These structures would really help in the documentation

Agree!

thanks, we’ll add that, nice point

And maybe a line about the concept that each module has its own xxxx.cloudposse.co and parent delegates from the TLD

yep

we have something like that in the docs, but agree, it’s not easy to follow unless we make big notes about that

I am excited about concentrating these questions in channel. We have a lot of it in #announcements, but with so much chatter it’s lost in the noise. Having it in here will help in the feedback loop, so we can review discussions and update docs.

also @Sebastian Nemeth if you have some input, you can create issues in any of the repos or in the docs

Gotcha

not sure if you saw that in the other channel, so here it is again FYI

We just released Cloud Posse reference architectures:
<https://github.com/cloudposse/terraform-root-modules> - Collection of Terraform root module invocations for provisioning reference architectures
<https://github.com/cloudposse/root.cloudposse.co> - Terraform Reference Architecture of a Geodesic Module for a Parent ("Root") Organization in AWS
<https://github.com/cloudposse/prod.cloudposse.co> - Terraform Reference Architecture of a Geodesic Module for a Production Organization in AWS
<https://github.com/cloudposse/staging.cloudposse.co> - Terraform Reference Architecture of a Geodesic Module for a Staging Organization in AWS
<https://github.com/cloudposse/dev.cloudposse.co> - Terraform Reference Architecture of a Geodesic Module for a Development Sandbox Organization in AWS
<https://github.com/cloudposse/audit.cloudposse.co> - Terraform Reference Architecture of a Geodesic Module for an Audit Logs Organization in AWS
<https://github.com/cloudposse/testing.cloudposse.co> - Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS
They show how we provision AWS accounts and what Terraform modules we use.
Complete description is here <https://docs.cloudposse.com/reference-architectures>
We will be improving the repos and the docs. Your input and PRs are very welcome.

@Andriy Knysh (Cloud Posse) sorry didn’t fill you in - @Sebastian Nemeth is with the startup in Helsinki. They found us by way of our reference architectures and modules.

Now they are rolling up their sleeves

very nice


@Sebastian Nemeth we’ll be glad to help and answer any questions

I’m sure we’ll have plenty - feel very lucky to have you guys on the other side of the line. This stuff is painful to get right.
2018-07-23

Can we automate geodesic terraform to build and destroy a cluster in a CI env?

@Yoann We have this on our roadmap for this quarter, but do not have any reference architectures for how to do it.

if you want to get a headstart on it, i can walk you through how we would approach it

a geodesic module is just a container. so if you run that container in a context which provides either the AWS credentials as environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to the container, or where there’s an AWS instance profile with sufficient access credentials, then you can just run the commands you would normally run inside the container

something like:

docker run --env-file /dev/shm/secrets.env --workdir /conf/myproject/ mycompany/staging.mycompany.com:1.2.3 chamber exec myproject -- terraform apply

that’s saying start a docker container with environment variables from a file called secrets.env

change directory to /conf/myproject

run chamber exec myproject to get SSM secrets from myproject

and then run terraform apply

so you see, deploying infrastructure with CI/CD is not really much different than any deployment process, since it’s all bundled in a container.
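As a sketch, a CI job could assemble that same invocation; everything here (image name, project, secrets file path) is a placeholder, and printing the command first keeps the job log auditable:

```shell
#!/bin/sh
# build_ci_command: assemble the docker invocation a CI job would run to
# apply terraform inside a geodesic module container (placeholders only).
build_ci_command() {
  image="$1"; project="$2"; envfile="$3"
  printf 'docker run --rm --env-file %s --workdir /conf/%s %s chamber exec %s -- terraform apply\n' \
    "$envfile" "$project" "$image" "$project"
}

# Print the command the CI system would execute (e.g. via eval or a runner):
build_ci_command "mycompany/staging.mycompany.com:1.2.3" "myproject" "/dev/shm/secrets.env"
```

The CI system only needs docker plus either injected AWS credentials or an instance profile, exactly as described above.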
2018-07-24

hi, is anyone using build chart instructions from here https://docs.cloudposse.com/release-engineering/cicd-process/build-charts/

it seems the latest build-harness doesn’t have the target make helm/chart/build-all

@Igor Rodionov

the same is available in repo though https://github.com/cloudposse/build-harness/blob/17b052bb53da80b80acbf3b7dc3c5e32bbfeb16b/modules/helm/Makefile.chart#L48
build-harness - Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more

i tried with latest, 0.6.13, 0.6.12

Thanks @Erik Osterman (Cloud Posse), that seems pretty straightforward. I will have a look at it once I manage to deploy my kops cluster. ATM I could not install the staging env due to VPC conflicts

what kind of VPC conflicts?

Haven’t encountered that before.

Well I tried to run both kops-aws-platform and backing-services from the staging image and it complains on module.kops_vpc_peering.module.kops_metadata.data.aws_vpc.kops: data.aws_vpc.kops: no matching VPC found

Aha, @Andriy Knysh (Cloud Posse) will tell you how to fix it

it comes down to a bad env setting for the cluster name

i think he’s working on a fix to docs for that.

i ran into this today

Hey guys, is there any way to re-use an existing AWS linked account to initialize testing, audit, staging and prod geomodules?

I have a bunch of old accounts that would be cleaner to re-name and re-purpose.

@Igor Rodionov has joined the channel

@alebabai has joined the channel

@evan has joined the channel

@mcrowe has joined the channel

@sarkis has joined the channel

Yes you should be able to reuse them.

You can use ‘terraform import’ with the resource name and account ID to import the state

@Sebastian Nemeth
2018-07-25

Problems with cold_start doco:

I’ll pin this here, and post fixes and small problems I find with the cold_start doco here…

This line under Provision iam Project to Create root IAM role…
“Update the TF_VAR_root_account_admin_user_names variable in Dockerfile for the root account with your own values.”
Doesn’t make sense… in root.cloudposse.co repo, the root account admin user names are no longer kept in the Dockerfile, they’re in the attached terraform templates (it seems)

@Sebastian Nemeth thanks. The docs show the older version, we updated the repos since then. We need to update the docs (will do it this week).

Cool, I’ll wait for the updates then. Will keep posting what I find in this thread…

you can open an issue for that in the docs. And all other problems you find. Thanks

ah

gotcha

Yea thanks, we see the open issues so we don’t forget to fix it

For now, you need to update the names not in the Dockerfile, but in the .tfvars files

@Sebastian Nemeth pinned a message to this channel.

@Arkadiy has joined the channel

hi, i am looking for a strategy to use conf with a local dev environment. the workaround i did is to manually add docker args in the wrapper as --volume=$(pwd)/aws:/conf. So when i run it within the repo root, it mounts my conf into the wrapper and i can do changes in my IDE and sync them directly from the geodesic shell

hrm

so for local dev, i always cd /localhost

and work from there

the drawback is, I have to copy all standard modules to my repo, because if I do COPY --from=terraform-root-modules /aws/tfstate-backend/ /conf/tfstate-backend/ the root modules are hidden

/localhost = $HOME <– host machine

yes that I am aware of

hrmm can you elaborate on what you want to achieve beyond IDE?

or why this is insufficient with IDE.

actually you are right with this, it’s just habit that I ended up using the default root

i just have too much nesting in my local so felt lazy about this

yea, that is a challenge - also with a lot of docker inheritance it’s not a universal solution for all kind of debugging.

/localhost/github/ops/new/dev.niki.ai

i find it works really well with terraform-root-modules

hmm

but this still lacks 1 thing, not a major glitch: to work with root modules I have to switch to ~, but then to work with my modules I have to switch back to /localhost

can’t we force the docker mount to show the content of the container in the mount directory?

oh

yea, that would be nice. something that automatically drops you in the context of your IDE path

if you open terminal within ide, it drops you there only

basically, just need it to pass the --workdir arg to docker

yes

or do a cd /.... upon entering the shell, perhaps with something in /etc/profile.d

i don’t have an idea right now on what the convention would look like

open to suggestions

perhaps the caller can set a GEODESIC_WORKDIR env?

I think I didn’t follow you completely

what would be the advantage of GEODESIC_WORKDIR

would it be a placeholder for my localhost directory?

to rephrase the question, it would be nice to see in the IDE what is coming from root-modules

that way i have a collective view of what is my total infra, root+custom modules in one dir

the GEODESIC_WORKDIR would be where you would get dropped in the geodesic shell

it would be used only if present.

that would spare the caller the need to remember to cd /localhost/dev/.......
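A minimal sketch of what such a hook could look like; GEODESIC_WORKDIR is a hypothetical variable here, not an existing geodesic feature:

```shell
# /etc/profile.d/workdir.sh (sketch): if the caller exported
# GEODESIC_WORKDIR and that directory exists inside the container,
# start the shell session there instead of the default workdir.
if [ -n "${GEODESIC_WORKDIR}" ] && [ -d "${GEODESIC_WORKDIR}" ]; then
  cd "${GEODESIC_WORKDIR}" || true
fi
```

The wrapper would just pass the variable through, e.g. `docker run -e GEODESIC_WORKDIR ...`, and sessions without it set would behave exactly as today.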

remake - A watcher tool for Make

haha i just came across that tool the other day.

it’s useful for go or say c++ projects. it will watch a directory and call a make target anytime it changes.

not really applicable in this situation, but thought i’d share anyways

collective view of what is my total infra, root+custom modules in one dir

this is really asking for an overlay filesystem

i have seen fuse filesystems that will mimic this

but i think that would be HIGHLY experimental

stepping out

I think @rohit.verma was asking about seeing the files coming from root-modules in the IDE. Basically on the host file system to see the files from the container. If you work with let’s say testing.cloudposse.co, now it’s completely empty in the IDE on the host, you can see the Terraform files only inside geodesic

ohhhh - i see

but in my case, I would do

GEODESIC_WORKDIR=/localhost/Dev/cloudposse/terraform-root-modules

then start the container, and it would drop me in that folder where I iterate.

Then in my IDE, I operate in /Users/erik/Dev/cloudposse/terraform-root-modules
2018-07-26

Hey guys - the cold start docs describe setting up aws-vault on your local machine, but it’s also available in the geodesic container. Does it have to be set up in both places, or does geodesic mount ~/.aws/config or something? Or is it best practice to only use aws-vault from geodesic?

@Sebastian Nemeth that is correct, geodesic mounts ~/.aws/config: https://github.com/cloudposse/geodesic/blob/a430b746c88fb81be59db8b1a42ce4a38dc4a3bd/Dockerfile#L166

@Sebastian Nemeth it’s only needed locally when doing native development

(e.g. docker compose)

e.g. chamber exec app -- docker-compose up

I agree though - the docs should be updated to make that clear.

you’re the second person in the last day to point that out.

Glad I’m helping
2018-07-28

Hey guys, few issues posted under root.cloudposse.co

Blocker for me atm is geodesic doesn’t seem to be mounting ~/.aws/config correctly

ping me this week and we’ll screenshare to get to the bottom of it

Checking
2018-07-29

@rohit.verma anyone on your team using Geodesic on WSL?

2018-07-30

no, we have only linux and mac machines

Thanks! I think we are close to getting it working on windows

@Sebastian Nemeth how about this?



With a couple of tweaks the WSL (Windows Subsystem for Linux, also known as Bash for Windows) can be used with Docker for Windows.

@Erik Osterman (Cloud Posse) I actually followed this guide to set up docker on WSL and it’s a good reference for you to include in your documentation. I have the path set up as per this guide, so that mounting from /c works fine, but our problem is with /home/username.

I went fishing btw: https://stackoverflow.com/questions/51589077/docker-on-wsl-wont-bind-mount-home https://superuser.com/questions/1344407/docker-on-wsl-wont-bind-mount-home
I have the strangest situation using Docker on WSL (Windows Subsystem for Linux, Ubuntu 16.04). I’m trying to bind mount /home/username (or just $HOME for convenience) as a volume in a container, and
Originally asked this on StackOverflow, but I thought SuperUser might be more appropriate. I have the strangest situation using Docker on WSL (Windows Subsystem for Linux, Ubuntu 16.04). I’m tryin…

Ok, I was thinking maybe needed to use /c/Users/username instead of /home/username

Upvoted

hm

That’s an interesting idea

That’s v. interesting.
docker run -it --rm -v /C/Users/sebas_000/AppData/Local/lxss/home/martaver:/test alpine sh
works
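That discovery can be captured in a tiny helper. The lxss location is an assumption that varies by Windows build and WSL install method, and the user names are placeholders:

```shell
# wsl_host_path: translate an absolute path inside the WSL filesystem into
# the Windows-side path that Docker for Windows can actually bind-mount.
# (Sketch only: the AppData/Local/lxss location is not stable across builds.)
wsl_host_path() {
  winuser="$1"   # Windows user name, e.g. sebas_000
  wslpath="$2"   # absolute path inside WSL, e.g. /home/martaver
  echo "/C/Users/${winuser}/AppData/Local/lxss${wslpath}"
}

wsl_host_path sebas_000 /home/martaver
```

Then a mount like `docker run -v "$(wsl_host_path sebas_000 /home/martaver):/localhost" ...` resolves on the Windows side.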

Ok, I think then that’s why I see suggestions to run something like this

mount --bind /mnt/c /c

To create a “proxy” path

You could do that to shorten the path above, albeit ugly

windows 10 build 1503.11 running Xenial 16.04.2 LTS with docker client 17.03.0 connecting to the hyper-V docker-for-windows daemon 17.03.1 and docker-compose 1.11.1 From within wsl things like dock…

Off to bed! Lmk how it goes

@Sebastian Nemeth tracking this issue here: https://github.com/cloudposse/geodesic/issues/199
what Mounting $HOME to /localhost is not working on WSL (Windows Shell for Linux) why Explain why this is a problem and what is the expected behavior. Explain why this feature request or enhancemen…

@Erik Osterman (Cloud Posse) I have replaced kube-lego with cert-manager, https://github.com/nikiai/dev.niki.ai/blob/ae192f5bc7a4b1287b2d611267d200f15be845e4/aws/kops/helmfile.yaml#L303. In case you want to include this in geodesic
Contribute to dev.niki.ai development by creating an account on GitHub.

Added issue here: https://github.com/cloudposse/helmfiles/issues/1
what Add Cert Manager as Kube Lego Alternative why Kube Lego deprecated references https://github.com/nikiai/dev.niki.ai/blob/ae192f5bc7a4b1287b2d611267d200f15be845e4/aws/kops/helmfile.yaml#L303-L335

Kube-lego is now deprecated

Awesome! Thanks… we shall incorporate it

Same annotations?

@rohit.verma @Max Moon what are your thoughts on moving helmfiles to a separate repo that can be rev’d separately?

Especially since they can now be decomposed to helmfile.d

most likely I would recommend using it in root-modules

you can actually rename terraform-root-modules to root-modules

please see my repo, i am kindly doing the same

I’ve struggled with mixing the technologies

Charts are separate

Terraform is separate

Seems like helmfiles should be separate

I work on the principle of environment: the things which aggregately define my environment will end up in the same repo

doesn’t matter from where it came

Environments are to me the account repos

we can create a new repo and in the dockerfile add the same syntax

Those tie everything together

yes, environmet = account repo

Packages come from packages

Terraform code from root modules

Charts from charts.

Seems like clean separations, no?

I agree completely with you

Cool I’ll track this in a separate issue

what i meant is that in the dockerfile, assume we have
COPY --from=terraform-root-modules /aws/tfstate-backend/ /conf/tfstate-backend/
COPY --from=charts /aws/basic /conf/basic (includes, dashboard etc...)
COPY --from=charts /aws/monitor /conf/monitor (includes, kube-prometheus etc...)
COPY --from=services /aws/services /conf/services
But when we are overriding anything, e.g. monitor, we would be overriding it in dev.niki.ai only

As I explained earlier, to view the root-modules charts in my ide, i end up copying all to my code only. On first setup I did copy from terraform root, but then I moved it to my repo

Ok, makes sense. I recommend your approach as well.

also one thing I wanted to ask since I started using helmfile: how can we sync only updated services? helmfile sync always updates all the services

doesn’t it use any hashing technique for a release

We use selectors extensively

Doesn’t that mitigate it?

Basically, I never call sync without a selector

I treat helmfile like a package manager
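For context, selectors match labels declared on each release; a minimal helmfile.yaml sketch, with placeholder names:

```yaml
releases:
  # Label each release so it can be targeted individually with --selector.
  - name: kube-prometheus
    namespace: monitoring
    chart: coreos/kube-prometheus
    labels:
      group: monitor
```

Then `helmfile --selector group=monitor sync` syncs only the matching releases, and `helmfile --selector group=monitor diff` (with the helm-diff plugin) previews the changes first.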

There’s also helmfile diff

I know @Daren loves this

I saw you recommended upsert

Oh yea, on that discussion

No response yet though, right?

This idea comes from the kubectl apply -f --prune where kubectl deletes any resources that aren't referenced. Helmfile should have an option for the sync command that checks what helm charts ar…

(For reference)

yes

I will start migrating things to helmfile.d, then will hash individual files

will share once done

@alebabai is converting them

PR is undergoing testing

i need to do for our micro-services

is there a utility (shell one-liners) for this? or are you doing it manually

i think yq on the name array and echoing into per-name files should do

Yea, it’s called vim ;)

No fun

Oh, you mean for the testing?

That could work, but lots of envs in chamber also need to be set

why we need to worry about chamber when migrating from helmfile to helmfile.d

Oh we just use that to store the envs used by the files

Migrating is actually risky

Let me explain

helm --set does automatic type casting

If it sees a string that looks like a number it makes it an int

If it sees a Boolean string it makes it a Boolean

So when converting to inline values it’s critical to maintain the cast

Errors won’t be obvious

Subtle features of charts might get accidentally disabled if true is a string

While certain annotations require number values to be strings

So the helmfile ‘set’ declaration just calls --set

That’s why we care
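As a concrete illustration of the distinction being described: helm also has --set-string precisely to keep such values as strings. A tiny hypothetical helper (the key/value names are placeholders, not from any real chart):

```shell
# choose_set_flag: emit the right helm flag depending on whether the chart
# expects the value to remain a string. Purely illustrative sketch.
choose_set_flag() {
  key="$1"; value="$2"; keep_string="$3"
  if [ "$keep_string" = "yes" ]; then
    # --set-string keeps "true" / "9113" as the strings the chart may require
    echo "--set-string ${key}=${value}"
  else
    # --set lets helm coerce "true" -> bool and "9113" -> int
    echo "--set ${key}=${value}"
  fi
}

choose_set_flag controller.enabled true no
choose_set_flag service.annotations.port 9113 yes
```

The same caution applies when moving values out of helmfile `set:` declarations into values files: re-check every boolean- and number-looking string.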

got it

@Sebastian Nemeth hows it going?

See these slides? Wonder if it’s related to the HOME debacle



Windows 10 allows you to run native Linux binaries with the WSL. Let’s see how we can use a good development environment for Vagrant and Docker using VMware Wo…

This gave me an idea and I figure out a workaround
Detailed in https://github.com/cloudposse/geodesic/issues/199
what Mounting $HOME to /localhost is not working on WSL (Windows Shell for Linux) why Explain why this is a problem and what is the expected behavior. Explain why this feature request or enhancemen…

That’s awesome! Thanks for sharing

@Erik Osterman (Cloud Posse) is there any update on when this or similar solution will be rolled into geodesic?

We’d really rather just reference geodesic at the trunk than have to fork/build/push our own image

Was hoping today, but still not happy with the current PR solution. It appears that depending on the method of installation or version of WSL that the home path varies.

Ahhh that’s true. I’ve uninstalled WSL and reinstalled it on my system, so my home path might not even be the default one with current Windows.

what $HOME path now is built dynamically (search wsl dir on windows, getting windows and linux users names) why By default $HOME refers on /usr/local/bin, but in case with WSL, docker installed…

Here’s the current progress. A few more minor nitpicks to be solved and then we’ll merge.

We cannot test your path, however, so it might not work
2018-07-31

@i5okie has joined the channel

@Sebastian Nemeth this is the fix for the make install problem you reported:

what Add default value for INSTALL_PATH why On wsl INSTALL_PATH defaults to build-harness path

You’ll need to incorporate that into you account repo

@rohit.verma - you might dig this:

what Add support for Docker for Mac (DFM) Kubernetes or Minikube why Faster LDE, protyping howto I got it working very easily. Here's what I did (manually): Enable Kubnernetes mode in DFM. Disa…

I was able to get geodesic working with “Docker for Mac / Kubernetes” with relatively little effort

@Max Moon @alebabai