#geodesic (2018-12)

geodesic https://github.com/cloudposse/geodesic

Discussions related to https://github.com/cloudposse/geodesic

Archive: https://archive.sweetops.com/geodesic/

2018-12-03

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for starters, what i’m thinking to standardize is the discrepancy between what is TF_VAR_-prefixed and what is not.

Jan avatar

so I am carrying on with the coldstart setup currently

Jan avatar

yes, that would already go a long way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i find we have to do a lot of mapping. my thinking is to focus on some canonical variables that are not TF_VAR_ prefixed

Jan avatar

if the one picked isn’t great we can always change them, all at the same time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

right - need to pick one and run with it and see how it goes. we’ve kind of been flip-flopping.

Jan avatar

that could make a lot of sense

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and then make it easier to access all envs from terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jan what do you think of the pipelining we’re doing?

Jan avatar

Honestly I have yet to get that far

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically, things like chamber exec kops -- helmfile exec ...

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then helmfile calls helm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s a lot of layers, but it’s also very UNIXy
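
For illustration, the layering described above chains roughly like this; a minimal sketch, assuming a helmfile sync invocation against a kops namespace in chamber (the actual namespaces and helmfile targets vary per project):

chamber exec kops -- helmfile sync   # chamber exports the SSM secrets for "kops" as env vars, then runs helmfile
# helmfile renders each release's values and in turn shells out to helm, roughly:
#   helm upgrade --install <release> <chart> -f <rendered-values.yaml>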

Jan avatar

so I mean in general I like to have as few moving parts as possible

Jan avatar

as simple as possible, as complex as necessary

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is a great quote

Jan avatar

but start with consistent and functional and go from there

1
Jan avatar

Another cool one is “Both simplicity and trust, once lost are hard regained”

Jan avatar

that sorta vibe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Perceived Inconsistencies · Issue #317 · cloudposse/geodesic

what Identify and address inconsistencies why we cannot begin to standardize something which is inconsistent.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you jot down anything that comes to mind?

Jan avatar

will do

Jan avatar
Monday’s “refresh my memory” issue is: I cd into testing/ and run make init docker/build install
Jan avatar

and I don’t have a testing image locally built

Jan avatar

and yes the ENV DOCKER_IMAGE= is set

Jan avatar

I recall running into this before

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrm…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
make init
make docker/build install
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that should result in both an image locally and a script installed to /usr/local/bin/xxxx

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what is the error you get when running xxxx?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what I’ve seen happen is that the image was built with some tag Y

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but the DOCKER_IMAGE env is referencing some tag Z

Jan avatar

so afterwards I end up with

 docker images
REPOSITORY                          TAG                 IMAGE ID            CREATED             SIZE
cloudposse/testing.cloudposse.co    latest              938b6ad92d1a        26 minutes ago      1.08GB
jdnza/root.aws.tf                   latest              4e5bad3816ca        5 days ago          1.12GB
Jan avatar

the error on make install

Jan avatar
make install
Password:
# Installing testing.aws.tf from jdnza/testing.aws.tf:latest...
Unable to find image 'jdnza/testing.aws.tf:latest' locally
docker: Error response from daemon: manifest for jdnza/testing.aws.tf:latest not found.
See 'docker run --help'.
# Installed testing.aws.tf to /usr/local/bin/testing.aws.tf
Jan avatar

make docker/build results in

Successfully tagged cloudposse/testing.cloudposse.co:latest
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, in the Makefile, i think something is set wrong.

Jan avatar

yes now I recall

Jan avatar
export CLUSTER ?= testing.cloudposse.co
export DOCKER_ORG ?= cloudposse
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so regarding this…. we’ve gone back and forth

Jan avatar

where this was a pain is that the docs never mention that you need to update the Makefile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i like doing something like:

export CLUSTER ?= $(shell basename $(pwd))
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

etc..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but this is also more “magic”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what in your opinion is the right balance?

Jan avatar

So I mean I would have the Makefile do so

Jan avatar

I mean if you, for example, already do a cp testing.cloudposse.co/ testing.aws.tf/

Jan avatar

then make docker/build should quite happily just build with the pwd

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, i’ll track this in that issue

Jan avatar

export CLUSTER ?= $(shell basename $(pwd))

Jan avatar

would solve it for all other envs
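
A minimal sketch of that idea (hedged, not the actual geodesic Makefile); note that in make, $(pwd) expands as an empty make variable, so $(CURDIR) or an escaped $$(pwd) is needed, and the org still has to come from somewhere:

# derive the cluster name from the directory that was copied, e.g. testing.aws.tf/
export CLUSTER ?= $(notdir $(CURDIR))
export DOCKER_ORG ?= cloudposse
# assumes the image name is composed as org/cluster, matching the output shown earlier
export DOCKER_IMAGE ?= $(DOCKER_ORG)/$(CLUSTER)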

Jan avatar

what to do for org though

Jan avatar

this all makes me come back to my idea of having a coldstart module that renders all the coldstart resources with some high level inputs I provide

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh yes, coldstart module is top of mind

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is actually all feeding into that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i want to basically collect all the datapoints and then use that to create the module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so asking for docker registry url is actually important

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

will add that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Perceived Inconsistencies · Issue #317 · cloudposse/geodesic

what Identify and address inconsistencies why we cannot begin to standardize something which is inconsistent.

Jan avatar

oki macbook battery is about to die, will switch to phone

Jan avatar

something else…

Jan avatar
cd account-dns
init-terraform
terraform apply
Jan avatar

running without terraform plan is an anti-pattern that tf should be stopping with the info from the dynamodb table, or not?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, so best practices are to run terraform plan -out... and then terraform apply planfile
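
For reference, that two-step workflow looks like this (the planfile name is arbitrary):

terraform plan -out=planfile    # review exactly what will change
terraform apply planfile        # apply only what was in the reviewed plan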

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suppose from a consistency perspective, we should write the docs that way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is that what you’re suggesting?

Jan avatar
Allow any of the cold-start resources to be provided as overrides · Issue #318 · cloudposse/geodesic

what I have several resources that a cold-start would create, such as parent domain or possibly pre-existing root account. I would like to be able to set these as overrides so that the are taken in…

Jan avatar

I would be consistent in that, wrapping other tools means that users will come with varying degrees “pre-trained” behavior. Generally I would avoid un-training best practices

Jan avatar

k 2% battery, im out

Jan avatar

is it possible to use geodesic with aws-vault and the osx keychain?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No, need to use the lowest common denominator which is file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use that by default and use it between OSX and geodesic and Windows even :-)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(WSL)

Jan avatar

kk, is it expected then that I need to enter my password on every aws-vault exec accountname -- command that needs aws access

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

instead, run aws-vault exec accountname -- bash

1
Jan avatar

ahhh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you have a sequence of commands you need to run interactively

Jan avatar

thanks mate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, if using direnv, you can add this to the .envrc of a directory

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
eval "$(chamber exec kops -- sh -c "export -p")"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

replace kops with the namespace
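
A minimal .envrc along those lines (backing-services is just an example namespace; direnv evaluates this each time you cd into the directory):

# .envrc
eval "$(chamber exec backing-services -- sh -c "export -p")"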

Jan avatar

yea thats what I was looking for

Jan avatar

cheers

tamsky avatar


instead, run aws-vault exec accountname -- bash

I find I need aws-vault exec accountname -- bash -l … otherwise I see: bash: prompter: command not found

Jan avatar

talking about using it outside of a geodesic container now

Jan avatar

I have yet to have a chance to migrate all my credentials / envs over to this workflow pattern

Jan avatar

very soon I’d like to have an executable bin per aws account/env

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


executable bin
geodesic docker wrapper script?

Jan avatar

exactly. essentially the same as with env.cloudposse.co bin

Jan avatar

just for all my existing accounts

Jan avatar

then assume-role

Jan avatar

and what not

Jan avatar

I have currently about 30+ aws accounts (10 odd via geodesic)

Jan avatar

using the same tooling/patterns would make life easier on my end

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, agree

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so my thinking so far (inspired by our earlier conversation) is to create various Dockerfile templates

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. Dockerfile.root, Dockerfile.audit, Dockerfile.default

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those would be terraform templates (e.g. with $var parameters)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

each one would implement some kind of opinionated account structure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then there would be a terraform module that takes as an input the template you want to use

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and passes the parameters to the template to generate the dockerfile. then it uses local exec provisioner to build the docker image and run it to achieve various outcomes.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i am thinking for coldstart terraform state is either discarded or persisted in git

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and to not deal with remote state or once available, rely on the managed-state-as-a-service offering that’s soon to be released by hashicorp

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ah, yes, you do need to run with bash -l inside of geodesic

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think jan was running this natively

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the -l is to load the /etc/profile.d scripts
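
Putting the two tips together, a single interactive session looks roughly like this (accountname is a placeholder profile):

aws-vault exec accountname -- bash -l
# one password prompt; every command run inside this shell reuses the same temporary credentials,
# and -l sources the /etc/profile.d scripts so the shell helpers are available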

2018-12-04

Jan avatar

So I have another interesting question…. I have a situation where I have 6 existing aws accounts I want to manage with geodesic. later I would want to import a full AWS org too

Jan avatar

the “import” part is of interest to me to understand

joshmyers avatar
joshmyers

Could you describe a little more what you mean by import a full AWS org @Jan?

Jan avatar

Well so to start with I have 1 division’s aws accounts I will want to manage, they already exist

Jan avatar

They already are in an AWS org

Jan avatar

I won’t manage the org to start with

Jan avatar

Just walking, gimme a few minutes

Jan avatar

Actually I guess there is nothing to import

Jan avatar

It’s just where the top-level IAM user is

Jan avatar

And what account the IAM users are created in

joshmyers avatar
joshmyers

Have you looked at using terraform import to create state for those resources once you want them managed? Once in the state, add the terraform resource config into geodesic
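
As a sketch of that flow (the resource address and account ID are placeholders):

# 1. add the matching resource block first, e.g. resource "aws_organizations_account" "dev" { ... }
# 2. then pull the existing account into state and verify the diff:
terraform import aws_organizations_account.dev 111111111111
terraform plan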

Jan avatar

Yea that could solve it should I need to later

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jan there is some info about provisioning or importing an org in our docs https://docs.cloudposse.com/reference-architectures/cold-start/#provision-organization-project-for-root

1
Jan avatar

Brilliant thanks

2018-12-05

Jan avatar

heya, on assume-role I am seeing this now

2018/12/05 13:55:44 Request body type has been overwritten. May cause race conditions
Jan avatar
aws-vault: error: Failed to get credentials for admin (source profile for admin-users-bootstrap): RequestError: send request failed
caused by: Post https://sts.amazonaws.com/: dial tcp: lookup sts.amazonaws.com on 8.8.8.8:53: read udp 172.17.0.2:40888->8.8.8.8:53: i/o timeout
Jan avatar

Ah I think I know the issue

Jan avatar

So the corporate network I am on does not allow using ANY dns other than the local one

Jan avatar

forcing 8.8.8.8 times out

Jan avatar
cloudposse/geodesic

Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. https://slack.cloudposse.com/ - cloudposse/geodesic

Jan avatar

export DOCKER_DNS=${DNS:-8.8.8.8}

Jan avatar

might want to rethink setting that as a default. Many corporates will not allow using an external DNS
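
Until that default is removed, one workaround implied by the ${DNS:-8.8.8.8} line above is to point DNS at the internal resolver before launching the wrapper (the 10.0.0.2 resolver and the testing.example.com wrapper name are placeholders):

export DNS=10.0.0.2        # corporate resolver; overrides the 8.8.8.8 fallback
testing.example.com        # then start the geodesic wrapper script as usual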

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, I think you are right. We can remove this default but expose the option. Would that be good?

Jan avatar

Yea totally

Jan avatar
Dont set docker DNS · Issue #320 · cloudposse/geodesic

what doing a assume-role in a geodesic container while on a network that does not allow using any external DNS servers directly, which is very common in corporate networks, results in a timeout for…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

good point @Jan thanks, we’ll review it

Jan avatar

@Andriy Knysh (Cloud Posse) what is TF_VAR_local_name_servers meant/intended to get populated by?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you asking about just local or all TF_VARs like name_servers?

Jan avatar

only local

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s for local dev environment, on local computers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can setup DNS for that so the devs would be able to test locally

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in your browser you would go to [local.example.net](http://local.example.net) to see the app

Jan avatar

yea I guess I left it in the setup im running through

Jan avatar

mistakenly

Jan avatar

running through getting a setup going with 3 of my teammates

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know how it goes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and thanks again for testing and finding all those issues

Jan avatar
warning: adding embedded git repository: build-harness
hint: You've added another git repository inside your current repository.
hint: Clones of the outer repository will not contain the contents of
hint: the embedded repository and will not know how to obtain it.
hint: If you meant to add a submodule, use:
hint:
hint: 	git submodule add <url> build-harness
hint:
hint: If you added this path by mistake, you can remove it from the
hint: index with:
hint:
hint: 	git rm --cached build-harness
hint:
hint: See "git help submodule" for more information.
Jan avatar

would having build-harness in the .gitignore make sense?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Definitely. Surprised it’s not there.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep 100%
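
Something like the following from the repo root, per the git hint above (assuming build-harness was cloned into the top level of the repo):

git rm --cached build-harness          # stop tracking the embedded clone
echo "build-harness/" >> .gitignore    # and ignore it going forward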

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Jan avatar

yea I meant in the root.cloudposse.co

Jan avatar

and those

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Remove default for DOCKER_DNS by osterman · Pull Request #321 · cloudposse/geodesic

what Remove default value for DOCKER_DNS why In corporate environments, DNS is often tightly regulated No longer needed to address the original problem Closes #320

1
fiesta_parrot1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Configure cache directory by osterman · Pull Request #322 · cloudposse/geodesic

what Configure terraform to use persistent cache folder for plugins why Speedup terraform inits Fixes #312 demo

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Fix autocomplete for terraform and aws by osterman · Pull Request #323 · cloudposse/geodesic

what Fix autocomplete for aws and terraform why They broke when we moved to install alpine packages Fixes #300

2

2018-12-10

Jan avatar
Terragrunt region env vars should be variable · Issue #34 · cloudposse/root.cloudposse.co

env_vars = { TF_VAR_aws_assume_role_arn = "${get_env("TF_VAR_aws_assume_role_arn", "arnawsiam:role/atlantis")}" AWS_DEFAULT_REGION = "us-west-2…

Jan avatar

odd, I updated it but it doesn’t reflect here

joshmyers avatar
joshmyers

@Jan jdn-za changed the title from Terragrunt region env vars should be variable to Terragrunt region & Account paramaters should be variable 4 minutes ago

Jan avatar

much of a muchness

Jan avatar

So on my “list” for today is to get some users provisioned into the ref architecture I have built

Jan avatar

get a vpc setup

Jan avatar

Finding it frustrating trying to figure out what to do next and if I am on the right path

Jan avatar

might just be the pain killers

joshmyers avatar
joshmyers

@Jan That doesn’t look to be re users

Jan avatar

so yea kinda explaining badly

Jan avatar

im looking at vpc first

Jan avatar
cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

Jan avatar

so do I add these into the Dockerfile to be pulled?

Jan avatar

how does one “inject” a tf module into the geodesic toolchain

joshmyers avatar
joshmyers
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Jan avatar
Jan
12:30:18 PM

Maybe I need to not use the docs as the reference just yet?

Jan avatar

ok so

joshmyers avatar
joshmyers

So Geodesic takes the tfvars from the foo.example.com conf dir and uses multi-stage docker builds to pull the actual TF files from their root modules repo. If we want to add some more TF resources, I think we can just add them to the conf dir, and/or better yet, create a module and add it as a git submodule
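
The multi-stage pattern described above looks roughly like this in an account repo’s Dockerfile; a hedged sketch where the image tags and module paths are illustrative rather than the exact ones used by root.cloudposse.co:

# pull the shared catalog in as a build stage
FROM cloudposse/terraform-root-modules:0.1.0 as terraform-root-modules

FROM cloudposse/geodesic:0.50.0

# copy only the projects this account needs, next to their tfvars under /conf
COPY --from=terraform-root-modules /aws/account-dns/ /conf/account-dns/
COPY --from=terraform-root-modules /aws/iam/ /conf/iam/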

joshmyers avatar
joshmyers
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Jan avatar

k im with you

joshmyers avatar
joshmyers

my geodesic container actually has:

joshmyers avatar
joshmyers
12:31:54 PM
joshmyers avatar
joshmyers

Obvious thing to watch out for is naming clashes, other than that I think it should JFW

joshmyers avatar
joshmyers
Remove hardcoded region from terragrunt vars by joshmyers · Pull Request #35 · cloudposse/root.cloudposse.co

what This should address #34 testing $ make docker/build $ make install $ root.cloudposse.co …SNIP… ✗ (none) iam ⨠ echo $AWS_DEFAULT_REGION us-west-2 -> Run 'assume-role' to logi…

Jan avatar

taking a look

Jan avatar

was in a meeting

Jan avatar

what about the root account ID?

joshmyers avatar
joshmyers
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Jan avatar
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Jan avatar

also what version of git are you using?

Jan avatar

git submodule add --tag

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

docs might be wrong.

Jan avatar

PR submitted

Jan avatar

–tag isnt supported

joshmyers avatar
joshmyers

TF_VAR_aws_assume_role_arn will always be set to whatever is in the Dockerfile, so shouldn’t fall back to the default, which is still a hardcoded thing

Jan avatar

right

joshmyers avatar
joshmyers

Could have been better to default to an empty string there

joshmyers avatar
joshmyers

When you add a git submodule, it is a pointer to another repo’s git SHA. Once you add the submodule, you should be able to checkout whatever branches from that repo

Jan avatar

cool so the submodule should be in the conf dir

joshmyers avatar
joshmyers
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

joshmyers avatar
joshmyers

or you could literally dump some .tf files in conf/myawesometfresources

joshmyers avatar
joshmyers

Be good to get a steer on this, I’m just poking around

Jan avatar

so when adding a submodule

Jan avatar

I guess I need to still add my own .tfvars file in the conf/myawesometfresources/ dir

joshmyers avatar
joshmyers

Yeah, it looks like the general approach is to be data-driven for each env, and your TF resources live in another submodule

Jan avatar

mmmmmm

Jan avatar
module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.3"
  namespace  = "${var.namespace}"
  name       = "${var.name}"
  stage      = "${var.stage}"
  delimiter  = "${var.delimiter}"
  attributes = "${var.attributes}"
  tags       = "${var.tags}"
}

resource "aws_vpc" "default" {
  cidr_block                       = "${var.cidr_block}"
  instance_tenancy                 = "${var.instance_tenancy}"
  enable_dns_hostnames             = "${var.enable_dns_hostnames}"
  enable_dns_support               = "${var.enable_dns_support}"
  enable_classiclink               = "${var.enable_classiclink}"
  enable_classiclink_dns_support   = "${var.enable_classiclink_dns_support}"
  assign_generated_ipv6_cidr_block = true
  tags                             = "${module.label.tags}"
}

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
  tags   = "${module.label.tags}"
}

Jan avatar

so for example I now have added a submodule for vpc (https://github.com/cloudposse/terraform-aws-vpc) to conf/aws-vpc

cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using git submodules in geodesic for terraform is not well understood (by me)

cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@rohit.verma is doing it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m still grappling with the best way to achieve it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/geodesic-aws-atlantis

Geodesic module for managing Atlantis with ECS Fargate - cloudposse/geodesic-aws-atlantis

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this was more of a prototype interface for achieving it. but keep in mind, my goal was not to solve it for terraform but more generally, how to keep certain capabilities out of the core.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In this case, I wanted to provide the atlantis capability without shipping it in the base image.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the idea was to expose a make install target and that always gets called during docker build by the Dockerfile

Jan avatar

I guess I would need to mod these variables to look up values from the geodesic ENV vars

Jan avatar

TF_VAR_stage or TF_VAR_namespace

Jan avatar

I mean they should already override these right?

joshmyers avatar
joshmyers

https://github.com/cloudposse/root.cloudposse.co/blob/master/Dockerfile#L19 is already set for example, so that should just work..

cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Jan avatar

yea for those ones

Jan avatar

then name = "${var.name}"

Jan avatar

I need a .tfvars ?

joshmyers avatar
joshmyers

If they aren’t being set in your equivalent root.example.com Dockerfile as TF_VAR, you will want to set some defaults for them via a tfvars file, that lives in [root.example.com/conf/myawesometfmodule/](http://root.example.com/conf/myawesometfmodule/)

Jan avatar

so that right there is an expectation in terms of usage, that should be documented

Jan avatar

updating my pr

Jan avatar

any/all modules should list in the readme what would be inherited vs what needs to be added to be injected into geodesic

joshmyers avatar
joshmyers
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

joshmyers avatar
joshmyers
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Jan avatar

I’d even have a tfvars.example for the ones that are not inherited, maybe two: tfvars.geodesic-example and tfvars.standalone-example

joshmyers avatar
joshmyers

Aye, could do https://www.terraform.io/docs/configuration/variables.html#variable-files - so like above you’d want them to be *.auto.tfvars

Configuring Input Variables - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

Jan avatar

why .auto.tfvars?

joshmyers avatar
joshmyers
02:52:39 PM
joshmyers avatar
joshmyers

@Jan Hows that working out?

Jan avatar
Conventions should be like cuddles by jdn-za · Pull Request #23 · cloudposse/terraform-aws-vpc

I would like to be use to pull any of the cloudposse terraform modules in relation to geodesic and have the same expected use as a convention … and that needs to be a convention that can be taken…

Jan avatar

clickbait title

1
Jan avatar

but I’d like to see an <modules_name>.auto.tfvars.example file in every cloudposse terraform module

Jan avatar
Jan
03:53:38 PM
Jan avatar

make it explicit which things are required / inherited / defaulted

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jan all modules are ‘identity-less’, they don’t care about how and where they get deployed. All input vars are provided from higher-level modules. In many cases, they are provided as ENV vars from the Dockerfiles for the corresponding environment. In this case, we are able to re-use all modules without changing anything for all possible configurations of AWS accounts, stages/environments, etc.

joshmyers avatar
joshmyers

terraform-docs is in use for auto-creating READMEs of variable requirements etc

Jan avatar

So you would still not describe how to use these modules alongside geodesic?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The modules really exist outside of geodesic.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then terraform-root-modules is 100% about how we use them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the root modules are the most opinionated incarnations.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they are so specific, that we’re distributing them more as examples of how we organize our infrastructure. they are meant more for reference than to be taken wholesale.

Jan avatar

So I think where my understanding went very wrong was when I DIDN’T understand that the terraform-root-modules are not part of geodesic

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

AHA

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ll call that out.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, we’ve many times debated what to call them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

“root modules” is a very literal name based on what terraform calls them. but root is also very overloaded.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i like the term “catalog” that you mentioned

Jan avatar

Maybe call them example-company-terraform-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
06:32:32 AM
Jan avatar

Normally in a company I build up a terraform service catalog, just a collection of tf modules that have the opinions, best practices, and non-functional requirements of that organization included

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
06:33:31 AM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so if you’re already using a container like geodesic, then you define all required ENV vars and copy all required modules to the final container

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the Dockerfile

Jan avatar

that still means I need to find the required vs defaults

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no need to use .tfvar files and git submodules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but when you use a module, you see all its variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can update whatever you need

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a simple pattern, add ENV vars and copy the modules you need

joshmyers avatar
joshmyers

@Andriy Knysh (Cloud Posse) rather than using submodules, would the other way be to just dump your high level module for root.example.com in the conf/foo dir ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we copy all modules we need in Dockerfiles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we use terraform-root-modules

joshmyers avatar
joshmyers

I see the terraform modules and geodesic as separate. Geodesic including makefile/dockerfile etc is the glue that is necessary for orchestration. The other modules are there to use if you want to

joshmyers avatar
joshmyers

Ah indeed, that is another way

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if you want to have another repo to copy from, and don’t want to put it into terraform-root-modules, then you add a Dockerfile to that repo, build an image, and then use Docker multi-stage to copy it into the final image

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use other ways of doing it, e.g. submodules, var files, etc., but it’s a completely diff pattern

Jan avatar
~/dev/resources/cloudposse/forks/root.cloudposse.co   master  find . | grep tfvars
./conf/terraform.tfvars
./conf/atlantis-repos/terraform.tfvars
./conf/iam/terraform.tfvars
./conf/users/terraform.tfvars
./conf/root-iam/terraform.tfvars
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so copying modules and specifying vars are separate concerns

Jan avatar

then use of dockerfile env vars and .tfvars and auto.tfvars.example is very confusing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes @Jan you’re correct, we used the vars inconsistently

Jan avatar

they all do the same thing at different places

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

by specifying some of them in Dockerfiles, and some in var files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that could be improved

Jan avatar

and there is no stated explanation of where and how which one

joshmyers avatar
joshmyers

@Andriy Knysh (Cloud Posse) What is the reasoning behind https://github.com/cloudposse/geodesic-aws-atlantis being a submodule but TF code (high level modules like terraform-root-modules) is Docker multi stage?

cloudposse/geodesic-aws-atlantis

Geodesic module for managing Atlantis with ECS Fargate - cloudposse/geodesic-aws-atlantis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atlantis was just a test, it’s not in the official docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Erik Osterman (Cloud Posse) was playing with that

joshmyers avatar
joshmyers

Ah OK

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jan to define vars, you can use just Dockerfiles, just var files, or a combination of them

Jan avatar

and when I get to 60+ tf modules?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i agree, one way would be better to not confuse people

Jan avatar

I have a massive dockerfile with vars

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, I’m more and more anti-overloading-the-dockerfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it’s great for defining true globals, but more than that it’s confusing because the variables are not tightly coupled with the code they modify

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re not yet opinionated down to how envs are managed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think the .tfvar or .envrc (direnv) approach are both nice.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for someone looking at this strictly from a terraform perspective, they might be inclined to use .tfvar

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

while someone looking at this from a more general perspective that also works with kops, might like the direnv approach. i think we can leave this up to the end user.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but we should document the pros/cons and various env management strategies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Managing Environment Variables "At Scale" · Issue #305 · cloudposse/docs

what Describe the problem with too many environment variables and how to manage them why There are a lot of ways to manage them. There are tradeoffs with all of them - thus a true &quot;best practi…

Jan avatar

Agreed on documenting the pros/cons. Even more so, I would keep a list of these decisions where it could work either way, and state which way geodesic went and why (doesn’t mean that the other ways can’t be used)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I’ve seen some companies do that really well. We’ve not started that best practice.

Jan avatar

I think it would help a lot when starting fresh and reading the docs/code base to try to get “project context”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, agreed. I think we’re able to start doing that now. Less decisions happening on a day-to-day, week-to-week basis.

Jan avatar

every time I need to set a new var

Jan avatar

and then I need to worry about unique var names at the env level?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you would have to worry about that in any case, in Dockerfiles or in var files, or any other place

joshmyers avatar
joshmyers

There is no getting away from having to set vars

joshmyers avatar
joshmyers

^^

Jan avatar

sure,….

Jan avatar

are you doing 1 geodesic module per aws account or smaller units?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We started originally with one geodesic per cluster, but currently lean more towards one geodesic per account. This has worked so far for our customers, but perhaps other organizations need different levels of logical grouping. I think this can be open ended.

Jan avatar

I will experiment with this a fair bit I suspect, like building an account-specific geodesic image and then a project-specific geodesic module using the account image as a layer below

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep! agree with that

Jan avatar

so I mean if I have a “dev” account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

per environment (prod, staging, dev, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which could be in separate AWS accounts, or in one (depending on requirements and client requests)

Jan avatar

and I want multiple VPCs and then I want to pass `name = "${var.name}"`

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, but you have different stages, so no naming conflicts even if you deploy all stages into one account

Jan avatar

I don’t want to then have an ENV TF_VAR_fooname="devFoo"

Jan avatar

per vpc / project / resource

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

name should be the same for all accounts/environments (prod, staging, dev)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we have stage

Jan avatar

name in the sense of vpc?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the sense of application/solution name

Jan avatar

(im looking at the terraform vpc module, just to be clear here)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example, your VPC could be named like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

cp-prod-myapp for prod

Jan avatar

and in the case where I have a pre-production account that has like 30+ vpcs?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is where you probably treat those as separate projects. Each with their own TF state.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

cp-staging-myapp for staging

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you use attributes to add additional params to each ID generated by the label module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

cp-prod-myapp-unit1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

cp-prod-myapp-unit2

Jan avatar

And where are you setting the attributes then?

Jan avatar

as env vars in your dockerfile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each module has an attributes var

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can set them in a few diff ways:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Directly in a higher-level module
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. In Dockerfile
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. in var file
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it all depends on a particular use-case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you either hardcode them (e.g. in a higher-level module) if they never change, or use ENV vars if you think they might change

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but don’t worry about name collision between environments/stages, you have stage for that
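
Concretely, with the null-label module already shown in the VPC snippet above, the IDs Andriy describes fall out of something like this (the namespace/stage/name/attributes values are only examples):

module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.3"
  namespace  = "cp"
  stage      = "prod"
  name       = "myapp"
  attributes = ["unit1"]
}

# module.label.id => "cp-prod-myapp-unit1"
# the same code deployed with stage = "staging" yields "cp-staging-myapp-unit1", so no collisions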

Jan avatar

so lets say for example using the backing-services root module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

keep in mind that the backing-services root module is more like a reference architecture than something you should be leveraging wholesale.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

backing-services are specific to our organization

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. some other company might want to have mongo thrown in there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

another way of thinking about terraform-root-modules is that it’s your highly proprietary integration of all the other modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the “top most level” modules are what make your infrastructure specific to the problem at hand that you’re solving.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i don’t think we could ever successfully generalize that in a meaningful way without creating crazy bloat. terraform-root-modules is your “starting off point” where you hardfork/diverge.

Jan avatar

All clear

1
Jan avatar

to create the vpc

Jan avatar
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Jan avatar

what is the expectation for setting this to a cidr I want

Jan avatar

wait a sec

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to create your own terraform-root-modules or similar repo with diff name if needed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and provide the CIDR block you need (hardcoded or from var)

Jan avatar

the backing-services is a kops/k8s specific vpc?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in our terraform-root-modules yes, it was intended to be used and peered with a kops VPC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your terraform-root-modules, it could be in the way you want

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

our terraform-root-modules was not intended to be copied verbatim, it’s just a reference architecture

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but it uses the TF modules, which could be used everywhere

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the label module is the central piece here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it allows you to specify unique names for all environments without naming collisions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and it does not matter if you deploy everything into just one AWS account, or multiple accounts per environment

Jan avatar

So I mean I missed that this part is meant to be copied and modded to my needs

Jan avatar

is it stated someplace in the getting started?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is an example of using attributes in the same module to create unique names if you have more than one resource of the same type (e.g. role, distribution, etc.) https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L7

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


So I mean I missed that this part is meant to be copied and modded to my needs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jan that’s probably not stated in the docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea they are not perfect

Jan avatar

So I mean from a “fresh to the project” perspective I’m not sure there is enough explained about how it’s meant to be used as yet

Jan avatar

I get they are not perfect

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

docs is always the most difficult thing to create

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you either want to be concise, or you spend a year to create a book

Jan avatar

but maybe it would add huge value to sit down with a fresh user and write down the step-by-step requirements to get from 0 to a set account/vpc/resources blueprint

joshmyers avatar
joshmyers

A module in terraform-root-modules is little more than some variables and calling other modules which do the heavy lifting.

Jan avatar

maybe take a multi az wordpress on k8s with multiple envs

Jan avatar

and rds / elasticache

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all good points @Jan

Jan avatar

I have approached diving into the docs and code base knowing what I want to create (and how to do so in terraform) in order to try understand how geodesic would work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yea, geodesic is just a container with a bunch of tools required to provision infrastructures

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-root-modules is an example of a reference architecture, shows how to use the TF modules, and is just one way of doing things

Jan avatar

well its been a pretty frustrating process (to be fair I was warned it would be)

Jan avatar

so let me start back at the beginning with my new understanding

Jan avatar

how would I best go about:

  1. setting up a geodesic project with 7 accounts (all pre-existing)
  2. setting up users to be created
  3. setting up env / project vpc(s) in these accounts
  4. setting up resources in some of these
  5. peering VPCs selectively
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is your secret sauce. We could never implement this in a generic way.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what you’d describe in root modules that get invoked in various geodesic repos.

Jan avatar

I had before used and modified the reference architecture

Jan avatar

and have that all working so far as admin users with cross account sts assume and dns delegation with acm etc

Jan avatar

if you were to describe how to do my list, not in depth, what should I follow

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you already forked terraform-root-modules

Jan avatar

just did so

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s your collection of module invocations

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

start updating it to suit your needs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. change the VPC CIDR, add more modules you need, remove the modules you don’t need

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then it would be ready to be deployed to all accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you can add some modules to it which could be used only in some accounts, but not in the others

Jan avatar

let me go remove everything I created previously first

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in Dockerfile(s) just copy what you need

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

per account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the COPY commands will be different for all accounts b/c you need different modules in each (some modules are the same for all)

Jan avatar

So the root modules would become the whole module catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

of your organization

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s your flavor of how to leverage the generic building blocks to achieve the specific customizations that you need.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

without needing to write everything from the ground up. the idea is to write all terraform-aws* modules very generically

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then a company writes their own root modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

root modules is like the main(...) { } of your infrastructure

Jan avatar

Yea that makes sense, high level stack components sorta vibe

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

create 7 repos, one per account

Jan avatar

other than fully remote TF modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

7 Dockerfiles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and update all ENV vars in the repos to reflect the accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, exactly, the root modules would become the whole module catalog

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the whole infrastructure catalog, for ALL accounts

Jan avatar

and then afterwards, when I want to add 3rd party ones to only some accounts?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

either add them to the root modules (if you think it makes sense and you might want to re-use them somewhere else), or add them directly to one of the infra repos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to add them to more than one infra repos, then better to add them to the root modules

Jan avatar

how large do these root modules end up becoming?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

reflecting the whole infrastructure

Jan avatar

I might just batch them into categories or something

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but sure, you can split it into a few repos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and add them to the final Dockerfile as multi-stage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the point here is that you could re-use the code from terraform-root-modules in many environments (prod, staging, dev), some of them will be the same, some unique to a particular repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(wow i have some reading to do today)

Jan avatar

hahah

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

expect some threaded conversations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Jan avatar
# Disable Role ARN (necessary for root account on cold-start)
[ "${DISABLE_ROLE_ARN}" == "0" ] || sed -Ei 's/^(\s+role_arn\s+)/#\1/' main.tf
Jan avatar

nice addition!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

though still a nasty hack!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Jan avatar

better this way than 1 vim and 1 sed

Jan avatar

@Andriy Knysh (Cloud Posse) thanks for your explanations earlier btw

Jan avatar

I seem to be getting on now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no problem, looks like it @Jan

Jan avatar

cool vpc done

Jan avatar

however the subnet math seems a bit odd…

Jan avatar

I provided a /22 range over 3 AZs, 2 tiers, and ended up with 6 x /26s

Jan avatar

mmmmm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we generate subnets in a table manner

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that takes into account a max number

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

stable *not table

Jan avatar
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Jan avatar

so first off, I was surprised by ipv6 being enabled

Jan avatar

Maximum number of subnets that can be created. The variable is used for CIDR blocks calculation

Jan avatar

still…

Jan avatar

I mean I did expect 6 subnets, which is what I got back

Jan avatar

they are just smaller than expected

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the full calculation is broken down in the README

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s rather complicated (and necessarily so)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

crap

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i don’t see the calculation there anymore

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think i’m getting confused by our other subnet module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Jan sorry i’m busy onboarding a couple new hires today so i’m very distracted)

Jan avatar

so I mean a /22 would be 1022 usable IPs, so even removing say 5 for AWS use per subnet I would still expect like 165-odd usable per subnet
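
For what it’s worth, the /26s follow from the “maximum number of subnets” variable quoted above rather than from the number of subnets actually requested (the default maximum assumed here is 16): carving the CIDR to leave room for 16 subnets adds log2(16) = 4 bits to the prefix, so a /22 yields 16 x /26 slots of 64 addresses each (59 usable after the 5 AWS reserves), of which only 6 are used. Lowering that maximum to 8 would give /25s with roughly 123 usable addresses each, closer to the ~165 expected above.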

Jan avatar

all good mate

Jan avatar

it’s like 10:20pm here so I probably won’t carry on much longer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’ll catch up on this discussion later on today

Jan avatar

cool, I’m gonna carry on for now since I will rebuild these VPCs anyway

Jan avatar

maybe when you are back, how would I best go about creating multiple vpcs within a single geodesic module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is where you start now with your clean terraform-root-modules repo and start to create a new project in there to describe the infrastructure that you want.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no 2 companies will ever have the same terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ping me when you’re online

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we can resync

Jan avatar

say stage{a,b,c,d,e}.example.com

Jan avatar

I have a pre calculated list of /22’s and /23’s that would get used, just read from a remote state output of a map

Jan avatar

stagea.example.com = 10.0.0.0/22

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Better Describe Purpose of Root Modules · Issue #52 · cloudposse/terraform-root-modules

what the root modules are the most opinionated incarnations of modules that seldom translate verbatim across organizations. This is your secret sauce. We could never implement this in a generic way…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@daveyu or @Max Moon any thoughts you’d like to add to this issue?


Jan avatar

That clears things up nicely. I would suggest having this information included into the getting started guide

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I reached back out to our technical writer to see if she has availability

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ll make a note that we should also call this out in the quickstart

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I created that issue to better address the purpose of root modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in short, you nailed it with this comment:

fiesta_parrot1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


root modules would become the whole module catalog of your organization

2018-12-11

Jan avatar

wrong channel

Jan avatar

oops

2018-12-12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
07:09:27 PM

set the channel description: Discussions related to https://github.com/cloudposse/geodesic

2018-12-19

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

1
fiesta_parrot1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

rejoice!

2018-12-23

Jan avatar
aylei/kubectl-debug

Debug your pod by a new container with every troubleshooting tools pre-installed - aylei/kubectl-debug

Jan avatar

Nifty for debugging

2018-12-28

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jbye I think it’s good you’re looking at both Gruntwork’s library as well as our own. They build solid stuff.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The key differentiator is our approach.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

1) we don’t rely on a wrapper like terragrunt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

2) we containerize our entire toolchain and use docker extensively to deliver the solution

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

3) everything is 100% apache2 open source (we accept most PRs)

jbye avatar

I definitely like that 3rd point; it’s very hard for us to evaluate Gruntwork without being able to look at any real source code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(i think they will let you do that though if you maybe sign an NDA…)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is all about laying out the account architecture which is central to our approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and then we provide an example service catalog for how we use our 100+ terraform modules here: https://github.com/cloudposse/terraform-root-modules

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

jbye avatar

They offer a 30-day money back agreement after purchase, so you have to pay, check it all out, then cancel within 30 days if you don’t like it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you looking to adopt kubernetes?

jbye avatar

Eventually, yes. For now, though, we are looking at an ECS-based architecture

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have a look also at #airship by @maarten for ECS

maarten avatar
maarten
10:15:36 PM

@maarten has joined the channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what one of our single account architectures looks like: https://github.com/cloudposse/testing.cloudposse.co

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically every AWS account is a repo, defined with a Dockerfile and pulls in source from a shared repository (terraform-root-modules) of code.

jbye avatar

Interesting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And we deploy https://www.runatlantis.io/ with each account

Terraform For Teams | Atlantis

Atlantis: Terraform For Teams

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

gruntworks takes a different approach. I think they have a monorepo for all AWS accounts and then a service catalog like ours with apps to deploy therein.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they use a shared terraform state bucket for all accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we deploy (1) state bucket/dynamodb per account and share nothing (heck even our account repos share nothing)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this lets every account act autonomously and be managed by different teams
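
Per account, that “share nothing” layout amounts to a standard S3 backend plus a DynamoDB lock table scoped to the account; a hedged sketch (bucket, key, and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "example-prod-terraform-state"
    key            = "vpc/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "example-prod-terraform-state-lock"
  }
}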

jbye avatar

Thanks for the info, Erik. I’ll have to take some time to digest it and see how it fits with how we are doing things. We already have a big chunk of Terraform code we wrote, but it has a fair number of issues and is all based around EC2 instances, not ECS. So as we move to using containers, we’re looking at a fresh start using a more mature codebase.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, this is a good transition point to consider other alternatives

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Added some more thoughts here: https://github.com/cloudposse/docs/issues/351

Document Cloud Posse vs Gruntworks · Issue #351 · cloudposse/docs

what Describe our differentiators Gruntworks is an awesome contributor to open source and demonstrate solid engineering skills. They have a vast, well-tested, library of proprietary terraform modul…

jbye avatar

@Erik Osterman (Cloud Posse) do you have an infrastructure diagram of any sort showing how you lay out the VPCs, bastions, peering, etc. in your reference arch?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

good question: at this point, we don’t opine down to that level.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we provide the various components, but don’t say what your network should look like

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Check out our various subnet modules that implement the different kinds of strategies
