#geodesic (2018-12)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2018-12-03
rgr
for starters, what i’m thinking to standardize is the discrepancy between what is TF_VAR_ prefixed
and what is not.
so I am carrying on with the coldstart setup currently
yes, that would already go a long way
i find we have to do a lot of mapping. my thinking is to focus on some canonical variables that are not TF_VAR_
prefixed
if the one picked isn’t great we can always change them, all at the same time
right - need to pick one and run with it and see how it goes. we’ve kind of been flipflopping.
that could make a lot of sense
and then make it easier to access all envs from terraform.
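e.g. in each account Dockerfile, something like this (just a sketch; NAMESPACE and STAGE here are hypothetical names for the canonical vars):
# set the canonical vars once, derive the TF_VAR_* copies from them
ENV NAMESPACE=example
ENV STAGE=testing
ENV TF_VAR_namespace=${NAMESPACE}
ENV TF_VAR_stage=${STAGE}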
@Jan what do you think of the pipelining we’re doing?
Honestly I have yet to get that far
basically, things like chamber exec kops -- helmfile exec ...
then helmfile
calls helm
it’s a lot of layers, but it’s also very UNIXy
so I mean in general I like to have as few moving parts as possible
as simple as possible, as complex as necessary
this is a great quote
Another cool one is “Both simplicity and trust, once lost, are hard regained”
that sorta vibe
what: Identify and address inconsistencies. why: we cannot begin to standardize something which is inconsistent.
can you jot down any thing that comes to mind?
will do
Monday’s “refresh my memory” issue is I cd into testing/ and run make init | docker/build | install
and I don’t have a testing image locally built
and yes the ENV DOCKER_IMAGE=
is set
I recall running into this before
hrm…
make init
make docker/build install
that should result in both an image locally and a script installed to /usr/local/bin/xxxx
what is the error you get when running xxxx
?
what I’ve seen happen is that the image that was built has some tag Y
but the DOCKER_IMAGE
env is referencing some tag Z
so afterwards I end up with
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
cloudposse/testing.cloudposse.co latest 938b6ad92d1a 26 minutes ago 1.08GB
jdnza/root.aws.tf latest 4e5bad3816ca 5 days ago 1.12GB
the error on make install
make install
Password:
# Installing testing.aws.tf from jdnza/testing.aws.tf:latest...
Unable to find image 'jdnza/testing.aws.tf:latest' locally
docker: Error response from daemon: manifest for jdnza/testing.aws.tf:latest not found.
See 'docker run --help'.
# Installed testing.aws.tf to /usr/local/bin/testing.aws.tf
make docker/build
results in
Successfully tagged cloudposse/testing.cloudposse.co:latest
ok, in the Makefile
, i think something is set wrong.
AH!
yes now I recall
export CLUSTER ?= testing.cloudposse.co
export DOCKER_ORG ?= cloudposse
so regarding this…. we’ve gone back and forth
where this was a pain was that in the docs it is never mentioned to update the Makefile
i like doing something like:
export CLUSTER ?= $(shell basename $(pwd))
etc..
but this is also more “magic”
what in your opinion is the right balance?
So I mean I would have the Makefile do so
I mean if you for example already do a cp testing.cloudposse.co/ testing.aws.tf/
then make docker/build should quite happily just build with the pwd
ok, i’ll track this in that issue
export CLUSTER ?= $(shell basename $(pwd))
would solve it for all other envs
what to do for org though
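a minimal sketch of what that could look like (note: in GNU Make $(pwd) expands to an empty Make variable, so $(CURDIR) is safer; DOCKER_ORG here is just a placeholder default):
export CLUSTER      ?= $(notdir $(CURDIR))
export DOCKER_ORG   ?= example-org
export DOCKER_IMAGE ?= $(DOCKER_ORG)/$(CLUSTER)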
this all makes me come back to my idea of having a coldstart module that renders all the coldstart resources with some high level inputs I provide
oh yes, coldstart module is top of mind
this is actually all feeding into that
i want to basically collect all the datapoints and then use that to create the module.
so asking for docker registry url is actually important
will add that
what: Identify and address inconsistencies. why: we cannot begin to standardize something which is inconsistent.
oki macbook battery is about to die, will switch to phone
something else…
cd account-dns
init-terraform
terraform apply
running that without terraform plan first
is an anti-pattern that tf should be stopping (with the info from the dynamodb table), or not?
yes, so best practices are to run terraform plan -out...
and then terraform apply planfile
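inside geodesic that flow looks roughly like this (account-dns is just the example from above; init-terraform is the geodesic helper):
cd /conf/account-dns
init-terraform
terraform plan -out=account-dns.plan
terraform apply account-dns.plan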
I suppose from a consistency perspective, we should write the docs that way
is that what you’re suggesting?
what: I have several resources that a cold-start would create, such as parent domain or possibly pre-existing root account. I would like to be able to set these as overrides so that they are taken in…
yea
I would be consistent in that, wrapping other tools means that users will come with varying degrees of “pre-trained” behavior. Generally I would avoid un-training best practices
k 2% battery, im out
is it possible to use geodesic with aws-vault and the osx keychain?
No, need to use the lowest common denominator which is file
We use that by default and use it between OSX and geodesic and Windows even :-)
(WSL)
kk, is it expected then that I need to enter my password on every aws-vault exec accountname -- command that needs aws access
ahhh
if you have a sequence of commands you need to run interactively
thanks mate
also, if using direnv
, you can add this to the .envrc
of a directory
eval "$(chamber exec kops -- sh -c "export -p")"
replace kops
with the namespace
yea thats what I was looking for
cheers
instead, run aws-vault exec accountname -- bash
I find I need aws-vault exec accountname -- bash -l
… otherwise I see:
bash: prompter: command not found
talking about using it outside of a geodesic container now
I have yet to have a chance to migrate all my credentials / env’s over to this work flow pattern
very soon I’d like to have an executable bin per aws account/env
executable bin
geodesic docker wrapper script?
exactly. essentially the same as with env.cloudposse.co bin
just for all my existing accounts
oldacc1.aws.com bin
then assume-role
and what not
I have currently about 30+ aws accounts (10 odd via geodesic)
using the same tooling/patterns would make life easier on my end
yea, agree
so my thinking so far (inspired by our earlier conversation) is to create various Dockerfile
templates
e.g. Dockerfile.root
, Dockerfile.audit
, Dockerfile.default
those would be terraform templates (e.g. with $var
parameters)
each one would implement some kind of opinionated account structure
then there would be a terraform module that takes as an input the template you want to use
and passes the parameters to the template to generate the dockerfile. then it uses the local-exec provisioner to build the docker image and run it to achieve various outcomes.
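a very rough sketch of that idea in 0.11-style HCL (none of this exists yet; the template path and vars are made up):
data "template_file" "dockerfile" {
  template = "${file("${path.module}/templates/Dockerfile.root")}"

  vars {
    docker_org = "${var.docker_org}"
    cluster    = "${var.cluster}"
  }
}

resource "local_file" "dockerfile" {
  content  = "${data.template_file.dockerfile.rendered}"
  filename = "${path.cwd}/Dockerfile"
}

resource "null_resource" "build" {
  # build (and later run) the rendered image to perform the coldstart steps
  provisioner "local-exec" {
    command = "docker build -t ${var.docker_org}/${var.cluster} -f ${local_file.dockerfile.filename} ${path.cwd}"
  }
}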
i am thinking for coldstart terraform state is either discarded or persisted in git
and to not deal with remote state or once available, rely on the managed-state-as-a-service offering that’s soon to be released by hashicorp
ah, yes, you do need to run with bash -l
inside of geodesic
i think jan was running this natively
the -l
is to load the /etc/profile.d
scripts
2018-12-04
So I have another interesting question…. I have a situation where I have 6 existing aws accounts I want to manage with geodesic. later I would want to import a full AWS org too
the “import” part is of interest to me to understand
Could you describe a little more what you mean by import a full AWS org @Jan?
Well so to start with I have 1 division’s aws accounts I will want to manage, they already exist
They already are in an AWS org
I won’t manage the org to start with
Just walking, gimme a few minutes
Actually I guess there is nothing to import
It’s just where the top-level IAM user is
And what account the iam users are created in
Have you looked at using terraform import to create state for those resources once you want them managed. Once in the state, add the terraform resource config into geodesic
Yea that could solve it should I need to later
@Jan there is some info about provisioning or importing an org in our docs https://docs.cloudposse.com/reference-architectures/cold-start/#provision-organization-project-for-root
Brilliant thanks
2018-12-05
heya, on assume-role I am seeing this now
2018/12/05 13:55:44 Request body type has been overwritten. May cause race conditions
aws-vault: error: Failed to get credentials for admin (source profile for admin-users-bootstrap): RequestError: send request failed
caused by: Post <https://sts.amazonaws.com/>: dial tcp: lookup sts.amazonaws.com on 8.8.8.8:53: read udp 172.17.0.2:40888->8.8.8.8:53: i/o timeout
Ah I think I know the issue
So the corporate network I am on does not allow using ANY dns other than the local one
forcing 8.8.8.8 times out
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. https://slack.cloudposse.com/ - cloudposse/geodesic
export DOCKER_DNS=${DNS:-8.8.8.8}
might want to rethink setting that as a default. Many corporates will not allow using an external DNS
Yes, I think you are right. We can remove this default but expose the option. Would that be good?
Yea totally
what: doing an assume-role in a geodesic container while on a network that does not allow using any external DNS servers directly, which is very common in corporate networks, results in a timeout for…
good point @Jan thanks, we’ll review it
@Andriy Knysh (Cloud Posse) what is TF_VAR_local_name_servers
meant/intended to get populated by?
you asking about just local
or all TF_VARs like name_servers
?
only local
that’s for local dev environment, on local computers
we can setup DNS for that so the devs would be able to test locally
ah
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
then in your browser you would go to local.example.net
to see the app
yea I guess I left it in the setup I’m running through
mistakenly
running through getting a setup going with 3 of my teammates
nice
let us know how it goes
and thanks again for testing and finding all those issues
warning: adding embedded git repository: build-harness
hint: You've added another git repository inside your current repository.
hint: Clones of the outer repository will not contain the contents of
hint: the embedded repository and will not know how to obtain it.
hint: If you meant to add a submodule, use:
hint:
hint: git submodule add <url> build-harness
hint:
hint: If you added this path by mistake, you can remove it from the
hint: index with:
hint:
hint: git rm --cached build-harness
hint:
hint: See "git help submodule" for more information.
would having build-harness in the .gitignore make sense?
Definitely. Surprised it’s not there.
yep 100%
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
and in .dockerignore
https://github.com/cloudposse/terraform-root-modules/blob/master/.dockerignore
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
yea I meant in the root.cloudposse.co
and those
what: Remove default value for DOCKER_DNS. why: In corporate environments, DNS is often tightly regulated; no longer needed to address the original problem. Closes #320
what: Configure terraform to use persistent cache folder for plugins. why: Speed up terraform inits. Fixes #312. demo
what: Fix autocomplete for aws and terraform. why: They broke when we moved to installing alpine packages. Fixes #300
2018-12-10
env_vars = { TF_VAR_aws_assume_role_arn = "${get_env("TF_VAR_aws_assume_role_arn", "arniam:role/atlantis")}" AWS_DEFAULT_REGION = "us-west-2"…
odd, I updated it but doesnt reflect here
@Jan jdn-za changed the title from Terragrunt region env vars should be variable to Terragrunt region & Account paramaters should be variable 4 minutes ago
yea
much of a muchness
So on my “list” for today is to get some users provisioned into the ref architecture I have built
get a vpc setup
Finding it frustrating trying to figure out what to do next and if I am on the right path
.<
might just be the pain killers
I mean I guess https://docs.cloudposse.com/geodesic/module/with-terraform/
@Jan That doesn’t look to be re users
so yea kinda explaining badly
im looking at vpc first
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc
so do I add these into the Dockerfile to be pulled?
how does one “inject” a tf module into the geodesic toolchain
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
Maybe I need to not use the docs as the reference just yet?
ok so
So Geodesic takes the tfvars from foo.example.com conf
dir and uses multi-stage docker builds to pull the actual TF files from their root modules repo. If we want to add some more TF resources, I think we can just add them to the conf dir and/or, better yet, create a module and add it as a git submodule
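so the layout ends up looking something like this (myawesometfresources is a made-up name):
root.example.com/
├── Dockerfile          # multi-stage COPY of the root modules you need, plus TF_VAR_* ENVs
└── conf/
    ├── iam/
    │   └── terraform.tfvars
    └── myawesometfresources/
        ├── main.tf
        └── terraform.tfvars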
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
k im with you
my geodesic container actually has:
Obvious thing to watch out for is naming clashes, other than that I think it should JFW
@Jan https://github.com/cloudposse/root.cloudposse.co/pull/35 should address your issue
what: This should address #34. testing: $ make docker/build $ make install $ root.cloudposse.co …SNIP… ✗ (none) iam ⨠ echo $AWS_DEFAULT_REGION us-west-2 -> Run 'assume-role' to logi…
taking a look
was in a meeting
Yep
what about the root account ID?
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
also what version of git are you using?
git submodule add --tag
docs might be wrong.
PR submitted
--tag isn’t supported
TF_VAR_aws_assume_role_arn
will always be set to whatever is in the Dockerfile, so shouldn’t fall back to the default, which is still a hardcoded thing
right
Could have been better to default to an empty string there
When you add a git submodule, it is a pointer to another repo’s git SHA. Once you add the submodule, you should be able to check out whatever branches from that repo
cool so the submodule should be in the conf dir
It would be one way of doing it - https://github.com/cloudposse/root.cloudposse.co/blob/master/.gitmodules
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
or you could literally dump some .tf files in conf/myawesometfresources
Be good to get a steer on this, I’m just poking around
git submodule add --tag … is not valid syntax: https://git-scm.com/docs/git-submodule. In relation to geodesic, the conf/ path is important
so when adding a submodule
I guess I need to still add my own .tfvars file in the conf/myawesometfresources/ dir
Yeah, it looks like the general approach is to be data driven for each env and your TF resources live in another sub module
mmmmmm
module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.3"
  namespace  = "${var.namespace}"
  name       = "${var.name}"
  stage      = "${var.stage}"
  delimiter  = "${var.delimiter}"
  attributes = "${var.attributes}"
  tags       = "${var.tags}"
}

resource "aws_vpc" "default" {
  cidr_block                       = "${var.cidr_block}"
  instance_tenancy                 = "${var.instance_tenancy}"
  enable_dns_hostnames             = "${var.enable_dns_hostnames}"
  enable_dns_support               = "${var.enable_dns_support}"
  enable_classiclink               = "${var.enable_classiclink}"
  enable_classiclink_dns_support   = "${var.enable_classiclink_dns_support}"
  assign_generated_ipv6_cidr_block = true
  tags                             = "${module.label.tags}"
}

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
  tags   = "${module.label.tags}"
}
so for example I now have added a submodule for vpc (https://github.com/cloudposse/terraform-aws-vpc) to conf/aws-vpc
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc
Using git submodules in geodesic for terraform is not well understood (by me)
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc
@rohit.verma is doing it
I’m still grappling with the best way to achieve it.
Geodesic module for managing Atlantis with ECS Fargate - cloudposse/geodesic-aws-atlantis
this was more of a prototype interface for achieving it. but keep in mind, my goal was not to solve it for terraform but more generally, how to keep certain capabilities out of the core.
In this case, I wanted to provide the atlantis capability without shipping it in the base image.
the idea was to expose a make install
target and that always gets called during docker build
by the Dockerfile
I guess I would need to mod these variables to look up from the geodesic envs
TF_VAR_stage
or TF_VAR_namespace
I mean they should already override these right?
https://github.com/cloudposse/root.cloudposse.co/blob/master/Dockerfile#L19 is already set for example, so that should just work..
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
yea for those ones
then name = "${var.name}"
I need a .tfvars ?
If they aren’t being set in your equivalent root.example.com Dockerfile as TF_VAR, you will want to set some defaults for them via a tfvars file that lives in root.example.com/conf/myawesometfmodule/
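something like this per project dir (values are placeholders; namespace/stage would already come from the Dockerfile TF_VARs):
# root.example.com/conf/myawesometfmodule/terraform.tfvars
name       = "myawesometfmodule"
cidr_block = "10.0.0.0/22"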
so that right there is an expectation in terms of usage, that should be documented
updating my pr
any/all modules should list in the readme what would be inherited vs what needs to be added to be injected into geodesic
Cool, it is along the same lines as https://github.com/cloudposse/root.cloudposse.co/blob/master/conf/iam/terraform.tfvars
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
^^ gets munged together with https://github.com/cloudposse/terraform-root-modules/tree/master/aws/iam
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
I’d even have a tfvars.example for the ones that are not inherited, maybe two: tfvars.geodesic-example
tfvars.standalone-example
Aye, could do https://www.terraform.io/docs/configuration/variables.html#variable-files - so like above you’d want them to be *.auto.tfvars
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
why .auto.tfvars?
oh
duh
@Jan Hows that working out?
what: I would like to be able to pull any of the cloudposse terraform modules in relation to geodesic and have the same expected use as a convention … and that needs to be a convention that can be taken…
but I’d like to see an <modules_name>.auto.tfvars.example file in every cloudposse terraform module
to make it explicit which things are required / inherited / defaults
@Jan all modules are ‘identity-less’, they don’t care about how and where they get deployed. All input vars are provided from higher-level modules. In many cases, they are provided as ENV vars from the Dockerfiles for the corresponding environment. In this case, we are able to re-use all modules without changing anything for all possible configurations of AWS accounts, stages/environments, etc.
terraform-docs is in use for auto-creating READMEs of variable requirements etc.
So you would still not describe how to use these modules alongside geodesic?
The modules really exist out side of geodesic.
Then terraform-root-modules
is 100% about how we use them
the root modules are the most opinionated incarnations.
they are so specific, that we’re distributing them more as examples of how we organize our infrastructure. they are meant more for reference than to be taken wholesale.
So I think where my understanding went very wrong was when I DIDN’T understand that the terraform-root-modules are not part of geodesic
AHA
I’ll call that out.
also, we’ve many times debated what to call them
“root modules” is a very literal name based on what terraform calls them. but root is also very overloaded.
i like the term “catalog” that you mentioned
Maybe call them example-company-terraform-modules
Normally in a company I build up a terraform service catalog, just a collection of tf modules that have the opinions, best practices, and non-functional requirements of that organization included
so if you’re already using a container like geodesic
, then you define all required ENV vars and copy all required modules to the final container
in the Dockerfile
that still means I need to find the required vs defaults
no need to use .tfvar
files and git submodules
but when you use a module, you see all its variables
you can update whatever you need
it’s a simple pattern, add ENV vars and copy the modules you need
@Andriy Knysh (Cloud Posse) rather than using submodules, would the other way be to just dump your high level module for root.example.com in the conf/foo
dir ?
we copy all modules we need in Dockerfiles
that’s why we use terraform-root-modules
I see the terraform modules and geodesic as separate. Geodesic including makefile/dockerfile etc is the glue that is necessary for orchestration. The other modules are there to use if you want to
Ah indeed, that is another way
but if you want to have another repo to copy from, and don’t want to put it into terraform-root-modules
, then you add a Dockerfile to that repo, build an image, and then use Docker multi-stage to copy it into the final image
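roughly like this (image names, tags, and paths are illustrative; pin whatever you actually use):
# Dockerfile in your account repo
FROM example-org/terraform-modules:0.1.0 as terraform-modules
FROM cloudposse/geodesic:latest

# copy only the root modules this account actually needs
COPY --from=terraform-modules /aws/vpc/ /conf/vpc/
COPY --from=terraform-modules /aws/iam/ /conf/iam/

ENV TF_VAR_namespace=example
ENV TF_VAR_stage=prod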
you can use other ways of doing it, e.g. submodules, var files, etc., but it’s a completely diff pattern
~/dev/resources/cloudposse/forks/root.cloudposse.co master find . | grep tfvars
./conf/terraform.tfvars
./conf/atlantis-repos/terraform.tfvars
./conf/iam/terraform.tfvars
./conf/users/terraform.tfvars
./conf/root-iam/terraform.tfvars
so copying modules and specifying vars are separate concerns
then the use of dockerfile env vars and .tfvars and auto.tfvars.example is very confusing
yes @Jan you’re correct, we used the vars inconsistently
they all do the same thing at different places
by specifying some of them in Dockerfiles, and some in var files
that could be improved
and there is no stated explanation of where and how to use which one
@Andriy Knysh (Cloud Posse) What is the reasoning behind https://github.com/cloudposse/geodesic-aws-atlantis being a submodule but TF code (high level modules like terraform-root-modules) is Docker multi stage?
Geodesic module for managing Atlantis with ECS Fargate - cloudposse/geodesic-aws-atlantis
atlantis
was just a test, it’s not in the official docs
@Erik Osterman (Cloud Posse) was playing with that
Ah OK
@Jan to define vars, you can use just Dockerfiles, just var files, or a combination of them
and when I get to 60+ tf modules?
i agree, one way would be better to not confuse people
I have a massive dockerfile with vars
yea, I’m more and more anti-overloading-the-dockerfile
I think it’s great for defining true globals, but more than that it’s confusing because the variables are not tightly coupled with the code they modify
we’re not yet opinionated down to how envs are managed.
i think the .tfvar
or .envrc
(direnv) approaches are both nice.
for someone looking at this strictly from a terraform perspective, they might be inclined to use .tfvar
while someone looking at this from a more general perspective that also works with kops
, might like the direnv
approach. i think we can leave this up to the end user.
but we should document the pros/cons and various env management strategies
what: Describe the problem with too many environment variables and how to manage them. why: There are a lot of ways to manage them. There are tradeoffs with all of them - thus a true "best practi…
Agreed on documenting the pros/cons. Even more so, I would keep a list of these decisions where it can work either way and state which way geodesic went with and why (doesn’t mean that the other ways can’t be used)
Yea, I’ve seen some companies do that really well. We’ve not started that best practice.
I think it would help a lot when starting fresh and reading the docs/code base to try to get “project context”
Yea, agreed. I think we’re able to start doing that now. Fewer decisions happening on a day-to-day, week-to-week basis.
every time I need to set a new var
and then I need to worry about unique var names at the env level?
you would have to worry about that in any case, in Dockerfiles or in var files, or any other place
There is no getting away from having to set vars
^^
sure,….
are you doing 1 geodesic module per aws account or smaller units?
We started originally with one geodesic per cluster, but currently lean more towards one geodesic per account. This has worked so far for our customers, but perhaps other organizations need different levels of logical grouping. I think this can be open ended.
I will experiment with this a fair bit I suspect, like building an account-specific geodesic image and then a project-specific geodesic module using the account image as a layer below
yep! agree with that
so I mean if I have a “dev” account
per environment (prod, staging, dev, etc.)
which could be in separate AWS accounts, or in one (depending on requirements and client requests)
and I want multiple VPCs and then I want to pass name = "${var.name}"
yes, but you have different stages
, so no naming conflicts even if you deploy all stages into one account
I don’t want to then have an ENV TF_VAR_fooname="devFoo"
per vpc / project / resource
name
should be the same for all accounts/environments (prod, staging, dev)
that’s why we have stage
name in the sense of vpc?
in the sense of application/solution name
(im looking at the terraform vpc module, just to be clear here)
for example, your VPC could be named like this:
cp-prod-myapp
for prod
and in the case where I have a pre-production account that has like 30+ vpcs?
This is where you probably treat those as separate projects. Each with their own TF state.
cp-staging-myapp
for staging
then you use attributes
to add additional params to each ID generated by the label
modules
for example
cp-prod-myapp-unit1
cp-prod-myapp-unit2
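with the label module that’s basically (a sketch using the names from above):
module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.3"
  namespace  = "cp"
  stage      = "prod"
  name       = "myapp"
  attributes = ["unit1"]
}

# module.label.id => "cp-prod-myapp-unit1"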
And where are you setting the attributes then?
as env vars in your dockerfile
you can set them in a few diff ways:
- Directly in a higher-level module
- In Dockerfile
- in var file
it all depends on a particular use-case
and you either hardcode them (e.g. in a higher-level module) if they never change, or use ENV vars if you think they might change
but don’t worry about name collision between environments/stages, you have stage
for that
so lets say for example using the backing-services root module
keep in mind that the backing-services
root module is more like a reference architecture than something you should be leveraging wholesale.
backing-services
are specific to our organization
e.g. some other company might want to have mongo thrown in there
another way of thinking about terraform-root-modules is that it’s your highly proprietary integration of all the other modules
the “top most level” modules are what make your infrastructure specific to the problem at hand that you’re solving.
so i don’t think we could ever successfully generalize that in a meaningful way without creating crazy bloat. terraform-root-modules
is your “starting off point” where you hardfork/diverge.
to create the vpc
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
what is the expectation for setting this to a cidr I want
mmm
wait a sec
you need to create your own terraform-root-modules
or similar repo with diff name if needed
and provide the CIDR block you need (hardcoded or from var)
the backing-services is a kops/k8s specific vpc?
in our terraform-root-modules
yes, it was intended to be used and peered with a kops
VPC
in your terraform-root-modules
, it could be in the way you want
our terraform-root-modules
was not intended to be copied verbatim, it’s just a reference architecture
but it uses the TF modules, which could be used everywhere
the label
module is the central piece here
it allows you to specify unique names for all environments without naming collisions
and it does not matter if you deploy everything into just one AWS account, or multiple accounts per environment
So I mean I missed that this part is meant to be copied and modded to my needs
is it stated someplace in the getting started?
here is an example of using attributes
in the same module to create unique names if you have more than one resource of the same type (e.g. role, distribution, etc.) https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L7
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
So I mean I missed that this part is meant to be copied and modded to my needs
@Jan that’s probably not stated in the docs
yea they are not perfect
So I mean from a “fresh to the project” perspective I’m not sure there is enough explained about how it’s meant to be used as yet
I get they are not perfect
docs is always the most difficult thing to create
you either want to be concise, or you spend a year to create a book
but maybe it would add huge value to sit down with a fresh user and write down the step-by-step requirements to get from 0 to a set account/vpc/resources blueprint
A module in terraform-root-modules is little more than some variables and calling other modules which do the heavy lifting.
maybe take a multi az wordpress on k8s with multiple envs
and rds / elasticache
all good points @Jan
I have approached diving into the docs and code base knowing what I want to create (and how to do so in terraform) in order to try understand how geodesic would work
thanks again
so yea, geodesic
is just a container with a bunch of tools required to provision infrastructures
terraform-root-modules
is an example of a reference architecture, shows how to use the TF modules, and is just one way of doing things
well it’s been a pretty frustrating process (to be fair I was warned it would be)
so let me start back at the beginning with my new understanding
how would I best go about:
- setting up a geodesic project with 7 accounts (all pre-existing)
- setting up users to be created
- setting up env / project vpc(s) in these accounts
- setting up resources in some of these
- peering VPCs selectively
This is your secret sauce. We could never implement this in a generic way.
This is what you’d describe in root modules that get invoked in various geodesic repos.
I had before used and modified the reference architecture
and have that all working so far as admin users with cross account sts assume and dns delegation with acm etc
if you were to describe how to do my list, not in depth, what should I follow
you already forked terraform-root-modules
just did so
that’s your collection of module invocations
start updating it to suit your needs
e.g. change the VPC CIDR, add more modules you need, remove the modules you don’t need
then it would be ready to be deployed to all accounts
note that you can add some modules to it which could be used only in some accounts, but not in the others
let me go remove everything I created previously first
then in Dockerfile(s) just copy what you need
per account
the COPY
commands will be different for all accounts b/c you need different modules in each (some modules are the same for all)
So the root modules would become the whole module catalog
of your organization
it’s your flavor of how to leverage the generic building blocks to achieve the specific customizations that you need.
without needing to write everything from the ground up. the idea is to write all terraform-aws*
modules very generically
then a company writes their own root modules
root modules is like the main(...) { }
of your infrastructure
create 7 repos, one per account
other than fully remote TF modules
7 Dockerfiles
and update all ENV vars in the repos to reflect the accounts
yes, exactly, the root modules would become the whole module catalog
the whole infrastructure catalog, for ALL accounts
and then afterwards when I want to add 3rd party ones to only some accounts?
either add them to the root modules (if you think it makes sense and you might want to re-use them somewhere else), or add them directly to one of the infra repos
if you want to add them to more than one infra repo, then it’s better to add them to the root modules
mmm
how large do these root modules end up becoming?
reflecting the whole infrastructure
I might just batch them into categories or something
but sure, you can split it into a few repos
and add them to the final Dockerfile as multi-stage
the point here is that you could re-use the code from terraform-root-modules
in many environments (prod, staging, dev), some of them will be the same, some unique to a particular repo
(wow i have some reading to do today)
hahah
expect some threaded conversations
# Disable Role ARN (necessary for root account on cold-start)
[ "${DISABLE_ROLE_ARN}" == "0" ] || sed -Ei 's/^(\s+role_arn\s+)/#\1/' main.tf
nice addition!
though still a nasty hack!
better this way than 1 vim and 1 sed
@Andriy Knysh (Cloud Posse) thanks for your explanations earlier btw
I seem to be getting on now
no problem, looks like it @Jan
cool vpc done
however the subnet math seems a bit odd…
I provided a /22 range over 3 AZs, 2 tiers, and ended up with 6 x /26s
mmmmm
we generate subnets in a stable manner
that takes into account a max number
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
so first off I was surprised by ipv6 being enabled
Maximum number of subnets that can be created. The variable is used for CIDR blocks calculation
still…
I mean I did expect 6 subnets, which is what I got back
they are just smaller than expected
the full calculation is broken down in the README
it’s rather complicated (and necessarily so)
crap
i don’t see the calculation there anymore
i think i’m getting confused by our other subnet module
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
(@Jan sorry i’m busy onboarding a couple new hires today so i’m very distracted)
so I mean a /22 would be 1022 usable IPs, so even removing say 5 for aws use per subnet I would still expect like 165-odd usable per subnet
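(for reference: if the module carves up the /22 based on its max subnet count, which I believe defaults to 16, that’s 4 extra bits, which would explain the /26s)
# 2^4 = 16 >= max subnets, so each subnet is /22 + 4 = /26 (64 addresses, ~59 usable)
cidrsubnet("10.0.0.0/22", 4, 0)   # => 10.0.0.0/26
cidrsubnet("10.0.0.0/22", 4, 1)   # => 10.0.0.64/26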
all good mate
it’s like 10:20pm here so probably won’t carry on much longer
i’ll catch up on this discussion later on today
cool I’m gonna carry on for now since I will rebuild these VPCs anyway
maybe when you are back, how would I best go about creating multiple vpcs within a single geodesic module
this is where you start now with your clean terraform-root-modules
repo and start to create a new project therein to describe the infrastructure that you want.
no 2 companies will ever have the same terraform-root-modules
ping me when you’re online
we can resync
say stage{a,b,c,d,e}.example.com
I have a pre-calculated list of /22s and /23s that would get used, just read from a remote state output of a map
stagea.example.com = 10.0.0.0/22
etc
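in terraform that lookup could look roughly like this (bucket, key, and output names are made up):
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-west-2"
  }
}

# where the cidr_map output is e.g. { stagea = "10.0.0.0/22", stageb = "10.0.4.0/22", ... }
# then in the vpc invocation: cidr_block = "${lookup(data.terraform_remote_state.network.cidr_map, var.stage)}"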
what: the root modules are the most opinionated incarnations of modules that seldom translate verbatim across organizations. This is your secret sauce. We could never implement this in a generic way…
@daveyu or @Max Moon any thoughts you’d like to add to this issue?
what: the root modules are the most opinionated incarnations of modules that seldom translate verbatim across organizations. This is your secret sauce. We could never implement this in a generic way…
That clears things up nicely. I would suggest having this information included into the getting started guide
I reached back out to our technical writer to see if she has availability
I’ll make a note that we should also call this out in the quickstart
I created that issue to better address the purpose of root modules
root modules would become the whole module catalog of your organization
2018-12-11
Get started with one of our guides, or jump straight into the API documentation.
wrong channel
oops
2018-12-12
set the channel description: Discussions related to https://github.com/cloudposse/geodesic
2018-12-19
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
rejoice!
2018-12-23
Debug your pod by a new container with every troubleshooting tools pre-installed - aylei/kubectl-debug
Nifty for debugging
2018-12-28
@jbye I think it’s good you’re looking at both Gruntwork’s library as well as our own. They build solid stuff.
The key differentiator is our approach.
1) we don’t rely on a wrapper like terragrunt
2) we containerize our entire toolchain and use docker extensively to deliver the solution
3) everything is 100% apache2 open source (we accept most PRs)
I definitely like that 3rd point; it’s very hard for us to evaluate Gruntwork without being able to look at any real source code
(i think they will let you do that though if you maybe sign an NDA…)
Also, we just released: https://github.com/cloudposse/reference-architectures
this is all about laying out the account architecture which is central to our approach
and then we provide an example service catalog for how we use our 100+ terraform modules here: https://github.com/cloudposse/terraform-root-modules
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
They offer a 30-day money back agreement after purchase, so you have to pay, check it all out, then cancel within 30 days if you don’t like it
are you looking to adopt kubernetes?
Eventually, yes. For now, though, we are looking at an ECS-based architecture
ok
Have a look also at #airship by @maarten for ECS
@maarten has joined the channel
This is what one of our single account architectures looks like: https://github.com/cloudposse/testing.cloudposse.co
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
Basically every AWS account is a repo, defined with a Dockerfile and pulls in source from a shared repository (terraform-root-modules) of code.
Interesting
And we deploy https://www.runatlantis.io/ with each account
Atlantis: Terraform For Teams
gruntworks takes a different approach. I think they have a monorepo for all AWS accounts and then a service catalog like ours with apps to deploy therein.
they use a shared terraform statebucket for all accounts
we deploy (1) state bucket/dynamodb per account and share nothing (heck even our account repos share nothing)
this lets every account act autonomously and be managed by different teams
Thanks for the info, Erik. I’ll have to take some time to digest it and see how it fits with how we are doing things. We already have a big chunk of Terraform code we wrote, but it has a fair number of issues and is all based around EC2 instances, not ECS. So as we move to using containers, we’re looking at a fresh start using a more mature codebase.
yea, this is a good transition point to consider other alternatives
Added some more thoughts here: https://github.com/cloudposse/docs/issues/351
what: Describe our differentiators. Gruntworks is an awesome contributor to open source and demonstrates solid engineering skills. They have a vast, well-tested, library of proprietary terraform modul…
@Erik Osterman (Cloud Posse) do you have an infrastructure diagram of any sort showing how you lay out the VPCs, bastions, peering, etc. in your reference arch?
good question: at this point, we don’t opine down to that level.
we provide the various components, but don’t say what your network should look like
Check out our various subnet modules that implement the different kinds of strategies