#geodesic (2019-08)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2019-08-01
Is it possible to override the terraform backend S3 key? I created the following top-level folder structure:
root
-- vpc
-- -- terraform.envrc
I would like the key to be root/vpc/terraform.tfstate, so I set TF_CLI_INIT_BACKEND_CONFIG_KEY=root/vpc/terraform.tfstate.
This does not override the default; the key ends up being vpc/terraform.tfstate. Is it possible to override the key?
Got it. I needed to remove ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd" from my Dockerfile so it would use the full directory path after /conf.
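To illustrate the difference, a rough sketch (the derivation below is illustrative, not the actual tfenv code):
cd /conf/root/vpc
# With TF_BUCKET_PREFIX_FORMAT="basename-pwd", only the leaf directory is used:
basename "$(pwd)"            # => vpc, so the key becomes vpc/terraform.tfstate
# Without it, the full path after /conf is used:
pwd | sed 's|^/conf/||'      # => root/vpc, so the key becomes root/vpc/terraform.tfstate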
You got it!
Ran into another issue: I have created additional root-dns entries that have a hyphen in the name. The label is removing the hyphen from the stage; however, I assumed regex_replace_chars should not replace hyphens. Is there something else that could be stripping the hyphen?
Got it. Took some digging to realize I was on an older version of terraform-null-label from reference-architecture. Updated to 0.11.1 and it works perfectly.
ah sweet!
yes, we had a bug I think related to that
2019-08-02
2019-08-04
Regarding helmfiles, are multi-stage Dockerfiles the current approach? How does that relate to /templates/conf/helmfiles/... in reference-architectures?
great that you ask
no - we’re using remote helmfiles pinned to releases now
It looks like this:
# Ordered list of releases.
# Terraform-module-like URLs for importing a remote directory and using a file in it as a nested-state file.
# The nested-state file is locally checked-out along with the remote directory containing it.
# Therefore all the local paths in the file are resolved relative to the file.
helmfiles:
- path: git::https://github.com/cloudposse/helmfiles.git@releases/reloader.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/cert-manager.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/prometheus-operator.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/cluster-autoscaler.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/kiam.yaml?ref=0.51.2
- path: git::https://github.com/cloudposse/helmfiles.git@releases/external-dns.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/teleport-ent-auth.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/teleport-ent-proxy.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/aws-alb-ingress-controller.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/kube-lego.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/nginx-ingress.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/heapster.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/dashboard.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/forecastle.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/keycloak-gatekeeper.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/fluentd-elasticsearch-aws.yaml?ref=0.47.0
- path: git::https://github.com/cloudposse/helmfiles.git@releases/kubecost.yaml?ref=0.50.0
2019-08-05
Okay; the reason I was asking is I attempted to add the /helmfiles/ templates to configs/prod.tf in reference-architecture. It failed to render templates as it was looking for a few env vars that are not set up (KOPS_CLUSTER_NAME and STAGE). That got me thinking those templates are just reference files for after the initial architecture is set up. Is that the general idea?
So the Helmfiles will require a ton of settings
We do not have those documented. But if you share specific ones I can help.
Usually we set STAGE in the Dockerfile. If you only have one cluster in a stage, then you can also set KOPS_CLUSTER_NAME in the Dockerfile so it’s available globally.
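For example, something like this (values are placeholders, not from the ref-arch):
# Set once per stage so every tool in the shell can rely on them
export STAGE=prod
export KOPS_CLUSTER_NAME=us-west-2.prod.example.com
In the Dockerfile these would be the equivalent ENV lines.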
Yeah, I’m planning to do that. It was more of a reference to that value not being set in the ref-architecture when attempting to add it to templates in prod.tfvars. Helmfiles would be a post-bootstrap step, I’m assuming.
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic
, get live demos and learn from others using it. Next one is Aug 14, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Hi! I’ve been reviewing the terraform-root-modules repo in order to help bring a little order to the AWS chaos I’ve inherited, but I’m having trouble understanding where real human users are managed.
I see that aws/users seems to be set up for this, but it seems incomplete or something: the welcome.txt references a username var that doesn’t exist, and isn’t used as a template source as far as I can see anyway.
@Tim Jones Have you seen the reference-architectures repo? It leverages the root-modules. It creates your admin users using the aws/users root-module.
@Tega McKinney yes but it’s been very much broken with the release of Terraform v0.12
I’m having trouble building from geodesic with terraform 0.11 since 0.117.0. Is there a better way to use both 0.11 and 0.12?
Yes, but I am afk
@Andriy Knysh (Cloud Posse) can you show Dave
Basically install terraform_0.11@cloudposse
And write “use terraform 0.11”
In your .envrc
where/when do I install 0.11? before geodesic 0.117.0, apk add terraform_0.11@cloudposse
worked in the Dockerfile
Yep, add that to your Dockerfile
This way we can support multiple concurrent major/minor versions
Yes, but during docker build:
ERROR: unsatisfiable constraints:
terraform-0.12.0-r0:
breaks: world[terraform=0.11.14-r0]
The command '/bin/sh -c apk add terraform_0.11@cloudposse terraform@cloudposse==0.11.14-r0' returned a non-zero
Show me your Dockerfile
You need to remove the second package there
dockerfile
RUN apk add terraform_0.11@cloudposse==0.11.14-r0
is what you want
the long & short of it is that since we upgraded to the alpine:3.10 series, there’s been no new 0.11 release, so no package for 0.11 was built under terraform.
however, we explicitly build a terraform_0.11 package and a terraform_0.12 package, like a python2 and python3 package
and from alpine:3.10, the terraform package will be 0.12.x
behind the scenes, we’re installing a symlink at /usr/local/terraform/x.y/bin/terraform that points to /usr/local/bin/terraform-x.y
that way when we write use terraform 0.11, we can set PATH=/usr/local/terraform/0.11/bin:$PATH and it will automatically find the correct version of terraform without changing code and without using an alias
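Roughly, the mechanism looks like this (paths as described above; a sketch, not the actual packaging scripts):
# Each versioned package ships its own binary...
ls /usr/local/bin/terraform-0.11 /usr/local/bin/terraform-0.12
# ...plus a per-version dir whose terraform is a symlink back to it:
ls -l /usr/local/terraform/0.11/bin/terraform   # -> /usr/local/bin/terraform-0.11
# "use terraform 0.11" in .envrc then just prepends that dir to PATH:
export PATH=/usr/local/terraform/0.11/bin:$PATH
command -v terraform   # now resolves to the 0.11 symlink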
@dave.yu this is what we used in Dockerfile to install both TF 0.11 and 0.12 under geodesic 0.117
# Install terraform 0.11 for backwards compatibility
RUN apk add terraform_0.11@cloudposse
# Install terraform 0.12
RUN apk add terraform_0.12@cloudposse terraform@cloudposse==0.12.3-r0
then, for the modules that use TF 0.12, we use
use envrc
use terraform 0.12
use tfenv
and for the modules that use TF 0.11
use envrc
use terraform 0.11
use tfenv
2019-08-06
thanks @Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse), RUN apk add terraform_0.11@cloudposse was the trick (instead of RUN apk add terraform_0.11@cloudposse terraform@cloudposse==0.11.14-r0)
2019-08-07
#office-hours starting in 15m https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
2019-08-08
Has anyone configured kiam beyond the helmfile defaults for cross-account resource access from kops pods? I’m running into a situation where my cross-account policies are not allowing me access, and I’m thinking maybe kiam is not properly set up to assume roles
I think it’s starting to make sense now. I hadn’t realized kiam-server roles were not set up and no sts:AssumeRole on the masters was configured either.
FYI - this TGIK by Joe Beda helped in that understanding. https://www.youtube.com/watch?v=vgs3Af_ew3c
@Erik Osterman (Cloud Posse) Any reason that, with the kiam setup, it is essentially using the master nodes’ role vs creating a kiam-server-specific role and allowing it to establish the trust relationship with pod roles?
no, but what you describe is a better practice
basically, there should be a separate node pool strictly for kiam-server
and assumed roles
I did not go as far as a separate node pool however I did create the kiam-server assume-role and set assume-role arg on kiam-server instead of it detecting the node role.
Right now we run the kiam-server on the masters, and we treat the master role created by kops as the kiam-server role. As far as I can see, there is not much added security or convenience in creating a separate kiam-server role until you get to the point of creating separate instances for the kiam servers and giving them instance roles for the kiam-server. In our configuration, anything on the master nodes can directly assume any pod role that kiam can authorize. With a separate kiam-server role, this is still the case; it’s just that there would be an extra intermediate step of assuming the kiam-server role.
To answer your question @Tega McKinney, the reason we treat the master role like it is the kiam server role is because it is a lot easier. While we will likely do it eventually, it is going to be a lot of work to separate out the kiam server role from the master role in all of our Terraform.
2019-08-09
have not tried to accomplish cross account kiam
generally we try never to cross account boundaries
2019-08-11
2019-08-12
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic
, get live demos and learn from others using it. Next one is Aug 21, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-08-15
Hello all, I’m playing with geodesic
and it doesn’t seem to be generating the dynamic cluster correctly. I get output that is example.foo.bar
.
$ export CLUSTER_NAME=test.myco.com
$ docker run -e CLUSTER_NAME \
    -e DOCKER_IMAGE=cloudposse/${CLUSTER_NAME} \
    -e DOCKER_TAG=dev \
    cloudposse/geodesic:latest -c new-project | tar -xv -C .
Building project for example.foo.bar...
./
example.foo.bar/Dockerfile
example.foo.bar/Makefile
example.foo.bar/conf/
example.foo.bar/conf/.gitignore
I’m looking through the various sources; it’s pretty challenging to piece it all together (feels over-abstracted). So wondering if I’m missing something.
Also, I'm new to Terraform, so the concepts for how it’s organized are a bit confusing atm
2019-08-16
Hey Ryan - sorry - our docs are really out of date.
Here’s an example of how we use it: https://github.com/cloudposse/testing.cloudposse.co
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
2019-08-19
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic
, get live demos and learn from others using it. Next one is Aug 28, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-08-22
Any thoughts on why I receive the following error:
✗ . (none) backend ⨠ terraform init
Copying configuration from "git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"...
Error: Can't populate non-empty directory
The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.
Latest Geodesic.
✗ . (none) backend ⨠ ls -la
total 16
drwxr-xr-x 2 root root 4096 Aug 22 15:00 .
drwxr-xr-x 3 root root 4096 Aug 22 09:49 ..
-rw-r--r-- 1 root root 380 Aug 22 15:29 .envrc
-rw-r--r-- 1 root root 122 Aug 22 15:00 Makefile
⧉ oscar
✗ . (none) backend ⨠ cat .envrc
# Import the remote module
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"
export TF_CLI_PLAN_PARALLELISM=2
source <(tfenv)
use terraform 0.12
use tfenv
add export TF_MODULE_CACHE=.module
@oscar It has been discussed here a while back, check through history, as @Andriy Knysh (Cloud Posse) says ^^
Thanks both, I did actually have a scan through 0.12 and geodesic channel but couldn’t find it.
and in Makefile.tasks, change it to:
-include ${TF_MODULE_CACHE}/Makefile

deps:
	mkdir -p ${TF_MODULE_CACHE}
	terraform init

## Reset this project
reset:
	rm -rf ${TF_MODULE_CACHE}
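Putting it together, the project’s .envrc ends up something like this (same settings as above, with the cache folder added; treat it as a sketch):
# Import the remote module into a cache folder instead of the project root
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"
export TF_CLI_PLAN_PARALLELISM=2
export TF_MODULE_CACHE=.module
source <(tfenv)
use terraform 0.12
use tfenv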
Ah you know I can see it now
19th July
what: Use an empty cache folder to initialize module. why: Terraform 0.12 no longer allows initialization of folders with even dot files =( Example of how to use direnv with terraform 0.12. usage: e…
Thanks just had a read of that
Makes sense. I must have missed this this afternoon!
is source <(tfenv)
required?
@Andriy Knysh (Cloud Posse) I get this from the new Makefile
✗ . (none) backend ⨠ make
Makefile:4: *** missing separator. Stop.
⧉ oscar
✗ . (none) backend ⨠ make reset
Makefile:4: *** missing separator. Stop.
⧉ oscar
✗ . (none) backend ⨠ make deps
Makefile:4: *** missing separator. Stop.
Makefile:4: *** missing separator. Stop.
it’s usually caused by spaces (replace them with tabs)
2019-08-23
Thanks =] All working now
2019-08-26
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic
, get live demos and learn from others using it. Next one is Sep 04, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-08-27
How are folks doing vanity domains using geodesic / ref arch ?
Currently the parent dns name (foo.com) lives in the root account, and by default, users don’t have admin in root
So no one can actually delegate example.foo.com down to example.prod.foo.com
Easily solvable but wondering what folks are doing
by design, the branded domain is provisioned in the root account, since it will contain references to the service discovery domain in any account.
e.g. corp/shared account, prod account, data account, etc.
branded = vanity
terraform {
  required_version = "~> 0.12.0"

  backend "s3" {}
}

provider "aws" {
  version = "~> 2.17"

  assume_role {
    role_arn = var.aws_assume_role_arn
  }
}

variable "aws_assume_role_arn" {
  type = string
}

data "aws_route53_zone" "ourcompany_com" {
  # Note: The zone name is the domain name plus a final dot
  name = "ourcompany.com."
}

# allow_overwrite lets us take over managing entries that are already there
# use sparingly in live domains unless you know what's what
resource "aws_route53_record" "apex" {
  allow_overwrite = true
  zone_id         = data.aws_route53_zone.ourcompany_com.zone_id
  # name is required; the zone name itself is the apex record
  name    = data.aws_route53_zone.ourcompany_com.name
  type    = "A"
  ttl     = 300
  records = ["1.2.3.4"]
}

resource "aws_route53_record" "www" {
  allow_overwrite = true
  zone_id         = data.aws_route53_zone.ourcompany_com.zone_id
  name            = "www.${data.aws_route53_zone.ourcompany_com.name}"
  type            = "CNAME"
  ttl             = 300
  records         = ["app.prod.ourcompany.org"]
}
Right, but using the ref arch, even admin users in the root account don’t have access to do anything to route53 resources
They should if they assume the right role.
Which role is that?
namespace-root-admin. In the ref arch you pick a namespace for all your accounts, like cpco, and then the role in your prod environment is cpco-prod-admin. So you also get a cpco-root-admin role that you can assume to do whatever you need to do in the root account.
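In plain STS terms, that just means assuming a different role ARN, e.g. (account ID is a placeholder, namespace per the cpco example):
# Sketch: assume the root-admin role to manage Route53 in the root account
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/cpco-root-admin \
  --role-session-name manage-root-dns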
2019-08-28
Yes, similar to the above, I would create an R53 role in the root account, assume the role using a second provider block, and then use the root account provider for the R53 record resource.
Why not add route53 perms to the root admin group that already exists?
Currently the users in root admin group, in the root account can only do IAM stuff for their own users
Ah, nevermind me
What was the issue/thing for future reference?
Me being stupid and missing the admin role
#office-hours starting now! join us here https://zoom.us/s/508587304
2019-08-30
@Erik Osterman (Cloud Posse) what is to stop a team just having one Geodesic module and then having nested .envrc files in /conf to simulate global variables? The only disadvantage I spot is the swapping of AWS credentials, which could be easily solved.
It can work that way
Nothing stopping anyone.
basically it just comes down to our convention: how to manage multiple different tools pinned to different versions in different stages.
so the problem with the one container is pretty much all tools have to be the same version
or you need to use a version manager for every tool
most OSes don’t make it easy to have multiple versions of software installed at the same time
terraform is one tool that needs to be versioned
but helm is very strict about versions of client/server matching. not pinning it means you force upgrading everywhere.
recently we upgraded alpine, which upgraded git. then we saw the helm-git provider break because some flags it depends on were broken
recently we upgraded variant and that broke some older scripts
my point is just that having one shell for all environments makes it difficult to be experimental while not also breaking prod toolchain
also, 99.9% of companies don’t worry about this. i basically see most companies operate infrastructure in a monorepo and don’t try to strictly version their toolchain the way we do in a way that allows versions of software to be promoted. we’ve just been really strict about it
e.g. /conf/prod/.envrc (all the account specific vars)
BAU: /conf/prod/eu-west-1/terraform/my_project/* /conf/prod/eu-west-2/terraform/my_project/*
Likewise an interesting conversation came up internally yesterday:
In the same way we would promote code in git dev -> master… how would one promote new variables etc. along the Geodesic /conf? Or new version pinning in the .envrc from-module line? The only solution to make it easy is to have all the Geodesic modules in one repo so that it would be easily spotted if you missed updating one environment in the PR.
I feel I have not seen the Cloudposse way
it’s not the way we do it, true… but that doesn’t make it wrong
most companies do structure it this way
this is the way terraform recommends it
this is the way terragrunt recommends it
at cloudposse, we’ve taken the convention of “share nothing” down to the repo
which opens up awesome potential
it means the git history isn’t even shared between stages
it means webhooks aren’t shared between stages
it means github team permissions aren’t shared between stages
it means one PR can never modify more than one stage at a time
forcing the convention that you strictly test before rollout to production
the mono repo convention requires self-discipline, but it doesn’t enforce it.
Nominating this thread (and Erik’s additional “we don’t share…” statements from later this same day) for pinned status.
but in geodesic you have all your env vars because you are wrapping geodesic in your env specific container, no?
Not sure I follow… but normally you would have acme.stage.com, and I wonder why not have acme.com
and then:
/conf/dev/.envrc (with the equivalent variables of the Dockerfile)
/conf/prod/.envrc (with the equivalent variables of the Dockerfile)
It’s a bit weird, but something like that came up in a meeting when I was introducing Geodesic to another team and I didn’t really have an answer other than “that isn’t the way”, but it could work without losing any features so…?
yep, you can totally do it this way
where you have one folder per stage
we do something very similar for regions
/conf/us-east-1/eks and /conf/us-west-2/eks, where the us-east-1 folder has an .envrc file with export REGION=us-east-1
this could just as well be modified to /conf/prod/us-east-1/eks, where the /conf/prod/.envrc file has STAGE=prod
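A sketch of that layout (directory names per the example above, contents illustrative; see the source_up note further down for how nested .envrc files inherit):
# /conf/prod/.envrc
export STAGE=prod
# /conf/prod/us-east-1/.envrc
export REGION=us-east-1
export AWS_REGION=us-east-1
# /conf/prod/us-east-1/eks/.envrc
# project-specific settings (module source, terraform version, etc.)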
It’s a matter of “what are you trying to solve for?” It is, in fact, tedious to roll out changes to 4 environments under the reference architecture, in that you need to checkout, branch, commit, pull request, merge, and release 4 times. With everything in 1 repo you can do less with Git, but you still need to apply changes in 4 folders, but now your PRs could be for any environment, so following the evolution of just one environment becomes much harder. Having a 1-to-1 mapping of Geodesic shells to environments just makes it a lot easier to keep everything straight.
@Erik Osterman (Cloud Posse) Given the example above, do your REGION and AWS_REGION environment variables not get overwritten when you cd into the eks directory? I have a similar structure but my environment variables are being overwritten.
I have a structure with /conf/<tenant>/eu-west-1/platform-dns where I set /conf/<tenant>/eu-west-1/.envrc to REGION=eu-west-1, however REGION is being overwritten back to eu-central-1 when I change directory into /conf/<tenant>/eu-west-1/platform-dns
@Tega McKinney You need to put source_up as the first line of your .envrc file in order to pick up settings in parent directories. Otherwise direnv only uses the settings in the current directory to override whatever is in your environment, which means you get different results if you
cd /conf
cd eu-west-1
cd platform-dns
than if you just
cd /conf
cd eu-west-1/platform-dns
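i.e. a minimal sketch of the two files from the example above:
# /conf/<tenant>/eu-west-1/.envrc
export REGION=eu-west-1
# /conf/<tenant>/eu-west-1/platform-dns/.envrc
source_up        # first line: pull in REGION etc. from the parent .envrc
# ...project-specific settings follow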
More of a probing question than ‘how to do this’. Curious what others think, specifically Erik and Jeremy
The first thing I know would be “broken” is role assumption to AWS. It’d be really easy to assume a role in dev, do some stuff, then not be able to apply a different stage because you’re still in the dev account.
I think the separation can be healthy even if it’s not DRY. It’s like the age-old “do we deploy centralized logging per stage or put all stages in one place” kind of argument. There’s pros and cons to both, but if you’re trying to prevent leakage between environments, why not prevent it at the tool level too
The separation between stages also allows you to test changes to your tool chain in safe environments, rather than break the production one.
I think we could even get around the role assumption piece
the role assumption works on ENVs as well
so if you assume the role in the proper folder, it assumes the proper role
but if the user then goes to /conf/dev while being assumed as prod, GOOD LUCK!!!
again, most companies aren’t strict about enforcing this stuff. they get away with it. we just make sure it’s that much harder
So this can be done easily and safely. You have aws profile set in the conf .envrc and it just assumes the role based on that, but my company uses AD to authenticate to AWS so aws-vault doesn’t work, so it is done externally. Bit vague; I can go into more technical detail if anyone is curious.
we’ve used it with SSO too (okta)
You have aws profile set in the conf .envrc and it just assumes the role based on that
it’s just hard to enforce what happens if they leave a folder
you need to ensure that in the transition from /conf/prod to /conf/dev they still don’t have their prod role assumed
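One hedged way to wire that is to set the profile per stage in each .envrc, so direnv swaps it as you move between folders (illustrative names; it still doesn't help if temporary prod credentials were exported directly into the shell):
# /conf/prod/.envrc
export AWS_PROFILE=acme-prod-admin
# /conf/dev/.envrc
export AWS_PROFILE=acme-dev-admin
# direnv unloads/replaces AWS_PROFILE on cd, but credentials already issued
# by an assume-role wrapper are not revoked by leaving the folder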
That’s true.
O Rly, how do you do SSO (Azure AD) and aws-vault???
you don’t use aws-vault
aws-vault is one way to do auth
aws-okta is another, aws-keycloak another
basically, a different cli is used
CLI tool which enables you to login and retrieve AWS temporary credentials using a SAML IDP - Versent/saml2aws
aws-vault like tool for Okta authentication. Contribute to segmentio/aws-okta development by creating an account on GitHub.
Which of these do you use and prefer btw?
I have only used aws-okta
and it works really well
Yup, makes CI/CD a bit cleaner too
I also like that the toolchain follows the same “TRUST THE ENVIRONMENT” we yell to our ~devs~elves for 12-factor style apps, but maybe not everybody follows 12-factor app style.
on the one hand, I’m jealous of companies that take the monorepo approach to infrastructure. you can easily bring up and tear down your entire environment. You can open PRs that fix problems across all environments in one fell swoop. You can easily setup cross account peering because you control all accounts in a single session. You can do all sorts of automation.
And this is what my team encouraged me to do. They were saying “why have N conf directories with all the little components when you can have 3: application, aws account, and shared services”
My answer was: That’s not the way. This is very monolithic. We want flexibility.
Then they said: We don’t want flexibility. We want to ensure all environments are the same. What if someone forgets to update the dev project, etc.
ya, guess they just want to optimize for different things.
these are often opinions strongly held by an organization. they are often influenced by how they got to where they are today. these strongly held opinions are not easily changed because most of the people who were involved in getting the organization to where they are today were the ones who made them.
just like it’s difficult for us (cloudposse) to get our heads around how we would manage infrastructure in that way. It’s not an uncommon belief and many do it; it's just that we see all the problems that go along with that too.
one thing i struggle with is that a company’s “prod” account almost never equals their “staging” account
they might run 4 clusters in 4 regions in prod
but they run one region in staging
they run multi demo environments in staging, but none in prod
they run shared services in one account (like Jenkins, Keycloak, etc), yet don’t have another “staging” account for those shared services
so I instead argue we want to ensure the same code pinned at some version runs in some account
we want some assurances that that was tested
but the way it runs in a particular stage isn’t the same
Mmm, it makes sense. This is deffo a topic for next Wednesday. I’ll try to prep some more specific examples and file structures so we can all cross-examine.
yea for sure
also, willing to explore this in a deeper session
i’d like to offer this as an alternative strategy for companies who prefer it
it’ll definitely appeal more to the terragrunt crowd (as well)
but this also is freggin scary. i think it’s optimizing for the wrong use-case where you start from scratch. i think it’s better to optimize for day to day operations and stability.
also, what i struggle with is where do you draw the line?
i’m sure most of the engineers would agree “share nothing” is the right approach
but despite that, tools, state buckets, repos, dns zones, accounts, webhooks, ci/cd, etc are all shared.
we’ve taken the (relatively speaking) extreme position to truly share nothing.
we don’t share the tools, they are in different containers.
we don’t share the state buckets, they are in different accounts.
we don’t share the repos, each account corresponds to a repo
we don’t share DNS zones. each account has its own service discovery domain
we don’t share webhooks, because each account has its own repo
we don’t share CI/CD (for infra) because each account has its own atlantis, and each atlantis receives its own webhooks
etc…
yes, all of that looks very difficult to set up and maintain from the start, but in the end it’s much easier to manage security and access without jumping through many hoops (and still having holes)
in that share nothing architecture, the only point of access to an account is to add a user to the identity account and allow it to assume a role with permissions to access resources in the other accounts
note that we still share all the code (terraform modules, helmfiles, and the root-modules catalog) in all account repos, so no repetition there (we load them dynamically using tags with semantic versioning)
we just have different settings (vars, ENVs, etc.) for each account
as you can see for example here https://github.com/cloudposse/testing.cloudposse.co/tree/master/conf/ecs, there is no code in the account repos, just settings (not secrets)
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
all code (logic) is shared and open (if it’s your company secret, make the repo private)
Yes, that example is similar to how we’re doing it now. We have a few levels of abstraction going on. I can go over this next Wednesday for 10-15 minutes.