#geodesic (2019-08)

geodesic https://github.com/cloudposse/geodesic

Discussions related to https://github.com/cloudposse/geodesic Archive: https://archive.sweetops.com/geodesic/

2019-08-30

oscar avatar
oscar

@Erik Osterman (Cloud Posse) what is to stop a team just having one Geodesic module and then having nested .envrc in /conf to simulate global variables? The only disadvantage I spot is the swapping of AWS credentials, which could be easily solved.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It can work that way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nothing stopping anyone.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically it just comes down to our convention: how to manage multiple different tools pinned to different versions in different stages.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the problem with the one-container approach is that pretty much all tools have to be the same version

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or you need to use a version manager for every tool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

most OSes don’t make it easy to have multiple versions of software installed at the same time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform is one tool that needs to be versioned

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but helm is very strict about versions of client/server matching. not pinning it means you force upgrading everywhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

recently upgraded alpine, which upgraded git. then we saw the helm-git provider break because some flags it depends on changed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

recently we upgraded variant and that broke some older scripts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

my point is just that having one shell for all environments makes it difficult to be experimental while not also breaking prod toolchain

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, 99.9% of companies don’t worry about this. i basically see most companies operate infrastructure in a monorepo and don’t try to strictly version their toolchain the way we do in a way that allows versions of software to be promoted. we’ve just been really strict about it

oscar avatar
oscar

e.g. /conf/prod/.envrc (all the account specific vars)

BAU: /conf/prod/eu-west-1/terraform/my_project/* /conf/prod/eu-west-2/terraform/my_project/*

oscar avatar
oscar

Likewise an interesting conversation came up internally yesterday:

In the same way we would promote code in git dev -> master… how would one promote new variables etc. along Geodesic /conf? Or new version pinning in the .envrc from-module line? The only solution to make it easy is to have all the Geodesic modules in one repo, so that it would be easily spotted if you missed updating one environment in the PR.

oscar avatar
oscar

I feel I have not seen the Cloudposse way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s not the way we do it, true… but that doesn’t make it wrong

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

most companies do structure it this way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is the way terraform recommends it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is the way terragrunt recommends it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

at cloudposse, we’ve taken the convention of “share nothing” down to the repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

which opens up awesome potential

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it means the git history isn’t even shared between stages

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it means webhooks aren’t shared between stages

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it means github team permissions aren’t shared between stages

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it means one PR can never modify more than one stage at a time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

forcing the convention that you strictly test before rollout to production

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the monorepo convention requires self-discipline, but it doesn’t enforce it.

tamsky avatar
tamsky

Nominating this thread (and Erik’s additional “we don’t share…” statements from later this same day) for pinned status.

joshmyers avatar
joshmyers

but in geodesic you have all your env vars because you are wrapping geodesic in your env specific container, no?

oscar avatar
oscar

Not sure I follow.. but normally you would have acme.stage.com but I wonder why not have: acme.com

and then:

/conf/dev/.envrc (with the equivalent variables of the Dockerfile)
/conf/prod/.envrc (with the equivalent variables of the Dockerfile)

oscar avatar
oscar

It’s a bit weird but something like that came up in a meeting when I was introducing Geodesic to another team and I didn’t really have an answer other than “that isn’t the way”, but it could work without losing any features so…?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep, you can totally do it this way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

where you have one folder per stage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we do something very similar for regions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

/conf/us-east-1/eks and /conf/us-west-2/eks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

where the us-east-1 folder has an .envrc file with export REGION=us-east-1

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this could just as well be modified to

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

/conf/prod/us-east-1/eks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

where the /conf/prod/.envrc file has STAGE=prod
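Spelled out, the layered .envrc files Erik describes might look like this (a sketch; values are illustrative):

```shell
# /conf/prod/.envrc — stage-wide settings
export STAGE=prod

# /conf/prod/us-east-1/.envrc — region settings; source_up (a direnv
# builtin) loads the nearest parent .envrc first, so STAGE is inherited
source_up
export REGION=us-east-1
export AWS_REGION=us-east-1
```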

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

It’s a matter of “what are you trying to solve for?” It is, in fact, tedious to roll out changes to 4 environments under the reference architecture, in that you need to checkout, branch, commit, pull request, merge, and release 4 times. With everything in 1 repo you do less with Git, but you still need to apply changes in 4 folders, and now your PRs could be for any environment, so following the evolution of just one environment becomes much harder. Having a 1-to-1 mapping of Geodesic shells to environments just makes it a lot easier to keep everything straight.

Tega McKinney avatar
Tega McKinney

@Erik Osterman (Cloud Posse) Given the example above, do your REGION and AWS_REGION environment variables not get overwritten when you cd into eks directory?

I have a similar structure but my environment variables are being overwritten.

I have a structure with /conf/<tenant>/eu-west-1/platform-dns where I set /conf/<tenant>/eu-west-1/.envrc to REGION=eu-west-1; however, REGION is being overwritten back to eu-central-1 when I change directory into /conf/<tenant>/eu-west-1/platform-dns

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

@Tega McKinney You need to put source_up as the first line of your .envrc file in order to pick up settings in parent directories. Otherwise direnv only uses the settings in the current directory to override whatever is in your environment, which means you get different results if you

cd /conf
cd eu-west-1
cd platform-dns

than if you just

cd /conf
cd eu-west-1/platform-dns
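A minimal sketch of the fix Jeremy describes, using the paths from Tega’s example:

```shell
# /conf/<tenant>/eu-west-1/platform-dns/.envrc
source_up   # direnv builtin: load the nearest .envrc found in a parent
            # directory before applying this file's own settings

# project-level settings go below; REGION=eu-west-1 from the parent
# .envrc now survives a direct cd into this directory
```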
oscar avatar
oscar

More of a probing question than ‘how to do this’. Curious what others think, specifically Erik and Jeremy

Alex Siegman avatar
Alex Siegman

The first thing I know would be “broken” is role assumption to AWS. It’d be really easy to assume a role in dev, do some stuff, then not be able to apply in a different stage because you’re still in the dev account.

I think the separation can be healthy even if it’s not DRY. It’s like the age-old “do we deploy centralized logging per stage or put all stages in one place” kind of argument. There are pros and cons to both, but if you’re trying to prevent leakage between environments, why not prevent it at the tool level too?

The separation between stages also allows you to test changes to your tool chain in safe environments, rather than break the production one.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we could even get around the role assumption piece

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the role assumption works on ENVs as well

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so if you assume the role in the proper folder, it assumes the proper role

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but if the user then goes to /conf/dev while being assumed as prod, GOOD LUCK!!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

again, most companies aren’t strict about enforcing this stuff. they get away with it. we just make sure it’s that much harder

oscar avatar
oscar

So this can be done easily and safely. You have an AWS profile set in the conf .envrc and it just assumes the role based on that, but my company uses AD to authenticate to AWS, so aws-vault doesn’t work and it is done externally. Bit vague; can go into more technical detail if anyone is curious.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve used it with SSO too (Okta)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


You have aws profile set in the conf .envrc and it just assumes the role based on that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s just hard to enforce what happens if they leave a folder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you need to ensure that in transition from /conf/prod to /conf/dev they still don’t have their prod role assumed
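One mitigation (my assumption, not something Erik prescribed here): since direnv unloads the variables it set when you leave a directory, keeping the profile selection itself in the stage’s .envrc at least drops the prod profile on cd, though it cannot revoke session credentials that were already issued:

```shell
# /conf/prod/.envrc — hypothetical per-stage profile pinning
export AWS_PROFILE=acme-prod-admin   # profile name is made up

# On `cd /conf/dev`, direnv unloads AWS_PROFILE from this file and loads
# dev's .envrc instead — but already-exported STS tokens are NOT revoked.
```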

oscar avatar
oscar

That’s true.

oscar avatar
oscar

Oh really, how do you do SSO (Azure AD) and aws-vault???

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you don’t use aws-vault

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aws-vault is one way to do auth

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aws-okta is another

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aws-keycloak

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically, a different cli is used

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Versent/saml2aws

CLI tool which enables you to login and retrieve AWS temporary credentials using a SAML IDP - Versent/saml2aws
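For reference, a typical saml2aws session looks roughly like this (it needs a reachable SAML IDP, so treat it as a sketch rather than something to copy verbatim):

```shell
# One-time setup: record the IDP URL, provider type, and username
saml2aws configure

# Authenticate against the IDP and cache temporary STS credentials
saml2aws login

# Run a command under those credentials
saml2aws exec -- aws sts get-caller-identity
```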

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
segmentio/aws-okta

aws-vault like tool for Okta authentication. Contribute to segmentio/aws-okta development by creating an account on GitHub.

oscar avatar
oscar

Ahhh yes. Thank you. I’ve been using aws-azure-login from npm

oscar avatar
oscar

Which of these do you use and prefer btw?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have only used aws-okta and it works really well

joshmyers avatar
joshmyers

Yup, makes CI/CD a bit cleaner too

Alex Siegman avatar
Alex Siegman

I also like that the toolchain follows the same “TRUST THE ENVIRONMENT” mantra we yell at our dev selves for 12-factor-style apps, but maybe not everybody follows the 12-factor app style.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

on the one hand, I’m jealous of companies that take the monorepo approach to infrastructure. you can easily bring up and tear down your entire environment. You can open PRs that fix problems across all environments in one fell swoop. You can easily set up cross-account peering because you control all accounts in a single session. You can do all sorts of automation.

oscar avatar
oscar

And this is what my team encouraged me to do. They were saying “why have N conf directories with all the little components when you can have 3: application, aws account, and shared services”

My answer was: That’s not the way. This is very monolithic. We want flexibility.

Then they said: We don’t want flexibility. We want to ensure all environments are the same. What if someone forgets to update the dev project, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, guess they just want to optimize for different things.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

these are often opinions strongly held by an organization. they are often influenced by how they got to where they are today. these strongly held opinions are not easily changed because most of the people who were involved in getting the organization to where they are today were the ones who made them.

just like it’s difficult for us (cloudposse) to get our heads around how we would manage infrastructure in that way. It’s not an uncommon approach and many do it that way; we just see all the problems that go along with it too.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one thing i struggle with is a company’s “prod” account almost never equals their “staging” account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they might run 4 clusters in 4 regions in prod

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but they run one region in staging

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they run multiple demo environments in staging, but none in prod

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they run shared services in one account (like Jenkins, Keycloak, etc), yet don’t have another “staging” account for those shared services

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so I instead argue we want to ensure the same code pinned at some version runs in some account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we want some assurances that that was tested

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but the way it runs in a particular stage isn’t the same

oscar avatar
oscar

Mmm it makes sense. This is deffo a topic for next Wednesday. I’ll try to prep some more specific examples and file structures so we can all cross-examine.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea for sure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, willing to explore this in a deeper session

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’d like to offer this as an alternative strategy for companies who prefer it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’ll definitely appeal to the terragrunt crowd as well

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but this also is freggin scary. i think it’s optimizing for the wrong use-case where you start from scratch. i think it’s better to optimize for day to day operations and stability.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, what i struggle with is where do you draw the line?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’m sure most of the engineers would agree “share nothing” is the right approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but despite that, tools, state buckets, repos, dns zones, accounts, webhooks, ci/cd, etc are all shared.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve taken the (relatively speaking) extreme position to truly share nothing.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t share the tools, they are in different containers.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t share the state buckets, they are in different accounts.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t share the repos, each account corresponds to a repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t share DNS zones. each account has its own service discovery domain

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t share webhooks, because each account has its own repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t share CI/CD (for infra) because each account has its own atlantis, and each atlantis receives its own webhooks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

etc…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, all of that looks very difficult to set up and maintain from the start, but in the end it’s much easier to manage security and access without jumping through many hoops (and still having holes)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in that share nothing architecture, the only point of access to an account is to add a user to the identity account and allow it to assume a role with permissions to access resources in the other accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that we still share all the code (from terraform modules, helmfiles, and the root-modules catalog) in all account repos, so no repetition there (we load them dynamically using tags with semantic versioning)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we just have different settings (vars, ENvs, etc.) for each account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as you can see for example here https://github.com/cloudposse/testing.cloudposse.co/tree/master/conf/ecs, there is no code in the account repos, just settings (not secrets)

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all code (logic) is shared and open (if it’s your company secret, make the repo private)

oscar avatar
oscar

Yes, that example is similar to how we’re doing it now. We have a few levels of abstraction going on. I can go over this next Wednesday for 10-15 minutes.

2019-08-28

oscar avatar
oscar

Yes, similar to above: I would create an R53 role in the root account, assume the role using a second provider block, and then for the R53 record resource use the root account provider.

joshmyers avatar
joshmyers

Why not add route53 perms to the root admin group that already exists?

joshmyers avatar
joshmyers

Currently the users in root admin group, in the root account can only do IAM stuff for their own users

joshmyers avatar
joshmyers

Ah, nevermind me

oscar avatar
oscar

What was the issue/thing for future reference?

joshmyers avatar
joshmyers

Me being stupid and missing the admin role

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

#office-hours starting now! join us here https://zoom.us/s/508587304

2019-08-27

joshmyers avatar
joshmyers

How are folks doing vanity domains using geodesic / ref arch ?

joshmyers avatar
joshmyers

Currently the parent dns name (foo.com) lives in the root account, and by default, users don’t have admin in root

joshmyers avatar
joshmyers

So no one can actually delegate example.foo.com down to example.prod.foo.com

joshmyers avatar
joshmyers

Easily solvable but wondering what folks are doing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

by design, the branded domain is provisioned in the root account, since it will contain references to the service discovery domain in any account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. corp/shared account, prod account, data account, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

branded = vanity

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform {
  required_version = "~> 0.12.0"

  backend "s3" {}
}

provider "aws" {
  version = "~> 2.17"

  assume_role {
    role_arn = var.aws_assume_role_arn
  }
}

variable "aws_assume_role_arn" {
  type = string
}

data "aws_route53_zone" "ourcompany_com" {
  # Note: The zone name is the domain name plus a final dot
  name = "ourcompany.com."
}

# allow_overwrite lets us take over managing entries that are already there
# use sparingly in live domains unless you know what's what
resource "aws_route53_record" "apex" {
  allow_overwrite = true
  zone_id         = data.aws_route53_zone.ourcompany_com.zone_id
  name            = "" # empty name is the zone apex
  type            = "A"
  ttl             = 300
  records         = ["1.2.3.4"]
}

resource "aws_route53_record" "www" {
  allow_overwrite = true
  zone_id         = data.aws_route53_zone.ourcompany_com.zone_id
  name            = "www"
  type            = "CNAME"
  ttl             = 300
  records         = ["app.prod.ourcompany.org"]
}

joshmyers avatar
joshmyers

Right, but using the ref arch, even admin users in the root account don’t have access to do anything to route53 resources

Alex Siegman avatar
Alex Siegman

They should if they assume the right role.

joshmyers avatar
joshmyers

Which role is that?

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

namespace-root-admin. In the ref arch you pick a namespace for all your accounts, like cpco, and then the role in your prod environment is cpco-prod-admin. So you also get a cpco-root-admin role that you can assume to do whatever you need to do in the root account.

2019-08-26

SweetOps #geodesic avatar
SweetOps #geodesic
04:00:08 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
04:01:56 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Sep 04, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-08-23

oscar avatar
oscar

Thanks =] All working now

2019-08-22

oscar avatar
oscar

Any thoughts on why I receive the following error:

 ✗ . (none) backend ⨠ terraform init
Copying configuration from "git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"...

Error: Can't populate non-empty directory

The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.

Latest Geodesic.

oscar avatar
oscar
 ✗ . (none) backend ⨠ ls -la
total 16
drwxr-xr-x 2 root root 4096 Aug 22 15:00 .
drwxr-xr-x 3 root root 4096 Aug 22 09:49 ..
-rw-r--r-- 1 root root  380 Aug 22 15:29 .envrc
-rw-r--r-- 1 root root  122 Aug 22 15:00 Makefile
 ⧉  oscar
 ✗ . (none) backend ⨠ cat .envrc

# Import the remote module
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"
export TF_CLI_PLAN_PARALLELISM=2

source <(tfenv)

use terraform 0.12
use tfenv
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

add export TF_MODULE_CACHE=.module
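Putting Andriy’s suggestion together with oscar’s earlier .envrc, the working file would look something like this:

```shell
# .envrc — cache the remote module in .module so `terraform init`
# isn't asked to populate the (non-empty) project directory itself
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"
export TF_CLI_PLAN_PARALLELISM=2
export TF_MODULE_CACHE=.module

source <(tfenv)

use terraform 0.12
use tfenv
```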

joshmyers avatar
joshmyers

@oscar It has been discussed here a while back, check through history, as @Andriy Knysh (Cloud Posse) says ^^

joshmyers avatar
joshmyers

hey @Andriy Knysh (Cloud Posse)

oscar avatar
oscar

Thanks both, I did actually have a scan through 0.12 and geodesic channel but couldn’t find it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and in Makefile.tasks, change it to:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
-include ${TF_MODULE_CACHE}/Makefile

deps:
	mkdir -p ${TF_MODULE_CACHE}
	terraform init


## Reset this project
reset:
	rm -rf ${TF_MODULE_CACHE}
oscar avatar
oscar

Ah you know I can see it now

oscar avatar
oscar

19th July

joshmyers avatar
joshmyers
Direnv with Terraform 0.12 by osterman · Pull Request #500 · cloudposse/geodesic

what Use an empty cache folder to initialize module why Terraform 0.12 no longer allows initialization of folders with even dot files =( Example of how to use direnv with terraform 0.12 usage e…

oscar avatar
oscar

Thanks just had a read of that

oscar avatar
oscar

Makes sense. I must have missed this this afternoon!

oscar avatar
oscar

is source <(tfenv) required?

oscar avatar
oscar

@Andriy Knysh (Cloud Posse) I get this from the new Makefile

 ✗ . (none) backend ⨠ make
Makefile:4: *** missing separator.  Stop.
 ⧉  oscar
 ✗ . (none) backend ⨠ make reset
Makefile:4: *** missing separator.  Stop.
 ⧉  oscar
 ✗ . (none) backend ⨠ make deps
Makefile:4: *** missing separator.  Stop.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Makefile:4: *** missing separator. Stop. is usually caused by spaces (replace them with tabs)
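A quick way to spot and fix space-indented recipe lines (assumes GNU grep/sed):

```shell
# Reproduce the error: a recipe line indented with spaces instead of a tab
printf 'deps:\n    mkdir -p .module\n' > Makefile.demo

# GNU make requires a leading TAB on recipe lines; list the offenders
grep -n '^ ' Makefile.demo            # → 2:    mkdir -p .module

# Fix: replace each leading run of spaces with a single tab (GNU sed)
sed -i 's/^ \+/\t/' Makefile.demo
```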

2019-08-19

SweetOps #geodesic avatar
SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
04:04:22 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Aug 28, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-08-16

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey Ryan - sorry - our docs are really out of date.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s an example of how we use it: https://github.com/cloudposse/testing.cloudposse.co

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

2019-08-15

Ryan avatar

Hello all, I’m playing with geodesic and it doesn’t seem to be generating the dynamic cluster correctly. I get output that is example.foo.bar.

$ export CLUSTER_NAME=test.myco.com

$ docker run -e CLUSTER_NAME \
    -e DOCKER_IMAGE=cloudposse/${CLUSTER_NAME} \
    -e DOCKER_TAG=dev \
    cloudposse/geodesic:latest -c new-project | tar -xv -C .
Building project for example.foo.bar...
./
example.foo.bar/Dockerfile
example.foo.bar/Makefile
example.foo.bar/conf/
example.foo.bar/conf/.gitignore

I’m looking through the various sources; it’s pretty challenging to piece it all together (feels over-abstracted). So I’m wondering if I’m missing something.

Ryan avatar

Also new to Terraform, so the concepts for how it’s organized are a bit confusing atm

2019-08-12

SweetOps #geodesic avatar
SweetOps #geodesic
04:00:02 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
04:04:12 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Aug 21, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-08-11

2019-08-09

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have not tried to accomplish cross account kiam

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

generally we try never to cross account boundaries

2019-08-08

Tega McKinney avatar
Tega McKinney

Has anyone configured kiam beyond the helmfiles defaults for cross-account resource access from kops pods? I’m running into a situation where my cross-account policies are not allowing me access, and I’m thinking maybe kiam is not properly set up to assume roles

Tega McKinney avatar
Tega McKinney

I think it’s starting to make sense now. I hadn’t realized kiam-server roles were not set up and no sts:AssumeRole on the masters was configured either.

FYI - this TGIK by Joe Beda helped in that understanding. https://www.youtube.com/watch?v=vgs3Af_ew3c

Tega McKinney avatar
Tega McKinney

@Erik Osterman (Cloud Posse) Any reason that with the kiam setup, it is essentially using the master nodes’ role vs creating a kiam-server-specific role and allowing it to establish the trust relationship with pod roles?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no, but what you describe is a better practice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically, there should be a separate node pool strictly for kiam-server and assumed roles

Tega McKinney avatar
Tega McKinney

I did not go as far as a separate node pool; however, I did create the kiam-server assume-role and set the assume-role arg on kiam-server instead of it detecting the node role.

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Right now we run the kiam-server on the masters, and we treat the master role created by kops as the kiam-server role. As far as I can see, there is not much added security or convenience in creating a separate kiam-server role until you get to the point of creating separate instances for the kiam servers and giving them instance roles for the kiam-server. In our configuration, anything on the master nodes can directly assume any pod role that kiam can authorize. With a separate kiam-server role, this is still the case; it’s just that there would be an extra intermediate step of assuming the kiam-server role.

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

To answer your question @Tega McKinney, the reason we treat the master role like it is the kiam server role is because it is a lot easier. While we will likely do it eventually, it is going to be a lot of work to separate out the kiam server role from the master role in all of our Terraform.


2019-08-07

2019-08-06

daveyu avatar
daveyu

thanks @Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) RUN apk add [email protected] was the trick (instead of RUN apk add [email protected] [email protected]==0.11.14-r0)


2019-08-05

Tega McKinney avatar
Tega McKinney

Okay; the reason I was asking is I attempted to add /helmfiles/ templates to the configs/prod.tf in reference-architecture. It failed to render the templates, as it was looking for a few env vars that are not set up (KOPS_CLUSTER_NAME and STAGE). That got me thinking those templates are just reference files for after the initial architecture is set up. Is that the general idea?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the Helmfiles will require a ton of settings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do not have those documented. But if you share specific ones I can help.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Usually we set STAGE in the Dockerfile. If you only have one cluster in a stage, then you can also set KOPS_CLUSTER_NAME in the Dockerfile so it’s available globally.
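For example (names illustrative, not from the thread), baking these into the stage’s Dockerfile amounts to:

```shell
# In the stage's Dockerfile these would be Dockerfile ENV instructions,
#   ENV STAGE=prod
#   ENV KOPS_CLUSTER_NAME=us-east-1.prod.acme.com
# which in shell terms is equivalent to having these in every session:
export STAGE=prod
export KOPS_CLUSTER_NAME=us-east-1.prod.acme.com   # made-up cluster name
```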

Tega McKinney avatar
Tega McKinney

Yeah, I’m planning to do that. It was more of a reference to that value not being set in the ref-architecture when attempting to add the templates in prod.tfvars. Helmfiles would be a post-bootstrap step, I’m assuming

SweetOps #geodesic avatar
SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
04:02:33 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Aug 14, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Tim Jones avatar
Tim Jones

Hi! I’ve been reviewing the terraform-root-modules resource in order to help bring a little order to the chaos AWS I’ve inherited, but I’m having trouble understanding where real human users are managed. I see that aws/users seems to be set up for this, but it seems incomplete or something; the welcome.txt references a username var that doesn’t exist, and isn’t used as a template source as far as I can see anyway.

Tega McKinney avatar
Tega McKinney

@Tim Jones Have you seen the reference-architectures repo? It leverages the root-modules. It creates your admin users using the aws/users root-module

Tim Jones avatar
Tim Jones

@Tega McKinney yes but it’s been very much broken with the release of Terraform v0.12

daveyu avatar
daveyu

I’m having trouble building from geodesic with terraform 0.11 since geodesic 0.117.0

daveyu avatar
daveyu

is there a better way to use both 0.11 and 0.12?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, but I am afk

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can you show Dave

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically install terraform_0.11@cloudposse

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And write “use terraform 0.11”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In your .envrc

daveyu avatar
daveyu

where/when do I install 0.11? before geodesic 0.117.0, apk add terraform@cloudposse worked in the Dockerfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, add that to your Dockerfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This way we can support multiple concurrent major/minor versions

daveyu avatar
daveyu

Yes, but during docker build:

ERROR: unsatisfiable constraints:
  terraform-0.12.0-r0:
    breaks: world[terraform=0.11.14-r0]
The command '/bin/sh -c apk add terraform@cloudposse terraform@cloudposse==0.11.14-r0' returned a non-zero

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Show me your Dockerfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You need to remove the second package there

daveyu avatar
daveyu

dockerfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

RUN apk add terraform@cloudposse==0.11.14-r0 is what you want

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the long and short of it is that since we upgraded to the alpine:3.10 series, there’s been no new 0.11 release, so no 0.11 package was built under the plain terraform name.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

however, we explicitly build a terraform_0.11 package and a terraform_0.12 package

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

like a python2 and python3 package

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and from alpine:3.10 , the terraform package will be 0.12.x

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

behind the scenes, we’re installing a symlink at /usr/local/terraform/x.y/bin/terraform that points to /usr/local/bin/terraform-x.y

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that way when we write use terraform 0.11, we can set PATH=/usr/local/terraform/0.11/bin:$PATH and it will automatically find the correct version of terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

without changing code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and not using alias
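
The symlink-plus-PATH scheme described above can be sketched like this (stub scripts in a temp dir stand in for the real terraform-0.11/terraform-0.12 binaries; paths are illustrative, not the exact geodesic layout):

```shell
# Build a miniature version of the layout: one suffixed binary per version,
# plus a per-version bin dir containing a plain `terraform` symlink.
demo=$(mktemp -d)
mkdir -p "$demo/bin" "$demo/terraform/0.11/bin" "$demo/terraform/0.12/bin"
printf '#!/bin/sh\necho "Terraform v0.11.14"\n' > "$demo/bin/terraform-0.11"
printf '#!/bin/sh\necho "Terraform v0.12.3"\n'  > "$demo/bin/terraform-0.12"
chmod +x "$demo/bin/terraform-0.11" "$demo/bin/terraform-0.12"

# Each versioned dir gets a `terraform` symlink to the suffixed binary.
ln -s "$demo/bin/terraform-0.11" "$demo/terraform/0.11/bin/terraform"
ln -s "$demo/bin/terraform-0.12" "$demo/terraform/0.12/bin/terraform"

# "use terraform 0.11" amounts to prepending the versioned dir to PATH,
# so the plain `terraform` command resolves to the right version.
PATH="$demo/terraform/0.11/bin:$PATH"
terraform   # prints "Terraform v0.11.14"
```

No aliases and no code changes: whichever versioned bin dir is first on PATH wins.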

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@dave.yu this is what we used in the Dockerfile to install both TF 0.11 and 0.12 under geodesic 0.117

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

# Install terraform 0.11 for backwards compatibility
RUN apk add terraform_0.11@cloudposse

# Install terraform 0.12
RUN apk add terraform@cloudposse terraform_0.12@cloudposse==0.12.3-r0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, for the modules that use TF 0.12, we use

use envrc
use terraform 0.12
use tfenv
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and for the modules that use TF 0.11

use envrc
use terraform 0.11
use tfenv
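
Putting the two together, a /conf tree can pin a different terraform per project via direnv; a sketch with hypothetical module names (vpc on 0.11, eks on 0.12):

```shell
# Hypothetical /conf layout: each project directory carries its own .envrc,
# which direnv evaluates on cd, selecting that module's terraform version.
conf=$(mktemp -d)
mkdir -p "$conf/vpc" "$conf/eks"
printf 'use envrc\nuse terraform 0.11\nuse tfenv\n' > "$conf/vpc/.envrc"
printf 'use envrc\nuse terraform 0.12\nuse tfenv\n' > "$conf/eks/.envrc"

# Show the per-module pins side by side.
grep 'use terraform' "$conf/vpc/.envrc" "$conf/eks/.envrc"
```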

2019-08-04

Tega McKinney avatar
Tega McKinney

Regarding helmfiles, are multi-stage Dockerfiles the current approach? How does that relate to /templates/conf/helmfiles/... in reference-architectures?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

great you ask

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no - we’re using remote helmfiles pinned to releases now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It looks like this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

# Ordered list of releases.
# Terraform-module-like URLs for importing a remote directory and using a file in it as a nested-state file.
# The nested-state file is locally checked out along with the remote directory containing it.
# Therefore all the local paths in the file are resolved relative to the file.
helmfiles:
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/reloader.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/cert-manager.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/prometheus-operator.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/cluster-autoscaler.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/kiam.yaml?ref=0.51.2
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/external-dns.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/teleport-ent-auth.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/teleport-ent-proxy.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/aws-alb-ingress-controller.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/kube-lego.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/nginx-ingress.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/heapster.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/dashboard.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/forecastle.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/keycloak-gatekeeper.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/fluentd-elasticsearch-aws.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/kubecost.yaml?ref=0.50.0

2019-08-02

2019-08-01

Tega McKinney avatar
Tega McKinney

Is it possible to override the terraform backend s3 key? I created the following top-level folder structure:

root
-- vpc
-- -- terraform.envrc

I would like the key to be root/vpc/terraform.tfstate, so I set TF_CLI_INIT_BACKEND_CONFIG_KEY=root/vpc/terraform.tfstate.

This does not override the default, and the key ends up being vpc/terraform.tfstate. Is it possible to override the key?
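
For context, a sketch of the mechanism involved (the assumption here is that geodesic's tfenv folds TF_CLI_INIT_* variables into terraform's native TF_CLI_ARGS_init variable; shown manually for illustration):

```shell
# Terraform itself honors TF_CLI_ARGS_init: its contents are appended as
# extra arguments to every `terraform init` invocation.
export TF_CLI_INIT_BACKEND_CONFIG_KEY="root/vpc/terraform.tfstate"
export TF_CLI_ARGS_init="-backend-config=key=${TF_CLI_INIT_BACKEND_CONFIG_KEY}"

# A subsequent `terraform init` would receive this flag automatically.
echo "$TF_CLI_ARGS_init"
```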

Tega McKinney avatar
Tega McKinney

Got it. I needed to remove ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd" from my Dockerfile so it would use the full directory path after /conf.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You got it!

Tega McKinney avatar
Tega McKinney

Ran into another issue where I have created additional root-dns entries that have a hyphen in the name. The label is removing the hyphen from the stage; however, I assumed regex_replace_chars should not replace hyphens. Is there something else that could be stripping the hyphen?

Tega McKinney avatar
Tega McKinney

Got it. Took some digging to realize I was on an older version of terraform-null-label from reference-architectures. Updated to 0.11.1 and it works perfectly.
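
The behavior difference comes down to the replacement pattern the label module applies; a shell illustration of old-style versus new-style character classes (the exact patterns are assumptions based on the hyphen-stripping symptom, not quoted from the module):

```shell
# Old-style pattern strips anything non-alphanumeric, including hyphens;
# new-style pattern includes the hyphen in the allowed set, so it survives.
echo "root-dns" | sed 's/[^a-zA-Z0-9]//g'   # hyphen removed -> rootdns
echo "root-dns" | sed 's/[^a-zA-Z0-9-]//g'  # hyphen kept    -> root-dns
```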

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ah sweet!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, we had a bug I think related to that
