#geodesic (2019-03)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2019-03-01

Did I set something up wrong? 1) What’s with the unescaped characters 2) How come none of my vault profiles appear? Thanks

Ah! Nvm! For some reason a character kept being placed.. backspace solved it
2019-03-02

^ I would consider that unexpected behaviour though. I’m not placing the -
character as well as the glitchy character before assume-role
on the CLI

https://github.com/cloudposse/reference-architectures#1-provision-root-account
make root
fails with permission errors… I’ve noticed artifacts
is created under root:root
.. any reason why?
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Ok seems to be working with some hacky solution of running make root
having it fail then chmod 777
on reference-architectures
dir lol

Hrm… Not sure - didn’t see this when I ran it on Linux, but sounds entirely plausible that we have some permissions issue.

@Erik Osterman (Cloud Posse) happened again btw exact same steps…
clone ref-arc, change configs, make root, permission error bc artefacts under root:root

Error: Error applying plan:
2 error(s) occurred:
* module.tfstate_backend.aws_s3_bucket.default: 1 error(s) occurred:
* aws_s3_bucket.default: Error creating S3 bucket: IllegalLocationConstraintException: The us-west-2 location constraint is incompatible for the region specific endpoint this request was sent to.
status code: 400, request id: C43BDB5AF4DC7779, host id: V/wPm7gDiU5Dic9bDogXcAunrYnK5Y2l5g9StldhV17/dtjo5t4+PD5JAztIubtGZ1iNCHHeKgE=
* module.tfstate_backend.aws_dynamodb_table.with_server_side_encryption: 1 error(s) occurred:
root.tfvars
:
# The default region for this account
aws_region = "eu-west-2"
Any thoughts? Did a grep of reference-architecture and no mention of us-west-2
.. must be on the S3 Backend module right?

^ solutions to this are in an existing issue on github :] just gotta wait for my sub-account limit to increase now…
2019-03-03

When in root.tfvars
of reference infra it specifies really old versions of root modules and geodesic.. any reason I shouldn’t bump them both to the latest releases or does it eventually pull them down?

We’ve done some refactoring of environment variables that is probably incompatible with the current version of the reference architectures

In our next customer engagement that involves a cold-start we’ll clean this up.

Awesome thanks.

If I use reference architectures now am I committing to some old setup? Could I have run it again to update it? Would I want to? I still haven’t established the full reach and effect of it. Based on the breakdown of each step it shouldn’t matter what version of geodesic is used for setup. I do wonder how to update my geodesic though for a stage?

not that old. I think we have a call tomorrow - I can fill you in. The main change that happened after the ref arch is the introduction of tfenv

also, a move towards using direnv
to define local project settings rather than using globals

Interface-wise this should be stable now for a while. We hadn’t “solved” how to manage envs in a scalable fashion until recently
2019-03-04

.. if they’re not created already with reference-architectures

How do you guys ‘hotswap’ backends in geodesic env?

I feel like it has something to do with direnv
but I don’t really get it

Are you referring to S3 backends?

Yes!

Let’s say I have an api project with s3 as the backend. This project exists on dev staging prod. Since we don’t share buckets and buckets are uniquely named and you can’t interpolate in the backend config, how do you not duplicate code?

aha yes

so this is solved with environment variables

we use direnv
to define them for terraform
consumption

so in a project, you’ll define a .envrc

and in the Dockerfile
you’ll define some globals

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Here’s a sample .envrc


You’ll see, this is very DRY

We specify an ENV for the remote module, which will be normalized with tfenv
to a “terraform compatible” env

use terraform
+ use tfenv
sets up the TF_CLI_ARGS_init

using the cloudposse direnv stdlib is totally optional.

@rohit.verma for example, decided not to use our helpers and just defines his TF_CLI_ARGS_init
variables explicitly.

the direnv stdlib is defined here: https://github.com/cloudposse/geodesic/tree/master/rootfs/etc/direnv/rc.d
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

note, the use terraform
helper will do this:

export TF_BUCKET_PREFIX=${TF_BUCKET_PREFIX:-$(basename $(pwd))}

this is so that each of your project folders gets its own state folder inside the state bucket

by default it uses the current folder name
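
To make this concrete, here’s a minimal sketch of what those helpers end up doing (bucket and project names are illustrative; the TF_CLI_INIT_BACKEND_CONFIG_* variables are the ones that come up later in this thread):

# /conf/myproject/.envrc (illustrative)
export TF_CLI_INIT_BACKEND_CONFIG_BUCKET="acme-prod-terraform-state"
export TF_CLI_INIT_BACKEND_CONFIG_REGION="us-west-2"
export TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE="acme-prod-terraform-state-lock"
use terraform   # defaults TF_BUCKET_PREFIX to the current folder name, e.g. "myproject"
use tfenv       # folds the TF_CLI_INIT_* vars into TF_CLI_ARGS_init for terraform init

With that in place, terraform init picks the backend settings up from the environment without any -backend-config flags on the command line.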
2019-03-06

Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co

Long as it’s consistent throughout a project, life is good! Thank you editorconfig!

I’ve changed it to spaces in mine

If it’s your project, that’s your prerogative

I like to lint for editorconfig violations in CI, for those few editors that don’t honor its settings by default, https://github.com/jedmao/eclint
Validate or fix code that doesn’t adhere to EditorConfig settings or infer settings from existing code. - jedmao/eclint

Thanks I’ll bookmark that.. don’t yet have a linter in my CI

@Erik Osterman (Cloud Posse) what steps should one take to change the template used for make readme
? The cloudposse template (wherever that is located) adds all the logos etc to the top of my README. Would like to be able to change it so it corresponds to my team.. thoughts?

certainly…

Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more - cloudposse/build-harness

here are the ENVs you can play with

README_TEMPLATE_FILE
is what you want


here’s the template
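
As a rough sketch of the override (untested; the template path is made up):

export README_TEMPLATE_FILE=./docs/templates/README.md.gotmpl   # hypothetical local template
make readme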

Thanks! That looks like I’d therefore have to fork build harness entirely? Or is there a super neat way this can be a geodesic conf setting

@Erik Osterman (Cloud Posse).. as promised..
@everyoneelse https://github.com/osulli/geodesic-getting-started hope this helps those starting Geodesic.
A getting-started guide for Cloud Posse’s Geodesic. - osulli/geodesic-getting-started

@oscarsullivan_old this is phenomenal! thanks so much for putting this together. hearing it from your hands-on perspective is invaluable.

What OS was https://github.com/cloudposse/github-authorized-keys tested on (im assuming in aws)?
Use GitHub teams to manage system user accounts and authorized_keys - cloudposse/github-authorized-keys

It was tested on quite a few Linux distros

However the problem is usually that useradd varies by distro so you always need to update the template

It defaults to alpine

I was running it on ub18 in docker (just following the readme tbh). Not wholly sure why it failed; it said it couldn’t talk to github, which seems self-explanatory. I was just wondering if the pathing / commands it’s expecting to execute on the host are correct

can you share the error @chrism

devil is in the details

yep 1 mo

the instructions seem wonky as well; --expose
is an override of EXPOSE in docker but the docs show it like port -p 301:301

{"level":"info","msg":"Run syncUsers job on start","time":"2019-03-06T17:06:42Z"}
{"job":"syncUsers","level":"error","msg":"Connection to github.com failed","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"level":"info","msg":"Run ssh integration job on start","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"info","msg":"Ensure file /usr/bin/github-authorized-keys","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"info","msg":"Ensure exec mode for file /usr/bin/github-authorized-keys","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"info","msg":"Ensure AuthorizedKeysCommand line in sshd_config","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"info","msg":"Ensure AuthorizedKeysCommandUser line in sshd_config","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"info","msg":"Restart ssh","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"info","msg":"Output: ","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"job":"sshIntegrate","level":"error","msg":"Error: fork/exec /usr/sbin/service: no such file or directory","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}
{"level":"info","msg":"Start jobs scheduler","time":"2019-03-06T17:07:12Z"}
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET /user/:name/authorized_keys --> github.com/cloudposse/github-authorized-keys/server.Run.func1 (3 handlers)
[GIN-debug] Listening and serving HTTP on :301

docker run -v /:/host --expose "301" -p 127.0.0.1:301:301 -e GITHUB_API_TOKEN=x -e GITHUB_ORGANIZATION=x
-e GITHUB_TEAM=ops -e SYNC_USERS_INTERVAL=200 -e LISTEN=:301 -e INTEGRATE_SSH=true cloudposse/github-authorized-keys

{"job":"sshIntegrate","level":"error","msg":"Error: fork/exec /usr/sbin/service: no such file or directory","subsystem":"jobs","time":"2019-03-06T17:07:12Z"}

this is assuming a systemd
setup

sounds like you don’t have systemd

try INTEGRATE_SSH=false

Nah still failed
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: GithubOrganization - ******","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: GithubTeamName - D*******s","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: GithubTeamID - *","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: EtcdEndpoints - []","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: EtcdPrefix - /github-authorized-keys","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: EtcdTTL - 24h0m0s seconds","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: UserGID - ","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: UserGroups - []","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: UserShell - /bin/bash","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: Root - /","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: Interval - 200 seconds","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: IntegrateWithSSH - false","time":"2019-03-07T13:25:01Z"}
{"class":"RootCmd","level":"info","method":"RunE","msg":"Config: Listen - :301","time":"2019-03-07T13:25:01Z"}
{"level":"info","msg":"Run syncUsers job on start","time":"2019-03-07T13:25:01Z"}
{"job":"syncUsers","level":"error","msg":"Connection to github.com failed","subsystem":"jobs","time":"2019-03-07T13:25:06Z"}
{"level":"info","msg":"Start jobs scheduler","time":"2019-03-07T13:25:06Z"}
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET /user/:name/authorized_keys --> github.com/cloudposse/github-authorized-keys/server.Run.func1 (3 handlers)
[GIN-debug] Listening and serving HTTP on :301

bleh docker networking issue

{"level":"info","msg":"Run syncUsers job on start","time":"2019-03-07T13:39:56Z"}
adduser: unrecognized option: disabled-password
BusyBox v1.25.1 (2016-10-26 16:15:20 GMT) multi-call binary.
Usage: adduser [OPTIONS] USER [GROUP]
Create new user, or add USER to GROUP
-h DIR Home directory
-g GECOS GECOS field
-s SHELL Login shell
-G GRP Add user to existing group
-S Create a system user
-D Don't assign a password
-H Don't create home directory
-u UID User id
-k SKEL Skeleton directory (/etc/skel)
{"job":"syncUsers","level":"error","msg":"exit status 1","subsystem":"jobs","time":"2019-03-07T13:39:57Z"}
adduser: unrecognized option: disabled-password
lol

The INTEGRATE_SSH works as expected but as you’d expect it shits a brick. Meh. Kinda wish this was an apt package or cron job lol

Need to jiggle the linux env vars

think I’ve got it now even if it’s only added 1 of 3 users. Getting there

Got to the point where it’s added a user to match my github name, but no keys and no errors. I’d missed SYNC_USERS_ROOT=/host

woot got it

Now back to crying into my packer script.

Started looking at getting bastion running alongside it. This would probably work better as a docker-compose; or after much alcohol.

have you seen the cloud formation?

someone else submitted that

If you want to share your working configuration we can add it to examples maybe

not noticed the cloudformation. Tbh I was tripping up over a) not reading further down the damn page (rtfm fail) and b) not realising team names had to be lowercase (org names are case sensitive) (which is github’s fault)

I want to incorporate this in our docs.


even explaining this is a big win as a way to show how the abstraction works.

No prob

Sure is useful

I was wondering actually if you need to specify the key and encrypt variables or if they’re done automatically..

Ideally if you open that as a PR we can review it and leave comments
2019-03-07
2019-03-08

Guys any thoughts on the following:

#terraform.tf
terraform {
backend "s3" {}
}
#variables.tf
variable "stage" {}
variable "namespace" {}
variable "aws_region" {}
variable "tf_bucket_region" {}
variable "tf_bucket" {}
variable "tf_dynamodb_table" {}
variable "TF_VAR_tf_bucket_region" {}
variable "TF_VAR_tf_bucket" {}
variable "TF_VAR_tf_dynamodb_table" {}
#Dockerfile
# Terraform vars
ENV TF_VAR_region="${AWS_REGION}"
ENV TF_VAR_account_id="${AWS_ACCOUNT}"
ENV TF_VAR_namespace="${NAMESPACE}"
ENV TF_VAR_stage="${STAGE}"
ENV TF_VAR_domain_name="${DOMAIN_NAME}"
ENV TF_VAR_zone_name="${DOMAIN_NAME}"
# chamber KMS config
ENV CHAMBER_KMS_KEY_ALIAS="alias/${TF_VAR_namespace}-${TF_VAR_stage}-chamber"
# Terraform State Bucket
ENV TF_BUCKET_REGION="${AWS_REGION}"
ENV TF_BUCKET="${TF_VAR_namespace}-${TF_VAR_stage}-terraform-state"
ENV TF_DYNAMODB_TABLE="${TF_VAR_namespace}-${TF_VAR_stage}-terraform-state-lock"
CLI:
✓ (healthera-sandbox-admin) backend ⨠ terraform init
Initializing modules...
- module.terraform_state_backend
- module.terraform_state_backend.base_label
- module.terraform_state_backend.s3_bucket_label
- module.terraform_state_backend.dynamodb_table_label
Initializing the backend...
bucket
The name of the S3 bucket
Any thoughts on why it would be asking for the bucket name? AKA -backend-config="bucket=my-state-bucket"

Something must be wrong with the environment variables

If you pm me your TF_CLI envs I can evaluate

Ah just seen this. 2 secs will send over.

# Terraform vars
ENV TF_VAR_region="${AWS_REGION}"
ENV TF_VAR_account_id="${AWS_ACCOUNT}"
ENV TF_VAR_namespace="${NAMESPACE}"
ENV TF_VAR_stage="${STAGE}"
ENV TF_VAR_domain_name="${DOMAIN_NAME}"
ENV TF_VAR_zone_name="${DOMAIN_NAME}"
# chamber KMS config
ENV CHAMBER_KMS_KEY_ALIAS="alias/${TF_VAR_namespace}-${TF_VAR_stage}-chamber"
# Terraform State Bucket
ENV TF_BUCKET_REGION="${AWS_REGION}"
ENV TF_BUCKET="${TF_VAR_namespace}-${TF_VAR_stage}-terraform-state"
ENV TF_DYNAMODB_TABLE="${TF_VAR_namespace}-${TF_VAR_stage}-terraform-state-lock"
That’s my Dockerfile for tf stuff

If I do an env | grep -i bucket
I can see them

is the domain just for display, or do we actually modify resources on the TF_VAR_domain_name
?

So this is an older dockerfile. Having the tf envs in the dockerfile led to sprawl. e.g. domain_name defined as a global does not make sense. It was used by one module. This is why we moved to direnv.

Do you have “use terraform” and “use tfenv” in your .envrc

Yeh but I have no backend .envrc

I think I tried it but it changes nothing

Is there a project with backend interpolation working along with the newest Dockerfile?

It will definitely not work to call terraform init if you are not setting up the environment with tfenv

Would be fab if you get a chance to send me the bash history or stdin/stdout of the files you edit to tfenv the backend variables, and then how they’re used

There’s just too much guesswork. Need to see how someone does it

What’s in their conf/backend/.envrc? What do they run to init the project? What’s in their terraform backend block and variables file?

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

This works

Had a demo of this at a meetup last night

Please work backwards from terraform init

Understand how to pass environment variables to terraform init

Then try setting those explicitly without using our env mapping

Tfenv is what we use to map envs

It’s incredibly simple. https://github.com/cloudposse/tfenv/blob/master/main.go
Transform environment variables for use with Terraform (e.g. HOSTNAME
⇨ TF_VAR_hostname
) - cloudposse/tfenv
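
A small sketch of the kind of mapping it performs (values are illustrative):

# plain env vars get a TF_VAR_ prefix, e.g. HOSTNAME=foo -> TF_VAR_hostname=foo
export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=my-state-bucket
source <(tfenv)     # export the transformed variables into the current shell
terraform init      # now receives -backend-config=bucket=my-state-bucket via TF_CLI_ARGS_init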

Thanks.. I feel like I’m getting closer and following what you’re saying but

export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=${TF_BUCKET}
export TF_CLI_INIT_BACKEND_CONFIG_REGION=${TF_BUCKET_REGION}
export TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE=${TF_DYNAMODB_TABLE}
use tfenv
source <(tfenv)

terraform init
tfenv
terraform init
tfenv terraform init

none of those work

Understand how to pass environment variables to terraform init
This is exactly what I’m trying to figure out.
At the moment I’m manually running terraform init -backend-config="bucket=${TF_BUCKET}" -backend-config="region=${TF_BUCKET_REGION}" -backend-config="dynamodb_table=${TF_DYNAMODB_TABLE}"

Also tried this as my .envrc and rebuild:
# Terraform State Bucket
export BUCKET_REGION="${AWS_REGION}"
export BUCKET="${VAR_namespace}-${VAR_stage}-terraform-state"
export DYNAMODB_TABLE="${VAR_namespace}-${VAR_stage}-terraform-state-lock"
#export CLI_INIT_BACKEND_CONFIG_REGION=${BUCKET_REGION}
#export CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE=${DYNAMODB_TABLE}
###
export TF_CLI_INIT_FROM_MODULE=git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.53.4
export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=${BUCKET}
source <(tfenv)
terraform init

as per readme for tfenv

Man this makes me so frustrated! Every readme is just 2 lines away from being comprehensible. Never is there a final usage example

“Here’s what it looks like top to bottom”


31 export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=terraform-state-bucket
32 tfenv
33 tfenv terraform init
So this works.. but now how do I get my .envrc
to automatically be doing this

` 28 direnv allow /conf/backend/.envrc` I tried this however but that didn’t do anything

hang on think Im onto something

# Terraform State Bucket
export BUCKET_REGION="${AWS_REGION}"
export BUCKET="${VAR_namespace}-${VAR_stage}-terraform-state"
export DYNAMODB_TABLE="${VAR_namespace}-${VAR_stage}-terraform-state-lock"
#export CLI_INIT_BACKEND_CONFIG_REGION=${BUCKET_REGION}
#export CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE=${DYNAMODB_TABLE}
###
export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=${BUCKET}
source <(tfenv)
use tfenv
Nope after a rebuild none of the following work
2 terraform init
3 tfenv terraform init
4 tfenv

It can’t be due to interpolation in envrc surely

Man I’m fuming at this point. This is so easy to doc

1) Example .envrc 2) Example using .envrc

Wait what I have to GO INTO the conf dir to activate these variables?
✓ (healthera-sandbox-admin) ~ ⨠ cd /conf/backend/
direnv: loading .envrc
direnv: using terraform
direnv: using atlantis
direnv: using tfenv
direnv: export +BUCKET +BUCKET_REGION +DYNAMODB_TABLE +TF_BUCKET_PREFIX +TF_CLI_ARGS_init +TF_CLI_INIT_BACKEND_CONFIG_BUCKET +TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE +TF_CLI_INIT_BACKEND_CONFIG_KEY +TF_CLI_INIT_BACKEND_CONFIG_REGION +TF_STATE_FILE +TF_VAR_bucket +TF_VAR_bucket_region +TF_VAR_direnv_diff +TF_VAR_direnv_watches +TF_VAR_dynamodb_table +TF_VAR_oldpwd +TF_VAR_tf_bucket_prefix +TF_VAR_tf_cli_args_init +TF_VAR_tf_state_file ~TF_VAR_pwd ~TF_VAR_shlvl

No idea. Feel like I’ve tried every permutation that could possibly be inferred from the README

wow I think I got it this time

holy shit I did

.envrc
goes into the terraform module

you cd into the terraform module dir

you type direnv allow
if it prompts

and bam

that was stupidly hard

amazing

even better

you don’t even need it in the directory of the project

can be in the root terraform dir

i.e.
devops/terraform/providers/aws/.envrc
instead of
devops/terraform/providers/aws/vpc/.envrc

OK when you’re next around, since I get it now, we should cover best practices so I can then doc this, please.

And also is encrypt
required to be defined?
terraform {
backend "s3" {
encrypt = true
}
}
✓ (healthera-sandbox-admin) aws ⨠ direnv allow
direnv: loading .envrc
direnv: using terraform
direnv: using atlantis
direnv: using tfenv
direnv: export +BUCKET +BUCKET_REGION +CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE +CLI_INIT_BACKEND_CONFIG_REGION +DYNAMODB_TABLE +TF_BUCKET_PREFIX +TF_CLI_ARGS_init +TF_CLI_INIT_BACKEND_CONFIG_BUCKET +TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE +TF_CLI_INIT_BACKEND_CONFIG_KEY +TF_CLI_INIT_BACKEND_CONFIG_REGION +TF_STATE_FILE +TF_VAR_bucket +TF_VAR_bucket_region +TF_VAR_cli_init_backend_config_dynamodb_table +TF_VAR_cli_init_backend_config_region +TF_VAR_direnv_diff +TF_VAR_direnv_watches +TF_VAR_dynamodb_table +TF_VAR_oldpwd +TF_VAR_tf_bucket_prefix +TF_VAR_tf_cli_args_init +TF_VAR_tf_state_file ~TF_VAR_pwd ~TF_VAR_shlvl
Can’t see it there

Ok so:
Current setup:
terraform/aws/.envrc
# Terraform State Bucket
export BUCKET="${NAMESPACE}-${STAGE}-terraform-state"
export BUCKET_REGION="${AWS_REGION}"
export DYNAMODB_TABLE="${NAMESPACE}-${STAGE}-terraform-state-lock"
export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=${BUCKET}
export TF_CLI_INIT_BACKEND_CONFIG_REGION=${BUCKET_REGION}
export TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE=${DYNAMODB_TABLE}
use terraform
use atlantis
use tfenv
terraform/aws/vpc/.envrc
# Terraform State Bucket
export BUCKET_KEY="backend"
#export BUCKET_REGION="${AWS_REGION}"x
# Terraform init bucket settings
#export TF_CLI_INIT_BACKEND_CONFIG_KEY=${BUCKET_KEY}
use terraform
use atlantis
use tfenv
Note the commented out BUCKET_REGION…
Two commands:
BUCKET_REGION commented out:
✓ (healthera-sandbox-admin) backend ⨠ terraform plan
var.bucket_region
Enter a value:
BUCKET_REGION not commented out:
TF works as expected
#####
The question:
How do I manage regions with .envrc… it’s as though I can only have one .envrc at a time. This suggests I should define the region in the Dockerfile as a global, however that would mean I need a geodesic module per region. Ideally I have a ‘prod’ account and use the regions inside it.
Likewise for the state key: where should I set this?
This is all in the context of one bucket per AWS account and each AWS account has infra running on multiple data centers…

The ultimate goal is so that I only have one .envrc for all my terraform projects (DRY; not one per TF project) and eventually TFENV spits out something like
export TF_VAR_tf_cli_args_init='-backend-config=region=eu-west-2 -backend-config=dynamodb_table=acme-sandbox-terraform-state-lock -backend-config=bucket=acme-sandbox-terraform-state -backend-config=key=backend/terraform.tfstate'

although I think key needs to have backend/eu-west-2/terraform.tfstate
tbf

Hang on I think I’ve got it…

Nope

Thought maybe having the main .envrc in the tf root and then another .envrc in the individual project with source <(tfenv)
inside would allow two

Yeh really feel like to deploy a TF project across regions you’d need to have another geodesic module..

According to direnv it should be possible to load two .envrcs with source_env ..
or source_up
but those commands aren’t found

That’s odd. The source_env works for us as we used it this week. Something is wrong with the direnv integration in your shell.

Did you upgrade geodesic? The earlier version in the ref arch perhaps did not support direnv

We rolled out a lot of enhancements during January that did not percolate through to docs and ref arch due to a very aggressive 2 week sprint for a customer.

I suggest to try updating the geodesic base image to the latest one

Thanks @Erik Osterman (Cloud Posse) Will confirm whether I am on the latest geodesic base and root module base

The fact that source env is not working I think is a good hint to why you are having a lot of grief :-)

To deploy across regions what you want to do is use a directory approach in conf to support that

Something like conf/us-west-2/backing-services

Then in the us-west-2 folder set the AWS_DEFAULT_REGION to us-west-2 in the envrc
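
For illustration, a minimal sketch of that region-level .envrc (paths and region are illustrative):

# /conf/us-west-2/.envrc
export AWS_DEFAULT_REGION=us-west-2
# project folders underneath (e.g. /conf/us-west-2/backing-services) can then
# pull this in with source_env .. / source_up, as mentioned above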

FROM cloudposse/terraform-root-modules:0.53.0 as terraform-root-modules
FROM cloudposse/helmfiles:0.19.1 as helmfiles
FROM cloudposse/geodesic:0.72.2
Damn

Already on latest

Haha Erik I’m such a muppet. source_up
is a thing to go into .envrc
not a CLI command

wasn’t clear to me ¯\_(ツ)_/¯

aws/.envrc
# DRY variables - not changed per project
# Terraform State Bucket
export BUCKET="${NAMESPACE}-${STAGE}-terraform-state"
export BUCKET_REGION="${AWS_REGION}"
export DYNAMODB_TABLE="${NAMESPACE}-${STAGE}-terraform-state-lock"
aws/backend/.envrc
source_env ..
# Terraform State Bucket
export BUCKET_KEY="backend"
# Terraform init bucket settings
export TF_CLI_INIT_BACKEND_CONFIG_KEY=${BUCKET_KEY}
use terraform
use atlantis
use tfenv
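
Usage then looks something like this (the direnv allow step is only needed the first time or after the file changes):

cd aws/backend
direnv allow      # loads ../.envrc via source_env plus the local .envrc
terraform init    # backend settings arrive via TF_CLI_ARGS_init from tfenv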


haha, yea, glad you got to the bottom of it. definitely worth reading up on https://direnv.net the full capabilities of direnv

looks like you’re getting it though …

https://docs.cloudposse.com/terraform/terraform-best-practices/#use-terraform-cli-to-set-backend-parameters doesn’t quite show the final commands

https://archive.sweetops.com/geodesic/#b3fc9758-0fad-4afe-a80b-2873fdef4907 so close here last time @Erik Osterman (Cloud Posse) lols

Hey @oscarsullivan_old sorry been so focused on the conference. Back to life as normal next week.

If I find some time today will take a look.

Thanks back to trying reference-architectures on another account but that’s erroring a few times lool

in a rush to get it working this weekend before I lose hold of the company credit card

will reach out this afternoon

in the middle of a meeting atm

@Erik Osterman (Cloud Posse) I think I found the docs you’re referring to here: https://github.com/osulli/geodesic-getting-started If those are they, I’ll read through this and ping back if I have any issues
A getting-started guide for Cloud Posse’s Geodesic. - osulli/geodesic-getting-started

Let me know if you spot errors I’m gonna tidy it tomorrow. Noticed a few issues. Also the backend example is wrong

Need @Erik Osterman (Cloud Posse) input on how to get it to stop prompting me for a bucket name. For now I’m manually setting the bucket via CLI arg, but ideally that’s part of geodesic.. which I know it is, just for some reason I can’t get it to work.

Well, I’m working through making a root account, which it looks like you might have skipped?

Reference architecture was a real ball ache for me

I did not get it working

Tried again this afternoon and faced a few errors and ditched it again after a few hours

Ah, yeah, my company is at the point we want to build out something similar to the ref architecture, so I was just going to spin it up quick and see how these tools handle it, and maybe just use them

Also I don’t like using stuff I don’t understand and therefore can’t debug

Sometimes if automation goes wrong you can manually do a step then continue say

But with ref arch I am flying blind

So, we also have an existing account I could use as our root account, and that might make the most sense, but I wanted to test this on a blank canvas to be sure I didn’t trash anything there.

Don’t run it on your current account

No one knows what would happen lol

That’s not been done before

Exactly. I’m just thinking forward, because eventually once I get these k8s clusters built inside the proper account arch, I’m going to have to deal with moving data

and that’s not going to be a small project

So I had two routes

Use the existing root account and manually create sub accounts, their VPCs, IAM roles and VPC peering

Or new root account and ref arch

Well new root accounts need an AWS sub account limit increase and that took me 9 days to get

Furthermore every time I’ve run it I get some sort of errors

Several were my fault this time tbf

But it’s not very uh

Idempotent

And if a bootstrap tool isn’t idempotent it’s stressful when it errors midway

Yeah, that’s my concern also

But I just started looking a couple hours ago, not really a fair shake :)

If you follow all 3 guides on my github you’ll have

Your existing root account

Aws SSO for console access

Sub accounts with aws organisations

And a geodesic module for your sandbox account

This weekend I’m working on VPC peering
2019-03-09

@oscarsullivan_old are you around?

I can help jump on a zoom quickly and see if I can get you unstuck

No sorry Erik not for the rest of the night!

ah bummer

Sorry thank you though
2019-03-11

Where in conf/ or rootfs/ should I put bash aliases? Getting a bit dull typing out my full path to my terraform projects

stick those in rootfs/etc/profile.d/mycompanyname.sh

or some filename like that

profile.d
is loaded by the shell when you start up
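
For example, a minimal sketch (filename and aliases are made up):

# rootfs/etc/profile.d/mycompany.sh
alias conf='cd /conf'
alias tf-vpc='cd /conf/vpc'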

Hey all, question that’s been on my mind. AFAIK Geodesic sets remote state S3 key according to the directory basename/leaf directory

has the possibility of using the whole directory come up? e.g. (/conf/)cluster1/{vpc,kops}
, (/conf/)cluster2/{vpc,kops}

AFAIK Geodesic sets remote state S3 key according to the directory basename/leaf directory
I’ve had to make changes for this ability.. it wasn’t doing it automatically for me

It would be nice to have it generate the path w.r.t. /conf

https://sweetops.slack.com/archives/CB84E9V54/p1552297619106900?thread_ts=1552053112.046100&cid=CB84E9V54 this is what I have to do
aws/.envrc
# DRY variables - not changed per project
# Terraform State Bucket
export BUCKET="${NAMESPACE}-${STAGE}-terraform-state"
export BUCKET_REGION="${AWS_REGION}"
export DYNAMODB_TABLE="${NAMESPACE}-${STAGE}-terraform-state-lock"
aws/backend/.envrc
source_env ..
# Terraform State Bucket
export BUCKET_KEY="backend"
# Terraform init bucket settings
export TF_CLI_INIT_BACKEND_CONFIG_KEY=${BUCKET_KEY}
use terraform
use atlantis
use tfenv

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

we generate it here

since switching to .envrc files, I’m having the same issue

@Erik Osterman (Cloud Posse) yeah it’s a really easy change, just curious if it came up

just lack of time


aka the bucket prefix doesn’t seem to be respected

rc.d/terraform
:
pwd_tmp=$(pwd)
export TF_BUCKET_PREFIX=${TF_BUCKET_PREFIX:-${pwd_tmp:1}}

(or so. actually we need to remove /conf
, oops)

export TF_BUCKET_PREFIX=${TF_BUCKET_PREFIX:-$(pwd | cut -d/ -f2-)}

this will strip off the first part (/conf
)

are you sure it’s not -f3-

you’re totally right

i had a brainfart

that works perfect on my Geodesic and local

state folders should be relative to conf

e.g. ordinarily, we would have /conf/vpc
, so the state bucket folder should be vpc

or /conf/us-west-2/vpc
should be us-west-2/vpc

right, would be interested in having it as an option

where s3://$BUCKET/$PREFIX/terraform.tfstate

@raehik if you want to open a PR for that change we’ll promptly review
https://github.com/cloudposse/geodesic/blob/master/rootfs/etc/direnv/rc.d/terraform#L16

it’s a possible breaking change, right? so how would I make it an option, check for ${TF_USE_FULL_PWD}
?

yea, it’s possibly breaking, though I suspect anyone who was using subfolders would have already encountered this problem.

We can add a flag. I’d default it to the new format.

Is this for the s3 key?

maybe we have TF_USE_CWD
vs TF_USE_PWD

It would be nice if it were a new var

CWD = current working directory

Why a new var?

PWD = present working directory?

oh

*print working directory, my b

because BUCKET prefix is something else

so for clarification, we’re talking about defaulting TF_BUCKET_PREFIX

to something more accurate

ok, how about this

TF_BUCKET_PREFIX_FORMAT

and we can have one format be pwd
and other format be basename
or basename-pwd
or something like that?

that makes sense

more descriptive than just a flag
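
A rough sketch of what that option could look like in the rc.d/terraform helper (this is just the proposal from the discussion above, not something shipped at this point; the format names are illustrative):

case "${TF_BUCKET_PREFIX_FORMAT:-pwd}" in
  pwd)      export TF_BUCKET_PREFIX=${TF_BUCKET_PREFIX:-$(pwd | cut -d/ -f3-)} ;;   # /conf/us-west-2/vpc -> us-west-2/vpc
  basename) export TF_BUCKET_PREFIX=${TF_BUCKET_PREFIX:-$(basename "$(pwd)")} ;;    # /conf/us-west-2/vpc -> vpc
esac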

Sorry was getting confused.
On another similar note, the key
for the s3 backend isn’t set automatically, I don’t think. Can anyone else confirm, or is it perhaps just my setup?

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

that’s where it’s set

export TF_CLI_INIT_BACKEND_CONFIG_KEY="${TF_BUCKET_PREFIX}/${TF_STATE_FILE}"
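
So, as a worked example (values illustrative), for a project at /conf/us-west-2/vpc with the prefix change above:

echo "${TF_CLI_INIT_BACKEND_CONFIG_KEY}"
# with TF_BUCKET_PREFIX=us-west-2/vpc and TF_STATE_FILE=terraform.tfstate this prints
#   us-west-2/vpc/terraform.tfstate
# i.e. the state object lands at s3://$TF_BUCKET/us-west-2/vpc/terraform.tfstate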

thanks

@oscarsullivan_old and @raehik this is fixed now for your guys right? https://github.com/cloudposse/reference-architectures/issues/13
during a first-run of R-A, I changed in root.tfvars aws_region = "us-west-2" to aws_region = "us-east-2" during make root this generated an error - leading me to suspect there…

In theory Yeh! Not tried it

so, are the root.cloudposse.co and test.cloudposse.co repos just examples, or outdated in favor of the reference-architectures repo?

the *.cloudposse.co are examples of how we use geodesic; these are what we use for presentations and demos

the reference-architectures
is a first stab at automating that process but makes a very strong assumption: start with a virgin AWS root account

what we implement for our customers tends to be a little bit ahead of what we have in *.cloudposse.co just b/c we don’t have the time to keep them updated

so for a cold start you would recommend reference-architecture
over using the examples?

it somewhat depends on your existing level of experience with AWS, docker, multi-account architectures, terraform… and also if you’re going to use k8s.

ok assuming a good understanding and experience in all of those + planning to use k8s

Then I suggest kicking the tires for reference architectures. Before you start, have a look at the open issues so you can be prepared for common issues.

I can also give you a review over zoom


the first step is to request an increase in the number of AWS accounts

yup


Expect 7 days for increase @Josh Larsen I sat around for 1.5 business weeks waiting for the account increase lols
2019-03-12

hey hey

how is terraform module dependency / execution order dealt with in geodesic?

having /conf/tfstate-backend /conf/vpc /conf/foo /conf/bar (other than knowing that tfstate needs to be first, then vpc, then the others)

thinking about it more from a pipeline point of view

Pretty nice concept for RKE https://github.com/yamamoto-febc/terraform-provider-rke
Terraform provider plugin for deploy kubernetes cluster by RKE(Rancher Kubernetes Engine) - yamamoto-febc/terraform-provider-rke

How come Packer isn’t included in geodesic??

---
- hosts: localhost
  become: yes
  pre_tasks:
    - name: Check if running Ubuntu
      fail: msg="DevOps Workstation can only be run on Ubuntu."
      when: ansible_distribution != "Ubuntu"
    - name: Update apt cache
      become: yes
      apt:
        cache_valid_time: 600
        update_cache: yes
  vars_files:
    - vars/devops-workstation/requirements.yml
    - vars/devops-workstation/settings.yml
  roles:
    - fubarhouse.golang
    - geerlingguy.docker
  tasks:
    - name: Install pip3 requirements
      pip:
        chdir: vars/devops-workstation/
        executable: pip3
        requirements: requirements.txt
        extra_args: --no-cache-dir
    - name: Set symlink to code directory
      file:
        src: "{{ sd }}"
        dest: /devops
        owner: root
        group: root
        state: link
    - name: Install Terraform
      unarchive:
        src: https://releases.hashicorp.com/terraform/{{ terraform }}/terraform_{{ terraform }}_linux_amd64.zip
        dest: /usr/local/bin
        remote_src: yes
        mode: 775
        owner: root
        group: root
    - name: Install Packer
      unarchive:
        src: https://releases.hashicorp.com/packer/{{ packer }}/packer_{{ packer }}_linux_amd64.zip
        dest: /usr/local/bin
        remote_src: yes
        mode: 775
        owner: root
        group: root
    - name: Set symlink for Go
      file:
        src: /usr/local/go/bin/go
        dest: /usr/local/bin/go
        mode: 775
        owner: root
        group: root
        state: link
    - name: Set symlink for Scenery
      file:
        src: /home/{{ user }}/go/bin/scenery
        dest: /usr/local/bin/scenery
        mode: 775
        owner: root
        group: root
        state: link
    - name: Set symlink for Dep
      file:
        src: /home/{{ user }}/go/bin/dep
        dest: /usr/local/bin/dep
        mode: 775
        owner: root
        group: root
        state: link
    - name: Install Terragrunt
      get_url:
        url: https://github.com/gruntwork-io/terragrunt/releases/download/v{{ terragrunt }}/terragrunt_linux_amd64
        dest: /usr/local/bin/terragrunt
        mode: 775
        owner: root
        group: root
    - name: Install aws-vault
      get_url:
        url: https://github.com/99designs/aws-vault/releases/download/v{{ aws_vault }}/aws-vault-linux-amd64
        dest: /usr/local/bin/aws-vault
        mode: 775
        owner: root
        group: root
    - name: Run dep to install Terratest
      shell: cd {{ GOPATH }} && dep init && dep ensure -add github.com/gruntwork-io/terratest/modules/terraform

should use something like the playbook I use to setup my local

Would be fab if we can get it included. Just checked out the repo and it looks like you host your own terraform image that’s imported into Alpine, so not sure to what extent I can do this for you and PR.

Also pip is INCREDIBLY behind so not a viable source, which would have been an easy PR.

but also my solution for now is running on my local outside of geodesic with:
aws-vault exec healthera-mgmt-iac bash manage.sh build base.json


works for me

If it’s out of date, submit PR here: https://github.com/cloudposse/packages/tree/master/vendor/packer
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

@Erik Osterman (Cloud Posse) Regarding terraform-root-modules#132 (chamber dependencies): I’ve forked the repo already to adapt it to my needs, I just wanted to point it out. Do you want me to continue to document issues like this for future reference or is that just clutter for you?

If it’s out of date, submit PR here:
Sorry I meant pip’s version is out of date.

do you include packer in the base Geodesic image though? I couldn’t see it

we cannot ship everything in the base image b/c it gets toooooooo big

that’s why we have the cloudposse/packages
distribution

just add RUN apk add --update packer@cloudposse to your Dockerfile
to your Dockerfile

thaaaaanks

perfect

@oscarsullivan_old oh you are correct… i got stonewalled by tech support saying my account was too new to increase that limit. i guess this is a new policy now. he wouldn’t even give me a solid timeframe at all… just said the account has to be around for a “few weeks” before they will up the limit.

WOW

Wow, didn’t encounter that

that is lame.

that’s amazing

…-ly lame yes

he said it was a new policy

haha wow just did mine last week

could be due to the amount of people trying Cloudposse’s ref arch?

Don’t they have their own thing, Launchpad, since they acknowledge that multi-account setups are kinda best practice nowadays?

¯\_(ツ)_/¯

haha

So basically, you need to have some spare accounts lying around to “be prepared”.

haha man, there’s going to be a black market now for “aged” AWS accounts

like there is for domains, email addresses, instagram accounts, facebook accounts, etc

Well I have instructions on how to do it manually

just, shit.. that’s all

I mean its not

or people like @Dombo who I think requested account limits of 1000


like I’ve said before, I like knowing what’s happening… when I use the ref architecture I have no idea what is happening REALLY, and when it errors it’s a ballache to diagnose

i’ve followed the ref arch, and continued manually when encountered errors

got there in the end

I hit a billing error or something

Idk it just got a bit frustrating

@oscarsullivan_old are you referring to just the setup piece or to geodesic in general

all

aws setup and geodesic setup

I am the doc master

DevOps Engineer, passionate teacher, investigative philomath. - osulli

see aws* and geodesic-getting-started

Can help, but I’ve been super busy at work so a few bits are not quite up to scratch.. that’s its weakness, as well as it not being automated

thanks @oscarsullivan_old for that getting started doc… it helps a lot just in understanding how all the env vars are utilized now. very helpful.

though i am curious on your reasoning to put the vpn and in general have a management account separate from the root.

In the same way I don’t install nginx as root or use sudo to solve my problems

Root is root and isn’t meant for anything but being the root. Applies to permissions and accounts

here is the reply from tech support.. you might add this into the cold start docs.

thanks @Josh Larsen! will open an issue in case it helps others

We’re receiving reports from community members that requesting account limits is taking longer. For fresh AWS root accounts, it’s even more delayed.

Is there a way to easily “unmake” the root account bootstrapping to test from a fresh place on the reference architectures? I feel like I should be able to do a terraform destroy in here somewhere and accomplish that, but not seeing built in make targets to make it easy

so, it depends.

basically, the problem is AWS makes it programmatically impossible to destroy accounts created using the API without first logging in and accepting T&Cs

this is a big reason we haven’t yet tried to tackle automated e2e testing

so since it means we’d need to start from point X, it’s not clear where point X is

Well, with the dynamodb problems AWS had this afternoon, the bootstrapping on my root account broke

So I was just hoping I could “terraform destroy” the right way

unfortunately no make target that implements it right now

I can fix the state mismatch by hand

Was just curious

but we’ve been talking more about this internally

ultimately, we want it to be as easy as docker-compose up

It’d be nice if we could clear out an account with a terraform destroy, even if it doesn’t kill the account itself, just for testing purposes.

@Erik Osterman (Cloud Posse) @Alex Siegman AWS purge tool

ya, though this is where it gets complicated

b/c one of the steps is to create the accounts

so now to bring it back up it needs to skip that

tf is not good at skipping

tf certainly is not. Where is the state stored for the account that is brought up? in the root account? or in the subaccount?

not if 99% of everything is done using local-exec

we’re using terraform to setup the reference architectures, but perhaps it was not the ideal tool for the job. we’re basically using terraform to generate terraform code.

then executing that terraform code inside of docker.

because we’re using local-exec a lot to call docker
, there’s no easy way to recover state. terraform cannot compute an accurate plan.

There’s also cloudnuke… Don’t need to use terraform if you really want to just destroy everything

hahaha

too true


A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke

~too bad it doesn’t delete ECS too~

a number of other tools though

Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke

Yeah, it doesn’t do everything, for sure, but if the idea is right, and the design is right, open a PR

~though that’s the thing; they have a PR for that, but won’t merge it b/c it won’t selectively nuke~

First attempt at addressing #32 This implements nuking of: ECS tasks (indirectly, by draining ECS services) ECS services ECS clusters This does NOT implement nuking of: ECS task definitions Targ…

oh, maybe this is out of date now

nm

haha, it was @sarkis who opened the issue

Resources like ECSCluster, Tasks, Services should all be nuked.


thanks @loren this was a great suggestion. i think we can work with something like this.

Yeah, they merged it, but removed some functionality due to limitations they couldn’t figure out at the time
2019-03-13

Anyone used terraform for helm + k8 without wondering why they f’king bothered and didn’t just use the 5 or 6 lines of shell they usually used to set helm up

some of this stuffs like self-flagellation

@chrism we have been using Helmfile

I am curious to learn what problems you have run into using helm with terraform.

(Also in our experience it’s never been 5-6 lines! So many configuration options.)

I have it wired up to deploy a cluster in a private network via RKE; so that bit’s fine; there’s a terraform plugin which does most of what’s needed. I scripted out the terraform to set up helm to deploy rancher to it. But it’s flaky as hell

RKE?

Rancher?

All the “workarounds” to setting up tiller basically end in people posting commands to execute manually. And for extra fun, because all this junk is then in the tf state file, when you destroy it locks up trying to undo helm, which doesn’t work very well

and yeah 6 or 7 lines is a little off; 12 is more accurate


Ultimately though we’re using rancher, then we launch clusters from rancher (cant script that yet)

I’ve a growing hatred for acls and security groups

There should be a training mode where you just run shit for a while and they spit out by tag which group needs to talk to what on which ports

TF should have just gone with helmfile and avoided the mass of extra types it has to support and the usual code rot / delayed releases

You know you’re starting to lose the plot when you have helpers called fuckingssh to ssh via a bastion but making sure the stupid keys written out by terraform are chmod 400 because too many shits were given

Have you seen our kopsctl cli using variant? It handles SSH using key stored in SSM

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Haha

@mumoshu once proposed a Helmfile provider for terraform

just seems more logical for something that doesn’t “undo” very well

running helm reset only works half the time and when it does the bugger hangs around thinking about it

I hate helm

I get the idea

But the implementation makes me want to throw shit

Yea, the current implementation leaves a lot to be desired

apparently helm 3 is available for tire kicking

Rancher switching to helm from rke-addons was also a total pita

I would like to see your rancher setup

you’re using the rancher cli with geodesic?

If so, we should probably add a package for it

Sorry totally missed that (bloody threading notifications are just a white dot on the channel) Yeah I’ve customised my image to include a few extra tools and add a few non-repository terraform plugins

RKE being KOPS by Rancher basically

I’ve set folders in my conf for regions which I’m pulling our internal modules into (using a make file + env files to override the region on build et al)

RUN apk add nano && \
    apk add gnupg
RUN wget https://github.com/mozilla/sops/releases/download/3.2.0/sops-3.2.0.linux -O /usr/bin/sops
RUN mkdir -p ~/.terraform.d/plugins/ && wget https://github.com/yamamoto-febc/terraform-provider-rke/releases/download/0.9.0/terraform-provider-rke_0.9.0_linux-amd64.zip -O ~/.terraform.d/plugins/trke.zip
RUN unzip ~/.terraform.d/plugins/trke.zip -d ~/.terraform.d/plugins/ && rm ~/.terraform.d/plugins/*.zip -f
RUN wget https://github.com/stefansundin/terraform-provider-ssh/releases/download/v0.0.2/terraform-provider-ssh_v0.0.2_linux_amd64.zip -O ~/.terraform.d/plugins/ssh.zip
RUN unzip ~/.terraform.d/plugins/ssh.zip -d ~/.terraform.d/plugins/ && rm ~/.terraform.d/plugins/*.zip -f
RUN wget https://github.com/kubernauts/tk8/releases/download/v0.6.0/tk8-linux-amd64 -O /usr/bin/tk8
RUN wget https://github.com/rancher/rke/releases/download/v0.1.17/rke_linux-amd64 -O /usr/bin/rke
These are the main things I pull in that aren’t in the default image

I’m using SSM in much the same way you are for passing around extra things. With the rancher stuff I was using local files to maintain what tiny portion of my sanity remains

The ssh terraform module handles the ssh tunnelling for the stuff that follows after it (local_exec). The ticket for making connection{} handle that in terraform as you’d expect is still open/under discussion et al, but this works fine; it’s blunt but fine.

hadn’t seen https://github.com/mumoshu/variant before
Write modern CLIs in YAML. Bash + Workflows + Dataflows + Dependency Injection, JSON Schema for inputs validation - mumoshu/variant

Looks quite nice even if the idea of more yaml in my life isn’t hugely appealing

I like the simple DSL

it’s the “glue” for combining tools and presenting a common interface that your team can use

@chrism i’ve been looking for a migration path from yaml+bash to something more maintainable, as an additional feature to variant
e.g. bash + yaml –> (yaml OR starlark OR tengo OR Lua) + (bash OR starlark OR tengo OR Lua)
https://github.com/d5/tengo https://github.com/google/starlark-go
A fast script language for Go. Contribute to d5/tengo development by creating an account on GitHub.
Starlark in Go: the Starlark configuration language, implemented in Go - google/starlark-go

wow, neat!

or maybe just export yaml+bash as golang sources such that the interop with other golang code becomes easier..

they took a process that reliably worked, and swapped it for one that can fail magically

and added 2 more pages of instructions to the setup

the Rancher K2 stuff looked interesting (or was it k3) if for nothing more than them reducing the amount of shit required

(something I liked about nomad)

Quickly shoved an aws_security_group in my config; now to spend 20 minutes changing to separate entries because terraform thinks it needs to delete it to update

I am thinking of maybe setting up a weekly recurring “office hours” that would be free for anyone to join

it would be specifically to help those using geodesic
and our modules

Thinking 11am PST on Wednesdays.

Sounds great. Would love to take part from both a Q and an A perspective. Is 11:30am-12pm PST feasible? Allows me to get home and unpack after work before joining

Yes that works
2019-03-14

anyone got a TLDR price comparison of running a cluster in EKS vs running it manually with partially reserved nodes.

Saw an article by cloudhealth but you know when you really cba reading pages of shit just for a THIS COSTS MORE

I don’t have a cost comparison, but it basically replaces your master nodes for a $144 per month fee.

Too expensive for small clusters, but better when sizing up and desiring HA.

Update on AWS account limit increase: They approved an account I created last Thursday for 10 sub-accounts. (The account is completely independent, but same company had another account previously though, which shared the same billing address, not sure if that matters).

Fun fact; when you setup rancher you have to give it a hostname for the cluster to use. Fair enough you might think. But if you’re setting it up on a private network with an internal lb and planning to expose it from the public subnet via nginx, IF you use a real domain name all the k8 clusters you make HAVE to be able to talk to that dns entry.
There’s no x-host override to pass down (so say you could set rancher to use myinternallb.local and nginx could have mycluster.domain.com)

If you set nginx to pass the lb name as host header down because of the way they’ve written the damn UI it fails

the ui loads

but all the calls are fully resolved by URL so it tries calling into nginx with the local lb name

Other things have a similar issue like identity server; but they had the foresight to allow you to pass additional headers in.
2019-03-18

added an integration to this channel: Google Calendar for Team Events

March 20th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853

2019-03-19

Curious, does the calendar figure out the right time zone to show me based on my settings in slack? 1:30 PM with no TZ info is basically a number without units, useless! (Pet peeve, I work from 3 timezones in the US plus Japan regularly, so I have to deal with TZ translation all the time)

Thanks I will update the title to include the TZ

11:30am PST

I correctly get 6:30 PM

v cool

I’ve just opened a PR to my getting started guide with an example project showing Geodesic and Terraform interactions (A project I actually use, though anonymised so I’ve not tested it with that anonymisation). Updated the docs as well for clarity and accuracy. https://github.com/osulli/geodesic-getting-started/pull/1
What Update the guides with clearer examples Add Example project that I actually use with Geodesic and Terraform Why Several more weeks worth of experience using the tools Some clear errors in t…

Thanks @oscarsullivan_old! Will review that

The example I had given before on how to use .envrc was totally wrong and its been bothering me for over a week that I might be leading others the wrong way!

I need to get me some of that make readme
magic

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

@tamsky has also been working on some .envrc
docs

https://github.com/osulli/geodesic-getting-started/pull/1/files#r267123117 tis an interesting point you raised. Left my thoughts on it… I still don’t have a solution

Been thinking about it over the last few days

if you can make it on the call on wednesday, we can review the multi-region discussion

should be quite straight forward actually

Sure can. Would be happy to answer any Qs as well from you or others on how I recently setup Geodesic

6:30 PM London time, tomorrow
2019-03-20

multi region is interesting, where does your state live?

hope ya’ll are namespacing things by region!

March 20th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853

Dangit, I meant to show up to that so I could listen in. I need to add these to my calendar!

@Alex Siegman we’ll have another one next week

2019-03-21

How are you abstracting lists to .envrc
s?
export AVAILABILITY_ZONES=["eu-west-2a", "eu-west-2b", "eu-west-2c"]
export AVAILABILITY_ZONES="eu-west-2a", "eu-west-2b", "eu-west-2c"

Still want to give you an answer for the future: export AVAILABILITY_ZONES='["eu-west-2a", "eu-west-2b", "eu-west-2c"]'
should work.



@mmuehlberger ended up coming back to this for something elsewhere. How do you then use this? "${var.availability_zones}" doesn’t do the trick

Oh hold on. I’m doing this ["${var.readonly}"]

"${var.readonly}"
should be fine.

Wait, I confused myself already.

Haha

export AVAILABILITY_ZONES='["eu-west-2a", "eu-west-2b", "eu-west-2c"]'
variable "readonly" {}
principals_readonly_access = "${var.readonly}"

* module.ecr.var.principals_readonly_access: variable principals_readonly_access in module ecr should be type list, got string

Yeah, readonly
is of type string, if you don’t specify otherwise.

thanks

variable "readonly" {
type = "list"
}


That would make readonly
a list.

Perfect. Was missing that type declaration

Much abstraction. Very wow

If you give it a default of []
it automatically infers it.

so variable "readonly" []
instead of {}

or
variable "readonly" {
[]
}

variable "readonly" {
default = []
}

cool thanks

I’m getting invalid inputs

Nvm I’ll just use a tfvars file with direnv
availability_zones=["eu-west-2a", "eu-west-2b", "eu-west-2c"]

if you do choose to use envs, you need to quote the assignment for complex var-types…

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

for a map:
export TEST='{bar="baz", foo="zzz"}'

Thanks both. I should be using tfvars file anyway since it is TF specific variable.

When you type yes outside of a question and /usr/bin/yes goes into an infinite loop that ctrl+c won’t break

omg I do that all the time with terraform plan

terraform plan
yes
y y y y y y y y y...

Terraform supports a flag to default to yes on all prompts

No need to script it
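
(Presumably this refers to -auto-approve, which skips the interactive plan approval:)
terraform apply -auto-approve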

it’s just a mistake hahaha

I go to accept it thinking it was apply

whew, tks for reminding me! i went to apply a config and got distracted, forgot it was waiting on my input

of course my temp cred had expired. sigh

another thing i’m looking forward to in 0.12: support for credential_process providers

Haha. Can’t you increase the max time the creds last inside of IAM portal though?

aws-vault exec --server addresses this problem, provided a sufficiently long session duration (max 12 h)
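
(For reference, a sketch of that pattern; the profile name is a placeholder:)
# --server exposes the credentials via a local metadata endpoint, so long-running
# child processes can keep refreshing them instead of failing when the session expires
aws-vault exec --server example-profile -- terraform apply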

i unfortunately do not have permissions to do that in this particular environment. it’s a federated identity, 1hr max. only way to get a cred is with custom utilities. i might have to steal that --server idea though… i can easily write my own tool…

yes, there are some IAM metadata proxies out there that emulate the AWS behavior

could be a good place to start @loren


Yep, on it, thanks!
2019-03-22

of those of you using geodesic (recent release), does your terminal look like this when you use assume-role?


here’s an animated gif (slightly washed out) https://sweetops.com/wp-content/uploads/2019/03/geodesic-demo-1.gif


or an SVG animation: https://sweetops.com/wp-content/uploads/2019/03/termtosvg_fmnxoium.svg

Basically, what I want to understand is whether it renders “normally” for you, or if you have some ugliness
2019-03-25

Is that not working as expected?

There is 1 event this week
March 27th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853

which terraform module is responsible for creating a billing user/group such that a user can access the billing section of the console

it seems to me that the admin users created have admin perms on the sub accounts, but zero perms on the root account. i’d like to be able to add an additional group policy to allow some of these users to access billing

We don’t have any turnkey groups like that right now. Just haven’t gotten around to it.

Maybe add the group+policy here? https://github.com/cloudposse/terraform-root-modules/tree/master/aws/root-iam
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

I ended up adding a new module that creates the group policy, like terraform-aws-organization-access-group, calling that module in root-iam, and then adding the group to the list of groups used in the users root module (on a per user basis)
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
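
(A rough sketch of what such a group and policy could look like; all names are hypothetical, and note that aws-portal:* actions only take effect once IAM access to billing is activated by the root user in the account settings:)
resource "aws_iam_group" "billing" {
  name = "billing"
}

resource "aws_iam_group_policy" "billing" {
  name  = "billing-view"
  group = "${aws_iam_group.billing.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["aws-portal:ViewBilling", "aws-portal:ViewUsage"],
      "Resource": "*"
    }
  ]
}
EOF
}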

has anyone been able to port forward from the geodesic shell? I’m trying to view kubernetes-dashboard on my host machine but unable to get port forwarding working.


i have tried that but it says the site cannot be reached when going to localhost:54515

@casey take a look here https://github.com/cloudposse/docs/issues/428
what We port map a random port into the geodesic container This port is what should be used for proxying kubectl proxy --port=${KUBERNETES_API_PORT} --address=0.0.0.0 --accept-hosts='.*' wh…
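
(In short, per that issue: run the proxy inside Geodesic bound to all interfaces on the port the container publishes, then hit that port from the host browser; the dashboard path below is the usual kubectl-proxy one and may vary by install:)
# inside the Geodesic shell
kubectl proxy --port=${KUBERNETES_API_PORT} --address=0.0.0.0 --accept-hosts='.*'
# then from the host, browse to
# http://localhost:<KUBERNETES_API_PORT>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/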

thank you, that worked!
2019-03-26

Awesome @Abel Luck ! Sounds spot on
2019-03-27

@Erik Osterman (Cloud Posse) did you ask about this before


March 27th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853
2019-03-28

What’s the deal with teleport (noticed it slid into geodesic)

Teleport is not (or at least should not be) in the released Geodesic. Where did you “notice it slid in”?

In the commit history… Unless I’m mistaken

Which is wholly possible

We have published a helmfile and chart for Teleport recently, but that should not have affected Geodesic.

It’s listed in the 0.76 release

I hadn’t checked the diff. Mobile GitHub is nightmare fuel

Oh that, yes, sorry, my mistake. Geodesic includes our kops manifest template, which we updated to support installing Teleport ssh agents on the instances.

It is the companion to our charts and helmfiles which deploy the Teleport proxy and auth daemons.

Cool. Teleport looks pretty neat if not more complicated than the usual bastion stuff

Teleport is a lot more complicated to set up, but it is indeed very neat. And in High Availability mode, very robust.

Has anyone had this problem with ubuntu 16.04? I updated Geodesic to 0.85.0 and I am now getting the following when I go to assume role:


@Jeremy G (Cloud Posse)

Can you try the most recent previous release and see if it works? If so, then we will know it was a recent change

0.86.0?

oh, misread, sorry

0.84.1

yeah one sec

another note is that it is working on mac for 0.85.0

ohhh interesting

is this issue on your WSL machine?

no im on ubuntu 16.04

ok

@oscarsullivan_old reported some problem too (he’s also on ubuntu)

i’ll be free this afternoon to take a look

yeah, confirming 0.84.1 is not working either

ohhh interesting

I think it has something to do with the interactive prompt that shows. I found this issue but I’m not sure what the resolution was

what assume-role interactive does not works on linux ✗ (none) ~ ⨠ assume-role Failed to read /dev/tty Usage: assume-role [role]

maybe 0.80.0 broke it?

when we went from alpine 3.8 -> 3.9

ill try 0.79.0 one sec

nope 0.79.0 same issue

0.56.0 working, but non-interactive

*0.57.0

The solution as per comments was https://github.com/cloudposse/geodesic/pull/390
what Disable bash completion for fzf if non-interactive terminal why The upstream bash completion script has a bug where it doesn’t handle this use-case

but I added noise to the issue because I thought my issue was the same one, but it wasn’t

@casey I don’t have access to an Ubuntu machine running Docker running Geodesic to test, but I am responsible for most of the recent changes to Geodesic and am happy to help. Do you see any problems before trying to assume-role?

no

assume-role is working on geodesic 0.57.0

i bumped to 0.85.0 while working on my mac, and everything was fine on osx

went back to ubuntu 16.04 at home and it was not working

Please try echo foo | fzf --height 30% --preview 'echo bar' and LMK if that works in 0.8[56].0 on Ubuntu

You should see an interactive prompt, and then just hit return and the prompt should clear.

getting

Failed to read /dev/tty


OK, the problem is fzf. What does env | grep TERM print?

TERM=xterm-256color

Are you in fact running some kind of xterm or are you running a script without a tty?

I dont know

im just using the default terminal on ubuntu

Can you change the terminal window size?

yeah

When you resize the window, does echo ${COLUMNS}x${LINES} change?

yeah

Let’s simplify a bit and unset FZF_DEFAULT_OPTS and then try the above fzf command again.

same

Failed to read /dev/tty

OK. Let’s back up. (If you have time to work through this now).

i can go at it for another 10 min

then we can resume when I am back if its not enough time

great. Try ([[ -t 1 ]] && echo true) || echo false

im doing this inside geodesic container correct?

Do it inside the Geodesic container and then open a second window on the Ubuntu machine (not Geodesic) and run that same command line.

saying bash syntax error near `true)’

bash: conditional binary operator expected
bash: syntax error near `true)'

Sorry, typo, fixed

both are saying true

Let me do a little research. When can you pick this up again?

just ping me

ill be back in like an hour or 2

OK

Ping me with @Jeremy G (Cloud Posse) 6 hours from now if you haven’t heard from me before then.

As a workaround until this is fixed, setting export ASSUME_ROLE_INTERACTIVE=false should get you back to work. You can use our new customization features to set this up automatically each time you run the container. See https://github.com/cloudposse/geodesic/pull/422 for how to do that.
what In addition to some small cleanups and additions, provide a capability for users to customize Geodesic at runtime. why Because people vary, their computers vary, what they are trying to accomp…

@casey The assume-role failure is due to a bug in the Ubuntu kernel. See https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1813873
Hi, The most recent set of Ubuntu kernels applied a variety of tty patches including: https://github.com/torvalds/linux/commit/c96cf923a98d1b094df9f0cf97a83e118817e31b But have not applied the more recent https://github.com/torvalds/linux/commit/d3736d82e8169768218ee0ef68718875918091a0 patch. This second patch is required to prevent a rather serious regression where userspace applications reading from stdin can receive EAGAIN when they should not. I will try to link correspondence from th…

Solution is to upgrade your Ubuntu kernel to 4.4.0-143.169 or 4.15.0-46. See https://github.com/cloudposse/geodesic/issues/427#issuecomment-477805621
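
(On an affected machine, something along these lines should confirm and fix it, assuming the stock Ubuntu kernel packages:)
# check which kernel is running
uname -r
# pull in the patched kernel and reboot onto it
sudo apt-get update && sudo apt-get dist-upgrade
sudo reboot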


that was it!

how did you figure that out?! thank you though

@Jeremy G (Cloud Posse) knows this kind of stuff inside and out

clearly. That was impressive