#geodesic

Discussions related to https://github.com/cloudposse/geodesic Archive: https://archive.sweetops.com/geodesic/

2019-10-20

dalekurt

I’m having some trouble decrypting the IAM user password created. Using the decrypt command echo "MY_PGP_MESSAGE" | base64 --decode | keybase pgp decrypt I get the error message “ERROR decrypt error: unable to find a PGP decryption key for this message”

dalekurt

Am I missing something here?

Erik Osterman

Are you running that natively on your Mac?

dalekurt

Yes, on my mac.

Erik Osterman

Sounds like something wrong with keybase install

Erik Osterman

Or using wrong keybase username

dalekurt

The latter was my assumption.

2019-10-14

oscar

@Jeremy Grodberg - I’ve updated to the latest Geodesic (0.123.1), and tried both [email protected] and [email protected] and still end up with TF v0.12.7. Any thoughts?

FROM cloudposse/geodesic:0.123.1

RUN apk add --update --no-cache \
  [email protected]
oscar

No existing containers running either

oscar

N’aaww I got it. Don’t worry. User error to do with changing the docker image name and not adding it to the bin, meaning make run was still using the old image.

SweetOps #geodesic
04:00:02 PM

There are no events this week

Cloud Posse
04:03:54 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Oct 23, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

oscar

I did try the make *.md5 command though and that came back with “no target found”.

oscar

As of my upgrade to 0.123.1 from 0.122.4 I’m no longer able to run terraform init using the TF_MODULE_CACHE setting (as per the logic in Geodesic’s terraform script, below)

	# Starting with Terraform 0.12, you can no longer initialize a directory that contains any files (not even dot files)
	# To mitigate this, define the `TF_MODULE_CACHE` variable with an empty directory.
	# This directory will be used for `terraform init -from-module=...`, `terraform plan`, `terraform apply`, and `terraform destroy`
	if [ -n "${TF_MODULE_CACHE}" ]; then
		export TF_CLI_PLAN="${TF_MODULE_CACHE}"
		export TF_CLI_APPLY="${TF_MODULE_CACHE}"
		export TF_CLI_INIT="${TF_MODULE_CACHE}"
		export TF_CLI_DESTROY="${TF_MODULE_CACHE}"
	fi
}
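
Roughly, what that amounts to by hand (a sketch only; the -from-module URL here is illustrative) is pointing init/plan/apply at the empty cache directory instead of the current one:

# rough hand-run equivalent when TF_MODULE_CACHE=.module (an assumption based on the comment above)
terraform init -from-module=git::https://example.com/acme/root-module.git?ref=master .module
terraform plan .module
terraform apply .module
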
oscar

The only thing that changed is the package version of terraform.

oscar

I’ve also reviewed terraform releases and can’t see what could have done this between 0.12.4 and 0.12.10

oscar
TF_VAR_tf_module_cache=.module
TF_CLI_ARGS_init=-backend-config=region=eu-west-1 -backend-config=bucket=development-terraform-state -backend-config=dynamodb_table=development-terraform-state-lock -from-module=git::[email protected]:xcv.git?ref=master
TF_MODULE_CACHE=.module
oscar

Ok, appears to be because I lost my use terraform line along the way in my .envrc

oscar

Yarp. Re-added

use terraform
use tfenv

and all gucci on latest geodesic and just [email protected] instead of [email protected]_0.12

Erik Osterman

excellent!

oscar

I’m trying to ssh-add /path/to/private/key inside of BitBucket pipelines however I get:

> ssh-add /opt/atlassian/pipelines/agent/ssh/id_rsa
Could not open a connection to your authentication agent.
oscar

Am I missing something? Is this a BitBucket Pipelines thing, e.g. a non-exposed service?

Erik Osterman

looks like your ssh agent is not started

Erik Osterman

we only start it by default in interactive sessions
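
A minimal sketch for a non-interactive pipeline step (the key path is taken from the error above):

eval "$(ssh-agent -s)"                              # start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add /opt/atlassian/pipelines/agent/ssh/id_rsa   # ssh-add can now reach the agent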

oscar

ahhh

oscar

gotcha

Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman
		source <(gosu ${ATLANTIS_USER} ssh-agent -s)
		ssh-add - <<<${ATLANTIS_SSH_PRIVATE_KEY}
		# Sanitize environment
		unset ATLANTIS_SSH_PRIVATE_KEY

Erik Osterman

here’s how we do it for atlantis

oscar

just run

# Otherwise launch a new agent
if [ -z "${SSH_AUTH_SOCK}" ] \|\| ! [ -e "${SSH_AUTH_SOCK}" ]; then
	ssh-agent \| grep -v '^echo' >"${SSH_AGENT_CONFIG}"
	. "${SSH_AGENT_CONFIG}"

	# Add keys (if any) to the agent
	if [ -n "${SSH_KEY}" ] && [ -f "${SSH_KEY}" ]; then
		echo "Add your local private SSH key to the key chain. Hit ^C to skip."
		ssh-add "${SSH_KEY}"
	fi
oscar

ah nice

Erik Osterman

you could maybe do something similar

oscar

thanks

oscar

what’s gosu

Erik Osterman

very easy utility to drop permissions to a user

Erik Osterman

easier than using sudo or su

oscar

super thx

Erik Osterman

you may not need it

2019-10-11

Jeremy Grodberg

@oscar If you want to have Geodesic install the latest APKs during the build process, first run make geodesic_apkindex.md5 to invalidate the Docker cache layer that caches the old APK indexes.

oscar

Thanks!

2019-10-10

2019-10-09

joshmyers
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

cloudposse/build-harness

Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more - cloudposse/build-harness


joshmyers

to essentially clone the build-harness and include the targets?

Erik Osterman

With hundreds of repos we wanted a simple one-liner to import the build-harness that was future proofed so we didn’t have to copy-paste a lot of boilerplate and try to keep it up to date.

Erik Osterman

Also note that http://git.io no longer supports vanity short links

joshmyers

If this isn’t in a public Github repo, it is going to require a token to clone the https repo, as well as already having an ssh key…

Erik Osterman

No token needed to clone public repos over https

Erik Osterman

But yes if you wanted to make it private it would require a token

joshmyers

Yup, wondered how much work to do this all over ssh rather than https

Erik Osterman

Guess you could do this instead of curl

Erik Osterman

git archive --remote=[email protected]:myrepo master path/to/file1 | tar -xOf -

1
joshmyers

ah, that maybe the magic I’m after

2019-10-08

oscar

FYI I’ve gutted the way I use Geodesic. Instead of one module per account I now use the following structure

oscar
└── terraform
    ├── .envrc
    └── aws
        ├── .envrc
        ├── bastion
        │   ├── .envrc
        │   └── Makefile
        ├── development
        │   ├── .envrc
        │   └── account
        ├── production
        │   └── .envrc
        └── shared
            ├── .envrc
            ├── .module
            ├── .terraform
            ├── Makefile
            └── terraform.tfvars
oscar

If anyone is keen on reducing the maintenance of several modules and only holding one, I can share some practices

Erik Osterman

for clarification - how to use geodesic with a monorepo, right?

oscar

Yes.

Before my monorepo was:

geodesic:

  • dev
  • staging
  • prod
  • bastion
oscar

but now it is:

geodesic:

  • geodesic.acme
Erik Osterman

that looks good - but i would also add region

Erik Osterman

if you want to future proof yourself some more

Erik Osterman

also, I think I would do terraform/stage/cloud/region

Erik Osterman

e.g terraform/prod/cloudflare and terraform/prod/aws/us-east-1

Erik Osterman

seems natural to bundle everything for prod together.

Erik Osterman

are you using github?

oscar
04:45:57 PM

True I was thinking about this

if you want to future proof yourself some more

oscar
04:46:26 PM

No this is on our internal VCS

are you using github?

Erik Osterman

aha

oscar

x)

Erik Osterman

i have some thoughts on the mono repo + geodesic + github actions

Erik Osterman

another thing to consider….

Erik Osterman

so terraform as a community seems to be going more the way of workspaces

oscar

Yes I have also shifted to workspaces

oscar

mainly to conform with terraform cloud

Erik Osterman

right

oscar

but I obviously have my IP Whitelist issue

oscar

but not workspaces as in terraform workspace add

oscar

just rather

Erik Osterman

so then i’m thinking the stage folder makes less sense - as it doesn’t help you be as DRY

oscar
├── terraform-aws-app
│   ├── README.md
│   ├── eks.tf
│   ├── environments
│   │   └── development
│   ├── terraform.tf
│   ├── variables.tf
│   ├── versions.tf
Erik Osterman

yea, something like that

Erik Osterman
Feature: Conditionally load tfvars/tf file based on Workspace · Issue #15966 · hashicorp/terraform

Feature Request Terraform to conditionally load a .tfvars or .tf file, based on the current workspace. Use Case When working with infrastructure that has multiple environments (e.g. "staging"…

Erik Osterman

@sarkis shared this which piqued my interest

Erik Osterman

using yamldecode you can now define your own configs for environments

oscar

I’ve seen that

Erik Osterman

then load those based on the workspace name

oscar

but I’ve got a super duper thing going on

Erik Osterman

oscar

it’s hard to type out

oscar

but hopefully I can make office hours tomorrow

Erik Osterman

yes!

Erik Osterman

i want to see it.

oscar

The gist is:

  • One geodesic module for all envs.
    └── terraform
      ├── .envrc
      └── aws
          ├── .envrc
          ├── bastion
          │   ├── .envrc
          │   └── Makefile
          ├── development
          │   ├── .envrc
          │   └── account
          ├── production
          │   └── .envrc
          └── shared
              ├── .envrc
              ├── .module
              ├── .terraform
              ├── Makefile
              └── terraform.tfvars
    
  • Terraform infra definitions (root modules that use child modules) have ‘workspace’-like configs
    ├── terraform-aws-app
    │   ├── README.md
    │   ├── eks.tf
    │   ├── environments
    │   │   └── development
    │   ├── terraform.tf
    │   ├── variables.tf
    │   ├── versions.tf
    
  • Important files like Makefiles to operate Terraform & terraform.tfvars are loaded at ‘terraform init’ run time
oscar

so this means that you don’t have to keep rebuilding the geodesic shell

oscar

it just pulls them from git at use-time meaning you always get the latest

oscar

And any variables set in Geodesic are extremely static and very unlikely to change (e.g. Account ID, VPC ID)

oscar

arguably VPC ID isn’t even needed anymore as I get that from a data source; that’s just a legacy variable

oscar
cloudposse/packages

Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

2019-10-07

chrism

My laptop’s been fine; it’s weird af.

chrism

tbh if you just shove the fstab setting back in it works fine; It was only C that went to pot; my other shared drives worked fine.

SweetOps #geodesic
04:00:08 PM

There are no events this week

Cloud Posse
04:00:16 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Oct 16, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

Jeremy Grodberg

@Erik Osterman Where did this come from exactly? The time zone is half wrong. It should say PDT, GMT-7

2019-10-04

chrism

joys of windows update

dirname: missing operand
Try 'dirname --help' for more information.
/usr/local/bin/prod.xx.uk: line 115: cmd.exe: command not found

docker loads; just no aws-vault

chrism

i think the windows wsl interop died; god knows

chrism

no fstab entries; using the LOCAL_HOME="C:\Users\xx\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\home\xxx" env set before calling it works for now though lol ohhhhhhh windows

chrism

seems related to the wsl conf; its not changed but its also not working

Erik Osterman

@chrism dang!!

Erik Osterman

yea, what a PIA

chrism

its fine; its all windows fault

Erik Osterman

is it something you think you can fix? we can review the PR

chrism

I don’t think the code needs fixing tbh

chrism

The wsl process is supposed to map all the windows drives to fstab on boot; but its not working. The bootstrap script uses the fstab to find the location

chrism

“potentially” this all magically starts to work properly in WSL2 as the “wsl home” is a virtual drive not just a windows folder

chrism

I just need to fix my wsl to fix it

Erik Osterman

ok, report back on what you come up with

chrism

If im torching it I might as well try wsl 2 and see if it solves all the other issues (hopefully all the hacks made to get wsl 1 working wont break wsl 2 )

chrism

i backup my user dir anyway so its just the tedium of waiting for windows to complete steps purging it all out. Not quite as good as when I could use vmware (damn hyperv) and snapshot ubuntu, but it’ll do

chrism

wsl2 still requires you to be on the ‘goat sacrifice’ branch (tech preview); reinstalled wsl and the mount issue wasn’t fixed. My laptop’s up to the same build and is fine. Fixed it in the end with mount -t drvfs C: /c in the /etc/wsl.conf

1
chrism

left the full path in my bashrc file though after testing; cuts down on the faff

loren

dang, scared to reboot now… haven’t seen the mount issue yet on wsl myself…

$ cat /etc/wsl.conf
[automount]
enabled = true
options = "metadata,umask-22,fmask-11"
mountFsTab = true
$ cat /etc/fstab
LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
C:           /mnt/c         drvfs   rw,metadata,noatime,case=off,uid=1000,gid=1000,umask=22,fmask=11     0    0

2019-10-03

dalekurt

@Jeremy Grodberg Thank you, that was a pretty comprehensive response. I will have a look at this later today

2019-10-02

Erik Osterman

@here public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

dalekurt

@Jeremy Grodberg Erik suggested that I speak with you regarding my current issue with the reference-architecture, as you had a similar challenge. Here is a Gist of it - https://gist.github.com/dalekurt/7c451ba3914f066bf16b42392904aca1

Jeremy Grodberg

@dalekurt I need more context to really understand what is going on here, but my guess is that generally the reference architecture scripts still require Terraform v0.11 but the make root scripts are using the default terraform which is now v0.12. @Erik Osterman when Terraform is being bootstrapped (initial S3 buckets created and initialized) where are the scripts getting the terraform binary from?

dalekurt

@Jeremy Grodberg If I recall correctly, I have terraform v0.11 installed (at home); I did perform a git pull recently. The make root and make children executed successfully but make finalize failed with the error in the gist.

dalekurt

Those are executed in the docker container and not my local installation of terraform from my understanding.

Jeremy Grodberg

@dalekurt Geodesic has both versions of terraform (0.11 and 0.12) installed so we can use both during the current transition period, but defaults to 0.12. We use direnv to read .envrc files in directories, and in those .envrc files we (should) have either use terraform 0.11 or use terraform 0.12 to pick which version to use. That mechanism works fine, we use it in all our current projects, but we use reference-architecture only rarely and it is therefore often broken around issues like this. My guess is that the bootstrap code is missing a use terraform 0.11 somewhere.
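
A minimal sketch of such an .envrc, using the helpers mentioned above:

# .envrc for a project still pinned to Terraform 0.11
use terraform 0.11
# a migrated project would instead say:
# use terraform 0.12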

Jeremy Grodberg

@dalekurt Looking into it some more, I can see a bunch of things that could have gone wrong. I would check which S3 buckets were created for the Terraform state files (just check root and one child account, like audit), if they were populated with state during make root and make children, and what name the Terraform module thinks the S3 bucket should have when it is erroring out. My current guess is that the buckets are being created correctly but the IAM role is lacking permissions. We recently started locking down S3 buckets by default and ref-arch might have been relying on them being open until everything was finalized. If the bucket exists but you are getting NoSuchBucket errors, that is a permissions problem. If the bucket really does not exist, either you missed an earlier error where the creation failed or the name of the bucket that was created does not match the name of the bucket TF is attempting to read, which can happen when we update one thing but forget to update another, or are using incompatible versions.
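
A couple of hedged checks that can narrow this down (the bucket name is illustrative; substitute your namespace/stage):

aws s3api head-bucket --bucket acme-audit-terraform-state    # 404 = bucket really missing, 403 = permissions problem
aws s3 ls s3://acme-audit-terraform-state --recursive        # was any state written during make root / make children?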

dalekurt

@Jeremy Grodberg A follow up to this, I dumped the previously created sub accounts and cleaned up the root. With the latest commits I started over and the result was the same. The s3 bucket was created, in my case lunarops-io-security-terraform-state. Is there a fix or workaround for this

Jeremy Grodberg

@Erik Osterman As I look at ref-arch now, I see that while the README lists security and identity as accounts that can be enabled, there is no /config/security.tfvars or /config/identity.tfvars file. What do you want to do about that?

Jeremy Grodberg

@dalekurt Despite them being listed in the README, we do not have working support for the security or identity accounts. I suggest you stick to the list of accounts enabled by default: https://github.com/cloudposse/reference-architectures/blob/62f20f5bf365944f54e1bbd20a85993ffbae24f6/configs/root.tfvars#L78-L85

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Erik Osterman

Jeremy is correct, we didn’t finish implementing those 2 accounts

Erik Osterman

The idea is to offload what we have in root to those 2

dalekurt

Yes, that was explained to me. However it is not the only account that is affected. I just picked that from the list.

Jeremy Grodberg

@dalekurt Also, it looks like you set namespace = "lunarops-io". It should be a single word, and we try to keep it to 4-6 letters. Maybe just lunar or rops?

dalekurt

The other sub accounts are also affected by this error, as previously shared in the gist. Dev, staging, etc…

dalekurt

Ah ok. Could that be an issue?

Jeremy Grodberg

Is the audit bucket named lunarops-io-audit-terraform-state or lunarops-audit-terraform-state?

dalekurt

It is

dalekurt

the first

dalekurt

I typed better at the keyboard. The audit bucket is named lunarops-io-audit-terraform-state

dalekurt
dalekurt

Would it result in a breaking change if I changed the namespace?

Jeremy Grodberg

Yes, changing the namespace breaks everything.

Jeremy Grodberg

We use naming conventions all over the place.

dalekurt

Ok, does it require manual clean up, or would the tf state handle that?

dalekurt

I’m early in this deployment as it is, so i can handle a breaking change. I’m just hoping I don’t have to do the clean up.

dalekurt

And the namespace should be at most 6 characters?

Jeremy Grodberg

The expected name of the terraform state bucket will change when you change the namespace, so terraform will not be able to automatically clean up. I think the best you can do is terraform destroy in the root account to do some of the clean up.

Jeremy Grodberg

Do that before changing the namespace.

Jeremy Grodberg

The length of the namespace is not strictly limited, but it is part of every identifier and the total length of some identifiers is limited, so it’s best to keep it short.

dalekurt

okay, can the clean up (terraform destroy) be executed from within the container that did the deployment? Or within the project directory?

Jeremy Grodberg

Actually, now that I think of it, most of the terraform state of the bootstrap code is stored in the root directory of the checked-out repo. How many *.tfstate files do you have there?

Jeremy Grodberg

Unfortunately, the whole setup is a bit complicated. We run some stuff on your host to get things started, and that includes creating stuff with terraform with the state in local *.tfstate files, then building a Docker container, then we run more terraform inside the Docker container. You probably didn’t get that far, though, because the Docker container would try to use the S3 bucket for the Terraform state.

Jeremy Grodberg

Worst comes to worse, you can use https://github.com/rebuy-de/aws-nuke to delete everything.

rebuy-de/aws-nuke

Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke

Jeremy Grodberg

I had to use that myself when I created a set of accounts with the wrong namespace.

dalekurt

@Jeremy Grodberg I have 10 *.tfstate files.

dalekurt

I’ve heard of aws-nuke, in fact from @Erik Osterman on Office hours.

dalekurt

@Jeremy Grodberg if I choose the nuclear option, should I perform that on each account or just the root?

Jeremy Grodberg

I’m thinking your errors happened very early on in make children, is that right?

dalekurt

That is my assumption as well. It shows up as an error during make finalize, but if it depends on the S3 buckets created during make children then you are correct.

Jeremy Grodberg

Wait, you ran make children and it exited successfully?

dalekurt

Yeah

Jeremy Grodberg

That is going to be difficult to recover from. You cannot just delete the AWS accounts, and we do not have the mechanism (as far as I know) to tell the scripts that they have already been created. @Erik Osterman what do you suggest?

dalekurt

To be accurate, there was no success message, just “Goodbye”:

chamber_secret_access_key =
chamber_user_arn =
chamber_user_name =
chamber_user_unique_id =
secret_access_key = <sensitive>
user_arn =
user_name =
user_unique_id =
Skipping cloudtrail...
Goodbye
dalekurt

Whereas with the make root I got

* provider.aws: version = "~> 2.31"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Goodbye
Jeremy Grodberg

To be safe, I would make a backup copy of your entire repo (which will include a lot of generated artifacts by now, including the 10 *.tfstate files). Some of the artifacts you are going to need, particularly the list of account numbers, but could be erased if you are not careful.

dalekurt

Sounds like some DevOps surgery

dalekurt

I’ve made the backup

Jeremy Grodberg

Yes, sorry, you are in a bad state. I think you are going to need to have @Erik Osterman help you during office hours. You hit this failure where it is too late to start over but we suspect the problem is the dash in the namespace, and changing the namespace does break an awful lot of stuff.

dalekurt

No worries. I’m wondering if I should just do the nuke on the root and sub accounts. And re-run the make root and make children

Jeremy Grodberg

My hope is that you can get by with changing the config files and then resuming from make children, but I’m not sure.

Jeremy Grodberg

The problem is that if you run make root from scratch, it will create another set of AWS accounts and you really want to avoid that.

dalekurt

gotcha

dalekurt

Thank you for the time, I really appreciate it..

Jeremy Grodberg

If you are feeling daring, I can suggest you edit /configs/root.tfvars to be what you want it to be (maybe post it or a redacted version of it), and then run make root/init-resume.

What I’m counting on here is that the names of the accounts do not change, and init-resume is specifically for picking up after errors, and it avoids cleaning up old state.

What I’m not sure of is whether or not it will really create new AWS accounts or not. It should not, but this is not well tested code.
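
As a sketch, the daring path described above would be:

vi configs/root.tfvars    # change the namespace (and anything else) in place
make root/init-resume     # resume provisioning after the error, without cleaning up old state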

dalekurt

Thanks for the suggestion

dalekurt

I will tell how I feel tomorrow

Jeremy Grodberg

OK, I will leave you with this tip. Deleting AWS accounts is a long, painful process, because AWS does not want to be on the hook for deleting stuff that it cannot get back. One key thing is that the email address associated with the account will be _forever_ associated with that account. You will not be able to create a new account with that email address and you will not be able to change the email address later. So before you delete an account, change the email address to something you can consider a throwaway. Gmail and some other providers allow you to add “+anything” to your username to create a unique email that still routes to you, so I suggest you do that, using “+identity-del-201910” or something like that.

dalekurt

I wish I had known I could do that a couple days ago before nuking the accounts. Thanks for the tip

Erik Osterman
Document Account Deletion Gotcha · Issue #46 · cloudposse/reference-architectures

Deleting AWS accounts is a long, painful process, because AWS does not want to be on the hook for deleting stuff that it cannot get back. One key thing is that the email address associated with the…

dalekurt

I will deploy the changes to the namespace later today, as well as update the email addresses assigned to the existing sub-accounts. I may have to crosswalk this out on paper first so that I don’t miss a step of the clean up.

Jeremy Grodberg

If the accounts are still in the waiting period for deletion, you can probably still change the email address. And if you cannot, you can stop the deletion process, change the email address, and start it again, so long as they are not already fully deleted.

dalekurt

I will check the emails for the cancellation

dalekurt

I’ve submitted a case to re-open the suspended accounts, then I will update the email addresses for those, including the accounts I recently created. Then have them terminated.

dalekurt

@Jeremy Grodberg Once that is done, I will go ahead and re-deploy using the new namespace lunar; though I could get away with lunarops, my OCD is going to bug me

dalekurt

Phew I just went through all the accounts replying to the AWS case that was opened. So they should be reinstated soon enough.

dalekurt

@Jeremy Grodberg I took your advice and updated the email addresses for the previously suspended AWS accounts. I will do the same for the second set of accounts I had created, then request a termination.

Once that is done I will re-deploy with the new namespace

Jeremy Grodberg

@dalekurt before deleting the current set of accounts, I suggest you attempt some of the surgical options. If they work, that saves you deleting the accounts. If they mess up horribly, you can still resolve it by nuking and deleting the accounts.

dalekurt

sounds good.

dalekurt

@Jeremy Grodberg Here is the plan: update the namespace to lunarops and re-run make root.

Jeremy Grodberg

No, don’t rerun make root!

dalekurt

ok

Jeremy Grodberg

Update the namespace in root.tfvars

Jeremy Grodberg

Then run make root/init-resume

dalekurt

Ah ah.. okay let me give that a go.

I got a brief output of it.

terraform apply \
		-var-file=artifacts/aws.tfvars \
		-var-file=configs/root.tfvars \
		-auto-approve \
		-state=root.tfstate \
		accounts/root
...
module.account.module.render.local_file.data[1]: Creation complete after 0s (ID: 6c6649ca2a9e8400634a22bb826e5ae0f6d01543)
module.account.module.render.local_file.data[0]: Creation complete after 0s (ID: ca981bf46f94b8366ea80bcf0f00466c62199b2e)

Apply complete! Resources: 2 added, 0 changed, 2 destroyed.

Outputs:

docker_image = cloudposse/root.lunarops.io:latest
terraform output -state=root.tfstate docker_image > artifacts/root-docker-image
dalekurt

make children

...
===============================================================================================
ARN for lunarops-dev-admin not found in /artifacts/.aws/config

* Please report this error here:
          <https://github.com/cloudposse/reference-architectures/issues/new>



Goodbye
make: *** [dev/provision] Error 1
Jeremy Grodberg

Not entirely relevant, but did you remove security and identity from the accounts_enabled list? If not, please do.

Jeremy Grodberg

Verify that artifacts/accounts.tfvars has the correct account numbers

Jeremy Grodberg

You have a backup of everything, right? In which case, delete everything under repos/ and try make root/init-resume again

dalekurt

@Jeremy Grodberg Thanks for that. So, I did just that. I removed security and identity from the accounts_enabled in configs/root.tfvars.

dalekurt

As well, I removed them from the artifacts/accounts.tfvars and confirmed that account IDs are correct.

dalekurt

Then removed everything from the repos/ directory path and executed make root/init-resume, which created the repos/root.lunarops.io dir

dalekurt

@Jeremy Grodberg So I bit the bullet on this and re-ran make root; the accounts got created successfully. make children and make finalize ran successfully as well. This was with the lunarops namespace. So you were right, the lunarops.io namespace created issues.

dalekurt

I’m doing clean up now by deleting the previous accounts created.

dalekurt

@Erik Osterman I am successful! I will update you further

Erik Osterman

Omg! That’s awesome. You are one persistent guy. Shows it pays off.

dalekurt

Oh yeah. So I have a question though, given the identity and security accounts are not supported.

dalekurt

Would there be any harm in still deploying them.

Erik Osterman

Yes, we haven’t finished supporting those

Erik Osterman

I think it should work, but they won’t do much

dalekurt

Here is the deal: I do want to have my users log in on the identity account, and write my policies in terraform to create the IAM Roles there.

dalekurt

So that I can isolate Root users to the root account and employees to the identity account.

dalekurt

I had it humming before with a manual deployment using the AWS Profile Extender extension

dalekurt

If that is what it is called.

dalekurt

AWS Extend Switch Roles extension for Chrome and Firefox

Erik Osterman

That sounds interesting

2019-09-30

SweetOps #geodesic
04:00:04 PM

There are no events this week

Cloud Posse
04:07:03 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Oct 09, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

2019-09-27

Fred Light

hi guys! thanks for the SweetWorks! I don’t really understand how you can override the policy or roles attributes using the terraform-aws-ecr (0.6.1 tag) module. The documentation seems a bit outdated because the provided samples raise errors in the geodesic shell. Do you have an up-to-date usage sample?

Alex Siegman

I’m using this regularly. Here’s an example that I’m using:

module "example_django_app" {
  source       = "git:<i class="em em-<https"></i>//github.com/cloudposse/terraform-aws-ecr.git?ref=tags/0.6.1>"
  namespace    = "${var.namespace}"
  stage        = "${var.stage}"
  name         = "example-django-app"
  use_fullname = "false"

  max_image_count = "800"

  principals_full_access     = ["${local.principals_full_access}"]
  principals_readonly_access = ["${local.principals_readonly_access}"]

  tags = "${module.label.tags}"
}

Those locals are processed and provided in the module like this: https://github.com/cloudposse/terraform-aws-ecr/blob/0.11/master/main.tf#L1-L8 And I pass in what it wants like this, via tfvars

external_principals_full_access=[
  "arn<img src="/assets/images/custom_emojis/aws.png" class="em em-aws">sts:role/OrganizationAccountAccessRole"
]
external_principals_readonly_access=[
  "arn<img src="/assets/images/custom_emojis/aws.png" class="em em-aws">iam:role/nodes.us-east-1.staging.example"
]
cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

oscar

You would want to extend it (fork and PR) so that it can default to a policy doc that has CP’s default but also support taking from a users input https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L28

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Fred Light

thxs @oscar

oscar

(but as of yet doesn’t look like you can hence why you’d need to develop it yourself)

Fred Light

ok got it clear. i was wondering this because of the error but in the meantime i understood it was linked to the IAM role not being specified as data. So no need to PR at the end but good to know it works this way.

oscar

What’s the error?

Fred Light
module.ecr.aws_ecr_lifecycle_policy.default: Creation complete after 0s (ID: goalgo-dev)
module.ecr.aws_ecr_repository_policy.default: Still creating... (10s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (20s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (30s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (40s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (50s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (1m0s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (1m10s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (1m20s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (1m30s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (1m40s elapsed)
module.ecr.aws_ecr_repository_policy.default: Still creating... (1m50s elapsed)
Releasing state lock. This may take a few moments...

Error: Error applying plan:

1 error(s) occurred:

* module.ecr.aws_ecr_repository_policy.default: 1 error(s) occurred:

* aws_ecr_repository_policy.default: Error creating ECR Repository Policy: InvalidParameterException: Invalid parameter at 'PolicyText' failed to satisfy constraint: 'Invalid repository policy provided'
	status code: 400, request id: cd97a9c3-1065-474f-8169-51235bd0ebf7
oscar

Also send your usage of the module

oscar

Might show something

Fred Light
Fred Light

the error was when principals_full_access was defined actually

oscar

So you’ve taken that out and it is fine?

Fred Light

no i added it and seems fine

oscar

FYI

oscar

0.6.1 is TF 11

Fred Light

yes perfect

oscar

0.7.0 is TF 12

oscar

So you want TF 11? Okie that’s fine. Is your Geodesic using TF 11?

Fred Light

terraform version Terraform v0.11.7

  • provider.aws v2.30.0
  • provider.null v2.1.2
aknysh

@Fred Light here is the latest example on using aws-ecr module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/ecr/kops_ecr_app.tf#L11

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh
cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Erik Osterman

thanks @oscar !

2019-09-25

oscar

We have some people using AWS Vault and other AWS auth mechanics. We’ve disabled aws-vault in the Dockerfile due to the UI prompts it causes that aren’t needed for ‘other auth’ users (90% of users). For the 10% using AWS Vault, how can we have them use the same Geodesic/Dockerfile, but toggle AWS_VAULT_ENABLED ? I notice if you set this to true inside the container nothing happens, so it is clearly a build arg (could be wrong).

Any alts?

oscar

(Or is it possible to have it enabled but disable the “-> Run 'assume-role' to login to AWS with aws-vault” message that is printed on every command?)

joshmyers

@oscar Have you looked through the geodesic codebase for how to do this? It is all pretty readable. I’m sure PRs welcome if it doesn’t do what you require

joshmyers
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

oscar

I have not. I will take a look and see if I can put a PR through

oscar

Thx

oscar

I imagine that runs at build time only?

oscar

Or rather container start

1
joshmyers

So, off the top of my head you could use build args here, which geodesic and build harness already supports

joshmyers
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

joshmyers
cloudposse/build-harness

Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more - cloudposse/build-harness

joshmyers

So you could add an ARG to the Dockerfile, and change https://github.com/cloudposse/geodesic/blob/master/Makefile#L20 to make --no-print-directory docker:build ARGS="AWS_VAULT_ENABLED"

cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
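
A sketch of that build-arg route (the name mirrors the existing env var; the exact wiring of the ARG into the Dockerfile/Makefile is an assumption):

# in the Dockerfile:  ARG AWS_VAULT_ENABLED=true
#                     ENV AWS_VAULT_ENABLED=${AWS_VAULT_ENABLED}
# then users who don't want aws-vault build with:
docker build --build-arg AWS_VAULT_ENABLED=false -t acme/geodesic .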

oscar

I’m actually leaning towards…

oscar
	PROMPT_HOOKS+=("aws_vault_prompt")
	function aws_vault_prompt() {
		if [ -z "${AWS_VAULT}" ] && [ "${AWS_VAULT_PROMPT" == "true"]; then
			echo -e "-> Run '$(green assume-role)' to login to AWS with aws-vaultdsfdsfdssf"
		fi
	}
oscar

joshmyers

and add some makefile default that users can set an env var to turn off at build time

oscar

Would you accept the above?

joshmyers

It’s not me

oscar

Jeremy and Erik??

joshmyers

Not sure who atm but best open a PR and see

joshmyers

The above just turns off the prompt though, why not use the ENV var that is already there to turn the whole lot off, which is actually what you want

joshmyers

Not just the prompt.

oscar

I want AWS VAULT on, but prompt off

oscar

that way everyone can use the same Dockerfile no extra config

joshmyers

We have some people using AWS Vault and other AWS auth mechanics. We’ve disabled aws-vault in the Dockerfile due to the UI prompts it causes that aren’t needed for ‘other auth’ users (90% of users). For the 10% using AWS Vault, how can we have them use the same Geodesic/Dockerfile, but toggle AWS_VAULT_ENABLED ? I notice if you set this to true inside the container nothing happens, so it is clearly a build arg (could be wrong).

Any alts?

oscar

L142

	PROMPT_HOOKS+=("aws_vault_prompt")
	function aws_vault_prompt() {
		if [[ -z "${AWS_VAULT}" ]] && [[ "${AWS_VAULT_PROMPT" != "false"]]; then
			echo -e "-> Run '$(green assume-role)' to login to AWS with aws-vault"
		fi
	}
joshmyers

The above would also allow the same Dockerfile etc to be used

oscar

Yeh think I had some development time between then and went out for a lunch

oscar

That snippet wouldn’t affect anyone else

joshmyers

but the folks who do want to use aws-vault just set some var in their bash profile or whatever which will be used when they make build

oscar

but would allow others to disable prompt

joshmyers

What you want to do is already supported with some tweaks to the dockerfile/Makefile, which wouldn’t need to affect all of you.

joshmyers

Open a PR and I’m sure someone will get to it.

oscar
Feature: Toggle AWS Vault Helptext by osulli · Pull Request #525 · cloudposse/geodesic

What Adds ability to pass an ENV in individual Geodesic module's Dockerfiles to toggle the prompt. By default this will not affect anyone. Why I am not a fan of the help text that is prompted o…

joshmyers

@oscar How will you set AWS_VAULT_PROMPT ?

oscar

Geodesic Dockerfile

joshmyers

Right, so you still need to add something to the Dockerfile

oscar

Yep, I am ok with that, is that a problem generally?

oscar

Seems like an OK thing. It’s a per geodesic module config

joshmyers

No but I don’t see how it differs from the suggestion above which doesn’t involve a PR as it is already supported. Also still not sure why you want AWS_VAULT_ENABLED true, but no prompt….

oscar

Other people can benefit

oscar

Ok I can explain

oscar

Maybe context will help understand my rationale

joshmyers

Maybe best off to add that to the PR.

oscar

We have 9 users on Azure AD authing with AWS

oscar

we have to use this NPM package called aws-azure-login

oscar

we don’t need aws-vault

oscar

but we have 1 user using IAM users with keypair

oscar

they need aws-vault

oscar

We all use the same geodesic repo + module + files

oscar

I understand the geodesic local config

oscar

that is fine in principle

oscar

but I don’t want to have to ask people to do such things

joshmyers

So how can all users use the same dockerfile now if passing AWS_VAULT_PROMPT = false ?

oscar

Because we don’t use aws-vault, but now the prompt is gone

oscar

So it’s there

oscar

just not in our faces

joshmyers

sigh

joshmyers

nvm

oscar

Plus even when using aws-vault it’s annoying

oscar

You don’t need the prompt to use vault

oscar

or am I misunderstanding

joshmyers

You don’t need the prompt no.

oscar

I am missing why you are frustrated then

oscar

Your solution is good. Have a .bashrc variable and pass it in the build target, but I don’t want to have to ask people to modify their .bashrc

joshmyers

From experience working with devs who may not be used to AWS or geodesic, telling them to assume-role is useful, rather than not, and expecting people to know to run the thing before anything else.

joshmyers

and what you actually want to do is not enable AWS_VAULT for those users, not just hide the prompt for all…

joshmyers

anyway, nvm, I see your use case.

Erik Osterman

public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

1

2019-09-24

oscar

I’ve placed a file in /rootfs/usr/bin with the hopes it’ll copy to bin but no luck on make build. Any guidance available please?

joshmyers

did you make install your new container? Sure you are running the newly built version?

joshmyers

Putting into rootfs/usr/local/bin/foo worked for me with a make all

oscar

This is an existing geodesic module. I have a new script : rootfs/usr/bin/lint and run make build and then start my Geodesic module, but the script is not in the /usr/bin/ dir in the Geodesic module container

oscar

Doesn’t seem any different to my rootfs/etc/profile.d/aliases.sh script, yet doesn’t copy over

Erik Osterman

In your Dockerfile, you’ll need something like COPY rootfs/ /

oscar

COPY rootfs/ / Already present

oscar
oscar

so aliases.sh works just fine

Erik Osterman

I suspect it’s attaching to a running container

oscar

but lint does not

oscar

ah ok

oscar

Ah yes

oscar

I did have it open in another window

oscar

lets see

Erik Osterman

oscar

Spot on

oscar

Any way to give a /usr/bin/ file chmod +x easily?

oscar
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

oscar

For now I’ve popped

COPY rootfs/ /
RUN chmod -R +x /usr/bin/
oscar

in Dockerfile which will do, but let me know if there’s a better solution

Erik Osterman

Hrmmm modes should be carried over from what is in git

3
Erik Osterman

According to that link, it’s an executable

Erik Osterman

Would need to take a deeper look

joshmyers

lol.

dalekurt

@Erik Osterman That is correct, the mode of the file (in git) would be the same when copied to the Docker image.
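
So instead of the RUN chmod workaround, one option (a sketch; the path comes from the thread) is to record the executable bit in git itself:

chmod +x rootfs/usr/bin/lint
git update-index --chmod=+x rootfs/usr/bin/lint   # mark the blob executable even if the filesystem dropped the bit
git commit -m "Make lint executable"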

oscar
05:41:35 PM

Excellent thanks

Hrmmm modes should be carried over from what is in git

oscar

Missed that part.

oscar

2019-09-23

SweetOps #geodesic
04:00:04 PM

There are no events this week

Cloud Posse
04:01:12 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Oct 02, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

2019-09-20

2019-09-18

dalekurt

Hey guys, I’m using the reference architecture to setup my AWS landing zone and having an issue with make finalize with the following output - https://gist.github.com/dalekurt/7c451ba3914f066bf16b42392904aca1

dalekurt

I believe from the output some s3 buckets are missing from other accounts.

from the output it looks like all your child account state buckets are missing.. maybe you missed steps when provisioning the children.. child accounts should be provisioned and finalized before finalizing the root account..

dalekurt

Yes, I had a successful completion of the make children but that may very well be true. I will review the children stage of the deployment

dalekurt

I confirmed that the s3 buckets do exist in one of the accounts the error is complaining about.

dalekurt

I’m assuming that this should be the mydomain-com-dev-terraform-state

2019-09-16

SweetOps #geodesic
04:00:06 PM

There are no events this week

1
Cloud Posse
04:02:47 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Sep 25, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

2019-09-09

SweetOps #geodesic
04:00:09 PM

There are no events this week

Cloud Posse
04:03:44 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Sep 18, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

2019-09-04

2019-09-02

SweetOps #geodesic
04:00:04 PM

There are no events this week

Cloud Posse
04:01:48 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Sep 11, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

2019-09-01

2019-08-30

oscar

@Erik Osterman what is to stop a team just having one Geodesic module and then having nested .envrc in /conf to simulate global variables? The only disadvantage I spot is the swapping of AWS credentials, which could be easily solved.

Erik Osterman

It can work that way

Erik Osterman

Nothing stopping anyone.

Erik Osterman

basically it just comes down to our convention; how to manage multiple different tools pinned to different versions in different stages.

Erik Osterman

so the problem with the one container is pretty much all tools have to be the same version

Erik Osterman

or you need to use a version manager for every tool

Erik Osterman

most OSes don’t make it easy to have multiple versions of software installed at the same time

Erik Osterman

terraform is one tool that needs to be versioned

Erik Osterman

but helm is very strict about versions of client/server matching. not pinning it means you force upgrading everywhere.

Erik Osterman

recently upgraded alpine which upgraded git. then we saw the helm-git provider break because some flags it depends on were broken

Erik Osterman

recently we upgraded variant and that broke some older scripts

Erik Osterman

my point is just that having one shell for all environments makes it difficult to be experimental while not also breaking prod toolchain

Erik Osterman

also, 99.9% of companies don’t worry about this. i basically see most companies operate infrastructure in a monorepo and don’t try to strictly version their toolchain the way we do in a way that allows versions of software to be promoted. we’ve just been really strict about it

oscar

e.g. /conf/prod/.envrc (all the account specific vars)

BAU: /conf/prod/eu-west-1/terraform/my_project/* /conf/prod/eu-west-2/terraform/my_project/*

oscar

Likewise an interesting conversation came up internally yesterday:

In the same way we would promote code in git dev -> master… how would one promote new variables etc along Geodesic /conf? Or new version pinning in the .envrc from-module line? The only solution to make it easy is to have all the Geodesic modules in one repo so that it would be easily spotted if you missed updating one environment in the PR.

oscar

I feel I have not seen the Cloudposse way

Erik Osterman

it’s not the way we do it, true… but that doesn’t make it wrong

Erik Osterman

most companies do structure it this way

Erik Osterman

this is the way terraform recommends it

Erik Osterman

this is the way terragrunt recommends it

Erik Osterman

at cloudposse, we’ve taken the convention of “share nothing” down to the repo

Erik Osterman

which opens up awesome potential

Erik Osterman

it means the git history isn’t even shared between stages

Erik Osterman

it means webhooks aren’t shared between stages

Erik Osterman

it means github team permissions aren’t shared between stages

Erik Osterman

it means one PR can never modify more than one stage at a time

Erik Osterman

forcing the convention that you strictly test before rollout to production

Erik Osterman

the mono repo convention requires self-discipline, but it doesn’t enforce it.

tamsky

Nominating this thread (and Erik’s additional “we don’t share…” statements from later this same day) for pinned status.

joshmyers

but in geodesic you have all your env vars because you are wrapping geodesic in your env specific container, no?

oscar

Not sure I follow… but normally you would have acme.stage.com, but I wonder why not have acme.com

and then under /conf: dev/.envrc (with the equivalent variables of the Dockerfile) and prod/.envrc (with the equivalent variables of the Dockerfile)

oscar

It’s a bit weird but something like that came up in a meeting when I was introducing Geodesic to another team and I didn’t really have an answer other than “that isn’t the way”, but it could work without losing any features so…?

Erik Osterman

yep, you can totally do it this way

Erik Osterman

where you have one folder per stage

Erik Osterman

we do something very similar for regions

Erik Osterman

/conf/us-east-1/eks and /conf/us-west-2/eks

Erik Osterman

where the us-east-1 folder has an .envrc file with export REGION=us-east-1

Erik Osterman

this could just as well be modified to

Erik Osterman

/conf/prod/us-east-1/eks

Erik Osterman

where the /conf/prod/.envrc file has STAGE=prod
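
A minimal sketch of those two .envrc files (values are illustrative):

# /conf/prod/.envrc
export STAGE=prod

# /conf/prod/us-east-1/.envrc
export REGION=us-east-1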

Jeremy Grodberg

It’s a matter of “what are you trying to solve for?” It is, in fact, tedious to roll out changes to 4 environments under the reference architecture, in that you need to checkout, branch, commit, pull request, merge, and release 4 times. With everything in 1 repo you can do less with Git, but you still need to apply changes in 4 folders, but now your PRs could be for any environment, so following the evolution of just one environment becomes much harder. Having a 1-to-1 mapping of Geodesic shells to environments just makes it a lot easier to keep everything straight.

Tega McKinney

@Erik Osterman Given the example above, do your REGION and AWS_REGION environment variables not get overwritten when you cd into eks directory?

I have a similar structure but my environment variables are being overwritten.

I have a structure with /conf/<tenant>/eu-west-1/platform-dns where I set /conf/<tenant>/eu-west-1/.envrc to REGION=eu-west-1 however region is being overwritten back to eu-central-1 when I change directory into /conf/<tenant>/eu-west-1/platform-dns

Jeremy Grodberg

@Tega McKinney You need to put source_up as the first line of your .envrc file in order to pick up setting in parent directories. Otherwise direnv only uses the settings in the current directory to override whatever is in your environment, which means you get different results if you

cd /conf
cd eu-west-1
cd platform-dns

than if you just

cd /conf
cd eu-west-1/platform-dns
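
A sketch of the fix described above (the directory names come from Tega’s layout):

# /conf/<tenant>/eu-west-1/.envrc
export REGION=eu-west-1

# /conf/<tenant>/eu-west-1/platform-dns/.envrc
source_up    # must come first, so parent settings like REGION are loaded before anything else
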
oscar

More of a probing question than ‘how to do this’. Curious what others think, specifically Erik and Jeremy

Alex Siegman

The first thing I know would be “broken” is role assumption to AWS. It’d be really easy to assume role in dev, do some stuff, then not be able to apply a different stage because you’re still in the dev account.

I think the separation can be healthy even if it’s not DRY. It’s like the age-old “do we deploy centralized logging per stage or put all stages in one place” kind of argument. There’s pros and cons to both, but if you’re trying to prevent leakage between environments, why not prevent it at the tool level too

The separation between stages also allows you to test changes to your tool chain in safe environments, rather than break the production one.

1
Erik Osterman

I think we could even get around the role assumption piece

1
Erik Osterman

the role assumption works on ENVs as well

1
Erik Osterman

so if you assume the role in the proper folder, it assume the proper role

1
Erik Osterman

but if the user then goes to /conf/dev while being assumed as prod, GOOD LUCK!!!

1
Erik Osterman

again, most companies aren’t strict about enforcing this stuff. they get away with it. we just make sure it’s that much harder

1
oscar

So this can be done easily and safely. You have the aws profile set in the conf .envrc and it just assumes the role based on that, but my company uses AD to authenticate to AWS, so aws-vault doesn’t work and it is done externally. Bit vague; I can go into more technical detail if anyone is curious.

1
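
A sketch of that per-folder profile approach (the profile name is illustrative; whichever tool does the auth writes credentials for it):

# /conf/prod/.envrc
export AWS_PROFILE=acme-prod-admin
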
Erik Osterman

we’ve used it with SSO too (okta)

1
Erik Osterman


You have aws profile set in the conf .envrc and it just assumes the role based on that

1
Erik Osterman

it’s just hard to enforce what happens if they leave a folder

1
Erik Osterman

you need to ensure that in transition from /conf/prod to /conf/dev they still don’t have their prod role assumed

1
oscar

That’s true.

1
oscar

O Rly, how do you do SSO (Azure AD) and aws-vault???

1
Erik Osterman

you don’t use aws-vault

1
Erik Osterman

aws-vault is one way to do auth

1
Erik Osterman

aws-okta is another

1
Erik Osterman

aws-keycloak

1
Erik Osterman

basically, a different cli is used

1
Erik Osterman
Versent/saml2aws

CLI tool which enables you to login and retrieve AWS temporary credentials using a SAML IDP - Versent/saml2aws

1
Erik Osterman
segmentio/aws-okta

aws-vault like tool for Okta authentication. Contribute to segmentio/aws-okta development by creating an account on GitHub.

1
oscar

Ahhh yes. Thank you. I’ve been using aws-azure-login from npm

1
oscar

Which of these do you use and prefer btw?

1
Erik Osterman

I have only used aws-okta and it works really well

1
joshmyers

Yup, makes CI/CD a bit cleaner too

Alex Siegman

I also like that the toolchain follows the same “TRUST THE ENVIRONMENT” we yell to our ~devs~ selves for 12-factor style apps, but maybe not everybody follows 12-factor app style.

1
1
Erik Osterman

on the one hand, I’m jealous of companies that take the monorepo approach to infrastructure. you can easily bring up and tear down your entire environment. You can open PRs that fix problems across all environments in one fell swoop. You can easily setup cross account peering because you control all accounts in a single session. You can do all sorts of automation.

1
oscar

And this is what my team encouraged me to do. They were saying “why have N conf directories with all the little components when you can have 3: application, aws account, and shared services”

My answer was: That’s not the way. This is very monolithic. We want flexibility.

Then they said: We don’t want flexibility. We want to ensure all environments are the same. What if someone forgets to update the dev project etc

1
Erik Osterman

ya, guess they just want to optimize for different things.

1
Erik Osterman

these are often opinions strongly held by an organization. they are usually influenced by how the organization got to where it is today, and they are not easily changed because most of the people involved in getting it there were the ones who made them.

just like it’s difficult for us (cloudposse) to get our heads around managing infrastructure that way. It’s not an uncommon approach and many do it; we just see all the problems that go along with it too.

1
Erik Osterman

one thing i struggle with is that a company’s “prod” account almost never equals their “staging” account

1
Erik Osterman

they might run 4 clusters in 4 regions in prod

1
Erik Osterman

but they run one region in staging

1
Erik Osterman

they run multiple demo environments in staging, but none in prod

1
Erik Osterman

they run shared services in one account (like Jenkins, Keycloak, etc), yet don’t have another “staging” account for those shared services

1
Erik Osterman

so I instead argue we want to ensure the same code, pinned at some version, runs in a given account

1
Erik Osterman

we want some assurances that that was tested

1
Erik Osterman

but the way it runs in a particular stage isn’t the same

1
oscar

Mmm, it makes sense. This is deffo a topic for next Wednesday. I’ll try to prep some more specific examples and file structures so we can all cross-examine.

1
Erik Osterman

yea for sure

1
Erik Osterman

also, willing to explore this in a deeper session

1
Erik Osterman

i’d like to offer this as an alternative strategy for companies who prefer it

1
Erik Osterman

it’ll definitely appeal to the terragrunt crowd as well

1
Erik Osterman

but this also is freggin scary. i think it’s optimizing for the wrong use-case, the one where you start from scratch. i think it’s better to optimize for day-to-day operations and stability.

Erik Osterman

also, what i struggle with is where do you draw the line?

Erik Osterman

i’m sure most of the engineers would agree “share nothing” is the right approach

Erik Osterman

but despite that, tools, state buckets, repos, dns zones, accounts, webhooks, ci/cd, etc are all shared.

Erik Osterman

we’ve taken the (relatively speaking) extreme position to truly share nothing.

Erik Osterman

we don’t share the tools, they are in different containers.

Erik Osterman

we don’t share the state buckets, they are in different accounts.

Erik Osterman

we don’t share the repos, each account corresponds to a repo

Erik Osterman

we don’t share DNS zones. each account has its own service discovery domain

Erik Osterman

we don’t share webhooks, because each account has its own repo

Erik Osterman

we don’t share CI/CD (for infra) because each account has its own atlantis, and each atlantis receives its own webhooks

Erik Osterman

etc…

aknysh

yes, all of that looks very difficult to set up and maintain from the start, but in the end it’s much easier to manage security and access without jumping through many hoops (and still having holes)

aknysh

in that share-nothing architecture, the only point of access to an account is to add a user to the identity account and allow it to assume a role with permissions to access resources in the other accounts

aknysh

note that we still share all the code (from terraform modules, helmfiles, and the root-modules catalog) in all account repos, so no repetition there (we load them dynamically using tags with semantic versioning)

aknysh

we just have different settings (vars, ENVs, etc.) for each account

aknysh

as you can see for example here https://github.com/cloudposse/testing.cloudposse.co/tree/master/conf/ecs, there is no code in the account repos, just settings (not secrets)

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh

all code (logic) is shared and open (if it’s your company secret, make the repo private)
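
In practice that means an account repo’s project folder just pins the shared code by tag in its .envrc, roughly like this (the module path and tag here are illustrative):

export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/ecs?ref=tags/x.y.z"
export TF_MODULE_CACHE=.module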

oscar

Yes, that example is similar to how we’re doing it now. We have a few levels of abstraction going on. I can go over this next Wednesday for 10-15 minutes.

2019-08-28

oscar

Yes, similar to the above, I would create an R53 role in the root account, assume the role using a second provider block, and then for the R53 record resource use the root account provider.

joshmyers

Why not add route53 perms to the root admin group that already exists?

joshmyers

Currently the users in root admin group, in the root account can only do IAM stuff for their own users

joshmyers

Ah, nevermind me

oscar

What was the issue/thing for future reference?

joshmyers

Me being stupid and missing the admin role

Erik Osterman

#office-hours starting now! join us here https://zoom.us/s/508587304

2019-08-27

joshmyers

How are folks doing vanity domains using geodesic / ref arch ?

joshmyers

Currently the parent DNS name (foo.com) lives in the root account, and by default, users don’t have admin in root

joshmyers

So no one can actually delegate example.foo.com down to example.prod.foo.com

joshmyers

Easily solvable but wondering what folks are doing

Erik Osterman

by design, the branded domain is provisioned in the root account, since it will contain references to the service discovery domain in any account.

Erik Osterman

e.g. corp/shared account, prod account, data account, etc.

Erik Osterman

branded = vanity

Erik Osterman
terraform {
  required_version = "~> 0.12.0"

  backend "s3" {}
}

provider "aws" {
  version = "~> 2.17"

  assume_role {
    role_arn = var.aws_assume_role_arn
  }
}

variable "aws_assume_role_arn" {
  type = string
}

data "aws_route53_zone" "ourcompany_com" {
  # Note: The zone name is the domain name plus a final dot
  name = "ourcompany.com."
}

# allow_overwrite lets us take over managing entries that are already there
# use sparingly in live domains unless you know what's what
resource "aws_route53_record" "apex" {
  allow_overwrite = true
  zone_id         = data.aws_route53_zone.ourcompany_com.zone_id
  type            = "A"
  ttl             = 300
  records         = ["1.2.3.4"]
}

resource "aws_route53_record" "www" {
  allow_overwrite = true
  zone_id         = data.aws_route53_zone.ourcompany_com.zone_id
  type            = "CNAME"
  ttl             = 300
  records         = ["<http://app.prod.ourcompany.org>"]
joshmyers

Right, but using the ref arch, even admin users in the root account don’t have access to do anything to route53 resources

Alex Siegman

They should if they assume the right role.

joshmyers

Which role is that?

Jeremy Grodberg

namespace-root-admin. In the ref arch you pick a namespace for all your accounts, like cpco, and then the role in your prod environment is cpco-prod-admin. So you also get a cpco-root-admin role that you can assume to do whatever you need to do in the root account.

2019-08-26

SweetOps #geodesic
04:00:08 PM

There are no events this week

Cloud Posse
04:01:56 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Sep 04, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-08-23

oscar

Thanks =] All working now

2019-08-22

oscar

Any thoughts on why I receive the following error:

 ✗ . (none) backend ⨠ terraform init
Copying configuration from "git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"...

Error: Can't populate non-empty directory

The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.

Latest Geodesic.

oscar
 ✗ . (none) backend ⨠ ls -la
total 16
drwxr-xr-x 2 root root 4096 Aug 22 15:00 .
drwxr-xr-x 3 root root 4096 Aug 22 09:49 ..
-rw-r--r-- 1 root root  380 Aug 22 15:29 .envrc
-rw-r--r-- 1 root root  122 Aug 22 15:00 Makefile
 ⧉  oscar
 ✗ . (none) backend ⨠ cat .envrc
# Import the remote module
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/tfstate-backend?ref=tags/0.35.1"
export TF_CLI_PLAN_PARALLELISM=2

source <(tfenv)

use terraform 0.12
use tfenv
aknysh

add export TF_MODULE_CACHE=.module

joshmyers

@oscar It has been discussed here a while back, check through history, as @aknysh says ^^

joshmyers

hey @aknysh

1
oscar

Thanks both, I did actually have a scan through 0.12 and geodesic channel but couldn’t find it.

aknysh

and in Makefile.tasks, change it to:

aknysh
-include ${TF_MODULE_CACHE}/Makefile

deps:
	mkdir -p ${TF_MODULE_CACHE}
	terraform init

## Reset this project
reset:
	rm -rf ${TF_MODULE_CACHE}
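
With those pieces in place, the day-to-day flow in a project folder looks roughly like this (a sketch assuming direnv plus the .envrc and Makefile above):

direnv allow      # loads .envrc: TF_MODULE_CACHE, TF_CLI_INIT_FROM_MODULE, etc.
make deps         # creates .module/ and runs terraform init into it
terraform plan    # TF_CLI_PLAN / TF_CLI_APPLY point plan and apply at .module/
make reset        # wipes .module/ to start over
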
oscar

Ah you know I can see it now

oscar

19th July

joshmyers
Direnv with Terraform 0.12 by osterman · Pull Request #500 · cloudposse/geodesic

what Use an empty cache folder to initialize module why Terraform 0.12 no longer allows initialization of folders with even dot files =( Example of how to use direnv with terraform 0.12 usage e…

oscar

Thanks just had a read of that

oscar

Makes sense. I must have missed this this afternoon!

oscar

is source <(tfenv) required?

oscar

@aknysh I get this from the new Makefile

 ✗ . (none) backend ⨠ make
Makefile:4: *** missing separator.  Stop.
 ⧉  oscar
 ✗ . (none) backend ⨠ make reset
Makefile:4: *** missing separator.  Stop.
 ⧉  oscar
 ✗ . (none) backend ⨠ make deps
Makefile:4: *** missing separator.  Stop.
Erik Osterman

Makefile:4: *** missing separator. Stop. is usually caused by spaces (replace with tabs)

2019-08-19

SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse
04:04:22 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Aug 28, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-08-16

Erik Osterman

Hey Ryan - sorry - our docs are really out of date.

Erik Osterman

Here’s an example of how we use it: https://github.com/cloudposse/testing.cloudposse.co

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

2019-08-15

Hello all, I’m playing with geodesic and it doesn’t seem to be generating the dynamic cluster name correctly. I get output that is example.foo.bar.

$ export CLUSTER_NAME=test.myco.com

$ docker run -e CLUSTER_NAME \
    -e DOCKER_IMAGE=cloudposse/${CLUSTER_NAME} \
    -e DOCKER_TAG=dev \
    cloudposse/geodesic:latest -c new-project | tar -xv -C .
Building project for example.foo.bar...
./
example.foo.bar/Dockerfile
example.foo.bar/Makefile
example.foo.bar/conf/
example.foo.bar/conf/.gitignore

I’m looking through various sources; it’s pretty challenging to piece it all together (feels over-abstracted). So I’m wondering if I’m missing something.

Also new to Terraform, so the concepts for how it’s organized are a bit confusing atm

2019-08-12

SweetOps #geodesic
04:00:02 PM

There are no events this week

Cloud Posse
04:04:12 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Aug 21, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-08-11

2019-08-09

Erik Osterman

have not tried to accomplish cross account kiam

Erik Osterman

generally we try never to cross account boundaries

2019-08-08

Tega McKinney

Has anyone configured kiam beyond the helmfiles defaults for cross-account resource access from kops pods? I’m running into a situation where my cross-account policies are not allowing me access, and I’m thinking maybe kiam is not properly set up to assume roles

Tega McKinney

I think it’s starting to make sense now. I hadn’t realized kiam-server roles were not set up and no sts:AssumeRole on the masters was configured either.

FYI - this TGIK by Joe Beda helped in that understanding. https://www.youtube.com/watch?v=vgs3Af_ew3c

Tega McKinney

@Erik Osterman Any reason that with the kiam setup, it is essentially using the master nodes’ role vs creating a kiam-server-specific role and allowing it to establish the trust relationship with pod roles?

Erik Osterman

no, but what you describe is a better practice

Erik Osterman

basically, there should be a separate node pool strictly for kiam-server and assumed roles

Tega McKinney

I did not go as far as a separate node pool; however, I did create the kiam-server assume role and set the assume-role arg on kiam-server instead of it detecting the node role.

Jeremy Grodberg

Right now we run the kiam-server on the masters, and we treat the master role created by kops as the kiam-server role. As far as I can see, there is not much added security or convenience in creating a separate kiam-server role until you get to the point of creating separate instances for the kiam servers and giving them instance roles for the kiam-server. In our configuration, anything on the master nodes can directly assume any pod role that kiam can authorize. With a separate kiam-server role, this is still the case, it’s just that there would be an extra intermediate step of assuming the kiam-server role.

Jeremy Grodberg

To answer your question @Tega McKinney, the reason we treat the master role like it is the kiam server role is because it is a lot easier. While we will likely do it eventually, it is going to be a lot of work to separate out the kiam server role from the master role in all of our Terraform.

2019-08-06

thanks @aknysh @Erik Osterman RUN apk add [email protected] was the trick (instead of RUN apk add [email protected] [email protected]==0.11.14-r0)

2

2019-08-05

Tega McKinney

Okay; the reason I was asking is I attempted to add /helmfiles/ templates to the configs/prod.tf in reference-architecture. It failed to render templates, as it was looking for a few env vars that are not set up (KOPS_CLUSTER_NAME and STAGE). That got me thinking those templates are just reference files for after the initial architecture is set up. Is that the general idea?

Erik Osterman

So the Helmfiles will require a ton of settings

Erik Osterman

We do not have those documented. But if you share specific ones I can help.

Erik Osterman

Usually we set STAGE in the Dockerfile. If you only have one cluster in a stage, then you can also set KOPS_CLUSTER_NAME in the Dockerfile so it’s available globally.
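
For example (the values are placeholders), the account’s Dockerfile might carry:

ENV STAGE=prod
ENV KOPS_CLUSTER_NAME=us-west-2.prod.example.co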

Tega McKinney

Yeah, I’m planning to do that. It was more of a reference to that value not being set in the ref-architecture when attempting to add the templates in prod.tfvars. Helmfiles would be a post-bootstrap step, I’m assuming

SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse
04:02:33 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Aug 14, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

Tim Jones

Hi! I’ve been reviewing the terraform-root-modules repo in order to help bring a little order to the chaotic AWS I’ve inherited, but I’m having trouble understanding where real human users are managed. I see that aws/users seems to be set up for this, but it seems incomplete or something: the welcome.txt references a username var that doesn’t exist, and isn’t used as a template source as far as I can see anyway.

Tega McKinney

@Tim Jones Have you seen the reference-architectures repo? It leverages the root-modules. It creates your admin users using the aws/users root module

Tim Jones

@Tega McKinney yes but it’s been very much broken with the release of Terraform v0.12

I’m having trouble building from geodesic with terraform 0.11 since 0.117.0

is there a better way to use both 0.11 and 0.12?

Erik Osterman

Yes, but I am afk

Erik Osterman

@aknysh can you show Dave

Erik Osterman

Basically install [email protected]

Erik Osterman

And write “use terraform 0.11”

Erik Osterman

In your .envrc

where/when do I install 0.11? before geodesic 0.117.0, apk add [email protected] worked in the Dockerfile

Erik Osterman

Yep, add that to your Dockerfile

Erik Osterman

This way we can support multiple concurrent major/minor versions

Yes, but during docker build:

ERROR: unsatisfiable constraints:
  terraform-0.12.0-r0:
    breaks: world[terraform=0.11.14-r0]
The command '/bin/sh -c apk add [email protected] [email protected]==0.11.14-r0' returned a non-zero
Erik Osterman

Show me your Dockerfile

Erik Osterman

You need to remove the second package there

dockerfile

Erik Osterman

RUN apk add [email protected]==0.11.14-r0 is what you want

Erik Osterman

the long & short of it is that since we upgraded to the alpine:3.10 series, there’s been no new 0.11 release, so no package for 0.11 was built under terraform.

Erik Osterman

however, we explicitly build a terraform_0.11 package and a terraform_0.12 package

Erik Osterman

like a python2 and python3 package

Erik Osterman

and from alpine:3.10 , the terraform package will be 0.12.x

Erik Osterman

behind the scenes, we’re installing a symlink to /usr/local/terraform/x.y/bin/terraform that points to /usr/local/bin/terraform-x.y

Erik Osterman

that way when we write use terraform 0.11, we can set PATH=/usr/local/terraform/0.11/bin:$PATH and it will automatically find the correct version of terraform

Erik Osterman

without changing code

Erik Osterman

and not using alias
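
A rough sketch of what `use terraform 0.11` boils down to in direnv terms (not the exact Geodesic implementation; it assumes the /usr/local/terraform/<version>/bin layout described above):

# direnv translates `use terraform 0.11` in an .envrc into a call to use_terraform "0.11"
use_terraform() {
  local version=$1
  PATH_add "/usr/local/terraform/${version}/bin"   # put that version's terraform first on PATH
}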

aknysh

@ this is what we used in Dockerfile to install both TF 0.11 and 0.12 under geodesic 0.117

aknysh
# Install terraform 0.11 for backwards compatibility
RUN apk add [email protected]

# Install terraform 0.12
RUN apk add [email protected] [email protected]==0.12.3-r0
aknysh

then, for the modules that use TF 0.12, we use

use envrc
use terraform 0.12
use tfenv
aknysh

and for the modules that use TF 0.11

use envrc
use terraform 0.11
use tfenv

2019-08-04

Tega McKinney

Regarding helmfiles, are multi-stage dockerfiles the current approach? How does that relate to /templates/conf/helmfiles/... in reference-architectures?

Erik Osterman

great you ask

Erik Osterman

no - we’re using remote helmfiles pinned to releases now

Erik Osterman

It looks like this:

Erik Osterman
# Ordered list of releases.
# Terraform-module-like URLs for importing a remote directory and use a file in it as a nested-state file.
# The nested-state file is locally checked-out along with the remote directory containing it.
# Therefore all the local paths in the file are resolved relative to the file.

helmfiles:
  - path: git::https://github.com/cloudposse/[email protected]/reloader.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/cert-manager.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/prometheus-operator.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/cluster-autoscaler.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/kiam.yaml?ref=0.51.2
  - path: git::https://github.com/cloudposse/[email protected]/external-dns.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/teleport-ent-auth.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/teleport-ent-proxy.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/aws-alb-ingress-controller.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/kube-lego.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/nginx-ingress.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/heapster.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/dashboard.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/forecastle.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/keycloak-gatekeeper.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/fluentd-elasticsearch-aws.yaml?ref=0.47.0
  - path: git::https://github.com/cloudposse/[email protected]/kubecost.yaml?ref=0.50.0

2019-08-02

2019-08-01

Tega McKinney

Is it possible to override the terraform backend s3 key? I created the following top-level folder structure:

root
-- vpc
-- -- terraform.envrc

I would like the key to be root/vpc/terraform.tfstate, so I set TF_CLI_INIT_BACKEND_CONFIG_KEY=root/vpc/terraform.tfstate.

This does not override the default, because the key ends up being vpc/terraform.tfstate. Is it possible to override the key?

Tega McKinney

Got it. I needed to remove ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd" from my Dockerfile so it would use the full directory path after /conf.
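
For reference, here is roughly how the two prefix formats differ for a project living at /conf/root/vpc (illustrative, based on the behavior described above):

# TF_BUCKET_PREFIX_FORMAT="basename-pwd"  ->  state key: vpc/terraform.tfstate
# default (full path under /conf)         ->  state key: root/vpc/terraform.tfstate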

Erik Osterman

You got it!

Tega McKinney

Ran into another issue where I have created additional root-dns entries that have a hyphen in the name. The label is removing the hyphen from the stage; however, I assumed regex_replace_chars should not replace hyphens. Is there something else that could be stripping the hyphen?

Tega McKinney

Got it. Took some digging to realize I was on an older version of terraform-null-label from the reference-architecture. Updated to 0.11.1 and it works perfectly.

Erik Osterman

ah sweet!

Erik Osterman

yes, we had a bug I think related to that

2019-07-31

Lee Skillen

As a service that runs on .io (we have the .com as well, and we’re migrating to it), I’d heartily recommend avoiding .io if you can. It’s mostly OK, but occasionally has worldwide resolution issues.

1
Lee Skillen

It’d be fine unless you’re running something that’s mission-critical though. A blog? Fine. Something that your pipeline needs for deployments? Negative.

Tim Jones

Another question - are you guys still supporting the terraform iam user modules? I noticed terraform-aws-iam-system-user was updated to terraform v0.12 but not terraform-aws-iam-user (I submitted a PR https://github.com/cloudposse/terraform-aws-iam-user/pull/3)? I’m curious to know how it, terraform-aws-organization-access-group, terraform-aws-iam-assumed-roles, and terraform-aws-organization-access-role are all meant to work together…

Erik Osterman

yea, it’s just a slog. We have over 100 modules.

Erik Osterman

upgrading them a little bit at a time.

Erik Osterman

we’ve upgraded 40+ modules to 0.12 and terratest

Erik Osterman

(but this is all out of pocket for cloudposse)

2
joshmyers

Ya’ll rock! @aknysh is a machine

2
Tim Jones

I can only imagine the chore!! Hopefully my contrib helps, since I’m looking at adopting your modules. They follow really great practices from what I can see.

Tim Jones

My company is happy to let me dedicate paid time to Open Source projects that we use in turn.

Erik Osterman

thanks @Tim Jones!

Erik Osterman

most of the slowdown for us accepting 0.12 PRs is that we want to have testing in place so all future PRs can be merged faster.

Erik Osterman

we’re using the 0.12 upgrade as the excuse to add them.

Erik Osterman

@aknysh on our side is doing most of the work.

Tim Jones

Cool! I’d like to help out where I can, and can easily use my fork until PRs get merged…

Erik Osterman

thanks! sorry for the hold ups

Tim Jones

No need to apologise!! We’re already grateful for the awesome contributions you’ve all made!

Tim Jones

To my other question though: can you give a quick run-down of how terraform-aws-iam-user, terraform-aws-organization-access-group, terraform-aws-iam-assumed-roles, and terraform-aws-organization-access-role are made to be used together? For instance, I have the master account, along with develop, staging, & production accounts. I create the real-life users in the master account with terraform-aws-iam-user, but I’m not sure how I then give them access to the other accounts & control things like MFA requirements…

Erik Osterman

fwiw, have you had a chance to look through our terraform-root-modules repo? this is a collection of blueprints we use that leverage all the modules together.

Erik Osterman

terraform-aws-organization-access-group is used when you have a role in another account and you want to allow users in your root account (or identity account) to easily assume roles in the other account.

Erik Osterman

so you deploy terraform-aws-organization-access-group for each role that exists in another account that is assumable

Erik Osterman

terraform-aws-iam-assumed-roles was one of our earlier modules. We use it for provisioning a few default roles in the root account. it also enforces some best practices around MFA and password resets.

Erik Osterman

terraform-aws-iam-user is used when you don’t have SSO. it provisions an IAM account suitable for humans

Erik Osterman

(we also have terraform-aws-iam-system-user for bot accounts. e.g. external CI/CD systems)

Tim Jones

So really I should be using terraform-aws-iam-assumed-roles for human users, and then simply delegate their access with a terraform-aws-organization-access-group with the role ARNs of the org accounts?

Tim Jones

Ah, I think I see it now: create the individual users with terraform-aws-iam-user, manage the settings of all the users with terraform-aws-iam-assumed-roles, and delegate their account access with terraform-aws-organization-access-group

Erik Osterman

the terraform-aws-organization-access-group is somewhat redundant with terraform-aws-iam-assumed-roles

Erik Osterman

the terraform-aws-organization-access-group module was designed to be a more general way to assume any role in other accounts

Erik Osterman

while the terraform-aws-iam-assumed-roles is designed specifically to assume roles in the “root” (apex) account.

2019-07-29

SweetOps #geodesic
04:00:09 PM

There are no events this week

Cloud Posse
04:01:34 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Aug 07, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

Tim Jones

Hi all! I’m trying to use the https://github.com/cloudposse/reference-architectures repo to put some order into an ad-hoc AWS I inherited. I have two questions: 1) in configs/root.tfvars I have to change the domain for the entire infra. Currently the company domain example.com is pointed at a heroku app and I can’t make AWS the SOA for the entire domain. If I use something like infra.example.com, will that then make all the accounts register as <account>.infra.example.com?? 2) In running make root I get the following error. I can’t create the accounts/root dir by hand, as the first step in the Makefile removes it!

Erik Osterman

We suggest distinguishing between your branded/vanity domains and you infrastructure service discovery domains

Erik Osterman

e.g. a product might have dozens of branded domains, but you’ll only have one infra

Erik Osterman

so for our customers, we usually register a new domain on route53 or repurpose an existing one that was unused

Tim Jones

aahh, so you’d suggest the ‘root’ account ‘own’ companyexample.net or something completely separate from the company itself

Erik Osterman

yes, precisely

Erik Osterman

the .net, .io, .org, .dev, .sh, .co domains are what we typically use

2019-07-24

Erik Osterman

Public #office-hours with cloud posse starting now! https://zoom.us/s/508587304 join if you have any questions or want to listen in.

2019-07-23

joshmyers

@chrism Re Geodesic on windows, are you running native Windows? Bash? WSL? I literally have no idea with windows these days but a dev in the team is a Windows user

chrism

Windows 10 + Docker for windows. Ubuntu 18 / WSL (from the store)

joshmyers

Do you need to install docker in WSL? Colleague has docker for windows installed, but in a “bash” environment it cannot see docker

chrism

yeah

chrism

one mo

chrism
Setting Up Docker for Windows and WSL to Work Flawlessly

With a couple of tweaks the WSL (Windows Subsystem for Linux, also known as Bash for Windows) can be used with Docker for Windows.

1
joshmyers

Thanks @chrism

joshmyers

So yeah, it looks like assume-role interactive doesn’t work, and also /localhost/.aws/config doesn’t exist, are you setting AWS_DATA_PATH ?

joshmyers

@chrism Are you always running assume-role $ROLENAME in that case?

chrism
ENV ASSUME_ROLE_INTERACTIVE=false
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
# Needed because multi provider started failing with wsl and dynamic linking
ENV TF_PLUGIN_CACHE_DIR=/tmp

in my docker file

1
joshmyers

how does your aws config end up in the right place? It looks like WSL has a mapping of e.g. /mnt/c/Users/work/.aws

joshmyers

inside WSL

joshmyers

or is your aws config inside WSL? >_<

chrism

you configure vault + aws in wsl

chrism

otherwise you’re in the weird world of virtualised chmod

chrism

wsl’s my default commandline; i rarely need win-cmd.

1
joshmyers

OK ta, this dev is a windows windows user, grrrr

joshmyers

Are you able to write to /root/.aws/config inside WSL land?

joshmyers

Feels like I need to do some reading around WSL

chrism

thats just normal linux shit

chrism

the aws config should be in ~/.aws/config, the vault in ~/.aws-vault/, etc.

chrism

basically don’t sudo su

chrism

the main issue I had with the folders was sudo altering the owner of files

chrism

nothing as heart warming as fighting your way through config just to get a worker node to reply banana on request

joshmyers

WSL seems broken…. there isn’t a /root/.aws dir inside WSL, but we can’t seem to create one

joshmyers

We’re getting some weird input/output error trying to create /root/.aws

chrism

it should be /home/username/.aws; I don’t get why it’s in root

joshmyers

hmm

chrism
joshmyers

For some reason he is root as soon as he enters WSL…. there is nothing in /home/

joshmyers

whoami = root

chrism

thats… screwy

joshmyers

Said he just installed WSL from the store and doesn’t know anything else

joshmyers

¯_(ツ)_/¯

chrism

burn it start again

joshmyers

Yeah, he’s already not happy

chrism

Never had that one and I’ve been using wsl since insider release came out

chrism

Legacy WSL you could force to open as root with lxrun /setdefaultuser root; newer ones use `ubuntu config --default-user`, but I prefer to keep it as a real user; seems crazy to do everything as root

joshmyers

Indeed

joshmyers

So how do you keep it as a real username? By default that should be the case?

chrism

just the default tbh

joshmyers

OK, thanks for all the info dude! Out my depth in windows world

chrism

It’s a fun system of virtualised virtualisation

2019-07-22

SweetOps #geodesic
04:00:05 PM

There are no events this week

Cloud Posse
04:01:09 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Jul 31, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-07-19

joshmyers

hmmm, how are folks doing Terraform 0.12 and geodesic?

joshmyers
02:51:21 PM
Erik Osterman
Direnv with Terraform 0.12 by osterman · Pull Request #500 · cloudposse/geodesic

what Use an empty cache folder to initialize module why Terraform 0.12 no longer allows initialization of folders with even dot files =( Example of how to use direnv with terraform 0.12 usage e…

Erik Osterman

but this is what @Josh Larsen and @ are doing https://sweetops.slack.com/archives/CB84E9V54/p1563307247187300

much more elegant than my workaround

deps:
	mkdir -p modules
	terraform init modules
	cp modules/* ./
	rm -rf modules

2019-07-17

joshmyers

@chrism Another quick thing, any issues running the Geodesic wrapper script on windows? i.e. make install so you get a root.foo.co in your /usr/local/bin to call?

chrism

works fine

1

2019-07-16

Jeremy Grodberg

Actually, @chrism what @Erik Osterman was referring to was a command I added to Geodesic called role-server. I have only tried it when using credentials that have/require virtual MFA, and with that setup, here is how it works:

You start your first Geodesic shell and run role-server. It uses aws-vault to assume an IAM role, creating temporary credentials with a life of AWS_VAULT_ASSUME_ROLE_TTL (default 1h meaning 1 hour) and a role session with life of AWS_VAULT_SESSION_TTL (default 12h). aws-vault then runs as a credential server, using the same interface that the AWS credential server uses on EC2 instances. It runs in debugging mode so you will see all the activity printed out on the screen as it runs. For this reason, I usually hide the window.

Then you start a second Geodesic shell using the same Docker image. This second shell, as it starts up, finds the credential server is running and immediately assumes the role it is serving. This shell will work uninterrupted until the role session runs out (12 hours). When the temporary credentials expire, aws-vault renews them as long as the role session is valid. When the session runs out, the shell will stop having credentials, but back in the window that I hid, running the role server, aws-vault will ask for the MFA token. Type it in and you get a new session for another 12 hours.

And of course, you are not just limited to 1 useful Geodesic shell. You can run several at the same time and they will all share the same credential server, as long as they are all running the same Docker image. The Geodesic wrapper script that gets installed to launch the Docker image does not run multiple copies of the image, but rather starts multiple bash shells running inside the same image. This is how they are able to share the credential server (and the kubecfg).

1
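
A rough sketch of that workflow in shell terms (the TTLs are the defaults mentioned above; the wrapper name is just whatever `make install` created for your image):

# terminal 1: start the credential server and leave it running (it prints aws-vault debug output)
AWS_VAULT_ASSUME_ROLE_TTL=1h AWS_VAULT_SESSION_TTL=12h role-server

# terminal 2..n: start more shells from the same Docker image; on startup they detect the
# server and assume the served role automatically until the 12h session expires
root.example.co
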
Jeremy Grodberg

@Erik Osterman @joshmyers I believe the reason the organizations module is not enabled in root.tfvars in the ref arch is that it is not needed because it is not practical. From a cold start, you have to use ClickOps to:

  • create the root account,
  • add billing info,
  • buy a root domain,
  • connect the root domain to the root account and Route 53,
  • create the organization,
  • verify the root account email address with AWS,
  • request an increase in the limit of the number of accounts allowed under your organization,
  • wait for the organization to finish initializing (can take an hour or more), and
  • wait for the requested limit increase to be approved and implemented.

Because of the long waits and the interaction with AWS support and email, none of the above is done with Terraform. After it is all done, Terraform does not need to manage the organization at all. New accounts are automatically added to the organization when they are created because they are created as “child” accounts of the “root” account.

joshmyers
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Jeremy Grodberg

I have never seen it used. It was written over a year ago, when the whole terraform-root-modules project was new. My guess is that it was an attempt at automating the cold start that was abandoned when it was realized that the above-mentioned tasks really needed to be done via ClickOps.

joshmyers

OK, should prob remove it then, I’ll open a PR

joshmyers

Heads up that I applied that module to root, seems fine…

Erik Osterman

@Josh Larsen any opinions on this? https://github.com/cloudposse/geodesic/pull/500

Direnv with Terraform 0.12 by osterman · Pull Request #500 · cloudposse/geodesic

what Use an empty cache folder to initialize module why Terraform 0.12 no longer allows initialization of folders with even dot files =( Example of how to use direnv with terraform 0.12

1
Josh Larsen

yes this looks great! might be worth mentioning that people may need to change the make deps target on modules that use overrides

Direnv with Terraform 0.12 by osterman · Pull Request #500 · cloudposse/geodesic

what Use an empty cache folder to initialize module why Terraform 0.12 no longer allows initialization of folders with even dot files =( Example of how to use direnv with terraform 0.12

1
Erik Osterman

yes, I need to do that

1
Erik Osterman

@oscarsullivan_old

Erik Osterman

basically, this is to get around the fact that 0.12 does not allow the initialization of modules to a non-empty directory (previously terraform permitted dot files)

oscarsullivan_old

What’s an example of initialising an empty directory?

much more elegant than my workaround

deps:
	mkdir -p modules
	terraform init modules
	cp modules/* ./
	rm -rf modules
Erik Osterman

haha, more or less the same

Erik Osterman

just using ENVs

Erik Osterman

though in this case, we keep using .module/ by passing it always as an argument to terraform (e.g. TF_CLI_ARGS_plan=.module, TF_CLI_ARGS_apply=.module)

Erik Osterman

or TF_CLI_PLAN=.module and TF_CLI_APPLY=.module when using cloudposse/tfenv
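
For reference, TF_CLI_ARGS_<command> is stock Terraform behavior, so the first variant needs no wrapper at all (a minimal illustration):

export TF_CLI_ARGS_plan=.module
export TF_CLI_ARGS_apply=.module
terraform plan    # effectively runs: terraform plan .module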

loren

Does every terraform 0.12 command support a directory as an argument? I know in 0.11 it was not consistent

Erik Osterman

Not sure if every command supports it

Erik Osterman

The ones we care about do: init, plan, apply, destroy

loren

I think it was import or state that did not

2019-07-15

chrism

Is there a quick way when in geodesic to refresh your vault session

Erik Osterman

Do you use vault server?

SweetOps #geodesic
04:00:06 PM

There are no events this week

Cloud Posse
04:00:35 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Jul 24, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

chrism

sorry i mean aws-vault

Erik Osterman

right, but aws-vault server mode

Erik Osterman

we have support for that in geodesic

Erik Osterman

and it will automatically refresh your tokens

chrism

oh; have to prod that. I was digging through the shell

chrism

ta

Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman

@Jeremy Grodberg spent a lot of time to get this right.

Erik Osterman

there were all kinds of edge cases (like what to do when running on an EC2 instance like an AWS Workspace)

Erik Osterman

…what to do when the clock is behind

Erik Osterman

…what to do if already in a shell with a role

chrism

i get the clock issue all the time since updating from ~40-odd to current. There’s a whole pile of related issues with docker4win/hyperv and that though. I generally just run hwclock -s

Erik Osterman

@Jeremy Grodberg do you know why the organizations module is not enabled in root.tfvars?

Erik Osterman

In the ref arch…

Erik Osterman

@joshmyers pointed this out and I don’t understand either

2019-07-08

SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse
04:03:24 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Jul 17, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-07-04

joshmyers

Does GEODESIC_PORT serve a purpose?

Erik Osterman

Yes, it’s used for things like kubectl proxy and teleport logins. Basically where you want to access something in geodesic via http

joshmyers

Figured so, can see some bits for teleport

joshmyers

Don’t remember needing to do anything with kubectl proxy though…

Erik Osterman

Nice for accessing kube dashboard when you don’t have all the other infra in place like keycloak

2019-07-03

Erik Osterman

Public Office Hours starting now! Join me here: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8

Have any questions? This is your chance to ask us anything.

Erik Osterman

anybody think this would be cool? https://github.com/genuinetools/img

genuinetools/img

Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder. - genuinetools/img

Erik Osterman

compile geodesic to an exe

Erik Osterman

Erik Osterman

so it wouldn’t require even docker

loren

Slick

2019-07-01

SweetOps #geodesic
04:00:01 PM

There are no events this week

Cloud Posse
04:02:21 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Jul 10, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-06-27

Erik Osterman
cloudposse/packages

Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

Erik Osterman
remind101/assume-role

Easily assume AWS roles in your terminal. Contribute to remind101/assume-role development by creating an account on GitHub.

Erik Osterman

probably about the same.

2019-06-26

oscarsullivan_old

Hi guys, how do I upgrade Ansible to 2.8.1 on Geodesic 0.112.0

oscarsullivan_old

have tried apk add ansible, apk add --upgrade ansible, apk add ansible-2.8.1, and apk add ansible-2.8.1-r0 (https://pkgs.alpinelinux.org/package/edge/main/x86/ansible)

oscarsullivan_old

Also pip isn’t in the image by default so I figure it is not pip that installs ansible

oscarsullivan_old
build(deps): bump ansible from 2.7.10 to 2.8.1 by dependabot-preview · Pull Request #493 · cloudposse/geodesic

Bumps ansible from 2.7.10 to 2.8.1. Commits See full diff in compare view Dependabot will resolve any conflicts with this PR as long as you don&#39;t alter it yourself. You can also trigger a…

oscarsullivan_old

so it is pip

oscarsullivan_old
❌ . (none) ~ ➤ pip install
bash: pip: command not found
oscarsullivan_old

but why isn’t it in my shell, especially when it isn’t removed in https://github.com/cloudposse/geodesic/blob/master/Dockerfile

oscarsullivan_old

ohhh it’s a different stage of the build FROM alpine:3.9.3 as python dang it

oscarsullivan_old

Solution for your Dockerfile

apk add py-pip
pip install --upgrade ansible==2.8.1

Erik, Jeremy, thanks for the help yesterday getting the reference architecture up and running. I was able to finish things up this morning and have it all built. Really impressive stuff. Going through it all this morning trying to get a firm grasp on how it all works.

2019-06-25

Hey everyone, following the quick start docs at https://docs.cloudposse.com/geodesic/module/quickstart/ and i’m running into:

docker run -e CLUSTER_NAME \
  -e DOCKER_IMAGE=cloudposse/${CLUSTER_NAME} \
  -e DOCKER_TAG=dev \
  cloudposse/geodesic:latest -c new-project | tar -xv -C .
docker: invalid reference format.
See 'docker run --help'.
Erik Osterman

@ the quick start docs are out of date and not functional. Use the http://github.com/cloudposse/reference-architectures instead

ah okay. thanks Erik!

Erik Osterman

Also, archives are here: https://archive.sweetops.com/geodesic/

geodesic

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Erik Osterman

If you get stuck, maybe some nuggets in there.

Erik Osterman

@dalekurt has been recently working through these

So, I pulled the repo, edited configs/root.tfvars, and exported the aws account’s root master keys to ENV vars, I’m getting:

terraform init -from-module=modules/root accounts/root
Copying configuration from "modules/root"...
Error: Target directory does not exist
Cannot initialize non-existent directory accounts/root.
make: *** [root/init] Error 1
Erik Osterman

Not sure. @Jeremy Grodberg provisioned these this week. Any ideas?

oh, i was running tf 0.12

Erik Osterman

aha, yes, not updated for 0.12

yeah, that’s my bad haha

Jeremy Grodberg

Yes, you need to have terraform version 0.11 installed on your workstation.

Jeremy Grodberg

I will be pushing some updates to the Reference Architecture sometime in the next few days.

Jeremy Grodberg

The main thing is updating the baseline version of Geodesic, and fixing the race condition in making the Docker images. Currently, Terraform often tries to build the Docker images before all the files are in place.

The other big things are to update Kubernetes to 1.12.9, switch from kube-dns to coredns, and to pin the versions of terraform and helm installed in the Docker images.

@Jeremy Grodberg I’m guessing this is the race condition you mentioned?

Error: Error applying plan:

1 error occurred:
	* module.account.module.docker_build.null_resource.docker_build: Error running command 'docker build -t root.blvd.co -f Dockerfile .': exit status 1. Output:
#2 [internal] load .dockerignore
#2       digest: sha256:c8c62ec01c2e58b7ca35e6a8231270186f80ab4c83633dace3b2a61f6e9dc939
#2         name: "[internal] load .dockerignore"
#2      started: 2019-06-25 19:16:05.8271816 +0000 UTC
#2    completed: 2019-06-25 19:16:05.8272689 +0000 UTC
#2     duration: 87.3µs
#2      started: 2019-06-25 19:16:05.8274642 +0000 UTC
#2    completed: 2019-06-25 19:16:05.8712445 +0000 UTC
#2     duration: 43.7803ms
#2 transferring context: 2B 0.0s done


#1 [internal] load build definition from Dockerfile
#1       digest: sha256:045540caaa44e0ec4d861b43e9328ac90843e9d94c485db1703c3e559ed7dc07
#1         name: "[internal] load build definition from Dockerfile"
#1      started: 2019-06-25 19:16:05.8264853 +0000 UTC
#1    completed: 2019-06-25 19:16:05.8265771 +0000 UTC
#1     duration: 91.8µs
#1      started: 2019-06-25 19:16:05.8272773 +0000 UTC
#1    completed: 2019-06-25 19:16:05.8602995 +0000 UTC
#1     duration: 33.0222ms
#1 transferring dockerfile: 2B 0.0s done

failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount930443153/Dockerfile: no such file or directory
Jeremy Grodberg

@ Yes, that is the race condition. You can just run make root again. When it comes time to make the children, the make children command is safe to run multiple times, but to save time, I recommend you make each child one at a time. Or you can wait a couple of days for the next release of the reference architecture.

Okay. I’ve still got some conceptual work to do on my end so I’ll probably just hold.

Jeremy Grodberg

Since you are waiting on it, I will make an effort to get the release out today.

oh, cool. I mean, no rush really, I don’t want to divert your focus for your day haha.

Jeremy Grodberg

No worries, it’s one of the things I’m currently working on for a new client.

awesome. I appreciate the help.

@Jeremy Grodberg Question for you: when spinning these accounts up, I want to rename the dev account to sandbox. Is that as simple as s/dev/sandbox/ in accounts_enabled[] in root.tfvars, renaming dev.tfvars, and then stage=sandbox in that file?

Jeremy Grodberg

Honestly I’m not sure. I think it would be best to copy rather than rename /configs/dev.tfvars -> /configs/sandbox.tfvars and then customize what you want installed in the sandbox. Keep in mind that by default the dev environment does NOT include a Kubernetes cluster.

Jeremy Grodberg

Yes, you also need to change stage = "dev" to stage = "sandbox" inside sandbox.tfvars and replace dev with sandbox in accounts_enabled[] in root.tfvars

Jeremy Grodberg

I expect that is all you need to do, but I’m not positive.

Jeremy Grodberg

Also keep in mind that the “stage” name shows up as a part of nearly every label there is, so we try to keep it short in order to avoid running into issues with names getting too long. So I suggest you pick a 3 or 4 letter name instead of a 7 letter name like “sandbox”.
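
So the rename boils down to something like this (a sketch of the steps described above; pick a short stage name):

cp configs/dev.tfvars configs/sandbox.tfvars
# in configs/sandbox.tfvars: change stage = "dev" to stage = "sandbox" (or something shorter, e.g. "sbx")
# in configs/root.tfvars: replace "dev" with "sandbox" in accounts_enabled = [...]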

Jeremy Grodberg

@ We have pushed out a new reference-architecture release for you. Skimped a tiny bit on the testing, so please let me know if you find any issues. https://github.com/cloudposse/reference-architectures/releases/tag/0.14.0

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

oh awesome. pulling now

Ran into some terraform errors

Jeremy Grodberg

I was afraid of that. Please paste

Jeremy Grodberg

in this thread

Okay, sending you a log of the run. It’s a bit verbose so I’ll send as a file.

Sent you the full log, here’s the actual errors, for this thread:

Jeremy Grodberg

I got the log, that’s not actually a Terraform error. Your AWS access key is lacking permissions.

oh, crap you’re right

oohh, i’m in the new account waiting period on this new root account I spun up.

okay, fixed that.

Jeremy Grodberg

BTW, how did you get out of the waiting period so quickly?

Jeremy Grodberg

Not Terraform. You need to set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to static (not session) keys with a lot of privileges. Typically they are the root keys of the root account.

yeah, this new aws root account was in the ‘waiting period’, I fixed that now

should have checked that after I spun the account up heh

so, will failing where it did cause any problems, or will make root pick up where it left off?

Jeremy Grodberg

It is safe to run make root again, but I added a make root/init-resume just for this sort of thing.

okay, I’ll give make root/init-resume a go then

Jeremy Grodberg

After make root/init-resume (but not after make root) you need to run make root/provision

okay, init-resume finished super fast

Jeremy Grodberg

Yes, it’s mainly to get you to a viable docker image. I now realize you were already past that. So make root/provision

running root/provision

Jeremy Grodberg

When that finishes, that will be the equivalent of having run make root successfully and you can proceed from there.

Cloud Posse
01:26:30 AM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about https://github.com/cloudposse/geodesic, get live demos and learn from others using it. Next one is Jun 26, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8\|Register for Webinar
#office-hours (our channel)

2019-06-24

SweetOps #geodesic
04:00:08 PM

There are no events this week

Cloud Posse
04:03:37 PM

2019-06-21

2019-06-20

2019-06-19

chrism

was there something specific to fix this assume-role (win10/wsl/ubuntu18lts) ?

chrism

chrism

all good; found the file from the last time I updated geodesic ENV ASSUME_ROLE_INTERACTIVE=false ftw

chrism

How are you supposed to use the legacy s3 storage? https://github.com/cloudposse/geodesic/commit/4170a58766fa925800c4293886b32da8d254bff9

I tried adding the following to docker

ENV TF_BUCKET_PREFIX=
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"

getting the feeling I’ll have to clear TF_BUCKET_PREFIX in the .envrc of every folder, as it still populates it with path depth I don’t want

[direnv] use new TF bucket prefix method (#402) · cloudposse/[email protected]
  • [direnv] use new TF bucket prefix method TF_BUCKET_PREFIX_FORMAT selects the format to use for setting the TF remote state bucket prefix/key: the original $(basename $(pwd)) leaf-only form…
Erik Osterman

ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"

chrism

yup; works. I was trying to cheat and use the .envrc file in a folder higher up (i.e. /conf/frankfurt/nginx/; I put the file in frankfurt) to set it to use TF11 while I migrate some of the easier bits in my control first. Because it changes the env var as use terraform is initialised, it was screwing with what I expected

Erik Osterman

hrmmm

Erik Osterman

something like that should work, but maybe there’s a bug somewhere in what we have

chrism

it’s just because the old one was root-based so it gave no trucks about /{this folder}/nginx. I got around the region issue using workspaces

then it was fixed recently

chrism

Is there a way to run multiple geodesics at the same time? It always seems to boot into whichever is running first

Erik Osterman

So you would like multiple sessions of the same image?

Erik Osterman

I think we could add an option for that

Erik Osterman

Right now it gives the Docker container the name of the image so it doesn’t work with concurrent sessions

Erik Osterman

It always execs into the running image if one is found

chrism

timezone diff

So I have root.xxx and prod.xxx. If I make all on root, it boots into that container; if I then do the same on prod, I end up in root’s container

chrism

ideally should be able to have both open.

Erik Osterman

That’s not right! Have you installed the wrapper lately?

Erik Osterman

Try reinstalling it

chrism

i tend to use make all habitually. seemed odd tbh

chrism

geodesic’s up to date (hence all the “oh fudge, that assume-role thing I’d been avoiding that breaks in WSL”). I’ll dig deeper if it’s not expected to do that, as it’s probably something stupid

Erik Osterman

I think this is what you want

Erik Osterman

we have that in many dockerfiles

Erik Osterman

#office-hours starting now! https://zoom.us/j/684901853

Have a demo of using Codefresh for ETL

Mat Geist

question regarding geodesic in CICD / automated environments. looking at https://github.com/cloudposse/testing.cloudposse.co/blob/master/codefresh/terraform/pipeline.yml i think I’m missing how the assume-role actually gets executed. as far as i can tell, there’s no way to set up aws-vault to be completely non-interactive (it always asks for the passphrase prompt). so, in a sentence: how are roles getting assumed in CICD environments?

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman

aws-vault is for humans

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman

in the CI/CD context, the credentials are provided via alternative means

Erik Osterman

For example, one way is to update a Codefresh shared secret with temporary credentials

Erik Osterman

E.g. if you don’t like the idea of long-lived creds stored in codefresh, this is one way

Erik Osterman
#!/bin/bash

set -e

# Assume the role locally with aws-vault and export the temporary credentials into this shell
eval "$(aws-vault exec cpco-testing-admin --assume-role-ttl=1h --session-ttl=12h -- sh -c 'export -p')"

# Render a Codefresh "context" (shared secret) containing the temporary credentials
output="/dev/shm/codefresh.yaml"
cat<<__EOF__>$output
apiVersion: "v1"
kind: "context"
owner: "account"
metadata:
  name: "aws-assume-role"
spec:
  type: "secret"
  data:
    AWS_SESSION_TOKEN: "${AWS_SESSION_TOKEN}"
    AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
    AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
    AWS_SECURITY_TOKEN: "${AWS_SECURITY_TOKEN}"
    AWS_PROFILE: "default"
    AWS_DEFAULT_PROFILE: "default"
    AWS_VAULT_SERVER_ENABLED: "false"
__EOF__

# Push the updated context to Codefresh, then clean up the local copy
codefresh auth create-context --api-key $CF_API_KEY
codefresh patch context -f $output
rm -f ${output}

Erik Osterman

@dustinvb nice little trick

Mat Geist

how are you able to use aws-vault without the manual passphrase input in that script?

Erik Osterman

Set the AWS_VAULT_FILE_PASSPHRASE env var
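
For reference, the non-interactive piece can look roughly like this in a CI step (a sketch; the profile name matches the script above, and the secret variable name is hypothetical). AWS_VAULT_FILE_PASSPHRASE applies to aws-vault's encrypted file backend:

export AWS_VAULT_BACKEND=file                               # assumes the file backend is in use
export AWS_VAULT_FILE_PASSPHRASE="${AWS_VAULT_PASSPHRASE}"  # injected as a CI secret so nothing prompts

aws-vault exec cpco-testing-admin -- aws sts get-caller-identity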

Mat Geist

oh wow thanks! been looking all over and never found that

Erik Osterman

I think I found it looking through the code at some point

Mat Geist

I ended up writing a little tool, since working with aws-vault in CI pipelines was a bit too clunky for my tastes: https://github.com/BetterWorks/go-assume. It's a quick and dirty script I threw together this afternoon, but it works

dustinvb
10:05:15 PM

@dustinvb has joined the channel

2019-06-17

Cloud Posse
04:00:52 PM

2019-06-13

jober

Sorry another noob question:

How do I get domain resolution to work for the member accounts, let's say app.dev.example.com in the dev account just being a static S3 site?

jober

I have been digging around the root modules trying to figure this out and so far no luck

Erik Osterman

so a few things are going on

Erik Osterman

first you need to delegate dev.example.com to the dev account

Erik Osterman

the account-dns root module handles creating the zone and is invoked in each child account

Erik Osterman

then the root-dns module delegates the DNS to each child account
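
A hedged way to sanity-check that delegation (zone IDs below are placeholders): the NS record for dev.example.com in the parent zone (root account) must match the name servers of the hosted zone in the dev account, and the registrar must point at the parent zone's own name servers.

# name servers of the child zone (run with dev-account credentials)
aws route53 get-hosted-zone --id Z_CHILD_ZONE_ID --query 'DelegationSet.NameServers'

# the delegating NS record in the parent zone (run with root-account credentials)
aws route53 list-resource-record-sets --hosted-zone-id Z_PARENT_ZONE_ID \
  --query "ResourceRecordSets[?Name=='dev.example.com.' && Type=='NS']"

# what actually resolves publicly (registrar plus delegation combined)
dig +short NS dev.example.com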

jober

So I went through the setup of the reference architectures, I have the root account with the NS records set for the dev account. In the dev account the NS records are setup as well and then I created an A record in the dev account to point to the bucket

jober

@Erik Osterman would the original hosted zone I had setup for the root domain be interfering with it?

Erik Osterman
cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Erik Osterman

?

jober

@Erik Osterman yes

jober

Everything is working as far as the account shells and such. Just having the issue with Route53. I have a suspicion that the original hosted zone setup on the root account is affecting the reference-architecture setup

jober

I moved the registrar to point to the new name servers, and move over any legacy record sets, still no luck

jober

Got it to work

Erik Osterman

Great job!

Erik Osterman

What was it in the end?

jober

Forgot to update the registrar to the new nameservers

jober

knew it was going to be a noob mistake, thanks for the patience

2019-06-12

Erik Osterman

Public #office-hours starting now! Join us on Zoom if you have any questions. https://zoom.us/j/684901853

jober

is it possible to make changes and not have to rebuild the shell everytime?

Erik Osterman

Use /localhost

Erik Osterman

Also, we have this PR pending for docs: https://github.com/cloudposse/docs/pull/460

Document workflow for developing terraform modules locally by Nuru · Pull Request #460 · cloudposse/docs

what Document workflow for developing terraform modules locally why Existing documentation does not cover the workflow

jober

Amazing

jober

Thanks so much, that provided a ton of clarity

Erik Osterman

(@Jeremy Grodberg )

jober

When I follow these instructions I get an error:

Error copying source module: error downloading `file:///Users/justin/infrastructure/terraform-root-modules/aws/vpc` : source path error: stat /Users/justin/infrastructure/terraform-root-modules/aws/vpc: no such file or directory
jober

I followed the exact folder structures and everything

Erik Osterman

But the users folder is not mounted - not by us

1
1
Erik Osterman

Somewhere that is referenced

jober
As a convenience, Geodesic mounts your home directory into the Geodesic container and creates a symbolic link so that you can reach your home directory using the same absolute path inside Geodesic that you would use on your workstation. This means that as long as you do your development in directories under your home directory (and on the same disk device), your workstation's absolute paths to your development files will work inside Geodesic just as well as outside it.
jober

Sorry I must be missing something

Erik Osterman

Aha, that was some thing new Jeremy added

1
Erik Osterman

Haven’t tested that myself

Erik Osterman

I would verify that you have a current version of geodesic

1
Erik Osterman

And that you see the symlinks in your shell

1
jober

Jeremy Grodberg

Mapping of Home directory was added in Geodesic 0.94.0 https://github.com/cloudposse/geodesic/releases/tag/0.94.0

cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

1
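
A small illustration of what that mapping buys you (paths below are hypothetical, and this assumes Geodesic >= 0.94.0): $HOME is mounted at /localhost and symlinked back to its original absolute path, so a locally checked-out module can be referenced with the same path inside and outside the shell.

# on the workstation (macOS path shown purely as an example)
ls /Users/justin/dev/terraform-root-modules/aws/vpc

# inside the Geodesic shell, both of these resolve to the same files
ls /localhost/dev/terraform-root-modules/aws/vpc
ls /Users/justin/dev/terraform-root-modules/aws/vpc   # via the symlink described above

# so the local copy can be used as the module source while developing
terraform init -from-module=/Users/justin/dev/terraform-root-modules/aws/vpc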

2019-06-11

JeroenK

In the cold start instructions accounts are provisioned, but in the process an e-mail account like [[email protected]] is needed. Is there a workaround, because we want to use our general department e-mail address?

Erik Osterman

Use plus addressing. By default the reference architectures in the repo above do that. See root.tfvars

Jeremy Grodberg

Each AWS account requires a unique email address because that is how AWS identifies an account.

JeroenK

How can we use geodesic with, for example, an mgmt VPC that is connected to a staging VPC and a prod VPC? We use Bitbucket Server throughout the organization. How does this work with the different accounts? Are there examples of custom (terraform) modules?

Erik Osterman

Think of geodesic as just a preconfigured shell with all the tools required for cloud automation

1
Erik Osterman

What you describe is a configuration not a tool

Erik Osterman

So you would add the configuration to geodesic and run it

Erik Osterman

This is where our root modules come in

Erik Osterman

Those provide blueprints for typical configurations like the ones you described

aknysh

@JeroenK in https://github.com/cloudposse/terraform-root-modules, there are a few examples of VPC peering:

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

EKS - backing services (where you run things like RDS, ElastiCache etc.) VPC peering: https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks-backing-services-peering/main.tf

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

as @Erik Osterman mentioned, geodesic has nothing to do with configuration (code, data, settings), it’s a cloud automation shell with many tools inside, used to secure access to AWS (assume role or enterprise auth like Okta) and orchestration of cloud operations

aknysh

configuration usually consists of code (terraform, helm, helmfile, etc.) and data (variables, ENV variables, other settings)

aknysh

for code, we use a module hierarchy: root modules (a catalog of module invocations to provision entire infrastructure) - infrastructure modules (e.g. RDS, EKS, ECS - these are usually combinations of other low-level modules) - low-level modules (usually to provision one or a few AWS resources, e.g. an IAM role, an S3 bucket with permissions, a VPC with subnets, etc.)

aknysh

all those modules are usually “identity-less”, meaning they don’t care where and how they will be provisioned; all configuration is provided from TF variables, ENV variables, SSM param store, Vault, etc.

aknysh

to directly answer your question, what we do is this:

aknysh
  1. Create low-level modules (e.g. VPC, IAM, S3, etc.)
aknysh
  2. Create infrastructure modules (e.g. EKS, ECS, RDS, Aurora), using the low-level modules
aknysh
  3. Create a reusable catalog of module invocations (we call it root modules) that uses all other modules from the above
aknysh
  4. Provide configuration to the modules (usually using TF vars from files or Dockerfile, ENV vars, and SSM param store using chamber - depends on use case and whether the data are secrets or not)
aknysh
  5. And finally, from geodesic, log into the AWS account (by assuming an IAM role); all configuration gets populated from the sources described in #4, and you provision infrastructure for the particular account using the root module invocations (which, once inside the geodesic shell for the particular AWS account, already know how and where they will be provisioned since they have all the configuration)
Josh Larsen

@Erik Osterman do you have any docs or advice for upgrading to the most recent geodesic with terraform 0.12 with the purpose of upgrading to 0.12 wholly? i just noticed when i do make deps now terraform says the directory is not totally empty (before it would just ignore the envrc tfvars). also, should i be concerned that it may distort my remote state file?

Erik Osterman

@Josh Larsen - we ran into this too

Erik Osterman

it’s aggravating.

Erik Osterman

I can give you a temporary workaround (haven't tested it), but I think it should work

Erik Osterman

basically, run terraform init blah and it should init the files to the blah folder

Josh Larsen

ok, but that might mess with the tfstate pathing… the new state file for /account-dns might change to /blah/account-dns, no?

Erik Osterman

then set export TF_DATA_DIR=$(pwd)/.terraform

Erik Osterman

oh

Erik Osterman

i see what you mean.

Josh Larsen

guess i could copy it all up one folder after init, just clunky

Erik Osterman

for now, I suggest overloading the deps target until we have a cleaner workaround

Erik Osterman

e.g. by doing the extra copy step

Josh Larsen

ok, then it's safe to assume geodesic is not really fully in line with 0.12 quite yet?

Erik Osterman

It’s fair to say our strategy of terraform init -from-module=.... does not work as-is with 0.12

Josh Larsen

ok, fair enough. we will try working around it. i do like that adding the version to .envrc changes the terraform version. nifty.

Erik Osterman

yea, happy with that part

Erik Osterman

so there's a -force-copy arg now, but I wish it applied force to the "right" copy operation

Erik Osterman

so all the terraform commands support specifying the path

Erik Osterman

that path can be added to the TF_CLI* envs
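
A minimal sketch of that workaround, assuming a scratch directory named .module (the directory name and module source are placeholders): init the remote module into the empty subdirectory, then point the other Terraform commands at it via the TF_CLI_ARGS_* environment variables so the project directory itself can keep its dotfiles and tfvars.

export TF_MODULE_CACHE=.module
mkdir -p "${TF_MODULE_CACHE}"

# pull the root module into the empty cache directory
terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/account-dns "${TF_MODULE_CACHE}"

# have plan/apply/destroy operate on the cached copy (Terraform 0.12 still accepts a DIR argument)
export TF_CLI_ARGS_plan="${TF_MODULE_CACHE}"
export TF_CLI_ARGS_apply="${TF_MODULE_CACHE}"
export TF_CLI_ARGS_destroy="${TF_MODULE_CACHE}"

terraform plan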

2019-06-10

SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse
04:03:01 PM
jober

@Erik Osterman quick question:

Is this https://docs.cloudposse.com/reference-architectures/cold-start/ still pretty much up to date?

jober

it looks like it may be out of date?

Erik Osterman

It's mostly out of date

Erik Osterman
cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

2019-06-06

mmuehlberger

Hi folks, it’s been a while! I’ve got a tiny question regarding geodesic and direnv: I’d like to automate the execution of chamber to fetch a stored GitHub token and private key after assuming a role. I thought that having a .envrc file in /conf that does that would be a good idea, but it seems, that direnv is not running after assume-role. Any pointers on how to achieve that?

Erik Osterman

Hrmmm it should definitely operate even after assume role

Erik Osterman

Are you running a current version of geodesic?

Erik Osterman

Ohhhhhhh here’s what maybe is happening. You want it to rerun after assume role, however it runs only once

mmuehlberger

Exactly!

Erik Osterman

You would need to flush the direnv cache so it triggers again

Erik Osterman

I forget how to do that

mmuehlberger

What would be the easiest way to run a post-assume-role command? Doesn't need to be direnv, I would just want to execute some shell commands. Is there any way?

2019-06-05

JeroenK

Error configuring the backend “s3”: Not a valid region: eu-north-1 I get this error while trying to create tfstate-backend Is eu-north-1 not allowed?

nutellinoit
backend/s3: Support New eu-north-1 Region Automatic Validation · Issue #19632 · hashicorp/terraform

Current Terraform Version terraform 0.11.10 Use-cases AWS has just publicly announced the availability of the eu-north-1 (Stockholm) region: https://aws.amazon.com/blogs/aws/now-open-aws-europe-sto

JeroenK

Thanks the skip region validation workaround is the trick

Erik Osterman

office hours starting now: https://zoom.us/j/684901853

2019-06-03

SweetOps #geodesic
04:00:09 PM

There are no events this week

Cloud Posse
04:01:59 PM

2019-05-30

Erik Osterman

we’ve release support for terraform 0.12 in geodesic

1
1
1
Erik Osterman

@Josh Larsen

Erik Osterman
Terraform 0.12 support by osterman · Pull Request #484 · cloudposse/geodesic

what Add support for multiple concurrent versions of terraform why Individual projects need to be pinned to different versions of terraform since not all projects will be updated at the same tim…

1
Erik Osterman

TL;DR:

Erik Osterman
apk add --update [email protected]
Erik Osterman

and add use terraform 0.12 in your .envrc

Erik Osterman

to support 0.11 as well

Erik Osterman

do apk add --update [email protected] and add use terraform 0.12 in the .envrc

Erik Osterman

if you only care about 0.12 and not 0.11

Mat Geist

awesome!

Erik Osterman

you can skip the brouhaha with installing terraform_0.x packages

2019-05-29

Josh Larsen

@Erik Osterman i’m sorry if this has already been asked, but is there a rough timeline on when geodesic will be updated to terraform 0.12?

Erik Osterman

Working on it as we speak

Erik Osterman

Might have it ready by end of day

Erik Osterman

Problem is we need to support multiple versions concurrently so introducing a system For that

2
Josh Larsen

nice… ok thank you. looking forward to it

Erik Osterman

public #office-hours starting now! join us here: https://zoom.us/j/684901853

2019-05-28

Abel Luck

I’ve followed the reference architecture setup, so i’ve got an admin group in the root account with users added to it, and the users access sub-accounts by assuming a role

Abel Luck

now, in one sub-account I want to enable SSM Session Manager to allow users to create sessions on instances. I’ve created an appropriate policy, but I’m stuck on where to attach the policy to. I probably should attach this policy to a group, but in the sub-account there are no users/groups of course.

Erik Osterman

I haven’t tried to use SSM this way, but what I think you want to do is attach the policy to a role in the child account

Erik Osterman

The allow the group in the root account to assume that role

Erik Osterman

We have an example of how to do that if you look at the organization access role module

Abel Luck

Woops I minced my word there, I definitely meant attach the policy to a role and then attach it to a group. But you knew what I meant hah.

Abel Luck

Ah I see now. I’ve had my admin group setup to assume the OrganizationAccountAccessRole, which gives the AdministratorAccess policy

Abel Luck

This actually addresses an issue I’ve wanted to fix for awhile: restricting access to the users in the sub-accounts.

Abel Luck

I didn’t quite understand how assumed role access worked until just now

Erik Osterman

Awesome! Yes we should offer some more canned roles

Abel Luck

But i can’t attach it the group in the root account, because the policy is in the sub-account

Abel Luck

anyone know the correct approach here?

2019-05-27

SweetOps #geodesic
04:00:06 PM

There are no events this week

Cloud Posse
04:00:45 PM

2019-05-24

2019-05-23

Tega McKinney

Just curious: when running reference-architecture, I just realized that it does not add users from /config/root.tfvars as admins on the root account. That makes sense, as those users may not be admins. Should it expose a root_admin_user_names and/or root_readonly_user_names var(s) to ensure the ability to administer the account using IAM vs the root email?

Tega McKinney

@Erik Osterman any thoughts on the above?

Erik Osterman

might be an oversight

Erik Osterman

Oh, i think this is what you were suggesting in #office-hours

Erik Osterman

(and agree)

Erik Osterman


brought up a good point that we need to document how to get the outputs for the users created in the reference-architectures

Tega McKinney

@Erik Osterman I believe I see my mistake. I logged into the root account using my email / password instead of assuming the <namespace>-root-admin role. All sorted now

Tega McKinney

Also, this note was different than the #office-hours report. That references how to obtain the admin user’s password after closing the terminal.

This thread was just my mistake in logging into the console incorrectly. Thanks a bunch

2019-05-22

paul.mortimer

Hi , i’m seeing the same issue as @jober

paul.mortimer

just reading through the notes above …

paul.mortimer
Tega McKinney

I saw the same issue above as well… I noticed that when I ran make root/shell and manually ran cd /conf/accounts and then direnv exec . make deps in the shell, it would initialize, but when running make root/provision it would throw an error. I was able to get around it; however, I was not aware of the make reset that @Jeremy Grodberg mentioned

Tega McKinney

@Erik Osterman curious to know how you obtain admin user password. I ran reference-architectures however I didn’t pull the password before closing out my shell and deleting my root creds. I have already committed my repos however now I’m not certain proper way to get admin user console password. Any thoughts?

paul.mortimer

@Tega McKinney, thanks , looking at this now

paul.mortimer

@Tega McKinney, which directory are you running make reset from ?

Tega McKinney

@paul.mortimer Not sure; I did not run it however it looks like it’s available in /conf/accounts. @jober may have some insight

1
Jeremy Grodberg

Each directory under /conf is a Terraform module to install (except for /conf/helmfiles, which are helmfiles). In a Terraform directory, make deps loads the modules and initializes the Terraform state, and as part of loading the modules, loads a module-specific Makefile. After running make deps you can do all the normal Terraform stuff, but as protection against accidentally overwriting something, you cannot run make deps while there is Terraform state in the current directory. If you are sure you want to clear it out, that is when you run make reset, which deletes everything make deps pulled in and any state Terraform stored in the directory.
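
In practice that flow looks roughly like this (a sketch; only make deps and make reset are named above, the rest is the normal Terraform workflow):

cd /conf/accounts
make deps          # pulls in the root module, loads its Makefile, and initializes Terraform
terraform plan     # normal Terraform commands once deps has run
terraform apply

make reset         # deletes everything make deps pulled in plus any local state,
                   # so make deps can be run again from a clean slate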

jober

@Tega McKinney @paul.mortimer I ran the make reset from the /conf/accounts

Erik Osterman

Public/Free Office Hours with Cloud Posse starting now!!

https://zoom.us/j/684901853

paul.mortimer

@Jeremy Grodberg @jober, thanks for the info appreciate the pointers.

paul.mortimer

I’m in Australia, so just getting up … is there any chance i could jump on zoom shortly and walk through this issue with someone?

paul.mortimer

@Erik Osterman ?

Erik Osterman

@paul.mortimer best way is to schedule some time here: https://calendly.com/cloudposse

2019-05-20

SweetOps #geodesic
04:00:01 PM

There are no events this week

Cloud Posse
04:03:38 PM
jober

Has anyone run across this error when running make root from the reference architectures?

jober
Erik Osterman

@jober not that one in particular

Erik Osterman

but I can maybe help you work through it if you want to zoom

jober

jober

Trying to run it on an active account not sure if it is conflicting with something

jober

I ran on personal account and everything was ok

Erik Osterman

My hunch is this: see that error in the output about copying overrides? I think maybe that’s preventing it from completing.

jober

i was looking at that as well

Erik Osterman

so the makefile will abort on the first failure

Erik Osterman

so i think the module is just not getting initialized and that’s maybe causing nothing to be written to SSM

jober

Like it says there are no overrides

Erik Osterman

are you familiar with what we are doing here with overrides?

jober

No

Erik Osterman

(maybe not this exactly spot, but the pattern itself)

Erik Osterman

ok

Erik Osterman

so basically we use the terraform init -from-module=.... pattern everywhere.

Erik Osterman

That works great, except for the init will bail if there are any .tf files in the current directory

Erik Osterman

so what we do is stick all those “overrides” (.tf files) in the overrides/ directory

Erik Osterman

then…

Erik Osterman
  1. terraform init -from-module=....
Erik Osterman
  2. cp overrides/* .
Erik Osterman

  3. terraform init -from-module= (null modules so that it doesn't try to redownload)

Erik Osterman

why do we need overrides?

Erik Osterman

…that’s so we can have a general root module like a “users” module, but not define any users

Erik Osterman

then we stick the user accounts in overrides/ e.g. overrides/osterman.tf
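
Put together, the sequence looks something like this (the module source is illustrative):

# 1. pull the generic root module into the current directory
terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/users

# 2. overlay the project-specific .tf files kept out of the way in overrides/
cp overrides/* .

# 3. re-init with an empty -from-module= so nothing gets re-downloaded,
#    while the newly overlaid resources and modules are picked up
terraform init -from-module=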

jober

ahh ok

jober

is it required to have overrides?

Erik Osterman

i don’t think it should be required (fundamentally speaking), but that error is complaining about that.

Erik Osterman

@Jeremy Grodberg do you recall seeing this error when you recently provisioned “that customer”

jober

Is this something new? I copied the reference architectures about a month or 2 ago and setup on a personal account and this did not come up

Erik Osterman

hrmmm so actually, it might be “new” in the sense we finally got around to updating the ref arch with our latest customer rollout, but not new in the sense we’ve been doing it for about 6mo

jober

gotcha

Erik Osterman

basically, every time we do a customer rollout we revise/polish the ref-arch. it's still got its bugs (first and foremost it's a device we use to speed up our own engagements)

jober

makes sense

jober

i am just looking at the user module and the code to deal with the overrides to try and gain some insight on the implementation

jober

try and find a work around

jober

I do not have any users setup in the root.tfvars

jober

is this required?

Erik Osterman

Aha, haven’t tested it without probably

jober

ok so I should add it there and try again

jober

so if i add my user to here in the root.tfvars:

# Administrator IAM usernames mapped to their keybase usernames for password encryption
users = {
  #  "[email protected]" = "osterman"
}
Erik Osterman

Yes, I believe that should be all it takes.

jober

ok so then do I need to create an overrides folder and put anything in there?

Erik Osterman

no, it should get created for you

jober

ok i will try that

jober

thanks!

jober

well that worked, but now i am getting….

Erik Osterman

hrmmm odd

Erik Osterman

that second terraform init should look like:

terraform init -from-module=
Erik Osterman

that empty -from-module= is deliberate

jober

this is why noobs are the best QA haha

Erik Osterman

thanks for sticking in there

jober

Hahaha this will put me miles ahead from where i would be otherwise

jober

it looks like the export to /artifacts/accounts.tfvars worked

jober

btw i am only initializing audit, dev, staging, test and prod

Jeremy Grodberg

@jober @Erik Osterman It is required to have at least 1 user configured in root.tfvars because that is how you are intended to have long-term access to the organization accounts.

jober

@Jeremy Grodberg thanks! sorry it was not clear that is a requirement. But it makes sense

1
Jeremy Grodberg

Unfortunately, the reference architecture is intended to get things started. It is not a full-fledged multi-functional tool.

jober

Forsure! And its awesome!

jober

I am still having issues withe the above error. @Jeremy Grodberg do you have any suggestions/insight?

Jeremy Grodberg

You need to do make reset to clear the errored state

jober

That worked thanks!!!

2019-05-19

2019-05-18

Jesse

Could someone help me locate or add aws creds to a built geodesic container? I’m having some issues understanding where geodesic looks for this info, and in what format to provide it. I have no roles yet, and id like to add one for aws so i may assume it.

Erik Osterman

So, after building the container, run make install, which will install a simple wrapper script into /usr/local/bin/

Erik Osterman

when you use that wrapper script, it’ll take care of automatically mounting ~/.aws into the container

Erik Osterman

we generally use aws-vault

Erik Osterman

also helpful to note, is that $HOME is mounted to /localhost in the container

sarkis

Hi Jesse - been a little bit since I had the pleasure to use geodesic, but I think I have a good idea where you are stuck… have you followed along with the documentation here? https://docs.cloudposse.com/documentation/getting-started/

The reference architecture should give you a good idea on where/how the roles to be assumed are meant to be used: https://github.com/cloudposse/terraform-root-modules

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Jesse

Hi, i started reading these docs again, i think i missed the cold start section which seems to cover this.

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

2019-05-16

Erik Osterman

Share what you were doing…

Erik Osterman

Not enough to go on

rohit

i am just trying to mount S3 bucket using

s3fs bucketname directoryname
Erik Osterman

Need an fstab entry

Erik Osterman

we have a “helper” to make this easier

Erik Osterman
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman

this is an example of how to do it

Erik Osterman

after you add the fstab entry, just run mount -a

Erik Osterman

assumes you’ve already run assume-role

rohit

@Erik Osterman

s3 fstab '${TF_BUCKET}' '/' '/secrets/tf'

what is s3 in this command ?

Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman

Just a helper to make it easier to work with goofys and fstab

Erik Osterman

it’s entirely optional, but you could say it documents how to do it.

rohit

ohh ok. I am trying to form s3fs command using it

Erik Osterman

so s3fs is intended to be called via mount

Erik Osterman

like the other filesystems (E.g. extfs)

Erik Osterman
Erik Osterman

here’s what my fstab looks like in a geodesic container

Erik Osterman

note, that mount will call the s3fs

Erik Osterman

that’s what s3fs#${TF_BUCKET} is saying….

Erik Osterman

note the ${TF_BUCKET} is eval’d by the s3fs command (wrapper) so that we can have dynamic mounts

Erik Osterman

you can cat /usr/bin/s3fs to see what it does

Erik Osterman

it’s just a simple helper script
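
Roughly, the helper appends an fstab entry like the one below and mount -a then picks it up; the mount options shown are an assumption for illustration, not the exact ones the helper writes. mount invokes the /usr/bin/s3fs wrapper, which in turn runs goofys with the assumed-role credentials.

# illustrative /etc/fstab entry; ${TF_BUCKET} is resolved by the s3fs wrapper at mount time
s3fs#${TF_BUCKET} /secrets/tf fuse _netdev,allow_other,--dir-mode=0700,--file-mode=0600 0 0

# after assume-role, mount everything listed in fstab
mount -a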

rohit

i don’t have /usr/bin/s3fs on my machine

rohit

i installed s3fs using brew install s3fs

Erik Osterman

ok, so I think there’s a disconnect.

Erik Osterman

We’re in the #geodesic channel

Erik Osterman

brew is for mac

Erik Osterman

geodesic runs alpine

Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman

all of our instructions for s3fs are relative to geodesic

rohit

my bad, sorry

Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman

this is our wrapper script

Erik Osterman
kahing/goofys

a high-performance, POSIX-ish Amazon S3 file system written in Go - kahing/goofys

rohit

I will checkout . thanks again

rohit

when i cd to the directory i don’t see anything

rohit

it is trying to fetch all the subdirectories when i run ls command

Alex Siegman

s3 is an object store, not a file system. this idea of s3fs confuses me. I’ve never even thought to use s3 in that way. Though I guess a filesystem is just a specialized object store?

rohit

i know but that’s what s3fs does

Erik Osterman

@Alex Siegman using goofys is a nice escape-hatch for using S3 as a filesystem

Erik Osterman

not for databases, but great for “legacy” apps that want to read files

Erik Osterman

in the past, we’ve used it to store SSH keys and other configuration files.

2019-05-15

Erik Osterman

office hours starting now https://zoom.us/j/684901853

rohit

I tried using s3fs but couldn’t mount sub directories. I saw that you are using it as part of geodesic so posting my question here

2019-05-13

SweetOps #geodesic
04:00:05 PM

There are no events this week

Cloud Posse
04:00:28 PM

2019-05-08

oscarsullivan_old

That’s cool.. What sort of man pages are we talking? Custom?

So I create one called api.md and inside i have say internal instructions for our api?

Erik Osterman

yes, exactly!

Erik Osterman

it’s for custom man pages

Erik Osterman

but we are just piggy backing on the existing linux manpage system by installing the generated man pages to /usr/share/man

Erik Osterman

that means it will wrk with system man pages too

2019-05-07

Josh Larsen

anyone using packer at all? i’m curious why packer isn’t in the geodesic image.

Josh Larsen

nm, i think i see a solution in the slack archives… thanks @oscarsullivan_old for asking this before me. will do RUN apk add --update [email protected] in Dockerfile

2
Erik Osterman

Ya, just trying to reduce the number of binaries. Tempted to remove more from the default distribution in the future, since we have packages for most things.

2019-05-06

SweetOps #geodesic
04:00:06 PM

There are no events this week

Cloud Posse
04:01:27 PM
Erik Osterman

geodesic 0.106.0 adds support for man pages in markdown

Erik Osterman

stick all your documentation in /usr/share/docs and run docs update

Erik Osterman

then using man works as expected

Erik Osterman

try man faq to test

Erik Osterman

or help to search

Erik Osterman

@oscarsullivan_old
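
For example, the workflow described above looks roughly like this (the filename is just an example); the generated pages land in /usr/share/man as noted earlier:

# drop a markdown page into the docs directory
cat > /usr/share/docs/api.md <<'EOF'
# api
Internal notes for our API.
EOF

docs update   # regenerate the man pages
man api       # then read them with the normal man command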

2019-05-01

hi everyone, I have a question. Do aws-chamber and vault overlap ?

aws-vault for securely storing and accessing AWS credentials in an encrypted vault for the purpose of assuming IAM roles
Erik Osterman

aws-vault only handles AWS credentials, nothing else

chamber for managing secrets with AWS SSM+KMS and exposing them as environment variables
Erik Osterman

you would use aws-vault first to obtain a session. then with that session you could use chamber. it would be a catch22 if we tried to use chamber for AWS credentials

seems they’re for managing aws secrets

loren

i think chamber is more general purpose

loren

aws-vault is specifically for retrieving an aws credential and assuming an IAM role

2
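
The ordering described above looks like this in practice (profile and service names are placeholders):

# aws-vault supplies the AWS session; chamber then uses that session to read SSM+KMS secrets
# and exposes them as environment variables to the wrapped command
aws-vault exec example-admin -- chamber exec myapp -- terraform plan
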
Josh Larsen

no office hours this week?

aknysh
06:49:00 PM
Office Hours

May 1st, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853

Erik Osterman

ooops

Erik Osterman

i totally lost track of time

Erik Osterman

sorry everyone!

2019-04-29

SweetOps #geodesic
04:00:07 PM

There are no events this week

Cloud Posse
04:03:52 PM

2019-04-26

Jeremy Grodberg

@Erik Osterman I did not delete any accounts. Not only are there lots of hoops to go through, but also deleting accounts take a long time (90 days, I think) and the account email address remains permanently associated with the “deleted” account, and therefore cannot be used on a new account.

Jeremy Grodberg

@Mike Pfaffroth What I recommend is to reuse the created accounts. Run make root up to the point where terraform complains that it cannot create the accounts, then import the accounts into Terraform using terraform import.

Mike Pfaffroth

interesting… I will try that approach. Appreciate the tip.

Mike Pfaffroth

hm… even on a brand new account I am not able to create these- it always dies when trying to create the others:

* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
	status code: 400, request id: 7de7c003-6846-11e9-9002-752f3d4639b5
Mike Pfaffroth

I changed the email address as well, just to make sure it wasn’t trying to create ones that were already there

Mike Pfaffroth

I am the root account:

➜  reference-architectures git:(master) ✗ aws sts get-caller-identity
{
    "UserId": "1redacted7",
    "Account": "1redacted7",
    "Arn": "arn:aws:iam::1redacted7:root"
}
Mike Pfaffroth

is there a special permissions I need? or is there a documentation process for signing up for an account in the right way within an organization?

Erik Osterman

Sounds like maybe you're not provisioning from the top-level payer account

Erik Osterman

this is the “root” account

Erik Osterman

If you are already in a subaccount, AWS does not let you create more subaccounts

Mike Pfaffroth

so if I understand it correctly I need to sign up for a brand new account, and then geodesic creates organizations and IAM users inside that account for each environment @Erik Osterman?

Erik Osterman

yes, more or less. you can technically use an existing root level account too, but you run the risk of conflicting resources.

Erik Osterman

Our reference-architectures are what we use in our consulting to stand up new accounts for our customers. We have a very specific focus.

Mike Pfaffroth

yup- totally understood. just wanted to make sure I understood how it worked. Thanks for your help!

2019-04-25

Mike Pfaffroth

Hi- if I have started the “quick start” with http://example.io, had an issue and ran make clean (after terraform had created some things), how can I most easily “reset”?

Mike Pfaffroth

I want to run terraform destroy essentially

Mike Pfaffroth

I am getting, for example-

* module.staging.aws_organizations_account.default: 1 error(s) occurred:

* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
	status code: 400, request id: 5b93365b-67b0-11e9-808b-3fc6f3a60880
* module.dev.aws_organizations_account.default: 1 error(s) occurred:

* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
	status code: 400, request id: 5b9335cd-67b0-11e9-b64d-9b693972455f
* module.prod.aws_organizations_account.default: 1 error(s) occurred:

* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
	status code: 400, request id: 5b92998f-67b0-11e9-b23d-dd34fe4bf6c3
* module.audit.aws_organizations_account.default: 1 error(s) occurred:

* aws_organizations_account.default: Error creating account: AccessDeniedException: You don't have permissions to access this resource.
	status code: 400, request id: 5b93362c-67b0-11e9-a70f-ffba4d75ead7
Erik Osterman

@Mike Pfaffroth recommend to start here instead: https://github.com/cloudposse/reference-architectures

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Mike Pfaffroth

@Erik Osterman I am running that (I’m in the make root stage)

Erik Osterman
Erik Osterman

so terraform destroy will fail when trying to delete the accounts

Erik Osterman

you can destroy the accounts after jumping through a lot of hoops (password reset, login, accept t&c, etc - for each child account)

Mike Pfaffroth

right- I guess what I was trying to say is- is that process documented anywhere?

Mike Pfaffroth

I know I’m in a bad position, I am trying to find a similar checklist

Erik Osterman

aha! yes, good question

Erik Osterman

no, we have not documented it. @Jeremy Grodberg would it be relatively quick for you to open an issue against the reference-architectures that describes the process you took? It’s been a long enough time that I can’t recall every step.

Erik Osterman

i just remember doing the password reset on every child account so that I could log in

1
Erik Osterman

then i think there was some kind of terms & conditions I had to accept (checkbox/click)

Erik Osterman

and after that, I think it’s sufficient enough to retry the terraform destroy

Erik Osterman

that said, I don’t think it’s possible to reuse the same email address on round 2

1
Erik Osterman

@oscarsullivan_old or @Jan or @Josh Larsen might have some more recent recollection

2019-04-24

tamsky

I was experimenting today with terragrunt vs vanilla terraform init and wanted to point out a few important differences between these two similar methods:

  • terragrunt { source = "<blueprint source>" } and
  • terraform init -from-module=<blueprint source>

terragrunt allows the use of:

  • override.tf [1] files in the CWD
  • “mix-in” files in the CWD (*.tf files that do not match blueprint filenames), useful for adding one-off resources to a blueprint
  • “upstage” files in the CWD (*.tf files that do match a blueprint filename; these replace the contents of a blueprint source file of the same name), useful for removing blueprint resources. This is due to the fact that terragrunt init creates a tmp dir, clones the SOURCE to the tmp dir, and then copies/overwrites (aka “upstages”) all files in the CWD to the tmp dir (overwriting any duplicates).

Whereas terraform init -from-module= requires that the CWD contains zero files matching *.tf or *.tf.json. This prevents all the above techniques: overrides, mix-ins, and upstage files.

Has anyone thought about how to support/implement the override, mix-in and upstage patterns without the use of terragrunt?

[1] https://www.terraform.io/docs/configuration-0-11/override.html

Override Files - 0.11 Configuration Language - Terraform by HashiCorp

Terraform loads all configuration files within a directory and appends them together. Terraform also has a concept of overrides, a way to create files that are loaded last and merged into your configuration, rather than appended.

Erik Osterman

See our overrides strategy

Override Files - 0.11 Configuration Language - Terraform by HashiCorp

Terraform loads all configuration files within a directory and appends them together. Terraform also has a concept of overrides, a way to create files that are loaded last and merged into your configuration, rather than appended.

Erik Osterman

We support that too, only it’s explicit

tamsky

is that the overrides/ directory ?

tamsky
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Erik Osterman
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Erik Osterman

Yes

Erik Osterman

Not “ideal” imo, but identical strategy

Erik Osterman

We copy to . Rather than to a “cache” folder

Erik Osterman

Terraform just does a basic check for any .tf file

Erik Osterman

IMO that should be optional

Erik Osterman

Perhaps a -force-copy arg

tamsky

I’m trying to grok the two init calls to terraform there

tamsky

the first one I’m guessing makes use of TF_CLI_ARGS_init, and the second one does what?

tamsky

loads any new modules that were included as a result of the new files being cped ?

Erik Osterman

It's self-mutating code

Erik Osterman

At first you init with no overrides which pulls down the source

Erik Osterman

Then we overlay our files which changes the source

Erik Osterman

Then we might introduce new modules

Erik Osterman

So we need to rein it

Erik Osterman

Re init

tamsky

yup ok got it

Erik Osterman

The second one would fail if we tried to init again from a remote module

Erik Osterman

So we null it out

Erik Osterman

We could also unset it from the env

Erik Osterman

Both ok

Erik Osterman

Sitting down for early dinner

tamsky

ok thanks for the explainer

2019-04-23

Josh Larsen

question: let’s say my terraform-root-modules is a private repo… how can i get my github keys into geodesic so the terraform init will be able to retrieve the modules?

Erik Osterman

@Josh Larsen you have a few options

Erik Osterman

are you running this under CI/CD or as a human?

Erik Osterman
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman

then we add the ssh public key as a deploy key to the root modules repo

Josh Larsen

ideally i’d like to do both

Josh Larsen

but let’s start with human for now

Josh Larsen

i'm thinking something like cat /localhost/.ssh/id_rsa | ssh-add - would do it, but would just have to do that every time i started the shell. or is there a better way?

Jeremy Grodberg

By default, Geodesic will start an ssh-agent and add ~/.ssh/id_rsa at startup. You can also write your own scripts to be run at startup. See https://github.com/cloudposse/geodesic/pull/422

Enable run-time customization by Nuru · Pull Request #422 · cloudposse/geodesic

what In addition to some small cleanups and additions, provide a capability for users to customize Geodesic at runtime. why Because people vary, their computers vary, what they are trying to accomp…

Erik Osterman

are you on a mac or linux?

Erik Osterman

on linux, we mount your ssh agent socket into the container

Erik Osterman

we can’t do that on a mac

Erik Osterman

the other option is to store your ssh key in SSM
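
That option could look roughly like this (the chamber service and key names are placeholders, and it assumes the private key was previously stored with chamber write):

# read the key from SSM Parameter Store and hand it straight to the ssh agent
chamber read ssh id_rsa --quiet | ssh-add -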

Josh Larsen

i’m on mac… but i think i get the idea. thanks.

2019-04-22

SweetOps #geodesic
04:00:05 PM

There are no events this week

Cloud Posse
04:01:03 PM

2019-04-17

Erik Osterman

Office Hours Today from 11:30 AM to 12:20 PM at https://zoom.us/j/684901853

Erik Osterman

(PST)

2019-04-16

Eugene Korekin

Hello, everyone. I am trying to follow the cold-start procedure described here https://docs.cloudposse.com/reference-architectures/cold-start/ but it seems that it is outdated. Right now I have an issue trying to create users, could anybody help me?

Erik Osterman

@Eugene Korekin sorry for the troubles!

Erik Osterman

yes, the cold-start docs are quite out of date and refer to a older implementation.

Erik Osterman

our current process is being kept up to date here: https://github.com/cloudposse/reference-architectures

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Erik Osterman
geodesic

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Eugene Korekin

thanks, Eric! I’ll take a look

Eugene Korekin

@ could you please tell me what would be the best approach if I already created the master aws account in aws orgs? I think I can’t remove it

Erik Osterman

@Eugene Korekin

Eugene Korekin

I used the older implementation to create it

Erik Osterman

soooo you have a couple of options.

Erik Osterman

did you create those AWS accounts by hand or using terraform?

Eugene Korekin

I used the existing account as the base for the procedure described in the old cold start, the master account referring to the existing account was created in aws orgs in the process

Erik Osterman

ok, so option (1) is to continue going down the path you were. there's nothing wrong with that per se.

Eugene Korekin

so the master account was created via terraform, but it already has some ec2 instances etc

Erik Osterman

so there's no one way to bring up this infrastructure. the idea is that geodesic is just a runtime environment.

Erik Osterman

what you do inside of it is entirely open-ended.

Eugene Korekin

I'm stuck on that path, as the user creation step requires some contents present in SSM and I cannot find how to create this content with the old procedure

Erik Osterman

hrmmm so most likely, the user creation stuff will then need to use older versions of the modules that pre-date the SSM dependencies

Erik Osterman

i can’t say what version that would be.

Erik Osterman

to start fresh though, would involve the following:

Eugene Korekin
05:51:57 PM
Erik Osterman
  1. password reset on each account
Eugene Korekin

here is the error I have

Erik Osterman
  2. login to each account, accept t&c
Erik Osterman
  3. terraform destroy accounts
Erik Osterman

Erik Osterman

then you have a clean base

Eugene Korekin

is there a way to somehow import existing accounts?

Eugene Korekin

they are already in use

Erik Osterman

you can import existing accounts, however @Jeremy Grodberg recently went through this with one of our current clients and if every parameter doesn’t match 100% it wants to recreate them

Erik Osterman

(jumping on a )

Eugene Korekin

I see, is there a way to only create a new account (let’s say ‘testing’) and leave the master one as it is (including all the existing users)?

Erik Osterman

yes, you can probably do that

Erik Osterman

use accounts_enabled flag to only select testing

Erik Osterman

e.g. get your feet wet with the system

Erik Osterman

there’s a lot of moving pieces, so getting your hands dirty with one account would be a good idea

Eugene Korekin

so, do I just need to skip ‘make root’ and start right from ‘make children’?

Eugene Korekin

I’ve just tried it, and it doesn’t seem to work, the make command doesn’t provide any output

Jeremy Grodberg

You need to set up your configs/root.tfvars with all the right configuration (hopefully it’s reasonably self-explanatory) including accounts_enabled limited to the accounts you want to create. Then set yourself up with AWS admin credentials in your “root” AWS account and run make root. This creates the children accounts and sets up roles and network CIDRs and so forth and creates a bootstrap user you will use for the rest of the configuration.

Eugene Korekin

but it won't change anything inside the root account in the process, right?

Jeremy Grodberg

It will definitely change things inside the root account. The root account is where your users are created and roles for the children account are created and DNS entries for children DNS subdomains are created.

Eugene Korekin

I don’t want to change any of the existing users, won’t it be possible to proceed without that?

Eugene Korekin

in other words, would it be possible to use the existing user entries in the master account and reuse them in the children ones?

Erik Osterman

so our reference architectures don’t provide that level of configurability b/c the number of permutations is insurmountable

Erik Osterman

however, all of our terraform modules are compose of other modules

Erik Osterman

you can pick and choose exactly what you want to use

2019-04-15

Abel Luck

ah ok, that explains what i’m seeing

Abel Luck

on macos it just works, but on our linux workstations the host mounted files are written with root:root

Erik Osterman
Use bind mounts

Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on…

Erik Osterman

I was hoping they supported uid/gid mapping in bind-mount (by now), but it’s not supported

Erik Osterman

we could technically drop to a non-privileged user in geodesic, but haven’t optimized for that.

SweetOps #geodesic
04:00:02 PM

There are no events this week

2019-04-14

Jeremy Grodberg

Geodesic installs a wrapper script to run the container, which launches the container with something like (but not exactly, and with other options) docker run -it --privileged --volume=$HOME:/localhost. The shell inside the Geodesic container is run as root. The permissions mapping is handled by Docker. I use docker-machine on my Mac, so everything Docker runs runs as me, and the root user inside the Docker container has the same permissions as I do on the host. Files created on the host from Geodesic are owned by me, and files I cannot read cannot be read from inside Geodesic.

1
Erik Osterman

I think the behavior is slightly different on linux, where UID/GID within the container are preserved on the host machine, but what @Jeremy Grodberg describes is correct on OSX.

Alex Siegman

This has actually caused a problem with us on our build server. The Jenkins user can no longer clean up some of the workspaces because files end up getting owned by root after unit testing and such =( On my backlog to fix that.

Erik Osterman

Hrmmm okay I can help address that. So when we run geodesic containers in a headless fashion (e.g. cicd or Atlantis), we never use the wrapper script. We always run it in the “native” way for that platform. So for ECS it’s a task and for Kubernetes it’s a pod in a deployment. We never mount localhost, which is only ever recommended for local development and not the general workflow.

Erik Osterman

If localhost is not mounted the host machine is always insulated from these kinds of errors.

Erik Osterman

Keep in mind we always build an image that uses geodesic as its base and we don't run geodesic directly

Alex Siegman

Ah, I meant the more generic problem of root files inside a container against a local mount becoming root owned files on the host operating system. This isn’t even using geodesic. Though, a lot of our engineers here do use linux as their day-to-day O/S so it would affect them as well.

Jeremy Grodberg

On linux, host permissions for processes running in Docker containers should be managed with user namespaces: https://docs.docker.com/engine/security/userns-remap/

Isolate containers with a user namespace

Linux namespaces provide isolation for running processes, limiting their access to system resources without the running process being aware of the limitations. For more information on Linux namespaces, see Linux…

Erik Osterman

The awkward thing though is I think we want to be able to be root in the geodesic container while developing locally

Erik Osterman

but we want files owned by the host user

Jeremy Grodberg

user namespaces give you that: it maps whatever UIDs you use in Docker to whatever you want on the host
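
On a Linux host that remapping is a Docker daemon setting, roughly like the following (a sketch of the documented option, not something Geodesic configures for you):

# enable user-namespace remapping in the daemon config (merge with any existing settings
# in /etc/docker/daemon.json rather than overwriting them), then restart Docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker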

oscarsullivan_old

Yeh ubuntu runs as root and sets files as root:root

2019-04-13

Abel Luck

how does geodesic handle issues of uid/gid when mounting dirs from the host?

Abel Luck

could someone point me to the code where that happens?

Erik Osterman

It does nothing with uid/gid mapping

2019-04-12

2019-04-11

raehik

Question I forgot I had yesterday: I know that Geodesic opens a port (range?) for handy kubectl port-forwards. How do I use it, and what version was it released in? (in the last month, or older?)

Erik Osterman

Yes, that’s supported - it’s been in the container for years

Erik Osterman

there’s an env inside the container that contains the port number

Erik Osterman
Document Kubectl Proxy in Geodesic · Issue #428 · cloudposse/docs

what We port map a random port into the geodesic container This port is what should be used for proxying kubectl proxy --port=${KUBERNETES_API_PORT} --address=0.0.0.0 --accept-hosts='.*' wh…

1
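
Inside the Geodesic shell that pattern looks roughly like this (as sketched in the linked issue; KUBERNETES_API_PORT holds the randomly mapped port):

kubectl proxy --port="${KUBERNETES_API_PORT}" --address=0.0.0.0 --accept-hosts='.*'
# the Kubernetes API is then reachable from the workstation at http://localhost:${KUBERNETES_API_PORT}
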
Erik Osterman

That said, I think we should rename the port to be more generic

Erik Osterman

so it can be repurposed/overloaded

raehik

Thanks, I was looking for $GEODESIC*

Erik Osterman

Yea, that would make more sense

Erik Osterman

I think we should rename it to GEODESIC_PORT

raehik

that’s exactly what I thought it was

Erik Osterman

what’s confusing is in the wrapper we call it one thing (GEODESIC_PORT) and I guess we end up renaming it in the container

Erik Osterman

If you open an issue on geodesic, we’ll track it and fix that

raehik

Will do, cheers!

Erik Osterman
GitOps with Terraform on Codefresh (Webinar)

Infrastructure as code, pipelines as code, and now we even have code as code! =P In this talk, we show you how we build and deploy applications with Terraform using GitOps with Codefresh. Cloud Posse is a power user of Terraform and have written over 140 Terraform modules. We’ll share how we handl

Erik Osterman

Here’s how we used geodesic with Codefresh to achieve GitOps with terraform on Codefresh

joshmyers

What happened with Atlantis?

Erik Osterman

We’re still using it, but customers have asked us to use codefresh instead

Erik Osterman

… so it’s one system they know and understand

Erik Osterman

in the end, I think we were able to reproduce a lot of what atlantis does. Still need better locking mechanisms and support of CODEOWNERS for blocking apply

Erik Osterman
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

2019-04-10

chrism

If you add sops to the geodesic RUN wget https://github.com/mozilla/sops/releases/download/3.2.0/sops-3.2.0.linux -O /usr/bin/sops && chmod +x /usr/bin/sops and set the env vars

RUN curl https://keybase.io/user/pgp_keys.asc | gpg --import 
ENV SOPS_KMS_ARN="arn:aws:kms:xx-xxx-x:xxxx:key/xxx-xxx-xxx-xxx-xxxx"
ENV SOPS_PGP_FP="APGPKEY,ANOTHERPGPKEY,ETC"

You can use sops encryption before pushing files into storage so they’re only un-encrypted within the container during use
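
For example (a sketch; filenames are placeholders), the encrypt-before-push / decrypt-inside-the-container flow is just:

# encrypt with the KMS key and PGP fingerprints configured above before pushing to storage
sops --encrypt secrets.yaml > secrets.enc.yaml

# decrypt only inside the container, when the values are actually needed
sops --decrypt secrets.enc.yaml > secrets.yaml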

Erik Osterman

I believe we provide a sops package

Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman

(so in fact, sops ships with geodesic)

chrism

You know… Didn't know that

chrism

chrism

it’s pretty good; we use it alongside sopstool

chrism

for keeping our tls certs secure in storage

chrism

im now going to go and remove my manually pulling in sops

Erik Osterman

also, if our sops is not current, feel free to submit PR against cloudposse/packages to update it.

SweetOps #geodesic
06:00:01 PM
Office Hours

April 10th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853

2019-04-09

Josh Larsen

i’ve noticed the build-harness has a lot of cloudposse references… is forking this repo also recommended when getting setup with geodesic? it seems to be a dependency and affects commands like readme generation and such.

Erik Osterman

@Josh Larsen yes/no - I think it’s a great idea to fork it for your own org’s needs

Erik Osterman

that said, there’s no easy way to take advantage of your fork in our repos

oscarsullivan_old

You'd have to fork it to use your fork of the readme generator, for example. The readme generator references CP a lot

2019-04-08

SweetOps #geodesic
04:00:03 PM

There is 1 event this week

Office Hours

April 10th, 2019 from 11:30 AM to 12:20 PM GMT-0700 at https://zoom.us/j/684901853

jober

So if I wanted to get starting trying to use geodesic where is the best place to start. I see lots of information about and references to geodesic and reference architectures, just wonder if there is a certain order to follow. Any info would be helpful, thanks

Alex Siegman

@oscarsullivan_old recently went through this, made some docs https://github.com/osulli/geodesic-getting-started

osulli/geodesic-getting-started

A getting-started guide for Cloud Posse’s Geodesic. - osulli/geodesic-getting-started

Alex Siegman

it’s on my list to investigate using geodesic, i just haven’t gotten there yet~

jober

Awesome thanks so much! I’ll give that a look. Really appreciate that

Alex Siegman

The 20 foot view is, you make a dockerfile based off geodesic with some specific stuff in it and then you use the resultant container as your “shell” for the given environment you built it for, if my understanding is correct.

jober

Thats helpful to keep in mind

jober

Im going to document my journey as a noob with this and see if I can come out with some notes/docs for setting this up

oscarsullivan_old

Yep AMA on Geodesic, will try to answer

oscarsullivan_old

also tune in on Wednesday and happy to answer and demo then

oscarsullivan_old

find time in #geodesic

jober

There is 1 event this week

jober

correct?

oscarsullivan_old

jober

ty

oscarsullivan_old

That’s local to your time

jober

Awesome!

oscarsullivan_old

BST

jober

was my next question haha

oscarsullivan_old

I’ll try and revisit my PR

oscarsullivan_old

but maybe read this instead of the master branch readme

oscarsullivan_old
Update getting-started guide by osulli · Pull Request #1 · osulli/geodesic-getting-started

What Update the guides with clearer examples Add Example project that I actually use with Geodesic and Terraform Why Several more weeks worth of experience using the tools Some clear errors in t…

oscarsullivan_old

it was written 2 weeks later I think

oscarsullivan_old

(which is p significant at my rate)

oscarsullivan_old

I also share a TF project that I literally use

jober

Awesome

oscarsullivan_old

that shows you HOW to use geodesic

jober

thanks for all the info

oscarsullivan_old

how to leverage it

oscarsullivan_old

and have one tf project for say your API that can be used for dev/staging/prod without duping files etc

oscarsullivan_old

np, catch you wednesday

jober

see you then!

Erik Osterman

Thanks @oscarsullivan_old !!

Jeremy Grodberg

Note that https://github.com/cloudposse/reference-architectures though a bit hard in itself to grasp, shows how Geodesic was designed to be used. Reference Architectures will actually generate your Geodesic Docker container source repos for you.

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

oscarsullivan_old

@Jeremy Grodberg a nice feature for reference-architectures would be to just set up the geodesic modules and not affect any AWS accounts

oscarsullivan_old

I give it my existing account IDs and any other key details like root account ID

oscarsullivan_old

and it generates the geodesic module repos

Jeremy Grodberg

reference-architectures is very prescriptive, and its primary purpose is to solve the cold-start problem and get you up and running on AWS quickly, starting from nothing. Importing your existing infrastructure and making sense of it automatically is an entirely different concept and a whole other project.

Erik Osterman

I pretty much agree with Jeremy

Erik Osterman

it’s a realllllly hard problem to “generalize” a solution where we are not in control of the configuration

Erik Osterman

(maybe one day!)

oscarsullivan_old

No right I get that. But ref arch does this already right in a linear step? It builds the modules then sets up the accounts or is it all intertwined?

Erik Osterman

It is linear

Erik Osterman

it’s certainly possible (just not a priority yet)

oscarsullivan_old

That makes sense. Perhaps you could point me to the file that triggers it.. although I imagine it is the make file

2019-04-05

hey hey

does anyone know why kops is still 1.10.* in geodesic?

Erik Osterman

No reason - just haven’t received your PR :-)