#geodesic (2018-11)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2018-11-02
What do you guys use to host the packages available at apk.cloudposse.com?
Managed service or self hosted?
Good question
Automatic Alpine Linux Package (apk) Repository Generation using AWS Lambda, S3 & SSM Parameter Store - cloudposse/alpinist
It’s a fork
Very cool little lambda deployed via cloud formation
PR is still open because we’ve been using it for demos
With Atlantis
haha dude
I was like “I do not want to host an APK repo”
Then found this https://github.com/forward3d/alpinist
Automatic Alpine Linux Package (apk) Repository Generation using AWS Lambda, S3 & SSM Parameter Store - forward3d/alpinist
Checked back here to see if you’d heard of it
Turns out you’ve done the hard part for me already
If you’re going to host one I can finish up the PR next week
I’m happy to jump on with you?
I think it’s ready to merge and release
Then our CloudPosse packages repo is how we generate our packages
Yeah noted
We are going to consolidate the vendor and install folders in packages. Our justification for doing it this way is so we don’t have lock in on apk. We use it to install native binaries (e.g. for local Dev) and maybe one day will support other packages like Debian.
I might be missing something, but I’m unsure how consolidating those folders will change anything? That Makefile in install is a manifest and quite useful as a newcomer to the repo.
You can achieve multi-package support by simply adding more steps to the individual Makefiles in the vendor dir
Exactly! This is the plan
Disclaimer: not exactly a Makefile pro but it’s easy enough to follow
Yep, you got it :-)
Oh the install folder was just how we used to do it
The vendor folder is the new way
We are going to add a make target to preserve the original behavior
But keep all the logic in the vendor folder.
Oh btw, if there are any packages missing that you would like, feel free to open an issue or PR (preferred) and we can add them
Yeah was just thinking that - most of my daily tools are there
2018-11-07
Does anyone have an example repo that uses the “multi-stage docker build pattern” against geodesic? I’m hoping not to reinvent the wheel for customizations.
Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co
Example Terraform Reference Architecture for Geodesic Module Staging Organization in AWS. - cloudposse/staging.cloudposse.co
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
oh, I didn’t know those draw from geodesic – but I should have thought of that
tnx!
2018-11-11
what do folks do when they need a later version of a package than what is listed in the cloudposse/packages repo? I’m looking for a later version of terraform than 0.11.8 listed at https://github.com/cloudposse/packages/blob/master/vendor/terraform/VERSION
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages
or a later version than what’s available in the apk aports: https://git.alpinelinux.org/cgit/aports/tree/community/terraform/APKBUILD
Submit a PR against our packages repo
The only reason it’s not the latest is we haven’t had a chance to update it yet.
@tamsky
Also, anything you want a package for, feel free to open an issue or submit a PR
There seem to be at least two different version directives for terraform within cloudposse/packages:
diff -r 5c58433395f3 install/Makefile
--- a/install/Makefile Tue Oct 23 18:32:59 2018 -0700
+++ b/install/Makefile Sun Nov 11 15:53:45 2018 -0800
@@ -297,7 +297,7 @@
teleport:
$(CURL) https://get.gravitational.com/teleport/$(TELEPORT_VERSION)/teleport-v$(TELEPORT_VERSION)-${OS}-$(ARCH)-bin.tar.gz -o - | tar -C $(INSTALL_PATH) -zx --wildcards --strip-components=1 --overwrite --mode='+x' */tsh */tctl */teleport
-export TERRAFORM_VERSION ?= 0.11.7
+export TERRAFORM_VERSION ?= 0.11.10
# Releases: https://github.com/hashicorp/terraform/releases
## Install Terraform
terraform:
diff -r 5c58433395f3 vendor/terraform/VERSION
--- a/vendor/terraform/VERSION Tue Oct 23 18:32:59 2018 -0700
+++ b/vendor/terraform/VERSION Sun Nov 11 15:53:45 2018 -0800
@@ -1,1 +1,1 @@
-0.11.8
+0.11.10
and there seems to be some conflict between what version of terraform gets installed in geodesic.
geodesic appears to use the apk manifest in //packages.txt to install /usr/bin/terraform.
in the cloudposse/packages image, it installs /usr/local/bin/terraform directly from vendor (no apk), but this binary does not get copied into geodesic during the multi-stage build.
to use multi-stage pattern to install packages, use this:
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. https://slack.cloudposse.com/ - cloudposse/geodesic
So that may mean there are 3 different version directives of terraform in play?
- //packages/install/Makefile
- //packages/vendor/terraform/VERSION (appears to have no effect)
- //geodesic/packages.txt (link: https://github.com/cloudposse/geodesic/blob/master/packages.txt#L52)
There are 3 methods to install:
- Alpine packages
- Make directives
- Docker multi-stage
The install folder is preserved for legacy reasons
It will be migrated to use the new make system for backwards compatibility
But for now we just have both
@tamsky i see what you’re saying
//packages/vendor/terraform/VERSION is used by the new system
for //packages/install/Makefile, we’re going to create a wildcard target to map the old targets to the new targets for backwards compatibility
geodesic appears to use the apk manifest in //packages.txt to install /usr/bin/terraform
correct - alpine packages tend to get installed in /usr/bin, so we preserved that
in the cloudposse/packages image, it installs /usr/local/bin/terraform directly from vendor (no apk)
this is determined by the INSTALL_PATH environment variable, which defaults to /usr/local/bin
we didn’t want non-apk packages going into /usr/bin
/usr/local/bin should have priority over /usr/bin (but i haven’t verified)
i’m open to discussion/suggestions on better ways of handling this
@yurchenko did you get to the bottom of it?
@yurchenko has joined the channel
2018-11-12
/usr/local/bin should have priority over /usr/bin (but i haven’t verified)
geodesic’s Dockerfile only lists two COPY directives related to the packages image:
COPY --from=packages /packages/install/ /packages/install/
COPY --from=packages /dist/ /usr/local/bin/
so everything in --from=packages /packages/bin/ is lost.
for now, I’ve fixed my problem of getting v0.11.10 in geodesic by adding:
+COPY --from=packages /packages/bin/terraform /usr/bin/terraform
@tamsky see make dist
the idea is that you copy everything you want into /dist
then COPY --from=packages /dist /usr/local/bin
staging things into the /dist folder lets you specify the packages in an ENV, so you only need a single COPY statement
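That /dist staging pattern might look roughly like this in a Dockerfile (a hedged sketch: the tags, the package list, and the make invocation are illustrative assumptions, not the exact cloudposse/packages interface):

```dockerfile
# Hypothetical sketch of the multi-stage /dist pattern described above.
ARG PACKAGES_VERSION=latest
FROM cloudposse/packages:${PACKAGES_VERSION} AS packages

# A single ENV selects which packages get staged into /dist
# (variable name and package list are illustrative)
ENV PACKAGES="terraform kubectl helm"
RUN make -C /packages dist

FROM cloudposse/geodesic:latest
# one COPY statement picks up everything staged into /dist
COPY --from=packages /dist/ /usr/local/bin/
```

The point of the indirection is that changing which tools ship in the image only means editing one ENV line, not adding a COPY per package.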
if it would help, i can do a quick zoom
@tamsky are you unblocked?
2018-11-13
I made good progress yesterday, so I feel pretty good about it all.
Thanks for the reply and offer to help.
allows you to specify the packages in an ENV,
this sounds like a key concept.
I’ll take a look at make dist today.
Do you have any existing techniques/strategies for caching the .terraform/plugins/linux_amd64/ directory? Maybe as a runtime volume-mount?
i want to do that
i tried it in vain at some point, but the problem is that it’s per terraform project
i wish .terraform/plugins could be mapped to /var/lib/terraform or something (and without symlinks)
but i haven’t found any envs to do that
i tried it in vain at some point, but the problem is that it’s per terraform project
it could be a greedy/opportunistic algorithm… all local images that match a pattern get mounted somewhere and an init job/script links/copies them into $HOME/.terraform/plugins
…
(and without symlinks)
sounds like you know something here. does terraform require the inodes in .terraform/plugins/linux_amd64/ to be regular files and not symlinks?
no, i’m just anti “symlink hacks”
especially when generating symlinks programmatically
if it’s a one-shot deal, like ln -s /lib64 /lib, that’s cool
i guess it’s more related to tfstate. we used to link tfstate to a persistent volume
but if the symlinks were stale or accidentally pointing to the wrong place, it could have catastrophic outcomes. .terraform/plugins should be pretty safe to symlink. Just wish there was an ENV for it.
the init-terraform script would be the place to do it.
well, tbh, I’d expect any mistakes in the tree or data below .terraform/plugins to be ignored by the terraform binary…
Seems like it’s an opportunistic cache, and the cache itself does not need versioning… since each provider binary within the cache is versioned.
To avoid surprising users by creating a global plugin directory they don't know about, we elected to auto-install plugins into the local .terraform directory to keep things contained. However, …
Does TF_PLUGIN_CACHE_DIR do what you want?
wha?? i missed that
that’s cool
yea, that’s what i want.
now can we have the same for modules?
References #15613 #16000 Similar to the update in 0.10.7 that cached plugins in a shared dir it would also be nice to be able to keep modules in a shared location to stop duplication on things like…
only if you trust .terragrunt-cache
oh, maybe we can setup geodesic to bind mount the local user’s .terraform/plugins, and that way the local user’s cache can be shared?
just set TF_PLUGIN_CACHE_DIR=/localhost/.terraform/plugins
we always mount $HOME to /localhost (so we can use .ssh among other things)
i caution though against using /localhost/.terraform unless your host os is linux
since you’ll (possibly) have conflicts unless plugins are versioned by arch
maybe use /localhost/.geodesic/terraform
plugins are versioned by arch
they are…
in the filenames?
in the tree structure
nice
this works… my mac’s homedir .terraform.d/plugins used to only have a darwin_amd64/ dir, now it also has a new linux_amd64/ dir
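Concretely, the shared plugin cache boils down to something like this (a sketch; inside geodesic, where $HOME is mounted at /localhost, the same directory appears as /localhost/.terraform.d/plugins):

```shell
# Share one terraform plugin cache across all projects.
# Terraform does not create the cache dir itself, so make it first.
export TF_PLUGIN_CACHE_DIR="${HOME}/.terraform.d/plugins"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
# subsequent `terraform init` runs reuse providers from this cache
# instead of re-downloading them into each project's .terraform/
```

Since providers are stored under per-platform subdirectories (darwin_amd64/, linux_amd64/), the same cache can safely be shared between the mac host and the linux container.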
now if we can just get rid of the ssh-add passphrase prompt
do you never pull private terraform modules?
I’d love it if my local OSX ssh-agent could be shared/forwarded…. like as described here: https://github.com/avsm/docker-ssh-agent-forward/pull/10#issuecomment-410510815
@avsm Thanks for the great idea you had here! I changed this so it doesn't need local volumes anymore. This makes it work with boot2docker on Linux as well. Even more, it works with any remote …
I can report success using geodesic. :smiley:
I use the root.cloudposse.co style custom Dockerfile for my environment.
I’m able to run terraform on source from my private repo.
do you never pull private terraform modules?
I do, but it’s the main tree I use, and I COPY it into my geodesic build.
I have two different make targets in my tree.
One directs terragrunt to use a module source that points to a private git repo,
the other uses the local directory as the module source (I use terragrunt ENV vars and include() directives to do this).
Given that my docker image has all the private tf source files I need, I just skip the ssh-add prompt.
i’ve wanted to forward the SSH agent as well
the socat based approach felt difficult to generalize for the average user
plus introduces more native dependencies
currently, i like that bash+docker is all that’s needed
on linux, it’s possible to bind mount the socket
but I assume you’re on OSX
yes, and therein lies a feature discrepancy between docker-machine and native linux
the other idea i’ve had is to run sshd inside the container
then instead of exec’ing, you treat it like a host
Expected behavior When mounting a directory containing unix sockets the sockets should function the same as they do on a Linux host. Actual behavior The socket is 'there', but non-functiona…
what do you think about that?
the terrible thing about ssh keyfile passphrases is that their encryption is terrible
hahah, but locally?? on your mac?? does it matter
we could -v ${HOME}/.ssh/authorized_keys:/root/.ssh/authorized_keys so it would be easy to ssh into the container. plus, if you have your keys in your OS keychain, then no passwords + agent forwarding.
The eslint-scope npm package got compromised recently, stealing npm credentials from your home directory. We started running tabletop exercises: what else wo…
yeah, it kinda does matter
hah do you know @lvh?
no, don’t know em
(i think he wrote that post)
I prefer to allow the OS keychain manager to manage secrets…. which we’re kinda looking the other way when it comes to aws-vault as well – we’re kind-of assuming that PBKDF2 is “good enough”
… and typing it all the time isn’t a burden
but taking a step back, i don’t understand why exposing an SSH daemon inside the geodesic container would be a bad thing (for local workstations) and how this “openssh key encryption” issue is related
they’re not related… you’ve proposed a different way in, that allows us to bring our agent with us….
right
the encryption issue stems from the current setup – and I don’t want to type the passphrase (my keychain manager does that for me)
aha
ok, thanks - now I get you.
so the way it works today, you don’t like for the aforementioned reasons.
a different way in, that allows us to bring our agent with us
this would be really smooth
if we moved to the sshd approach, that would be cool?
yeah – I could imagine a key in ~/.ssh/geodesic_authorized_keys or similar, since I’d guess not everyone wants a ~/.ssh/authorized_keys file at all.
yea - we could generate one for geodesic
detect the existence of the file in the wrapper, and set an ENV var in docker run which triggers the daemon to launch?
oh, “generate one for geodesic” => wrapper generates a new key and auth_keys file for geodesic access kinda thing?
I think we should add ENV TF_PLUGIN_CACHE_DIR=/localhost/.terraform.d/plugins to the main geodesic
thoughts?
2018-11-15
@OScar continuing this thread https://sweetops.slack.com/archives/CB3579ZM3/p1542300152238500
@here i might have asked this, but maybe it wasn’t clear. Using the Geodesic Framework, setting up CloudTrail so it sends logs to the central Audit account S3 bucket. All good there, I have that in place. My question is: what module do i use to send other AWS events to the bucket from all of the accounts?
what other events do you want to log?
as always, curious if you have any more suggestions
I think the reason I haven’t had many suggestions is because I haven’t been able to use it daily. That’ll change now.
@lvh has joined the channel
2018-11-19
@tamsky I’ve updated https://github.com/cloudposse/packages
so that now install/ targets are just proxied to vendor/<package>/install (all backward compatible)
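The backward-compatible proxying could be sketched in make roughly like this (a hypothetical sketch; target and path names are illustrative, not the actual Makefile):

```make
# Sketch: legacy `make <package>` targets in install/ delegate to the
# new vendor/<package> layout (paths are illustrative assumptions).
PACKAGES := $(notdir $(wildcard ../vendor/*))

.PHONY: $(PACKAGES)
$(PACKAGES):
	$(MAKE) -C ../vendor/$@ install
```

A pattern like this means new packages only need a vendor/<package> directory; the legacy entry points keep working with no per-package boilerplate.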
2018-11-20
What are folks’ thoughts on Terragrunt usage in Geodesic? I note some usage in places but not everywhere…
We haven’t committed to it wholeheartedly. Generally averse to wrappers as they often break interoperability.
But I do like how we can extend root modules (e.g. for adding users)
It could rather trivially be used everywhere in geodesic
To date, we have been using Docker multi-stage builds to copy and keep things DRY
I know one company moved to git submodules
So I think we have 3 good approaches with various trade-offs
But it’s a topic that I like to debate. I don’t feel like it’s 100% solved.
Part of what we want is a solution that is tool agnostic, which is why terragrunt is not ideal
We need to version kops, Helmfiles, and other things. Terraform isn’t the end-all-be-all
Indeed. adding more layers to geodesic such as Terragrunt makes things harder to rationalise
Although I totally see why it is helpful >_<
Part of my motivation for adding it was to show how flexible the strategy is that we have
That it works with terragrunt like it works with everything else
There are aspects of Terragrunt I don’t mind leaning on. Others, like the overrides, I’m not so sold on.
I like the easy state auto-init, env mapping, and the ability to create something like a poor man’s overlay filesystem (are those the overrides?)
In terms of the directory structure, recursively looking up parent directories until it reaches a terraform.tfvars allows only 2-level-deep child/parent overrides IIRC. Agree on the above likes, and before/after hooks
Feels weird using it when there is some overlap/similar things going on with geodesic itself
Yea so the terragrunt directory structure doesn’t make sense with geodesic reference architectures
We recommend sticking with our poly repo approach
@joshmyers when you say “overrides” are you talking about terraform’s *override.tf feature?
recursively looking up a directory until it reaches a terraform.tfvars, allows only 2 level deep child/parent overrides
I do this, but in a more generic way in the Makefile I use. It just looks for a placeholder filename that indicates the root of the environment:
export UPSTREAM_ROOT_PREFIX := $(shell FILE=.environment_root ; for i in . .. ../.. ../../.. ; do test -e $$i/$$FILE && echo "$$i/" && exit 0 ; done ; echo "unable-to-find-$$FILE" ; exit 1 )
Contribute to tamsky/terrabase development by creating an account on GitHub.
Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules. - gruntwork-io/terragrunt
We do something similar with the build-harness “autodiscovery”
export BUILD_HARNESS_PATH ?= $(shell until [ -d "$(BUILD_HARNESS_PROJECT)" ] || [ "`pwd`" == '/' ]; do cd ..; done; pwd)/$(BUILD_HARNESS_PROJECT)
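Both of those one-liners implement the same upward search, which can be sketched in plain shell (the function name and marker filename are illustrative):

```shell
# Walk up from the current directory until a marker file is found,
# printing the directory that contains it (the "environment root").
find_env_root() {
  marker="${1:-.environment_root}"   # hypothetical marker filename
  dir="$PWD"
  while :; do
    if [ -e "$dir/$marker" ]; then
      echo "$dir"
      return 0
    fi
    [ "$dir" = "/" ] && break        # reached filesystem root: give up
    dir="$(dirname "$dir")"
  done
  echo "unable-to-find-$marker" >&2
  return 1
}
```

This is the same "autodiscovery" idea: any subdirectory of a project can locate the project root without hardcoding relative paths.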
nice to see someone love make as much as we do
[email protected]=123456789012/
account-specific resources are placed within an AWS account directory.
the directory name combines the root account email and the 12-digit AWS account number.
that’s kind’a nice
@here pardon my ignorance, but i want to ask something that may make sense for others, just not clear to me:
The IAM user that is created for the root.cloudposse.co repository, which is used to manage other accounts. Can that be a federated user, since it assumes IAM roles? or?
so there’s often a two-phased approach here since a “virgin” account has no IAM users
so step 1, you use the master AWS account credentials to setup the scaffolding
part of that scaffolding is to provision an IAM user for yourself (and anyone else on the team)
step 2 is to use your personal IAM account to provision everything else
this is the ugly “coldstart” problem…
Ok so here is my scenario, I am thinking of doing this in the master account (without using the actual master account):
Login to the root account with the root credentials and do the following:
- Create new IAM group cloud_admin
- Assign AdministratorAccess policy to the group
- Create an IAM user with the name aws_admin
- Add the user to the group
- Enable MFA for the user (we recommend using Google Authenticator as a Virtual MFA device)
- Generate an Access Key ID and Secret Access Key for the user (we’ll need them to run scripts)
yea, that should be fine
So in this scenario for root, anyone in the cloud_admin group, including say my individual account, can run the Geodesic stuff, right?
assuming my individual account has keys and MFA enabled, of course
yep, that should work
Thanks @Erik Osterman (Cloud Posse)!!
this clarification helps me tremendously in explaining what account is needed to run this stuff
np.. yea, in the end, you just need some kind of “administrator” access. then you can go about provisioning all the other stuff.
2018-11-21
It looks like this error is only happening when I am inside the Geodesic shell…
I think maybe your AWS_DEFAULT_PROFILE is not set to use that profile?
Or try to remove the default profile from ~/.aws/config, if you have one
I don’t have a default profile in ~/.aws/config
Ok one other idea
Run “hwclock -s” in your container
It doesn’t seem to like your MFA token
So maybe clock is out of sync
oh you know what? i think you are right
We have seen that happen a lot in Docker for Mac
one thing about this account is that the guy who set this up, also setup the mfa so i had to use his barcode in order to add this to my mobile
hwclock yields this: 2018-11-22 01:41:54.996656+00:00
so that time seems to be different than my time
its tomorrow
for one thing
Though it’s GMT :)
Are the minutes and seconds similar?
✗ (none) ~ ⨠ hwclock -v
hwclock from util-linux 2.32
System Time: 1542851097.345229
Trying to open: /dev/rtc0
Using the rtc interface to the clock.
Last drift adjustment done at 1542851026 seconds after 1969
Last calibration done at 1542851026 seconds after 1969
Hardware clock is on UTC time
Assuming hardware clock is kept in UTC time.
Waiting for clock tick...
...got clock tick
Time read from Hardware Clock: 2018/11/22 01:44:58
Hw clock time : 2018/11/22 01:44:58 = 1542851098 seconds since 1969
Time since last adjustment is 72 seconds
Calculated Hardware Clock drift is 0.000000 seconds
2018-11-22 01:44:56.995687+00:00
but that does not seem like my local time
Did you try hwclock -s?
what aws-vault: error: Failed to get credentials for example (source profile for example-staging-admin): SignatureDoesNotMatch: Signature expired: 20180806T044229Z is now earlier than 20180806T1916…
i did, same issue, the date shows Nov 22 as well
Ok, can help dig in more when in front of a keyboard :)
i need a drink!
thanks @Erik Osterman (Cloud Posse)!
i’ll keep trying things
Maybe some tips in this issue
My osx system has IST time zone. When i run, docker info command, it shows a wrong system time which is few hours behind the osx host. docker info shows UTC time. I have properly configured time/ti…
i found this https://github.com/arunvelsriram/docker-time-sync-agent/ worth a try?
docker-time-sync-agent is a tool to prevent time drift in Docker for Mac’s HyperKit VM. - arunvelsriram/docker-time-sync-agent
Can try it out…
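For the record, the drift check/fix being discussed is roughly this (a sketch; hwclock needs a privileged container to reach /dev/rtc, hence the guards):

```shell
# Compare the container's system clock against the hardware clock,
# then resync. Drift here is what makes AWS reject MFA signatures
# with SignatureDoesNotMatch errors.
date -u                      # current container (system) time in UTC
hwclock -r || true           # read the hardware clock (may need --privileged)
hwclock -s || true           # set system time from the hardware clock
date -u                      # verify the two now agree
```

On Docker for Mac the drift typically appears after the laptop sleeps, so this has to be re-run (or automated, as with docker-time-sync-agent) rather than fixed once.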
@tamsky this is adjacent to what you were working on https://github.com/hashicorp/terraform/blob/master/tools/terraform-bundle/README.md
Terraform is a tool for building, changing, and combining infrastructure safely and efficiently. - hashicorp/terraform
¿¿¿ “yet-another-config-file” ???
/me wishes it would have been able to use our existing *.tf
-definitions of provider.[*].*.version
Yea…
Also want this for modules
To satisfy client request for vendoring
@Erik Osterman (Cloud Posse) i tried that agent, no bueno
I added this to the Dockerfile and now the date shows correctly, but it is like 2 seconds off!
2018-11-22
Ola!
quick question, is there a known cost to run the smallest setup of the full reference architecture
with aws org
@Jan it depends on the project. There is no cost to provision the resources that are just “metadata”, like VPC, subnets, all IAM stuff (users, roles, groups), accounts, etc.
for Route53, you just pay for each DNS zone you have (I believe it’s $2 per zone per month)
then you deploy EC2, RDS, Aurora, Elasticsearch, ElastiCache, for which you pay depending on the instance type and count
if you provision k8s using kops or EKS, you pay per instance type and count
then, you pay for S3 storage
and for network traffic
so, if you deploy the initial setup - VPC, subnets, accounts, IAM, Route53, S3 (for TF and kops states) - you pay just for Route53 and S3 storage (which is almost nothing)
yea thanks @Andriy Knysh (Cloud Posse)
im doing an evaluation of full org using geodesic
and figured I would do so on my own account to start with
mmm there is something I am not grokking
with aws-vault
following here https://docs.cloudposse.com/tools/aws-vault/
What’s the error?
ah nm
im being stupid
aws-vault login aws.tf-staging-admin
Enter passphrase to unlock /Users/xxxxx/.awsvault/keys/:
Enter token for arn:aws:iam::xxxxxxxxxxx:mfa/root-account-mfa-device: xxxx
aws-vault: error: Failed to get credentials for aws.tf-root (source profile for aws.tf-staging-admin): AccessDenied: Roles may not be assumed by root accounts.
I had the staging account set to the root account id
Ah yes, copy pasta gets me every time
yarp
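The gotcha here, as a config sketch with placeholder names and IDs: the source_profile must be backed by IAM user credentials, because AWS refuses role assumption with root account credentials.

```ini
# ~/.aws/config (illustrative values)
[profile example]
# backed by IAM *user* keys in aws-vault -- not the root account's keys
region = us-west-2

[profile example-staging-admin]
region = us-west-2
role_arn = arn:aws:iam::STAGING_ACCOUNT_ID:role/OrganizationAccountAccessRole
source_profile = example
```

If STAGING_ACCOUNT_ID is accidentally set to the root account's ID, or the source profile holds root credentials, aws-vault fails with "Roles may not be assumed by root accounts."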
so I would almost do this with terraform local-exec and render templates
Isn’t that a catch-22? Would love if we could automate it, but we need the profile configuration before we can run terraform
yea totally, at some point bootstrapping has to start manually
and pass the credentials for bootstrap via a .tfvars file which the local-exec kills after it runs
also what the docs don’t mention is that you should have 2 accounts setup to start with
2 AWS accounts ?
root and staging
for aws orgs
maybe I have skipped ahead in the docs or something
[profile example-staging-admin]
region=us-west-2
role_arn=arn:aws:iam::$aws_account_id_for_staging:role/OrganizationAccountAccessRole
mfa_serial=arn:aws:iam::$aws_account_id_for_root:mfa/[email protected]
source_profile=example
Hrmmm so maybe doc issue. We programmatically create all accounts
aws accounts?
Yea
Sec
o.O
then I am in the wrong place on the doc for sure
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
brb, dinner
If you want, maybe open a “running issue” with all the confusing things you hit. Then we can address those.
Hah, eating breakfast. Opposite ends of the world for sure.
Where in the docs should I be starting from, and how should I follow the guide end to end?
So I think I might understand how I ended up where I was in the docs. The quick start guide says to setup local env and tools. Linked in there is the page for aws-vault setup
Which already has assumptions based on things not yet done
“Create an IAM user with the name admin” I assume only CLI access?
Hrmmm can you send me a link?
That admin user is another step in the bootstrap process before you can assume roles as yourself.
2018-11-23
@Jan are you able to login to the root account from geodesic?
So my current place is:
- root account is there
- aws-vault is setup with root admin user
- I can login or exec
- it’s when I then want to do the IAM make init that I get prompted for the unset account id var
I am
so before #4, let’s provision the other accounts
Ah I see
cd ...
init-terraform
ok 2 secs
terraform plan <whatever account you need>
Should this be doable without cloning down the repo?
cd accounts && terraform plan testing
?
@Erik Osterman (Cloud Posse) are you maybe able to help me on this?
@Jan sorry missed your reply
can you give more details?
Gimme a few minutes
if you look at the Dockerfiles for each environment, e.g. prod https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile
we copy the root modules from the terraform-root-modules repo into the geodesic container https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L35
copy whatever modules you need for that particular env
in the root account/env, we copy the accounts module https://github.com/cloudposse/root.cloudposse.co/blob/master/Dockerfile#L61
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
because we create all the accounts in root (billing/master)
then, you run the geodesic container, cd into the folder, and run terraform plan/apply
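A hedged sketch of that per-environment Dockerfile pattern (the tag and module paths are illustrative; check the actual prod.cloudposse.co Dockerfile for exact values):

```dockerfile
# Vendor only the root modules this environment needs from the
# terraform-root-modules image into a geodesic-based image.
ARG VERSION=latest
FROM cloudposse/terraform-root-modules:${VERSION} AS terraform-root-modules

FROM cloudposse/geodesic:latest
# copy just the modules this environment actually uses
COPY --from=terraform-root-modules /aws/accounts/ /conf/accounts/
COPY --from=terraform-root-modules /aws/tfstate-backend/ /conf/tfstate-backend/
```

Each environment repo (root, prod, staging, …) picks its own subset of modules this way, so the shared terraform-root-modules repo stays the single source of truth.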
hey @Jan schedule is crazy today. can we maybe setup some time this week? https://calendly.com/cloudposse
i’ll help ya out
That would be awesome
Done, thanks mate
@Andriy Knysh (Cloud Posse) thanks for the info, will explore that this evening
@Jan please give more details, I could help with that
I will do so in a few hours when home
Right lets take a look see
So currently I have root/ and testing/ (following the cold start)
root env run, assume role done. tfstate backend all done
@Andriy Knysh (Cloud Posse) now is the point where I got lost
after tfbackend is done
✓ (aws.tf-root-admin) tfstate-backend ⨠ ls
Makefile README.md main.tf outputs.tf scripts terraform.tfstate terraform.tfstate.backup terraform.tfvars.example
⧉ root.aws.tf
✓ (aws.tf-root-admin) tfstate-backend ⨠ cd ..
⧉ root.aws.tf
✓ (aws.tf-root-admin) ~ ⨠ ls
Makefile README.md account-settings accounts atlantis atlantis-repos cloudtrail iam organization root-dns root-iam terraform.tfvars tfstate-backend users
⧉ root.aws.tf
✓ (aws.tf-root-admin) ~ ⨠ cd accounts
-> Run 'init-terraform' to use this project
⧉ root.aws.tf
✓ (aws.tf-root-admin) accounts ⨠ ls
audit.auto.tfvars.example audit.tf dev.auto.tfvars.example dev.tf main.tf prod.auto.tfvars.example prod.tf staging.auto.tfvars.example staging.tf testing.auto.tfvars.example testing.tf
-> Run 'init-terraform' to use this project
⧉ root.aws.tf
so I guess now the idea, based on your description would be to jump into the accounts dir and TF plan / apply
but given that it will try to run all the .tf files, that doesn’t make sense
yea, so terraform-root-modules contains everything that could be used in any project. Three possible ways of doing it:
- You don’t have to copy all folders and all files if you don’t need them. In Dockerfile, copy just what you need, e.g. accounts/main.tf + accounts/testing.tf + …
- Fork/copy our repo and update it to suit your needs
ok I see
so I will just rm for now while testing
- If you want to use our repo AND you copied all files, then use terraform plan -target=... to provision just the resources you need
ok lets go with 3
lol
⧉ root.aws.tf
✓ (aws.tf-root-admin) accounts ⨠ terraform plan -target=testiing.tf
var.audit_account_email
Audit account email
so to provision just the testing account, you’d do: terraform plan -target=aws_organizations_account.testing
ah ok
so yes, if you copied all the files, it will ask you for the missing vars
✓ (aws.tf-root-admin) accounts ⨠ terraform plan -target=aws_organizations_account.testing
var.audit_account_email
Audit account email
audit is not a dependency of the testing env, is it?
you can add them to the Dockerfile and make them empty (for now) so it would not ask every time
no, it just asks for all missing vars
Terraform asks
ah alright
so you just unset them all in the Dockerfile
So let’s say I want to create ALL the envs
in Dockerfile, add all the missing vars https://github.com/cloudposse/root.cloudposse.co/blob/master/Dockerfile
for those that you don’t want/need, set an empty string
later, when you want to create more envs, update the empty values with real ones and provision the new envs
are you referring to filling or emptying them? https://github.com/cloudposse/root.cloudposse.co/blob/master/Dockerfile#L24-L42
So maybe I misunderstood
I was under the impression that at this point in the cold start I would have geodesic create the other AWS accounts
you need real values for those envs that you are creating (testing in your case)
how would I have IDs or NS servers for the other envs
for those envs that you don’t need yet, you can make them empty strings so TF will not ask for the values
unless the account and zone is a prerequisite
update the email
provision the account -> get the ID
update the Dockerfile with the account ID
“provision the account -> get the ID” using geodesic?
yes, root
geodesic
ok, as a suggestion then I would have these run as different layers or something
or as make targets
sec lemme try
it’s cold start, so it’s not just one or a few commands to run
you have to provision some resources, then update the Dockerfile, restart geodesic, then provision the next resource
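The loop above, sketched as commands (the resource address comes from earlier in this thread; the make targets are the repo's usual ones):

```shell
# 1. Inside the root geodesic shell: provision one account
terraform plan -target=aws_organizations_account.testing
terraform apply -target=aws_organizations_account.testing
# note the new account ID in the output

# 2. Outside geodesic: edit the Dockerfile (account ID, emails),
#    then rebuild and reinstall the shell
make docker/build
make install

# 3. Restart geodesic and provision the next resource
```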
so then maybe the docs need a description about unsetting and setting them, etc.
we have that in the docs (except for the accounts, which were added after the docs were created)
could you point me to the place in the docs, for my own sanity?
sorry, we have added the accounts to the doc already
so… before that here https://docs.cloudposse.com/reference-architectures/cold-start/#
Update the TF_VAR_root_account_admin_user_names variable in Dockerfile for the root account with your own values.
I was reading it top to bottom, which doesn't work in this flow
yea sorry, the doc was not updated for that. The user names are here https://github.com/cloudposse/root.cloudposse.co/blob/master/conf/root-iam/terraform.tfvars
but you can add them to the Dockerfile as well
So I am not seeing the env vars in env
TF will read those vars from the .tfvars file or from TF_VAR_ env vars
⧉ root.aws.tf
✓ (aws.tf-root-admin) accounts ⨠ pwd
/conf/accounts
⧉ root.aws.tf
✓ (aws.tf-root-admin) accounts ⨠ env | grep -i TF_VAR
TF_VAR_region=eu-central-1
TF_VAR_root_domain_name=root.aws.tf
TF_VAR_stage=root
TF_VAR_namespace=aws.tf
TF_VAR_parent_domain_name=aws.tf
TF_VAR_account_id=xxxxxxxxx
TF_VAR_aws_assume_role_arn=arn:aws:iam::xxxxxxxx:user/admin
TF_VAR_local_name_servers=["", "", "", ""]
⧉ root.aws.tf
I did drop out and do a make init / make docker/build / make install
and re run
you need to add them to either the .tfvars file or to the Dockerfile
they are in the dockerfile
let me just remove all local copies of this Dockerfile
been doing too much debugging; I'm in a weird place
think I spotted it
the two env vars in the root Makefile need to be updated
export CLUSTER ?=
export DOCKER_ORG ?=
then make docker/build
then make push
after which make install
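i.e. something like this at the top of the repo's Makefile (example values taken from the image name used later in this thread):

```makefile
# Override these before running make docker/build && make install
export CLUSTER    ?= root.aws.tf
export DOCKER_ORG ?= jdnza
```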
@Jan yes, `ENV DOCKER_IMAGE="cloudposse/root.cloudposse.co"` needs to be updated
you don’t need to push it
make install installs it locally
I did
something else is going on on my env
locally then
cause this is not playing as expected
ENV DOCKER_IMAGE="cloudposse/root.cloudposse.co"
was always updated
(even if it was not, should still work since it’s a local docker image)
yea, something else is up
what’s the issue?
sorry kids were waking up
I have cloned all the repos down clean
starting at 0
Change the IAM user names for the accounts
this relate to https://github.com/cloudposse/root.cloudposse.co/blob/master/conf/root-iam/terraform.tfvars
yes
and they could be added to the .tfvars file OR as env vars to the Dockerfile
cool thanks
I would stick with 1 method, not both honestly
Agree with you. It was changed recently for some other reasons. You can open issues in docs and root modules, and we’ll review. Thanks again
cool cool, again im making notes as I go
I honestly might just put in place Go templates and have terraform render them
or some glue code on top
I will run some of the guys at the office through this all sooner rather than later too
so will do a docs PR first
so…
in root
make init
.......
make docker/build
...
Status: Downloaded newer image for cloudposse/geodesic:0.38.0
---> edc6e5ea3362
Step 3/46 : ENV DOCKER_IMAGE="jdnza/root.aws.tf"
---> Running in d378f33e07fe
Removing intermediate container d378f33e07fe
---> ce3e38018eaa
Step 4/46 : ENV DOCKER_TAG="latest"
---> Running in 1ece15da606a
......
....
Successfully built 76c97523011f
Successfully tagged cloudposse/root.cloudposse.co:latest
(⎈ N/A:N/A) ~/dev/aws.tf/root.aws.tf master ● make install
Password:
# Installing root.aws.tf from jdnza/root.aws.tf:latest...
Unable to find image 'jdnza/root.aws.tf:latest' locally
docker: Error response from daemon: manifest for jdnza/root.aws.tf:latest not found.
(⎈ N/A:N/A) ~/dev/aws.tf/root.aws.tf master ● cat Dockerfile | grep -i DOCKER_IMAGE
ENV DOCKER_IMAGE="jdnza/root.aws.tf"
updated the Makefile and it's happy
after make docker/build
finally on track
aws_organizations_account.audit: Error creating account: ConcurrentModificationException: AWS Organizations can't complete your request because it conflicts with another attempt to modify the same entity. Try again later.
status code: 400, request id: 83cad504-f29f-11e8-ba07-3d9eed4b75b4
* aws_organizations_account.dev: 1 error(s) occurred:
* aws_organizations_account.dev: Error creating account: ConcurrentModificationException: AWS Organizations can't complete your request because it conflicts with another attempt to modify the same entity. Try again later.
status code: 400, request id: 83cad503-f29f-11e8-ba07-3d9eed4b75b4
hahahah
plan & apply again and its done
“PROVISION IAM PROJECT TO CREATE ROOT IAM ROLE”
cd iam
Comment out the `assume_role` section in `iam/main.tf`
should be
cd root-iam
Comment out the `assume_role` section in `root-iam/main.tf`
thinking about it now I would also have expandable example output from these commands
or glue code to replace all the copy pasta
well as far as r53 zones
gonna sleep, just past 2am here
Let me know how it goes
The docs need to be updated to reflect the latest changes to the modules and project structures
Yea I see that, hopefully I can contribute some of those updates
I have 2 dirs
it will output the account ID
root and testing
I am using the root bin now yea?
then add that account ID to the Dockerfile
in the other account’s repo (testing in your case)
let me quickly do a git reset --hard
I have been making all sorts of changes debugging this
on a train so internet is slow , gimme a minute
ok
so in short, you go to the root account and provision all other accounts; they will be added to the org automatically since the root account is the master/billing in this case
that makes sense
then you update all Dockerfile(s) with the account IDs
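A sketch of the account resource on the root side (Terraform 0.11-era syntax; the name and email are examples, not the repo's actual values):

```terraform
# Member account created from the root (master/billing) account;
# it is added to the AWS Organization automatically on apply
resource "aws_organizations_account" "testing" {
  name  = "testing"
  email = "aws+testing@example.com"
}

# The generated account ID, which you then copy into the child
# repo's Dockerfile (e.g. as a TF_VAR_*_account_id env var)
output "testing_account_id" {
  value = "${aws_organizations_account.testing.id}"
}
```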
I knew there must be something I was missing
yea
then follow the docs https://docs.cloudposse.com/reference-architectures/cold-start/
sorry for the confusion, we’ll update the docs and ref architectures for that
alright, let me get back to that point
hehe all good
next iteration will be better
I have read most of the docs and loads of the TF / kops / k8s code now
nice
if you have any questions or concerns, please just open issues, we’ll get to them ASAP
btw is there any reason I can't use the same value for parent domain name and namespace?
Ideally I will send a pull request from fork with the docs stuff
we use the same namespace for all envs
some English corrections, missed word etc
NS yes
ah yes
two cases here:
- Use the same parent domain for all envs, e.g. testing.aws.tf, prod.aws.tf, etc.
yea totally with you there
- Use diff TLS for the envs/accounts, e.g. example.com for prod, example.qa for staging, etc.
tld’s or tls?
top-level domains, TLDs
kk
figured
im like a human english linter
and since those DNS zones are in diff AWS accounts, we use DNS zone delegations - add the name servers for prod.aws.tf to the aws.tf DNS zone
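That delegation can be sketched in Terraform like this (standard `aws_route53_zone`/`aws_route53_record` resources; `var.parent_zone_id` is an assumed input):

```terraform
# Child zone, provisioned in the prod account
resource "aws_route53_zone" "prod" {
  name = "prod.aws.tf"
}

# Delegation: an NS record for prod.aws.tf in the parent aws.tf zone
# (which lives in the root account)
resource "aws_route53_record" "prod_ns" {
  zone_id = "${var.parent_zone_id}"
  name    = "prod.aws.tf"
  type    = "NS"
  ttl     = "300"
  records = ["${aws_route53_zone.prod.name_servers}"]
}
```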
yea
with you there too
so I essentially have done e2e tooling similar to what you have, many times over the years in TF
just with less abstraction and less nice docker exec env glue code
so the entire design idea fits sooooo perfectly with my likes
yea i see
really excited to help contribute back
thanks for testing and using it
please, any issues or improvements, let us know
for sure
have a list I have built up which I will go over again when there is a new tag for docs
as it hasn't been deployed since the last update
o.O
-> Run 'assume-role' to login to AWS
⧉ root.aws.tf
✗ (none) ~ ⨠ assume-role
Enter passphrase to unlock /conf/.awsvault/keys/:
Enter token for arn:aws:iam::xxxxxxxxxx:mfa/admin: 387674
2018/11/23 14:40:40 Request body type has been overwritten. May cause race conditions
* Assumed role arn:aws:iam::xxxxxxxx:user/admin
* Found SSH agent config
* syslog-ng is already running
* Screen resized to 59x165
⧉ root.aws.tf
✓ (aws.tf-root-admin) ~ ⨠
2018/11/23 14:40:40 Request body type has been overwritten. May cause race conditions
might be from my train internet
haha fighting internet here
mmm
I might just give up and do this when I'm home in a few hours
We never tested that on high speed trains :)
lol
well I am
yea will pick this up later
2018-11-25
2018-11-27
2018-11-30
set the channel topic: https://github.com/cloudposse/geodesic