#geodesic (2019-01)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2019-01-03
Too much and !
haha
2019-01-04
in the tf root modules is there a list someplace as to what all the backing services relate to?
can you expand on “relate to”?
what Explain this mind warping concept Introduction When you run your infrastructure with geodesic as your base image, you have the benefit of being able to run it anywhere you have docker support. …
Nice write up! I thought of an edge case I think - not sure unless I can try it and see though.
Left a comment
2019-01-05
2019-01-07
Not sure if I configured something wrong, but is there a way to keep an assume-role session active indefinitely?
[2]+ Stopped aws-vault exec ${AWS_VAULT_ARGS[@]} $role -- bash -l
-> Run 'assume-role' to login to AWS
⧉ testing
✗ (none) ~ ⨠ assume-role
Enter passphrase to unlock /conf/.awsvault/keys/:
aws-vault: error: Failed to start credential server: listen tcp 127.0.0.1:9099: bind: address already in use
Hrm…..
so yes, it should be possible. We do run the aws-vault server in the background
I am using a VM (AWS WorkSpace), so it doesn’t work for me
aws-vault: error: Failed to start credential server: listen tcp 127.0.0.1 bind: address already in use
This doesn’t look right
Are you using a recent version of geodesic?
It’s only annoying because I find I have to exit out of geodesic completely before I can successfully assume-role again.
hrmmm
If that’s the case, that’s a bug
yeah i just rebuilt FROM cloudposse/geodesic:0.57.0
ok, that’s recent
sec
to help me triage this…
are you running multiple geodesic containers at the same time?
if you run your geodesic shell and then run ps uxaww
do you see aws-vault server running?
when you start the shell, do you see the message:
* Started EC2 metadata service at http://169.254.169.254/latest
ugh
(was hoping for the opposite order of thumbs)
aws-vault exec --assume-role-ttl=1h --server lootcrate-testing-admin -- bash -l
guessing im running into problems at the 1h mark
ohh
ok, try this:
AWS_VAULT_ASSUME_ROLE_TTL=24h
set that in the Dockerfile
and rebuild
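for reference, that presumably means adding a line like this to your Dockerfile before rebuilding (a sketch of the suggestion above):
ENV AWS_VAULT_ASSUME_ROLE_TTL=24h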
ok
fyi aws-vault: error: Maximum duration for assumed roles is 12h0m0s
hrm…
ok, well start with that I guess
now, for indefinite, that “used to” work.
I wonder why I don’t see this [2]+ Stopped aws-vault exec ${AWS_VAULT_ARGS[@]} $role -- bash -l
ok this is getting somewhere:
aws-vault: error: Failed to get credentials for lootcrate (source profile for lootcrate-testing-admin): ValidationError: The requested DurationSeconds exceeds the MaxSessionDuration set for this role.
status code: 400, request id: ef5c9f5d-12c6-11e9-bd1b-b5609fd02acf
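as an aside, a role's MaxSessionDuration can be raised (to at most 12h) with the plain AWS CLI; a hedged sketch, role name hypothetical:
aws iam update-role --role-name my-admin-role --max-session-duration 43200   # 43200s = 12h, the AWS maximum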
oh interesting.
(a) I didn’t know it was possible to scope it to a role (b) I feel like it’s a red herring since this used to work
@daveyu what version of geodesic are you using?
see above
0.57
so the way it’s supposed to work is actually unrelated to that ttl
basically, it could be 60 seconds
the way it’s supposed to work is when something needs AWS access, it uses the metadata api
that is proxied by aws-vault server
which mocks the API
then it fetches some temporary credentials.
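a quick way to sanity-check that mock from inside the shell (assuming the server started) is to hit the standard EC2 metadata path:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# prints the role name; append it to the path to fetch the temporary credentials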
Can you open an issue against cloudposse/geodesic with this info
sure thing
PSA: there have been a lot of awesome updates to geodesic in the past month
if you haven’t upgraded, do give it a shot.
if you’re using k8s, then the kube-ps1 prompt is really slick
if you want to run multiple k8s clusters (kops) per account, that’s now supported using the direnv pattern
we’re using geodesic regularly with #atlantis, so if that’s interesting hit me up
we’ve added a script called aws-config-setup which makes it easier for users to set up aws-vault
(no copy pasta)
we’re adding iam-authenticator support to kops
we’ve enhanced kubens and kubectx autocomplete with fzf
we’ve added tfenv to automatically export envs for terraform consumption
for terraform-root-modules, we’ve added support for multiple new account types: identity, corp, security
as well as generalized the pattern so we can add more account types easily
we’ve started embracing SSM with terraform so that we automatically populate settings for consumption with chamber
(no more copy pasta)
anyways, if any of this sounds interesting and you want more info/demo, hit me up.
I think CloudPosse needs a newsletter
I’d love to get updates like this weekly or monthly
Newsletter Signup
thanks @Andriy Knysh (Cloud Posse)! signed up
@sarkis yes - we’re well overdue for the next edition
I’ve not had a chance to put it together
catching up with all the new geodesic changes
A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster - kubernetes-sigs/aws-iam-authenticator
right now, we’re working on getting the slack archives published on a static site so we can have the history available
are you guys looking at k8s now?
not yet
at least a quarter out
im just staying in touch with everything - getting ready
that’s cool - at least it’s on the horizon
2019-01-08
Almost all the additions I will need
Will do an upgrade today
@Jan we still need to work on injecting kops to existing subnets/vpcs
yes though that will fall to my backlog as I will do the vpc creation via kops for now
it’s very easy though
simply populate the cluster manifest with the vpc id, the subnet ID’s and make sure the cidr is the same
then just kubectl create -f
Yes, just the devil in the details. :-)
@Jan regarding the kops manifest, we’ve been using gomplate (go templates) because terraform templates are not enough (no conditionals)
like you said, easy enough to parameterize
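for a flavor of why gomplate helps here, a minimal sketch of a conditional that terraform’s pre-0.12 template syntax can’t express (the variable and keys are made up):
STAGE=prod gomplate -i '{{ if eq (getenv "STAGE") "prod" }}masterCount: 3{{ else }}masterCount: 1{{ end }}'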
Though I think maybe we would create a separate manifest since the changes would be significant.
have a look at the latest changes for how we provision kops settings from terraform
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
I am quite fond of it. We write them to SSM from terraform, consume them with chamber, and call build-kops-manifest or kops
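presumably that flow is roughly like this (a sketch; using kops as the chamber service name is my assumption):
chamber exec kops -- build-kops-manifest
# chamber pulls the SSM parameters into the environment, and the script renders the manifest from them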
Cool
Will be re-setting everything up with the cleaner reference architecture tomorrow morning
And doing kops vpcs
2019-01-09
in the new reference arch…
the org cidr range, whats the expectation there?
# Network CIDR of Organization
org_network_cidr = "10.0.0.0/8"
org_network_offset = 100
org_network_newbits = 8 # /8 + /8 = /16
that all the sub accounts fit within that cidr?
So at this point I would love to have the new reference arch make root have a --dry-run option so I can still go verify and edit all the configuration before it starts bootstrapping
i.e. a make root/init that does not apply
cheers
and then despite having set the region to eu-central-1 there is a hard-coded parameter, or a variable that’s only using its default of us-west-2, for the tfstate-backend s3 bucket
* module.tfstate_backend.aws_s3_bucket.default: 1 error(s) occurred:
• aws_s3_bucket.default: Error creating S3 bucket: IllegalLocationConstraintException: The us-west-2 location constraint is incompatible for the region specific endpoint this request was sent to.
simply adding the following to the Dockerfile.root solves it for this case
# Jan hacking
ENV TF_VAR_region="${aws_region}"
@Jan
that all the sub accounts fit within that cidr?
yes, the idea was that we could peer the VPCs from any of the accounts, so the VPCs should have non-overlapping ranges
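assuming the account ranges are carved with terraform’s cidrsubnet() using the offset as the network number (my assumption from the variable names above), the math checks out:
echo 'cidrsubnet("10.0.0.0/8", 8, 100)' | terraform console
# 10.100.0.0/16, i.e. /8 plus 8 newbits yields a /16 per account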
thanks for finding the issue, we’ll fix that
totally
and I do like the setting of the cidr’s per account/vpc
but being locked into a pre-calculated range doesn’t fit, especially for me
You can override them per account
we only calculate it if account_cidr (I think) is not set
perfect
and I’m trying to work out how to use it
same as I don’t want to provision the aws_orgs part
as I have the accounts already
thanks @Jan
any suggestions and improvements are welcome
it was the first version of the ref architectures, definitely has room for improvements
I have forked the 3 projects and will work on fixing the tf state bucket var
Already validated it works
And will try run a modded version of the reference architecture that fits my needs
we’re using tfenv instead of using TF_VAR_ in the Dockerfile
I think I found the issue with the tfstate-backend vars
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
should those not rather be ENV TF_VAR_
no more TF_VAR_
ever
mmmm really
standardize around normal envs
that can be used by kops, terraform, scripts etc
so I’m trying to track down why it’s reverting to the default value of us-west-2
use tfenv to export them to terraform
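for context, the underlying terraform mechanism tfenv automates is just the TF_VAR_ convention:
export TF_VAR_region=eu-central-1
terraform plan   # terraform reads var.region from the environment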
hrmm
what do you have in the root.tfvars config?
# The default region for this account
aws_region = "eu-central-1"
hrmm
ok, let’s address on our call
I ended up with everything getting added in eu-central-1 and an error from s3 when it tried to create the bucket with region set to us-west-2
ohhh
that’s TF_STATE_BUCKET_REGION
we don’t set that in the bootstrap (oversight)
I got around it by adding ENV TF_VAR_region="${aws_region}" to the Dockerfile.root
ah!
good catch
(let me confirm)
slightly wrong
TF_BUCKET_REGION
the problem when going multi-region with terraform is where the statefiles belong
hrm
so im not trying to do multi region
lol
lets clarify it later
ok
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
here’s the other problem with the coldstart
you’re right
i’ll tell you the fix
yea I saw the default value
add a config template here:
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
for tfstate-backend
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
then add something like that
then it will generate a terraform.tfvars for that project
since each project might be in a different region
(so there’s nothing wrong with setting TF_VAR_region in the Dockerfile - just I’m not advocating it anymore)
I like the explicitness of terraform.tfvars
will happily stick to the new convention
… anyways this is a
if you track a list of these issues I’ll get them fixed
that worked great btw, thanks
2019-01-10
is anyone using the new reference architecture with geodesic?
I mean anyone in here that isn’t you; I feel bad bombarding just you with questions
Yeah probably not…. this is so new that I don’t think anyone has had the chance to run through it yet.
Hehe then I will be the crash test dummy
got make root working
though make children gets:
make[1]: *** No rule to make target `staging/validate'. Stop.
make: *** [staging/validate] Error 2
Hrmmm that’s odd.
Those targets are generated from the ACCOUNTS_ENABLED macro
so I did make root - all passed, users created etc
then directly to make child
ACCOUNTS_ENABLED = staging prod testing data corp audit
so in the artifacts the Makefile.env is populated
so I think in the root makefile where we previously removed the --export-accounts argument, that has had a knock-on effect later
busy testing
mm no
2019-01-11
wow the removal and deletion of aws org accounts is a pita
yep, TF can’t delete them https://github.com/cloudposse/reference-architectures#known-limitations
Oh yea not even trying via tf
Doing it manually and it’s a grind
2019-01-14
What’s the rationale for having the reference architecture route 53 zones public?
I get that you need to have a vpc existing in order to create a private zone
Mmmm actually
Re reading the tf docs I need test it
@Jan you need them public to be able to access the resources in the clusters, e.g. us-west-1.prod.cloudposse.co is to access the k8s cluster in prod
Absolutely not the case
What I want is for the public to access an ELB via a set record that has nothing to do with the zones I would use for env.cloudposse.co
Why leak all the info about all the environments you have and the record in them
I will change our version of the ref arch to build internally
So for example.. Have cloudposse.co as a public zone with an associated private zone
All internal services should have internal zones
It is an anti-pattern to expose anything (including DNS host information) for anything not intended to be publicly consumed
I normally do what I am describing with split horizon dns
Especially so with any direct connect or vpg/vpn peering
Ah Yea aws call it split-view
@Jan you can do it with private zones as you described
Yes
another reason to have public zones is when you use kops
so I guess there are many diff use-cases here
What I was asking is, given the focus on security in geodesic, this seems wrong in the reference architecture
Kops works perfectly fine for private
Kops has no hard reliance on public r53 zones
I will rework our fork of the reference architecture tonight and see if I can resolve this
btw, kops works without any DNS at all
it supports gossip mode
we used this with our caltech vanvalen project
2019-01-15
is it safe to assume this is the current state to follow?
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! https://slack.cloudposse.com/ - clou…
for getting k8s into geodesic
Cool so I have kops creating a cluster in a vpc with a private hosted zone
all working
need to polish a bit
and stuff
but I have my previous idea working: I define my vpc and launch k8s into it
Nice @Jan
also using the kops-backing-services
mostly
so with the cluster running as per the geodesic aws/kops method
mostly
helmfiles ⨠ helmfile apply
Adding repo coreos-stable https://s3-eu-west-1.amazonaws.com/coreos-charts/stable
"coreos-stable" has been added to your repositories
Updating repo
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "coreos-stable" chart repository
...Successfully got an update from the "cloudposse-incubator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Comparing prometheus-operator coreos-stable/prometheus-operator
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Error: plugin "diff" exited with error
err: exit status 1
failed processing /conf/helmfiles/releases/prometheus-operator.yaml: exit status 1
Is rbac being taken into account?
rbacEnable: {{ env "RBAC_ENABLED" | default "false" }}
Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles
@Jan by default our helmfiles don’t use RBAC
I am asking if in the geodesic reference arch rbac is being used
ah nm
read your response wrong
so….. I can understand it’s easier not to
but it’s much much worse
agree
do you have all the rbac resources (service account, role binding, etc.) provisioned in the cluster?
just added
more exploring how much is or is not dealt with in the current ref arch
and extending it to do what I need
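for reference, the usual fix for that helm 2 / tiller RBAC error is along these lines (a sketch, not necessarily how the ref arch wires it up):
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade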
HAHAHA
Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
Error: plugin "diff" exited with error
The Kubernetes Package Manager. Contribute to helm/helm development by creating an account on GitHub.
this by itself is a better place to start
Hey; I love the centralised management of roles/users using this setup; but how would you restrict a user’s access at a sub-org level with the current setup? Say for example you have a user jim in root who only has access to the data org, and you want to restrict that to just elasticache in the data org.
there are a few options
1) create more groups with fine grained access control like you describe; then add users to those groups.
2) add the policies directly to the IAM user.
I like (1) the best.
what you describe is a highly specific requirement (not wrong!), so we haven’t tried to come up with any patterns to generalize that.
…perhaps after seeing more patterns like this, we can whip up a module for it
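a rough sketch of option (1) with the plain AWS CLI (the group name is hypothetical; jim is from the example above):
aws iam create-group --group-name data-elasticache-admins
aws iam attach-group-policy --group-name data-elasticache-admins --policy-arn arn:aws:iam::aws:policy/AmazonElastiCacheFullAccess
aws iam add-user-to-group --group-name data-elasticache-admins --user-name jim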
I’ll have a prod around, the less unicorns the better
cool
happy to give you a high level overview / demo of what it all looks like
I’m guessing the simplest way would be to create a role within the sub-organisation to (as per the example) allow only access to one area and a group. tbh I probably need to RTFM on iam + suborgs. Previously we’d manage the users in each org (which is a headache in itself)
Love the wrapper around aws-vault and the sts usage; it’s a really nice way to standardise access
right - avoid managing users in suborgs
yea, the geodesic shell has made it A LOT easier to distribute an advanced toolchain
we used to try to do it all “natively” (e.g. on osx) and always ran into problems
old version of XYZ package
“works on my machine”
communicating upgrades etc
We’re a mix of windows / nix / mac so it’s good to see something that won’t turn me from a developer into desktop support
do you guys run WSL?
Docker is a godsend for tooling
we have early support for that.
yeah all in WSL
a few community members were using geodesic in that setting.
docker for windows; wsl talking over tcp. Works relatively seamlessly
Make vs make :(
We’re mostly using the ref setup (setup individually) for the org management. Our actual infrastructure in the orgs will be a tad more hybrid as we have our own terraform modules etc / splits across regions
Tool chains in docker are the future
Yeah docker’s pretty sweet for simplifying tooling; we already use it as an ops tool for running scans and long custom chains of tools to analyse deployments / nmap / etc. Kubernetes is a mixed bag though; it’s like creating a magic bag for developers to throw half-baked code into
Good work chaps; it’s nice to see an opinionated, secure-by-default design
thanks @chrism!
2019-01-16
I was going over geodesic, I’m trying to understand what goofys is used for; I understand you mount some s3 buckets, but what for?
as afaik you use terraform backends, so state should work/happen through that
@pecigonzalo it was used to mount secrets from s3 to a local filesystem
I think @Erik Osterman (Cloud Posse) is wanting to move away from it and instead use SSM
OK, that makes sense, like sensitive tfvars and other variable/config/secrets
thanks
mounts private keys by the looks of it
Instead favouring things like https://github.com/cloudposse/terraform-root-modules/blob/master/aws/backing-services/aurora-mysql.tf#L105-L121
who will be lucky #200?
2019-01-17
★ repo
What version of kops / k8s is supported currently?
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! https://slack.cloudposse.com/ - clou…
@Jan we recently deployed 1.10.12 with kops
https://github.com/kubernetes/kops#kubernetes-version-support https://github.com/kubernetes/kubernetes/releases
Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management - kubernetes/kops
Production-Grade Container Scheduling and Management - kubernetes/kubernetes
Awesome
Will start at that Version
2019-01-18
trying to hack my way through using the reference architecture with an existing root account. got prod and audit accounts created with make root, but now running into this:
➜ make children
make[1]: *** No rule to make target `prod/validate'. Stop.
make: *** [prod/validate] Error 2
artifacts/Makefile.env looks right:
ACCOUNTS_ENABLED = prod audit
and the account IDs are in artifacts/accounts.tfvars
where else should I look for problems?
ohhhh yes, there’s a bug.
edit tasks/Makefile.child
find the macro called child
there is an = sign after it. that works in some makes and not in others
remove the = and it will work.
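for the curious, the underlying incompatibility (my reading of the GNU make release notes):
make --version | head -1   # GNU Make 3.81
# 'define NAME =' (with an explicit assignment operator) was only added in GNU make 3.82;
# on 3.81 the '=' becomes part of the macro name, so 'child' is never defined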
beautiful. thanks
if it helps..
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0
@Jan ran into this
we spent some time debugging it.
Yea, I’ve been developing it on an AWS WorkSpace instance which is Amazon Linux
on OSX this happens
what When trying to hack my way through using the reference architecture with an existing root account, I got prod and audit accounts created. When I run make root, it errors ➜ make children make[1…
sometimes you need to make make!
make/install: MAKE_MAKE_VERSION ?= 4.1
make/install: MAKE_SOURCE ?= http://ftp.gnu.org/gnu/make/make-$(MAKE_MAKE_VERSION).tar.gz
make/install: | $(BIN_DIR)
	@echo "[make]: MAKE_SOURCE=$(MAKE_SOURCE)"
	$(CURL) -o make.tar.gz "$(MAKE_SOURCE)"
	tar xvf make.tar.gz
	cd make-* && ./configure --bindir=$(BIN_DIR) && sudo make install
	rm -rf make*
which make
make --version
Lol
Make inception
Yeah, wasn’t sure it would work, but it does! Got tired of running into problems on different platforms due to old make versions and the newer features I like to use
I commented on the GH issue and shared my method for detecting and bypassing as best I could, the old 3.81 version of make.
hrmm
so I think @Jan had success on OSX Make 3.81
without updating make
(using the earlier fix in the issue linked, by removing the extraneous =)
Yep, all worked after removing the trailing =
Im a simple man, I see colors and design, I upvote
Geodesic inside of tmate over the web
what Support remote debugging of geodesic containers why This is useful for pairing with other developers or debuging remote k8s pods, ECS tasks, atlantis, etc.. references #356 (related use-c…
@mumoshu has joined the channel
2019-01-19
2019-01-20
16 votes and 20 comments so far on Reddit
2019-01-21
ha that gnu make on osx thing is a pita
brew install remake
https://github.com/rocky/remake/wiki will take a look
Enhanced GNU Make - tracing, error reporting, debugging, profiling and more - rocky/remake
Is there a reason the org names are filtered to a predefined list rather than just allowing anything?
but I would be open to hearing your use-case
For an early customer, we chose one unique TLD per account so they don’t even share that.
Also, we make the distinction between a company’s “service discovery domain” and its “branded” or “vanity” domains.
I think it makes pretty good sense to keep strong consistency on the service discovery domain, while letting the branded domains stray as far as necessary.
branded domains are what the public uses.
ohhh, the org “account” names
Soooooooooo yes, that’s terraform being terraform
It’s very tricky to reference dynamic properties
we use SSM parameter store to reduce that burden
but the deal is this: we define some accounts and those accounts have some purpose that we cannot easily generalize without going much deeper into code generation.
On top of that, AWS doesn’t allow easy programmatic deletion of accounts, so it’s incredibly painful to test. It requires logging into each suborg manually and setting up the master account creds before going to terraform to try and delete it.
it’s to reduce the number of settings
the more predictable they are, the more assumptions we can make
… if we’re talking about terraform-root-modules, the idea here is that these are hyper opinionated.
All of our other terraform-aws-* modules are designed to be building blocks
but at some point, you need to stop generalizing and implement something. That’s terraform-root-modules
@chrism would be happy to give you a high-level overview of everything and some of the design decisions.
Of course, we want to capture all that in documentation, but that is not caught up.
lol
I was thinking more along the lines of root / audit / maybe data would be opinionated; if it’s not in the list, assume it’s more of a blank slate like dev or prod sans the pre-included docker
soooooo what I’m leaning towards right now is adding more types of accounts
rather than renaming the current accounts
it makes it impossible to document a moving target
what are the account names you’d like to see?
To be honest it’s probably a bit domain-specific to us, but data-prod rather than just data, and something akin to dev-{team}. I can see myself being buried in orgs which isn’t great, but at the same time it reduces risk around shared areas between teams / improves the ability to audit / split billing.
another idea that @Jan had is to supply another “set” of accounts
I also realise I can do similar with policies in aws like we do in our original 1-org structure
I’m simply thinking of dumb accounts basically that you can set up as needed / make use of the centralised auth with
So the part that I found tricky was NS delegation.
The dns delegation is easy
didn’t you end up hardcoding it?
Nope, I created an additional module for making a private zone
The dependency is having a vpc beforehand
But that fits into my plan of creating a vpc that fits into my subnet strategy and have a k8s cluster launch into that vpc
It’s totally possible to do what you want to do, just with trade offs.
Admittedly not using any of the K8 / domain stuff as we use rancher for K8 management and multiple vpcs per env (we just run stateless apps in k8)
Tapping my foot waiting for AWS to match Azure + Google and make EKS free
yea, seriously
it’s really far behind as it relates to k8s
Would love to see the rancher stuff. I bet we could expand the geodesic story to support the rancher cli.
Rancher’s pretty neat for what it does (Api proxy / rbac auth); we run the rancher control plane (HA) from our datacenter rather than AWS so it’s a bit of manual wiring to get the K8 plane setup. But it does talk to EKS which is good (from a compliance front) as it takes away 6 vms we’d otherwise have to manage
In an ideal world it would be more terraform friendly
as would everything
yea, getting everything in under terraform is nice
in the end though, it’s hard to expect one tool to do the job
live the dream
We’ve used it to manage dual DNS providers for a couple of years now; I just hope the 0.12 conversion tool works well
yea, srsly
I just see the words breaking change; look at all the repos of terraform and think FOOOOK
it’s either that or tag everything to run with <0.12
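a hedged sketch of the pinning option, using terraform’s required_version constraint in each root module:
cat > versions.tf <<'EOF'
terraform {
  required_version = "< 0.12.0"
}
EOF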
2019-01-22
@Erik Osterman (Cloud Posse) https://github.com/cloudposse/terraform-root-modules/commit/547302316db329f492411bff44aae045bd70e430#diff-b4b339745fca6a54353bf7f392d8dc60 was the naming change intentional? Terraform hates refactoring
Terraform will perform the following actions:
- aws_organizations_account.audit
+ module.audit.aws_organizations_account.default
breaks out terraform state mv
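i.e. for each resource the plan wants to recreate, something like:
terraform state mv aws_organizations_account.audit module.audit.aws_organizations_account.default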
Yes, the terraform-root-modules are best forked by an organization
they are your starting off point
we refactored all the root modules to support SSM and made them more DRY by using local submodules
Yeah I quite like the submodule approach (as it saves extra shit to pull /versions to track)
Forking Root is something I shall definitely be doing.
What’s with the SSM though? It’s just storing ARNs (in this bit anyway) which are all named by convention
I should probably do the fork first and make use of that good old S3 versioning to uncorrupt my state file
so the SSM allows us to programmatically reference variables by name
e.g. we cannot programmatically reference an output by name from the terraform remote state provider
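whereas SSM parameters can be fetched by name from anywhere; a sketch with a hypothetical parameter path:
aws ssm get-parameter --name /aurora-mysql/master_hostname --query 'Parameter.Value' --output text
# or equivalently with chamber: chamber read aurora-mysql master_hostname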
cool
Refactoring Terraform is painful.
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Not half
@Nico You might run into a few bugs with the reference-architectures
there are a few people who have recently run through it and can maybe help if you get stuck
ok cool, I’ll definitely reach out
(check the open issues)
cool, thank you
I can also give you (or anyone else) an overview via Zoom
please ping me when you’ll schedule it with someone, I’ll try to join!
yeah I will definitely need some guidance once I actually start working on the architecture
2019-01-23
font recs for a stubborn Arch user
symbola is awful and emoji fonts don’t include the math symbols in geodesic prompt T_T
We’ve had a few issues in the past related to the prompt
(fwiw, on OSX and Amazon Linux it’s working for me)
Can’t lookup the font right now.
If you set PROMPT_STYLE=plain it should use no funky characters
Cheers. I’ll stick with Symbola for now. Surprised that DejaVu doesn’t include those chars
I keep seeing DEPRECATED: /usr/local/bin/init-terraform is no longer needed. Use tfenv instead.
init-terraform still works (and is still prompted as you change folders), but how is the other supposed to work?
sec (docs haven’t been updated)
In each project folder in /conf, add a file like this one:
# Import the remote module
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/ecs?ref=tags/0.33.0"
export TF_CLI_PLAN_PARALLELISM=2
use terraform
use tfenv
Read up on this here:
Transform environment variables for use with Terraform (e.g. HOSTNAME ⇨ TF_VAR_hostname) - cloudposse/tfenv
basically, we discovered we don’t need wrappers to use terraform with remote state and remote modules
Ah cool; similar to .env files in node projects
we can populate TF_CLI_ARGS* envs
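e.g. (this is stock terraform behavior, nothing geodesic-specific):
export TF_CLI_ARGS_plan="-parallelism=2"
terraform plan   # effectively runs: terraform plan -parallelism=2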
yes
that’s nice; though I’ve nothing against a good shell script
so this common direnv interface works well with both terraform, kops etc.
Yea, though I’m not crazy about string manipulation in bash
and if we’re going to use a formal language, default to go
so we don’t need to install a lot of deps and can easily distribute with cloudposse/packages
so you’ll dig this:
direnv has support for a stdlib that can be extended
so we’ve added our own extensions (partially for backwards compat)
but users can add their own helpers
(or override ours)
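a hedged sketch of what such a helper could look like (my guess at the shape, not the actual cloudposse extension); direnv dispatches "use foo" to a shell function named use_foo:
use_tfenv() {
  # translate .envrc settings into terraform's native TF_CLI_ARGS_* env vars
  [ -n "${TF_CLI_INIT_FROM_MODULE:-}" ] && export TF_CLI_ARGS_init="-from-module=$TF_CLI_INIT_FROM_MODULE"
  [ -n "${TF_CLI_PLAN_PARALLELISM:-}" ] && export TF_CLI_ARGS_plan="-parallelism=$TF_CLI_PLAN_PARALLELISM"
}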
Cool; it’s a damn sight easier to script stuff up and know it’ll run in linux. And as a general distribution mechanism, rather than having to keep repos filled with scripts
All makes sense though; the makefile system’s a nice addition; I tend to write them with terraform envs; docker dealt with making sure people had the tools, but linking it all up to the WSL auth (aws-vault) has dealt with the headache of keeping things secure
Yea, really excited with how it’s all come together.
Optionally, add a Makefile like this one:
## Fetch the remote terraform module
deps:
	terraform init

## Reset this project
reset:
	rm -rf Makefile *.tf .terraform
Hey guys - do you generally recommend that people create a new docker image based on the cloudposse geodesic image, or fork the geodesic repo and customize as needed?
I think it depends
Here’s how I recommend you get started.
Get everything up and running using cloudposse/geodesic; reduce the moving pieces
then consider forking cloudposse/geodesic and adding your extensions
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform - nikiai/geodesic
here’s an example of someone else - they forked and switched to debian =P
Thanks
Use geodesic image as base for your own image
Look at cloudposse/testing.cloudposse.co as an example
(though testing.cloudposse.co is quite out of date)
The reference-architectures are most current
@me1249 got your PRs
thanks!
I’m going to get that taken care of
2019-01-24
Appears that geodesic doesn’t have a terminfo file for Termite
upon running the geodesic image script:
✗ (none) ~ ⨠ echo $TERM
xterm-termite
tput: unknown terminal "xterm-termite"
tput: unknown terminal "xterm-termite"
-> Run 'assume-role' to login to AWS
tput: unknown terminal "xterm-termite"
tput: unknown terminal "xterm-termite"
(~4 more times)
for anyone else with this problem, run it in tmux or simply export TERM=screen-256color
Thanks for posting your fix!
what On Arch linux someone reported in slack that they get this error: ✗ (none) ~ ⨠ echo $TERM xterm-termite tput: unknown terminal "xterm-termite" tput: unknown terminal "xterm-term…
geodesic is just alpine under the hood
2019-01-25
I’m trying to glue a couple of things together but I’ve a feeling the only way it’ll work with the audit/root org setup is if I change the cloudwatch module
by default cloudtrail in root/data etc creates a log-group to match the org namespace (it doesn’t set the cloud_watch_logs_role_arn/cloud_watch_logs_group_arn, so the resulting log group is null as there’s no cloudwatch)
adding
module "cloudwatch_log" {
source = "git::<https://github.com/cloudposse/terraform-aws-cloudwatch-logs.git?ref=tags/0.2.2>"
namespace = "${var.namespace}"
stage = "${var.stage}"
retention_in_days = 180
}
and adding the cloudwatch role + group arn to the cloudtrail tries to create module.cloudwatch_log.aws_iam_role.default, which fails
aws_iam_role.default: Error creating IAM Role testns-root-log-group: MalformedPolicyDocument: Invalid principal in policy: com.amazon.balsa.error.InvalidPolicyException: The passed in policy has a statement with no principals!
Remind me to mount the local volume on a folder so I can sync in changes rather than typing make all every few minutes
Your homedir gets mounted to /localhost in geodesic
you know I’ve seen that every time I open the shell and still didn’t click
For exactly that reason
it’s been a long week
sorry for our contribution to that!!
haha
so our (my) dev workflow is to develop in my regular IDE
keep a geodesic shell open to the dev/sandbox account
cd /localhost/...../project
that makes perfect sense
not randomly typing vi and duplicating stuff backwards like a pleb
yea, and forgetting that the one thing that you actually made in the shell manually
oh that never happens whistles innocently
here’s something else that’s cool:
what Add support for Docker for Mac (DFM) Kubernetes or Minikube why Faster LDE, protyping Testing Helm Charts, Helmfiles howto I got it working very easily. Here's what I did (manually): Enabl…
I haven’t formalized a workflow around it, but I did a POC
we can actually use geodesic
with minikube
neat; we all run rancher locally / docker for desktop; minikube’s handy for quick pocs
ahh localhost mounts to the user folder. Still need to map it; all my work lives on a separate ssd.
ohhh
look at /usr/local/bin/<script>
I think you can override it
if not, we would accept PR to support overriding it
cool I’ll have a dig, as I assume the home path’s used for aws-vault
that too
its easy enough to get docker to dance
and a local “Cache”
windows subsystem; where home is home but not quite home
are you using WSL?
yep
ubuntu 18
yea, that was A MAJOR pain
it took us a few PRs to get that right
works REALLY well so you did a good job there
Think I got the cloudwatch issue down. I already had it scripted up before splitting all the orgs so I’ve made it sorta hybrid for the time being. https://gist.github.com/ChrisMcKee/0ca78c207fa7c3aca3b973c824aab069
Now to wait for the SIEM monitoring company to start screaming… probably shouldn’t have changed it all on a friday
what company do you use? (curious)
dm’d
Hopefully not AlienVault
I remember doing some due diligence a while back and found private keys in their AMIs
lol cough
we use all the aws stuff as well as the managed service but it all feeds into the one place for monitoring as well as the aws notifications. Much as I’d love to trust a single service
There’s a distinct lack of silver bullets
any good guides on using docker in WSL? I was looking into it a while ago, but at the time it seemed either/or, not both
With a couple of tweaks the WSL (Windows Subsystem for Linux, also known as Bash for Windows) can be used with Docker for Windows.
Thanks! I swear it was probably the week before this article posted that I was last looking for an updated guide! Can’t keep up!
Works fine
ahh, now I remember the problem, I need to be able to use virtualbox locally (for packer, nothing persistent), and installing docker for windows enables hyper-v, which breaks virtualbox
ohhhh yea, I’ve had similar issues when trying to use other VMs
Maybe I can get terraform to launch i3 metal instances to run virtualbox and packer for me
2019-01-26
Packer runs fine on windows though, you can run it in docker as well. Very few reasons for vbox these days
I understand that, it’s really a vagrant box that packer is creating, with a virtualbox provider
of course packer runs fine itself on windows, but packer launches virtualbox locally to create the image/ovf, and then converts it to a vagrant box
so, somehow or other, I need packer to be able to run virtualbox
2019-01-30
We’ve added okta support