#geodesic (2019-11)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2019-11-01
2019-11-02
2019-11-03
2019-11-04
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Nov 13, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-11-07
When using the reference-architectures project, would it be possible to adopt an existing account?
@Joe Niland technically yes, but we don’t optimize for that since it’s not how we (as a company) work. There are too many variables to consider.
It’s best to look at what geodesic represents: a strategy for bundling and shipping the tools and configuration for an environment
we happen to have our flavor of how to do that, but nothing restricts how it is used
@Erik Osterman (Cloud Posse) thanks. After I wrote that, I had a look through the account module and realised it’s not so easy.
Asking because I have a new client who has been working for a while in a single account and has AWS credits tied to it.
I’m working on creating the other accounts using your process and then will create a geodesic repo manually for the existing account. That’s the plan anyway!
Yes, I think manually creating the geodesic repos in this case is the way to go
https://calendly.com/cloudposse if you want some quick pointers
Welcome to my scheduling page. Please follow the instructions to add an event to my calendar.
Thanks Erik!
In this situation, couldn’t he just make the existing account the root account and proceed from there?
Hey Jeremy, I thought about doing that but there are a lot of resources in there. It’s a mix of important stuff, tests, etc. I thought it may be cleaner to start fresh and then migrate the bits they need across into the new structure.
2019-11-08
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages
packages are now updated nightly using GitHub actions!
This is an auto-generated PR with dependency updates.
@Andriy Knysh (Cloud Posse) can you test out @tamsky change: https://github.com/cloudposse/geodesic/pull/534
Newly available on Docker for Mac (Edge) release. https://docs.docker.com/docker-for-mac/edge-release-notes/ Someone should test that this doesn’t break on non-Edge Docker-for-Mac releases.
I’m running Docker for Mac Edge … and I think geodesic is working, but I can’t figure out how to make it do anything.
@Blaise Pabon can you give some examples?
If you think about geodesic as a lightweight VM it might help
it comes preinstalled with all the essential tools
but you still need to add your apps
so if you haven’t done that, it’s not going to do much
well, I would like to create a cluster with kops
(that’s already built in) and then I would probably add k9s and popeye to the image.
at the moment, it is asking me to run assume-role but I don’t know the syntax.
I tried:
/bin/bash assume-role arn:aws:iam::224064240808:role/masters.kops.dev.travellogic.k8s.local
assume-role arn:aws:iam::224064240808:role/masters.kops.dev.travellogic.k8s.local
assume-role arn:aws:iam::224064240808:user/blaise
assume-role is just a wrapper around aws-vault
you can only assume a role for something which has been previously configured in your ~/.aws/config
(also, those are not assume’able roles by the look of it)
….AWS makes this all complicated
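For illustration, a minimal ~/.aws/config sketch of the kind of profile aws-vault can assume (the profile names, account ID, and role here are hypothetical):

[profile example]

[profile example-admin]
source_profile = example
role_arn = arn:aws:iam::111111111111:role/admin

With something like that in place, aws-vault exec example-admin -- <command> can mint temporary credentials for the role.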
Will test
2019-11-11
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Nov 20, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Sooo… the docs don’t actually describe how to get started with geodesic…
I guessed that git clone <https://github.com/cloudposse/geodesic.git> && cd geodesic && geodesic would let me launch the shell.
Next, it asked me for my passphrase (which I think I remember)
I’ll tell you what, if I agree to be initiated into the sacred brotherhood of geodesic and I swear that I will not reveal the sacred mysteries to anyone, will you show me the answer to the riddle:
-> Run 'assume-role' to login to AWS
⧉ geodesic
✗ (none) ~ ⨠
@Blaise Pabon Geodesic makes a few assumptions that are not really documented AFAIK but flow from the way things are set up by the CloudPosse Reference Architecture: https://github.com/cloudposse/reference-architectures
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
Geodesic assumes that you are using AWS, and that you will get AWS credentials dynamically injected into your environment using some tool that allows you to assume an AWS IAM role. Typically that is aws-vault, but it can be aws-okta or anything. All that has to happen is that once you have your AWS credentials set up in your environment, you need to set the environment variable ASSUME_ROLE to the role name, after which the “Run assume-role” prompt will go away and the role name specified by ASSUME_ROLE will become part of the prompt.
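A rough sketch of that flow, assuming aws-vault and a hypothetical profile named example-admin (how the credentials reach the container depends on your launcher):

# obtain credentials with whatever tool you use, e.g. aws-vault:
aws-vault exec example-admin -- geodesic
# then, inside the shell with AWS_* credentials present:
export ASSUME_ROLE=example-admin   # the prompt hint goes away and the role name appears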
Oh! OK, I have been setting my AWS creds in $AWS_ACCESS_KEY etc. I will look over the reference arch setup more closely. Thanks for the tips. FWIW, I’m a fan of keeping a higher barrier to entry…. it keeps the day-trippers and riff-raff away.
It’s not that Unix isn’t user-friendly, it’s just picky about who its friends are
2019-11-12
The module directories created by reference-architectures are all for Terraform 0.11. I’m creating my own for my app but I want to use 0.12.
I saw that @Andriy Knysh (Cloud Posse) recommended to change the Makefile like this:
-include ${TF_MODULE_CACHE}/Makefile
And to add export TF_MODULE_CACHE=.module to terraform.envrc
But I want to create ${TF_MODULE_CACHE} if it’s not there.
So far I have this:
$(shell mkdir -p ${TF_MODULE_CACHE})
-include ${TF_MODULE_CACHE}/Makefile

## Fetch the remote terraform module
deps:
	terraform init

## Reset this project
reset:
	rm -rf Makefile *.tf .terraform
Is there a better way?
2019-11-13
How do you run “make lint” without it running terraform 0.11?
tried passing in an env var but it’s got a fetish for 0.11
Does as the label says; adds an example using it which I used to test that it works as expected. The "make && make init" keeps trying to install and setup terraform 0.11 which is …
meh, doesn’t do much anyway as it falls over using 0.12 at the validate step
is gomplate supposed to be in that mass of stuff in the build harness?
/bin/bash: gomplate: command not found
I’ve installed it and generated the file; still no idea why the build harness kept defaulting to 0.11 in the project, but I broke my way past it, so…
AFAICR gomplate isn’t installed inside geodesic, so you need that on your host
I lied. It is, but often you will want to run make readme from outside geodesic
It’s a tf repo, so it’s more the “build tools” bit; I suppose it’s awkward to get the right binary unless you presume (like the terraform side) it’s all Linux
I used go get ... in the end and ran it. Instructions were unclear so I winged it
In general, you run make init to load the build harness, but then you still may need to run make readme/deps to get gomplate so you can run make readme.
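A sketch of that sequence (the README.yaml convention is an assumption based on how cloudposse repos are typically laid out):

make init          # fetch the build-harness into the project
make readme/deps   # install gomplate and other readme dependencies
make readme        # render README.md (typically from README.yaml)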
2019-11-18
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Nov 27, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
any quick tools to generate a “provider file” with the versions it recommends pinning
pre-empting https://www.hashicorp.com/blog/deprecating-terraform-0-11-support-in-terraform-providers/
During the upcoming months, we will begin deprecating support for Terraform 0.11 in new releases of Terraform providers we officially maintain, beginning with the AWS, AzureRM, Goo…
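For reference, pinning a provider in pre-0.13 HCL looks like this (the constraint shown is illustrative):

provider "aws" {
  version = "~> 2.41"
}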
2019-11-19
I’ve used the reference-architectures to set up the base repos for each account. I’m now adding specific modules to build app infrastructure. I have a couple of general questions:
1. Once everything is created, should I remove the <namespace>-root-bootstrap user and role?
2. Say I create a module at dev.example.net/conf/network/ which creates VPC & subnets, then I have another at dev.example.net/conf/data, would you create a main.tf in the conf/ dir which references the above and can be used to create all the modules with one terraform apply? This would mainly be for convenience, but also to make sure you deployed every module in the repo. I’m used to Salt states, where you often apply many states with a single command.
3. I noticed that the [email protected] users created by Cold Start can assume role to arn:aws:iam::XXXXX:role/example-root-admin, which has the AdministratorAccess policy, but this policy is not directly attached to the example-root-admin group. I know we try not to use the Console often but sometimes it is required/useful. Also clients may prefer it. It’s an extra step to assume role in the console, so just wondering if it would be bad to assign the AdministratorAccess policy directly to the example-root-admin group, since the net effect is the same.
(btw, if you break your message into multiple msgs, easier to respond)
Once everything is created, should I remove the <namespace>-root-bootstrap user and role?
it’s only needed in the very, very beginning
Say I create a module at dev.example.net/conf/network/ which creates VPC & subnets, then I have another at dev.example.net/conf/data, would you create a main.tf in the conf/ dir which references the above and can be used to create all the modules with one terraform apply? This would mainly be for convenience, but also to make sure you deployed every module in the repo. I’m used to Salt states, where you often apply many states with a single command.
This is where a task runner comes in, using something like make or terragrunt
make has gotten a lot of attention lately (and for good reason)
but one thing that’s often neglected (and not obvious to newcomers), is that targets are first and foremost files
e.g.
file.txt: foo.txt
	cp -a foo.txt $@
my point is that if file.txt is older than foo.txt, then it performs the action
this is interesting if you think about it for terraform
since using modification times, it can be derived if things have changed and need updating
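A hypothetical sketch of that idea applied to terraform (the plan-file workflow here is illustrative, not a cloudposse convention):

.PHONY: apply

# re-plan only when some .tf file is newer than the last plan
plan.out: $(wildcard *.tf)
	terraform plan -out=$@

apply: plan.out
	terraform apply plan.out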
anyways, see #terragrunt for the apply-all type stuff
re: 3, we prefer to always require assuming a role
so things are deliberate
Thanks very much @Erik Osterman (Cloud Posse). Sorry about the multi-part message.
I’ve been getting to know make since finding your stuff, so I’ll try creating a Makefile for each account repo and see how it goes.
2019-11-20
Made a project to play around with CodeFresh, and get a jump on some future work. I’d love some feedback. It’s a philosophical cousin to Geodesic, with slightly different goals.
Container version of Dad's garage. It's full of tools, you spend lots of time in it, and you use it to build great things. https://hub.docker.com/r/dadsgarage/dadsgarage - dadsgarage/dadsga…
Btw, I don’t see the codefresh pipelines in the repo
suggest configuring pipelines to pull yaml from repo
I was reading that it’s more secure to leave the yaml that runs pull-request pipelines from forks as inline yaml? I’m interested to hear your view as well. I would pull it out into a separate repo possibly, but putting it in the same repo as the code being run through the pipeline is something I want to get away from
I want to avoid a malicious PR that changes the YAML for the pipeline
By default, pipelines are not run on forks
So unless you’re trying to protect it from yourself it should be ok
I already don’t run without a /test comment, but all it would take is me forgetting to check if the yaml got changed before adding /test
I have the option turned on
Currently codefresh doesn’t run automatically. I have to comment /test. Running in forks is on
So theoretically yeah, I’m covered. I would have to comment /test on a malicious PR for something bad to happen, but why not mitigate that too
¯\_(ツ)_/¯
I do want to extract it to a different repo, and have codefresh pull it from there
What I’m currently contemplating is moving the master and tag builds from docker hub to codefresh. Codefresh builds the containers much faster than docker hub
Yea, we use codefresh for that too
the layer caching makes it much faster
I did see that codefresh will let me push 2 tags at once. Docker hub actually makes me build the image twice, which is totally silly
So I can push latest and the release tag at the same time from one build. Very nice
lol!
love the name. so perfectly fitting.
@roth.andy will take a look tomorrow
2019-11-21
2019-11-24
Is there a recommended way to add the S3 state backend to the module structure generated by reference-architectures? I had a look at https://docs.cloudposse.com/geodesic/module/with-terraform/ but it seems to be describing the process that is already automated by Cold Start
I can see all the required env vars seem to have been created, e.g.
TF_CLI_ARGS_init=-backend-config=region=ap-southeast-2 -backend-config=key=network/terraform.tfstate -backend-config=bucket=example-dev-terraform-state -backend-config=dynamodb_table=example-dev-terraform-state-lock -backend-config=encrypt=true -from-module=git::<ssh://git>@bitbucket.org/team-example/infra.modules.git//modules/network?ref=master .module
TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE=example-dev-terraform-state-lock
TF_CLI_INIT_BACKEND_CONFIG_BUCKET=example-dev-terraform-state
TF_CLI_INIT_BACKEND_CONFIG_KEY=network/terraform.tfstate
... etc
However when I run make, which runs terraform init, Terraform prompts for the bucket name. Just wondering why it’s not picking up the values from the TF_CLI vars
I should also add, my module is initialised from git, is using v0.12, and I added backend.tf containing the following to the module directory.
terraform {
backend "s3" {}
}
@Joe Niland hit me up tomorrow and can take a look
Thanks @Erik Osterman (Cloud Posse) - still playing with it and if I remove -from-module=... from TF_CLI_ARGS_init=… it reads in the backend config.
Anyway, thanks for replying on Sunday!
I’ll let you know how it goes tomorrow
See what we do in /etc/direnv/rc.d
That will reduce some magic maybe
I was just looking at that. I’m just writing up a workaround so I can explain what’s happening.
It was just a misunderstanding on my part. For some reason I had the idea that my ‘root’ module (root in the sense of your terraform-root-modules repo) shouldn’t have a backend specified. I guess I had this idea because backends could be specified by whoever is going to use the module.
I was trying to configure the backend after running terraform init. This wouldn’t work because when using -from-module you can’t init with module files already downloaded.
Once I added
terraform {
required_version = ">= 0.12.0"
backend "s3" {}
}
to my module, then everything just worked - since all the env is already set up for S3 by the cold start process and geodesic Dockerfile.
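For illustration, here is the equivalent of what those TF_CLI env vars feed to init, written out by hand (values taken from the example above):

terraform init \
  -backend-config="region=ap-southeast-2" \
  -backend-config="key=network/terraform.tfstate" \
  -backend-config="bucket=example-dev-terraform-state" \
  -backend-config="dynamodb_table=example-dev-terraform-state-lock" \
  -backend-config="encrypt=true"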
Just out of interest, what would you do if you need to support multiple backend providers in a root module?
2019-11-25
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Dec 04, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
I saw that you guys have the following in your Makefile to use local state if it’s there, otherwise remote.
init:
	[ -f .terraform/terraform.tfstate ] || terraform $@
So that takes care of local development, right?
No, just an optimization to only init once
2019-11-26
Ah yeah. I totally misread that. Too much code today.
How are folks handling getting a unique TF_BUCKET during bootstrap? I may be missing something, but it looks like the chance for collision on the names is fairly high.
that’s why we’ve stuck to the namespace-stage-name convention, which is enforced by labels
We use this module: https://github.com/cloudposse/terraform-aws-tfstate-backend
Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
to provision the state bucket (once per AWS account)
and then use folders for each project in that account
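A minimal usage sketch of that module (the ref and values shown are illustrative, following the module’s README conventions):

module "terraform_state_backend" {
  source     = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"
  namespace  = "example"
  stage      = "dev"
  name       = "terraform"
  attributes = ["state"]
  region     = "us-east-1"
}

This yields a bucket named example-dev-terraform-state, matching the TF_BUCKET convention above.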
Thanks @Erik Osterman (Cloud Posse). Does it make sense to include the account id in the name? I picked a fairly common namespace, and make children failed halfway through on just one of the accounts.
aha, you could, but we don’t provide a way to parameterize that.
the main reason we avoided that is account id’s have little meaning to humans
so reviewing terraform plans that contain them doesn’t really help code review
it’s certainly a common pattern
and i would probably recommend the pattern where you’re generating a slew of accounts (e.g. for developers or apps)
but stick to named accounts for your “core” accounts
we tend to use acronyms for namespace
for example “sweetops cloud infrastructure”, we’d use soci
so it’s “optimistically” unique given all the other dimensions
Right, I was thinking of just having the account id as a uniquifier in the state file bucket name, nowhere else
Is there any guidance on how to unwind a partially bootstrapped set of accounts?
There isn’t - mostly because AWS makes it impossible to destroy an account programmatically without human clickops intervention
so we haven’t invested in the remaining automation
I think others maybe have used aws-nuke?
but you can’t nuke the accounts themselves, only the resources therein
that’s the thing though
we use N buckets
one per aws account (to share nothing)
Right, so you would have soci-dev-<dev acct id>-terraform-state and soci-prod-<prod acct id>-terraform-state, for example
ok
you can give it a shot! haven’t tried it, so not sure what sort of edge cases might come up
i like that it provides a nice way to disambiguate multiple prod accounts
For a fresh bootstrap, will the label get the attributes from the TF_BUCKET envvar?
That is set in the template Dockerfile? If so, it is an easy change I think
yes/no: yes it will use it for projects, but no it won’t use it for creation, b/c the terraform-aws-tfstate-backend module is opinionated and always uses the terraform-null-label format
Yup, I think I need to read a bit more to find out how that is invoked. It looks like it accepts the attributes var on the label module, so perhaps that can be used
I would accept a PR to the tfstate backend module that added support for macros
Of which account id could be one
So you could add %account_id%
Or maybe we should add this to the null label
That way we can use it else where
@Andriy Knysh (Cloud Posse) what do you think?
let me read from the beginning
you can still use the namespace-stage-name-attributes format
you can use account ID as name
or add account ID to the attributes
otherwise, I don’t see a use case for that. Do you want to use a few of those buckets in one account and for the same environment (stage)?
as we do it, we distinguish environments by stage name
they can be deployed into one account or into separate accounts
but still the resource names will be unique
if you happen to deploy the same set of resources into the same namespace and stage with the same names, you can add additional attributes, e.g. namespace-stage-name-blue or namespace-stage-name-green
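A hypothetical terraform-null-label sketch of that naming scheme (the ref and values are illustrative):

module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace  = "example"
  stage      = "prod"
  name       = "terraform-state"
  attributes = ["blue"]
}

# module.label.id renders as "example-prod-terraform-state-blue"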
This is just for the state bucket. I am hitting a global collision with someone else’s bucket on the name, since I chose devops as the namespace (so the bucket it attempts to create is devops-prod-terraform-state).
I think I can accomplish this with a two line change to the reference architectures repo. Thanks for the help, and the great modules!
@Joe Hosteny I’ve been learning how to use this repo recently. Just curious what that 2 line change would be.
diff --git a/templates/Dockerfile.child b/templates/Dockerfile.child
index e802a8b..092d4d0 100644
--- a/templates/Dockerfile.child
+++ b/templates/Dockerfile.child
@@ -44,7 +44,7 @@ ENV CHAMBER_KMS_KEY_ALIAS="alias/$${NAMESPACE}-$${STAGE}-chamber"
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
ENV TF_BUCKET_ENCRYPT="true"
ENV TF_BUCKET_REGION="$${AWS_REGION}"
-ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-terraform-state"
+ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-$${AWS_ACCOUNT_ID}-terraform-state"
ENV TF_DYNAMODB_TABLE="$${NAMESPACE}-$${STAGE}-terraform-state-lock"
# Default AWS Profile name
diff --git a/templates/Dockerfile.root b/templates/Dockerfile.root
index e194abf..8fc37e0 100644
--- a/templates/Dockerfile.root
+++ b/templates/Dockerfile.root
@@ -37,7 +37,7 @@ ENV ACCOUNT_NETWORK_CIDR="${account_network_cidr}"
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
ENV TF_BUCKET_ENCRYPT="true"
ENV TF_BUCKET_REGION="$${AWS_REGION}"
-ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-terraform-state"
+ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-$${AWS_ACCOUNT_ID}-terraform-state"
ENV TF_DYNAMODB_TABLE="$${NAMESPACE}-$${STAGE}-terraform-state-lock"
# Default AWS Profile name
diff --git a/templates/conf/tfstate-backend/terraform.tfvars b/templates/conf/tfstate-backend/terraform.tfvars
index 0295e34..7556fad 100644
--- a/templates/conf/tfstate-backend/terraform.tfvars
+++ b/templates/conf/tfstate-backend/terraform.tfvars
@@ -1 +1,2 @@
region = "${aws_region}"
+attributes = ["${aws_region}", "state"]
I think that would do it. Haven’t had a chance to check yet, since I need to clean up some lingering state from the prior attempted bootstrap
Thanks Joe - I see what you mean
@Joe Niland if you try this, make sure you also set the TF_DYNAMODB_TABLE environment variable in both Dockerfiles to include the AWS_ACCOUNT_ID envvar as part of the name. The table names don’t have to be globally unique, but they are labeled in the terraform-aws-tfstate-backend module using a shared base label that includes the attributes.
FYI, that was not sufficient. I’m not sure what exactly was going wrong, but the terraform apply was getting called twice with the adjusted bucket name, and it failed on the retry (since the bucket and table now existed). It is probably checking for the bucket name without the account id somewhere, and taking the wrong path. I saw a check like that in one of the template files, but I wasn’t able to find out which one was failing here. Regardless, I reverted and was able to get things to work without the account id, so I probably won’t look into this more. It would be good to fix though.
Thanks for the update Joe. There is a lot going on in the cold start modules!
why not just change the namespace?
It should be unique
devops is not
Namespace should be your company name or even better company abbreviation
Which decreases the possibility of collision
Yes, we could do that. I was avoiding it as our company name is fairly long (and the abbreviation is not uncommon).
You should try to keep it as short as possible because some AWS resources have limits on names and IDs length
But nothing prevents you from using namespace like ctcr2 or similar
Yup - I have used terragrunt on other work projects, along with terraform-null-label. Typically, I use a namespace of “crl”, so the names are reasonable in length. I’ve just been using the account ID (only) in the tf state bucket name to avoid collisions. It’s not a dealbreaker by any means, it was just a bit unfortunate since it ran into the collision only after the 4th account was being created.
yea, if it’s any consolation, “you are not alone”
I have run into it as well using the even namespace
I was blown away that there would be a conflict
2019-11-27
2019-11-30
Hi, I wanted to build Geodesic today, but I’m facing an issue with helm v3 support. make docker/build is complaining that --client-only is an unknown flag. I was wondering if there is a way to specify helm version 2 from the list of packages to download from cloudposse? Or do I need to download a specific cloudposse version? Thanks
Hrmmmm so in your Dockerfile you would put something like RUN apk add helm==2.15.2
I’m not sure what our latest 2.x release of helm is though
you can maybe run apk info helm to see available versions
oh, this might work too:
apk add 'helm<3.0.0'
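A minimal Dockerfile sketch of that workaround (the base image tag is illustrative, and the exact 2.x helm version available from the cloudposse package repo is unverified):

FROM cloudposse/geodesic:latest

# constrain apk to the helm 2.x line instead of pinning an exact version
RUN apk add --update 'helm<3.0.0'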