#geodesic (2019-11)

geodesic https://github.com/cloudposse/geodesic

Discussions related to https://github.com/cloudposse/geodesic Archive: https://archive.sweetops.com/geodesic/

2019-11-30

Stéphane Bernier avatar
Stéphane Bernier

Hi, I wanted to build Geodesic today, but I’m facing an issue with helm v3 support. make docker/build is complaining that --client-only is an unknown flag. I was wondering if there is a way to specify helm version 2 from the list of packages to download from cloudposse? Or do I need to download a specific cloudposse version? Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmmm so in your Dockerfile you would put something like RUN apk add helm=2.15.2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not sure what our latest 2.x release of helm is though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can maybe run apk info helm to see available versions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, this might work too:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
apk add 'helm<3.0.0'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
How to install a specific package version in Alpine?

I have a Dockerfile to build a Docker image that is based on Alpine Linux. Now I need to install a package as part of this Dockerfile. Currently I have: RUN apk update && \ apk upgrad…
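
Putting the suggestions above together, a Dockerfile line pinning helm below 3.x might look like this (exact version strings depend on what the cloudposse Alpine repository actually publishes, so treat these as unverified examples):

```dockerfile
# Pin helm to the 2.x series; apk accepts version constraints in quotes.
RUN apk add --update 'helm<3.0.0'

# Or pin an exact build (the -r0 revision suffix is a guess):
# RUN apk add helm=2.15.2-r0
```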

2019-11-27

2019-11-26

Joe Niland avatar
Joe Niland

Ah yeah. I totally misread that. Too much code today.

Joe Hosteny avatar
Joe Hosteny

How are folks handling getting a unique TF_BUCKET during bootstrap? I may be missing something, but it looks like the chance for collision on the names is fairly high.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s why we’ve stuck to the namespace-stage-name convention

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

which is enforced by labels

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to provision the statebucket (once per AWS account)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and then use folders for each project in that account

Joe Hosteny avatar
Joe Hosteny

Thanks @Erik Osterman (Cloud Posse). Does it make sense to include the account id in the name? I picked a fairly common namespace, and make children failed halfway through on just one of the accounts.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, you could, but we don’t provide a way to parameterize that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the main reason we avoided that is account IDs have little meaning to humans

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so reviewing terraform plans that contain them doesn’t really help code review

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s certainly a common pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and i would probably recommend the pattern whereby you’re generating a slew of accounts (E.g. for developers or apps)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but stick to named accounts for your “core” accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we tend to use acronyms for namespace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for example “sweetops cloud infrastructure”, we’d use soci

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so it’s “optimistically” unique given all the other dimensions

Joe Hosteny avatar
Joe Hosteny

Right, I was thinking of just having the account id as a uniquifier in the state file bucket name, nowhere else

Joe Hosteny avatar
Joe Hosteny

Is there any guidance on how to unwind a partially bootstrapped set of accounts?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There isn’t - mostly because AWS makes it impossible to destroy an account programmatically without human clickops intervention

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so we haven’t invested in the remaining automation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think others maybe have used aws-nuke?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but you can’t nuke the accounts themselves, only the resources therein

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s the thing though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use N buckets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one per aws account (to share nothing)

Joe Hosteny avatar
Joe Hosteny

Right, so you would have soci-dev-<dev acct id>-terraform-state and soci-prod-<prod acct id>-terraform-state, for example
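
As a sketch, that bucket name would be composed like this (all values are hypothetical examples; AWS_ACCOUNT_ID would normally come from `aws sts get-caller-identity`):

```shell
NAMESPACE="soci"
STAGE="dev"
AWS_ACCOUNT_ID="123456789012"   # hypothetical 12-digit AWS account ID

# Account ID as an extra uniquifier for the globally-unique S3 bucket name
TF_BUCKET="${NAMESPACE}-${STAGE}-${AWS_ACCOUNT_ID}-terraform-state"
echo "$TF_BUCKET"               # prints soci-dev-123456789012-terraform-state
```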

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can give it a shot! haven’t tried it, so not sure what sort of edge cases might come up

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i like that it provides a nice way to disambiguate multiple prod accounts

Joe Hosteny avatar
Joe Hosteny

For a fresh bootstrap, will the label get the attributes from the TF_BUCKET envvar?

Joe Hosteny avatar
Joe Hosteny

That is set in the template Dockerfile? If so, it is an easy change I think

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes/no, yes it will use it for projects

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but no it won’t use it for creation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

b/c the terraform-aws-tfstate-backend module is opinionated and always uses the terraform-null-label format

Joe Hosteny avatar
Joe Hosteny

Yup, I think I need to read a bit more to find out how that is invoked. It looks like it accepts the attributes var on the label module, so perhaps that can be used

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would accept a PR to the tfstate backend module that added support for macros

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Of which account id could be one

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So you could add %account_id%

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or maybe we should add this to the null label

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That way we can use it else where

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@aknysh what do you think?

aknysh avatar
aknysh

let me read from the beginning

aknysh avatar
aknysh

you can still use namespace-stage-name-attributes format

aknysh avatar
aknysh

you can use account ID as name

aknysh avatar
aknysh

or add account ID to the attributes

aknysh avatar
aknysh

otherwise, I don’t see a use case for that. Do you want to use a few of those buckets in one account and for the same environment (stage)?

aknysh avatar
aknysh

as we do it, we distinguish environments by stage name

aknysh avatar
aknysh

they can be deployed into one account or into separate accounts

aknysh avatar
aknysh

but still the resource names will be unique

aknysh avatar
aknysh

if you happen to deploy the same set of resources into the same namespace and stage with the same names, you can add additional attributes, e.g. namespace-stage-name-blue and namespace-stage-name-green

Joe Hosteny avatar
Joe Hosteny

This is just for the state bucket. I am hitting a global collision with someone else’s bucket on the name, since I chose devops as the namespace (so the bucket it attempts to create is devops-prod-terraform-state).

Joe Hosteny avatar
Joe Hosteny

I think I can accomplish this with a two line change to the reference architectures repo. Thanks for the help, and the great modules!

Joe Niland avatar
Joe Niland

@Joe Hosteny I’ve been learning how to use this repo recently. Just curious what that 2 line change would be.

Joe Hosteny avatar
Joe Hosteny
diff --git a/templates/Dockerfile.child b/templates/Dockerfile.child
index e802a8b..092d4d0 100644
--- a/templates/Dockerfile.child
+++ b/templates/Dockerfile.child
@@ -44,7 +44,7 @@ ENV CHAMBER_KMS_KEY_ALIAS="alias/$${NAMESPACE}-$${STAGE}-chamber"
 ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
 ENV TF_BUCKET_ENCRYPT="true"
 ENV TF_BUCKET_REGION="$${AWS_REGION}"
-ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-terraform-state"
+ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-$${AWS_ACCOUNT_ID}-terraform-state"
 ENV TF_DYNAMODB_TABLE="$${NAMESPACE}-$${STAGE}-terraform-state-lock"

 # Default AWS Profile name
diff --git a/templates/Dockerfile.root b/templates/Dockerfile.root
index e194abf..8fc37e0 100644
--- a/templates/Dockerfile.root
+++ b/templates/Dockerfile.root
@@ -37,7 +37,7 @@ ENV ACCOUNT_NETWORK_CIDR="${account_network_cidr}"
 ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
 ENV TF_BUCKET_ENCRYPT="true"
 ENV TF_BUCKET_REGION="$${AWS_REGION}"
-ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-terraform-state"
+ENV TF_BUCKET="$${NAMESPACE}-$${STAGE}-$${AWS_ACCOUNT_ID}-terraform-state"
 ENV TF_DYNAMODB_TABLE="$${NAMESPACE}-$${STAGE}-terraform-state-lock"

 # Default AWS Profile name
diff --git a/templates/conf/tfstate-backend/terraform.tfvars b/templates/conf/tfstate-backend/terraform.tfvars
index 0295e34..7556fad 100644
--- a/templates/conf/tfstate-backend/terraform.tfvars
+++ b/templates/conf/tfstate-backend/terraform.tfvars
@@ -1 +1,2 @@
 region = "${aws_region}"
+attributes = ["${aws_region}", "state"]
Joe Hosteny avatar
Joe Hosteny

I think that would do it. Haven’t had a chance to check yet, since I need to clean up some lingering state from the prior attempted bootstrap

Joe Niland avatar
Joe Niland

Thanks Joe - I see what you mean

Joe Hosteny avatar
Joe Hosteny

@Joe Niland if you try this, make sure you also set the TF_DYNAMODB_TABLE environment variable in both Dockerfiles to include the AWS_ACCOUNT_ID envvar as part of the name. The table names don’t have to be globally unique, but they are labeled in the terraform-aws-tfstate-backend module using a shared base label that includes the attributes.

Joe Hosteny avatar
Joe Hosteny

FYI, that was not sufficient. I’m not sure what exactly was going wrong, but the terraform apply was getting called twice with the adjusted bucket name, and it failed on the retry (since the bucket and table now existed). It is probably checking for the bucket name without the account id somewhere, and taking the wrong path. I saw a check like that in one of the template files, but I wasn’t able to find out which one was failing here. Regardless, I reverted and was able to get things to work without the account id, so I probably won’t look into this more. It would be good to fix though.

Joe Niland avatar
Joe Niland

Thanks for the update Joe. There is a lot going on in the cold start modules!

aknysh avatar
aknysh

why not just change the namespace?

aknysh avatar
aknysh

It should be unique

aknysh avatar
aknysh

devops is not

aknysh avatar
aknysh

Namespace should be your company name or even better company abbreviation

aknysh avatar
aknysh

Which decreases the possibility of collision

Joe Hosteny avatar
Joe Hosteny

Yes, we could do that. I was avoiding it as our company name is fairly long (and the abbreviation is not uncommon).

aknysh avatar
aknysh

You should try to keep it as short as possible because some AWS resources have limits on names and IDs length

aknysh avatar
aknysh

But nothing prevents you from using namespace like ctcr2 or similar

Joe Hosteny avatar
Joe Hosteny

Yup - I have used terragrunt on other work projects, along with terraform-null-label. Typically, I use a namespace of “crl”, so the names are reasonable in length. I’ve just been using the account ID (only) in the tf state bucket name to avoid collisions. It’s not a dealbreaker by any means, it was just a bit unfortunate since it ran into the collision only after the 4th account was being created.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, if it’s any consolation, “you are not alone”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have run into it as well using the even namespace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was blown away that there would be a conflict

2019-11-25

SweetOps #geodesic avatar
SweetOps #geodesic
05:00:08 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
05:02:35 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Dec 04, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Joe Niland avatar
Joe Niland

I saw that you guys have the following in your Makefile to use local state if it’s there otherwise remote.

init:
	[ -f .terraform/terraform.tfstate ] || terraform $@
Joe Niland avatar
Joe Niland

So that takes care of local development, right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No, just an optimization to only init once
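
The guard can be seen in isolation (a toy shell version of the Makefile line, with echo standing in for the real terraform init):

```shell
workdir=$(mktemp -d) && cd "$workdir"

# Same idiom as the Makefile target: only init when no local state exists.
run_init() { [ -f .terraform/terraform.tfstate ] || echo "terraform init runs"; }

run_init                                          # state absent: init runs
mkdir -p .terraform && touch .terraform/terraform.tfstate
run_init                                          # state present: skipped
```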

2019-11-24

Joe Niland avatar
Joe Niland

Is there a recommended way to add the S3 state backend to the module structure generated by reference-architectures ? I had a look at https://docs.cloudposse.com/geodesic/module/with-terraform/ but it seems to be describing the process that is already automated by Cold Start

Joe Niland avatar
Joe Niland

I can see all the required env vars seem to have been created, e.g.

TF_CLI_ARGS_init=-backend-config=region=ap-southeast-2 -backend-config=key=network/terraform.tfstate -backend-config=bucket=example-dev-terraform-state -backend-config=dynamodb_table=example-dev-terraform-state-lock -backend-config=encrypt=true -from-module=git::ssh://git@bitbucket.org/team-example/infra.modules.git//modules/network?ref=master .module
TF_CLI_INIT_BACKEND_CONFIG_DYNAMODB_TABLE=example-dev-terraform-state-lock
TF_CLI_INIT_BACKEND_CONFIG_BUCKET=example-dev-terraform-state
TF_CLI_INIT_BACKEND_CONFIG_KEY=network/terraform.tfstate
... etc

However, when I run make (which runs terraform init), Terraform prompts for the bucket name.

Just wondering why it’s not picking up the values from the TF_CLI vars

Joe Niland avatar
Joe Niland

I should also add, my module is initialised from git, is using v0.12, and I added backend.tf containing the following to the module directory.

terraform {
    backend "s3" {}
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Joe Niland hit me up tomorrow and can take a look

Joe Niland avatar
Joe Niland

Thanks @Erik Osterman (Cloud Posse) - still playing with it and if I remove -from-module=... from TF_CLI_ARGS_init=… it reads in the backend config.

Joe Niland avatar
Joe Niland

Anyway, thanks for replying on Sunday!

Joe Niland avatar
Joe Niland

I’ll let you know how it goes tomorrow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See what we do in /etc/direnv/rc.d

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That will reduce some magic maybe

Joe Niland avatar
Joe Niland

I was just looking at that. I’m just writing up a workaround so I can explain what’s happening.

Joe Niland avatar
Joe Niland

It was just a misunderstanding on my part. For some reason I had the idea that my ‘root’ module (root in the sense of your terraform-root-modules repo) shouldn’t have a backend specified. I guess I had this idea because backends could be specified by whoever is going to use the module.

Joe Niland avatar
Joe Niland

I was trying to configure the backend after running terraform init. This wouldn’t work because when using -from-module you can’t init with module files already downloaded.

Joe Niland avatar
Joe Niland

Once I added

terraform {
  required_version = ">= 0.12.0"

  backend "s3" {}
}

to my module, then everything just worked - since all the env is already set up for S3 by the cold start process and geodesic Dockerfile.

Joe Niland avatar
Joe Niland

Just out of interest, what would you do if you need to support multiple backend providers in a root module?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Make sure you have “use tfenv” in your envrc

:--1:1

2019-11-21

2019-11-20

roth.andy avatar
roth.andy

Made a project to play around with CodeFresh, and get a jump on some future work. I’d love some feedback. It’s a philosophical cousin to Geodesic, with slightly different goals.

https://github.com/dadsgarage/dadsgarage

dadsgarage/dadsgarage

Container version of Dad’s garage. It’s full of tools, you spend lots of time in it, and you use it to build great things. https://hub.docker.com/r/dadsgarage/dadsgarage - dadsgarage/dadsga…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Btw, I don’t see the codefresh pipelines in the repo


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

suggest configuring pipelines to pull yaml from repo

roth.andy avatar
roth.andy

I was reading that it was more secure to leave the yaml that runs pull request pipelines from forks as inline yaml? I’m interested to hear your view as well. I would pull it out into a separate repo possibly, but putting it in the same repo as the code being run through the pipeline is something I want to get away from

roth.andy avatar
roth.andy

I want to avoid a malicious PR that changes the YAML for the pipeline

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

By default, pipelines are not run on forks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So unless you’re trying to protect it from yourself it should be ok

roth.andy avatar
roth.andy

I already don’t run without a /test comment, but all it would take is me forgetting to check if the yaml got changed before adding /test

roth.andy avatar
roth.andy

I have the option turned on

roth.andy avatar
roth.andy

Currently codefresh doesn’t run automatically. I have to comment /test. Running in forks is on

roth.andy avatar
roth.andy

So theoretically yeah, I’m covered. I would have to comment /test on a malicious PR for something bad to happen, but why not mitigate that too

roth.andy avatar
roth.andy

¯\_(ツ)_/¯

roth.andy avatar
roth.andy

I do want to extract it to a different repo, and have codefresh pull it from there

roth.andy avatar
roth.andy

What I’m currently contemplating is moving the master and tag builds from docker hub to codefresh. Codefresh builds the containers much faster than docker hub

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, we use codefresh for that too

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the layer caching makes it much faster

roth.andy avatar
roth.andy

I did see that codefresh will let me push 2 tags at once. Docker hub actually makes me build the image twice, which is totally silly

roth.andy avatar
roth.andy

So I can push latest and the release tag at the same time from one build. Very nice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lol!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

love the name. so perfectly fitting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@roth.andy will take a look tomorrow

2019-11-19

Joe Niland avatar
Joe Niland

I’ve used the reference-architectures to set up the base repos for each account. I’m now adding specific modules to build app infrastructure. I have a couple of general questions:

  1. Once everything is created, should I remove the <namespace>-root-bootstrap user and role?
  2. Say I create a module at dev.example.net/conf/network/ which creates VPC & subnets, then I have another at dev.example.net/conf/data. Would you create a main.tf in the conf/ dir which references the above and can be used to create all the modules with one terraform apply? This would mainly be for convenience, but also to make sure you deployed every module in the repo. I’m used to Salt states, where you often apply many states with a single command.
  3. I noticed that the [email protected] users created by Cold Start can assume role to arn:aws:iam::XXXXX:role/example-root-admin, which has the AdministratorAccess policy, but this policy is not directly attached to the example-root-admin group. I know we try not to use the Console often, but sometimes it is required/useful. Also clients may prefer it. It’s an extra step to assume role in the console, so just wondering if it would be bad to assign the AdministratorAccess policy directly to the example-root-admin group, since the net effect is the same.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(btw, if you break your message into multiple msgs, easier to respond)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Once everything is created, should I remove the <namespace>-root-bootstrap user and role?

:100:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s only needed in the very, very beginning

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using something like make or terragrunt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

make has gotten a lot of attention lately (and for good reason)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but one thing that’s often neglected (and not obvious to newcomers), is that targets are first and foremost files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
file.txt: foo.txt
  cp -a foo.txt $@
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

my point is that if file.txt is older than foo.txt, then it performs the action

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is interesting if you think about it for terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

since using modification times, it can be derived if things have changed and need updating
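
A toy illustration of that mtime behavior, using bash’s -nt test in place of make (file names are hypothetical):

```shell
workdir=$(mktemp -d) && cd "$workdir"
echo data > foo.txt

# Mimic a make rule "file.txt: foo.txt": rebuild only when the
# prerequisite is newer than the target (or the target is missing).
build() {
  if [ foo.txt -nt file.txt ]; then
    cp foo.txt file.txt
    echo rebuilt
  else
    echo up-to-date
  fi
}

build   # target missing    -> prints "rebuilt"
build   # target is current -> prints "up-to-date"
```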

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyways, see #terragrunt for the apply-all type stuff

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

re: 3, we prefer to always require assuming a role

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so things are deliberate

Joe Niland avatar
Joe Niland

Thanks very much @Erik Osterman (Cloud Posse). Sorry about the multi-part message.

Joe Niland avatar
Joe Niland

I’ve been getting to know make since finding your stuff, so I’ll try creating a Makefile for each account repo and see how it goes.

2019-11-18

SweetOps #geodesic avatar
SweetOps #geodesic
05:00:04 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
05:02:06 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Nov 27, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

chrism avatar
chrism

any quick tools to generate a “provider file” with the versions it recommends pinning

chrism avatar
chrism
Deprecating Terraform 0.11 Support in Terraform Providers

During the upcoming months, we will begin deprecating support for Terraform 0.11 in new releases of Terraform providers we officially maintain, beginning with the AWS, AzureRM, Goo…

2019-11-13

chrism avatar
chrism

How do you run “make lint” without it running terraform 0.11?

chrism avatar
chrism

tried passing in env var but its got a fetish for 0.11

chrism avatar
chrism
Add support for Mixed Instance Spot Policy Autoscaling. by ChrisMcKee · Pull Request #17 · cloudposse/terraform-aws-ec2-autoscale-group

Does as the label says; adds an example using it which I used to test that it works as expected. The “make && make init” keeps trying to install and setup terraform 0.11 which is …

chrism avatar
chrism

meh, doesn’t do much anyway as it falls over using 0.12 at the validate step

chrism avatar
chrism

is gomplate supposed to be in that mass of stuff in the build harness? /bin/bash: gomplate: command not found — I’ve installed it and generated the file; still no idea why the build harness kept defaulting to 0.11 in the project, but I broke my way past it

joshmyers avatar
joshmyers

AFAICR gomplate isn’t installed inside geodesic, so you need that on your host

joshmyers avatar
joshmyers

I lied. It is, but often you will want to run make readme from outside geodesic

chrism avatar
chrism

It’s a tf repo, so it’s more the “build tools” bit; I suppose it’s awkward to get the right binary unless you presume (like the terraform side) it’s all Linux

chrism avatar
chrism

I used go get ... in the end and ran it. Instructions were unclear, so I winged it

Jeremy Grodberg avatar
Jeremy Grodberg

In general, you run make init to load the build harness, but then you still may need to run make readme/deps to get gomplate so you can run make readme.

:--1:2

2019-11-12

Joe Niland avatar
Joe Niland

The module directories created by reference-architectures are all for Terraform 0.11. I’m creating my own for my app but I want to use 0.12.

I saw that @aknysh recommended to change the Makefile like this:

-include ${TF_MODULE_CACHE}/Makefile

And to add export TF_MODULE_CACHE=.module to terraform.envrc

But I want to create ${TF_MODULE_CACHE} if it’s not there.

So far I have this:

$(shell mkdir -p ${TF_MODULE_CACHE})
-include ${TF_MODULE_CACHE}/Makefile


## Fetch the remote terraform module
deps:
	terraform init


## Reset this project
reset:
	rm -rf Makefile *.tf .terraform

Is there a better way?
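
One alternative sketch (untested; assumes GNU make): create the cache directory from an order-only prerequisite on deps instead of a parse-time $(shell ...). Note that -include silently skips a file that doesn’t exist yet, so the directory only has to exist by the time deps runs:

```make
TF_MODULE_CACHE ?= .module

-include $(TF_MODULE_CACHE)/Makefile

$(TF_MODULE_CACHE):
	mkdir -p $@

## Fetch the remote terraform module
deps: | $(TF_MODULE_CACHE)
	terraform init

## Reset this project
reset:
	rm -rf Makefile *.tf .terraform $(TF_MODULE_CACHE)
```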

2019-11-11

SweetOps #geodesic avatar
SweetOps #geodesic
05:00:11 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
05:02:39 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Nov 20, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Blaise Pabon avatar
Blaise Pabon

Sooo… the docs don’t actually describe how to get started with geodesic… I guessed that git clone https://github.com/cloudposse/geodesic.git && cd geodesic && geodesic would let me launch the shell. Next, it asked me for my passphrase (which I think I remember)

Blaise Pabon avatar
Blaise Pabon

I’ll tell you what, if I agree to be initiated into the sacred brotherhood of geodesic and I swear that I will not reveal the sacred mysteries to anyone, will you show me the answer to the riddle:

-> Run 'assume-role' to login to AWS
 ⧉  geodesic
 ✗   (none) ~ ⨠  
Jeremy Grodberg avatar
Jeremy Grodberg

@Blaise Pabon Geodesic makes a few assumptions that are not really documented AFAIK but flow from the way things are set up by the CloudPosse Reference Architecture: https://github.com/cloudposse/reference-architectures

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Jeremy Grodberg avatar
Jeremy Grodberg

Geodesic assumes that you are using AWS, and that you will get AWS credentials dynamically injected into your environment using some tool that allows you to assume an AWS IAM role. Typically that is aws-vault, but it can be aws-okta or anything. All that has to happen is that once you have your AWS credentials set up in your environment, you need to set the environment variable ASSUME_ROLE to the role name, after which the “Run assume-role” prompt will go away and the role name specified by ASSUME_ROLE will become part of the prompt.

Blaise Pabon avatar
Blaise Pabon

Oh! OK, I have been setting my AWS creds in $AWS_ACCESS_KEY etc. I will look over the reference arch setup more closely. Thanks for the tips. FWIW, I’m a fan of keeping a higher barrier to entry…. it keeps the day-trippers and riff-raff away. It’s not that Unix isn’t user-friendly, it’s just picky about who its friends are

2019-11-08

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/packages

Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

packages are now updated nightly using GitHub actions!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@aknysh can you test out @tamsky change: https://github.com/cloudposse/geodesic/pull/534

add Darwin support for bind mount of ssh-agent socket by tamsky · Pull Request #534 · cloudposse/geodesic

Newly available on Docker for Mac (Edge) release. https://docs.docker.com/docker-for-mac/edge-release-notes/ Someone should test that this doesn’t break on non-Edge Docker-for-Mac releases.

Blaise Pabon avatar
Blaise Pabon

I’m running Docker for Mac Edge … and I think geodesic is working, but I can’t figure out how to make it do anything.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Blaise Pabon can you give some examples?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you think about geodesic as a lightweight VM it might help

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it comes preinstalled with all the essential tools

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but you still need to add your apps

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so if you haven’t done that, it’s not going to do much

Blaise Pabon avatar
Blaise Pabon

well, I would like to create a cluster with kops (that’s already built in) and then I would probably add k9s and popeye to the image. At the moment, it is asking me to run assume-role but I don’t know the syntax.

Blaise Pabon avatar
Blaise Pabon

I tried:

/bin/bash assume-role arn:aws:iam::224064240808:role/masters.kops.dev.travellogic.k8s.local
assume-role arn:aws:iam::224064240808:role/masters.kops.dev.travellogic.k8s.local
assume-role arn:aws:iam::224064240808:user/blaise

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

assume-role is just a wrapper around aws-vault

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can only assume a role for something which has been previously configured in your ~/.aws/config
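
For illustration, an assumable role configured in ~/.aws/config generally looks like this (profile names, account ID, and MFA ARN here are all hypothetical):

```ini
[profile example-staging-admin]
# The role to assume; it must exist and trust the calling account.
role_arn = arn:aws:iam::123456789012:role/example-staging-admin
# Base credentials used to perform the sts:AssumeRole call.
source_profile = example
# Optional: require MFA for the assume-role call.
mfa_serial = arn:aws:iam::123456789012:mfa/your-user
```

With a profile like that in place, the role name is what assume-role (and aws-vault underneath it) would be given.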

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, those are not assume’able roles by the look of it)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

….AWS makes this all complicated

Blaise Pabon avatar
Blaise Pabon

ah! ok, I can use the nomenclature of my .aws/config

:100:1
aknysh avatar
aknysh

Will test

2019-11-07

Joe Niland avatar
Joe Niland

When using the reference-architectures project, would it be possible to adopt an existing account?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Joe Niland technically yes, but we don’t optimize for that since it’s not how we (as a company) work. There are too many variables to consider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s best to look at what geodesic represents: a strategy for bundling and shipping the tools and configuration for an environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we happen to have our flavor of how to do that, but nothing restricts how it is used

Joe Niland avatar
Joe Niland

@Erik Osterman (Cloud Posse) thanks. After I wrote that, I had a look through the account module and realised it’s not so easy.

Asking because I have a new client who has been working for a while in a single account and has AWS credits tied to it.

I’m working on creating the other accounts using your process and then will create a geodesic repo manually for the existing account. That’s the plan anyway!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, I think manually creating the geodesic repos in this case is the way to go

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

https://calendly.com/cloudposse if you want some quick pointers

Erik Osterman

Welcome to my scheduling page. Please follow the instructions to add an event to my calendar.

Joe Niland avatar
Joe Niland

Thanks Erik!


Jeremy Grodberg avatar
Jeremy Grodberg

In this situation, couldn’t he just make the existing account the root account and proceed from there?

Joe Niland avatar
Joe Niland

Hey Jeremy, I thought about doing that but there are a lot of resources in there. It’s a mix of important stuff, tests, etc. I thought it may be cleaner to start fresh and then migrate the bits they need across into the new structure.

:--1:1

2019-11-04

SweetOps #geodesic avatar
SweetOps #geodesic
05:00:06 PM

There are no events this week

Cloud Posse avatar
Cloud Posse
05:05:23 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions about geodesic, get live demos and learn from others using it. Next one is Nov 13, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-11-03

2019-11-02

2019-11-01
