#terraform (2019-03)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2019-03-31

2019-03-30

Bruce avatar
Bruce

Hey guys, I’m looking for a module to create a client VPN server to connect to our aws private network (home of our vault server). Any suggestions?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Bruce we have this module to create a vpn connection on AWS https://github.com/cloudposse/terraform-aws-vpn-connection

cloudposse/terraform-aws-vpn-connection

Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network - cloudposse/terraform-aws-vpn-connection

Maxim Tishchenko avatar
Maxim Tishchenko

hey guys, I’m trying to fetch users from a group via TF, but it seems to be impossible… is that correct?

I tried to fetch them like this

data "aws_iam_group" "admin_group" {
  group_name = "Admins"
}

but I can’t get the user list from this data source…

I also tried to fetch users filtered by group like this

data "aws_iam_user" "admins" {
}

but it doesn’t have such a filter.

can anybody help me?

antonbabenko avatar
antonbabenko

This is true; there does not seem to be a way to fetch the members of a group, so you have to use an external data source with the AWS CLI (or some other workaround).
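
A minimal sketch of that workaround, assuming the AWS CLI and jq are available locally (the data source and output names are illustrative):

# Hypothetical: shell out to the AWS CLI via the external data source.
data "external" "admin_group_members" {
  program = ["bash", "-c", "aws iam get-group --group-name Admins | jq '{users: ([.Users[].UserName] | join(\",\"))}'"]
}

# The external data source only returns a flat map of strings,
# so the user names come back as one comma-separated string.
output "admin_users" {
  value = "${split(",", data.external.admin_group_members.result["users"])}"
}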

Maxim Tishchenko avatar
Maxim Tishchenko

yeah.. thanks

ldlework avatar
ldlework

Why would `count = "${var.alb_target_group_arn == "" ? 1 : 0}"` produce `module.backend-worker.module.task.aws_ecs_service.default: aws_ecs_service.default: value of 'count' cannot be computed`?

ldlework avatar
ldlework

Holy crap, you can’t have a submodule that uses dynamic counts?!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, more or less. It’s massively frustrating. It used to work better in older versions of terraform.
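
For reference, a minimal sketch of the failing pattern (resource and module names are illustrative): the submodule’s count depends on a value that is only known at apply time.

resource "aws_lb_target_group" "default" {
  name     = "example"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"
}

module "task" {
  source               = "./modules/task"
  alb_target_group_arn = "${aws_lb_target_group.default.arn}" # computed at apply time
}

# inside ./modules/task/main.tf:
# resource "aws_ecs_service" "default" {
#   count = "${var.alb_target_group_arn == "" ? 1 : 0}" # fails: 'count' cannot be computed
#   ...
# }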

ldlework avatar
ldlework

I’m stunned.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in the end, we need something like sass for hcl.

ldlework avatar
ldlework

Is this issue fixed in 0.12?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nope

ldlework avatar
ldlework

lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
pulumi/pulumi

Define cloud apps and infrastructure in your favorite language and deploy to any cloud - pulumi/pulumi

ldlework avatar
ldlework

lol is cloudposse going to migrate to that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No, it would be too much effort.

ldlework avatar
ldlework

What about using a system like Jinja2 to generate HCL?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s what some people are doing

ldlework avatar
ldlework

Basically do what Saltstack does

ldlework avatar
ldlework

I see.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem with that IMO is unless there is an open framework for doing it consistently across organizations, it leads to snowflakes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if there were an established way of doing it (e.g. like Sass for CSS), then maybe I could get behind it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

here was a fun hack by @antonbabenko https://github.com/antonbabenko/terrible

antonbabenko/terrible

Let’s orchestrate Terraform configuration files with Ansible! Terrible! - antonbabenko/terrible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(the name is so apropos)

ldlework avatar
ldlework

I just thought of a nice project subtitle for pulumi:

Pulumi: Write your own bugs instead!
ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) Saltstack literally uses Jinja2 so at least there is prior-art there.

ldlework avatar
ldlework

It generates YAML. Compare this to ansible, which extends YAML in silly ways to give it dynamic features.

ldlework avatar
ldlework

It’s much easier to understand a template that generates standard YAML. Easier to debug too. And the potential abstractions are much stronger.

ldlework avatar
ldlework

But with Jinja2 you could easily have, say, conditional load_balancer blocks in your aws_ecs_service resource.

ldlework avatar
ldlework

shakes fist

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not saying there’s no prior art. I’m saying there’s no canonical way of doing it so one person does it in jinja, another person does it in ansible, another in salt, another using gotemplates, another using bash scripting, etc. So what we end up with is a proliferation of incompatible approaches. Even two people using jinja will probably approach it differently. Also, I don’t like a generalized tool repurposed for templating HCL (the way helm has done for YAML). Instead, I think it should be a highly opinionated, purpose built tool that uses a custom DSL which generates HCL.

ldlework avatar
ldlework

I mean in that case, HCL itself just needs to improve. I was thinking more stop-gap measure.

ldlework avatar
ldlework

But I also did not think about the fact that if you have some module that’s parametrically expanded by some template system, there’s no way to do that when one module calls another. It basically wouldn’t work anyway.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ideally HCL itself should improve - and it is, by leaps and bounds, in 0.12. However, Hashimoto himself said this whole count issue is incredibly difficult for them to solve and they don’t have a definitive solution for it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
MastodonC/terraboot

DSL to generate terraform configuration and run it - MastodonC/terraboot

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(not recommending it… just stumbled across it)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
joshdk/tfbundle

Bundle a single artifact as a Terraform module. Contribute to joshdk/tfbundle development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@tamsky


rohit avatar
rohit

@Erik Osterman (Cloud Posse) I don’t understand where tfbundle would be useful. could you please elaborate on the use case(s)?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

complex lambdas, e.g. ones that npm install a lot of deps.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using artifacts is nice b/c those artifacts are immutable

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, anyone can then deploy that lambda even if they don’t have npm installed locally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

typically, terraform modules that deploy an advanced lambda expect the end user to have a full local dev environment with all the build tools required to build the lambda and zip it up. if that lambda is instead distributed as a zip, it mitigates that problem.

rohit avatar
rohit

makes sense now

rohit avatar
rohit

Thanks for elaborating

rohit avatar
rohit

What is the best way to version your own terraform modules? So that you use a particular (well-tested) version of your module in prod and can also actively work on it

ldlework avatar
ldlework

By splitting your sub-modules from your root-modules and using a git-tag as the source value on the module invocation.
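
For example (the repo URL and tag are illustrative):

module "vpc" {
  source = "git::https://github.com/example-org/terraform-example-vpc.git?ref=tags/1.2.3"
}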

rohit avatar
rohit

I was thinking along the same lines but wasn’t exactly sure how to do it. Do you have examples that I can look at?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those are all of our examples of invoking modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

everytime we merge to master, we cut a new release

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then in prod or staging, we do something like terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/0.35.1

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(technically, we use cloudposse/tfenv to make this easier)

rohit avatar
rohit

I have a root module that invokes my submodules (compute, storage, networking), and each of my submodules uses different resources/modules. what would be the best way to version in this scenario?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You version your root modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then the root modules are what you promote to each account/stage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you familiar with terraform init -from-module=?

rohit avatar
rohit

nope

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s the missing piece

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Command: init - Terraform by HashiCorp

The terraform init command is used to initialize a Terraform configuration. This is the first command that should be run for any new or existing Terraform configuration. It is safe to run this command multiple times.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Optionally, init can be run against an empty directory with the -from-module=MODULE-SOURCE option, in which case the given module will be copied into the target directory before any other initialization steps are run.

rohit avatar
rohit

Interesting. So if my submodules are currently pointing to ?ref=tags/1.2.2, then I could run something like

terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/2.1.2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

exactly

rohit avatar
rohit

it will replace the source code in my submodules with the latest version’s code

rohit avatar
rohit

correct ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it won’t replace though; it will error if your current directory contains *.tf files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you could have a Makefile with an init target

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that calls the above
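
Something like this, reusing the module URL from above (a sketch, not a definitive layout):

MODULE_SOURCE ?= git::https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/0.35.1

init:
	terraform init -from-module=$(MODULE_SOURCE)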

rohit avatar
rohit

Ohh. I would have to think more about the Makefile and init

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or you can do this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

export TF_CLI_ARGS_init="-from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/2.1.2"

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then when you run terraform init, it will automatically import that module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(that’s what we do)

rohit avatar
rohit

it will automatically import the latest version if i do it exactly like you do

rohit avatar
rohit

sounds like magic

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s an incredibly DRY way of doing terraform

rohit avatar
rohit

So where does cloudposse/tfenv fit in the process ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the TF_CLI_ARGS_* envs contain a long list of concatenated arguments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you want to toggle just one argument, that’s a pain. for example, we want to target the prefix in the S3 bucket for state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we define key/value pairs of environment variables, then use tfenv to combine them into the argument above.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We discuss tfenv in more detail in the archive; search here: https://archive.sweetops.com/terraform/

terraform

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

^F

rohit avatar
rohit

This is awesome

rohit avatar
rohit

It is very tempting

rohit avatar
rohit

I will have to try it soon

rohit avatar
rohit

@Erik Osterman (Cloud Posse) you guys are doing amazing work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @rohit!! Appreciate it

ldlework avatar
ldlework

Why would I be getting:

 module.frontend.module.codepipeline.module.ecs_codepipeline.module.github_webhooks.provider.github: 1:3: unknown variable accessed: var.github_organization in:
ldlework avatar
ldlework

It’s a variable problem inside of ecs-codepipeline?

ldlework avatar
ldlework

scratches his head.

ldlework avatar
ldlework

“Error: Error asking for user input: 1 error(s) occurred:”

ldlework avatar
ldlework

oh man i have no idea what this is

ldlework avatar
ldlework
2019-03-30T17:24:05.054-0500 [DEBUG] plugin.terraform-provider-github_v1.3.0_x4: plugin address: timestamp=2019-03-30T17:24:05.054-0500 network=unix address=/tmp/plugin407023942
2019/03/30 17:24:05 [ERROR] root.frontend.codepipeline.ecs_codepipeline.github_webhooks: eval: *terraform.EvalOpFilter, err: 1:3: unknown variable accessed: var.github_organization in:
ldlework avatar
ldlework

ah ok fixing some other errors fixed it

ldlework avatar
ldlework

I have two ecs-codepipelines working great. When I tried to deploy a third, I got:

Action execution failed
Error calling startBuild: User: arn:aws:sts::607643753933:assumed-role/us-west-1-qa-backend-worker-codepipeline-assume/1553989516226 is not authorized to perform: codebuild:StartBuild on resource: arn:aws:codebuild:us-west-1:607643753933:project/us-west-1-qa-backend-worker-build (Service: AWSCodeBuild; Status Code: 400; Error Code: AccessDeniedException; Request ID: deba2268-5345-11e9-a5ef-d15213ce18a0)

I’m confused because the ecs-codepipeline module does not take any IAM information as variables…

ldlework avatar
ldlework

Looks like the ARNs mentioned are unique to that third service, “backend-worker”

ldlework avatar
ldlework

So like there shouldn’t be any collision with the others. In which case why wouldn’t the IAM stuff that the module creates for itself be working?

ldlework avatar
ldlework

Can anyone please help me figure out why the ecs-codebuild module would suffer this permission error when trying to build the container?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrmmmm that’s odd… you were able to deploy two without any problems, but the third one errors?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
samsung-cnct/terraform-provider-execute

execute arbitrary commands on Terraform create and destroy - samsung-cnct/terraform-provider-execute

antonbabenko avatar
antonbabenko

Welcome to terragrunt


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

except for terragrunt is a strap on

antonbabenko avatar
antonbabenko

But yeah, I see the value in such a provider as a replacement for a script which would do something before the main “terraform apply”.

antonbabenko avatar
antonbabenko

More watchers than stars on GitHub usually means that all of the company’s employees are watching

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh yea, and in this case it also hasn’t been updated for a couple of years

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i highly doubt it (terraform-provider-execute) is compatible with the current version of tf

antonbabenko avatar
antonbabenko

certainly not, but there were similar providers out there.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this opens up interesting possibilities

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyone ever kick the tires on this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…basically execute any command as part of the apply or destroy phase; this is different from local-exec on null_resource
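
For contrast, the built-in pattern mentioned here is local-exec on a null_resource (a sketch; the echo commands are placeholders):

resource "null_resource" "example" {
  provisioner "local-exec" {
    command = "echo created"
  }

  provisioner "local-exec" {
    when    = "destroy"
    command = "echo destroyed"
  }
}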

ldlework avatar
ldlework

lol I think my issue is coming from the fact that my codepipeline is getting created before the service/task

ldlework avatar
ldlework

the service uses the json from the container, the container uses the ecr registry url from the codepipeline and so yeah

ldlework avatar
ldlework

the world makes sense again

ldlework avatar
ldlework

phew!

ldlework avatar
ldlework

guess I have to break the ECR out of my codepipeline-abstracting module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i love that feeling

2019-03-29

Nikola Velkovski avatar
Nikola Velkovski

Hey people, I just noticed some interesting behavior when using the ECR module: https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L34 makes apply fail if it’s empty. I use the label module but I do not have stage set, so I am wondering if adding a simple conditional here: https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L11 would be good enough?

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Nikola Velkovski avatar
Nikola Velkovski

The error is

aws_ecr_lifecycle_policy.default: InvalidParameterException: Invalid parameter at 'LifecyclePolicyText' failed to satisfy constraint: 'Lifecycle policy validation failure:
string "" is too short (length: 0, required minimum: 1)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm… why don’t you use something for the stage, e.g. test or testing? what’s the reason not to use it? (all the modules were designed with the assumption of having namespace, stage and name)

Nikola Velkovski avatar
Nikola Velkovski

no, that’s just because I am used to environment rather than stage

Nikola Velkovski avatar
Nikola Velkovski

nvm I can pass env in place of stage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep, those are just labels

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(environment and stage are the same in all cases we use the modules)

Nikola Velkovski avatar
Nikola Velkovski

thanks for the reply though

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can even change the order if using terraform-null-label

Nikola Velkovski avatar
Nikola Velkovski

oh yeah, I really dig that module

Nikola Velkovski avatar
Nikola Velkovski

kudos for that one

Julio Tain Sueiras avatar
Julio Tain Sueiras

hi

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Julio Tain Sueiras

Julio Tain Sueiras avatar
Julio Tain Sueiras

just wanted to ask your opinion on a terraform LSP?

Julio Tain Sueiras avatar
Julio Tain Sueiras

(though right now I’m mostly focusing on adding provider/provisioner completion to my plugin, and adding terraform v0.12 support once it is GA)

Julio Tain Sueiras avatar
Julio Tain Sueiras

also any vim + terraform users here?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what is terraform LSP? https://langserver.org ?

Julio Tain Sueiras avatar
Julio Tain Sueiras

correct

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are talking about autocompletion/syntax highlighting/error highlighting, a lot of people are using VScode (with terraform plugin) or JetBrains IDEA (with terraform plugin)

Julio Tain Sueiras avatar
Julio Tain Sueiras

I know that part(also a lot of people use my plugin for vim as well)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice plugin @Julio Tain Sueiras

Julio Tain Sueiras avatar
Julio Tain Sueiras

since my thinking is, an LSP approach would allow any editor, and I could just focus on new features rather than editor-specific implementations

loren avatar
loren

i like the LSP idea

Julio Tain Sueiras avatar
Julio Tain Sueiras

for ex. vlad’s plugin is very good

Julio Tain Sueiras avatar
Julio Tain Sueiras

but because it is a standard jetbrains plugin

Julio Tain Sueiras avatar
Julio Tain Sueiras

meaning that updates are quite far apart

Julio Tain Sueiras avatar
Julio Tain Sueiras

but terraform providers get updated very frequently

Julio Tain Sueiras avatar
Julio Tain Sueiras

(for that issue, I implemented a bot that auto-updates, and does version-based completion, in case you want to use data from an older version of the provider)

Julio Tain Sueiras avatar
Julio Tain Sueiras

also an LSP would allow even more nonstandard editors

Julio Tain Sueiras avatar
Julio Tain Sueiras

to have terraform features

Julio Tain Sueiras avatar
Julio Tain Sueiras

like Atom(RIP)

Julio Tain Sueiras avatar
Julio Tain Sueiras

P.S. @Andriy Knysh (Cloud Posse) for the vscode terraform plugin, right now it only supports aws/gcp/azure

Julio Tain Sueiras avatar
Julio Tain Sueiras

he is implementing a new feature that would load the completion data from my plugin

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh nice

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

Julio Tain Sueiras avatar
Julio Tain Sueiras

if you click on aws for example

Julio Tain Sueiras avatar
Julio Tain Sueiras

it has the data for every version of the provider

Julio Tain Sueiras avatar
Julio Tain Sueiras

since my thinking was, what if you want to lock to a version

Julio Tain Sueiras avatar
Julio Tain Sueiras

and only want the editor to have completion data for that version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

great, looks like you did a lot of work for that

Julio Tain Sueiras avatar
Julio Tain Sueiras

there is also extra small stuff like autocompleting a module’s attributes and arguments, autocompleting the module list from registry.terraform.io, evaluating interpolations, opening the documentation of the current parameter in a module, etc

maarten avatar
maarten

@Julio Tain Sueiras cool stuff!

Julio Tain Sueiras avatar
Julio Tain Sueiras

the entire reason I did the plugin is, I don’t want to use another editor

Julio Tain Sueiras avatar
Julio Tain Sueiras

but if there is no autocompletion, then terraform is quite annoying to work with

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(yea, that’s what it says on your GitHub profile )

Julio Tain Sueiras avatar
Julio Tain Sueiras

but yeah, once TF0.12 is on GA

Julio Tain Sueiras avatar
Julio Tain Sueiras

then I will work on an LSP implementation

Julio Tain Sueiras avatar
Julio Tain Sueiras

especially because Golang has the AST for HCL

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you going to use Golang or Ruby?

Julio Tain Sueiras avatar
Julio Tain Sueiras

go

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even better

Julio Tain Sueiras avatar
Julio Tain Sueiras

I did several providers for terraform & added new features to official terraform providers (vsphere) and packer (vsphere as well)

maarten avatar
maarten

@Julio Tain Sueiras Can you add a Microsoft Clippy in case a user is writing iam_role_policy_attachment ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but Clippy did not end up well

maarten avatar
maarten

party pooper

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/terraform-provider-packer

A Terraform Provider to generate Packer JSON. Contribute to juliosueiras/terraform-provider-packer development by creating an account on GitHub.

Julio Tain Sueiras avatar
Julio Tain Sueiras

also how would that work

Julio Tain Sueiras avatar
Julio Tain Sueiras

?

maarten avatar
maarten

Clippy: It looks like you want to use iam_role_policy_attachment, are you sure about that ?

Julio Tain Sueiras avatar
Julio Tain Sueiras

XD

Julio Tain Sueiras avatar
Julio Tain Sueiras

I actually use that, since I don’t like dealing with json

maarten avatar
maarten

do you also correct formatting like hclfmt does ?

Julio Tain Sueiras avatar
Julio Tain Sueiras

my plugin has a dependency on vim-terraform

Julio Tain Sueiras avatar
Julio Tain Sueiras

which has auto-format

Julio Tain Sueiras avatar
Julio Tain Sueiras

using terraform fmt

Julio Tain Sueiras avatar
Julio Tain Sueiras

also any of you guys use vault?

maarten avatar
maarten

I’m working with it at the moment

Julio Tain Sueiras avatar
Julio Tain Sueiras

since I had a meeting with the HashiCorp people (Toronto division) around a month ago

Julio Tain Sueiras avatar
Julio Tain Sueiras

and I mentioned that I wish there were a similar thing for vault, like the aws_iam_policy_document data source

Julio Tain Sueiras avatar
Julio Tain Sueiras

and they have it now

Julio Tain Sueiras avatar
Julio Tain Sueiras

but……………

Julio Tain Sueiras avatar
Julio Tain Sueiras

it’s not in the terraform docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why not?

Julio Tain Sueiras avatar
Julio Tain Sueiras

not sure

Julio Tain Sueiras avatar
Julio Tain Sueiras

but it is part of the official release

Julio Tain Sueiras avatar
Julio Tain Sueiras

not a beta/alpha

Julio Tain Sueiras avatar
Julio Tain Sueiras

so you can write all the vault policies

Julio Tain Sueiras avatar
Julio Tain Sueiras

without using heredocs
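
Presumably that looks something like this (a sketch based on the vault provider’s vault_policy_document data source; the names are illustrative):

data "vault_policy_document" "app" {
  rule {
    path         = "secret/data/app/*"
    capabilities = ["read", "list"]
    description  = "allow the app to read its own secrets"
  }
}

resource "vault_policy" "app" {
  name   = "app"
  policy = "${data.vault_policy_document.app.hcl}"
}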

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s nice

Julio Tain Sueiras avatar
Julio Tain Sueiras

P.S. the biggest, best thing I did was for the kubernetes pod resource in terraform

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) https://asciinema.org/a/158264

Complete Nested Block Completion attachment image

Recorded by juliosueiras

Julio Tain Sueiras avatar
Julio Tain Sueiras

the kube provider is a nightmare to work with without autocompletion

antonbabenko avatar
antonbabenko

You can always use json2hcl (https://github.com/kvz/json2hcl) to write packer.json as hcl and then just convert it to valid json. I use it pretty often for various cases.

kvz/json2hcl

Convert JSON to HCL, and vice versa . Contribute to kvz/json2hcl development by creating an account on GitHub.

antonbabenko avatar
antonbabenko

Though more often I use yaml<>json using https://github.com/dbohdan/remarshal

dbohdan/remarshal

Convert between TOML, YAML and JSON. Contribute to dbohdan/remarshal development by creating an account on GitHub.

Julio Tain Sueiras avatar
Julio Tain Sueiras

the use case I did for packer is because I want to reference terraform resources in packer

Julio Tain Sueiras avatar
Julio Tain Sueiras

and json2hcl is a hard convert

Julio Tain Sueiras avatar
Julio Tain Sueiras

but doesn’t account for several things

Julio Tain Sueiras avatar
Julio Tain Sueiras

like, terraform doesn’t understand the idea of sequential execution

Julio Tain Sueiras avatar
Julio Tain Sueiras

(so provisioners are going to have problems)

Julio Tain Sueiras avatar
Julio Tain Sueiras

I use my provider with local_file resource

Julio Tain Sueiras avatar
Julio Tain Sueiras

then null_resource with packer build

Julio Tain Sueiras avatar
Julio Tain Sueiras

or validate

antonbabenko avatar
antonbabenko

@Julio Tain Sueiras I used your documentation generators back in the day (2+ years ago, in my project - https://github.com/antonbabenko/terrapin). I mentioned you at the bottom of the README. Thanks for that! It saved me some time. I will probably contact you in the future.

antonbabenko/terrapin

[not-WIP] Terraform module generator (not ready for its prime time, yet) - antonbabenko/terrapin

Julio Tain Sueiras avatar
Julio Tain Sueiras

no problem

Julio Tain Sueiras avatar
Julio Tain Sueiras

one of the funniest things I did (with my plugin)

Julio Tain Sueiras avatar
Julio Tain Sueiras

was doing terraform with my android phone

antonbabenko avatar
antonbabenko

hehe, this one is pretty amazing usage of Terraform - https://www.youtube.com/watch?v=--iS_LH-5ls

Julio Tain Sueiras avatar
Julio Tain Sueiras

nice!!

2019-03-28

oscarsullivan_old avatar
oscarsullivan_old

Be careful when terraform destroys an ECR repo… it isn’t like an S3 bucket, where it warns you if it is not empty… your images will go poof

ldlework avatar
ldlework

@oscarsullivan_old i had the other problem the other day: it wouldn’t delete the content inside of an s3 bucket when deleting an ecs-codepipeline

oscarsullivan_old avatar
oscarsullivan_old

That’s a good thing! Don’t want to accidentally remove data, right? Is that what you mean?

ldlework avatar
ldlework

I mean, ECR images when I literally tell terraform to destroy it?

ldlework avatar
ldlework

It should at least be an option..

inactive avatar
inactive

Guys, I have a question regarding team collaboration. What do you guys use to work in teams when running terraform? I know that terraform provides a solution to this on their paid tiers, but they are cost prohibitive for us. Currently, our workaround has been to share our terraform templates via a cloud drive which is replicated locally to each developer workstation. It sorta works, but we are running into several limitations and issues. Any suggestions?

Tim Malone avatar
Tim Malone

we just commit into a git repo, and use remote state (s3 with locking via dynamodb). have you seen atlantis? pretty much the free version of terraform enterprise, as i understand
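
A sketch of that backend setup (bucket and table names are illustrative):

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks" # lock table so two applies can't collide
    encrypt        = true
  }
}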

inactive avatar
inactive

we already use s3 as a backend to save the terraform state. but we still need a way to run ‘terraform apply’ on different machines. using git only takes care of the source code, but it ignores several other files which must also be shared with team members

inactive avatar
inactive

i checked atlantis earlier but i couldn’t find anything specific to team collaboration

Tim Malone avatar
Tim Malone

atlantis’ whole purpose is team collab - their tagline is even ‘Start working on Terraform as a team.’ https://www.runatlantis.io/

:--1:1
Tim Malone avatar
Tim Malone

re the other files that need to be shared - what sort of files are we talking?

Nikola Velkovski avatar
Nikola Velkovski

This is IMO the hardest part of terraform. I would recommend using workspaces

Nikola Velkovski avatar
Nikola Velkovski

In order to solve the team issue, you would need some kind of pipeline that has queues.

inactive avatar
inactive

We have ssh keys that are used when terraform deploys our bastion hosts. We also have terraform.tfvars which include credentials which cannot be pushed to git. And finally, our .terraform directory is not pushed to git which then forces each developer to reinitialize their local terraform environment with ‘terraform init’. We’ve been able to successfully do all of this using OneDrive… but I feel like this is a silly workaround and there must be a better solution out there that does not require spending

Nikola Velkovski avatar
Nikola Velkovski

I’ve used jenkins in the past and it worked quite well.

inactive avatar
inactive

Another option for us was to create a shared VM which multiple users could use, one at a time (ughh)

Nikola Velkovski avatar
Nikola Velkovski

First you need to solve the issue you’ve mentioned above though.

Nikola Velkovski avatar
Nikola Velkovski

A central key store for the secrets is one option.

inactive avatar
inactive

A third option that we are considering is using an NFS shared drive (even Amazon EFS) where we store all of our terraform files

inactive avatar
inactive

And yes, we use Jenkins already to schedule and automate our deployments, but we still need to test them manually in our local machines

Nikola Velkovski avatar
Nikola Velkovski

test meaning ?

inactive avatar
inactive

Developer A makes a change to a terraform template locally on his machine. He runs ‘terraform apply’. It works.

inactive avatar
inactive

He goes out to lunch. Developer B needs to pick up where he left of on his own machine

Nikola Velkovski avatar
Nikola Velkovski

Well, why not put that on jenkins ?

Nikola Velkovski avatar
Nikola Velkovski

or any other ci for that matter.

inactive avatar
inactive

you mean use Jenkins as a shared VM?

Nikola Velkovski avatar
Nikola Velkovski

and for this terraform plan would suffice

Nikola Velkovski avatar
Nikola Velkovski


you mean use Jenkins as a shared VM?
wait first you need a clear definition of a workflow

Nikola Velkovski avatar
Nikola Velkovski

when you have this, then you can jump on implementation.

hkaya avatar
hkaya

it doesn’t have to be jenkins, any pipeline would be fine, gitlab-ci or codefresh could also be used as a service, hence no vm involved

hkaya avatar
hkaya

as for the public ssh keys, check them in to your repo, that’s why they are called public

Tim Malone avatar
Tim Malone

definitely recommend some sort of secret share to get around the tfvars problem. we use lastpass and manually update our local envs, but we don’t have a lot of secrets - it could get painful if there were more. this could handle the ssh keys too (if they’re private)

if a full secret share is too much to jump to right away, you could even store tfvars in s3 to get started with - but of course tightly control the access

re init’ing the local .terraform directory - that shouldn’t take very long to do, and terraform will tell you if you need to update - it shouldn’t be a deal breaker to require people to do that (in fact, it’s almost a necessity - it downloads the provider binaries suitable for the person’s machine)

Abel Luck avatar
Abel Luck

we commit everything to git, but use mozilla’s sops utility to encrypt the secrets

Abel Luck avatar
Abel Luck
mozilla/sops

Secrets management stinks, use some sops! Contribute to mozilla/sops development by creating an account on GitHub.

Abel Luck avatar
Abel Luck

there’s a fantastic terraform provider for sops that lets you use the encrypted yaml/json seamlessly https://github.com/carlpett/terraform-provider-sops

carlpett/terraform-provider-sops

A Terraform provider for reading Mozilla sops files - carlpett/terraform-provider-sops
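
Usage is roughly like this (a sketch; the file name and key are illustrative):

data "sops_file" "secrets" {
  source_file = "secrets.enc.yaml"
}

# decrypted values come back as a map of strings under .data,
# e.g. a top-level db_password key in the YAML:
output "db_password" {
  value = "${data.sops_file.secrets.data["db_password"]}"
}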

inactive avatar
inactive

I appreciate all of the feedback you have provided so far. The jenkins/ci pipeline makes sense to me, but only for automated deployments. We still want independent execution to be done manually via our local terminals on our Macs. I will look into the secret-share suggestions that you have pointed out, as that does make sense. Thanks again.

inactive avatar
inactive

I just checked sops and this is what I think makes the most sense. It will allow to check everything into git. Thanks @Abel Luck

keen avatar

I’m half a fan of git-crypt https://github.com/AGWA/git-crypt - it’s cleanly transparent, as long as you have it set up anyway. (if you don’t, it gets a bit uglier).

AGWA/git-crypt

Transparent file encryption in git. Contribute to AGWA/git-crypt development by creating an account on GitHub.

keen avatar

your specified files are decrypted on disk, but encrypted in the repo - transparently during the commit/checkout.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also use AWS SSM param store + chamber to store secrets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it works in a container on local machine

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and from a CI/CD pipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    # Deploy chart to cluster using helmfile (with chamber secrets)
    - "chamber exec kops app -- helmfile --file config/helmfile.yaml --selector component=app sync --concurrency 1 --args '--wait --timeout=600 --force --reset-values'"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

chamber exec namespace -- terraform plan

ldlework avatar
ldlework

@inactive I’m particularly fond of using gopass as a git-based secret store https://github.com/gopasspw/gopass which has first class support for summon which is a tool that injects secrets as environment variables into processes (like when you run terraform apply) https://github.com/cyberark/summon Here is an example of how Terraform is run:

summon -p $(which gopass) -f secrets.yml terraform apply
gopasspw/gopass

The slightly more awesome standard unix password manager for teams - gopasspw/gopass

cyberark/summon

CLI that provides on-demand secrets access for common DevOps tools - cyberark/summon

ldlework avatar
ldlework

Your secrets.yml file simply is a mapping between environment variables you want to set on the terraform process, and the secret within the gopass store so like:

TF_VAR_github_oauth_token: !var roboduels/github/oauth-token
TF_VAR_discord_webhook_url: !var roboduels/discord/webhook-url
ldlework avatar
ldlework

Since the variables use the TF_VAR_ prefix, they will actually be set as Terraform variables on your root module!

inactive avatar
inactive

Thanks again to all. Very useful tips.

loren avatar
loren

This thread should be captured somewhere, fantastic set of resources and options

Tim Malone avatar
Tim Malone

someone could write a blog post based on it??

2019-03-27

jaykm avatar
jaykm

Hello everyone, I just joined this channel.

I want to ask one thing related to API Gateway. I’m able to create the API gateway, methods, and integration with lambda, and I also configured all the headers, origins, and methods in the response parameters, but CORS has not been configured. Can someone help me dig in? I can also send the terraform script showing how I’m implementing it. @sahil FYI

jaykm avatar
jaykm

@foqal No, I’ve already given API Gateway permission to invoke the lambda, and I can access the lambda from API Gateway, but if I make a request from the browser (client) then it gives me a CORS error, and you can also see the error.

jaykm avatar
jaykm

@foqal

oscarsullivan_old avatar
oscarsullivan_old

Has anyone had luck using https://github.com/cloudposse/terraform-aws-ecr to set up ECR?

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was used many times before


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the issue?

oscarsullivan_old avatar
oscarsullivan_old

2 probs.

1) In the example module invocation the “roles” argument is set. It isn’t in the documentation or variables.tf, and it also errors. I forked and did a make readme to see if the docs were just not regenerated, but the argument doesn’t exist.

2) In the examples for cicd_user for codefresh the outputs error with

terraform init
Initializing modules...
- module.cicd_user
  Getting source "git::https://github.com/cloudposse/terraform-aws-iam-system-user.git?ref=tags/0.4.1"
- module.ecr
- module.cicd_user.label
  Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.5.4"
- module.ecr.label

Initializing the backend...

Error: resource 'aws_iam_policy_attachment.login' config: "policy_login_arn" is not a valid output for module "ecr"



Error: resource 'aws_iam_policy_attachment.read' config: "policy_read_arn" is not a valid output for module "ecr"



Error: resource 'aws_iam_policy_attachment.write' config: "policy_write_arn" is not a valid output for module "ecr"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, here it was tested/deployed many months ago https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L65

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

oscarsullivan_old avatar
oscarsullivan_old

Thanks!

oscarsullivan_old avatar
oscarsullivan_old

Yep that looks like how I’ve set it up now

oscarsullivan_old avatar
oscarsullivan_old

Not sure why the doc has “roles” as an input then

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the docs were not updated

oscarsullivan_old avatar
oscarsullivan_old

Think it’s best to use the 0.2.x version?

oscarsullivan_old avatar
oscarsullivan_old

feel like I’m setting myself up for one if I use 4.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure, I tested it up to 0.2.9, did not test latest releases, but they were tested by other people in many deployments (but maybe the interface is different)

oscarsullivan_old avatar
oscarsullivan_old

hmm

oscarsullivan_old avatar
oscarsullivan_old

Using 0.4.0 I have an ECR with permissions

oscarsullivan_old avatar
oscarsullivan_old

so it can’t NOT be working

oscarsullivan_old avatar
oscarsullivan_old

ECR repo w/ permissions & lifecycle policy which was the premise

oscarsullivan_old avatar
oscarsullivan_old

Just the docs that are outdated it seems

midacts avatar
midacts

https://github.com/cloudposse/terraform-aws-ec2-instance Would I need to fork this to add a provisioner? We’d need to join Windows machines to the domain and run post-provisioning steps on Windows and Linux.

oscarsullivan_old avatar
oscarsullivan_old

Does anyone else find that the outputs defined inside a module don’t go to stdout when using the module?

oscarsullivan_old avatar
oscarsullivan_old

terraform output -module=mymodule apparently is a solution, but eh

ldlework avatar
ldlework

@oscarsullivan_old only the root module’s outputs are printed right?

oscarsullivan_old avatar
oscarsullivan_old

Yeh

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to add the outputs from the child module(s) to the outputs of the parent
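
i.e. something like this in the parent (module and output names are illustrative):

module "mymodule" {
  source = "./modules/mymodule"
}

output "instance_id" {
  value = "${module.mymodule.instance_id}"
}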

oscarsullivan_old avatar
oscarsullivan_old

Ah ok. That’s what I thought it was, but I was hoping to avoid that.

oscarsullivan_old avatar
oscarsullivan_old

Now that I know about -module=x, I at least have a way to not dupe them

antonbabenko avatar
antonbabenko

Just discovered https://codeherent.tech/ - new kid in town, or am I the last one to find it?

imiltchman avatar
imiltchman

Looks cool ^^

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

They have been reaching out to us

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Guess I should check it out

antonbabenko avatar
antonbabenko

It is too early and does not look for my cases or for my customers.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

watched the demo video… yea, an IDE for TF is not something we would support at this time

antonbabenko avatar
antonbabenko

I’d like to have a smart IDE with real-time validation and suggestions. Something which I could easily integrate into my existing workflows.

oscarsullivan_old avatar
oscarsullivan_old
camptocamp/terraboard

A web dashboard to inspect Terraform States - camptocamp/terraboard

oscarsullivan_old avatar
oscarsullivan_old

oh actually seeing slightly different use cases there

oscarsullivan_old avatar
oscarsullivan_old

that’s actually a tool to visually USE terraform not audit it

imiltchman avatar
imiltchman

modules.tf does visual->code, but it’d be good to be able to do code->visual. Think Atlantis, except instead of the large text output, you get a link to a nice diagram that describes the deltas.

imiltchman avatar
imiltchman

And ideally another one after the apply is complete, with all of the unknowns resolved

antonbabenko avatar
antonbabenko

@imiltchman I have some nice ideas and a POC for how to get code -> visual, but I lack the time or $$$ to work on that.

imiltchman avatar
imiltchman

I bet it’s a harder problem to solve than visual -> code

antonbabenko avatar
antonbabenko

It is pretty much equal with the same amount of details. More work is to convert from visual to cloudformation and from cloudformation to Terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suppose this appeals to a certain mode of developer and probably not the one using vim as their primary IDE ;)

imiltchman avatar
imiltchman

It might make it easier to do code reviews

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, as usual something like that looks nice and helpful at first, but then always gets in the way

johncblandii (Cloud Posse) avatar
johncblandii (Cloud Posse)

Does this example actually work? https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/examples/multi_port_mappings/main.tf#L25-L36

I got this error when using a similar approach, but with 8080/80 (container/host):

* aws_ecs_task_definition.default: ClientException: When networkMode=awsvpc, the host ports and container ports in port mappings must match.

So I sleuthed and found https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html:
If you are using containers in a task with the awsvpc or host network mode, exposed ports should be specified using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
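
In other words, under awsvpc the module input has to look something like this (a sketch; the values are illustrative):

port_mappings = [
  {
    containerPort = 8080
    hostPort      = 8080 # must equal containerPort (or be omitted) under awsvpc
    protocol      = "tcp"
  },
]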

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

PortMapping - Amazon Elastic Container Service

Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like not, needs a PR


johncblandii (Cloud Posse) avatar
johncblandii (Cloud Posse)

ok. just wanted to make sure i wasn’t in crazy town

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re holding office hours

Alex Siegman avatar
Alex Siegman

@Erik Osterman (Cloud Posse) Yeah I’m just listening in to see what’s up

imiltchman avatar
imiltchman

Has anyone ever gotten a diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue. message from Terraform?

Tim Malone avatar
Tim Malone

i’ve seen it a couple of times. usually just another apply fixed it… but i would make sure your state is backed up - if you don’t already have it versioned - just in case

imiltchman avatar
imiltchman

Thanks, it looks like it didn’t harm anything on my side either

ldlework avatar
ldlework

Maybe consider making https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git support building the alb conditionally

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

ldlework avatar
ldlework

the target_group / listener I mean

ldlework avatar
ldlework

It’s annoying to do in 0.11 I know - but that’s exactly why I’d prefer if you maintained the HCL

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll consider that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

2019-03-26

deftunix avatar
deftunix

hi everyone

deftunix avatar
deftunix

I am using terraform to provision a consul cluster on aws using the https://github.com/hashicorp/terraform-aws-consul module. do you have any

hashicorp/terraform-aws-consul

A Terraform Module for how to run Consul on AWS using Terraform and Packer - hashicorp/terraform-aws-consul

deftunix avatar
deftunix

suggestions for terminating instances one by one when the launch configuration changes?

imiltchman avatar
imiltchman

I am suddenly having trouble passing the route 53 zone nameservers as records to the NS record

imiltchman avatar
imiltchman

The error I get is records.0 must be a single value, not a list

imiltchman avatar
imiltchman

I am passing in ["${aws_route53_zone.hostedzone.*.name_servers}"]

antonbabenko avatar
antonbabenko

Try: ["${flatten(aws_route53_zone.hostedzone.*.name_servers)}"]

imiltchman avatar
imiltchman

Thanks, I just stumbled on that

antonbabenko avatar
antonbabenko

Or actually try: ["${aws_route53_zone.hostedzone.name_servers[0]}"]

imiltchman avatar
imiltchman

flatten worked for me

imiltchman avatar
imiltchman

element(…, 0) didn’t (“element() may only be used with flat lists”)

antonbabenko avatar
antonbabenko

right, combine with flatten, if you need to get just one element

imiltchman avatar
imiltchman

Thanks

tallu avatar
tallu

given values="m1.xlarge,c4.xlarge,c3.xlarge,c5.xlarge,t2.xlarge,r3.xlarge" can I use jsonencode or something similar to get the following

{
      "InstanceType": "m1.xlarge"
    },
    {
      "InstanceType": "c4.xlarge"
    },
    {
      "InstanceType": "c3.xlarge"
    },
    {
      "InstanceType": "c5.xlarge"
    },
    {
      "InstanceType": "t2.xlarge"
    },
    {
      "InstanceType": "r3.xlarge"
    }
ldlework avatar
ldlework
split - Functions - Configuration Language - Terraform by HashiCorp

The split function produces a list by dividing a given string at all occurrences of a given separator.

map - Functions - Configuration Language - Terraform by HashiCorp

The map function constructs a map from some given elements.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or use null_data_source to construct anything you want, e.g. https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/main.tf#L63

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

tallu avatar
tallu

map fails when the key is same like

> map("InstanceType","m1.xlarge","InstanceType","c4.xlarge")
map: argument 3 is a duplicate key: "InstanceType" in:

${map("InstanceType","m1.xlarge","InstanceType","c4.xlarge")}

tallu avatar
tallu

could not get my desired output by looking at all the proposed links

tallu avatar
tallu

thanks I will try it out

tallu avatar
tallu
formatlist - Functions - Configuration Language - Terraform by HashiCorp

The formatlist function produces a list of strings by formatting a number of other values according to a specification string.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

tallu avatar
tallu
> formatlist("Hello, %s!", ["Valentina", "Ander", "Olivia", "Sam"])
parse error at 1:28: expected expression but found "["
tallu avatar
tallu

in terraform console

tallu avatar
tallu

nevermind formatlist("Hello, %s!", list("Valentina", "Ander", "Olivia", "Sam")) worked
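
Putting the pieces together for the original question, a sketch in 0.11 syntax (the variable and local names are illustrative):

variable "instance_types" {
  default = "m1.xlarge,c4.xlarge,c3.xlarge,c5.xlarge,t2.xlarge,r3.xlarge"
}

# formatlist stamps each type into a JSON object; join assembles the array.
locals {
  objects = "${formatlist("{\"InstanceType\": \"%s\"}", split(",", var.instance_types))}"
  json    = "[${join(",", local.objects)}]"
}

output "instance_types_json" {
  value = "${local.json}"
}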

ldlework avatar
ldlework

What would be the cause of this failure when deploying ecs-codepipeline

* module.backend.module.ecs_codepipeline.module.github_webhooks.github_repository_webhook.default: 1 error(s) occurred:


• github_repository_webhook.default: POST <https://api.github.com/repos/dustinlacewell/roboduels-frontend/hooks>: 404 Not Found []
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you create a correct GitHub token?

ldlework avatar
ldlework

Hmm…

ldlework avatar
ldlework

Oh I know what happened. Thanks @Andriy Knysh (Cloud Posse) lol.

ldlework avatar
ldlework

OK, does anyone have an idea about this one? I’m chalking this up to my inexperience with the interpolation details:

* output.cache_hostname: element: element() may only be used with flat lists, this list contains elements of type map in:

${element(module.elasticache.nodes, 0)}
* module.elasticache.aws_route53_record.cache: 1 error(s) occurred:

* module.elasticache.aws_route53_record.cache: At column 3, line 1: element: argument 1 should be type list, got type string in:

${element(aws_elasticache_cluster.cache.cache_nodes.*.address, 0)}
ldlework avatar
ldlework

oh that’s two errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

aws_elasticache_cluster.cache.cache_nodes is a map

ldlework avatar
ldlework

isn’t it a list of maps?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

lookup(element(aws_elasticache_cluster.cache.cache_nodes, 0), "address") - try something like this

ldlework avatar
ldlework

Interesting.

ldlework avatar
ldlework

ew I think I have to escape the inner quotes

ldlework avatar
ldlework

does a raindance for 0.12

ldlework avatar
ldlework

escaping doesn’t work either…

ldlework avatar
ldlework
  value = "${lookup(element(module.elasticache.nodes, 0), \"address\")}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what are you escaping?

ldlework avatar
ldlework

Error: Error loading /home/ldlework/roboduels/infra/stages/qa/outputs.tf: Error reading config for output cache_hostname: parse error at 1:48: expected expression but found invalid sequence “\”

loren avatar
loren

a miracle of hcl is that you do not need to escape inner quotes like that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

value = "${element(module.elasticache.nodes, 0)["address"]}" - or this

loren avatar
loren
element() may only be used with flat lists, this list contains elements of type map in ...
loren avatar
loren

so, no element() for you…

ldlework avatar
ldlework

lol what, how do I report the address of the primary elasticache node? can I just output the whole list?

loren avatar
loren

aws_elasticache_cluster.cache.cache_nodes[0]

ldlework avatar
ldlework

that will output the whole map right?

ldlework avatar
ldlework

i guess i don’t know why I’m trying to reduce all my outputs to single values

ldlework avatar
ldlework
09:23:37 PM

stops doing that.

loren avatar
loren

i think i also mixed up the two errors there, oops

ldlework avatar
ldlework

Oh ok, so I guess I still have a problem

ldlework avatar
ldlework

I want to give a simple internal DNS name to the first IP in the elasticache cluster

ldlework avatar
ldlework

So I had something like:

  records = ["${element(aws_elasticache_cluster.cache.cache_nodes.*.address, 0)}"]
ldlework avatar
ldlework

But it sounds like none of the options we just discussed are going to work to extract the address of the first item?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Element will not work on list of maps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can input and output anything

ldlework avatar
ldlework

Really? How will I extract the address of the first map in the list?

ldlework avatar
ldlework

Won’t I have the same exact problem there?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can output a list of values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And then use element

ldlework avatar
ldlework

Genius.

ldlework avatar
ldlework

So you mean, output all the addresses as a list using splat. Then take the first. I’ll try it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Something like that

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is an example of working with list of maps and getting the first element https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L114

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

[0] works with maps

ldlework avatar
ldlework

I have:

data "null_data_source" "cache_addresses" {
  inputs = {
    addresses = ["${aws_elasticache_cluster.cache.cache_nodes.*.address}"]
  }
}

resource "aws_route53_record" "cache" {
  zone_id = "${var.zone_id}"
  name    = "${local.name}"
  type    = "CNAME"
  records = ["${data.null_data_source.cache_addresses.outputs["addresses"]}"]
  ttl     = "300"
}

and get:

Error: module.elasticache.aws_route53_record.cache: records: should be a list Error: module.elasticache.data.null_data_source.cache_addresses: inputs (addresses): ‘’ expected type ‘string’, got unconvertible type ‘[]interface {}’

ldlework avatar
ldlework

this is a pretty confusing DSL all things considered

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just try [0] for now

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse)

resource "aws_route53_record" "cache" {
  zone_id = "${var.zone_id}"
  name    = "${local.name}"
  type    = "CNAME"
  records = ["${aws_elasticache_cluster.cache.cache_nodes[0].address}"]
  ttl     = "300"
}

Error downloading modules: Error loading modules: module elasticache: Error loading .terraform/modules/a86e58cdab02f33e0c2a0f76c4ae3934/stacks/elasticache/main.tf: Error reading config for aws_route53_record[cache]: parse error at 1:47: expected “}” but found “.”

ldlework avatar
ldlework

Is that what you meant?

ldlework avatar
ldlework

Starting to feel a little dumb here lol.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

["${aws_elasticache_cluster.cache.cache_nodes[0]["address"]}"]

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if [0]["address"] together does not work, use locals as in the example (we had the same issues)

ldlework avatar
ldlework

yeah that doesn’t work

ldlework avatar
ldlework

OK trying coalescelist

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

coalescelist has nothing to do with that

ldlework avatar
ldlework

what am i looking at then hehe

ldlework avatar
ldlework

the splat?

ldlework avatar
ldlework

ohhhh

ldlework avatar
ldlework

the next few lines

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i mean use locals to first get [0] from the list, then another local to get ["data"] from the map
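
A minimal adaptation of that EKS pattern to this elasticache case might look like the following (an untested 0.11 sketch; the resource and key names are taken from the thread above):

locals {
  cache_nodes        = "${aws_elasticache_cluster.cache.cache_nodes}"
  first_cache_node   = "${local.cache_nodes[0]}"
  cache_node_address = "${local.first_cache_node["address"]}"
}

resource "aws_route53_record" "cache" {
  # ... zone_id, name, type, ttl as before ...
  records = ["${local.cache_node_address}"]
}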

ldlework avatar
ldlework
09:48:30 PM

tries

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) thanks homie

ldlework avatar
ldlework

Why is the ECS container definition for the environment a list of maps?

ldlework avatar
ldlework

oh i probably need to do name/value
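
For reference, ECS container definitions expect environment as a list of name/value maps, e.g. (values here are purely illustrative):

environment = [
  {
    name  = "NODE_ENV"
    value = "qa"
  },
  {
    name  = "API_URL"
    value = "https://api.example.com"
  },
]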

GFox (someTXcloudGuy) avatar
GFox (someTXcloudGuy)

Hello, anyone have any Terraform module code to automate CIS benchmarks in Azure subscriptions / tenants?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe ask in #azure

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not many people here are familiar with Azure

1
GFox (someTXcloudGuy) avatar
GFox (someTXcloudGuy)

if so, you’re going to be very popular!

ldlework avatar
ldlework

A dependency graph encompassing all the CloudPosse projects would be awesome

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice idea! we’ll consider it thanks

ldlework avatar
ldlework

I’m changing the environment setting of a https://github.com/cloudposse/terraform-aws-ecs-container-definition module and it isn’t updating the container definition.

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

ldlework avatar
ldlework

Not sure how to get it to change.

ldlework avatar
ldlework

TIL how to taint

ldlework avatar
ldlework

not clear how to taint the container definition though…

ldlework avatar
ldlework
01:57:38 AM

just destroys the whole module

ldlework avatar
ldlework

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it just generates JSON output

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

ldlework avatar
ldlework

yes

ldlework avatar
ldlework

and with ecs-codepipeline

ldlework avatar
ldlework

what am I missing?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to taint https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf#L41, the resource that uses the generated json

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
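
With 0.11’s taint syntax that would look something like this (the period-separated module path and the resource name here are assumptions based on this thread, not exact values):

terraform taint -module=backend.ecs_alb_service_task aws_ecs_task_definition.default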


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but this will prevent updating the entire task definition, including the container definition

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) I ran terraform destroy on the whole module and it says:

* module.backend.module.ecs_codepipeline.aws_s3_bucket.default (destroy): 1 error(s) occurred:
* aws_s3_bucket.default: error deleting S3 Bucket (us-west-1-qa-backend-codepipeline): BucketNotEmpty: The bucket you tried to delete is not empty
ldlework avatar
ldlework

I guess you have to manually go in and delete the bucket data?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

ldlework avatar
ldlework

lame

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we will consider updating the module to add a var for force destroy on the bucket
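
force_destroy is a standard aws_s3_bucket argument, so a sketch of that change might be (the variable name is hypothetical):

resource "aws_s3_bucket" "default" {
  # ... existing arguments ...

  # delete all objects when the bucket is destroyed
  force_destroy = "${var.s3_bucket_force_destroy}"
}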

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) why are the container definitions ignored in the lifecycle??

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

don’t know, did not implement that part, maybe there was a good reason, need to take a look

ldlework avatar
ldlework

np

2019-03-25

xluffy avatar
xluffy

Hey, I have a string variable 10.20.30.111 and I want to get the last element of this string; the expected output is 111. I can use

value = "${element(split(".", var.private_ip), length(split(".", var.private_ip)) - 1 )}"

But that’s too complex. Any suggestions?

oscarsullivan_old avatar
oscarsullivan_old

Is that too complicated?

xluffy avatar
xluffy

Hmm, it works, but it’s too complicated for me. I want to make sure there isn’t another solution

oscarsullivan_old avatar
oscarsullivan_old

you’re splitting on “.” and looking for the last index of the output

oscarsullivan_old avatar
oscarsullivan_old

LGTM

1
mmuehlberger avatar
mmuehlberger

You can also just do element(split(".", var.private_ip), 3) since IP addresses have a predefined syntax (always 4 blocks).
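
For example, in terraform console (illustrative input and output):

> element(split(".", "10.20.30.111"), 3)
111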

xluffy avatar
xluffy

yeah, my bad, thanks

oscarsullivan_old avatar
oscarsullivan_old

Does anyone else ever get this?

Failed to load backend: 
Error configuring the backend "s3": RequestError: send request failed
caused by: Post https://sts.amazonaws.com/: dial tcp: lookup sts.amazonaws.com on 8.8.4.4:53: read udp 172.17.0.2:46647->8.8.4.4:53: i/o timeout

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

I feel it has something to do with my VPN

Nikola Velkovski avatar
Nikola Velkovski

usually these kinds of errors are network/internet related

2
oscarsullivan_old avatar
oscarsullivan_old

On my end?

oscarsullivan_old avatar
oscarsullivan_old

It does seem to sort itself out when I turn off wifi / VPNs, thereby resetting my network connection

albttx avatar
albttx

Hello, just posted this https://github.com/cloudposse/terraform-aws-acm-request-certificate/issues/16, just want to be sure it’s an error from the module… any ideas?

`aws_route53_record.default` error · Issue #16 · cloudposse/terraform-aws-acm-request-certificate

Error: module.acm_request_certificate.aws_route53_record.default: 1 error(s) occurred: * module.acm_request_certificate.aws_route53_record.default: At column 3, line 1: lookup: argument 1 should be…

oscarsullivan_old avatar
oscarsullivan_old

How do I get what I believe is NATing, so that public-IP R53 records resolve to the private IP when using OpenVPN?

oscarsullivan_old avatar
oscarsullivan_old

Nvm looks like a security group issue actually

Bharat avatar
Bharat

Terraform is marking the task-def as inactive whenever we update the task-def. We need those old task-defs to do rollbacks. We are deploying the ECS service as part of CI. Any workaround for how to retain the older versions of task-defs?

2019-03-23

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, that works in a source block too

ldlework avatar
ldlework

Nice

2019-03-22

ldlework avatar
ldlework

When I try to destroy an ecs_codepipeline module by removing it from my HCL, I get:

Error: Error asking for user input: 1 error(s) occurred:

* module.ecs_codepipeline.module.github_webhooks.github_repository_webhook.default: configuration for module.ecs_codepipeline.module.github_webhooks.provider.github is not present; a provider configuration block is required for all operations
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

welcome to terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve encountered this as well. I don’t know how to get around it.

ldlework avatar
ldlework

lol jeeze

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I love terraform, but every tool has its limitations. Clean destroys are really hard to achieve with module compositions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yet managing infrastructure without composable modules is not scalable for the amount of infrastructure we manage.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re just hoping that these edge cases improve as terraform the language improves

ldlework avatar
ldlework

I recently joined a startup and I’m the only guy doing the infrastructure. CloudPosse modules have been a god-send, regardless of whatever little issues there are. I’ve almost got a completely serverless deployment of their stuff going, kicked off with Github releases, flowing through CodePipeline, deployed to Fargate, with CloudWatch events sending SNS notifications to kick off Typescript Lambda functions to send me Discord notifications for each step. All in about three weeks by myself, never having used Terraform before.

party_parrot1
ldlework avatar
ldlework

So yeah, blemishes aside…

ldlework avatar
ldlework

I’m old enough to remember when you had to call someone on the phone to get a new VPS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow! that sounds awesome

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’d like to get a demo of that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in 3 weeks!!

ldlework avatar
ldlework

Heh, it’s really mostly thanks to the CloudPosse modules..

oscarsullivan_old avatar
oscarsullivan_old

Anyone familiar with this error with vpc_peering? Had it working a few weeks ago, but it’s not working this time around:

Error: Error applying plan:

2 error(s) occurred:

* module.vpc_peering.aws_vpc_peering_connection_options.accepter: 1 error(s) occurred:

* aws_vpc_peering_connection_options.accepter: Error modifying VPC Peering Connection Options: OperationNotPermitted: Peering pcx-xxx8b80615da5 is not active. Peering options can be added only to active peerings.
	status code: 400, request id: ed787be8-xxx-4c6c-xxx-117b303c9d84
* module.vpc_peering.aws_vpc_peering_connection_options.requester: 1 error(s) occurred:

* aws_vpc_peering_connection_options.requester: Error modifying VPC Peering Connection Options: OperationNotPermitted: Peering pcx-xxx8b80615da5 is not active. Peering options can be added only to active peerings.
	status code: 400, request id: eca6b1ab-xxx-4cef-xxx-ac6f80bd903f
oscarsullivan_old avatar
oscarsullivan_old

oh

oscarsullivan_old avatar
oscarsullivan_old

ok I ran it a second time without destroying it after it failed, and it’s working

oscarsullivan_old avatar
oscarsullivan_old

guess it was a dependency thang

imiltchman avatar
imiltchman

I have an order-of-creation issue with an AutoScaling policy. I have one module that creates an ALB and Target Group, and another that creates the AutoScaling policy, where I specify the target group resource_label. Terraform proceeds to create the AutoScaling policy using the Target Group before the ALB->TargetGroup ALB listener rule is created, which causes an error. I tried a depends_on workaround by passing the alb_forwarding_rule_id as a depends_on variable to the ASG module, but I assume I am still missing a step where I need to use this variable within the aws_autoscaling_policy resource block. Do I stick it in the count property?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Resources - Configuration Language - Terraform by HashiCorp

Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depends_on will not always work though

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because TF does it already automatically

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s just a hint

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if something could not be done b/c of AWS API or other things, it will not help

imiltchman avatar
imiltchman

It’s a cross-module scenario, which I think isn’t supported until 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can try --target which is not pretty

imiltchman avatar
imiltchman

It works fine if I run the apply twice.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s another method

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@imiltchman just out of curiosity, https://www.terraform.io/docs/providers/aws/r/autoscaling_policy.html#example-usage, instead of autoscaling_group_name = "${aws_autoscaling_group.bar.name}" can you try autoscaling_group_name = "${aws_autoscaling_group.bar.id}" and see what happens ?

AWS: aws_autoscaling_policy - Terraform by HashiCorp

Provides an AutoScaling Scaling Group resource.

ldlework avatar
ldlework

@imiltchman you can resolve module dependencies by using the “tags” attribute on stuff, I’ve found.

ldlework avatar
ldlework

@imiltchman If you don’t use a variable, it is optimized away

ldlework avatar
ldlework

And doesn’t affect ordering

ldlework avatar
ldlework

So an easy way to use a dependent variable even though you don’t need it, is to stick the variable in a Tag on the resource

ldlework avatar
ldlework

This worked perfectly for me, for your exact use case which I also ran into
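
A sketch of the trick (variable and key names are hypothetical; the ASG never actually uses the value, but interpolating it into a tag forces Terraform to create the listener rule first):

resource "aws_autoscaling_group" "default" {
  # ... normal ASG arguments ...

  tag {
    key                 = "DependsOn"
    value               = "${var.alb_forwarding_rule_id}"
    propagate_at_launch = false
  }
}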

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh nice, this is a solution we did not think about, thanks

loren avatar
loren

heh, tags and descriptions, had to do that before myself. any reference to force terraform to see the dependency when it resolves the config

imiltchman avatar
imiltchman

But then you’re left with an unwanted tag

ldlework avatar
ldlework

It’s not the worst thing, because after all the module the tagged asset comes from does depend on whatever asset you’re interpolating into the tag.

imiltchman avatar
imiltchman

It actually makes sense, but I don’t think ASG policy has tags

ldlework avatar
ldlework

Yeah that’s the only limitation I guess

ldlework avatar
ldlework

I can’t think of a way around it if there’s no field to stick the dependency on

ldlework avatar
ldlework

Something crazy like, using the dependency in a locals block in which you take just the first letter or something, and then use that local in the name of your ASG policy

ldlework avatar
ldlework

lol

ldlework avatar
ldlework
08:17:09 PM
imiltchman avatar
imiltchman

I can use count

imiltchman avatar
imiltchman

Can’t I?

imiltchman avatar
imiltchman

Just check that the depends_on is not empty

imiltchman avatar
imiltchman

useless check, but maybe it will work

imiltchman avatar
imiltchman

I’ll give it a try and report back

loren avatar
loren

policies have descriptions

ldlework avatar
ldlework

hah

imiltchman avatar
imiltchman

Description not on the policy, count didn’t work, ended up putting the first letter of the ID in the policy name. Seems to work

ldlework avatar
ldlework

lmfao

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or you can create a list and put two items in there: one is the real name, the other is something from the dependency

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Then get the first item
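
i.e., something like this (variable names are hypothetical; element() always returns the first item, the real name, but forces evaluation of the dependency):

resource "aws_autoscaling_policy" "default" {
  name = "${element(list(var.policy_name, var.alb_forwarding_rule_id), 0)}"
  # ...
}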

ldlework avatar
ldlework

Every time I apply I get the following change:

module.frontend-ecs.module.ecs_codepipeline.aws_codepipeline.source_build_deploy: Modifying... (ID: us-west-1-qa-frontend-codepipeline)
  stage.0.action.0.configuration.%:          "4" => "5"
  stage.0.action.0.configuration.OAuthToken: "" => "<OAUTH-TOKEN>"

What is actually being changed here? Why is it always 4 -> 5? Why does it think the OAuth token is changing each time even though it is not?

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) XD

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We had similar issues with oauth tokens before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think terraform or AWS ignore them, forgot who

imiltchman avatar
imiltchman

@Andriy Knysh (Cloud Posse) Great suggestion, thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

So you need to put the field into ignore_changes, though that’s not good if it ever changes
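
Inside the module (or a fork of it, per the next messages), that would be a sketch like this; the flatmap attribute path is an assumption based on the plan output above:

resource "aws_codepipeline" "source_build_deploy" {
  # ... existing arguments ...

  lifecycle {
    ignore_changes = ["stage.0.action.0.configuration.OAuthToken"]
  }
}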

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) Can anything use a lifecycle block?

ldlework avatar
ldlework

Also, I can’t add the lifecycle block to a CloudPosse module right?

ldlework avatar
ldlework

Since this is happening inside the ecs_codepipeline module

ldlework avatar
ldlework

I wonder if anything on AWS is actually getting changed though. Or if it is just needlessly updating the Terraform state each time.

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) any clue?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lifecycles cannot be interpolated or passed between modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe this is changing in 0.12? not sure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in the end, I think it’s hard to escape needing a task runner for complicated setups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(e.g. make or variant or terragrunt or astro)

ldlework avatar
ldlework

How does that help in this case?

foqal avatar
foqal
09:59:27 PM

Helpful question stored to @:

I have an order of creation issue with an AutoScaling policy. I have one module that creates a Target Group and another that creates the AutoScaling policy which uses the ALBRequestCountPerTarget,...
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you use terraform ... -target= to surgically target what you want to modify

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in the order in which it needs to happen, which terraform was not able to figure out
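
e.g. (module addresses are illustrative):

terraform apply -target=module.alb
terraform apply -target=module.asg_policy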

ldlework avatar
ldlework

In this case it just thinks that the github oauth token inside of the ecs-codepipeline is changing each time. So I’m not sure that it is a matter of ordering.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh sorry, i didn’t look closely enough at what you wanted to solve

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in this case, yea, maybe not the best solution

ldlework avatar
ldlework

If the thing that’s being updated is just the Terraform state, it might be no big deal to just let it update the state needlessly each time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

really, the only option is to put the lifecycle in the module but then it applies to everyone

ldlework avatar
ldlework

I can’t really tell by the message what’s changing though.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

jumping on call

ldlework avatar
ldlework

Yeah.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s not that it’s changing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s that terraform doesn’t know what the current value is

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so it always changes

ldlework avatar
ldlework

But does it have any consequences?

ldlework avatar
ldlework

Like changing infrastructure etc? Seems not? Can’t tell though.

Charlie Mathews avatar
Charlie Mathews

Might @anyone know why the cloudposse/terraform-aws-ecs-container-definition doesn’t include VolumesFrom or Links? I’m trying to figure out if I should tack those things onto the json output in a hacky way or submit a MR.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we didn’t implement the full spec, just what we needed or others have contributed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you see a way of adding support for it, please do!

Charlie Mathews avatar
Charlie Mathews

Will do, thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Please open a PR

:--1:1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ldlework, no consequences

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It just considers the token a secret

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And does not know, or thinks it doesn’t know, the value

ldlework avatar
ldlework

Makes sense

ldlework avatar
ldlework

How do you all handle different environments? Right now I’ve got a single branch, with a bunch of HCL, and a Summon secrets.yml which contains the shared and environment-specific Terraform variables. So by selecting an environment with Summon, different variables go into Terraform and I’m able to build different environments in different regions like that.

ldlework avatar
ldlework

However, another way I’ve been thinking about it, is having a branch per environment so that the latest commit on that branch is the exact code/variables etc that deployed a given environment.

ldlework avatar
ldlework

This allows different environments to actually have different/varying HCL files. So in a given environment you can actually change around the actual infrastructure and stuff. Then by merging changes from say, staging into production branch, you can move incremental infrastructure changes down the pipeline.

ldlework avatar
ldlework

I wonder what you are all doing.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Long-lived branches tend to be discouraged by most companies we’ve worked with. Typically, they want master as the only long-lived branch.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

some use a hierarchy like prod/us-west-2/vpc, staging/us-west-2/vpc and then a modules/ folder where they pull from

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use a 1-repo-per-account approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

prod.cloudposse.co, root.cloudposse.co, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we treat our repos like a reflection of our AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then we have our terraform-root-modules service catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) Do you know what I mean though? Let’s say you wanted to make substantial changes to a major module by, say, switching from a self-hosted DB to Aurora Serverless. If you have just one branch that holds all the environments, then all environments must see this module change at the same time. Vs. if you have a branching strategy, then you can make the major changes in a dev environment, then merge them into a staging environment, and then finally merge them into production/master. Is that not a problem you face / an approach you see being used?

loren avatar
loren

I’ve tried using branches, but the trouble is that they are not visible enough. People lose track of them too easily. So for them to work, we need another layer over them anyway that cycles through various actions (validate/plan/apply/etc). Separate directories or repos seem easier to grok

ldlework avatar
ldlework

Huh, branches having low visibility. Not sure I understand that exactly, but thank you for your input.

ldlework avatar
ldlework

I like the prospect of branching allowing for experimental environments to be spun up and destroyed creatively.

ldlework avatar
ldlework

It appears this is the strategy that Hashicorp recommends for its enterprise users.

loren avatar
loren

My personal approach has been to write a module that abstracts the functionality, and then one or more separate repos that invoke the module for a specific implementation

ldlework avatar
ldlework

Sure but consider a major refactor of that module.

ldlework avatar
ldlework

I appreciate how parametric modules allow you to achieve multiple environments, that’s what I’m doing right now.

ldlework avatar
ldlework

But it seems like the repo would be simpler overall with less going on in a given branch.

loren avatar
loren

Tags as versions, each implementation is isolated from changes to the module

ldlework avatar
ldlework

This requires you to use external modules, no?

loren avatar
loren

Define “external”… You can keep the module in the same repo if you like

ldlework avatar
ldlework

So now you have the maintenance overhead of a new repo * however many modules.

ldlework avatar
ldlework

How do you tag modules that live in the same repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use one repo for our service catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and one repo to define the invocation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

since we don’t use the same repo, we can surgically version pin every single module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and never force an upgrade we didn’t want
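
Pinning looks something like this (module, arguments, and tag are illustrative):

module "vpc" {
  source    = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.3.0"
  namespace = "eg"
  stage     = "qa"
  name      = "app"
}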

loren avatar
loren

It’s easier to pin if they are separate repos, for sure

loren avatar
loren

Not impossible if they are in the same repo, but some mental gymnastics are needed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s the only way it can work for us since we build infra for multiple organizations

ldlework avatar
ldlework

It makes perfect sense to me that you’d externalize modules you’re reusing across organizations, sure.

ldlework avatar
ldlework

I’m just one person at a tiny startup right now though.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know monorepos have gained in popularity over recent years and major companies have come out in favor of them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think they can work well inside of an organization that builds 99% of their stuff internally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but open source is necessarily democratized into a number of smaller repos (think NPM, ruby, perl, go, and ……. terraform modules)

ldlework avatar
ldlework

It’s not really a mono-repo in the sense that there are multiple disjoint substantial projects in a single repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it sounds like you’re going to have modules within your repo that have different SDLCs

ldlework avatar
ldlework

No that’s the point, they don’t really have different SDLCs.

ldlework avatar
ldlework

They’re not really indepdent abstractions we intend to reuse in multiple places.

ldlework avatar
ldlework

Most of my HCL code is just invocations of high-level CloudPosse modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Let’s say you wanted to make substantial changes of a major module by say, switching from self-hosted DB to Aurora Serverless (say). If you have just one branch that holds all the environments, then all environments must see this module change at the same time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so we don’t have more than one environment per repo

ldlework avatar
ldlework

What has different SDLCs are the overall architecture of the different environments.

ldlework avatar
ldlework

Yes, that is in terms of the HCL defining our architecture, but I don’t really have a reusable module which defines arbitrary DBs

loren avatar
loren

You can try tf workspaces also, that might work for your use case

ldlework avatar
ldlework

I guess I was just waiting to hear “Yeah branches sound like they’ll work for your use-case too”

ldlework avatar
ldlework

Thank you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s no right/wrong, only different opinions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can achieve what you want to do with branches/workspaces

loren avatar
loren

My experience has just been that I always, eventually, need to separate configuration from logic

ldlework avatar
ldlework

I think you could even combine approaches.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what i don’t like about workspaces is they assume a shared state bucket

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we never share a state bucket across stages or accounts

ldlework avatar
ldlework

I don’t intend on using Workspaces. I’ve read enough about them to avoid them for now.

ldlework avatar
ldlework

I almost went full Terragrunt, but I’m trying to thread the needle with my time and workload

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, well, a lot of good patterns they’ve developed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve borrowed the terraform init -from-module=... pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what we didn’t like about terragrunt is that it overloads .tfvars with HCL-like code that isn’t supported by terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so it’s writing something vendor-locked

ldlework avatar
ldlework

“Why didn’t you use k8s or EKS?”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, ok, so yea that’s one approach that appeals to many. just another camp

ldlework avatar
ldlework

“Uhh, because I can fit CodePipeline in my head” lol

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) So basically your suggestion to me is to have two repos: 1 repo that contains all the modules. 1 repo which contains the environment-specific calls into those modules to build environment-specific infrastructure. Each environment’s HCL that uses the modules can pin to different versions of those modules, even though all the modules are in one repo, since each module call has a different source parameter. Is this close?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nailed it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so technically, it can all be in one repo and still use source pinning to the git repo, just my head hurts doing it that way

loren avatar
loren

Mental gymnastics

loren avatar
loren

Also, Chinese finger traps

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) so I guess my only concern is moving enhancements made in one environment’s HCL to another environment, and the next. It all comes down to how much per-environment HCL there is to arrange the modules in order to construct the infrastructure. To put things at their extremes to make the point: in order to have essentially zero manual maintenance of moving changes between environments, the modules would have to be so high-level that they simply implement the entire environment. Rather than the environment HCL having room to vary in how it composes slightly lower-level modules, and being able to express different approaches to infrastructure architecture.

ldlework avatar
ldlework

little wordy there, i just hope to be understood (so that I can in turn understand too)

ldlework avatar
ldlework

This is contrasted to the “per-environment HCL” being in branches and letting Git do the work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, it comes at a trade-off of being more effort to manage changes

ldlework avatar
ldlework

(whether or not you use remote modules)

ldlework avatar
ldlework

Basically my point is, you’ll always have to manually “merge” across environments if the per-environment HCL is in a single branch, regardless of module use.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


, the modules would have to be essentially so high-level that they simply implement the entire environment

ldlework avatar
ldlework

(so maybe if you combined approaches you’d achieve the theoretical minimum maintainence)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this = “terraform root module”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the root invocation of a module is a module

ldlework avatar
ldlework

Oh you’re saying there is literally no per-environment HCL?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep

ldlework avatar
ldlework

Per-environment HCL is a tagged “highest-level-module” ?

ldlework avatar
ldlework

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Modules - Configuration Language - Terraform by HashiCorp

Modules allow multiple resources to be grouped together and encapsulated.

ldlework avatar
ldlework

lol why you link that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
02:22:20 AM
ldlework avatar
ldlework

lol no I know

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to define what a “Root module” is from the canonical source

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so just saying the root is a module. treat it like one. invoke it everywhere you need that capability

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

use .tfvars to tailor the settings

loren avatar
loren

Module in module inception

ldlework avatar
ldlework

Yeah but you might have variance across environments in the implementation of that module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is why we like the terraform init -from-module=...

:--1:1
ldlework avatar
ldlework

And so what you’re saying is that, you achieve that, by having a separate repo, which simply refers to a tagged implementation of that module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so you can invoke an environment 20 times

ldlework avatar
ldlework

Clever.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just call terraform init -from-module 20x

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(in different folders)

loren avatar
loren

And then you have terragrunt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i gotta run, but this is why we came up with tfenv

ldlework avatar
ldlework

In the configuration repo, where each environment simply invokes a tag of the remote root-module, the amount of code you have per-environment is so little there’s no reason to have different branches.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s the wrapper for using terraform init -from-module=...

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can just specify environment variables that terraform init already supports

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and it will clone the remote module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and initialize it
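
An illustrative invocation (the repo subpath and ref here are assumptions, not exact values):

terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/ecr?ref=tags/0.1.0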

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

here’s my post on it

loren avatar
loren

Also, highly recommend dependabot or something similar for bumping version tags

ldlework avatar
ldlework

The root modules don’t really seem like all-encompassing modules which compose many (say) AWS modules to build a whole infrastructure.

ldlework avatar
ldlework
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

Like one just sets up an ECR registry, etc. A real service is going to require a bunch of these “root modules” composed together to make it happen.

ldlework avatar
ldlework

So any “environment repo” which is supposed to just call out to a single tagged remote root module to build the entire environment doesn’t seem like it would work with these root modules. Like, the HCL in the environment repo is going to be pretty substantial in order to tie these together. And then you have the manual merging problem again.

loren avatar
loren

Modules all the way down, that’s where inception comes in

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you, and every company that wants to use the strategy, should create their own catalog of root modules (which is logic), and then invoke them from diff repos (config)

loren avatar
loren

Discussion reminds me of this thread, https://github.com/gruntwork-io/terragrunt/issues/627

Best Practice Question: Single "stack" module with tfvars versus current recommendations? · Issue #627 · gruntwork-io/terragrunt

So the "best practice" layout as defined in the repository I've seen with terragrunt has each module in its own folder with its own tfvars per environment. My problem here is I have a …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you put in there as many modules of diff types as you want to have in ALL your environments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just diff approaches

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but I agree with @loren, it comes to separation of logic from config

ldlework avatar
ldlework

Hmm, do I understand that you’re saying that the root modules are really “root” with regards to individual substantial LAYERS of your environment’s architecture?

ldlework avatar
ldlework

The environment isn’t defined by calling a SINGLE module, but rather, it builds the environment out of a few major “layer-level root” modules?

loren avatar
loren

Bingo

ldlework avatar
ldlework

I see.

loren avatar
loren

Though, you could try to have one module to rule them all, with all the logic and calling all the modules conditionally as necessary

ldlework avatar
ldlework

So while there is more “cross environment” manual merging to be done than the ideal, it’s still probably less than would warrant separate branches.

ldlework avatar
ldlework

Well you wouldn’t do it conditionally.

ldlework avatar
ldlework

Each environment would pin to a different tag of the God Module

loren avatar
loren

Yep, but the God module changes over time also, and different invocations of the module may need different things, so conditionally calling submodules becomes important

ldlework avatar
ldlework

Which you can pin to different tags, etc?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

consider them as a collection/catalog of everything you need in all your environments

ldlework avatar
ldlework

Yes, but there’s different conceptual abstraction levels of “everything you need in all your environments”

ldlework avatar
ldlework

You could imagine a SINGLE remote module being the sole invocation that defines everything for an environment.

ldlework avatar
ldlework

Or you could imagine a handful of remote modules which define the major layers of the architecture. VPC/Networking, ECS/ALB for each server, ECR, CloudWatch etc

ldlework avatar
ldlework

I dunno.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s your call how you configure them. We use the second approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The thing is you never ever want one module that does everything

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) see the thread above

ldlework avatar
ldlework

I was only thinking about one module because we were aiming at minimizing “cross environment merging”.

ldlework avatar
ldlework

One module, while silly, would allow each environment to simply pin to the exact implementation of the God Module relevant to that environment.

ldlework avatar
ldlework

That one module could in-turn compose Layer Modules

ldlework avatar
ldlework

Which in-turn compose Component Modules

ldlework avatar
ldlework

w/e

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A) terraform module composition has a lot of problems as you discovered

ldlework avatar
ldlework

But yeah overall I think I’m clearer on the approach you were all suggesting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

B) mono modules have huge state. We ran into this 2 years ago where a module was so big it took 20 minutes to run a plan

ldlework avatar
ldlework

It’s just another level similar to the way your “root” (I read “layer/stack”) modules compose component modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

C) huge state means huge blast radius. You cannot easily roll out small incremental changes

ldlework avatar
ldlework

You could.

ldlework avatar
ldlework

You would change the god module to compose the layer modules differently.

ldlework avatar
ldlework

Then you would update a given environment to the new pin of the god module

ldlework avatar
ldlework

Exactly how layer modules work with component modules.

ldlework avatar
ldlework

It’s just another level of abstraction.

ldlework avatar
ldlework

I’m not saying it’s useful.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no one God Module, we use separate folders for each service or a group of services (if that makes sense), so there is no one big single state

ldlework avatar
ldlework

(but just from a thought experiment of how to absolutely minimize the cross-environment maintenance of merging the per-environment HCL)

ldlework avatar
ldlework

(if one is to avoid branching strategy)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no merging as in git branches, you just make changes to the root modules and apply them in each env separately as needed

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) there is

ldlework avatar
ldlework

If each environment needs HCL to describe how it composes the high-level layer modules to describe the whole infrastructure for the environment

ldlework avatar
ldlework

then you need to manually adopt those changes from one environment to the next, since they’re all in a single branch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry, on my phone and don’t have the keyboard speed to keep up with this

1
ldlework avatar
ldlework

if each environment simply makes one module { } call, then the amount you have to manually copy-paste over as you move enhancements/changes down the environment pipeline is the smallest amount possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s true but that’s not what blast radius refers to

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t apply everything at once

ldlework avatar
ldlework

No, I’m just explaining why we’re talking about this in the first place

ldlework avatar
ldlework

to @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just the modules we need to change

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) he is doing a thought experiment

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) right

ldlework avatar
ldlework

you typically apply just major layers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

He is not asking what we do :-)

loren avatar
loren

-target is hugely powerful

ldlework avatar
ldlework

yeah

ldlework avatar
ldlework

I’m going to refactor my stuff into layers like this tonight

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

He wants to understand why he should/shouldn’t do what he is proposing

ldlework avatar
ldlework

VPC/Networking, ECS/ALB/CodePipeline for a given Container, Database, Caching

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i get it

ldlework avatar
ldlework

Make “root modules” for each of those

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just saying that you don’t apply everything and don’t merge anything - just make changes somewhere and apply them step by step, one by one

ldlework avatar
ldlework

Does it make sense to have a root module covering CodePipeline/ECS/ALB-target-and-listeners for a given container?

ldlework avatar
ldlework

Like everything needed to deploy a given container-layer of our overall product infrastructure

ldlework avatar
ldlework

Pass in the VPC/ALB info, etc

ldlework avatar
ldlework

Have it provision literally everything needed to build and run that container

ldlework avatar
ldlework

Have a module we can reuse for that for each container we have

ldlework avatar
ldlework

Or should there be a root module for each container?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you have one module that tries to do all of that (beyond terraform not supporting it due to “count cannot be computed” errors), the problem is the amount of infrastructure it touches is massive. So if you just wanted to change an auto scale group max size, you would incidentally be risking changes to every other part of the infrastructure, because everything is in one module.

ldlework avatar
ldlework

Sure, but it would be all the infrastructure having to do with a logical component of the overall microservice architecture.

ldlework avatar
ldlework

If anything related to that microservice gets boofed, the whole microservice is boofed anyway.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All I can say is you have to experience it first hand :-)

ldlework avatar
ldlework

That top-level module could of course be composed of other modules for doing just the ECS, just the CodePipeline, just the ALB

ldlework avatar
ldlework

infact, those are already written

ldlework avatar
ldlework

they’re yours

loren avatar
loren

It’s hugely composable and there is no one right answer

ldlework avatar
ldlework

I run two container-layers in our microservice architecture with this module: https://gist.github.com/dustinlacewell/c89421595a20577f1394251e99d51dd8

loren avatar
loren

Just tradeoffs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

exactly

ldlework avatar
ldlework

It does ECR/CloudWatch/Container/Task/ALB-target-listener/CodePipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we, for example, have big and small root modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

lmao

ldlework avatar
ldlework

wish i found that first link three days ago

ldlework avatar
ldlework

i literally just built all that, but not as good, and there’s more there like dns

loren avatar
loren

Isn’t it Friday night? Here we all are on slack

ldlework avatar
ldlework

I work for a broke startup so I work all the time for the foreseeable future

ldlework avatar
ldlework

lol

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s get some beer

ldlework avatar
ldlework

OK

ldlework avatar
ldlework

loren avatar
loren

ldlework avatar
ldlework

Here’s a question I haven’t been able to answer for myself yet. Is a single ECR registry capable of holding multiple images? Or is a single ECR only for a single Docker image?

ldlework avatar
ldlework

I know that a Docker image can have multiple tags. I don’t mean that. I mean can you store multiple images in a single ECR. Do I need an ECR for each of my containers, or just one for the region?

ldlework avatar
ldlework

Cuz, Docker Registry obviously does multiple images, so I’m confused on this point.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one ECR repo can have just one image (one image name, with multiple tags)

ldlework avatar
ldlework

Got it thank you

ldlework avatar
ldlework

Makes sense, since the ECR name is set as the image name. Thanks for confirming.

loren avatar
loren

Gotta sign off, catch some zzzzzzs. Good discussion! Night folks!

ldlework avatar
ldlework

o/

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) I must admit, I poured a cup of coffee instead.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

haha, maybe tomorrow, good night

ldlework avatar
ldlework

Another thing I worry about is that I have an app to containerize soon that serves multiple ports.

ldlework avatar
ldlework

I think I will need to place this Fargate container into multiple target-groups, one for each of the ports.

ldlework avatar
ldlework

But the posse module for tasks only takes a single ARN

ldlework avatar
ldlework

Oh maybe you have just one target group for the container, but you add multiple listeners.

ldlework avatar
ldlework

That must be it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, if you’re using our terraform-aws-ecs-web-app module, you might want to fork it for your needs since it’s rather opinionated. it was designed to show how to use all of our other ecs modules together.

ldlework avatar
ldlework

I’m now pondering whether each top-level layer module should have its own remote state. And whether the environment HCL should pass values to layers by utilizing the remote_state data source. That’s basically essential if you’re going to use -target, right? Like how else does the auroradb layer get the right private DNS zone id? It has a variable on its module for that. But how does the environment HCL pass it to it, if the private zone isn’t being provisioned due to only the auroradb layer being targeted? It has to use the remote data source to access the other layer module’s outputs, right?

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) thoughts on that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do use separate states for each root module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and we use both 1) remote state 2) data source lookup to communicate values b/w them
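
For 0.11, a remote state lookup between layers is a sketch like this (bucket, key, and output names are illustrative):

data "terraform_remote_state" "dns" {
  backend = "s3"

  config {
    bucket = "us-west-1-qa-terraform-state"
    key    = "dns/terraform.tfstate"
    region = "us-west-1"
  }
}

# outputs of the dns layer are then addressable directly, e.g.:
# zone_id = "${data.terraform_remote_state.dns.private_zone_id}"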

ldlework avatar
ldlework

I see.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but again, that depends

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we also use 3) write values from a module into SSM param store or AWS Secret Manager, then read those values from other modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store
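
A minimal sketch of that third pattern, using the plain aws_ssm_parameter resource and data source (parameter names and values are illustrative):

# in the layer that owns the value
resource "aws_ssm_parameter" "zone_id" {
  name      = "/qa/dns/zone_id"
  type      = "String"
  value     = "${aws_route53_zone.private.zone_id}"
  overwrite = true
}

# in a consuming layer
data "aws_ssm_parameter" "zone_id" {
  name = "/qa/dns/zone_id"
}

# then: zone_id = "${data.aws_ssm_parameter.zone_id.value}"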

ldlework avatar
ldlework

Never even heard of that before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) I’m trying to imagine how all the state management would look on the environment/configuration repo that calls the various root modules.

ldlework avatar
ldlework

Right now, I just have a little folder with a tiny bit of HCL that sets up an S3 bucket and DynamoDB table, and then those details are used when I apply my actual HCL

ldlework avatar
ldlework

Would I have to have little side modules on the environment side for each root module the environment uses?

ldlework avatar
ldlework

I bet you guys have some crazy complex infrastructures

ldlework avatar
ldlework

I love all your abstractions, it’s great.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so, this is opinionated, but here’s what we have/use (in simple terms, regardless of complex infrastructure or not):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. TF modules https://github.com/cloudposse?utf8=%E2%9C%93&q=terraform-&type=&language= - this is logic and also just the definition w/o specifying the invocation
ldlework avatar
ldlework

I’ve been calling those Asset/Component modules. Individual AWS assets. Got it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Then we describe the invocation of those modules (which ones to use depends on your use case) in the catalog of module invocations https://github.com/cloudposse/terraform-root-modules
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

I’ve been calling those Layer Modules. They define major aspects/facets of an environment’s overall infrastructure. Got it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those invocations are identity-less, they don’t care where and how they are deployed, this is just logic

ldlework avatar
ldlework

Right, like you said some are big, some are small, but they compose the lower-level Asset modules.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Then for each environment (prod, staging, dev, test), we have a GitHub repo with a Dockerfile that does two things:
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1) Copies the invocation of the root modules from the catalog into the container (geodesic in our case) - this is logic

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

2) Defines ENV vars to configure the modules (the ENV vars could come from many diff places including Dockerfile, SSM param store, HashiCorp Vault, direnv, etc.) - this is config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we start the container and it has all the code and config required to run a particular environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so only the final container has identity and knows what it will be deploying and where and how (environment, AWS region, AWS account, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the final container could be run from your computer or from a CI/CD system (e.g #codefresh )

ldlework avatar
ldlework

OK I got confused along the way

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/prod.cloudposse.co

Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co

ldlework avatar
ldlework

You said “Copies the invocation of the root modules from the catalog into the container”. I thought the “catalog” was a repo with all the root/layer modules inside of it. And that this environment repo had some HCL that called root/layer modules out of the catalog to compose an entire environment architecture.

ldlework avatar
ldlework

Where did I go wrong? Is the catalog something different than the repo containing the root/layer modules?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/prod.cloudposse.co

Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each environment (prod, staging, dev) copies the entire catalog, OR only those modules that are needed for the particular environment

ldlework avatar
ldlework

Sure, and it also has some HCL of its own for calling the root/layer modules from the catalog right?

ldlework avatar
ldlework

The environments.

ldlework avatar
ldlework
04:33:32 AM

looks at the links.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it can have (if it’s very specific to the environment AND is not re-used in other environments)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but usually you want to re-use modules across diff environments (with different params of course)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we put those module invocations in the common catalog (root modules as we call it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so in other words, the catalog is a collection of reusable module invocations (not definitions) that you would reuse in your environments. But any of those environments could have some specific module definitions and invocations not from the catalog (if that makes sense)

ldlework avatar
ldlework

Yes, reuse modules, but in an experimental environment, you might be trying to save costs and switch from one DB to another DB, and so a different root/layer module would be called in that environment’s HCL right?

ldlework avatar
ldlework

Like, you probably use a half-dozen or so root/layer modules to create a moderate environment for a reasonable multi-layered service right?

ldlework avatar
ldlework

So there’s got to be something in the environment HCL which calls all those modules.

ldlework avatar
ldlework

Remember, there’s no God Module

ldlework avatar
ldlework

Like an environment doesn’t invoke just one God Module from the catalogue right? Because the catalog contains modules which only cover a single “layer” of the environment. ECS, Networking, Database, etc right?

ldlework avatar
ldlework

So each environment must have a bit of HCL which are the invocations of the various layer modules which define the major sections of the infrastructure.

ldlework avatar
ldlework

lol I’m gonna make sense of this eventually

ldlework avatar
ldlework

I see in your Dockerfile you’re copying the various root modules which create various layers of the environment. account-dns, acm, cloudtrail, etc

ldlework avatar
ldlework

Which reaffirms my idea that root modules only cover a facet of the environment, so you need multiple root modules to define an environment.

ldlework avatar
ldlework

So where is the per-environment HCL that invokes those modules? in conf/ perhaps.

ldlework avatar
ldlework

There’s nothing in conf/!

ldlework avatar
ldlework

Are you mounting the environment’s HCL into the container via volume at conf/ ?

ldlework avatar
ldlework

The stuff that calls the root modules?

ldlework avatar
ldlework

OHH you’re literally CD’ing into the root modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or use atlantis

ldlework avatar
ldlework

and running terraform apply from inside the root modules themselves

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If we need something special or new, we add it to the catalog and can then copy it to the container of that environment

ldlework avatar
ldlework

nothing calls the root modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If we need a smaller db instance for dev, all those settings are config and provided from ENV vars
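
That pattern in a sketch (variable name and defaults hypothetical): give the variable a default in the root module, then override it per environment via Terraform’s TF_VAR_* environment variables.

variable "instance_type" {
  type        = "string"
  default     = "db.r4.large"   # hypothetical prod-sized default
  description = "RDS instance type"
}

# in the dev container's environment:
#   export TF_VAR_instance_type=db.t2.medium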

ldlework avatar
ldlework

So let’s say you needed two “instances” of what a root module provides. Like say it provides all the ECS and related infrastructure for running a microservice web-container.

ldlework avatar
ldlework

Do you cd into the same root module and call it with different args?

ldlework avatar
ldlework

Or would you end up making two root modules, one for each container in the microservice architecture?

ldlework avatar
ldlework

I’m guessing the latter?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In our case, we would create a “big” root module combining the smaller ones, and put it into the catalog

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Then configure it from ENV vars from the container

ldlework avatar
ldlework

Let me give an example.

ldlework avatar
ldlework

Let’s say you have a single webservice container that does server side rendering. Like a Django application serving its own static media. You have an Asset module like the CloudPosse ecs-codepipeline module, and others like the Task and Container modules. You create a Root Module which ties these together so you can deploy your Django application and everything it needs as a single Root Module. You might also have a Root Module for the VPC/Networking for that application. So, you have an environment dockerfile, and you start by CD’ing into the VPC Root Module, and you deploy it. Then you CD into the Django App’s Root Module, and you deploy that too. OK so now you have a Django web-service deployed.

ldlework avatar
ldlework

Now let’s say your dev team refactors the app so that Django only serves the backend, and you have an Nginx container serving the static assets for the frontend. So now you need another ECS-CodePipeline module, Task and Container modules. Just like for the Django container.

ldlework avatar
ldlework

Do you create another Root Module for the Nginx Frontend container, which calls the CodePipeline, Task and Container modules again?

ldlework avatar
ldlework

So then you’d have 3 Root Modules you’d be able to deploy independently? How would you resolve the duplication?

ldlework avatar
ldlework

(the calling of ecs-codepipeline, task, and container modules the same way in two root modules representing the two container services in your environment)

ldlework avatar
ldlework

BTW thanks for all the charity.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yea, good example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we mostly use k8s for that kind of thing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and did not have ECS infra such as that

ldlework avatar
ldlework

Sure but you can imagine someone using your modules for that (I am)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but i guess we would create one module with all that stuff but with many params to be able to parameterize all the things and even switch some things on/off

ldlework avatar
ldlework

Sure, a root module which combines ecs-codepipeline, task and container, etc to deploy a single containerized microservice

ldlework avatar
ldlework

But what if you needed two? Two different containers but deployed essentially the same way.

ldlework avatar
ldlework

You already have one Root Module for deploying the unified container.

ldlework avatar
ldlework

But now you need two, so do you just copy paste the Root Module for the second container, as it is basically exactly the same minus some port mappings and source repo, and stuff?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s config

ldlework avatar
ldlework

Or would you create a high-level NON-root-module which expressed how to deploy an ecs-codepipeline, task and container together

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be parameterized

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, one module

ldlework avatar
ldlework

And then two root modules which just called the non-root-module with different settings?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

one non-root module though?

ldlework avatar
ldlework

But like

ldlework avatar
ldlework

OK so you have one generalized root module for deploying ecs services, great.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are just names (root or not)

ldlework avatar
ldlework

So where are the different configs for the different containers?

ldlework avatar
ldlework

Like in your prod.cloudposse.co example.

ldlework avatar
ldlework

Because it seems the only config that’s available is environmental config

ldlework avatar
ldlework

Like the difference between dev and prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to re-use the module (which can combine smaller modules), parameterize it and put it into the catalog

ldlework avatar
ldlework

But where would the difference between frontend and backend be, for calling the ecs root module twice?

ldlework avatar
ldlework

The catalog is the thing that holds root modules right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it holds modules, big and small, that can be re-used

ldlework avatar
ldlework

Or is the catalog the repo of modules that your root modules use?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Never compose “root modules” inside of other root modules. If or when this is desired, then the module should be split off into a new repository and versioned independently as a standalone module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

Sure, I think I’m down with that idea.

ldlework avatar
ldlework

OK, so you already have the ECS module right?

ldlework avatar
ldlework

It is a general parameterized module that can deploy any ECS service.

ldlework avatar
ldlework

You want to call it twice, with different parameters.

ldlework avatar
ldlework

Where do you store the different parameters for calling that module twice?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That would be considered a different project folder.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. conf/vpc1 and conf/vpc2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or conf/us-west-2/vpc and conf/us-east-1/vpc

ldlework avatar
ldlework

So you’d simply copy the Root Module out of the source Docker image twice?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

depends what you mean by “copy”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we terraform init both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using -from-module

ldlework avatar
ldlework

Like two ECS layers.

ldlework avatar
ldlework

A frontend and a backend.

ldlework avatar
ldlework

They can both be implemented by the “ecs” root module in your root modules example.

ldlework avatar
ldlework

By passing different settings to it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, so we init a new root module from that one. then we use terraform.tfvars to update the settings (or envs)
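
A sketch of what one of those terraform.tfvars files might hold (keys hypothetical; whatever variables the root module actually exposes):

namespace       = "acme"
stage           = "prod"
name            = "backend"
container_image = "acme/backend:latest"
container_port  = "8080"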

ldlework avatar
ldlework

So inside the prod Docker image, we copy the ECS root module… once? twice?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you want 2 ECS clusters, you copy it twice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but you wouldn’t have both clusters in the same project folder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you might have us-west-2/mgmt and us-west-2/public

ldlework avatar
ldlework

Right so where do the parameters you pass to each copy come from? It’s the same HCL module, but you’re going to call it/init/apply it twice with separate states.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the parameters are stored in us-west-2/mgmt/terraform.tfvars

ldlework avatar
ldlework

Same HCL in terms of implementation - it’s been copied to two different places in conf/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and us-west-2/public/terraform.tfvars

ldlework avatar
ldlework

Where are those?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you create those var files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those are your settings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s what makes it unique to your org and not ours

ldlework avatar
ldlework

Are they baked into the prod shell image?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s one way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they are always in the git repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is central to our CI/CD strategy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

first it’s important to understand how that works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

when we use atlantis, then we deploy it in the account container

ldlework avatar
ldlework

OK so the environment specific repo has environment specific variable files for invoking the various root modules that the environment specific dockerfile/shell has available

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

when we do that, we cannot rebuild the container with every change; instead atlantis clones the changes into the container

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, depending on what you want, you can 1) copy the same module twice and use it with diff params; 2) create a “bigger” module combining the smaller ones and copy it once

ldlework avatar
ldlework

I see

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ldlework I think you summarized it well

ldlework avatar
ldlework

Because the only environment variables that change between your environment repos are the environment-specific ones.

ldlework avatar
ldlework

Like between dev and prod.

ldlework avatar
ldlework

So you’re in dev, you copy the ECS module into the Docker image

ldlework avatar
ldlework

You’re loaded into the geodesic shell

ldlework avatar
ldlework

You’re ready to deploy root modules by CD’ing into them and running terraform apply

ldlework avatar
ldlework

You CD into the ecs root module

ldlework avatar
ldlework

How do you go about deploying the frontend

ldlework avatar
ldlework

And then how do you go about deploying the backend?

ldlework avatar
ldlework

They both depend on this generalized ECS root module we copied into the prod Docker image.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, make a “bigger” module with frontend and backend in it, and put it into the catalog in a diff folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that all depends on many diff factors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are talking about ideas and patterns here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how you implement it is your choice

ldlework avatar
ldlework

Sure, I’m totally just fishing to understand how the posse does it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes I get it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and thanks for those questions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no solution is perfect

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

some of your use-cases might not be covered, or might be covered better by some other solutions

foqal avatar
foqal
05:23:00 AM

Helpful question stored to @:

I'm trying to imagine how all the state management would look on the environment/configuration repo that calls the various root modules...
ldlework avatar
ldlework

Since root modules are literally terraform apply contexts… how do they bootstrap their remote state?

ldlework avatar
ldlework

You CD into one and terraform apply, didn’t I need to bootstrap its remote state first somehow?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it uses the S3 backend provisioned separately before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the backend is configured from ENV vars in the container
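
Concretely, a minimal sketch of that arrangement (env var names hypothetical): the root module ships a “partial” S3 backend block, and the concrete values are injected at init time.

terraform {
  backend "s3" {}
}

# inside the container:
#   terraform init \
#     -backend-config="bucket=${TF_BUCKET}" \
#     -backend-config="key=vpc/terraform.tfstate" \
#     -backend-config="region=${TF_BUCKET_REGION}"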

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

Do you guys provision like just one backend bucket, and then use varying names to store the various states in that bucket in different files? So you only have to provision the backend once?

ldlework avatar
ldlework
05:26:23 AM

looks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one backend per environment (AWS account)

ldlework avatar
ldlework

oh I see OK

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then each project is in a separate folder in the repo and in the backend S3 bucket

ldlework avatar
ldlework

by project you mean “root module where we run terraform apply” right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

BTW, take a look at the docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

little bit outdated, but can give you some ideas

ldlework avatar
ldlework

I really appreciate all the advice. It was way above and beyond. Thanks for sticking through all that. Really!

:--1:1
ldlework avatar
ldlework

Wow atlantis looks cool

ldlework avatar
ldlework

phew there is still so much to learn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Let’s talk about atlantis later :)

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Just in case you want to follow, #atlantis

ldlework avatar
ldlework

Oh dang, you can’t refer to modules in a remote git repo that are not at the root?

ldlework avatar
ldlework

rough

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

try this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/users?ref=tags/0.53.3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that initializes the aws/users module

ldlework avatar
ldlework

I mean in a module { source = } block

ldlework avatar
ldlework

Or maybe that double slash works there too?

2019-03-21

DaGo avatar

Has anyone tried this wizardry? https://modules.tf

modules.tf - Get your infrastructure as code delivered as Terraform modules

Your infrastructure as code delivered as Terraform modules

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@antonbabenko

modules.tf - Get your infrastructure as code delivered as Terraform modules

Your infrastructure as code delivered as Terraform modules

antonbabenko avatar
antonbabenko

@DaGo Yes, I tried it.

xluffy avatar
xluffy

we have the author of this site here.

1
1
oscarsullivan_old avatar
oscarsullivan_old

I really like Cloudcraft

oscarsullivan_old avatar
oscarsullivan_old
09:19:40 AM

here you go @DaGo

fiesta_parrot1
oscarsullivan_old avatar
oscarsullivan_old

Cloudcraft doesn’t allow you to configure objects like ALB (e.g. listener and redirects)

oscarsullivan_old avatar
oscarsullivan_old

here’s a simple eu-west-2 ALB + EC2

oscarsullivan_old avatar
oscarsullivan_old

(screenshot: Cloudcraft diagram of the ALB + EC2 setup)
oscarsullivan_old avatar
oscarsullivan_old

I think it’s really good for starting a new framework for a project

DaGo avatar

Awesome. Many thanks, Oscar!

oscarsullivan_old avatar
oscarsullivan_old

Here’s a question for the crowd: Do you prefer to leave a default for variables?

Example:

variable "stage" {
  type        = "string"
  default     = "testing"
  description = "Stage, e.g. 'prod', 'staging', 'dev' or 'testing'"
}

My take: I personally don’t like leaving defaults to variables like ‘name’ and ‘stage’, but don’t mind for ‘instance_size’. The reason is I’d rather it failed due to a NULL value and I could fix this var not being passed to TF (from, say, Geodesic) than read the whole PLAN and check I’m getting the correct stage etc.

What do you think?

oscarsullivan_old avatar
oscarsullivan_old

Another example in a Dockerfile:

FROM node:8.15-alpine

ARG PORT
EXPOSE ${PORT}

I could have ARG PORT=3000, but I’d rather it failed due to a lack of port definition than go through the build process and find the wrong / no port was exposed.

oscarsullivan_old avatar
oscarsullivan_old

I’d rather have NO PORT than the WRONG port for my built image. I feel it is easier for me to catch the NO PORT than the WRONG port.

mmuehlberger avatar
mmuehlberger

I like defaults for things where I know I’m not going to change them in every stage/environment, or variables where it makes sense to have one (e.g. require stage to be set explicitly, but like you said, something like instance_type is fine, even though you might want to change it in every stage anyway). For something like PORT I’d also set a default, usually to whatever the default is for the technology I’m using.

My take on defaults is basically: try to make it as easy as possible to use whatever you are building, with as little extra configuration as possible.

oscarsullivan_old avatar
oscarsullivan_old

RE: EKS cluster

variable "image_id" {
  type        = "string"
  default     = ""
  description = "EC2 image ID to launch. If not provided, the module will lookup the most recent EKS AMI. See <https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html> for more details on EKS-optimized images"
}

variable "eks_worker_ami_name_filter" {
  type        = "string"
  description = "AMI name filter to lookup the most recent EKS AMI if `image_id` is not provided"
  default     = "amazon-eks-node-v*"
}
* module.eks_workers.data.aws_ami.eks_worker: data.aws_ami.eks_worker: Your query returned no results. Please change your search criteria and try again.

What are people specifying as their IMAGE_ID or what filter have they got for the ami_name_filter?

oscarsullivan_old avatar
oscarsullivan_old

Looks like they’re EKS specific, so I don’t want to use my standard AMI

oscarsullivan_old avatar
oscarsullivan_old
11:32:05 AM

Looks like the pattern has changed. No v for the semantic version

oscarsullivan_old avatar
oscarsullivan_old
* aws_eks_cluster.default: error creating EKS Cluster (acme-sandbox-eks-cluster): InvalidParameterException: A CIDR attached to your VPC is invalid. Your VPC must have only RFC1918 or CG NAT CIDRs. Invalid CIDR: [14.0.0.0/16]

Hmmm looks like a valid CIDR to me

oscarsullivan_old avatar
oscarsullivan_old

Weird as they’re created with this:

module "vpc" {
  source    = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=master>"
  version   = "0.4.0"
  namespace = "${var.namespace}"
  stage     = "${var.stage}"
  name      = "vpc"
  cidr_block         = "${var.cidr_prefix}.0.0.0/16"
}

module "dynamic_subnets" {
  source             = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master>"
  namespace          = "${var.namespace}"
  stage              = "${var.stage}"
  name               = "dynamic_subnets"
  region             = "${var.aws_region}"
  availability_zones = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  vpc_id             = "${module.vpc.vpc_id}"
  igw_id             = "${module.vpc.igw_id}"
  cidr_block         = "${var.cidr_prefix}.0.0.0/16"
}
oscarsullivan_old avatar
oscarsullivan_old

Can the VPC and the SUBNET not have the same block

oscarsullivan_old avatar
oscarsullivan_old

I thought it was more about linking them together

oscarsullivan_old avatar
oscarsullivan_old
cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

mmuehlberger avatar
mmuehlberger

Valid RFC1918 CIDRs are only those 3:

10.0.0.0        -   10.255.255.255  (10/8 prefix)
172.16.0.0      -   172.31.255.255  (172.16/12 prefix)
192.168.0.0     -   192.168.255.255 (192.168/16 prefix)
mmuehlberger avatar
mmuehlberger

Everything else is in the public IP space.

oscarsullivan_old avatar
oscarsullivan_old
cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

oscarsullivan_old avatar
oscarsullivan_old

because it is 10.0.0.0/16

oscarsullivan_old avatar
oscarsullivan_old

but should really be 10.0.0.0/8

oscarsullivan_old avatar
oscarsullivan_old

?

mmuehlberger avatar
mmuehlberger

AWS only allows /16 networks, not more.

oscarsullivan_old avatar
oscarsullivan_old

So it sounds like 10.0.0.0/16 is not RFC1918 compliant then

mmuehlberger avatar
mmuehlberger

No. /8 means the first 8 bits are fixed (the 10 in this case). /16 means the first 16 bits are fixed (so the first 2 octets)

Steven avatar
Steven

10.0.0.0/16 is fine, 14.0.0.0/16 is not

Samuli avatar
Samuli

10.0.0.0/16 is subnet of 10.0.0.0/8

oscarsullivan_old avatar
oscarsullivan_old

Oh

mmuehlberger avatar
mmuehlberger

With VPCs in AWS you can have a range of 10.0.0.0/16 to 10.255.0.0/16 as valid network CIDRs. Also 172.16.0.0/16 to 172.31.0.0/16 and 192.168.0.0/16.

oscarsullivan_old avatar
oscarsullivan_old

so 10.0.0.0/16 is valid

oscarsullivan_old avatar
oscarsullivan_old

but my 14.0.0.0/16 is not

oscarsullivan_old avatar
oscarsullivan_old

haha woops

oscarsullivan_old avatar
oscarsullivan_old

but had I gone for 10.${var.cidr_prefix}.0.0/16 all would be well
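
Applied to the earlier invocation, that fix is just (a sketch reusing the same variables):

module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=master"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  name       = "vpc"
  cidr_block = "10.${var.cidr_prefix}.0.0/16"   # e.g. 10.14.0.0/16, inside the RFC1918 10/8 range
}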

mmuehlberger avatar
mmuehlberger

(plus 100.64.0.0/16 to 100.127.0.0/16 but this is carrier-grade NAT, so better stay away from that)

mmuehlberger avatar
mmuehlberger

Exactly.

oscarsullivan_old avatar
oscarsullivan_old

Damn

oscarsullivan_old avatar
oscarsullivan_old

Alright, hopefully it should be easy to fix then

oscarsullivan_old avatar
oscarsullivan_old

Thanks chaps for explaining CIDR blocks and RFC1918

loren avatar
loren

I forget where I read it, but there is a lot of truth to the quote, “in the cloud, everyone is a network engineer”

1
mmuehlberger avatar
mmuehlberger

Absolutely!

oscarsullivan_old avatar
oscarsullivan_old

My networking is so poor. For the last 2 years the networking aspect of my cloud was managed by the service provider!

loren avatar
loren

VPCs, subnets, security groups, NACLs, VPNs, WAFs, load balancers, oh my!

oscarsullivan_old avatar
oscarsullivan_old

Yep! The only thing I had to manage were firewall ingress/egress rules and load balancer rules.. none of the setup and maintenance

oscarsullivan_old avatar
oscarsullivan_old

It was Infrastructure as a service really

oscarsullivan_old avatar
oscarsullivan_old

That’s convenient… it’s so abstracted that I only need to change 2 values in the same file

oscarsullivan_old avatar
oscarsullivan_old

Oh dear, terraform isn’t detecting the change properly and isn’t destroying the old subnets

Samuli avatar
Samuli

try terraform destroy without the changes first?

oscarsullivan_old avatar
oscarsullivan_old

No it has totally desynced from the bucket

oscarsullivan_old avatar
oscarsullivan_old

so I’m making it local

oscarsullivan_old avatar
oscarsullivan_old

copying the S3 state

oscarsullivan_old avatar
oscarsullivan_old

pasting that into the local

oscarsullivan_old avatar
oscarsullivan_old

and then checking if terraform state list shows the resources

oscarsullivan_old avatar
oscarsullivan_old

then I’ll push it back up

oscarsullivan_old avatar
oscarsullivan_old

yep perfect, showing the machines again

oscarsullivan_old avatar
oscarsullivan_old
Plan: 26 to add, 2 to change, 26 to destroy.
oscarsullivan_old avatar
oscarsullivan_old

vs.

 ✓   (acme-sandbox-admin) vpc ⨠ terraform destroy
data.aws_availability_zones.available: Refreshing state...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: no


Error: Destroy cancelled.


oscarsullivan_old avatar
oscarsullivan_old

Specifically…

-/+ module.vpc.aws_vpc.default (new resource required)
      id:                                 "vpc-xxx" => <computed> (forces new resource)
      arn:                                "arn:aws:ec2:eu-west-2:xxx:vpc/vpc-xxx" => <computed>
      assign_generated_ipv6_cidr_block:   "true" => "true"
      cidr_block:                         "14.0.0.0/16" => "10.14.0.0/16" (forces new resource)

Should be compliant now

oscarsullivan_old avatar
oscarsullivan_old
02:50:41 PM

Doh it’s happened again.. can’t delete modules

rbadillo avatar
rbadillo

Team any suggestions on how to fix this error ?

"data.aws_vpc.vpc.tags" does not have homogenous types. found TypeString and then TypeMap in ${data.aws_vpc.vpc.tags["Name"]}
rbadillo avatar
rbadillo

I have some Kubernetes Tags, doing some googling says to delete those tags but I want to avoid that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can delete/update those tags in kubernetes. Terraform data sources just read them and you can’t change them on the fly

ldlework avatar
ldlework

In the terraform-aws-ecs-codepipeline module, it has an example buildspec: https://github.com/cloudposse/terraform-aws-ecs-codepipeline#example-buildspec

cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

ldlework avatar
ldlework

Where are the variables like $REPO_URL and $IMAGE_REPO_NAME coming from?

ldlework avatar
ldlework

They’re not official build environment variables.

ldlework avatar
ldlework

Oh I see, it’s provided by the module.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

LeoGmad avatar
LeoGmad

Has anyone encountered issues lately with the module terraform-aws-dynamic-subnets? I believe it has something to do with AWS adding an AZ (us-west-2d) to the us-west-2 region some time in Feb.

I ended up removing the subnets in one of my environments before being able to recreate them. Not something I want to do in OPS. Any insight on the issue?

-/+ module.subnets.aws_subnet.public[0] (new resource required)
      id:                                                                                         "subnet-XXX" => <computed> (forces new resource)
      arn:                                                                                        "arn:aws:ec2:us-west-2:XXX:subnet/subnet-XXX" => <computed>
      assign_ipv6_address_on_creation:                                                            "false" => "false"
      availability_zone:                                                                          "us-west-2a" => "us-west-2a"
      availability_zone_id:                                                                       "usw2-az2" => <computed>
      cidr_block:                                                                                 "10.0.96.0/19" => "10.0.128.0/19" (forces new resource)

-/+ module.subnets.aws_subnet.public[1] (new resource required)
      id:                                                                                         "subnet-XXX" => <computed> (forces new resource)
      arn:                                                                                        "arn:aws:ec2:us-west-2:XXX:subnet/subnet-XXX" => <computed>
      assign_ipv6_address_on_creation:                                                            "false" => "false"
      availability_zone:                                                                          "us-west-2b" => "us-west-2b"
      availability_zone_id:                                                                       "usw2-az1" => <computed>
      cidr_block:                                                                                 "10.0.128.0/19" => "10.0.160.0/19" (forces new resource)
    
LeoGmad avatar
LeoGmad

Sorry I don’t have the actual error output but it complained about the CIDR already existing

oscarsullivan_old avatar
oscarsullivan_old

Does it already exist?

oscarsullivan_old avatar
oscarsullivan_old

I got bamboozled by that question today

LeoGmad avatar
LeoGmad

Well technically it’s that module.subnets.aws_subnet.public[1] which never gets deleted

LeoGmad avatar
LeoGmad

or updated.

LeoGmad avatar
LeoGmad

I may be able to use 2d and 2b in this case, but not sure what my solution is for my OPS env which currently uses a, b, and c. I may have to just run a second VPC for a migration.

oscarsullivan_old avatar
oscarsullivan_old

Good luck. I don’t think I have an answer for this!

LeoGmad avatar
LeoGmad

Actually, I may have found the solution! I’ll just temporarily go down to 2 subnets in OPS, c and d, which should not conflict.

daveyu avatar
daveyu

I ran into this too. It has to do with how the module subdivides CIDR blocks. With the addition of us-west-2d, the module wants to create a new private subnet, but it tries to assign to it the CIDR block that’s already given to the public subnet for us-west-2a

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can you take a look?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We specifically and deliberately tried to address this use-case in our module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but might have a logical problem affecting it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think your suggestion though is the best: hardcode the desired AZs for stability

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’ll look into that

LeoGmad avatar
LeoGmad

I see

daveyu avatar
daveyu

i couldn’t figure out a fix.. fortunately my env was in a state where i could delete and recreate all the subnets

daveyu avatar
daveyu

if you’re using terraform-aws-dynamic-subnets directly, it looks like you should be able to set availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
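
i.e. something along these lines (other arguments elided), pinning the AZ list so the subnet CIDR math doesn’t reshuffle when AWS adds an AZ to the region:

module "subnets" {
  source             = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  # namespace, stage, vpc_id, igw_id, cidr_block, etc.
  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
}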

:--1:1
LeoGmad avatar
LeoGmad

Thanks I am using terraform-aws-dynamic-subnets I’ll give it a go.

ldlework avatar
ldlework
09:50:52 PM

What might be going wrong with my cloudposse/terraform-aws-ecs-codepipeline if I’m getting the following error on deploy:

ldlework avatar
ldlework

I’m not sure what IAM role is relevant here (there are so many O_O)

ldlework avatar
ldlework

I’m not even sure what is being uploaded to s3?

ldlework avatar
ldlework

Oh I deleted these lines from my buildspec:

      - printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPO_URI:$IMAGE_TAG" | tee imagedefinitions.json
artifacts:
  files: imagedefinitions.json
ldlework avatar
ldlework

Perhaps these are important? Not sure why there would be an IAM role error?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ldlework are you using this in the context of one of our other modules? e.g. the ecs-web-app?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if not, I would look at how our other modules leverage it


ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) yeah I’ve been following right out this, https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

ldlework avatar
ldlework

Adding those lines I removed seemed to make it work.

ldlework avatar
ldlework

The last thing that’s not working is that the Github Webhook doesn’t seem to do anything.

ldlework avatar
ldlework

I cut a release, Github says it sent the webhook, and nothing happens in CodePipeline even after 10 minutes.

ldlework avatar
ldlework

I could probably post my HCL since I don’t think it contains any secrets.

ldlework avatar
ldlework

Maybe it only works if the commit is different

ldlework avatar
ldlework

still nothing

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) do I have to click this even though I set up the terraform module with an oauth token with the right permissions? terraform was able to set up the webhook on my repo after all, http://logos.ldlework.com/caps/2019-03-21-22-33-18.png

attachment image
ldlework avatar
ldlework

It would be nice if AWS listed the webhook events somewhere

ldlework avatar
ldlework

I don’t have a clue what could be wrong

ldlework avatar
ldlework

When I list the webhooks via the AWS cli I see that there is an “authenticationConfiguration” section with a “SecretToken”

ldlework avatar
ldlework

I don’t see this secret token anywhere in the webhook on the github side

ldlework avatar
ldlework

Oh that’s probably the obscured “Secret”

ldlework avatar
ldlework

I have no idea

ldlework avatar
ldlework

The response on the github side says 200

ldlework avatar
ldlework

SSL Verification is enabled

ldlework avatar
ldlework

Even got a x-amzn-RequestId header in the response

ldlework avatar
ldlework

Filters on the Webhook:

                "filters": [
                    {
                        "jsonPath": "$.action", 
                        "matchEquals": "published"
                    }
                ]
ldlework avatar
ldlework

Webhook Payload:

{
  "action": "published",
  "release": {
ldlework avatar
ldlework

Hmm it was the password.

ldlework avatar
ldlework

That’s worrying. Oh well.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you mean the webhook secret?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As I recall, the webhook secret on GitHub cannot be updated (rotated)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You need to delete/recreate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

….at least with Terraform

ldlework avatar
ldlework

That might’ve been it.

2019-03-20

Arvind avatar
Arvind

I have successfully inserted my AWS_ACCESS_KEY and AWS_SECRET_KEYS in Vault

MacBook-Pro-5$ vault read  secret/infra_secrets
Key                 Value
---                 -----
refresh_interval    xxh
access_key          xxxxx
secret_key          xxxx

Now can anyone suggest to me how I should use these keys in my main.tf (code piece) so I can provision the infra in AWS.

oscarsullivan_old avatar
oscarsullivan_old

By using Geodesic’s assume-role function or by running aws-vault exec [profile] [cmd]

:--1:1
oscarsullivan_old avatar
oscarsullivan_old

so

oscarsullivan_old avatar
oscarsullivan_old

aws-vault exec sandbox terraform apply

xluffy avatar
xluffy

hey, I have a module for creating 2 VPCs + another module for creating a peering between the 2 VPCs. In the peering module, I have a read-only data source for counting route tables. But the count can’t be computed if the VPCs aren’t created first. How do I make them depend on each other?

Error: Error refreshing state: 2 error(s) occurred:

* module.peering.data.aws_route_table.acceptor: data.aws_route_table.acceptor: value of 'count' cannot be computed
* module.peering.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed
oscarsullivan_old avatar
oscarsullivan_old

Ultimate goal is to create VPCs and peer them? anything more complex?

xluffy avatar
xluffy

yeah, just create 2 vpcs, after that, will create a peering between them

xluffy avatar
xluffy

terraform doesn’t support depends_on for modules.

oscarsullivan_old avatar
oscarsullivan_old

5 mins

:--1:1
oscarsullivan_old avatar
oscarsullivan_old

(screenshots: the vpc and vpc_peering projects’ Terraform files)
oscarsullivan_old avatar
oscarsullivan_old

Two different projects

oscarsullivan_old avatar
oscarsullivan_old

Those are the main tf files

oscarsullivan_old avatar
oscarsullivan_old

So I run vpc.tf’s project inside of my geodesic module for each stage

oscarsullivan_old avatar
oscarsullivan_old

and then I run vpc_peering in each sub-account’s module that isn’t mgmt or root

oscarsullivan_old avatar
oscarsullivan_old

This is what I do to peer MGMT to all other sub accounts

xluffy avatar
xluffy

yeah, that will work, because u create the VPC first (run vpc.tf).

But if u have a peering module + vpc module in one project, the peering module can’t query data from a VPC that hasn’t been created yet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

think about creating projects (e.g. separate states) based on how they need to be applied

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so if one resource depends on the outputs of another module like a vpc, it might make more sense to separate them out

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

rather than needing to rely on -target parameters to surgically apply state

xluffy avatar
xluffy

https://github.com/cloudposse/terraform-aws-vpc-peering/blob/master/main.tf#L38

data "aws_route_table" "requestor" {
  count     = "${var.enabled == "true" ? length(distinct(sort(data.aws_subnet_ids.requestor.ids))) : 0}"
  subnet_id = "${element(distinct(sort(data.aws_subnet_ids.requestor.ids)), count.index)}"
}

Looks up route tables from a VPC. If this VPC isn’t created yet, it will fail

cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering

oscarsullivan_old avatar
oscarsullivan_old


But if u have a peering module + vpc module in one project, the peering module can’t query data from a VPC that hasn’t been created yet.
That’s why they’re separate..

oscarsullivan_old avatar
oscarsullivan_old

Different ‘goals’ usually get isolated in my work

oscarsullivan_old avatar
oscarsullivan_old

Plus what if I want to create a VPC that isn’t peered etc

xluffy avatar
xluffy

I see

oscarsullivan_old avatar
oscarsullivan_old

I also don’t use main.tf files.. I don’t like them

oscarsullivan_old avatar
oscarsullivan_old

I like a file per resource type

joshmyers avatar
joshmyers

I prefer a main, at least for common elements

:--1:1
joshmyers avatar
joshmyers

¯\_(ツ)_/¯

joshmyers avatar
joshmyers

That is quite a preference thing, not sure if quite a coding standard

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

agree with both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I like a main.tf for common stuff like the provider definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and then when a lot of stuff is going on, break it out by .tf files like @oscarsullivan_old

:100:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Creating Modules - Terraform by HashiCorp

A module is a container for multiple resources that are used together.

oscarsullivan_old avatar
oscarsullivan_old


I like a main.tf for common stuff like the provider definition
I have that in my terraform.tf!

joshmyers avatar
joshmyers
What about that sneaky data source that gets used all over the shop by r53.tf, vpc.tf, etc
joshmyers avatar
joshmyers

Anyway, the shed is blue.

oscarsullivan_old avatar
oscarsullivan_old


sneaky data source
terraform.tf!

joshmyers avatar
joshmyers

¯\_(ツ)_/¯

oscarsullivan_old avatar
oscarsullivan_old

Joining the meetup tonight @joshmyers?

joshmyers avatar
joshmyers

Where is this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(PST)

joshmyers avatar
joshmyers

01:30 GMT? @oscarsullivan_old

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

6:30 PM Wednesday, Greenwich Mean Time (GMT)

oscarsullivan_old avatar
oscarsullivan_old

haha not quite

oscarsullivan_old avatar
oscarsullivan_old

6:30

oscarsullivan_old avatar
oscarsullivan_old

after work sadly

joshmyers avatar
joshmyers

ahh, I need to nip out but would be good to try and make that :+1:

oscarsullivan_old avatar
oscarsullivan_old

I’ll have only just arrived at home at 6:10

oscarsullivan_old avatar
oscarsullivan_old

Deffo do

oscarsullivan_old avatar
oscarsullivan_old

https://github.com/cloudposse/terraform-aws-eks-cluster SHOULD I create a new VPC just for EKS?

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

oscarsullivan_old avatar
oscarsullivan_old

.. Or can I safely use my existing VPC that my sub-account uses for everything

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can definitely use the same vpc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the “SHOULD” would come down to a business requirement

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. “the EKS cluster SHOULD run in a separate PCI compliant VPC”

oscarsullivan_old avatar
oscarsullivan_old

Ah awesome

oscarsullivan_old avatar
oscarsullivan_old

Not a technical requirement great

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup

oscarsullivan_old avatar
oscarsullivan_old

Ok this will help me get setup with EKS quickly

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha,

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and open issues that others have encountered with EKS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

most of the time errors encountered are due to missing a step

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

tested many times by a few people

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

oscarsullivan_old avatar
oscarsullivan_old

Thanks! Will spend tomorrow reading those and setting up my first k8s cluster

2019-03-19

niek avatar

Does anyone know if it is possible to construct a string from a map?

For example

{
   "key1" = "val1"
   "key2" = "val2"
}

To: "key1, val1, key2, val2"

Currently I have a tf module which accepts a map as input for tagging in AWS. But there is one place where I need to pass the tags as a list. I prefer to keep my work backwards compatible.

niek avatar

Solved

replace(jsonencode(map("key1", "val1", "key2", "val2")), "/[\\{\\}\"\\s]/", "")
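
An alternative sketch without the regex, assuming a map variable var.tags (0.11 syntax): formatlist pairs each key with its value element-wise, and join flattens the result into "key1, val1, key2, val2".

output "tags_as_string" {
  value = "${join(", ", formatlist("%s, %s", keys(var.tags), values(var.tags)))}"
}
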
oscarsullivan_old avatar
oscarsullivan_old
Error: module.jenkins.module.cicd.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled



Error: module.jenkins.module.cicd.module.build.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled



Error: module.jenkins.module.efs_backup.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled



Error: module.jenkins.module.elastic_beanstalk_environment.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled


 ⧉  sandbox 
 ✓   (sandbox-iac) jenkins ⨠ terraform -v
Terraform v0.11.11
+ provider.aws v2.2.0
oscarsullivan_old avatar
oscarsullivan_old

Anyone familiar with this?

oscarsullivan_old avatar
oscarsullivan_old

Looks like the aws_region is not passed through the module itself to the submodules..

oscarsullivan_old avatar
oscarsullivan_old

Hmm having looked at the jenkins modules and the erroring sub modules it does pass aws_region down

oscarsullivan_old avatar
oscarsullivan_old
Simply remove current = true from your Terraform configuration. The data source defaults to the current provider region if no other filtering is enabled.

:thinking_face:

Thanks for the link though I don’t understand to which file it refers nor do I recognise the current = true flag

oscarsullivan_old avatar
oscarsullivan_old

Ah got it

oscarsullivan_old avatar
oscarsullivan_old

ah man

oscarsullivan_old avatar
oscarsullivan_old
cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

oscarsullivan_old avatar
oscarsullivan_old

it’s everywhere

oscarsullivan_old avatar
oscarsullivan_old

I’m going to have to use so many forks until this is merged in lol

loren avatar
loren

the argument has been deprecated for a while, just printing warnings instead of erroring and exiting non-zero

loren avatar
loren

removing it is backwards compatible for a reasonable number of versions

oscarsullivan_old avatar
oscarsullivan_old

PRs made for it

oscarsullivan_old avatar
oscarsullivan_old

any idea how to change to my branch:

  source              = "git::https://github.com/osulli/terraform-aws-cicd.git?ref=heads/osulli:patch-1"
oscarsullivan_old avatar
oscarsullivan_old

instead of tags

oscarsullivan_old avatar
oscarsullivan_old

I’ve tried heads

mmuehlberger avatar
mmuehlberger

Just use the branch as a ref. ref=master for instance
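
So for the fork mentioned above, something like (module name hypothetical):

module "cicd" {
  source = "git::https://github.com/osulli/terraform-aws-cicd.git?ref=patch-1"
}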

oscarsullivan_old avatar
oscarsullivan_old

ty

oscarsullivan_old avatar
oscarsullivan_old

fab that works

oscarsullivan_old avatar
oscarsullivan_old
Update sub-module versions by osulli · Pull Request #39 · cloudposse/terraform-aws-jenkins

… And use my forks until PRs merged for AWS v2 support What Updates sub-module versions to latest releases Uses my forks until the PRs are merged for the sub-modules Why Use latest versions S…

oscarsullivan_old avatar
oscarsullivan_old

That’s weird.. Still getting …

Error: module.jenkins.module.cicd.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled

even though..

~/code/terraform-aws-efs-backup$ git checkout patch-1
Branch 'patch-1' set up to track remote branch 'patch-1' from 'origin'.
Switched to a new branch 'patch-1'
~/code/terraform-aws-efs-backup$ grep -iR "current" ./*
./docs/terraform.md:| noncurrent_version_expiration_days | S3 object versions expiration period (days) | string | `35` | no |
./README.md:  noncurrent_version_expiration_days = "${var.noncurrent_version_expiration_days}"
./README.md:> NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group Rule resource 
./README.md:| noncurrent_version_expiration_days | S3 object versions expiration period (days) | string | `35` | no |
./README.yaml:    noncurrent_version_expiration_days = "${var.noncurrent_version_expiration_days}"
./README.yaml:  > NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group Rule resource 
./s3.tf:    noncurrent_version_expiration {
./s3.tf:      days = "${var.noncurrent_version_expiration_days}"
./variables.tf:variable "noncurrent_version_expiration_days" {
oscarsullivan_old avatar
oscarsullivan_old

Next…

* module.jenkins.module.efs_backup.output.sns_topic_arn: Resource 'aws_cloudformation_stack.sns' does not have attribute 'outputs.TopicArn' for variable 'aws_cloudformation_stack.sns.outputs.TopicArn'

Has anyone used the Jenkins module lately?

oscarsullivan_old avatar
oscarsullivan_old
Resource 'aws_cloudformation_stack.sns' does not have attribute 'outputs.TopicArn' for variable 'aws_cloudformation_stack.sns.outputs.TopicArn' · Issue #36 · cloudposse/terraform-aws-efs-backup

Hi, I'm trying to create EFS backups using this module but I keep running into the following error: * module.efs_backup.output.sns_topic_arn: Resource 'aws_cloudformation_stack.sns' doe…

oscarsullivan_old avatar
oscarsullivan_old

Ok it was looking for sns.TopicArn instead of sns.arn https://www.terraform.io/docs/providers/aws/r/sns_topic.html#arn

AWS: sns_topic - Terraform by HashiCorp

Provides an SNS topic resource.

oscarsullivan_old avatar
oscarsullivan_old

arn not valid either. Just going to remove the output.. Hopefully the output isn’t referenced elsewhere? Not sure you can reference an output anyway. That’s just visual.

mmuehlberger avatar
mmuehlberger

Well, you can when looking up a remote terraform state.

oscarsullivan_old avatar
oscarsullivan_old

No but it’s not like in another module ${output.x.x} is a thing

oscarsullivan_old avatar
oscarsullivan_old

So it should be safe to remove this one output that breaks the whole project

oscarsullivan_old avatar
oscarsullivan_old
AWS: aws_route53_zone - Terraform by HashiCorp

Provides details about a specific Route 53 Hosted Zone

oscarsullivan_old avatar
oscarsullivan_old

Damn, received this a bunch of times.

module.jenkins.module.efs.aws_efs_mount_target.default[1]: Creation complete after 2m44s (ID: fsmt-xxx)

Error: Error applying plan:

1 error(s) occurred:

* module.jenkins.module.efs.aws_efs_mount_target.default[0]: 1 error(s) occurred:

* aws_efs_mount_target.default.0: MountTargetConflict: mount target already exists in this AZ
	status code: 409, request id: 0df8f8c2-xxx-xxx-xxx-55a525bfd810

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
oscarsullivan_old avatar
oscarsullivan_old

changed the versions of CP/efs but no luck. It seems to be trying to create it twice???

oscarsullivan_old avatar
oscarsullivan_old

Yet doesn’t appear to be created twice:

[email protected]:/devops/terraform/providers/aws/jenkins/.terraform/modules$ grep -ir "aws_efs_mount_target" ./*
./08d41c3e3037162fadbb4393ce396759/outputs.tf:  value = ["${aws_efs_mount_target.default.*.id}"]
./08d41c3e3037162fadbb4393ce396759/outputs.tf:  value = ["${aws_efs_mount_target.default.*.ip_address}"]
./08d41c3e3037162fadbb4393ce396759/main.tf:resource "aws_efs_mount_target" "default" {
./808f0aa181de2ea4cc344b3503eff684/efs.tf:data "aws_efs_mount_target" "default" {
./808f0aa181de2ea4cc344b3503eff684/cloudformation.tf:    myEFSHost                  = "${var.use_ip_address == "true" ? data.aws_efs_mount_target.default.ip_address : format("%s.efs.%s.amazonaws.com", data.aws_efs_mount_target.default.file_system_id, (signum(length(var.region)) == 1 ? var.region : data.aws_region.default.name))}"
./808f0aa181de2ea4cc344b3503eff684/security_group.tf:  security_group_id        = "${data.aws_efs_mount_target.default.security_groups[0]}"
oscarsullivan_old avatar
oscarsullivan_old

only one resource for it in the whole of the jenkins project and its modules

oscarsullivan_old avatar
oscarsullivan_old

wants to create it a second time despite it already existing

oscarsullivan_old avatar
oscarsullivan_old

Found the cause: terraform-aws-efs

Inside main.tf

resource "aws_efs_mount_target" "default" {
  count           = "${length(var.availability_zones)}"
  file_system_id  = "${aws_efs_file_system.default.id}"
  subnet_id       = "${element(var.subnets, count.index)}"
  security_groups = ["${aws_security_group.default.id}"]
}
oscarsullivan_old avatar
oscarsullivan_old

The length was what was causing multiple to be created… so I just used one availability zone and am no longer receiving that dupe error.
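
(A sketch of the underlying mismatch, assuming the AZ list is longer than the subnet list: element() wraps around, so two AZ entries can map onto the same subnet and collide in one AZ. Driving count off the subnets instead keeps the two in lockstep:)

resource "aws_efs_mount_target" "default" {
  count           = "${length(var.subnets)}" # one mount target per subnet, not per AZ
  file_system_id  = "${aws_efs_file_system.default.id}"
  subnet_id       = "${element(var.subnets, count.index)}"
  security_groups = ["${aws_security_group.default.id}"]
}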

oscarsullivan_old avatar
oscarsullivan_old

Onto the next error

Error: Error refreshing state: 1 error(s) occurred:

* module.jenkins.module.efs_backup.output.datapipeline_ids: At column 48, line 1: map "aws_cloudformation_stack.datapipeline.outputs" does not have any elements so cannot determine type. in:

${aws_cloudformation_stack.datapipeline.outputs["DataPipelineId"]}
oscarsullivan_old avatar
oscarsullivan_old

Oh. Cool. I can’t terraform destroy either

oscarsullivan_old avatar
oscarsullivan_old

^ Commented out the output..

And now….

* aws_elastic_beanstalk_environment.default: InvalidParameterValue: No Solution Stack named '64bit Amazon Linux 2017.09 v2.8.4 running Docker 17.09.1-ce' found.
	status code: 400, request id: d7bc0ae2-2278-4bbd-9540-bda532e9cd71
oscarsullivan_old avatar
oscarsullivan_old

Feel like I’m getting closer

oscarsullivan_old avatar
oscarsullivan_old

You know what.. I’m going to try the live version and either: 1) Define the AWS provider version 2) Use it how it is + grep for the deprecated argument and just manually remove it
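
(For option 1, pinning looks like this in 0.11; the exact 1.x constraint is an assumption, anything below 2.0 keeps the removed arguments working:)

provider "aws" {
  version = "~> 1.60" # hypothetical pin, any pre-2.0 release
  region  = "eu-west-2"
}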

rbadillo avatar
rbadillo

Guys, is there a way to do a split and get the last element of the list?

rbadillo avatar
rbadillo

or do I need to know the size of the list ?

xluffy avatar
xluffy

${length(var.your_list)}

rbadillo avatar
rbadillo

I did it like that, thanks

oscarsullivan_old avatar
oscarsullivan_old

try [-1] for the index

rbadillo avatar
rbadillo

-1 doesn’t work

rbadillo avatar
rbadillo

I ended up using length function

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

xluffy avatar
xluffy

For getting the last element of a list, I think you can do it like this

variable "public_subnet" {
  default = ["10.20.99.0/24" , "10.20.111.0/24", "10.20.222.0/24"]
}

output "last_element" {
  value = "${element(var.public_subnet, length(var.public_subnet) - 1 )}"
}

xluffy avatar
xluffy

will return last_element = 10.20.222.0/24
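
(And tying it back to the original split() question: the same length() pattern applies to the list that split() returns. A sketch with a made-up image string:)

variable "image" {
  default = "nginx:1.15.9" # hypothetical value
}

output "image_tag" {
  # split() yields a list, so element(..., length(...) - 1) grabs its last item ("1.15.9")
  value = "${element(split(":", var.image), length(split(":", var.image)) - 1)}"
}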

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscarsullivan_old https://github.com/cloudposse/terraform-aws-jenkins was tested by us about a year ago (was deployed many times at the time), so prob a lot of things changed since then

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

oscarsullivan_old avatar
oscarsullivan_old

I’ve realised @Andriy Knysh (Cloud Posse) trying so hard to use it lmao

oscarsullivan_old avatar
oscarsullivan_old

I can’t believe no one else (?) has used it recently though.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i think some people used it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just need to find them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have not personally used it for some time

oscarsullivan_old avatar
oscarsullivan_old

It’s worrying that there are issues in it that actually prevent me from running terraform destroy?!

oscarsullivan_old avatar
oscarsullivan_old

Astonished that’s possible

oscarsullivan_old avatar
oscarsullivan_old

I don’t have everything in containers yet

Mohamed Lrhazi avatar
Mohamed Lrhazi

Hello! is this the place to ask for help about https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Mohamed Lrhazi

Mohamed Lrhazi avatar
Mohamed Lrhazi

Geat! Here I go… am testing for the first time with this:

» cat main.tf
module "cdn" {
  source                   = "git://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=master"
  namespace                = "ts-web"
  stage                    = "prod"
  name                     = "test"
  aliases                  = []
  parent_zone_name         = "catholic.edu"
  acm_certificate_arn      = "arn:aws:acm:us-east-1:947556264854:certificate/e9b7a021-ef1a-49f7-8f2c-5a8e13c89dd2"
  use_regional_s3_endpoint = "true"
  origin_force_destroy     = "true"
  cors_allowed_headers     = ["*"]
  cors_allowed_methods     = ["GET", "HEAD", "PUT"]
  cors_allowed_origins     = ["*"]
  cors_expose_headers      = ["ETag"]
}

resource "aws_s3_bucket_object" "index" {
  bucket       = "${module.cdn.s3_bucket}"
  key          = "index.html"
  source       = "${path.module}/index.html"
  content_type = "text/html"
  etag         = "${md5(file("${path.module}/index.html"))}"
}

It seems to work fine.. but then when I visit the cdn site, I get:

» curl -i https://d18shdqwx0ry07.cloudfront.net
HTTP/1.1 502 Bad Gateway
Content-Type: text/html
Content-Length: 507
Connection: keep-alive
Server: CloudFront
Date: Tue, 19 Mar 2019 15:17:45 GMT
Expires: Tue, 19 Mar 2019 15:17:45 GMT
X-Cache: Error from cloudfront
Via: 1.1 e6aa91f0ba1f6ad473a8fc451c95d017.cloudfront.net (CloudFront)
X-Amz-Cf-Id: P5kPEIr2kxXdfOBYgE2iiHiOUBOUh2bGSM8ZU9xI_w8zjcxT6PLCnw==
…
<H2>Failed to contact the origin.</H2>

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscarsullivan_old what we noticed before, if deployment fails for any reason, you need to manually destroy those data pipelines

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF does not destroy them (uses CloudFormation)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and also, all of that should be updated to use https://aws.amazon.com/backup/

AWS Backup | Centralized Cloud Backup

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Honestly, looking at the architecture for deploying a single, non-HA Jenkins on Beanstalk is enough for me to say I just don’t think it’s worth running Jenkins. Plus, to get HA with Jenkins you have to go enterprise. At that point might as well look at other options.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

instead of the efs-backup module which was a hack

oscarsullivan_old avatar
oscarsullivan_old

lmao data pipelines aren’t even available in London

oscarsullivan_old avatar
oscarsullivan_old

I wonder where they were created

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I forgot the command but terraform has something you can run that will output the state

oscarsullivan_old avatar
oscarsullivan_old

show
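
(Both of these work in 0.11:)

terraform show        # pretty-prints the current state (or a saved plan file)
terraform state list  # lists every resource address tracked in the state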

oscarsullivan_old avatar
oscarsullivan_old

let me try that

oscarsullivan_old avatar
oscarsullivan_old

Checked all the data centers and not there lawwwwwwd

oscarsullivan_old avatar
oscarsullivan_old

and cloud-nuke won’t solve this either

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya @Andriy Knysh (Cloud Posse) is correct that pipelines should now be replaced with the backups service

oscarsullivan_old avatar
oscarsullivan_old

execute me now pls

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

How long did you wait @Mohamed Lrhazi ?

Mohamed Lrhazi avatar
Mohamed Lrhazi

Any idea what I am missing?

Mohamed Lrhazi avatar
Mohamed Lrhazi

oh.. maybe 10 mins?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It sometimes takes up to 30 minutes to create a distribution

Mohamed Lrhazi avatar
Mohamed Lrhazi

Ah.. but it says DEPLOYED as status…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

10 m is faster than I have seen. Not sure what is wrong based on your example.

Mohamed Lrhazi avatar
Mohamed Lrhazi

and I think its actually been more than 30mins

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can go into the webconsole and poke around

Mohamed Lrhazi avatar
Mohamed Lrhazi

still giving same error…

Mohamed Lrhazi avatar
Mohamed Lrhazi

yes, I looked at the s3 bucket and looks like it did assign the right perms, from what I can guess

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Did you use a custom SSL cert?

Mohamed Lrhazi avatar
Mohamed Lrhazi

Nope!

Mohamed Lrhazi avatar
Mohamed Lrhazi

could that be it? docs don’t say that’s required!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If yes, the cloudfront provided one will not work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In your code I see the cert ARN provided

Mohamed Lrhazi avatar
Mohamed Lrhazi

Oh sorry you’re right!!! I did add that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Also the S3 bucket is not a website

Mohamed Lrhazi avatar
Mohamed Lrhazi

oh.. the module does not do that for me?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You have to access any file by adding its name after the URL

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

There is another module for that

Mohamed Lrhazi avatar
Mohamed Lrhazi

ok. let me poke around and see if I can make it work… one last question.. is the module supposed to also create the route53 records needed?

Mohamed Lrhazi avatar
Mohamed Lrhazi

cause it does not seem like it did in my simple test case.

oscarsullivan_old avatar
oscarsullivan_old

I just want this jenkins module off of my account now.. Any idea how to get past this:

* module.jenkins.module.efs.module.dns.output.hostname: variable "default" is nil, but no error was reported
* module.jenkins.module.elastic_beanstalk_environment.module.tld.output.hostname: variable "default" is nil, but no error was reported

oscarsullivan_old avatar
oscarsullivan_old

Have tried commenting that output out and also removing all outputs.tf files

oscarsullivan_old avatar
oscarsullivan_old

Just need to be able to run terraform destroy
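
(One escape hatch here, sketched with the module address from the error above: terraform state rm drops entries from state without touching the real resources, so anything removed this way has to be cleaned up by hand:)

terraform state rm module.jenkins.module.efs_backup
terraform state list | grep jenkins   # verify what is still tracked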

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mohamed Lrhazi 1 sec

oscarsullivan_old avatar
oscarsullivan_old

been in a half hour loop trying to purge it but always end up at the same spot

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscarsullivan_old did you find the pipelines in the AWS console? you need to manually delete them

oscarsullivan_old avatar
oscarsullivan_old

They don’t exist

oscarsullivan_old avatar
oscarsullivan_old

I looked through all the data centers

oscarsullivan_old avatar
oscarsullivan_old

I’m set to eu-west-2 (london) and that’s not even an available data center!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what do you see in CloudFormation? Delete those stacks manually

oscarsullivan_old avatar
oscarsullivan_old

0 stacks

oscarsullivan_old avatar
oscarsullivan_old

Also checked the US DCs in case it defaulted to CP’s DC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when you run terraform destroy, what’s the error?

oscarsullivan_old avatar
oscarsullivan_old
 ⧉  sandbox 
 ✓   (healthera-sandbox-admin) jenkins ⨠ terraform destroy
data.aws_availability_zones.available: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...
data.aws_iam_policy_document.ec2: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_route53_zone.he_uk: Refreshing state...
data.aws_elb_service_account.main: Refreshing state...
data.aws_iam_policy_document.role: Refreshing state...
data.aws_ami.amazon_linux: Refreshing state...
data.aws_ami.base_ami: Refreshing state...
data.aws_vpcs.account_vpc: Refreshing state...
data.aws_caller_identity.default: Refreshing state...
data.aws_iam_policy_document.permissions: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_iam_policy_document.resource_role: Refreshing state...
data.aws_iam_policy_document.service: Refreshing state...
data.aws_iam_policy_document.assume: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_caller_identity.default: Refreshing state...
data.aws_iam_policy_document.slaves: Refreshing state...
data.aws_iam_policy_document.role: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
data.aws_acm_certificate.he_uk_ssl: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
data.aws_subnet_ids.private_subnet: Refreshing state...
data.aws_subnet_ids.public_subnet: Refreshing state...
data.aws_vpc.default: Refreshing state...
data.aws_subnet_ids.default: Refreshing state...

Error: Error applying plan:

2 error(s) occurred:

* module.jenkins.module.efs.module.dns.output.hostname: variable "default" is nil, but no error was reported
* module.jenkins.module.elastic_beanstalk_environment.module.tld.output.hostname: variable "default" is nil, but no error was reported

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, never seen those

oscarsullivan_old avatar
oscarsullivan_old

Have tried commenting out those outputs, have tried removing all outputs.tf files, have tried re-applying then destroying etc

oscarsullivan_old avatar
oscarsullivan_old

I get something like this every time I use a CP module

oscarsullivan_old avatar
oscarsullivan_old

but this is a complex one and hard to cleanup manually

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess just go to Route53 and delete those records

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(and fix the variables and outputs so it does not complain)

oscarsullivan_old avatar
oscarsullivan_old

There aren’t any R53 records

oscarsullivan_old avatar
oscarsullivan_old

oh god

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i hope you did that in a test account that you could nuke somehow

oscarsullivan_old avatar
oscarsullivan_old

Yeh but unsure nuke will work on this

oscarsullivan_old avatar
oscarsullivan_old

if we’re talking cloud-nuke

oscarsullivan_old avatar
oscarsullivan_old

had a read through the resources it can do

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i mean anything to destroy it, not exactly using the nuke module

oscarsullivan_old avatar
oscarsullivan_old

jeeeeez

oscarsullivan_old avatar
oscarsullivan_old

Deprecate that god damn repo, please. This has been really unpleasant

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s tone it down a notch. We’re all volunteering support here. https://gist.github.com/richhickey/1563cddea1002958f96e7ba9519972d9

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
What it feels like to be an open-source maintainer

Outside your door stands a line of a few hundred people. They are patiently waiting for you to answer their questions, complaints, pull requests, and feature requests. You want to help all of them, but for now you’re putting it off. Maybe you had a hard day at work, or you’re tired, or you’re just trying to enjoy a weekend with your family and friends. But if you go to , there’s a constant reminder of how many people are waiting:

When you manage to find some spare time, you open the door to the first person. They’re well-meaning enough; they tried to use your project but ran into some confusion over the API. They’ve pasted their code into a GitHub comment, but they forgot or didn’t know how to format it, so their code is a big unreadable mess. Helpfully, you edit their comment to add a code block, so that it’s nicely formatted. But it’s still a lot of code to read. Also, their description of the problem is a bit hard to understand. Maybe this person doesn’t speak English as a first language, or maybe they have a disability that makes it difficult for them to communicate via writing. You’re not sure. Either way, you struggle to understand the paragraphs of text they’ve posted. Wearily, you glance at the hundreds of other folks waiting in line behind them. You could spend a half-hour trying to understand this person’s code, or you could just skim through it and offer some links to tutorials and documentation, on the off-chance that it will help solve their problem. You also cheerfully suggest that they try Stack Overflow or the Slack channel instead.

The next person in line has a frown on their face. They spew out complaints about how your project wasted 2 hours of their life because a certain API didn’t work as advertised. Their vitriol gives you a bad feeling in the pit of your stomach. You don’t waste a lot of time on this person. You simply say, “This is an open-source project, and it’s maintained by volunteers. If there’s a bug in the code, please submit a reproducible test case or a PR.”

The next person has run into a very common error, with an easy workaround. You know you’ve seen this error a few times before, but can’t quite recall where the solution was posted. Stack Overflow? The wiki? The mailing list? After a few minutes of Googling, you paste a link and close the issue.

The next person is a regular contributor. You recognize their name from various community forums and sibling projects. They’ve run into a very esoteric issue and have proposed a pull request to fix it. Unfortunately the issue is complicated, and so their PR contains many paragraphs of prose explaining it. Again, your eye darts to the hundreds of people still waiting in line. You know that this person put a lot of work into their solution, and it’s probably a reasonable one. The Travis tests passed, and so you’re tempted to just say “LGTM” and merge the pull request. However, you’ve been burned by that before. In the past, you’ve merged a PR without fully evaluating it, and in the end it led to new headaches because of problems you failed to foresee. Maybe the tests passed, but the performance degraded by a factor of ten. Or maybe it introduced a memory leak. Or maybe the PR made the project too confusing for new users, because it excessively complicated the API surface. If you merge this PR now, you might wind up with even more issues tomorrow, because you broke someone else’s workflow by solving this one person’s (very edge-casey) problem. So you put it on the back burner. You’ll get to it later when you have more time.

The next person in line has found a new bug, but you know that it’s actually a bug in a sibling project. They’re saying that this is blocking them from shipping their app. You know it’s a big problem, but it’s one of many, and so you don’t have time to fix it right now. You respond that this looks like a genuine issue, but it’s more appropriate to open in another repo. So you close their issue and copy it into the other repo, then add a comment suggesting where they might look in the code to start fixing it. You doubt they’ll actually do so, though. Very few do.

The next person just says “What’s the status on this?” You’re not sure what they’re talking about, so you look at the context. They’ve commented on a lengthy GitHub thread about a long-standing bug in the project. Many people disagreed on the proper solution to the problem, so it generated a lot of discussion. There are more than 20 comments on this particular issue, and it would take you a long time to read through them all to jog your memory. So you merely respond, “Sorry, this issue has been open for a while, but nobody has tackled it yet. We’re still trying to understand the scope of the problem; a pull request could be a good start!”

The next person is just a GreenKeeper bot. These are easy. Except that this particular repo has fairly flaky tests, and the tests failed for what looks like a spurious reason, so you have to restart them to pass. You restart the tests and try to remind yourself to look into it later after Travis has had a chance to run.

The next person has opened a pull request, but it’s on a repo that’s fairly active, and so another maintainer is already providing feedback. You glance through the thread; you trust the other maintainer to handle this one. So you mark it as read and move on.

The next person has run into what appears to be a bug, and it’s not one you’ve ever seen before. But unfortunately they’ve provided scant details on how the problem actually occurred. What browser was it? What version of Node? What version of the project? What code did they use to reproduce it? You ask them for clarification and close the tab.

The constant stream

After a while, you’ve gone through ten or twenty people like this. There are still more than a hundred waiting in line. But by now you’re feeling exhausted; each person has either had a complaint, a question, or a request for enhancement. In a sense, these GitHub notifications are a constant stream of negativity about your projects. Nobody opens an issue or a pull request when they’re satisfied with your work. They only do so when they’ve found something lacking. Even if you only spend a little bit of time reading through these notifications, it can be mentally and emotionally exhausting. Your partner has observed that you’re always grumpy after going through this ritual. Maybe you found yourself snapping at her for no reason, just because you were put in a sour mood. “If doing open source makes you so angry, why do you even do it?” she asks. You don’t have a good answer. You could take a break; in fact you’ve probably earned it by now. In the past, you’ve even taken vacations of a week or two from GitHub, just for your own mental health. But you know that that’s exactly how you ended up in this situation, with hundreds of people patiently waiting. If you had just kept on top of your GitHub notifications, you’d probably have a more manageable 20-30 to deal with per day. Instead you let them pile up, so now there are hundreds. You feel guilty.

In the past, for one reason or another, you’ve really let issues pile up. You might have seen an issue that was left unanswered for months. Usually, when you go back to address such an issue, the person who opened it never responds. Or they respond by saying, “I fixed my problem by abandoning your project and using another one instead.” That makes you feel bad, but you understand their frustration. You’ve learned from experience that the most pragmatic response to these stale issues is often just to say, …

oscarsullivan_old avatar
oscarsullivan_old

My apologies, it was very rude of me as I was frustrated at the time and unsuccessful at getting it to work even after opening several forks and PRs.

1
oscarsullivan_old avatar
oscarsullivan_old

And thank you for calling me out on it

oscarsullivan_old avatar
oscarsullivan_old

And thanks for that nolanlawson read

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so I think part of the issue was that the pipelines are not supported in the region (and maybe other resources)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but the error reporting is bad

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mohamed Lrhazi this module https://github.com/cloudposse/terraform-aws-s3-website does create an S3 website

cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then you use this https://github.com/cloudposse/terraform-aws-cloudfront-cdn to add CDN for it

cloudposse/terraform-aws-cloudfront-cdn

Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - cloudposse/terraform-aws-cloudfront-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn creates a regular S3 bucket, not a website, and points a CloudFront distribution to it)
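
(A very rough sketch of how those two compose; every input and output name below is an assumption, so check each module’s README before copying:)

module "website" {
  source           = "git::https://github.com/cloudposse/terraform-aws-s3-website.git?ref=master"
  namespace        = "example" # hypothetical values throughout
  stage            = "prod"
  name             = "web"
  hostname         = "www.example.com" # hypothetical input name
  parent_zone_name = "example.com"
}

module "cdn" {
  source             = "git::https://github.com/cloudposse/terraform-aws-cloudfront-cdn.git?ref=master"
  namespace          = "example"
  stage              = "prod"
  name               = "web"
  aliases            = ["www.example.com"]
  origin_domain_name = "${module.website.s3_bucket_website_endpoint}" # hypothetical output name
}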

Mohamed Lrhazi avatar
Mohamed Lrhazi
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since it’s not a website, you have to access all the files by their names, e.g. https://d18shdqwx0ry07.cloudfront.net/index.html

Mohamed Lrhazi avatar
Mohamed Lrhazi