#terraform (2019-03)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2019-03-01

oscarsullivan_old avatar
oscarsullivan_old

Can someone list their workflow please for deploying a terraform project to dev and then to, say, sandbox. My understanding of CP’s tools is this:

Switch to Dev

  • Use tfenv to switch AWS account w/ aws-vault + backend configuration (S3 for state and DynamoDB for lock) && run terraform init every time you deploy
  • Run tfenv terraform apply

Switch to Sandbox/Staging/Prod

  • Use tfenv to switch AWS account w/ aws-vault + backend configuration (S3 for state and DynamoDB for lock) && run terraform init every time you deploy
  • Run tfenv terraform apply
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we don’t use it as a wrapper

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

though it does support being called as a wrapper

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use it with direnv which has a stdlib

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

oscarsullivan_old avatar
oscarsullivan_old

And do you have multiple .envrc’s to switch between envs (staging/dev/prod) to set the bucket name for instance?

oscarsullivan_old avatar
oscarsullivan_old

Yeh that direnv seems to do that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have one repo per stage.

oscarsullivan_old avatar
oscarsullivan_old

that’s a lot of code to maintain

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nope

oscarsullivan_old avatar
oscarsullivan_old

that’s what I’ve been avoiding this whole time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform-root-modules is our service catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(do read the README.md on that repo though b/c it’s often misunderstood)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we can call a module with very few lines of code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
# Import the remote module
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/cloudposse/terraform-root-modules.git//aws/acm?ref=tags/0.35.1"
export TF_CLI_PLAN_PARALLELISM=2

use terraform
use atlantis
use tfenv
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So when calling terraform init, it will download the remote module and initialize it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Environment Variables - Terraform by HashiCorp

Terraform uses environment variables to configure various aspects of its behavior.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using tfenv, we pass all parameters via the TF_CLI_ARGS_init environment variable
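
In other words, something along these lines (the exact flags exported here are illustrative):

export TF_CLI_ARGS_init="-from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/acm?ref=tags/0.35.1"
terraform init   # picks the flag up from the environment, so no wrapper is needed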

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this allows us to call all terraform commands without a wrapper

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

however, all this stuff is a PIA to implement if you don’t use a preconfigured shell

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s why we have geodesic

oscarsullivan_old avatar
oscarsullivan_old

The readme for geodesic is even more confusing and doesn’t really articulate workflows

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s 1000 things

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s no one workflow

oscarsullivan_old avatar
oscarsullivan_old

IMO documentation MUST show how one can use it. It might seem like spoonfeeding but knowledge bases should assume that existing knowledge has gaps etc.

oscarsullivan_old avatar
oscarsullivan_old

I look at geodesic

oscarsullivan_old avatar
oscarsullivan_old

see an amazing product

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup, we need to add more docs.

oscarsullivan_old avatar
oscarsullivan_old

but as with many CP products am left with: 1) Where does this fit into my workflow 2) How do people use it in their workflow w/ CLI commands spelled out in an example etc. 3) How do I start using it

oscarsullivan_old avatar
oscarsullivan_old

2 and 3 being the most critical, as 1 an engineer can usually work out

oscarsullivan_old avatar
oscarsullivan_old

if I download geodesic I’d have 0 idea how to start using it

oscarsullivan_old avatar
oscarsullivan_old

Also I’m seeing a non-GitHub link now

oscarsullivan_old avatar
oscarsullivan_old

to docs

oscarsullivan_old avatar
oscarsullivan_old

So I’ll review those :]

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s far from complete.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note it calls out OSX

oscarsullivan_old avatar
oscarsullivan_old

The whole doc is just OSX

oscarsullivan_old avatar
oscarsullivan_old

AWS Vault is a sub-header of OSX

oscarsullivan_old avatar
oscarsullivan_old

How can I do a PR for docs Erik?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/docs

Cloud Posse Developer Hub. Complete documentation for the Cloud Posse solution. https://docs.cloudposse.com - cloudposse/docs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have hundreds of issues, no shortage

oscarsullivan_old avatar
oscarsullivan_old

No PR template

oscarsullivan_old avatar
oscarsullivan_old
99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

oscarsullivan_old avatar
oscarsullivan_old

Ok that’s two PRs for you

Erik Weber avatar
Erik Weber

Does anyone know of a good way to check whether or not a lot of terraform configurations are up to date? Basically I’d like to run terraform plan on $x amount of services/environments and get some sort of alerting if the plan isn’t empty

oscarsullivan_old avatar
oscarsullivan_old

Pop it in a CI pipeline?

oscarsullivan_old avatar
oscarsullivan_old

Or a bash script that iterates

Nikola Velkovski avatar
Nikola Velkovski

terraform plan -detailed-exitcode would do the trick

2
Nikola Velkovski avatar
Nikola Velkovski

as for the process it depends on what you have

Nikola Velkovski avatar
Nikola Velkovski

e.g. if you have jenkins then you will do a bash script that runs in a CI pipeline

Erik Weber avatar
Erik Weber

The plan was to use atlantis rather than pipelines for terraform, but I guess I could do both (where the pipeline only does a plan)

loren avatar

this is what i do, yes, terraform plan -detailed-exitcode

loren avatar

have scheduled jobs in CI with read-only privs that run that every day for all tf configs
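
A rough sketch of such a scheduled check (directory layout and alerting hook are illustrative; exit code 2 from -detailed-exitcode means the plan is non-empty):

#!/usr/bin/env bash
# iterate over each terraform configuration and flag drift
for dir in configs/*/; do
  (
    cd "$dir" || exit 1
    terraform init -input=false >/dev/null || { echo "ERROR: init failed in $dir"; exit 1; }
    terraform plan -detailed-exitcode -input=false >/dev/null
    case $? in
      0) ;;                                          # no changes
      2) echo "DRIFT: $dir has a non-empty plan" ;;  # send the alert here
      *) echo "ERROR: plan failed in $dir" ;;
    esac
  )
done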

oscarsullivan_old avatar
oscarsullivan_old

@Erik Weber both both!

Erik Weber avatar
Erik Weber

Cheers

daveyu avatar

how can I show the state of a single resource instance?

$ terraform state list | grep aws_subnet.public
module.subnets.aws_subnet.public[0]
module.subnets.aws_subnet.public[1]
module.subnets.aws_subnet.public[2]
module.subnets.aws_subnet.public[3]

$ terraform state show module.subnets.aws_subnet.public[1]
Multiple instances found for the given pattern!

This command requires that the pattern match exactly one instance
of a resource. To view the matched instances, use "terraform state list".
Please modify the pattern to match only a single instance.
oscarsullivan_old avatar
oscarsullivan_old
Command: state show - Terraform by HashiCorp

The terraform state show command is used to show the attributes of a single resource in the Terraform state.

oscarsullivan_old avatar
oscarsullivan_old

That’s weird that doing [1] doesn’t work

oscarsullivan_old avatar
oscarsullivan_old

any different with 0 or *?

daveyu avatar

thanks. no, it’s the same Multiple instances found for the given pattern!

oscarsullivan_old avatar
oscarsullivan_old

Not really used state show before, but this looked like a potential cause https://github.com/hashicorp/terraform/issues/8929 if you haven’t seen it already

Remote State · Issue #8929 · hashicorp/terraform

Hi there, Terraform v0.7.3 I had several environments with their own dedicated tfstate file located in s3 and remote state turned on. I accidentally connected to the incorrect state file when tryin…

daveyu avatar

hmm well terraform show does output the expected state for module.subnets.aws_subnet.public.1

daveyu avatar

I guess that will work for now

sytten avatar

Just wanted to say thanks for the terraform modules, they help me a lot with my school project

2
cool-doge3
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

welcome @sytten

1

2019-03-02

oscarsullivan_old avatar
oscarsullivan_old

Why does reference-architectures insist on using keybase instead of slack? Can this be ignored? What do I miss out on?

loren avatar

i think it is for verification of user identity using pgp keys

joshmyers avatar
joshmyers

Keybase is used to encrypt user credentials so that only that user can decrypt the terraform output (or email)

joshmyers avatar
joshmyers

It’s nice as otherwise you need to figure out how to let users know their credentials in something other than plain text

oscarsullivan_old avatar
oscarsullivan_old

Thanks both

deftunix avatar
deftunix

hi all, I have a quick question. how do you provide estate visibility within dynamic infrastructure?

joshmyers avatar
joshmyers

There is a “portal” that links you to k8s dashboard, grafana, Prometheus, Kibana

deftunix avatar
deftunix

I am not referring just to k8s env dashboard but also to multi-cloud deployment of resource provisioned with terraform for example

joshmyers avatar
joshmyers

What kind of visibility are you after?

joshmyers avatar
joshmyers

Ah, of all your Terraform provisioned infra? Like, what have I actually created?

1
deftunix avatar
deftunix

I would like to create a visual single source of truth which shows all the resources provisioned

joshmyers avatar
joshmyers

Terraform graph and feed it to Graphviz?

joshmyers avatar
joshmyers
terraform graph | dot -Tpng > graph.png
deftunix avatar
deftunix

I was referring to something able to put terraform resource information into a KV store to be consumed by other tools

deftunix avatar
deftunix

statefile -> exporter -> kv

joshmyers avatar
joshmyers

Consumed by anything specific?

deftunix avatar
deftunix

not currently

Steven avatar

@deftunix Terraboard might be the type of end result you’re looking for https://github.com/camptocamp/terraboard

camptocamp/terraboard

A web dashboard to inspect Terraform States - camptocamp/terraboard

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s neat!

Steven avatar

Surprised you didn’t know about it. Been around a long time. There is also, https://github.com/sepulworld/tfsoa. But it hasn’t been maintained as much

sepulworld/tfsoa

Terraform State of Awareness Dashboard. Contribute to sepulworld/tfsoa development by creating an account on GitHub.

h3in3k3n avatar
h3in3k3n

will try terraboard immediately as we really need it.

deftunix avatar
deftunix

I will check

deftunix avatar
deftunix

yes, it’s exactly what I was looking for.

deftunix avatar
deftunix

good starting point. thank you @Steven

Steven avatar

welcome

joshmyers avatar
joshmyers

Nice

2019-03-03

2019-03-04

oscarsullivan_old avatar
oscarsullivan_old

Does anyone know how to use TF_CLI_ARGS_name in a .tf file?

oscarsullivan_old avatar
oscarsullivan_old

For instance `comment = "${var.TF_CLI_ARGS_stage}"`

oscarsullivan_old avatar
oscarsullivan_old

unknown variable referenced: 'TF_CLI_ARGS_stage'; define it with a 'variable' block .. obvs setting it as a variable doesn’t fix this.

oscarsullivan_old avatar
oscarsullivan_old

Just wondering how I’m to access TF_CLI_ARGS especially when they’re coming through geodesic

loren avatar

not sure about the geodesic part, but any variable in terraform would generally just be var.stage, and terraform will automatically read it from the env TF_VAR_stage if present

loren avatar

TF_CLI_ARGS is something else entirely
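
A minimal sketch of the difference (names illustrative):

variable "stage" {}             # populated from TF_VAR_stage if that env var is set

# TF_VAR_* sets a terraform variable:
#   export TF_VAR_stage=dev
#   terraform plan              # var.stage == "dev"
#
# TF_CLI_ARGS_* appends CLI flags to a command instead:
#   export TF_CLI_ARGS_plan="-parallelism=2"
#   terraform plan              # runs as: terraform plan -parallelism=2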

oscarsullivan_old avatar
oscarsullivan_old
variable "stage" {}
variable "tf_bucket_region" {}
variable "tf_bucket" {}
variable "tf_dynamodb_table" {}

resource "null_resource" "null" {
    triggers = {
        test           = "${var.stage}"
        bucket_region  = "${var.tf_bucket_region}"
        bucket_name    = "${var.tf_bucket}"
        dynamodb_table = "${var.tf_dynamodb_table}"
    }
}

output "test" {
  value = "${null_resource.null.triggers}"
}
oscarsullivan_old avatar
oscarsullivan_old

This should help anyone understand how to use GEODESIC vars inside of Terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Though those variables are all for the s3 backend in the terraform provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform backends do not support interpolation

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s why we use the TF_CLI_ARGS pattern
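
i.e. the backend settings go in as flags at init time, where no interpolation is needed (a sketch; the env var names follow the snippet above):

export TF_CLI_ARGS_init="-backend-config=bucket=$TF_BUCKET -backend-config=region=$TF_BUCKET_REGION -backend-config=dynamodb_table=$TF_DYNAMODB_TABLE"
terraform init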

Maxim Tishchenko avatar
Maxim Tishchenko

../modules/aws_cross_account_role

resource "aws_iam_role_policy_attachment" "policies" {
  count = "${length(var.role_policies)}"
  policy_arn = "${var.role_policies[count.index]}"
  role = "${aws_iam_role.role.id}"
}

another file

module "remote_role" {
  source = "../modules/aws_cross_account_role"

  aws_external_account_id = "${local.aws_account_id}"
  role_name = "remote_role"
  role_policies = [
    "arn:aws:iam::aws:policy/ReadOnlyAccess",
    "${aws_iam_policy.invoke_flow_resources_lambda_access.arn}",
    "${aws_iam_policy.create_function_flow_resources_lambda_access.arn}"
  ]
}

does anyone know what I’m missing here? I’m getting error * module.remote_role.aws_iam_role_policy_attachment.policies: aws_iam_role_policy_attachment.policies: value of 'count' cannot be computed

2019-03-05

Samuli avatar

could one of the aws_iam_policy in the latter file be missing when you are trying to execute it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Maxim Tishchenko that’s the infamous count error

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your case, TF can’t calculate dynamic counts across modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(if you put all the resources into one module, it should be ok)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the easiest (but not prettiest) way to fix it in your case would be to add var.role_policies_count variable to the modules/aws_cross_account_role module and provide it from module "remote_role"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "remote_role" {
  role_policies_count = 3
  role_policies = [
    "arn:aws:iam::aws:policy/ReadOnlyAccess",
    "${aws_iam_policy.invoke_flow_resources_lambda_access.arn}",
    "${aws_iam_policy.create_function_flow_resources_lambda_access.arn}"
  ]
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
resource "aws_iam_role_policy_attachment" "policies" {
  count = "${var.role_policies_count}"
  policy_arn = "${var.role_policies[count.index]}"
  role = "${aws_iam_role.role.id}"
}
Maxim Tishchenko avatar
Maxim Tishchenko

Mmmm

Maxim Tishchenko avatar
Maxim Tishchenko

I got you, thanks

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do it in some of our modules, like i said not pretty, but works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Richy de la cuadra avatar
Richy de la cuadra

hi, does somebody have a module to create a cloudwatch event to keep a lambda warm?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Richy de la cuadra found this example, can be put into a module https://github.com/terraform-providers/terraform-provider-aws/issues/3040

in terraform, cloudwatch_event_target (lambda warmer) does not link properly to production alias · Issue #3040 · terraform-providers/terraform-provider-aws

This issue was originally opened by @xenemo as hashicorp/terraform#17125. It was migrated here as a result of the provider split. The original body of the issue is below. Hello Terraform Experts! I…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
AWS Lambda Deployment using Terraform – Build ACL – Medium

Use Terraform to overcome common deployment and development related challenges with AWS Lambda.

Richy de la cuadra avatar
Richy de la cuadra

very helpful, thanks @Andriy Knysh (Cloud Posse)!

1
squidfunk avatar
squidfunk
04:44:08 PM

Anyone else experiencing very slow TF Lambda deployments? 30 seconds+ for every single function. This was much faster before. Already upgraded to the newest TF, no changes.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not test lambda recently

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

does it eventually finish successfully ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you test the same from the AWS console and compare the times?

tamsky avatar

@squidfunk are things slow inside a geodesic container?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(he’s not using geodesic yet)

tamsky avatar

ok, just curious if it was related to TF_CLI_PLAN_PARALLELISM=2 – but looking at that var now, I’m guessing it only affects plan ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea… but you’re right, that does slow things down.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have that due to running terraform under ECS where we experienced AWS platform rate limits

2019-03-06

maarten avatar
maarten

Hi, is anyone using Travis with private modules, what’s the best way to get git+ssh working there ?

squidfunk avatar
squidfunk

Hey guys. I was at my girlfriend’s parents’ for the last few days and, well, it turned out it was just the upstream. They have a really bad internet connection. Stupid me. Sorry!

3
2
johncblandii avatar
johncblandii

you have to take in-law internet into consideration when choosing to take the relationship to the next level.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@squidfunk AWS WorkSpaces FTW!

Andy avatar
Support for AWS Workspaces · Issue #10794 · hashicorp/terraform

Support for AWS Workspaces would be great, especially since there is existing support for AWS directory service directories in Terraform. Something like: resource &quot;aws_workspaces_workspace&quo…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if anyone is looking for something to contribute, looks like there’s a bug affecting a lot of users: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/issues/39

viewer_certificate.0.acm_certificate_arn conflict · Issue #39 · cloudposse/terraform-aws-cloudfront-s3-cdn

Using release 0.7.0 AWS provider 2.0 module &quot;cf-s3-cdn&quot; { name = &quot;help-scentregroup-cf-s3-cdn&quot; source = &quot;cloudposse/cloudfront-s3-cdn/aws&quot; version = &quot;0.7.0&quot; …

h3in3k3n avatar
h3in3k3n

has anyone tried terratest? new toy from gruntwork?

oscarsullivan_old avatar
oscarsullivan_old

Yes it’s pretty good

oscarsullivan_old avatar
oscarsullivan_old

Tests written in go.

oscarsullivan_old avatar
oscarsullivan_old

I will eventually share my testing setup but that will take around 3 weeks

h3in3k3n avatar
h3in3k3n

yah i look forward to see that.

oscarsullivan_old avatar
oscarsullivan_old

Combination of: Atlantis, Terratest, GOSS, TFLint

oscarsullivan_old avatar
oscarsullivan_old

I like the look of terraform-compliance tho

oscarsullivan_old avatar
oscarsullivan_old

BDD is nice

oscarsullivan_old avatar
oscarsullivan_old

Was talking to Erik on Monday and turns out my plans for Atlantis weren’t right. It’s meant to be used more as a CD tool than a tester… but all will be written up

2019-03-07

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

geez, so many breaking changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
aws_route53_record `allow_overwrite` deprecation will break ability to update `SOA` · Issue #7846 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

loren avatar

here’s the pr for that one, they really didn’t actually change anything, it’s just the annoying deprecation warning

https://github.com/terraform-providers/terraform-provider-aws/pull/7734/files

resource/aws_route53_record: Switch allow_overwrite default from true to false by bflad · Pull Request #7734 · terraform-providers/terraform-provider-aws

Closes #3895 Reference: #2926 Previously, the aws_route53_record resource did not follow standard Terraform conventions of requiring existing infrastructure to be imported into Terraform&#39;s stat…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(our bad for not pinning provider)

oscarsullivan_old avatar
oscarsullivan_old

All these deprecations, but where’s the new features list yo https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md

loren avatar

breaking changes are what major releases are for, features go in minor releases

1
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Hi guys, I’m having kind of an issue with the “terraform-aws-iam-user” module. Once I create a user and I get the keybase_password_decrypt_command, I’m perfectly capable of decrypting the user password, but then when I try to log in to the aws console it says that the login information is incorrect. Just as a quick debug I changed the user password using the root account and logged in successfully.

xluffy avatar

Hmm, I don’t have any ideas except to clone the terraform-aws-iam-user module and modify it for debugging. You can output encrypted_password + plaintext_password. After that, decrypt the encrypted_password and compare.
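
The decrypt step usually looks something like this (output name illustrative):

terraform output encrypted_password | base64 --decode | keybase pgp decrypt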

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Thanks! To output plaintext_password I should also add it to the module’s outputs, right? Because I don’t see it there

xluffy avatar

sorry, my bad. The output of aws_iam_user_login_profile only supports encrypted_password, so you can’t do it

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

damn

xluffy avatar

does your keybase account exist?

xluffy avatar

and do you have a pgp public key on this account?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

yes, using keybase

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Solved it

xluffy avatar

How did you fix it? Your mistake?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

yeah sorry i did not see that the command was adding an extra character.

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Sorry to bother you

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Could anyone help me debug this? Thanks in advance. BTW you have some awesome terraform modules. Greetings from Argentina!

deftunix avatar
deftunix

hi all, I am deploying a consul and vault cluster on aws using terraform. do you have any example of pipeline able to manage the rolling upgrade of the cluster? all the nodes are in ASG

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Solved/Understood it!

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

the echo command is adding a “%” at the end of the password

1
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

for some reason

2019-03-08

sweetops avatar
sweetops

Anyone running into x509: certificate signed by unknown authority with registry.terraform.io?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haven’t noticed this yet, but we tend to pin to github releases directly

sweetops avatar
sweetops

running into it when running a terraform init

github140 avatar
github140

@sweetops Is your certstore up to date? Does curl connect without issues?

sweetops avatar
sweetops

curl connects without issues

sweetops avatar
sweetops
curl https://registry.terraform.io/.well-known/terraform.json
{"modules.v1":"/v1/modules/","providers.v1":"/v1/providers/"}

2019-03-09

cabrinha avatar
cabrinha

It’d be nice if we could configure EKS workers to be spot instances here: https://github.com/cloudposse/terraform-aws-eks-workers

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

cabrinha avatar
cabrinha

So, I just used this example to get up and running with an EKS cluster: https://github.com/cloudposse/terraform-aws-eks-cluster

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

cabrinha avatar
cabrinha

But I’m getting the error: no nodes available to schedule pods

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what happens when you run kubectl get nodes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Do you see all nodes participating in the cluster?

cabrinha avatar
cabrinha

I can see that I have one worker node up, but I can’t see that node via kubectl

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@cabrinha have you seen our example in terraform-root-modules

cabrinha avatar
cabrinha
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) has been keeping this up to date

cabrinha avatar
cabrinha

@Erik Osterman (Cloud Posse) thanks for pointing me in the right direction, just wondering why the module doesn’t have a complete working example in it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

keep in mind, recent changes in the terraform-aws provider have introduced some issues; pinning to a pre-2.0 provider might help, or ensuring you’re on 2.1
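
e.g. a sketch of pinning in 0.11-era syntax (the version constraint is illustrative):

provider "aws" {
  version = "~> 1.60"   # stay below 2.0 until the modules support it
}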

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there are so many moving pieces in terraform between versions of terraform, versions of providers and versions of modules. We cannot test all combinations. We try to keep it all stable and working, but we are not a large company :smiley: and all maintenance is paid for by cloudposse (~$10K/mo!) Despite that, we try to stay on top of our 130+ terraform modules, ~60 tools in cloudposse/packages, 50+ helm charts in cloudposse/charts, 300+ original open source projects on our github, dozens of pull requests every week, code reviews, etc. It’s not easy and pull requests are greatly appreciated.

cabrinha avatar
cabrinha

I understand thanks

1

2019-03-10

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@cabrinha the example here https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete is a complete and working example, was tested about a month ago and also deployed by a few other people

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

2019-03-11

oscarsullivan_old avatar
oscarsullivan_old

Anyone already established a smart way of getting the VPC ID from another account (data resource from another account) when VPC peering?

# Mgmt VPC
data "aws_vpc" "requester_vpc" {
  tags = {
    Name = "${var.stage}-vpc"
  }
}

# Other sub-account VPCs
data "aws_vpc" "accepter_vpc" {
  tags = {
    Name = "${var.stage}-vpc"
  }
}

module "vpc_peering" {
  source           = "git::https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=master"
  namespace        = "he"
  stage            = "${var.stage}"
  name             = "vpn"
  requester_vpc_id = "${data.aws_vpc.requester_vpc.id}"
  requester_aws_assume_role_arn = "arn:aws:iam::xxx:role/vpc-admin"
  requester_region = "${var.aws_region}"
  accepter_vpc_id  = "${var.vpn-vpc}"
  accepter_aws_assume_role_arn = "arn:aws:iam::xxx:role/vpc-admin"
  accepter_region = "${var.region}"
}

Samuli avatar

variable?

oscarsullivan_old avatar
oscarsullivan_old

Would like to avoid hardcoding… edit: or user prompts

oscarsullivan_old avatar
oscarsullivan_old

But I’ve taken on the tech debt of having it hardcoded in my Geodesic Dockerfile

Samuli avatar

How is it hard coding if you have it as a variable?

oscarsullivan_old avatar
oscarsullivan_old

It is either hardcoded (the vpc-id) or it requires input. Neither is a typical solution I go for..

loren avatar

you can look it up, but you have to pass the provider to the data source so the credential has permissions to query the other account

loren avatar
data "aws_vpc" "accepter_vpc" {
  provider = "aws.<ALIAS_ACCEPTER>"

  tags = {
    Name = "${var.stage}-vpc"
  }
}
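
where the aliased provider might be configured along these lines (a sketch; variable names follow the peering snippet above):

provider "aws" {
  alias  = "accepter"
  region = "${var.accepter_region}"

  # credentials that can query the other account
  assume_role {
    role_arn = "${var.accepter_aws_assume_role_arn}"
  }
}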
loren avatar
Providers - Configuration Language - Terraform by HashiCorp

Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.

Samuli avatar

but that would require some identification to be added eg. the var.stage in the example (+ the provider configuration) so why not just use vpc-id?

loren avatar

personally i agree, i don’t like implicit references like that, but i’m just offering an answer to the question tho

oscarsullivan_old avatar
oscarsullivan_old


so why not just use vpc-id?

I would 100% much rather have a global variable for a role ARN than a one/two-use variable like VPC-id being hardcoded

oscarsullivan_old avatar
oscarsullivan_old

Thanks the multi provider link should do the trick.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s how we used the terraform remote state provider to achieve a similar requirement https://github.com/cloudposse/terraform-root-modules/blob/master/aws/root-dns/ns/main.tf#L27-L51

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

oscarsullivan_old avatar
oscarsullivan_old

Trying to hunt down the source of a count error.. despite project having no lists or maps

foqal avatar
foqal
03:58:14 PM

Helpful question stored to <@Foqal> by @Erik Osterman (Cloud Posse):

thanks for pointing me in the right direction, just wondering why the module doesn't have a complete working example in it :thinking_face:

2019-03-12

Jan avatar

what was the tf tool being used to mask passwords

maarten avatar
maarten
output "db_password" {
  value       = aws_db_instance.db.password
  description = "The password for logging in to the database."
  sensitive   = true
}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes this is the best approach for outputs

mrwacky avatar
mrwacky

But it’s still stored in tfstate in plaintext. We hacked up a null-resource to change the RDS cluster password after TF provisions it with a dummy password. Roughly:

resource "null_resource" "password_setter" {
  count      = "${local.count}"
  depends_on = ["aws_rds_cluster.rds"]

  provisioner "local-exec" {
    command = "${local.password_reset_command}"
  }
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@mrwacky doesn’t this break idempotency?

mrwacky avatar
mrwacky

Nope, because TF can’t read the master password from AWS. So it assumes the dummy one in the state is correct.

mrwacky avatar
mrwacky

Or because of this, or both:

lifecycle {
  ignore_changes = ["master_username", "master_password"]
}
1
1
Jan avatar

sensitive = true

Jan avatar

oh really

Jan avatar

tfmask was the tool though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

keep in mind sensitive won’t catch all the leakage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if using random_id or random_string resources, they leak information in the plan

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if using the github provider it leaks the oauth key in the plan

Jan avatar

thanks dude

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/tfmask

Terraform utility to mask select output from terraform plan and terraform apply - cloudposse/tfmask

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Hi guys sorry to bother you with a newbie question. I’m not completely sure as to how to use both terraform-aws-iam-user and terraform-aws-iam-assumed-roles combined. Can anyone help?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you create IAM users and then add them to the groups (admin or readonly)

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

so I should avoid using the groups = ["${module.assumed_roles_Infra.group_admin_name}"] input in the user definition and just use admin_user_names = ["User.Name"] in my assumed-roles definition, right?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

and execute them in that order: user creation, then groups, policies, etc.

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

(thanks in advanced for your help)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Actually, we recommend the reverse these days

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically, adding a user should also add them to the groups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s more manageable and easier to remove a user - especially when practicing gitops

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

ho I see!! i’ll look into reversing that! thanks so much!

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Erik i have one last question.

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

in the user.tf files of the repo you mentioned, the groups input is set to “${local.admin_groups}”

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

that local.admin_groups, where is it referencing to?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

see the ../ directory

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use the terraform init -from-module=... pattern so we “clone” a remote module locally
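
i.e. something like (module path and tag are illustrative):

terraform init -from-module=git::https://github.com/cloudposse/terraform-root-modules.git//aws/users?ref=tags/0.35.1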

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then copy our users into that folder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is the upstream module

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

ho i get it!

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Thanks Erik, if you ever come to Argentina. I owe you one

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

would love to visit again some day!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Loved the Palermo Soho area

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I work rigth at the center of Palermo soho hehe.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thats cool!

xluffy avatar

You should start with https://github.com/cloudposse/terraform-aws-iam-user/blob/master/main.tf

terraform-aws-iam-user helps to create a normal IAM user (you can log in with the password from aws_iam_user_login_profile). After that, you need to assign this user to a group.

cloudposse/terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

xluffy avatar

terraform-aws-iam-assumed-roles is a big module; it creates a group, role, policy, etc.

oscarsullivan_old avatar
oscarsullivan_old

Anyone got a favourite way of creating an OpenVPN server? Unlicensed, as I only need max 2 users.

Current choices are: 1) Use existing AMI on marketplace 2) Use existing AMI on marketplace as base in packer and provision with my stuff 3) New packer project that installs openvpn with Ansible or Docker (or Ansible installing container…)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Check out Pritunl

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that said, we have a nice way of doing it in kubernetes, but I don’t think you’re going to be using k8s

oscarsullivan_old avatar
oscarsullivan_old

I’m seeing examples of Pritunl being used with openvpn. Are you using it as an alternative?

xluffy avatar

what happened with openvpn OSS?

xluffy avatar

max 2 users because you use openvpn-as?

xluffy avatar

or https://www.wireguard.com/, this is a new project, in development, but i think it is awesome

WireGuard: fast, modern, secure VPN tunnel

WireGuard: fast, modern, secure VPN tunnel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@oscarsullivan_old We don’t prescribe using VPNs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Identity Aware Proxies are the way to go

oscarsullivan_old avatar
oscarsullivan_old

@Arvind Few things I would like to understand:

  1. How can multiple developers work on one Terraform repo simultaneously?
    S3 backend.
  2. How can I manage multiple .tfstate files, as I have three environments lab/stage/prod?
    Geodesic. Have multiple accounts. Each one has a bucket. You go into a different Geodesic container for each environment (one env per account).
  3. What is the difference b/w variables.tf and .tfvars?
    variables.tf declares your variables and their data structure. A .tfvars file sets the value of the variable neatly, instead of doing it in the same file as, say, a resource creation.
    Typical project of mine:

jenkins: ec2.tf, elb.tf, jenkins_sg.tf, outputs.tf, r53.tf, terraform.tf (declares the backend and other common data bits), variables.tf

1
Arvind avatar
Arvind
04:57:34 PM

@Arvind has joined the channel

Arvind avatar

Do we have any sample git repo where we are provisioning the infrastructure on a per-environment basis, e.g. stage/lab/prod? And I need to understand how the terraform.tf file will read the environment-specific variables. E.g. in lab I require a smaller Redis instance size, while in stage and prod I require bigger Redis instances.

joshmyers avatar
joshmyers
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@joshmyers’s suggestion is spot on

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically, you put all common code in terraform-root-modules (like blueprints)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then you use that code in all of your account environments (e.g. testing.cloudposse.co is ours)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have root, staging, prod, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s where the custom settings go for each environment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s been our experience that we want to run identical code (e.g. modules) in environments like staging and prod, but staging and prod necessarily differ in areas like instance sizes or perhaps the number of clusters

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for example, in production for an Enterprise SaaS platform, there might be one cluster per enterprise customer, but you wouldn’t run one cluster per enterprise customer in staging.
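
For example, each account’s environment can simply export different values for the same variables the shared module declares (a sketch; names are illustrative):

# staging account
export TF_VAR_redis_instance_type="cache.t2.micro"

# prod account
export TF_VAR_redis_instance_type="cache.m4.large"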

2019-03-13

xluffy avatar

Hey, I have an issue with terraform-aws-iam-user. First I run apply and my user is added to the admin group, but when I run again, my user is removed from the group

  ~ module.assumed_roles.aws_iam_group_membership.admin
      users.#:          "1" => "0"
      users.1070365961: "quang" => ""
  ~ module.quang.aws_iam_user_group_membership.default
      groups.#:          "0" => "1"
      groups.3764225916: "" => "g_ops_users"
joshmyers avatar
joshmyers

@xluffy what are you running?

xluffy avatar

terraform plan

xluffy avatar

I want to create a normal IAM user; after that, I add this user to the admin group (created by terraform-aws-iam-assumed-roles)

xluffy avatar

But when I run tf plan, it removes/adds my user from the group

joshmyers avatar
joshmyers

Where? Got some code to look at?

xluffy avatar
cloudposse/terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

xluffy avatar
module "assumed_roles" {
  source    = "git::https://github.com/cloudposse/terraform-aws-iam-assumed-roles.git?ref=tags/0.2.2"
  namespace = "eg"
  stage     = "testing"
}

module "erik" {
  source  = "../../"
  name    = "erik"
  pgp_key = "keybase:osterman"
  groups  = ["${module.assumed_roles.group_admin_name}", "${module.assumed_roles.group_readonly_name}"]
}

oscarsullivan_old avatar
oscarsullivan_old

Anyone had this before when using the ec2 module? https://github.com/cloudposse/terraform-aws-ec2-instance/issues/37

* module.instance_01.output.network_interface_id: Resource 'aws_instance.default' does not have attribute 'network_interface_id' for variable 'aws_instance.default.*.network_interface_id'
Attribute 'network_interface_id' not found · Issue #37 · cloudposse/terraform-aws-ec2-instance

What Error: Error applying plan: 1 error(s) occurred: * module.instance_01.output.network_interface_id: Resource &#39;aws_instance.default&#39; does not have attribute &#39;network_interface_id'…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like network_interface_id output was removed from https://www.terraform.io/docs/providers/aws/r/instance.html#attributes-reference (or it was working somehow before)

AWS: aws_instance - Terraform by HashiCorp

Provides an EC2 instance resource. This allows instances to be created, updated, and deleted. Instances also support provisioning.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

needs a PR

oscarsullivan_old avatar
oscarsullivan_old

Ah fab

Arjun avatar

is there any possibility of mentioning an EBS resource in a launch configuration using interpolation? (I am trying to abstract this out, to conditionally mention an extra block device in the LC)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what resource? share an example code

Arjun avatar

@Andriy Knysh (Cloud Posse)

resource "aws_launch_configuration" "launch-config" {
  name                        = "${var.lc_name}"
  image_id                    = "${var.image_id}"
  instance_type               = "${var.instance_type}"
  user_data                   = "${var.user_data}"

  root_block_device {
    delete_on_termination = "${var.root_ebs_delete_on_termination}"
    iops = "${var.root_ebs_iops}"
    volume_size = "${var.root_ebs_volume_size}"
    volume_type = "${var.root_ebs_volume_type}"
  }
  //TODO: change it to resource
  ebs_block_device {
    device_name           = "${var.device_name}"
    volume_size           = "${var.volume_size}"
    encrypted             = "${var.encrypt_ebs}"
    delete_on_termination = "${var.delete_on_termination}"
    volume_type           = "${var.volume_type}"

  }
  key_name = "${var.key_name}"
  security_groups = ["${module.security_group.security_group_id}"]
}
Arjun avatar

what i wanted to do is have ebs_block_device created conditionally when creating this LC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use the slice pattern here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ebs_block_device is actually a list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try ebs_block_device = [] first

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it works, you can use the slice pattern to either provide an item with settings to the list, or have an empty list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depending on some condition

Arjun avatar

@Andriy Knysh (Cloud Posse) no, the block inside launch-configuration is a “block resource embedded in it”; I don’t think there is a right-side assignment for these kinds of blocks. However, I see people create two resources, one with ebs_block and one without, and make a decision in the autoscaling group depending on the situation

Arjun avatar

i was trying to find a better option (looks like waiting for dynamic blocks in 0.12 is the only way)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you try ebs_block_device = [], it could work. All those blocks are actually lists

1
Arjun avatar

@Andriy Knysh (Cloud Posse), but since our Autoscaling group is randomly choosing an AZ while creating the instance, I wonder how to deal with this in that case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ebs_block_device does not depend on AZ, you either add it to aws_launch_configuration or not (in the latter case try ebs_block_device = []) depending on some condition
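
A rough sketch of that slice pattern in 0.11-era syntax (variable names illustrative; as noted, it only works if the resource accepts a list assignment here):

locals {
  ebs_block_devices = [{
    device_name           = "${var.device_name}"
    volume_size           = "${var.volume_size}"
    encrypted             = "${var.encrypt_ebs}"
    delete_on_termination = "${var.delete_on_termination}"
    volume_type           = "${var.volume_type}"
  }]
}

resource "aws_launch_configuration" "launch-config" {
  # ... other arguments as before ...

  # keep one item when enabled, zero items otherwise
  ebs_block_device = "${slice(local.ebs_block_devices, 0, var.ebs_enabled == "true" ? 1 : 0)}"
}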

Arjun avatar

yeah that clears my mind , thanks a lot !

Arjun avatar

hi @Andriy Knysh (Cloud Posse), i tried your suggestion and got stuck since i wanted to modularize this option as a variable being used in the module, so i tried something like

resource "aws_launch_configuration" "launch-config" { 
ebs_block_device = [ "${var.ebs_block_device_list}"]}
Arjun avatar

but now, in the list of maps, I can’t interpolate the variables in that variable

Arjun avatar

i tried assigning the variable with a default, something like

variable "ebs_block_device_list"{
type = list 
default: [ {
    device_name           = "${var.device_name}"
    volume_size           = "${var.volume_size}"
    encrypted             = "${var.encrypt_ebs}"
    delete_on_termination = "${var.delete_on_termination}"
    volume_type           = "${var.volume_type}"
  }]
}
Arjun avatar

but terraform doesn’t allow interpolation in variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You don’t use variables inside a variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You provide a list of maps with values to the module

Arjun avatar

but those values are not fixed

Jan avatar

heads up this is no longer valid, https://github.com/cloudposse/terraform-root-modules/blob/master/aws/account-dns/main.tf#L26-L35

* aws_route53_record.dns_zone_soa: 1 error(s) occurred:

• aws_route53_record.dns_zone_soa: [ERR]: Error building changeset: InvalidChangeBatch: [Tried to create resource record set [name='some.example.com.', type='SOA'] but it already exists]
 
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Jan avatar

used to work, 27 days ago for sure

Jan avatar

but also not needed as SOA gets created by default

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
aws_route53_record `allow_overwrite` deprecation will break ability to update `SOA` · Issue #7846 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s not that we want to create a new SOA; it is essential to set a low SOA TTL to prevent negative ttl caching in dynamic service discovery environments. E.g. a negative cache hit can cause 15 minutes of downtime for no good reason.

mrwacky avatar
mrwacky

Has this guy popped up in here yet? https://github.com/iann0036/AWSConsoleRecorder

iann0036/AWSConsoleRecorder

Records actions made in the AWS Management Console and outputs the equivalent CLI/SDK commands and CloudFormation/Terraform templates. - iann0036/AWSConsoleRecorder

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hah, wow, haven’t seen this before

oscarsullivan_old avatar
oscarsullivan_old

Holy cow that’s.. an amazing project. I do find aws easier to understand and ‘do’ with TF than when I use the console though tbh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i find the terraform documentation phenomenal and often easier than RTFM-ing the AWS docs

2
ansgar avatar

To chime in as a newbie (to AWS and Terraform): I sometimes find myself clicking through some AWS wizard to get an idea what exactly some property/config does

mrwacky avatar
mrwacky

I wonder if it works

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I just got it today via Medium and quickly tried it. It’s a bit raw but an awesome tool

2019-03-14

h3in3k3n avatar
h3in3k3n

Hi everyone. I don’t know which channel to post this question in, but as I tend to implement with TF, I posted here.

My question is: how do you manage ECR repositories with Terraform, or what alternative tools are you using? The problem I’m facing is:

  • I want to create multiple ECR repositories in one TF codebase, so what is a good way to generate that code? I’m writing a Go template; I think I will use it to generate the TF code from a YAML file. My YAML file format is like this:

[name-of-repository]
  • [username-arn-id-a] - pull
  • [username-arn-id-b] - push
[name-of-repository2]
  • [username-arn-id-d] - pull
  • [username-arn-id-c] - push

Anyone who has other ideas, feel free to tell me. Appreciate it!
Nikola Velkovski avatar
Nikola Velkovski

@h3in3k3n the way I would do it is to create/find a module that has a proper ecr policy set and just use that one. The resource by itself in terraform is super small, but with the policy it can get a bit bigger; also, you might need to pass the policy per repo, or whatever your use case requires
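
For the multiple-repositories part, a minimal 0.11-era sketch of the count pattern (names illustrative; the per-repo pull/push policies rendered from the YAML are omitted):

variable "repositories" {
  type    = "list"
  default = ["name-of-repository", "name-of-repository2"]
}

resource "aws_ecr_repository" "default" {
  count = "${length(var.repositories)}"
  name  = "${element(var.repositories, count.index)}"
}

# per-repository policies could attach via aws_ecr_repository_policy in the same loop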

Nikola Velkovski avatar
Nikola Velkovski

Hi People, any recommendation for an IDE or a vim setting in which I can detect unused variables in terraform ?

Nikola Velkovski avatar
Nikola Velkovski

I think that one of the jetbrains IDEs has this feature.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Jetbrains IDEA has a very nice terraform support

3
antonbabenko avatar
antonbabenko

Feel free to give kudos to Vlad (creator of Terraform plugin for Jetbrains products) - https://twitter.com/Vlad_P53

Vladislav Rassokhin (@Vlad_P53) | Twitter

The latest Tweets from Vladislav Rassokhin (@Vlad_P53). Software Developer @jetbrains. My opinions are my own. Saint-Petersburg

5
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Can detect missing and unused variables

foqal avatar
foqal
02:11:18 PM

Helpful question stored to <@Foqal> by @Nikola Velkovski:

Hi everyone. I don’t know which channel to post this question but as I tend to implement with TF so I posted here...
Nikola Velkovski avatar
Nikola Velkovski

yeah that’s what I remember as well

Nikola Velkovski avatar
Nikola Velkovski

but I don’t want to switch my precious VIM

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And highlights everything

Nikola Velkovski avatar
Nikola Velkovski
02:11:50 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It can emulate VIM editor so you don’t lose anything :)

Nikola Velkovski avatar
Nikola Velkovski

hmmm

Nikola Velkovski avatar
Nikola Velkovski

ok let me check that out

Nikola Velkovski avatar
Nikola Velkovski

thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It’s a very nice product in my opinion

loren avatar

vscode does this also for terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Supports all possible languages and frameworks

Nikola Velkovski avatar
Nikola Velkovski

ok vscode I’ave installed but I haven’t seen this feature in action , do I need to install a special plugin @loren?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Vscode not bad too

loren avatar

yes, you need a plugin, one sec and i’ll get the name

Nikola Velkovski avatar
Nikola Velkovski

thanks, I already have the tf plugin installed though

loren avatar

yeah, should just be terraform

loren avatar
loren
02:20:31 PM

you don’t see references like this?

Nikola Velkovski avatar
Nikola Velkovski

no I get

Nikola Velkovski avatar
Nikola Velkovski
02:21:44 PM
Nikola Velkovski avatar
Nikola Velkovski

for every var..

Nikola Velkovski avatar
Nikola Velkovski

I think it’s the way we use terraform ..

loren avatar

oh, it might not work when the references are in a different file?

Nikola Velkovski avatar
Nikola Velkovski

yes they are in a different folder..

Nikola Velkovski avatar
Nikola Velkovski

and then we sprinkle some makefile magic

Nikola Velkovski avatar
Nikola Velkovski

to make an apply

loren avatar

well, different file, same directory works fine

Nikola Velkovski avatar
Nikola Velkovski

sorry different folder

Nikola Velkovski avatar
Nikola Velkovski

so the plugin thinks it’s a different state

Nikola Velkovski avatar
Nikola Velkovski

ok now I get it

loren avatar
loren
02:23:27 PM
loren avatar

that’s in a variables.tf file, same directory as main.tf where all the references are

Nikola Velkovski avatar
Nikola Velkovski

yeah , that’s not the case where I currently work at

loren avatar

gotcha

Nikola Velkovski avatar
Nikola Velkovski

but thanks for the help ! now I know that vscode works as expected

loren avatar

it is hard to do this kind of lint/validation when you have some wrapper/generator around your tf code

loren avatar

well, harder, i guess. need to generate it, then validate the output

Nikola Velkovski avatar
Nikola Velkovski

yeah, I am not a fan of the wrappers as well.

Nikola Velkovski avatar
Nikola Velkovski

but there’s a way , the Makefile generates a main.tf file

Nikola Velkovski avatar
Nikola Velkovski

so I can just use that one

loren avatar

yep

Nikola Velkovski avatar
Nikola Velkovski
02:27:44 PM

yeah

Nikola Velkovski avatar
Nikola Velkovski

thanks guys for your help!

Nikola Velkovski avatar
Nikola Velkovski

Much appreciated

Nikola Velkovski avatar
Nikola Velkovski

hmm I found out that the current plugin in vscode sets 0 references even if the variable is invoked inside a function

Nikola Velkovski avatar
Nikola Velkovski

like so "${tf_function(var.something)}"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

IDEA can actually see and analyze variables in functions (and even in plain text)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
03:16:11 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it does not show the ref count as vscode does

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but when you click “Find Usages” on anything

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
03:16:55 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it shows all references

Nikola Velkovski avatar
Nikola Velkovski

oh nice..

Nikola Velkovski avatar
Nikola Velkovski

I am definitely trying it out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

highlights wrong/missing vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
03:18:42 PM
johncblandii avatar
johncblandii

that moment when you’re getting a product demo from a company and you notice a Cloud Posse browser tab + the label module naming format

johncblandii avatar
johncblandii

Turns out, @Erik Osterman (Cloud Posse) is known from the days at Sumo (by Nick Coffee)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hahah wow! small freggin world.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I remember Nick.

1
johncblandii avatar
johncblandii

He’s at Harness https://harness.io/

mrwacky avatar
mrwacky

if I want to test the terraform 0.12 beta.. how do I get providers built against it?

mrwacky avatar
mrwacky

I guess I have to compile myself

loren avatar
Terraform 0.12 Development: A Step-by-Step Guide to Get Running with Providersattachment image

Hashicorp has set the Terraform 0.12 release date broadly sometime before the end of March 2019. Version 0.12 will include many…

3
mrwacky avatar
mrwacky

nice

2019-03-15

oscarsullivan_old avatar
oscarsullivan_old

I haven’t yet had a chance to try this, but it was on my mind.

Using Geodesic across multiple AWS accounts for each stage, I have Route 53 records to create. I have one domain name, [acme.co.uk](http://acme.co.uk), which I own. I have [acme.co.uk](http://acme.co.uk) NS pointing to my ROOT account.

Scenario: I have to create r53 records, say [test.acme.co.uk](http://test.acme.co.uk). Naturally I want to create this on my testing account. I want this r53 record to be public. Naturally this means the testing account needs to have an [acme.co.uk](http://acme.co.uk) r53 public zone… but wait… I already have a public zone for this in ROOT with the public NS pointing to ROOT account.

Problem: Is this possible? Or to have public records for my one domain, must I assume a role into my ROOT account and only create public records there?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do DNS zone delegation. In the root DNS, we provision the parent domain ([acme.co.uk](http://acme.co.uk)) Zone and subdomains (e.g. [test.acme.co.uk](http://test.acme.co.uk)) zones, then we provision the DNS zone [test.acme.co.uk](http://test.acme.co.uk) in the test account, and then add the name server for [test.acme.co.uk](http://test.acme.co.uk) in the parent DNS zone in the root account
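
For illustration, a minimal sketch of that delegation in 0.11-era HCL; the provider aliases and the parent zone ID are placeholders, not from this thread:

resource "aws_route53_zone" "test" {
  provider = "aws.test"              # member (test) account
  name     = "test.acme.co.uk"
}

# Parent zone in the root account delegates to the subdomain zone via an NS record
resource "aws_route53_record" "test_ns" {
  provider = "aws.root"              # root account owns the parent zone
  zone_id  = "${var.parent_zone_id}"
  name     = "test.acme.co.uk"
  type     = "NS"
  ttl      = "300"
  records  = ["${aws_route53_zone.test.name_servers}"]
}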

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for root ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for member account ^

mmuehlberger avatar
mmuehlberger

I was wondering the same thing today. I’m debating with myself whether it makes sense to split your naming into an infrastructure zone like acme.cloud and have a separate one for your public-facing domain, being more picky about which namespaces to use.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we recommend having a service discovery domain which is not your branded vanity domain

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so we have [cloudposse.org](http://cloudposse.org) as our infra domain

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

[cloudposse.com](http://cloudposse.com) as our vanity domain

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(and [cloudposse.co](http://cloudposse.co) is just for our examples)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yes so CloudPosse.org has NS records for all our member accounts

mmuehlberger avatar
mmuehlberger

Makes total sense now that you say it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
mmuehlberger avatar
mmuehlberger

The only issue I thought might be a little bit tricky is with certificates, but I think it would just be the load balancer that needs to have a certificate for the vanity domain applied, so not too bad.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s possible now to associate multiple ACM ARNs with an ALB

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the alternative is to use SANs

mmuehlberger avatar
mmuehlberger

Ah, I somehow missed that. If possible, I try to avoid using SANs on vanity domains, it’s just neater.

1
mmuehlberger avatar
mmuehlberger

Thanks for the clarification.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
04:48:36 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what we have in the root account cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

certificates get provisioned in each account separately https://github.com/cloudposse/terraform-root-modules/blob/master/aws/acm/main.tf

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Igor avatar

Since we’re on the subject, is it a good idea to create a hosting zone for a customer’s site and ask them to create an NS record, instead of giving them the ALB CNAME? My thoughts are that this way I can change the ALB if ever required without having to request a customer to make a DNS change.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the service discovery domain will not change, then probably yes, give them the NS records

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, if you have a service discovery domain, e.g. [prod.mycompany.co](http://prod.mycompany.co) which you control, you can ask them to add the CNAME record from their main (vanity) domain to [prod.mycompany.co](http://prod.mycompany.co) (which should never change as well). No ALBs involved

Igor avatar

Is one approach better than the other?

Alex Siegman avatar
Alex Siegman

Amazon charges for CNAMEs, if that kind of penny pinching bothers you. Clarification: CNAMEs count towards your queries, but those are stupidly cheap anyway. I’ve had bosses squawk over it though. ALIAS A records do not.
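
For context, this is roughly what a Route53 ALIAS record pointing at an ALB looks like in Terraform; the zone ID and ALB attributes here are placeholders:

resource "aws_route53_record" "www" {
  zone_id = "${var.zone_id}"
  name    = "www.acme.co.uk"
  type    = "A"

  alias {
    name                   = "${var.alb_dns_name}"
    zone_id                = "${var.alb_zone_id}"
    evaluate_target_health = false
  }
}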

Alex Siegman avatar
Alex Siegman

I think the two approaches, it’s mostly just how you want to manage it.

Igor avatar

In this case, I think the CNAME would be on the customer side, so no additional cost

Alex Siegman avatar
Alex Siegman

Oh, I missed the “customer’s site” - in that case, I might say you want to control your own domain stuff, make them use cnames, but that’s just me

Igor avatar

CNAME has more visibility. Someone can access the site using [customer.mycompany.co](http://customer.mycompany.co) instead of [site.customer.co](http://site.customer.co) as intended.

Igor avatar

I just tried the NS approach for the first time, and the ask was met with some skepticism (from the customer)

oscarsullivan_old avatar
oscarsullivan_old

Thanks.. great discussion. Your solution seems v easy @Andriy Knysh (Cloud Posse)

oscarsullivan_old avatar
oscarsullivan_old

thanks for usage screenshots also

1
SweetOps avatar
SweetOps
06:00:34 PM

Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.

github140 avatar
github140

Could resources be deployed into two different AWS accounts from a single “TF environment”, and still use some sort of ENV variables for keys? Is there even support for Vault?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we handle that by having “generic” TF modules (that provision resources but all settings come from variables, they have no identity by itself). Then we have Docker containers (geodesic in our case) for each AWS account. We copy the modules into each containers. All those TF vars get populated from ENV vars which can come from a few different places (Dockerfile, CI/CD pipeline, AWS SSM, chamber, https://direnv.net, etc.). When starting a container for a particular account, it already has all the code (TF, Helm, Helfiles, etc.) and all the ENV vars populated, so the container has all the info to provision infra for that account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since all settings are coming from ENV vars (including AWS regions and account IDs), and all TF modules use terraform-null-label module to uniquely name all resources, the same setup could be used with multiple AWS accounts, or just with one if needed (in which case all containers will login/assume role into the same AWS account, but all the resources will have diff environment/stage in their names, so all names are unique anyway)
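
A minimal sketch of how such “generic” modules pick up their identity from the environment; Terraform reads any TF_VAR_* variable automatically (the names here are hypothetical, and the module must declare matching variables):

export TF_VAR_namespace="acme"
export TF_VAR_stage="prod"
export TF_VAR_region="eu-west-2"
terraform plan   # var.namespace, var.stage, var.region are now populated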

bilal avatar
bilal
07:50:50 PM

Does this work cross accounts? Would I just need to create another hosted zone with the same NS?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it is cross-accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the member accounts (prod, staging, dev) you create corresponding DNS zones (e.g. prod.cloudposse.co)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then get the NS servers from them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and update the NS records in the root account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

obviously you update the NS in the registrar (if it’s not Route53) to the root DNS zone name servers (first record in the image above)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, to put it very simple, every request to let’s say prod.cloudposse.co will first hit the root zone for cloudposse.co and the DNS server will return the name servers for prod.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then all other records for prod.cloudposse.co will be requested from its name servers (which could be in the same AWS account or diff, does not matter)

johncblandii avatar
johncblandii

Anyone else get access to TFE for the free remote state? I’m curious how people feel about it.

2019-03-16

oscarsullivan_old avatar
oscarsullivan_old

Can’t comment on TFE, but I’m very happy with my S3 backend. TFE works out of the box and you pay for it. The S3 backend is ‘free’ once you get your head around it.
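
For reference, a minimal S3 backend block with DynamoDB locking; the bucket, table, and key names are placeholders:

terraform {
  backend "s3" {
    bucket         = "acme-terraform-state"
    key            = "jenkins/terraform.tfstate"
    region         = "eu-west-2"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}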

oscarsullivan_old avatar
oscarsullivan_old

@Erik Osterman (Cloud Posse) perhaps for the first hangout I could demo how to set up one of my aws accounts to work with geodesic, aws vault, and s3 backend

antonbabenko avatar
antonbabenko

I think @johncblandii meant Terraform SaaS, which is expected to be affordable for many once it is public. If so, my question is more like this:

  1. Will I be able to run Terragrunt there (as a custom docker image with all other deps)?
oscarsullivan_old avatar
oscarsullivan_old

@Andriy Knysh (Cloud Posse) could you just double check I’m doing this right please?

ROOT account:

- Has the acme.co.uk public zone
- Has an NS record for sandbox.acme.co.uk pointing to the NS addresses that come by default with the sandbox.acme.co.uk r53 zone I created in my sandbox account

Sandbox account:

- Has a sandbox.acme.co.uk private zone without default NS records
- Has a test.sandbox.acme.co.uk A record to 127.0.0.1

Expected result:

host [test.sandbox.acme.co.uk](http://test.sandbox.acme.co.uk)
127.0.0.1

Actual result:
Host test.sandbox.acme.co.uk not found: 2(SERVFAIL)

mmuehlberger avatar
mmuehlberger

You need a public zone for sandbox.acme.co.uk.

oscarsullivan_old avatar
oscarsullivan_old

Ty

mmuehlberger avatar
mmuehlberger

Private zones only work within the VPC.

oscarsullivan_old avatar
oscarsullivan_old

Great fab. That’s now working

1
oscarsullivan_old avatar
oscarsullivan_old

Man that’s so cool. So much easier. What a great piece of knowledge right there

oscarsullivan_old avatar
oscarsullivan_old

I wonder if I can go a step further

mmuehlberger avatar
mmuehlberger

If you want to see how that works you can use dig to see the delegations happening: dig ns sandbox.acme.co.uk +trace +nodnssec

oscarsullivan_old avatar
oscarsullivan_old

ROOT: acme.co.uk Public

Sandbox: sandbox.acme.co.uk Public NS to sandbox.acme.net

Sandbox: sandbox.acme.net Private

oscarsullivan_old avatar
oscarsullivan_old

I don’t like the idea of my internal infra being on a public zone

oscarsullivan_old avatar
oscarsullivan_old

Got a few solutions up my sleeve that I’m going to test out.

mmuehlberger avatar
mmuehlberger

As far as I know you can’t scan DNS records easily

oscarsullivan_old avatar
oscarsullivan_old

Oh really? A long time ago someone told me “don’t put internal infra on public DNS, it just makes it easier for intruders to map out your kit”

oscarsullivan_old avatar
oscarsullivan_old

kinda just stuck as it sounded reasonable

loren avatar

We have that split dns setup in Route53. I wouldn’t do it again, starting over

loren avatar

Did it for that same reason, originally

oscarsullivan_old avatar
oscarsullivan_old

So, not worth it?

loren avatar

Not with the same zone name, for sure

mmuehlberger avatar
mmuehlberger

A quick article I just found on this topic is https://hackertarget.com/find-dns-host-records/. The only way to get the records is doing a DNS Zone Transfer, which doesn’t work from the public.

Find DNS Host Records | Subdomain Finder | HackerTarget.com

Online tool to enumerate subdomains of a domain. Find host records for a domain during the discovery phase of a security assessment or penetration test.

loren avatar

Use a totally different zone name in the private zone, and that’s generally ok

mmuehlberger avatar
mmuehlberger

If you have a private zone, I’d use something like .local or .acme as the root zone.

mmuehlberger avatar
mmuehlberger

The problem is, if you are working from outside (like you do with a multi-account AWS setup), you will run into more problems than not, if stuff doesn’t work.

loren avatar

Also, we run into problems with multi-account setups, where the accounts cannot resolve each other’s records from the private zones. There are ways of doing it, but it’s kludgey. So if you have internal services that should be resolvable across accounts, private zones end up being painful. This basic NS delegation that works with public zones does not work at all with private zones

oscarsullivan_old avatar
oscarsullivan_old

My goal was:

- VPC peer a VPC with Jenkins + VPN to all sub accounts.
- Continue having acme.co.uk on our ROOT account where all our infra currently is.
- Have a sub-account for each stage.
- Have all apps go through a load balancer that terminates SSL on acme.co.uk (we own this + have SSL).
- Have all internal infra on acme.net (we don’t own this and there’s no need for SSL as it terminates on the ELBs).

oscarsullivan_old avatar
oscarsullivan_old


should be resolvable across accounts
Yep jenkins would need this big time

loren avatar

I’m hoping the new route53 resolvers will get me out of this mess

oscarsullivan_old avatar
oscarsullivan_old
12:10:39 PM

I didn’t really understand what this was showing to me. When I searched my public acme.co.uk only two results came up and both had public IPs

A quick article I just found on this topic is https://hackertarget.com/find-dns-host-records/. The only way to get the records is doing a DNS Zone Transfer, which doesn’t work from the public.

oscarsullivan_old avatar
oscarsullivan_old

What does this tell me?

mmuehlberger avatar
mmuehlberger

This was about finding DNS records out of the blue.

oscarsullivan_old avatar
oscarsullivan_old

But what does it tell me about my current setup that only two of our many public A records in our public acme.co.uk zone appear?

mmuehlberger avatar
mmuehlberger

Nothing, I was just backing up my claim.
oscarsullivan [1:00 PM]
Oh really? A long time ago someone told me “don’t put internal infra on public DNS, it just makes it easier for intruders to map out your kit”

mmuehlberger avatar
mmuehlberger

Ah, now I understand what you mean. You should only receive all the records that are for acme.co.uk (and not any subdomain).

loren avatar

That site is doing more than just attempting a zone transfer (which doesn’t work with route53). They’re also scraping for site info and guessing common names

mmuehlberger avatar
mmuehlberger

Yes, it was more about the text, than the tool itself.

xluffy avatar

Hey, I checked some VPC modules (on GitHub), but nobody implements a Network ACL on AWS. What are the reasons not to do that?

loren avatar

They are very hard to manage, as they do not track connections, so you need rules to handle both outbound and any valid returning traffic

loren avatar

The only real valid use case I’ve seen is to throw on deny nacls for known bad actors ¯_(ツ)_/¯

mmuehlberger avatar
mmuehlberger

That, and if you have a very tiered network, where you want to prevent misuse.

xluffy avatar

yeah, I see. So hard to manage + hard to debug. But you must trade off between security and convenience.

Some infrastructures (payment systems, banking systems) need security more.

loren avatar

well, yes, but with NACLs basically requiring allowing all ports for return traffic, or at least all ephemeral ports, they feel far more like checkbox security or security theatre than actual security
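
To illustrate the point, a hypothetical stateless return-traffic rule; because NACLs don’t track connections, without something like this, responses to outbound connections get dropped (aws_network_acl.main is assumed to exist):

resource "aws_network_acl_rule" "return_traffic" {
  network_acl_id = "${aws_network_acl.main.id}"
  rule_number    = 100
  egress         = false
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 1024    # the whole ephemeral port range must be open
  to_port        = 65535
}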

2
antonbabenko avatar
antonbabenko

https://github.com/terraform-aws-modules/terraform-aws-vpc/pull/174 - there will be support for NACL very soon (I will review and merge this during a couple of days).

Configure network ACLs for public/private/intra subnets by kinghuang · Pull Request #174 · terraform-aws-modules/terraform-aws-vpc

Network ACLs provide a layer of security to VPCs with stateless rules for inbound and outbound network connections. This PR adds support for configuring network ACLs for public, private, and intra …

2

2019-03-17

Valter Henrique avatar
Valter Henrique

Hey guys, I’m trying to create a domain zone on AWS Route53 and create a certificate that will be used in an ALB later. However, I need to validate the record first. I’m trying validation_method="DNS" but it is taking ages and nothing happens. My impression is that I need to create the domain, create the ALB, assign a record to the ALB and then try to validate the record. Is that right?

By the way, I’m talking about this: https://www.terraform.io/docs/providers/aws/r/acm_certificate_validation.html

AWS: aws_acm_certificate_validation - Terraform by HashiCorp

Waits for and checks successful validation of an ACM certificate.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Hi @Valter Henrique, the DNS zone you are requesting the certificate for should be accessible from the internet, meaning if the registrar is not Route53, then you need to create the zone in Route53, get the name servers, and update the NS records in the registrar

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In other words, if AWS can’t access your dns records, the validation will take forever and never finish
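
A minimal sketch of DNS validation wired up end-to-end (0.11-era syntax; the domain name and zone ID are placeholders):

resource "aws_acm_certificate" "cert" {
  domain_name       = "test.acme.co.uk"
  validation_method = "DNS"
}

# Publish the validation record ACM asks for into the public zone
resource "aws_route53_record" "validation" {
  zone_id = "${var.zone_id}"
  name    = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_name}"
  type    = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_type}"
  ttl     = "300"
  records = ["${aws_acm_certificate.cert.domain_validation_options.0.resource_record_value}"]
}

# Blocks until ACM can see the record from the public internet
resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = "${aws_acm_certificate.cert.arn}"
  validation_record_fqdns = ["${aws_route53_record.validation.fqdn}"]
}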

Valter Henrique avatar
Valter Henrique

Right, thank you very much for that @Andriy Knysh (Cloud Posse)!

Valter Henrique avatar
Valter Henrique

I was almost bashing my head against my keyboard hehehe

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yea, we were bitten by that as well. Certificate Manager adds the validation record to your zone, and you think if it can write it, it should be able to read it as well :)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

But no, it tries to read using the public internet, not the internal network

Valter Henrique avatar
Valter Henrique

Indeed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

But of course, the purpose of that is to validate that you own the domain. Only if you bought the domain could you update its name servers. Otherwise I could create a DNS zone for your domain and request a cert for it

2019-03-18

mmuehlberger avatar
mmuehlberger

I was looking at the ecs modules and I’m trying to figure out how they’re supposed to work with atlantis. The terraform-aws-ecs-atlantis module uses the default-backend for both service tasks (404 and atlantis), but shouldn’t it use the container image from ECR for the atlantis instance? (I hope this is the right place.)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@mmuehlberger sounds about right. You might notice that we hardcode the image to the default backend. That is by design.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The image should always be updated using codebuild/codepipeline, so if you see the 404 then you know it didn’t work :-)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is our example implementation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s the build spec we use

mmuehlberger avatar
mmuehlberger

God, I’m an idiot (was a long day already). Of course it gets updated in the CodePipeline.

mmuehlberger avatar
mmuehlberger

Because I used a private helmfiles image, the CodeBuild project failed, and it didn’t update yet. Duh.

mmuehlberger avatar
mmuehlberger

The problem is completely on my end. Sometimes I get lost in the dependency chain of Terraform modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok! Let me know it goes…

mmuehlberger avatar
mmuehlberger

It deploys fine now, as it should (thank you for the pointers). Adding encrypted secrets to the buildspec is kind of a hassle though, because the user is buried deep in modules. Also there are issues with the healthcheck and atlantis shutting down. I need to investigate further tomorrow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use chamber to write all secrets to ssm
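
A hypothetical chamber workflow (“atlantis” as the service name is an assumption):

chamber write atlantis gh_token "xxxxxxxx"    # stored as an encrypted SSM parameter
chamber exec atlantis -- atlantis server      # parameters injected as env vars at runtime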

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@mmuehlberger are you using atlantis with #geodesic?

mmuehlberger avatar
mmuehlberger

Yes, I’m using geodesic in a slightly more modern form than reference-architectures (like testing). It can read the parameters from SSM just fine, the role just doesn’t have permission to decrypt them.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cool!

2019-03-19

niek avatar

Does anyone know if it is possible to construct a string from a map?

For example

{
   "key1" = "val1"
   "key2" = "val2"
}

To: "key1, val1, key2, val2"

Currently I have a tf module which accepts a map as input for tagging in AWS, but there is one place where I need to pass the tags as a list. I prefer to keep my work backwards compatible.

niek avatar

Solved

replace(jsonencode(map("key1", "val1", "key2", "val2")), "/[\\{\\}\"\\s]/", "")
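
Applied to an existing map input, the same trick would look like this (var.tags is hypothetical):

locals {
  tags_string = "${replace(jsonencode(var.tags), "/[\\{\\}\"\\s]/", "")}"
}
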
oscarsullivan_old avatar
oscarsullivan_old
Error: module.jenkins.module.cicd.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled

Error: module.jenkins.module.cicd.module.build.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled

Error: module.jenkins.module.efs_backup.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled

Error: module.jenkins.module.elastic_beanstalk_environment.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled

 ⧉  sandbox 
 ✓   (sandbox-iac) jenkins ⨠ terraform -v
Terraform v0.11.11
+ provider.aws v2.2.0
oscarsullivan_old avatar
oscarsullivan_old

Anyone familiar with this?

oscarsullivan_old avatar
oscarsullivan_old

Looks like the aws_region is not passed through the module itself to the submodules..

oscarsullivan_old avatar
oscarsullivan_old

Hmm having looked at the jenkins modules and the erroring sub modules it does pass aws_region down

oscarsullivan_old avatar
oscarsullivan_old
Simply remove current = true from your Terraform configuration. The data source defaults to the current provider region if no other filtering is enabled.

:thinking_face:

Thanks for the link, though I don’t understand which file it refers to, nor do I recognise the current = true flag

oscarsullivan_old avatar
oscarsullivan_old

Ah got it

oscarsullivan_old avatar
oscarsullivan_old

ah man

oscarsullivan_old avatar
oscarsullivan_old
cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

oscarsullivan_old avatar
oscarsullivan_old

its everywhere

oscarsullivan_old avatar
oscarsullivan_old

I’m going to have to use so many forks until this is merged in lol

loren avatar

the argument has been deprecated for a while, just printing warnings instead of erroring and exiting non-zero

loren avatar

removing it is backwards compatible for a reasonable number of versions

oscarsullivan_old avatar
oscarsullivan_old

PRs made for it

oscarsullivan_old avatar
oscarsullivan_old

any idea how to change to my branch:

  source              = "git::<https://github.com/osulli/terraform-aws-cicd.git?ref=heads/osulli:patch-1>"
oscarsullivan_old avatar
oscarsullivan_old

instead of tags

oscarsullivan_old avatar
oscarsullivan_old

I’ve tried heads

mmuehlberger avatar
mmuehlberger

Just use the branch as a ref. ref=master for instance
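
e.g., assuming the fork above:

source = "git::<https://github.com/osulli/terraform-aws-cicd.git?ref=patch-1>"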

oscarsullivan_old avatar
oscarsullivan_old

ty

oscarsullivan_old avatar
oscarsullivan_old

fab that works

oscarsullivan_old avatar
oscarsullivan_old
Update sub-module versions by osulli · Pull Request #39 · cloudposse/terraform-aws-jenkins

… And use my forks until PRs merged for AWS v2 support What Updates sub-module versions to latest releases Uses my forks until the PRs are merged for the sub-modules Why Use latest versions S…

oscarsullivan_old avatar
oscarsullivan_old

That’s weird.. Still getting …

Error: module.jenkins.module.cicd.data.aws_region.default: "current": [REMOVED] Defaults to current provider region if no other filtering is enabled

even though..

oscar@infinitum:~/code/terraform-aws-efs-backup$ git checkout patch-1 
Branch 'patch-1' set up to track remote branch 'patch-1' from 'origin'.
Switched to a new branch 'patch-1'
oscar@infinitum:~/code/terraform-aws-efs-backup$ grep -iR "current" ./*
./docs/terraform.md:| noncurrent_version_expiration_days | S3 object versions expiration period (days) | string | `35` | no |
./README.md:  noncurrent_version_expiration_days = "${var.noncurrent_version_expiration_days}"
./README.md:> NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group Rule resource 
./README.md:| noncurrent_version_expiration_days | S3 object versions expiration period (days) | string | `35` | no |
./README.yaml:    noncurrent_version_expiration_days = "${var.noncurrent_version_expiration_days}"
./README.yaml:  > NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group Rule resource 
./s3.tf:    noncurrent_version_expiration {
./s3.tf:      days = "${var.noncurrent_version_expiration_days}"
./variables.tf:variable "noncurrent_version_expiration_days" {
oscarsullivan_old avatar
oscarsullivan_old

Next…

* module.jenkins.module.efs_backup.output.sns_topic_arn: Resource 'aws_cloudformation_stack.sns' does not have attribute 'outputs.TopicArn' for variable 'aws_cloudformation_stack.sns.outputs.TopicArn'

Has anyone used the Jenkins module lately?

oscarsullivan_old avatar
oscarsullivan_old
Resource 'aws_cloudformation_stack.sns' does not have attribute 'outputs.TopicArn' for variable 'aws_cloudformation_stack.sns.outputs.TopicArn' · Issue #36 · cloudposse/terraform-aws-efs-backup

Hi, I&#39;m trying to create EFS backups using this module but I keep running into the following error: * module.efs_backup.output.sns_topic_arn: Resource &#39;aws_cloudformation_stack.sns&#39; doe…

oscarsullivan_old avatar
oscarsullivan_old

Ok it was looking for sns.TopicArn instead of sns.arn https://www.terraform.io/docs/providers/aws/r/sns_topic.html#arn

AWS: sns_topic - Terraform by HashiCorp

Provides an SNS topic resource.

oscarsullivan_old avatar
oscarsullivan_old

arn not valid either. Just going to remove the output.. Hopefully the output isn’t referenced elsewhere? Not sure you can reference an output anyway. That’s just visual.

mmuehlberger avatar
mmuehlberger

Well, you can when looking up a remote terraform state.
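
For reference, a minimal sketch; in 0.11, a remote module’s outputs are read as attributes of the data source (the bucket and key names are placeholders):

data "terraform_remote_state" "jenkins" {
  backend = "s3"

  config {
    bucket = "acme-terraform-state"
    key    = "jenkins/terraform.tfstate"
    region = "eu-west-2"
  }
}

# elsewhere: "${data.terraform_remote_state.jenkins.sns_topic_arn}"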

oscarsullivan_old avatar
oscarsullivan_old

No, but it’s not like ${output.x.x} is a thing in another module

oscarsullivan_old avatar
oscarsullivan_old

So it should be safe to remove this one output that breaks the whole project

oscarsullivan_old avatar
oscarsullivan_old
AWS: aws_route53_zone - Terraform by HashiCorp

Provides details about a specific Route 53 Hosted Zone

oscarsullivan_old avatar
oscarsullivan_old

Damn, received this a bunch of times.

module.jenkins.module.efs.aws_efs_mount_target.default[1]: Creation complete after 2m44s (ID: fsmt-xxx)

Error: Error applying plan:

1 error(s) occurred:

* module.jenkins.module.efs.aws_efs_mount_target.default[0]: 1 error(s) occurred:

* aws_efs_mount_target.default.0: MountTargetConflict: mount target already exists in this AZ
	status code: 409, request id: 0df8f8c2-xxx-xxx-xxx-55a525bfd810

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
oscarsullivan_old avatar
oscarsullivan_old

changed the versions of CP/efs but no luck. It seems to be trying to create it twice???

oscarsullivan_old avatar
oscarsullivan_old

Yet doesn’t appear to be created twice:

oscar@infinitum:/devops/terraform/providers/aws/jenkins/.terraform/modules$ grep -ir "aws_efs_mount_target" ./*
./08d41c3e3037162fadbb4393ce396759/outputs.tf:  value = ["${aws_efs_mount_target.default.*.id}"]
./08d41c3e3037162fadbb4393ce396759/outputs.tf:  value = ["${aws_efs_mount_target.default.*.ip_address}"]
./08d41c3e3037162fadbb4393ce396759/main.tf:resource "aws_efs_mount_target" "default" {
./808f0aa181de2ea4cc344b3503eff684/efs.tf:data "aws_efs_mount_target" "default" {
./808f0aa181de2ea4cc344b3503eff684/cloudformation.tf:    myEFSHost                  = "${var.use_ip_address == "true" ? data.aws_efs_mount_target.default.ip_address : format("%s.efs.%s.amazonaws.com", data.aws_efs_mount_target.default.file_system_id, (signum(length(var.region)) == 1 ? var.region : data.aws_region.default.name))}"
./808f0aa181de2ea4cc344b3503eff684/security_group.tf:  security_group_id        = "${data.aws_efs_mount_target.default.security_groups[0]}"
oscarsullivan_old avatar
oscarsullivan_old

only one resource for it in the whole of the jenkins project and its modules

oscarsullivan_old avatar
oscarsullivan_old
12:52:38 PM

wants to create it a second time despite it already existing

oscarsullivan_old avatar
oscarsullivan_old

Found the cause: terraform-aws-efs

Inside [main.tf](http://main.tf)

resource "aws_efs_mount_target" "default" {
  count           = "${length(var.availability_zones)}"
  file_system_id  = "${aws_efs_file_system.default.id}"
  subnet_id       = "${element(var.subnets, count.index)}"
  security_groups = ["${aws_security_group.default.id}"]
}
oscarsullivan_old avatar
oscarsullivan_old

The length was what was causing multiple to be created……. so I just used one availability zone and I’m no longer receiving that dupe error.

oscarsullivan_old avatar
oscarsullivan_old

Onto the next error

Error: Error refreshing state: 1 error(s) occurred:

* module.jenkins.module.efs_backup.output.datapipeline_ids: At column 48, line 1: map "aws_cloudformation_stack.datapipeline.outputs" does not have any elements so cannot determine type. in:

${aws_cloudformation_stack.datapipeline.outputs["DataPipelineId"]}
oscarsullivan_old avatar
oscarsullivan_old

Oh. Cool. I can’t terraform destroy either

oscarsullivan_old avatar
oscarsullivan_old

^ Commented out the output..

And now….

* aws_elastic_beanstalk_environment.default: InvalidParameterValue: No Solution Stack named '64bit Amazon Linux 2017.09 v2.8.4 running Docker 17.09.1-ce' found.
	status code: 400, request id: d7bc0ae2-2278-4bbd-9540-bda532e9cd71
oscarsullivan_old avatar
oscarsullivan_old

Feel like I’m getting closer

oscarsullivan_old avatar
oscarsullivan_old

You know what.. I’m going to try the live version and either:

  1. Define the AWS provider version
  2. Use it how it is + grep for the deprecated argument and just manually remove it

rbadillo avatar
rbadillo

Guys, is there a way to do a split and get the last element of the list ?

rbadillo avatar
rbadillo

or do I need to know the size of the list ?

xluffy avatar

${length(var.your_list)}

rbadillo avatar
rbadillo

I did it like that, thanks

oscarsullivan_old avatar
oscarsullivan_old

try [-1] for the index

rbadillo avatar
rbadillo

-1 doesn’t work

rbadillo avatar
rbadillo

I ended up using length function

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

xluffy avatar

For getting last element in a list, I think can use like that

variable "public_subnet" {
  default = ["10.20.99.0/24" , "10.20.111.0/24", "10.20.222.0/24"]
}

output "last_element" {
  value = "${element(var.public_subnet, length(var.public_subnet) - 1 )}"
}

xluffy avatar

will return last_element = 10.20.222.0/24

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscarsullivan_old https://github.com/cloudposse/terraform-aws-jenkins was tested by us about a year ago (was deployed many times at the time), so prob a lot of things changed since then

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

oscarsullivan_old avatar
oscarsullivan_old

I’ve realised that now, @Andriy Knysh (Cloud Posse), after trying so hard to use it lmao

oscarsullivan_old avatar
oscarsullivan_old

I can’t believe no one else (?) has used it recently though.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i think some people used it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just need to find them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have not personally used it for some time

oscarsullivan_old avatar
oscarsullivan_old

It’s worrying that there are issues in it that actually prevent me from running terraform destroy?!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
oscarsullivan_old avatar
oscarsullivan_old

Astonished that’s possible

oscarsullivan_old avatar
oscarsullivan_old

I don’t have everything in containers yet

Mohamed Lrhazi avatar
Mohamed Lrhazi

Hello! is this the place to ask for help about https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Mohamed Lrhazi

Mohamed Lrhazi avatar
Mohamed Lrhazi

Great! Here I go… am testing for the first time with this:

» cat main.tf

module "cdn" {
  source                   = "git://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=master"
  namespace                = "ts-web"
  stage                    = "prod"
  name                     = "test"
  aliases                  = []
  parent_zone_name         = "catholic.edu"
  acm_certificate_arn      = "arnawsacm947556264854:certificate/e9b7a021-ef1a-49f7-8f2c-5a8e13c89dd2"
  use_regional_s3_endpoint = "true"
  origin_force_destroy     = "true"
  cors_allowed_headers     = [""]
  cors_allowed_methods     = ["GET", "HEAD", "PUT"]
  cors_allowed_origins     = [""]
  cors_expose_headers      = ["ETag"]
}

resource "aws_s3_bucket_object" "index" {
  bucket       = "${module.cdn.s3_bucket}"
  key          = "index.html"
  source       = "${path.module}/index.html"
  content_type = "text/html"
  etag         = "${md5(file("${path.module}/index.html"))}"
}

It seems to work fine.. but then when I visit the cdn site, I get:

» curl -i https://d18shdqwx0ry07.cloudfront.net
HTTP/1.1 502 Bad Gateway
Content-Type: text/html
Content-Length: 507
Connection: keep-alive
Server: CloudFront
Date: Tue, 19 Mar 2019 1545 GMT
Expires: Tue, 19 Mar 2019 1545 GMT
X-Cache: Error from cloudfront
Via: 1.1 e6aa91f0ba1f6ad473a8fc451c95d017.cloudfront.net (CloudFront)
X-Amz-Cf-Id: P5kPEIr2kxXdfOBYgE2iiHiOUBOUh2bGSM8ZU9xI_w8zjcxT6PLCnw==
…
<H2>Failed to contact the origin.</H2>

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscarsullivan_old what we noticed before: if deployment fails for any reason, you need to manually destroy those data pipelines

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF does not destroy them (it uses CloudFormation)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and also, all of that should be updated to use https://aws.amazon.com/backup/

AWS Backup | Centralized Cloud Backup

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Honestly, looking at the architecture for deploying a single, non HA Jenkins on beanstalk is enough for me to say I just don’t think it’s worth running jenkins. Plus, to get HA with Jenkins you have to go enterprise. At that point might as well look at other options.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

instead of the efs-backup module which was a hack

oscarsullivan_old avatar
oscarsullivan_old

oscarsullivan_old avatar
oscarsullivan_old

oscarsullivan_old avatar
oscarsullivan_old

lmao data pipelines aren’t even available in London

oscarsullivan_old avatar
oscarsullivan_old

I wonder where they were created

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I forgot the command but terraform has something you can run that will output the state

oscarsullivan_old avatar
oscarsullivan_old

show
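
For reference, the commands in question:

terraform show          # render the current state (or a plan file) in human-readable form
terraform state list    # list every resource address tracked in the state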

oscarsullivan_old avatar
oscarsullivan_old

let me try that

oscarsullivan_old avatar
oscarsullivan_old

Checked all the data centers and not there lawwwwwwd

oscarsullivan_old avatar
oscarsullivan_old

and cloud-nuke won’t solve this either

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya @Andriy Knysh (Cloud Posse) is correct that pipelines should now be replaced with the backups service

oscarsullivan_old avatar
oscarsullivan_old

execute me now pls

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

How long did you wait @Mohamed Lrhazi ?

Mohamed Lrhazi avatar
Mohamed Lrhazi

Any idea what I am missing?

Mohamed Lrhazi avatar
Mohamed Lrhazi

oh.. maybe 10 mins ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It takes sometimes up to 30 minutes to create a distribution

Mohamed Lrhazi avatar
Mohamed Lrhazi

hm.. but it says DEPLOYED as status…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

10 m is faster than I have seen. Not sure what is wrong based on your example.

Mohamed Lrhazi avatar
Mohamed Lrhazi

and I think it’s actually been more than 30 mins

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can go into the webconsole and poke around

Mohamed Lrhazi avatar
Mohamed Lrhazi

still giving same error…

Mohamed Lrhazi avatar
Mohamed Lrhazi

yes, I looked at the s3 bucket and it looks like it did assign the right perms, from what I can guess

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Did you use a custom SSL cert?

Mohamed Lrhazi avatar
Mohamed Lrhazi

Nope!

Mohamed Lrhazi avatar
Mohamed Lrhazi

could that be it? The docs don’t say that’s required!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If yes, the cloudfront provided one will not work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In your code I see the cert ARN provided

Mohamed Lrhazi avatar
Mohamed Lrhazi

Oh sorry you’re right!!! I did add that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Also the S3 bucket is not a website

Mohamed Lrhazi avatar
Mohamed Lrhazi

oh.. the module does not do that for me?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You have to access any file by adding its name after the URL

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

There is another module for that

Mohamed Lrhazi avatar
Mohamed Lrhazi

ok. let me poke around and see if I can make it work… one last question.. is the module supposed to also create route53 records needed?

Mohamed Lrhazi avatar
Mohamed Lrhazi

cause it does not seem like it did in my simple test case.

oscarsullivan_old avatar
oscarsullivan_old

I just want this jenkins module off of my account now.. Any idea how to get past this:

* module.jenkins.module.efs.module.dns.output.hostname: variable "default" is nil, but no error was reported
* module.jenkins.module.elastic_beanstalk_environment.module.tld.output.hostname: variable "default" is nil, but no error was reported

oscarsullivan_old avatar
oscarsullivan_old

Have tried commenting that output out and also removing all outputs.tf files

oscarsullivan_old avatar
oscarsullivan_old

Just need to be able to run terraform destroy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mohamed Lrhazi 1 sec

oscarsullivan_old avatar
oscarsullivan_old

been in a half hour loop trying to purge it but always end up at the same spot

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscarsullivan_old did you find the pipelines in the AWS console? you need to manually delete them

oscarsullivan_old avatar
oscarsullivan_old

They don’t exist

oscarsullivan_old avatar
oscarsullivan_old

I looked through all the data centers

oscarsullivan_old avatar
oscarsullivan_old

I’m set to eu-west-2 (london) and that’s not even an available data center!

oscarsullivan_old avatar
oscarsullivan_old
03:32:19 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what do you see in CloudFormation? Delete those stacks manually

oscarsullivan_old avatar
oscarsullivan_old

0 stacks

oscarsullivan_old avatar
oscarsullivan_old

Also checked the US DCs in case it defaulted to CP’s DC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when you run terraform destroy, what’s the error?

oscarsullivan_old avatar
oscarsullivan_old
 ⧉  sandbox 
 ✓   (healthera-sandbox-admin) jenkins ⨠ terraform destroy
data.aws_availability_zones.available: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...
data.aws_iam_policy_document.ec2: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_route53_zone.he_uk: Refreshing state...
data.aws_elb_service_account.main: Refreshing state...
data.aws_iam_policy_document.role: Refreshing state...
data.aws_ami.amazon_linux: Refreshing state...
data.aws_ami.base_ami: Refreshing state...
data.aws_vpcs.account_vpc: Refreshing state...
data.aws_caller_identity.default: Refreshing state...
data.aws_iam_policy_document.permissions: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_iam_policy_document.resource_role: Refreshing state...
data.aws_iam_policy_document.service: Refreshing state...
data.aws_iam_policy_document.assume: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_caller_identity.default: Refreshing state...
data.aws_iam_policy_document.slaves: Refreshing state...
data.aws_iam_policy_document.role: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
data.aws_acm_certificate.he_uk_ssl: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
data.aws_subnet_ids.private_subnet: Refreshing state...
data.aws_subnet_ids.public_subnet: Refreshing state...
data.aws_vpc.default: Refreshing state...
data.aws_subnet_ids.default: Refreshing state...

Error: Error applying plan:

2 error(s) occurred:

* module.jenkins.module.efs.module.dns.output.hostname: variable "default" is nil, but no error was reported
* module.jenkins.module.elastic_beanstalk_environment.module.tld.output.hostname: variable "default" is nil, but no error was reported

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, never seen those

oscarsullivan_old avatar
oscarsullivan_old

Have tried commenting out those ouputs, have tried removing all outputs.tf files, have attried re-applying then destroying etc

oscarsullivan_old avatar
oscarsullivan_old

I get something like this every time I use a CP module

oscarsullivan_old avatar
oscarsullivan_old

but this is a complex one and hard to clean up manually

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess just go to Route53 and delete those records

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(and fix the variables and outputs so it does not complain)

oscarsullivan_old avatar
oscarsullivan_old

There aren’t any R53 records

oscarsullivan_old avatar
oscarsullivan_old

oh god

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i hope you did that in a test account that you could nuke somehow

oscarsullivan_old avatar
oscarsullivan_old

Yeh but unsure nuke will work on this

oscarsullivan_old avatar
oscarsullivan_old

if we’re taling cloud-nuke

oscarsullivan_old avatar
oscarsullivan_old

had a read through the resources it can do

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i mean anything to destroy it, not exactly using the nuke module

oscarsullivan_old avatar
oscarsullivan_old

oscarsullivan_old avatar
oscarsullivan_old

jeeeeez

oscarsullivan_old avatar
oscarsullivan_old

Deprecate that god damn repo, please. This has been really unpleasant

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s tone it down a notch. We’re all volunteering support here. https://gist.github.com/richhickey/1563cddea1002958f96e7ba9519972d9

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
What it feels like to be an open-source maintainer

Outside your door stands a line of a few hundred people. They are patiently waiting for you to answer their questions, complaints, pull requests, and feature requests. You want to help all of them, but for now you’re putting it off. Maybe you had a hard day at work, or you’re tired, or you’re just trying to enjoy a weekend with your family and friends. But if you go to , there’s a constant reminder of how many people are waiting:

When you manage to find some spare time, you open the door to the first person. They’re well-meaning enough; they tried to use your project but ran into some confusion over the API. They’ve pasted their code into a GitHub comment, but they forgot or didn’t know how to format it, so their code is a big unreadable mess. Helpfully, you edit their comment to add a code block, so that it’s nicely formatted. But it’s still a lot of code to read. Also, their description of the problem is a bit hard to understand. Maybe this person doesn’t speak English as a first language, or maybe they have a disability that makes it difficult for them to communicate via writing. You’re not sure. Either way, you struggle to understand the paragraphs of text they’ve posted. Wearily, you glance at the hundreds of other folks waiting in line behind them. You could spend a half-hour trying to understand this person’s code, or you could just skim through it and offer some links to tutorials and documentation, on the off-chance that it will help solve their problem. You also cheerfully suggest that they try Stack Overflow or the Slack channel instead.

The next person in line has a frown on their face. They spew out complaints about how your project wasted 2 hours of their life because a certain API didn’t work as advertised. Their vitriol gives you a bad feeling in the pit of your stomach. You don’t waste a lot of time on this person. You simply say, “This is an open-source project, and it’s maintained by volunteers. If there’s a bug in the code, please submit a reproducible test case or a PR.”

The next person has run into a very common error, with an easy workaround. You know you’ve seen this error a few times before, but can’t quite recall where the solution was posted. Stack Overflow? The wiki? The mailing list? After a few minutes of Googling, you paste a link and close the issue.

The next person is a regular contributor. You recognize their name from various community forums and sibling projects. They’ve run into a very esoteric issue and have proposed a pull request to fix it. Unfortunately the issue is complicated, and so their PR contains many paragraphs of prose explaining it. Again, your eye darts to the hundreds of people still waiting in line. You know that this person put a lot of work into their solution, and it’s probably a reasonable one. The Travis tests passed, and so you’re tempted to just say “LGTM” and merge the pull request. However, you’ve been burned by that before. In the past, you’ve merged a PR without fully evaluating it, and in the end it led to new headaches because of problems you failed to foresee. Maybe the tests passed, but the performance degraded by a factor of ten. Or maybe it introduced a memory leak. Or maybe the PR made the project too confusing for new users, because it excessively complicated the API surface. If you merge this PR now, you might wind up with even more issues tomorrow, because you broke someone else’s workflow by solving this one person’s (very edge-casey) problem. So you put it on the back burner. You’ll get to it later when you have more time.

The next person in line has found a new bug, but you know that it’s actually a bug in a sibling project. They’re saying that this is blocking them from shipping their app. You know it’s a big problem, but it’s one of many, and so you don’t have time to fix it right now. You respond that this looks like a genuine issue, but it’s more appropriate to open in another repo. So you close their issue and copy it into the other repo, then add a comment suggesting where they might look in the code to start fixing it. You doubt they’ll actually do so, though. Very few do.

The next person just says “What’s the status on this?” You’re not sure what they’re talking about, so you look at the context. They’ve commented on a lengthy GitHub thread about a long-standing bug in the project. Many people disagreed on the proper solution to the problem, so it generated a lot of discussion. There are more than 20 comments on this particular issue, and it would take you a long time to read through them all to jog your memory. So you merely respond, “Sorry, this issue has been open for a while, but nobody has tackled it yet. We’re still trying to understand the scope of the problem; a pull request could be a good start!”

The next person is just a GreenKeeper bot. These are easy. Except that this particular repo has fairly flaky tests, and the tests failed for what looks like a spurious reason, so you have to restart them to pass. You restart the tests and try to remind yourself to look into it later after Travis has had a chance to run.

The next person has opened a pull request, but it’s on a repo that’s fairly active, and so another maintainer is already providing feedback. You glance through the thread; you trust the other maintainer to handle this one. So you mark it as read and move on.

The next person has run into what appears to be a bug, and it’s not one you’ve ever seen before. But unfortunately they’ve provided scant details on how the problem actually occurred. What browser was it? What version of Node? What version of the project? What code did they use to reproduce it? You ask them for clarification and close the tab.

The constant stream

After a while, you’ve gone through ten or twenty people like this. There are still more than a hundred waiting in line. But by now you’re feeling exhausted; each person has either had a complaint, a question, or a request for enhancement. In a sense, these GitHub notifications are a constant stream of negativity about your projects. Nobody opens an issue or a pull request when they’re satisfied with your work. They only do so when they’ve found something lacking. Even if you only spend a little bit of time reading through these notifications, it can be mentally and emotionally exhausting. Your partner has observed that you’re always grumpy after going through this ritual. Maybe you found yourself snapping at her for no reason, just because you were put in a sour mood. “If doing open source makes you so angry, why do you even do it?” she asks. You don’t have a good answer.

You could take a break; in fact you’ve probably earned it by now. In the past, you’ve even taken vacations of a week or two from GitHub, just for your own mental health. But you know that that’s exactly how you ended up in this situation, with hundreds of people patiently waiting. If you had just kept on top of your GitHub notifications, you’d probably have a more manageable 20-30 to deal with per day. Instead you let them pile up, so now there are hundreds. You feel guilty.

In the past, for one reason or another, you’ve really let issues pile up. You might have seen an issue that was left unanswered for months. Usually, when you go back to address such an issue, the person who opened it never responds. Or they respond by saying, “I fixed my problem by abandoning your project and using another one instead.” That makes you feel bad, but you understand their frustration. You’ve learned from experience that the most pragmatic response to these stale issues is often just to say, &…

oscarsullivan_old avatar
oscarsullivan_old

My apologies, it was very rude of me as I was frustrated at the time and unsuccessful at getting it to work even after opening several forks and PRs.

1
oscarsullivan_old avatar
oscarsullivan_old

And thank you for calling me out on it

oscarsullivan_old avatar
oscarsullivan_old

And thanks for that nolanlawson read

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so I think part of the issue was that the pipelines are not supported in the region (and maybe other resources)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but the error reporting is bad

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mohamed Lrhazi this module https://github.com/cloudposse/terraform-aws-s3-website does create an S3 website

cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then you use this https://github.com/cloudposse/terraform-aws-cloudfront-cdn to add CDN for it

cloudposse/terraform-aws-cloudfront-cdn

Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - cloudposse/terraform-aws-cloudfront-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn creates a regular S3 bucket, not a website, and points a CloudFront distribution to it)

Mohamed Lrhazi avatar
Mohamed Lrhazi
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since it’s not a website, you have to access all the files by their names, e.g. https://d18shdqwx0ry07.cloudfront.net/index.html

Mohamed Lrhazi avatar
Mohamed Lrhazi

I tried that, still getting same “Failed to contact the origin.” error!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and since you provided your own SSL cert (I guess you created it already), you have to access the site by the DNS name (not CloudFront) : https://catholic.edu/index.html

oscarsullivan_old avatar
oscarsullivan_old

Well, according to my state file it is GONE…

Mohamed Lrhazi avatar
Mohamed Lrhazi

ok.. maybe that’s the key… thanks a lot!… I will see whats wrong with the DNS part…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-route53-cluster-zone

Terraform module to easily define consistent cluster domains on Route53 (e.g. [prod.ourcompany.com](http://prod.ourcompany.com)) - cloudposse/terraform-aws-route53-cluster-zone

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-route53-cluster-hostname

Terraform module to define a consistent AWS Route53 hostname - cloudposse/terraform-aws-route53-cluster-hostname

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mohamed Lrhazi to create an S3 website (not just a regular S3 bucket) and then point a CloudFront CDN to it, we have this example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

working because it’s how https://docs.cloudposse.com is deployed

Mohamed Lrhazi avatar
Mohamed Lrhazi

Cool! Thanks! I will start from that example then. I was really hoping I’d found a module that would allow me to just say: my site is foo.dom.ain, my content is in the content folder… go do it. But this example does not look that bad.. just need to learn me a bit of terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

such a module could be created by using these two

oscarsullivan_old avatar
oscarsullivan_old

If you do have a website already set up, and you mean “go do it” as in go host it.. you could use a container?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which the example above shows

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one of the reasons that they have not been combined in one module is that you can use an S3 website without any CDN

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

point your DNS to the S3 website endpoint

Mohamed Lrhazi avatar
Mohamed Lrhazi

I think s3 website does not support ssl, so one needs cloudfront for that…

@oscarsullivan_old yes, I meant here is my static content, host it in s3+cloudfront… use my pre-created ssl cert, but do everything else for me. s3/perms/route53/cloudfront. all from one config file where I just say: fqdn, content path, and I guess ssl arn. but I think with all the examples you guys pointed out… I’m probably very close.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Just copy the example, it’s two modules to call instead of one

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And the data source will lookup your pre-created SSL cert

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or just provide the cert ARN and don’t use the lookup
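
For reference, a rough sketch of what that two-module combination could look like, in the style of the examples above. This is assumption-heavy: the exact input and output names should be checked against each module’s README, and the ACM data source assumes the cert already exists in the account:

# Hypothetical sketch -- verify input/output names against the module READMEs
data "aws_acm_certificate" "default" {
  domain   = "catholic.edu"
  statuses = ["ISSUED"]
}

module "website" {
  source    = "git::https://github.com/cloudposse/terraform-aws-s3-website.git?ref=master"
  namespace = "example"
  stage     = "prod"
  name      = "site"
  hostname  = "www.catholic.edu"
}

module "cdn" {
  source              = "git::https://github.com/cloudposse/terraform-aws-cloudfront-cdn.git?ref=master"
  namespace           = "example"
  stage               = "prod"
  name                = "site"
  aliases             = ["www.catholic.edu"]
  origin_domain_name  = "${module.website.s3_bucket_website_endpoint}" # assumed output name
  acm_certificate_arn = "${data.aws_acm_certificate.default.arn}"
}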

xluffy avatar

@Andriy Knysh (Cloud Posse) I have a question this module

https://github.com/cloudposse/terraform-aws-vpc-peering/blob/master/variables.tf#L2

enabled is boolean variable. I see you check this variable in main.tf like that count = "${var.enabled == "true" ? 1 : 0}"

Why not use enabled = true and check with syntax count = "${var.enabled ? 1 : 0}"? I think that makes more sense.

cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because TF recommends using strings instead of boolean values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Input Variables - 0.11 Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the issue is that booleans are always converted to strings in TF

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but that conversion is different (not consistent) in variables, tfvars files and on the command line
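
A minimal sketch of the 0.11-era pattern being discussed, with the flag typed as a string and compared explicitly:

variable "enabled" {
  type        = "string"
  default     = "true"
  description = "Set to `false` to prevent the module from creating any resources"
}

# The string comparison behaves consistently in variables, tfvars files and -var flags
resource "aws_vpc_peering_connection" "default" {
  count = "${var.enabled == "true" ? 1 : 0}"
  # ...
}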

xluffy avatar

wow, thank you I see

oscarsullivan_old avatar
oscarsullivan_old

@xluffy did you get the peering working? Just recently set this up myself. Let me know if any Qs

xluffy avatar

thanks @oscarsullivan_old, I’m just curious

2019-03-20

Arvind avatar

I have successfully inserted my AWS_ACCESS_KEY and AWS_SECRET_KEYS in Vault

MacBook-Pro-5$ vault read  secret/infra_secrets
Key                 Value
---                 -----
refresh_interval    xxh
access_key          xxxxx
secret_key          xxxx

Now can anyone suggest how I should use these keys in my main.tf (code piece) so I can provision the infra in AWS?

oscarsullivan_old avatar
oscarsullivan_old

By using Geodesic’s assume-role function or by running aws-vault exec [profile] [cmd]

1
oscarsullivan_old avatar
oscarsullivan_old

so

oscarsullivan_old avatar
oscarsullivan_old

aws-vault exec sandbox terraform apply
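
Alternatively, if you want Terraform itself to read the keys out of Vault, here’s a hedged sketch using the Vault provider’s generic secret data source (assumes VAULT_ADDR and VAULT_TOKEN are set in your environment; note the keys will end up in your state file):

# Reads the same secret shown by `vault read secret/infra_secrets`
data "vault_generic_secret" "infra_secrets" {
  path = "secret/infra_secrets"
}

provider "aws" {
  region     = "us-east-1" # assumption
  access_key = "${data.vault_generic_secret.infra_secrets.data["access_key"]}"
  secret_key = "${data.vault_generic_secret.infra_secrets.data["secret_key"]}"
}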

xluffy avatar

hey, I have a module for creating 2 VPCs + another module for creating a peering between the 2 VPCs. In the peering module, I have a read-only data source for counting route tables. But the count can’t be computed if the VPCs haven’t been created first. How do I make one depend on the other?

Error: Error refreshing state: 2 error(s) occurred:

* module.peering.data.aws_route_table.acceptor: data.aws_route_table.acceptor: value of 'count' cannot be computed
* module.peering.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed
oscarsullivan_old avatar
oscarsullivan_old

Ultimate goal is to create VPCs and peer them? anything more complex?

xluffy avatar

yeah, just create 2 vpcs, after that, will create a peering between them

xluffy avatar

terraform doesn’t support depends_on for modules.

oscarsullivan_old avatar
oscarsullivan_old

5 mins

1
oscarsullivan_old avatar
oscarsullivan_old

Two different projects

oscarsullivan_old avatar
oscarsullivan_old

Those are the main tf files

oscarsullivan_old avatar
oscarsullivan_old

So I run vpc.tf’s project inside of my geodesic module for each stage

oscarsullivan_old avatar
oscarsullivan_old

and then I run vpc_peering in each sub-accounts module that isn’t mgmt or root

oscarsullivan_old avatar
oscarsullivan_old

This is what I do to peer MGMT to all other sub accounts

xluffy avatar

yeah, that will work, because you create the VPC first (run vpc.tf).

But if you have a peering module + vpc module in one project, the peering module can’t query data from a VPC that hasn’t been created yet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

think about creating projects (e.g. separate states) based on how they need to be applied

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so if one resource depends on the outputs of another module like a vpc, it might make more sense to separate them out

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

rather than needing to rely on -target parameters to surgically apply state
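
A rough sketch of that split, assuming the VPC project exports the IDs as outputs and both projects keep state in S3 (bucket, key and output names here are hypothetical):

# In the peering project, read the already-applied VPC project's outputs
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "acme-terraform-state"  # hypothetical bucket
    key    = "vpc/terraform.tfstate" # hypothetical key
    region = "us-east-1"
  }
}

module "peering" {
  source           = "git::https://github.com/cloudposse/terraform-aws-vpc-peering.git?ref=master"
  namespace        = "acme"
  stage            = "dev"
  name             = "peering"
  requestor_vpc_id = "${data.terraform_remote_state.vpc.requestor_vpc_id}" # assumed output names
  acceptor_vpc_id  = "${data.terraform_remote_state.vpc.acceptor_vpc_id}"
}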

xluffy avatar

https://github.com/cloudposse/terraform-aws-vpc-peering/blob/master/main.tf#L38

data "aws_route_table" "requestor" {
  count     = "${var.enabled == "true" ? length(distinct(sort(data.aws_subnet_ids.requestor.ids))) : 0}"
  subnet_id = "${element(distinct(sort(data.aws_subnet_ids.requestor.ids)), count.index)}"
}

It looks up route tables from a VPC. If that VPC hasn’t been created yet, it will fail.

cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering

oscarsullivan_old avatar
oscarsullivan_old


But if u have a peering module + vpc module in a project. Peering module can’t query data in a VPC if VPC doesn’t create.
That’s why they’re separate..

oscarsullivan_old avatar
oscarsullivan_old

Different ‘goals’ usually get isolated in my work

oscarsullivan_old avatar
oscarsullivan_old

Plus what if I want to create a VPC that isn’t peered etc

xluffy avatar

I see

oscarsullivan_old avatar
oscarsullivan_old

I also don’t use main.tf files.. I don’t like them

oscarsullivan_old avatar
oscarsullivan_old

I like a file per resource type

joshmyers avatar
joshmyers

I prefer a main, at least for common elements

1
joshmyers avatar
joshmyers

¯\_(ツ)_/¯

joshmyers avatar
joshmyers

That is quite a preference thing, not sure if quite a coding standard

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

agree with both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I like a main.tf for common stuff like the provider definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and then when a lot of stuff is going on, break it out by .tf files like @oscarsullivan_old

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Creating Modules - Terraform by HashiCorp

A module is a container for multiple resources that are used together.

oscarsullivan_old avatar
oscarsullivan_old


I like a main.tf for common stuff like the provider definition
I have that in my terraform.tf!

joshmyers avatar
joshmyers

What about that sneaky data source that gets used all over the shop by r53.tf vpc.tf etc

joshmyers avatar
joshmyers

Anyway, the shed is blue.

oscarsullivan_old avatar
oscarsullivan_old


sneaky data source
terraform.tf!

joshmyers avatar
joshmyers

¯\_(ツ)_/¯

oscarsullivan_old avatar
oscarsullivan_old

Joining the meetup tonight @joshmyers?

joshmyers avatar
joshmyers

Where is this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(PST)

joshmyers avatar
joshmyers

01:30 GMT? @oscarsullivan_old

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

6:30 PM Wednesday, Greenwich Mean Time (GMT)

oscarsullivan_old avatar
oscarsullivan_old

haha not quite

oscarsullivan_old avatar
oscarsullivan_old

6:30

oscarsullivan_old avatar
oscarsullivan_old

after work sadly

joshmyers avatar
joshmyers

ahh, I need to nip out but would be good to try and make that

oscarsullivan_old avatar
oscarsullivan_old

I’ll have only just arrived at home at 6:10

oscarsullivan_old avatar
oscarsullivan_old

Deffo do

oscarsullivan_old avatar
oscarsullivan_old

https://github.com/cloudposse/terraform-aws-eks-cluster SHOULD I create a new VPC just for EKS?

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

oscarsullivan_old avatar
oscarsullivan_old

.. Or can I safely use my existing VPC that my sub-account uses for everything

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can definitely use the same vpc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the “SHOULD” would come down to a business requirement

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. “the EKS cluster SHOULD run in a separate PCI compliant VPC”

oscarsullivan_old avatar
oscarsullivan_old

Ah awesome

oscarsullivan_old avatar
oscarsullivan_old

Not a technical requirement, great

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup

oscarsullivan_old avatar
oscarsullivan_old

Ok this will help me get setup with EKS quickly

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha,

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and open issues that others have encountered with EKS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

most of the time errors encountered are due to missing a step

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

tested many times by a few people

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

oscarsullivan_old avatar
oscarsullivan_old

Thanks! Will spend tomorrow reading those and setting up first k8 cluster

2019-03-21

DaGo avatar

Has anyone tried this wizardry? https://modules.tf

modules.tf - Get your infrastructure as code delivered as Terraform modules

Your infrastructure as code delivered as Terraform modules

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@antonbabenko

modules.tf - Get your infrastructure as code delivered as Terraform modules

Your infrastructure as code delivered as Terraform modules

antonbabenko avatar
antonbabenko

@DaGo Yes, I tried it.

xluffy avatar

we have the author of this site here.

1
1
oscarsullivan_old avatar
oscarsullivan_old

I really like Cloudcraft

oscarsullivan_old avatar
oscarsullivan_old

here you go @DaGo

fiesta_parrot1
oscarsullivan_old avatar
oscarsullivan_old

Cloudcraft doesn’t allow you to configure objects like ALB (e.g. listener and redirects)

oscarsullivan_old avatar
oscarsullivan_old

here’s a simple eu-west 2 ALB + EC2

oscarsullivan_old avatar
oscarsullivan_old

I think it’s really good for starting a new framework for a project

DaGo avatar

Awesome. Many thanks, Oscar!

oscarsullivan_old avatar
oscarsullivan_old

Here’s a question for the crowd: Do you prefer to leave a default for variables?

Example:

variable "stage" {
  type        = "string"
  default     = "testing"
  description = "Stage, e.g. 'prod', 'staging', 'dev' or 'testing'"
}

My take: I personally don’t like leaving defaults to variables like ‘name’ and ‘stage’, but don’t mind for ‘instance_size’. The reason is I’d rather it failed due to a NULL value and I could fix this var not being passed to TF (from, say, Geodesic) than read the whole PLAN and check I’m getting the correct stage etc.

What do you think?

oscarsullivan_old avatar
oscarsullivan_old

Another example in a Dockerfile:

FROM node:8.15-alpine

ARG PORT
EXPOSE ${PORT}

I could have ARG PORT=3000, but I’d rather it failed due to a lack of port definition than go through the build process and find the wrong / no port was exposed.

oscarsullivan_old avatar
oscarsullivan_old

I’d rather have NO PORT than the WRONG port for my built image. I feel it is easier for me to catch the NO PORT than the WRONG port.

mmuehlberger avatar
mmuehlberger

I like defaults for things where I know that I’m not going to change them in every stage/environment, or variables where it makes sense to have one (e.g. require stage to be set explicitly, but like you said, something like instance_type is fine, even though you might want to change it in every stage anyway). For something like PORT I’d also set a default, usually to whatever the default is for the technology I’m using.

My take on defaults is basically: try to make it as easy as possible to use whatever you are building, with as little extra configuration as possible.

oscarsullivan_old avatar
oscarsullivan_old

RE: AKS cluster

variable "image_id" {
  type        = "string"
  default     = ""
  description = "EC2 image ID to launch. If not provided, the module will lookup the most recent EKS AMI. See <https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html> for more details on EKS-optimized images"
}

variable "eks_worker_ami_name_filter" {
  type        = "string"
  description = "AMI name filter to lookup the most recent EKS AMI if `image_id` is not provided"
  default     = "amazon-eks-node-v*"
}
* module.eks_workers.data.aws_ami.eks_worker: data.aws_ami.eks_worker: Your query returned no results. Please change your search criteria and try again.

What are people specifying as their IMAGE_ID or what filter have they got for the ami_name_filter?

oscarsullivan_old avatar
oscarsullivan_old

Looks like they’re EKS specific, so I don’t want to use my standard AMI

oscarsullivan_old avatar
oscarsullivan_old

Looks like the pattern has changed. No v for the semantic version
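
So a hedged workaround is to loosen the filter passed to the module until it matches the renamed AMIs (the exact naming may vary by region and Kubernetes version):

eks_worker_ami_name_filter = "amazon-eks-node-*"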

oscarsullivan_old avatar
oscarsullivan_old
* aws_eks_cluster.default: error creating EKS Cluster (acme-sandbox-eks-cluster): InvalidParameterException: A CIDR attached to your VPC is invalid. Your VPC must have only RFC1918 or CG NAT CIDRs. Invalid CIDR: [14.0.0.0/16]

Hmmm looks like a valid CIDR to me

oscarsullivan_old avatar
oscarsullivan_old

Weird as they’re created with this:

module "vpc" {
  source    = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=master>"
  version   = "0.4.0"
  namespace = "${var.namespace}"
  stage     = "${var.stage}"
  name      = "vpc"
  cidr_block         = "${var.cidr_prefix}.0.0.0/16"
}

module "dynamic_subnets" {
  source             = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master>"
  namespace          = "${var.namespace}"
  stage              = "${var.stage}"
  name               = "dynamic_subnets"
  region             = "${var.aws_region}"
  availability_zones = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  vpc_id             = "${module.vpc.vpc_id}"
  igw_id             = "${module.vpc.igw_id}"
  cidr_block         = "${var.cidr_prefix}.0.0.0/16"
}
oscarsullivan_old avatar
oscarsullivan_old

Can the VPC and the SUBNET not have the same block?

oscarsullivan_old avatar
oscarsullivan_old

I thought it was more about linking them together

oscarsullivan_old avatar
oscarsullivan_old
cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

mmuehlberger avatar
mmuehlberger

Valid RFC1918 CIDRs are only those 3:

10.0.0.0        -   10.255.255.255  (10/8 prefix)
172.16.0.0      -   172.31.255.255  (172.16/12 prefix)
192.168.0.0     -   192.168.255.255 (192.168/16 prefix)
mmuehlberger avatar
mmuehlberger

Everything else is in the public IP space.

oscarsullivan_old avatar
oscarsullivan_old
cloudposse/terraform-aws-vpc

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

oscarsullivan_old avatar
oscarsullivan_old

because it is 10.0.0.0/16

oscarsullivan_old avatar
oscarsullivan_old

but should really be 10.0.0.0/8

oscarsullivan_old avatar
oscarsullivan_old

?

mmuehlberger avatar
mmuehlberger

AWS only allows VPC networks between /16 and /28, so nothing larger than a /16.

oscarsullivan_old avatar
oscarsullivan_old

So it sounds like 10.0.0.0/16 is not RFC1918 compliant then

mmuehlberger avatar
mmuehlberger

No. /8 means the first 8 bits must be fixed (10 in this case). /16 means the first 16 bits must be fixed (so the first 2 octets).

Steven avatar

10.0.0.0/16 is fine, 14.0.0.0/16 is not

Samuli avatar

10.0.0.0/16 is subnet of 10.0.0.0/8

oscarsullivan_old avatar
oscarsullivan_old

Oh

mmuehlberger avatar
mmuehlberger

With VPCs in AWS you can have a range of 10.0.0.0/16 to 10.255.0.0/16 as valid network CIDRs. Also 172.16.0.0/16 to 172.31.0.0/16 and 192.168.0.0/16.

oscarsullivan_old avatar
oscarsullivan_old

so 10.0.0.0/16 is valid

oscarsullivan_old avatar
oscarsullivan_old

but my 14.0.0.0/16 is not

oscarsullivan_old avatar
oscarsullivan_old

haha woops

oscarsullivan_old avatar
oscarsullivan_old

but had I gone for "10.${var.cidr_prefix}.0.0/16" all would be well

mmuehlberger avatar
mmuehlberger

(plus 100.64.0.0/16 to 100.127.0.0/16 but this is carrier-grade NAT, so better stay away from that)

mmuehlberger avatar
mmuehlberger

Exactly.

oscarsullivan_old avatar
oscarsullivan_old

Damn

oscarsullivan_old avatar
oscarsullivan_old

Alright, hopefully it should be easy to fix then

oscarsullivan_old avatar
oscarsullivan_old

Thanks chaps for explaining CIDR blocks and RFC1918

loren avatar

I forget where I read it, but there is a lot of truth to the quote, “in the cloud, everyone is a network engineer”

1
mmuehlberger avatar
mmuehlberger

Absolutely!

oscarsullivan_old avatar
oscarsullivan_old

My networking is so poor. For the last 2 years the networking aspect of my cloud was managed by the service provider!

loren avatar

VPCs, subnets, security groups, NACLs, VPNs, WAFs, load balancers, oh my!

oscarsullivan_old avatar
oscarsullivan_old

Yep! The only thing I had to manage were firewall ingress/egress rules and load balancer rules.. none of the setup and maintenance

oscarsullivan_old avatar
oscarsullivan_old

It was Infrastructure as a service really

oscarsullivan_old avatar
oscarsullivan_old

That’s convenient… it’s so abstracted that I only need to change 2 values in the same file

oscarsullivan_old avatar
oscarsullivan_old

Oh dear, terraform isn’t detecting the change properly and isn’t destroying the old subnets

Samuli avatar

try terraform destroy without the changes first?

oscarsullivan_old avatar
oscarsullivan_old

No it has totally desynced from the bucket

oscarsullivan_old avatar
oscarsullivan_old

so I’m making it local

oscarsullivan_old avatar
oscarsullivan_old

copying the S3 state

oscarsullivan_old avatar
oscarsullivan_old

pasting that into the local

oscarsullivan_old avatar
oscarsullivan_old

and then checking if terraform state list shows the resources

oscarsullivan_old avatar
oscarsullivan_old

then I’ll push it back up

oscarsullivan_old avatar
oscarsullivan_old

yep perfect, showing the machines again
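
For what it’s worth, terraform has built-in commands for this copy-down/copy-back dance, which avoids hand-editing backend config (a sketch of the manual flow described above):

terraform state pull > backup.tfstate      # copy the remote (S3) state to a local file
terraform state list -state=backup.tfstate # sanity-check the resources are all there
terraform state push backup.tfstate        # push the (possibly repaired) state back up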

oscarsullivan_old avatar
oscarsullivan_old
Plan: 26 to add, 2 to change, 26 to destroy.
oscarsullivan_old avatar
oscarsullivan_old

vs.

 ✓   (acme-sandbox-admin) vpc ⨠ terraform destroy
data.aws_availability_zones.available: Refreshing state...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: no


Error: Destroy cancelled.


oscarsullivan_old avatar
oscarsullivan_old

Specifically…

-/+ module.vpc.aws_vpc.default (new resource required)
      id:                                 "vpc-xxx" => <computed> (forces new resource)
      arn:                                "arn:aws:ec2:eu-west-2:xxx:vpc/vpc-xxx" => <computed>
      assign_generated_ipv6_cidr_block:   "true" => "true"
      cidr_block:                         "14.0.0.0/16" => "10.14.0.0/16" (forces new resource)

Should be compliant now

oscarsullivan_old avatar
oscarsullivan_old

Doh it’s happened again.. can’t delete modules

rbadillo avatar
rbadillo

Team, any suggestions on how to fix this error?

"data.aws_vpc.vpc.tags" does not have homogenous types. found TypeString and then TypeMap in ${data.aws_vpc.vpc.tags["Name"]}
rbadillo avatar
rbadillo

I have some Kubernetes Tags, doing some googling says to delete those tags but I want to avoid that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can delete/update those tags in kubernetes. Terraform data sources just read them and you can’t change them on the fly

ldlework avatar
ldlework

In the terraform-aws-ecs-codepipeline module, it has an example buildspec: https://github.com/cloudposse/terraform-aws-ecs-codepipeline#example-buildspec

cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

ldlework avatar
ldlework

Where are the variables like $REPO_URL and $IMAGE_REPO_NAME coming from?

ldlework avatar
ldlework

They’re not official build environment variables.

ldlework avatar
ldlework

Oh I see, it’s provided by the module.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

LeoGmad avatar
LeoGmad

Has anyone encountered issues lately with the module terraform-aws-dynamic-subnets? I believe it has something to do with AWS adding an AZ (us-west-2d) to the us-west-2 region some time in Feb.

I ended up removing the subnets in one of my environments before being able to recreate them. Not something I want to do in OPS. Any insight on the issue?

-/+ module.subnets.aws_subnet.public[0] (new resource required)
      id:                                                                                         "subnet-XXX" => <computed> (forces new resource)
      arn:                                                                                        "arn:aws:ec2:us-west-2:XXX:subnet/subnet-XXX" => <computed>
      assign_ipv6_address_on_creation:                                                            "false" => "false"
      availability_zone:                                                                          "us-west-2a" => "us-west-2a"
      availability_zone_id:                                                                       "usw2-az2" => <computed>
      cidr_block:                                                                                 "10.0.96.0/19" => "10.0.128.0/19" (forces new resource)

-/+ module.subnets.aws_subnet.public[1] (new resource required)
      id:                                                                                         "subnet-XXX" => <computed> (forces new resource)
      arn:                                                                                        "arn:aws:ec2:us-west-2:XXX:subnet/subnet-XXX" => <computed>
      assign_ipv6_address_on_creation:                                                            "false" => "false"
      availability_zone:                                                                          "us-west-2b" => "us-west-2b"
      availability_zone_id:                                                                       "usw2-az1" => <computed>
      cidr_block:                                                                                 "10.0.128.0/19" => "10.0.160.0/19" (forces new resource)
    
LeoGmad avatar
LeoGmad

Sorry I don’t have the actual error output but it complained about the CIDR already existing

oscarsullivan_old avatar
oscarsullivan_old

Does it already exist?

oscarsullivan_old avatar
oscarsullivan_old

I got bamboozled by that question today

LeoGmad avatar
LeoGmad

Well technically it’s that module.subnets.aws_subnet.public[1] which never gets deleted

LeoGmad avatar
LeoGmad

or updated.

LeoGmad avatar
LeoGmad

I may be able to use 2d and 2b in this case, but not sure what my solution is for my OPS env, which currently uses a, b, and c. I may have to just ride a second VPC for a migration.

oscarsullivan_old avatar
oscarsullivan_old

Good luck. I don’t think I have an answer for this!

LeoGmad avatar
LeoGmad

Actually, I may have found the solution! I’ll just temporarily go down to 2 subnets in OPS c, and d which should not conflict.

daveyu avatar

I ran into this too. It has to do with how the module subdivides CIDR blocks. With the addition of us-west-2d, the module wants to create a new private subnet, but it tries to assign to it the CIDR block that’s already given to the public subnet for us-west-2a

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can you take a look?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We specifically and deliberately tried to address this use-case in our module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but might have a logical problem affecting it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think your suggestion though is the best: hardcode the desired AZs for stability

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’ll look into that

LeoGmad avatar
LeoGmad

I see

daveyu avatar

i couldn’t figure out a fix.. fortunately my env was in a state where i could delete and recreate all the subnets

daveyu avatar

if you’re using terraform-aws-dynamic-subnets directly, it looks like you should be able to set availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]

1
LeoGmad avatar
LeoGmad

Thanks I am using terraform-aws-dynamic-subnets I’ll give it a go.

ldlework avatar
ldlework

What might be going wrong with my cloudposse/terraform-aws-ecs-codepipeline if I’m getting the following error on deploy:

ldlework avatar
ldlework

I’m not sure what IAM role is relevant here (there are so many O_O)

ldlework avatar
ldlework

I’m not even sure what is being uploaded to s3?

ldlework avatar
ldlework

Oh I deleted these lines from my buildspec:

      - printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPO_URI:$IMAGE_TAG" | tee imagedefinitions.json
artifacts:
  files: imagedefinitions.json
ldlework avatar
ldlework

Perhaps these are important? Not sure why there would be an IAM role error?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ldlework are you using this in the context of one of our other modules? e.g. the ecs-web-app?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if not, I would look at how our other modules leverage it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Build software better, together

GitHub is where people build software. More than 31 million people use GitHub to discover, fork, and contribute to over 100 million projects.

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) yeah I’ve been following right out of this, https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

ldlework avatar
ldlework

Adding those lines I removed seemed to make it work.

ldlework avatar
ldlework

The last thing that’s not working is that the Github Webhook doesn’t seem to do anything.

ldlework avatar
ldlework

I cut a release, Github says it sent the webhook, and nothing happens in CodePipeline even after 10 minutes.

ldlework avatar
ldlework

I could probably post my HCL since I don’t think it contains any secrets.

ldlework avatar
ldlework

Maybe it only works if the commit is different

ldlework avatar
ldlework

still nothing

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) do I have to click this even though I set up the terraform module with an oauth token with the right permissions? terraform was able to set up the webhook on my repo after all, http://logos.ldlework.com/caps/2019-03-21-22-33-18.png

attachment image
ldlework avatar
ldlework

It would be nice if AWS listed the webhook events somewhere

ldlework avatar
ldlework

I don’t have a clue what could be wrong

ldlework avatar
ldlework

When I list the webhooks via the AWS cli I see that there is a “authenticationConfiguration” section with a “SecretToken”

ldlework avatar
ldlework

I don’t see this secret token anywhere in the webhook on the github side

ldlework avatar
ldlework

Oh that’s probably the obscured “Secret”

ldlework avatar
ldlework

I have no idea

ldlework avatar
ldlework

The response on the github side says 200

ldlework avatar
ldlework

SSL Verification is enabled

ldlework avatar
ldlework

Even got a x-amzn-RequestId header in the response

ldlework avatar
ldlework

Filters on the Webhook:

                "filters": [
                    {
                        "jsonPath": "$.action", 
                        "matchEquals": "published"
                    }
                ]
ldlework avatar
ldlework

Webhook Payload:

{
  "action": "published",
  "release": {
ldlework avatar
ldlework

Hmm it was the password.

ldlework avatar
ldlework

That’s worrying. Oh well.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you mean the webhook secret?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As I recall, the webhook secret on GitHub cannot be updated (rotated)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You need to delete/recreate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

….at least with Terraform

ldlework avatar
ldlework

That might’ve been it.

2019-03-22

ldlework avatar
ldlework

When I try to destroy an ecs_codepipeline module by removing it from my HCL, I get:

Error: Error asking for user input: 1 error(s) occurred:

* module.ecs_codepipeline.module.github_webhooks.github_repository_webhook.default: configuration for module.ecs_codepipeline.module.github_webhooks.provider.github is not present; a provider configuration block is required for all operations
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

welcome to terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve encountered this as well. I don’t know how to get around it.

ldlework avatar
ldlework

lol jeeze

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I love terraform, but every tool has its limitations. Clean destroys are really hard to achieve with module compositions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yet managing infrastructure without composable modules is not scalable for the amount of infrastructure we manage.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re just hoping that these edge cases improve as terraform the language improves

ldlework avatar
ldlework

I recently joined a startup and I’m the only guy doing the infrastructure. CloudPosse modules have been a god-send, regardless of whatever little issues there are. I’ve almost got a completely serverless deployment of their stuff going, kicked off with Github releases, flowing through CodePipeline, deployed to Fargate, with CloudWatch events sending SNS notifications to kick off Typescript Lambda functions to send me Discord notifications for each step. All in about three weeks by myself, never having used Terraform before.

party_parrot1
ldlework avatar
ldlework

So yeah, blemishes aside…

ldlework avatar
ldlework

I’m old enough to remember when you had to call someone on the phone to get a new VPS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow! that sounds awesome

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’d like to get a demo of that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in 3 weeks!!

ldlework avatar
ldlework

Heh, it’s really mostly thanks to the CloudPosse modules..

oscarsullivan_old avatar
oscarsullivan_old

Anyone familiar with this with vpc_peering? Had it working a few weeks ago but not working this time around:

Error: Error applying plan:

2 error(s) occurred:

* module.vpc_peering.aws_vpc_peering_connection_options.accepter: 1 error(s) occurred:

* aws_vpc_peering_connection_options.accepter: Error modifying VPC Peering Connection Options: OperationNotPermitted: Peering pcx-xxx8b80615da5 is not active. Peering options can be added only to active peerings.
	status code: 400, request id: ed787be8-xxx-4c6c-xxx-117b303c9d84
* module.vpc_peering.aws_vpc_peering_connection_options.requester: 1 error(s) occurred:

* aws_vpc_peering_connection_options.requester: Error modifying VPC Peering Connection Options: OperationNotPermitted: Peering pcx-xxx8b80615da5 is not active. Peering options can be added only to active peerings.
	status code: 400, request id: eca6b1ab-xxx-4cef-xxx-ac6f80bd903f
oscarsullivan_old avatar
oscarsullivan_old

oh

oscarsullivan_old avatar
oscarsullivan_old

ok I ran it a second time without destroying it after it failed and its working

oscarsullivan_old avatar
oscarsullivan_old

guess it was a dependency thang

Igor avatar

I have an order of creation issue with an AutoScaling policy. I have one module that creates an ALB and Target Group and another that creates the AutoScaling policy, where I specify the target group resource_label. Terraform proceeds to create the AutoScaling policy using the Target Group, before the ALB->TargetGroup ALB listener rule is created, which causes an error. I tried a depends_on workaround by passing in the alb_forwarding_rule_id as a depends_on variable to the ASG module, but I assume I am still missing a step where I need to use this variable within the aws_autoscaling_policy resource block. Do I stick it in the count property?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Resources - Configuration Language - Terraform by HashiCorp

Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depends_on will not always work though

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because TF does it already automatically

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s just a hint

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if something could not be done b/c of AWS API or other things, it will not help

Igor avatar

It’s a cross-module scenario, which I think isn’t supported until 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can try --target which is not pretty

Igor avatar

It works fine if I run the apply twice.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s another method

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor just out of curiosity, https://www.terraform.io/docs/providers/aws/r/autoscaling_policy.html#example-usage, instead of autoscaling_group_name = "${aws_autoscaling_group.bar.name}" can you try autoscaling_group_name = "${aws_autoscaling_group.bar.id}" and see what happens ?

AWS: aws_autoscaling_policy - Terraform by HashiCorp

Provides an AutoScaling Scaling Group resource.

ldlework avatar
ldlework

@Igor you can resolve module dependencies by using the “tags” attribute on stuff I’ve found.

ldlework avatar
ldlework

@Igor If you don’t use a variable, it is optimized away

ldlework avatar
ldlework

And doesn’t affect ordering

ldlework avatar
ldlework

So an easy way to use a dependent variable even though you don’t need it, is to stick the variable in a Tag on the resource

ldlework avatar
ldlework

This worked perfectly for me, for your exact use case which I also ran into
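
A minimal sketch of that workaround, with hypothetical variable names. Interpolating the upstream ID into a tag makes Terraform see the dependency and order the resources, even though the value itself is never really used:

resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = "${var.vpc_id}"

  tags = {
    Name      = "example"
    DependsOn = "${var.alb_listener_rule_id}" # hypothetical upstream output; the unused value forces ordering
  }
}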

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh nice, this is a solution we did not think about, thanks

loren avatar

heh, tags and descriptions, had to do that before myself. any reference to force terraform to see the dependency when it resolves the config

Igor avatar

But then you’re left with an unwanted tag

ldlework avatar
ldlework

It’s not the worst thing, because after all the module the tagged asset comes from does depend on whatever asset you’re interpolating into the tag.

Igor avatar

It actually makes sense, but I don’t think ASG policy has tags

ldlework avatar
ldlework

Yeah that’s the only limitation I guess

ldlework avatar
ldlework

I can’t think of a way around it if there’s no field to stick the dependency on

ldlework avatar
ldlework

Something crazy like, using the dependency in a locals block in which you take just the first letter or something, and then use that local in the name of your ASG policy

ldlework avatar
ldlework

lol

Igor avatar

I can use count

Igor avatar

Can’t I?

Igor avatar

Just check that the depends_on is not empty

Igor avatar

useless check, but maybe it will work

Igor avatar

I’ll give it a try and report back

loren avatar

policies have descriptions

ldlework avatar
ldlework

hah

Igor avatar

Description not on the policy, count didn’t work, ended up putting the first letter of the ID in the policy name. Seems to work

ldlework avatar
ldlework

lmfao

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or you can create a list, put in there two items, one is real name, the other is something from the dependency

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Then get the first item

ldlework avatar
ldlework

Every time I apply I get the following change:

module.frontend-ecs.module.ecs_codepipeline.aws_codepipeline.source_build_deploy: Modifying... (ID: us-west-1-qa-frontend-codepipeline)
  stage.0.action.0.configuration.%:          "4" => "5"
  stage.0.action.0.configuration.OAuthToken: "" => "<OAUTH-TOKEN>"

What is actually being changed here? Why is it always 4 -> 5? Why does it think the OAuth token is changing each time even though it is not?

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) XD

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We had similar issues with oauth tokens before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think terraform or AWS ignore them, forgot who

Igor avatar

@Andriy Knysh (Cloud Posse) Great suggestion, thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

So you need to put the field into ignore_changes. Not good though if it ever changes.
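
i.e. something like this inside the module, matching the attribute path from the plan output above. A sketch only, and since lifecycle blocks can’t be passed in from outside, it would have to live in the module itself:

resource "aws_codepipeline" "source_build_deploy" {
  # ...

  lifecycle {
    # Stop the perpetual diff on the secret value Terraform can't read back
    ignore_changes = ["stage.0.action.0.configuration.OAuthToken"]
  }
}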

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) Can anything use a lifecycle block?

ldlework avatar
ldlework

Also, I can’t add the lifecycle block to a CloudPosse module right?

ldlework avatar
ldlework

Since this is happening inside the ecs_codepipeline module

ldlework avatar
ldlework

I wonder if anything on AWS is actually getting changed though. Or if it is just needlessly updating the Terraform state each time.

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) any clue?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lifecycles cannot be interpolated or passed between modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe this is changing in 0.12? not sure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in the end, I think it’s hard to escape needing a task runner for complicated setups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(e.g. make or variant or terragrunt or astro)

ldlework avatar
ldlework

How does that help in this case?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you use terraform ... -target= to surgically target what you want to modify

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in the order in which needs to happen that terraform was not able to figure out

ldlework avatar
ldlework

In this case it just thinks that the github oauth token inside of the ecs-codepipeline is changing each time. So I’m not sure that it is a matter of ordering.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh sorry, i didn’t look closely enough at what you wanted to solve

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in this case, yea, maybe not the best solution

ldlework avatar
ldlework

If the thing that’s being updated is just the Terraform state, it might be no big deal to just let it update the state needlessly each time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

really, the only option is to put the lifecycle in the module but then it applies to everyone

ldlework avatar
ldlework

I can’t really tell by the message what’s changing though.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

jumping on call

ldlework avatar
ldlework

Yeah.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s not that it’s changing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s that terraform doesn’t know what the current value is

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so it always changes

ldlework avatar
ldlework

But does it have any consequences?

ldlework avatar
ldlework

Like changing infrastructure etc? Seems not? Can’t tell though.

Charlie Mathews avatar
Charlie Mathews

Might @anyone know why the cloudposse/terraform-aws-ecs-container-definition doesn’t include VolumesFrom or Links? I’m trying to figure out if I should tack those things onto the json output in a hacky way or submit an MR.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we didn’t implement the full spec, just what we needed or others have contributed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you see a way of adding support for it, please do!

Charlie Mathews avatar
Charlie Mathews

Will do, thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Please open a PR

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ldlework, no consequences

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It just considers the token a secret

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And does not know, or thinks it doesn’t know, the value

ldlework avatar
ldlework

Makes sense

ldlework avatar
ldlework

How do you all handle different environments? Right now I’ve got a single branch, with a bunch of HCL, and a Summon secrets.yml which contains the shared and environment specific Terraform variables. So by selecting an environment with Summon, different variables go into Terraform and I’m able to build different environments in different regions like that.

ldlework avatar
ldlework

However, another way I’ve been thinking about it, is having a branch per environment so that the latest commit on that branch is the exact code/variables etc that deployed a given enviornment.

ldlework avatar
ldlework

This allows different environments to actually have different/varying HCL files. So in a given environment you can actually change around the actual infrastructure and stuff. Then by merging changes from say, staging into production branch, you can move incremental infrastructure changes down the pipeline.

ldlework avatar
ldlework

I wonder what you are all doing.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Long-lived branches tend to be discouraged by most companies we’ve worked with. Typically, they want master as the only long-lived branch.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

some use a hierarchy like prod/us-west-2/vpc, staging/us-west-2/vpc and then a modules/ folder where they pull from

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use a 1-repo-per-account approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

prod.cloudposse.co, root.cloudposse.co, etc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we treat our repos like a reflection of our AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then we have our terraform-root-modules service catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) Do you know what I mean though? Let’s say you wanted to make substantial changes of a major module by say, switching from self-hosted DB to Aurora Serverless (say). If you have just one branch that holds all the environments, then all environments must see this module change at the same time. VS, if you have a branching strategy, then you can make the major changes in a dev environment, then merge it into a staging environment, and then finally merge it into production/master. Is that not a problem you face / approach you see being used?

loren avatar

I’ve tried using branches, but the trouble is that they are not visible enough. People lose track of them too easily. So for them to work, we need another layer over them anyway that cycles through various actions (validate/plan/apply/etc). Separate directories or repos seem easier to grok

ldlework avatar
ldlework

Huh, branches having low visibility. Not sure I understand that exactly, but thank you for your input.

ldlework avatar
ldlework

I like the prospect of branching allowing for experimental environments to be spun up and destroyed creatively.

ldlework avatar
ldlework

It appears this is the strategy that Hashicorp recommends for its enterprise users.

loren avatar

My personal approach has been to write a module that abstracts the functionality, and then one or more separate repos that invoke the module for a specific implementation

ldlework avatar
ldlework

Sure but consider a major refactor of that module.

ldlework avatar
ldlework

I appreciate how parametric modules allow you to achieve multiple environments, that’s what I’m doing right now.

ldlework avatar
ldlework

But it seems like the repo would overall be more simple with less going on in a given branch.

loren avatar

Tags as versions, each implementation is isolated from changes to the module

ldlework avatar
ldlework

This requires you to use external modules, no?

loren avatar

Define “external”… You can keep the module in the same repo if you like

ldlework avatar
ldlework

So now you have the maintenance overhead of a new repo * how many modules.

ldlework avatar
ldlework

How do you tag modules that live in the same repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use one repo for our service catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and one repo to define the invocation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

since we don’t use the same repo, we can surgically version pin every single module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and never force an upgrade we didn’t want
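
Concretely, each invocation repo pins its own module version via the source ref, so upgrades are opt-in per account (repo names and tags here are hypothetical):

# staging account repo
module "vpc" {
  source = "git::https://github.com/acme/terraform-root-modules.git//aws/vpc?ref=tags/0.3.0"
  # ...
}

# prod account repo stays pinned to an older tag until it's ready to upgrade
module "vpc" {
  source = "git::https://github.com/acme/terraform-root-modules.git//aws/vpc?ref=tags/0.2.5"
  # ...
}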

loren avatar

It’s easier to pin if they are separate repos, for sure

loren avatar

Not impossible if they are in the same repo, but some mental gymnastics are needed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s the only way it can work for us since we build infra for multiple organizations

ldlework avatar
ldlework

It makes perfect sense to me that you’d externalize modules you’re reusing across organizations, sure.

ldlework avatar
ldlework

I’m just one person at a tiny startup right now though.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know monorepos have gained in popularity over recent years and major companies have come out in favor of them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think they can work well inside of an organization that builds 99% of their stuff internally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but open source is necessarily democratized into a number of smaller repos (think NPM, ruby, perl, go, and ……. terraform modules)

ldlework avatar
ldlework

It’s not really a mono-repo in the sense that there are multiple disjoint substantial projects in a single repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it sounds like you’re going to have modules within your repo that have different SDLCs

ldlework avatar
ldlework

No that’s the point, they don’t really have different SDLCs.

ldlework avatar
ldlework

They’re not really independent abstractions we intend to reuse in multiple places.

ldlework avatar
ldlework

Most of my HCL code is just invocations of high-level CloudPosse modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Let’s say you wanted to make substantial changes of a major module by say, switching from self-hosted DB to Aurora Serverless (say). If you have just one branch that holds all the environments, then all environments must see this module change at the same time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so we don’t have more than one environment per repo

ldlework avatar
ldlework

What has different SDLCs are the overall architecture of the different environments.

ldlework avatar
ldlework

Yes, that is in terms of the HCL defining our architecture, but I don’t really have a reusable module which defines arbitrary DBs

loren avatar

You can try tf workspaces also, that might work for your use case

ldlework avatar
ldlework

I guess I was just waiting to hear “Yeah branches sound like they’ll work for your use-case too”

ldlework avatar
ldlework

Thank you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s no right/wrong, only different opinions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can achieve what you want to do with branches/workspaces

loren avatar

My experience has just been that I always, eventually, need to separate configuration from logic

ldlework avatar
ldlework

I think you could even combine approaches.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what i don’t like about workspaces is they assume a shared state bucket

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we never share a statebucket across stages or accounts

ldlework avatar
ldlework

I don’t intend on using Workspaces. I’ve read enough about them to avoid them for now.

ldlework avatar
ldlework

I almost went full Terragrunt, but I’m trying to thread the needle with my time and workload

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, well, a lot of good patterns they’ve developed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve borrowed the terraform init -from-module=... pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what we didn’t like about terragrunt is that it overloads .tfvars with HCL-like code that isn’t supported by terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so it’s writing something vendor-locked

ldlework avatar
ldlework

“Why didn’t you use k8s or EKS?”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, ok, so yea that’s one approach that appeals to many. just another camp

ldlework avatar
ldlework

“Uhh, because I can fit CodePipeline in my head” lol

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) So basically your suggestion to me is to have two repos: 1 repo that contains all the modules. 1 repo which contains the environment-specific calls into those modules to build environment-specific infrastructure. Each environment’s HCL that uses the modules can pin to different versions of those modules, even though all the modules are in one repo, since each module call has a different source parameter. Is this close?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nailed it
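
For illustration, a minimal sketch of what one environment’s invocation might look like (the repo URL, module path, tag, and variables are all hypothetical):

# environments/staging/main.tf
module "vpc" {
  source     = "git::https://github.com/acme/terraform-root-modules.git//aws/vpc?ref=tags/0.3.0"
  namespace  = "acme"
  stage      = "staging"
  cidr_block = "10.1.0.0/16"
}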

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so technically, it can all be in one repo and still use source pinning to the git repo, just my head hurts doing it that way

loren avatar

Mental gymnastics

loren avatar

Also, Chinese finger traps

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) so I guess my only concern is moving enhancements made in one environment’s HCL to another environment, and the next. It all comes down to how much per-environment HCL there is to arrange the modules in order to construct the infrastructure. To put things at their extremes to make the point: in order to have essentially zero manual maintenance of moving changes between environments, the modules would have to be essentially so high-level that they simply implement the entire environment, rather than the environment HCL having room to vary in its composition of slightly-lower-level modules and being able to express different approaches to infrastructure architecture.

ldlework avatar
ldlework

little wordy there, i just hope to be understood (so that I can in turn understand too)

ldlework avatar
ldlework

This is contrasted to the “per-environment HCL” being in branches and letting Git do the work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, it comes at a trade off of being more effort to manage changes

ldlework avatar
ldlework

(whether or not you use remote modules)

ldlework avatar
ldlework

Basically my point is, you’ll always have to manually “merge” across environments if the per-environment HCL is in a single branch, regardless of module use.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


…the modules would have to be essentially so high-level that they simply implement the entire environment

ldlework avatar
ldlework

(so maybe if you combined approaches you’d achieve the theoretical minimum maintenance)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this = “terraform root module”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the root invocation of a module is a module

ldlework avatar
ldlework

Oh you’re saying there is literally no per-environment HCL?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep

ldlework avatar
ldlework

Per-environment HCL is a tagged “highest-level-module” ?

ldlework avatar
ldlework

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Modules - Configuration Language - Terraform by HashiCorp

Modules allow multiple resources to be grouped together and encapsulated.

ldlework avatar
ldlework

lol why you link that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
02:22:20 AM
ldlework avatar
ldlework

lol no I know

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to define what a “Root module” is from the canonical source

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so just saying the root is a module. treat it like one. invoke it everywhere you need that capability

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

use .tfvars to tailor the settings
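
For example, the per-environment settings might live in a small tfvars file (names and values hypothetical):

# staging/terraform.tfvars
namespace     = "acme"
stage         = "staging"
instance_type = "t2.small"
min_size      = 1
max_size      = 2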

loren avatar

Module in module inception

ldlework avatar
ldlework

Yeah, but you might have variance across environments in the implementation of that module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is why we like the terraform init -from-module=...

1
ldlework avatar
ldlework

And so what you’re saying is that, you achieve that, by having a separate repo, which simply refers to a tagged implementation of that module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so you can invoke an environment 20 times

ldlework avatar
ldlework

Clever.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just call terraform init -from-module 20x

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(in different folders)
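
As a rough sketch of that flow (paths and tags hypothetical):

# each folder becomes its own project with its own state
(cd envs/dev/ecs  && terraform init -from-module="git::https://github.com/acme/terraform-root-modules.git//aws/ecs?ref=tags/0.3.0")
(cd envs/prod/ecs && terraform init -from-module="git::https://github.com/acme/terraform-root-modules.git//aws/ecs?ref=tags/0.2.0")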

loren avatar

And then you have terragrunt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i gotta run, but this is why we came up with tfenv

ldlework avatar
ldlework

In the configuration repo, where each environment simply invokes a tag of the remote root-module, the amount of code you have per-environment is so little there’s no reason to have different branches.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s the unwrapper for using terraform init -from-module=...

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can just specify environment variables that terraform init already supports

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and it will clone the remote module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and initialize it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

here’s my post on it

loren avatar

Also, highly recommend dependabot or something similar for bumping version tags

ldlework avatar
ldlework

The root modules don’t really seem like all-encompassing modules which compose many (say) AWS modules to build a whole infrastructure.

ldlework avatar
ldlework
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

Like one just sets up an ECR registry, etc. A real service is going to require a bunch of these “root modules” composed together to make it happen.

ldlework avatar
ldlework

So any “environment repo” which is supposed to just call out to a single tagged remote root module to build the entire environment doesn’t seem like it would work with these root modules. Like, the HCL in the environment repo is going to be pretty substantial in order to tie these together. And then you have the manual merging problem again.

loren avatar

Modules all the way down, that’s where inception comes in

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you, and every company that wants to use the strategy, should create their own catalog of root modules (which is the logic), and then invoke them from diff repos (the config)

loren avatar

Discussion reminds me of this thread, https://github.com/gruntwork-io/terragrunt/issues/627

Best Practice Question: Single "stack" module with tfvars versus current recommendations? · Issue #627 · gruntwork-io/terragrunt

So the &quot;best practice&quot; layout as defined in the repository I&#39;ve seen with terragrunt has each module in its own folder with its own tfvars per environment My problem here is I have a …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you put in there as many modules of diff types as you want to have in ALL your environments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just diff approaches

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but I agree with @loren, it comes to separation of logic from config

ldlework avatar
ldlework

Hmm, do I understand that you’re saying that the root modules are really “root” with regards to individual substantial LAYERS of your environment’s architecture?

ldlework avatar
ldlework

The environment isn’t defined by calling a SINGLE module, but rather, it builds the environment out of a few major “layer-level root” modules?

loren avatar

Bingo

ldlework avatar
ldlework

I see.

loren avatar

Though, you could try to have one module to rule them all, with all the logic and calling all the modules conditionally as necessary

ldlework avatar
ldlework

So while there is more “cross environment” manual merging to be done than the ideal, it’s still probably less than would warrant separate branches.

ldlework avatar
ldlework

Well you wouldn’t do it conditionally.

ldlework avatar
ldlework

Each environment would pin to a different tag of the God Module

loren avatar

Yep, but the God module changes over time also, and different invocations of the module may need different things, so conditionally calling submodules becomes important

ldlework avatar
ldlework

Which you can pin to different tags, etc?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

consider them as a collection/catalog of everything you need in all your environments

ldlework avatar
ldlework

Yes, but there’s different conceptual abstraction levels of “everything you need in all your environments”

ldlework avatar
ldlework

You could imagine a SINGLE remote module being the sole invocation that defines everything for an environment.

ldlework avatar
ldlework

Or you could imagine a handful of remote modules which define the major layers of the architecture. VPC/Networking, ECS/ALB for each server, ECR, CloudWatch etc

ldlework avatar
ldlework

I dunno.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s your call how you configure them. We use the second approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The thing is you never ever want one module that does everything

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) see the thread above

ldlework avatar
ldlework

I was only thinking about one module because we were aiming at minimizing “cross environment merging”.

ldlework avatar
ldlework

One module, while silly, would allow each environment to simply pin to the exact implementation of the God Module relevant to that environment.

ldlework avatar
ldlework

That one module could in-turn compose Layer Modules

ldlework avatar
ldlework

Which in-turn compose Component Modules

ldlework avatar
ldlework

w/e

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A) terraform module composition has a lot of problems as you discovered

ldlework avatar
ldlework

But yeah overall I think I’m clearer on the approach you were all suggesting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

B) mono modules have huge state. We ran into this 2 years ago where a module was so big it took 20 minutes to run a plan

ldlework avatar
ldlework

It’s just another level similar to the way your “root” (I read “layer/stack”) modules compose component modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

C) huge state means huge blast radius. You cannot easily roll out small incremental changes

ldlework avatar
ldlework

You could.

ldlework avatar
ldlework

You would change the god module to compose the layer modules differently.

ldlework avatar
ldlework

Then you would update a given environment to the new pin of the god module

ldlework avatar
ldlework

Exactly how layer modules work with component modules.

ldlework avatar
ldlework

It’s just another level of abstraction.

ldlework avatar
ldlework

I’m not saying it’s useful.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no one God Module, we use separate folders for each service or a group of services (if that makes sense), so there is no one big single state

ldlework avatar
ldlework

(but just from a thought experiment of how to absolutely minimize the cross-environment maintenance of merging the per-environment HCL)

ldlework avatar
ldlework

(if one is to avoid branching strategy)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no merging as in git branches, you just make changes to the root modules and apply them in each env separately as needed

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) there is

ldlework avatar
ldlework

If each environment needs HCL to describe how it composes the high-level layer modules to describe the whole infrastructure for the environment

ldlework avatar
ldlework

then you need to manually adopt those changes from one environment to the next, since they’re all in a single branch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry, on my phone and don’t have the keyboard speed to keep up with this

1
ldlework avatar
ldlework

if each environment simply makes one module { } call, then the amount you have to manually copy-paste over as you move enhancements/changes down the environment pipeline is the smallest amount possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s true but that’s not what blast radius refers to

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t apply everything at once

ldlework avatar
ldlework

No, I’m just explaining why we’re talking about this in the first place

ldlework avatar
ldlework

to @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just the modules we need to change

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) he is doing a thought experiment

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) right

ldlework avatar
ldlework

you typically apply just major layers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

He is not asking what we do :-)

loren avatar

-target is hugely powerful

ldlework avatar
ldlework

yeah

ldlework avatar
ldlework

I’m going to refactor my stuff into layers like this tonight

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

He wants to understand why he should/shouldn’t do what he is proposing

ldlework avatar
ldlework

VPC/Networking, ECS/ALB/CodePipeline for a given Container, Database, Caching

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i get it

ldlework avatar
ldlework

Make “root modules” for each of those

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just saying that don’t apply everything and don’t merge anything - just make changes somewhere and apply step by step one by one

ldlework avatar
ldlework

Does it make sense to have a root module covering CodePipeline/ECS/ALB-target-and-listeners for a given container?

ldlework avatar
ldlework

Like everything needed to deploy a given container-layer of our overall product infrastructure

ldlework avatar
ldlework

Pass in the VPC/ALB info, etc

ldlework avatar
ldlework

Have it provision literally everything needed to build and run that container

ldlework avatar
ldlework

Have a module we can reuse for that for each container we have

ldlework avatar
ldlework

Or should there be a root module for each container?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if you have one module that tries to do all of that (beyond terraform not supporting it due to "count cannot be computed" errors), the problem is the amount of infrastructure it touches is massive. So if you just wanted to change an auto scale group max size, you will incidentally be risking changes to every other part of the infrastructure because everything is in one module.

ldlework avatar
ldlework

Sure, but it would be all the infrastructure having to do with a logical component of the overall microservice architecture.

ldlework avatar
ldlework

If anything related to that microservice gets boofed, the whole microservice is boofed anyway.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All I can say is you have to experience it first hand :-)

ldlework avatar
ldlework

That top-level module could of course be composed of other modules for doing just the ECS, just the CodePipeline, just the ALB

ldlework avatar
ldlework

infact, those are already written

ldlework avatar
ldlework

they’re yours

loren avatar

It’s hugely composable and there is no one right answer

ldlework avatar
ldlework

I run two container-layers in our microservice architecture with this module: https://gist.github.com/dustinlacewell/c89421595a20577f1394251e99d51dd8

loren avatar

Just tradeoffs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

exactly

ldlework avatar
ldlework

It does ECR/CloudWatch/Container/Task/ALB-target-listener/CodePipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we, for example, have big and small root modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

lmao

ldlework avatar
ldlework

wish i found that first link three days ago

ldlework avatar
ldlework

i literally just built all that, but not as good, and there’s more there like dns

loren avatar

Isn’t it Friday night? Here we all are on slack

ldlework avatar
ldlework

I work for a broke startup so I work all the time for the foreseeable future

ldlework avatar
ldlework

lol

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s get some beer

ldlework avatar
ldlework

OK

ldlework avatar
ldlework

Here’s a question I haven’t been able to answer for myself yet. Is a single ECR registry capable of holding multiple images? Or is a single ECR only for a single Docker image?

ldlework avatar
ldlework

I know that a Docker image can have multiple tags. I don’t mean that. I mean can you store multiple images in a single ECR. Do I need an ECR for each of my containers, or just one for the region?

ldlework avatar
ldlework

Cuz, Docker Registry obviously does multiple images, so I’m confused on this point.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one ECR repository can have just one image (with multiple tags)

ldlework avatar
ldlework

Got it thank you

ldlework avatar
ldlework

Makes sense, since the ECR name is set as the image name. Thanks for confirming.

loren avatar

Gotta sign off, catch some zzzzzzs. Good discussion! Night folks!

ldlework avatar
ldlework

o/

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) I must admit, I poured a cup of coffee instead.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

haha, maybe tomorrow, good night

ldlework avatar
ldlework

Another thing I worry about is that I have an app to containerize soon that listens on multiple ports.

ldlework avatar
ldlework

I think I will need to place this Fargate container into multiple target-groups, one for each of the ports.

ldlework avatar
ldlework

But the posse module for tasks only takes a single ARN

ldlework avatar
ldlework

Oh maybe you have just one target group for the container, but you add multiple listeners.

ldlework avatar
ldlework

That must be it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, if you’re using our terraform-aws-ecs-web-app module, you might want to fork it for your needs since it’s rather opinionated. it was designed to show how to use all of our other ecs modules together.

ldlework avatar
ldlework

I’m now pondering whether each top-level layer module should have its own remote state, and whether the environment HCL should pass values to layers by utilizing the remote_state data source. That’s basically essential if you’re going to use -target, right? Like, how else does the auroradb layer get the right private DNS zone id? It has a variable on its module for that. But how does the environment HCL pass it in, if the private zone isn’t being provisioned due to only the auroradb layer being targeted? It has to use the remote state data source to access the other layer module’s outputs, right?

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) thoughts on that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do use separate states for each root module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and we use both 1) remote state 2) data source lookup to communicate values b/w them
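
For instance, a downstream root module can read an upstream layer’s outputs through the terraform_remote_state data source (0.11 syntax; bucket, key, and output names are hypothetical):

data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "acme-staging-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-west-2"
  }
}

module "aurora" {
  source  = "git::https://github.com/acme/terraform-root-modules.git//aws/aurora?ref=tags/0.3.0"
  zone_id = "${data.terraform_remote_state.vpc.private_zone_id}"
}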

ldlework avatar
ldlework

I see.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but again, that depends

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we also use 3) write values from a module into SSM param store or AWS Secret Manager, then read those values from other modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store
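
A rough sketch of that SSM hand-off with plain resources (the parameter name and the aws_rds_cluster resource are hypothetical):

# in the root module that owns the value
resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/staging/db/endpoint"
  type  = "String"
  value = "${aws_rds_cluster.default.endpoint}"
}

# in a root module that consumes it
data "aws_ssm_parameter" "db_endpoint" {
  name = "/staging/db/endpoint"
}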

ldlework avatar
ldlework

Never even heard of that before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) I’m trying to imagine how all the state management would look on the environment/configuration repo that calls the various root modules.

ldlework avatar
ldlework

Right now, I just have a little folder with a tiny bit of HCL that sets up an S3 bucket and DynamoDB table, and then those details are used when I apply my actual HCL

ldlework avatar
ldlework

Would I have to have little side modules on the environment side for each root module the environment uses?

ldlework avatar
ldlework

I bet you guys have some crazy complex infrastructures

ldlework avatar
ldlework

I love all your abstractions, it’s great.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so, this is opinionated, but here’s what we have/use (in simple terms, regardless of complex infrastructure or not):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. TF modules https://github.com/cloudposse?utf8=%E2%9C%93&q=terraform-&type=&language= - this is logic and also just the definition w/o specifying the invocation
ldlework avatar
ldlework

I’ve been calling those Asset/Component modules. Individual AWS assets. Got it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Then we describe the invocation of those modules (which ones to use depends on your use case) in the catalog of module invocations https://github.com/cloudposse/terraform-root-modules
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

I’ve been calling those Layer Modules. They define major aspects/facets of an environment’s overall infrastructure. Got it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those invocations are identity-less, they don’t care where and how they are deployed, this is just logic

ldlework avatar
ldlework

Right, like you said some are big, some are small, but they compose the lower-level Asset modules.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Then for each environment (prod, staging, dev, test), we have a GitHub repo with a Dockerfile that does two things:
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1) Copies the invocation of the root modules from the catalog into the container (geodesic in our case) - this is logic

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

2) Defines ENV vars to configure the modules (the ENV vars could come from many diff places including Dockerfile, SSM param store, HashiCorp Vault, direnv, etc.) - this is config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we start the container and it has all the code and config required to run a particular environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so only the final container has identity and knows what it will be deploying and where and how (environment, AWS region, AWS account, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the final container could be run from your computer or from a CI/CD system (e.g #codefresh )

ldlework avatar
ldlework

OK I got confused along the way

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/prod.cloudposse.co

Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co

ldlework avatar
ldlework

You said “Copies the invocation of the root modules from the catalog into the container”. I thought the “catalog” was a repo with all the root/layer modules inside of it. And that this environment repo had some HCL that called root/layer modules out of the catalog to compose an entire environment architecture.

ldlework avatar
ldlework

Where did I go wrong? Is the catalog something different than the repo containing the root/layer modules?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/prod.cloudposse.co

Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each environment (prod, staging, dev) copies the entire catalog, OR only those modules that are needed for the particular environment

ldlework avatar
ldlework

Sure, and it also has some HCL of its own for calling the root/layer modules from the catalog right?

ldlework avatar
ldlework

The environments.

ldlework avatar
ldlework
04:33:32 AM

looks at the links.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it can have (if it’s very specific to the environment AND is not re-used in other environments)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but usually you want to re-use modules across diff environments (with different params of course)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we put those module invocations in the common catalog (root modules as we call it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so in other words, the catalog is a collection of reusable module invocations (not definitions) that you would reuse in your environments. But any of those environments could have some specific module definitions and invocations not from the catalog (if that makes sense)

ldlework avatar
ldlework

Yes, reuse modules, but in an experimental environment, you might be trying to save costs and switch from one DB to another DB, and so a different root/layer module would be called in that environment’s HCL right?

ldlework avatar
ldlework

Like, you probably use a half-dozen or so root/layer modules to create a moderate environment for a reasonable multi-layered service right?

ldlework avatar
ldlework

So there’s got to be something in the environment HCL which calls all those modules.

ldlework avatar
ldlework

Remember, there’s no God Module

ldlework avatar
ldlework

Like an environment doesn’t invoke just one God Module from the catalogue right? Because the catalog contains modules which only cover a single “layer” of the environment. ECS, Networking, Database, etc right?

ldlework avatar
ldlework

So each environment must have a bit of HCL with the invocations of the various layer modules which define the major sections of the infrastructure.

ldlework avatar
ldlework

lol I’m gonna make sense of this eventually

ldlework avatar
ldlework

I see in your Dockerfile you’re copying the various root modules which create various layers of the environment. account-dns, acm, cloudtrail, etc

ldlework avatar
ldlework

Which reaffirms my idea that root modules only cover a facet of the environment, so you need multiple root modules to define an environment.

ldlework avatar
ldlework

So where is the per-environment HCL that invokes those modules? in conf/ perhaps.

ldlework avatar
ldlework

There’s nothing in conf/!

ldlework avatar
ldlework

Are you mounting the environment’s HCL into the container via volume at conf/ ?

ldlework avatar
ldlework

The stuff that calls the root modules?

ldlework avatar
ldlework

OHH you’re literally CD’ing into the root modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or use atlantis

ldlework avatar
ldlework

and running terraform apply from inside the root modules themselves

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If we need something special or new, we add it to the catalog and then can copy to the container of that environment

ldlework avatar
ldlework

nothing calls the root modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If we need a smaller db instance for dev, all those settings are config and provided from ENV vars

ldlework avatar
ldlework

So let’s say you needed two “instances” of what a root module provides. Like say it provides all the ECS and related infrastructure for running a microservice web-container.

ldlework avatar
ldlework

Do you cd into the same root module and call it with different args?

ldlework avatar
ldlework

Or would you end up making two root modules, one for each container in the microservice architecture?

ldlework avatar
ldlework

I’m guessing the latter?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In our case, we would create a “big” root module combining the smaller ones, and put it into the catalog

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Then configure it from ENV vars from the container

ldlework avatar
ldlework

Let me give an example.

ldlework avatar
ldlework

Let’s say you have a single webservice container that does server-side rendering. Like a Django application serving its own static media. You have an Asset module like the CloudPosse ecs-codepipeline module, and others like the Task and Container modules. You create a Root Module which ties these together so you can deploy your Django application and everything it needs as a single Root Module. You might also have a Root Module for the VPC/Networking for that application. So, you have an environment dockerfile, and you start by CD’ing into the VPC Root Module, and you deploy it. Then you CD into the Django App’s Root Module, and you deploy that too. OK so now you have a Django web-service deployed.

ldlework avatar
ldlework

Now let’s say your dev team refactors the app so that Django only serves the backend, and you have an Nginx container serving the static assets for the frontend. So now you need another ECS-CodePipeline module, Task and Container modules. Just like for the Django container.

ldlework avatar
ldlework

Do you create another Root Module for the Nginx Frontend container, which calls the CodePipeline, Task and Container modules again?

ldlework avatar
ldlework

So then you’d have 3 Root Modules you’d be able to deploy independently? How would you resolve the duplication?

ldlework avatar
ldlework

(the calling of ecs-codepipeline, task, and container modules the same way in two root modules representing the two container services in your environment)

ldlework avatar
ldlework

BTW thanks for all the charity.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yea, good example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we mostly use k8s for that kind of things

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and did not have ECS infra such as that

ldlework avatar
ldlework

Sure but you can imagine someone using your modules for that (I am )

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but i guess we would create one module with all that stuff but with many params to be able to parameterize all the things and even switch some things on/off

ldlework avatar
ldlework

Sure, a root module which combines ecs-codepipeline, task and container, etc to deploy a single containerized microservice

ldlework avatar
ldlework

But what if you needed two? Two different containers but deployed essentially the same way.

ldlework avatar
ldlework

You already have one Root Module for deploying the unified container.

ldlework avatar
ldlework

But now you need two, so do you just copy paste the Root Module for the second container, as it is basically exactly the same minus some port mappings and source repo, and stuff?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s config

ldlework avatar
ldlework

Or would you create a high-level NON-root-module which expresses how to deploy an ecs-codepipeline, task and container together

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be parameterized

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, one module

ldlework avatar
ldlework

And then two root modules which just called the non-root-module with different settings?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

one non-root module though?

ldlework avatar
ldlework

But like

ldlework avatar
ldlework

OK, so you have one generalized root module for deploying ecs services. Great.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are just names (root or not)

ldlework avatar
ldlework

So where are the different configs for the different containers?

ldlework avatar
ldlework

Like in your prod.cloudposse.co example.

ldlework avatar
ldlework

Because it seems the only config that’s available is environmental config

ldlework avatar
ldlework

Like the difference between dev and prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to re-use the module (which can combine smaller modules), parameterize it and put it into the catalog

ldlework avatar
ldlework

But where would the difference between frontend and backend be, for calling the ecs root module twice?

ldlework avatar
ldlework

The catalog is the thing that holds root modules right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it holds modules, big and small, that can be re-used

ldlework avatar
ldlework

Or is the catalog the repo of modules that your root modules use?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Never compose “root modules” inside of other root modules. If or when this is desired, then the module should be split off into a new repository and versioned independently as a standalone module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

Sure, I think I’m down with that idea.

ldlework avatar
ldlework

OK, so you already have the ECS module right?

ldlework avatar
ldlework

It is a general parameterized module that can deploy any ECS service.

ldlework avatar
ldlework

You want to call it twice, with different parameters.

ldlework avatar
ldlework

Where do you store the different parameters for calling that module twice?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That would be considered a different project folder.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. conf/vpc1 and conf/vpc2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or conf/us-west-2/vpc and conf/us-east-1/vpc

ldlework avatar
ldlework

So you’d simply copy the Root Module out of the source Docker image twice?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

depends what you mean by “copy”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we terraform init both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using -from-module

ldlework avatar
ldlework

Like two ECS layers.

ldlework avatar
ldlework

A frontend and a backend.

ldlework avatar
ldlework

They can both be implemented by the "ecs" root module you have in your root modules example.

ldlework avatar
ldlework

By passing different settings to it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, so we init a new root module from that one. then we use terraform.tfvars to update the settings (or envs)

ldlework avatar
ldlework

So inside the prod Docker image, we copy the ECS root module… once? twice?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you want 2 ECS clusters, you copy it twice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but you wouldn’t have both clusters in the same project folder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you might have us-west-2/mgmt and us-west-2/public

ldlework avatar
ldlework

Right, so where do the parameters you pass to each copy come from? It’s the same HCL module, but you’re going to call it/init/apply it twice with separate states.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the parameters are stored in us-west-2/mgmt/terraform.tfvars

ldlework avatar
ldlework

Same HCL in terms of implementation - it’s been copied to two different places in conf/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and us-west-2/public/terraform.tfvars

ldlework avatar
ldlework

Where are those?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you create those var files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those are your settings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s what makes it unique to your org and not ours
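
So the layout might look roughly like this (folder names taken from the examples above; the file placement is an assumption):

conf/
  us-west-2/
    mgmt/
      main.tf            # populated via terraform init -from-module=...
      terraform.tfvars   # settings for the mgmt cluster
    public/
      main.tf
      terraform.tfvars   # settings for the public cluster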

ldlework avatar
ldlework

Are they baked into the prod shell image?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s one way

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they are always in the git repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is central to our CI/CD strategy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

first important to understand how that works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

when we use atlantis, then we deploy it in the account container

ldlework avatar
ldlework

OK so the environment specific repo has environment specific variable files for invoking the various root modules that the environment specific dockerfile/shell has available

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

when we do that, we cannot rebuild the container with every change; instead atlantis clones the changes into the container

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, depending on what you want, you can 1) copy the same module twice and use it with diff params; 2) create a "bigger" module combining the smaller ones and copy it once
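
Option 2 might look roughly like this (module source, tag, and parameters are hypothetical):

# a "bigger" root module instantiating the same service module twice
module "frontend" {
  source         = "git::https://github.com/acme/terraform-modules.git//ecs-service?ref=tags/0.1.0"
  name           = "frontend"
  container_port = 80
}

module "backend" {
  source         = "git::https://github.com/acme/terraform-modules.git//ecs-service?ref=tags/0.1.0"
  name           = "backend"
  container_port = 8080
}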

ldlework avatar
ldlework

I see

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ldlework I think you summarized it well

ldlework avatar
ldlework

Because the only environment variables that change between your environment repos are the environment-specific ones.

ldlework avatar
ldlework

Like between dev and prod.

ldlework avatar
ldlework

So you’re in dev, you copy the ECS module into the Docker image

ldlework avatar
ldlework

You’re loaded into the geodesic shell

ldlework avatar
ldlework

You’re ready to deploy root modules by CD’ing into them and running terraform apply

ldlework avatar
ldlework

You CD into the ecs root module

ldlework avatar
ldlework

How do you go about deploying the frontend

ldlework avatar
ldlework

And then how do you go about deploying the backend?

ldlework avatar
ldlework

They both depend on this generalized ECS root module we copied into the prod Docker image.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, make a "bigger" module with frontend and backend in it, and put it into the catalog in a diff folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that all depends on many diff factors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are talking about ideas and patterns here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how do you implement it, your choice

ldlework avatar
ldlework

Sure, I’m totally just fishing to understand how the posse does it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes I get it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and thanks for those questions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no solution is perfect

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

some of your use-cases might not be covered, or might be covered better by some other solutions

foqal avatar
foqal
05:23:00 AM

Helpful question stored to <@Foqal> by @Andriy Knysh (Cloud Posse):

I'm trying to imagine how all the state management would look on the environment/configuration repo that calls the various root modules...
ldlework avatar
ldlework

Since root modules are literally terraform apply contexts… how do they bootstrap their remote state?

ldlework avatar
ldlework

You CD into one and run terraform apply, but don’t I need to bootstrap its remote state first somehow?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it uses the S3 backend provisioned separately before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the backend is configured from ENV vars in the container
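
In practice that can be a partial backend block whose details are filled in at init time (bucket and table names hypothetical):

terraform {
  backend "s3" {}
}

# e.g., set in the container's environment:
export TF_CLI_ARGS_init="-backend-config=bucket=acme-staging-terraform-state -backend-config=key=vpc/terraform.tfstate -backend-config=region=us-west-2 -backend-config=dynamodb_table=acme-staging-terraform-state-lock"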

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

Do you guys provision like just one backend bucket, and then use varying names to store the various states in that bucket in different files? So you only have to provision the backend once?

ldlework avatar
ldlework
05:26:23 AM

looks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one backend per environment (AWS account)

ldlework avatar
ldlework

oh I see OK

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then each project is in a separate folder in the repo and in the backend S3 bucket

ldlework avatar
ldlework

by project you mean “root module where we run terraform apply” right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

ldlework avatar
ldlework

right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

BTW, take a look at the docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

little bit outdated, but can give you some ideas

ldlework avatar
ldlework

I really appreciate all the advice. It was way above and beyond. Thanks for sticking through all that. Really!

1
ldlework avatar
ldlework

Wow atlantis looks cool

ldlework avatar
ldlework

phew there is still so much to learn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Let’s talk about atlantis later :)

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Just in case you want to follow, #atlantis

ldlework avatar
ldlework

Oh dang, you can’t refer to modules in a remote git repo that are not at the root?

ldlework avatar
ldlework

rough

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

try this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform init -from-module=git::<https://github.com/cloudposse/terraform-root-modules.git//aws/users?ref=tags/0.53.3>
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that initializes the aws/users module

ldlework avatar
ldlework

I mean in a module { source = } block

ldlework avatar
ldlework

Or maybe that double slash works there too?

2019-03-23

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, that works in a source block too

ldlework avatar
ldlework

Nice

2019-03-25

xluffy avatar

Hey, I have a string variable 10.20.30.111 and I want to get the last element of this string; the expected output is 111. I can use

value = "${element(split(".", var.private_ip), length(split(".", var.private_ip)) - 1 )}"

But that’s too complex, any suggestions?

oscarsullivan_old avatar
oscarsullivan_old

Is that too complicated?

xluffy avatar

Hmm, it works but it’s too complicated for me. I want to make sure there isn’t a simpler solution

oscarsullivan_old avatar
oscarsullivan_old

you’re splitting on “.” and looking for the last index of the output

oscarsullivan_old avatar
oscarsullivan_old

LGTM

1
mmuehlberger avatar
mmuehlberger

You can also just do element(split(".", var.private_ip), 3) since IP addresses have a predefined syntax (always 4 blocks).
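
For illustration, in terraform console (0.11):

> element(split(".", "10.20.30.111"), 3)
111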

xluffy avatar

yeah, my bad tks

oscarsullivan_old avatar
oscarsullivan_old

Does anyone else ever get this?

Failed to load backend: 
Error configuring the backend "s3": RequestError: send request failed
caused by: Post <https://sts.amazonaws.com/>: dial tcp: lookup sts.amazonaws.com on 8.8.4.4:53: read udp 172.17.0.2:46647->8.8.4.4:53: i/o timeout

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

I feel it has something to do with my VPN

Nikola Velkovski avatar
Nikola Velkovski

usually these kinds of errors are network/internet related

2
oscarsullivan_old avatar
oscarsullivan_old

On my end?

oscarsullivan_old avatar
oscarsullivan_old

It does seem to sort itself out when I turn off wifi / vpns thereby resetting my network connection

albttx avatar

Hello, just posted this https://github.com/cloudposse/terraform-aws-acm-request-certificate/issues/16. Just want to be sure it’s an error from the module… any ideas?

`aws_route53_record.default` errror · Issue #16 · cloudposse/terraform-aws-acm-request-certificate

Error: module.acm_request_certificate.aws_route53_record.default: 1 error(s) occurred: * module.acm_request_certificate.aws_route53_record.default: At column 3, line 1: lookup: argument 1 should be…

oscarsullivan_old avatar
oscarsullivan_old

How do I get what I believe is NATing so that public IP r53 records NAT to private IP when using openvpn?

oscarsullivan_old avatar
oscarsullivan_old

Nvm looks like a security group issue actually

Bharat avatar

Terraform is marking the task-def as inactive whenever we update the task-def. We need those old task-defs to do rollbacks. We are deploying the ECS service as part of CI. Any workaround for how to retain the older versions of task-defs?

2019-03-26

deftunix avatar
deftunix

hi everyone

deftunix avatar
deftunix

I am using terraform to provision a consul cluster on aws using the https://github.com/hashicorp/terraform-aws-consul module. do you have any

hashicorp/terraform-aws-consul

A Terraform Module for how to run Consul on AWS using Terraform and Packer - hashicorp/terraform-aws-consul

deftunix avatar
deftunix

suggestions on how to terminate instances one by one when the launch configuration changes?

Igor avatar

I am suddenly having trouble passing the route 53 zone nameservers as records to the NS record

Igor avatar

The error I get is records.0 must be a single value, not a list

Igor avatar

I am passing in ["${aws_route53_zone.hostedzone.*.name_servers}"]

antonbabenko avatar
antonbabenko

Try: ["${flatten(aws_route53_zone.hostedzone.*.name_servers)}"]

1
Igor avatar

Thanks, I just stumbled on that

antonbabenko avatar
antonbabenko

Or actually try: ["${aws_route53_zone.hostedzone.name_servers[0]}"]

Igor avatar

flatten worked for me

1
Igor avatar

element(…, 0) didn’t (“element() may only be used with flat lists”)

antonbabenko avatar
antonbabenko

right, combine with flatten, if you need to get just one element

Igor avatar

Thanks

tallu avatar

given values="m1.xlarge,c4.xlarge,c3.xlarge,c5.xlarge,t2.xlarge,r3.xlarge" can I use jsonencode or something similar to get the following

{
      "InstanceType": "m1.xlarge"
    },
    {
      "InstanceType": "c4.xlarge"
    },
    {
      "InstanceType": "c3.xlarge"
    },
    {
      "InstanceType": "c5.xlarge"
    },
    {
      "InstanceType": "t2.xlarge"
    },
    {
      "InstanceType": "r3.xlarge"
    }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or use null_data_source to construct anything you want, e.g. https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/main.tf#L63

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

tallu avatar

map fails when the key is the same, like

> map("InstanceType","m1.xlarge","InstanceType","c4.xlarge")
map: argument 3 is a duplicate key: "InstanceType" in:

${map("InstanceType","m1.xlarge","InstanceType","c4.xlarge")}

tallu avatar

could not get my desired output looking at all the proposed links

tallu avatar

thanks I will try it out

tallu avatar
formatlist - Functions - Configuration Language - Terraform by HashiCorp

The formatlist function produces a list of strings by formatting a number of other values according to a specification string.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

tallu avatar
> formatlist("Hello, %s!", ["Valentina", "Ander", "Olivia", "Sam"])
parse error at 1:28: expected expression but found "["
tallu avatar

in terraform console

tallu avatar

nevermind formatlist("Hello, %s!", list("Valentina", "Ander", "Olivia", "Sam")) worked
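
Applying the same idea to the original question, something like this should get close (an untested 0.11-style sketch):

locals {
  types = "${split(",", "m1.xlarge,c4.xlarge,c3.xlarge,c5.xlarge,t2.xlarge,r3.xlarge")}"
  items = "${formatlist("{\"InstanceType\": \"%s\"}", local.types)}"
  json  = "${join(",", local.items)}"
}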

ldlework avatar
ldlework

What would be the cause of this failure when deploying ecs-codepipeline

* module.backend.module.ecs_codepipeline.module.github_webhooks.github_repository_webhook.default: 1 error(s) occurred:

• github_repository_webhook.default: POST <https://api.github.com/repos/dustinlacewell/roboduels-frontend/hooks>: 404 Not Found []
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you create a correct GitHub token?

ldlework avatar
ldlework

Hmm…

ldlework avatar
ldlework

Oh I know what happened. Thanks @Andriy Knysh (Cloud Posse) lol.

ldlework avatar
ldlework

OK, does anyone have an idea about this one? I’m chalking it up to my inexperience with the interpolation details:

* output.cache_hostname: element: element() may only be used with flat lists, this list contains elements of type map in:

${element(module.elasticache.nodes, 0)}
* module.elasticache.aws_route53_record.cache: 1 error(s) occurred:

* module.elasticache.aws_route53_record.cache: At column 3, line 1: element: argument 1 should be type list, got type string in:

${element(aws_elasticache_cluster.cache.cache_nodes.*.address, 0)}
ldlework avatar
ldlework

oh that’s two errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

aws_elasticache_cluster.cache.cache_nodes is a map

ldlework avatar
ldlework

isn’t it a list of maps?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

lookup(element(aws_elasticache_cluster.cache.cache_nodes, 0), "address") - try something like this

ldlework avatar
ldlework

Interesting.

ldlework avatar
ldlework

ew I think I have to escape the inner quotes

ldlework avatar
ldlework
09:18:55 PM

does a raindance for 0.12

ldlework avatar
ldlework

escaping doesn’t work either…

ldlework avatar
ldlework
  value = "${lookup(element(module.elasticache.nodes, 0), \"address\")}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what are you escaping?

ldlework avatar
ldlework

Error: Error loading /home/ldlework/roboduels/infra/stages/qa/outputs.tf: Error reading config for output cache_hostname: parse error at 1 expected expression but found invalid sequence “\”

loren avatar

a miracle of hcl is that you do not need to escape inner quotes like that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

value = "${element(module.elasticache.nodes, 0)["address"]}" - or this

loren avatar
element() may only be used with flat lists, this list contains elements of type map in ...
loren avatar

so, no element() for you…

ldlework avatar
ldlework

lol what, how do I report the address of the primary elasticache node? can I just output the whole list?

loren avatar

aws_elasticache_cluster.cache.cache_nodes[0]

ldlework avatar
ldlework

that will output the whole map right?

ldlework avatar
ldlework

i guess i don’t know why I’m trying to reduce all my outputs to single values

ldlework avatar
ldlework
09:23:37 PM

stops doing that.

loren avatar

i think i also mixed up the two errors there, oops

ldlework avatar
ldlework

Oh ok, so I guess I still have a problem

ldlework avatar
ldlework

I want to give internal simple DNS to the first ip in the elasticache cluster

ldlework avatar
ldlework

So I had something like:

  records = ["${element(aws_elasticache_cluster.cache.cache_nodes.*.address, 0)}"]
ldlework avatar
ldlework

But it sounds like none of the options we just discussed are going to work to extract the address of the first item?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Element will not work on list of maps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can input and output anything

ldlework avatar
ldlework

Really? How will I extract the address of the first map in the list?

ldlework avatar
ldlework

Won’t I have the same exact problem there?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can output a list of values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And then use element

ldlework avatar
ldlework

Genius.

ldlework avatar
ldlework

So you mean, output all the addresses as a list using splat. Then take the first. I’ll try it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Something like that

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is an example of working with list of maps and getting the first element https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L114

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

[0] works with maps

ldlework avatar
ldlework

I have:

data "null_data_source" "cache_addresses" {
  inputs = {
    addresses = ["${aws_elasticache_cluster.cache.cache_nodes.*.address}"]
  }
}

resource "aws_route53_record" "cache" {
  zone_id = "${var.zone_id}"
  name    = "${local.name}"
  type    = "CNAME"
  records = ["${data.null_data_source.cache_addresses.outputs["addresses"]}"]
  ttl     = "300"
}

and get:

Error: module.elasticache.aws_route53_record.cache: records: should be a list Error: module.elasticache.data.null_data_source.cache_addresses: inputs (addresses): ‘’ expected type ‘string’, got unconvertible type ‘[]interface {}’

ldlework avatar
ldlework

this is a pretty confusing DSL all things considered

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just try [0] for now

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse)

resource "aws_route53_record" "cache" {
  zone_id = "${var.zone_id}"
  name    = "${local.name}"
  type    = "CNAME"
  records = ["${aws_elasticache_cluster.cache.cache_nodes[0].address}"]
  ttl     = "300"
}

Error downloading modules: Error loading modules: module elasticache: Error loading .terraform/modules/a86e58cdab02f33e0c2a0f76c4ae3934/stacks/elasticache/main.tf: Error reading config for aws_route53_record[cache]: parse error at 1 expected “}” but found “.”

ldlework avatar
ldlework

Is that what you meant?

ldlework avatar
ldlework

Starting to feel a little dumb here lol.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

["${aws_elasticache_cluster.cache.cache_nodes[0]["address"]}"]

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if [0]["address"] together does not work, use locals as in the example (we had the same issues)

ldlework avatar
ldlework

yeah that doesn’t work

ldlework avatar
ldlework

OK trying coalescelist

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

coalescelist has nothing to do with that

ldlework avatar
ldlework

what am i looking at then hehe

ldlework avatar
ldlework

the splat?

ldlework avatar
ldlework

ohhhh

ldlework avatar
ldlework

the next few lines

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i mean use locals to first get [0] from the list, then another local to get ["data"] from the map
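
For reference, a minimal sketch of that two-step locals approach (Terraform 0.11 syntax; the resource and variable names are taken from the snippets above, untested):

locals {
  # step 1: index into the list of maps
  first_cache_node = "${aws_elasticache_cluster.cache.cache_nodes[0]}"

  # step 2: look up the key in that map in a separate local
  first_cache_address = "${lookup(local.first_cache_node, "address")}"
}

resource "aws_route53_record" "cache" {
  zone_id = "${var.zone_id}"
  name    = "${local.name}"
  type    = "CNAME"
  records = ["${local.first_cache_address}"]
  ttl     = "300"
}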

ldlework avatar
ldlework
09:48:30 PM

tries

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) thanks homie

ldlework avatar
ldlework

Why is the ECS container definition for the environment a list of maps?

ldlework avatar
ldlework

oh i probably need to do name/value

GFox)(AWSDevSecOps avatar
GFox)(AWSDevSecOps

Hello, anyone have any Terraform module code to automate CIS benchmarks in Azure subscriptions / tenants?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe ask in #azure

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not many people here are familiar with Azure

1
GFox)(AWSDevSecOps avatar
GFox)(AWSDevSecOps

if so, you’re going to be very popular!

ldlework avatar
ldlework

A dependency graph encompassing all the CloudPosse projects would be awesome

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice idea! we’ll consider it thanks

ldlework avatar
ldlework

I’m changing the environment setting of a https://github.com/cloudposse/terraform-aws-ecs-container-definition module and it isn’t updating the container definition.

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

ldlework avatar
ldlework

Not sure how to get it to change.

ldlework avatar
ldlework

TIL how to taint

ldlework avatar
ldlework

not clear how to taint the container definition though…

ldlework avatar
ldlework
01:57:38 AM

just destroys the whole module

ldlework avatar
ldlework

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it just generates JSON output

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

ldlework avatar
ldlework

yes

ldlework avatar
ldlework

and with ecs-codepipeline

ldlework avatar
ldlework

what am I missing?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to taint https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf#L41, the resource that uses the generated JSON

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but this will prevent it from updating the entire task definition, including the container definition

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) I ran terraform destroy on the whole module and it says:

* module.backend.module.ecs_codepipeline.aws_s3_bucket.default (destroy): 1 error(s) occurred:
* aws_s3_bucket.default: error deleting S3 Bucket (us-west-1-qa-backend-codepipeline): BucketNotEmpty: The bucket you tried to delete is not empty
ldlework avatar
ldlework

I guess you have to manually go in and delete the bucket data?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

ldlework avatar
ldlework

lame

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we will consider updating the module to add a var for force destroy on the bucket
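
A sketch of what such a variable could look like (hypothetical names; not the module’s actual interface):

variable "bucket_name" {}   # placeholder for the module's generated name

variable "force_destroy" {
  default     = "false"
  description = "Delete all objects from the bucket first so the bucket can be destroyed without error"
}

resource "aws_s3_bucket" "default" {
  bucket        = "${var.bucket_name}"
  acl           = "private"
  force_destroy = "${var.force_destroy}"
}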

ldlework avatar
ldlework

@Andriy Knysh (Cloud Posse) why are the container definitions ignored in the lifecycle??

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

don’t know, did not implement that part, maybe there was a good reason, need to take a look

ldlework avatar
ldlework

np

2019-03-27

jaykm avatar

Hello everyone, I just joined this channel.

I want to ask one thing related to API Gateway. I’m able to create the API Gateway, methods, and integration with Lambda, and I also configured all the headers, origins, and methods in the response parameters, but CORS has not been configured. Can someone help me out? I can also send the Terraform script showing how I’m implementing it. @sahil FYI

jaykm avatar

@foqal No, I’ve already given API Gateway permission to invoke the Lambda, and I can also access the Lambda from API Gateway, but if I do a request from the browser (client) then it gives me a CORS error, and you can also see the error.
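
For anyone hitting the same browser-side CORS error: the piece that is usually missing is an OPTIONS method backed by a MOCK integration that returns the Access-Control-Allow-* headers. A minimal sketch in Terraform 0.11 syntax, assuming an existing aws_api_gateway_rest_api.api and aws_api_gateway_resource.resource (the allowed headers/methods/origin are example values):

resource "aws_api_gateway_method" "options" {
  rest_api_id   = "${aws_api_gateway_rest_api.api.id}"
  resource_id   = "${aws_api_gateway_resource.resource.id}"
  http_method   = "OPTIONS"
  authorization = "NONE"
}

# MOCK integration: answers the preflight without calling the Lambda
resource "aws_api_gateway_integration" "options" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  resource_id = "${aws_api_gateway_resource.resource.id}"
  http_method = "${aws_api_gateway_method.options.http_method}"
  type        = "MOCK"

  request_templates = {
    "application/json" = "{\"statusCode\": 200}"
  }
}

resource "aws_api_gateway_method_response" "options_200" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  resource_id = "${aws_api_gateway_resource.resource.id}"
  http_method = "${aws_api_gateway_method.options.http_method}"
  status_code = "200"

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = true
    "method.response.header.Access-Control-Allow-Methods" = true
    "method.response.header.Access-Control-Allow-Origin"  = true
  }
}

resource "aws_api_gateway_integration_response" "options_200" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  resource_id = "${aws_api_gateway_resource.resource.id}"
  http_method = "${aws_api_gateway_method.options.http_method}"
  status_code = "${aws_api_gateway_method_response.options_200.status_code}"

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,Authorization'"
    "method.response.header.Access-Control-Allow-Methods" = "'GET,POST,OPTIONS'"
    "method.response.header.Access-Control-Allow-Origin"  = "'*'"
  }
}

The actual GET/POST method responses also need to return Access-Control-Allow-Origin for the browser to accept the real request, not just the preflight.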

jaykm avatar
jaykm
10:10:47 AM
jaykm avatar

@foqal

oscarsullivan_old avatar
oscarsullivan_old

Anyone had luck using https://github.com/cloudposse/terraform-aws-ecr and setting up ECR?

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was used many times before


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the issue?

oscarsullivan_old avatar
oscarsullivan_old

2 probs.

1) In the example module resource the “roles” argument is set. This isn’t in the documentation or variables.tf, and it also errors. I forked and did a make readme to see if the docs were just not regenerated, but the variable doesn’t exist.

2) In the examples for cicd_user for codefresh the outputs error with

terraform init
Initializing modules...
- module.cicd_user
  Getting source "git::<https://github.com/cloudposse/terraform-aws-iam-system-user.git?ref=tags/0.4.1>"
- module.ecr
- module.cicd_user.label
  Getting source "git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.5.4>"
- module.ecr.label

Initializing the backend...

Error: resource 'aws_iam_policy_attachment.login' config: "policy_login_arn" is not a valid output for module "ecr"



Error: resource 'aws_iam_policy_attachment.read' config: "policy_read_arn" is not a valid output for module "ecr"



Error: resource 'aws_iam_policy_attachment.write' config: "policy_write_arn" is not a valid output for module "ecr"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, here it was tested/deployed many months ago https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L65

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

oscarsullivan_old avatar
oscarsullivan_old

Thanks!

oscarsullivan_old avatar
oscarsullivan_old

Yep that looks like how I’ve set it up now

oscarsullivan_old avatar
oscarsullivan_old

Not sure why the doc has “roles” as an input then

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the docs were not updated

oscarsullivan_old avatar
oscarsullivan_old

Think it’s best to use the 2.x version?

oscarsullivan_old avatar
oscarsullivan_old

feel like I’m setting myself up for one if I use 4.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure, I tested it up to 0.2.9, did not test latest releases, but they were tested by other people in many deployments (but maybe the interface is different)

oscarsullivan_old avatar
oscarsullivan_old

hmm

oscarsullivan_old avatar
oscarsullivan_old

Using 0.40 I have an ECR with permissions

oscarsullivan_old avatar
oscarsullivan_old

so it can’t NOT be working

oscarsullivan_old avatar
oscarsullivan_old

ECR repo w/ permissions & lifecycle policy which was the premise

oscarsullivan_old avatar
oscarsullivan_old

Just the docs that are outdated it seems

midacts avatar
midacts

https://github.com/cloudposse/terraform-aws-ec2-instance Would I need to fork this to add a provisioner? We’d need to join Windows machines to the domain and run post-provisioning steps on Windows and Linux.

oscarsullivan_old avatar
oscarsullivan_old

Does anyone else find that outputs defined inside a module don’t go to stdout when using the module?

oscarsullivan_old avatar
oscarsullivan_old

terraform output -module=mymodule apparently this is a solution but eh

ldlework avatar
ldlework

@oscarsullivan_old only the root module’s outputs are printed right?

oscarsullivan_old avatar
oscarsullivan_old

Yeh

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to add the outputs from the child module(s) to the outputs of the parent
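
A minimal sketch of that re-export (hypothetical module and output names):

# in the child module, e.g. modules/elasticache/outputs.tf
output "cache_hostname" {
  value = "${aws_route53_record.cache.fqdn}"
}

# in the root module's outputs.tf -- only root outputs are shown by a plain `terraform output`
output "cache_hostname" {
  value = "${module.elasticache.cache_hostname}"
}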

oscarsullivan_old avatar
oscarsullivan_old

Ah ok. That’s what I thought it was, but I was hoping to avoid that.

oscarsullivan_old avatar
oscarsullivan_old

Now that I know about -module=x I at least have a way to not dupe them

antonbabenko avatar
antonbabenko

Just discovered https://codeherent.tech/ - new kid in town, or am I the last one to find it?

Igor avatar

Looks cool ^^

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

They have been reaching out to us

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Guess I should check it out

antonbabenko avatar
antonbabenko

It is too early and does not look right for my cases or for my customers.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

watched the demo video… yea, an IDE for TF is not something we would support at this time

antonbabenko avatar
antonbabenko

I’d like to have a smart IDE with real-time validation and suggestions. Something that I could easily integrate into my existing workflows.

oscarsullivan_old avatar
oscarsullivan_old
camptocamp/terraboard

A web dashboard to inspect Terraform States - camptocamp/terraboard

oscarsullivan_old avatar
oscarsullivan_old

oh actually seeing slightly different use cases there

oscarsullivan_old avatar
oscarsullivan_old

that’s actually a tool to visually USE terraform not audit it

Igor avatar

modules.tf does visual->code, but it’d be good to be able to do code->visual. Think Atlantis, except for the large text output, you get a link to a nice diagram that describes the deltas.

Igor avatar

And ideally another one after the apply is complete, with all of the unknowns resolved

antonbabenko avatar
antonbabenko

@Igor I have some nice ideas and a POC for how to get code -> visual, but lack the time or $$$ to work on that.

Igor avatar

I bet it’s a harder problem to solve than visual -> code

antonbabenko avatar
antonbabenko

It is pretty much equal, with the same amount of detail. More work is to convert from visual to CloudFormation and from CloudFormation to Terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suppose this appeals to a certain mode of developer and probably not the one using vim as their primary IDE ;)

Igor avatar

It might make it easier to do code reviews

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, as usual something like that looks nice and helpful at first, but then always gets in the way

1
johncblandii avatar
johncblandii

Does this example actually work? https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/examples/multi_port_mappings/main.tf#L25-L36

I got this error when using a similar approach, but with 8080/80 (container/host):

* aws_ecs_task_definition.default: ClientException: When networkMode=awsvpc, the host ports and container ports in port mappings must match.

So I sleuthed and found https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html:
If you are using containers in a task with the awsvpc or host network mode, exposed ports should be specified using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

PortMapping - Amazon Elastic Container Service

Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
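
In other words, with the awsvpc network mode the container and host ports in the module’s port_mappings input have to match; a sketch under that assumption (the module source ref and the name/image/port values are examples):

module "container_definition" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=master"

  container_name  = "app"            # example values
  container_image = "nginx:latest"

  port_mappings = [
    {
      containerPort = 8080
      hostPort      = 8080   # must equal containerPort (or be left blank) under awsvpc/host network modes
      protocol      = "tcp"
    },
  ]
}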

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like not, needs a PR


johncblandii avatar
johncblandii

ok. just wanted to make sure i wasn’t in crazy town

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re holding office hours

Alex Siegman avatar
Alex Siegman

@Erik Osterman (Cloud Posse) Yeah I’m just listening in to see what’s up

Igor avatar

Has anyone ever gotten a diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue. message from Terraform?

Tim Malone avatar
Tim Malone

i’ve seen it a couple of times. usually just another apply fixed it… but i would make sure your state is backed up - if you don’t already have it versioned - just in case

Igor avatar

Thanks, it looks like it didn’t harm anything on my side either

ldlework avatar
ldlework

Maybe consider making https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git support building the alb conditionally

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

ldlework avatar
ldlework

the target_group / listener I mean

ldlework avatar
ldlework

It’s annoying to do in 0.11 I know - but that’s exactly why I’d prefer if you maintained the HCL

ldlework avatar
ldlework

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll consider that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

2019-03-28

oscarsullivan_old avatar
oscarsullivan_old

Be careful terraform destroying an ECR… it isn’t like an S3 bucket where it warns you it is not empty… your images will go poof

ldlework avatar
ldlework

@oscarsullivan_old i had the other problem the other day: it wouldn’t delete the content inside of an s3 bucket when deleting an ecs-codepipeline

oscarsullivan_old avatar
oscarsullivan_old

That’s a good thing! Don’t want to accidentally remove data, right? Is that what you mean?

ldlework avatar
ldlework

I mean, ECR images when I literally tell terraform to destroy it?

ldlework avatar
ldlework

It should at least be an option..

inactive avatar
inactive

Guys, I have a question regarding team collaboration. What do you guys use to work in teams when running terraform? I know that terraform provides a solution to this on their paid tiers, but they are cost prohibitive for us. Currently, our workaround has been to share our terraform templates via a cloud drive which is replicated locally to each developer workstation. It sorta works, but we are running into several limitations and issues. Any suggestions?

Tim Malone avatar
Tim Malone

we just commit into a git repo, and use remote state (s3 with locking via dynamodb). have you seen atlantis? pretty much the free version of terraform enterprise, as i understand
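
A minimal sketch of that backend setup (bucket and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"      # placeholder; enables state locking
    encrypt        = true
  }
}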

inactive avatar
inactive

we already use s3 as backend to save the terraform state. but we still need a way to run ‘terraform apply’ in different machines. using git only takes care of the source code, but it ignores several other files which must also be shared with team members

inactive avatar
inactive

i checked atlantis earlier but i couldn’t find anything specific to team collaboration

Tim Malone avatar
Tim Malone

atlantis’ whole purpose is team collab - their tagline is even ‘Start working on Terraform as a team.’ https://www.runatlantis.io/

1
Tim Malone avatar
Tim Malone

re the other files that need to be shared - what sort of files are we talking?

Nikola Velkovski avatar
Nikola Velkovski

This is IMO the hardest part of terraform. I would recommend using workspaces

Nikola Velkovski avatar
Nikola Velkovski

In order to solve the team issue, you would need some kind of pipelines that have queues.

inactive avatar
inactive

We have ssh keys that are used when terraform deploys our bastion hosts. We also have terraform.tfvars which include credentials which cannot be pushed to git. And finally, our .terraform directory is not pushed to git, which then forces each developer to reinitialize their local terraform environment with ‘terraform init’. We’ve been able to successfully do all of this using OneDrive… but I feel like this is a silly workaround and there must be a better solution out there that does not require spending $$

Nikola Velkovski avatar
Nikola Velkovski

I’ve used jenkins in the past and it worked quite well.

inactive avatar
inactive

Another option for us was to create a shared VM which multiple users could use, one at a time (ughh)

Nikola Velkovski avatar
Nikola Velkovski

First you need to solve the issue you’ve mentioned above though.

Nikola Velkovski avatar
Nikola Velkovski

A central key store for the secrets is one option.

1
inactive avatar
inactive

A third option that we are considering is using an NFS shared drive (even Amazon EFS) where we store all of our terraform files

inactive avatar
inactive

And yes, we use Jenkins already to schedule and automate our deployments, but we still need to test them manually on our local machines

Nikola Velkovski avatar
Nikola Velkovski

test meaning?

inactive avatar
inactive

Developer A makes a change to a terraform template locally on his machine. He runs ‘terraform apply’. It works.

inactive avatar
inactive

He goes out to lunch. Developer B needs to pick up where he left off on his own machine

Nikola Velkovski avatar
Nikola Velkovski

Well, why not put that on jenkins?

Nikola Velkovski avatar
Nikola Velkovski

or any other ci for that matter.

inactive avatar
inactive

you mean use Jenkins as a shared VM?

Nikola Velkovski avatar
Nikola Velkovski

and for this terraform plan would suffice

Nikola Velkovski avatar
Nikola Velkovski


you mean use Jenkins as a shared VM?
wait first you need a clear definition of a workflow

Nikola Velkovski avatar
Nikola Velkovski

when you have this, then you can jump on implementation.

hkaya avatar

it doesn’t have to be jenkins, any pipeline would be fine, gitlab-ci or codefresh could also be used as a service, hence no vm involved

hkaya avatar

as for the public ssh keys, check them in to your repo, that’s why they are called public

Tim Malone avatar
Tim Malone

definitely recommend some sort of secret share to get around the tfvars problem. we use lastpass and manually update our local envs, but we don’t have a lot of secrets - could get painful if there were more. this could handle the ssh keys too (if they’re private)

if a full secret share is too much to jump to right away, you could even store tfvars in s3 to get started with - but of course tightly control the access

re init’ing local .terraform directory - that shouldn’t take very long to do, and terraform will tell you if you need to update - that shouldn’t be a deal breaker to require people to do that (in fact, it’s almost a necessity - it downloads the provider binaries suitable for the person’s machine)

Abel Luck avatar
Abel Luck

we commit everything to git, but use mozilla’s sops utility to encrypt the secrets

Abel Luck avatar
Abel Luck
mozilla/sops

Secrets management stinks, use some sops! Contribute to mozilla/sops development by creating an account on GitHub.

Abel Luck avatar
Abel Luck

there’s a fantastic terraform provider for sops that lets you use the encrypted yaml/json seamlessly https://github.com/carlpett/terraform-provider-sops

carlpett/terraform-provider-sops

A Terraform provider for reading Mozilla sops files - carlpett/terraform-provider-sops

1
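
A minimal sketch of the provider in use (file name and key are placeholders; the decrypted values come back through the data attribute):

data "sops_file" "secrets" {
  source_file = "secrets.sops.yaml"   # a sops-encrypted file committed to git
}

output "db_password" {
  sensitive = true
  value     = "${data.sops_file.secrets.data["db_password"]}"
}
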
inactive avatar
inactive

I appreciate all of the feedback you have provided so far. The jenkins/ci pipeline makes sense to me, but only for automated deployments. But we still want independent execution to be done manually via our local terminal from our Mac. I will look into the secret share suggestions that you have pointed out as that does make sense. Thanks again.

inactive avatar
inactive

I just checked sops and this is what I think makes the most sense. It will allow us to check everything into git. Thanks @Abel Luck

keen avatar

I’m a half fan of git-crypt https://github.com/AGWA/git-crypt - it’s cleanly transparent, as long as you have it set up anyway (if you don’t, it gets a bit uglier).

AGWA/git-crypt

Transparent file encryption in git. Contribute to AGWA/git-crypt development by creating an account on GitHub.

keen avatar

your specified files are decrypted on disk, but encrypted in the repo - transparently during the commit/checkout.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also use AWS SSM param store + chamber to store secrets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it works in a container on the local machine

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and from a CI/CD pipeline

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    # Deploy chart to cluster using helmfile (with chamber secrets)
    - "chamber exec kops app -- helmfile --file config/helmfile.yaml --selector component=app sync --concurrency 1 --args '--wait --timeout=600 --force --reset-values'"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

chamber exec namespace -- terraform plan

ldlework avatar
ldlework

@inactive I’m particularly fond of using gopass as a git-based secret store https://github.com/gopasspw/gopass which has first-class support for summon, a tool that injects secrets as environment variables into processes (like when you run terraform apply) https://github.com/cyberark/summon Here is an example of how Terraform is run:

summon -p $(which gopass) -f secrets.yml terraform apply
gopasspw/gopass

The slightly more awesome standard unix password manager for teams - gopasspw/gopass

cyberark/summon

CLI that provides on-demand secrets access for common DevOps tools - cyberark/summon

ldlework avatar
ldlework

Your secrets.yml file is simply a mapping between environment variables you want to set on the terraform process and the secrets within the gopass store, like so:

TF_VAR_github_oauth_token: !var roboduels/github/oauth-token
TF_VAR_discord_webhook_url: !var roboduels/discord/webhook-url
ldlework avatar
ldlework

Since the variables use the TF_VAR_ prefix, they will actually be set as Terraform variables on your root module!

inactive avatar
inactive

Thanks again to all. Very useful tips.

loren avatar

This thread should be captured somewhere, fantastic set of resources and options

1
Tim Malone avatar
Tim Malone

someone could write a blog post based on it??

2019-03-29

Nikola Velkovski avatar
Nikola Velkovski

Hey people, I just noticed some interesting behavior when using the ECR module: https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L34 makes apply fail if it’s empty. I use the label module but I do not have stage set, so I am wondering if setting a simple conditional here: https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L11 would be good enough?

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr


Nikola Velkovski avatar
Nikola Velkovski

The error is

aws_ecr_lifecycle_policy.default: InvalidParameterException: Invalid parameter at 'LifecyclePolicyText' failed to satisfy constraint: 'Lifecycle policy validation failure:
string "" is too short (length: 0, required minimum: 1)
Nikola Velkovski avatar
Nikola Velkovski
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm… why don’t you use something for the stage, e.g. test or testing? what’s the reason not to use it (all the modules were designed with the assumption of having namespace, stage and name)

Nikola Velkovski avatar
Nikola Velkovski

no, but that’s just because I am used to environment rather than stage

Nikola Velkovski avatar
Nikola Velkovski

nvm I can pass env in place of stage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep, those are just labels

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(environment and stage are the same in all cases we use the modules)

Nikola Velkovski avatar
Nikola Velkovski

thanks for the reply though

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can even change the order if using terraform-null-label

Nikola Velkovski avatar
Nikola Velkovski

oh yeah, I really dig that module

Nikola Velkovski avatar
Nikola Velkovski

kudos for that one

Julio Tain Sueiras avatar
Julio Tain Sueiras

hi

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Julio Tain Sueiras

Julio Tain Sueiras avatar
Julio Tain Sueiras

just wanted to ask your opinion on a terraform LSP?

Julio Tain Sueiras avatar
Julio Tain Sueiras

(though right now I’m mostly focusing on adding provider/provisioner completion to my plugin, and adding terraform v0.12 support once it’s in GA)

Julio Tain Sueiras avatar
Julio Tain Sueiras

also any vim + terraform users here?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what is terraform LSP? https://langserver.org ?

Julio Tain Sueiras avatar
Julio Tain Sueiras

correct

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are talking about autocompletion/syntax highlighting/error highlighting, a lot of people are using VScode (with terraform plugin) or JetBrains IDEA (with terraform plugin)

Julio Tain Sueiras avatar
Julio Tain Sueiras

I know that part (also a lot of people use my plugin for vim as well)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice plugin @Julio Tain Sueiras

Julio Tain Sueiras avatar
Julio Tain Sueiras

since my thinking is, an LSP approach would allow any editor, and I can also just focus on the new features rather than editor-specific implementations

loren avatar

i like the LSP idea

Julio Tain Sueiras avatar
Julio Tain Sueiras

for ex. vlad’s plugin is very good

Julio Tain Sueiras avatar
Julio Tain Sueiras

but because is a standard jetbrains plugin

Julio Tain Sueiras avatar
Julio Tain Sueiras

meaning that update time is quite far apart

Julio Tain Sueiras avatar
Julio Tain Sueiras

but terraform providers get updated very frequently

Julio Tain Sueiras avatar
Julio Tain Sueiras

(for that issue, I implemented a bot that auto-updates, and does version-based completion, in case you want to use data from an older version of the provider)

1
Julio Tain Sueiras avatar
Julio Tain Sueiras

also an LSP would allow even more nonstandard editors

Julio Tain Sueiras avatar
Julio Tain Sueiras

to have terraform features

Julio Tain Sueiras avatar
Julio Tain Sueiras

like Atom(RIP)

Julio Tain Sueiras avatar
Julio Tain Sueiras

P.S. @Andriy Knysh (Cloud Posse) for the vscode terraform plugin, right now it only supports aws/gcp/azure

Julio Tain Sueiras avatar
Julio Tain Sueiras

he is implementing a new feature that would load the completion data from my plugin

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh nice

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

Julio Tain Sueiras avatar
Julio Tain Sueiras

if you click on aws for example

Julio Tain Sueiras avatar
Julio Tain Sueiras

it has the data for every version of the provider

Julio Tain Sueiras avatar
Julio Tain Sueiras

since my thinking was, what if you want to lock to a version

Julio Tain Sueiras avatar
Julio Tain Sueiras

and only want the editor have completion data of that version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

great, looks like you did a lot of work for that

Julio Tain Sueiras avatar
Julio Tain Sueiras

there is also extra small stuff like autocompleting a module’s attributes and arguments, autocompleting the module list from registry.terraform.com, evaluating interpolations, opening the documentation of the current parameter in a module, etc

maarten avatar
maarten

@Julio Tain Sueiras cool stuff!

Julio Tain Sueiras avatar
Julio Tain Sueiras

the entire reason I did the plugin is, I don’t want to use another editor

Julio Tain Sueiras avatar
Julio Tain Sueiras

but if there is no autocompletion, then terraform is quite annoying to work with

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(yea, that’s what it says on your GitHub profile)

Julio Tain Sueiras avatar
Julio Tain Sueiras

but yeah, once TF 0.12 is in GA

Julio Tain Sueiras avatar
Julio Tain Sueiras

then I will work on an LSP implementation

Julio Tain Sueiras avatar
Julio Tain Sueiras

especially because Golang has the AST for HCL

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you going to use Golang or Ruby?

Julio Tain Sueiras avatar
Julio Tain Sueiras

go

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even better

Julio Tain Sueiras avatar
Julio Tain Sueiras

I did several providers for terraform & added new features to official terraform providers (vsphere) and packer (vsphere as well)

maarten avatar
maarten

@Julio Tain Sueiras Can you add a Microsoft Clippy in case a user is writing iam_role_policy_attachment?

1
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but Clippy did not end well

maarten avatar
maarten

party pooper

Julio Tain Sueiras avatar
Julio Tain Sueiras
juliosueiras/terraform-provider-packer

A Terraform Provider to generate Packer JSON. Contribute to juliosueiras/terraform-provider-packer development by creating an account on GitHub.

Julio Tain Sueiras avatar
Julio Tain Sueiras

also how would that work

Julio Tain Sueiras avatar
Julio Tain Sueiras

?

maarten avatar
maarten

Clippy: It looks like you want to use iam_role_policy_attachment, are you sure about that?

Julio Tain Sueiras avatar
Julio Tain Sueiras

XD

Julio Tain Sueiras avatar
Julio Tain Sueiras

I actually use that, since I don’t like dealing with json

maarten avatar
maarten

do you also correct formatting like hclfmt does?

Julio Tain Sueiras avatar
Julio Tain Sueiras

my plugin has a dependency on vim-terraform

Julio Tain Sueiras avatar
Julio Tain Sueiras

which has auto-format

Julio Tain Sueiras avatar
Julio Tain Sueiras

using terraform format

Julio Tain Sueiras avatar
Julio Tain Sueiras

also any of you guys use vault?

maarten avatar
maarten

I’m working with it at the moment

Julio Tain Sueiras avatar
Julio Tain Sueiras

since I had a meeting with Hashicorp’s people (Toronto division) around 1 month ago

Julio Tain Sueiras avatar
Julio Tain Sueiras

and I mentioned that I wish there was a similar thing for vault, like the aws_iam_policy_document data source

Julio Tain Sueiras avatar
Julio Tain Sueiras

and they have it now

Julio Tain Sueiras avatar
Julio Tain Sueiras

but……………

Julio Tain Sueiras avatar
Julio Tain Sueiras

it’s not in the terraform docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why not?

Julio Tain Sueiras avatar
Julio Tain Sueiras
Julio Tain Sueiras avatar
Julio Tain Sueiras

not sure

Julio Tain Sueiras avatar
Julio Tain Sueiras

but it is part of the official release

Julio Tain Sueiras avatar
Julio Tain Sueiras

not a beta/alpha

Julio Tain Sueiras avatar
Julio Tain Sueiras

so you can write all the vault policies

Julio Tain Sueiras avatar
Julio Tain Sueiras

without using heredoc
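
The data source being described is vault_policy_document; a minimal sketch under that assumption (the path and capabilities are example values):

data "vault_policy_document" "example" {
  rule {
    path         = "secret/data/app/*"
    capabilities = ["create", "read", "update", "delete", "list"]
    description  = "full access to app secrets"
  }
}

resource "vault_policy" "example" {
  name   = "app"
  policy = "${data.vault_policy_document.example.hcl}"
}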

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s nice

Julio Tain Sueiras avatar
Julio Tain Sueiras

P.S. the biggest, best feature I did was for the kubernetes pod in terraform

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) https://asciinema.org/a/158264

Complete Nested Block Completionattachment image

Recorded by juliosueiras

Julio Tain Sueiras avatar
Julio Tain Sueiras

the kube provider is a nightmare to work with without autocompletion

antonbabenko avatar
antonbabenko

You can always use json2hcl (https://github.com/kvz/json2hcl) to write packer.json as hcl and then just convert it to valid json. I use it pretty often for various cases.

kvz/json2hcl

Convert JSON to HCL, and vice versa . Contribute to kvz/json2hcl development by creating an account on GitHub.

antonbabenko avatar
antonbabenko

Though more often I use yaml<>json using https://github.com/dbohdan/remarshal

dbohdan/remarshal

Convert between TOML, YAML and JSON. Contribute to dbohdan/remarshal development by creating an account on GitHub.

Julio Tain Sueiras avatar
Julio Tain Sueiras

the use case I did for packer is because I want to reference terraform resources in packer

Julio Tain Sueiras avatar
Julio Tain Sueiras

and json2hcl is a hard convert

Julio Tain Sueiras avatar
Julio Tain Sueiras

but doesn’t account for several things

Julio Tain Sueiras avatar
Julio Tain Sueiras

like terraform doesn’t understand the idea of sequential execution

Julio Tain Sueiras avatar
Julio Tain Sueiras

(so provisioners are going to have problems)

Julio Tain Sueiras avatar
Julio Tain Sueiras

I use my provider with the local_file resource

Julio Tain Sueiras avatar
Julio Tain Sueiras

then null_resource with packer build

Julio Tain Sueiras avatar
Julio Tain Sueiras

or validate

antonbabenko avatar
antonbabenko

@Julio Tain Sueiras I used your documentation generators back in the day (2+ years ago in my project - https://github.com/antonbabenko/terrapin). I mentioned you at the bottom of the README. Thanks for that! It saved me some time. I will probably contact you in the future.

antonbabenko/terrapin

[not-WIP] Terraform module generator (not ready for its prime time, yet) - antonbabenko/terrapin

Julio Tain Sueiras avatar
Julio Tain Sueiras

no problem

Julio Tain Sueiras avatar
Julio Tain Sueiras

one of the funniest things I did (with my plugin)

Julio Tain Sueiras avatar
Julio Tain Sueiras

was doing terraform with my android phone

antonbabenko avatar
antonbabenko

hehe, this one is pretty amazing usage of Terraform - https://www.youtube.com/watch?v=--iS_LH-5ls

1
1
Julio Tain Sueiras avatar
Julio Tain Sueiras

nice!!

2019-03-30

Bruce avatar

Hey guys, I’m looking for a module to create a client VPN server to connect to our aws private network (home of our vault server). Any suggestions?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Bruce we have this module to create a vpn connection on AWS https://github.com/cloudposse/terraform-aws-vpn-connection

cloudposse/terraform-aws-vpn-connection

Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network - cloudposse/terraform-aws-vpn-connection

1
Maxim Tishchenko avatar
Maxim Tishchenko

hey guys, I’m trying to fetch users from a group via TF, but it seems to me it is impossible… is that correct?

I tried to fetch them like this

data "aws_iam_group" "admin_group" {
  group_name = "Admins"
}

but I can’t get the user list from this data…

I tried to fetch users filtered by group like this

data "aws_iam_user" "admins" {
}

but it doesn’t have such a filter.

can anybody help me?

antonbabenko avatar
antonbabenko

This is true, there does not seem to be a way to fetch the members of a group, so you have to use an external data source with the aws cli (or other ways).
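
A sketch of the external data source approach (assumes the aws CLI and jq are installed; the external data source requires the program to emit a flat JSON object of strings, so the user names are joined into a single comma-separated value):

data "external" "admin_group_members" {
  program = [
    "bash", "-c",
    "aws iam get-group --group-name Admins | jq '{names: ([.Users[].UserName] | join(\",\"))}'",
  ]
}

locals {
  admin_users = "${split(",", data.external.admin_group_members.result["names"])}"
}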

Maxim Tishchenko avatar
Maxim Tishchenko

yeah.. thanks

ldlework avatar
ldlework

Why would `count = "${var.alb_target_group_arn == "" ? 1 : 0}"` produce: `* module.backend-worker.module.task.aws_ecs_service.default: aws_ecs_service.default: value of 'count' cannot be computed`

ldlework avatar
ldlework

Holy crap, you can’t have a submodule that uses dynamic counts?!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, more or less. It’s massively frustrating. It used to work better in older versions of terraform.
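
The 0.11 limitation is that count must be resolvable at plan time, so deriving it from a computed value (for example an ARN another module only knows after apply) fails; the usual workaround is to pass an explicit literal flag alongside the computed value. A sketch with hypothetical names:

# fails when var.alb_target_group_arn is computed from another resource:
#   count = "${var.alb_target_group_arn == "" ? 1 : 0}"

# works: the caller sets a literal flag that is known at plan time
variable "alb_enabled" {
  default = "false"
}

# stand-in resource; the same count expression works on any resource
resource "null_resource" "without_alb" {
  count = "${var.alb_enabled == "true" ? 0 : 1}"
}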

ldlework avatar
ldlework

I’m stunned.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in the end, we need something like sass for hcl.

ldlework avatar
ldlework

Is this issue fixed in 0.12?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nope

ldlework avatar
ldlework

lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
pulumi/pulumi

Define cloud apps and infrastructure in your favorite language and deploy to any cloud - pulumi/pulumi

ldlework avatar
ldlework

lol is cloudposse going to migrate to that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No, it would be too much effort.

ldlework avatar
ldlework

What about using a system like Jinja2 to generate HCL?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s what some people are doing

ldlework avatar
ldlework

Basically do what Saltstack does

ldlework avatar
ldlework

I see.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem with that IMO is unless there is an open framework for doing it consistently across organizations, it leads to snowflakes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if there was an established way of doing it (e.g. like SASS for CSS), then I could get behind it maybe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

here was a fun hack by @antonbabenko https://github.com/antonbabenko/terrible

antonbabenko/terrible

Let’s orchestrate Terraform configuration files with Ansible! Terrible! - antonbabenko/terrible

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(the name is so apropos)

troll1
ldlework avatar
ldlework

I just thought of a nice project subtitle for pulumi:

Pulumi: Write your own bugs instead!
ldlework avatar
ldlework

ldlework avatar
ldlework

@Erik Osterman (Cloud Posse) Saltstack literally uses Jinja2 so at least there is prior-art there.

ldlework avatar
ldlework

It generates YAML. Compare this to ansible, which extends YAML in silly ways to give it dynamic features.

ldlework avatar
ldlework

It’s much easier to understand a template that generates standard YAML. Easier to debug too. And the potential abstractions are much stronger.

ldlework avatar
ldlework

But with Jinja2 you could easily have conditional say, load_balancer blocks in your aws_ecs_service resource.

ldlework avatar
ldlework

shakes fist

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not saying there’s no prior art. I’m saying there’s no canonical way of doing it so one person does it in jinja, another person does it in ansible, another in salt, another using gotemplates, another using bash scripting, etc. So what we end up with is a proliferation of incompatible approaches. Even two people using jinja will probably approach it differently. Also, I don’t like a generalized tool repurposed for templating HCL (the way helm has done for YAML). Instead, I think it should be a highly opinionated, purpose built tool that uses a custom DSL which generates HCL.

ldlework avatar
ldlework

I mean in that case, HCL itself just needs to improve. I was thinking more stop-gap measure.

ldlework avatar
ldlework

But I also did not think about the fact that if you have some module that’s parametrically expanded by some template system, there’s no way to do that between one module calling another. It basically wouldn’t work anyway.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ideally HCL itself should improve - and it is with leaps and bounds in 0.12. However, Hashimoto himself said this whole count issue is incredibly difficult for them to solve and they don’t have a definitive solution for it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
MastodonC/terraboot

DSL to generate terraform configuration and run it - MastodonC/terraboot

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(not recommending it… just stumbled across it)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
joshdk/tfbundle

Bundle a single artifact as a Terraform module. Contribute to joshdk/tfbundle development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@tamsky

joshdk/tfbundle

Bundle a single artifact as a Terraform module. Contribute to joshdk/tfbundle development by creating an account on GitHub.

1
rohit avatar

@Erik Osterman (Cloud Posse) I don’t understand where tfbundle would be useful. could you please elaborate the usecase(s) ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

complex lambdas e.g. that npm install a lot of deps.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using artifacts is nice b/c those artifacts are immutable

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, anyone can then deploy that lambda even if they don’t have npm installed locally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

typically, in terraform modules that you see deploy an advanced lambda, they expect the end user to have a full local dev environment with all build tools required to build the lambda and zip it up. if that lambda is instead distributed as a zip, it mitigates that problem.

rohit avatar

makes sense now

rohit avatar

Thanks for elaborating

1
rohit avatar

What is the best way to version your own terraform modules ? So that you use a particular version of your module(well tested) in prod and can also actively work on it

ldlework avatar
ldlework

By splitting your sub-modules from your root-modules and using a git-tag as the source value on the module invocation.

1
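
A sketch of a pinned invocation (the org, repo, and tag are placeholders):

module "networking" {
  # prod pins a tested release; bump ref to promote a newer one
  source = "git::https://github.com/example-org/terraform-modules.git//networking?ref=tags/1.2.2"
}
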
rohit avatar

I was thinking along the same lines but not exactly sure how to do it. Do you have examples that I can look at?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those are all of our examples of invoking modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

every time we merge to master, we cut a new release

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then in prod or staging, we do something like terraform init -from-module=git::<https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/0.35.1>

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(technically, we use cloudposse/tfenv to make this easier)

rohit avatar

I have my root module that invokes my submodules (compute, storage, networking) and each of my submodules uses different resources/modules. What would be the best way to version in this scenario?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You version your root modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then the root modules are what you promote to each account/stage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you familiar with terraform init -from-module=?

rohit avatar

nope

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s the missing piece

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Command: init - Terraform by HashiCorp

The terraform init command is used to initialize a Terraform configuration. This is the first command that should be run for any new or existing Terraform configuration. It is safe to run this command multiple times.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Optionally, init can be run against an empty directory with the -from-module=MODULE-SOURCE option, in which case the given module will be copied into the target directory before any other initialization steps are run.

rohit avatar

Interesting. So if my sub modules are currently pointing to ?ref=tags/1.2.2 then I could run something like

terraform init -from-module=git::<https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/2.1.2>
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

exactly

rohit avatar

it will replace the source code in my sub modules with the latest version of the code

rohit avatar

correct ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it won’t replace though; it will error if your current directory contains *.tf files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you could have a Makefile with an init target

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that calls the above

rohit avatar

Ohh. I would have to think more about the Makefile and init

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or you can do this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

export TF_CLI_ARGS_init="-from-module=git::<https://github.com/cloudposse/terraform-root-modules.git//aws/chamber?ref=tags/2.1.2>"

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then when you run terraform init, it will automatically import that module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(that’s what we do)

rohit avatar

it will automatically import the latest version if I do exactly what you do

rohit avatar

sounds like magic

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s an incredibly DRY way of doing terraform

rohit avatar

So where does cloudposse/tfenv fit in the process ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the TF_CLI_ARGS_* envs contain a long list of concatenated arguments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you want to toggle just one argument, that’s a pain. for example, we want to target the prefix in the S3 bucket for state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we want to define key-value pairs of environment variables, then use tfenv to combine them into the argument above.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We discuss tfenv in more detail. search here: https://archive.sweetops.com/terraform/

terraform

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

^F

rohit avatar

This is awesome

rohit avatar

It is very tempting

rohit avatar

I will have to try it soon

rohit avatar

@Erik Osterman (Cloud Posse) you guys are doing amazing work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @rohit!! Appreciate it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ldlework avatar
ldlework

Why would I be getting:

 module.frontend.module.codepipeline.module.ecs_codepipeline.module.github_webhooks.provider.github: 1:3: unknown variable accessed: var.github_organization in:
ldlework avatar
ldlework

It’s a variable problem inside of ecs-codepipeline?

ldlework avatar
ldlework
10:10:25 PM

scratches his head.

ldlework avatar
ldlework

“Error: Error asking for user input: 1 error(s) occurred:”

ldlework avatar
ldlework

oh man i have no idea what this is

ldlework avatar
ldlework
2019-03-30T17:24:05.054-0500 [DEBUG] plugin.terraform-provider-github_v1.3.0_x4: plugin address: timestamp=2019-03-30T17:24:05.054-0500 network=unix address=/tmp/plugin407023942
2019/03/30 17:24:05 [ERROR] root.frontend.codepipeline.ecs_codepipeline.github_webhooks: eval: *terraform.EvalOpFilter, err: 1:3: unknown variable accessed: var.github_organization in:
ldlework avatar
ldlework

ah ok fixing some other errors fixed it

1
1
ldlework avatar
ldlework

I have two ecs-codepipelines working great. When I tried to deploy a third, I got:

Action execution failed
Error calling startBuild: User: arn:aws:sts::607643753933:assumed-role/us-west-1-qa-backend-worker-codepipeline-assume/1553989516226 is not authorized to perform: codebuild:StartBuild on resource: arn:aws:codebuild:us-west-1:607643753933:project/us-west-1-qa-backend-worker-build (Service: AWSCodeBuild; Status Code: 400; Error Code: AccessDeniedException; Request ID: deba2268-5345-11e9-a5ef-d15213ce18a0)

I’m confused because the ecs-codepipeline module does not take any IAM information as variables…

ldlework avatar
ldlework

Looks like the ARNs mentioned are unique to that third service “backend-worker”

ldlework avatar
ldlework

So like there shouldn’t be any collision with the others. In which case why wouldn’t the IAM stuff that the module creates for itself be working?

ldlework avatar
ldlework

Can anyone please help me reason why ecs-codebuild module would suffer this permission error when trying to build the container?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrmmmm that’s odd… you were able to deploy (2) without any problems, but the (3)rd one errors?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
samsung-cnct/terraform-provider-execute

execute arbitrary commands on Terraform create and destroy - samsung-cnct/terraform-provider-execute

antonbabenko avatar
antonbabenko

Welcome to terragrunt


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

except terragrunt is a strap-on

antonbabenko avatar
antonbabenko

But yeah, I see the value in such a provider as a replacement for a script which will do something before the main “terraform apply”.

antonbabenko avatar
antonbabenko

More watchers than stars on github usually means that all the company’s employees are watching

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh yea, in this case it also hasn’t been updated for a couple of years

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i highly doubt it (terraform-provider-external) is compatible with the current version of tf

antonbabenko avatar
antonbabenko

certainly not, but there were similar providers out there.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this opens up interesting possibilities

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyone ever kick the tires on this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…basically execute any command as part of the apply or destroy phase; this is different from local-exec on null_resource

1
ldlework avatar
ldlework

lol I think my issue is coming from the fact that my codepipeline is getting created before the service/task

ldlework avatar
ldlework

the service uses the json from the container, the container uses the ecr registry url from the codepipeline and so yeah

ldlework avatar
ldlework

the world makes sense again

ldlework avatar
ldlework

phew!

ldlework avatar
ldlework

guess I have to break out the ECR from my codepipeline abstracting module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i love that feeling

2019-03-31
