#terraform (2020-03)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-03-01

bougyman avatar
bougyman

Hrm. What format is the pgp_key supposed to be in?

bougyman avatar
bougyman
I did gpg -a --export <my_key> | base64 to write it.
Matt avatar

Does anyone know of a good Terraform module/repo for deploying a Lambda on API Gateway?

Matt avatar

API Gateway is ridiculously complicated. I need to use Terraform which I prefer but the Lambda/API Gateway combo is something I’ve spun up quickly and easily with Serverless and Zappa.

Matt avatar

However with Terraform, it’s a bit of a pain. I haven’t found any good working examples for this yet.

2020-03-02

randomy avatar
randomy

API Gateway in Terraform is pretty terrible in my experience, too many bugs. There are 66 open issues for it https://github.com/terraform-providers/terraform-provider-aws/issues?q=is%3Aissue+is%3Aopen+api+gateway+label%3Aservice%2Fapigateway Next time I have to do it, I’ll try a swagger definition or CloudFormation stack.

terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

randomy avatar
randomy

When I last tried it, I couldn’t get it to automatically deploy the stage after making changes without resorting to hacks. I nearly got it but there was a bug that made it disassociate the API key, requiring a 2nd TF apply to put it back. Ended up doing a local exec call to the AWS CLI to do it.

loren avatar

haven’t used it, but this looks promising… https://github.com/FormidableLabs/terraform-aws-serverless

FormidableLabs/terraform-aws-serverless

Infrastructure support for Serverless framework apps, done the right way - FormidableLabs/terraform-aws-serverless

loren avatar
rms1000watt/serverless-tf

Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.

stobiewankenobi avatar
stobiewankenobi

@rms1000watt lolol

rms1000watt avatar
rms1000watt

yessss, famous

stobiewankenobi avatar
stobiewankenobi

#famous

rms1000watt avatar
rms1000watt

just for the record, I’d recommend using serverless.com

rms1000watt avatar
rms1000watt

way more features

rms1000watt avatar
rms1000watt

and developers working on it

stobiewankenobi avatar
stobiewankenobi

As bad ass as RMS’s tf stuff is I would +1 that.

stobiewankenobi avatar
stobiewankenobi

Terraform && Lambda don’t play well together imo.

1
loren avatar

i think it’s the api gw that is hard. lambda+tf is easy and beautiful

loren avatar

serverless is alright as long as your api gw usage fits in their selection of defaults. it starts getting really restrictive and messy if their selections don’t fit your use case

Cloud Posse avatar
Cloud Posse
05:00:35 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 11, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Matt avatar

@randomy Yeah, I’ve contemplated switching over to the Serverless framework which is not ideal as the rest of my infra is in TF

randomy avatar
randomy

i found the old code with the issues i ran into. they’re still not fixed unfortunately.

Terraform does not automatically update deployments after making changes to its resources: https://github.com/terraform-providers/terraform-provider-aws/issues/162

Terraform loses the usage plan configuration after updating the deployment resource: https://github.com/terraform-providers/terraform-provider-aws/issues/714

aws_api_gateway_deployment doesn't get updated after changes · Issue #162 · terraform-providers/terraform-provider-aws

This issue was originally opened by @blalor as hashicorp/terraform#6613. It was migrated here as part of the provider split. The original body of the issue is below. aws_api_gateway_deployment does…

A aws_api_gateway_usage_plan will not remain associated with aws_api_gateway_deployment when updated · Issue #714 · terraform-providers/terraform-provider-aws

This issue was originally opened by @joshdk as hashicorp/terraform#13928. It was migrated here as part of the provider split. The original body of the issue is below. Terraform Version 0.9.3 Affect…

Alex Siegman avatar
Alex Siegman

So, I use both serverless + terraform, I basically let serverless handle the Lambda and the API Gateway in front of it, everything else is handled by terraform. I use SSM as a way to pass ARNs and other needed values for terraform-managed resources to the serverless plan, as it can read from SSM when doing its deployments.
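A minimal sketch of that SSM handoff on the Terraform side (the SNS topic and parameter path are placeholders, not from the thread):

resource "aws_sns_topic" "events" {
  name = "app-events"
}

# Publish the ARN so the serverless project can look it up at deploy time
resource "aws_ssm_parameter" "events_topic_arn" {
  name  = "/myapp/dev/events_topic_arn"
  type  = "String"
  value = aws_sns_topic.events.arn
}

The serverless.yml side can then read it with the framework's SSM variable syntax, something like ${ssm:/myapp/dev/events_topic_arn}.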

Alex Siegman avatar
Alex Siegman

It does do things like circumvent any policy-as-code and other requirements you may have already built in to your terraform pipelines. You’d need to enforce those sorts of things through maybe config rules that examine the deployed resources or something.

Alex Siegman avatar
Alex Siegman

Not a perfect solution, but it’s the best way I’ve found to marry the two concepts together and not block my devs working on serverless projects while still having a reasonable separation of responsibilities.

randomy avatar
randomy

here’s the hack i used to get auto deployments working. i think it works but this is quite old so there may be better hacks now.

resource "aws_api_gateway_deployment" "thing" {
  lifecycle {
    create_before_destroy = true
  }

  depends_on = [
    aws_api_gateway_integration_response.x, # make sure these are chained in a way so that all resources get created before this deployment resource, otherwise you can get errors
    aws_api_gateway_integration_response.y,
    aws_api_gateway_integration_response.z,
  ]

  rest_api_id = "${aws_api_gateway_rest_api.thing.id}"
  stage_name  = "thing"
}

module "path_hash" {
  source = "github.com/claranet/terraform-path-hash?ref=v0.1.2"
  path   = path.module
}

resource "null_resource" "auto-deploy-thing" {
  count = var.auto_deploy ? 1 : 0

  depends_on = [
    aws_api_gateway_integration_response.x, # i think this is necessary too, so that it auto deploys at the end
    aws_api_gateway_integration_response.y,
    aws_api_gateway_integration_response.z,
  ]

  triggers = {
    hash  = module.path_hash.result # ensures any code changes trigger a deployment
    one   = var.one # ensures any variable changes trigger a deployment
    two   = var.two
    three = var.three
  }

  provisioner "local-exec" {
    # AWS limits these requests to 3 per minute per account, sleep/retry when throttled
    command = "for attempt in 1 2 3 4 5 6; do aws apigateway create-deployment --rest-api-id=${aws_api_gateway_deployment.thing.rest_api_id} --stage-name ${aws_api_gateway_deployment.thing.stage_name} --description 'auto deploy ${aws_api_gateway_deployment.thing.stage_name}' && exit 0 || ex=$?; echo Failed attempt $attempt; sleep 15; done; echo Failed too many times; exit $ex"
  }
}
randomy avatar
randomy

@Alex Siegman’s approach seems probably best

randomy avatar
randomy

I used https://github.com/aws/chalice + Terraform but just hardcoding some of the references across the 2 projects, and I preferred that over defining API Gateway in Terraform.

aws/chalice

Python Serverless Microframework for AWS. Contribute to aws/chalice development by creating an account on GitHub.

Alex Siegman avatar
Alex Siegman

Same concept, only I didn’t want to hardcode. There’s still problems with both approaches though, in that you can run in to issues changing dependent resources. Beyond that though, it’s worked well for us on a small scale so far

randomy avatar
randomy

Yep it’s a bit of a house of cards solution

Matt avatar

Yeah. . . mixing Serverless (CloudFormation) and Terraform. . .

Matt avatar

It’s so ugly

Matt avatar

But that’s where I am mentally right now

randomy avatar
randomy

How about API Gateway resources in a CloudFormation stack managed by Terraform

randomy avatar
randomy

You can pass in references from Terraform into the CFN template
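A rough sketch of that idea (the template file, parameter name, and variable are placeholders):

variable "api_lambda_arn" {
  type = string
}

resource "aws_cloudformation_stack" "api_gateway" {
  name          = "api-gateway"
  template_body = file("${path.module}/api-gateway.cfn.yaml") # hypothetical CFN template holding the API Gateway resources

  parameters = {
    LambdaArn = var.api_lambda_arn # reference passed in from Terraform-managed resources
  }
}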

randomy avatar
randomy

I don’t know how painful API Gateway in CFN is though

randomy avatar
randomy

CFN in Terraform is surprisingly good

loren avatar

heh, serverless is basically a generator for CFN, so use that to get the CFN template with api gateway resources, then parameterize the template for use with terraform!

Matt avatar

API Gateway is painful period, thus the popularity of Serverless, Zappa and friends

Matt avatar

@loren thanks, I haven’t seen/tried that one yet!

loren avatar

you’re welcome to use this as a reference also… we’re using it as an internal module in the project, it’s probably not generic enough for a general purpose api gateway module… but, maybe it helps as an example of all the pieces! https://github.com/plus3it/terraform-aws-ldap-maintainer/tree/master/modules/api_gateway

plus3it/terraform-aws-ldap-maintainer

A step function to maintain LDAP users via slack. Contribute to plus3it/terraform-aws-ldap-maintainer development by creating an account on GitHub.

Matt avatar

nice

Matt avatar

Is that your company/project?

loren avatar

plus3it is where i work, yes been working with the same core folks for a good 10 years, through different companies, but this one is basically ours

Matt avatar

excellent, thanks

2020-03-03

RB avatar

anyone know how to store arbitrary values to the tfstate?

im working with the null_resource to store output using triggers but the value doesn’t seem to be retrievable on subsequent terraform init && terraform plan

randomy avatar
randomy

how about: arbitrary values in the outputs of one stack, terraform remote state data source to read them from another stack

RB avatar

the output is from a command, the output is stored using a null resource

RB avatar

i could take the output and store it in an s3 file and stick that in s3 and then retrieve it, but that seems awful

RB avatar

i could use something like consul or etcd i suppose

loren avatar

ssm is often used to store values so they are accessible in different tfstates… not sure why there would be a problem on subsequent runs of the same tfstate…

RB avatar

i think it has something to do with this https://github.com/hashicorp/terraform/issues/23679

Allow destroy-time provisioners to access variables · Issue #23679 · hashicorp/terraform

Current Terraform Version Terraform v0.12.18 Use-cases Using a local-exec provisioner when=destroy in a null resource, to remove local changes; currently, the setup provisioner has: interpreter = […

loren avatar

try opening an issue?

loren avatar
matti/terraform-shell-resource

Run (exec) a command in shell and capture the output (stdout, stderr) and status code (exit status) - matti/terraform-shell-resource

RB avatar

already did

RB avatar

trying to find a workaround while i work through the issue

RB avatar

thanks tho

randomy avatar
randomy

i was just reading that issue, where the author says he doesn’t know how it works

1
RB avatar

sometimes i love terraform and sometimes i hate it

loren avatar


tbh - I don’t actually know how my module works internally - so far it just has worked. It relies on terraform “bugs” that I “abuse” in a way.
Could you make a github repo with what I could try this out?

loren avatar

lololol

Nikola Velkovski avatar
Nikola Velkovski

terraform always works in the CRUD cycle; if you cannot do something in 1 apply then find another way to do it

RB avatar

so here’s what im trying to solve. maybe you fine folks can help me.

i created a new global/tags module for my company which runs a couple commands to get the git_repo and git_path dynamically and uses those values as keys in my tags which are then outputted and reused across my resources.

Nikola Velkovski avatar
Nikola Velkovski

key value store is not a bad idea

RB avatar

so far this works but fails on subsequent applies due to this bug or workaround or w/e you want to call it.

Nikola Velkovski avatar
Nikola Velkovski

I would investigate passing those values as cmd arguments to terraform

Nikola Velkovski avatar
Nikola Velkovski

so wrapper script, makefile etc

randomy avatar
randomy

that, or a data source that runs every time

RB avatar

@Nikola Velkovski errrrg nah, id rather it collect it dynamically so theres no fat fingering

RB avatar

how would the data source work? what data source would it be?

randomy avatar
randomy
External Data Source - Terraform by HashiCorp

Executes an external program that implements a data source.
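A hedged sketch of how the external data source could cover RB's git tags use case (the shell command and output keys are illustrative, not from the thread):

data "external" "git" {
  # The program must print a JSON object of string values on stdout
  program = [
    "sh", "-c",
    "printf '{\"repo\": \"%s\", \"path\": \"%s\"}' \"$(git config --get remote.origin.url)\" \"$(git rev-parse --show-prefix)\"",
  ]
}

locals {
  tags = {
    git_repo = data.external.git.result["repo"]
    git_path = data.external.git.result["path"]
  }
}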

RB avatar

@randomy the source looks promising but i get this error

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.external.example: Refreshing state...

Error: command "python" produced invalid JSON: invalid character 'e' in literal true (expecting 'r')

  on main.tf line 1, in data "external" "example":
   1: data "external" "example" {

randomy avatar
randomy

let me find an example

randomy avatar
randomy
raymondbutcher/terraform-aws-lambda-builder

Terraform module to build Lambda functions in Lambda - raymondbutcher/terraform-aws-lambda-builder

randomy avatar
randomy
raymondbutcher/terraform-aws-lambda-builder

Terraform module to build Lambda functions in Lambda - raymondbutcher/terraform-aws-lambda-builder

randomy avatar
randomy

oh wait, that one doesn’t return anything useful

randomy avatar
randomy
raymondbutcher/terraform-archive-stable

Terraform module to create zip archives with stable hashes - raymondbutcher/terraform-archive-stable

RB avatar

interesting! so the command just needs to return json and the result can be shown

RB avatar

brilliant

✗ cat main.tf               
data "external" "example" {
  program = ["echo", "{\"a\": \"b\"}"]
}

output "value" {
  value = data.external.example.result
}
✗ tf apply
data.external.example: Refreshing state...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

value = {
  "a" = "b"
}
randomy avatar
randomy

yep, cool just be mindful that whatever you run there will become a requirement for using your terraform project. so if you use a ruby script, for example, then people may get annoyed that they have to install ruby

RB avatar

we’re an osx company so everyone has sh so ill use something that will run a local shell command

RB avatar

thanks a lot raymond for unblocking me

RB avatar

you the man

randomy avatar
randomy

you’re welcome

loren avatar

do you have a remote state configured? trying to understand what tf is comparing against after you’ve deleted .terraform

RB avatar

@randomy thanks looking into it

randomy avatar
randomy

i could be wrong but it sounds like a data source is more suitable than a resource. i think you’d want it to run every time to ensure the git details are current

RB avatar

@loren the .terraform dir is removed and then reinitialized using tf init by redownloading from s3

loren avatar

i understand that. the remote state is getting in the way then. the module is probably storing some data in .terraform itself.

RB avatar

i went hunting for it but had trouble finding the outputs. the only place i see the outputs is using tf show | less

RB avatar

it doesn’t show up in tf show -json | less

loren avatar

yeah, you need to look in the module to see what it is doing. if my intuition is correct, it is not strictly a tf thing

loren avatar

i have a meeting, will be gone an hour+

1
RB avatar

trying to create a minimal working example to try later today

loren avatar

ok, so yeah, this is a bug in the module. it is writing the stdout/stderr to path.module which is in .terraform. in the contents null_resource, it is then using fileexists to check if the file exists and setting stdout to null if it does not. setting stdout to null has the effect of removing the key from the map

RB avatar

oh riiiight it’s a file based approach.

loren avatar

outputs.tf is then checking if the triggers dictionary is null, but it is not, it still has the id key. so the false condition is triggering and it is trying to assign the stdout key, which does not exist cuz null

RB avatar

ive made the switch to the external data source. thanks for investigating that @loren. so many tf wizards here!

loren avatar

i think this could be fixed either by changing the expression in outputs.tf or changing the expression in the contents null_resource…

loren avatar

the module should definitely be resilient to deletion of .terraform or it won’t work with remote state across teams/systems

RB avatar

i’d love to see a pr but part of me is thinking, what’s the point, if the external data source works so much better

RB avatar

im surprised he’s using the hack instead of the data source. perhaps the module can be updated to use the data source itself

RB avatar

but then whats the point of the module lol

loren avatar

the local_exec approach lets you see the output of the command in the tf log

loren avatar

the external data source masks all output

RB avatar

i think im okay with that, but that is a good limitation to know

loren avatar

a data source also executes always, where a resource has a CRUD lifecycle

loren avatar

so the idea of the null_resource approach is that it gets you stateful changes

RB avatar

ah I see! you have a great grasp on this. would you have any cycles to submit a PR? im sure a lot of ppl would love that

RB avatar

(including myself xD)

loren avatar

eh, probably not in a timely manner…

Meb avatar
Overview

BDD Test Framework focused on Security/Compliance against HashiCorp Terraform

2
RB avatar

Thanks @Meb but unsure how that helps

Meb avatar

Just sharing here a tool that would help beside the discussion..

2
Jesse avatar

Looking for opinions on securely storing secrets and other sensitive data for terraform tfvars or terragrunt yamls as part of a pipeline. Hashicorp Vault is on our roadmap for other secrets, but we’re not there yet. I am trying to find a better solution than just managing them locally and in our password managers

RB avatar

there is a tf module that spins up vault pretty easily. if you don’t want that, you can stick secrets in manually into s3 kms encrypted, and then decrypt the key using terraform before passing it in

RB avatar

for instance, the password for your rds db would still be stored in your tfstate but it wont be in your code

RB avatar

the cloudposse guy, erik, says this

https://devops.stackexchange.com/a/4628

How can I manage secrets in .tf and .tfstate?

I would like to use the Terraform MySQL Provider to keep a list of mysql users and grants handy for creating new test environments. The .tf and .tfstate files both seem to want to store the MySQL

Jesse avatar

s3 encrypted was a route we were seriously considering as well

Jesse avatar

Thanks I will check that out

np1
RB avatar

we do it here but now we’re considering migrating to vault or what erik suggested. probably gonna go with the suggestion

Jesse avatar

Yeah chamber looks great… going to look into that some more for sure!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, we love using SSM parameter store directly from terraform to read and write secrets.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This works well with chamber too, since chamber works as a cli for SSM parameter store.
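A small hedged example of that pattern (parameter names are placeholders; chamber writes to the same /service/key layout in SSM):

variable "db_endpoint" {
  type = string
}

# Read a secret that was written out of band, e.g. with: chamber write myapp db_password '...'
data "aws_ssm_parameter" "db_password" {
  name = "/myapp/db_password"
}

# Write a value back so other stacks (or chamber) can read it
resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/myapp/db_endpoint"
  type  = "String"
  value = var.db_endpoint
}

data.aws_ssm_parameter.db_password.value can then be passed to whatever resource needs the secret (keeping in mind it still ends up in the tfstate).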

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s an example of writing an SSH key to SSM parameter store https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair

cloudposse/terraform-aws-ssm-tls-ssh-key-pair

Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

latest and greatest from @johncblandii https://www.youtube.com/watch?v=VvH0F6Nrlvc

1
RB avatar
liamg/tfsec

Static analysis powered security scanner for your terraform code - liamg/tfsec

10001
RB avatar

there must be some overlap betw tfsec and tflint tho, no?

johncblandii avatar
johncblandii

Interesting topic for a new video.

wattiez.morgan avatar
wattiez.morgan

How to use tfsec and similar tools, when using terragrunt and modules, on the root folder ?

wattiez.morgan avatar
wattiez.morgan

Such tools usually only recognize tf files while terragrunt uses hcl files

RB avatar

How do people here manage their kms policies? We have many services that each have their own role and we have to manually add each role to our kms policy using terraform.

Is there a better way to do this?

RB avatar

Ideally, I’d be able to use a data source for iam roles and filter them by tag and apply them to the policy but currently roles and users can be tagged but not filtered by tag.

If anyone is also interested in that, Id appreciate if people could request their AWS TAM’s for this feature

2020-03-04

Michał Czeraszkiewicz avatar
Michał Czeraszkiewicz

Is it possible to use waf_web_acl with WAF v2?

RB avatar
resource_full_access iam document is missing a few permissions · Issue #44 · cloudposse/terraform-aws-ecr

Found a bug? Maybe our Slack Community can help. Describe the Bug If the data source resource_full_access is truly for full access, then https://github.com/cloudposse/terraform-aws-ecr/blob/master/

Tan Quach avatar
Tan Quach

hi! Seems the version for this module went from 0.7.0 to 0.3.2 recently https://github.com/cloudposse/terraform-aws-s3-bucket/releases Is that the correct next version?

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

1
MattyB avatar

I believe it’s because it’s based off of TF 0.11 support -> 0.11/master

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

MattyB avatar

0.4.0 of this module begins TF 0.12 support; the latest version with TF 0.11 support was 0.3.1

Tan Quach avatar
Tan Quach

gotcha, thanks for clarifying!

MattyB avatar

np!

Brij S avatar

if i wanted to convert

"${var.resource_name}-alb-logs-${data.aws_region.current.name}"

to use format, am I able to pass in two variables like so:

format("%s-alb-logs-%s", var.resource_name, data.aws_region.current.name)

RB avatar

yeah that looks like it would work

$ terraform console
> format("%s-%s-%s-%s-%s-%s", "my", "other", "hand", "is", "a", "sandwich")
my-other-hand-is-a-sandwich
Brij S avatar

thanks! also didnt know terraform console was a thing

np1

2020-03-05

Karoline Pauls avatar
Karoline Pauls

Is it possible to access the current S3 state bucket’s details? I want to pull remote state but since i’ve got separate buckets for dev and prod, I need to know which state to pull.

I have:

data "terraform_remote_state" "global" {
  backend = "s3"                        
                                        
  config = {                            
    bucket = var.state_bucket           
    key    = "global"           
    region = var.region                 
  }                                     
}                                       

I want:

data "terraform_remote_state" "global" {
  backend = "s3"                        
                                        
  config = {                            
    bucket = <current_provider>.bucket           
    key    = "global"            
    region = <current_provider>.region                 
  }                                     
}                                       
Karoline Pauls avatar
Karoline Pauls

https://www.terraform.io/docs/providers/aws/d/caller_identity.html i guess i can dispatch based on the account id

AWS: aws_caller_identity - Terraform by HashiCorp

Get information about the identity of the caller for the provider connection to AWS.
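A rough sketch of that dispatch (account ids and bucket names are placeholders; var.region as in the snippet above):

data "aws_caller_identity" "current" {}

locals {
  state_buckets = {
    "111111111111" = "dev-terraform-state"
    "222222222222" = "prod-terraform-state"
  }
}

data "terraform_remote_state" "global" {
  backend = "s3"

  config = {
    bucket = local.state_buckets[data.aws_caller_identity.current.account_id]
    key    = "global"
    region = var.region
  }
}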

Karoline Pauls avatar
Karoline Pauls
AWS: aws_s3_bucket - Terraform by HashiCorp

Provides details about a specific S3 bucket

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

@here I am in a situation where I need to stay at Terraform v0.11 but needed both 1.X and 2.X aws providers …. I need to keep 1.X for cloudfront distribution as using 2.X it keeps re-applying at every run so I need to stick with 1.X for cloudfront module .. when I try to add both providers I am getting the following error … any ideas how to keep both providers

No provider "aws" plugins meet the constraint "~> 1.60,~> 2.0".
Karoline Pauls avatar
Karoline Pauls

can you use a separate state and module for cloudfront?

randomy avatar
randomy

i think it can only do one version at a time per .terraform directory. are you sure you can’t fix the issue in v2 where it re-applies every time? i would focus my efforts on fixing that.

1
nian avatar

What are thoughts about using LocalStack for local/dev testing on those services it supports?

Any experience compared to the real AWS cloud?

wattiez.morgan avatar
wattiez.morgan

Nothing is really as good and efficient as testing on real infra if you can. Localstack is the way you go when you are very constrained on the infra costs and can't afford to create test environments on cloud.

2
RB avatar

Hi all.

I was looking at the cloudposse module: https://github.com/cloudposse/terraform-aws-elasticache-redis

Noticed that it does not use resource aws_elasticache_cluster but instead uses aws_elasticache_replication_group. Does anyone know the benefits of the replication group over the cluster? As far as I understand it, the replication group is a group of clusters whereas a cluster is a single instance of redis. Any additional costs with the replication group?

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

2020-03-06

MattyB avatar

Due to customer requirements I need to export RDS snapshots to S3 & eventually glacier. Can you not configure RDS to automatically send snapshots to S3? It’s either manual/console, AWS CLI, or RDS API? Is my best bet via Lambda on a timer?

Maciek Strömich avatar
Maciek Strömich

i guess because it’s a fairly new feature it’s not available in an automatic fashion

Maciek Strömich avatar
Maciek Strömich

Personally I would go with cron triggered Lambda to export those

1
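A hedged sketch of the cron-triggered Lambda wiring in Terraform (the function itself, which would call the RDS StartExportTask API, is assumed to exist elsewhere as aws_lambda_function.snapshot_export):

resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "rds-snapshot-export"
  schedule_expression = "cron(0 3 * * ? *)"
}

resource "aws_cloudwatch_event_target" "export_lambda" {
  rule = aws_cloudwatch_event_rule.nightly.name
  arn  = aws_lambda_function.snapshot_export.arn # hypothetical function defined elsewhere
}

resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.snapshot_export.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.nightly.arn
}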
Maciek Strömich avatar
Maciek Strömich

we have a similar solution to copy snapshots across regions

Maciek Strömich avatar
Maciek Strömich

works quite nicely

MattyB avatar

thanks

Brij S avatar

are there any pros and cons to doing format("%s-lambda", var.resource_name) over ${var.resource_name}-lambda

RB avatar

preference imho

randomy avatar
randomy

i can’t think of anything besides preference. i find the 2nd easier to read.

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
nrkno/terraform-provider-lastpass

Terraform Lastpass provider. Contribute to nrkno/terraform-provider-lastpass development by creating an account on GitHub.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Credit: @antonbabenko

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks pretty neat

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
anasinnyk/terraform-provider-1password

Terraform provider for 1Password. Contribute to anasinnyk/terraform-provider-1password development by creating an account on GitHub.

2020-03-07

RB avatar

ive wanted to use terraform providers like lastpass and vault but doesnt that mean you have to store your passwords in version control? and how do you lock something like that down?

Taylor avatar

haven’t had any personal experience with these providers but I think the expectation is that you’d be consuming the secrets stored in these provider targets as opposed to populating them.

the only thing you'd be expected to store in vcs in relation to your secret(s) would be a pointer or tag reference

Marcin Brański avatar
Marcin Brański

if you want to store passwords in git then I would use KMS which is supported by terraform natively

RB avatar

ah ok, gotcha!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, basically what everyone else already said. Another place to store the lastpass/1password API secret would be with SSM Parameter Store, which can then be accessed by terraform.

2020-03-09

Cloud Posse avatar
Cloud Posse
04:00:17 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 18, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

RB avatar

anyone know any terraform modules that take advantage of the new ecs autoscaling using

aws_ecs_capacity_provider https://www.terraform.io/docs/providers/aws/r/ecs_capacity_provider.html

Ref: https://aws.amazon.com/blogs/aws/aws-ecs-cluster-auto-scaling-is-now-generally-available/

AWS: aws_ecs_capacity_provider - Terraform by HashiCorp

Provides an ECS cluster capacity provider.

Igor avatar

@RB https://github.com/cloudposse/terraform-aws-ecs-alb-service-task appears to support capacity_provider_strategy configuration for aws_ecs_service, but you may need to create the capacity provider(s) yourself

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

1
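A hedged sketch of creating the capacity provider yourself and attaching it to the cluster (names and the ASG reference are placeholders):

resource "aws_ecs_capacity_provider" "asg" {
  name = "asg-capacity-provider"

  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.ecs.arn # hypothetical ASG managed elsewhere
    managed_termination_protection = "ENABLED"                     # requires instance protection on the ASG

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100
    }
  }
}

resource "aws_ecs_cluster" "this" {
  name               = "example"
  capacity_providers = [aws_ecs_capacity_provider.asg.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.asg.name
    weight            = 1
  }
}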
Brij S avatar

does terraform support wildcards for filenames?

resource "aws_s3_bucket_object" "object" {
  bucket = var.s3_bucket
  key    = "${var.resource_name}/FILENAME?"
  source = "${path.module}/FILENAME?"

  # The filemd5() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the md5() function and the file() function:
  # etag = "${md5(file("path/to/file"))}"
  etag = filemd5("path/to/file")
}

is it possible to do

"${var.resource_name}/*.zip"
RB avatar

have you tried it?

Brij S avatar

no not yet

RB avatar

try it, let’s see if it works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

My guess is it does not support wildcards. But didn’t you get this working? https://sweetops.slack.com/archives/CB6GHNLG0/p1582677639079700

I’m currently uploading files to a bucket as follows

resource "aws_s3_bucket_object" "object" {
  for_each      = fileset(var.directory, "**")
  ....
}

does anyone know of a clever way to output the ids of the files uploaded?
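One hedged way to expose the ids when using for_each (var.directory and var.s3_bucket follow the snippets above):

resource "aws_s3_bucket_object" "object" {
  for_each = fileset(var.directory, "**")

  bucket = var.s3_bucket
  key    = each.value
  source = "${var.directory}/${each.value}"
  etag   = filemd5("${var.directory}/${each.value}")
}

output "object_ids" {
  value = [for o in aws_s3_bucket_object.object : o.id]
}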

randomy avatar
randomy

I’ve made a Terraform wrapper that can generate resources in Python. This recursively uploads files to S3. https://github.com/raymondbutcher/pretf/blob/master/examples/aws/s3.tf.py

raymondbutcher/pretf

Generate Terraform code with Python. Contribute to raymondbutcher/pretf development by creating an account on GitHub.

Marcin Brański avatar
Marcin Brański

I’d vote against using s3 bucket objects in tf without super valid reason. IMO tf is good for infrastructure where s3 objects are outside of its scope.

So for example keeping those files in repository with CI trigger to push when they change. Disable delete permissions, enable versioning, etc

1
randomy avatar
randomy

I initially agreed but started to wonder if that kind of thinking is informed by Terraform traditionally being “bad” at certain things. What if the CI system dynamically creates a TF config containing S3 objects and uses TF to manage the life cycle of those objects? In some cases you might prefer these objects to be strictly managed. In other cases you might prefer a versioned bucket with old versions deleted after so many days.

marcinw avatar
marcinw

Alternative: write your own Terraform provider to fill in the gap. The API for authoring plugins is quite developer-friendly.

marcinw avatar
marcinw

Basically you can encapsulate any logic in a data resource (in read only cases) or even a regular resource when you need to handle the entire lifecycle.

marcinw avatar
marcinw

In fact you may be able to even use this existing data source - https://www.terraform.io/docs/providers/external/data_source.html

External Data Source - Terraform by HashiCorp

Executes an external program that implements a data source.

marcinw avatar
marcinw

No need for a custom provider, just do ls with the right params, split the output accordingly and use as a regular list to create your s3 resources. As to whether it’s a good idea or not,

Marcin Brański avatar
Marcin Brański


What if the CI system dynamically creates a TF config containing S3 objects and uses TF to manage the life cycle of those objects
Then you increase complexity IMO. You have repo with files and tf generated files and tf state. Where with files in repo you just have files in repo and cicd pipeline.
In fact you may be able to even use this existing data source.
I think this is the answer Brij was looking for. I was just adding my 2 cents :D

2020-03-10

2020-03-11

rbadillo avatar
rbadillo

Hi Guys,

Is anybody here having issues creating EKS Clusters using terraform ?

We are seeing this error:

module.eks_cluster.aws_eks_cluster.eks_cluster: Still creating... [11m20s elapsed]

module.eks_cluster.aws_eks_cluster.eks_cluster: Still creating... [11m30s elapsed]


Error: unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)


  on ../../../../modules/eks/eks_control_plane/main.tf line 405, in resource "aws_eks_cluster" "eks_cluster":

 405: resource "aws_eks_cluster" "eks_cluster" {

AWS just released EKS v1.15 last night and we think it may be related.

wannafly37 avatar
wannafly37

Am I the only one that finds these terraform patterns…bad? https://www.hashicorp.com/resources/evolving-infrastructure-terraform-opencredo

5 Common Terraform Patterns—Evolving Your Infrastructure with Terraform

Nicki Watt, OpenCredo’s CTO, explains how her company uses HashiCorp’s stack—and particularly Terraform—to support its customers in moving to the world of CI/CD and DevOps.

Marcin Brański avatar
Marcin Brański

What is the pattern that you use?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, I think the point of the talk was to emphasize the natural evolution an organization takes in adopting terraform. the “hard lessons” and mistakes along the way. e.g. the “terralith”. if I recall correctly, towards the end it’s more about what the current best practices are, but even those evolve.)

wannafly37 avatar
wannafly37

Good point Erik - that should be the takeaway, maybe I missed that skimming it.

wannafly37 avatar
wannafly37

Separate definition per environment? is that common practice?

randomy avatar
randomy

I haven’t read that in a long time but yeah, it doesn’t seem to talk about using workspaces or extra tooling (via wrappers or automation) to deploy 1 config to multiple environments.

randomy avatar
randomy

I think I’ve only seen 1 serious project where they copy/pasted environment definitions with the same module calls but different tfvars. But maybe I work in a bubble.

loren avatar

i think that’s kind of what we do, using terragrunt. one base module, with per-account tfvars, and per-account tfstate

randomy avatar
randomy
Keep your Terraform code DRY

Learn how to achieve DRY Terraform code and immutable infrastructure.

loren avatar

we don’t entirely follow their layout, no. but it’s close-ish…

• we have a repo per tf “capability” module, e.g. cloudtrail, guardduty, config, etc. these are pure tf and know nothing of terragrunt. these may be our own modules or a community module

• we have one repo that includes at least one tf “root” module, which implements “capability” modules, passes attributes from one module to another, and includes locals/vars and customer/implementation-specific logic

• the same repo with the “root” module(s) contains a terragrunt config per account that references the root module as the source

randomy avatar
randomy

Ok thanks. That sounds similar to what they describe except that you have the root modules and tfvars (part of the terragrunt config?) in the same repo.

randomy avatar
randomy

So if you change a root module, it affects all environments/accounts using it because they’re in the same repo as opposed to pointing to a separate repo with a git ref.

randomy avatar
randomy

Is that right?

loren avatar

yes, mostly. you can set the source in the terragrunt.hcl to point at the remote ref, per terragrunt config. so you can version that and still use a single repo. but it also means you may have a lot of places to update that source ref as you roll out changes to the root.

loren avatar

we prefer to version at the repo level, using local relative paths for the source arg, and use pull requests to review expected changes, and let the build system apply it on merge/tag

loren avatar


tfvars (part of the terragrunt config?)
yes, the tfvars are effectively part of the terragrunt config… the terragrunt config specifies which tfvar files to pass when running terraform. you can also use the terragrunt inputs block to set vars as TF_VAR_<var> environment variables, instead

loren avatar

here’s a pretty simple example… this same terragrunt.hcl config goes in every account’s base directory. the only differences from one account’s base config to the next is the values in the base.tfvars file

# Include all settings from the parent terragrunt.hcl
include {
  path = find_in_parent_folders()
}

terraform {
  source = "../../..//roots/aws/base"

  extra_arguments "base" {
    commands = get_terraform_commands_that_need_vars()

    required_var_files = [
      "${get_terragrunt_dir()}/base.tfvars",
    ]
  }
}

inputs = yamldecode(file("${get_parent_terragrunt_dir()}/base.common.tfvars.yaml"))
loren avatar

the parent terragrunt.hcl sets the backend remote state config, establishes the provider blocks, and maybe sets some other inputs (which get merged with the child block, child values override)

randomy avatar
randomy

That makes sense, thanks. I’m wondering whether we should be using remote repos or not. We currently have everything in 1 repo like you’ve described but not with Terragrunt. Having a change to the root config affect multiple environments at once is a bit annoying sometimes.

loren avatar

i like the combination of “capability” modules vs “root” modules. it’s a reasonable tradeoff in managing the ref updates. if we have an account that really needs to be different, we set that up as either a separate “root” module, or as a custom tf config within the account directory. or we use vars in a “root” module as feature flags, to turn something on/off per account.

loren avatar

separate “root” module meaning this line changes in the terragrunt.hcl:

  source = "../../..//roots/aws/base"

pointing to a different root config:

  source = "../../..//roots/aws/snowflake"
loren avatar

or if it’s truly a one-off, you can put the .tf directly in the account directory:

  source = "."
randomy avatar
randomy

Our projects tend to be more like 1 config has multiple environments (dev, uat, load, prod, etc) and we often want them to be the same, but rolled out slowly as the change gets tested in each environment. We use feature flags or if it’s tricky just not applying to an environment until it’s been tested in the previous ones.

randomy avatar
randomy

So for us, versioning seems like it’d be quite helpful. I can’t tell if you’re doing similar kinds of environments.

loren avatar

ahh, so i might have a meta root config per env, so your dev accounts would call the dev root config, and that dev root would point to a specific version of the underlying root module…?

randomy avatar
randomy

I don’t quite get it. That sounds like an extra hop when you could point directly to the specific version

loren avatar

just trying to manage where the version is set, to minimize the places you need to touch when it needs to be updated. if you only have one config referencing the version, then it’s no benefit. when you have 50, it gets annoying and error-prone to change it 50 places

randomy avatar
randomy

Oh, you’re talking about updating 50 dev environments at once? We’d generally only want to update 1.

loren avatar

updating 1 at a time when there are 50+ to touch takes way too long. the dev/user can do what they want in the account. we’re just going to manage the baseline resources the customer requires for networking, security, audit, and compliance. those are supposed to be the same across all accounts for a given environment-type…

loren avatar

if an account gets an approved deviation, we expose that via a var and include it in their tfvars

randomy avatar
randomy

That makes sense. We’re talking about very different types of projects. Mine are typically projects for 1 customer’s full AWS infrastructure and they retain ownership of their TF code and state files. This is everything all the way down to web servers, databases, lambdas, specific to their needs. We can’t have one repo with resources for multiple customers in this case.

loren avatar

that makes sense too! technically we do this per each customer, where the customers just tend to have a lot of teams working on their own projects/apps, and the customer wants some central visibility and control

randomy avatar
randomy

do you use terragrunt apply-all? i’ve never felt the need for something like that, but it sounds good for what you’re doing

loren avatar

it would actually be the only good use case i can think of for apply-all, also… but no, we’ve been using terragrunt a long time now and earlier versions were not handling async output very well. literally interleaved individual characters from the stdout/stderr streams. totally useless. so we wrote our own async subprocess executor/logger in python, and it calls terragrunt for us. i like it better this way, mostly, since we can prefix all the log messages with some context, so we can then filter on which config the message is associated to

Marcin Brański avatar
Marcin Brański

I did exactly the same. Terragrunt apply-all is a mess. I wrote a wrapper around terragrunt to manage changes. Updating tens of modules in dozens of accounts can quickly get out of sync.

loren avatar

an async command runner with options for streaming stderr/stdout with context (and without deadlocks), and that handles errexit options well, would be a cool python library. didn’t see anything 2 years ago, but maybe something exists now.

randomy avatar
randomy

It sounds like Pretf will need an apply-all feature, or some way to manage multiple TF stacks. If/when I get around to it, I’ll try to make it generic.

randomy avatar
randomy

Thanks for all the details, it’s good to see how other people use TF - quite differently sometimes.

randomy avatar
randomy

My mad scientist idea is to use terraform to manage terraform projects in other directories but I’m not sure there’s a way to bubble up the plan outputs without the use of a custom provider.

randomy avatar
randomy

If a custom provider is needed, it could be a generic external process CRUD provider

loren avatar

hmm, there is already a generic REST provider, for creating tf providers for arbitrary REST APIs…

randomy avatar
randomy

I was just thinking that but I’m in the kitchen so didn’t check

randomy avatar
randomy

Pretf could run a http server before running terraform

randomy avatar
randomy

loren avatar
Mastercard/terraform-provider-restapi

A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi

loren avatar
dikhan/terraform-provider-openapi

OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file) - dikhan/terraform-provider-openapi

randomy avatar
randomy

Hmm would prefer a built in provider but can’t see anything obvious. Could maybe hijack some other protocol but that’s not ideal.

loren avatar

yeah, neither has been accepted into terraform-providers yet. definitely annoying.

loren avatar

i don’t see issues for that on either yet… maybe open an issue to see if they have interest in getting it into the upstream terraform provider registry?

loren avatar

looks like hashicorp is trying to make it easier to get community providers into the registry… https://www.hashicorp.com/blog/announcing-providers-in-the-new-terraform-registry/

Announcing Providers in the New Terraform Registry

Today, we’re excited to announce the beginnings of a new direction for the Registry. We’re renaming it as the Terraform Registry and expanding it to include Terraform providers as …

loren avatar


For now, you will only be able to find providers that we here at HashiCorp maintain. Today you can still find excellent providers from our partners and community at terraform.io. Eventually, all of these will be available on the Registry.

randomy avatar
randomy

hmm, hopefully soon

randomy avatar
randomy

i could prototype with the mastercard one to see if it’s viable

randomy avatar
randomy

i have a feeling that the plans will look awful

1
randomy avatar
randomy

unfortunately (but it makes sense) it doesn’t actually talk to the rest API during the plan phase, so there’s no way to show nested plans in the plan. it only shows the arguments passed into the resource, which will just be the path to the nested tf project

loren avatar

well that seems… un-useful…

randomy avatar
randomy

I’ve given up on this for now. I don’t have a personal need for it, and it’s difficult. I think a good implementation for running it locally would have an interface kind of like tig. You’d use arrow keys to browse through the different plans. Lots of details to figure out, it would take a lot of work to make it good.

loren avatar

makes sense. focus on doing one thing good. if someone sees a use case for the -all piece, let them figure it out

2020-03-12

Jason Huling avatar
Jason Huling

Hello! I’m currently using the terraform-aws-eks-cluster and just ran into an issue with auth.tf and the null_resource.apply_configmap_auth resource returning the following:

error: You must be logged in to the server (the server has asked for the client to provide credentials)

Previously I had used aws eks update-kubeconfig and specified my profile to use with --profile. I recently aliased my contexts which caused this module to create a new one, and it also updated my user for that cluster which removed the AWS_PROFILE environment variable. I was able to correct this by setting the --profile in aws_eks_update_kubeconfig_additional_arguments , but I’m also setting my profile within the aws provider, so it seems a little redundant.

So, my question for this group! Can the auth.tf file be refactored to use the kubernetes provider directly, and therefore also inherit the aws provider settings? For example I’m using the following in other scenarios to update k8s resources where local.eks_cluster is the output of the terraform-aws-eks-cluster module:

# Authenticate to the EKS Cluster
data "aws_eks_cluster_auth" "eks_cluster" {
  name = local.eks_cluster.eks_cluster_id
}

# Connect to kubernetes
provider "kubernetes" {
  host                   = local.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(local.eks_cluster.eks_cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.eks_cluster.token
  load_config_file       = false
}

Can this not be used with kubernetes_config_map or am I overlooking something? Maybe because EKS creates the aws-auth configmap so it would need to be imported for terraform to update it?

Also, if this should be in a GitHub issue I can open one, didn’t feel like a Bug or Feature though so I came here . I am also willing to work on this refactor if there isn’t a known reason why it wouldn’t work.

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Kubernetes: kubernetes_config_map - Terraform by HashiCorp

The resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we spent some time on implementing the config map using kubernetes provider, but had some issues with that. But that was some time ago. Yes, we need (and are planning) to implement it with kubernetes provider. No ETA, if you’d like to help, it would be appreciated

Jason Huling avatar
Jason Huling

Definitely open to help . Are there any github issues, branches, or forks with previous attempts or documenting the issues you guys ran into?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, so this has been fixed in the upstream terraform provider so it’s now possible to use the configmap resource

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We haven’t had a chance to implement it. It’s not been scheduled to be fixed yet (by us) but we’ll try to get to it in an upcoming customer engagement.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when we tried to do it, it wasn’t any issue with the upstream provider that we faced. It was that we could not (correctly) dynamically construct the entries in the config map from additional_users and additional_roles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

although we did not spend much time on that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The gist of it is in here: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/355 (the other popular EKS module)

Use kubernetes provider to manage aws auth by stijndehaes · Pull Request #355 · terraform-aws-modules/terraform-aws-eks

PR o'clock Description This changes the aws auth configmap to be managed by the kubernetes provider. This is a breaking change as already created aws auth configmaps will have to be moved into …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

EKS cluster is not ready before applying the config map with kubernetes provider is another issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we implemented a few attempts in our current script

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

something similar needs to be implemented for k8s provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, they have a workaround for that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform-aws-modules/terraform-aws-eks

A Terraform module to create an Elastic Kubernetes (EKS) cluster and associated worker instances on AWS. - terraform-aws-modules/terraform-aws-eks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform-aws-modules/terraform-aws-eks

A Terraform module to create an Elastic Kubernetes (EKS) cluster and associated worker instances on AWS. - terraform-aws-modules/terraform-aws-eks

Jason Huling avatar
Jason Huling

I’m guessing that wait_for_cluster workaround wouldn’t work if public access is turned off and you are outside the network (tf cloud)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

another thing we need is to be able to assume roles before applying the config map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s also currently implemented in the bash script

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

at least for us, it’s important and needs to be accommodated in the k8s provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I’m guessing that wait_for_cluster workaround wouldn’t work if public access is turned off and you are outside the network (tf cloud)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Jason Huling avatar
Jason Huling

For a cluster without public access?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

He means where terraform cannot access the kubernetes API. (but our current solution has the same limitation, but works because we typically run it with atlantis which has VPC connectivity)

Jason Huling avatar
Jason Huling

yeah thats a good point… doesn’t matter if I can identify when the cluster is available if I cant connect to do stuff on it anyways lol

Jason Huling avatar
Jason Huling

so we can assume roles with the aws provider directly:

provider "aws" {
  assume_role {
    role_arn     = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
    session_name = "SESSION_NAME"
    external_id  = "EXTERNAL_ID"
  }
}

I havent dealt with multiple aws providers in the same module, but it seems like it would be preferred to have the module accept an aliased aws provider that will be used to authenticate the k8s provider, instead of passing aws_cli_assume_role_arn etc and assuming the role directly in the module?
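A sketch of the aliased-provider mechanism Jason is describing (the alias, role ARN, and module source are placeholders; the module would still need to be refactored internally to use the kubernetes provider):

provider "aws" {
  alias = "eks_auth"

  assume_role {
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
  }
}

module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws" # illustrative source

  providers = {
    aws = aws.eks_auth
  }

  # ... remaining module inputs
}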

Jason Huling avatar
Jason Huling

and I guess to clarify… was the intent of the original implementation to effectively allow 1 method to authenticate for creating the cluster (through the provider) and another method to authenticate to configure it? (cli commands and aws_cli* args?)

Jason Huling avatar
Jason Huling

or was that a workaround, similar to how I used the profile in my aws provider, but then had to pass it to aws_eks_update_kubeconfig_additional_arguments ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, in some cases we create a cluster using one IAM role, but then auth to kubernetes using another IAM role (that might not be for everybody, but we have a lot of use-cases for it and are using it now)

Jason Huling avatar
Jason Huling

ok good to know

Zachary Loeber avatar
Zachary Loeber

I believe terraform was written by the devil to punish intrepid devops engineers from trying to be too smart

Zachary Loeber avatar
Zachary Loeber

that is all, just my theory, nothing constructive at all here….

MattyB avatar

What’s your take on cloudformation?

9
Zachary Loeber avatar
Zachary Loeber

I don’t use it so I’ve no opinion at all but I understand it to be akin to Azure ARM templates or something. So it probably works great if all you ever deploy is AWS resources and don’t have to worry too much about complex multi-team environments (and resulting backend states).

Zachary Loeber avatar
Zachary Loeber

Working with terraform makes me feel like either I have to think like a moron and completely simplify my designs and expectations of the product or conversely, that I actually am a moron and should know that there is a completely undocumented template_file resource (as opposed to a data source) with the exact same syntax but different behavior….

Marcin Brański avatar
Marcin Brański

I know terraform has its flaws but it’s really good for things that it’s designed to do. With tf11 I had many problems but tf12 solved many of them. Maybe you’re just trying to do something that terraform shouldn’t do?

Zachary Loeber avatar
Zachary Loeber

Nope, I’m doing things it was exactly designed to do. It just cannot handle anything beyond the most basic of tasks without needing to do asinine things.

Zachary Loeber avatar
Zachary Loeber

Additionally, terraform is only as good as the providers you use. If you are using the AWS provider only then you are blessed with a mature implementation that can usually get done all that you need to do

Zachary Loeber avatar
Zachary Loeber

if you are stuck on Azure and need to do anything ‘new’ you will always be riding the bleeding edge and all the issues that entails.

Zachary Loeber avatar
Zachary Loeber

Additionally, 2+ year old bugs and issues abound: This one for instance: https://github.com/hashicorp/terraform/issues/17034

Data Resource Lifecycle Adjustments · Issue #17034 · hashicorp/terraform

Background Info Back in #6598 we introduced the idea of data sources, allowing us to model reading data from external sources as a first-class concept. This has generally been a successful addition…

1
ismail avatar

Hi all, Can someone please tell me how to accept/continue with AWS Marketplace images in Terraform?

2020-03-13

ikar avatar

hi @ismail, I guess you’re looking for this: https://www.terraform.io/docs/providers/aws/d/ami.html

AWS: aws_ami - Terraform by HashiCorp

Get information on a Amazon Machine Image (AMI).

ikar avatar

you use it for finding AMI you want to use and later use for defining e.g. EC2 instance

ikar avatar

this can be useful if you start with data sources: https://www.terraform.io/docs/configuration/data-sources.html

Data Sources - Configuration Language - Terraform by HashiCorp

Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration.

ismail avatar

umm… I am able to find the image… But the problem is with Marketplace images… When I try to create an instance it errors with

Please accept license for marketplace images
ismail avatar

Anyways… I am not facing it anymore after accepting it on UI

loren avatar

right, that’s how it works. there is no api for accepting it. you have to use the console to accept the license. then you can use the ami via any api/console interface

loren avatar

also, the acceptance is good for 1 year

1
ismail avatar

oh… got it.. Thanks!

ikar avatar

oh okay

2020-03-15

Matt Gowie avatar
Matt Gowie

Hey @Cloudposse folks — Is it suggested not to use cloudposse/terraform-aws-rds-replica, and if I need an RDS replica of my primary, to just use cloudposse/terraform-aws-rds? Asking as the replica repo hasn’t been touched in 12 months, so I’m figuring that is the case but also wanted to confirm.

Matt Gowie avatar
Matt Gowie

Hm, replicate_source_db isn’t provided as a variable on terraform-aws-rds so I’m assuming not. Seems like the replica repo hasn’t been maintained in a while, which is reasonable since Aurora…

Anyway, I believe I got my answer. I’ll have to dig into the replica code and see if that is reusable.

Matt Gowie avatar
Matt Gowie

To go full circle on this… I ended up using a fork (https://github.com/techfishio/terraform-aws-rds-replica) which is updated for tf v0.12 and that worked out great.

techfishio/terraform-aws-rds-replica

Terraform module that provisions an RDS replica. Contribute to techfishio/terraform-aws-rds-replica development by creating an account on GitHub.

2020-03-16

Tim avatar

Hey, could someone there quickly review and release https://github.com/cloudposse/terraform-aws-dynamodb/pull/52 ?

allow to add additional attributes and tags to autoscaler by etwillbefine · Pull Request #52 · cloudposse/terraform-aws-dynamodb

what This is to avoid adding a region prefix to dynamodb tables because the autoscaler IAM role requires additional region indicator in its names why IAM policies and roles need to be unique Dynam…

Cloud Posse avatar
Cloud Posse
04:00:09 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 25, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Karthik Sadhasivam avatar
Karthik Sadhasivam

Hi guys, I am new to this channel and trying to get some advice on rolling updates of EC2 instances in ASGs. I am trying to use this module https://registry.terraform.io/modules/cloudposse/ec2-autoscale-group/aws/0.4.0 and seeing that every time I update the userdata or instance type it just creates a new version of the launch template but doesn’t do any rolling update on the ASG. Are there any workarounds available, as discussed in https://github.com/hashicorp/terraform/issues/1552?

aws: Allow rolling updates for ASGs · Issue #1552 · hashicorp/terraform

Once #1109 is fixed, I'd like to be able to use Terraform to actually roll out the updated launch configuration and do it carefully. Whoever decides to not roll out the update and change the LC…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, as you discovered this is a terraform / AWS API limitation.

aws: Allow rolling updates for ASGs · Issue #1552 · hashicorp/terraform

Once #1109 is fixed, I'd like to be able to use Terraform to actually roll out the updated launch configuration and do it carefully. Whoever decides to not roll out the update and change the LC…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Using Terraform for zero downtime updates of an Auto Scaling group in AWS

A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(but we haven’t implemented it in our modules)

loren avatar

i’ll second terraform+cloudformation for ec2 instances and ASGs…

Karthik Sadhasivam avatar
Karthik Sadhasivam

thanks

Karthik Sadhasivam avatar
Karthik Sadhasivam

@loren Did you come across any working examples for CF+TF especially with launch templates?

loren avatar

we define the launch template in CF also. TF just manages the lifecycle of the CF stack

loren avatar

plus this way we can, if we want, just use CF directly…
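
For anyone curious what that shape looks like, here is a stripped-down, untested sketch (the stack name, template contents and variables are illustrative only; the real module linked below is far more complete):

resource "aws_cloudformation_stack" "asg" {
  name = "app-asg" # illustrative

  template_body = <<-EOT
    Resources:
      LaunchTemplate:
        Type: AWS::EC2::LaunchTemplate
        Properties:
          LaunchTemplateData:
            ImageId: ${var.ami_id}
            InstanceType: t3.small
      AutoScalingGroup:
        Type: AWS::AutoScaling::AutoScalingGroup
        UpdatePolicy:
          AutoScalingRollingUpdate:
            MinInstancesInService: 1
        Properties:
          MinSize: 1
          MaxSize: 3
          VPCZoneIdentifier: ${jsonencode(var.subnet_ids)}
          LaunchTemplate:
            LaunchTemplateId: !Ref LaunchTemplate
            Version: !GetAtt LaunchTemplate.LatestVersionNumber
  EOT
}

Changing the AMI creates a new launch template version inside the stack, and CloudFormation’s UpdatePolicy performs the rolling replacement that Terraform on its own can’t.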

loren avatar

here’s a real module we manage this way, probably too complicated as an example, but it’s all i have handy… https://github.com/plus3it/terraform-aws-watchmaker/tree/master/modules/lx-autoscale

plus3it/terraform-aws-watchmaker

Terraform module for Watchmaker. Contribute to plus3it/terraform-aws-watchmaker development by creating an account on GitHub.

Karthik Sadhasivam avatar
Karthik Sadhasivam

perfect

Karthik Sadhasivam avatar
Karthik Sadhasivam

thanks

Karthik Sadhasivam avatar
Karthik Sadhasivam

I will look into that

loren avatar

one thing you’ll see are the “toggle” variables… one just changes the cfn init metadata, which can be used to modify a running instance. i don’t recommend this, generally, as it does not integrate with the CF signaling, so errors are not caught well.

the other changes the userdata in the launchconfig, which forces a new launchconfig, which triggers the CF ASG update policy (launching a new ASG+LC, waiting for success, then destroying the old ASG/LC)

randomy avatar
randomy
claranet/terraform-aws-asg-instance-replacement

Terraform module for AWS ASG instance replacement. Contribute to claranet/terraform-aws-asg-instance-replacement development by creating an account on GitHub.

1
loren avatar

personally i like blue/green approaches with 2 ASGs better than rolling updates with 1 ASG, but happy to see any advancement in pure TF here…

randomy avatar
randomy

It is not pure, it relies on a lambda func

randomy avatar
randomy

I’d probably try the CFN approach if starting from scratch today. The instance replacement module works pretty well though. The instance replacement happens outside of the Terraform process which may be good or bad depending on the situation.

loren avatar

ahh i see now. the TF+CFN approach does work pretty well

Zach avatar

Another approach, if you are using the ELB health checks, is to tie the name of the ASG to the AMI, launch config, or launch template version, etc. We use the build date of the AMI and insert it into the name of the ASG; whenever a new AMI is part of the terraform apply, our ASG module creates a new ASG, waits for the instances to be ‘in service’, puts them in the existing Target Group and then destroys the old ASG

randomy avatar
randomy

That’s interesting. Do you know what exactly is terminating the instances?

Karthik Sadhasivam avatar
Karthik Sadhasivam

as long as the ASG name changes every time a new version of the launch template gets created, TF creates the new ASG using lifecycle { create_before_destroy = true }

Karthik Sadhasivam avatar
Karthik Sadhasivam

the downside is you need enough capacity available in your account since all the old instances will be deleted only when all the new instances are online.
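
A rough sketch of that pattern, assuming ELB health checks and an existing target group (the names, sizes and variables are illustrative):

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.small"
}

resource "aws_autoscaling_group" "app" {
  # Embed something that changes with every new AMI / template version in the name,
  # so a change forces a brand-new ASG rather than an in-place update.
  name                = "app-${aws_launch_template.app.latest_version}"
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = var.subnet_ids
  target_group_arns   = [var.target_group_arn]
  health_check_type   = "ELB"

  # Wait for the new instances to be in service before the old ASG is destroyed.
  min_elb_capacity = 2

  launch_template {
    id      = aws_launch_template.app.id
    version = aws_launch_template.app.latest_version
  }

  lifecycle {
    create_before_destroy = true
  }
}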

Zach avatar

True, we’re a fairly small platform so that’s not an impact to us for now. Cross that bridge when we come to it.

randomy avatar
randomy

https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#force_delete
Normally, Terraform drains all the instances before deleting the group.
That’s the part i was missing. TF terminates them.

loren avatar

nice, thanks @Zach and @Karthik Sadhasivam!

Alex Siegman avatar
Alex Siegman

Looks like it was well covered, but for some anecdotal observations: I scripted in python/boto3 a way to rolling-update instances in an ASG using a strategy where I double the number of instances, set the termination policy to oldest, and then halve it again. It is reliable, but it is also pretty slow. I would recommend not going that route. A blue/green style has a definite cutover and removal of old instances. ASGs can take a while to terminate old instances. All solvable problems, of course. Mostly depends what you’re optimizing for.

As for doing this in terraform, others have covered it better. The infrastructures I last did the rolling updates in were both CloudFormation ones.

Zach avatar

If you’re doing the actual ‘rolling’ in the same ASG, we found that you need to do 2n + 1 (where n is desired ending capacity) instance launches, otherwise AWS will sometimes leave up an older instance in an effort to ‘balance’ the AZs

1
loren avatar

you can also disable az balancing…

loren avatar

Suspend and then resume one or more of the scaling processes for your Auto Scaling group.

Marcin Brański avatar
Marcin Brański

azrebalance is a nasty feature

Marcin Brański avatar
Marcin Brański

My approach with boto/py doing ECS/K8S rollout was to detach instances, drain ECS, wait for ASG to spin new ones, wait for draining to complete, terminate detached nodes. This way old nodes were safe from azrebalance termination.

Termination policy oldestinstance will have desired effect only when instances across AZs are balanced.
When you customize the termination policy, if one Availability Zone has more instances than the other Availability Zones that are used by the group, your termination policy is applied to the instances from the imbalanced Availability Zone. If the Availability Zones used by the group are balanced, the termination policy is applied across all of the Availability Zones for the group.

loren avatar

interesting. we haven’t actually noticed that behavior, but probably because that particular group scales all the way down to 1 overnight and back up in the morning. iirc, without oldestinstance it would keep the oldest one, rather than rolling it over the way we expected

loren avatar

so with that note, i’d expect what is actually happening is the unbalanced AZs each terminate the oldest node, until the AZs are balanced, and then it proceeds to terminate the oldest node across AZs until only one node is left

Marcin Brański avatar
Marcin Brański

What if the unbalanced AZ doesn’t have old instances? I’ll try to better explain the possible issue. If AZs are unbalanced, then the termination policy is applied to the AZ with the most instances running.

Hypothetical situation, but it happened way too often for me in the past: AZa: 1 old instance, AZb: 1 old instance. Scale out x2: AZa: 1 old instance, AZb: 1 old instance, AZc: 2 new instances. Scale in x2: one instance is terminated from AZc, and then the default termination policy terminates one old instance. Not a desirable outcome if you wanted to get rid of all the old instances.

Marcin Brański avatar
Marcin Brański

azrebalance sometimes killed a node that hadn’t yet drained; that’s why I used the detaching approach. Different but also effective would be using scale-in protection, but I think it’s more complicated.

loren avatar

well, looks like scale out got dorked first in that scenario… the first scale out should have put something in AZc

2020-03-17

jeffrey avatar
jeffrey

Hi all, have any of you handled the case of an entire region going down and not having access to your remote backend (such as Amazon S3)? If disaster recovery is required, such as spinning up resources in a different region, I imagine you’d probably want to have terraform state replicated

Marcin Brański avatar
Marcin Brański

Not sure if I got it right, but if you have DR infrastructure it should have its own tf state.

jeffrey avatar
jeffrey

got it, that makes sense

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Do you know some folks that want free Terraform training? Are you in Southern California? Check out this free workshop from HashiCorp:

https://events.hashicorp.com/workshops/socalterraform

HashiCorp Southern California Virtual Terraform Workshop

Join local practitioners for an overview of the HashiCorp toolset and a hands-on virtual workshop for Terraform on Thursday, April 2nd. events.hashicorp.com/workshops/socalterraform

1
btai avatar

Excited to announce an official Terraform Operator for Kubernetes (alpha today). This works by automatically applying and keeping in sync a TF workspace in TF Cloud (only requires the free tier but works with all). https://www.hashicorp.com/blog/creating-workspaces-with-the-hashicorp-terraform-operator-for-kubernetes/ Demo video: https://www.youtube.com/watch?v=4MJF-enC3Us&feature=emb_title

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

So now I can YAML my HCL!!!

Excited to announce an official Terraform Operator for Kubernetes (alpha today). This works by automatically applying and keeping in sync a TF workspace in TF Cloud (only requires the free tier but works with all). https://www.hashicorp.com/blog/creating-workspaces-with-the-hashicorp-terraform-operator-for-kubernetes/ Demo video: https://www.youtube.com/watch?v=4MJF-enC3Us&feature=emb_title

1
David avatar

Can I multiline with the ternary conditional operator?

Marcin Brański avatar
Marcin Brański

nope

1

2020-03-18

ismail avatar

Hey Team, Does cloudposse have a terraform module for WAF for ALB?

maarten avatar
maarten

I don’t know, but I made one specifically for WAF: https://github.com/Flaconi/terraform-aws-waf-acl-rules

Flaconi/terraform-aws-waf-acl-rules

Module for simple management of WAF Rules and the ACL - Flaconi/terraform-aws-waf-acl-rules

ismail avatar

Cool. Thanks!

Flaconi/terraform-aws-waf-acl-rules

Module for simple management of WAF Rules and the ACL - Flaconi/terraform-aws-waf-acl-rules

kgib avatar

can anyone help with this

Error: Either `number_cache_clusters` or `cluster_mode` must be set

  on .terraform/modules/redis.elasticache/main.tf line 81, in resource "aws_elasticache_replication_group" "default":
  81: resource "aws_elasticache_replication_group" "default" {
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

see the fixtures for example settings

kgib avatar

having a lot of trouble getting this module to work https://github.com/cloudposse/terraform-aws-elasticache-redis

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

Perry Hoekstra avatar
Perry Hoekstra

Quick question: I am attempting to use the terraform-aws-dynamodb module but I am getting an Unsupported Terraform Core version error. I am on Terraform 0.12.9, are the modules reasonably up to date on 0.12.x or just 0.12.0?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should work with any TF 0.12 version

Perry Hoekstra avatar
Perry Hoekstra

When I do a terraform init, I get the error: Unsupported Terraform Core version with an explanation of: This configuration does not support Terraform version 0.12.9.

Perry Hoekstra avatar
Perry Hoekstra

Looking at the versions.tf, could I be bumping into a required_providers issue or would that be a different error message?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It says that all 0.12 versions are allowed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

So not sure what the issue is

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can copy the module locally and update the provider version or remove it completely and test

rohit avatar

I am not sure if this is the right channel to ask my question but i wanted to know how people are deploying lambda functions to different stages in AWS

rohit avatar

do you deploy your infrastructure code and app code separately ?

Zach avatar

We tried to do it using terraform and it was painful and we gave up on it - I’d also be curious to hear what others do. It seems like the preferred method is to use one of the AWS or other framework CLIs to do the actual code deploy…

Joe Niland avatar
Joe Niland

On the current project I’m doing, we’re deploying shared/backing infrastructure manually (not via CI/CD) with Terraform and then deploying Lambda projects using the Serverless framework via CI/CD.

Joe Hosteny avatar
Joe Hosteny

I’ve moved to building the lambdas through a normal pipeline in concourse, letting it upload the function zipfile and deploy it.

Joe Hosteny avatar
Joe Hosteny

We’re trying to lift the application code out of the TF dirs, since we don’t have a high level of automation there yet. I think it surfaces it better for other devs as well. I’ve found the lambda deploy process through TF to be spotty (at least with Python)

randomy avatar
randomy

I’ve mentioned this previously, but since you asked… I made https://github.com/raymondbutcher/terraform-aws-lambda-builder and before that https://github.com/claranet/terraform-aws-lambda which should make it easy enough. Slightly different approaches in them. They are good for infrastructure Lambda functions. They don’t yet have a good story for promoting a built Lambda package from one env to the next, although I’m not sure if Serverless etc do that either. That Concourse approach is probably on the right track if you’re building a zip once and pushing it to multiple environments.

raymondbutcher/terraform-aws-lambda-builder

Terraform module to build Lambda functions in Lambda - raymondbutcher/terraform-aws-lambda-builder

claranet/terraform-aws-lambda

Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.

1
rohit avatar

@randomy yes, you already mentioned about it. Thanks !!

Brij S avatar

has anyone tried to use https://www.terraform.io/docs/providers/aws/#ignore_tag_prefixes ? I’m adding the following

provider "aws" {
  region = "us-east-1"
  ignore_tag_prefixes = ["kubernetes.io/*"]
}

but when I run terraform apply I’m still getting changes to the tags with those prefixes, like so:

      ~ tags                             = {
            "Environment"                                           = "dev"
          - "kubernetes.io/cluster/12-Cluster" = "shared" -> null
          ..............
        }
    }
Provider: AWS - Terraform by HashiCorp

The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS. The provider needs to be configured with the proper credentials before it can be used.

loren avatar

i don’t think it globs… it’s a literal prefix. try removing the *?
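
i.e. something like this (an untested sketch of the same provider block without the glob):

provider "aws" {
  region              = "us-east-1"
  ignore_tag_prefixes = ["kubernetes.io/"]
}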

Provider: AWS - Terraform by HashiCorp

The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS. The provider needs to be configured with the proper credentials before it can be used.

Brij S avatar

was just about to try that

Brij S avatar

hah, that was it

loren avatar

nice!

loren avatar

cool feature! exact same use case

Brij S avatar

i just found out about it and didn’t have a clever way to solve it… until now!

randomy avatar
randomy

It’s a weirdly specific generic feature

2020-03-19

xluffy avatar

Hi all. I have a question about terraform-aws-vpc-peering-multi-account. In the example, I need to input requester_aws_assume_role_arn and accepter_aws_assume_role_arn. I reviewed the code of this module but don’t see anything for creating the two assume roles. How do I get/create these roles?

xluffy avatar

Another question:

 requester_aws_assume_role_arn             = "arn:aws:iam::XXXXXXXX:role/cross-account-vpc-peering-test"

So XXXXXXXX is the AWS account ID of the accepter?

randomy avatar
randomy

I haven’t used this module but my initial impression is no, that would be the account of the requester

randomy avatar
randomy

There are policies shown in https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account#usage but they’re a bit hidden, you need to click to expand them

cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account

xluffy avatar

But do you know how to create a role for cross-account peering? I checked their GitHub but don’t see a tf module for creating this role.

randomy avatar
randomy

it might be worth creating a module for it, as it’s always going to be the same if you’re creating roles just for this, but i think it’s exactly like this example but with different content for the policies https://www.terraform.io/docs/providers/aws/r/iam_role_policy.html#example-usage

randomy avatar
randomy

populate assume_role_policy with the trust policy from the module, and populate policy with the IAM policy from the module
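
Roughly like this, assuming a role is being created just for the peering (everything here is a placeholder; the exact trust and permission statements should be copied from the module README):

resource "aws_iam_role" "vpc_peering_accepter" {
  name = "cross-account-vpc-peering"

  # Trust policy: allow the requester account to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::REQUESTER_ACCOUNT_ID:root" }
    }]
  })
}

resource "aws_iam_role_policy" "vpc_peering_accepter" {
  name = "vpc-peering"
  role = aws_iam_role.vpc_peering_accepter.id

  # Placeholder permissions for accepting the peering and managing routes
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ec2:AcceptVpcPeeringConnection",
        "ec2:DescribeVpcPeeringConnections",
        "ec2:DescribeVpcs",
        "ec2:DescribeRouteTables",
        "ec2:CreateRoute",
        "ec2:DeleteRoute"
      ]
      Resource = "*"
    }]
  })
}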

randomy avatar
randomy

i think so anyway. like i said, i’ve never used this module :)

xluffy avatar

got it, thanks @randomy

1
Michał Czeraszkiewicz avatar
Michał Czeraszkiewicz

Hi, did anyone experience issues with the EKS cluster module (https://github.com/cloudposse/terraform-aws-eks-cluster/) in a multi-worker scenario? Described the issue here: https://github.com/cloudposse/terraform-aws-eks-cluster/issues/55

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

multi-worker EKS not working · Issue #55 · cloudposse/terraform-aws-eks-cluster

Describe the Bug Trying to setup a multi-worker EKS cluster. Following the example from here. And the “Module usage with two worker groups” section from the documentation. Modified the ma…

Perry Hoekstra avatar
Perry Hoekstra

Question: Is there a Cloudposse module for the Terraform aws_lamba_permission (https://www.terraform.io/docs/providers/aws/r/lambda_permission.html)? I looked through the repositories and did not see anything.

AWS: aws_lambda_permission - Terraform by HashiCorp

Creates a Lambda function permission.

randomy avatar
randomy

There are no hard and fast rules but I think this fits the case of when to not use a module. https://www.terraform.io/docs/modules/index.html#when-to-write-a-module

Creating Modules - Terraform by HashiCorp

A module is a container for multiple resources that are used together.

Matt Gowie avatar
Matt Gowie

@Perry Hoekstra Yeah, agreed with @randomy. I would create your own internal (inside your repo) module for lambda which includes creating the lambda function (one resource) and then also includes the lambda_permission resource for your specific use-case.
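
For reference, the resource on its own is only a few lines (the function and API names here are hypothetical), which is why wrapping it in a standalone module rarely pays off:

resource "aws_lambda_permission" "allow_api_gateway" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.example.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.example.execution_arn}/*/*"
}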

Perry Hoekstra avatar
Perry Hoekstra

Gotcha, thanks for the hint.

sheldonh avatar
sheldonh

Would you say this sounds correct?

If you are adopting Terraform with few developer-focused teammates, then Terraform Cloud is going to be the easiest way to centralize both automated deploys and state.

If you have more mature development practices, then using Azure DevOps pipelines, Jenkins pipelines, etc. can give more control, at the cost of more complexity.

If there is no mature pipeline practice that is easy to standardize across the team at this time, then Terraform Cloud is going to force “pure” terraform development without relying on wrapper scripts, terragrunt and other tooling, resulting in simpler plans.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, I think Terraform Cloud will yield the best developer experience out-of-the-box with the least custom tooling (wrapper scripts, terragrunt, etc) and with the least infrastructure in place to support it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii has some training videos he’s been working on for terraform cloud - that might help your team

RB avatar

ehhhh, you’ll still need a linter like tflint, you’ll still need terragrunt if you want to DRY up your code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

IMO, you don’t need terragrunt to be dry, even with terraform cloud.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it requires a different project folder structure though than terragrunt projects.

sheldonh avatar
sheldonh

The key to this is simplicity. I personally am comfortable with concepts like terragrunt, scripting my code more etc with linting etc. However, the bare bones nature of terraform cloud forces you to not rely on make files and var files etc

RB avatar

IMHO, terraform cloud is more expensive than it’s worth

sheldonh avatar
sheldonh

Right now we have terraform cloud (me), some very make file oriented terraform jenkins jobs that would need much refactoring to be easily used in terraform cloud for example.

RB avatar

but if you’re starting new, i’d POC it

sheldonh avatar
sheldonh

I’m already using it. It’s more about trying to help ease of adoption.

sheldonh avatar
sheldonh

I personally would like Azure DevOps multistage pipelines or something, but I’m afraid if everyone is already struggling to even setup a cloud pipeline that that approach will limit others contributing

sheldonh avatar
sheldonh

and yes the pricing model needs adjustment for sure

RB avatar

look into atlantis too if you get a chance. hashicorp bought the lead developer of it. and it’s pretty nifty for pipelines

sheldonh avatar
sheldonh

My impression is that Atlantis is great, but will basically be on life support in the future as terraform cloud is the way they want you to go

mfridh avatar

I find the diminishing returns are quite early and high when it comes to infrastructure which is NOT repeated en masse. What do you think?

sheldonh avatar
sheldonh

Let me thread this to to avoid noise…

sheldonh avatar
sheldonh

Define enmasse?

mfridh avatar

Ie, I’d be perfectly fine having EVERYTHING Atlantis if I had, say 30 identical “things of isolated infrastructure” (whatever that may be!) and I could easily do my testing in “development” and once that is done I can do the pull request for rolling it out for real.

mfridh avatar

The reality though of many things I do (ie, I am NOT a provider for other multitenant “stuff”) - I manage FEW, but complicated infrastructures.

mfridh avatar

That means I traverse one, possibly two “almost identical” environments for almost everything I do.

mfridh avatar

I get a lot of return on investment in modules, of course, for things which are repeated in each of these few environments I do have.

mfridh avatar

For example “users”, “services” etc. Of course I always do my best to keep things “DRY”.

mfridh avatar

But when it comes to repeating full terraform stacks - that just doesn’t happen often enough for Atlantis to actually be beneficial outside of its possible ability to create a good view of things that have or are about to happen.

sheldonh avatar
sheldonh

Nice. Yeah I do mostly the deployment in two places, qa and prod in separate accounts. I’m just trying to find ways to simplify: even though I’d be fine coding everything up through powershell/python etc, I’m trying to set the team up for consistency and for the least amount of setup to get CI/CD, which seems like Terraform Cloud to me

mfridh avatar

Cloud seems to have benefits to get going…

But I’ve been on proper remote state and dynamodb for locking since day 1, way way back.

I separate things in many many different “stacks”, all in the same repo.

mfridh avatar

I make cross-references to “parent” or “neighbour” remote states here and there.

mfridh avatar

Obviously, I did the old “huge stack” at first. Then started splitting out as that really doesn’t work very far.

mfridh avatar

I can’t have 15 minute terraform plan stacks…

sheldonh avatar
sheldonh

nice

mfridh avatar

I’m pondering adding Atlantis and having it running in Autoplan mode…

mfridh avatar

to detect “missed” applies mostly I think

mfridh avatar

but it could turn out that I could actually do the “trivial” updates via that, so if I start that way it might turn out I can have it help me rather than work against me while keeping the most critical updates done on my very own command line.

mfridh avatar

I have quite a complicated Makefile which manages everything in my structure so every single folder gets its unique remote state path etc.. so have to make all this work in Atlantis instead. Soon

mfridh avatar

If I can somehow make it detect module updates “globally”, then it will be awesome.

mfridh avatar

Ie, it finds an update in modules/fridh/bar and it can detect every “stack” which uses this bar module and run plan on all of them…

I’m guessing this could be complicated with several levels of inheritance… but it ought to be doable!

Otherwise I’d have to have it run plan on every stack, regardless of whether it has updated. That won’t work… there are too many of them.

randomy avatar
randomy

It will be easier if you remove the modules directory from that repo, put modules in separate repo(s) and pin the versions.

randomy avatar
randomy

Then make everything hierarchical. Any change to a file in a directory affects any stacks/environments below it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

proper CI/CD of terraform is non-trivial. we’ve recently implemented it in codefresh pipelines. it was HARD. not because of using codefresh, but because treating planfiles as artifacts and knowing when to strategically plan projects based on files changed, and invalidating plans when new plans are created is essential.

randomy avatar
randomy

did you implement an equivalent of terraform cloud’s run triggers?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

where one pipeline can basically trigger dependent pipelines in a DAG-like behavior?

randomy avatar
randomy

Exactly. Did you do something like run triggers, or did you make it apply every stack (or whatever you call it) every time, or have you left these situations to be handled manually? (or some 4th option I haven’t thought of)

randomy avatar
randomy

I’m trying to come up with the “ideal” Terraform project structure and am now thinking about the CI/CD side of it. It does seem hard.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We did not solve it. It would look something like this:

  reapply_eks:
    title: Apply EKS
    image: 'codefresh/cli:latest'
    commands:
      - codefresh run ${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}/reapply-eks -d -b=${{CF_BRANCH}} -v CF_RELEASE_TAG=${{CF_RELEASE_TAG}} -v CF_PRERELEASE_FLAG=${{CF_PRERELEASE_FLAG}}
    
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically, just add a few more steps at the end of the pipeline to trigger the projects that could be affected. it would trigger automatically, but the DAG is “human generated”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

care would have to be taken to ensure that cycles aren’t created between the various pipelines.

randomy avatar
randomy

that seems close enough to run triggers, having used neither terraform cloud nor codefresh

1
randomy avatar
randomy

Hope you don’t mind the questions. How did you fit pull requests into it? I guess the ideal is to create pull requests, plans appear based on which files changed, you approve/merge it, it gets applied. But then you have these dependant projects that may have changed that need to be approved before applying. And there is config drift where changes could appear at any time. Ideally they would appear as pull requests for consistency? Atlantis doesn’t seem to deal with this, just saying it’s out of scope.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


But then you have these dependant projects that may have changed that need to be approved before applying
Dependent projects fall into 2 classes. (a) dependent projects with open PRs (b) dependent projects that read from some shared storage like SSM.

In the case of (a) not thinking about any automation. This would be process driven. As for (b) this is pretty easy. See below.
Ideally they would appear as pull requests for consistency?
Yes/no, I agree PRs are nice, but we got into trouble for enforcing a PR workflow too rigorously (see our reference-architectures). In practice, I’d much rather more things be automatable with simple pipelines than PRs. So in the case where project A has some state changes that project B depends on, then project A writes those to SSM. Then it triggers a pipeline run of project B. Since project B has no changes, no PR approvals are required. In codefresh, however, we can add an “approval step” which executes prior to executing the step. This is also possible with terraform cloud.
Atlantis doesn’t seem to deal with this, just saying it’s out of scope.
this requires computing a DAG across projects and it’s pretty much out of scope for everything that exists today for terraform. Also, the more interconnectedness the less we have in terms of “microservices” and the more we have of tightly coupled interconnected services resembling more and more of a terralith.
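
A tiny illustration of that SSM handoff (the parameter name and resources are made up): project A publishes the value, project B reads it, and neither needs the other’s state.

# In project A: publish an output as an SSM parameter
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/shared/network/vpc_id"
  type  = "String"
  value = aws_vpc.this.id
}

# In project B: read it instead of using terraform_remote_state
data "aws_ssm_parameter" "vpc_id" {
  name = "/shared/network/vpc_id"
}

# ...then reference data.aws_ssm_parameter.vpc_id.value wherever the VPC ID is needed.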

randomy avatar
randomy

Thanks. It’s getting late here so I’ll have to think about this more tomorrow. Regarding the DAG idea, could that be solved naturally with drift detection that also runs immediately after any changes are made? I’m thinking it could run on a schedule + straight after applies and it would handle both scenarios (maybe not as quickly as a DAG or run triggers, but still automatic).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but at what point have we just recreated a monolith? basically if every change causes every project to be planned (and possibly applied), it’s basically a monolith - even if it’s on a cron

randomy avatar
randomy

I don’t know if this is pushing it towards being a monolith. It’s surfacing configuration drift and making it easy to resolve it. The status quo is to expect people to remember (or follow instructions) that there are other projects they need to check and update afterwards. I do get that it’d be annoying to make a change, then wait ages for it to go and check every project. Depending on the project size, that could be fine or too slow.

sheldonh avatar
sheldonh

^^ this. Exactly. I took it for granted when doing my initial production deploy. I later realized why it was so hard. Cloudformation forces the pipelines/everything into the equivalent of terraform cloud by its nature.

sheldonh avatar
sheldonh

Terraform can be run on a laptop and is the lowest overhead to get going. To scale to a team, it’s really hard to jump the hurdle of stopping local runs and putting everything through CI/CD.

sheldonh avatar
sheldonh

Ok, one more (maybe a thread for this). Any docs/blog articles on module design building blocks vs stacks? I have one coworker who is creating a module for a security group RULE, basically almost a 1:1 for a resource. I’m going to be owning the terraform standards this year, and am trying to convince them that modules at the individual-resource level are too granular. Need some external material on this topic.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, don’t necessarily have a hard rule on this, but I agree with you on this one. The nuance here is if you can add business logic to it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

An example of combining business logic with security groups is this @antonbabenko’s module: https://github.com/terraform-aws-modules/terraform-aws-security-group

terraform-aws-modules/terraform-aws-security-group

Terraform module which creates EC2-VPC security groups on AWS - terraform-aws-modules/terraform-aws-security-group

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

at cloudposse, we’ve not used this approach for security groups. logic of security groups is tied to the resource being secured (e.g. redis would handle its own security groups and expose the group name as an output)

mfridh avatar

Speaking generally and not security groups exclusively - if your module helps you enforce a standard, add extra information, or tags (billing tags maybe), then a module per resource could definitely be justified, I feel.

mfridh avatar

Only a few rules of mine are of this “loose” form. Most are, like Erik says, tied to a larger resource itself.

mfridh avatar

So, having a few rules then come from a module, why not?

Zachary Loeber avatar
Zachary Loeber

I’ve personally flip-flopped back and forth on making modules in tf. It is too easy to ‘think’ you are being clever with them when in fact you are making an inflexible mess with them instead.

mfridh avatar

I never go module first.

Zachary Loeber avatar
Zachary Loeber

Me either, I learned my lesson the hard way…

mfridh avatar

I strongly lean towards moving to a module the moment I see “oh, I can (or need to) re-use this”.

mfridh avatar

And actually… I never modularize it before the point where I actually need it again.

Zachary Loeber avatar
Zachary Loeber

flat first, then if the project firms up into a structure that becomes repeatable on a per-team/client boundary I ‘may’ cut out modules for it.

mfridh avatar

I’m quite fluent with terraform state mv by now. So no sweat moving things after the fact.

Zachary Loeber avatar
Zachary Loeber

It seems like that should not be a statement one should have to throw out there though right?

mfridh avatar

I think it’s tough to grasp unless you’ve spent a lot of time in TF.

mfridh avatar

I mean, compare the manual gymnastics you need to do today vis-à-vis a Midnight Commander interface for moving state and resources - imagine that?

Zachary Loeber avatar
Zachary Loeber

plus dependencies pretty much enter a realm of some psychedelic nightmare when you start using modules.

Zachary Loeber avatar
Zachary Loeber

or I’m just needing more time to drink more of the kool-aide…

mfridh avatar

depends_on = [null_resource ...

Zachary Loeber avatar
Zachary Loeber

Hahah

mrwacky avatar
mrwacky

Ha, I am mercilessly taunted at work for a module I made that creates a single resource, and has more parameters than the resource it manages..

3
Brij S avatar

I know terraform does dynamic blocks like this

  dynamic "vpc_config" {
    for_each = var.vpc_config == null ? [] : [var.vpc_config]
    content {
      security_group_ids = vpc_config.value.security_group_ids
      subnet_ids         = vpc_config.value.subnet_ids
    }
  }

for an eks profile selector, does anyone have a clever way of making selector a list var so that if [default,kube-system] is entered it populates this

  selector {
    namespace = "default"
  }

  selector {
    namespace = "kube-system"
  }

The selector part has to be separate for each item in the list. Is it possible?

Brij S avatar

figured it out! (for anyone else that may be interested)

variable "selector" {
  type = list(map(string))
}

then

  dynamic "selector" {
    for_each = var.selector
    content {
      namespace = selector.value["namespace"]
    }
  }
2

2020-03-21

Morten Hjorth Fæster avatar
Morten Hjorth Fæster

Hi - I am trying to set up a task for a windows container in ECS. The docs specify that windows containers must use the network mode ‘default’ as it uses ‘NAT’. But it seems terraform only supports bridge, host, awsvpc and none:

terraform@22975a35d682:~/git-repos/kit.aws.medielogin/terraform/environments/medielogin-dev/04.services$ terraform apply

Error: expected network_mode to be one of [bridge host awsvpc none], got default

  on ../../../modules/ecs-service/main.tf line 30, in resource "aws_ecs_task_definition" "service_task":
  30: resource "aws_ecs_task_definition" "service_task" {

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html

This is going to hit me pretty hard. Does anyone have any advice? (Except skipping Windows altogether )

Task Definition Parameters - Amazon Elastic Container Service

Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family and container definitions are required in a task definition, while task role, network mode, volumes, task placement constraints, and launch type are optional.

loren avatar

the way i read that, i’d try none

loren avatar


When you register a task definition with Windows containers, you must not specify a network mode.

loren avatar

the sentence after that you are referring to is talking about the aws console, which has a “default” option. but that is not a valid option for the api. that is just the console trying to be helpful.

Morten Hjorth Fæster avatar
Morten Hjorth Fæster

Ok - thanks. I tried that and couldn’t attach a load balancer as no ports could be exposed in the task definition. (Apparently rejected by the underlying aws cli.) I believe this is simply a missing enum value in the aws provider.

Morten Hjorth Fæster avatar
Morten Hjorth Fæster

I was not aware the value was not available from the cli. From my understanding of the link I assumed that providing ‘default’ would cause it to be set to NAT.

loren avatar

well, network_mode is an optional field in the task definition. perhaps default just means it is left unspecified?
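
If that reading is right, the sketch would simply leave the attribute out (untested; the container definition contents are illustrative only):

resource "aws_ecs_task_definition" "windows_service" {
  family = "windows-service"
  # network_mode deliberately not set, so ECS falls back to its default (NAT for Windows)

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "mcr.microsoft.com/windows/servercore/iis"
      cpu       = 512
      memory    = 1024
      essential = true
      portMappings = [
        {
          containerPort = 80
        }
      ]
    }
  ])
}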

Morten Hjorth Fæster avatar
Morten Hjorth Fæster

seems it causes it to be null - I will write here if I reach any conclusions

2020-03-22

Callum Robertson avatar
Callum Robertson

Hi everyone, running into a bit of a blocker and I thought I’d ask if anyone had found a way around problems like this in Terraform as it currently is.

I’m trying to use a dynamic block for a list(string) of ports I want to allow to a list(string) of security groups

Expected: Terraform will review the for_each argument and, if the length of var.allowed_security_groups is 0, NOT execute the nested dynamic block

Behaviour: Creates the SG and fails as the source for security_groups is empty on an apply

Current configuration:

resource "aws_security_group" "default" {
  count  = var.enabled && var.use_existing_security_groups == false ? 1 : 0
  vpc_id = var.vpc_id
  name   = module.sg_label.id

  dynamic "ingress" {
    for_each = length(var.allowed_security_groups) > 0 ? var.service_ports : null
    iterator = ingress
    content {
      description     = "Allow inbound traffic from existing Security Groups"
      from_port       = ingress.value
      to_port         = ingress.value
      protocol        = "tcp"
      security_groups = length(var.allowed_security_groups) > 0 ? [element(var.allowed_security_groups, count.index)] : null
    }
  }
}
Zach avatar

if the variable you’re using is a list type, you need to first cast it with toset() for for_each, otherwise it will reject the variable

Callum Robertson avatar
Callum Robertson

It accepts it just fine!

Callum Robertson avatar
Callum Robertson

It successfully plans

Zach avatar

Weird. It’s only supposed to operate on Map and Set types

Callum Robertson avatar
Callum Robertson

Yeah the problem I have is that the null on the for_each statement doesn’t treat itself as a condition to not execute the dynamic block

Zach avatar

Do you even need the null condition there? If the set passed to for_each is empty it should skip the block I thought

loren avatar

for_each on resources must be set or map. for_each on dynamic blocks can also be a list

Zach avatar

ahhh thanks

loren avatar

that is because the resource id must be unique, and lists allow duplicate values

loren avatar

try changing that null to []?

Callum Robertson avatar
Callum Robertson

tried that =[

Callum Robertson avatar
Callum Robertson

No go!

Callum Robertson avatar
Callum Robertson

Same result

loren avatar

what exactly is the error?

Callum Robertson avatar
Callum Robertson

Terraform interpolates it all successfully in the plan; however, it executes against the dynamic block, so the source_id, which is a security group id in this case, is [” “], so the API call fails

Callum Robertson avatar
Callum Robertson

on an apply

loren avatar

right, i was thinking it’s not the ingress for_each that’s the problem, it’s actually the expression on the security_groups attribute…

Callum Robertson avatar
Callum Robertson

correct

Callum Robertson avatar
Callum Robertson

because the logic is executing against the dynamic block even though the false on the condition is null

loren avatar

maybe that expression needs to be a for loop instead of the conditional?

Callum Robertson avatar
Callum Robertson

Curious to know what you’re thinking @loren?

Callum Robertson avatar
Callum Robertson

@loren

Callum Robertson avatar
Callum Robertson

Much respect!

Callum Robertson avatar
Callum Robertson
for_each = [for s in var.allowed_security_groups: null if s != ""]
Callum Robertson avatar
Callum Robertson

If anyone is curious

Callum Robertson avatar
Callum Robertson

worked magic!

Callum Robertson avatar
Callum Robertson

loren avatar

Something like that, but maybe also question why there is an empty string in your list in the first place…

Callum Robertson avatar
Callum Robertson
variable "allowed_security_groups" {
  type        = list(string)
  default     = []
  description = "(Optional) - List of Security Group IDs that are allowed ingress to the cluster's Security Group created in the module"
}
Callum Robertson avatar
Callum Robertson

This is the current variable

Callum Robertson avatar
Callum Robertson

ah, nvm @loren, you’re bang on! I found that the variable I was feeding it had an empty string >.<

Callum Robertson avatar
Callum Robertson

Thank you so much for your help mate, legend

Callum Robertson avatar
Callum Robertson

Keen to know what you think!

Zach avatar

Has anyone successfully created an aws_pinpoint_gcm_channel resource with terraform? It’s a relatively new resource and I opened a bug report against it in the aws provider, because we just get ‘401 unauthorized’ errors when we run our plan, but we can enable the api key via the console no problem

2020-03-23

rbadillo avatar
rbadillo

Hi guys, I’m building my own terraform provider and I want to know if it is possible to use a datasource and save the value of a list that I’m computing. Is that possible? Basically I want to keep adding values to the list instead of recreating the list every time the datasource runs.

Marcin Brański avatar
Marcin Brański

Am I understanding it correctly that the list is immutable besides appending new items?

rbadillo avatar
rbadillo

correct

Marcin Brański avatar
Marcin Brański

I haven’t written any provider so I can’t speak to the ability to do that; any language can save data, but I would be wary of caching data in a provider. This could lead to some inconsistency issues, and I would favour spending more time on computation of those lists than implementing additional functionality like cache validation.

Cloud Posse avatar
Cloud Posse
04:00:37 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Apr 01, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow, hashicorp has developed their own operator for kubernetes (alpha)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there have been a few others (links are in the archives)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but this coming from hashicorp is rad!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Creating Workspaces with the HashiCorp Terraform Operator for Kubernetes

We are pleased to announce the alpha release of HashiCorp Terraform Operator for Kubernetes. The new Operator lets you define and create infrastructure as code natively in Kubernetes by making calls to Terraform Cloud.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah bummer - looks like it may be dependent on terraform cloud

Alex Siegman avatar
Alex Siegman

Yeah, i just watched the video, seems it is

Chris Fowles avatar
Chris Fowles
Support for state management and execution via Terraform OSS · Issue #10 · hashicorp/terraform-k8s

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave “+1” or other comme…

1
Chris Fowles avatar
Chris Fowles

give it all your

2020-03-24

Adam Blackwell avatar
Adam Blackwell

Hey folks, this may have already been talked about here, but we currently create our RDS databases outside of atlantis since atlantis can’t pull from vault and we don’t want root passwords in plaintext anywhere. Is there a canonical way to use https://github.com/terraform-aws-modules/terraform-aws-rds-aurora or https://docs.cloudposse.com/terraform-modules/databases/terraform-aws-rds-cluster/ from Atlantis without plaintexting passwords?

terraform-aws-modules/terraform-aws-rds-aurora

Terraform module which creates RDS Aurora resources on AWS - terraform-aws-modules/terraform-aws-rds-aurora

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not that I know of. This is not a module limitation. This is a terraform (aka AWS API) limitation.

terraform-aws-modules/terraform-aws-rds-aurora

Terraform module which creates RDS Aurora resources on AWS - terraform-aws-modules/terraform-aws-rds-aurora

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so basically, if you restore from a snapshot you might be able to “game it”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, “plain text” is a relative term

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can have “plain text” in an encrypted value. e.g. if you have the terraform state on S3 encrypted.

jeffrey avatar
jeffrey

Hi all, I can also ask this during the office hours tomorrow but wanted to see if any of you have input.

I’m working through disaster recovery with terraform, primarily for the terraform remote state management of multiple regions. I wanted to have a duplicate set of resources created in a separate region (e.g. us-east-1 for primary, us-west-2 for failover). Initially I thought it’d be best to have remote state separated in each region, such that a bucket in us-east-1 handled all of the us-east-1 resources and a bucket in us-west-2 handled all of the us-west-2 resources. However, I imagine this becomes an issue if the region is actually down, and the failover reads from terraform_remote_state of the primary. Would it be better to have a primary remote state that manages resources in multiple regions, but is also cross-region replicated? That way, if the region goes down, we can update our terraform configurations to read from the failover remote state bucket and pick up exactly where we left off.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @jeffrey - we’ll discuss tomorrow!

1

2020-03-25

Martin Heller avatar
Martin Heller

Hi folks, can someone review my pull request please? Just adding environment label to terraform-aws-vpc: https://github.com/cloudposse/terraform-aws-vpc/pull/48

Support the environment attribute by Morton · Pull Request #48 · cloudposse/terraform-aws-vpc

what Added support for the environment attribute that has been added to terraform-null-label why Cause this module should stay in-line with terraform-null-label

sohel2020 avatar
sohel2020

Hello good people, is it possible to get rid of r2, r3, r4?

variable "destination_cidr_block" {
    default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"]
}

data "aws_route_tables" "rts" {
  vpc_id = "${var.vpc_id}"

  filter {
    name   = "tag:kubernetes.io/kops/role"
    values = ["private*"]
  }
}

resource "aws_route" "r1" {
  count                     = "${length(data.aws_route_tables.rts.ids)}"
  route_table_id            = "${data.aws_route_tables.rts.ids[count.index]}"
  destination_cidr_block    = "${var.destination_cidr_block[0]}"
  vpc_peering_connection_id = "pcx-0e9a7a9ecd137dc54"
}

resource "aws_route" "r2" {
  count                     = "${length(data.aws_route_tables.rts.ids)}"
  route_table_id            = "${data.aws_route_tables.rts.ids[count.index]}"
  destination_cidr_block    = "${var.destination_cidr_block[1]}"
  vpc_peering_connection_id = "pcx-0e9a7a9ecd137dc54"
}

resource "aws_route" "r3" {
  count                     = "${length(data.aws_route_tables.rts.ids)}"
  route_table_id            = "${data.aws_route_tables.rts.ids[count.index]}"
  destination_cidr_block    = "${var.destination_cidr_block[2]}"
  vpc_peering_connection_id = "pcx-0e9a7a9ecd137dc54"
}

resource "aws_route" "r4" {
  count                     = "${length(data.aws_route_tables.rts.ids)}"
  route_table_id            = "${data.aws_route_tables.rts.ids[count.index]}"
  destination_cidr_block    = "${var.destination_cidr_block[3]}"
  vpc_peering_connection_id = "pcx-0e9a7a9ecd137dc54"
}
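
Nobody picked this up in the channel, but one possible consolidation (Terraform 0.12 syntax, untested) is to build the cross product of route tables and destination CIDRs and create one aws_route per pair; note this changes the resource addresses, so existing routes would need terraform state mv or would be recreated:

locals {
  # Every (route table, destination CIDR) combination
  route_pairs = setproduct(data.aws_route_tables.rts.ids, var.destination_cidr_block)
}

resource "aws_route" "peering" {
  count = length(local.route_pairs)

  route_table_id            = local.route_pairs[count.index][0]
  destination_cidr_block    = local.route_pairs[count.index][1]
  vpc_peering_connection_id = "pcx-0e9a7a9ecd137dc54"
}
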
Abel Luck avatar
Abel Luck

Anyone know if it’s possible to create an aws_iam_policy_document where the condition blocks are dynamic based on module input? That is… the number of condition blocks is determined by input from a list.

Abel Luck avatar
Abel Luck

rather than use the aws_iam_policy_document data source, I guess it might be best to template a json string?

Morten Hjorth Fæster avatar
Morten Hjorth Fæster

Is it TF 11 or TF 12? - I guess you should be able to use the dynamic blocks for your purpose if it is TF 12. https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks

Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

Abel Luck avatar
Abel Luck

this is TF12, haven’t heard of dynamic blocks, will read up on that

Abel Luck avatar
Abel Luck

Beautiful!

dynamic "condition" {
      for_each = var.limit_tags
      content {
        test     = "StringEquals"
        variable = "aws:RequestTag/${condition.key}"
        values   = condition.value
      }
}
Abel Luck avatar
Abel Luck

Thanks @Morten Hjorth Fæster

Morten Hjorth Fæster avatar
Morten Hjorth Fæster

You are most welcome

sheldonh avatar
sheldonh

Trying to improve my “default settings” approach with a terraform project that has 50ish input defaults that i need to override. Please see this discussion issue and if you feel like helping out I’d love to see some insight. It’s a follow-up to my first more complicated project

https://discuss.hashicorp.com/t/best-practice-for-reusing-with-many-environments/2704/2

Best Practice for Reusing with many environments

Hi @sheldonhull, Without some specific examples to work with it’s hard to give specific advice, but as you’ve seen there are two common ways to represent multiple environments: If all of the environments have essentially the same “shape” but differ just in size and number of objects, a single configuration with multiple .tfvars files can be reasonable, although it has the downside that you need to manually select the right variables file for each environment when you apply. That’s why we usu…

sheldonh avatar
sheldonh

The last post was 1 day ago. I could use some insight on that if anyone cares to weigh in. As a quick fix I just tried using a local.tf file and pulling content in, but I don’t think that will work without using symlinks. I’m really averse to using symlinks for this… seems to add some accidental complexity

Best Practice for Reusing with many environments

Hi @sheldonhull, Without some specific examples to work with it’s hard to give specific advice, but as you’ve seen there are two common ways to represent multiple environments: If all of the environments have essentially the same “shape” but differ just in size and number of objects, a single configuration with multiple .tfvars files can be reasonable, although it has the downside that you need to manually select the right variables file for each environment when you apply. That’s why we usu…

randomy avatar
randomy

I’m working on this example which is aiming to be the “ideal” project structure: https://github.com/raymondbutcher/pretf/tree/mirror-module/examples/enterprise It symlinks files from parent directories on-the-fly. One of the stacks gets the root module remotely (like Terragrunt). You can have *.auto.tfvars at different levels and they’ll get symlinked into the working directory automatically. This is still in a branch, I want to keep refining it and flesh it out some more.

raymondbutcher/pretf

Generate Terraform code with Python. Contribute to raymondbutcher/pretf development by creating an account on GitHub.

randomy avatar
randomy

It would be relatively easy to create a directory structure with 100 environment directories each with a tfvars in them.

randomy avatar
randomy

I don’t know enough about Terraform Cloud to know if it would work there.

sheldonh avatar
sheldonh

I wasn’t certain symlinks were going to be a cross-platform and reliable approach. If that’s the only way to do this then I guess that makes sense.

randomy avatar
randomy

Mac and Linux are fine. Windows does support symlinks (junctions) but I’ve never tried it. You could easily copy them as normal files instead but figuring out which ones to delete when cleaning up is a bit trickier.

Sean Turner avatar
Sean Turner

Is there a way to do multiline descriptions on variables?

variable "dynamodb_table_name" {
  description = "Name of the dynamo db table that holds the proxy whitelist. This table resides in the xxxxxx account so that spinnaker can read the table when building proxy AMIs"
  type        = string
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think you can try using HEREDOC syntax

Sean Turner avatar
Sean Turner

Ah yep. Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(and for others sake, here’s a link to what that looks like https://stackoverflow.com/a/57380137)

Terraform v0.12 Multi-line String EOF shell-style "here doc" syntax not been interpreted as before with v0.11

Within Octopus Deploy I’ve setup a Terraform Apply Step using their Apply a Terraform template In my Terraform main.tf file I want to use a connection to run an remote-exec on a Amazon Linux EC2

sheldonh avatar
sheldonh

pro tip: use a heredoc with a dash, like the example below

aws_key_path = <<-EOF
               #{martinTestPrivateKey}
               EOF
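
Applied to the original question, a minimal sketch of a multi-line description using the dashed heredoc (the wording of the description is illustrative):

variable "dynamodb_table_name" {
  description = <<-EOT
    Name of the DynamoDB table that holds the proxy whitelist.
    This table resides in the xxxxxx account so that Spinnaker
    can read the table when building proxy AMIs.
  EOT
  type        = string
}
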
Sean Turner avatar
Sean Turner

Yeah it really does make all the difference

sheldonh avatar
sheldonh

the dash means Terraform strips the common leading whitespace from the lines, thereby allowing you to indent your content for readability without having to worry about whitespace

sheldonh avatar
sheldonh

Buried in the docs. I always use it; makes things much more readable

sheldonh avatar
sheldonh

self-plug… here’s a blog post about getting started…. look at the code snippet called iam.tf https://www.sheldonhull.com/blog/getting-started-with-terraform/

Getting Started With Terraform

Getting started with using Terraform for infrastructure can be a bit daunting if you’ve not dived into this stuff before. I put this together as a write up for those looking to get their feet wet and have a better idea of where to go for getting some momentum in starting. There are some assumptions in this, such as basic familiarity with git for source control automation, basic command line usage, and basic cloud familiarity.

sheldonh avatar
sheldonh

This wasn’t indented, but because it has the dash instead of <<EOF I could have indented it with no issues. Without the dash you have to watch out for whitespace… end random info

xluffy avatar

Hi all, I have another issue with https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account

module "vpc_peering_cross_account" {
  source           = "git::https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=master"
  namespace        = "eg"
  stage            = "dev"
  name             = "cluster"

  requester_aws_assume_role_arn             = "arn:aws:iam::1111111111:role/ops/r_ops_peering_access"
  requester_region                          = "us-west-2"
  requester_vpc_id                          = "vpc-1111111111"
  requester_allow_remote_vpc_dns_resolution = "true"

  accepter_aws_assume_role_arn             = "arn:aws:iam::2222222222:role/ops/r_ops_peering_access"
  accepter_region                          = "us-east-1"
  accepter_vpc_id                          = "vpc-2222222222"
  accepter_allow_remote_vpc_dns_resolution = "true"
}

I’m done creating the 2 roles (with assume role). It shows this error when I try to run terraform plan:

Error: Error refreshing state: 1 error occurred:
        * module.vpc_peering_cross_account.provider.aws.accepter: The role "arn:aws:iam::2222222222:role/ops/r_ops_peering_access" cannot be assumed.

  There are a number of possible causes of this - the most common are:
    * The credentials used in order to assume the role are invalid
    * The credentials do not have appropriate permission to assume the role
    * The role ARN is not valid

any idea?

cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account

xluffy avatar

well, wrong trust policy for YYYYYY account

cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account

xluffy avatar
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXX:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

2020-03-26

ByronHome avatar
ByronHome

Hi all, I have a question: how can I use a module’s output variable as an input value to the same module? I get a cycle dependency error. Should I use a local variable? Regards.

RB avatar

local seems easier or update the module with a flag to use it inside of itself somehow

1
ByronHome avatar
ByronHome

Ok, let me explain my context, it will be clearer. I’m using this module -> https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment?ref=tags/0.18.0, which gives me the hostname as an output variable. I want to save that hostname value into the additional_settings input variable as an application environment variable. So, are locals the best way to do this? I have a lot of modules with this issue. If I use locals I have to create a local for each module… Thanks for your help

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

androogle avatar
androogle

@ByronHome that creates a circular dependency issue. All the module input is needed before creating the module resources. You can’t take output from the module and have it update itself after-the-fact in a normal “TF” way.

androogle avatar
androogle

what you’d want to do is fork the module, update it to add that host as additional properties as part of the module

androogle avatar
androogle

and if you can find a sane and modular way to do that, feel free to submit a PR back into the original module

ByronHome avatar
ByronHome

Okay, thank you so much, I will rethink my approach

1
1

2020-03-27

Matt avatar

Does anyone have a good workaround for adding a depends_on to a module? I have a state that I want to deploy. It references a module which requires a variable foo. I want to define a resource in my state and set foo to an attribute of the resource I’m creating.

Marcin Brański avatar
Marcin Brański

AFAIK there’s no easy way to do that. Waiting for tf13 https://github.com/hashicorp/terraform/issues/10462#issuecomment-604107965

depends_on cannot be used in a module · Issue #10462 · hashicorp/terraform

Hi there, Terraform Version 0.8.0 rc1+ Affected Resource(s) module Terraform Configuration Files module "legacy_site" { source = "../../../../../modules/site" name = "foo-s…

Zachary Loeber avatar
Zachary Loeber

I do but calling it ‘good’ is not really the right way to think of it…

Zachary Loeber avatar
Zachary Loeber

add a variable like so:

Zachary Loeber avatar
Zachary Loeber

variable module_depends_on { type = any default = null }

Matt avatar

ha ha

Zachary Loeber avatar
Zachary Loeber

then in the module use it in your resources like so:

Zachary Loeber avatar
Zachary Loeber

depends_on = [var.module_depends_on]

Zachary Loeber avatar
Zachary Loeber

I hate that anytime you go to use modules or get more complex than a single provider you are actively punished it seems.

Zachary Loeber avatar
Zachary Loeber

oh, when calling the module use something like this:

Zachary Loeber avatar
Zachary Loeber

module_depends_on = [ azurerm_postgresql_firewall_rule.cicd, module.psql_demo, ]
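
To put those pieces together: if your Terraform version rejects a variable inside depends_on (0.12 generally only allows static resource references there), a commonly used variant of the same idea routes the variable through a null_resource trigger. A rough sketch with placeholder names, not taken from this thread:

# Inside the module:
variable "module_depends_on" {
  type    = any
  default = []
}

resource "null_resource" "module_depends_on" {
  triggers = {
    value = length(var.module_depends_on)
  }
}

resource "aws_ssm_parameter" "example" {
  name  = "/example/created-after-dependencies"
  type  = "String"
  value = "example"

  # Anything in the module that depends on this null_resource now waits
  # for whatever the caller passed into module_depends_on.
  depends_on = [null_resource.module_depends_on]
}

# In the calling configuration:
module "example" {
  source            = "./modules/example"
  module_depends_on = [azurerm_postgresql_firewall_rule.cicd, module.psql_demo]
}
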

Matt avatar

yeah

Matt avatar

honestly, that’s where the declarative aspect of Terraform drives me crazy

Matt avatar

a decent scripting interface would make all of this easy

Zachary Loeber avatar
Zachary Loeber

it sort of works for me to get around some issues with firewall rules being required to be in place for hosted postgres before running some psql commands

Matt avatar

I’m not sure that would work for what I’m doing

Zachary Loeber avatar
Zachary Loeber

I couldn’t use the psql provider, as providers cannot have dependencies either

Zachary Loeber avatar
Zachary Loeber

so I had to custom compile a usql go client and include it in my module

Zachary Loeber avatar
Zachary Loeber

craptastic

Zachary Loeber avatar
Zachary Loeber

(not still feeling a bit burned by that hack, not at all… :))

Matt avatar

this is what I have

Matt avatar
module "ecs_cluster" {
  source              = "redacted"
  env                 = var.env
  business_unit       = var.business_unit
  vpc_id              = module.aws_vpc.vpc_id
  vpc_cidr_block      = [local.vpc_cidr_block]
  vpc_public_subnets  = keys(local.vpc_public_subnets)
  vpc_private_subnets = keys(local.vpc_private_subnets)

  public_alb_enabled  = true
  private_alb_enabled = false
  certificate_arn     = aws_acm_certificate.cert.arn
  waf_enabled         = false
  autoscaling_type    = "cpu"
}

resource "aws_acm_certificate" "cert" {
  domain_name               = "*.redacted"
  subject_alternative_names = ["redacted"]
  validation_method         = "DNS"

  tags = local.default_tags

  lifecycle {
    create_before_destroy = true
  }
}
Matt avatar

in the module input, I want certificate_arn to reference the resource I’m creating

Matt avatar

depends_on would be perfect there

Zachary Loeber avatar
Zachary Loeber

Sry, had some actual work to do there. Why don’t you just pass the domain name in as a var and pull the ARN from the data source using that domain name? https://www.terraform.io/docs/providers/aws/d/acm_certificate.html

AWS: aws_acm_certificate - Terraform by HashiCorp

Get information on a Amazon Certificate Manager (ACM) Certificate
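
A minimal sketch of that suggestion, assuming the certificate already exists in ACM (the domain is a placeholder):

data "aws_acm_certificate" "cert" {
  domain   = "*.example.com"
  statuses = ["ISSUED"]
}

module "ecs_cluster" {
  source          = "redacted"
  # ...other inputs as in the snippet above...
  certificate_arn = data.aws_acm_certificate.cert.arn
}
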

Zachary Loeber avatar
Zachary Loeber

or just pass in the arn directly as a var from your created aws_acm_certificate?

Zachary Loeber avatar
Zachary Loeber

(sry, not in the aws provider every day)

Matt avatar

Yeah, I can do that but I need to create a separate state

Matt avatar

for the cert

Matt avatar

which is what I’m going to do

Matt avatar

Not clever but it’s clear

Matt avatar

I don’t really like the workarounds I’ve seen for depends_on + I don’t think they’ll work for my use case

Zachary Loeber avatar
Zachary Loeber

you could use remote state as a work around?

Matt avatar

yes, I’m using remote states

Matt avatar

I’m going to handle it is two separate remote states

Matt avatar

it’s clean

Zachary Loeber avatar
Zachary Loeber

curious, how many remote states become too much for a single deployment?

Matt avatar

we have a bunch of remote state files

Matt avatar

yeah, too much for a single deployment

Matt avatar

I’ve gone down that path, I’d rather have a large number of independent states than one monolithic one

Zachary Loeber avatar
Zachary Loeber

I’ve been able to manage with just a few per environment, but it seems almost excessive to have to break out another state just to do workarounds for dependency management

Zachary Loeber avatar
Zachary Loeber

any rule of thumb wisdom on when you break out your states?

Zachary Loeber avatar
Zachary Loeber

I’m having a hard time reconciling (mentally) the fact that I have to cut apart my remote states and, in turn, my pipeline code, just to keep a sane dependency graph

Zachary Loeber avatar
Zachary Loeber

I’m just ruminating now, no need to answer questions concerning my mental problems

Matt avatar

honestly, this is my biggest challenge with Terraform

Matt avatar

it’s difficult to come up with a sane layout/strategy with large systems

Zachary Loeber avatar
Zachary Loeber

ok, well it isn’t just me then, that’s oddly comforting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Adding @discourse_forum bot

discourse_forum avatar
discourse_forum
09:48:38 PM

@discourse_forum has joined the channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:point_up: This was created by running /discourse post thread <https://sweetops.slack.com/archives/CB6GHNLG0/p1585316372027300> where the link comes from right-clicking on the message and clicking “copy link”. Anyone should be able to do this, but open to beta testers.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the idea is to be able to spin off good questions / answers to individual posts.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have our slack archives at archive.sweetops.com, but the problem there is they are not interactive. Other people who have the same problems (or found a better workaround) are unable to jump in the thread and contribute. With Discourse, this is possible.

loren avatar

pretty slick

2020-03-28

btai avatar

terraform cloud users (@johncblandii), how are we supposed to use providers like helm remotely? I can’t seem to find documentation on that being possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just to clarify, I think you mean remotely when the resources are not exposed externally and sitting on a private VPC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this use-case is not yet supported, but hashicorp is probably working on something to support this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for now, it means using the self-hosted terraform enterprise

johncblandii avatar
johncblandii

I used the cli via local exec in a makefile, but I wanted to switch it to the provider.

johncblandii avatar
johncblandii

+1 on @Erik Osterman (Cloud Posse)’s comment

btai avatar

yeah okay i thought so. for things like the helm provider it sounds like I will have to continue deploying via my local machine (but will host statefile in tf cloud)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can use venona with Codefresh

btai avatar

don’t have enterprise codefresh

2020-03-29

randomy avatar
randomy

Anyone interested in deploying AWS Lambda functions with Terraform, https://github.com/raymondbutcher/terraform-aws-lambda-builder now has an option to build deployment packages using CodeBuild. I did this to lower the barrier of entry for writing infrastructure Lambda functions in Go, but it should work for anything. There’s a golang example in the tests directory. If you haven’t seen it before, it also supports Node.js and Python.

raymondbutcher/terraform-aws-lambda-builder

Terraform module to build Lambda functions in Lambda or CodeBuild - raymondbutcher/terraform-aws-lambda-builder

5
cool-doge1
loren avatar

nice, yeah, we’ve been using codebuild as an easy command runner. great for scheduled and event-driven tasks of all kinds

raymondbutcher/terraform-aws-lambda-builder

Terraform module to build Lambda functions in Lambda or CodeBuild - raymondbutcher/terraform-aws-lambda-builder

randomy avatar
randomy

Yeah, it’s a decent service. I’ve mostly used it for running Packer to create AMIs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s slick! I like that design…

randomy avatar
randomy

Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Had a random interesting idea. What about a provider like this one: https://github.com/scottwinkler/terraform-provider-shell that was called terraform-provider-webhook and all it did was let you trigger webhooks (with get/post parameters) as part of lifecycle events in terraform. So when you deploy a new ECS task (for example), you can use the terraform-provider-webhook to trigger a deployment from your CI/CD platform. Or for example, if you have multiple terraform projects that use remote state (or ssm), you could trigger deployments on dependent projects. Or you could send the webhooks to sentry for deployment notifications of when infrastructure changes. Seems like there could be a lot of use-cases for it.

scottwinkler/terraform-provider-shell

Terraform provider for executing shell commands and saving output to state file - scottwinkler/terraform-provider-shell

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Mastercard/terraform-provider-restapi

A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this has a data provider to issue GET requests.

jose.amengual avatar
jose.amengual

I was thinking of that one when you were talking about the webhook provider; you can definitely use it to trigger the webhook

1
Joe Niland avatar
Joe Niland

Hi @Maxim Mironenko (Cloud Posse) would you mind reviewing this PR? I need a bit of help with the bats tests as well. Thanks!! https://github.com/cloudposse/terraform-aws-ec2-bastion-server/pull/27

Update for Terraform 0.12 by joe-niland · Pull Request #27 · cloudposse/terraform-aws-ec2-bastion-server

I have also added one example and tests, however I will need a bit of help to get the tests fully working.

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

@Joe Niland sorry for late response. will check for it tomorrow. I am on a short vacation right now

Update for Terraform 0.12 by joe-niland · Pull Request #27 · cloudposse/terraform-aws-ec2-bastion-server

I have also added one example and tests, however I will need a bit of help to get the tests fully working.

Joe Niland avatar
Joe Niland

@Maxim Mironenko (Cloud Posse) no problem - thanks!

2020-03-30

davidvasandani avatar
davidvasandani

Has anyone else found a sustainable way to use ALB weighted target groups in Terraform, as they aren’t officially supported yet? https://github.com/terraform-providers/terraform-provider-aws/issues/10942

Support ALB weighted target groups · Issue #10942 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

davidvasandani avatar
davidvasandani

I’m currently experimenting with local-exec provisioners as I know bash and not golang but I also know that isn’t sustainable.

Support ALB weighted target groups · Issue #10942 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
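
For anyone curious what that kind of local-exec workaround might look like, here is a rough sketch that calls the AWS CLI to set a weighted forward action on an existing listener. The listener and target group references are placeholders, and the JSON shape follows the ELBv2 ModifyListener API:

# Placeholder references: aws_lb_listener.https, aws_lb_target_group.blue and
# aws_lb_target_group.green are assumed to be defined elsewhere in the config.
resource "null_resource" "weighted_forward" {
  # Re-run the provisioner whenever the weights change.
  triggers = {
    blue_weight  = var.blue_weight
    green_weight = var.green_weight
  }

  provisioner "local-exec" {
    command = <<-EOF
      aws elbv2 modify-listener \
        --listener-arn ${aws_lb_listener.https.arn} \
        --default-actions '[{"Type":"forward","ForwardConfig":{"TargetGroups":[
          {"TargetGroupArn":"${aws_lb_target_group.blue.arn}","Weight":${var.blue_weight}},
          {"TargetGroupArn":"${aws_lb_target_group.green.arn}","Weight":${var.green_weight}}
        ]}}]'
    EOF
  }
}
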

davidvasandani avatar
davidvasandani

Looks like someone submitted a PR a couple hours ago!

davidvasandani avatar
davidvasandani
Support weighted target groups in forward lb_listener default action and lb_listener_rule resource by rdelcampog · Pull Request #12574 · terraform-providers/terraform-provider-aws

NOTE: This PR is based on the @goodspark approach in PR #11606. I have implemented tests, docs, the default_action part and fix some things not working properly (like using a TypeList instead TypeS…

Cloud Posse avatar
Cloud Posse
04:00:56 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Apr 08, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

discourse avatar
discourse
04:15:10 PM
Versioning and Deploying Secrets [Terraform]

I am curious to understand how others manage their secret and sensitive info in conjunction with Terraform.

Most of my use-cases with terraform are provisioning Infra (Usually AWS) and then Application resources that depend on the infra.

Examples of Secrets:

single-line strings

passwords

api-keys

tokens

multi-line strings

ascii-armored pem files

ascii license data

binary license data

I’…

loren avatar

@Erik Osterman (Cloud Posse) another idea… how about a link to the slack archive of the thread (https://archive.sweetops.com/...)? in case there is more context/discussion that doesn’t make it to discourse…?

Versioning and Deploying Secrets [Terraform]

I am curious to understand how others manage their secret and sensitive info in conjunction with Terraform.

Most of my use-cases with terraform are provisioning Infra (Usually AWS) and then Application resources that depend on the infra.

Examples of Secrets:

single-line strings

passwords

api-keys

tokens

multi-line strings

ascii-armored pem files

ascii license data

binary license data

I’…

loren avatar

or, wait, is this link coming from discourse?

1
androogle avatar
androogle

yeah I posted there exclusively

androogle avatar
androogle

because of length and format reasons

1
androogle avatar
androogle

Though maybe to your point, once it posts to Slack, it updates the thread with that slack post link

1
loren avatar

i see. there has been a similar discussion in slack, so i thought it was the new discourse bot/app that creates discourse discussions from slack threads

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is cool. Let’s run with it. Also, maybe we can link to the slack archive for it in the discourse post.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(i need to step away - will take a look when I get back?)

1
rbadillo avatar
rbadillo

Hi guys, does anybody here have experience writing a terraform provider? I want to know if it is possible to get the name of the resource on the provider side.

androogle avatar
androogle

I have zero experience with this and have only toyed with some ideas, but this link from my bookmarks seems to have some cursory info on schema and resource name referencing: https://www.terraform.io/docs/extend/schemas/schema-types.html

Home - Extending Terraform - Terraform by HashiCorp

Extending Terraform is a section for content dedicated to developing Plugins to extend Terraform’s core offering.

rbadillo avatar
rbadillo

let me check

Clayton Wheeler avatar
Clayton Wheeler

Hey, I think I’ve got a decent start on updating the ECS service for 0.12; is that something that’s already in progress, or shall I make a PR for it?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the biggest hold up is on our side. We’re only merging 0.12 upgrades that add tests

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so all of our modules today that support HCL2 also have terratest.

Clayton Wheeler avatar
Clayton Wheeler

That makes sense, it’s been a bit of a pain without a test suite.

Clayton Wheeler avatar
Clayton Wheeler

I’m not sure I’ve got time to put one together immediately, though.

Clayton Wheeler avatar
Clayton Wheeler

What I’ve got is at https://github.com/Genomenon/terraform-aws-airship-ecs-service/tree/terraform12; I’ve got it working okay for my own use cases, though I’m sure it needs more work to exercise the parts I’m not using. If it’d be useful as a starting point, great. Is that something you guys are envisioning doing in house? Are there any tests covering the ECS stuff that I might want to look at?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ohhh the #airship stuff is @maarten’s project

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(cloudposse is not involved in maintaining it - so best run it by him)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t think he’s using terratest.

Clayton Wheeler avatar
Clayton Wheeler

ohhh gotcha, I’ll ping him, thanks!

Todd Lyons avatar
Todd Lyons

I’m starting to get annoyed. I’m waiting on an aws_cloudfront_distribution apply to finish. All it did was add some tags. It’s at 1h18m so far.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heh, welcome to cloudfront

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what i really love about cloudflare / fastly, is that changes are nearly instantaneous.

Todd Lyons avatar
Todd Lyons

I’m starting to feel like it’s failed and terraform just hasn’t realized it yet.

1
Todd Lyons avatar
Todd Lyons

There are 4 invalidations running; seems like some deploys that a team is trying to do are conflicting or stuck waiting on my change to complete. Ugh, yuck.

loren avatar

I always set the cloudfront resource to not wait. It takes forever to stabilize

1
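
A minimal sketch of what loren describes, using the wait_for_deployment argument; the origin and behavior values below are placeholders just to make it a complete distribution:

resource "aws_cloudfront_distribution" "example" {
  enabled = true

  # Return from apply as soon as CloudFront accepts the config,
  # instead of waiting for the distribution to reach "Deployed".
  wait_for_deployment = false

  origin {
    domain_name = "origin.example.com"
    origin_id   = "example-origin"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "example-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
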
Todd Lyons avatar
Todd Lyons

Thanks for your feedback, all.

Todd Lyons avatar
Todd Lyons

These are normally 5 or 6 minute changes.

Todd Lyons avatar
Todd Lyons

So it turns out that CloudFront is experiencing an outage right now, about an hour into the official outage.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hah! that adds some sense of closure. You were not crazy =)

Todd Lyons avatar
Todd Lyons

But my anecdotal evidence suggests it started well before they declared it did.

1
discourse avatar
discourse
12:22:54 AM
Versioning and Deploying Secrets [Terraform]

Regarding examples of secrets, these are good, though we should also call out the different ways secrets are consumed. Especially when dealing with third-party software, the configuration mechanisms vary. Sometimes environment variables suffice, sometimes configuration files are required. Other times, with in-house software, they might directly interface with something like HashiCorp Vault or the …

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

updated with link to last week’s office hours recording that talks a little bit about it

Versioning and Deploying Secrets [Terraform]

Regarding examples of secrets, these are good, though we should also call out the different ways secrets are consumed. Especially when dealing with third-party software, the configuration mechanisms vary. Sometimes environment variables suffice, sometimes configuration files are required. Other times, with in-house software, they might directly interface with something like HashiCorp Vault or the …

discourse avatar
discourse
01:16:11 AM
Versioning and Deploying Secrets [Terraform]

What I like less is that when the secrets change, there’s no real oversight as part of the PR process of what specifically changed (e.g. was it the Datadog integration key or the backend database password?).

This is definitely an issue and makes PR’s and review a pain. The data is all there but it definitely doesn’t lend itself as observable or transparent. I’ve considered switching from ansible-v…

2020-03-31

Geoff Weinhold avatar
Geoff Weinhold

So I’m trying to demo creating a VPC Endpoint for S3 for a customer but hit a bump where I need to associate it with the route table. I was thinking I’d attach to existing infra like the VPC/subnet/etc, but it looks like it’s harder to query for the route table that’s associated with a subnet. Am I overthinking this, and should I just create it all (vpc/subnets/etc) at once?
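
If it helps, a minimal sketch of looking up the route table by subnet and attaching a Gateway endpoint to it (the region, subnet ID, and VPC ID are placeholders):

data "aws_route_table" "selected" {
  # Note: this lookup may only find route tables that are explicitly
  # associated with the subnet, not the VPC's implicit main route table.
  subnet_id = var.subnet_id
}

resource "aws_vpc_endpoint" "s3" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [data.aws_route_table.selected.id]
}
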

Brij S avatar

has anyone tried to create more than one fargate profile?

resource "aws_eks_fargate_profile" "default" {
  cluster_name           = aws_eks_cluster.eks.name
  fargate_profile_name   = "default"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids
  tags                   = var.tags

  dynamic "selector" {
    for_each = var.selector
    content {
      namespace = selector.value["namespace"]
    }
  }
}

resource "aws_eks_fargate_profile" "example" {
  cluster_name           = aws_eks_cluster.eks.name
  fargate_profile_name   = "example"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids
  tags                   = var.tags

  dynamic "selector" {
    for_each = var.selector
    content {
      namespace = selector.value["namespace"]
    }
  }
}

by doing this, example might fail to create with the following error

Error: error creating EKS Fargate Profile (cluster-name:example): ResourceInUseException: Cannot create Fargate Profile example because cluster cluster-name currently has Fargate profile default in status CREATING
androogle avatar
androogle

what if you set depends_on and chain them? maybe it’s a parallel execution issue and the cluster can only process requests serially?

androogle avatar
androogle

I have not tried two, to answer your question.

Brij S avatar

let me try a depends_on real quick

androogle avatar
androogle

you could create a null_resource with a trigger on the first profile’s status and execute the next profile

androogle avatar
androogle

out of curiosity, did depends_on work?

Brij S avatar

yes it did. Weird workaround, but it gets the job done

1
1
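
For the record, a sketch of that workaround applied to the second profile from the snippet above:

resource "aws_eks_fargate_profile" "example" {
  cluster_name           = aws_eks_cluster.eks.name
  fargate_profile_name   = "example"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids
  tags                   = var.tags

  # EKS only allows one Fargate profile per cluster to be in the CREATING
  # state at a time, so serialize creation behind the first profile.
  depends_on = [aws_eks_fargate_profile.default]

  dynamic "selector" {
    for_each = var.selector
    content {
      namespace = selector.value["namespace"]
    }
  }
}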