#terraform (2020-08)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-08-31

Peter Huynh avatar
Peter Huynh

hello, when writing a composite module for terraform, is it worth splitting logical components into separate files? My main.tf is growing a bit, and I was wondering if there are best practices around this scenario. Thanks in advance.

pjaudiomv avatar
pjaudiomv

I always structure my modules the same as my main terraform

main.tf
variables.tf
outputs.tf
iam.tf
pjaudiomv avatar
pjaudiomv

etc

pjaudiomv avatar
pjaudiomv

makes it a lot easier to manage

MattyB avatar
MattyB
That’s what I did with one of my projects. I think there’s a bit too much going on at this point. For this new project I think we’re going to stick to main.tf containing most of the resources (vpc, iam, etc…), whereas any ecs app with r53 entries, its dependent ecr, etc. would go under app1.tf or example.tf.
Peter Huynh avatar
Peter Huynh

I see, thanks for the inputs.

My initial thought was to split it by components, such as a lambda, an archive resource, the iam role for the lambda for example. This makes up a standalone part of the application, and it doesn’t depend on other components.

From what’s mentioned above, it seems it’s preferred that resources are grouped by type, e.g. iam, s3, etc.?

Gowiem avatar
Gowiem
Yeah, I typically group by logical types (either application layers such as secrets.tf, network.tf, data.tf or AWS services such as route53.tf, kms.tf, etc.) and have had success with that.

This is one of the places where TF doesn’t have much structure or convention and it’s painful.

PePe avatar
I do the logical types too, but I keep them grouped by “resource affiliation”, meaning if I have an instance.tf file and it needs an instance profile, then those roles and policies will not be in the iam.tf file
PePe avatar

I will treat iam.tf as a global group of resources that are needed for the whole project to work

Gowiem avatar
Gowiem
Yeah, similar thought process for myself as well. Then more generic things that are used across various resources / modules would go into the iam.tf or other service.tf files.
sheldonh avatar
sheldonh

for anything larger I don’t use any main.tf anymore tbh.

That’s probably a sign it needs to be organized a bit better though by me.

I tend on those type of plans to do stuff like

backend.tf
iam.tf
ec2.tf

… when I can’t use a module. I’d rather organize by types of content in that case.

For a best-practice layout for a “root master module”, I think Erik has some excellent root module repo layouts that are pretty solid, and I’d look at them if I was building a new project as well.

Peter Huynh avatar
Peter Huynh

Thanks for the insights. I’ll go have a look at the root master module mentioned.

sheldonh avatar
sheldonh

I keep having issues on a module I pulled in with “failed to find provider”, due to the new source path stuff in 0.13. Anyone have a quick fix for this besides the upgrade command? I just need to tear some stuff down, but dependent modules

Error while installing hashicorp/template v2.1.2: after installing
registry.terraform.io/hashicorp/template it is still not detected in the target directory; this is a bug in Terraform
Chris Wahl avatar
Chris Wahl
terraform {
  required_providers {
    template = {
      source  = "hashicorp/template"
      version = "~>2.1.2"
    }
  }
}

Can always just make a new branch in git, use the upgrade command, steal the provider code, and then reset.

sheldonh avatar
sheldonh

hmm. i think i tried that already and still failed. I converted all the code to the new format, still failed.

I just grabbed the docker-terraform repo and built a local terraform cli container. Doing init with this succeeded without error, so I’ll try this instead. Some issue with docker vs local i think

Chris Wahl avatar
Chris Wahl

You might try purging the local .terraform folder if it has references to the old provider, or perhaps your terraform.d settings if you think it’s the local profile hitting a snag. I hit that issue with one of my 0.12 configs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I thought the template provider was deprecated?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(and replaced by the templatefile function)

Chris Wahl avatar
Chris Wahl

“This provider has been archived. Please use the Cloudinit provider instead.”

Blake avatar
Blake

Has anyone kicked the tires on Terraspace at all? https://terraspace.cloud/

Terraspace | The Terraform Framework

The Terraform Framework

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This new, opinionated Terraform wrapper / framework just launched: https://terraspace.cloud/

I wouldn’t use it, but would be interested in hearing from others if they would.

Alan Kis avatar
Alan Kis

Why wouldn’t you use it? Asking for a friend.

Blake avatar
Blake

It looks like the objection is that it’s written in Ruby and lacks support for Windows systems

2020-08-30

Peter Huynh avatar
Peter Huynh

hey, when I rename the local reference to a module, it destroys the stack previously referenced and builds a new one. In this scenario, I am wondering if:

• there is a way for me to migrate the reference to the new name? The resources are the same; only the references in the statefile change.

• When terraform deletes and creates the same resource (e.g. an S3 bucket with the same name), it throws an error. Is there a way to handle this situation nicely?

james avatar
james
  1. Sounds like terraform state mv
james avatar
james
  2. If it really deletes it first, this should be fine. What’s the error? If it’s trying to create the new one first and failing because of the old one, terraform has a lifecycle hook to control that:
    lifecycle {
      create_before_destroy = true
    }
    

    check if you accidentally have this set somehow

loren avatar
loren

the problem is that when you change the resource name, you break terraform’s ability to track the dependency between the “old” resource and the “new” one, so you have a race condition on the destroy/create. i.e. it is not destroying and then creating the s3 bucket; terraform works in parallel, so it is destroying and creating the s3 bucket at the same time

loren avatar
loren

i’d vote with @james, use terraform state mv to rename the resource in the state

Peter Huynh avatar
Peter Huynh

Thanks James and Loren. I’ll try terraform state mv next time.
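
(For reference: later Terraform versions, 1.1+, added a declarative alternative to terraform state mv. A sketch, with hypothetical module names:)

```hcl
# Records the rename in configuration so terraform plan treats it as a
# move rather than a destroy/create pair.
moved {
  from = module.old_name
  to   = module.new_name
}
```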

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
08:25:14 AM

Terraform Cloud scheduled maintenance
Aug 30, 08:00 UTC Completed - The scheduled maintenance has been completed.
Aug 30, 07:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Aug 27, 10:28 UTC Scheduled - We will be undergoing a scheduled maintenance to make some network upgrades to Terraform Cloud. We don’t anticipate any customer-facing impact to our services during this window.

james avatar
james

warning: the combination of terraform cloud + terraform 0.13.0 has completely deadlocked one of our workspaces:

  1. one of my team upgraded to terraform 0.13.0, applied a change
  2. now nobody can touch that state with 0.12
  3. BUT somehow the state has something inside that 0.13.0 doesn’t like:

    Error: Invalid resource instance data in state

    Instance module.public_api01.aws_instance.instance data could not be decoded from the state: unsupported attribute “network_interface_id”.

  4. https://github.com/hashicorp/terraform/issues/25752 - previously the only fix was to manually edit the state back to say 0.12, but manually editing this state file is not possible in terraform cloud (state push with 0.13 will override the terraform version to 0.13, and 0.12 refuses to push)

…but now after writing all this I see that a fix was included in 0.13.1

Provider Removed Attributes Causing "data could not be decoded from the state: unsupported attribute" Error · Issue #25752 · hashicorp/terraform

Terraform Version v0.13.0-rc1. Although it’s also being reported with v0.12.29. Terraform Configuration Files main.tf terraform { required_providers { aws = { source = "hashicorp/aws" versi…

james avatar
james

still, terraform’s handling of upgrades is really obnoxious IMO

james avatar
james

working in a team, if anyone uses a newer version, everyone is forced to upgrade and there’s often no way back

loren avatar
loren

@james highly recommend a strict pin of the terraform version, to prevent such accidental upgrades… put this in your root config:

terraform {
  required_version = "0.13.1"
}
roth.andy avatar
roth.andy

Yep. My team does this with all projects. This combined with Terraform installed using asdf and having a .tool-versions file makes things pretty painless
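
As a sketch (version number illustrative), the .tool-versions file sits at the repo root:

```
# .tool-versions -- asdf reads this to select the terraform binary per project
terraform 0.13.1
```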

asdf-vm/asdf

Extendable version manager with support for Ruby, Node.js, Elixir, Erlang & more - asdf-vm/asdf

2020-08-29

Van Johnson avatar
Van Johnson

Hoping this is the correct channel. I’m trying to figure out how to get a json object with jsonencode from a list of maps. But I’m not getting the desired outcome. 

What I need the json to be is…

{
   "bob.role":"arn:aws:iam::1234:role/access-role-bob",
   "bob.path":"*",
   "dave.role":"arn:aws:iam::1234:role/access-role-dave",
   "dave.path":"prod/files"
 }

Here’s an example list

locals {
  users = [
    { username : "bob", path : "*" },
    { username : "dave", path : "prod/files" },
  ]

  secret = [
    for index, data in local.users : 
      map(
        "${data["username"]}.role", "arn:aws:iam::1234:role/access-role-${data["username"]}",
        "${data["username"]}.path", "${data["path"]}"
      )
  ]
}

And I get the correct map that I want for each, but it’s still a list(map). I’ve tried merge and a few other functions.

$ terraform console
> local.secret
[
  {
    "bob.path" = "*"
    "bob.role" = "arn:aws:iam::1234:role/access-role-bob"
  },
  {
    "dave.path" = "prod/files"
    "dave.role" = "arn:aws:iam::1234:role/access-role-dave"
  },
]
loren avatar
loren

if you’re a little flexible on the data structure, you can get there pretty easily using the “map” syntax of the for loop, i.e. { for ... } (instead of the list syntax you have now, [ for ... ]). Something like this:

  secret = {
    for data in local.users :  data.username => {
      role = "arn:aws:iam::1234:role/access-role-${data.username}"
      path = data.path
    }
  } 

ought to give you a data structure like this:

{
  bob = {
    path = "*"
    role = "arn:aws:iam::1234:role/access-role-bob"
  }
  dave = {
    path = "prod/files"
    role = "arn:aws:iam::1234:role/access-role-dave"
  }
}
loren avatar
loren

so you can then get the role or the path for any given user, using the username as the map lookup, e.g. secret["bob"].path and secret["bob"].role

Van Johnson avatar
Van Johnson

Great! Thank you. Made some progress with this and I’m confident it will work.

Van Johnson avatar
Van Johnson

Ok, I got what I wanted. Keeping the original logic, I used the ... expansion operator like so.

merge(local.secret...)

And I got… (pun intended?)

{
  "bob.path" = "*"
  "bob.role" = "arn:aws:iam::1234:role/access-role-bob"
  "dave.path" = "prod/files"
  "dave.role" = "arn:aws:iam::1234:role/access-role-dave"
}
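
Putting the two steps together, the intermediate list isn’t strictly needed; a sketch of the combined expression (same hypothetical users and ARNs as above):

```hcl
locals {
  # Build one flat map directly: each user produces a small map,
  # and merge(...) with the ... expansion flattens the list of maps.
  secret = merge([
    for data in local.users : {
      "${data.username}.role" = "arn:aws:iam::1234:role/access-role-${data.username}"
      "${data.username}.path" = data.path
    }
  ]...)
}
```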
Peter Huynh avatar
Peter Huynh

Hello, I have a module that creates an SSL certificate. Now, I am running in a different region to us-east-1 , so I declared 2 provider "aws" blocks, and the us-east-1 one has an alias.

Things are looking fine until I run terraform plan, and I get this error,

To work with module.docs-site.aws_acm_certificate.ssl its original
provider configuration at
module.docs-site.provider["registry.terraform.io/hashicorp/aws"].us-east-1
is required, but it has been removed. This occurs when a provider
configuration is removed while objects created by that provider still exist in
the state. Re-add the provider configuration to destroy
module.docs-site.aws_acm_certificate.ssl, after which you can
remove the provider configuration again.

Is there a way for me to pass the provider reference that is defined in the root module into the child module?

Peter Huynh avatar
Peter Huynh

I have checked my statefile and it’s empty, which is quite confusing, because I’ve just restructured my code but haven’t run anything, so I didn’t expect to have “objects created by that provider still exist in the state”.

pjaudiomv avatar
pjaudiomv
Modules - Configuration Language - Terraform by HashiCorp

Modules allow multiple resources to be grouped together and encapsulated.

pjaudiomv avatar
pjaudiomv

You can pass the provider to the module; you’ll need to use an alias when using two aws providers

Peter Huynh avatar
Peter Huynh

Thanks Patrick.

Peter Huynh avatar
Peter Huynh

Turned out I need to do something similar to the module "tunnel" example. Thanks for pointing me in the right direction.
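
For reference, the pattern looks roughly like this (module name and resource taken from the error above; regions and paths hypothetical). The child module declares an empty proxy configuration block for the alias, and the parent maps its own providers onto it:

```hcl
# Parent (root) module
provider "aws" {
  region = "ap-southeast-2"
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

module "docs-site" {
  source = "./modules/docs-site"

  providers = {
    aws           = aws
    aws.us-east-1 = aws.us-east-1
  }
}

# Child module (modules/docs-site): an empty proxy block declares the alias
provider "aws" {
  alias = "us-east-1"
}

# ACM certs for CloudFront must live in us-east-1, hence the aliased provider.
resource "aws_acm_certificate" "ssl" {
  provider = aws.us-east-1
  # ...
}
```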


2020-08-28

Peter Huynh avatar
Peter Huynh

gday everyone. I’ve got a quick question about the cloudposse module. I am looking to set up a static (restricted access) website, and decided to go down the s3 + cloudfront + ACM + Lambda@Edge (for authentication and basic routing) route. I was wondering if https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn does a lot of that heavy lifting already?

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

RB avatar

Yes it does


RB avatar

It does s3 and cloudfront but your lambda has to be created separately

Peter Huynh avatar
Peter Huynh

yeah, okay, that makes sense. I can do the ACM and lambda separately. I’ll give it a shot and will look at the docs to see how to connect the lambda.

Thanks heaps for replying.

RB avatar

No problem at all. My company might be open sourcing their viewer response lambda as a module so that would make creating secure static sites even easier

RB avatar

Essentially it just adds headers to the lambda dynamically, using a terraform map

RB avatar

Each of our sites uses a unique csp, so each static s3 site requires a different Lambda@Edge

Peter Huynh avatar
Peter Huynh

ohhh, that’d be nice. I just need the Lambda@Edge to do 2 things: basic auth + a redirect rule (*/ -> */index.html)

Peter Huynh avatar
Peter Huynh

fairly easy to throw that into a python file.
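
A minimal sketch of such a python handler (a viewer-request Lambda@Edge function; the credentials and event shape checks are illustrative only, not the module’s actual code):

```python
import base64

# Hypothetical credentials -- in practice these would come from a secret
# store, not be hard-coded.
EXPECTED = "Basic " + base64.b64encode(b"user:password").decode()

UNAUTHORIZED = {
    "status": "401",
    "statusDescription": "Unauthorized",
    "headers": {
        "www-authenticate": [{"key": "WWW-Authenticate", "value": "Basic"}],
    },
}


def handler(event, context):
    """Basic auth plus directory-index rewrite (*/ -> */index.html)."""
    request = event["Records"][0]["cf"]["request"]
    auth = request["headers"].get("authorization", [])
    if not auth or auth[0]["value"] != EXPECTED:
        return UNAUTHORIZED  # short-circuit with a 401 challenge
    if request["uri"].endswith("/"):
        request["uri"] += "index.html"  # serve the directory index
    return request
```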

Vlad Ionescu avatar
Vlad Ionescu

You may also want to check out AWS Amplify Console, which is basically managed Lambda@Edge+CI/CD from AWS.

It supports static websites and password protection

Static Web Hosting – AWS Amplify Console – Amazon Web Services

Build, deploy, and host static web apps and websites using frameworks like React, Gatsby, Vue, Angular, Ember, Jekyll, and Hugo.

RB avatar

oh wow, i did not know of the above ^

RB avatar

i wonder what the pricing differences are betw amplify vs s3+cf+lambda

RB avatar

fyi if you are going to stick with the Lambda@Edge method, be sure to check out the other tf modules in the area.

https://github.com/search?o=desc&q=lambda+edge+terraform&s=stars&type=Repositories

Vlad Ionescu avatar
Vlad Ionescu

Hm… I did not compare pricing at all. Even if it’s more expensive, setting up the per-PR preview is reason enough to pay for it

Also, you don’t really get CloudWatch Metrics with Amplify Console, which is a shame. And you don’t get access to any underlying AWS resources. There’s a feature request open for metrics IIRC

Peter Huynh avatar
Peter Huynh

hmmm, looks interesting. I only want to host a docs site, so I am just looking for the simplest option possible. I got cloudfront + s3 working, tho configuring the cloudfront resource is a bit of a pain, so I am looking at a module to simplify that process a bit.

RB avatar

you already found one in the original post regarding both cf and s3

Peter Huynh avatar
Peter Huynh

yeah, which led me here.

Peter Huynh avatar
Peter Huynh

I’ve got a question about for_each. What I am trying to do is create an SSL certificate via ACM. Following the example from hashicorp, https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate_validation, they have DNS validation and an example resource. However, when grouping all the resources together, I get the (somewhat famous) error: The "for_each" value depends on resource attributes that cannot be determined until apply. Are there any common practices to get around this issue, or should I just do a terraform apply -target && terraform apply?
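
For context, the linked hashicorp example sidesteps this error by keying the for_each on values Terraform knows at plan time (the domain names) rather than on whole computed objects; a sketch of that shape, with hypothetical resource and variable names:

```hcl
resource "aws_route53_record" "validation" {
  # Keys are domain names from configuration (known at plan time);
  # only the values are computed attributes, which for_each allows.
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id         = var.zone_id
  name            = each.value.name
  type            = each.value.type
  records         = [each.value.record]
  ttl             = 60
  allow_overwrite = true
}
```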

loren avatar
loren

here you go, not yet updated for tf 0.13 though, nor aws provider v3… https://github.com/plus3it/terraform-aws-tardigrade-acm/blob/master/main.tf

plus3it/terraform-aws-tardigrade-acm

Contribute to plus3it/terraform-aws-tardigrade-acm development by creating an account on GitHub.

imiltchman avatar
imiltchman

This seems very hacky. Is this due to aws provider v3 update?

loren avatar
loren

it’s due to funny business in tf 0.12 and v2 of the aws provider

loren avatar
loren

and because it is attempting to address create and destroy lifecycles, multiple SANs, etc… modules are hard

loren avatar
loren

i haven’t gotten around to updating it for tf 0.13 and v3 of the aws provider, which i think will reduce quite a bit of the hackiness, but will also be totally backwards incompatible

Peter Huynh avatar
Peter Huynh

Thanks heaps.

Exequiel Barrirero avatar
Exequiel Barrirero

Hey there SweetOps community!!! We’d appreciate a :+1: if someone has come across this :terraform: tf-0.13.1 github issue -> https://github.com/hashicorp/terraform/issues/26038 Your collab would be truly useful. Thanks in advance!

Terraform 0.13.1 reports Unsupported argument error when initializing backend · Issue #26038 · hashicorp/terraform
Terraform Version $ terraform version Terraform v0.13.1 + provider registry.terraform.io/-/aws v3.4.0 + provider registry.terraform.io/hashicorp/aws v2.70.0 Terraform Configuration Files Terraform …
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We can now accept PRs for 0.13 and test them


2020-08-27

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:35:13 AM

Terraform Cloud scheduled maintenance THIS IS A SCHEDULED EVENT Aug 30, 07:30 - 08:30 UTC
Aug 27, 10:28 UTC Scheduled - We will be undergoing a scheduled maintenance to make some network upgrades to Terraform Cloud. We don’t anticipate any customer-facing impact to our services during this window.

Terraform Cloud scheduled maintenance

HashiCorp Services’s Status Page - Terraform Cloud scheduled maintenance.

Jon Murillo avatar
Jon Murillo

Good morning, I am running into a dynamic grants issue with the S3 module version 0.17.1 and was wondering if anyone had any recommendations. I’m using Terraform v0.12.24. I have ACL set to private and do not wish to use grants at this time, so it should be defaulting to null.

Error: Unsupported block type

  on .terraform/modules/terraform_s3_bucket/terraform-aws-s3-bucket-0.17.1/main.tf line 92, in resource "aws_s3_bucket" "default":
  92:   dynamic "grant" {

Blocks of type "grant" are not expected here.
Jon Murillo avatar
Jon Murillo

looks like it’s happening to me on version 17.0 and 16.0 as well. I probably did something wrong

curious deviant avatar
curious deviant

These error messages are very cryptic. I ran into something similar using dynamic blocks for copy_action for the AWS Backup service. In my case, the issue was that AWS provider 2.58 had been updated to include support for it, and even though my code was using 2.70, I was seeing this error until I pinned the AWS provider version in code explicitly to 2.70 instead of saying >2.11 etc.

curious deviant avatar
curious deviant

The reason I am telling you this is that your code could be correct, but maybe one of the provider versions etc. is breaking it. I would be very interested in knowing how you fix this error in your case.

curious deviant avatar
curious deviant

also, I saw this error with TF versions 0.12.19, 0.12.23 and 0.12.29

Jon Murillo avatar
Jon Murillo

very interesting, I currently have my AWS provider set to v2.28.1

curious deviant avatar
curious deviant

try setting it to something later, 2.70 for example

curious deviant avatar
curious deviant

the provider changelog typically indicates when support for a particular feature was introduced. But testing it like this will save the trouble of going through the versions.
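
i.e. an exact pin in the provider block (0.12-era syntax; region and version illustrative):

```hcl
provider "aws" {
  region  = "us-east-1"
  version = "2.70.0" # exact pin, instead of an open-ended ">= 2.11"
}
```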

curious deviant avatar
curious deviant
Feature Request - S3 ACL Policy Grant · Issue #20 · terraform-aws-modules/terraform-aws-s3-bucket

Hi there, We have a requirement to implement Bucket ACLs on a few buckets in S3 and have been using this module for other buckets we have created, so we&#39;d like to keep some consistency if possi…

Jon Murillo avatar
Jon Murillo

weird, I keep getting an error trying terraform init --upgrade with any of the newer providers.. I wonder if my Terraform version needs to get bumped up. Even if I put it back to the way I had it, or completely remove the requirement, Terraform still errors out. :confused:

Error: no suitable version is available

Jon Murillo avatar
Jon Murillo

and my ~/.terraform.d/ directory only has a checkpoint_cache and checkpoint_signature file in it

Jon Murillo avatar
Jon Murillo

No provider “aws” plugins meet the constraint “< 4.0, >= 3.0, > 2.0, > 2.0, > 2.0, > 2.0”.

The version constraint is derived from the “version” argument within the provider “aws” block in configuration. Child modules may also apply provider version constraints. To view the provider versions requested by each module in the current configuration, run “terraform providers”.

To proceed, the version constraints for this provider must be relaxed by either adjusting or removing the “version” argument in the provider blocks throughout the configuration.

Error: no suitable version is available

Jon Murillo avatar
Jon Murillo

hmm looks to be only for this one repo. Terraform init still works in my other repos.

Jon Murillo avatar
Jon Murillo

fixed it.

curious deviant avatar
curious deviant

hey.. awesome! what was the issue?

Jon Murillo avatar
Jon Murillo

I ended up pulling in a different TF module for S3 and bumped the version to the latest which sourced in the AWS 3.0 provider. So when I ran terraform providers I saw the 3.0 being sourced from that S3 module.

Jon Murillo avatar
Jon Murillo

still haven’t fixed the initial issue I had with the original module

Alan Kis avatar
Alan Kis

Another neat one:

Terraform versioned modules in AWS CodeCommit with federated access? Has anyone had success with it?

Brij S avatar
Brij S

Hey, all! I see in the Terraform documentation they recommend using the provider in the module name, for example terraform-aws-xxxx for a module that contains all aws resources. I’ve got a situation where I technically have two providers in a module of mine - helm and aws. Are there guidelines on this? Or do I need to split this up

roth.andy avatar
roth.andy

terraform-aws-helm-xxxx?

Brij S avatar
Brij S

is that the terraform recommendation? or a suggestion?

roth.andy avatar
roth.andy

not sure there is an official recommendation

roth.andy avatar
roth.andy

there is no “rule” here, just guidelines. When I do something like this I do use the terraform-prov1-prov2-xxx syntax

Steven avatar
Steven

The recommended naming that you’re looking at is a requirement for releasing on the terraform registry. If you don’t plan on doing that, then naming is completely up to you. For example: we use that naming for public modules, but for internal modules we use tf-<thing being managed>. It helps people know what is public and what is not

Brij S avatar
Brij S

yeah, it would be going into a registry (terraform enterprise… or is it called cloud now), which is why I’m trying to figure it out

roth.andy avatar
roth.andy

Good to know, thanks @Steven. I’ve never published one on the registry, so that was news to me

Brij S avatar
Brij S

right, I should’ve mentioned that this was for a registry

Tom Howarth avatar
Tom Howarth

this might sound like an odd request, but I am attempting to automatically configure kubectl after deploying an EKS cluster. I have captured the generated cluster name as a Terraform output and I want to feed that to kubectl as an input.

I was thinking of using a provisioner local-exec to configure kubectl in the script, but I know that using provisioners is frowned upon.

roth.andy avatar
roth.andy

You could use local_file and make sure the directory that the file gets added to is set up in your KUBECONFIG environment variable

Gowiem avatar
Gowiem

@Tom Howarth This is how I’m doing this on a project:

resource "null_resource" "eks_kubeconfig" {
  triggers = {
    eks_endpoint = module.eks_cluster.eks_cluster_id
  }

  provisioner "local-exec" {
    command = "aws eks --region ${var.region} update-kubeconfig --name ${module.eks_cluster.eks_cluster_id}"
  }
}
roth.andy avatar
roth.andy

That works too, though I prefer to keep different files when possible so $HOME/.kube/config file doesn’t get super cluttered

roth.andy avatar
roth.andy

aws eks update-kubeconfig has a --kubeconfig flag to specify the file that gets modified/created
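
Combining the two suggestions, the local-exec sketch above could write to a per-project file instead of $HOME/.kube/config (output path hypothetical):

```hcl
resource "null_resource" "eks_kubeconfig" {
  triggers = {
    eks_endpoint = module.eks_cluster.eks_cluster_id
  }

  provisioner "local-exec" {
    # --kubeconfig keeps this cluster's config out of the shared default file
    command = "aws eks --region ${var.region} update-kubeconfig --name ${module.eks_cluster.eks_cluster_id} --kubeconfig ${path.root}/kubeconfig"
  }
}
```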

Gowiem avatar
Gowiem

@roth.andy Ah TIL. I like that idea. This is my first major k8s project, so I haven’t hit that level of .kube/config clutter, but I can see that being annoying with a bunch of projects / environments.

Tom Howarth avatar
Tom Howarth

so I was wondering if there was a better way of doing it.

sheldonh avatar
sheldonh

anyone regularly using tflint? I’m surprised I hadn’t run across it sooner. I don’t have sentinel right now as I’m on the free terraform cloud version. tflint seems pretty cool to plug into my github actions checks and all.

roth.andy avatar
roth.andy
antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

roth.andy avatar
roth.andy

is what I use frequently. includes tflint

sheldonh avatar
sheldonh

Very cool!!!! I just ran it in a cloudposse template I used and it flagged pinning at master, recommending a better practice. Love it. Going to check out this repo too

sheldonh avatar
sheldonh

first time I set up pre-commit successfully. Love it. Terraform markdown docs + formatting + whitespace trim now enabled.

I’m a fan of trying to do this in github actions when possible to eliminate any local dependency on someone, but I imagine it would be really easy to run the same triggers in github

RB avatar

tflint and checkov are very useful

RB avatar

caught a lot of things before committing

Jon Murillo avatar
Jon Murillo

How can I properly set my own CMK using the dynamodb module version 0.18.0? I see the enable_encryption parameter that I can set to true. That works. I want to specify a kms_key_arn. I see it’s normally available through the resource, but when I try to add it, it says it’s an invalid parameter. When I look at the main.tf in the repo, I see the server_side_encryption block without the ability to add a CMK. I’m wanting to set up Terraform dynamoDB locking, so I’d like to use a CMK. Just curious if the module by default creates a separate key, uses a default one, uses the one associated with my S3 bucket, or what.. Thank you in advance!
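
For comparison, the raw resource does accept a CMK in newer aws provider versions, so the limitation would be in the module’s wrapping; a sketch of a lock table with a CMK (table name and key reference hypothetical):

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # required key schema for Terraform state locking

  attribute {
    name = "LockID"
    type = "S"
  }

  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.locks.arn # CMK instead of the AWS-owned default
  }
}
```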

sheldonh avatar
sheldonh

Wow. What a setup experience.

Check this out: https://www.gitpod.io/docs/self-hosted/latest/install/install-on-aws-script/

Super smooth docker container that downloaded all the required terraform internally and persisted to local volume when done + asked for input from user. Super impressed

Docs

Documentation site for Gitpod.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If anyone is curious how much effort goes into code review for new modules, check this one out! https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/1

Initial Version of Module by htplbc · Pull Request #1 · cloudposse/terraform-aws-msk-apache-kafka-cluster

what Initial Version of Module why Needed a way to setup AWS MSK via terraform.

Jon Murillo avatar
Jon Murillo

AWS recently released Private CA sharing.. I wonder if this would work with MSK now.. Crazy how you’re required to use Private ACM for client auth.


Alex Siegman avatar
Alex Siegman

Heh, we just built this internally lol

zidan avatar
zidan

When you’re writing code and notice that you are copy-pasting from place to place, you think about the DRY (“don’t repeat yourself”) principle and move the shared code to a class or function so you don’t need to repeat it again and again. But what if you want to apply the same in terraform? Here are some tips that I follow when I write a terraform script. Check it out and let me know what you think: https://www.dailytask.co/task/tips-that-i-follow-when-i-provision-my-aws-resources-using-terraform-ahmed-zidan

Tips that I follow when I provision my aws resources using terraform

Tips that I follow when I provision my aws resources using terraform written by Ahmed Zidan

RB avatar

Nice work. This is definitely an issue with a lot of terraform i see


RB avatar

I’m surprised you didn’t mention the terraform registry. The registry modules, like the vpc module or cloud posse nodules have reduced our code a lot

RB avatar

Before i even start writing custom terraform code, i try to find a module first

Emmanuel Gelati avatar
Emmanuel Gelati

But it’s better to use workspaces to divide envs

Jon Murillo avatar
Jon Murillo

I’ve recently moved away from using workspaces to divide my environments, after about 4 years.. I was using workspaces to create resources based off of my AWS accounts (dev, staging, prod). Things started to break whenever I wanted to introduce a sub-environment into the mix (e.g. qa, aat, uat). Maybe I was using workspaces too broadly? Anyway, I’ve migrated to environment-specific tfvars recently, and so far it seems to be working better for me and has reduced a lot, if not most, of the complex variable maps I used to have.

RB avatar

I agree. I don’t like using workspaces when i could use a module with different tfvars input or a simple module reference instead

Emmanuel Gelati avatar
Emmanuel Gelati

I didn’t get the point of the sub-environment. I like avoiding a directory per env, so I don’t duplicate my code and directory management. Could you provide an example where workspaces don’t work?

Emmanuel Gelati avatar
Emmanuel Gelati

@RB I am using modules with workspaces; I love the idea of having a state file per environment

:--1:1
Jon Murillo avatar
Jon Murillo

Read my example. If you use workspaces to separate your code by dev/staging/prod based on your AWS accounts (or VPCs, if you’re doing that), go deploy an MSK (Kafka) cluster or something, then try to deploy another cluster for a QA or UAT environment. Assuming you are using variable maps per environment/workspace, it gets annoying.

zidan avatar
zidan

@RB yes, online modules are the first place to look. Check out the standard that I follow; I mentioned that too: https://www.dailytask.co/task/a-standard-that-i-follow-when-i-write-terraform-script-ahmed-zidan

A standard that I follow when I write Terraform script

A standard that I follow when I write Terraform script written by Ahmed Zidan

2020-08-26

Adrian Navarrete avatar
Adrian Navarrete

Hi all, I am using this module https://registry.terraform.io/modules/terraform-aws-modules/atlantis/aws/2.23.0 (thanks to @antonbabenko) for setting up Atlantis with GitLab. It works fine when the load balancer is external, but as soon as I change it to be internal it is not reachable from the internet, and therefore not from the GitLab servers. Do you know if this module supports that, or what I can do to use the internal ALB while keeping it reachable from the GitLab servers / the internet? Many thanks in advance.

Tom de Vries avatar
Tom de Vries

Isn’t that the whole idea of having a public vs. private load balancer?

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
03:45:14 PM

HCS Azure Marketplace Integration Affected Aug 26, 15:38 UTC Investigating - We are currently experiencing a potential disruption of service regarding our Azure Marketplace Application offering. Our incident handlers and engineering teams are investigating this matter to provide a timely mitigation of impact.

As our teams work on this issue we will provide further ongoing information as it becomes available. If you have questions or are experiencing difficulties with this service please reach out to our customer support team (Needs…

HCS Azure Marketplace Integration Affected

HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
05:05:23 PM

HCS Azure Marketplace Integration Affected Aug 26, 16:55 UTC Update - We have confirmed a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide further updates soon.

As our teams work on this issue we will provide further ongoing information as it becomes available. If you have questions or are experiencing difficulties with this service please reach out to your customer support team….

HCS Azure Marketplace Integration Affected

HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.

Emmanuel Gelati avatar
Emmanuel Gelati

Hello, I am using terraform with azure and I have a strange issue: I create a cluster with two node pools and everything works, but when I remove one of the node pools, the cluster gets recreated

Release notes from terraform avatar
Release notes from terraform
06:24:16 PM

v0.13.1 0.13.1 (August 26, 2020) ENHANCEMENTS: config: cidrsubnet and cidrhost now support address extensions of more than 32 bits (#25517) cli: The directories that Terraform searches by default for provider plugins can now be symlinks to directories elsewhere. (This applies only to the top-level directory, not to nested directories…

lang/funcs: update cidrsubnet and cidrhost for 64-bit systems by mildwonkey · Pull Request #25517 · hashicorp/terraform

The cidrsubnet and cidrhost functions were limited to supporting 32-bit systems. This PR: updates the &quot;github.com/apparentlymart/go-cidr&quot; library, which was recently extended with Subnet…

zeid.derhally avatar
zeid.derhally

Anyone work with SSM document using schema 2.x and get an error when associating them with an instance? I get the error

Error creating SSM association: InvalidDocument: Document schema version, 2.2, is not supported by association that is created with instance id

zeid.derhally avatar
zeid.derhally

Oh, i think i need to use the targets property on aws_ssm_association
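For reference, a minimal sketch of the targets block on aws_ssm_association (the resource and document names here are placeholders):

```hcl
resource "aws_ssm_association" "example" {
  name = aws_ssm_document.example.name # a document using schema version 2.x

  # target the association at instances instead of passing instance_id directly
  targets {
    key    = "InstanceIds"
    values = [aws_instance.example.id]
  }
}
```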

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
08:25:20 PM

HCS Azure Marketplace Integration Affected Aug 26, 20:04 UTC Update - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.

If you have questions or are experiencing difficulties with this service please reach out to your customer support team.

IMPACT: Updating, creating, or deleting HashiCorp Consul Service on Azure clusters may be delayed…

HCS Azure Marketplace Integration Affected

HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
09:05:16 PM

HCS Azure Marketplace Integration Affected Aug 26, 20:55 UTC Resolved - Our Azure partners have mitigated the issue and we are seeing recovery from our tests. We are considering this incident resolved. If you see further issues please contact HashiCorp Support.Aug 26, 20:04 UTC Update - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.

If…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heads up! we’ve made some significant improvements to the terraform-null-label module and its handling of context passing between modules. we’ve introduced a new concept of [context.tf](http://context\.tf), which is the standard set of variable inputs for each module. we’ll be slowly rolling this out to all our modules. this change :100: preserves backwards compatibility, but adds the ability to very tersely pass context between our modules. this change was spearheaded by @Jeremy (Cloud Posse), who updated it.

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

now with the [context.tf](http://context\.tf) we can plop that into every module so we keep our variables consistent. as many of you have probably realized, we inconsistently support things like environment, label_order, etc. because it was a manual, error-prone process of copying and distributing them to all the modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can start using this today in your projects.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, this also adds a module named this to each module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so you can pass module.this.context between modules.
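A rough sketch of what that terse context passing looks like (module sources and version tags here are illustrative):

```hcl
module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"
  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

# any module that vendors context.tf can accept the whole set of
# naming inputs (namespace, stage, name, tags, ...) in one argument
module "bucket" {
  source  = "git::https://github.com/cloudposse/terraform-aws-s3-bucket.git?ref=tags/0.20.0"
  context = module.label.context
}
```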

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Joe Hosteny we also fixed the issue you raised.

Empty delimiter does not work with TF 0.12 · Issue #77 · cloudposse/terraform-null-label

The delimiter is calculated with the coalesce function, which considers and empty string and a null string as the same thing. This prevents the delimiter from being set as &quot;&quot;, which is us…

:--1:1

2020-08-25

btai avatar

anyone have an example of disabling/enabling replication configuration in an s3 bucket via a flag?

here’s my config for the main bucket that isn’t currently working:

replication_configuration {
    role = aws_iam_role.replication.*.arn
    
    rules {
      status = local.enable_replica_bucket ? "Enabled" : "Disabled"

      destination {
        bucket        = aws_s3_bucket.replica_bucket.*.arn
        storage_class = "STANDARD"
      }
    }
  }

The thing is the resources for the iam role and the replica bucket only exist if enable_replica_bucket is true
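One way to make the whole block conditional (rather than toggling the rule status) is a dynamic block keyed off the same flag; this sketch assumes the role and replica bucket are created with count = local.enable_replica_bucket ? 1 : 0:

```hcl
dynamic "replication_configuration" {
  # emits the block once when enabled, zero times when disabled
  for_each = local.enable_replica_bucket ? [1] : []

  content {
    # join() collapses the zero-or-one element splat into a plain string
    role = join("", aws_iam_role.replication.*.arn)

    rules {
      status = "Enabled"

      destination {
        bucket        = join("", aws_s3_bucket.replica_bucket.*.arn)
        storage_class = "STANDARD"
      }
    }
  }
}
```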

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

What do folks here do with respect to AWS subnets and using data lookups to find subnet IDs to use in other modules? More importantly, I need a pattern for finding subnet IDs with free address space, then ensuring future state doesn’t change for already-deployed infrastructure. Basically, our network team doesn’t give us a lot of network/IP space, so we’ve carved up what we do have and created this concept of subnet pools across AZs. The issue is how to fan across each pool while being properly deterministic. I usually don’t have this problem, since most places I’ve worked aren’t so stingy with IP space in AWS

1
RB avatar

we use a data source to find the vpc using an application tag

RB avatar

we have application=oregon for a single vpc so a data source uses that to get that vpc.

RB avatar

then we split that vpc up in public and private subnets

RB avatar

so we then use a data source using the vpc id from the first vpc data source and look for public=true and we get a list of public subnets
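The lookup chain RB describes might look roughly like this (the tag keys/values come from his example; everything else is hypothetical):

```hcl
# find the VPC by its application tag
data "aws_vpc" "this" {
  tags = {
    application = "oregon"
  }
}

# then find the public subnets inside that VPC
data "aws_subnet_ids" "public" {
  vpc_id = data.aws_vpc.this.id

  tags = {
    public = "true"
  }
}

# fan instances across all of the matched subnets instead of always the first
resource "aws_instance" "app" {
  count         = var.instance_count
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = element(tolist(data.aws_subnet_ids.public.ids), count.index)
}
```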

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

And then you just make sure you leverage all those subnet ids across the AZs; do you do anything more? We’re doing the same with respect to looking up the subnets via tags.

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

I’m thinking now maybe my devs haven’t been iterating through the list of subnets, and that’s probably why we’re exhausting the little address space we do have in one of the 4 subnets, which is causing our current issue

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

@RB - appreciate it; needed to spitball out ideas. Cheers. Think we’ve found some culprits.

RB avatar

ah yes that makes sense. yes we leverage all those subnet ids across the azs so amazon will balance between them

RB avatar

we dont do anything more but we use /18s and /20s so we havent run out of address space in our vpcs

RB avatar

we do have a balancing issue tho. we recently added a new public and private subnet in the 2nd AZ and now we have to re-apply the terraform that uses the data sources in order to take advantage of the new subnet

2020-08-24

Pierre Humberdroz avatar
Pierre Humberdroz

Has someone gotten 0.13 working with locally installed providers that are not yet in the terraform registry?

bricezakra avatar
bricezakra

Hello, I want to build a bootstrapping for a new SM IaaS process so that I can deploy SM resources in a standardized and repeatable fashion. I need some help and guidance to start my project. Any advice please?

Alan Kis avatar
Alan Kis

If you are having your hands first time dirty with Terraform, for starters this book is a must:

https://www.terraformupandrunning.com/

P.S. I am not related to the book or authors in any way.

Terraform: Up and Running

This book is the fastest way to get up and running with Terraform, an open source tool that allows you to manage your infrastructure as code across a variety of cloud providers.

Laurynas avatar
Laurynas

Hi, How do I output a resource created with for each?

resource "aws_route53_record" "cloudfront" {

  for_each = toset(var.domain_name)
  name     = each.key
}

output "dns_record" {
  value = aws_route53_record.cloudfront[each.key].fqdn
}

Gives error : 
The "each" object can be used only in "resource" blocks, and only when the
"for_each" argument is set.
loren avatar
loren

you can output the entire resource:

output "dns_record" {
  value = aws_route53_record.cloudfront
}

or a map of specific attributes:

output "dns_record" {
  value = { for key, resource in aws_route53_record.cloudfront : key => { fqdn = resource.fqdn } }
}

or a list of a single attribute:

output "dns_record" {
  value = [ for resource in aws_route53_record.cloudfront : resource.fqdn ]
}
1
loren avatar
loren
Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

1
Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Can also do aws_route53_record.cloudfront.*.fqdn iirc

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Or does that only work on counts?

loren avatar
loren

That syntax only works on count… When using for_each the object is a map, not a list
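If you want something splat-like with for_each, one option is to convert the map to a list first:

```hcl
output "dns_record" {
  # values() turns the for_each map into a list, which can then be splatted
  value = values(aws_route53_record.cloudfront)[*].fqdn
}
```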

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Ah, my apologies

walicolc avatar
walicolc

hello peoples - anyone know of any good terraform slack notification tool which shows plan/destroy in slack?

Alan Kis avatar
Alan Kis

https://github.com/terraform-aws-modules/terraform-aws-notify-slack

Usually, notifications are part of the CI/CD pipeline if you are using it for infra delivery.

terraform-aws-modules/terraform-aws-notify-slack

Terraform module which creates SNS topic and Lambda function which sends notifications to Slack - terraform-aws-modules/terraform-aws-notify-slack

walicolc avatar
walicolc

Thanks for the ping. Will this send plan and/or destroy output to slack? Is there an example slack msg I can look at to see if that’s what I want?

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Does anybody know if there’s a way to force a recreate on an instance when userdata changes? I’ve found a lot of documentation on people having the opposite problem a few years ago (that it used to do that, but now it doesn’t)

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Nevermind, it seems to be doing it now…

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In general though, where we’ve had to control these sorts of events we use the null_resource with triggers
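A minimal sketch of that null_resource-with-triggers pattern (the user_data variable and the command are placeholders):

```hcl
resource "null_resource" "on_user_data_change" {
  # any change to the hash forces this resource, and its provisioners,
  # to be replaced on the next apply
  triggers = {
    user_data_hash = sha256(var.user_data)
  }

  provisioner "local-exec" {
    command = "echo 'user data changed'"
  }
}
```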

:--1:1
MattyB avatar
MattyB

https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/commit/19b680d0d21796fa21bf97aef40ebd8f1acc84c4 It looks like this is a breaking change for the way we’re using this module which maybe I wasn’t using correctly. Why is aws_iam_role.ecs_service disabled when using awsvpc as the network_mode?

RB avatar

i believe the awsvpc network mode is required for fargate

RB avatar

so the code is basically saying, if the network mode is not awsvpc (not for fargate), then disable creation of the ecs service role, the service policy, the role policy, and remove the iam role from the ecs service

RB avatar

could you create an issue ticket of what is breaking, how it’s breaking, and what your use case is ?

MattyB avatar
MattyB

Our Terraform plan is attempting to disable this role for our fargate instances..

MattyB avatar
MattyB

I’ll create a ticket

RB avatar

fyi, for now, can you pin to the tag version prior to that pr ?

RB avatar

please also include your arguments for the terraform module like the var.network_mode

MattyB avatar
MattyB

yeah, we’re pinning to a tagged version. that’s why we’re just now seeing an issue

RB avatar

oh ok cool so at least 0.25.0 works for now until we can come up with a solution upstream

MattyB avatar
MattyB

Sure thing! thanks for your help

RB avatar

then it will be easier to move the pin forward

RB avatar

of course, np!

RB avatar

i was the one that put the change in so i should probably fix the mistake wherever it may be

btai avatar

seems like region is not a field on the bucket resource anymore? how does it decide which region to provision in? using the provider region?
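Assuming it does come from the provider configuration, a region can be pinned per-bucket with an aliased provider (names here are illustrative):

```hcl
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

# the bucket is created in the region of the provider it references
resource "aws_s3_bucket" "logs" {
  provider = aws.use1
  bucket   = "example-logs-bucket"
}
```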

Jakub avatar
Jakub

hello, does anybody know how to run the tests and merge this PR?

1
1
Jakub avatar
Jakub
Update terraform-aws-route53-cluster-hostname versions to support Ter… by bmonkman · Pull Request #68 · cloudposse/terraform-aws-elasticsearch

…raform 0.13 what Update referenced module version so that we don&#39;t get a version conflict when using TF 0.13 why Currently I get the following error message when running terraform init: Er…

Gowiem avatar
Gowiem

This new, opinionated Terraform wrapper / framework just launched: https://terraspace.cloud/

I wouldn’t use it, but would be interested in hearing from others if they would.

Terraspace | The Terraform Framework

The Terraform Framework

sheldonh avatar
sheldonh

There is a need for a nice template system I think. I’d be much more interested if it was a Go cross platform app and easy as git town or similar tool.

Gowiem avatar
Gowiem

Yeah, I mentally took points off because it was written in Ruby as well. I’m no great Go programmer by any means, but any ops / infra related tooling creator should realize by now that you need to write your tool in Go for the community to get behind you.

2
:100:1
sheldonh avatar
sheldonh

Lol. Maybe not required to be in Go, but saying no Windows support is kinda leaving a huge gaping hole in coverage. I work on macOS, but any tooling I use for a team needs cross-platform support, or at least a docker workflow at the minimum. It has to be super easy to get going or there’s no chance of adoption in such a busy world imo.

:100:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are releasing a library built around variant2 by @mumoshu that extends this kind of functionality for terraform and helmfile. The rad thing about variant2 is it cross compiles to multiple platforms and is written in go. But all the workflows are written in native HCL2.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The ETA is “any day” now, but there are a couple of bug fixes we need before it works as a remote module. Btw, variant2 supports remote variants just like terraform supports remote modules. So it’s an insanely reusable cli workflow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
mumoshu/variant2

Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2

sheldonh avatar
sheldonh

Stoked

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

Hah. Uses Ruby/Thor. Looks just like my opinionated Puppet tool from ages ago: https://github.com/tehmaspc/puppet-magnum

Agree - wouldn’t want to use Ruby anymore myself either.

2020-08-23

Psy Shaitanya avatar
Psy Shaitanya

Hey Guys, I’ve started using the EKS cluster terraform module. I was running a few tests with release tag 0.26.1. I see that there is no kubeconfig path to store locally, nor is the content shown; all I see is the "kubernetes_config_map_id". I’ve seen people using kubeconfig_path = var.kubeconfig_path in earlier versions. Not sure if I’m missing something, or what the best way is to display the contents of the kubeconfig from the outputs or store it on a local path.

Thanks for your help in advance.

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

roth.andy avatar
roth.andy
07:11:18 PM

I’m currently working on a Terraform module that needs to create a Kubernetes cluster as well as deploy some helm charts to it. I need it to be as “production-ready” as possible. What’s the best approach right now for using Terraform to deploy things to Kubernetes?

For further context, the module will spin up AWS resources (EC2 instances, security groups, etc), then use the Terraform RKE provider to create the k8s cluster. Here’s an [example from Rancher](https://github.com/rancher/quickstart/tree/master/rancher-common) that is close to what I want to do, but they clearly say that it is not meant for production. Here’s [my repo](https://github.com/RothAndrew/terraform-aws-rke-rancher-master-cluster) if you want to follow along with my progress. I’m working in the feature/initial_dev branch.

While not a set-in-stone requirement, if at all possible, I would like to avoid requiring any local-exec or dependencies on any local installed tools other than Terraform. If it does require something local, using Docker to do it would be nice.

Terraform Helm Provider? I don’t know much about it, though it looks to have decently good support

  1. Does it require helm to be installed on the machine running Terraform?
  2. Is it being used anywhere successfully in production?

Terraform Helmfile Provider? Probably not much more than an honorable mention since it is so new, but I do :heart: pretty much anything @mumoshu touches :grin:

  1. Does it require helm, helmfile, helm-diff, helm-git, etc to be installed on the machine running Terraform? (If I am reading correctly, the answer is yes)

Local-exec using helm/helmfile in an idempotent way? Some of my colleagues do this, but I believe it is just too crude to use in production

Terraform Shell Provider? This feels like a souped-up version of local-exec that at least gives me better lifecycle management (thanks @mumoshu for linking to it in the helmfile provider docs)

Flux Helm Operator? the Flux project has a Helm operator that looks really nice. I’d need to get the operator installed, and then need to figure out the best way to get the CRDs applied, but it looks like it has nice potential

Chris Fowles avatar
Chris Fowles

https://github.com/minamijoyo/hcledit - a commandline hcl2 attribute editor

minamijoyo/hcledit

A command line editor for HCL. Contribute to minamijoyo/hcledit development by creating an account on GitHub.

RB avatar

Ya. Minamijoyo has some great tools. Check out tfschema too

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
minamijoyo/tfupdate

Update version constraints in your Terraform configurations - minamijoyo/tfupdate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Niiiiiice!

Emmanuel Gelati avatar
Emmanuel Gelati

which one of those is better: resource "foo_resource" "example" {} or resource foo_resource example {} ?

Chris Fowles avatar
Chris Fowles

i’ve been going with the policy of quotes only when absolutely necessary so resource foo_resource example {}

:--1:4
Zach avatar

whoah hold on. The quotes aren’t required on the resource type and name, for terraform?

loren avatar
loren

Nope, maybe once upon a time they were, but not since hcl2 and tf 0.12 iirc

Zach avatar

omg

2020-08-22

RB avatar

is a .hcl file the same as a .tf file ?

pjaudiomv avatar
pjaudiomv

They are both HashiCorp configuration language but one uses the specific terraform syntax

RB avatar

Is there a way to convert between tf and hcl and back?

pjaudiomv avatar
pjaudiomv

What’s the use case

RB avatar

I was looking at https://github.com/minamijoyo/hcledit and wondering how to programmatically remove a resource from terraform code while maintaining file structure and comments

minamijoyo/hcledit

A command line editor for HCL. Contribute to minamijoyo/hcledit development by creating an account on GitHub.

loren avatar
loren

.tf is just a file extension. The syntax of the contents is HCL

pjaudiomv avatar
pjaudiomv

Looks like hcledit should be able to handle that

loren avatar
loren

Nifty project, nice find

pjaudiomv avatar
pjaudiomv

I know .hcl is used for vault config and policies

RB avatar
minamijoyo/tfschema

A schema inspector for Terraform providers. Contribute to minamijoyo/tfschema development by creating an account on GitHub.

:--1:1
RB avatar

it’s like policy_sentry querying of iam perms but instead it queries arguments for terraform resources

RB avatar

this guy minamijoyo is killing it with the tf tools

RB avatar
minamijoyo - Overview

minamijoyo has 65 repositories available. Follow their code on GitHub.

Joe Niland avatar
Joe Niland

This is cool!

2020-08-21

Grid Cell avatar
Grid Cell

Hi All, I’m trying to set up an EKS cluster and have the following config on a brand new AWS account. I want the private subnets to use a NAT gateway

module "vpc" {
  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.16.1"
  namespace = var.namespace
  stage = var.stage
  name = var.name
  attributes = var.attributes
  cidr_block = "172.16.0.0/16"
  tags = local.tags
  enable_internet_gateway = true
}

module "subnets" {
  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.19.0"
  availability_zones = var.availability_zones
  namespace = var.namespace
  stage = var.stage
  name = var.name
  attributes = var.attributes
  vpc_id = module.vpc.vpc_id
  igw_id = module.vpc.igw_id
  cidr_block = module.vpc.vpc_cidr_block
  nat_gateway_enabled = true
  nat_instance_enabled = true
  tags = local.tags
  vpc_default_route_table_id = ""

}

module "eks_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.24.0"
  namespace = var.namespace
  stage = var.stage
  name = var.name
  attributes = var.attributes
  tags = var.tags
  region = var.region
  vpc_id = module.vpc.vpc_id
  subnet_ids = module.subnets.public_subnet_ids
  kubernetes_version = var.kubernetes_version
  local_exec_interpreter = var.local_exec_interpreter
  oidc_provider_enabled = var.oidc_provider_enabled
  enabled_cluster_log_types = var.enabled_cluster_log_types
  cluster_log_retention_period = var.cluster_log_retention_period
  kubernetes_config_map_ignore_role_changes = false
}

When I apply this, there is some weird race condition which results in the following error every time, even if I delete the resources and start again

Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
	status code: 400, request id: xxx
  on .terraform/modules/subnets/nat-gateway.tf line 62, in resource "aws_route" "default":
  62: resource "aws_route" "default" {



Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
	status code: 400, request id: xxx

  on .terraform/modules/subnets/nat-gateway.tf line 62, in resource "aws_route" "default":
  62: resource "aws_route" "default" {

I only see this error when

 nat_gateway_enabled = true
  nat_instance_enabled = true

in the subnets module. Can someone please help me track why this occurs or help me debug this before I raise it as a bug?

wannafly37 avatar
wannafly37

I think you either want a nat gateway or a nat instance, not both.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, either one, not both

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use nat gateways in staging and prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use nat instances (micro) in dev and test and sandbox to save some money (EC2 instances are cheaper than NAT Gateways)
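One way to express that choice per stage (assuming a var.stage variable like the config above, and picking exactly one of the two flags):

```hcl
locals {
  use_nat_gateway = contains(["staging", "prod"], var.stage)
}

module "subnets" {
  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.19.0"

  # exactly one of these should be true
  nat_gateway_enabled  = local.use_nat_gateway
  nat_instance_enabled = !local.use_nat_gateway
  nat_instance_type    = "t3.micro"

  # ...remaining arguments as in the original example
}
```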

ismail yenigul avatar
ismail yenigul

@Andriy Knysh (Cloud Posse) btw, is there any special reason to create the EKS workers in a public subnet? Isn’t it more secure to put them in a private subnet by default? See the examples at https://github.com/cloudposse/terraform-aws-eks-cluster

  module "eks_workers" {
...
    subnet_ids                         = module.subnets.public_subnet_ids
Grid Cell avatar
Grid Cell

thanks for the advice guys, the comments make it clear. At the moment I’m just testing things out, but the advice to use one instead of both worked

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ in production you should put the worker nodes into private subnets

ismail yenigul avatar
ismail yenigul

Yes, it would be great to update the example to make it safer

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the cluster itself should be given all subnets, private and public. k8s will use the public subnets to create load balancers in

ismail yenigul avatar
ismail yenigul

also it could be better to add subnet tagging with elb and internal-elb by default

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

# https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html
locals {
  public_subnets_additional_tags = {
    "kubernetes.io/role/elb" : 1
  }

  private_subnets_additional_tags = {
    "kubernetes.io/role/internal-elb" : 1
  }
}

module "subnets" {
  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.22.0"

  namespace   = var.namespace
  stage       = var.stage
  environment = var.environment
  name        = var.name
  delimiter   = var.delimiter
  attributes  = var.attributes
  tags        = local.tags

  availability_zones              = var.availability_zones
  cidr_block                      = module.vpc.vpc_cidr_block
  igw_id                          = module.vpc.igw_id
  map_public_ip_on_launch         = var.map_public_ip_on_launch
  max_subnet_count                = var.max_subnet_count
  nat_gateway_enabled             = var.nat_gateway_enabled
  nat_instance_enabled            = var.nat_instance_enabled
  nat_instance_type               = var.nat_instance_type
  public_subnets_additional_tags  = local.public_subnets_additional_tags
  private_subnets_additional_tags = local.private_subnets_additional_tags
  subnet_type_tag_key             = var.subnet_type_tag_key
  subnet_type_tag_value_format    = var.subnet_type_tag_value_format
  vpc_id                          = module.vpc.vpc_id
}
ismail yenigul avatar
ismail yenigul
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the example is outdated and was not updated to use the subnet tags

:--1:1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we will update it asap when we have time

ismail yenigul avatar
ismail yenigul

I can send a PR if you’re happy with those changes, but it seems README.md is auto-generated

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have GitHub action for that, don’t worry, we’ll run the command on your PR

ismail yenigul avatar
ismail yenigul
update subnet tags and use private subnets for worker by ismailyenigul · Pull Request #73 · cloudposse/terraform-aws-eks-cluster

Enabled subnet tags for ALB ingress controller in the example Create NAT gateway for private subnets Use private subnet for EKS nodes

2020-08-20

paultath81 avatar
paultath81

hoping someone can help. I created an individual environment module which sources out to my main module.

module "test" {
  source     = "../../terraform-aws-ec2-auto-scale/"
  region     = var.region
  aws_vpc    = var.aws_vpc
  subnet_ids = var.subnet_ids
}

If i execute a plan in my environment module I get the following error

2020/08/20 11:00:57 [ERROR] eval: *terraform.EvalSequence, err: Your query returned no results. Please change your search criteria and try again.
2020/08/20 11:00:58 [WARN] ReferenceTransformer: reference not found: "var.subnet_ids"
module.prtg.data.aws_security_group.default: Refreshing state...
module.prtg.data.aws_subnet_ids.default: Refreshing state...

Error: Your query returned no results. Please change your search criteria and try again.

But if I execute a plan within my source module, terraform plan and apply work.
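
For context, “Your query returned no results” comes from a data source inside the module (the log shows module.prtg.data.aws_subnet_ids.default) whose lookup matched nothing — typically because the variable it filters on is empty or wrong in the calling root module. A minimal sketch of the failing pattern, with hypothetical names:

```hcl
# Inside the child module: if var.aws_vpc is unset or points at the wrong
# VPC in the calling root module, this lookup matches no subnets and the
# plan fails with "Your query returned no results."
data "aws_subnet_ids" "default" {
  vpc_id = var.aws_vpc
}
```

Check that the root module actually declares and populates region, aws_vpc, and subnet_ids — the warning `reference not found: "var.subnet_ids"` suggests the variable isn’t set where the module is called.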

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
11:27:48 PM
:+1: 2
1
paultath81 avatar
paultath81

link?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…

:+1: 1
Adrian Navarrete avatar
Adrian Navarrete

Hi all, I am using this module https://registry.terraform.io/modules/terraform-aws-modules/atlantis/aws/2.23.0 (thanks to @antonbabenko) to set up Atlantis with GitLab. It works fine when the load balancer is external, but as soon as I change it to internal it is no longer reachable from the internet, and therefore not from the GitLab servers. Do you know if this module supports that, or what I can do to use the internal ALB while keeping it reachable from the GitLab servers / the internet? Many thanks in advance.

RB avatar

If you’re using public subnets, you’ll have to also set the public ip variable to true

RB avatar

If your ELB is publicly facing, make sure to allow only the GitLab and your office IP CIDRs

RB avatar

Otherwise anyone in the world will be able to hit it

Adrian Navarrete avatar
Adrian Navarrete

Correct, that’s how I have it right now: open but restricted to my office and the GitLab server

Adrian Navarrete avatar
Adrian Navarrete

I am using both public and private subnets; basically I am following this https://github.com/terraform-aws-modules/terraform-aws-atlantis#run-atlantis-as-a-terraform-module

I am going to set ecs_service_assign_public_ip=true as you mentioned

terraform-aws-modules/terraform-aws-atlantis

Terraform configurations for running Atlantis on AWS Fargate. Github, Gitlab and BitBucket are supported - terraform-aws-modules/terraform-aws-atlantis

2020-08-19

joshmyers avatar
joshmyers

https://github.com/aliscott/infracost looks interesting, but early days

aliscott/infracost

Get cost estimates from a Terraform project. Contribute to aliscott/infracost development by creating an account on GitHub.

sahil kamboj avatar
sahil kamboj

Hey guys, I have a workflow with a loophole in my Terraform script. I use a provisioning server to build an AMI, and that AMI is used by auto-scaling groups. After that I don’t need the provisioning server anymore, but if I force-remove it manually it creates an issue: whenever I run terraform apply it recreates the provisioning server and wants to update the AMI. I don’t want to do the provisioning manually.
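
One common workaround (a hedged sketch, not from this thread — all names are hypothetical) is to gate the build server behind a boolean variable, so Terraform itself destroys it once the AMI is baked instead of you deleting it manually:

```hcl
variable "enable_builder" {
  description = "Keep the AMI build instance around. Set to false once the AMI is baked."
  type        = bool
  default     = false
}

# With count = 0 the instance is destroyed on the next apply, and Terraform
# will not try to recreate it afterwards.
resource "aws_instance" "builder" {
  count         = var.enable_builder ? 1 : 0
  ami           = var.base_ami_id
  instance_type = "t3.micro"
}
```

Apply with -var enable_builder=true when baking a new AMI, then flip it back. Many teams avoid this entirely by baking AMIs outside Terraform, e.g. with Packer.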

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Added to list for next week

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, haha thought I was in the #office-hours channel

Andrey Nazarov avatar
Andrey Nazarov

Does anybody know if Terraform has a state file size limit? Are there any recommendations that it shouldn’t be, say, more than n MB? All I could find is this discussion: https://discuss.hashicorp.com/t/getting-error-when-tfstate-is-larger-than-4mb/6121

Getting error when tfstate is larger than 4MB

We’re storing our tfstate files in our documentDB (AWS mongodb compatible database) with a HTTP REST API in front of it using the HTTP backend, which has been fine thus far but it seems the size of the state file has crossed the 4MB threshold and the terraform apply worked fine but now I can’t do a terraform destroy because I’m getting the error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5429624 vs. 4194304) Is there a way we can increase this limit for t…

randomy avatar
randomy

I don’t know about actual limits but I think people tend to keep their state files small to reduce the blast radius and make it not unbearably slow to run (big state file = lots of resources = lots of API calls = slow).


1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Agree with @randomy - but will also bring up next week

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, haha thought I was in the #office-hours channel

randomy avatar
randomy

On the other hand, it is pretty nice when you can run terraform and it handles everything. Fewer chicken-and-egg situations, no need to update various dependent stacks afterwards.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@randomy ya i’m :100: torn on it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Multiple projects is ideal for reducing blast radius, but it complicates cold-starts and moves the responsibility of managing the complete DAG to you.

Using a single project is great because it’s all handled for you, but plans/apply can take hours. Using -target is not ideal either because you need to know which things to target or risk inconsistent state as well.

randomy avatar
randomy

I’ve been working with CodePipeline lately to deploy to multiple AWS accounts. It’s pretty tempting to make a single TF stack that creates resources in every target environment/account + pipeline resources. It’s got a fairly small blast radius but still mixes nonprod with prod. I’m writing the module example this way and mostly prefer it over the actual implementation that involves separate stacks per account/environment using remote states etc.

randomy avatar
randomy

To add context, this is an auto scaling group in multiple environments + 2 pipelines to deploy new AMIs and app artifacts to them. So it would be a stack per “service” but it covers all environments. It is pretty quick to run TF so the dilemma is mostly around blast radius/separation. In my current case the ASG is stateless (web servers) so I’m comfortable with it. Would be less comfortable otherwise.

randomy avatar
randomy

Bringing this back to the original question, I struggle to decide on how to slice and dice state files but it’s always a long way off hitting 4mb.

walicolc avatar
walicolc

ello peoples, anyone know of any good resources on implementing CI/CD on AWS with Terraform? In particular, best practices on managing the plan and apply commands in the build phase using CodeBuild and interacting with S3 state files?

sheldonh avatar
sheldonh

Is there any solution to using the Terraform Cloud module registry for my team without everyone needing a login? I like browsing it and the ability to use version syntax without tags, but I realized that if I want to roll out consuming those modules (NOT running plans), I’m stuck once the 5 free users are hit. I basically need “read-only” users for the registry.

Any known solution to this?

Theo Gravity avatar
Theo Gravity

Anyone know how to resolve this issue? https://github.com/cloudposse/terraform-aws-ecs-web-app/issues/63

I’m trying the complete example using EC2 with codepipeline_enabled = false, webhook_enabled=false, ecs_alarms_enabled=false, codepipeline_badge_enabled=false

I get:

Error: If `individual` is false, `organization` is required.

  on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
   7: provider "github" {

If I define codepipeline_repo_owner = "hashicorp" like the issue describes, I get:

Error: If `anonymous` is false, `token` is required.

 on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
  7: provider "github" {

I looked at the provider "github" definition in the subdep and there does not seem to be a direct way of defining it from this repo

"Error: If `individual` is false, `organization` is required." unless known repo_owner org is supplied · Issue #63 · cloudposse/terraform-aws-ecs-web-app

Found a bug? Maybe our Slack Community can help. Describe the Bug Here is my terraform code module "ecs_web_app" { source = "git://github.com/cloudposse/terraform-aws-ecs-web->…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Set the GITHUB_TOKEN environment variable and try again

Theo Gravity avatar
Theo Gravity

Hm, no go, tried a few ways

• export GITHUB_TOKEN=test

• GITHUB_TOKEN=test terraform apply --var-file=eng.tfvars

• terraform apply --var-file=eng.tfvars -var 'GITHUB_TOKEN=test'

Theo Gravity avatar
Theo Gravity

Oh, I noticed I get 401 Bad credentials []

Theo Gravity avatar
Theo Gravity

but I don’t even want to use anything related to codepipeline

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The terraform-aws-ecs-web-app is designed to be an opinionated module that shows how to use all the other modules together. Since we write very small composable modules, this lets you pick and choose the best pieces. If the web-app module doesn’t fit that use-case, check out the main.tf file and see how it uses the other modules. Rip out what you don’t need.

But as @RB points out, there might be an easier alternative.

:+1: 1
Theo Gravity avatar
Theo Gravity

maybe this module isn’t for me - maybe I should use terraform-aws-ecs-alb-service-task instead

thanks for your help, I’m going to use the other module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This looks like a good conclusion.

RB avatar

Just set the repo_owner = "hashicorp" and it should work. It’s mentioned in the steps to reproduce.

Theo Gravity avatar
Theo Gravity

I tried that at the time (also stated it in the original message); it did not work

RB avatar

setting the repo_owner is not the same as setting the codepipeline_repo_owner

Theo Gravity avatar
Theo Gravity

I only see one reference to repo_owner in the example main.tf

repo_owner                           = var.codepipeline_repo_owner
RB avatar

have you tried setting repo_owner = "hashicorp" without codepipeline ?

Theo Gravity avatar
Theo Gravity

hm i see

Theo Gravity avatar
Theo Gravity

ok, i’ll try it out

Theo Gravity avatar
Theo Gravity

yeah, it is not working

Warning: Value for undeclared variable

The root module does not declare a variable named "repo_owner" but a value was
found in file "eng/gp-view.tfvars". To use this value, add a "variable" block
to the configuration.

Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.


Error: If `anonymous` is false, `token` is required.

  on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
   7: provider "github" {

I went to the main.tf in master, and that corresponds to

https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L175

which is the same in the example as

https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/examples/complete/main.tf#L138

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app


Theo Gravity avatar
Theo Gravity

My source is terraform-aws-ecs-web-app.git?ref=tags/0.39.1 btw

RB avatar

try planning this entire block

module "ecs_web_app" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.36.0"

  name                                            = local.name
  vpc_id                                          = local.vpc_id
  alb_ingress_unauthenticated_listener_arns       = [data.aws_lb_listener.selected.arn]
  alb_ingress_unauthenticated_listener_arns_count = 1
  aws_logs_region                                 = "us-east-2"
  region                                          = "us-east-2"
  ecs_cluster_arn                                 = data.aws_ecs_cluster.selected.arn
  ecs_cluster_name                                = data.aws_ecs_cluster.selected.cluster_name
  ecs_security_group_ids                          = [data.aws_security_group.selected.id]
  ecs_private_subnet_ids                          = local.subnet_ids
  alb_ingress_healthcheck_path                    = "/healthz"
  alb_ingress_unauthenticated_paths               = ["/*"]
  codepipeline_enabled                            = false
  cloudwatch_log_group_enabled                    = false
  webhook_enabled                                 = false
  alb_security_group                              = "sg-11112222"
  repo_owner                                      = "hashicorp"
}
RB avatar

see if this block works for you and then modify it to your liking

Theo Gravity avatar
Theo Gravity

still no go

Theo Gravity avatar
Theo Gravity
Error: If `anonymous` is false, `token` is required.

  on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
   7: provider "github" {
Theo Gravity avatar
Theo Gravity

I replaced the ecs_web_app block from the complete example with yours, and replaced some variables

module "ecs_web_app" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.36.0"
  name                                            = "test-web-app"
  vpc_id                                          = module.vpc.vpc_id
  alb_ingress_unauthenticated_listener_arns       = [module.alb.alb_arn]
  alb_ingress_unauthenticated_listener_arns_count = 1
  aws_logs_region                                 = "us-west-2"
  region                                          = "us-west-2"
  ecs_cluster_arn                                 = aws_ecs_cluster.default.arn
  ecs_cluster_name                                = aws_ecs_cluster.default.name
  ecs_security_group_ids                          = [module.vpc.vpc_default_security_group_id]
  ecs_private_subnet_ids                          = module.subnets.private_subnet_ids
  alb_ingress_healthcheck_path                    = "/healthz"
  alb_ingress_unauthenticated_paths               = ["/*"]
  codepipeline_enabled                            = false
  cloudwatch_log_group_enabled                    = false
  webhook_enabled                                 = false
  alb_security_group                              = "sg-11112222"
  repo_owner                                      = "hashicorp"
}
Theo Gravity avatar
Theo Gravity

also made sure to terraform init

RB avatar

that’s really weird

RB avatar

it looks like you’re using an old version

RB avatar

can you try using 0.39.1, which is the latest tag in the repo

RB avatar

I’d also try a terraform init -upgrade

RB avatar

or better yet, wipe it: rm -rf .terraform/ && terraform init -upgrade

RB avatar

if that doesn’t work then idk

2020-08-18

walicolc avatar
walicolc

hello all, I’m trying to point a list of record sets to a single load balancer. I’ve got this TF set up:

#---------------------------------------------------
# CREATE ALIAS RECORDS
#---------------------------------------------------
resource "aws_route53_record" "alias_route53_record" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = values(var.record_sets)
  type    = "A"

  alias {
    name                   = data.aws_lb.selected.dns_name
    zone_id                = data.aws_lb.selected.zone_id
    evaluate_target_health = true
  }
}


# tfvars
record_sets = {
  record_set0 = "foo"
  record_set1 = "bar"
}

Error output

Error: Incorrect attribute value type

  on main.tf line 39, in resource "aws_route53_record" "alias_route53_record":
  39:   name    = values(merge(var.record_sets))
    |----------------
    | var.record_sets is object with 2 attributes

Inappropriate value for attribute "name": string required.

Was wondering if this is possible or must I create two aws_route53_record resources. Thank you

walicolc avatar
walicolc

Resolved. Forgot for_each existed

resource "aws_route53_record" "alias_route53_record" {
  zone_id  = data.aws_route53_zone.selected.zone_id
  for_each = var.record_sets
  name     = each.value
  type     = "A"

  alias {
    name                   = data.aws_lb.selected.dns_name
    zone_id                = data.aws_lb.selected.zone_id
    evaluate_target_health = true
  }
}
:+1: 1
HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
03:45:18 PM

Hashicorp Consul Service (HCS) for Azure affected Aug 18, 15:43 UTC Resolved - At approximately 14:26UTC we began experiencing errors with the creation of consul clusters for HashiCorp Consul Service (HCS) on Azure.

Engineers quickly identified the issue and implemented corrective action.

Services are now fully operational.


Andrey Nazarov avatar
Andrey Nazarov

I’m facing a strange issue. Cannot figure out what’s happened

We just made some modifications to one resource in our .tf file, nothing else, and from that point terraform plan started showing logs as if it were run with TF_LOG=TRACE. There is no env var set.

Terraform 0.12.24, it’s a resource of some external provider.

Eric Berg avatar
Eric Berg

Are you sure it’s TRACE-level output and not tf dumping state?

Andrey Nazarov avatar
Andrey Nazarov

It might be, indeed. So initially we faced an issue with TF locks during terraform plan, but there couldn’t have been any other pipeline or person triggering the same terraform apply. The error was:

2020/08/18 22:12:14 [TRACE] backend/local: requesting state manager for workspace "my-workspace"
2020/08/18 22:12:15 [TRACE] backend/local: requesting state lock for workspace "my-workspace"
o:Acquiring state lock. This may take a few moments...
e:
Error: Error locking state: Error acquiring the state lock: writing "gs://my-bucket/terraform/my-workspace.tflock" failed: googleapi: Error 412: Precondition Failed, conditionNotMet
Lock Info:
  ID:        1597788628081747
  Path:      gs://my-bucket/terraform/my-workspace.tflock
  Operation: OperationTypePlan
  Who:       [email protected]
  Version:   0.12.24
  Created:   2020-08-18 22:10:27.940237341 +0000 UTC
  Info:      
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.

Note the TRACE output, there are other TRACE, DEBUG and INFO lines prior to this.

If we do terraform plan -lock=false it shows tons of debug output like this:

...
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:   labels:
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     app: prometheus-operator-prometheus
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     chart: prometheus-operator-8.7.0
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     release: "prom"
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     heritage: "Helm"
2020-08-18T17:35:05.432Z [DEBUG] plugin.terraform-provider-helmfile: roleRef:
...

And it takes ages to proceed, and usually fails afterwards without a noticeable error.

But sometimes, quite rarely now, it works OK and shows normal output. We can’t figure out the reason. We started getting the same thing on another environment where there were no modifications of TF resources and no tools were updated or anything.

What a cryptic case.

Andrey Nazarov avatar
Andrey Nazarov

Filed an issue in helmfile-provider repo

Andy Hibbert avatar
Andy Hibbert

Hey! Would I be able to get a review on this PR to add in kms_key_id to terraform-aws-elasticache-redis - https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/75

add_kms_key_id: Allow user to supply their own kms_key_id by hibbert · Pull Request #75 · cloudposse/terraform-aws-elasticache-redis

Change-Id: I23d1288851301328afaa61686b42d8376d303415 what This change allows a user to supply their own kms_key_id from a previously created kms key when at rest encryption is enabled why Securi…

1
Andy Hibbert avatar
Andy Hibbert

Thanks!


Vlad Ionescu avatar
Vlad Ionescu

https://github.com/aws/containers-roadmap/issues/56 is killing me. Anybody got a better alternative to config files in Fargate, other than a sidecar container & base64?

I was thinking maybe EFS & local-exec to copy a file but that’s even worse

[ECS] [Volumes]: Ability to create config "volume" and mount it into container as file · Issue #56 · aws/containers-roadmap

Tell us about your request Would be very nice to be able to mount strings (secrets/configurations) defined in the task definition into the container as a file. Which service(s) is this request for?…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry! missed this. will add to agenda next week.


Vlad Ionescu avatar
Vlad Ionescu

I wouldn’t add it. There really is no better way. But hey, it might lead to an interesting discussion

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, haha thought I was in the #office-hours channel

Vlad Ionescu avatar
Vlad Ionescu

What I ended up doing is using templatefile() + AWS SSM and overriding the entrypoint to the container.

I’d rather not change the app or modify the code / image so this was the easiest way.

secrets = [
  # Hacky hack to get the config file in the Fargate container
  #  see https://github.com/aws/containers-roadmap/issues/56
  # TODO: move this to Secrets Manager for extra 2KB size
  {
    name      = "ENCODED_CONFIG"
    valueFrom = aws_ssm_parameter.config.arn
  },
  {
    name      = "ENCODED_RULES"
    valueFrom = aws_ssm_parameter.rules.arn
  },
]

entrypoint = [
  "bash",
  "-c",
  "set -ueo pipefail; unset AWS_CONTAINER_CREDENTIALS_RELATIVE_URI; unset AWS_EXECUTION_ENV; mkdir /etc/samproxy; echo $ENCODED_CONFIG | base64 -d > /etc/samproxy/samproxy.toml; echo $ENCODED_RULES | base64 -d > /etc/samproxy/rules.toml; /usr/bin/samproxy"
]
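
For the SSM side of this, a hedged sketch (hypothetical names and template path) of how the encoded config might be produced with templatefile():

```hcl
# Render the config template, base64-encode it, and store it in SSM;
# the container entrypoint then decodes it back into a file at startup.
resource "aws_ssm_parameter" "config" {
  name  = "/samproxy/encoded-config"
  type  = "SecureString"
  value = base64encode(templatefile("${path.module}/templates/samproxy.toml.tpl", {
    api_key = var.honeycomb_api_key
  }))
}
```
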
Vlad Ionescu avatar
Vlad Ionescu
Vlaaaaaaad/terraform-aws-fargate-samproxy

A Terraform module for running Honeycomb.io’s Samproxy on AWS in Fargate - Vlaaaaaaad/terraform-aws-fargate-samproxy

2020-08-17

Andy avatar

Those of you using chamber to store secrets in AWS Parameter Store - do you store the secrets encrypted in a git repository as well as a backup?

segmentio/chamber

CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.

s_slack avatar
s_slack

I use chamber but keep it in AWS only.


Gowiem avatar
Gowiem

@ I’ve done purely storing in PStore before. I’m also more recently using Mozilla Sops + the Sops provider alongside that.

Mike Martin avatar
Mike Martin

Hi there - we are looking to import our Route53 hosted zones (currently managed by hand) into TF Cloud. The ideal layout (at least the way I see it now) is the following GitHub layout, which can be managed by CODEOWNERS to approve specific files. Does it make sense to put it all in one workspace, or should each hosted zone get its own workspace? I know ideally we’d want each lifecycle to get its own workspace, but that isn’t a reality yet: many things are old, were spun up by hand, and may never exist in Terraform.


### main.tf
provider "aws" {
  region     = "us-east-1"
  access_key = KEY
  secret_key = SECRET
}

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "Org"

    workspaces {
      name = "aws-prod-route53"
    }
  }

  # https://github.com/terraform-providers/terraform-provider-aws/issues/13626
  required_providers {
    aws = "~> 2.64.0"
  }
}

resource "aws_route53_record" "staging_record" {
  zone_id = aws_route53_zone.org.zone_id
  name    = "staging.org.com"
  type    = "A"
  ttl     = 300
  records = ["55.82.222.111"]
}


### network-team.tf
resource "aws_route53_record" "cisco_record" {
  zone_id = aws_route53_zone.org.zone_id
  name    = "cisco.org.com"
  type    = "A"
  ttl     = 300
  records = ["55.82.222.112"]
}


### CODEOWNERS

## https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
*                  @global-owner1 @global-owner2
network-team.tf    @network-owner
awatson avatar
awatson

I’m actually in the middle of doing this as well and interested to find out what people recommend. Right now, because my network team oversees more than one client/environment, I created an infra-networking repo that is a single workspace instead of one per environment (dev, stage, prod), since the same core team would likely control all of it and it doesn’t change as often as the application or operations code does

Emmanuel Gelati avatar
Emmanuel Gelati

if you have the DNS stuff in the infra-networking repo managed by the network team, you can use data sources to query those resources and use them in your environments

1
sheldonh avatar
sheldonh

Anyone get a basic quote on Terraform Business tier? Trying to figure out a general price to put a placeholder in the budget, but there’s no phone number to call and I’m not sure how long before I get an email from sales.

A ballpark would be helpful. Team tier is $70 per person, and Business is “contact sales”

Sheece avatar
Sheece

Hi, I noticed that capital letters in the value for module.ssh_key_pair.name are automatically converted to lowercase!

module "ssh_key_pair" {
  source                = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=0.11.0"
  name                  = "KEYPAIR"
  ssh_public_key_path   = "${path.module}/"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"
}

Is this a known thing or am I doing something wrong ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The entire cloud posse ecosystem of modules uses the terraform-null-label module to enforce consistency

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that module normalizes everything

:+1: 1
Sheece avatar
Sheece

thanks for letting me know, I’ll follow the convention

2020-08-16

bbhupati avatar
bbhupati

Hello All, I’m new to Terraform. I’m trying to convert a Terraform kubernetes ingress resource into a module (https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress#backend), but when I call the module I get the error below. Can someone help me fix it?

bbhupati avatar
bbhupati

resource "kubernetes_ingress" "example_ingress" {
  count = var.ingress ? length(var.spec) : 0

  metadata {
    name        = var.name
    namespace   = var.namespace
    labels      = var.labels
    annotations = var.annotations
  }

  dynamic "spec" {
    for_each = length(keys(var.spec[count.index])) == 0 ? [] : [var.spec[count.index]]
    content {
      rule = lookup(spec.value, "rule", null)

      dynamic "rule" {
        for_each = length(keys(lookup(spec.value, "rule", {}))) == 0 ? [] : [lookup(spec.value, "rule", {})]
        content {
          host = lookup(rule.value, "host", null)
          http {
            path {
              path = lookup(rule.value, "path", "*/")
              backend {
                service_name = lookup(rule.value, "service_name", null)
                service_port = lookup(rule.value, "service_port", null)
              }
            }
          }
        }
      }
    }
  }
}

bbhupati avatar
bbhupati

Error: Missing item separator

on ingress.tf line 17, in module "ingress":
  17:   rule = {

Expected a comma to mark the beginning of the next item.

[terragrunt] 2020/08/17 05:22:39 Hit multiple errors: exit status 1

2020-08-14

Saichovsky avatar
Saichovsky

Hey guys,

Saichovsky avatar
Saichovsky

I have a question regarding TF state as set from different GitHub branches

Saichovsky avatar
Saichovsky

Let me explain

Saichovsky avatar
Saichovsky

So I am working on a branch, seeking to create a new terraform resource. My branch creates this resource in a staging environment. Once I am done with all tests, I will merge to master and allow deployment to production

Saichovsky avatar
Saichovsky

My colleague is doing the same thing - a different resource, but deploying to staging, pending tests, upon whose success the resource will then be deployed to production

Saichovsky avatar
Saichovsky

So whenever I push my changes to my branch, CI/CD performs checks and deletes my colleague’s resources, because they are in the shared state file but absent from my branch

Saichovsky avatar
Saichovsky

Same thing happens when he pushes his commits to the remote branch. Checks are triggered on GH, removing my resource(s), which are missing from his branch but present in the remote state file. This is really slowing us down, as one has to wait for the other to be finished with their testing and so forth. I have a feeling there should be a way to prevent this from happening, but I am just not sure how

Saichovsky avatar
Saichovsky

Your assistance is appreciated

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I’m not sure if this is necessarily a good idea, but Terragrunt can generate remote state configuration files when running. It may be possible to make branch-specific remote states.
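
A hedged sketch of what that could look like in terragrunt.hcl (the bucket name and CI environment variable are hypothetical):

```hcl
# terragrunt.hcl: generate a backend.tf with a branch-specific state key.
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket = "my-terraform-state"
    # One state file per branch, taken from the CI environment.
    key    = "${get_env("CI_COMMIT_BRANCH", "main")}/terraform.tfstate"
    region = "us-east-1"
  }
}
```
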

Saichovsky avatar
Saichovsky

Sounds like it would work… Let me run that by my lead and hear what he thinks. Thank you

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Alternatively, if this is for testing anyway, I would spin up a new stack per developer when needed with a completely fresh state. This does assume your TF is modularised enough to be able to spin up test instances without costing a fortune though

RB avatar

You could use a different s3 key backend
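
As a sketch (hypothetical bucket/key): backend blocks can’t interpolate variables, so the per-branch key is usually supplied at init time via partial configuration:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    region = "us-east-1"
    # key intentionally omitted; supply it per branch at init time:
    #   terraform init -backend-config="key=branches/<branch>/terraform.tfstate"
  }
}
```
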

RB avatar

Or you could merge the branches and both of you can work off the same branch

Saichovsky avatar
Saichovsky

Merging the branches is super unlikely because we work at different paces, different timezones…

Saichovsky avatar
Saichovsky

I like the idea of having branch-specific state files better

Saichovsky avatar
Saichovsky

I wonder if it would be possible to merge state files like you merge branches

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

That sounds… unpleasant

Saichovsky avatar
Saichovsky

What? The different paces of work or merging state files?

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Merging of state files. I think you’d be better off doing development in entirely separate states, merging the TF changes into a master branch, then deploying that to the master state file

RB avatar

Ya different states make more sense to me too. Seems cleaner. If you ever want to combine, you can always reimport resources from another state and then remove that state.

RB avatar

thread

RB avatar

what i’ve done is split all the inline routes into separate aws_route resources and then imported each aws_route from the aws_route_table

2020-08-13

RB avatar

That’s pretty cool. I didn’t know this. I wonder if you can simply provide the same s3 key argument in the s3 backend from another module. I also wonder how this works with workspaces

1
Zach avatar

We’re a little paranoid in my office about giving terraform roles to an external service like Terraform Cloud, or if we had a GitHub Action that ran terraform on some trigger, to the point that we’ve resisted using any of them. … how do you guys scope/limit the role for a terraform ‘actor’ given that its potentially touching a lot of different types of resources and API Actions?

roth.andy avatar
roth.andy

plus proper use of the Resource: block

Zach avatar

I find those so confusing … it’s an IAM policy that governs another IAM policy that governs a role/user

Zach avatar

right?

roth.andy avatar
roth.andy

permission boundaries set the absolute maximum limit of what the entity is able to do. Even if someone screws up and gives them every permission under the sun the permission boundary will still block anything that isn’t allowed in it
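A minimal sketch of that pattern (names and policy documents are placeholders):

```hcl
# Hypothetical boundary policy; the data sources are assumed to be
# defined elsewhere.
resource "aws_iam_policy" "tf_boundary" {
  name   = "terraform-permissions-boundary"
  policy = data.aws_iam_policy_document.boundary.json # placeholder document
}

resource "aws_iam_role" "tf_runner" {
  name               = "terraform-runner"
  assume_role_policy = data.aws_iam_policy_document.assume.json # placeholder

  # Hard ceiling: policies attached to this role can never grant
  # permissions beyond what the boundary allows.
  permissions_boundary = aws_iam_policy.tf_boundary.arn
}
```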

Gowiem avatar
Gowiem

They’re a big pain IMO, but yeah permission boundaries is the way I’ve dealt with this in the past as well.

Zach avatar

Ok thanks for confirming I find this aspect of IAM very confusing but my junior guy says he’s got a good grasp of it

loren avatar
loren

Also use branch protection and required reviews, and when running in a pull request use a credential that is limited to read-only. plan will work, but not anything that actually makes changes… Gives you the ability to inspect and approve, before doing anything crazy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and on prem runners are now supported

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(announced yesterday)

Zach avatar

Whoah

Andy avatar

Hi with this cloudfront S3 module https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn it recommends creating the ACM certificate using the cli. Why isn’t the certificate terraformed? It looks like it’s possible: https://github.com/cloudposse/terraform-root-modules/blob/master/aws/acm-cloudfront/main.tf

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

iirc, due to cloudfront limitations, the cert has to be created in us-east-1. If your provider is defaulted to another region, you have to create another provider and use that one specifically for us-east-1, and I don’t think cloudposse repos generally pin providers?

1
RB avatar

cert has to be created and verified before cloudfront will work

1
RB avatar

i created mine outside of the module using a targeted apply, then verified the record, then applied the module which takes the resource acm as an input argument

RB avatar

if you reuse an acm cert, you can use a data source instead
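For the reuse case, a lookup might look like this (assuming an aliased us-east-1 provider, which CloudFront requires):

```hcl
# Look up an existing, issued certificate instead of creating a new one.
data "aws_acm_certificate" "this" {
  provider    = aws.us-east-1
  domain      = var.domain
  statuses    = ["ISSUED"]
  most_recent = true
}
```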

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

This works if you want to do it as part of one pipeline, you can just depend on both of the validations:

resource "aws_route53_zone" "delegate" {
  name = var.domain
}

resource "aws_acm_certificate" "cert" {
  provider = aws.us-east-1 # Forced to use us-east-1 due to cloudfront limitations

  domain_name       = var.domain
  subject_alternative_names = ["*.${var.domain}"]
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  # It took me TWO HOURS to figure out that Terraform converted this from a list to a set in 0.13, grrrr!
  name    = element(tolist(aws_acm_certificate.cert.domain_validation_options), 0).resource_record_name
  type    = element(tolist(aws_acm_certificate.cert.domain_validation_options), 0).resource_record_type
  zone_id = aws_route53_zone.delegate.zone_id
  records = [element(tolist(aws_acm_certificate.cert.domain_validation_options), 0).resource_record_value]
  ttl     = 60
}

resource "aws_acm_certificate_validation" "cert" {
  provider = aws.us-east-1

  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [aws_route53_record.cert_validation.fqdn]
}
Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

(note specifically the provider us-east-1 alias)

Chris Wahl avatar
Chris Wahl

The “grrrr” comment is

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Haha, came across that one today, can you tell?

pjaudiomv avatar
pjaudiomv

Omg I ran into same issue probably took me about the same amount of time to figure out

1
pjaudiomv avatar
pjaudiomv

I finally looked over the change log and found it

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

It makes like, literally no difference operationally, it’s JUST there to break random tf files

pjaudiomv avatar
pjaudiomv

My coworker said to pin aws version at 2 but I was determined to upgrade to 3

RB avatar

ah i was not aware of the aws_acm_certificate_validation resource. very cool

RB avatar

and good to know about the aws 3.x provider with that breaking change. we’re not currently pinning but will be on the lookout for that same issue

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I ended up figuring it out by looking at the state file, and nestled in between a bunch of lists was a lonely set…

Chris Wahl avatar
Chris Wahl

I’m still pessimistically pinning the AWS provider to avoid a 3.0 surprise.

loren avatar
loren

iirc, changing to a set was necessary because the aws api returns it as a set, which meant the order was constantly changing. order matters in a list, so when using multiple SANs and dealing with multiple validation records as a result, the constantly changing order would cause perpetual diffs in a plan

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Ah, that’s a fairly good reason, I rescind my comment about it being pointless then

loren avatar
loren

And also, a record with the domain and its wildcard subdomain use the same validation record… With a list, there would be a duplicate entry, but with a set all the duplicates get removed automatically
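For what it’s worth, the 3.x provider docs suggest iterating the set with for_each, which also absorbs the wildcard duplicate loren mentions (resource names here match the earlier snippet):

```hcl
# One validation record per domain_validation_options entry; duplicates
# collapse because the set deduplicates and allow_overwrite handles the
# shared wildcard record.
resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.delegate.zone_id
}
```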

loren avatar
loren

Ran into some of these “fun” problems working this module… https://www.github.com/plus3it/terraform-aws-tardigrade-acm/tree/master/main.tf

plus3it/terraform-aws-tardigrade-acm

Contribute to plus3it/terraform-aws-tardigrade-acm development by creating an account on GitHub.

Andy avatar

Thanks for the advice . Got it all Terraformed now (with Terragrunt). Also noticed Erik has an issue to update the docs to use their ACM module: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/issues/26

Update Instructions to use our ACM Module · Issue #26 · cloudposse/terraform-aws-cloudfront-s3-cdn

what Use terraform-aws-acm-request-certificate instead of aws cli why 100% native terraform, no cli necessary references https://github.com/cloudposse/terraform-aws-acm-request-certificate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’m working on updating our test infrastructure for 0.13. not much to it if we just want to say, “screw it, we’re going to 0.13”. i’m definitely leaning towards that, but it means that support for 0.12 will be only for hotfixes.

loren avatar
loren

I guess you could do something like run the test twice in parallel stages, once for each version?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But at some point they diverge. What happens if it passes 0.13 and fails 0.12?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If it’s a contributor, do we push back and ask them to fix it? Make it work for both? What if it can’t work for both?

loren avatar
loren

Well that’s the decision, right? To use tf 0.13 features, or not

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

The ability to do for_each and count on modules is likely going to simplify and reduce a lot of code in the cloudposse repos. I’d really prefer not to disallow 0.13
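For example, under 0.13 a repeated module call collapses into one block (source and inputs here are illustrative):

```hcl
# 0.13+: for_each on a module block replaces N copy-pasted module calls.
module "bucket" {
  source   = "./modules/bucket" # illustrative local module
  for_each = toset(["logs", "assets", "backups"])

  name = each.key
}
```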

loren avatar
loren

Personally I’d just cutoff 0.12 and go all in on 0.13

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

The cloudposse repos all need to work on 0.13 before that’s possible though

loren avatar
loren

I’d enforce a minimum version known to be required based on features used, e.g. >= not ~>. Enforcing the upper bound is too much, too hard

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

The pinned versions for commonly used modules is a big issue tbh. For example, almost every single repo is currently broken (on 0.13) due to being pinned at <=0.16.0 of null-label, which is pinned to tf 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@loren I think you’re right. mea culpa on this one. At least for the terraform core version, I think we should only use a minimum version, because upgrading across minor versions is basically impossible in the current setup. the other challenge is our tests for examples/complete usually pull in many other modules to bring up a stack. so even simple modules with only 1 dependency can have many dependencies in the examples.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ so we’re using git ref sources, so it’s technically ==

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it’s time to switch our modules away from git ref sources too and use module registry sources

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gowiem

loren avatar
loren

i’m still wary of moving away from git refs, just because they are more portable and easier to override… maybe with the new registry features i’ll be able to set aside that concern, but i’d need to get some experience with it first

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Hm, how do releases work on the registry? Do they just copy directly from github releases?

loren avatar
loren

you register the module with the tf registry, and then any git tags are published as a version in the registry

loren avatar
loren

all automatic at that point, no further interaction required

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I guess the only difference is the syntax looks slightly cleaner then?

loren avatar
loren

you also can use all the version constraints on the version field…
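Roughly the difference (registry address and constraints are illustrative):

```hcl
# Registry source: the version field accepts constraint operators.
module "label" {
  source  = "cloudposse/label/null"
  version = ">= 0.16.0, < 1.0.0"
}

# Git ref source: effectively an exact pin, no constraint syntax.
module "label_git" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
}
```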

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Something to keep in mind is that large-reaching changes, e.g. via microplane, may be slightly more difficult with that method, since the sed find/replace becomes a more painful multiline affair

Gowiem avatar
Gowiem

Huh, switching to the terraform registry would be interesting. I could see the benefit for hurdles like this where git refs are causing us to do manual updates and using the module version input would provide us a cleaner way to say “use any upcoming release”. But that might mean that our always increasing minor version could break module dependencies out in the wild pretty easily. Or are we thinking we would want to only allow patch version increases for module deps?

@Erik Osterman (Cloud Posse) I wonder if we would want to do a test run or two with a couple popular modules to see how that would turn out?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have hundreds of PRs we gotta open and merge to bump versions everywhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This change will forcefully require 0.13 for merge to master - so I don’t want to merge it until we decide how to proceed with 0.12 support. https://github.com/cloudposse/actions/pull/42/files

Use terraform 0.13 by osterman · Pull Request #42 · cloudposse/actions

what Use terraform 0.13 for tests why Latest release

Gowiem avatar
Gowiem

Interesting terraform module generator: https://github.com/sudokar/generator-tf-module

Very similar to the Cloud Posse example module: https://github.com/cloudposse/terraform-example-module

sudokar/generator-tf-module

Project scaffolding for Terraform. Contribute to sudokar/generator-tf-module development by creating an account on GitHub.

cloudposse/terraform-example-module

Example Terraform Module Scaffolding. Contribute to cloudposse/terraform-example-module development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

– TERRAFORM 0.13

Here’s my plan for tonight.

  1. Push a 0.12/master branch up to all modules, cut from current master
  2. Update test automation to use 0.13 for PRs against master
  3. Update test automation to use 0.12 for PRs against 0.12/master

During this time, I’m going to change the default branch of cloudposse/actions to my development branch and chatops will likely be broken for a few hours. If I get that working, then I’ll merge that and restore the default branch. Then we’re set to test the onslaught of PRs.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Test infra has been updated. Unfortunately, the scope of this change is only now apparent to me. See the thread here: https://sweetops.slack.com/archives/CB6GHNLG0/p1597383937062900?thread_ts=1597346608.048900&cid=CB6GHNLG0

@loren I think you’re right. mea culpa on this one. At least for the terraform core version, I think we should only use a minimum version, because upgrading across minor versions is basically impossible in the current setup. the other challenge is our tests for examples/complete usually pull in many other modules to bring up a stack. so even simple modules with only 1 dependency can have many dependencies in the examples.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ll put a plan together tomorrow, but we’re likely going to need some help opening PRs to expedite this upgrade. Not sure how much of this we can automate.

mm avatar

hey all. I’m currently a bit stuck. I’ve successfully made and provisioned a server, setup s3 buckets etc. How do I then use the docker provider to setup my containers on the newly created server?

mm avatar

I tried moving the docker provisioning to a new module, but got this message, and can’t work out the syntax :

This module can be made compatible with depends_on by changing it to receive all of its provider configurations from the calling module, by using the "providers" argument in the calling module block.
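A sketch of what that error is asking for (module path, provider host, and resource names are hypothetical): the child module declares no provider blocks of its own, and the caller passes a configured provider in explicitly, which also makes depends_on legal on the module block (0.13+):

```hcl
# Caller (root module). The docker provider connects to the newly
# created server; the SSH host is a hypothetical example.
provider "docker" {
  host = "ssh://ubuntu@${aws_instance.server.public_ip}"
}

module "containers" {
  source = "./modules/containers" # hypothetical child module with no
                                  # provider blocks of its own

  providers = {
    docker = docker
  }

  depends_on = [aws_instance.server]
}
```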

2020-08-12

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:55:12 AM

Problems setting workspace execution mode in Terraform Cloud Aug 12, 10:35 UTC Investigating - We are currently investigating an issue where customers may be unable to set workspace execution mode in Terraform Cloud via the web interface. The API for this feature is still functional and can be used while we investigate. Please contact the support team if you need further assistance with this feature.

Problems setting workspace execution mode in Terraform Cloud

HashiCorp Services’s Status Page - Problems setting workspace execution mode in Terraform Cloud.

Josh Duffney avatar
Josh Duffney

How does one create an Azure Service Principal with sdk auth?

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
02:55:21 PM

Problems setting workspace execution mode in Terraform Cloud Aug 12, 14:42 UTC Resolved - We have deployed a fix for this issue. Customers should now be able to set workspace execution modes successfully in the web interface as well as via the API.Aug 12, 10:35 UTC Investigating - We are currently investigating an issue where customers may be unable to set workspace execution mode in Terraform Cloud via the web interface. The API for this feature is still functional and can be used while we investigate. Please contact the support team if you need…

Problems setting workspace execution mode in Terraform Cloud

HashiCorp Services’s Status Page - Problems setting workspace execution mode in Terraform Cloud.

Gowiem avatar
Gowiem

Hey for folks using the CP key-pair module (https://github.com/cloudposse/terraform-aws-key-pair) — How do you manage not checking that the pem file into git? Removing the pem file from the location the module writes it to obviously causes the module to want to recreate that file which I don’t want. Any tips to deal with that?

cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

RB avatar

could you put the pem in an s3 bucket and use a data source to bring it down in order to feed it into the module ?

Gowiem avatar
Gowiem

Yeah definitely. Sadly I was just trying to make it work so that I didn’t need to pre-generate it or set up something like that.

I tried:


  # If keypair's public key exists then no need to generate the key again.
  generate_ssh_key = fileexists("../pub_keys/${var.project}-${var.environment}-keypair.pub") ? false : true

But then it ends up deleting the pub key on the 2nd apply which I don’t want.

Going to put it down for now unless somebody shouts in here: “Here’s the way to do it!”

PePe avatar

Parameter Store

PePe avatar

we use that for keys , licenses etc

PePe avatar

my Social insurance number etc

2
Gowiem avatar
Gowiem

I lol’d at that dude, well done.

Gowiem avatar
Gowiem

I created and documented a manual process for now. My team can deal.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Use our SSM module instead

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ssm-tls-ssh-key-pair

Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, this is not the popular module but it’s the superior one.

Gowiem avatar
Gowiem

Hahah I’m the last contributor

Gowiem avatar
Gowiem

Didn’t even know about this. Did the most recent release as part of the ChatOps mass-update.

PePe avatar

hey…you are the last contributor on all the modules mister microplane

1
PePe avatar

next time use my username

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, @Gowiem is now probably the #1 contributor in terms of PRs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha

PePe avatar

I need more dark green in my github activity graph

Gowiem avatar
Gowiem

troll

Gowiem avatar
Gowiem

Yeah, I don’t mind my “74 contributions on Aug. 5th 2020” day. nyan_parrot

Gowiem avatar
Gowiem

Anyway — Thanks for pointing that out @Erik Osterman (Cloud Posse). Will definitely check that!

PePe avatar
PePe
05:55:08 PM

I do not care

Gowiem avatar
Gowiem

Haha @PePe you can do the microplane update for bumping the version of terraform-null-label across all the repos. That’ll leave you with some nice dark green in your graph!

PePe avatar

HAHAHAHAHA

PePe avatar

no worries , if someone ask who is this @Gowiem I will say is a machine bot user

1
Gowiem avatar
Gowiem

Look, I even documented it for you!

https://docs.cloudposse.com/community/contributor-tips/

RB avatar

following it now for my own stuff at work

1
Eric Berg avatar
Eric Berg

How do i slice a map, given an array of keys? I have a map containing monitor definitions, but i only need some of them for each invocation of the module. I pass in the keys to the monitor defs that I want, but I have not been able to create a map slice, containing just the keys that were passed in.

So, it looks something like this, where log_errors_high_volume and oom are keys that get passed in.:

locals {
  monitor_defs = {
    log_errors_high_volume = {
      type       = "log alert",
      recipients = local.recipients,
      query      = "....
    },
    oom = {
      type       = "log alert",
      recipients = local.recipients,
      query      = "....

If i only passed in oom, i’d need a map with one element. I intend to use the resulting map as the target for a for_each.

Eric Berg avatar
Eric Berg

Ok. Got it. I could have just referenced the larger data structure everywhere in the module call, using each.value as the key, but this worked nicely and got rid of a lot of text.

Given a map of definitions (local.monitor_defs), keyed on name, and a list of keys (local.monitor_def_keys), the following gives just the entries from the map that correspond to the specified keys:

  md = {
    for key, value in local.monitor_defs: key => value if contains(local.monitor_def_keys, key)
  }
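
The resulting local.md can then feed a for_each directly; e.g. with a hypothetical Datadog monitor resource (attribute mapping is illustrative):

```hcl
# One monitor per selected key; each.key is the monitor name,
# each.value is its definition from local.monitor_defs.
resource "datadog_monitor" "this" {
  for_each = local.md

  name    = each.key
  type    = each.value.type
  query   = each.value.query
  message = join(" ", each.value.recipients)
}
```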
Jaeson avatar
Jaeson

@, what version of TF is that?

Eric Berg avatar
Eric Berg

that was 0.12.29, but we’ve upgraded most things to 0.13.2.

Jaeson avatar
Jaeson

Can you link a reference you used to figure out the above? I can’t quite tell how you solved the issue – what is the structure of monitor_def_keys ?

Eric Berg avatar
Eric Berg

I don’t have the refs I used to figure this out anymore, @. monitor_def_keys is just a list of strings, representing keys in the monitor_defs map. Here, I’m returning (key, value) for each entry in monitor_defs for which there is an entry in monitor_def_keys.

sheldonh avatar
sheldonh

Terraform Cloud Plan creates 3 IAM Service Account Users & sets the permissions inline for a group called “infra-service-accounts”…..

Would you:

One Workspace Per Account/Stage: Create a workspace, i.e. a separate terraform job in the cloud, for each account, so each runs independently and looks up the credentials based on a variable called “account_alias”.

One Single Plan With Aliased Providers: Create a single plan that uses aliased providers and just repeats the code/module call, credential lookup, and so on, 8 times in a single file. I’ve tended to keep things separate, so each plan reports its own success/failure, but I’m wondering if the all-in-one-plan provider-alias approach is more common. Fewer workspaces to review as well.

PePe avatar

Hello, I have terraform repo that is going beyond any usefulness and I need to separate in different repos and hopefully different state files, what would be the best way to import the resources to the new states? by manually doing terraform import or something else?

Chris Fowles avatar
Chris Fowles

you can move resources between state files

Chris Fowles avatar
Chris Fowles

terraform state mv -state-out=PATH Path to the destination state file to write to. If this isn’t specified, the source state file will be used. This can be a new or existing path
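A rough sketch of the cross-state move (paths and the resource address are hypothetical; this matches the 0.12/0.13 CLI flags):

```shell
# Pull the source state locally, move one resource into a new state file,
# then push both back to their respective backends.
cd old-repo
terraform state pull > old.tfstate

terraform state mv \
  -state=old.tfstate \
  -state-out=../new-repo/new.tfstate \
  aws_s3_bucket.logs aws_s3_bucket.logs

terraform state push old.tfstate    # source backend, resource removed
cd ../new-repo
terraform state push new.tfstate    # destination backend, resource added
```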

Chris Fowles avatar
Chris Fowles

best to lock and disconnect your remote state before undertaking this task

PePe avatar

Ok so I will have to go from S3 to local and then back to another S3 backend I guess

PePe avatar

Ahhh but I need to move/import certain resources

Chris Fowles avatar
Chris Fowles
Usage: terraform state mv [options] SOURCE DESTINATION

 This command will move an item matched by the address given to the
 destination address. This command can also move to a destination address
 in a completely different state file.

 This can be used for simple resource renaming, moving items to and from
 a module, moving entire modules, and more. And because this command can also
 move data to a completely new state, it can also be used for refactoring
 one configuration into multiple separately managed Terraform configurations.

 This command will output a backup copy of the state prior to saving any
 changes. The backup cannot be disabled. Due to the destructive nature
 of this command, backups are required.

 If you're moving an item to a different state file, a backup will be created
 for each state file.
PePe avatar

ahhhhhhhh cool

PePe avatar

ok awesome , thanks

Chris Fowles avatar
Chris Fowles

2020-08-11

Phuc avatar

Hi guys, I have a use case like this. I want to create a module that receives a custom variable of type “list”. EX:

module "example" {
  services = ["a", "b", "c"]
.
.
.
}

After that variable is input, a matching resource for each element in the list will be created

Ex: if the list contains [“a”, “b”] then only matching resources “a” and “b” are created. If the list is empty, then no resource is created; additionally the matching code will set a local value as I define

locals {
  # Pseudocode: I need to know if locals can loop over the services list,
  # pick up each element, and assign the matching value:

  a = ... # if "a" exists in the list then local.a = 1, otherwise a = 0
  b = ...
  c = ...
}

resource "aws_resource_type" "service_a" {
count = local.a 
.
.
.
}
resource "aws_resource_type" "service_b" {
count = local.b 
.
.
.
}

I know terraform is powerful but I need to ask you guys if that logic can be done. Thanks a lot
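A sketch of the conditional logic being described (the resource type is a placeholder, and var.services is assumed to be a list(string)):

```hcl
locals {
  # 1 if the service name was passed in, 0 otherwise
  create_a = contains(var.services, "a") ? 1 : 0
  create_b = contains(var.services, "b") ? 1 : 0
}

resource "aws_resource_type" "service_a" { # placeholder resource type
  count = local.create_a
  # ...
}

resource "aws_resource_type" "service_b" { # placeholder resource type
  count = local.create_b
  # ...
}
```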

Sean Turner avatar
Sean Turner
locals {
  services = ["a", "b", "c"]
}

resource "aws_instance" "this" {
  for_each = toset(local.services)
  ...
}

This will create aws_instance.this["a"] aws_instance.this["b"] aws_instance.this["c"]

Phuc avatar

thanks for your reply. unfortunately, it’s not the case I want. These aren’t replicated resources; each resource is different depending on the case, like IAM policies: each service will have its own. And the condition to trigger creation follows what I described.

Emmanuel Gelati avatar
Emmanuel Gelati

maybe you are using the wrong type of variable, try to use a hash not a list

David J. M. Karlsen avatar
David J. M. Karlsen
TF13: Need v0.17.0 of terraform-null-label by davidkarlsen · Pull Request #61 · cloudposse/terraform-aws-ecr

Signed-off-by: David Karlsen [email protected] what Upgrade terraform-null-label to TF 0.13 compat version why Support TF 0.13 references #51

1
Gowiem avatar
Gowiem

Checking it out now.

Gowiem avatar
Gowiem
Release 0.23.0: TF13: Need v0.17.0 of terraform-null-label (#61) · cloudposse/terraform-aws-ecr

what Upgrade terraform-null-label to TF 0.13 compat version why Support TF 0.13 references #51

David J. M. Karlsen avatar
David J. M. Karlsen

awesome - thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Gowiem

David J. M. Karlsen avatar
David J. M. Karlsen
Incompatible version of terraform-null-label with TF 0.12.x · Issue #6 · cloudposse/terraform-aws-iam-user

When using either master or the terraform-0.12 branches, Terraform v0.12.20 complains about the terraform-null-label module, as the present module references terraform-null-label module.git?ref=0.1…

David J. M. Karlsen avatar
David J. M. Karlsen

is this module still maintained?

Gowiem avatar
Gowiem

Ah yeah @… that module is a good bit outdated. There’s a 12.x branch, but no tests, which is why it isn’t merged. We should get that back into the fold, as I’ve definitely used the terraform-aws-modules equivalent in the past instead of this one. To provide consistency to the community and my own future tf codebases I’d like to fix that.

Seems like it’d be an easy module to add tests to, honestly. Unless you’re interested in tackling it, I can add that to my queue for when I have a spare hour or so.

David J. M. Karlsen avatar
David J. M. Karlsen

I switched to vanilla aws_* resources in the meantime, which seems to have me covered; I needed some tweaking anyway. Thanks for responding though!

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
06:45:20 PM

Terraform Cloud Outage Aug 11, 18:40 UTC Investigating - We are currently experiencing issues with Terraform Cloud. The UI is down and terraform runs and plans will not complete at this time. We are investigating the issue.

Terraform Cloud Outage

HashiCorp Services’s Status Page - Terraform Cloud Outage.

2
RB avatar

how many outages has it been this year?

RB avatar

6 outages since june 1

PePe avatar

I swear I’m not a contractor for hashicorp

1
PePe avatar

Hello, I’m using https://github.com/cloudposse/terraform-aws-dynamic-subnets and I set enabled = false and I’m getting this error

Error: Error in function call

  on .terraform/modules/haystack.dynamic_subnets/nat-gateway.tf line 42, in resource "aws_nat_gateway" "default":
  42:   subnet_id     = element(aws_subnet.public.*.id, count.index)
    |----------------
    | aws_subnet.public is empty tuple
    | count.index is 0

Call to function "element" failed: cannot use element function with an empty
list.

I think someone else had this issues before?

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

PePe avatar

I think I saw this in other modules and we wrapped element in coalescelist?

PePe avatar

this is not going to work

PePe avatar
resource "aws_nat_gateway" "default" {
  count         = local.nat_gateways_count
PePe avatar

even if the module is set to false it will try to create the resource

PePe avatar

PR coming

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:05:21 PM

Terraform Cloud Outage Aug 11, 18:48 UTC Monitoring - We have identified the issue and plans and applies are currently succeeding, the UI is back up. We are continuing to monitor.Aug 11, 18:40 UTC Investigating - We are currently experiencing issues with Terraform Cloud. The UI is down and terraform runs and plans will not complete at this time. We are investigating the issue.

1
HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:15:23 PM

Terraform Cloud Outage Aug 11, 19:05 UTC Resolved - Terraform Cloud is operational again. If a run failed during this outage, please re-queue it. If you have problems queueing runs, please reach out to support.Aug 11, 18:48 UTC Monitoring - We have identified the issue and plans and applies are currently succeeding, the UI is back up. We are continuing to monitor.Aug 11, 18:40 UTC Investigating - We are currently experiencing issues with Terraform Cloud. The UI is down and terraform runs and plans will not complete…

Rhawnk avatar
Rhawnk

Hi All, got sort of an opinion question here. Does anyone have a preference on using remote state for looking up resources, or using data sources to look them up at runtime? Been trying to research, and not having much luck finding which is recommended or preferred

Rhawnk avatar
Rhawnk

I assume it might be related to a size thing, that is, the size of the terraform repos or the amount of resources under management

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t think there’s a black/white rule on this. Off the cuff, here’s my recommendation:

• Use SSM for sharing values across toolchains (e.g. #terraform and helmfile) or where you need to access the values outside of terraform

• Use remote state between terraform projects. E.g. everything you provision that you are in control over.

• Use data sources for things which you might not have control over but depend on.
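The three bullets above might look something like this in HCL (all names, buckets, and tags here are illustrative, not from the thread):

```hcl
# 1. SSM: write a value so other toolchains (helmfile, scripts) can read it.
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/myproject/vpc_id"
  type  = "String"
  value = aws_vpc.main.id
}

# 2. Remote state: read outputs from another terraform project you control.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tfstate-bucket"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# 3. Data source: look up something you depend on but don't manage.
data "aws_vpc" "shared" {
  tags = {
    Name = "shared-vpc"
  }
}
```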

:--1:1
Rhawnk avatar
Rhawnk

Thanks @Erik Osterman (Cloud Posse), this pretty much aligns with at least my personal opinion. Though I never thought about the first bullet, that is a really cool idea

loren avatar
loren

a side benefit of using SSM is the permissions can be controlled with more granularity than tfstate, when needing to control access to specific values

:--1:1
PePe avatar

Hi, anyone have successfully used https://github.com/cloudposse/terraform-aws-ecs-alb-service-task with ECS+EC2 autoscaling and capacity providers?

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Rhawnk avatar
Rhawnk

I’m literally poc’ing them this week

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Rhawnk avatar
Rhawnk

I’ve had success with combo of cpu/mem/request count for the container metrics, and leveraging cps for the instance scaling

Rhawnk avatar
Rhawnk

Granted all against a crappy node/express app I mocked up with faker

Rhawnk avatar
Rhawnk

And I just crush it with artillery

PePe avatar

I just started playing with it, but it complains that I have no instances in my capacity group

Rhawnk avatar
Rhawnk

New or existing cluster?

Rhawnk avatar
Rhawnk

It doesn’t play well with an existing cluster, but after cycling instances in the asg, it worked

PePe avatar

I deleted the ASG and service, I will play with it a bit more

PePe avatar

So it looks like you can NOT create a capacity provider on a service

Rhawnk avatar
Rhawnk

oh, apologies, I misunderstood. No, the cp is on the cluster

PePe avatar

you need to create it and attach it to the cluster and then attach it to the ecs service

Rhawnk avatar
Rhawnk

that tracks the capacity cpu/mem of the instances in the asg
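A rough sketch of the wiring being described here (all names and the target_capacity value are illustrative): create the capacity provider from the ASG, attach it to the cluster, then reference it from the service.

```hcl
# Capacity provider backed by the EC2 Auto Scaling group.
resource "aws_ecs_capacity_provider" "this" {
  name = "example-cp"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs.arn

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 80
    }
  }
}

# Attach the capacity provider to the cluster.
resource "aws_ecs_cluster" "this" {
  name               = "example"
  capacity_providers = [aws_ecs_capacity_provider.this.name]
}

# Then reference it from the service via a strategy block.
resource "aws_ecs_service" "this" {
  name            = "example"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.this.arn

  capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.this.name
    weight            = 1
  }
}
```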

PePe avatar

no worries, the console showed some weird empty selector and then I was wondering

PePe avatar

so I just added it in TF to the cluster and it worked

Rhawnk avatar
Rhawnk

PePe avatar

funny that TF lets you do it and it shows as broken in the console

PePe avatar

well, the AWS API lets you do it

2020-08-10

Almog Cohen avatar
Almog Cohen

Hi there! I’m trying to use https://github.com/cloudposse/terraform-aws-cicd and the README example seems really outdated. So much that it becomes a pain to try and use the project out-of-the-box.

The example:

  1. says app but it should be elastic_beanstalk_application_name
  2. says env but it should be elastic_beanstalk_environment_name
  3. says aws_region but it should be region

Not sure if there are any other misses, but maybe it would be nice if someone who knows the project could revise this.
cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Joe Niland avatar
Joe Niland

Hi @ if you’re looking at it in detail with the intention of using it, it’s a good opportunity for a PR!

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Juan Soto avatar
Juan Soto

Hi guys, I am a newbie to terraform, I need your help. I have been asked to conduct an audit and tighten a considerable list of IAM Roles. The problem is as follows: the previous DevOps guys manually added the AmazonEC2RoleforSSM policy to the IAM Roles (the change is not written in TF). SOC auditors consider AmazonEC2RoleforSSM too wide open, and it needs to be replaced by the following managed policies instead: SSMMaintenanceWindowRole and SSMManagedInstanceCore. I don't want to do this task manually, I would like to automate it using terraform. Could you please give me some guidance on how to proceed? I already have the IAM roles list in locals { roles_list=list(role1,role2, role3) }, but I would like to:

1. Check if AmazonEC2RoleforSSM exists in that role.
2. If the policy exists, remove it and add SSMManagedInstanceCore and SSMMaintenanceWindowRole.
3. If AmazonEC2RoleforSSM doesn't exist, add SSMManagedInstanceCore and SSMMaintenanceWindowRole.

I am not quite sure how to deal with loops. Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If the infrastructure is not already in terraform, then using terraform to “patch” things in AWS won’t really work well. E.g. terraform cannot remove something it didn’t provision.

Juan Soto avatar
Juan Soto

Should I import the managed policy first in each IAM role?

pjaudiomv avatar
pjaudiomv

Yea you would have to first import what’s there into state using terraform import and then change it

pjaudiomv avatar
pjaudiomv

That process should be run carefully and you should run a plan until it’s a noop and then make your changes

Juan Soto avatar
Juan Soto

hmm… I will remove that policy manually and add the correct ones using TF
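The "add the correct ones" half of this can be looped with for_each. A sketch, assuming the AWS-managed policy ARNs below (the role names come from the question; the setproduct shape is just one way to attach every policy to every role):

```hcl
locals {
  roles_list = ["role1", "role2", "role3"]

  # AWS-managed policy ARNs (verify the exact names for your account/partition).
  policies = [
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
    "arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole",
  ]

  # One map entry per (role, policy) pair, keyed uniquely for for_each.
  role_policy_pairs = {
    for pair in setproduct(local.roles_list, local.policies) :
    "${pair[0]}:${pair[1]}" => { role = pair[0], policy_arn = pair[1] }
  }
}

resource "aws_iam_role_policy_attachment" "ssm" {
  for_each   = local.role_policy_pairs
  role       = each.value.role
  policy_arn = each.value.policy_arn
}
```

As noted above, removing the old AmazonEC2RoleforSSM attachment still has to happen outside terraform (or via import), since terraform won't remove what it didn't provision.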

:--1:1
pjaudiomv avatar
pjaudiomv

Yep that’s def the easier option

pjaudiomv avatar
pjaudiomv

cloudposse has a module for creating roles https://github.com/cloudposse/terraform-aws-iam-role

cloudposse/terraform-aws-iam-role

A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role

Juan Soto avatar
Juan Soto

thanks!

corcoran avatar
corcoran

Look at Cloudsplaining to help fix this.

corcoran avatar
corcoran
salesforce/cloudsplaining

Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report. - salesforce/cloudsplaining

1
Juan Soto avatar
Juan Soto

cool! didn’t know that!

:--1:1
corcoran avatar
corcoran

then either bridgecrew/airIAM or duo-labs/parliament to fix your IAM configs going forward

pjaudiomv avatar
pjaudiomv

cool, never heard of Parliament, I'm gonna check that out

pjaudiomv avatar
pjaudiomv

err the terraform aws docs seem to be down https://registry.terraform.io/providers/hashicorp/aws/latest/docs

1
1
RB avatar

this comes in handy https://kapeli.com/dash

Dash for macOS - API Documentation Browser, Snippet Manager - Kapeli

Dash is an API Documentation Browser and Code Snippet Manager. Dash searches offline documentation of 200+ APIs and stores snippets of code. You can also generate your own documentation sets.

RB avatar

terraform docset does exist even tho it’s not listed on their website

pjaudiomv avatar
pjaudiomv

nice

pjaudiomv avatar
pjaudiomv

well technically they aren’t down, just a lot harder to read

MattyB avatar
MattyB

that’s a paddlin

Release notes from terraform avatar
Release notes from terraform
06:14:24 PM

v0.13.0 0.13.0 (August 10, 2020)

This is a list of changes relative to Terraform v0.12.29. To see the incremental changelogs for the v0.13.0 prereleases, see the v0.13.0-rc1 changelog.

This section contains details about various changes in the v0.13 major release. If you are upgrading from Terraform v0.12, we recommend first referring to the upgrade guide at https://www.terraform.io/upgrade-guides/0-13.html…

hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…

:--1:1
party_parrot2
pjaudiomv avatar
pjaudiomv

think I'll give that a few weeks; it's kind of hard to roll back, as terraform upgrades pollute your remote state

Chris Wahl avatar
Chris Wahl

I feel that. I’ve been using the beta / rc versions on a few projects pointed at test regions that deploy in parallel with production. Haven’t hit any stumbling blocks yet, but bake time is always good.

:--1:2
Sean Turner avatar
Sean Turner

I gave it a try and was getting some really weird errors and rolled back haha

2020-08-09

2020-08-08

Eric Berg avatar
Eric Berg

I am using the kubernetes provider’s kubernetes_namespace resource, successfully, but I’m trying to get the LB URL, which the docs for kubernetes_namespace say should output self_url, but I get this error, when I try to use it:

Error: Unsupported attribute

  on ../../../../../application-stack/helm.tf line 19, in output "k-ns-url2":
  19:   value = kubernetes_namespace.borrower[0].self_url

This object has no argument, nested block, or exported attribute named
"self_url".

Instead, we’re using a data.kubernetes_service to pull out the elb name, to use that to create R53 CNAMEs. This results in never having a clean plan, because data.kubernetes_service.nginx-ingress has to be read and resource.aws_lb_ssl_negotiation_policy has a dependency to it for the id.

How do i get the ELB from helm.helm_release for nginx-ingress, maybe?

Thanks for any ideas.

loren avatar
loren

I see self_link in the docs, not self_url?

Eric Berg avatar
Eric Berg

Sorry if I wasn’t clear. Neither self_url nor self_link works. PyCharm doesn’t see any of the documented attributes for auto-completion, either. It feels like I’m using a different kubernetes_namespace, but I don’t think that’s even possible. My kubernetes provider is pinned to “~> 1.12”.

loren avatar
loren

if the attribute isn’t there, all I can think of is that it is null due to a count/for_each issue… the error there looks to be from an output, and outputs do not support count/for_each, so it is not recommended to use indexing of an “optional” resource… instead use the old “join” trick

output "ns_self_link" {
  value = join("", kubernetes_namespace.borrower.*.self_link)
}
Eric Berg avatar
Eric Berg

I created an output of kubernetes_namespace.datadog and got this, which shows that self_link is an attribute of the metadata attribute, not a direct attribute of the namespace:

test = {
  "id" = "datadog"
  "metadata" = [
    {
      "annotations" = {}
      "generate_name" = ""
      "generation" = 0
      "labels" = {}
      "name" = "datadog"
      "resource_version" = "637"
      "self_link" = "/api/v1/namespaces/datadog"
      "uid" = "1c61de6b-9dc3-4ee7-9b71-b85a78a54655"
    },
  ]
}

In any case, self_link doesn’t give me what I wanted.

The question remains: how do I get the URL of the ELB of the EKS cluster's nginx-ingress?

loren avatar
loren

shoot, was hoping by resolving the tf issue that would get you there. i don’t know EKS well enough to help with that part… maybe ask in #kubernetes ?

Eric Berg avatar
Eric Berg

I don’t think there’s anyplace to get that URL, other than using a data block, like this:

data "kubernetes_service" "nginx_ingress" {...

and then referencing it like this:

data "aws_elb" "nginx-ingress" {
  name = split("-", data.kubernetes_service.nginx_ingress.load_balancer_ingress[0].hostname)[0]
}
Eric Berg avatar
Eric Berg

The only problem with this approach (the way we've been doing it) is that, as I recall someone else also reported, you get unclean plans because TF reads some data blocks even though nothing has actually changed.

So the goal is pivoting from finding another way to get the ELB for nginx-ingress running on an EKS cluster, to finding out how to do this so that you get clean plans, rather than the following:

Eric Berg avatar
Eric Berg
  # module.stack_install.data.aws_elb.nginx-ingress will be read during apply
  # (config refers to values not yet known)
 <= data "aws_elb" "nginx-ingress"  {
      + access_logs                 = (known after apply)
      + arn                         = (known after apply)
      + availability_zones          = (known after apply)
      + connection_draining         = (known after apply)
      + connection_draining_timeout = (known after apply)
      + cross_zone_load_balancing   = (known after apply)
      + dns_name                    = (known after apply)
      + health_check                = (known after apply)
      + id                          = (known after apply)
      + idle_timeout                = (known after apply)
      + instances                   = (known after apply)
      + internal                    = (known after apply)
      + listener                    = (known after apply)
      + name                        = (known after apply)
      + security_groups             = (known after apply)
      + source_security_group       = (known after apply)
      + source_security_group_id    = (known after apply)
      + subnets                     = (known after apply)
      + tags                        = (known after apply)
      + zone_id                     = (known after apply)
    }

  # module.stack_install.data.kubernetes_service.nginx_ingress will be read during apply
  # (config refers to values not yet known)
 <= data "kubernetes_service" "nginx_ingress"  {
      + id                    = (known after apply)
      + load_balancer_ingress = (known after apply)
      + spec                  = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "nginx-ingress-singleton-controller"
          + namespace        = "nginx-ingress"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.stack_install.aws_lb_ssl_negotiation_policy.external_tls must be replaced
-/+ resource "aws_lb_ssl_negotiation_policy" "external_tls" {
      ~ id            = "abcdabcd...:443:external-tls" -> (known after apply)
        lb_port       = 443
      ~ load_balancer = "abcdabcd..." -> (known after apply) # forces replacement
        name          = "external-tls"

....

  # module.stack_install.aws_route53_record.borrower_api_a_record will be updated in-place
  ~ resource "aws_route53_record" "borrower_api_a_record" {
        fqdn    = "borrower-api-demo-brace.brace.ai"
        id      = "Z1WJ47LG7V031G_borrower-api-demo-brace.brace.ai_A"
        name    = "borrower-api-demo-brace.brace.ai"
        records = []
        ttl     = 0
        type    = "A"
        zone_id = "Z1WJ47LG7V031G"

      - alias {
          - evaluate_target_health = false -> null
          - name                   = "ab73319dbbd184637aed2ae9b56b85a6-1595539325.us-east-2.elb.amazonaws.com" -> null
          - zone_id                = "Z3AADJGX6KTTL2" -> null
        }
      + alias {
          + evaluate_target_health = false
          + name                   = (known after apply)
          + zone_id                = (known after apply)
        }
    }

  # module.stack_install.aws_route53_record.servicer_api_a_record will be updated in-place
  ~ resource "aws_route53_record" "servicer_api_a_record" {
        fqdn    = "servicer-api-demo-brace.brace.ai"
        id      = "Z1WJ47LG7V031G_servicer-api-demo-brace.brace.ai_A"
        name    = "servicer-api-demo-brace.brace.ai"
        records = []
        ttl     = 0
        type    = "A"
        zone_id = "XYZ"

      - alias {
          - evaluate_target_health = false -> null
          - name                   = "ab73319dbbd184637aed2ae9b56b85a6-1595539325.us-east-2.elb.amazonaws.com" -> null
          - zone_id                = "Z3AADJGX6KTTL2" -> null
        }
      + alias {
          + evaluate_target_health = false
          + name                   = (known after apply)
          + zone_id                = (known after apply)
        }
    }
Eric Berg avatar
Eric Berg

Understanding that it has to run before it gets the info for the load balancer, how do I set this up to get a clean plan? Every time, it recreates the aws_route53 records for those API servers and the aws_lb_ssl_negotiation_policy. I need this to produce a clean plan.

loren avatar
loren

I presume the ELB is created from a different terraform config? Can you just feed in the ELB name as a variable, instead of a data source?
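That suggestion might look like the following sketch (variable and data source names are illustrative): take the ELB name as an input so nothing has to be discovered at apply time.

```hcl
# The ELB name is supplied by whoever created it (or by the other
# terraform config's outputs), so the lookup is known at plan time.
variable "nginx_ingress_elb_name" {
  type        = string
  description = "Name of the ELB fronting the nginx-ingress service"
}

data "aws_elb" "nginx_ingress" {
  name = var.nginx_ingress_elb_name
}

resource "aws_route53_record" "api" {
  zone_id = var.zone_id
  name    = "api.example.com"
  type    = "A"

  alias {
    name                   = data.aws_elb.nginx_ingress.dns_name
    zone_id                = data.aws_elb.nginx_ingress.zone_id
    evaluate_target_health = false
  }
}
```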

2020-08-07

Craig Dunford avatar
Craig Dunford

Anyone have experience implementing Azure resource locks in tandem with terraform (i.e. https://www.terraform.io/docs/providers/azurerm/r/management_lock.html)? Specifically I am wondering how folks deal with cases where terraform wants to recreate resources.

Azure Resource Manager: azurerm_management_lock - Terraform by HashiCorp

Manages a Management Lock which is scoped to a Subscription, Resource Group or Resource.

Jaeson avatar
Jaeson

Is anyone using TF to build auto-scaling groups with capacity-optimized spot instances? I’m not finding a good resource that demonstrates how this can be done.
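For a plain EC2 Auto Scaling group (as opposed to the EMR answer that follows), one way to sketch this is a mixed_instances_policy with the capacity-optimized spot allocation strategy. All names, sizes, and instance types below are illustrative:

```hcl
resource "aws_autoscaling_group" "spot" {
  name                = "example-spot-asg"
  min_size            = 1
  max_size            = 10
  vpc_zone_identifier = var.subnet_ids

  mixed_instances_policy {
    instances_distribution {
      # 100% spot, allocated into the deepest capacity pools.
      on_demand_percentage_above_base_capacity = 0
      spot_allocation_strategy                 = "capacity-optimized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.example.id
        version            = "$Latest"
      }

      # Offer several instance types so AWS can pick the pool with
      # the most spare capacity.
      override { instance_type = "m5.large" }
      override { instance_type = "m5a.large" }
      override { instance_type = "m4.large" }
    }
  }
}
```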

rajeshb avatar
rajeshb

i hope this helps

resource "aws_emr_instance_group" "task" {
  cluster_id     = aws_emr_cluster.this.id
  instance_count = var.task_instance_count
  bid_price      = var.bid_price
  instance_type  = var.task_instance_type
  name           = "${var.cluster_name}-task-grp"

  ebs_config {
    size                 = var.core_volume_size
    type                 = var.core_volume_type
    volumes_per_instance = var.volumes_per_instance
  }

  autoscaling_policy = data.template_file.task_autoscaling_policy.rendered

}

data "template_file" "task_autoscaling_policy" {
  template = file("${path.module}/templates/autoscaling_policy.json.tpl")

  vars = {
    min_capacity = var.task_instance_count_min
    max_capacity = var.task_instance_count_max
  }
}
rajeshb avatar
rajeshb

done it for emr

Jaeson avatar
Jaeson

I’ll check it out. Thanks!

rajeshb avatar
rajeshb

Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: RequestError: send request failed caused by: Post https://sts.amazonaws.com/: dial tcp: lookup sts.amazonaws.com on 10.x.x.x:53: write udp 172.31.176.x:54764->10.x.x.x:53: write: no buffer space available

Does anyone know the cause please?

RB avatar

maybe you ran out of space on your laptop?

rajeshb avatar
rajeshb

Yes, an unknown issue that got sorted with a system restart, but not out of space. thanks :–1:

RB avatar

what terraform version are you using? according to this it looks like it might be fixed with the latest 0.12

https://github.com/terraform-providers/terraform-provider-aws/issues/4709#issuecomment-453554068

Intermittent error using s3 state · Issue #4709 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a :+1: reaction to the original issue to help the community and maintainers prioritize this request. Please do not leave "+1" or "me to…

rajeshb avatar
rajeshb
% terraform version
Terraform v0.12.26

Thanks for Info on the BUG :–1: RB

np1
Jaeson avatar
Jaeson

Using TF 12, I’d like to define a block for tags that I reuse everywhere, but I’m having a little trouble figuring out how to do this in a way that doesn’t feel hacky.

RB avatar
locals {
  tags = {
    application = "banana"
  }
}

resource "aws_iam_role" "bananas" {
  ...
  tags = local.tags
}
Jaeson avatar
Jaeson

Thanks!

np1
Eric Berg avatar
Eric Berg

I’ve got a ticket to update my tags to allow for both a standard set of tags as well as resource-specific set. Goes something like this, using @RB’s local.tags, above:

    tags = merge(local.tags,
                 {"Name" = "Resource Name"}
                )
:--1:1
Jaeson avatar
Jaeson

Has anyone seen this kind of thing before?
on main.tf line 168, in resource “aws_security_group” “lb_adv2_sg”:
168: tags = locals.tags

A managed resource “locals” “tags” has not been declared in the root module.

locals {
  tags = {
    env         = var.environment
    owner       = "DevOps"
    product     = "adv2.0"
    managed_by  = "terraform"
  }
}

is defined above (before) the aws_security_group resource in the same file.

Jaeson avatar
Jaeson

This is when running terraform validate for TF 12. … Ah. I think I added an ‘s’ to local.

:--1:1
RB avatar

why did they even do that

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On a recent #office-hours we talked about opsgenie automation. Today we released :tada: the first version of our module to manage it with terraform; we currently use it to manage most of our opsgenie setup. https://github.com/cloudposse/terraform-opsgenie-incident-management

raghu avatar
raghu

Is there a way we can destroy a specific resource from a workspace in terraform enterprise? When I comment out the block of code, it says the block of code is missing.

Tom Howarth avatar
Tom Howarth

You will need to use either terraform taint, pointing it at the requisite resource (this will mark it for replacement), or terraform destroy, again targeting the relevant resource. Use terraform state list to find the correct name of the resource in your state file for deletion or replacement.

:--1:1

2020-08-06

rajeshb avatar
rajeshb

Does anyone manage to view terraform docs for AWS PROVIDER – staying blank forever https://registry.terraform.io/providers/hashicorp/aws/latest/docs

RB avatar

loads for me

Tom Howarth avatar
Tom Howarth

Loads perfectly for me; try clearing your cookies, as there could be a stale one

rajeshb avatar
rajeshb
10:48:38 AM

Thanks Tom & RB, I still cannot figure out the issue. I have tried clearing cookies and tried in private browsers, still no use, and weirdly I can't even access it using my mobile network. I got this message in Chrome developer tools:

Failed to find a valid digest in the 'integrity' attribute for resource '<https://registry.terraform.io/assets/terraform-registry-3c3897b8880537ab9759d2e91a1a39c5.js>' with computed SHA-256 integrity 'NL3YiUcpnfSMYU99vstLHDhVzYi63JZfABW7NIQVZmQ='. The resource has been blocked.

let me know if I missed anything

RB avatar

• try firefox or another browser

RB avatar

• create a new profile in chrome and try that

RB avatar

if you’re on a vpn, try toggling it

rajeshb avatar
rajeshb

I am not on a VPN, will try Firefox.

rajeshb avatar
rajeshb

It's still the same.

RB avatar

can you try a vpn if you have one? if not, you can use free protonvpn. see if that works.

brew cask install protonvpn
RB avatar

if that works, then I’d run a traceroute to the link on and off the vpn and diff them side by side

RB avatar

see where the connection is failing. it’s possible that an ISP has blocked your IP address.

rajeshb avatar
rajeshb

will try on a VPN, and about the IP address block, I wonder how they managed to even block my mobile network.

rajeshb avatar
rajeshb

I can access it from Safari now!

:--1:2
cabrinha avatar
cabrinha

hey all, just a general Terraform question: I have an EC2 instance that I need to add some userdata to. I want to put a file on the node and run a command. The file is a CA cert:

Error: Invalid expression

  on main.tf line 35, in module "eks":
  35:     pre_userdata = << EOF

Expected the start of an expression, but found an invalid expression token.
loren avatar
loren

<<EOF? I've only seen it with no space

loren avatar
loren

or <<-EOF to allow indentation of the heredoc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, I recommend avoiding all forms of HEREDOC and sticking it in a file and reading that file in.

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. I'd rather edit a shell script that I can run locally during development than edit a shell script inside of terraform that I have to deploy with every change.

loren avatar
loren

definitely find it easier to manage userdata as a file, though if vars need to be passed from terraform, then you’re probably using templatefile() and running that locally probably won’t work anyway

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, possibly

loren avatar
loren

but still, yeah, set the template values at the top of script, then at least can swap them out easily-ish
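Putting the thread's advice together, the file-based approach might look like this sketch (the module argument, template path, and variable names are illustrative):

```hcl
# Keep the userdata script in its own file and render terraform values
# into it with the built-in templatefile() function (TF >= 0.12).
module "eks" {
  # ...other module arguments...

  pre_userdata = templatefile("${path.module}/templates/pre_userdata.sh.tpl", {
    ca_cert = var.ca_cert
  })
}
```

The template itself is a plain shell script with `${ca_cert}` placeholders, which you can lint and mostly test outside of terraform.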

sheldonh avatar
sheldonh

I am using terraform to create and manage most of my teams repos.

• I want to configure all the managed repos to have a hook integration with Microsoft Teams so Pull notifications come through, anyone done this?

• The branch protection policy is empty. I want to set it to enforce the branch protection policy IF anything is placed in the branch hooks. Is that possible, or is this just something I'm going to have to do manually after creation of the repo? PR follow-ups are a pain to configure in Teams with GitHub, so I was hoping for a simple way to integrate

2020-08-05

ayr-ton avatar
ayr-ton

Hey, how are you doing? I’m using cloudposse/eks-cluster/aws version 0.24.0 and I’m always experiencing the issue:

Error: configmaps "aws-auth" already exists

  on .terraform/modules/eks_cluster/terraform-aws-eks-cluster-0.24.0/auth.tf line 84, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
  84: resource "kubernetes_config_map" "aws_auth_ignore_changes" {

Has anyone already experienced this config map issue?

ayr-ton avatar
ayr-ton

I have manually solved with:

terragrunt import --terragrunt-iam-role "arn:..." module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0] kube-system/aws-auth

Then running terragrunt apply again o/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This should not happen on a clean EKS deployment (from scratch). If you’re upgrading an existing EKS cluster from an older version of the module, I would expect that to happen.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our automated tests run on this module all the time and do not encounter this error.

ayr-ton avatar
ayr-ton

Will let you know on the case this happens again. But basically it was a clean EKS deployment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) any thoughts on what’s going on?

ayr-ton avatar
ayr-ton

will deploy this module again in a few to see if will explode

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes I know, 1 sec

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

ayr-ton avatar
ayr-ton

ahh, that’s interesting because I did this yesterday when I was removing the fargate profile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that will eliminate a race condition

ayr-ton avatar
ayr-ton

and then I didn’t see this error

ayr-ton avatar
ayr-ton

We’re in sync, nice.

ayr-ton avatar
ayr-ton

Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-fargate-profile

Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile

:--1:1
JMC avatar

I’m running EKS 1.17 right now, and when I use terraform destroy to get rid of a kubernetes_deployment, the replicaset is not deleted in cascade and the pods stay. Anyone else experienced this?

corcoran avatar
corcoran

Sounds like a conversation I was having this morning with a pal of mine - he was talking about using null_resources with triggers to handle that, maybe? https://www.terraform.io/docs/provisioners/null_resource.html

Provisioners Without a Resource - Terraform by HashiCorp

The null_resource resource allows you to configure provisioners that are not directly associated with a single existing resource.

JMC avatar

But it used to work before?

JMC avatar

Or at least the leftover replicaset was downscaled to zero

corcoran avatar
corcoran

Ah - probably not then!

JMC avatar

Now the containers stay up

JMC avatar
Terraform delete succesfully the kubernetes_deployment resource, but the resource is still alive in Kubernetes cluster · Issue #944 · hashicorp/terraform-provider-kubernetes

Hello guys. I updated to EKS 1.17, so I think it's pretty much related to that, but it seems terraform is not able to delete my deployments anymore. When I terraform destroy a kubernetes_deploy…

Eric Berg avatar
Eric Berg

Over the past few weeks, I've had to describe pods or namespaces, update the JSON to remove the finalizers, and push that change back. Not sure if a) this is asking for trouble at some point, or b) there's a better way… or if it's directly related to your issue, but it's helped me out quite a few times recently.

paultath81 avatar
paultath81

Is there a tf module for aws workspaces in the cloudposse repo? I was not able to find any examples

pjaudiomv avatar
pjaudiomv

There are still some features lacking in the terraform AWS provider to make a full IaC module, but you could certainly handle the directory and the creation of workspaces from a list of users

pjaudiomv avatar
pjaudiomv

The module could handle creation of vpc, subnets etc, workspace directory and users

paultath81 avatar
paultath81

I see thx for the info.

pjaudiomv avatar
pjaudiomv

Yea there’s some issue tickets in the aws provider tracking new and upcoming workspace features

pjaudiomv avatar
pjaudiomv

I think there's a lack of AWS API calls too, so without those there's nothing the provider can do

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, this was the state of it last year when we looked into it. The next best thing we found was this: https://github.com/eeg3/workspaces-portal

eeg3/workspaces-portal

Amazon WorkSpaces Self-Service Portal. Contribute to eeg3/workspaces-portal development by creating an account on GitHub.

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(CFT)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

….that could be deployed with terraform

1
1
pjaudiomv avatar
pjaudiomv

No apis for creating bundles

2020-08-04

contact871 avatar
contact871

Does anyone know what properties I can use in aws_cloudwatch_event_target.input when used with ECS target? I’m trying to get "PropagateTags": "TASK_DEFINITION" to work. The example shows only the usage of containerOverrides .

contact871 avatar
contact871
[Fargate, ECS] [Tagging]: Support tagging when starting a task from CWE · Issue #89 · aws/containers-roadmap

Tell us about your request Support for tagging a task started through CloudWatch Events. Which service(s) is this request for? Fargate, ECS Tell us about the problem you're trying to solve. Wha…

Briet Sparks avatar
Briet Sparks

Hi, facing an issue with dynamic values in CodeDeploy appspec.

With CodeBuild, you can set environment variables from Terraform:

resource "aws_codebuild_project" "myapp" {
  // ...
  environment_variable {
    name = "MY_VARIABLE"
    value = var.my_variable
  }
  // ...
}

Which you can conveniently reference in a buildspec.yml

phases:
  commands: 
    - echo $MY_VARIABLE

However with CodeDeploy, you can’t set environment variables from Terraform, or at least I have not found such an argument in the [codedeploy_app](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codedeploy_app), [codedeploy_deployment_config](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codedeploy_deployment_config), and [codedeploy_deployment_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codedeploy_deployment_group) resources.

So, I’m having to manually sync values between my TF file and appspec.yml. For example, I’m using ECS, so appspec.yml wants:

• TaskDefinition*

• ContainerName*

• Port*

• PlatformVersion

• NetworkConfiguration: { Subnets, SecurityGroups, AssignPublicIp }

These are either TF variables or generated at TF runtime. It would be great if I could inject them into the appspec / CodeDeploy runtime environment and reference them as variables, but again it doesn’t seem possible. What might be a good workaround?
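One possible workaround (not suggested in the thread) is to render the appspec from Terraform itself with templatefile(), so the values stay in one place; the template file name, resource names, and variables here are all hypothetical:

```hcl
# appspec.yml.tpl would contain placeholders such as
# ${task_definition_arn}, ${container_name}, and ${port}.
resource "local_file" "appspec" {
  filename = "${path.module}/appspec.yml"
  content = templatefile("${path.module}/appspec.yml.tpl", {
    task_definition_arn = aws_ecs_task_definition.app.arn
    container_name      = var.container_name
    port                = var.container_port
  })
}
```

The rendered appspec.yml then gets committed or bundled into the deployment artifact, so Terraform remains the single source of truth for those values.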

Vincent Fiset avatar
Vincent Fiset

Hi! I am using terraform .12 with workspaces and modules and everything WAS fine. I had prod and qa as workspaces mainly for “namespacing” in the state and to use as a variable around the code. Everything WAS fine because I had the exact same components with light variations that could easily be configured in their respective .tfvar files.

Now I need to add a new prod environment in Europe and it won’t have all the things deployed over there, so my setup is kinda screwed

I was thinking moving to a structure such as:

.
├── environments
│   ├── prod
│   │   ├── america
│   │   │   ├── main.tf
│   │   │   └── prod-america.tfvars
│   │   └── europe
│   │       ├── main.tf
│   │       └── prod-europe.tfvars
│   ├── qa
│   │   ├── main.tf
│   │   └── qa.tfvars
│   └── staging
│       ├── america
│       │   ├── main.tf
│       │   └── staging-america.tfvars
│       └── europe
│           ├── main.tf
│           └── staging-europe.tfvars
└── modules
    ├── dns
    │   └── main.tf
    ├── gcp
    │   ├── gke.tf
    │   ├── main.tf
    │   └── network.tf
    └── releases
        ├── product-a-infra
        │   └── main.tf
        ├── product-b-infra
        │   └── main.tf
        └── shared-infra
            └── main.tf

This would allow me to reference the modules I want in my various environments’ main.tf

What would you guys do in this case ?

Also, I’m not sure the workspaces still make sense, since each environment’s main.tf will need to define its own terraform config block and thus will have its own separate tfstate

loren avatar
loren

i’ve often seen folks use the exact <region> for the directory name, as it’s more of a unique key in the hierarchy… e.g. eu-central-1, because there are multiple aws regions in europe. of course, you can also get around that with multiple providers in a main.tf
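A minimal sketch of the multi-provider approach loren mentions, using provider aliases to target more than one region from a single main.tf (the alias, regions, and module name are illustrative):

```hcl
# Default provider for the primary region.
provider "aws" {
  region = "us-east-1"
}

# Aliased provider for the European region.
provider "aws" {
  alias  = "eu"
  region = "eu-central-1"
}

# Pass the aliased provider into a module for the European copy of the stack.
module "app_eu" {
  source = "../modules/app"

  providers = {
    aws = aws.eu
  }
}
```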

Vincent Fiset avatar
Vincent Fiset

yep ok, makes sense, thanks

Alan Kis avatar
Alan Kis

If it helps, I would also agree with @loren

jason einon avatar
jason einon

im currently creating a similar project.. however gone down the terragrunt route.. this is how my repo is current structured:

├── non-prod
│   ├── account.hcl
│   └── amer
│       └── us-west1
│           ├── mgmt
│           │   ├── env.hcl
│           │   ├── networking
│           │   │   ├── firewall_rules
│           │   │   │   └── ingress
│           │   │   │       └── terragrunt.hcl
│           │   │   ├── subnetworks
│           │   │   │   ├── subnet_data
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   ├── subnet_dev
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   ├── subnet_internaldmz
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   ├── subnet_services
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   └── subnet_stage
│           │   │   │       └── terragrunt.hcl
│           │   │   └── vpc
│           │   │       └── terragrunt.hcl
│           │   └── security
│           └── region.hcl
└── terragrunt.hcl
jason einon avatar
jason einon

because of the way vars are passed in, if there is a requirement to go into other regions it will simply be a case of copying the directory and updating a few vars.. as it follows DRY code practices it helps reduce repeated code.

jason einon avatar
jason einon

this then pulls in the required modules

jason einon avatar
jason einon

this is obviously at the start of the project… 2 days into the first sprint so will grow out over the next week or so with additional resources…

Alan Kis avatar
Alan Kis

Terraform gurus

Using locals and built-in string functions in Terraform, is it possible to change or alter the label name, not the value?

task_logging = [
    for k, v in var.task_logging_options : {
      name = trimprefix(k,"TASK_LOGGING_")
      value = v
    }
  ]

So I can do the following in the module instantiation:

task_logging_options = {
    TASK_LOGGING_Name = "es"
    // TASK_LOGGING_Host 
  }

Which is backed by task definition in the module:

task_logging_options = {
    TASK_LOGGING_Name = "es"
    // TASK_LOGGING_Host 
  }

So basically, strip the prefix from each argument to build a logging options object to pass down to the ECS task?
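If the goal is a map with the prefix stripped from the keys themselves (rather than a list of name/value objects), a for-expression over the map keys might look like this sketch:

```hcl
locals {
  # { TASK_LOGGING_Name = "es" }  ->  { Name = "es" }
  logging_options = {
    for k, v in var.task_logging_options :
    trimprefix(k, "TASK_LOGGING_") => v
  }
}
```

The stripped-key map in local.logging_options can then be passed straight into the ECS task definition’s logging options.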

Brij S avatar
Brij S

has anyone created an EKS cluster using Terraform Cloud. How does one retrieve the kubeconfig file?
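Terraform Cloud runs remotely, so there is no local file to pick up; one approach (resource names here are assumptions) is to output the cluster coordinates and build the kubeconfig on the workstation:

```hcl
output "cluster_endpoint" {
  value = aws_eks_cluster.this.endpoint
}

output "cluster_ca" {
  value = aws_eks_cluster.this.certificate_authority[0].data
}

# Alternatively, skip outputs entirely and generate the kubeconfig locally:
#   aws eks update-kubeconfig --name <cluster-name>
```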

Paul Nicholson avatar
Paul Nicholson

Hey folks, has anyone come across a tool like scenery that supports terraform 0.12+? I’m more interested in cleaning up the output to obtain just the diff..

Chris Fowles avatar
Chris Fowles

| grep '#'

pjaudiomv avatar
pjaudiomv
lifeomic/terraform-plan-parser

Command line utility and JavaScript API for parsing stdout from “terraform plan” and converting it to JSON. - lifeomic/terraform-plan-parser

Chris Fowles avatar
Chris Fowles

terraform 0.12 can output plans to json natively

pjaudiomv avatar
pjaudiomv

Oh yea, disregard outdated npm

pjaudiomv avatar
pjaudiomv

terraform show -json <PLAN FILE>

2020-08-03

bricezakra avatar
bricezakra

Hello everyone, I am working on a multi availability zones terraform template on aws. I am fairly new to terraform. Can anyone help me with it please? Any advice or sample template to start with my project?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what AWS resources are you terraforming?

bricezakra avatar
bricezakra

Hello @Andriy Knysh (Cloud Posse), Sorry for the late reply! EC2 Instances resource

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

take a look at these modules, they’re all related to EC2 https://github.com/cloudposse?q=ec2&type=&language=

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each module has a working example in the examples/complete folder https://github.com/cloudposse/terraform-aws-ec2-instance/tree/master/examples/complete

cloudposse/terraform-aws-ec2-instance

Terraform Module for providing a general EC2 instance provisioned by Ansible - cloudposse/terraform-aws-ec2-instance

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-instance-group

Terraform Module for provisioning multiple general purpose EC2 hosts for stateful applications. - cloudposse/terraform-aws-ec2-instance-group

bricezakra avatar
bricezakra

Thanks @Andriy Knysh (Cloud Posse) :+1: Very helpful!

Gowiem avatar
Gowiem

Anyone using AWS SSO and configuring it via Terraform? I don’t believe there are resources to do so from a quick google search, but just want to confirm. Really dig AWS SSO, but suggesting it to a client without having Terraform support is making me hesitant as I’m trying to get everything for them onto IaC.

loren avatar
loren

i don’t think there is yet much in the way of an api for aws sso, for terraform to interact with… https://docs.aws.amazon.com/singlesignon/latest/PortalAPIReference/API_GetRoleCredentials.html

GetRoleCredentials - AWS Single Sign-On

Returns the STS short-term credentials for a given role name that is assigned to the user.

loren avatar
loren

this issue appears to discuss the same problem… https://github.com/terraform-providers/terraform-provider-aws/issues/13755

AWS Single Sign-On Resource · Issue #13755 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a :+1: reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…

loren avatar
loren

i had tried to use aws sso previously, but found permission sets far too limiting, compared to managing policies on roles directly… posted a couple times to the aws sso forum, but haven’t come up with a viable approach yet…

https://forums.aws.amazon.com/thread.jspa?threadID=312303&tstart=0

https://forums.aws.amazon.com/thread.jspa?threadID=282793&tstart=0

Gowiem avatar
Gowiem

@loren Awesome — you confirmed my findings / fear. Thanks man!

RB avatar
Just released 2 container modules ([datadog](https://registry.terraform.io/modules/AdRoll/datadog/container/0.1.0) and [fluentbit](https://registry.terraform.io/modules/AdRoll/fluentbit/container/0.1.0)) on the tf registry to make fargate and datadog integration easier.

2020-08-02

kskewes avatar
kskewes

Hey everyone, have been using this module: https://github.com/cloudposse/terraform-aws-rds-cluster - and didn’t get around to sorting out our upgrade strategy :face_palm: Now looking to update our minor version via engine_version variable. Changing this and doing a plan shows:

  1. rds_cluster to be updated in place.
  2. rds_cluster_instance to be recreated << a downtime event I guess (RDS instances can take many minutes to be created).

Thoughts and questions:

  1. apply_immediately variable is set to true (default). We could change that.
  2. auto_minor_version_upgrade for instances defaults to true; we would want to change that (add it to the module) if managing the version in TF.
  3. I guess ZDP (Zero Downtime Patching) isn’t available via Terraform? Or might it happen if we disable apply-immediately and let it update during the maintenance window?
  4. We could use -target=cluster|instance[0-2] to limit the actions a plan does, but this means babysitting the upgrade and could result in downtime if we did an apply without the -target flag.
  5. How are others doing this?
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

kskewes avatar
kskewes

Our plan is:

  1. version in tf - no surprises
  2. disable auto updates - no surprises - PR added
  3. apply immediately false - do when ready via maintenance window/console/target
PePe avatar

we do this manually and the we update terraform so it matches the version

PePe avatar

I have not found a clean way to do this without taking the cluster down or getting a timeout

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

released #74 as 0.29.0

kskewes avatar
kskewes

Thanks PePe. Thanks Erik for the quick turnaround!

kskewes avatar
kskewes

Indeed. So my latest attempt following this: https://github.com/terraform-providers/terraform-provider-aws/issues/9401#issuecomment-551350474

With this change to instance in module:


# engine_version = "" # commented out; leave versioning in the cluster only

lifecycle {
  create_before_destroy = true
  ignore_changes        = [engine_version]
}

Then version bump in TF then terraform apply with:

  1. apply_immediately = false << resulted in no change, not even pending maintenance in the Console
  2. apply_immediately = true (cluster only, not instance) << resulted in no change I think?
  3. apply_immediately = true (cluster & instances) << See below…

Test results:

  1. upgrade to cluster happening straight away in place
  2. upgrade to instances happening straight away in place without creating new instances
  3. ~23m taken to complete according to Terraform
  4. ~10s shutdown and restart events in the Console << presumably a hard outage
  5. No failover events in Console - total cluster outage?
  6. versioning in Terraform matching AWS Console

To be confirmed:

  1. actual app downtime during this - reboots are at unknown/unscheduled time, didn’t have any apps connected to RDS.
  2. whether this is still worth doing rather than just updating manually in Console or splitting TF module
Destroy/recreate DB instance on minor version update rather than updating · Issue #9401 · terraform-providers/terraform-provider-aws

Terraform Version Terraform v0.12.3 provider.aws v2.16.0 provider.template v2.1.2 Affected Resource(s) aws_rds_cluster aws_rds_cluster_instance Terraform Configuration Files resource &quot;aws_rds_…

kskewes avatar
kskewes

Overall I think this might be the sanest way to do in-place upgrades with the module as is.

Curious whether we:

  1. We PR this, or
  2. Give up, split out of using the module, and try upgrading instances one at a time. We already need to blue/green from Aurora MySQL 5.6 to 5.7 as that can’t be done in place. However I’d like to have a plan for future version bumps.
kskewes avatar
kskewes

Hey team, we’ve reached a consensus that internally we are going to continue with the RDS module modified per above so we can do in-place minor version upgrades of our Aurora MySQL clusters.

Question I have is whether this makes sense PR’d back to the upstream module or not.

• Because regular RDS and Postgres support major version upgrades and it’s only Aurora MySQL that (currently) doesn’t, the changes may not make sense.

• I don’t have capacity to test such changes with the other database types. I can only confirm it works for Aurora MySQL, and thus adding the related logic to the module seems overly complex.

PePe avatar

I guess it’s better to leave the complexity out of the module

PePe avatar

but maybe adding something to the docs would be a good idea?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can you evaluate this when you have a chance? No rush.

kskewes avatar
kskewes

Thanks everyone. It’s only a small diff and we’ll still be rebasing on and contributing upstream as we go.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks for understanding… 0.13 is taking all our spare time right now

kskewes avatar
kskewes

You’re welcome. Very grateful to you and team with all the modules. Cheers
