#terraform (2020-11)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-11-29

2020-11-27

Michael avatar
Michael

qq for those of you that use terragrunt in CI: what does your pipeline look like, essentially? Going down the terragrunt path has led me to the mono repository layout

Michael avatar
Michael

one thing that I see being problematic is maintaining a static collection of known directories and executing plan-all within a pipeline

2020-11-26

aaratn avatar
aaratn

Given an expression like this, how do I repeat elements in aws_subnet.db.*.id if num_nodes is greater than length(aws_subnet.db.*.id)?

subnet_ids = [for subnet in range(var.num_nodes) : aws_subnet.db[subnet].id]
loren avatar
loren


If the given index is greater than the length of the list then the index is “wrapped around” by taking the index modulo the length of the list:
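For reference, a minimal sketch of that wrap-around behaviour (the subnet IDs and counts here are made up):

```hcl
# Hypothetical values: 3 subnets, 5 nodes.
locals {
  subnet_ids = ["subnet-a", "subnet-b", "subnet-c"]
  num_nodes  = 5

  # element() wraps: the index is taken modulo length(list)
  picked = [for i in range(local.num_nodes) : element(local.subnet_ids, i)]
  # picked == ["subnet-a", "subnet-b", "subnet-c", "subnet-a", "subnet-b"]
}
```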

aaratn avatar
aaratn

Tried it, looks like it doesn’t do the trick

[for subnet in range(var.num_nodes) : element(aws_subnet.db.*.id, subnet)]

Seeing this — expecting the first element to get repeated, which doesn’t look like it is:

subnet_ids         = [
                      + "subnet-01759xxxxx5199",
                      + "subnet-02ea1xxxx2c019",
                      + "subnet-091ff5xxx6a116f",
                    ]
Michael avatar
Michael

Does anyone have a hack they’d like to share to get around the limitation of recursive templatefile calls?

Michael avatar
Michael

e.g. consider

Michael avatar
Michael
./example.tmpl
example = ${foo}
${templatefile("common.tmpl", { bar = "bar" })}
Michael avatar
Michael

will result in

templatefile("./example.tmpl", { foo = "bar" })



Error: Error in function call

on <console-input> line 1:
(source code not available)

Call to function "templatefile" failed: ./example.tmpl:2,3-16: Error in
function call; Call to function "templatefile" failed: cannot recursively call
templatefile from inside templatefile call..

loren avatar
loren

Pass the recursive value through as a template variable instead
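A sketch of that suggestion, using the file names from the example above (the idea being that example.tmpl references a plain ${common} variable instead of calling templatefile itself):

```hcl
locals {
  # Render the inner template first...
  common = templatefile("${path.module}/common.tmpl", { bar = "bar" })

  # ...then pass the rendered result in as an ordinary template variable,
  # avoiding the forbidden nested templatefile() call.
  example = templatefile("${path.module}/example.tmpl", {
    foo    = "bar"
    common = local.common
  })
}
```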

Michael avatar
Michael

that’s a good idea

2020-11-25

JohnVal avatar
JohnVal

Hello all, I have just stumbled across https://github.com/cloudposse/terraform-aws-tfstate-backend/ - does this module manage access permissions to the S3 buckets being created? I.e. I would like to grant RW permission to persons A and B so that nobody else can access the new S3 bucket.

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

JohnVal avatar
JohnVal

Do I need to modify this module or could I add those permissions outside of this module ?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, so permissions are not something in scope for this module. That’s more something you control using IAM roles associated with groups or users.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. by default, no user has access to the bucket created.
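For example (a sketch, not part of the module; the module instance name and the `s3_bucket_arn` output are assumptions for illustration), access could be granted with an IAM policy attached to a group containing persons A and B:

```hcl
# Assumes the tfstate-backend module exposes an s3_bucket_arn output.
data "aws_iam_policy_document" "state_rw" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = [module.terraform_state_backend.s3_bucket_arn]
  }
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["${module.terraform_state_backend.s3_bucket_arn}/*"]
  }
}

resource "aws_iam_group_policy" "state_rw" {
  name   = "tfstate-rw"
  group  = "terraform-admins" # the group containing persons A and B
  policy = data.aws_iam_policy_document.state_rw.json
}
```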

JohnVal avatar
JohnVal

thanks a lot @Erik Osterman (Cloud Posse) for the explanation.

JohnVal avatar
JohnVal

@Erik Osterman (Cloud Posse) but this module is per cluster? I mean, I cannot create one S3 bucket for many clusters and, e.g., differentiate using DynamoDB keys?

Jørgen Vik avatar
Jørgen Vik
api_pipeline_env_variables = [
  {
    name  = "AWS_DEFAULT_REGION"
    value = "eu-central-1"
  },
  {
    name  = "CONTAINER_NAME"
    value = var.api_container_name <-- Rookie here. This is illegal, but how can I inject a variable into the list?
  }
]
loren avatar
loren

that should work fine… what version of terraform?

Jørgen Vik avatar
Jørgen Vik

@loren Terraform v0.13.5

loren avatar
loren

and what error are you getting?

loren avatar
loren
$ cat main.tf
variable foo {}

output bar {
  value = [
    {
      name = "foo"
      value = var.foo
    },
  ]
}
$ terraform apply -var foo=yoda

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

Terraform will perform the following actions:

Plan: 0 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes


Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

bar = [
  {
    "name" = "foo"
    "value" = "yoda"
  },
]
Jørgen Vik avatar
Jørgen Vik

Error: Variables not allowed

on terraform.tfvars line 122:
  122: value = var.api_container_name

Variables may not be used here.

Jørgen Vik avatar
Jørgen Vik

Might be related to this I guess

Error: No value for required variable

on vars.tf line 278:
  278: variable "client_pipeline_env_variables" {

The root module input variable "client_pipeline_env_variables" is not set, and has no default value. Use a -var or -var-file command line argument to provide a value for this variable.

loren avatar
loren

ahh, in tfvars, no you cannot reference variable values. you have to do that in your .tf code

Jørgen Vik avatar
Jørgen Vik

Ah, right

Mikael Fridh avatar
Mikael Fridh

Pass values through a locals block to combine or decide things based on two different vars freely.
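i.e. something along these lines (a sketch of the same list, built where variable references are allowed):

```hcl
# Static values can stay literal; the variable-dependent entry lives in a
# local, which can reference vars — unlike tfvars.
locals {
  api_pipeline_env_variables = [
    {
      name  = "AWS_DEFAULT_REGION"
      value = "eu-central-1"
    },
    {
      name  = "CONTAINER_NAME"
      value = var.api_container_name
    },
  ]
}
```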

Jørgen Vik avatar
Jørgen Vik

@Mikael Fridh Allright. Thanks

Mikael Fridh avatar
Mikael Fridh

before I add docs and pull request… good idea? https://github.com/cloudposse/terraform-aws-iam-role/compare/master...sultans-of-devops:master

I use it, for example, like this to also attach managed policies:

module "ssm_service_role" {
  source = "github.com/sultans-of-devops/terraform-aws-iam-role"

  name      = "SSMServiceRole"
  namespace = var.namespace
  stage     = var.stage

  policy_description = "SSM Service Role policy"
  role_description   = "IAM role with permissions SSM and CloudWatch Agent policies"

  principals = {
    Service = ["ssm.amazonaws.com"]
  }

  policy_attachments = [
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
    "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy",
  ]
}
cloudposse/terraform-aws-iam-role

A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role

David Napier avatar
David Napier

Does cloudposse have any repos directed at vmware as a provider?

David Napier avatar
David Napier

I’m not finding any, but just wanted to make sure I wasn’t overlooking it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nope… we’re hyper-specialized on AWS

David Napier avatar
David Napier

Understandably so

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(some stuff for github, datadog, opsgenie too)

mrwacky avatar
mrwacky

do you recommend terraform-null-label or terraform-terraform-label these days? It’s not clear if one is preferred

David Napier avatar
David Napier

I’m not an official source, but I think terraform-null-label is preferred

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform-terraform-label should be archived

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

null label supports context.tf pattern which is a gamechanger

1
mrwacky avatar
mrwacky

I’m seeing this now. It is..

mrwacky avatar
mrwacky

We’re using an old fork of null-label and we were passing null-label resources as module inputs, but you are much smarter

1
David Napier avatar
David Napier

Using a for_each loop with the terraform-aws-route53-alias module, how would I specify a parent_zone_id from a resource? This (parent_zone_id = aws_route53_zone[each.key].zone_id) returns “Invalid reference”.
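The likely cause of “Invalid reference” is that a resource reference must include the resource’s local name; a sketch (the name "this" and the variable are assumptions for illustration):

```hcl
resource "aws_route53_zone" "this" {
  for_each = toset(var.zone_names) # illustrative variable
  name     = each.value
}

# Inside the module call, reference the resource as:
#   parent_zone_id = aws_route53_zone.this[each.key].zone_id
# rather than aws_route53_zone[each.key].zone_id, which omits the local name
# and is therefore an invalid reference.
```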

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add Binary Releases · Issue #57 · hashicorp/terraform-config-inspect

what Add semver releases Attach precompiled binaries to the releases why Not everyone has a go setup GitHub actions make it trivial to build and distribute binary releases (happy to provide example)

emem avatar

hi, can anyone please recommend a web-based database UI tool for managing databases? I’m currently using DataGrip and HeidiSQL, but was looking for something available in my web browser

David Napier avatar
David Napier

Uhhh.. kinda afraid to ask this at this point, but do you guys put your provisioners (ansible, puppet, etc.) in folders within your terraform folders or do you keep them in an adjacent folder outside of the IaC?

PePe avatar

we put them outside

Joan Porta avatar
Joan Porta

Design question: I have terraform with multiple AWS VPC’s, and their SecGroups inside. Now I have created an “Operations” VPC that needs access to everywhere. Do I change each SG (there are hundreds), adding a rule to each SG (not a very scalable solution)? Do I create a SG module and use it everywhere I have a SG? Do I add a kind of “terraform linter” to the pipeline that detects if some SG doesn’t have the “Operations VPC rule”? WDYT?

maarten avatar
maarten

In all other VPC’s you can create a single SG allow_from_operations, this SG has a rule to allow the SG from the Operations VPC, or netblock. Now you only need to add this allow_from_operations SG to the resources in that VPC.
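As a sketch of that layout (names and the netblock are illustrative):

```hcl
# One shared SG per VPC that whitelists the Operations VPC...
resource "aws_security_group" "allow_from_operations" {
  name   = "allow-from-operations"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.100.0.0/16"] # the Operations VPC netblock
  }
}

# ...attached alongside each resource's existing SG, e.g.:
#   vpc_security_group_ids = [aws_security_group.web_sg.id,
#                             aws_security_group.allow_from_operations.id]
```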

Joan Porta avatar
Joan Porta

@maarten Thx! Question: what do you mean by “Now you only need to add this allow_from_operations SG to the resources in that VPC.”?

maarten avatar
maarten

you probably have a few EC2 instances already; they currently have a Security Group attached as well.

Joan Porta avatar
Joan Porta

In the remote VPC we have SG_remote and the SG_allow_from_operations you mentioned, now what?

maarten avatar
maarten

You can add a second security group to the ec2 instance

maarten avatar
maarten

what do you mean with remote VPC, this is where the bastion resides ?

Joan Porta avatar
Joan Porta

but…are you sure that for any resource, EC2, RDS…we can add 2 SG’s?

maarten avatar
maarten

yes

Joan Porta avatar
Joan Porta

remote VPC would be where the EC2, RDS…are. the VPC where bastion is, I call it VPC operations

maarten avatar
maarten

all clear, so as an example

Joan Porta avatar
Joan Porta

Hey, I think that one EC2 can only have 1 SG

maarten avatar
maarten

How much bitcoin do you want to bet

Joan Porta avatar
Joan Porta

haha, sorry, you are right

maarten avatar
maarten

so, example

Joan Porta avatar
Joan Porta

I’m checking RDS if can have multiple SG’s right now

Joan Porta avatar
Joan Porta

yeah!

Joan Porta avatar
Joan Porta

Hey, thanks for your help, valid solution. If you need anything from my side, I would help if it is in my hands

maarten avatar
maarten

1 rds has 2 security groups

rds_sg = allow mysql/postgres access from
allow_from_operations = allow traffic from vpc/bastion_sg

1 web ec2 has 2 security groups

web_sg = allow traffic from port 80/internet
allow_from_operations = allow traffic from vpc/bastion_sg
maarten avatar
maarten

with load balancers it’s a bit more complicated, but same principles apply

maarten avatar
maarten

also keep in mind that you can create a security group in your bastion VPC, and you can refer to it in other VPC’s when they are connected

Joan Porta avatar
Joan Porta

yes, but our peering is made with Transit GW, which still doesn’t recognize SG’s from other VPC’s

1

2020-11-24

Stephen Bennett avatar
Stephen Bennett

using https://github.com/cloudposse/terraform-aws-efs - is it possible to turn off the creation of a security group and pass one to it instead? it has

resource "aws_security_group" "efs" {
  count       = module.this.enabled ? 1 : 0

but not sure how to use it and nothing in the readme

cloudposse/terraform-aws-efs

Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs

Alex Jurkiewicz avatar
Alex Jurkiewicz

hi again not an expert, but that looks like a way to disable the entire module only, rather than a way to disable the SG only. You might need to submit a PR to add this functionality if you want it


Stephen Bennett avatar
Stephen Bennett

hi again will take a look. thanks

loren avatar
loren

always exciting when aws changes the values returned by their api… the resource aws_securityhub_member is currently a bit broken as a result… https://github.com/hashicorp/terraform-provider-aws/issues/16403

Securityhub recent MemberStatus changes · Issue #16403 · hashicorp/terraform-provider-aws

AWS released changes today to SecurityHub that changed the MemberStatus fields to contain a few different values then is currently supported by the terraform AWS provider. Because of this change th…

1
Gareth avatar
Gareth

Taking the advice given about my previous question, I have tried to reframe it as a simpler example. I hope I’ve managed it. Please have a look within this new thread, and once again thank you all for your patience. Question: How do I perform a nested for loop over a list(object) variable in Terraform, without looping the whole resource?

Gareth avatar
Gareth

My stripped down Terraform structure:

metrics = [
  {
    name        = "LogicalDisk"
    measurement = ["FreeSpace"]
  },
  {
    name        = "Memory"
    measurement = ["CommittedBytes"]
  },
]

Same structure but via JSON if that helps the readability:

{
 "metrics_collected": [
   {
     "measurement": ["FreeSpace"],
     "metrics_collection_interval": 60,
     "name": "LogicalDisk"
   },
   {
     "measurement": ["CommittedBytes"],
     "metrics_collection_interval": 60,
     "name": "Memory"
   }
 ]
}

This is the structure I need to generate, by looping over the above structure, to get a jsonencoded output like this:

{
 "metrics_collected": {
   "LogicalDisk": {
     "measurement": ["FreeSpace"],
     "interval": 60
   },
   "Memory": {
     "measurement": ["CommittedBytes"],
     "interval": 60
   }
 }
}

I believe I need to do something like:-

for_each = {
 for metric in var.metrics: metric.name => metric 
}

However, I don’t know how to do that without looping the whole resource block and making multiple templates. I suspect I need to use a null_resource to loop over the items and then maybe join them in some way afterward but I’m guessing.

I wish to then use the above structure in a local_file resource block as one of the input variables of content. I’ve mocked up what I’d like to do, but I can’t because the syntax is illegal.

resource "local_file" "test" {
 filename = "myconfig.json"
 content = templatefile("${path.module}/file.json.tmpl", {


# I use this working loop to create a similar structure to the one I want, but for other inputs.

# Left it here so you can see I need multiple inputs to the content, and that I have got a similar thing working.
log_collect_list = jsonencode(
   [ for logs in var.log_object.files.collect_list : {
       "file_path"         = logs.file_path
       "log_group_name"    = logs.log_group_name
       "log_stream_name"   = logs.log_stream_name
       "auto_removal"      = logs.auto_removal
       } 
  ] # closing for loop
) # closing bracket for jsonencode


# Now for the one I want, but I can't establish how to use a for_each loop without duplicating the whole resource.

# I've also tried to use a "dynamic variable" name, but again the syntax is rejected.

# Mockup of what I'd like to do
metrics_collected = jsonencode(
    for_each = {
         for metric in var.log_object.metrics.metrics_collected : metric.name => metric 
             name                = metric.name
             measurement                = metric.measurement
             metrics_collection_interval = metric.metrics_collection_interval
             resources                  = metric.resources
         }
     }
 ) # closing of jsonencode 

}) # closing of template 

} # closing of resource 

Any pointers would be appreciated.

Alex Jurkiewicz avatar
Alex Jurkiewicz

here’s an example of how to convert from your input format to desired structure

locals {
  in = [
    {
      name        = "LogicalDisk"
      measurement = ["FreeSpace"]
    },
    {
      name        = "Memory"
      measurement = ["CommittedBytes"]
    },
  ]
  out = {
    metrics_collected = {
      for item in local.in : item.name => {
        measurement = item.measurement
        interval    = 60
      }
    }
  }
}

output "out" {
  value = local.out
}
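From there, the JSON string asked for above would presumably just be a jsonencode of the transformed local:

```hcl
# Serialize the restructured map to the desired JSON document.
output "metrics_json" {
  value = jsonencode(local.out)
}
```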
Gareth avatar
Gareth

Hello Alex, thank you for taking the time to write an example. I appreciate the support you and others have given me. It’s enabled me to keep learning and achieve things I didn’t think were possible. I’ll be returning to the office shortly and look forward to trying this.

Matt Gowie avatar
Matt Gowie

In the past week, I’ve upgraded probably a dozen or so root modules across two clients to TF 0.13 + aws provider 3.0 and both were heavily using dozens of CP modules. It was a breeze and that’s an awesome accomplishment by this community. Thanks to all you folks who contributed in that regard! cool-doge

2
2
PePe avatar

I have been doing the same and it has been great!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

0.14 will be easier too

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve been receiving tons of PRs to loosen provider pinning (>=) and update to context.tf

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

which continued to make maintenance easier

Matt Gowie avatar
Matt Gowie

It sounds like it should be.

Matt Gowie avatar
Matt Gowie

Yeah, that’ll make everything pretty smooth. Happy to see it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve also published the terraform 0.14 RC1 to our packages repo

Matt Gowie avatar
Matt Gowie

Cool — are you using it with anything yet or just in prep?

ohad avatar

Hi everybody. In case you need help with tagging resources in AWS, Azure or GCP, please take a look at our new open-source tool http://github.com/env0/terratag that automatically and recursively tags all resources for you. (Disclaimer - I am co-founder and CEO of env0, the company that created this open source)

env0/terratag

Terratag is a CLI tool that enables users of Terraform to automatically create and maintain tags across their entire set of AWS, Azure, and GCP resources - env0/terratag

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is interesting… so this goes through all your *.tf files and knows which TF resources support tags and can add the tags {... } (or similar) to it. Is that right?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If so, does it preserve comments in the TF code?

ohad avatar

Yes it does exactly that. And keeps the comments as is.

omry avatar

@Erik Osterman (Cloud Posse) It will also go through any modules that you have and tag those resources as well.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(fwiw, this is why we have terraform-null-label, which we use in every one of our modules to ensure consistent, enforced tagging)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I can definitely see where this is nice…

Igor Bronovskyi avatar
Igor Bronovskyi

How do I create a health check for the wss protocol on Fargate?

Igor Bronovskyi avatar
Igor Bronovskyi

I need to create a socket server as a service with one socket port.

Cristian Măgherușan-Stanciu avatar
Cristian Măgherușan-Stanciu

hello, I’m having some issues with terraform-null-label. After upgrading to the latest version I’m always getting an empty ID, regardless of what I tried.

module "label" {
  source          = "git::https://github.com/cloudposse/terraform-null-label.git?ref=0.21.0"
  context         = module.this.context
  enabled         = true
  id_length_limit = 10
}

output "ID" {
  value = "'${module.label.id_full}'"
}

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
ID = ''

Am I missing anything? (the same happens when using module.label.id instead of module.label.id_full)

Cristian Măgherușan-Stanciu avatar
Cristian Măgherușan-Stanciu

on another topic, did anyone figure out a way to use modules with for_each/count and different AWS regional providers?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is there a more compact way to write this sort of expression?

length(random_shuffle.shared_alb.result) > 0 ? random_shuffle.shared_alb.result[0] : null

Get the first element of a list if it’s not empty. If the list is empty I don’t care what the value is

Matt Gowie avatar
Matt Gowie

Not sure… try and element both return errors if the list is empty or doesn’t evaluate.

loren avatar
loren

these days, i recommend using the same condition you use for the count expression, instead of the length

loren avatar
loren

it’s not more compact, but it is more sane and better demonstrates the relationship between the resource and the local/output
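i.e. something like this (the variable name is assumed; the `[0]` index on the resource assumes it uses count):

```hcl
locals {
  # The same condition that drives the resource's count...
  use_shared_alb = var.shared_alb_enabled

  # ...decides whether to read the first element, so the two can never disagree.
  shared_alb = local.use_shared_alb ? random_shuffle.shared_alb[0].result[0] : null
}
```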

Alex Jurkiewicz avatar
Alex Jurkiewicz

that makes a lot of sense. I’ll pull that condition out into a local

Alex Jurkiewicz avatar
Alex Jurkiewicz

We have some random provider resources in our Terraform configuration. For instance a random_id resource: https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id We want to change the keeper values but not change the output. Has anyone done this? I assume it’s possible with some statefile hackery, which I’m OK with.

Alex Jurkiewicz avatar
Alex Jurkiewicz

looks like it’s pretty simple:

1. Pull the state
2. (Back up the state)
3. Edit the resource so the input values match your new ones
4. Increment the state serial (up the top)
5. Push the state
Gareth avatar
Gareth

jsonencode looks to sort name/value pairs into alphabetical order:

log_collect_list = jsonencode(
   [ for logs in var.log_object.files.collect_list : {
       "file_path"         = logs.file_path
       "log_group_name"    = logs.log_group_name
       "log_stream_name"   = logs.log_stream_name
       "auto_removal"      = logs.auto_removal
       } 
    ]
)

auto_removal is listed at the bottom of the above for loop, but the resulting json string lists the items in this order:

"auto_removal": true,
"file_path": "W3SVC1\\*",
"log_group_name": "/test/iis/cms",
"log_stream_name": "{instance_id}_{local_hostname}"

While I appreciate the order isn’t normally an issue for the consuming application, and I think that might be true in my case, I was wondering if there is a way to tell jsonencode to respect the order in which it consumed the name/value pairs?

Alex Jurkiewicz avatar
Alex Jurkiewicz

No, there’s not. Are you positive the order matters for your application?

Gareth avatar
Gareth

I’m just testing the app (CloudWatch agent). I’m sure it won’t matter, but I’m not always that lucky, especially with some of these historical apps we run. So I was curious if it was possible. Thank you for confirming.

Alex Jurkiewicz avatar
Alex Jurkiewicz

There are ways to make the order what you want, but they are complicated. I think you should be 100% sure there’s an issue before you solve this problem

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is there a better way to write

contains(keys(mymap), "asg")

?

loren avatar
loren

can(mymap.asg)?

loren avatar
loren

If that works, may be a little too clever… contains is very straightforward. Could put the keys in a local to improve readability/intent
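The options side by side (the map contents are illustrative; `lookup()` with a default is a third route):

```hcl
locals {
  mymap = { asg = { min = 1 } }

  has_asg_contains = contains(keys(local.mymap), "asg") # straightforward
  has_asg_can      = can(local.mymap.asg)               # terser, arguably too clever
  asg_or_null      = lookup(local.mymap, "asg", null)   # fetch the value with a default
}
```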

Alex Jurkiewicz avatar
Alex Jurkiewicz

clever idea! But yeah I agree, too clever. Will stick with this

Alex Jurkiewicz avatar
Alex Jurkiewicz

hopefully contains supports maps in 1.0

Juha Patrikainen avatar
Juha Patrikainen

Hi! I’m using https://github.com/cloudposse/terraform-aws-tfstate-backend. Is it possible to have the same s3 bucket work with multiple states with unique locks? The state file name can be given with terraform_state_file, but the lock name seems to come directly from the bucket name -> the same lock would be used when working with all states, so you could not work with multiple states at the same time.

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

2020-11-23

loren avatar
loren

here’s a neat trick… we like to maintain iam policy templates as separate json files, and validate the json syntax using jq. but these are actually json templates, rendered with terraform’s templatefile(), and so we can use any sort of terraform function inside these templates. in particular, using jsonencode() within the template to support hcl lists without hacky joins to render the list to json. the problem with using terraform functions like this is the templates become invalid as json, and so fail jq validation.

so, came up with a simple tf config to render the template and ensure it serializes to json…

locals {
  # specify all vars in the templates
  template_vars = {
    # foo = bar
  }
}

data null_data_source this {
  for_each = fileset(var.path, var.pattern)

  inputs = {
    # templatefile catches bad hcl syntax in interpolations
    # encode/decode cycle catches bad json in the template
    json = jsonencode(jsondecode(templatefile(each.value, local.template_vars)))
  }
}

variable path {
  type    = string
  default = "."
}

variable pattern {
  type    = string
  default = "**/*.json.template"
}

then have your CI system run a plan on that config:

terraform init -backend=false <path/to/test/config>
terraform plan <path/to/test/config>

the config intentionally uses a null data source to avoid needing any aws credentials

4
1
tim.j.birkett avatar
tim.j.birkett

Another “neat trick” that I implemented recently… we use Terraform to manage the versions of core pre-installed things on EKS, things like Kube Proxy, the CNI, pretty much replicated what AWS would have you do according to their upgrade docs.

We faced a problem when AWS changed their tags for kube-proxy to include eksbuild.1 in the tag. To try to combat this, we’ve used the docker_registry_image data resource like:


\# Validate images exist
data "docker_registry_image" "aws_node" {
  count = var.manage_daemonsets ? 1 : 0
  name  = "602401143452.dkr.ecr.${var.region}.amazonaws.com/amazon-k8s-cni:${local.amazon-k8s-cni_image_tag}"
}

data "docker_registry_image" "coredns" {
  count = var.manage_daemonsets ? 1 : 0
  name  = "602401143452.dkr.ecr.${var.region}.amazonaws.com/eks/coredns:${local.coredns_image_tag}"
}

data "docker_registry_image" "kube_proxy" {
  count = var.manage_daemonsets ? 1 : 0
  name  = "602401143452.dkr.ecr.${var.region}.amazonaws.com/eks/kube-proxy:${local.kube-proxy_image_tag}"
}

Provider configuration:

data "aws_ecr_authorization_token" "login" {}

provider "docker" {
  registry_auth {
    address  = "602401143452.dkr.ecr.${var.region}.amazonaws.com"
    username = data.aws_ecr_authorization_token.login.user_name
    password = data.aws_ecr_authorization_token.login.password
  }
}

This results in an error being thrown way earlier in the Terraform run.

loren avatar
loren

@ nice. iirc, one issue with data sources in provider configs is that it breaks imports. have you run into that? or maybe the situation is improved since i last tried it… the linked issue is pretty old: https://github.com/hashicorp/terraform/issues/13018. but the docs still say no data sources in providers… https://www.terraform.io/docs/commands/import.html#provider-configuration
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.

PePe avatar

@loren can your witchery be used to terraform init a backend config file?

loren avatar
loren

no backend config required with, terraform init -backend=false

PePe avatar

so basically double init, one with no backend to create the config and another with the remote backend?

loren avatar
loren

i’m doing this in a repo with only policy template files, no other tf config, so i specifically disable any backend setup. another repo references this one to pull down the template files for its own tf configs…

PePe avatar

I understand, but imagine if the backend config is in json and is templated and generated by the first init?

PePe avatar

I’m trying to find a way to have templates of backend config files, and when I saw this I thought I could maybe use this

loren avatar
loren

oh i see, interesting. terragrunt can do this using its generate block. i imagine terraform could using a couple steps to create the file first with a targeted apply. but you’d need to be very careful with the vars you use for the backend template to avoid a resource cycle

PePe avatar

yes, the source should be pretty locked down and tested before any input is passed

Alex Jurkiewicz avatar
Alex Jurkiewicz

That’s very clever @loren

loren avatar
loren

Could probably do something pretty similar to validate templated yaml files using yamldecode/yamlencode…
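Adapting the json-validating config above, that might look like (untested sketch; the glob pattern is an assumption):

```hcl
# Analogue of the JSON trick for templated YAML files.
data "null_data_source" "yaml" {
  for_each = fileset(var.path, "**/*.yaml.template")

  inputs = {
    # templatefile catches bad hcl syntax in interpolations;
    # the encode/decode cycle catches invalid YAML in the rendered template
    yaml = yamlencode(yamldecode(templatefile(each.value, local.template_vars)))
  }
}
```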

Matt Gowie avatar
Matt Gowie
cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

Matt Gowie avatar
Matt Gowie

I was also just thinking about this… Why do you store your policies in JSON instead of using the policy_document resource? I’ve always gone the policy_document approach as I thought that was the recommended way.

loren avatar
loren

because the teams managing the policies do not want to deal with terraform directly, so i try to keep the hcl-ness to an absolute minimum :)

Matt Gowie avatar
Matt Gowie

@loren totally valid reasoning

Alex Jurkiewicz avatar
Alex Jurkiewicz

IMO policy document is much better than <<EOF, slightly worse than an external file. The underlying structure is JSON; it is simpler to reason about stuff which undergoes less transformation

Matt Gowie avatar
Matt Gowie

You’re saying you prefer: External JSON file > Policy Document > EOF?

Matt Gowie avatar
Matt Gowie

I like policy document because most of my policy_document resources include mentions to other resources in the root module or from remote state.

Matt Gowie avatar
Matt Gowie

And with it being all HCL, you don’t need to jump through as much string templating hoops.

Alex Jurkiewicz avatar
Alex Jurkiewicz

ah yeah. I didn’t think about that. You are right, for any IAM document with references to Terraform stuff, I would prefer the policy document data source

loren avatar
loren

for this use case, it’s mostly some cyber security folks. i just want them focused on reviewing/approving the iam policy documents that are in scope of their team’s review. i don’t want them getting their knickers twisted on some intricacies of hcl and terraform. this is also why these policies are in their own repo, which now no longer has any tf code (other than some very light hcl templating in the policy files). i just pull the repo to my root module using a module block with a source arg and no other arguments, then reference paths to the policy files from the .terraform cache. it’s my new favorite trick, learned here in cloudposse slack, but i don’t remember from who

loren avatar
loren

so it basically all looks like JSON to them, which they’re used to seeing and working with in the IAM console. just with a couple self-explanatory template var names that we pass in through templatefile()
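A rough sketch of that trick (repo URL, file path, and template vars are hypothetical): a module block with only a source argument vendors the policy repo into the .terraform cache, and templatefile() then renders files from it:

```hcl
# Vendor the policy repo; no variables, no resources, just the source.
module "iam_policies" {
  source = "git::https://github.com/example-org/iam-policies.git" # hypothetical repo
}

# Render one of the vendored policy files, passing in self-explanatory vars.
resource "aws_iam_policy" "reviewed" {
  name = "reviewed-policy"
  policy = templatefile(
    "${path.root}/.terraform/modules/iam_policies/policies/s3-read.json", # hypothetical path
    { bucket_arn = "arn:aws:s3:::example-bucket" }
  )
}
```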

Matt Gowie avatar
Matt Gowie

Yeah I like it — Sounds very useful for a larger org. And sounds very similar to terraform-yaml-config.

loren avatar
loren

i skimmed that module when it was posted recently, it seems super handy… i’m about to overhaul some of our root config management, will keep it in mind to see if it fits… :)

Matt Gowie avatar
Matt Gowie

Yeah, let me know how that goes. I haven’t used it yet, but it does look like a solid tool. I like the new Cloud Posse thought process behind pushing more configuration into Yaml > HCL. I’m finding I do that more and more.

Gareth avatar
Gareth

sorry to be a pest but I was hopeful that somebody might have time to look at my last question/thread (posted Saturday)?

David Napier avatar
David Napier

Anyone know whether modules are able to be looped over? Did that make it into v0.13.X?

loren avatar
loren

yes, count and for_each for modules is in 0.13

loren avatar
loren

along with depends_on for modules
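For reference, the 0.13 module loops look like this (the module source and referenced resource are hypothetical):

```hcl
module "buckets" {
  source   = "./modules/s3-bucket" # hypothetical local module
  for_each = toset(["logs", "assets"])

  name = each.key

  # depends_on for modules is also new in 0.13
  depends_on = [aws_iam_role.deployer] # hypothetical resource
}
```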

David Napier avatar
David Napier

Thanks loren

loren avatar
loren
Announcing Terraform 0.14 with Increased Workflow Reliability and Upgrades for Sensitive Variables attachment image

Terraform 0.14 is all about perfecting workflow. Our latest version enables practitioners to leverage more predictability and reliability in infrastructure automation. In meeting this goal, we’ve also added some key features to help organizations heavily invested in Terraform, continue to mature. Additionally, updates included in this release will be equally valuable to practitioners and teams just starting their journey with infrastructure as code. In this webinar, Terraform OSS Product Manager Petros Kolyvas and Technical Product Marketing Manager Kyle Ruddy will walk you through new Terraform features such as concise diff, sensitive variables, and the provider dependency lock file.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks like it’s still RC1 though


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Releases · hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Maybe it will drop *December 08 10:00 AM PST?*
loren avatar
loren

yeah, it hasn’t dropped yet. close though. my thinking is this could be one of the lead-up discussions, like how they published pre-release blog posts on tf 0.13 features

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re bracing for the onslaught of 0.14 issues/prs

Matt Gowie avatar
Matt Gowie

I like that they’re shipping new major versions sooner… but it does cause a lot of havoc in terms of upgrading projects. Hopefully this upgrade is as easy as 0.13’s, and we’ve done enough of the minor-version-only pinning to make it easy on the CP module front.

loren avatar
loren

i’m not expecting all that much in the way of breaking changes, from what i’m seeing. long as the version pins on modules are >=!
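i.e. floor-only module pins along these lines (version number hypothetical):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = ">= 0.19.0" # a >= pin keeps working when TF core moves to 0.14
}
```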

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are they dropping support for old provider syntax?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(i know they are dropping the upgrade command)

Matt Gowie avatar
Matt Gowie

Isn’t the old provider syntax already dropped in 0.13?

loren avatar
loren

interesting, the master branch changelog is already collecting 0.15 changes…

loren avatar
loren

the version argument of provider blocks is deprecated, but only results in an explicit deprecation warning. it has not been removed

loren avatar
loren

the changelog for 0.14 does not have a “BREAKING CHANGES” section, where for 0.13 it did…

Matt Gowie avatar
Matt Gowie

Their main push is the lock file, right? I would assume that wasn’t breaking.

Matt Gowie avatar
Matt Gowie

That’d be awesome. Looking forward to trying that out.

loren avatar
loren

i’m always of two minds on the lockfiles. we’ll see. i mean, i use terraform-bundle anyway, because i don’t want my CI downloading random things. and i haven’t generally seen any problems across teams in a looong while (outside TF core upgrades breaking tfstate)

loren avatar
loren

backwards-compatible tfstate will be welcome, for sure, even though that problem is mostly addressed just by a strict pin on the version in the root config. some of those changes are filtering into the 0.13.x releases… https://github.com/hashicorp/terraform/blob/v0.13/CHANGELOG.md#0136-unreleased

hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
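i.e. a strict pin in the root module (the exact version number is illustrative):

```hcl
terraform {
  required_version = "= 0.13.5"
}
```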

Matt Gowie avatar
Matt Gowie

Yeah good point — Though I didn’t mean to say lock files are what I’m looking forward to trying out. I’m excited about the move towards faster releases with minimal breaking changes as that sounds like a great direction for the tool.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It sounds like the next release will be 1.0, so I guess you can consider this 1.0rc1

Alex Jurkiewicz avatar
Alex Jurkiewicz

Anyone have terraform syntax highlighting for json.tpl files working in vscode?

loren avatar
loren

I think vscode has a way of associating a file extension to a plugin? Haven’t looked at that in a while…

Zach avatar

You can associate the extension, or override it manually in the bottom right corner of the UI where it says something like ‘json’, to identify the current syntax selected.
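One workaround is mapping the extension to the Terraform grammar in settings.json (an approximation only; there is no dedicated “JSON with HCL templates” language):

```json
{
  "files.associations": {
    "*.json.tpl": "terraform"
  }
}
```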

Alex Jurkiewicz avatar
Alex Jurkiewicz

I know how to associate. But the hashicorp terraform plugin doesn’t add a syntax language for “JSON with HCL templates”

loren avatar
loren

Check for, or open, an issue?

tim.j.birkett avatar
tim.j.birkett

I’ve resorted to adding the file’s correct extension to aid readability in text editors. Where we used to append .tmpl to everything, we now name files like values.tmpl.yaml. Of course, in JSON, if you have any kind of linting turned on, templated variables will light up as errors. It makes YAML easier on the eye in my case.

2020-11-21

Gareth avatar
Gareth

Please can I get some help to get me over the line with the right syntax for my final for loop? More in in thread…

Gareth avatar
Gareth

Good afternoon, Thanks for all the help so far with getting my templates sorted, I’m almost there but I’m struggling with the last for loop or maybe it should be a for_each, feels like I’m missing an important lesson but I’ve played around with this for over two hours and I’m getting nowhere.

I have this structure (cut down for this example.)

variable "log_object" {
  type = object({
    metrics = object({
      metrics_collected = list(object({
        name                        = string
        measurement                 = list(string)
        metrics_collection_interval = number
        resources                   = list(string)
      }))
    })
  })
}

With this set as the variable:

log_object = {
  metrics = {
    metrics_collected = [
      {
        name                        = "LogicalDisk"
        measurement                 = ["% Free Space"]
        metrics_collection_interval = 60
        resources                   = ["*"]
      },
      {
        name                        = "Memory"
        measurement                 = ["% Committed Bytes In Use"]
        metrics_collection_interval = 60
        resources                   = [""]   # Note Memory does not use a resource but TF can't have optional data types in objects see <https://github.com/hashicorp/terraform/issues/19898>
      },

    ]
  }
}

I’m trying to create a jsonencode block for use in my custom tpl file.

With the help of some kind people here, I’ve managed to get it mostly working but I need to tweak the outputted structure slightly from what I’m able to produce. Current output

               "metrics": {
                               "metrics_collected": [
                                               {
                                                               "measurement": ["% Free Space"],
                                                               "metrics_collection_interval": 60,
                                                               "name": "LogicalDisk",
                                                               "resources": ["*"]
                                               },
                                               {
                                                               "measurement": ["% Committed Bytes In Use"],
                                                               "metrics_collection_interval": 60,
                                                               "name": "Memory",
                                                               "resources": [""]
                                               },
                               ]
               }

What I need:

 metrics = {
     metrics_collected = [
                           "LogicalDisk" =     {
                                                 measurement                 = ["% Free Space"]
                                                 metrics_collection_interval = 60
                                                 resources                   = ["*"]
                                               }
                           "Memory"      =     {
                                                 measurement                 = ["% Committed Bytes In Use"]
                                                 metrics_collection_interval = 60
                                                 resources                   = [""]                        
                                               }
     ]
   }

I have thought about changing my data structure to

Gareth avatar
Gareth
type = object({
  metrics = object({
    metrics_collected = list(object({
      name = object({
        measurement                 = list(string)
        metrics_collection_interval = number
        resources                   = list(string)
      })
    }))
  })
})

But doesn’t this then fix everything to be called name rather than being dynamic? So it looks wrong. I want the flexibility to add and remove items from the overall list, so I want to avoid having to explicitly set each one by name within the variable declaration.

I’ve been trying lots of different combinations of past suggestions but haven’t hit on the right thing yet. This is what’s working:

resource "local_file" "test" {
   filename = "myconfig.json"
   content = templatefile("${path.module}/file.json.tmpl", {

 metrics_collected = jsonencode(
    [ for metric in var.log_object.metrics.metrics_collected : {

        name                        = metric.name
        measurement                 = metric.measurement
        metrics_collection_interval = metric.metrics_collection_interval
        resources                   = metric.resources

       }
    ]
   )
 })
}

I feel like this is closest but it gives a TF error

resource "local_file" "test" {
   filename = "myconfig.json"
   content = templatefile("${path.module}/file.json.tmpl", {

metrics_collected = jsonencode(
    [ for metric in var.log_object.metrics.metrics_collected : metric.name =>  {

        name                 = metric.name
        measurement                 = metric.measurement
        metrics_collection_interval = metric.metrics_collection_interval
        resources                   = metric.resources

       }
    ]
   )

 })
}

Error: Key expression is not valid when building a tuple.

I’ve also tried adding a for_each in the mix like

  for_each = { for metric in var.log_object.metrics.metrics_collected : metric.name => metric }

but this errors as well.

I’m sure the answer is within my reach but I’m not sure how many more hours it’ll take me to find it on my own. So, any help you can offer would be appreciated.
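For reference, the “Key expression is not valid when building a tuple” error comes from using a key => value expression inside [ ... ]; a for expression that produces a map needs braces instead. A sketch against the structure above:

```hcl
metrics_collected = jsonencode(
  { for metric in var.log_object.metrics.metrics_collected : metric.name => {
      measurement                 = metric.measurement
      metrics_collection_interval = metric.metrics_collection_interval
      resources                   = metric.resources
    }
  }
)
```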

Gareth avatar
Gareth

Wondering if anybody has time to help with this, I fought with it most of the weekend and I’m still no closer

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think you are trying to do something too fancy here for your use case. But it’s a little difficult to understand because of your complex data structure. Can you create a simplified test case using simpler data structures and a null_resource?

Gareth avatar
Gareth

Many thanks for replying Alex. If that’s the case i guess I should reluctantly give up. I’ve tried to find the right combination of for_each and for or even just a for loop but I’m just not skilled enough in constructing the correct syntax. Which is driving me nuts as I’m used to programming with these types of data structure in other languages. I do understand that this is more a templating language but it really felt like I was almost there. Especially, as I’d managed to crack the first two parts with Loren’s kind help on the previous thread.

Putting terraform aside for a second and trying to explain it a different way, all I want to do is:

  1. Take my list,
  2. loop over it,
  3. for each object in the list, take the value inside the object called name (e.g. “LogicalDisk”) and use this as a key for the remaining items in the object, be this on the fly or by creating a new object structure to temporarily use,
  4. output:
    "LogicalDisk" =   {
                      measurement                 = ["% Free Space"]
                      metrics_collection_interval = 60
                      resources                   = ["*"]
                      }
    

When all is said and done, if what I’m trying to do isn’t practical, I’ll shelve it. It’s clearly beyond my ability at the minute. Once again, thank you for replying, and to everybody else for taking the time to read the question.

Alex Jurkiewicz avatar
Alex Jurkiewicz

it sounds practical. But your examples are not easy to understand. You will get more help if you spend your time making the question clearer

Alex Jurkiewicz avatar
Alex Jurkiewicz

like, when I’m looking at your code blocks they are indented so much they wrap on my screen. It’s really hard to make sense of

Gareth avatar
Gareth

Thanks Alex, I’ll try again to reframe the question, not a problem to do so. After all, I’m the one asking for help. Apologies it’s not clear enough to start with. I do appreciate your time.

Gareth avatar
Gareth

Re the wrapping on the screen, totally unaware of that, as they appeared fine on my screen. I’ll clean them up. Thank you for pointing it out.

2020-11-20

ByronHome avatar
ByronHome

Hi, I am using the API Gateway module. I want to add sub-paths to a path_part, like path/subpathA/subpathB. I’m trying to do it, but the module can’t do that. Does someone know how I can do this? Also, when I make this manually in API Gateway and then refresh state, it always says I have changes to apply. Cheers.

clouddrove/terraform-aws-api-gateway

Terraform module to create Route53 resource on AWS for create api gateway with it’s basic elements. - clouddrove/terraform-aws-api-gateway

David Napier avatar
David Napier

I’m getting an error about the Terraform Core version, but I’m using a version which is in the constraints listed..

David Napier avatar
David Napier

I get it, so version 0.13+ is out

foqal avatar
foqal
03:37:57 PM

@’s question was answered by <@Foqal>

Gareth avatar
Gareth

HI, I’m looking for help with terraforms built in templating function. More details in thread if you can spare the time

Gareth avatar
Gareth

I’d like to dynamically create the JSON configuration file for the CloudWatch agent based on a data structure in terraform like this. Reason being that I’d like to have only one place to add and remove logs, event viewer logs and metrics, metrics being the most frequently changed. Currently the log groups are created within TF, so having the ability to customise the agent config would save time etc. My terraform structure used to create it:

agent_objects = {
  files = {
    collect_list = [
      {
        name               = "cms"
        create_log_group   = true
        create_log_stream  = false
        file_path          = "C:\Connect\cms\logs\iis\W3SVC1\"
        log_group_name     = "/myapp/test/iis/cms"
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
        kms_key_id         = "Blah"
        auto_removal       = true
      },
      {
        name               = "maint"
        create_log_group   = true
        create_log_stream  = true
        file_path          = "C:\Connect\maint\logs\iis\W3SVC2\"
        log_group_name     = "/myapp/test/iis/maint"
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
        kms_key_id         = "Blah"
        auto_removal       = true
      },
    ]
  }

  windows_events = {
    collect_list = [
      {
        name         = "System"
        event_format = "xml"
        event_levels = ["INFORMATION", "WARNING", "ERROR", "CRITICAL"]

        create_log_group   = true
        log_group_name     = "/myapp/test/System"
        create_log_stream  = false
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
      },
      {
        name         = "Application"
        event_format = "xml"
        event_levels = ["INFORMATION", "WARNING", "ERROR", "CRITICAL"]

        create_log_group   = true
        log_group_name     = "/myapp/test/Application"
        create_log_stream  = false
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
      },
    ]
  }

  metrics = {
    metrics_collected = [
      {
        name                        = "LogicalDisk"
        measurement                 = ["% Free Space"]
        metrics_collection_interval = 60
        resources                   = ["*"]
      },
      {
        name                        = "Memory"
        measurement                 = ["% Committed Bytes In Use"]
        metrics_collection_interval = 60
        resources                   = [""]
      },
    ]
  }
}

With my very limited understanding of the template functionality, it appears it should be possible to build the type of structure I need. An example of the cloudwatch configuration I need can be seen here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html (search for “The following is an example of a logs section”; the example log is just below). Having read over the examples here  https://alexharv074.github.io/2019/11/23/adventures-in-the-terraform-dsl-part-x-templates.html#example-7-the-for-loop

 

locals {
  fruits = ["apple", "banana", "pear"]
}

output "fruits" {
  value = <<-EOF
    My favourite fruits are:
    %{ for fruit in local.fruits ~}
    - ${ fruit }
    %{ endfor ~}
  EOF
}

I can see the templating is capable of looping over a terraform list and outputting values. My question is… am I being too ambitious for what the terraform language can do? I’ve failed to find any examples online of similar json creations. Is there a different approach / provider I should be investigating? My thanks in advance for any advice given.

loren avatar
loren

well one option would be to construct it entirely as an hcl object, then use jsonencode() to convert it to json
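i.e. something like this (a minimal sketch covering only the metrics section from the question):

```hcl
locals {
  agent_config = {
    metrics = {
      metrics_collected = {
        LogicalDisk = {
          measurement                 = ["% Free Space"]
          metrics_collection_interval = 60
          resources                   = ["*"]
        }
      }
    }
  }
}

resource "local_file" "cloudwatch_agent" {
  filename = "myconfig.json"
  content  = jsonencode(local.agent_config)
}
```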

loren avatar
loren
templatefile - Functions - Configuration Language - Terraform by HashiCorp

The templatefile function reads the file at the given path and renders its content as a template.

Gareth avatar
Gareth

Hi Loren, thank you for coming to my aid again :)

Regarding the first suggestion, I don’t fully understand, is there a provider to do jsonencoding? I know you can read a template file in and add .json to get an encoded version. Is that what you mean?

As for the link I’ll go read that now.

loren avatar
loren

jsonencode() is a builtin function in terraform. no provider needed, https://www.terraform.io/docs/configuration/functions/jsonencode.html

jsonencode - Functions - Configuration Language - Terraform by HashiCorp

The jsonencode function encodes a given value as a JSON string.

Gareth avatar
Gareth

Ah I see from the link the jsonencode now. Thanks wasn’t aware of this.

loren avatar
loren

it’s very powerful, give it a go!

Gareth avatar
Gareth

I will; thank you for the point in the right direction

David Napier avatar
David Napier

I used terraform 0.13.X to update my state, but the modules I’m using require TF ~> 0.12.X, is there a way to revert state to work with an older TF version?

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can attempt to pull the state and manually edit it to change the version it records. But I’m not sure if there were structural changes from 12 to 13 which mean this won’t work… Might be easier to upgrade those modules…

David Napier avatar
David Napier

I wish I had the time to do that. Seems like a lot of the cloudposse modules have version constraints towards 0.12.X though.

Joe Niland avatar
Joe Niland

@ there’s an ongoing effort to fix version pinning. Which ones are you using?

David Napier avatar
David Napier

Really good to know. terraform-aws-alb , terraform-aws-dynamic-subnets, and terraform-aws-vpc.

aaratn avatar
aaratn

If you are using s3 backend and have object versioning enabled, you can revert back to previous version
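roughly, with the AWS CLI (bucket name and state key are placeholders):

```shell
# List versions of the state object to find the pre-upgrade one
aws s3api list-object-versions \
  --bucket my-tfstate-bucket --prefix env/terraform.tfstate

# Download the older version locally
aws s3api get-object \
  --bucket my-tfstate-bucket --key env/terraform.tfstate \
  --version-id <OLD_VERSION_ID> terraform.tfstate.old

# Re-upload it as the current version
aws s3 cp terraform.tfstate.old s3://my-tfstate-bucket/env/terraform.tfstate
```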

loren avatar
loren

make sure you check the module versions that you are referencing in your source argument… the current alb module version does work with tf0.13… https://github.com/cloudposse/terraform-aws-alb/blob/master/versions.tf

cloudposse/terraform-aws-alb

Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb

Joe Niland avatar
Joe Niland

Agree with @loren - I’m using all of the above with TF 0.13

loren avatar
loren

dynamic-subnets is also ready for tf 0.13

loren avatar
loren

and so is vpc

David Napier avatar
David Napier

terraform-aws-tfstate-backend is the only one I’m using that’s not in that list

Joe Niland avatar
Joe Niland

That looks like it was fixed in v0.28.0

David Napier avatar
David Napier

Now getting an sts:GetCallerIdentity error. No idea where that’s coming from. Can still use the aws cli with np

David Napier avatar
David Napier

Sorry, I know that’s not useful information

David Napier avatar
David Napier

Ah, apparently the backend.tf file didn’t get the profile config

David Napier avatar
David Napier

Okay, I finally got it, had to pin the version for each item rather than going with the front page examples

Joe Niland avatar
Joe Niland

Right, front page example has master but you definitely have to pin to a specific tag for each module

David Napier avatar
David Napier

Well, that’s definitely on me, but I really appreciate you guys being willing to look.

Joe Niland avatar
Joe Niland

No worries. I remember it wasn’t obvious to me at first.

2020-11-19

Laurynas avatar
Laurynas

Is it possible to attach elastic IP to the aws_spot_instance_request with terraform?
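One possible approach (an untested sketch; the AMI and names are placeholders): the spot request resource exports the fulfilled instance’s ID as spot_instance_id, which can feed an aws_eip_association:

```hcl
resource "aws_spot_instance_request" "worker" {
  ami                  = "ami-0123456789abcdef0" # placeholder
  instance_type        = "t3.micro"
  wait_for_fulfillment = true # block until the instance exists, so its ID is known
}

resource "aws_eip" "worker" {
  vpc = true
}

resource "aws_eip_association" "worker" {
  instance_id   = aws_spot_instance_request.worker.spot_instance_id
  allocation_id = aws_eip.worker.id
}
```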

simplepoll avatar
simplepoll
03:18:49 PM

How do you section up your Terraform root modules?

loren avatar
loren

all of the above

rei avatar

00-base, 10-dns, 15-network, 20-eks, 30-iam-roles

PePe avatar

Sorry, I mean the question can’t be answered that easily; it depends on the complexity of the project

Matt Gowie avatar
Matt Gowie

Haha yeah, definitely a “it depends” answer. I just think it’s an interesting question and would like to hear where folks draw their dividing lines.

PePe avatar

in my case I do it per application and AWS product too

PePe avatar

so I will create a repo to deploy this app on ECS which will include all the necessary access (roles and such) plus any additional resources for the app to run, but the boundary of this TF repo is the container. So if, for example, the container needs to connect to other external services or a DB, that will go in a separate repo

PePe avatar

I do not mix DBs with apps since the deployment process of DBs is very different and has a far bigger blast radius

PePe avatar

the repo could start with the db, but almost every time the DB gets migrated to its own repo once things like replication/multi-region come into play, or other teams start consuming the same data directly from the db

PePe avatar

many factors to consider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-components

Catalog of reusable Terraform components and blueprints for provisioning reference architectures - cloudposse/terraform-aws-components

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(btw, we’ve updated these in the past week - these are all our latest root modules)

EvanG avatar
EvanG

Does anyone know where I can find some simple code for automating aws config? I’ve been getting this error for about a day

Creating Delivery Channel failed: InsufficientDeliveryPolicyException: Insufficient delivery policy to s3 bucket: terraform-20201119163429797100000001, unable to write to bucket, provided s3 key prefix is 'config'.
EvanG avatar
EvanG

Actually looks like it’s an open issue https://github.com/hashicorp/terraform-provider-aws/issues/8655

Bug in AWS Config Delivery Channel · Issue #8655 · hashicorp/terraform-provider-aws

This issue was originally opened by @stsraymond as hashicorp/terraform#21325. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version …v…

V M avatar

I have a new windows image I want to use for terraform installs. Currently it is in an image gallery. Does the terraform call change depending on whether it’s in an image gallery or a storage account? I am trying to understand how those work together in terraform.

loren avatar
loren
Announcing Support for AWS Network Firewall in the Terraform AWS Provider attachment image

The Terraform AWS provider has added support for the newly released AWS Network Firewall service.

Yoni Leitersdorf avatar
Yoni Leitersdorf

Wow, I was just thinking about that last night and didn’t even search for it as I assumed they didn’t.


Yoni Leitersdorf avatar
Yoni Leitersdorf

I should have more faith

Matt Gowie avatar
Matt Gowie

Not sure I care about Network Firewall support… but agreed that is super cool. I hope that becomes the pattern instead of the exception.

maarten avatar
maarten

It’s getting too much for me. What is the added value here compared to what we have already?

Yoni Leitersdorf avatar
Yoni Leitersdorf

There are some organizations out there that need to do filtering based on FQDN or have real IPS capabilities. Usually required by some regulations. Until now, they had to deploy a third party firewall (like check point, palo alto, etc). Now they can use this instead.

Doug Clow avatar
Doug Clow

Yes, some places we are required to filter egress traffic

vFondevilla avatar
vFondevilla

Yes. I have to do the math but probably will deprecate our current solution

Amit Karpe avatar
Amit Karpe

Hi, I am using the RDS module, and I was wondering how I can reuse existing VPC or subnet names. As of now I have to manually search for vpc_id, subnet_ids, and security_group_ids in the console and then use them in terraform.tfvars.

I know we can fetch those using data sources, but I can’t find any example that looks them up by the name of a vpc or subnet.

i.e. I need to provision an RDS db into an existing VPC which will always have the same name/tags. How can I refer to them instead of copying and pasting IDs from the AWS console, which is extra work?

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

PePe avatar

you need to use the names of the tags with a filter


PePe avatar

so it is the Tag named Name

PePe avatar

for example

data "aws_vpc" "main_vpc_us_east_1" {
  tags = {
    provisioning = "terraform"
    shared_vpc   = "false"
  }
}

data "aws_subnet_ids" "private_us_east_1" {
  vpc_id = data.aws_vpc.main_vpc_us_east_1.id
  tags = {
    provisioning = "terraform"
    Name         = "*private*"
  }
}
Amit Karpe avatar
Amit Karpe

Thanks, let me try this example

Alex Jurkiewicz avatar
Alex Jurkiewicz

For data sources referencing “unmanaged” resources, I like to add another tag to them like loaded_in_terraform = true so there’s a little documentation about the link on the source side
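i.e. on the side that owns the VPC, something like this (the tag name is just a convention, not anything Terraform interprets):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name                = "mainvpc"
    loaded_in_terraform = "true" # documents that other root modules look this VPC up by tag
  }
}
```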

Amit Karpe avatar
Amit Karpe

Got following error:

module.db.module.db_subnet_group.aws_db_subnet_group.this[0]: Creating...
module.db.module.db_option_group.aws_db_option_group.this[0]: Creating...
module.db.module.db_option_group.aws_db_option_group.this[0]: Creation complete after 0s [id=db-migrate2-20201120063405956700000001]

Error: Error creating DB Subnet Group: InvalidParameterValue: Some input subnets in :[vpc-01e43736cca466036] are invalid.
        status code: 400, request id: 9e1683c2-f9e6-456d-99fb-6e21890c6546

  on .terraform/modules/db/modules/db_subnet_group/main.tf line 1, in resource "aws_db_subnet_group" "this":
   1: resource "aws_db_subnet_group" "this" {
Amit Karpe avatar
Amit Karpe

My mistake: Following value should be without []

subnet_ids = data.aws_subnet_ids.private2.ids
Amit Karpe avatar
Amit Karpe

Its working Thank you

Amit Karpe avatar
Amit Karpe

@ What is loaded_in_terraform = true ? I don’t understand its usage. Should I add it to the data block?

Amit Karpe avatar
Amit Karpe

Is using tags as variables good practice?

As of now, I have something like:

data "aws_vpc" "vpc" {
  tags = {
    Name = "mainvpc"
    Environment = "dev"
  }
}

data "aws_subnet_ids" "private2" {
  vpc_id = data.aws_vpc.vpc.id
  tags = {
    Custom-tag  = "*private_subnet2*"
    Name         = "sub_private_dev_*"
  }
}

Can I use variables so the module can be reused, and the Name of the VPC or subnet can be changed?

data "aws_vpc" "vpc" {
  tags = {
    Name = var.vpcName
    Environment = "dev"
  }
}

data "aws_subnet_ids" "private2" {
  vpc_id = data.aws_vpc.vpc.id
  tags = {
    Custom-tag  = var.subnetCustomTag
    Name         = var.subnetNames
  }
}

Variables:

subnetNames = "sub_private_dev_*"
subnetCustomTag = "*private_subnet2*"
vpcName = "mainvpc"

Any suggestions?

PePe avatar

Yes you can use input variables to pass to the filters
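
A sketch of the variable definitions that would back that lookup, using the names Amit proposed (the types and descriptions are assumptions):

```hcl
variable "vpcName" {
  type        = string
  description = "Value of the Name tag on the VPC to look up"
}

variable "subnetNames" {
  type        = string
  description = "Wildcard pattern matched against the subnet Name tag"
}

variable "subnetCustomTag" {
  type        = string
  description = "Wildcard pattern matched against the Custom-tag tag"
}
```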

Amit Karpe avatar
Amit Karpe

ok. I will try that

2020-11-18

nbrys avatar
nbrys

When we store the CloudTrail logs in a different account, should the KMS key for encrypting the objects be a key from the source account, or from the account that stores the logs?

nbrys avatar
nbrys

nobody?

Brij S avatar
Brij S

Does anyone know how I could point a provider to a github repo/locally? I looked online and didn’t find any info on this. I’ve made some mods to a provider that I would like to try out

Nick V avatar
Nick V

If you mean you changed some Go and recompiled a provider, I’m not sure beyond just copying it into the local plugins folder (something like ~/.terraform.d/plugins/), giving it a version higher than exists in the online registry, then setting that as a constraint in the TF code

Nick V avatar
Nick V
CLI Configuration - Terraform by HashiCorp

The general behavior of the Terraform CLI can be customized using the CLI configuration file.
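
From the CLI configuration docs, a development override in ~/.terraformrc can also point Terraform at a locally built provider. This is a sketch assuming Terraform 0.14+ (where dev_overrides exists); the binary path is hypothetical:

```hcl
# ~/.terraformrc (sketch, assumes Terraform 0.14+)
provider_installation {
  dev_overrides {
    # Point at the directory containing your compiled provider binary
    "hashicorp/aws" = "/home/me/go/bin"
  }
  direct {} # fall back to the registry for every other provider
}
```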

Matt Gowie avatar
Matt Gowie

Yeah, I do what Nick outlined for a fork of the aws-provider. It’s not pretty, but it gets the job done.

Matt Gowie avatar
Matt Gowie

Here is my make target to do it:

amplify_aws_provider: $(GOEXE)
	git clone --single-branch \
			  --branch amplify \
			  git@github.com:masterpointio/terraform-provider-aws.git \
			  ./tmp/terraform-provider-aws
	cd ./tmp/terraform-provider-aws && \
		go build . && \
		cp ./terraform-provider-aws $(PLUGINS_DIR)/$(AWS_PROVIDER_FILE_NAME)
Matt Gowie avatar
Matt Gowie
PLUGINS_DIR = ~/.terraform.d/plugins
AWS_PROVIDER_FILE_NAME = terraform-provider-aws_$(AWS_PROVIDER_VERSION)_x4

2020-11-17

David avatar
David

Hi all. Quick sanity check on cloudposse/terraform-aws-s3-bucket: is there really no way to enable the lifecycle rule for aborting incomplete multipart uploads without also enabling full object deletion? I have buckets that are actively used where I want to keep objects around permanently, but want to ensure orphaned multiparts are cleaned up. Under v0.25.0, the object expiration rule is mandatory, while everything else can be disabled. Surely I’m misreading this somehow.

PePe avatar

I’m pretty sure I added that in a long time ago

David avatar
David

This sure looks like it’s mandatory for expiration to always be on… https://github.com/cloudposse/terraform-aws-s3-bucket/blob/master/main.tf#L51-L53

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

PePe avatar

yes, but there’s no reason it couldn’t be in a dynamic block. You could send a PR

PePe avatar

or set expiration to 0, which I think means unlimited
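
The dynamic-block change suggested above could look roughly like this. This is a sketch, not the module’s actual code, and expiration_days is a hypothetical input; with for_each over an empty list, the block is omitted entirely:

```hcl
# Inside the module's lifecycle_rule block (sketch):
dynamic "expiration" {
  # Render the expiration block only when a positive day count is given.
  # var.expiration_days is a hypothetical variable, not the module's real input.
  for_each = var.expiration_days > 0 ? [var.expiration_days] : []
  content {
    days = expiration.value
  }
}
```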

David avatar
David

Yeah, a dynamic seems more idiomatic with the rest of the module. Glad to know I wasn’t misreading things. Surprised me that others haven’t hit this before. Must not be a lot of users of lifecycle rules. :)

PePe avatar

we use it

PePe avatar

but we always set expiration

Jaeson avatar
Jaeson

I’d like to create multiple identical instances in TF 12 using a for loop and reference those instances in another loop to attach them to a LB. I’m having a lot of trouble figuring out how to get this pattern working. I’ve tried creating with count and references with for loops with local variables, a direct for_each = aws_instance.my_instances under the resource, and really, many more different things that I can’t think of now because I’ve been staring at this problem too long. I haven’t been able to find anything using Google except for much more complex patterns, where the instances are defined by static maps, and sometimes not even homogeneous. I feel like this should be a much simpler pattern …

loren avatar
loren

do you have some code you can share? it’s easier to update example code to point you in the right direction than to come up with something totally from scratch

loren avatar
loren

though, if you look at this thread, we’re basically talking about exactly this use case… https://sweetops.slack.com/archives/CB6GHNLG0/p1605395904242600

Please can I get help to find the best way to link two resources (aws Log_group & KMS_Key) with a for_each loop within each of them - More details inside Thread

Jaeson avatar
Jaeson

That’s fair. This is how I’m doing it currently with count:

resource "aws_instance" "advprx_instance" {
  count = var.proxy_counts[var.environment]
  ami                         = var.override_ami != "" ? var.override_ami : var.AMI_map[var.environment]
  instance_type               = var.instance_type
  vpc_security_group_ids      = var.security_group_map[var.environment]
  subnet_id                   = var.subnet_map[var.environment]
  associate_public_ip_address = "false"
  key_name                    = var.keypair_name_map[var.environment]
  iam_instance_profile        = aws_iam_instance_profile.win-ec2-profile.name
  user_data                   = base64encode(data.template_file.advprx_user_data.rendered)

  credit_specification {
    cpu_credits = "standard"
  }

  # Give instance unique name
  tags = merge(local.proxy_tags, { "Name" = "${var.host_name}-${count.index + var.override_start_index + 1}" } )
}


# Attach the generated instances to the appropriate target group
resource "aws_lb_target_group_attachment" "advprx_instance_alb_attachment" {
  count            = var.proxy_counts[var.environment]
  # how do I use for_each or for for this?
  target_group_arn = local.target_group_arn
  # target_id        = aws_instance.advprx_instance[each.key].id 
  target_id        = aws_instance.advprx_instance[count.index].id
  port             = 80
}
loren avatar
loren

and what is the object structure in var.proxy_counts?

loren avatar
loren

literally just a number?

loren avatar
loren

if you are creating a literal multiple of identical things, then count is a fine approach. for_each makes more sense when each item has some unique identifier as one of its properties. in your code, we could construct that identifier for the “Name” tag, but it would still be based on the index and so may not avoid classic limitations of count

Jaeson avatar
Jaeson

Yes, it’s literally a number.

Jaeson avatar
Jaeson

Oh. It wouldn’t avoid the limitations of count? That’s what I was after. Specifically this limitation: if I decrease the count, or increase the count, I didn’t want resources recreated.

loren avatar
loren

increasing the count by one will add one instance, decreasing the count by one will destroy one instance (the last one, you can’t pick)

Jaeson avatar
Jaeson

Ok. Fair enough. Thanks!

loren avatar
loren

fwiw, here’s one way of converting to for_each, but using a custom name per instance…

locals {
  instance_inputs = [
    {
      name = "foo"
    },
    {
      name = "bar"
    },
  ]
  
  instance_defaults = {
    ami                         = var.override_ami != "" ? var.override_ami : var.AMI_map[var.environment]
    instance_type               = var.instance_type
    vpc_security_group_ids      = var.security_group_map[var.environment]
    subnet_id                   = var.subnet_map[var.environment]
    associate_public_ip_address = "false"
    key_name                    = var.keypair_name_map[var.environment]
    iam_instance_profile        = aws_iam_instance_profile.win-ec2-profile.name
    tags                        = local.proxy_tags
    user_data                   = base64encode(data.template_file.advprx_user_data.rendered)
    target_group_arn            = local.target_group_arn
    port                        = 80
  }
  
  instances = [for instance in local.instance_inputs : merge(local.instance_defaults, instance)]
}

resource "aws_instance" "advprx_instance" {
  for_each = { for instance in local.instances : instance.name => instance }

  ami                         = each.value.ami
  instance_type               = each.value.instance_type
  vpc_security_group_ids      = each.value.vpc_security_group_ids
  subnet_id                   = each.value.subnet_id
  associate_public_ip_address = each.value.associate_public_ip_address
  key_name                    = each.value.key_name
  iam_instance_profile        = each.value.iam_instance_profile
  user_data                   = each.value.user_data

  credit_specification {
    cpu_credits = "standard"
  }

  # Give instance unique name
  tags = merge(each.value.tags, { "Name" = each.key } )
}

resource "aws_lb_target_group_attachment" "advprx_instance_alb_attachment" {
  for_each = { for instance in local.instances : instance.name => instance }

  target_group_arn = each.value.target_group_arn
  target_id        = aws_instance.advprx_instance[each.key].id
  port             = each.value.port
}
Jaeson avatar
Jaeson

great, thanks!

loren avatar
loren

i also set it up to let you override any property by specifying it in the instance_inputs object…

Jaeson avatar
Jaeson

Yes, this is awesome. Thanks again!

:--1:1
Tomek avatar
Tomek

:wave: Is there a way to define the session expiration time for the role an ECS task assumes in terraform? The AWS docs state that the default is 6 hours. max_session_duration for aws_iam_role only sets the allowed max session but it looks like when changing that to 12 hours, the ECS task’s role still uses the default 6 hour session duration
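
For reference, the only knob Terraform exposes here is max_session_duration on the role itself, in seconds, up to the IAM ceiling of 12 hours. A sketch with placeholder names; note this only raises the allowed maximum, it does not force ECS to request longer sessions:

```hcl
resource "aws_iam_role" "task" {
  name                 = "ecs-task-role" # hypothetical name
  max_session_duration = 43200           # 12 hours, the IAM maximum (seconds)
  assume_role_policy   = data.aws_iam_policy_document.ecs_tasks_assume.json
}
```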

PePe avatar

There are timeouts for resource creation, but the IAM role has a session duration that defaults to 1 hour; I think 12 hours is the max it can be raised to

PePe avatar

That is in IAM not terraform

Joe Niland avatar
Joe Niland

What’s the reason you want to enforce this timeout?

Tomek avatar
Tomek

The ECS task itself generates an S3 presigned URL that it passes on to a worker outside AWS. That work can take up to 12 hours, and if the session that generated the presigned URL times out, the URL becomes invalid.

V avatar

Experiences setting up innovative Terraform Workspaces using docker..

2020-11-16

charlespogi avatar
charlespogi
2020/11/15 15:03:18 [ERROR] eval: terraform.evalReadDataRefresh, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: d2c30533-f6da-40e2-925c-58b5404cb356
2020/11/15 15:03:18 [ERROR] eval: terraform.EvalSequence, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: d2c30533-f6da-40e2-925c-58b5404cb356
Error: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: 2a7b0234-0255-496d-8deb-91877b5aad94
Error: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: d2c30533-f6da-40e2-925c-58b5404cb356
2020/11/15 15:03:18 [ERROR] eval: terraform.evalReadDataRefresh, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: 2a7b0234-0255-496d-8deb-91877b5aad94
2020/11/15 15:03:18 [ERROR] eval: terraform.EvalSequence, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: 2a7b0234-0255-496d-8deb-91877b5aad94
2020-11-15T15:03:18.578Z [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
Cleaning up file based variables
00:00
ERROR: Job failed: exit status 1 

my aim is to use the AMI produced by Packer in Terraform

charlespogi avatar
charlespogi
{
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "###",
      "secret_key": "###",
      "ami_name": "EBS-{{isotime | clean_resource_name}}",
      "temporary_iam_instance_profile_policy_document": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "ec2:RunInstances",
              "ec2:AssociateIamInstanceProfile",
              "ec2:ReplaceIamInstanceProfileAssociation"
            ],
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*"
          }
        ]
      },
      "region": "us-east-1",
      "ami_regions": ["us-east-1"],
      "instance_type": "t2.micro",
      "ssh_keypair_name": "SysOps2020",
      "ssh_private_key_file": "/home/ubuntu/keys/SysOps2020.pem",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "amzn2-ami-hvm-2.0.*-x86_64*",
          "root-device-type": "ebs"
        },
        "owners": ["amazon"],
        "most_recent": true
      },
      "ssh_username": "ec2-user"
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "scripts",
      "destination": "/home/ec2-user/"
    },
    {
      "type": "file",
      "source": "code",
      "destination": "/home/ec2-user/"
    },
    {
      "type": "shell",
      "script": "scripts/install.sh"
    },
    {
      "type": "shell",
      "script": "scripts/cleanup.sh"
    }
  ]
}

any tips on what i missed?

Steve Wade avatar
Steve Wade

is there a way of having an if block inside a resource?

Steve Wade avatar
Steve Wade

i basically want to switch the value of

logging {
    target_bucket = "${var.org_namespace}-${var.environment}-access-logs"
  }

depending upon the value of a variable

tim.j.birkett avatar
tim.j.birkett

Take a look at dynamic blocks: https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks you’d be able to do it with that.

Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

:--1:1
Steve Wade avatar
Steve Wade

yeh

loren avatar
loren

here’s an example from the config aggregator i was just working on:

loren avatar
loren
resource aws_config_configuration_aggregator this {
  name = var.name
  tags = merge({ Name = var.name }, var.tags)

  dynamic account_aggregation_source {
    for_each = var.account_aggregation_source != null ? [var.account_aggregation_source] : []
    content {
      account_ids = account_aggregation_source.value.account_ids
      all_regions = account_aggregation_source.value.all_regions
      regions     = account_aggregation_source.value.regions
    }
  }

  dynamic organization_aggregation_source {
    for_each = var.organization_aggregation_source != null ? [var.organization_aggregation_source] : []
    content {
      all_regions = organization_aggregation_source.value.all_regions
      regions     = organization_aggregation_source.value.regions
      role_arn    = organization_aggregation_source.value.role_arn
    }
  }
}
1
EvanG avatar
EvanG

hmm that’s very creative

1
loren avatar
loren

complex object types can be null‘d! very convenient for this kind of thing

loren avatar
loren

here are the variable defs that work with that config…

variable account_aggregation_source {
  description = "Object of account sources to aggregate"
  type = object({
    account_ids = list(string)
    all_regions = bool
    regions     = list(string)
  })
  default = null
}

variable organization_aggregation_source {
  description = "Object with the AWS Organization configuration for the Config Aggregator"
  type = object({
    all_regions = bool
    regions     = list(string)
    role_arn    = string
  })
  default = null
}
ayr-ton avatar
ayr-ton

Is anyone using this with terragrunt? https://www.infracost.io/ I’m trying to use the tf-state version, but terragrunt stores state in different directories for each module (and I like it), so I’m trying to figure out a way of consolidating all states into a single one just for infracost (this is the main question, actually).

Cost estimates for Terraform | Infracost attachment image

Cost estimates for Terraform - in your pull requests

1

2020-11-15

Mikael Fridh avatar
Mikael Fridh

No way to use a data source value in the mysql provider, right? … if I try it seems the value is just null.

Alex Jurkiewicz avatar
Alex Jurkiewicz

that’s right. If you check the github issues this is a known bug. Specifically the bug is that all the values have a default, so it seems to work but doesn’t really

Alex Jurkiewicz avatar
Alex Jurkiewicz

We wanted to create a MySQL server, then create users/databases within it, all in one Terraform configuration. This is not possible, and you have to split your configuration in two

Mikael Fridh avatar
Mikael Fridh

Actually I have to do the opposite. I have to combine it… So that I create the initial RDS cluster and all services that need to use it from the same configuration

Mikael Fridh avatar
Mikael Fridh

I wanted to fetch the endpoint and master username/password from Parameter Store to be able to split the creations …

Alex Jurkiewicz avatar
Alex Jurkiewicz

i suggest you provide these values as variable inputs. The system you use that runs Terraform can load the parameter and pass in the data
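
A sketch of that pattern, with hypothetical variable names; whatever wrapper invokes Terraform fetches the secrets (e.g. from SSM) and passes them in as -var values, so the provider only sees values known before apply:

```hcl
variable "mysql_endpoint" { type = string }
variable "mysql_username" { type = string }
variable "mysql_password" { type = string }

# Provider configured purely from input variables, which are known
# before the configuration is applied, satisfying the restriction in
# the provider configuration docs.
provider "mysql" {
  endpoint = var.mysql_endpoint
  username = var.mysql_username
  password = var.mysql_password
}
```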

:--1:1
Mikael Fridh avatar
Mikael Fridh

Yeah, that’s the method I guess. These things always make me want to try Pulumi…

Mikael Fridh avatar
Mikael Fridh

Ok, I did some more digging.

You can use dynamic data in the provider. But it only works at initial creation. In the Refresh pass it does NOT populate the values at all.

Mikael Fridh avatar
Mikael Fridh

actually… seems like it’s maybe possible if I don’t use a module for the data reading… (I was using the cloudposse ssm parameter module)

Alex Jurkiewicz avatar
Alex Jurkiewicz

I had issues loading the data from terraform remote state data source.

I want to advise you not to go down this path. It’s a rabbit hole and is not a workflow well supported by Terraform (configuring providers with dynamic data). If you read the Terraform docs, it’s specifically called out as something to avoid:
You can use expressions in the values of these configuration arguments, but can only reference values that are known before the configuration is applied. This means you can safely reference input variables, but not attributes exported by resources (with an exception for resource arguments that are specified directly in the configuration).

Provider Configuration - Configuration Language - Terraform by HashiCorp

Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Fresh off the press: https://github.com/cloudposse/terraform-yaml-config

We’re using YAML more and more to define configuration in a portable format that we use with terraform. This allows us to define that configuration from both local and remote sources (via https). For example, we use it for opsgenie escalations, datadog monitors, SCP policies, etc.

cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

4
4
1
Chris Fowles avatar
Chris Fowles

terraform >= 0.12 has actually just become a really good structured config conversion tool

Chris Fowles avatar
Chris Fowles

yaml => json => hcl etc
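
For example, the built-in functions make that conversion a one-liner (the file name here is hypothetical):

```hcl
locals {
  # YAML file -> native HCL value (maps/lists), then back out as JSON
  settings      = yamldecode(file("${path.module}/settings.yaml"))
  settings_json = jsonencode(local.settings)
}
```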

2020-11-14


Gareth avatar
Gareth

Please can I get help to find the best way to link two resources (aws Log_group & KMS_Key) with a for_each loop within each of them - More details inside Thread

Gareth avatar
Gareth

Good evening. Please can somebody help confirm whether this is the best approach to what I’m trying to achieve? It works, but I’ve seen similar things in past versions of TF with count that appear to work the same way and cause real issues when the source data is changed or reordered. I’m trying to use the below collection of objects as a data structure throughout my code; this is just a stripped-back version of a much bigger structure. The aim is: for each of the collect_list entries, create a log_group based on the keys and values, and dynamically decide whether it needs a custom KMS key. If true, create one and then link the key ID to the log_group.

logs_map = {
  files = {
    collect_list = [
      {
        "file_path"          = "C:\\cms\\logs\\iis\\W3SVC1\\*"
        "log_group_name"     = "cms/{environment}/iis"
        "log_stream_name"    = "{instance_id}_{local_hostname}"
        "retention_in_days"  = 7
        "use_custom_kms_key" = true
        "auto_removal"       = true
      },
      {
        "file_path"          = "C:\\maint\\logs\\iis\\W3SVC2\\*"
        "log_group_name"     = "maint/{environment}/iis"
        "log_stream_name"    = "{instance_id}_{local_hostname}"
        "retention_in_days"  = 7
        "use_custom_kms_key" = true
        "auto_removal"       = true
      },
      {
        "file_path"          = "C:\\cms\\website\\App_Data\\Logs\\*"
        "log_group_name"     = "cms2/{environment}/umbraco"
        "log_stream_name"    = "{instance_id}_{local_hostname}"
        "retention_in_days"  = 7
        "use_custom_kms_key" = true
        "auto_removal"       = true
      },
    ]
  }
}

resource "aws_cloudwatch_log_group" "loggroups" {
 for_each = { for key, value in var.logs_map.files.collect_list: key => value }
  name = replace(each.value.log_group_name, "{environment}", var.environment)
  retention_in_days =  each.value.retention_in_days
  kms_key_id = each.value.use_custom_kms_key == true ? aws_kms_key.log_group[each.key].arn : ""
 # tags = var.tags
}

resource "aws_kms_key" "log_group" {
  for_each = { for key, value in var.logs_map.files.collect_list: key => value if value["use_custom_kms_key"] == true }

  description             = "KMS key used to encrypt log files. One per log group. ${element(split("/", each.value.log_group_name), 0)}"
  deletion_window_in_days = 30
  enable_key_rotation     = true
  #tags = merge({ Name = "${element(split("/", each.value.log_group_name), 0)}-default-kms" }, var.tags)
}

While this works, it feels far too risky and susceptible to error if the items in the list change order. Am I mistaken in thinking that terraform 0.13+ changed the way it stores items in the state? Rather than being based on an array index:

aws_cloudwatch_log_group.loggroups["0"]
aws_kms_key.log_group["0"]

it used a name as a unique identifier instead, presumably something like:

aws_cloudwatch_log_group.loggroups["mylog"]
aws_kms_key.log_group["myotherlog"]

so that when used within a for_each loop, the resources created would be easier to refer to later on, and it wouldn’t matter if the order changed. Am I completely off with my resource configuration, or even my data layout? Should I be doing the above in a different way? Thank you for taking the time to read this. Sorry it was so long.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you use count on a list, and the items in the list change, then TF will try to recreate everything (this could be ok in some cases, but not acceptable in other cases)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for_each does not have that problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use for_each whenever possible (especially when you create many instances of the same resource)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, resources with count are lists, so you have to use a list index to get one item

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

resource with for_each are maps, so you have to use a map key to get one item

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Resources - Configuration Language - Terraform by HashiCorp

Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.

Gareth avatar
Gareth

Hi Andriy thank you for taking the time to reply and for the information.

I get count and for_each are different and I’ve used them for a while but I’ve never need to link between the resources like I’ve done in my example.

Given what you’ve said, is it fair to assume that the each.key of the newly created KMS will always be in sync with the each value of the log group?

Sorry if it’s obvious, I’m a little paranoid that I set this up and somebody comes along and adds a new value to my data structure e.g log group number 4 and all of a sudden TF updates the existing resources to use the wrong keys.

To put you in the spot. Would you be happy doing it as I have or would you restructure in some way.

Again thanks in advance.

loren avatar
loren

i think your concern is coming from an ill-advised for_each expression…

for key, value in var.logs_map.files.collect_list: key => value

key is not an actual key in a map here. it is the index of the list. using the index of the list in this way makes the for_each subject to all the same problems as count

loren avatar
loren

you could instead do something like:

for item in var.logs_map.files.collect_list: item.file_path => item

note the use of item.file_path to create the key of the map. this becomes the resource label/identifier in the tfstate. you don’t have to use file_path, just any attribute that is unique over all the items. in all lists of objects, i often require a name attribute that must be unique, and use that as the for_each key

Gareth avatar
Gareth

Okay, I’m going to need more coffee. So, are you suggesting that a better data structure would be to do away with the list, which I used to symbolise each log_group (i.e. one item for each log I want), and change it to a map/object so I have a key rather than the numerical index? That would help with the matching later, to ensure “key A” matches “log group A”.

Sorry Loren, if I missed your point.

loren avatar
loren

no change to the data structure is necessary, just the for_each expression

loren avatar
loren
 for_each = { for item in var.logs_map.files.collect_list: item.file_path => item }
Gareth avatar
Gareth

Okay, I’ll go have a go. Would that be in both log group and the kms key or just one or the other ?

loren avatar
loren

both, they should use the same key so you can index into the map the way you have it: kms_key_id = each.value.use_custom_kms_key == true ? aws_kms_key.log_group[each.key].arn : ""

:--1:1
Gareth avatar
Gareth

thank you, I’ll go do some more testing.

loren avatar
loren

using file_path as the key, this makes your resource address look like:

aws_cloudwatch_log_group.loggroups["C:\\cms\\logs\\iis\\W3SVC1\\*"]
loren avatar
loren

if you do not like that resource address, then you might want to update the data structure with a new attribute that you use as the key… e.g.

logs_map = {
  files = {
    collect_list = [
      {
        "name"               = "cms_w3svc1"
        "file_path"          = "C:\\cms\\logs\\iis\\W3SVC1\\*"
        "log_group_name"     = "cms/{environment}/iis"
        "log_stream_name"    = "{instance_id}_{local_hostname}"
        "retention_in_days"  = 7
        "use_custom_kms_key" = true
        "auto_removal"       = true
      },
      ...
    ]
  }
}

which adds a unique name attribute to each item. and then your for_each like:

 for_each = { for item in var.logs_map.files.collect_list: item.name => item }

and your resource address would look like:

aws_cloudwatch_log_group.loggroups["cms_w3svc1"]
:--1:2
loren avatar
loren

hopefully that better demonstrates the relationship between the for_each expression and your resource address…

Gareth avatar
Gareth

@loren you have firmly hammered the nail into my thick skull; it’s finally clicked. Thank you for taking the time to answer the question, and my thanks also to @Andriy Knysh (Cloud Posse)

Gareth avatar
Gareth

I can clearly see now that the naming is working and that it is indeed no longer based on an index count; it was my own doing that named it that way.

loren avatar
loren

i’ve converted a lot of code to for_each, it is totally worth it. good luck!

:--1:2
Pierre-Yves avatar
Pierre-Yves

Hello, I have read the whole thread and want to say thanks for taking the time to address the question and explain everything. I ran into the same need previously and solved it by changing the data structure to a map of maps

:--1:1
loren avatar
loren

managing the map of objects directly certainly works also (instead of converting the list to a map with the for expression). i haven’t quite figured out why i prefer a list of objects, but something about it just works better for me
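
For comparison, the map-of-objects shape Pierre-Yves describes could look like this (attributes trimmed for illustration); the map keys double as the stable for_each identifiers:

```hcl
variable "log_groups" {
  type = map(object({
    retention_in_days  = number
    use_custom_kms_key = bool
  }))
}

resource "aws_cloudwatch_log_group" "this" {
  for_each          = var.log_groups
  name              = each.key # map key becomes the address, e.g. ["cms_w3svc1"]
  retention_in_days = each.value.retention_in_days
}
```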

2020-11-13

btai avatar

Question for those of you that use Terraform Cloud: is there a way to run the remote applies against AWS using AWS profile instead of using aws creds environment variables yet?

btai avatar

@ to clarify this is to run remotely on terraform cloud (previously named terraform enterprise)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What about cloud agents?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-kubernetes-tfc-cloud-agent

Provision a Terraform Cloud Agent on an existing Kubernetes cluster. - cloudposse/terraform-kubernetes-tfc-cloud-agent

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using this, you can deploy it to Kubernetes with IRSA

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This way it also runs in your VPC and can manage things like database users with the postgres/mysql providers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(this is an alternative to “using aws creds environment variables yet”, not necessarily a solution to using “profiles”, however, I think that could also work if you mount a configmap with the profile settings, since terraform uses the AWS SDK, it should “just work” )

rei avatar

@btai ok, got it


Mikael Fridh avatar
Mikael Fridh

Anyone here using vscode for Terraform? Do any of you have an actual working extension for popping up contextual docs? I remember the good old days when I had it… ever since the language server came in, I’ve never had it.

1
Matt Gowie avatar
Matt Gowie

If somebody has this working… I would love to hear about it.

Chris Wahl avatar
Chris Wahl

I’ve had to disable the Terraform extension’s LS. The error throwing was getting annoying.

:--1:1
Alex Jurkiewicz avatar
Alex Jurkiewicz

They’ve mostly fixed the error spam in the past couple of weeks. But yeah, it’s still borderline useless. I don’t get completions beyond the built in vscode “other tokens from this file”

:--1:1
Alex Jurkiewicz avatar
Alex Jurkiewicz

It seems to assume your terraform code is in the root directory, and follows modern standards exactly (eg using the 0.13 provider version constraints, etc)

Mikael Fridh avatar
Mikael Fridh

My colleague was informing me today how well his terraform things in intellij was working.. gee thanks

antonbabenko avatar
antonbabenko

For me this is the main (if not the only) reason to use IntelliJ. There are some plugins for VSCode for Terraform and Terragrunt ( https://github.com/4ops/vscode-language-terraform ), but they are rather limited and provide only static completion.

4ops/vscode-language-terraform

Adds support for the Terraform configuration language to Visual Studio Code - 4ops/vscode-language-terraform

1
Chris Wahl avatar
Chris Wahl

I’ve also given IntelliJ a stab for these reasons. It beats having TF docs open in a browser or parsing the provider schema.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I just tried vscode again this morning and it managed to autocomplete (in order) lifecycle, ignore_changes = [ name ]. Give it another try, everyone!

Mikael Fridh avatar
Mikael Fridh

In a single repo workspace?

Mikael Fridh avatar
Mikael Fridh

Most of my auto completion is via dictionary history and not due to functioning language support. Omg look at this though!!? https://github.com/hashicorp/terraform-ls/blob/master/CHANGELOG.md

hashicorp/terraform-ls

Terraform Language Server. Contribute to hashicorp/terraform-ls development by creating an account on GitHub.

Mikael Fridh avatar
Mikael Fridh

0.10.0 (Unreleased)

FEATURES:

- Support module wide diagnostics (#288)
- Provide documentation on hover (#294)

ENHANCEMENTS: …

Alex Jurkiewicz avatar
Alex Jurkiewicz

I was testing an internal module repo. So a repo with terraform files in the root directory, and basic terraform init having been run

Alex Jurkiewicz avatar
Alex Jurkiewicz

I just tested a monorepo which has some terraform configurations in sub-directories, and no dice there

Mikael Fridh avatar
Mikael Fridh
08:48:55 AM

man it just keeps getting worse here … falling like a deck of cards hehe

loren avatar
loren

the vscode terraform-ls just added support for docs on hover… https://github.com/hashicorp/terraform-ls/releases/tag/v0.10.0

Release v0.10.0 · hashicorp/terraform-ls

FEATURES: Support module wide diagnostics (#288) Provide documentation on hover (#294) ENHANCEMENTS: Add support for upcoming Terraform v0.14 (#289) completion: Prompt picking type of provider/d…

Matt Gowie avatar
Matt Gowie
11:15:53 PM

It’s not great… Like cmon gimme a link please.

Matt Gowie avatar
Matt Gowie

Haha I shouldn’t rag on them. They’re trying. I’m sure that’ll get better with time.

loren avatar
loren

Haha yeah that’s not so helpful

Alex Jurkiewicz avatar
Alex Jurkiewicz

still doesn’t work for monorepos. In fact it’s even worse – crashes repeatedly if you make the panels appear. It’s been months now so I guess they have no interest in supporting this

Mikael Fridh avatar
Mikael Fridh

Hey, do you even test your modules?

Error: error creating RDS cluster: InvalidParameterCombination: Aurora Serverless DB clusters are always encrypted at rest. Encryption can't be disabled.
        status code: 400, request id: 6f8ac312-53d1-4d12-9602-e5fb64cc102f
Mikael Fridh avatar
Mikael Fridh

Maybe that warning about using the master branch was a real one? I’m shocked.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Mikael Fridh avatar
Mikael Fridh

yup . Am just adding a serverless ? conditional on it …

Mikael Fridh avatar
Mikael Fridh

I think I found another bug in the example too …

Mikael Fridh avatar
Mikael Fridh

family = "mysql5.7" should change to: family = "aurora-mysql5.7"

PePe avatar

we build serverless using this module without issues

PePe avatar

version 0.35.0

Mikael Fridh avatar
Mikael Fridh

Oh, I can too if I explicitly set those variables. So the bugs are very minor, no problemo.

Mikael Fridh avatar
Mikael Fridh

howdy PePe btw

PePe avatar

hello

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we automatically test on AWS only examples/complete

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the rest of the examples are for reference only

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for finding the bug with the encryption for serverless

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we need to update that example (set the var to true)

Mikael Fridh avatar
Mikael Fridh

like I said, no biggie, but can’t hurt to fix those!

PePe avatar

come on @Mikael Fridh we want a PR

1
Mikael Fridh avatar
Mikael Fridh

yeah I’ll do that later.

Mikael Fridh avatar
Mikael Fridh

technically the correct thing could be to have that value not set… would that mean passing an explicit null in the case where this is a conditional?

PePe avatar

if serverless set to null if not set to current default
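
A minimal sketch of that pattern in HCL (the names var.serverless_enabled and var.storage_encrypted here are illustrative assumptions, not necessarily the module’s actual interface):

```hcl
# Pass null when serverless so the provider applies its own default
# (Aurora Serverless is always encrypted at rest); otherwise honor the var.
resource "aws_rds_cluster" "default" {
  # ...
  storage_encrypted = var.serverless_enabled ? null : var.storage_encrypted
}
```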

PePe avatar

merged and released @Mikael Fridh

1
Mikael Fridh avatar
Mikael Fridh

There’s no build-harness helper for the context.tf stuff?

Mikael Fridh avatar
Mikael Fridh

nevermind… hacking some mods for it

Padarn avatar
Padarn

Has anyone used terraform with the eks efs driver? I’m having a bit of trouble making the volume claim example here: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html work with the kubernetes_volume_claim resource (thread)

Amazon EFS CSI driver - Amazon EKS

The Amazon EFS Container Storage Interface (CSI) driver provides a CSI interface that allows Kubernetes clusters running on AWS to manage the lifecycle of Amazon EFS file systems.

Padarn avatar
Padarn

I’ve got this:

resource "kubernetes_persistent_volume" "sona_efs" {
  metadata {
    name = local.efs_name
  }
  spec {
    access_modes = ["ReadWriteMany"]
    capacity = {
      storage = "50Gi"
    }
    volume_mode = "Filesystem"
    storage_class_name = "efs-sc"
    persistent_volume_reclaim_policy = "Retain"
    persistent_volume_source {

    }
    csi = {
      driver = "efs.csi.aws.com"
      volume_handle = aws_efs_file_system.efs_file_system.id
    }
  }
}

but it doesn’t like the csi block (I also tried putting it inside of persistent_volume_source

Padarn avatar
Padarn
persistent_volume_source configuration for Terraform kubernetes_persistent_volume when using EFS

I’m using EFS as a CSI driver in a k8s cluster. I would like to use Terraform to create a PV that will use the efs storage class. I verified that I can create the PV &quot;manually&quot;. I would l…

Padarn avatar
Padarn
hashicorp/terraform-provider-kubernetes

Terraform Kubernetes provider. Contribute to hashicorp/terraform-provider-kubernetes development by creating an account on GitHub.

Padarn avatar
Padarn

I solved it - answered in the stackoverflow post
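
For anyone reading the archive, the usual fix (a sketch, not verified against the linked answer): csi is a nested block inside persistent_volume_source rather than an attribute assigned with `=`, and it requires a kubernetes provider version that supports the CSI volume source:

```hcl
resource "kubernetes_persistent_volume" "sona_efs" {
  metadata {
    name = local.efs_name
  }
  spec {
    access_modes = ["ReadWriteMany"]
    capacity = {
      storage = "50Gi"
    }
    volume_mode                      = "Filesystem"
    storage_class_name               = "efs-sc"
    persistent_volume_reclaim_policy = "Retain"
    # csi is a block (no "="), nested inside persistent_volume_source
    persistent_volume_source {
      csi {
        driver        = "efs.csi.aws.com"
        volume_handle = aws_efs_file_system.efs_file_system.id
      }
    }
  }
}
```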


2020-11-12

Haroon Rasheed avatar
Haroon Rasheed

Hi, I was trying to create a Kubernetes Ingress resource using Terraform… I don’t see an option to specify pathType (Prefix, Exact, etc.) inside the kubernetes_ingress block… any idea on how to do that?

Mikhail Naletov avatar
Mikhail Naletov

@Erik Osterman (Cloud Posse) hi! I always wanted to ask you one thing. Why don’t you use terraform registry source in cloudposse modules instead of specifying git https url?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haha - mostly since when we started it wasn’t available

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but the reason I laugh, is this literally came up at standup today

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re probably going to start switching things over.

Mikael Fridh avatar
Mikael Fridh

What’s the benefit?

Mikhail Naletov avatar
Mikhail Naletov

Hurrah

Mikhail Naletov avatar
Mikhail Naletov

IMHO it’s a bit more comfortable to read in code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think our module stats will go through the roof once we do that everywhere

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Mikhail Naletov avatar
Mikhail Naletov

2020-11-11

Flávio Moringa avatar
Flávio Moringa

Hi, regarding my issue with the terraform-aws-cloudfront-s3-cdn module I found the issue:

Flávio Moringa avatar
Flávio Moringa

For the redirect_all_requests_to option to work, I also need to set the website_enabled = true variable… But the documentation does not say that.
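
A minimal sketch of the combination being described (module inputs only; the source ref and the redirect target are placeholders):

```hcl
module "redirect_cdn" {
  source = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=tags/<version>"

  # redirect_all_requests_to only takes effect when the bucket is
  # provisioned as an S3 website endpoint:
  website_enabled          = true
  redirect_all_requests_to = "https://example.com"
}
```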

Flávio Moringa avatar
Flávio Moringa

Please update the documentation. I’ve created a bug report with it at: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/issues/111

redirect_all_requests_to option seems to do nothing · Issue #111 · cloudposse/terraform-aws-cloudfront-s3-cdn

Hi, I&#39;ve asked on the terraform channel on slack, but got no answer. I&#39;m creating a standard cloudfront distribution, with an s3 origin and all works ok. I then need to create a second clou…

Flávio Moringa avatar
Flávio Moringa

Thanks and keep up the good work

Release notes from terraform avatar
Release notes from terraform
04:54:10 PM

v0.14.0-rc1

0.14.0 (Unreleased)

NEW FEATURES:

Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.

terraform init will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. ( https://github.com/hashicorp/terraform/issues/26524 )

Initial integration of the provider dependency pinning work by apparentlymart · Pull Request #26524 · hashicorp/terraform

This follows on from some earlier work that introduced models for representing provider dependency &quot;locks&quot; and a file format for saving them to disk. This PR wires the new models and beha…

loren avatar
loren

Hooray!
Terraform will now support reading and writing all compatible state files, even from future versions of Terraform. This means that users of Terraform 0.14.0 will be able to share state files with future Terraform versions until a new state file format version is needed. We have no plans to change the state file format at this time.

RB avatar

finally

Matt Gowie avatar
Matt Gowie

Yeah, this will be interesting. I wonder if the recommended approach of pinning an explicit terraform version in root modules will go away with v14

RB avatar

i believe they are deprecating that terraform hcl block

RB avatar


The version argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. ([#26135](https://github.com/hashicorp/terraform/issues/26135))

hmmm, or perhaps just the version arg for providers. i can’t find the terraform hcl block referenced anywhere in the 0.14 or 0.15 changelog

configs: deprecate version argument inside provider configuration blocks by mildwonkey · Pull Request #26135 · hashicorp/terraform

The version argument is deprecated in Terraform v0.14 in favor of required_providers and will be removed in a future version of terraform (expected to be v0.15). The provider configuration document…

V M avatar

can drive letters be assigned in Terraform?

Joe Niland avatar
Joe Niland

You can call out to a script using remote exec (assuming you need to do this on a server)

V M avatar

Thank you. As suggested, using an Azure VM Extension resource with a custom PowerShell script.

2020-11-10

Dan avatar

hey guys

Dan avatar

is there a way to customize the metadata information using this module https://github.com/cloudposse/terraform-aws-ec2-autoscale-group ?

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

PePe avatar

if it’s not in the module variables then most probably no, but it’s easy to do with a dynamic block

PePe avatar

PRs are welcome

Flávio Moringa avatar
Flávio Moringa

Hi guys, I’m using the terraform-aws-cloudfront-s3-cdn terraform module for creating a cloudfront distribution… And I’ve managed to use for a standard distribution…

Flávio Moringa avatar
Flávio Moringa

But I also need to create a second distribution for a redirect, so I used the variable redirect_all_requests_to with the url where to redirect….

Flávio Moringa avatar
Flávio Moringa

all goes well, except the s3 bucket created is not configured as a redirect website… just a standard bucket, as when creating a standard cloudfront distribution… Am I missing something? Do I need to configure the s3 bucket myself as a redirect website after the module finishes creating the cloudfront distribution?

PePe avatar

mmm I have never done this but I guess you add the redirect rules in cloudfront instead of the bucket

Flávio Moringa avatar
Flávio Moringa

Actually no, I’ve done it by hand, and you have to do it in the bucket for the redirect to work… In the terraform module I thought when you add the redirect_all variable the bucket would be created with the redirect, but it seems it doesn’t… So I don’t see what that option actually does…

Flávio Moringa avatar
Flávio Moringa
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Flávio Moringa avatar
Flávio Moringa

And I’m using the following variables:

Flávio Moringa avatar
Flávio Moringa
Flávio Moringa avatar
Flávio Moringa

Any help would be appreciated… thanks

PePe avatar

Am I reading this right? Is it true Terraform does not support cloning an RDS cluster? For real??????

loren avatar
loren

what is it you’re reading?

loren avatar
loren

oh it’s an aurora feature

loren avatar
loren

interesting. i’ve used the snapshot and create workflow. clone is new to me

PePe avatar

clone is hours faster

loren avatar
loren

makes sense

PePe avatar

I clone a 600GB cluster in 3 min

loren avatar
loren

there is a linked pr implementing an underpinning piece… looks active as of a few days ago, so might get some traction on the clone issue… https://github.com/hashicorp/terraform-provider-aws/pull/7031

Support Aurora point-in-time restore by Gufran · Pull Request #7031 · hashicorp/terraform-provider-aws

Relates #5286 Changes proposed in this pull request: Added support for Aurora point in time restore and clone. Output from acceptance testing: $ make testacc TEST=./aws TESTARGS=&quot;-run=TestAc…

PePe avatar

ohhh I did not see that one

PePe avatar

if that gets merged that will solve my problem

loren avatar
loren

oh it wasn’t linked after all. two prs implementing the same feature. i found this one just by searching pulls for “clone”

loren avatar
loren

@PePe fyi aws provider v3.15.0 just dropped with support for the aurora rds point in time restore feature

loren avatar
loren


resource/aws_db_instance: Add restore_to_point_in_time argument and latest_restorable_time attribute (#15969)

https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.15.0

resource/db_instance: add restore to point in time support by anGie44 · Pull Request #15969 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a :–1: reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

PePe avatar

NO FING WAY…..

PePe avatar

I literally hacked my way to do this with local exec

PePe avatar

I have been testing it…….

loren avatar
loren

hate it when that happens lol

PePe avatar

I was about to merge the PR

PePe avatar

I guess I have more work to do

PePe avatar

thanks for letting me know @loren

loren avatar
loren

no prob!

PePe avatar

this is a half done implementation

PePe avatar

they do not allow you to specify the security group

PePe avatar

it defaults to the default vpc security group

loren avatar
loren

Dang. So close! Open a new issue I guess

PePe avatar

I’m testing it, the docs are not so clear but I was able to pass the SGs

PePe avatar

but I need to see if my data is there now

PePe avatar
resource "aws_rds_cluster" "clone_us_west_2" {
  count                           = var.clone_enabled ? 1 : 0
  cluster_identifier              = "${local.cluster_identifier_us_west_2}-clone"
  vpc_security_group_ids          = local.cluster_security_groups_ids_us_west_2
  db_subnet_group_name            = "${local.cluster_identifier_us_west_2}-clone"
  db_cluster_parameter_group_name = "${local.cluster_identifier_us_west_2}-clone"
  skip_final_snapshot             = true
  tags                            = local.complete_tags

  restore_to_point_in_time {
    source_cluster_identifier  = local.cluster_identifier_us_west_2
    restore_type               = "copy-on-write"
    use_latest_restorable_time = true
  }

  provider = aws.us_west_2
}

resource "aws_rds_cluster_instance" "clone_us_west_2" {
  count                = var.clone_enabled ? 1 : 0
  identifier           = "${local.cluster_identifier_us_west_2}-clone-1"
  cluster_identifier   = "${local.cluster_identifier_us_west_2}-clone"
  instance_class       = var.instance_type
  db_subnet_group_name = "${local.cluster_identifier_us_west_2}-clone"
  tags                 = local.complete_tags
  engine               = "aurora"
  engine_version       = "5.6.mysql_aurora.1.22.2"

  provider   = aws.us_west_2
  depends_on = [aws_rds_cluster.clone_us_west_2]
}
PePe avatar

it works

loren avatar
loren

Nice! Fwiw, instead of depends_on in the instance resource, can you pass through the cluster_identifier from the cluster instance?

PePe avatar

Yes, I think that is possible
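
That change would look roughly like this (a sketch based on the snippet above; only the line that changes is shown):

```hcl
resource "aws_rds_cluster_instance" "clone_us_west_2" {
  count = var.clone_enabled ? 1 : 0
  # Referencing the cluster resource creates an implicit dependency,
  # so the explicit depends_on is no longer needed.
  cluster_identifier = aws_rds_cluster.clone_us_west_2[0].id
  # ...
}
```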

aravind.kandalam498 avatar
aravind.kandalam498

hey guys i am using the https://github.com/cloudposse/terraform-aws-ec2-autoscale-group. I am following the example. I am able to plan it but getting the following error.

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

aravind.kandalam498 avatar
aravind.kandalam498
module.autoscale_group.aws_autoscaling_group.default[0]: Creating... Error: One of `id` or `name` must be set for `launch_template`

aravind.kandalam498 avatar
aravind.kandalam498

TF version i am running is 0.12.0 and i am also using the 0.5.0 of the module.

aravind.kandalam498 avatar
aravind.kandalam498

Has anybody else had a similar issue or know what might cause this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Did you try invoking the example from the examples/complete?

aravind.kandalam498 avatar
aravind.kandalam498

Yes

aravind.kandalam498 avatar
aravind.kandalam498

@Erik Osterman (Cloud Posse) That’s exactly what i did but i removed the VPC & Subnets from it and then ran it.

aravind.kandalam498 avatar
aravind.kandalam498

@Jeremy (Cloud Posse) error i am getting is this

module.autoscale_group.aws_autoscaling_group.default[0]: Creating... Error: One of `id` or `name` must be set for `launch_template`

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Where did you start? I mean, what is the root module, and what module is it calling?

aravind.kandalam498 avatar
aravind.kandalam498
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

aravind.kandalam498 avatar
aravind.kandalam498

calling the above module.

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

You are using 0.7.3 with Terraform 0.13?

aravind.kandalam498 avatar
aravind.kandalam498

Yes i tried that and then i went back to 0.5.0 with TF .12

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Are you setting mixed_instances_policy ? If so, to what?

aravind.kandalam498 avatar
aravind.kandalam498

well i was not planning to use mixed_instances_policy but this is what i had in the example

mixed_instances_policy = {
  instances_distribution = null
  override = [{
    instance_type     = "t3.2xlarge"
    weighted_capacity = null
  }]
}
Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

I could be wrong, but this seems to me to be an issue with your environment; in particular, cached modules, providers, and/or state. Clear out any .terraform or .module directories. Also, try module version 0.7.0 with the latest Terraform 0.13.x

aravind.kandalam498 avatar
aravind.kandalam498

sure, will try that again.

aravind.kandalam498 avatar
aravind.kandalam498

is there a version you recommend with TF .12?

aravind.kandalam498 avatar
aravind.kandalam498
Error: Unsupported Terraform Core version

  on .terraform/modules/autoscale_group.label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.autoscale_group.module.label (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0>)
does not support Terraform version 0.13.0.

Looks like 0.7.0 is using null-label that needs 0.12.0 and cant run with 0.13.0

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Try 0.7.1 with TF 0.13. or 0.6.0 with TF 0.12

aravind.kandalam498 avatar
aravind.kandalam498

will do

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Roll back all the way to 0.3.0 with TF 0.12 if you are still stuck.

aravind.kandalam498 avatar
aravind.kandalam498

k

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

We will fix the module if you find and isolate the bug, but so far the module looks good to me.

aravind.kandalam498 avatar
aravind.kandalam498

Thanks again for your help. i will try it out

aravind.kandalam498 avatar
aravind.kandalam498

@Jeremy (Cloud Posse) you were correct.

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Would you please be more specific? What fixed the issue for you?

aravind.kandalam498 avatar
aravind.kandalam498

Sure! I went ahead and deleted .terraform folder which had the modules as well. I then downloaded tf 0.12.29 using tfswitch and used 0.6.0 root module to build without the mixed_instance_policy.

1
aravind.kandalam498 avatar
aravind.kandalam498

I also verified modules were working with 0.12.29

2020-11-09

V M avatar
@Chris Fowles (build-and-launch.sh):

#!/bin/bash
AMI_ID=$(packer build -machine-readable packer.json | awk -F, '$0 ~ /artifact,0,id/ {print $6}')
echo "variable \"AMI_id\" { default = \"${AMI_ID}\" }" > amivar.tf
terraform apply

Use the shell script build-and-launch.sh, which will first build the AMI and then extract the AMI_ID. Put the extracted AMI_ID as a variable into amivar.tf, then run terraform apply.
charlespogi avatar
charlespogi

Thanks for this. I did try a similar approach, but it didn’t work; maybe what I did is just wrong. So what I did: in the packer json file I have "ami_name":"ebs_backed_ami_by_packer", and in variables.tf I have

variable "ami_name_filter" {
    description = "Filter to use to find the AMI by name"
    default = "EBS-*"
}

and created a separate ami.tf that have this

charlespogi avatar
charlespogi
data "aws_ami" "ami" {
  most_recent = true
  owners      = [var.ami_owner]

  filter {
    name   = "name"
    values = ["${var.ami_name_filter}*"]
  }
}

so eventually was able to use it at terraform

resource "aws_launch_configuration" "ASGlaunchconfig" {
    image_id = data.aws_ami.ami.id

it looks to work .

charlespogi avatar
charlespogi

Now my issue: when run in GitLab CI, I get this message: data.aws_ami.ami: Refreshing state... Error: UnauthorizedOperation: You are not authorized to perform this operation. status code: 403, request id: ebbbd4fe-097f-4f78-b3bd-9ab1abac9f49 Error: UnauthorizedOperation: You are not authorized to perform this operation. status code: 403, request id: cd24c1fd-47b4-47c2-a4b6-8da6371efa9f

Cleaning up file based variables 00:01 ERROR: Job failed: exit status 1

charlespogi avatar
charlespogi

I read somewhere that somehow I have not given packer enough permissions to let terraform use the AMI I created. I added this on the IAM user I used to exec packer and terraform:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "iam:*",
      "Resource": "*"
    }
  ]
}

charlespogi avatar
charlespogi

But I’m not sure I fully understand what it means. Not working either.

V M avatar

@Chris Fowles when you execute the “sh build-and-launch.sh”, you will see the last bit. then you ‘should see’ the AMI

V M avatar

@ please see my post(s) to @Chris Fowles hope it helps

David Napier avatar
David Napier

Just curious as I can’t find one, but does cloudposse have a repo for wordpress?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately not….

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we use kinsta.com!)

David Napier avatar
David Napier

Thanks for letting me know! Based on the pricing I’ve seen from hosted solutions, it seems like a great business to be in. xD

tim.j.birkett avatar
tim.j.birkett

It’s quite a tough business to be in. I used to run the infrastructure and hosting platform for ghost.org - managing 10’000s of application instances, databases and the customers that come along with them is interesting.

We’d built a scheduler, billing, routing and caching layer out of Ruby, NodeJS, LXC, MySQL and Redis that was, at times, quite funky. We usually got about 200 - 250 sites on a single instance. Databases were “sharded” ( blogId % 2 == 1 ? "db01" : "db02") over 2 sets of primary/secondary MySQL replicas…

About 1200 of the DBs were double encoded UTF8 in Latin1 which was fun to fix…

Business wise, there are many competitors including self-hosting and “1-click apps” like those on DigitalOcean. That’s not necessarily bad though as DigitalOcean love OSS and donate back to many of the apps that they provide.

tim.j.birkett avatar
tim.j.birkett

Then backups… don’t get me started on backups…

David Napier avatar
David Napier

@t_humphrey Thanks for that insight. That’s actually the site that led me to the question. Currently I run nowhere near that size of infrastructure, but on AWS am getting just under 100 sites on their own t2.micro instances for < $1 / instance, without reserved instances. The margin between that and Ghost’s basic package seems quite large..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Self hosting Wordpress is easy. Keeping it hardened and patched is what we pay for. If there’s ever any issue, they fix it and my weekend isn’t ruined cleaning up some hack :-)

David Napier avatar
David Napier

Just dropping a complaint, but I’m sure everyone feels it. The change in how for loops are handled between v0.12 & 0.13 is really frustrating and seems to have broken A LOT of functionality

Mikael Fridh avatar
Mikael Fridh

What difference is that exactly?

2
loren avatar
loren

second the request to clarify. done a lot of upgrades from 0.12 to 0.13 and haven’t (yet) seen a big problem with for expressions…

David Napier avatar
David Napier

Hmm.. I seem to be mistaken. The problem I was having was within the vpc module; the terraform-aws-alb example had pinned the reference to tags/0.8.1, and changing that to master seems to have cleared the error I was having.

chris avatar
chris

I am working through trying to set up a new infrastructure and have bumped into an issue. I am trying to set up chamber so I can use it as the secret store, but am having trouble finding how to do it with terraform >= 0.12.0. I used the reference-architecture to get it “working” but the other modules I am using have been upgraded to 0.12 so I would like to make this work as well

PePe avatar

setup chamber in which way?

PePe avatar

you mean setting a user to use chamber or something else?

chris avatar
chris

Yes. When using the reference architecture and creating the ‘child’ accounts it looks like the bucket and kms was setup for chamber but not the user. I am trying to add the user but not on the 0.11 versions that were used during the initial provisioning

chris avatar
chris

Here is what I am having some issues with… I used the reference-architecture repo to get started so everything was based on terraform 0.11. But since then I have set up EKS and ECR, and those repos are on TF 0.12+, so I modified my Dockerfile for the child stage to use the newer TF… so I am kind of version split and it is just awkward. Maybe I am just not doing it correctly either…

chris avatar
chris

I don’t want to have some of the directories in my conf/ to be TF 0.11 and some to be TF 0.12… that just seems wrong, but maybe that is expected at this point in time??

PePe avatar

I use chamber in project with tf 0.13 and 0.12

PePe avatar

I do not use the chamber user module

PePe avatar

I give access to the kms key to the ecs task/instance profile to be able to read the secrets

PePe avatar

that is how I do it

chris avatar
chris

Yeah makes sense… I am trying to use with codefresh so I don’t think that will work… I think i will have to also have a user as well

PePe avatar

that module is the one that is still in 0.11?

chris avatar
chris

It looks like it… I forked it and started trying to upgrade it but bumped into some issues quickly, so figured I would ask what others were doing before going too far down the rabbit hole. Terraform is not my strong suit yet; I’ve done the basics for a few years but nothing like how cloudposse is doing things, so I am still climbing the steep learning curve

PePe avatar

sorry I mean to say which module?

chris avatar
chris

Well I started at https://github.com/cloudposse/terraform-aws-components/tree/master/deprecated/aws/chamber (which used to be terraform-aws-root-modules)

cloudposse/terraform-aws-components

Service catalog of reusable Terraform components and blueprints for provisioning reference architectures - cloudposse/terraform-aws-components

PePe avatar

did you notice the /deprecated/ ?

PePe avatar

it has not been updated in a while

PePe avatar

you are basically looking for an upgraded version of this module https://github.com/cloudposse/terraform-aws-iam-chamber-user

cloudposse/terraform-aws-iam-chamber-user

Terraform module to provision a basic IAM chamber user with access to SSM parameters and KMS key to decrypt secrets, suitable for CI/CD systems (e.g. TravisCI, CircleCI, CodeFresh) or systems which…

PePe avatar

you could open a PR and we can review it

chris avatar
chris

I am working on the PR, but having some issues with modules that are used inside this repo. I am going to have to back burner this for a little but will make note to come back to it.

rei avatar

Hi, I am interested in knowing how you organize your IaC; looking for ideas. Currently we are building our new k8s-based infrastructure, thus requiring Terraform, helm, helmfiles and gitlab ci. What is a good pattern to combine all these elements? Monorepo? Repo with submodules? Script/makefile magic? What if the helmfiles and charts repos also contain stuff for the infra and main application?

vixus0 avatar
vixus0

Hi rei, I have a bit of experience with this. For now we are using a monorepo containing Terraform, Helm charts, Helmfile config and CI config.

vixus0 avatar
vixus0

If you make sure the configuration is separated well, for example:

repo/
  terraform/
  helm/
    charts/
    helmfile.yaml
  ci/

then it’s not too difficult to filter git history and have your CI watch for changes based on path filters.
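
Since GitLab CI was mentioned up-thread, a hedged sketch of that kind of path filtering (job names, scripts, and paths here are illustrative, not the poster’s actual pipeline):

```yaml
# .gitlab-ci.yml: run each job only when files under its directory change
terraform-plan:
  script:
    - cd terraform && terraform init && terraform plan
  rules:
    - changes:
        - terraform/**/*

helm-deploy:
  script:
    - helmfile -f helm/helmfile.yaml apply
  rules:
    - changes:
        - helm/**/*
```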

rei avatar

and how to do you tag the repo?

rei avatar

Do you also combine infra helm(files) with application helm(files)?

vixus0 avatar
vixus0

We consider the state of the entire repo to represent the state of all the infra, so each commit can update TF, cluster config and Helm

vixus0 avatar
vixus0

Because of this I would keep application Helm charts in the application repos

rei avatar

But if the helm charts are in the application repos, the devs may check out a helm chart in a different version than the current tagged version available on master.

vixus0 avatar
vixus0

Yeah so you have to decide who has responsibility over the charts and release cycle - application developers or ops

melissa Jenner avatar
melissa Jenner

Anyone use terraform-aws-elasticache-redis? I get an error when I use this module: Error: Error creating Cache Parameter Group: InvalidParameterValue: The parameter CacheParameterGroupName must be provided and must not be blank. Below is the code:

[main_elasticache_redis.tf](http://main_elasticache_redis\.tf):
module "redis" {
  source                           = "git::<https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=tags/0.25.0>"
  availability_zones               = data.aws_availability_zones.available.names
  vpc_id                           = module.vpc.vpc_id
  allowed_security_groups          = [module.vpc.default_security_group_id]
  subnets                          = module.vpc.private_subnets
  cluster_size                     = var.redis_cluster_size #number_cache_clusters
  instance_type                    = var.redis_instance_type
  apply_immediately                = true
  automatic_failover_enabled       = true
  engine_version                   = var.redis_engine_version
  family                           = var.redis_family
  #enabled                          = var.enabled
  cluster_mode_enabled             = true
  enabled                          = true
  replication_group_id             = var.replication_group_id
  elasticache_subnet_group_name    = var.elasticache_subnet_group_name
  at_rest_encryption_enabled       = var.at_rest_encryption_enabled
  transit_encryption_enabled       = var.transit_encryption_enabled
  cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled

  parameter = [
    {
      name  = "notify-keyspace-events"
      value = "lK"
    }
  ]

  context = module.this.context
}

[data.tf](http://data\.tf): 
provider "aws" {
  version = ">= 2.55.0" 
  region  = var.region
}

[context.tf](http://context\.tf):
module "this" {
  source = "git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>"
  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit

  context = var.context
}
variable "context" {
  type = object({
    enabled             = bool
    namespace           = string
    environment         = string
    stage               = string
    name                = string
    delimiter           = string
    attributes          = list(string)
    tags                = map(string)
    additional_tag_map  = map(string)
    regex_replace_chars = string
    label_order         = list(string)
    id_length_limit     = number
  })
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT
}

variable "enabled" {
  type        = bool
  default     = true
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = null
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = null
  description = "Solution name, e.g. 'app' or 'jenkins'"
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters.
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
}
cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Alex Jurkiewicz avatar
Alex Jurkiewicz

try posting this code as a code snippet, it will be a little easier to read

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

1
melissa Jenner avatar
melissa Jenner

Done. Thanks.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I suspect it’s because your top-level context module is being initialised without any useful values

Alex Jurkiewicz avatar
Alex Jurkiewicz

eg, none of the variables that it loads are being set

Alex Jurkiewicz avatar
Alex Jurkiewicz

From what I know of the null label module, you need to set at least one of namespace/environment/stage/name

1
melissa Jenner avatar
melissa Jenner

Hi Alex. Thank you. I set all values for namespace/environment/stage/name. That error is gone. But, I got another error, Error: Error creating Elasticache Replication Group: InvalidParameterValue: Number of node groups cannot be less than 1. Do you have any idea?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think this question can be answered by reading the README for the module: https://github.com/cloudposse/terraform-aws-elasticache-redis

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

melissa Jenner avatar
melissa Jenner

Hi Alex, It works. Thank you very much!
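For anyone hitting the same thing: with cluster mode enabled, the module expects the node-group counts to be set explicitly. A sketch of the relevant inputs; the variable names below are taken from the module's README at the time and may differ between versions:

```hcl
module "redis" {
  source = "git::<https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=tags/0.25.0>"
  # ... other inputs as above ...

  cluster_mode_enabled                 = true
  cluster_mode_num_node_groups         = 1 # must be >= 1, or AWS rejects the replication group
  cluster_mode_replicas_per_node_group = 1
}
```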

1

2020-11-08

V M avatar

@ Cheers!

Mikhail Naletov avatar
Mikhail Naletov

Hey. Do we know each other?

2020-11-07

charlespogi avatar
charlespogi

hi all, can anyone please show me how to get the ami id of the image made in packer, i wanted to use it for my ASG in terraform

Chris Fowles avatar
Chris Fowles

tag your ami as part of your packer build then in your terraform use a datasource for the ami to look up the ami id

Chris Fowles avatar
Chris Fowles

i think you need to use name actually for the terraform datasource (off the top of my head)

Chris Fowles avatar
Chris Fowles

but the principle remains
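The pattern Chris describes can be sketched like this; the name pattern and instance type are assumptions, so match them to whatever your Packer template actually sets:

```hcl
# Packer names/tags the AMI, e.g. ami_name = "my-app-{{timestamp}}"
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["my-app-*"] # matches the name pattern from the Packer build
  }
}

resource "aws_launch_configuration" "app" {
  image_id      = data.aws_ami.app.id # newest matching AMI wins
  instance_type = "t3.micro"
  # ...
}
```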

charlespogi avatar
charlespogi

no idea where to begin

ikar avatar

try asking in #packer channel

2020-11-06

Luke Maslany avatar
Luke Maslany

Quick question: does anyone know how I can set the Execution Timeout of a maintenance window task using terraform? I can set the ‘Delivery Timeout’ value in the run_command_parameters block, using the parameter name: timeout_seconds, but I don’t know the name of the ‘Execution Timeout’ parameter.

resource "aws_ssm_maintenance_window_task" "task" {
  ....
  task_invocation_parameters {
    run_command_parameters {
      ...
      timeout_seconds = 600
    }
  }
}

Any insights/suggestions would be much appreciated.

Chris Wahl avatar
Chris Wahl

One trick I sometimes leverage is configuring the resource in the console (by hand) and then importing the resource to see how it was configured. I was not able to find another timeout value in the provider documentation.

V M avatar

Where are the best modules templates

Alex Jurkiewicz avatar
Alex Jurkiewicz

The internet

Alex Jurkiewicz avatar
Alex Jurkiewicz

But more specifically you can try the CloudPosse GitHub org

V M avatar

I always expect a bit of cynicism and arrogance from all the narcissistic, genius people out there who have it all figured out. You, mate, are the product of the world we live in today..

1
V M avatar

thanks for the answer

rms1000watt avatar
rms1000watt

General question: I’ve found it normal to use something like chtf to switch between terraform versions as needed. But is there a way to get backwards compatibility from the terraform 0.13 binary for code meant for 0.12?

Pretty sure the 0.12 binary with 0.11 code just fails

(This came up after some conversations with friends on the topic. Just curious.)

Alex Jurkiewicz avatar
Alex Jurkiewicz

are you asking if it’s possible to write terraform code that works with 0.12 and 0.13? It is
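One way to pin that compatibility explicitly is a version constraint spanning both releases. A sketch; as the Terratest output elsewhere in this channel shows, 0.12 warns about the `source` attribute but ignores it:

```hcl
terraform {
  # 0.12.26+ parses the required_providers object syntax without erroring
  required_version = ">= 0.12.26, < 0.14"

  required_providers {
    aws = {
      source  = "hashicorp/aws" # ignored (with a warning) by 0.12
      version = ">= 2.0"
    }
  }
}
```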

:--1:1
Ryan avatar

Best practice question: I have a handful of domains (zone / record data) that I’d like terraform to manage. Would you keep all this data in tf file itself (in the resource), or keep that data in a flat/json file and include it in the tf (resource)?

PePe avatar

when you say a handful, is that like 10-20, or 100s?

PePe avatar

if it’s 10-20, I’d make a list, put them all there, and if there’s a module that creates the group of records for each domain, I’d for-loop over the list with the module to keep it really small and simple

PePe avatar

but the same applies if you have 1000s in a JSON file; then you just need an additional step to decode it and iterate
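A sketch of both approaches; the module path and record shape are hypothetical, and `for_each` on modules requires Terraform >= 0.13:

```hcl
locals {
  # Small set: keep the data in HCL
  domains = ["example.com", "example.org"]

  # Large set: keep it in JSON instead and decode it
  # domains = jsondecode(file("${path.module}/domains.json"))
}

module "zone" {
  source   = "./modules/dns-zone" # hypothetical module creating a zone + its records
  for_each = toset(local.domains)

  domain = each.value
}
```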

Alex Jurkiewicz avatar
Alex Jurkiewicz

If you manage the data with another tool, json is much easier to deal with than HCL. For humans, you might as well keep it HCL so the system has fewer moving parts

2
Ryan avatar

Thanks for the advice!

2020-11-05

Mikhail Naletov avatar
Mikhail Naletov

hey, is it possible to update this https://github.com/cloudposse/terraform-aws-tfstate-backend to use the latest null-label module with context?

Delimiter setting doesn’t work for the module now

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

Mikhail Naletov avatar
Mikhail Naletov
Use fresh null label and context by okgolove · Pull Request #76 · cloudposse/terraform-aws-tfstate-backend

what Use context feature and latest null-label module why Some settings like delimiter didn’t work for this module

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Use fresh null label and context by okgolove · Pull Request #76 · cloudposse/terraform-aws-tfstate-backend

what Use context feature and latest null-label module why Some settings like delimiter didn’t work for this module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’ll check it out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ reviewed the PR, LGTM, a few comments

Mikhail Naletov avatar
Mikhail Naletov

@Andriy Knysh (Cloud Posse) hello! I’ve fixed the PR. Could you review again?

Mikhail Naletov avatar
Mikhail Naletov

bump

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ thanks again. I left a few comments

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:05:21 PM

[Waypoint] Services not connecting Nov 5, 18:45 UTC Monitoring - We’ve rolled out a fix (appears to be a bug in reading random numbers? pretty weird, we agree). We’ll keep an eye on it for the rest of the day.Nov 5, 17:53 UTC Identified - An old bug has reappeared! We’re working on a fix.

Alex Jurkiewicz avatar
Alex Jurkiewicz

^ could these move to another channel?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, I’ll disable the RSS for hashicorp availability

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m going to keep the terraform releases though, since those are pretty seldom

:100:1
Chris Wahl avatar
Chris Wahl

Agree - releases are nice.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Good conversation starters too

:--1:2
Jaeson avatar
Jaeson

terraform import is broken. I can’t get it to import more than one element of a map. Anyone else run into this? Bugs have been opened and closed with Hashicorp without them ever admitting fault. I’m so frustrated. Here’s what the output looks like:


# terraform state list
data.aws_acm_certificate.amazon_issued_compeat_wc
aws_s3_bucket.beta_data_buckets["svc_feedback"]
aws_s3_bucket.frontend_beta_web_buckets["accounting"]
aws_s3_bucket.frontend_beta_web_buckets["integrations"]
aws_s3_bucket.frontend_beta_web_buckets["inventory"]
aws_s3_bucket.frontend_beta_web_buckets["portal"]
[email protected]:/tfroot/beta# terraform import aws_s3_bucket.beta_data_buckets[\"svc_imports\"] co-beta-service-imports
aws_s3_bucket.beta_data_buckets["svc_imports"]: Importing from ID "co-beta-service-imports"...
aws_s3_bucket.beta_data_buckets["svc_imports"]: Import prepared!
  Prepared aws_s3_bucket for import
aws_s3_bucket.beta_data_buckets["svc_imports"]: Refreshing state... [id=co-beta-service-imports]

Error: Invalid index

  on /tfroot/beta/resources.tf line 72, in locals:
  72:       aws_s3_bucket.beta_data_buckets[bucket].arn
    |----------------
    | aws_s3_bucket.beta_data_buckets is object with 1 attribute "svc_feedback"

The given key does not identify an element in this collection value.

I’m well aware that the given key doesn’t identify an element … that’s why I’m trying to import it!! Before I made the mistake of upgrading to 13 thinking that maybe it was fixed there, importing the first element added a minimal amount of config that I could maybe use to hack the file by copying for the other buckets, but at the latest 13, it appears to load a lot more into the config. Has anyone had to hack this file in order to get their TF working with objects created elsewhere like this? If so, what is the minimal amount of config that I need to hand-add for each bucket?

Chris Wahl avatar
Chris Wahl

Curious if you’ve tried removing that object from the state file and importing?

Jaeson avatar
Jaeson

Yeah. Going back and forth between import and state rm …

Jaeson avatar
Jaeson

about to try destroying the entire state file. … that’s definitely not my preferred option, but I just tried to recreate it with tf 12, and it just worked …

Matt Gowie avatar
Matt Gowie

What version of terraform are you on?

Matt Gowie avatar
Matt Gowie

I’ve had problems with terraform 0.13.2 and import. I think they’re known / potentially fixed in later patch versions of 0.13

Jaeson avatar
Jaeson
Terraform v0.13.5
+ provider [registry.terraform.io/hashicorp/aws](http://registry\.terraform\.io/hashicorp/aws) v3.13.0
+ provider [registry.terraform.io/hashicorp/null](http://registry\.terraform\.io/hashicorp/null) v3.0.0
+ provider [registry.terraform.io/hashicorp/template](http://registry\.terraform\.io/hashicorp/template) v2.2.0
loren avatar
loren

I feel import is pretty badly broken in 0.13, especially and completely when using count/for_each on modules. they’ve closed issues indicating the bugs are fixed in master and will be part of the 0.14 release, but that’s small solace in the interim

2
Jaeson avatar
Jaeson

I was able to fix my issue by importing with tf12 and then copying the contents of the state file generated by tf12 into the working state file I was using with tf13. … this leaves me working with unreproducible TF, but that seems to be the best they can do right now.

1
loren avatar
loren

Interested to see where this goes, seems hashicorp is trying to improve visibility of their internal priorities and how they merge community contributions… https://www.github.com/hashicorp/terraform-provider-aws/tree/master/ROADMAP.md

hashicorp/terraform-provider-aws

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

1
loren avatar
loren

Lulz didn’t realize I was still in thread…

Jaeson avatar
Jaeson

It does make more sense posted to the channel …

:--1:1
1
loren avatar
loren

Interested to see where this goes, seems hashicorp is trying to improve visibility of their internal priorities and how they merge community contributions… https://www.github.com/hashicorp/terraform-provider-aws/tree/master/ROADMAP.md

hashicorp/terraform-provider-aws

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

1
Yoni Leitersdorf avatar
Yoni Leitersdorf

I love the permission set support. Was just missing that recently.

hashicorp/terraform-provider-aws

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

Alex Jurkiewicz avatar
Alex Jurkiewicz

The data has been accurate so far (2 quarters)

ms16 avatar

Hello, is anyone else getting this error?

Error: InvalidParameter: 1 validation error(s) found. - minimum field size of 1, ListTargetsByRuleInput.EventBusName.

I just started getting this error in our pipeline when I tried to upgrade to the latest aws provider version

* hashicorp/aws: version = "~> 3.14.0"
Alex Jurkiewicz avatar
Alex Jurkiewicz

check the release notes of versions between your previous version and the latest

Alex Jurkiewicz avatar
Alex Jurkiewicz

however, that error looks like it comes direct from the AWS API

Alex Jurkiewicz avatar
Alex Jurkiewicz

“InvalidParameter” is a common AWS API error string

ms16 avatar

Yes, what’s weird is my previous provider version is 3.13.0 (October 29, 2020)

ms16 avatar

I will try running plan with TF_DEBUG=1 to rule out the issue

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is that the same as TF_LOG=trace? I also suggest -parallelism=1 which makes the logs much easier to read
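For reference, that combination looks like this in a POSIX shell (TF_LOG output goes to stderr, hence the redirect):

```shell
# Trace-level logs, serialized API calls, captured to a file for later reading
TF_LOG=trace terraform plan -parallelism=1 2> trace.log
```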

ms16 avatar

yes I suppose

ms16 avatar

I use -parallelism=100; I will set it to 1 to debug

Alex Jurkiewicz avatar
Alex Jurkiewicz
Release v3.14.1 · hashicorp/terraform-provider-aws

BUG FIXES resource/aws_cloudwatch_event_target: Prevent regression from version 3.14.0 with ListTargetsByRuleInput.EventBusName error (#16075)

Alex Jurkiewicz avatar
Alex Jurkiewicz

Has anyone integrated custom providers into CD? Specifically, I’m looking to build a custom version of the AWS provider with some pull requests merged. I am wondering about the best way to add a custom provider binary to our CD process

Yoni Leitersdorf avatar
Yoni Leitersdorf

Are you using TF 0.13? You can do something like this:

terraform {
  required_providers {
    restapi = {
      source  = "fmontezuma/restapi"
      version = "~> 1.14.0"
    }
  }
}
Yoni Leitersdorf avatar
Yoni Leitersdorf

It does require you to register it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or you can place the binary into terraform.d/plugins/linux_amd64, and TF will find it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the name format that TF expects: terraform-provider-shell_v0.1.3

Alex Jurkiewicz avatar
Alex Jurkiewicz

It looks like the AWS provider is installed to

.terraform/plugins/registry.terraform.io/hashicorp/aws/3.7.0/darwin_amd64/terraform-provider-aws_v3.7.0_x5

I wonder if I can simply replace the binary with my own…

Alex Jurkiewicz avatar
Alex Jurkiewicz
Home - Extending Terraform - Terraform by HashiCorp

Extending Terraform is a section for content dedicated to developing Plugins to extend Terraform’s core offering.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are installed in that location, yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but terraform.d works as well as a static location

Alex Jurkiewicz avatar
Alex Jurkiewicz

I tried to put my custom compile of aws provider in there as terraform.d/plugins/darwin_amd64/terraform-provider-aws_v3.14.0, but Terraform still installed the “real” version from the internet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you prob need to change the name or the version number so TF would not find it in the registry and look into terraform.d (although we did it some time ago and things could have changed)

Alex Jurkiewicz avatar
Alex Jurkiewicz

dang. Why is this so hard? Does Hashicorp benefit from keeping it difficult?

Yoni Leitersdorf avatar
Yoni Leitersdorf

Also keep in mind there’s a hash there, it may be comparing it (I don’t know… didn’t read the code):

% cat selections.json 
{
  "[registry.terraform.io/fmontezuma/restapi](http://registry\.terraform\.io/fmontezuma/restapi)": {
    "hash": "h1:dvLIvjzP1nGHcimSkM4mSLvuJ7yI+3aV/mZAWHu4EXs=",
    "version": "1.14.1"
  },
  "[registry.terraform.io/hashicorp/aws](http://registry\.terraform\.io/hashicorp/aws)": {
    "hash": "h1:3gkfYjOVSHc3g/eXnk/JnRuoYtoDRu1oV3YPmBnuVtY=",
    "version": "3.12.0"
  }
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

Figured it out. You can force install of packages using terraform init -plugin-dir – the plugin-dir argument is required to install “official namespace” packages from a local cache

Alex Jurkiewicz avatar
Alex Jurkiewicz

example [main.tf](http://main\.tf):


# order doesn't matter
provider azure {} # only available online
provider aws {} # available in local cache

Your local cache is a directory with the following file:

[registry.terraform.io/hashicorp/aws/3.14.0/darwin_amd64/terraform-provider-aws_v3.14.0](http://registry\.terraform\.io/hashicorp/aws/3\.14\.0/darwin_amd64/terraform\-provider\-aws_v3\.14\.0)

Install aws provider:

$ terraform init -plugin-dir terraform.d/plugins
Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/aws v3.14.0
- Finding latest version of hashicorp/azure...

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/azure: provider [registry.terraform.io/hashicorp/azure](http://registry\.terraform\.io/hashicorp/azure) was not found
in any of the search locations

- terraform.d/plugins

Then you can install all other providers:

$ terraform init
Alex Jurkiewicz avatar
Alex Jurkiewicz

(well, azure is a bad provider name; if you use a real provider, things work)

2020-11-04

Mario Dagrada avatar
Mario Dagrada

Hi, I am using https://github.com/cloudposse/terraform-aws-elasticsearch v0.24.1 to spin up a managed ElasticSearch domain. I want to add it to a previously created Route53 hosted zone. The ES cluster spins up fine, but when it starts creating the DNS record, I get the following error:

[ERR]: Error building changeset: AccessDenied: The resource hostedzone/XXXX can only be managed through AWS Cloud Map (arn:aws:servicediscovery:us-west-1:123456789:namespace/ns-xxxxxxx)
        status code: 403, request id: 4d9a9437-3af1-4982-ad58-c766dc1d18d6

How can overcome this error? Should I create the DNS records manually with the Cloud Map CLI or is there a better solution? Thank you very much!

cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sounds like you don’t have permission to modify that zone. I think there is a setting to disable automatic DNS. If not, we’ll accept any PRs to do that.

cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Mario Dagrada avatar
Mario Dagrada

Thanks Erik. Yes, in the meanwhile I figured that out.

I was trying to create DNS records within a hosted zone created for service discovery (in the Elastic Container Service). Apparently this is not allowed since, as the error says, can only be managed through Cloud Map.

A very simple workaround is to disable the hostname option when creating the module (it is there) and just use the Elasticsearch endpoint directly. Thanks anyway for the reply!

Mikhail Naletov avatar
Mikhail Naletov

Hello everyone Could someone tell me do we REALLY have to have “~> 2.0” for AWS provider here? https://github.com/cloudposse/terraform-aws-ssm-parameter-store/blob/master/versions.tf#L5

cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store

PePe avatar

no, we can go with >= 2.0

cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store

PePe avatar

you are welcome to create a PR

Mikhail Naletov avatar
Mikhail Naletov

Thank you for the fast answer. I’ll definitely create a PR

PePe avatar
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

PePe avatar

just add or delete the not needed providers

PePe avatar

and post the link to your pr in the pr-reviews channel afterwards

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Mikhail Naletov avatar
Mikhail Naletov
Update providers and terraform versions settings by okgolove · Pull Request #1 · okgolove/terraform-aws-ssm-parameter-store

what Update terraform and provider version requirements why 3.0 AWS provider is already here and we should be able to use the module with it

Mikhail Naletov avatar
Mikhail Naletov

ssm module uses the same providers so just copy-paste

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(wrong org okgolove)

Mikhail Naletov avatar
Mikhail Naletov

miss click lol

Mikhail Naletov avatar
Mikhail Naletov
Update providers and terraform versions settings by okgolove · Pull Request #19 · cloudposse/terraform-aws-ssm-parameter-store

what Update terraform and provider version requirements why 3.0 AWS provider is already here and we should be able to use the module with it

PePe avatar

some comments

1
joshmyers avatar
joshmyers

lol

joshmyers avatar
joshmyers
Loosen the AWS provider requirement by joshmyers · Pull Request #20 · cloudposse/terraform-aws-ssm-parameter-store

what Loosen the AWS provider requirement why Required versions was loosened for running Terraform 0.13 which wants to use the AWS v3 provider, allow it to do so. Otherwise we need to pin any module…

joshmyers avatar
joshmyers

I’ve literally just raised this, randomly

1
joshmyers avatar
joshmyers

Aren’t the required_providers in https://github.com/cloudposse/terraform-aws-ssm-parameter-store/pull/19 going to break TF 12?

joshmyers avatar
joshmyers

Ah, I see the Terratest output, you get a warning

joshmyers avatar
joshmyers
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: Warning: Provider source not supported in Terraform v0.12
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:   on ../../versions.tf line 4, in terraform:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:    4:     aws = {
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:    5:       source  = "hashicorp/aws"
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:    6:       version = ">= 2.0"
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:    7:     }
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: A source was declared for provider aws. Terraform v0.12 does not support the
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: provider source attribute. It will be ignored.
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: (and 3 more similar warnings elsewhere)
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 
PePe avatar

are you guys in a race?

PePe avatar

I will close this last PR then, is that ok?

PePe avatar

@

PePe avatar

closed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(when we run tests, just need to make sure to add terraform/0.13 label)

PePe avatar

ohhh I thought we did not have to do that since we were keeping compatibility with 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t feel we have to keep 0.12, it’s more a nice to have

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

since 0.14 is dropping any day, we need to drop 0.12 soon

joshmyers avatar
joshmyers

Thanks folks!

joshmyers avatar
joshmyers

Any chance of getting a release cut? ;D

PePe avatar

I can do that

PePe avatar

done

joshmyers avatar
joshmyers

Thanks!

Mikhail Naletov avatar
Mikhail Naletov

Thank you for merging the fix

tristan avatar
tristan

hey there. was about to open an issue but the template seems to say i should bring it here first. https://github.com/cloudposse/terraform-aws-ecr

using it as such:

module "xxx_yyy" {
  source = "git::<https://github.com/cloudposse/terraform-aws-ecr.git?ref=tags/0.29.0>"
  name   = "xxx_yyy"
}

drops the underscore and yields xxxyyy for the repo name, which isn’t desired. I need to keep the underscore in my use case to maintain a convention; I’ve confirmed Amazon supports it. Am I missing something from a tf escaping perspective, or is this a legitimate bug?

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Yoni Leitersdorf avatar
Yoni Leitersdorf

Did you try setting use_fullname to false?

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

tristan avatar
tristan

will try that now. i also see regex_replace_chars which would explain this but it seems to be defaulted to null

tristan avatar
tristan

actually. that thing i just said seems to be the problem. the default is null in the table but it says:

tristan avatar
tristan

If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.

tristan avatar
tristan

so to close this out, a working example is

module "xxx_yyy" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecr.git?ref=tags/0.29.0"
  name   = "xxx_yyy"
  regex_replace_chars = "/[^a-zA-Z0-9-_]/"
}
tristan avatar
tristan

i would expect that the default character set should be the amazon-supported characters

paultath81 avatar
paultath81

hi all - hoping someone can help me here. I’m using the aws_instance resource; within my user_data argument i’m using user_data               = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh"), but this doesn’t work. What i’d like to be able to do here is reference my script via a variable, as I have many scripts i’d like to reference for the types of instances i spin up.

Yoni Leitersdorf avatar
Yoni Leitersdorf

What is not working? Can you share an error?

paultath81 avatar
paultath81

Hi it’s not that it’s not working. Since my user data reference a single script file I wanted a way where I can reference either script a or b depending on the type of instant I spin up.

paultath81 avatar
paultath81

Hope that clear

Yoni Leitersdorf avatar
Yoni Leitersdorf

And did you try doing it with the method you mentioned?

paultath81 avatar
paultath81

I did but it’s not able to find the script path which I defined in my variable

Yoni Leitersdorf avatar
Yoni Leitersdorf

So it’s giving you an error message?

paultath81 avatar
paultath81

Yes I’m not in front of my laptop unfortunately

Yoni Leitersdorf avatar
Yoni Leitersdorf

We’ll need the error message to help you. I suggest you send it when you’re at your laptop.

paultath81 avatar
paultath81

although the error states “Invalid function argument”, i’m pretty sure i’m not using this correctly.

Error: Invalid function argument

  on ..\..\..\..\..\terraform-aws-ec2\main.tf line 135, in resource "aws_instance" "linux":
 135:   user_data               = filebase64("${path.module}/scripts/${var.linux_user_data}.sh")
    |----------------
    | path.module is "../../../../../terraform-aws-ec2"
    | var.linux_user_data is ""

Invalid value for "path" parameter: no file exists at
..\..\..\..\..\terraform-aws-ec2\scripts\.sh; this function works only with
files that are distributed as part of the configuration source code, so if
this file will be created by a resource in this configuration you must instead
obtain this result from an attribute of that resource.
Yoni Leitersdorf avatar
Yoni Leitersdorf

How did you define the variable?

paultath81 avatar
paultath81

in my variables.tf i’m using

variable "linux_user_data" {
  description = "User data script"
  default     = ""
}

and the values are defined in my non-prod.tfvars

linux_user_data = "grafana"
paultath81 avatar
paultath81

and my module is set to use something like this

module "example_test01" {
  source                          = "../../../../../terraform-aws-ec2"
paultath81 avatar
paultath81
06:06:01 PM
Yoni Leitersdorf avatar
Yoni Leitersdorf

Let’s do a sanity check: remove the default from your variable. This will cause TF to break if it can’t find a value for that variable.

paultath81 avatar
paultath81
06:36:06 PM

this is what i get

❯ terraform.exe plan -var-file="non-prod.tfvars"

Error: Missing required argument

  on main.tf line 1, in module "example_test01":
   1: module "example_test01" {

The argument "linux_user_data" is required, but no definition was found.

i also tried adding the var to the module and same error

paultath81 avatar
paultath81

actually this time it sees the file but it’s looking in the wrong path

Error: Invalid function argument

  on ..\..\..\..\..\terraform-aws-ec2\main.tf line 135, in resource "aws_instance" "linux":
 135:   user_data               = filebase64("${path.module}/scripts/${var.linux_user_data}.sh")
    |----------------
    | path.module is "../../../../../terraform-aws-ec2"
    | var.linux_user_data is "granfana"

Invalid value for "path" parameter: no file exists at
..\..\..\..\..\terraform-aws-ec2\scripts\granfana.sh; this function works only
with files that are distributed as part of the configuration source code, so
if this file will be created by a resource in this configuration you must
instead obtain this result from an attribute of that resource.
paultath81 avatar
paultath81

it should be looking in the current dir .\scripts\

paultath81 avatar
paultath81

so i updated it to use cwd

user_data               = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh")

and now i get

Error: Invalid function argument

  on ..\..\..\..\..\terraform-aws-ec2\main.tf line 135, in resource "aws_instance" "linux":
 135:   user_data               = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh")
    |----------------
    | path.cwd is "D:/Users/sotath/Desktop/repo/mso/terraform-aws-ec2/environment/non-prod/example_test/test"
    | var.linux_user_data is "granfana"

Invalid value for "path" parameter: no file exists at
D:\Users\sotath\Desktop\repo\mso\terraform-aws-ec2\environment\non-prod\example_test\test\scripts\granfana.sh;
this function works only with files that are distributed as part of the
configuration source code, so if this file will be created by a resource in
this configuration you must instead obtain this result from an attribute of
that resource.
paultath81 avatar
paultath81

it’s the correct location path now. but errors out

Yoni Leitersdorf avatar
Yoni Leitersdorf
D:\Users\sotath\Desktop\repo\mso\terraform-aws-ec2\environment\non-prod\example_test\test\scripts\granfana.sh

exists?

paultath81 avatar
paultath81

it does

paultath81 avatar
paultath81

oh wait

paultath81 avatar
paultath81

why is it picking up granfana instead of linux_user_data = "grafana"

paultath81 avatar
paultath81

spelling is off

paultath81 avatar
paultath81
06:46:05 PM
Yoni Leitersdorf avatar
Yoni Leitersdorf

That’s what you passed to your module

Yoni Leitersdorf avatar
Yoni Leitersdorf

It’s visible in the above screenshot

Yoni Leitersdorf avatar
Yoni Leitersdorf

from a few minutes ago

paultath81 avatar
paultath81

ah hell how did i miss that

paultath81 avatar
paultath81

i see it now

paultath81 avatar
paultath81

sweet! it works

paultath81 avatar
paultath81

omg!

paultath81 avatar
paultath81

sorry for the troubles

Yoni Leitersdorf avatar
Yoni Leitersdorf

Happy to help

paultath81 avatar
paultath81

thx you
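
For anyone finding this later, the shape of the configuration that ended up working, once the tfvars typo ("granfana" vs "grafana") was fixed (a sketch; names are taken from the thread, details are assumed):

```hcl
# variables.tf in the terraform-aws-ec2 module (no default, so a missing
# value fails fast instead of silently producing ".sh")
variable "linux_user_data" {
  description = "User data script name, without the .sh extension"
  type        = string
}

# main.tf in the module: the script lives under the working directory of
# the root configuration, hence path.cwd rather than path.module
resource "aws_instance" "linux" {
  # ...
  user_data = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh")
}
```

and in non-prod.tfvars: linux_user_data = "grafana"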

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
01:45:14 AM

[Waypoint] Service Maintenance Nov 5, 01:30 UTC Investigating - The services are being upgraded to avoid conditions detected during the previous outages.

[Waypoint] Service Maintenance

HashiCorp Services’s Status Page - [Waypoint] Service Maintenance.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
02:15:15 AM

[Waypoint] Service Maintenance Nov 5, 02:13 UTC Resolved - Services have been upgraded and are working properly. Nov 5, 01:30 UTC Investigating - The services are being upgraded to avoid conditions detected during the previous outages.

Charles Kim avatar
Charles Kim
04:52:30 AM

Working with @ on this project. We started receiving feedback, in particular on the use of TF plan as part of our TF security scanning. Would love to get this group’s input on passing the TF plan externally to tools like Cloudrail. You can maintain anonymity with this google survey, but would love to chat!

https://forms.gle/dg4K89qcJfAxp8Hv9

As more and more people are switching to using infrastructure-as-code (like Terraform) to manage their cloud environments, we’re seeing an increase in the desire to do security reviews of the IaC code files. There’s a bunch of tools out there, and a couple of big challenges. Would appreciate your thoughts on the matter. Please see a blog post we’ve just published:

https://indeni.com/blog/identifying-security-violations-in-the-cloud-before-deployment/

Terraform Management Survey attachment image

Today, the management of Terraform environments has taken shape with varying security controls. We would like to understand how some of the security controls manifest for sensitive configuration files like Terraform Plan/State.

2020-11-03

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

has anyone successfully updated EKS from 1.14 using terraform/terragrunt? using terraform-root-modules

Syn Romana avatar
Syn Romana

hi, I’m interested in terraform-terraform-label module to re-label random ELB name in CloudWatch alarms, however it looks like it needs separate label module for each resource. Is that true or I misunderstood something? For instance I have a ELB CloudWatch alert resource where I’d like to use a list of ELBs using [count.index]:

module "elb-5xx-label" {
  source     = "git::https://github.com/cloudposse/terraform-terraform-label.git"
  name       = var.name
  namespace  = var.namespace
  stage      = var.stage
  attributes = compact(concat(var.attributes, list("elb", "5xx")))
}

resource "aws_cloudwatch_metric_alarm" "elb-5xx-anomaly" {
  count               = length(var.monitored-elb-ids)
  alarm_name          = join("", ["ELB 5xx errors high - ", var.monitored-elb-ids[count.index]])

#  alarm_name          = join("", ["ELB 5xx errors high - ", module.elb-5xx-label.id])
  comparison_operator = "LessThanLowerOrGreaterThanUpperThreshold"
  evaluation_periods  = "1"
  threshold_metric_id = "e1"
  alarm_description   = "The number of HTTP 5XX errors originating from the ELB are out of band. This is not an error generated by the targets (backend)"
  treat_missing_data  = "notBreaching"
  alarm_actions       = [element(var.sns-topics.*.topic-id, 1)]
  ok_actions          = [element(var.sns-topics.*.topic-id, 1)]

  metric_query {
    id                 = "e1"
    expression         = "ANOMALY_DETECTION_BAND(m1, 1)"
    label              = "HTTPCode_ELB_5XX (expected)"
    return_data        = "true"
  }

  metric_query {
    id          = "m1"
    return_data = "true"
    metric {
      metric_name = "HTTPCode_ELB_5XX"
      namespace   = "AWS/ELB"
      period      = "60"
      stat        = "Sum"
      unit        = "Count"

      dimensions = {
        LoadBalancerName = var.monitored-elb-ids[count.index]
      }
    }
  }
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

First, I would recommend using terraform-null-label. We started terraform-terraform-label as an alternative to terraform-null-label that would not use the null_resource; however, since 0.12 shipped with many new features, we ended up dropping null_resource from the null label (so the name of the module is a bit of a misnomer now).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the new null label module, we support a context variable. This makes it very easy to define many label names.
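
As a sketch of what that looks like (not from the thread; the source ref and variable names are assumed), one base label can feed any number of derived labels through context:

```hcl
module "base_label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.24.1"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name
}

# Derived labels inherit namespace/stage/name from the base label's
# context output and only declare what differs
module "elb_5xx_label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.24.1"
  context    = module.base_label.context
  attributes = ["elb", "5xx"]
}
```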

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Syn Romana avatar
Syn Romana

Hi @Erik Osterman (Cloud Posse) thanks for your reply. I’m looking on to terraform-null-label module but can’t find if that (and how) will work in my scenario. When I want to have one resource:

resource "aws_cloudwatch_metric_alarm" "elb-5xx-anomaly"

with multiple alarm_name taken from label module and multiple LBs defined in variable:

      dimensions = {
        LoadBalancerName = var.monitored-elb-ids[count.index]

How to configure it?
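
One pattern that would fit here (a sketch, not an answer from the thread): keep the single label module for the shared prefix, and build the per-ELB name inside the counted resource:

```hcl
resource "aws_cloudwatch_metric_alarm" "elb-5xx-anomaly" {
  count = length(var.monitored-elb-ids)

  # Shared label id plus the per-instance ELB id
  alarm_name = format("%s-%s", module.elb-5xx-label.id, var.monitored-elb-ids[count.index])
  # ...
}
```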

Sebastian Borrajo avatar
Sebastian Borrajo
Hello! Good morning, I'm new to this Slack! I wanted to ask a question: I was trying to build a cluster with the cloudposse modules and they work perfectly. My problem is that I don't understand how to continue after that, how to make the services work. I can bring up the cluster, but automating the listeners and target group part complicates things for me; I always reach a point (both with workers and node groups) where I get stuck.

Do you have any guidance, from scratch, on how to make all this work? Bring up the cluster and run the services
roth.andy avatar
roth.andy


how to make all this work
yikes. How kubernetes works? That’s more than a few slack messages lol

roth.andy avatar
roth.andy

or did you have anything a bit more specific

Sebastian Borrajo avatar
Sebastian Borrajo

Excuse me, I explained myself wrong; my English is not native. I'll take the opportunity to say that if something seems rude, it is not my intention

Sebastian Borrajo avatar
Sebastian Borrajo

I am trying to automate the deploy in terraform

the cloudposse modules helped me a lot

But, in my understanding, for an application to work I have to create a listener, give that listener a rule that points to a target group (I always did it this way)

Sebastian Borrajo avatar
Sebastian Borrajo

My problem is that I think I’m in a bad way or I can’t make it work well

Sebastian Borrajo avatar
Sebastian Borrajo

For example, if I use node groups, I cannot give them the necessary security groups

roth.andy avatar
roth.andy

If you have a question about something specific, like a particular module you are using, that will help get you the right answers. The best way to get the answers you are looking for on most OSS slack workspaces is to ask targeted questions that people are able to answer quickly. Asking “how does all this stuff work” generally just gets ignored since people don’t have time to walk you through everything

Sebastian Borrajo avatar
Sebastian Borrajo

I think my problem is this type of networking within amazon or maybe the operation of eks itself, so I asked if there was an example of how to use the modules up to that point

I never asked a specific question, just asked if there was a more detailed guide

roth.andy avatar
roth.andy
AWS Networking for Developers | Amazon Web Services attachment image

This post is co-authored with Mark Stephens, Partner Solutions Architect, AWS If you’re a developer and are new to AWS, there is a good chance you have not had the need to set up or configure many networks. As a developer that has a need to work with infrastructure, you might end up running into […]

roth.andy avatar
roth.andy

As far as the actual CloudPosse modules, the README that is in each repo is the documentation for that module

roth.andy avatar
roth.andy

Most (all?) of them also have a very good example in an examples folder

Sebastian Borrajo avatar
Sebastian Borrajo

Let me understand: I ask about specific terraform modules and eks in particular, I explain that I don't understand how to make it work with the cloudposse modules, and you give me amazon networking tutorials?

I can make it work by hand, that’s not a problem

Sebastian Borrajo avatar
Sebastian Borrajo

Yes, the modules have good examples on how to create the cluster, but not how to make any services available

Sebastian Borrajo avatar
Sebastian Borrajo

Does what I’m saying make sense? Or am I missing something?

roth.andy avatar
roth.andy

• What kind of services are you running?

• How do you want them made available? Do you want them accessible from the outside internet, or something else?

• What do you want to use to get traffic in? ALB, ELB, something else? None of this stuff has anything to do with the cloudposse modules, they just build the cluster. This is all “how do I do this stuff in Kubernetes”

roth.andy avatar
roth.andy

• Are you using any kind of service mesh? or nginx-ingress controller? Or alb-ingress controller? Gloo? Traefik? Any of the other dozen ways to do ingress and routing in a k8s cluster?

roth.andy avatar
roth.andy

Or just creating a k8s Service of type LoadBalancer? (this is the easiest way to start, but one of the most expensive)
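
For reference, that easiest option expressed with the Terraform kubernetes provider (a sketch; the app name and ports are assumed):

```hcl
resource "kubernetes_service" "web" {
  metadata {
    name = "web"
  }

  spec {
    selector = {
      app = "web"
    }

    port {
      port        = 80
      target_port = 8080
    }

    # Asks the cloud provider for a load balancer; on EKS this creates
    # a Classic ELB by default
    type = "LoadBalancer"
  }
}
```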

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ you question is how to deploy apps and services on the EKS cluster (including apps and system services like nginx-ingress etc.). This is a separate topic, but in short, you can use helm and helmfile to do it

roth.andy avatar
roth.andy
Helm

Helm - The Kubernetes Package Manager.

roth.andy avatar
roth.andy
roboll/helmfile

Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.

Sebastian Borrajo avatar
Sebastian Borrajo

can I use helm to solve these networking problems I’m having? thanks, that’s a very good lead to keep going

@roth.andy for the moment I want to keep it as simple as possible: a simple web app, exposed to the outside, behind an ALB

roth.andy avatar
roth.andy

The Ghost helm chart is a simple blogging app. I’ve used it often when doing simple deployments to k8s. It has other optional stuff like databases and Ingress that you can enable if you want to test that stuff too

roth.andy avatar
roth.andy
kubernetes-sigs/aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers - kubernetes-sigs/aws-load-balancer-controller

Application load balancing on Amazon EKS - Amazon EKS

You can load balance application traffic across pods using the AWS Application Load Balancer (ALB). To learn more, see What is an Application Load Balancer? in the Application Load Balancers User Guide . You can share an ALB across multiple applications in your Kubernetes cluster using Ingress groups. In the past, you needed to use a separate ALB for each application. The controller automatically provisions AWS ALBs in response to Kubernetes Ingress objects. ALBs can be used with pods deployed to nodes or to AWS Fargate. You can deploy an ALB to public or private subnets.

roth.andy avatar
roth.andy

The one it comes with by default creates ELBs

roth.andy avatar
roth.andy

a.k.a. Classic Load Balancer

Sebastian Borrajo avatar
Sebastian Borrajo

I’m going to read about both options, thanks a lot to both! With alb ingress I tried a tutorial from amazon but referring to cf stacks and get stuck, this could work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

now that ALB ingress v2 is released, I’d go straight for that one.

PePe avatar

@ Do you speak Spanish?

Sebastian Borrajo avatar
Sebastian Borrajo

@PePe Yes! English is a bit hard for me, but I try to get by with the little I know

Sebastian Borrajo avatar
Sebastian Borrajo

@Erik Osterman (Cloud Posse) Thank you! I’m going to try version 2

PePe avatar

TLDR for the English speakers: this is where the conversation will be encrypted in Spanish from now on

PePe avatar

What my colleagues were telling you is that the cloudposse modules set up everything you need for an EKS cluster

PePe avatar

in the examples/complete folder you'll see an example that creates everything

PePe avatar

including the VPC networking, etc.

PePe avatar

after that you have to do a deployment, and you can do that with helm, which is the easiest way

PePe avatar

but for any EKS cluster the minimum you need is a VPC, subnets, NAT gateways, and everything else required from a networking standpoint

PePe avatar

and that's not only true for EKS

PePe avatar

there is NO product in amazon that doesn't need a VPC

PePe avatar

Cloudposse also has modules for that, with examples

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(i remember we started #terraform-es but it didn’t get much traction)

Sebastian Borrajo avatar
Sebastian Borrajo

Yes! I set everything up with the cloudposse example, but my biggest hurdle was how to deploy the product afterwards. I understand helm solves these routing problems for you, right?

(by these problems I mean that, for example, to deploy an app I have to create a listener, a listener rule, and attach a target group to it; I understand ALB ingress solves this problem)

PePe avatar

yes, the helm chart does that part for you

PePe avatar

but one way or another you have to read the manual on how to deploy apps in K8s, because at some point you'll have to modify the chart or the deployment so it works the way you want

Sebastian Borrajo avatar
Sebastian Borrajo

Thank you very much pepe, I'll dig into that

PePe avatar

@Erik Osterman (Cloud Posse) yes, there is not much traction in it

Sebastian Borrajo avatar
Sebastian Borrajo

one last question: by manual, which one do you mean? is it a link? a book?

PePe avatar

I mean the general documentation on how to do deployments in k8s

PePe avatar

any tutorial will do

PePe avatar

after you use the helm chart and see what it created

PePe avatar

you'll have to understand what it did in order to modify it

PePe avatar

sometimes the helm chart does everything you need

Aumkar Prajapati avatar
Aumkar Prajapati

Hey quick question about the terraform-s3-website module, basically I’m trying to put up a route53 reference along with the website, the docs say to use parent_zone_name or id along with hostname but I noticed the alias / value it’s creating is just s3-website, any ideas what’s going on here?

   + alias {
          + evaluate_target_health = false
          + name                   = "s3-website.ca-central-1.amazonaws.com"
          + zone_id                = "xxx"
        }
    }

Here’s the module code

module "website" {
  source = "git::https://github.com/cloudposse/terraform-aws-s3-website.git?ref=0.12.0"

  delimiter                = "."
  region                   = var.region
  namespace                = var.name
  stage                    = local.stage_namespace
  name                     = local.cluster_domain
  hostname                 = local.domain
  versioning_enabled       = "true"
  cors_allowed_methods     = ["GET", "HEAD"]
  index_document           = "index.html"
  error_document           = "index.html"
  parent_zone_name         = local.namespace_domain

  tags = merge(
    map("Country", substr(var.region, 0, 2)),
    map("DataCenter", substr(var.region, 3, length(var.region) - 5))
  )
}

local.namespace_domain represents the specific route53 zone.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should create an A record in the DNS zone and point <local.domain>.<local.namespace_domain> to s3-website.ca-central-1.amazonaws.com

Aumkar Prajapati avatar
Aumkar Prajapati

So this is working as expected then? Hmm, the site doesn’t seem to come up despite everything being there and the direct s3 endpoint working

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you see the A and AAAA records in the DNS zone?

Aumkar Prajapati avatar
Aumkar Prajapati

A record

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to use some external tool like https://mxtoolbox.com/DnsLookup.aspx to check the record

Aumkar Prajapati avatar
Aumkar Prajapati

Well I destroyed and recreated the whole thing and now it essentially just says Not found

Aumkar Prajapati avatar
Aumkar Prajapati

Despite the s3 endpoint having files being hosted

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

DNS has TTL

Aumkar Prajapati avatar
Aumkar Prajapati

Guess I gotta be a little patient haha

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it’s cached somewhere, you won’t see the new record until the TTL expires

Aumkar Prajapati avatar
Aumkar Prajapati

Alright new update, it just times out now

Aumkar Prajapati avatar
Aumkar Prajapati

Any ideas @Andriy Knysh (Cloud Posse)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how do you access the site using the A record?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you share it?

Aumkar Prajapati avatar
Aumkar Prajapati

I’ll dm ya

Alex Jurkiewicz avatar
Alex Jurkiewicz

does anyone use or recommend a tool to view Terraform (0.13) plans in a prettier format? I’ve been looking at https://prettyplan.chrislewisdev.com/ which is 0.12 only, wondering if there are alternatives

roth.andy avatar
roth.andy

Something I have used that has been pretty good is to just grep the output looking for hashes.

terraform plan | grep "#"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, there was also scenery, but it too was not updated.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

with 0.14 they are also improving plan output

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i would be wary of any tool right now because the plan output is seemingly changing with every new version

Alex Jurkiewicz avatar
Alex Jurkiewicz

yeah. I hoped some tools would use the JSON representation of plan, which I’m assuming hasn’t changed so much

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s true…

Alex Jurkiewicz avatar
Alex Jurkiewicz

I just tried 0.14 and the output is great though

Terraform will perform the following actions:

  # module.eb["770"].aws_elastic_beanstalk_environment.main will be updated in-place
  ~ resource "aws_elastic_beanstalk_environment" "main" {
        id                     = "XXX"
        name                   = "XXX"
        # (17 unchanged attributes hidden)

      + setting {
          + name      = "HostHeaders"
          + namespace = "aws:elbv2:listenerrule:SharedAlbRedirect"
          + value     = "XXX"
        }
      - setting {
          - name      = "HostHeaders" -> null
          - namespace = "aws:elbv2:listenerrule:SharedAlbRedirect" -> null
          - value     = "YYY" -> null
        }
        # (86 unchanged blocks hidden)
    }

Those 86 unchanged blocks used to take up sooo much space! Now my diffs fit in a single terminal window.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s awesome! It can also be posted as a comment to PRs now, since sensitive data can be filtered out

rei avatar

I found this tool which creates a colored JS Graph which I can study in the browser. Added to a make target

rei avatar
visualize-plan:
	@terraform show -json plan.out > plan.json
	@mkdir -pv terraform-visual-report
	@docker run --rm -it --name terraform-visual-cli \
		--entrypoint terraform-visual \
		-v $$(pwd)/plan.json:/plan.json \
		-v $$(pwd)/terraform-visual-report:/terraform-visual-report/ \
		hieven/terraform-visual-cli:0.1.0-0.12.29 --plan plan.json
	@echo "\nOpen in your Browser:\n\t\tfile://$$(pwd)/terraform-visual-report/index.html"
PePe avatar

I was using version 0.24 of https://github.com/cloudposse/terraform-aws-rds-cluster and I’m now upgrading to version 0.35 and Tf 0.13 and I’m getting

 module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0] its
original provider configuration at
provider["registry.terraform.io/-/aws"].us_east_1 is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0], after
which you can remove the provider configuration again.

I think it’s because of the removal of the label module’s local provider?

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

PePe avatar

I’m having trouble finding in the state the problematic resource so I can remove it

PePe avatar

obviously I do not want to remove my cluster

PePe avatar

I can go up to 0.31.0 version of the module so far no issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so… here are the steps to solve this: (yes, it’s b/c of the removal of the provider from the very old label module)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

  1. Copy the module locally
  2. In the module, add the new version of the label module w/o removing the old one
  3. Place the module into a modules folder in your solution and reference the local copy (instead of the remote one)
  4. Update all the code to use the new label.id etc.
  5. terraform plan should show no changes w/o destroying anything else
  6. terraform apply
  7. Delete the old label from the code
  8. terraform apply

PePe avatar

is there an easy way to find exactly the provider used in the submodule, like in this case the label module?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea search for this error in the archives

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Someone else has a simpler solution I think

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am on my phone so it’s hard to search, but I recall seeing someone run some command to migrate the provider from - to HashiCorp

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
SweetOps #terraform for September, 2020

SweetOps Slack archive of #terraform for September, 2020. terraform Discussions related to Terraform or Terraform Modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s the command I was thinking of
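
For reference, the command in question (verify the source address in your own state first with terraform providers before running it):

```
terraform state replace-provider \
  registry.terraform.io/-/aws \
  registry.terraform.io/hashicorp/aws
```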

PePe avatar

ahhh cool

PePe avatar

I’m having a very weird issue with this project

PePe avatar

if I use version 0.31.0 of the module with tf 0.13.5 I can run apply no problem

PePe avatar

if I use version 0.32.0 the apply command hangs for ever trying to connect to mysql

PePe avatar

it stays in a loop of waiting to connect

PePe avatar
Error: Could not connect to server: dial tcp 127.0.0.1:3306: connect: connection refused
PePe avatar

I use a mysql user provider

PePe avatar

but if I lower the module version to 0.31 it works

PePe avatar

very strange

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Breaking Change: Fix outputs when `enabled=false`. Change Security Group rules from inline to resources by aknysh · Pull Request #80 · cloudposse/terraform-aws-rds-cluster

what Fix outputs when enabled=false Change Security Group rules from inline to resources why Fix outputs when enabled=false: coalesce will throw error when both parameters are empty Change Secur…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using inline rules was “our bad” and shouldn’t have made it through code review back in the day. Switching from inline rules to resources isn’t supported by terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You might need to delete those inline rules first.

PePe avatar

interesting

PePe avatar

but the weird thing is that the security group had not been changed yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
How to migrate from the inline Security Group rules to SG rules as separate resources · Issue #83 · cloudposse/terraform-aws-rds-cluster

This PR #80 changed the Security Group rules from inline to resource-based. This is a good move since using inline SG rules is a &quot;bad practice&quot;. Inline rules have many issues (one of them…

PePe avatar

this fails on a plan, which I find weird

PePe avatar

so the steps to reproduce are:

1.- run plan with module version 0.31.0 and confirm plan/apply works
2.- change module version to 0.32.0
3.- run terraform init
4.- run plan to confirm it times out again
PePe avatar

I have yet to be able to apply with version > 0.31.0 of the module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is this with atlantis on fargate?

PePe avatar

no, this is from my laptop

PePe avatar

I even thought it was my connection, restarted and such and same issue

PePe avatar

I was going to run it on atlantis to see if it is still a problem there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have you tried setting TF_LOG=debug?

PePe avatar

yep, the mysql provider tries to connect to mysql all the time

PePe avatar

until it times out after 5 min

PePe avatar

and if I lower the module version everything works

PePe avatar

it is VERY strange

PePe avatar

the module does not have anything to do with the mysql user provider

PePe avatar

I figured it out

PePe avatar

I tried aws provider 2.7 and 3.x and nothing

PePe avatar

there seems to be a problem getting the .endpoint (hostname) of the cluster from the module; sometimes it could not be resolved

PePe avatar

so once I changed

provider "mysql" {
  # endpoint = module.datamart_writer_cluster_us_east_1.endpoint
  endpoint = "xxxxxx.us-east-1.rds.amazonaws.com"
  username = var.datamart_db_user
  password = random_string.db_password.result
}
PePe avatar

the plan worked

PePe avatar

now, since the label module changed, the cluster identifier changed, so it wants to destroy everything, but that is another issue

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So without hard coding the hostname, what hostname was it connecting to in the debug output?

PePe avatar

localhost

PePe avatar

which makes no sense since the module.endpoint was used

PePe avatar

but now that I successfully ran plan, and since the plan wants to destroy the cluster because of a name change, my guess is there is a race condition: the plan tries to calculate the new endpoint name instead of using the one from the state

PePe avatar

or something along those lines
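A plausible explanation (an editorial sketch, not confirmed in this thread): a provider block configured from an attribute that isn’t known until apply can fall back to the provider’s defaults, and the mysql provider defaults to localhost:3306 — matching the connection-refused error above. One hedged workaround is to feed the provider a value that is known at plan time, e.g. via the aws_rds_cluster data source; the identifier below is a placeholder:

```hcl
# Sketch only - "datamart-writer" is a hypothetical cluster identifier.
data "aws_rds_cluster" "datamart" {
  cluster_identifier = "datamart-writer"
}

provider "mysql" {
  endpoint = data.aws_rds_cluster.datamart.endpoint
  username = var.datamart_db_user
  password = random_string.db_password.result
}
```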

PePe avatar

actually the reason the cluster is being recreated is the inline security group stuff, but I will follow Andriy’s guide to fix it

PePe avatar

mmm that was not it

PePe avatar

the cluster_identifier seems to be the problem, but even if supplied it still wants to delete everything

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@PePe I recommend copying the module locally into a modules folder, referencing it from your code, and making changes locally - this will allow you to iterate fast and find/fix the issues

PePe avatar

I found the version that introduces the breaking changes

PePe avatar

it’s 0.32.0

PePe avatar

we added the context.tf

PePe avatar
 cluster_identifier                  = var.cluster_identifier == "" ? module.this.id : var.cluster_identifier
PePe avatar

I think this could be

PePe avatar

now I know…

PePe avatar

it’s this thing

module.datamart_writer_cluster_us_east_1.aws_rds_cluster.primary[0]
PePe avatar

the introduction of primary and secondary clusters

PePe avatar

the id in my state looks like module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0]
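
If the resource address is the only thing that changed (default[0] → primary[0]), a terraform state mv should stop the destroy/recreate. This is an editorial sketch based on the addresses quoted above, not a verified fix for this module upgrade:

```
terraform state mv \
  'module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0]' \
  'module.datamart_writer_cluster_us_east_1.aws_rds_cluster.primary[0]'
```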

PePe avatar

By the way, what was the reason behind adding primary and secondary?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
if Cluster is part of a Global Cluster, use the lifecycle configuration block ignore_changes argument to prevent Terraform from showing differences for replication_source_identifier argument instead of configuring this value (and we can't use dynamic since it's not supported in lifecycle blocks).
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Update to `context.tf`. Add `primary` and `secondary` cluster resources by aknysh · Pull Request #79 · cloudposse/terraform-aws-rds-cluster

what Update to context.tf Add primary and secondary resource "aws_rds_cluster" why Standardization and interoperability Keep the module up to date If Cluster is part of a Global Cluste…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are not created at the same time, only one or the other

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should not affect any names or IDs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

primary and secondary are not very descriptive/good names for that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are just for regular cluster (“primary” by itself) or a cluster that is part of a Global Cluster (hence “secondary”)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(any of those can have their own secondary read replicas)

PePe avatar

I use Global RDS too, and I was using this module with global rds but I can see how it can make things easier

PePe avatar

the primary and secondary change altered the instance and cluster ids, and that is why my plan wants to destroy everything

2020-11-02

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

I tried using 0.12.24; terragrunt plan is working fine, but on terragrunt apply it’s going to replace the cluster and I am getting the following error:
Error: error creating EKS Cluster (dev_cluster): ResourceInUseException: Cluster already exists with name: dev_cluster
{
  RespMetadata: {
    StatusCode: 409,
    RequestID: "6a650024-bdab-4965-9940-d15506218621"
  },
  ClusterName: "dev_cluster",
  Message_: "Cluster already exists with name: dev_cluster"
}

  on .terraform/modules/eks/cluster.tf line 9, in resource "aws_eks_cluster" "this":
   9: resource "aws_eks_cluster" "this" {

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t think a terraform plan will catch those kinds of conflicts.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
03:55:14 PM

Waypoint URL Service Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service

Waypoint URL Service

HashiCorp Services’s Status Page - Waypoint URL Service.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
05:25:18 PM

Waypoint URL Service Nov 2, 17:04 UTC Update - Service is experiencing partial outage and returning “Deployment not found” for some deployments. We are continuing to investigate. Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:15:13 PM

Waypoint URL Service Nov 2, 21:58 UTC Update - Service is experiencing degraded performance and may timeout for some deployments. We are continuing to investigate. Nov 2, 17:04 UTC Update - Service is experiencing partial outage and returning “Deployment not found” for some deployments. We are continuing to investigate. Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service

Waypoint URL Service

HashiCorp Services’s Status Page - Waypoint URL Service.

Sean Turner avatar
Sean Turner

Has anyone ever used local-exec or data.external.this with aws-vault? I’ve tried a couple of times over the past 6 months and never had any luck :disappointed: Our aws-vault setup is as follows: aws-vault exec terraform-profile -- terraform apply => assume role in the account listed in the tf code to deploy to

from there, I would think that any aws cli or other sdk calls would fall under the assumed role, but this doesn’t appear to be the case. I think it’s using the role associated with terraform-profile, rather than the assumed role

I’m trying to run the following:

data "external" "this" {
  program = ["go", "run", "${path.module}/../main.go", "-i", "i-12345678"]
}

main.go hits cloudwatch ListMetrics() for metrics on the provided instance id, and creates a json response which is then in theory used to set up disk monitoring with for_each without needing to input the fstype and device name when using aws_cloudwatch_metric_alarm in terraform. The response would look like the following:

{
  "/": { "Device": "rootfs", "FSType": "rootfs" },
  "/boot": { "Device": "nvme0n1p1", "FSType": "ext4" }
}

unfortunately, main.go is returning an empty response (which errors by design) when terraform runs the script due to it not being able to pull metrics with the provided instance id. Thoughts?

loren avatar
loren

Exactly right, you would need to configure vault to assume the role, not your terraform provider

loren avatar
loren

Or write your script to assume the role, which we’ve done before. Gimme a sec to find an example….

loren avatar
loren
plus3it/terraform-aws-tardigrade-security-hub

Terraform module to create SecurityHub. Contribute to plus3it/terraform-aws-tardigrade-security-hub development by creating an account on GitHub.

Sean Turner avatar
Sean Turner

Cheers again mate. Was just thinking I’m going to make my error print the output of get caller identity

Sean Turner avatar
Sean Turner

Ah this makes sense. Will try shortly :)

Sean Turner avatar
Sean Turner

So I got assume role working, which is great; I hadn’t done that before. But now I think my problem is that I am returning a map of maps (see above), not a map of string values as specified in the docs. Looks like I will need to flatten things out, which is a bit messy. Thanks for this

Sean Turner avatar
Sean Turner

Yep, doesn’t error out when I get rid of nesting

> data.external.this.result
{
  "Device" = "tmpfs"
  "FSType" = "tmpfs"
}
loren avatar
loren

You can return a json-encoded object, then use jsondecode() on the result

Sean Turner avatar
Sean Turner

Ah, even with nesting? I think all of the values need to be strings per the docs. Though this would be great! https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source

loren avatar
loren

Yes, you create a map where all of its keys have json-encoded values, that makes them all strings!
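
The encoding trick can be sketched in Python (example data assumed from the snippet above; in the real setup the Go program would do this before printing its result to stdout for the external data source):

```python
import json

# Assumed example data: the nested structure the Go helper produced.
metrics = {
    "/": {"Device": "rootfs", "FSType": "rootfs"},
    "/boot": {"Device": "nvme0n1p1", "FSType": "ext4"},
}

# The external data source protocol only accepts a flat map of strings,
# so JSON-encode each nested value; Terraform side then does
# jsondecode(data.external.this.result["/boot"])["Device"].
flat = {path: json.dumps(attrs) for path, attrs in metrics.items()}
print(json.dumps(flat))

# Alternative (loren's single-key variant): put the whole structure under
# one key, decoded in HCL as jsondecode(data.external.this.result["metrics"]).
single = {"metrics": json.dumps(metrics)}
print(json.dumps(single))
```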

Sean Turner avatar
Sean Turner

Ooooh damnnnnn. Will give it a crack tomorrow

loren avatar
loren

Or, at that point, a map with a single key and your entire complex structure json-encoded as the value… Maybe a bit easier to refactor that way

Sean Turner avatar
Sean Turner

Smart :)

Sean Turner avatar
Sean Turner

Didn’t feel like waiting, got it working, I owe you a beer :)

loren avatar
loren

Glad it turned out

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
04:15:12 AM

Waypoint URL Service Nov 3, 03:59 UTC Resolved - Services have normalized. Nov 2, 21:58 UTC Update - Service is experiencing degraded performance and may timeout for some deployments. We are continuing to investigate. Nov 2, 17:04 UTC Update - Service is experiencing partial outage and returning “Deployment not found” for some deployments. We are continuing to investigate. Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service

Waypoint URL Service

HashiCorp Services’s Status Page - Waypoint URL Service.

2020-11-01
