#terraform (2022-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-09-01

Nitin avatar

what

• Remove join splat on module.security_group_arn

why

• Fix conflict with using custom security group in associated_security_group_ids and argument create_security_group is false

references

• N/A

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

please post in #pr-reviews

srikaanth penugonda avatar
srikaanth penugonda

Hi, I have a map object as below. I was able to go one level down and get the entire “dev” value. How do I get only the node_group_name value?

managed_node_groups = {
  "dev" = {
    eks = {
      node_group_name = "node-group-name1"
      instance_types  = ["m5.large"]
      update_config = [{
        max_unavailable_percentage = 30
      }]
    }
    mng_custom_ami = {
      node_group_name = "mng_custom_ami"
      custom_ami_id   = "ami-0e28cf2562b7b3c9d"
      capacity_type   = "ON_DEMAND"
    }
  }
  "qe" = {
    eks = {
      node_group_name = "node-group-name2"
      instance_types  = ["m5.large"]
    }
    mng_custom_ami = {
      node_group_name = "mng_custom_ami"
      custom_ami_id   = "ami-0e28cf2562b7b3c9d"
      capacity_type   = "ON_DEMAND"
      block_device_mappings = [
        {
          device_name = "/dev/xvda"
          volume_type = "gp3"
          volume_size = 150
        }
      ]
    }
  }
}

variable "env" {}

locals {
  mng = var.managed_node_groups[var.env]
}
Max avatar
var.managed_node_groups[*].eks["node_group_name"]
Max avatar
References to Values - Configuration Language | Terraform by HashiCorp

Reference values in configurations, including resources, input variables, local and block-local values, module outputs, data sources, and workspace data.

srikaanth penugonda avatar
srikaanth penugonda

Thank you. How do I get the node_group_name of just the first element for each environment, if I don’t want to hardcode .eks below?

var.managed_node_groups[*].eks["node_group_name"]
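
A sketch of both lookups using `for` expressions may help here; splat expressions (`[*]`) apply to lists and sets, not maps like `managed_node_groups`, so a `for` expression is the usual approach. Note that `values()` orders groups lexically by key, so “first element” means first in key order:

```hcl
variable "env" {}

locals {
  # node_group_name of the "eks" group for the selected environment:
  eks_name = var.managed_node_groups[var.env].eks.node_group_name

  # first group's node_group_name per environment, without hardcoding .eks:
  first_names = {
    for env, groups in var.managed_node_groups :
    env => values(groups)[0].node_group_name
  }
}
```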

2022-09-02

kirupakaran avatar
kirupakaran

Could anyone suggest what the ideal auto-scaling setup is for high traffic on ECS Fargate, and also send me a GitHub link for my reference? Thanks in advance.

Alex Jurkiewicz avatar
Alex Jurkiewicz

7 is the perfect scale

kirupakaran avatar
kirupakaran

@Alex Jurkiewicz would you recommend any GitHub links for creating a perfect autoscaling TF?

Alex Jurkiewicz avatar
Alex Jurkiewicz

this slack is run by Cloudposse, who publish many Terraform modules. Check out their repos here: https://github.com/cloudposse/

Cloud Posse

DevOps Accelerator for Startups. Hire Us! https://slack.cloudposse.com/

Mohammed Yahya avatar
Mohammed Yahya

start with these resources, do a few tests:

resource "aws_appautoscaling_target" "ecs_target" {
  max_capacity       = 4
  min_capacity       = 1
  resource_id        = "service/${aws_ecs_cluster.example.name}/${aws_ecs_service.example.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}
resource "aws_appautoscaling_scheduled_action" "dynamodb" {
  name               = "dynamodb"
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
  schedule           = "at(2006-01-02T15:04:05)"

  scalable_target_action {
    min_capacity = 1
    max_capacity = 200
  }
}
Mohammed Yahya avatar
Mohammed Yahya

in your case use CloudPosse’s modules as target
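
Since the question is about scaling under high traffic, a target-tracking policy attached to the same scalable target is a common starting point. A sketch (the policy name and the 60% CPU target are arbitrary examples):

```hcl
resource "aws_appautoscaling_policy" "ecs_cpu" {
  name               = "ecs-cpu-target-tracking" # example name
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60 # keep average service CPU near 60%
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```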

kirupakaran avatar
kirupakaran

thank you

2022-09-03

2022-09-04

Amit Karpe avatar
Amit Karpe

What is the best practice to install packages and configure a few settings in an EC2 instance? Do you prefer a provisioner with “remote-exec”, or Ansible, or Packer? I need to run an application in four EC2 instances with pre-configuration. I have a shell script ready but wanted to know a better approach.

managedkaos avatar
managedkaos

I would suggest keeping the server configuration out of terraform and use something like Ansible instead.

For my projects that involve a server or two, an application installation, and a bit of configuration, I’ve found the following to be the best approach:

  1. Keep the application code in one repo
  2. Keep the TF infra code in another repo
  3. Keep the server and application config in another repo and use Ansible to:
     a. Install user/service accounts
     b. Configure and update the server
     c. Deploy the application
managedkaos avatar
managedkaos

Having Ansible and config in its own repo makes it easy to manage and deploy environments in a way that doesn’t require re-running TF or rebuilding the application. Also, it’s much easier to track configuration changes vs app or infra changes. Yes, in some cases a big change requires coordination across all three repos, but in most cases (daily operation) the only thing that changes is the config repo, and it’s much easier to track and apply changes there.

Amit Karpe avatar
Amit Karpe

Thank you. I will revise my Ansible knowledge. I was planning to invest time to learn Packer (to build machine images) and then deploy/provision using Terraform.

kirupakaran avatar
kirupakaran

Hi everyone, I’m supposed to create ECS in multiple regions using TF. Right now ECS is running in us-east-1. Could anyone help me solve this problem? Thanks in advance.

2022-09-05

jc avatar

Hey guys - I create an ECR repository in my TF. How do you flag the ECR part to avoid destroying it when executing terraform destroy?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can delete the resources manually from the state file before running terraform destroy

Alex Jurkiewicz avatar
Alex Jurkiewicz

See terraform state rm
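
An in-code alternative is the prevent_destroy lifecycle flag, though note it makes terraform destroy fail with an error rather than silently skip the resource. A minimal sketch (repository name is illustrative):

```hcl
resource "aws_ecr_repository" "this" {
  name = "my-repo" # example name

  lifecycle {
    prevent_destroy = true # any plan that would delete this resource errors out
  }
}
```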

jc avatar

Awesome! Thanks @Alex Jurkiewicz!

2022-09-06

Manjunath shetty avatar
Manjunath shetty

I have created multiple EC2 instances using count. One of those instances was deleted using the -target option or manually. In subsequent deployments I want Terraform to skip the deployment of the manually deleted instance. How do I achieve this?

Manjunath shetty avatar
Manjunath shetty
resource "aws_instance" "web" {
  count = 4 # create four similar EC2 instances

  ami           = "ami-00785f4835c6acf64"
  instance_type = "t2.micro"

  tags = {
    Name = "Server ${count.index}"
  }

  lifecycle {
    ignore_changes = [
      aws_instance.web[1]
    ]
  }
}
Manjunath shetty avatar
Manjunath shetty

I tried to implement it using lifecycle ignore_changes but I’m getting the error: This object has no argument, nested block, or exported attribute named “aws_instance”.

Manjunath shetty avatar
Manjunath shetty

Any pointers on this?

Pierre-Yves avatar
Pierre-Yves

I’m not sure that ignore_changes is compatible with what you want to achieve. You can ignore changes for a specific attribute or block of a resource, but [I THINK] not for an entire resource.

It’s my own opinion; I’ll let others answer if it is possible.

Manjunath shetty avatar
Manjunath shetty

Thanks @Pierre-Yves. If we reduce the count then it will be impacted across all the subnets. Is there any other option without reducing the count?

Pierre-Yves avatar
Pierre-Yves

What do you mean by “reduce the count”?

For my part, I was not telling you to change your count ^^. I was just saying that I think you can’t use the ignore_changes meta-argument for your need

mrwacky avatar
mrwacky

The answer is probably:

• reduce the count

• use a moved block to tell Terraform what you did

Refactoring | Terraform by HashiCorp

How to make backward-compatible changes to modules already in use.
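
A sketch of the moved-block approach, assuming the count drops from 4 to 3 and it was instance [1] that was deleted; the surviving higher-index instances are re-addressed so Terraform doesn’t destroy and recreate them:

```hcl
# Terraform 1.1+ moved blocks: record that existing instances now live
# at new index addresses after the count was reduced.
moved {
  from = aws_instance.web[2]
  to   = aws_instance.web[1]
}

moved {
  from = aws_instance.web[3]
  to   = aws_instance.web[2]
}
```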

Manjunath shetty avatar
Manjunath shetty

Thanks @mrwacky, it worked


2022-09-07

kirupakaran avatar
kirupakaran

Can anyone help me assign the ECS Fargate public IP to the target group? Right now the private IP is assigned on the target group.

Release notes from terraform avatar
Release notes from terraform
09:13:32 PM

v1.2.9 1.2.9 (September 07, 2022) ENHANCEMENTS: terraform init: add link to documentation when a checksum is missing from the lock file. (#31726)

Backport missed commits from #31408 by liamcervante · Pull Request #31726 · hashicorp/terraform

Original PR: #31408 Backport PR: #31480 For some reason the backport process only picked up the first two commits from the original PR. This PR manually copies over the changes missed by the backpo…

2022-09-08

jc avatar

Hey guys,

An initial terraform apply failed due to an expired AWS credential. I updated the creds and reran apply; it failed once again because the resources already exist as a result of the initial apply earlier.

How do you approach with this kind of case?

Ralf Pieper avatar
Ralf Pieper

I think a screen share might let me understand. If you can’t rerun, something bigger is wrong, like the way the code is structured.

Ralf Pieper avatar
Ralf Pieper

I don’t know what the resource is, the simple solution would be to delete it, if that is possible? Then it will be rebuilt.

Ralf Pieper avatar
Ralf Pieper

I have seen it sometimes where a plan says resource will get remade, even though I think it isn’t needed.

Chris Dobbyn avatar
Chris Dobbyn

Because your session expired while the resource was being created, and presumably your state lives in S3 or something similar (dependent on your session), the state has gone out of whack with reality.

In order to remediate you will need to perform terraform import operations on the resources that were created and then not recorded into state.
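
At the time this meant the terraform import CLI command; later Terraform versions (1.5+) also allow declaring imports in configuration. A sketch with a hypothetical resource address and ID:

```hcl
# Terraform 1.5+ config-driven import; earlier versions would run
# `terraform import aws_s3_bucket.assets my-assets-bucket` instead.
import {
  to = aws_s3_bucket.assets # hypothetical resource address
  id = "my-assets-bucket"   # hypothetical bucket name
}
```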

Jonathan Forget avatar
Jonathan Forget

I think when an apply fails due to expired credentials, it should save a tfstate locally; pushing this tfstate to your backend should fix the issue.

OliverS avatar
OliverS

I discovered recently while I was looking at using HCL Go libraries to do our own config processing, that TF 1.3 will have some pretty awesome improvements to config defaults. And I saw in this channel a syndicated post about it just now, but it might have gotten missed, so I’m writing this.

The improvement actually goes way beyond providing the optional value in the optional() function call. That improvement alone is great, because it allows for a much more natural way to declare default objects and makes it easier to grok the structure (instead of using a separate default attribute in the variable block, or the defaults() function).

But HC also fixed a major issue with defaults merging as it existed in 1.2 (in both the default attribute and the defaults() function): 1.3 will create default nested objects to full depth based on the spec, which the experimental support in 1.2 does not do, thus rendering the defaults() function almost useless (IMO).

There are really only two use cases that these 1.3 improvements do not solve for me, but I can live without them (whereas the issues that 1.3 fixes were deal-breakers for us, and we were going to roll our own using the hclwrite lib).

I’ll be moving our current in-house config system to use the new capabilities of 1.3 over the next few weeks (depends on client priorities, might take longer), very excited to see how far I can get.

v1.3.0-beta1 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…

loren avatar

does the defaults() function still even exist in 1.3? i thought it was part of the optional experiment, and the experiment was removed in 1.3…

OliverS avatar
OliverS

yes, defaults() has been removed entirely (the optional-attributes experiment has been removed altogether). Only optional() is left (and it’s a lot better than before, as I explained).
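
A sketch of what the 1.3 behavior enables, illustrating nested defaults applied to full depth (the variable and attribute names are made up):

```hcl
variable "settings" {
  type = object({
    name    = string              # required attribute
    retries = optional(number, 3) # optional with a scalar default
    log = optional(object({
      level = optional(string, "info")
    }), {}) # an empty-object default still gets its nested defaults filled in
  })
}
```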

loren avatar
Request for Feedback: Optional object type attributes with defaults in v1.3 alpha

Hi all , I’m the Product Manager for Terraform Core, and we’re excited to share our v1.3 alpha , which includes the ability to mark object type attributes as optional, as well as set default values (draft documentation here). With the delivery of this much requested language feature, we will conclude the existing experiment with an improved design. Below you can find some background information about this language feature, or you can read on to see how to participate in the alpha and pro…

OliverS avatar
OliverS

yes that’s how I found out about it

OliverS avatar
OliverS

Actually, found out about it in https://github.com/hashicorp/terraform/issues/28344 which also has interesting background about current (ie 1.2 experiment) limitations and links to that one you posted

Alex Jurkiewicz avatar
Alex Jurkiewicz

it should be great. But I wouldn’t be too quick to use Terraform betas. Some of them have done things like zeroing out state in the past

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think a 1.x beta (or perhaps even x.0) had a bug where it would plan to remove all resources in certain conditions?

srikaanth penugonda avatar
srikaanth penugonda

Hey guys, how are you managing user creation in RDS? Any best practices?

jose.amengual avatar
jose.amengual

clusters?

jose.amengual avatar
jose.amengual

aurora?

jose.amengual avatar
jose.amengual

global?

jose.amengual avatar
jose.amengual

mysql?

jose.amengual avatar
jose.amengual

we need more details

srikaanth penugonda avatar
srikaanth penugonda

Aurora/RDS MySQL clusters. I tried to search for a resource in Terraform to create generic users other than the master one, but couldn’t find any.

jose.amengual avatar
jose.amengual

there is a mysql user provider you can use
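A sketch of that approach; the provider source and exact attributes should be verified against the MySQL provider’s docs (the community petoju/mysql fork of the original hashicorp/mysql provider is one option, and all names below are examples):

```hcl
# Connect the MySQL provider to an existing Aurora/RDS cluster.
provider "mysql" {
  endpoint = aws_rds_cluster.this.endpoint # assumes an existing cluster
  username = "admin"
  password = var.master_password
}

# Create a non-master application user...
resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password
}

# ...and grant it the privileges it needs.
resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = "appdb" # example database
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
```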

Warren Parad avatar
Warren Parad

Not? Use IAM connected RDS user integration

jose.amengual avatar
jose.amengual

you can use that too, yes I forgot about that

2022-09-09

Jonas Steinberg avatar
Jonas Steinberg

Module development and best practices: looking for experience and opinions

Jonas Steinberg avatar
Jonas Steinberg

Tough not to have some of these overlap with just vanilla tf practices, but doing this for my team and thought I would post here for other people’s input as well

• modules do not reinvent the wheel, e.g. if there is an AWS module, a Cloud Posse module, or similar, it is used instead of home-rolling
• modules have documentation and examples
• modules have terratests
• module code avoids code smells like ternaries and excessive remote state lookups
• modules avoid using shell providers as much as possible
• modules avoid reading or writing files at local or remote locations for the purpose of getting or creating effectively hard-coded information to then be used in later logic
• modules are versioned and a versions file is used to pin modules
• modules expose important outputs
• modules make limited use of custom scripts
• modules follow a universally agreed-upon naming convention
• modules are integrated with environment-specific code and do not rely on lookups, etc. to figure out which environment-specific values to get
• modules are not too specific, e.g. a databricks-s3-encrypted-with-kms-and-object-replication module should instead be databricks-component-a, databricks-component-b, …, kms-cm-key, and s3 modules, and all of these should be used from the TF registry via Cloud Posse, AWS, or similar well-known publishers

• the root module should only call modules

• aws account numbers should be looked up, not hardcoded in tf files
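
As one concrete illustration of the pinning points above (module name and versions are examples, not recommendations):

```hcl
# Pin registry modules to an exact version...
module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "1.1.0"
}

# ...and pin providers with explicit constraints, e.g. in versions.tf:
terraform {
  required_version = ">= 1.2"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # pessimistic constraint: any 4.x release
    }
  }
}
```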

Pierre-Yves avatar
Pierre-Yves

Thanks for sharing this .

loren avatar

I would add one: avoid using depends_on if at all possible, and make a special effort to avoid module-level depends_on (as opposed to resource-level depends_on). Always prefer passing attributes instead, which Terraform will use to construct the graph.
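
A sketch of the attribute-passing pattern loren describes; because the module input references the subnet’s attribute, Terraform infers the dependency on its own (the module path and input name are hypothetical):

```hcl
module "app" {
  source    = "./modules/app"    # hypothetical module
  subnet_id = aws_subnet.main.id # implicit dependency via the attribute
}

# versus the blunt, graph-wide alternative to avoid where possible:
# module "app" {
#   source     = "./modules/app"
#   depends_on = [aws_subnet.main]
# }
```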

Jonas Steinberg avatar
Jonas Steinberg

Cool @loren nice one. Love that.

Alex Jurkiewicz avatar
Alex Jurkiewicz

“the root module should only call modules”? What is a “root module”?

“A versions file is used to pin modules” Do you mean pinning providers?

I agree with most of the rest, but the list feels a bit “write clean code where possible, we won’t explain why these dot points lead to clean code or why clean code is good tho”

Jonas Steinberg avatar
Jonas Steinberg
Modules Overview - Configuration Language | Terraform by HashiCorp

Modules are containers for multiple resources that are used together in a configuration. Find resources for using, developing, and publishing modules.

Jonas Steinberg avatar
Jonas Steinberg

I meant to say modules should be pinned in source references

loren avatar

i consider a “root module” to be one that owns the backend config, state, the lock file, provider block configurations, and the config inputs

loren avatar

basically a “module” that you have designed explicitly to support directly running the init/plan/apply/destroy workflow for one or more configurations

Simpson Say avatar
Simpson Say

Hi team — hoping to get some eyes on this when someone has the time: https://github.com/cloudposse/terraform-datadog-platform/pull/71

what

• lookup function did not pull the correct value required for thresholds, and instead went to the default
• This resulted in an error when creating an SLO of type monitor when using more than one threshold

why

• We are creating all of our metrics, monitors, SLOs, etc with IaC, using cloud posse’s modules (thanks!)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

please post to #pr-reviews

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Does the free edition of Terraform Cloud still require that each workspace hardcode AWS credentials? Or can you set up an IAM role that it can assume?

Fizz avatar

In the free version you can configure the workspace to use API mode which will then make TF cloud just a state holder. In API mode, you define the workflow and provide the hardware to run the plans. E.g. you could run it in GitHub actions with GitHub runners. This then allows you to decide how you want to provide credentials. A role on the runners? GitHub secrets configured in the pipeline that then assumes a role? Basically you have full control.

Fizz avatar

You’ll also need to set local execution mode.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Fizz just confirming my understanding.

in that mode though, there are zero audit trails, no confirmations, and nothing represented in TFC, right? It’s only serving as the state backend (a glorified s3 bucket). To your point, you could then run terraform in conventional CI/CD, but TFC is providing no other benefit than state management.

Fizz avatar

Yes. In the paid version, you can have runners on your own infra managed by tf cloud. There you can attach a role to your runner (assuming you are on AWS)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I just find it odd that they don’t support the more native integration model where you provision an IAM role that trusts their principal and allow them to assume the role. This is how free/entry-level plans of Datadog and Spacelift work. Presumably others as well.

Fizz avatar

Yep. Cross account role that can be assumed by a user, or role, in their account would be a nice feature.

Fizz avatar

It might be a deliberate omission though. I’ve heard on the paid plan they charge $50 per apply. So it seems like they really want to encourage you to run on your own hardware.

2022-09-11

2022-09-12

muhaha avatar

Hey, are you using Checkov/tfsec/KICS in CI (GitHub Actions, for example)? I just discovered https://github.com/security-alert/security-alert/tree/master/packages/sarif-to-comment/, which can effectively convert SARIF to a GH comment… But it’s not working correctly, because all these tools pre-download modules and analyze them with the given input on the filesystem. So it can generate comments, but it will generate diff URLs based on the local path, instead of just pointing to the correct “upstream” module called from main.tf. Ideas?

Shlomo Daari avatar
Shlomo Daari

Does anyone know why I’m getting this error? An argument named "iam_role_additional_policies" is not expected here. In the Terraform site, it shows that this should be under the module eks section.

Ralf Pieper avatar
Ralf Pieper

I’m happy to take a look, I don’t think I have enough context to do anything but a google search.

Shlomo Daari avatar
Shlomo Daari

I tried to configure the following:

    create_iam_role          = true
    iam_role_name            = "eks-manage-nodegroup-shlomo-tf"
    iam_role_use_name_prefix = false
    iam_role_description     = "Self managed node group role"
    iam_role_tags = {
      Purpose = "Protector of the kubelet"
    }
    iam_role_additional_policies = [
      "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
      "arn:aws:iam::810711266228:policy/SecretsManager-CurrentValueROAccess",
      "arn:aws:iam::810711266228:policy/SecretsManager-CurrentValueROAccess"
    ]

https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=inputs
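
One common cause of that error is placing the input at the wrong nesting level: in some versions of terraform-aws-modules/eks, iam_role_additional_policies is an input of the node-group submodule, so it belongs inside a node-group definition rather than at the top level of the module block. A sketch (the version constraint and structure are assumptions; check the inputs tab for the version in use, and note that in newer releases this input is a map rather than a list):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0" # assumed version

  # ...cluster inputs...

  eks_managed_node_groups = {
    default = {
      iam_role_additional_policies = [
        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
      ]
    }
  }
}
```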

Shlomo Daari avatar
Shlomo Daari

Thank you for the help

2022-09-13

Tommy avatar

Is it somehow possible to test the GitHub Actions pipelines of the modules locally or within the fork? I’m having some trouble passing all the pipeline steps.

Andrey Taranik avatar
Andrey Taranik

@Tommy yes, the answer is act

loren avatar

act is awesome! Though, in most cases, for me it ended up being slower than just pushing and letting github handle it. I store logs as artifacts so I can troubleshoot better

Tommy avatar

thank you, I will take a look!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and watch out, you can do things in ACT that do not work in the actual github actions runners

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know some members of the team have tried it a couple of times and given up because they didn’t get any further. They’d get it working in act, then it wouldn’t work in the runners, and vice versa.

2022-09-14

Release notes from terraform avatar
Release notes from terraform
06:13:34 PM

v1.3.0-rc1 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Lol, this headline could make some people do a double take…

jimp avatar

Hypothetical reasons to arrest an actual Terraform founder in this thread please

jimp avatar

For example, South Korea court reportedly issues arrest warrant for Terraform founder for AWS Provider v3 rollout.

Tyrone Meijn avatar
Tyrone Meijn

South Korea court reportedly issues arrest warrant for Terraform founder for charges that cannot be determined until apply

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

South Korea court reportedly issues an arrest warrant for Terraform founder for abusing local exec’s to manipulate the stock price.

Mallikarjuna M avatar
Mallikarjuna M

Hi team, can someone help me with creating an IAM user in Terraform by passing variables from a values.yml file?
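
A sketch of one way to do this with yamldecode, assuming a values.yml that contains a top-level users: list of names:

```hcl
locals {
  # Parse the YAML file into a Terraform value (assumed structure).
  config = yamldecode(file("${path.module}/values.yml"))
}

resource "aws_iam_user" "this" {
  for_each = toset(local.config.users) # one IAM user per name in the list
  name     = each.value
}
```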

2022-09-15

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Has anyone tried using any of the existing EKS related TF modules to deploy a Windows EKS node group for a cluster?

automationtrainee avatar
automationtrainee

Anyone have an idea on which module I need to update this variable in?
module.tf_cloud_builder.module.bucket.google_storage_bucket.bucket: Destroying… [id=]

│ Error: Error trying to delete bucket containing objects without force_destroy set to true
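
force_destroy is an argument on the google_storage_bucket resource itself, so if the bucket is created by a nested module, that module has to expose it as an input. A minimal sketch of the resource-level setting (names are illustrative):

```hcl
resource "google_storage_bucket" "bucket" {
  name          = "example-bucket" # illustrative
  location      = "US"
  force_destroy = true # allow destroy even when the bucket still holds objects
}
```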

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’d start by looking at the source of whatever module is used for tf_cloud_builder, as it appears to be calling the bucket module that is creating it, so it may be a variable being passed along.

automationtrainee avatar
automationtrainee

Thanks! I started down that path but need to check again

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

The more you work with it, the more sense the state paths make to trace.

jose.amengual avatar
jose.amengual

What’s the greatest and latest on TF pipelines lately? How do you run multi-tenant/user self-serve infra with feature branches in multi-account, multi-region setups?

jose.amengual avatar
jose.amengual

Interested to know how the pipeline is set up, how the input variables are passed over, and what the user flow is.
