#terraform (2022-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-09-01

Nitin avatar

what

• Remove join splat on module.security_group_arn

why

• Fix conflict with using custom security group in associated_security_group_ids and argument create_security_group is false

references

• N/A

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

please post in #pr-reviews

sripe avatar

Hi, I have a map object as below. I was able to go one level down and get the entire “dev” value. How do I get only the node_group_name value?

managed_node_groups = {
  "dev" = {
    eks = {
      node_group_name = "node-group-name1"
      instance_types  = ["m5.large"]
      update_config = [{
        max_unavailable_percentage = 30
      }]
    }
    mng_custom_ami = {
      node_group_name = "mng_custom_ami"
      custom_ami_id   = "ami-0e28cf2562b7b3c9d"
      capacity_type   = "ON_DEMAND"
    }
  }
  "qe" = {
    eks = {
      node_group_name = "node-group-name2"
      instance_types  = ["m5.large"]
    }
    mng_custom_ami = {
      node_group_name = "mng_custom_ami"
      custom_ami_id   = "ami-0e28cf2562b7b3c9d"
      capacity_type   = "ON_DEMAND"
      block_device_mappings = [
        {
          device_name = "/dev/xvda"
          volume_type = "gp3"
          volume_size = 150
        }
      ]
    }
  }
}

variable "env" {}

mng = var.managed_node_groups[var.env]
Max avatar
var.managed_node_groups[*].eks["node_group_name"]
Max avatar
References to Values - Configuration Language | Terraform by HashiCorp

Reference values in configurations, including resources, input variables, local and block-local values, module outputs, data sources, and workspace data.

sripe avatar

Thank you. How do I get the node_group_name of just the first element for each environment, if I don't want to hardcode .eks below?

var.managed_node_groups[*].eks["node_group_name"]
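
One way to avoid the hardcoded .eks is a for expression (a minimal sketch, assuming the map shape above; values() returns each environment's groups sorted by key, so "eks" sorts first here):

locals {
  first_node_group_names = {
    for env, groups in var.managed_node_groups :
    env => values(groups)[0].node_group_name
  }
}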

2022-09-02

kirupakaran avatar
kirupakaran

Could anyone suggest what the ideal auto-scaling configuration would be for ECS Fargate under high traffic, and send me a GitHub link for reference? Thanks in advance.

Alex Jurkiewicz avatar
Alex Jurkiewicz

7 is the perfect scale

kirupakaran avatar
kirupakaran

@Alex Jurkiewicz would you recommend any GitHub links for creating a good autoscaling TF setup?

Alex Jurkiewicz avatar
Alex Jurkiewicz

this slack is run by Cloudposse, who publish many Terraform modules. Check out their repos here: https://github.com/cloudposse/

Cloud Posse

DevOps Accelerator for Startups Hire Us! https://slack.cloudposse.com/

Mohammed Yahya avatar
Mohammed Yahya

start with these resources, do a few tests,

resource "aws_appautoscaling_target" "ecs_target" {
  max_capacity       = 4
  min_capacity       = 1
  resource_id        = "service/${aws_ecs_cluster.example.name}/${aws_ecs_service.example.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}
resource "aws_appautoscaling_scheduled_action" "dynamodb" {
  name               = "dynamodb"
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
  schedule           = "at(2006-01-02T15:04:05)"

  scalable_target_action {
    min_capacity = 1
    max_capacity = 200
  }
}
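
For sustained high traffic (as opposed to the scheduled action above), a target-tracking policy is the usual companion to the scaling target. A minimal sketch; the 60% CPU target and cooldowns are assumptions to tune per service:

resource "aws_appautoscaling_policy" "ecs_cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value       = 60  # assumed target utilization
    scale_in_cooldown  = 300
    scale_out_cooldown = 60
  }
}
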
Mohammed Yahya avatar
Mohammed Yahya

in your case use CloudPosse’s modules as target

kirupakaran avatar
kirupakaran

thank you

2022-09-03

2022-09-04

Amit Karpe avatar
Amit Karpe

What is the best practice for installing packages and configuring a few settings on an EC2 instance? Do you prefer a provisioner with “remote-exec”, or Ansible, or Packer? I need to run an application on four EC2 instances with pre-configuration. I have a shell script ready but wanted to know a better approach.

managedkaos avatar
managedkaos

I would suggest keeping the server configuration out of terraform and use something like Ansible instead.

For my projects that involve a server or two, an application installation, and a bit of configuration, I’ve found the following to be the best approach:

  1. Keep the application code in one repo
  2. Keep the TF infra code in another repo
  3. Keep the server and application config in a third repo and use Ansible to: a. install user/service accounts, b. configure and update the server, c. deploy the application
managedkaos avatar
managedkaos

Having Ansible and config in its own repo makes it easy to manage and deploy environments in a way that doesn't require re-running TF or rebuilding the application. Also, it's much easier to track configuration changes vs app or infra changes. Yes, in some cases a big change requires coordination across all three repos, but in most cases (daily operation) the only thing that changes is the config repo, and it's much easier to track and apply changes there.

1
Amit Karpe avatar
Amit Karpe

Thank you. I will revise my Ansible knowledge. I was planning to invest time to learn Packer (to build machine images) and then deploy/provision using Terraform.

kirupakaran avatar
kirupakaran

Hi everyone, I'm supposed to create ECS in multiple regions using TF; right now ECS is running in us-east-1. Could anyone help me solve this problem? Thanks in advance.

2022-09-05

James avatar

Hey guys - I have ECR creation in my TF. How do you flag the ECR part to avoid destroying it when executing terraform destroy?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can delete the resources manually from the state file before running terraform destroy

2
Alex Jurkiewicz avatar
Alex Jurkiewicz

See terraform state rm
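
A sketch of that workflow (the resource address is a hypothetical example; use the address from your own state):

terraform state rm aws_ecr_repository.this
terraform destroy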

James avatar

Awesome! Thanks @Alex Jurkiewicz!

2022-09-06

Manjunath shetty avatar
Manjunath shetty

I have created multiple EC2 instances using count. One of the EC2 instances was deleted using the -target option or manually. In subsequent deployments I want Terraform to skip the deployment of the manually deleted instance. How do I achieve this?

Manjunath shetty avatar
Manjunath shetty
resource "aws_instance" "web" {

   count = 4 # create four similar EC2 instances

  ami           = "ami-00785f4835c6acf64"
  instance_type = "t2.micro"

  tags = {
    Name = "Server ${count.index}"
  }

  lifecycle {
    ignore_changes = [
      aws_instance.web[1]
    ]
  }

  
}
Manjunath shetty avatar
Manjunath shetty

I tried to implement this using lifecycle ignore_changes but am getting the error: This object has no argument, nested block, or exported attribute named “aws_instance”.

Manjunath shetty avatar
Manjunath shetty

Any pointers on this?

Pierre-Yves avatar
Pierre-Yves

I'm not sure that ignore_changes is compatible with what you want to achieve. You can ignore changes for a specific attribute or block of a resource, but [I THINK] not for an entire resource.

It's my own opinion; I'll let others answer whether it is possible.

1
Manjunath shetty avatar
Manjunath shetty

Thanks @Pierre-Yves. If we reduce the count then it will impact all the subnets. Is there any other option without reducing the count?

Pierre-Yves avatar
Pierre-Yves

What do you mean by “reduce the count”?

For my part, I was not telling you to change your count ^^. I was just saying that I think you can't use the ignore_changes meta-argument for your need.

mrwacky avatar
mrwacky

The answer is probably:

• reduce the count

• use a moved block to tell Terraform what you did (see the sketch below)

Refactoring | Terraform by HashiCorp

How to make backward-compatible changes to modules already in use.
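
A minimal sketch of that approach (indices are hypothetical): if instance 1 of 4 was deleted, reduce count to 3 and tell Terraform that the survivors shifted down:

moved {
  from = aws_instance.web[2]
  to   = aws_instance.web[1]
}

moved {
  from = aws_instance.web[3]
  to   = aws_instance.web[2]
}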

Manjunath shetty avatar
Manjunath shetty

Thanks @mrwacky, it worked

bananadance1

2022-09-07

kirupakaran avatar
kirupakaran

Can anyone help me assign the ECS Fargate public IP to the target group? Right now the private IP is assigned to the target group.

Release notes from terraform avatar
Release notes from terraform
09:13:32 PM

v1.2.9 1.2.9 (September 07, 2022) ENHANCEMENTS: terraform init: add link to documentation when a checksum is missing from the lock file. (#31726)

Backport missed commits from #31408 by liamcervante · Pull Request #31726 · hashicorp/terraform

Original PR: #31408 Backport PR: #31480 For some reason the backport process only picked up the first two commits from the original PR. This PR manually copies over the changes missed by the backpo…

2022-09-08

James avatar

Hey guys,

Running an initial terraform apply failed due to expired AWS credentials. I updated the creds and reran apply; it failed once again because resources created by the earlier partial apply already exist.

How do you approach this kind of case?

Ralf Pieper avatar
Ralf Pieper

I think a screen share might let me understand. If you can't rerun, something bigger is wrong, like the way the code is structured.

Ralf Pieper avatar
Ralf Pieper

I don't know what the resource is; the simple solution would be to delete it, if that is possible. Then it will be rebuilt.

Ralf Pieper avatar
Ralf Pieper

I have sometimes seen a plan say a resource will get remade, even though I think it isn't needed.

Chris Dobbyn avatar
Chris Dobbyn

Because your session expired while the resource was being created, and presumably your state lives in S3 or something similar (dependent on your session), the state has gone out of whack with reality.

In order to remediate you will need to perform terraform import operations on the resources that were created and then not recorded into state.

1
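
A sketch of that remediation (the address and ID are hypothetical; match them to whatever actually got created):

terraform import aws_s3_bucket.artifacts my-artifacts-bucket
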
Jonathan Forget avatar
Jonathan Forget

I think when an apply fails due to expired credentials, it saves a tfstate locally; pushing this tfstate to your backend should fix the issue.

1
OliverS avatar
OliverS

I discovered recently, while looking at using HCL Go libraries to do our own config processing, that TF 1.3 will have some pretty awesome improvements to config defaults. I saw a syndicated post about it in this channel just now, but it might have gotten missed, so I'm writing this.

The improvement actually goes way beyond providing the default value in the optional() function call. That improvement alone is great, because it allows for a much more natural way to declare default objects and makes the structure easier to grok (instead of using a separate default attribute in the variable, or the defaults() function).

But HC also fixed a major issue with the defaults merging as it exists in 1.2 (in both the default attribute and the defaults() function): 1.3 will create default nested objects to full depth based on the type spec, which the experimental support in 1.2 does not do, rendering the defaults() function almost useless (IMO).

There are really only two use cases that these 1.3 improvements do not solve for me, but I can live without them (whereas the issues that 1.3 fixes were deal breakers for us, and we were going to roll our own using the hclwrite lib).

I’ll be moving our current in-house config system to use the new capabilities of 1.3 over the next few weeks (depends on client priorities, might take longer), very excited to see how far I can get.

v1.3.0-beta1 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…
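
A sketch expanding on that release note (the variable name and defaults are hypothetical); note that nested objects can now be defaulted to full depth:

variable "settings" {
  type = object({
    name = string                  # required attribute
    port = optional(number, 8080)  # optional attribute with default
    logging = optional(object({
      enabled = optional(bool, true)
    }), {})                        # nested defaults are filled in to full depth
  })
}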

loren avatar

does the defaults() function still even exist in 1.3? i thought it was part of the optional experiment, and the experiment was removed in 1.3…

OliverS avatar
OliverS

yes, defaults() has been removed entirely (the module_variable_optional_attrs experiment has been removed altogether). Only optional() is left (and it's a lot better than before, as I explained).

loren avatar
Request for Feedback: Optional object type attributes with defaults in v1.3 alpha

Hi all , I’m the Product Manager for Terraform Core, and we’re excited to share our v1.3 alpha , which includes the ability to mark object type attributes as optional, as well as set default values (draft documentation here). With the delivery of this much requested language feature, we will conclude the existing experiment with an improved design. Below you can find some background information about this language feature, or you can read on to see how to participate in the alpha and pro…

OliverS avatar
OliverS

yes that’s how I found out about it

OliverS avatar
OliverS

Actually, found out about it in https://github.com/hashicorp/terraform/issues/28344 which also has interesting background about current (ie 1.2 experiment) limitations and links to that one you posted

Alex Jurkiewicz avatar
Alex Jurkiewicz

it should be great. But I wouldn't be too quick to use Terraform betas. Some of them have done things like zeroing out state in the past

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think a 1.x beta (or perhaps even x.0) had a bug where it would plan to remove all resources in certain conditions?

sripe avatar

hey guys, how are you managing user creation in rds, any best practices ?

jose.amengual avatar
jose.amengual

clusters?

jose.amengual avatar
jose.amengual

aurora?

jose.amengual avatar
jose.amengual

global?

jose.amengual avatar
jose.amengual

mysql?

jose.amengual avatar
jose.amengual

we need more details

sripe avatar

Aurora/RDS MySQL clusters. I tried to search for a resource in Terraform to create generic users other than the master one, but couldn't find any.

jose.amengual avatar
jose.amengual

there is a mysql user provider you can use
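
A sketch of that approach (petoju/mysql is one maintained community fork; the source, version, and credentials here are assumptions to verify):

terraform {
  required_providers {
    mysql = {
      source  = "petoju/mysql"
      version = "~> 3.0"
    }
  }
}

provider "mysql" {
  endpoint = "${aws_rds_cluster.this.endpoint}:3306" # hypothetical cluster reference
  username = "admin"
  password = var.master_password
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password
}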

Warren Parad avatar
Warren Parad

Not? Use IAM connected RDS user integration

jose.amengual avatar
jose.amengual

you can use that too, yes I forgot about that

2022-09-09

Jonas Steinberg avatar
Jonas Steinberg

Module development and best practices Looking for experience and opinions

Jonas Steinberg avatar
Jonas Steinberg

Tough not to have some of these overlap with just vanilla tf practices, but doing this for my team and thought I would post here for other people’s input as well

• modules do not reinvent the wheel, e.g. if there is an AWS module, a Cloud Posse module, or similar, these are used instead of home-rolling
• modules have documentation and examples
• modules have terratests
• module code avoids code smells like ternaries and excessive remote state lookups
• modules avoid using shell providers as much as possible
• modules avoid reading or writing files at local or remote locations for the purposes of getting or creating effectively hard-coded information to then be used in later logic
• modules are versioned and a versions file is used to pin modules
• modules expose important outputs
• limited use of custom scripts
• modules follow a universally agreed-upon naming convention
• modules are integrated with environment-specific code and do not rely on lookups, etc. to figure out which environment-specific values to get
• modules are not too specific, e.g. a databricks-s3-encrypted-with-kms-and-object-replication module should instead be databricks-component-a, databricks-component-b, …, kms-cm-key, and s3 modules, and all of these should be used from the TF registry via Cloud Posse, AWS, or similar well-known publishers

• the root module should only call modules

• aws account numbers should be looked up, not hardcoded in tf files

2
Pierre-Yves avatar
Pierre-Yves

Thanks for sharing this .

1
loren avatar

i would add one, avoid using depends_on if at all possible, and make a special effort to avoid module-level depends_on (as opposed to resource-level depends_on). always prefer passing attributes instead, which terraform will use to construct the graph

Jonas Steinberg avatar
Jonas Steinberg

Cool @loren nice one. Love that.

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

“the root module should only call modules”? What is a “root module”?

“A versions file is used to pin modules” Do you mean pinning providers?

I agree with most of the rest, but the list feels a bit “write clean code where possible, we won’t explain why these dot points lead to clean code or why clean code is good tho”

Jonas Steinberg avatar
Jonas Steinberg
Modules Overview - Configuration Language | Terraform by HashiCorp

Modules are containers for multiple resources that are used together in a configuration. Find resources for using, developing, and publishing modules.

1
Jonas Steinberg avatar
Jonas Steinberg

I meant to say modules should be pinned in source references
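
For instance, a hedged illustration of pinning (the module addresses are examples):

module "registry_example" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # registry sources pin via the version argument
}

module "git_example" {
  source = "git::https://github.com/example-org/example-modules.git//label?ref=v1.2.3" # git sources pin via ref
}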

loren avatar

i consider a “root module” to be one that owns the backend config, state, the lock file, provider block configurations, and the config inputs

loren avatar

basically a “module” that you have designed explicitly to support directly running the init/plan/apply/destroy workflow for one or more configurations

1
Simpson Say avatar
Simpson Say

Hi team — hoping to get some eyes on this when someone has the time: https://github.com/cloudposse/terraform-datadog-platform/pull/71

what

• The lookup function did not pull the correct value required for thresholds, and instead went to the default.
• This resulted in an error when creating an SLO of type monitor when using more than one threshold.

why

• We are creating all of our metrics, monitors, SLOs, etc with IaC, using cloud posse’s modules (thanks!)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

please post to #pr-reviews

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Does the free edition of Terraform Cloud still require that each workspace hardcode AWS credentials? Or can you set up an IAM role that it can assume?

3
Fizz avatar

In the free version you can configure the workspace to use API mode, which makes TF Cloud just a state holder. In API mode, you define the workflow and provide the hardware to run the plans. E.g. you could run it in GitHub Actions with GitHub runners. This then allows you to decide how you want to provide credentials. A role on the runners? GitHub secrets configured in the pipeline that then assumes a role? Basically you have full control.

Fizz avatar

You’ll also need to set local execution mode.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Fizz just confirming my understanding.

in that mode though, there are zero audit trails, no confirmations, and nothing represented in TFC, right? It’s only serving as the state backend (a glorified s3 bucket). To your point, you could then run terraform in conventional CI/CD, but TFC is providing no other benefit than state management.

Fizz avatar

Yes. In the paid version, you can have runners on your own infra managed by tf cloud. There you can attach a role to your runner (assuming you are on AWS)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I just find it odd that they don't support the more native integration model where you provision an IAM role that trusts their principal and allow them to assume the role. This is how free/entry-level plans of Datadog and Spacelift work. Presumably others as well.

Fizz avatar

Yep. Cross account role that can be assumed by a user, or role, in their account would be a nice feature.

Fizz avatar

It might be a deliberate omission though. I’ve heard on the paid plan they charge $50 per apply. So it seems like they really want to encourage you to run on your own hardware.

IK avatar

I’ve just set this up using OIDC providers in each account (deployed via stacksets).. then it’s just a matter of exposing the TFC_WORKLOAD_IDENTITY_TOKEN environment variable (i use the Epp0/environment provider) and bang.. multi-account TFC deployments using JWT

2022-09-11

2022-09-12

muhaha avatar

Hey, are you using Checkov/tfsec/KICS in CI (GitHub Actions, for example)? I just discovered https://github.com/security-alert/security-alert/tree/master/packages/sarif-to-comment/, which can effectively convert SARIF to a GH comment… But it's not working correctly, because all these tools pre-download modules and analyze them with the given input on the filesystem. So it can generate comments, but it will generate diff URLs based on the local path, instead of just pointing to the correct “upstream” module called from main.tf. Ideas?

Shlomo Daari avatar
Shlomo Daari

Does anyone know why I'm getting this error? An argument named "iam_role_additional_policies" is not expected here. On the Terraform site, it shows that this should be under the module's eks section.

Ralf Pieper avatar
Ralf Pieper

I’m happy to take a look, I don’t think I have enough context to do anything but a google search.

Shlomo Daari avatar
Shlomo Daari

I tried to configure the following:

    create_iam_role          = true
    iam_role_name            = "eks-manage-nodegroup-shlomo-tf"
    iam_role_use_name_prefix = false
    iam_role_description     = "Self managed node group role"
    iam_role_tags = {
      Purpose = "Protector of the kubelet"
    }
    iam_role_additional_policies = [
      "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
      "arn:aws:iam::810711266228:policy/SecretsManager-CurrentValueROAccess",
      "arn:aws:iam::810711266228:policy/SecretsManager-CurrentValueROAccess"
    ]

https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=inputs

Shlomo Daari avatar
Shlomo Daari

Thank you for the help

2022-09-13

Tommy avatar

Is it somehow possible to test the GitHub Action pipelines of the modules locally or within the fork? I'm having some trouble passing all the pipeline steps.

Andrey Taranik avatar
Andrey Taranik

@Tommy yes, the answer is act

this1
1
loren avatar

act is awesome! Though, in most cases, for me it ended up being slower than just pushing and letting github handle it. I store logs as artifacts so I can troubleshoot better

Tommy avatar

thank you, I will take a look!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and watch out, you can do things in ACT that do not work in the actual github actions runners

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know some members on the team have tried it a couple times and given up because they didn't get any further. They'd get it working in act, then it wouldn't work in the runners. Vice versa.

2022-09-14

Release notes from terraform avatar
Release notes from terraform
06:13:34 PM

v1.3.0-rc1 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Lol, this headline could make some people do a double take…

2
jimp avatar

Hypothetical reasons to arrest an actual Terraform founder in this thread please

2
jimp avatar

For example, South Korea court reportedly issues arrest warrant for Terraform founder for AWS Provider v3 rollout.

3
Tyrone Meijn avatar
Tyrone Meijn

South Korea court reportedly issues arrest warrant for Terraform founder for charges that cannot be determined until apply

4
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

South Korea court reportedly issues an arrest warrant for Terraform founder for abusing local-execs to manipulate the stock price.

2
Mallikarjuna M avatar
Mallikarjuna M

Hi Team, can someone help me with creating an IAM user in Terraform by passing variables from a values.yml file?
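
A minimal sketch of that pattern (the file name and keys are assumptions):

# values.yml:
# users:
#   - alice
#   - bob

locals {
  config = yamldecode(file("${path.module}/values.yml"))
}

resource "aws_iam_user" "this" {
  for_each = toset(local.config.users)
  name     = each.value
}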

2022-09-15

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Has anyone tried using any of the existing EKS related TF modules to deploy a Windows EKS node group for a cluster?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@andylamp @Jeremy G (Cloud Posse) Do either of you know if the cloudposse/eks-workers/aws module should be able to accomplish this and set the self-managed node group similar to the Linux managed node group?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Jeremy (UnderGrid Network Services) I have never worked with a Windows EKS node group, and do not know the specifics, but I would expect cloudposse/eks-workers/aws module should be able to launch Windows nodes by selecting the appropriate AMI via eks_worker_ami_name_filter and eks_worker_ami_name_regex or image_id

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I thought as much. The hangup I've found with the eks-workers module is that it doesn't allow me to override the user data, which is obviously going to be different for Windows than for Linux

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

With eks-node-group you can provide user data base64-encoded and it overrides the default, I believe

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Specifying userdata is not a requirement to launch a node; EKS supplies appropriate defaults.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Not in the case of Windows eks nodes

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I don’t see anything in the AWS documentation about setting userdata. Please educate me.

Launching self-managed Windows nodes - Amazon EKS

This topic describes how to launch Auto Scaling groups of Windows nodes that register with your Amazon EKS cluster.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

if you read through, you find the CloudFormation template they have (https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-windows-nodegroup.yaml) and it has a user data block that it includes in the launch template that the ASG calls

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

even the eks-node-group module has a user data template for Linux managed node groups but the module has the userdata_override_base64 variable if you want to override the default. eks-workers doesn’t have any similar mechanism and the userdata.tpl is Linux specific

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

OK, I admit that I’m not completely following because TBH I believe you and don’t want to spend the time to learn it right now. Short story is that if you want to duplicate the relevant inputs from eks-workers in eks-node-group in a PR I will approve it.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

okay… I'll work on a PR and test it… I know this is likely a bit of a niche situation. I have our dev team asking to add a Windows EKS node group to our cluster so they can work on moving some of the applications that run on Windows into EKS and off EC2

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Tag me here when you have a PR ready to review

1
johnny avatar

wave Hey @Jeremy (UnderGrid Network Services) do you have any progress or tips on this?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@johnny I have been a bit sidetracked lately wearing my firefighter hat so haven’t made the progress I wanted on it. The dev I was working with managed to get the Windows node up and running via click-ops after I’d stood up the Linux node group via TF but I haven’t gotten his steps into my TF yet.

johnny avatar

@Jeremy (UnderGrid Network Services) That’s fair. Do you happen to know what the userdata should look like for getting the nodes into the cluster? …I’m not sure if that’s how it works but I think I’m almost to that point. I believe I should have the nodes launching soon but not sure what happens after they go up given the userdata is not windows based.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@johnny there is a user-data block passed to the node instance to enable joining the domain, there’s also the aws-auth ConfigMap change required as well to allow the nodes to join. I don’t know the specifics yet but the dev also reported they had trouble getting the Ingress to work initially but worked it out. I still need to determine what his steps to resolve that were

automationtrainee avatar
automationtrainee

Anyone have an idea on which module I need to update this variable in?
module.tf_cloud_builder.module.bucket.google_storage_bucket.bucket: Destroying… [id=]

│ Error: Error trying to delete bucket containing objects without force_destroy set to true

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I'd start by looking at the source of whatever module is used for tf_cloud_builder, as it appears to be calling the bucket module that creates the bucket, so it may be a variable being passed along

automationtrainee avatar
automationtrainee

Thanks! I started down that path but need to check again

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

The more you work with it, the more sense the state paths make to trace

automationtrainee avatar
automationtrainee

Is there a way to push a variable from the root module down to sub modules?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

you pass variables into the module, and you get outputs from the module

automationtrainee avatar
automationtrainee

does the code need to be re-initialized when you update variables in a module?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

if the module source is a relative directory (e.g. source = "../modules/x") then no, but if it's being pulled from a repo or registry then yes, you will

automationtrainee avatar
automationtrainee

That’s what I cannot figure out for some reason

automationtrainee avatar
automationtrainee

looking at the root module I don’t see any calls to the error module

automationtrainee avatar
automationtrainee

however when I look in the .terraform folder that get created I see many module directories

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

yes the terraform init process generates the module directories under the .terraform directory

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

if your .tf code has something like:

module "blah" {
  source = "../modules/my_module"
  ...
}

then the terraform init does not need to be done when the code under ../modules/my_module is updated or changed. However if it has something like:

module "blah" {
  source = "cloudposse/label/null"
  ...
}

or any other source that pulls from a Git repo or Terraform registry it does

automationtrainee avatar
automationtrainee

I see the folder for tf_cloud_builder in .terraform directory

automationtrainee avatar
automationtrainee

However I don’t see a folder for module bucket

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

you should not manipulate the .terraform directory manually… assuming it doesn't exist is safest

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

where is your module "tf_cloud_builder" { ... } in your working directory?

automationtrainee avatar
automationtrainee

there is not one

automationtrainee avatar
automationtrainee

I copied the root folder terraform-example-foundation to my local machine. I changed directories into the 0-bootstrap folder and ran the appropriate TF commands

automationtrainee avatar
automationtrainee

It created the resources

automationtrainee avatar
automationtrainee

Now when I’m trying to delete them is where the problem comes into play

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

it’s in cb.tf

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

as

module "tf_cloud_builder" {
  source  = "terraform-google-modules/bootstrap/google//modules/tf_cloudbuild_builder"
  version = "~> 6.2"

  project_id                   = module.tf_source.cloudbuild_project_id
  dockerfile_repo_uri          = module.tf_source.csr_repos[local.cloudbuilder_repo].url
  gar_repo_location            = var.default_region
  workflow_region              = var.default_region
  terraform_version            = local.terraform_version
  cb_logs_bucket_force_destroy = var.bucket_force_destroy
}
automationtrainee avatar
automationtrainee

The variable var.bucket_force_destroy is not being pulled from TF destroy

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

if you're just trying to perform a terraform destroy and it's complaining about not being able to delete the bucket because it is not empty, then can you not go into the bucket and delete the objects stored inside it?

automationtrainee avatar
automationtrainee

I did that as well

automationtrainee avatar
automationtrainee

still complaining

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

actually I think I may have found it… as I expected the variable is exposed

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

bucket_force_destroy = true needs to be added to your tfvars

automationtrainee avatar
automationtrainee

I know tfvars exposes variables you define in it, but if a variable is not defined in tfvars, does TF look at variables.tf at all?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

it defaults to false .. you see, it passes var.bucket_force_destroy as cb_logs_bucket_force_destroy to the tf_cloud_builder module, which then passes it along to the bucket module that it calls

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

if you don’t define the variable in tfvars then it gets the default value assigned in variables.tf
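
A sketch of that precedence (the contents are illustrative):

# variables.tf -- the default applies when nothing else sets the variable
variable "bucket_force_destroy" {
  type    = bool
  default = false
}

# terraform.tfvars -- overrides the default without editing variables.tf
bucket_force_destroy = true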

automationtrainee avatar
automationtrainee

correct and I updated the variables.tf to be true

automationtrainee avatar
automationtrainee

In theory, after doing that, shouldn't that have corrected the problem?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

that's not the ideal way to do it when you're using someone else's module

automationtrainee avatar
automationtrainee

understood, but just asking for better understanding as I’m still learning TF

automationtrainee avatar
automationtrainee

now TF is complaining that the root module does not declare a variable named buckets_force_destroy

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

you mis-spelled it… it's not plural: bucket_force_destroy

automationtrainee avatar
automationtrainee

fixed that, but still getting the same error from above

automationtrainee avatar
automationtrainee

module.tf_cloud_builder.module.bucket.google_storage_bucket.bucket: Destroying… [id=tf-cloudbuilder-build-logs-prj-b-cicd] ╷ │ Error: Error trying to delete bucket tf-cloudbuilder-build-logs-prj-b-cicd containing objects without force_destroy set to true

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

That's about the extent I can help with this, as I don't use GCP, and reading the Terraform repo you gave shows that setting bucket_force_destroy = true in the tfvars passed to it should be passed through to the bucket module when tf_cloud_builder calls it in https://github.com/terraform-google-modules/terraform-google-bootstrap/blob/master/modules/tf_cloudbuild_builder/cb.tf#L96

module "bucket" {
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

https://github.com/terraform-google-modules/terraform-example-foundation/blob/master/0-bootstrap/cb.tf#L102 is where tf_cloud_builder is called and passes the bucket_force_destroy variable value

  cb_logs_bucket_force_destroy = var.bucket_force_destroy
jose.amengual avatar
jose.amengual

What is the latest and greatest on TF pipelines these days? How do you run multi-tenant/user self-serve infra with feature branches in multi-account, multi-region setups?

jose.amengual avatar
jose.amengual

Interested to know how the pipeline is set up, how the input variables are passed over, and what the user flow is

2022-09-16

Angela Zhu avatar
Angela Zhu

Hey Team, does anyone know why account_id is not part of cloudposse/terraform-cloudflare-zone module?

resource "cloudflare_zone" "example" {
  account_id = "f037e56e89293a057740de681ac9abbe"
  zone       = "example.com"
}
RB avatar

How can the account_id help in that module?

RB avatar

I believe the account_id is now implicit in the cloudflare provider itself

RB avatar

so it should be optional to set the account_id.

RB avatar

@Angela Zhu do you have a requirement to set an explicit account_id to each module.cloudflare_zone instantiation ?

Angela Zhu avatar
Angela Zhu

Hey RB, thanks for the quick response. Embedding account_id in the provider has been deprecated; it suggests using specific account_id attributes instead.

Angela Zhu avatar
Angela Zhu

I do have a requirement to set account_id in each zone

RB avatar

I don’t think you need to set the account_id in either the cloudflare provider or in any of the cloudflare terraform resources anymore.

RB avatar


I do have a requirement to set account_id in each zone
may I ask why you need to set this optional argument ?

Angela Zhu avatar
Angela Zhu

The situation I'm in right now is that I'm migrating from using the cloudflare/cloudflare module to using cloudposse/terraform-cloudflare-zone. After I import resources, everything works except that it's flagging account_id ~> from whatever to null. I can't confidently push this code because I can't find documentation on what happens when this is removed. Would it impact member or access_group? It seems to me every zone should have an account_id and zone_id.

Angela Zhu avatar
Angela Zhu

In their documentation, only one place mentions “It's required that an account_id or zone_id is provided and in most cases using either is fine”. Everywhere else just says this is optional.

RB avatar

their docs need some TLC for sure

1
RB avatar


except that it’s flagging account_id ~> from whatever to null.
this should be OK, but if you are uncomfortable, feel free to put in a PR to add an optional account_id with a default value of null

Angela Zhu avatar
Angela Zhu

I’m testing it in a lower environment right now. I might push a PR for this change. Thanks @RB

2

2022-09-19

ghostface avatar
ghostface

i have a for_each for an EKS_node_group resource like below:

resource "aws_eks_node_group" "nodegroup" {
  for_each               = var.nodegroups
...

how do i ignore all scaling configs for all of the keys?

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [scaling_config[0].desired_size]
  }

currently I have the above; am I right in thinking this will only affect the first loop?

Alex Jurkiewicz avatar
Alex Jurkiewicz

No. It will affect the first scaling config block of every loop

1
Ben Gray avatar
Ben Gray

Hi! Hopefully I can get some direction on my issue.

I am trying to use this module to create an AWS client VPN endpoint, and running into an issue. I cannot avoid getting this error:

│ Error: "name" isn't a valid log group name (alphanumeric characters, underscores, hyphens, slashes, hash signs and dots are allowed): ""
│
│   with module.ec2_client_vpn.module.cloudwatch_log.aws_cloudwatch_log_group.default[0],
│   on .terraform/modules/ec2_client_vpn.cloudwatch_log/main.tf line 17, in resource "aws_cloudwatch_log_group" "default":
│   17:   name              = module.log_group_label.id

I have been able to prove something is wrong with this module: if I modify the above-referenced line in that file with a name directly, it works. And I am very confused about how this is supposed to work.

Ben Gray avatar
Ben Gray

FWIW I have set logging_stream_name with a value, but this always gives me this validation error.

Ben Gray avatar
Ben Gray

I have tried names with and without slashes, dashes, and any other allowed chars outside alphanumeric values.

Ben Gray avatar
Ben Gray

Any help is greatly appreciated. I'm pretty much at the point where I will need to abandon this module as a result of this problem.

Joe Niland avatar
Joe Niland

Can you share how you’re instantiating the module?

Ben Gray avatar
Ben Gray

Yeah sure!

Ben Gray avatar
Ben Gray
module "ec2_client_vpn" {
  source = "cloudposse/ec2-client-vpn/aws"

  ca_common_name     = "vpn.mycompany.com"
  root_common_name   = "vpn-client.mycompany.com"
  server_common_name = "vpn-server.mycompany.com"

  client_cidr                    = "10.5.4.0/22"
  vpc_id                         = data.aws_vpcs.mycompany-vpc.ids[0]
  organization_name              = "mycompany"
  name                           = "client_vpn"
  logging_enabled                = true
  logging_stream_name            = "client-vpn/aws-sso-enabled"
  id_length_limit                = 0
  retention_in_days              = 90
  associated_subnets             = ["subnet-idididid"]
  self_service_portal_enabled    = true
  authentication_type            = "federated-authentication"
  split_tunnel                   = true
  self_service_saml_provider_arn = "arn:aws:iam::ACCTNUMBER:saml-provider/AWSSSOROLE"
  authorization_rules = [
    {
      name                 = "grant_all"
      authorize_all_groups = true
      description          = "Grants all groups access to the full network"
      target_network_cidr  = "10.0.0.0/8"
    }
  ]

  additional_routes = [
    {
      destination_cidr_block = "10.0.0.0/8"
      description            = "Local traffic Route"
      target_vpc_subnet_id   = "subnet-idididid"
    }
  ]
}
Ben Gray avatar
Ben Gray

https://github.com/cloudposse/terraform-aws-cloudwatch-logs/blob/master/main.tf#L17

If I edit this line in the .terraform folder after init and put in just my log stream name, it gives me a working plan output.

  name              = module.log_group_label.id
Ben Gray avatar
Ben Gray

Sorry, updated to the specific submodule.

Joe Niland avatar
Joe Niland

I had a quick look.

I think the issue is the log group, not the stream. Most of these modules assume use of context.tf, so in this case module "log_group_label" has nothing set. You can set the variables namespace, stage, name, etc., or you can use context.tf or the null-label module in your own project and set them there, then pass the reference into module "ec2_client_vpn" via the context variable.

The example shows the former.
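
A sketch of the former approach (the label values are assumptions), added to the instantiation above:

module "ec2_client_vpn" {
  source = "cloudposse/ec2-client-vpn/aws"

  # null-label inputs that module "log_group_label" derives its name from
  namespace = "mycompany"
  stage     = "prod"
  name      = "client-vpn"

  # ... remaining inputs as above ...
}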

Ben Gray avatar
Ben Gray

Ah I’ll try that tomorrow morning. Thank you!

Joe Niland avatar
Joe Niland

No worries, let us know how you go!

Ben Gray avatar
Ben Gray

Joe, that worked! Thank you so much!

Joe Niland avatar
Joe Niland

No worries @Ben Gray. Happy to help.

1
kirupakaran avatar
kirupakaran

Hi all, I want to redirect https://example1.example.com to https://example.com/example1 in nginx. If anyone is aware of nginx, please help me solve this problem.

Alex Jurkiewicz avatar
Alex Jurkiewicz

what does this have to do with Terraform? Try #sre

Alex Jurkiewicz avatar
Alex Jurkiewicz

but this question seems like something you can solve by googling

kirupakaran avatar
kirupakaran

Ok

2022-09-21

Release notes from terraform avatar
Release notes from terraform
02:03:32 PM

v1.3.0 1.3.0 (September 21, 2022) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) #…

cool-doge5
1
kirupakaran avatar
kirupakaran

I have multiple databases in one DB instance. How can I back up particular databases in AWS? I am using Aurora MySQL.

Julian Olsson avatar
Julian Olsson

Hi Folks, I’m experiencing what feels like a fun bug with the Cloudposse Datadog-Lambda-Forwarder Module. For my use case, I’m deploying it to all of our accounts in a centralized workspace using provider blocks. Calling the module multiple times produces an error that calling it a single time does not. Error details and a minimally reproducible code example in :thread: . (Resolved by depends_on)

Julian Olsson avatar
Julian Olsson

Error Message:

Error: External Program Execution Failed
with module.datadog_staging_lambda_forwarder.module.forwarder_log_artifact[0].data.external.git[0]
on .terraform/modules/datadog_staging_lambda_forwarder.forwarder_log_artifact/main.tf line 9, in data "external" "git":
  program = ["git", "-C", var.module_path, "log", "-n", "1", "--pretty=format:{\"ref\": \"%H\"}"]
The data source received an unexpected error while attempting to execute the program.

Program: /usr/bin/git
Error Message: fatal: not a git repository (or any parent up to mount point /home/tfc-agent)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).

State: exit status 128

This should be referencing this line in this module which is called here in the main module.

Julian Olsson avatar
Julian Olsson

Minimal code example:

module "datadog_prod_lambda_forwarder" {
  source = "cloudposse/datadog-lambda-forwarder/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "0.12.0"

  forwarder_log_enabled = true
  cloudwatch_forwarder_log_groups = {
    some_group = {
      name           = "<path to a log group>",
      filter_pattern = ""
    },
    some_other_group = {
      name           = "<path to a log group>"
      filter_pattern = ""
    }
  }

  dd_api_key_source = var.prod_dd_api_key_source

  dd_tags = []

  providers = {
    aws = aws.prod
  }
}

module "datadog_staging_lambda_forwarder" {
  source = "cloudposse/datadog-lambda-forwarder/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "0.12.0"

  forwarder_log_enabled = true
  cloudwatch_forwarder_log_groups = {
    some_group = {
      name           = "<path to a log group>",
      filter_pattern = ""
    },
    some_other_group = {
      name           = "<path to a log group>"
      filter_pattern = ""
    }
  }

  dd_api_key_source = var.staging_dd_api_key_source

  dd_tags = []

  providers = {
    aws = aws.staging
  }
}

provider "aws" {
  region = "us-west-2"
  alias  = "prod"

  assume_role {
    role_arn     = var.prod_role_arn
    session_name = "Terraform"
    external_id  = var.prod_aws_external_id
  }

  access_key = var.prod_aws_access_key
  secret_key = var.prod_aws_secret_key
}

provider "aws" {
  region = "us-west-2"
  alias  = "staging"

  assume_role {
    role_arn     = var.staging_role_arn
    session_name = "Terraform"
    external_id  = var.staging_aws_external_id
  }

  access_key = var.staging_aws_access_key
  secret_key = var.staging_aws_secret_key
}

The provider should work without assume_role if you use access/secret keys for the specific accounts; I kept it as close to my implementation as possible on the outside chance this is related (although I doubt it).

Julian Olsson avatar
Julian Olsson

And to note: I can get any of the modules to work if I comment out the others, I’ve attempted it with 1, 2, and 3 modules. With 1, it works (no matter which), with 2, one will fail, with 3, two will fail. I haven’t tested it with 4+, but I think it’s reasonable to assume it will be n-1 failures.

Oh, and: this is executed via terraform cloud, if that makes a big difference.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ben Smith (Cloud Posse)

1
Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

That’s pretty odd, taking a look now

Julian Olsson avatar
Julian Olsson

It’s possible that this might be related to the -C flag in the git command, and if it’s run multiple times. From the git documentation:

-C <path>: Run as if git was started in <path> instead of the current working directory. When multiple -C options are given, each subsequent non-absolute -C <path> is interpreted relative to the preceding -C <path>. If <path> is present but empty, e.g. -C "", then the current working directory is left unchanged. This option affects options that expect a path name like --git-dir and --work-tree in that their interpretations of the path names would be made relative to the working directory caused by the -C option. For example the following invocations are equivalent:

git --git-dir=a.git --work-tree=b -C c status
git --git-dir=c/a.git --work-tree=c/b status

I'm frankly not sure if running (var.module_path collapsed to ${path.module} per this line): git -C ${path.module} log -n 1 --pretty=format:{"ref": "%H"} or the properly escaped equivalent multiple times would essentially stack deeper and deeper and be problematic, or if this is otherwise potentially related to path.module and the Terraform warning:

We do not recommend using path.module in write operations because it can produce different behavior depending on whether you use remote or local module sources. Multiple invocations of local modules use the same source directory, overwriting the data in path.module during each call. This can lead to race conditions and unexpected results.

If that's the case, it's possible I may be able to avoid this by using depends_on to ensure each module fully completes before the next one attempts to run. I'm going to give that a try right now.

Julian Olsson avatar
Julian Olsson

Yep, using depends_on to ensure each module finishes before the next starts resolved the issue. It’s likely related to path.module.
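
A sketch of that workaround, using the module names from the example above:

module "datadog_staging_lambda_forwarder" {
  source  = "cloudposse/datadog-lambda-forwarder/aws"
  version = "0.12.0"

  # ... inputs as above ...

  # serialize the modules so the git data source runs in one module at a time
  depends_on = [module.datadog_prod_lambda_forwarder]
}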

1
Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Gotcha, glad that unblocked you, I’ll add this to our notes, I know we’ve been seeing some more git -C issues recently, maybe theres a way to avoid it or clean it up

1
Julian Olsson avatar
Julian Olsson

Unfortunately while running the apply (rather than just plan) this morning, it came back. depends_on appears to resolve the plan-time error, but they don’t run properly.

OliverS avatar
OliverS

Looks like s3 bucket replication of existing objects is not currently supported by latest AWS provider (4.31).

So my best option seems to be to first run terraform apply to put new-object replication in place for the desired buckets, then run a Batch Replication job from the CLI using aws s3control create-job ... on each bucket (since I have a lot of buckets with existing objects to replicate, and replication jobs require a replication config to already exist).

But then it is easy to forget to run that script after terraform apply, so better:

• Add a local-exec provisioner to the bucket replication config resource in my tf code, with when=create. But this would get skipped for buckets that already have replication config (i.e. already created).
• Better: add that provisioner to a null_resource that is enabled only if a variable is set to true (and no when set). I would set it to true, apply, set it to false, push. (Sketch below.)

Any considerations I might be forgetting?
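
A minimal sketch of that second option (the variable, script, and bucket reference are hypothetical):

variable "run_batch_replication" {
  type    = bool
  default = false
}

resource "null_resource" "batch_replication" {
  count = var.run_batch_replication ? 1 : 0

  provisioner "local-exec" {
    # wrapper around: aws s3control create-job ... for the given bucket
    command = "./run-batch-replication.sh ${aws_s3_bucket.this.id}"
  }
}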

Denis avatar

I just enabled replication through Terraform, and used the Batch jobs to replicate the existing objects initially. After that the replication rule resumed as expected. But I only had to do that for 10 S3 buckets, so the initial manual step was not that time consuming for me.

1
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Anyone looked at updating the terraform-aws-elasticsearch module to support OpenSearch or creating a new module for it?

Ray Botha avatar
Ray Botha

Hey all, I’m trying to set up a new AWS organization and accounts with the terraform-aws-components/account module but running into an odd issue on the atmos terraform plan:

│ Error: error reading Organizations Policy (p-9tkedynp): AWSOrganizationsNotInUseException: Your account is not a member of an organization.
│
│   with module.organizational_units_service_control_policies["platform"].aws_organizations_policy.this[0],
│   on .terraform/modules/organizational_units_service_control_policies/main.tf line 37, in resource "aws_organizations_policy" "this":
│   37: resource "aws_organizations_policy" "this" {

Yeah, I'm not a member of an organization; my impression is the account module is supposed to create the organization, no? (Resolved by terraform clean)

Ray Botha avatar
Ray Botha

This is my component in the atmos stack:

components:
  terraform:
    account:
      vars:
        enabled: true
        account_email_format: aws+%[email protected]
        account_iam_user_access_to_billing: DENY
        organization_enabled: true
        aws_service_access_principals:
          - cloudtrail.amazonaws.com
          - guardduty.amazonaws.com
          - ipam.amazonaws.com
          - securityhub.amazonaws.com
          - servicequotas.amazonaws.com
          - sso.amazonaws.com
          - auditmanager.amazonaws.com
          - ram.amazonaws.com
        enabled_policy_types:
          - SERVICE_CONTROL_POLICY
          - TAG_POLICY
        service_control_policies_config_paths:
          - "../aws-service-control-policies/catalog/organization-policies.yaml"
        organization_config:
          root_account:
            name: core-root
            stage: root
            tenant: core
            tags:
              eks: false
          accounts: [ ]
          organization:
            service_control_policies: [ ]
          organizational_units:
            - name: platform
              accounts:
                - name: platform-dev
                  tenant: platform
                  stage: dev
                  tags:
                    eks: false
                - name: platform-staging
                  tenant: platform
                  stage: staging
                  tags:
                    eks: false
                - name: platform-prod
                  tenant: platform
                  stage: prod
                  tags:
                    eks: false
              service_control_policies:
                - DenyLeavingOrganization
            - name: core
              accounts:
                - name: core-audit
                  tenant: core
                  stage: audit
                  tags:
                    eks: false
                - name: core-data
                  tenant: core
                  stage: data
                  tags:
                    eks: false
                - name: core-dns
                  tenant: core
                  stage: dns
                  tags:
                    eks: false
                - name: core-identity
                  tenant: core
                  stage: identity
                  tags:
                    eks: false
                - name: core-network
                  tenant: core
                  stage: network
                  tags:
                    eks: false
                - name: core-security
                  tenant: core
                  stage: security
                  tags:
                    eks: false
              service_control_policies:
                - DenyLeavingOrganization
Ray Botha avatar
Ray Botha

This error was magically resolved by terraform clean and deleting the state

2022-09-22

kirupakaran avatar
kirupakaran

Hey all, is there any tool to convert CloudFormation to Terraform?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, you’ll find them with google

1
kirupakaran avatar
kirupakaran

Yeah, but I haven't seen any proper tool

Joe Niland avatar
Joe Niland
DontShaveTheYak/cf2tf

Convert Cloudformation templates to Terraform.

1

2022-09-23

Herman Smith avatar
Herman Smith

Is it possible to have a terraform module enforce that the aws provider it inherits is configured to a certain region? (And fail if a provider for a different region is in use)

jose.amengual avatar
jose.amengual

no, I do not think it is possible

jose.amengual avatar
jose.amengual

since the provider can be configured using ENV variables

jose.amengual avatar
jose.amengual

it supports the same AWS ENV variables, so even if you hardcode a region in your module, users can still set the AWS_REGION var to whatever they want and work around the hardcoded region

jose.amengual avatar
jose.amengual

is that what you mean?

jose.amengual avatar
jose.amengual

or are you asking if you can create resources in your module for another region?

Herman Smith avatar
Herman Smith

I don’t want to violate the module user’s expectations and operate in a different region to what they asked - just want to let them know “you can only use this module in <X> region”

Herman Smith avatar
Herman Smith

Ah, looks like configuration_aliases in required_providers would essentially enable me to restrict to a given provider alias (named by region); that should suffice
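
A minimal sketch of that pattern, assuming the module should only ever run in us-west-2 (the alias, module path, and names here are illustrative, not from the thread):

# inside the module: declare the alias callers must supply
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.usw2]
    }
  }
}

# in the caller: bind a region-pinned provider to that alias
provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

module "example" {
  source    = "./modules/example"
  providers = {
    aws.usw2 = aws.usw2
  }
}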

jose.amengual avatar
jose.amengual

ohhh cool

Julian Olsson avatar
Julian Olsson

When you use configuration_aliases, it acts as though you have n+1 providers: it assumes configuration_aliases = [aws.us-west-2] is equal to two providers, aws and aws.us-west-2. I’ve experienced strange issues when only passing in a single provider to it (providers = { aws.us-west-2 = aws }) and not passing in the aws = ... provider as well.

You may wish to look into using something like data "aws_region" "current" {} and validate data.aws_region.current.name == myregion. I haven’t used it in this manner myself though, so you should experiment with both methodologies and see how they work in practice.

Herman Smith avatar
Herman Smith

Thanks @Julian Olsson. Worked perfectly with a lifecycle postcondition!
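
For reference, a minimal sketch of that check (postconditions require Terraform >= 1.2; the expected region name is illustrative):

data "aws_region" "current" {
  lifecycle {
    postcondition {
      condition     = self.name == "us-west-2"
      error_message = "This module can only be used with a provider configured for us-west-2."
    }
  }
}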

Julian Olsson avatar
Julian Olsson

The data "aws_region" variant, I assume? If so, excellent, thanks for letting me know it works, I’ll keep that one in my back pocket for another day.

Herman Smith avatar
Herman Smith

Yes, exactly. And me too!

Mazin Ahmed avatar
Mazin Ahmed

I have this issue where I cannot run terraform import against a new remote state in a TFE workspace. It’s a new workspace and does not have resources yet; I am trying to run an import script before merging a PR for all tf resources. Any ideas how to solve this?

Acquiring state lock. This may take a few moments...
Failed to persist state: Error uploading state: resource not found
Alex Jurkiewicz avatar
Alex Jurkiewicz

Create the state with one dummy resource, then run your imports?
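
A minimal sketch of that workaround (using the null provider is just one way to get a first state write; the resource name is illustrative):

# apply this once so the backend persists an initial state, then run the imports
resource "null_resource" "bootstrap" {}

Once the empty apply succeeds, terraform import has a state to write into, and the dummy resource can be removed afterwards.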

Chris Dobbyn avatar
Chris Dobbyn

If runner based, just upload blank TF code and let an empty plan/apply run. Then add the stuff you want to import.

Chris Dobbyn avatar
Chris Dobbyn

Make sure you’ve got workspace configured in the cloud block.
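
Something like this, assuming Terraform >= 1.1 (organization, hostname, and workspace name are placeholders):

terraform {
  cloud {
    hostname     = "app.terraform.io" # or your TFE hostname
    organization = "my-org"

    workspaces {
      name = "my-workspace"
    }
  }
}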

jose.amengual avatar
jose.amengual

TFC Cloud pricing question: anyone know the actual price?

jose.amengual avatar
jose.amengual

I asked a few people and they said there is a cost per state (workspace), per user, and per run?

jose.amengual avatar
jose.amengual

as usual the website is not very detailed…

jose.amengual avatar
jose.amengual

Talking Terraform Cloud SaaS, not Enterprise

jose.amengual avatar
jose.amengual

I want to confirm it is really a per-user-only cost

Chris Dobbyn avatar
Chris Dobbyn

If you sign up (free) and look at the usage tab, it’ll give you everything you need to know.

jose.amengual avatar
jose.amengual

we are on the free plan

jose.amengual avatar
jose.amengual

but we need to forecast spend, so we need an idea of how the price is calculated

Chris Dobbyn avatar
Chris Dobbyn

I’m not sure re. the normal cloud one. It may be worthwhile setting up a meeting with them to establish cost estimates.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Basically it’s bespoke depending on your negotiating power. Think like new relic

Alex Jurkiewicz avatar
Alex Jurkiewicz

If you are at all cost conscious, every other terraform SaaS is cheaper

jose.amengual avatar
jose.amengual

I just had bad experiences with Hashicorp sales every single time

Chris Dobbyn avatar
Chris Dobbyn

They’re not super great (cost-wise for the features you get); I wouldn’t recommend them if you are able to use any others.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yes I think the process is to delay giving you a quote for as long as possible until they can figure out what you can afford

Mohammed Yahya avatar
Mohammed Yahya

per user

jose.amengual avatar
jose.amengual

these guys have been useless, to say the least…

jose.amengual avatar
jose.amengual

they keep asking for a number, like asking “tell me how much money you have, I will charge you that”

jose.amengual avatar
jose.amengual

what kind of sales tactic is that? I do not know, but I do not like it

Fizz avatar

Depends on your negotiating power. I had one customer who had to pay $50 per apply when using a TF Cloud runner. That goes away when you run your own runners: either use TF Cloud solely to manage state and workspace configuration, or use the TF Cloud agent to operate your own fleet of runners on your own hardware.

Fizz avatar

I think you would want to run your own runners anyway, solely to better manage the principal used to run tf, e.g. an AWS access key when running in TF Cloud vs an AWS role when running your own runner.

2022-09-24

2022-09-26

OliverS avatar
OliverS

I have a stack that will consist of N tfstates. I could easily write an N-line bash script to do tf apply on each one, but I’m wondering if one of terragrunt, terramate, terraspace or cdktf might have good support for this and for aspects of such a design that I might not yet realize

E.g. N-1 of those states will be completely independent of one another and will depend only on the first module (which is a base layer), so technically they could all be updated in parallel. Does one of these tools support describing the stack in terms of separate states and the dependencies of modules on other modules? Then it could automatically figure out the order of tf applies and run some in parallel.

loren avatar

terragrunt and terramate both handle that scenario. i find it rather hard to parse the outputs of either, though, when running against multiple stacks in parallel. easy to lose/miss something in review
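
For what it’s worth, a minimal terragrunt sketch of that wiring (paths and output names are illustrative):

# live/app1/terragrunt.hcl
terraform {
  source = "../../modules/app"
}

dependency "base" {
  config_path = "../base"
}

inputs = {
  vpc_id = dependency.base.outputs.vpc_id
}

With that in place, terragrunt run-all apply applies base first and then the dependent stacks, running independent ones in parallel.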

loren avatar

i think TACOS (TF Automation and Collaboration Software) are likely a better solution, like spacelift or env0

OliverS avatar
OliverS

any reason in particular?

loren avatar

better visibility of the changeset

loren avatar

if you must roll your own CICD automation solution, here is a new tool that attempts to help you figure out the order of operations… https://github.com/contentful-labs/terraform-diff

contentful-labs/terraform-diff

Always know where you need to run Terraform plan & apply!

loren avatar

if you’re using github and github actions, there’s also tfcmt to post plan results back to github pull requests… https://github.com/suzuki-shunsuke/tfcmt

suzuki-shunsuke/tfcmt

Fork of mercari/tfnotify. tfcmt enhances tfnotify in many ways, including Terraform >= v0.15 support and advanced formatting options

OliverS avatar
OliverS

Thanks @loren for the suggestions

2022-09-27

Konrad Bloor avatar
Konrad Bloor

Just got to say, as someone new to terraform trying to build infrastructure quickly for a new venture, cloudposse terraform modules rule, wow. Thanks

Ray Botha avatar
Ray Botha

Has cloudposse developed any modules/components for AWS IPAM? I’m looking into using IPAM instead of working out all the IP blocks in a spreadsheet

RB avatar

We don’t, but we’ve created a root terraform module (component) that wrapped this module

https://github.com/aws-ia/terraform-aws-ipam

aws-ia/terraform-aws-ipam

Terraform Module for create AWS IPAM Resources

Ray Botha avatar
Ray Botha

Thanks

2022-09-28

Ray Botha avatar
Ray Botha

Has anyone set up centralized egress for all your VPCs through the network account, via a NAT gateway, using cloudposse terraform-aws-components? I’m using transit gateway, but it looks like that would require a lot of changes to the tgw components’ route configs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re building out this architecture right now for another customer

ghostface avatar
ghostface

out of curiosity, why would you want to do this?

Ray Botha avatar
Ray Botha
Centralized egress to internet - Building a Scalable and Secure Multi-VPC AWS Network Infrastructure

As you deploy applications in your Landing Zone, many apps will require outbound only internet access (for example, downloading libraries, patches, or OS updates).

Creating a single internet exit point from multiple VPCs Using AWS Transit Gateway | Amazon Web Services

In this post, we show you how to centralize outbound internet traffic from many VPCs without compromising VPC isolation. Using AWS Transit Gateway, you can configure a single VPC with multiple NAT gateways to consolidate outbound traffic for numerous VPCs. At the same time, you can use multiple route tables within the transit gateway to […]

setheryops avatar
setheryops

Any recs on apps for detecting drift in Terraform if you are NOT on Terraform Cloud? Every place I’ve worked we have always had an internally developed custom app. I really don’t want to have to write another one again for my current gig.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

driftctl?
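
e.g., pointing it at an S3-backed state (bucket and key are placeholders):

driftctl scan --from tfstate+s3://my-bucket/path/to/terraform.tfstate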

setheryops avatar
setheryops

Interesting…ill check it out. Thx

setheryops avatar
setheryops

Just an FYI for anyone else that pokes their head into this thread… it’s found at [docs.driftctl.com](http://docs.driftctl.com), not just [driftctl.com](http://driftctl.com) <– that takes you to a WordPress login

Lee Broom avatar
Lee Broom

Any recommendations for a good guide on deploying cloudposse modules into your own projects?

Release notes from terraform avatar
Release notes from terraform
02:13:30 PM

v1.3.1 1.3.1 (September 28, 2022) NOTE: On darwin/amd64 and darwin/arm64 architectures, terraform binaries are now built with CGO enabled. This should not have any user-facing impact, except in cases where the pure Go DNS resolver causes problems on recent versions of macOS: using CGO may mitigate these issues. Please see the upstream bug…

Tim Schwenke avatar
Tim Schwenke

Hey everyone, I have a question regarding terraform-null-label: I get how to use it as a module. But do I also include the [context.tf](http://context.tf) in my own files if I’m writing a module myself (which I do all the time because everything in Terraform is a module)? Basically replicating what Cloud Posse is doing within their own modules.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

[context.tf](http://context.tf) has all the context variables used by the label module (and other things)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you include it, you don’t have to provide all those variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

our pattern is to always include [context.tf](http://context.tf) and not think about those common vars that are used by all modules and components
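
A minimal sketch of the resulting pattern (resource and module names are illustrative): [context.tf](http://context.tf) already instantiates the label as module.this, so inside your own module you can write

resource "aws_s3_bucket" "example" {
  bucket = module.this.id
  tags   = module.this.tags
}

# and callers pass all the common variables through in one argument
module "my_module" {
  source  = "./modules/my-module"
  context = module.this.context
  name    = "bucket"
}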

Tim Schwenke avatar
Tim Schwenke

Okay, that helps. Thanks

Thomas Panicker avatar
Thomas Panicker

Is there anyone out there interested in upgrading TF 0.12 to something more current?

mikesew avatar
mikesew

We just upgraded some of our terraform workspaces/configs to 0.13. From there on, upgrading to further versions was fairly easy (no major syntax changes). Any questions in particular?
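
The main 0.12 -> 0.13 change was explicit provider source addresses; the terraform 0.13upgrade command can rewrite configs for you, producing something like (the version constraint is illustrative):

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
  }
}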

Nitin avatar

Hello Team,

How can I remove a resource created using cloudposse/vpc-peering-multi-account/aws?

Nitin avatar

we don’t need vpc peering… what is the best way to do it?

Nitin avatar

because if I delete it and then plan and apply, it is failing

Nitin avatar

if I set enable = false then an authorization issue comes up

RB avatar

@Nitin Could you create an issue for this on the module?

For now you could do a targeted destroy

terraform destroy -target module.peering
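
If the peering was already deleted outside Terraform and plan/apply is failing, another option is to drop it from state instead (the module address is illustrative):

terraform state rm module.peering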

2022-09-29
