#terraform (2022-04)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-04-01

Shanmugam .shan7 avatar
Shanmugam .shan7

Hi, I’m very new to Terraform and I’m using this SCP module: https://github.com/cloudposse/terraform-aws-service-control-policies. I’m getting the below error; can someone help me with this?

$ terraform apply
╷
│ Error: Reference to undeclared module
│
│   on main.tf line 9, in module "yaml_config":
│    9:     context = module.this.context
│
│ No module call named "this" is declared in the root module.
╵
╷
│ Error: Invalid reference
│
│   on main.tf line 18, in module "service_control_policies":
│   18:     service_control_policy_description = test
│
│ A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
╵
╷
│ Error: Reference to undeclared module
│
│   on main.tf line 21, in module "service_control_policies":
│   21:     context = module.this.context
│
│ No module call named "this" is declared in the root module.
╵
Shanmugam .shan7 avatar
Shanmugam .shan7
$ terraform apply plan.tf

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

organizations_policy_arn = ""
organizations_policy_id = ""
Shanmugam .shan7 avatar
Shanmugam .shan7

Am i missing something?

Josh Holloway avatar
Josh Holloway

@Shanmugam .shan7 are you using the code directly from the README? You’re probably missing https://github.com/cloudposse/terraform-aws-service-control-policies/blob/master/examples/complete/context.tf

```
#
# ONLY EDIT THIS FILE IN github.com/cloudposse/terraform-null-label
# All other instances of this file should be a copy of that one
#
# Copy this file from https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf
# and then place it in your Terraform module to automatically get
# Cloud Posse's standard configuration inputs suitable for passing
# to Cloud Posse modules.
#
# curl -sL https://raw.githubusercontent.com/cloudposse/terraform-null-label/master/exports/context.tf -o context.tf
#
# Modules should access the whole context as module.this.context
# to get the input variables with nulls for defaults,
# for example context = module.this.context,
# and access individual variables as module.this.<var>,
# with final values filled in.
#
# For example, when using defaults, module.this.context.delimiter
# will be null, and module.this.delimiter will be "-" (hyphen).
#

module "this" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # requires Terraform >= 0.13.0

  enabled             = var.enabled
  namespace           = var.namespace
  tenant              = var.tenant
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit
  label_key_case      = var.label_key_case
  label_value_case    = var.label_value_case
  descriptor_formats  = var.descriptor_formats
  labels_as_tags      = var.labels_as_tags

  context = var.context
}

# Copy contents of cloudposse/terraform-null-label/variables.tf here

variable "context" {
  type = any
  default = {
    enabled             = true
    namespace           = null
    tenant              = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
    label_key_case      = null
    label_value_case    = null
    descriptor_formats  = {}
    # Note: we have to use [] instead of null for unset lists due to
    # https://github.com/hashicorp/terraform/issues/28137
    # which was not fixed until Terraform 1.0.0, but we want the default
    # to be all the labels in label_order, and we want users to be able
    # to prevent all tag generation by setting labels_as_tags to [],
    # so we need a different sentinel to indicate "default"
    labels_as_tags = ["unset"]
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as null to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT

  validation {
    condition     = lookup(var.context, "label_key_case", null) == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }

  validation {
    condition     = lookup(var.context, "label_value_case", null) == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

variable "enabled" {
  type        = bool
  default     = null
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique"
}

variable "tenant" {
  type        = string
  default     = null
  description = "ID element (Rarely used, not included by default). A customer identifier, indicating who this instance of a resource is for"
}

variable "environment" {
  type        = string
  default     = null
  description = "ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = null
  description = "ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'. This is the only ID element not also included as a tag. The \"name\" tag is set to the full id string. There is no tag with the value of the name input."
}

variable "delimiter" {
  type        = string
  default     = null
  description = "Delimiter to be used between ID elements. Defaults to - (hyphen). Set to \"\" to use no delimiter at all."
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "ID element. Additional attributes (e.g. workers or cluster) to add to id, in the order they appear in the list. New attributes are appended to the end of the list. The elements of the list are joined by the delimiter and treated as a single ID element."
}

variable "labels_as_tags" {
  type        = set(string)
  default     = ["default"]
  description = "Set of labels (ID elements) to include as tags in the tags output. Default is to include all labels. Tags with empty values will not be included in the tags output. Set to [] to suppress all generated tags. Notes: The value of the name tag, if included, will be the id, not the name. Unlike other null-label inputs, the initial setting of labels_as_tags cannot be changed in later chained modules. Attempts to change it will be silently ignored."
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. {'BusinessUnit': 'XYZ'}). Neither the tag keys nor the tag values will be modified by this module."
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id. This is for some rare cases where resources want additional configuration of tags and therefore take a list of maps with tag key, value, and additional configuration."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = "The order in which the labels (ID elements) appear in the id. Defaults to [\"namespace\", \"environment\", \"stage\", \"name\", \"attributes\"]. You can omit any of the 6 labels (\"tenant\" is the 6th), but at least one must be present."
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = "Terraform regular expression (regex) string. Characters matching the regex will be removed from the ID elements. If not set, \"/[^a-zA-Z0-9-]/\" is used to remove all characters other than hyphens, letters and digits."
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = "Limit id to this many characters (minimum 6). Set to 0 for unlimited length. Set to null for keep the existing setting, which defaults to 0. Does not affect id_full."
  validation {
    condition = var.id_length_limit == null ? true : …
```

Shanmugam .shan7 avatar
Shanmugam .shan7

@Josh Holloway I’m using the example which is given in the repo

Shanmugam .shan7 avatar
Shanmugam .shan7

Do we need to modify something in the example?

Josh Holloway avatar
Josh Holloway

Couple of things: can you run the following in the directory where you’ve put the code? terraform version, then: ls

Shanmugam .shan7 avatar
Shanmugam .shan7
$ terraform version
Terraform v1.1.6
on darwin_amd64

Your version of Terraform is out of date! The latest version
is 1.1.7. You can update by downloading from <https://www.terraform.io/downloads.html>
Shanmugam .shan7 avatar
Shanmugam .shan7
$ ls
LICENSE		Makefile	README.md	README.yaml	catalog		context.tf	docs		examples	main.tf		outputs.tf	test		variables.tf	versions.tf
Josh Holloway avatar
Josh Holloway

Oh so you’ve cloned the repo itself? Gotcha… Are you running terraform from this directory or from the examples/complete one?

Shanmugam .shan7 avatar
Shanmugam .shan7

Yes

Shanmugam .shan7 avatar
Shanmugam .shan7

Do I need to make any changes?

Shanmugam .shan7 avatar
Shanmugam .shan7

Got it, I think we need to use terraform apply -var-file fixtures.us-east-2.tfvars

Josh Holloway avatar
Josh Holloway

Yeah… the recommended way would be to write your own Terraform configuration and reference the module using the GitHub source: https://www.terraform.io/language/modules/sources#github

Module Sources | Terraform by HashiCorpattachment image

The source argument tells Terraform where to find child modules’ configurations in locations like GitHub, the Terraform Registry, Bitbucket, Git, Mercurial, S3, and GCS.

Dan Herrington avatar
Dan Herrington

Anyone using Beanstalk and GovCloud with Terraform? I’m running into an issue getting the zoneId of the Beanstalk environment passed along during deploy. I keep getting NoSuchHostedZone when using the built-in Terraform data resource for elasticbeanstalk_hosted_zone.

2022-04-02

sohaibahmed98 avatar
sohaibahmed98

Hi Guys,

Is it good practice to keep a separate state file for AWS services and for each microservice, and then use Terraform remote state (https://www.terraform.io/language/state/remote-state-data)? Or should we keep everything in a single state file?

The terraform_remote_state Data Source | Terraform by HashiCorpattachment image

Retrieves the root module output values from a Terraform state snapshot stored in a remote backend.

RB avatar

we have many terraform root modules and they each have their own terraform state


RB avatar

putting everything in a single root module is an antipattern for a number of reasons (large blast radius, very long plans and applies, difficult to modify)

1
1
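
A minimal sketch of the split-state pattern RB describes, assuming an S3 backend; the bucket, key, and output names here are hypothetical:

```hcl
# Each root module keeps its own state; a consumer reads another root
# module's outputs through the terraform_remote_state data source.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tfstate"            # hypothetical bucket
    key    = "network/terraform.tfstate"  # hypothetical key
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678" # hypothetical
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id # hypothetical output
}
```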

2022-04-04

Matt Gowie avatar
Matt Gowie

Hey Cloud Posse team — What’s the current status of the opsgenie vs opsgenie-team components? From looking at them, opsgenie-team is newer and less complicated, but I’m wondering if it’s intended to completely replace the older opsgenie component? Can anybody shed some light on that? @Yonatan Koren maybe you, since I see you were part of the most recent updates?

Yonatan Koren avatar
Yonatan Koren

@Ben Smith (Cloud Posse) is the SME on this (because he largely implemented the second iteration of our OpsGenie Terraform modules / components with @Andriy Knysh (Cloud Posse))

So opsgenie-team is our current component, opsgenie is basically deprecated

Yonatan Koren avatar
Yonatan Koren

Long story short, opsgenie-team is also better by design, since in OpsGenie the top-level construct is a team, and this component captures that

Matt Gowie avatar
Matt Gowie

Figured that out, but thanks for explaining.

Sadly… configuring OG’s data model from TF seems overly complex for my internal needs, so I think I am going to punt on it. But I will likely use it in client projects in the future, as I can see the utility when it’s a larger organization with many teams.

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Yeah, we took a deep dive into OpsGenie and learned that everything is centered around a team, and there’s a lot of standards that are needed. Since everything revolved around a team, we started from there and added each additional resource to it, coming up with a stack YAML where the base component is your defaults, and you override it for each team.

Essentially the opsgenie component is deprecated, but still fully functional if it provides use to someone. The opsgenie-team component should provide a lot more of a whole solution for each team, of which there could be just one, in an org

1

2022-04-05

jose.amengual avatar
jose.amengual

I use cloudposse modules pretty much always, but I’m working with a client that does not allow cloning git repos from outside the org. How can I package a module and all its dependencies? Is there such a tool? Because, as you know, if I clone the ecs module I will have to clone like 30 dependencies manually, and I do not want to

RB avatar

hmmm perhaps fork all the modules? but then you’d also have to repoint all the dependent modules to the new org

RB avatar

what if you ran a terraform init and then committed the .terraform/modules directory

1
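
A minimal sketch of that approach; the -f is usually needed because most .gitignore templates ignore .terraform entirely:

```sh
terraform init                 # resolves and downloads all (nested) module sources
git add -f .terraform/modules  # force past the usual ".terraform/" ignore rule
git commit -m "vendor terraform modules"
```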
jose.amengual avatar
jose.amengual

I actually thought about doing that…..

Soren Jensen avatar
Soren Jensen

I’ve been in similar situations before as a consultant. Decisions like these are nearly always based on the fear of a supply chain attack. Sometimes it’s possible to bend the rules and get an exception. Showing the compliance badges from Bridgecrew on the modules should help, as well as explaining that you can build it all from the bottom up, but that this will both delay the project significantly and increase the price, all while introducing the risk of human error into the code. Getting all the certifications on your own code will take even more time and cost a lot more. Hope you will succeed in arguing that the modules are safe even from an external repo. You can ask difficult questions, such as how they protect themselves against rogue software in the Terraform providers, or how they handle software dependencies.

jose.amengual avatar
jose.amengual

agreed but this is not going to change and to be honest I do not care much if it does not change

jose.amengual avatar
jose.amengual

why? once they try to update one module and realize they need to update 20, they are going to fix it themselves

jose.amengual avatar
jose.amengual

right now they just do not get it and they are understaffed

jose.amengual avatar
jose.amengual

so it is hard for them to focus

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Every time we go down this path, we decide to walk it back, but there’s a need for it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

HashiCorp has a tool for vendoring providers, which is now built into the core with terraform providers mirror
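
A minimal sketch of that workflow, with a hypothetical mirror directory:

```sh
# Download every provider the current configuration needs into a local mirror
terraform providers mirror /opt/terraform/provider-mirror
```

Then point the CLI at the mirror in ~/.terraformrc:

```hcl
provider_installation {
  filesystem_mirror {
    path    = "/opt/terraform/provider-mirror"
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
```

Note this covers providers only; modules are the harder half, as the rest of the thread shows.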

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There is a pretty easy workaround though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Use Artifactory as a git proxy, or fork every Cloud Posse module repo you need. Then use the [url] section in .git/config to point at the local endpoint via insteadOf <https://github.com/cloudposse/>
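
A minimal sketch of that .git/config (or ~/.gitconfig) rewrite, assuming a hypothetical internal mirror at git.example.com:

```
[url "git@git.example.com:mirrors/cloudposse/"]
	insteadOf = https://github.com/cloudposse/
```

With that in place, module sources keep pointing at github.com, but every clone is transparently redirected to the mirror.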

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For this “policy” to be enforceable, they should block access to GitHub directly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Easily rewriting Git URLs from HTTPS to SSH and vice versa · Jamie Tanna | Software Engineer

How to use Git’s config to rewrite HTTPS URLs to SSH and vice versa, for repo pushes and pulls.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I like this strategy because they can still pull in changes when they need to, or fork when they need to

jose.amengual avatar
jose.amengual

interesting

loren avatar

yeah, you can manage if all your modules are one-level deep, but if any module refers to another module, then it gets a bit hairy. hopefully the reference is a git reference, then you can use the gitconfig insteadOf trick…

loren avatar

i can’t find it now, but i remember seeing a tool somewhere that would help declare module sources and versions in a file, and clone them to a local directory. so your own references would be to the local location, and you could use the tool to create that “cache” and manage its lifecycle separately

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
devopsmakers/xterrafile

XTerrafile is a Go tool for managing vendored modules and formulas using a YAML file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(archived)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haha, they recommend vendir

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
coretech/terrafile

A binary written in Go to systematically manage external modules from Github for use in Terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, looks like i commented on this 4 years ago https://github.com/coretech/terrafile/issues/1

First off, this is sweet! We are looking for a way to vendor terraform modules and this looks like it could be the best way. It’s clear to me how this works.

  1. Does it support the case of automatically rewriting the source parameter in nested modules? E.g. modules that call other modules.
  2. If not, would this be practical or in scope for this project?
loren avatar

vendir is what i had in mind, https://github.com/vmware-tanzu/carvel-vendir

vmware-tanzu/carvel-vendir

Easy way to vendor portions of git repos, github releases, helm charts, docker image contents, etc. declaratively

1
jose.amengual avatar
jose.amengual

@Erik Osterman (Cloud Posse) the insteadOf trick is not going to work, because cloudposse is an org, and if you can’t create an org in your company where you can clone the repos, then it’s a no-go

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, if you cannot clone/fork the repos, that would be a problem. but i feel less sympathetic on that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was talking to someone recently who mentioned that Artifactory was supposedly coming out with some sort of passthrough Terraform registry. Not sure, though, to what extent.

loren avatar

Starting to feel like @RB had the right idea, just init and commit the modules dir…

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, @loren I think you’re right!!

loren avatar

You can play dns and proxy tricks also, but at some point it’s just not worth it

jose.amengual avatar
jose.amengual

and vendir is not quite there

jose.amengual avatar
jose.amengual

you would have to somehow dig through the code, find the versions of all the sources, and create a YAML file for vendir

loren avatar

Some of that isn’t all that bad… I’ve used a little trick: loading HCL into Python, inspecting the content, constructing an object, and writing an override file to JSON…

loren avatar

But it’s still more about the remote source and whether they have more remote sources … Now you’re deep into reproducing terraform init already …

jose.amengual avatar
jose.amengual

yep

jose.amengual avatar
jose.amengual

and the docs in vendir are terrible

jose.amengual avatar
jose.amengual

you find more info in the issues

jose.amengual avatar
jose.amengual

there you go

jose.amengual avatar
jose.amengual
gh repo list cloudposse --limit 9999 --json url | jq '.[]|.url' | xargs -n1 git clone
jose.amengual avatar
jose.amengual

now if you could do something fancy with source = "git....." and use cloned repos in a repo…..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
jasonwbarnett/terraform-registry-proxy
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Finally!! someone has done it.

RB avatar

nice! this implements what we spoke about before, pepe (and they laughed at it lol). it’s basically a MITM to override the registry with a custom url

1
jose.amengual avatar
jose.amengual

well then…..when is this going to end…..

jose.amengual avatar
jose.amengual
outsideris/citizen

A Private Terraform Module Registry

jose.amengual avatar
jose.amengual

but this has the same problem with submodules

David avatar

I’ve worked around this using insteadOf for the terraform-null-label module

git config --global url.git@your-git-server:path/to/registry/modules.insteadOf <https://github.com/cloudposse> 
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@David does this work even when using the registry notation for modules?

David avatar

Hi @Erik Osterman (Cloud Posse) https://github.com/cloudposse/terraform-aws-ssm-parameter-store/blob/master/context.tf#L24 Is this what you are referring to?

  source  = "cloudposse/label/null"
jose.amengual avatar
jose.amengual

yes this

David avatar

Then yes,

source  = "cloudposse/label/null"

translates to

<https://github.com/cloudposse/terraform-null-label?ref=0.25.0>

running

git config --global url.git@your-git-server:path/to/registry/modules.insteadOf <https://github.com/cloudposse> 

forces the clone through your own source control server rather than going to GitHub; the colon (rather than the /) is important

David avatar

tldr, yes

jose.amengual avatar
jose.amengual

what is path/to/registry/modules? is it under its own org?

David avatar

We use GitLab, and this is the path to the project, so these are just groups for structure

David avatar

does that make sense?

jose.amengual avatar
jose.amengual

so under path/to/registry/modules you have a bunch of cloudposse modules?

David avatar

some yes, they are mirrored from github

loren avatar

i think this works if you do have access to the terraform registry endpoint, or if you override using .terraformrc. something has to intercept/change the default value of registry.terraform.io, unless you do have access to that site

jose.amengual avatar
jose.amengual

the problem is when you can’t clone to an org/group in your VCS, or when you have the module releases, for example, in an S3 bucket

David avatar

we don’t have that issue as we have a forward proxy to fetch modules for us

loren avatar

yeah if you need to wholesale change the source url and proto, that’s trouble. can’t think of anything that will do that except some kind of custom, specialized proxy

David avatar

our egress rules for dev are more lenient than they are for prod. our shared components environment allows egress to GitHub, which lets us mirror repos from GitHub, but we don’t permit that anywhere other than dev, so we are not as strict as your configuration

David avatar

we also permit *.hashicorp.com and registry.terraform.io in all our envs

1
loren avatar

aye, yeah, that’s what i figured. that’s why your gitconfig trick is working. it is hitting registry.terraform.io to get the manifest for the module, and that manifest is what supplies the github git url. then your gitconfig is able to swap the git url

this1
loren avatar

@jose.amengual i wonder if you could use the tf registry protocol to implement your own registry, and a .terraformrc config to force its use instead of the upstream registry, and have your registry return the s3 url (or wherever)… https://www.terraform.io/internals/module-registry-protocol

Module Registry Protocol | Terraform by HashiCorpattachment image

The module registry protocol is implemented by a host intending to be the host of one or more Terraform modules, specifying which modules are available and where to find their distribution packages.
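
The protocol itself is small; a sketch of the three requests involved, against a hypothetical registry.example.com:

```sh
# 1. Service discovery: where does this host serve the modules API?
$ curl https://registry.example.com/.well-known/terraform.json
{"modules.v1": "/v1/modules/"}

# 2. List the available versions for a module
$ curl https://registry.example.com/v1/modules/cloudposse/label/null/versions
{"modules": [{"versions": [{"version": "0.25.0"}]}]}

# 3. Resolve a download location: a 204 whose X-Terraform-Get header can
#    point anywhere Terraform can fetch from (git, S3, plain HTTPS, ...)
$ curl -i https://registry.example.com/v1/modules/cloudposse/label/null/0.25.0/download
HTTP/1.1 204 No Content
X-Terraform-Get: git::https://git.example.com/mirrors/terraform-null-label?ref=0.25.0
```

Returning an S3 or internal git URL from step 3 is what would let a private registry stand in for the public one.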

loren avatar

i think that’s kinda what apparentlymart was saying over in hangops

jose.amengual avatar
jose.amengual

yes, that is what I want to try

jose.amengual avatar
jose.amengual

but can you change the registry url in terraformrc?

loren avatar

good question, there is a “provider_installation” block… not sure if that impacts modules also… https://www.terraform.io/cli/config/config-file#explicit-installation-method-configuration

CLI Configuration | Terraform by HashiCorpattachment image

Learn to use the CLI configuration file to customize your CLI settings, including credentials, plugin caching, provider installation methods, etc.

jose.amengual avatar
jose.amengual

so my idea was to add registry.terraform.io to my hosts table, point it to my API gateway, have that return a 301/302 pointing to my registry url, and then see if it works

loren avatar

Makes sense

jose.amengual avatar
jose.amengual

the proxy thing is cool, but I do not like it much since it is yet another thing I need to deploy. BUT maybe it is even possible to do it in pure NGINX or Varnish (I know Varnish can do it)

jose.amengual avatar
jose.amengual

which, instead of relying on some Golang code that is too new, relies on well-known tooling

jose.amengual avatar
jose.amengual

that is, in case the redirect does not work

jose.amengual avatar
jose.amengual

I still think this should be implemented in the TF CLI, with an option to change the default registry url for any call to the registry (including modules)

loren avatar

Totally agree, it’s been a long-time pain point. We ended up recompiling the terraform binary to change it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, @jose.amengual i think it would be interesting if the proxy technique could just be reimplemented in a native AWS API Gateway with rewrites

1
jose.amengual avatar
jose.amengual

I might have got it:

curl <https://registry.terraform.io/.well-known/terraform.json> -X GET -k
{
      "modules.v1": "<https://iklqoxlmui.execute-api.us-west-2.amazonaws.com/live/modules.v1/>"
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s cool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you going to try to implement this?

jose.amengual avatar
jose.amengual
127.0.0.1 registry.terraform.io registry.amengual.cl
1
jose.amengual avatar
jose.amengual

I was trying to avoid custom configs in /etc/hosts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Very cool!

jose.amengual avatar
jose.amengual

but I still need to figure out how to avoid this:

Failed to retrieve available versions for module "consul" (main.tf:1) from registry.terraform.io: Failed to request discovery document: Get
│ "<https://registry.terraform.io/.well-known/terraform.json>": x509: certificate is valid for registry.amengual.cl, not registry.terraform.io.
jose.amengual avatar
jose.amengual

which is the blocker now

jose.amengual avatar
jose.amengual

I’m trying to implement it, but I want minimal custom config, and since the TF CLI is too smart now, it looks like I will have to install CA certs and other stuff, which I do not really want to do

jose.amengual avatar
jose.amengual

I remember the good old days of source = git://.......

loren avatar

heh. yeah. too smart, for its own good!

loren avatar

easier to change the registry host in source code, and/or disable the cert check, and recompile. then give your team your binary

loren avatar

i don’t have code for recent terraform versions handy, but here’s what we did back for terraform 0.13… eyeballing the current code base, looks like only minor adjustments to the file paths maybe, and should otherwise apply cleanly

jose.amengual avatar
jose.amengual

it is looking like that

jose.amengual avatar
jose.amengual

looking at the code, it pisses me off because it would be very easy for them to offer the option to pass that as an argument

loren avatar

yep. in our actual implementation (what i posted is from an older patchset), i think we read it from an env, with a default value if the env is unset

jose.amengual avatar
jose.amengual

Loren, did you ever create an issue to get this added?

loren avatar

negative, we did it in tf 0.11 days and just kept it up. meanwhile, tf was doing a bunch of stuff around the registry and made it sound like they were going to incorporate the use case. but if they did, i never could figure it out

jose.amengual avatar
jose.amengual

ok, I will create an issue and I will start making a lot of noise……

1
jose.amengual avatar
jose.amengual

Current Terraform Version

All

Use-cases

Make registry.terraform.io a configurable parameter instead of a constant, to be able to use an internally hosted module/submodule registry.

When using a module like so :

module "consul" {
  source = "hashicorp/consul/aws"
}

the source URL basically translates to :

source = "<https://registry.terraform.io/hashicorp/consul/aws>"

if the constant mentioned on L24 were configurable, it would be possible to serve the .well-known/terraform.json with the URL of the module registry and index pointing to an internal repo.

This is a very well-used pattern in many languages, where the repository of package dependencies can be configured and pointed to a hosted version on products like JFrog Artifactory, Nexus IQ, S3, and so on.

Attempted Solutions

It is not possible to configure at the moment, and the only way to do it is to hack SSL CAs and hosts tables to make this work, which is definitely not a good solution.

Proposal

Make the default registry URL https://registry.terraform.io configurable via a config file (.terraformrc) or an ENV variable.

References

https://github.com/hashicorp/terraform/blob/main/internal/addrs/provider.go#L24
https://github.com/apparentlymart/terraform-aws-tf-registry

2
1
Chin Sam avatar
Chin Sam

hi everyone! quick question: has anyone used the following cloudposse module: https://github.com/cloudposse/terraform-aws-cloudwatch-events/tree/0.5.0? It provides the required input cloudwatch_event_rule_pattern, and I want to pass a CloudWatch cron/rate expression to it to trigger a Lambda. Does the module support that? It’s failing for me. Any feedback is greatly appreciated, thanks

Matt Gowie avatar
Matt Gowie

Can you post the error you’re receiving from Terraform @Chin Sam?

Chin Sam avatar
Chin Sam

sure, absolutely. i use Terragrunt to inject the inputs, so here are the inputs

Chin Sam avatar
Chin Sam
inputs = {

  name          = "testing"
#  namespace     = var.namespace
#  stage         = var.stage

  cloudwatch_event_rule_description = "testing-cw-events"
  cloudwatch_event_rule_pattern = "cron(0 0 * * ? *)"
  cloudwatch_event_target_arn = "arn:aws:lambda:us-east-1:0...1:function:amis-housekeeping-dev-lambda-function" #module.sns.sns_topic.arn
}
Chin Sam avatar
Chin Sam

error

Chin Sam avatar
Chin Sam
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_cloudwatch_event_rule.this: Creating...

Error: error creating EventBridge Rule (dev-testing): InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object
 at [Source: (String)""cron(0 0 * * ? *)""; line: 1, column: 2]

  on main.tf line 10, in resource "aws_cloudwatch_event_rule" "this":
  10: resource "aws_cloudwatch_event_rule" "this" {


[terragrunt] 2022/04/05 15:57:43 Hit multiple errors:
exit status 1
Matt Gowie avatar
Matt Gowie

So it looks like your issue is not a Terraform or module problem, but more a problem with AWS’s validation / what you’re passing to AWS.

InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object

If you go and try to create the same CloudWatch event / rule pattern via the console — you would get the same error.

I would suggest you try doing what you want via the console first, get it working there, and then try to reverse engineer what you did in the console back to your Terraform code.

Matt Gowie avatar
Matt Gowie

To be more clear —

InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object

That error is an AWS API error — their validation code is rejecting the Terraform made API request because it doesn’t like that event_rule_pattern.

Chin Sam avatar
Chin Sam

yeah, i understand that. so i might be passing it in the wrong way: cloudwatch_event_rule_pattern = "cron(0 0 * * ? *)". does this input accept cron, and am i passing it incorrectly?

Matt Gowie avatar
Matt Gowie

@Chin Sam check out these docs — I believe they match up to that variable (I didn’t confirm):

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_rule#event_pattern

Matt Gowie avatar
Matt Gowie

They mention the event rule pattern should be JSON. They also point you to this documentation: https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html

Amazon EventBridge events - Amazon EventBridge

Provides basic scenarios and procedures for using Amazon EventBridge events.

Matt Gowie avatar
Matt Gowie

@Chin Sam it looks like your value for that attribute needs to be a map — See this line inside the module: https://github.com/cloudposse/terraform-aws-cloudwatch-events/blob/0.5.0/main.tf#L15

  event_pattern = jsonencode(var.cloudwatch_event_rule_pattern)
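
That jsonencode call is why a map works and a cron string does not: an event pattern is a JSON filter object, while a cron expression is a different argument (schedule_expression) on the underlying resource. A sketch of the distinction, with hypothetical values:

```hcl
# A valid event *pattern* is a map that jsonencode() turns into a JSON filter:
cloudwatch_event_rule_pattern = {
  source        = ["aws.ec2"]
  "detail-type" = ["EC2 Instance State-change Notification"]
}

# A cron/rate expression belongs in schedule_expression on the raw
# aws_cloudwatch_event_rule resource, not in event_pattern:
resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "nightly"
  schedule_expression = "cron(0 0 * * ? *)"
}
```

Whether the 0.5.0 module exposes a schedule input at all is a separate question; the error above is AWS rejecting a cron string where it expects a pattern object.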
Chin Sam avatar
Chin Sam

i see, thank you @Matt Gowie, much appreciated!

1

2022-04-06

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Announcement and Request for Comments!

We have had numerous requests to adopt production-level SemVer versioning for our public Terraform modules. We feel we are now ready to begin this process (slowly), and somewhat forced to do so due to the numerous breaking changes we are going to be releasing, at least partly due to the release of AWS Terraform Provider v4, which substantially breaks how S3 is configured.

Unfortunately, we feel compelled to begin our new SemVer launches in a rather awkward way, again due to the AWS v4 changes, particularly around S3.

The general roadmap we are planning, and for which we would like from you either a show of support or recommendations for alternatives (and explanation of their benefits) is as follows. For each Terraform module (as we get to it):

  1. The latest version that is compatible with AWS v3 and has no breaking changes will be released as v1.0.0, and version pinned to only work with AWS v3 if it is not also compatible with v4
  2. The module may need refactoring, leading to breaking changes. Refactoring compatible with AWS v3 will be done and released as v2.0.0
  3. The module will be brought into compliance with our current design patterns, such as use of the security-group and s3-bucket modules and the standardized inputs they allow. These modules may or may not expose the complete feature set of the underlying modules, as part of the point of some of them is to provide reasonable, “best practice” defaults and minimize the effort needed to configure the other modules for a specific intended use case. The module will be version pinned to require AWS v4 *and Terraform v1*. This module will get the next major release number, either v2 or v3 depending on what happened in step 2.

One drawback is that this can result in 2 or 3 major version releases happening in a very rapid succession. In particular s3-bucket v0.47.1 will be re-released (new tag, same code) as v1.0.0 and v0.49.0 will be released as v2.0.0 minutes later. (See the release notes for details of the changes.)

In a similar way, s3-log-storage will have 3 rapid fire releases. v0.26.0 will be re-released as 1.0.0, v0.27.0 will be re-released as v2.0.0, and v0.28.0 will be re-released as v3.0.0.

Personally, I do not like the hassle imposed on consumers by having the rapid fire major version releases. However, whether we use the version numbers or not to indicate it, the case remains that we have a series of breaking changes and manual migrations to release, partly forced on us by the AWS upgrade and partly a refactoring for our own sanity in maintaining all these modules. We have too many modules each with their own implementation of an S3 bucket resource (and configuration interface) and/or EC2 security group. We need to standardize on the inputs and the implementation so that we can have a clearer, smoother response to future underlying changes like these. In particular, we want to be able to take advantage of Terraform’s support for coding migration paths in the module itself, but these only work within a module; they do not work when one module migrates another. By having all our resources encapsulated appropriately, we hope to make future migration much, easier. Please bear with us through these next few months of breaking changes and manual migrations.

If you have suggestions for improving on the above plan, please share them. Likewise, if you have suggestions, either general or specific, for improving our migration instructions, please share them.

2
1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

CC @Alex Jurkiewicz @loren @ekristen @Erik Osterman (Cloud Posse) I would particularly like your feedback.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Hooray!

Ultimately, your roadmap sounds like a lot of work for the CloudPosse team. If you have the resources, it’s a nice roadmap. But personally, I get little value from the bits beyond “re-release the current version as 1.0.0 and follow semver going forward”.

Stuff like coordinated update of all modules sounds hard. The null-module updates took months to roll out – is another coordinated update really what you want to take on?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Thanks, @Alex Jurkiewicz, but I’m not quite sure what you mean by “coordinated updates of all modules”. You may have read something into what I wrote that I did not mean to imply.

The releases of v1, v2, and v3 of s3-log-storage, for example, are just renumberings of existing versions. Not hard at all. In general, on our roadmap, the release of v1 will be the current module, pinned to AWS v3 if it doesn’t work with v4. The v4 compatible release will incorporate all the other changes in our backlog, just so we can give ourselves a break on writing migration documents.

Alex Jurkiewicz avatar
Alex Jurkiewicz

oh! right, sounds good then

Alex Jurkiewicz avatar
Alex Jurkiewicz

You must be Nuru on github

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

(don’t tell anyone)

Alex Jurkiewicz avatar
Alex Jurkiewicz

Just commenting on one thing I saw you say on Github. I don’t think you should fear major version bumps, or try to minimise them. The value I get from semver is informative. I wouldn’t want the migration to semver to slow development of modules, or require more changes to be backwards compatible

loren avatar

sounds like a plan… will the v1 releases on all modules be cut more or less at the same time, or will the v1 release for any given module wait until the v2/v3 releases of that module are ready?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think @Jeremy G (Cloud Posse) is for now focusing on just the modules affected by the S3 changes. Long term, all modules, but the scope for now is smaller. The S3 changes mean anyone not watching the output of terraform plan closely might delete their bucket!

1
ekristen avatar
ekristen

@Jeremy G (Cloud Posse) Thanks for the ping! Also thank you for taking the time to type this up; I know it’s been a tough thing to sort out. This sounds like a solid plan. I believe the rapid fire is likely to be a rare occurrence, and I believe the community can and will understand the reasons for it this time; should this happen again for AWS v4 -> v5, I suspect it will still be considered rare.

Regarding the possible rapid fire of majors due to the AWS version and the refactoring, I would say this: since progress is being made towards 1.x and using major releases, I’m very flexible. If you all feel it’s more beneficial to refactor on 0.x pinned to AWS 3.x, rev to 1.x once the refactor is done (reducing the rapid fire of major releases), and then rev to 2.x for the AWS 4.x upgrade, I can see that working out too. It would be status quo with the current model to make breaking changes under 0.x, and I would qualify this situation as unique and not likely to occur again any time soon.

Thanks for following back up on this. Thank you to the team and company for the work being done on these modules, and to the people who help support them. I appreciate all the effort.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry for the spam in the channel. We are dealing with it. Please report any spam to me.

3
1
jose.amengual avatar
jose.amengual

define spam? is that basically me?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha promoting bitcoin wallets!

Matt Gowie avatar
Matt Gowie

@Erik Osterman (Cloud Posse) another Terraform framework hits the scene — this time from Nike.

https://github.com/nike-inc/pterradactyl

Nike-Inc/pterradactyl

Pterradactyl is a library developed to abstract Terraform configuration from the Terraform environment setup.

1
1
Bryan Dady avatar
Bryan Dady

That’s cool that they included that comparison table.

Have you looked at or tried out Opta yet?

I like where they’re headed, but we haven’t used it to build/deploy anything to prod yet.

Matt Gowie avatar
Matt Gowie

I hadn’t heard of Opta. After a quick browse, it looks too high-level IMHO. It seems to abstract too much away, and then you run into having to peel back a bunch of Opta’s code and various layers you don’t have great visibility into just to understand how you can do X. That works for some teams, and more power to ’em, but if you’re trying to do something more complicated, it feels like it will break down.

Grubhold avatar
Grubhold

Hi folks, I’m using the following resources from CloudPosse modules to create a DynamoDB table with items from a JSON file. I’m trying to conditionally create the table and items depending on a bool variable. Normally I would use count to add the condition, but I’m using for_each to loop over the JSON file. Please see the thread for the full code I’m using. Any help is highly appreciated

Grubhold avatar
Grubhold
module "dynamodb_label" {
  source = "./modules/labels"

  enabled = var.ENABLE_TABLE
  name    = var.dynamodb_name

  context = module.this.context
}

locals {
  json_data               = file("./items.json")
  items                 = jsondecode(local.json_data)
}

module "dynamodb_table" {
  source = "./aws-dynamodb"

  count = var.ENABLE_TABLE ? 1 : 0

  hash_key                      = "schema"
  hash_key_type                 = "S"
  autoscale_write_target        = 50
  autoscale_read_target         = 50
  autoscale_min_read_capacity   = 5
  autoscale_max_read_capacity   = 1000
  autoscale_min_write_capacity  = 5
  autoscale_max_write_capacity  = 1000
  enable_autoscaler             = true
  enable_encryption             = true
  enable_point_in_time_recovery = true
  ttl_enabled                   = false

  dynamodb_attributes = [
    {
      name = "schema"
      type = "S"
    }
  ]

  context = module.dynamodb_label.context
}

resource "aws_dynamodb_table_item" "dynamodb_table_item" {
  for_each   = var.ENABLE_TABLE ? local.items : {}
  table_name = module.dynamodb_table.table_name
  hash_key   = "schema"
  item       = jsonencode(each.value)

  depends_on = [module.dynamodb_table]

}

The JSON file

{
  "Item1": {
    "schema": {
      "S": "<https://schema.org/government-documents#id-card>"
    },
    "properties": {
      "S": "{\"documentName\":{\"type\":\"string\"},\"dateOfBirth\":{\"type\":\"string\"}}"
    }
  },
  "Item2": {
    "schema": {
      "S": "<https://schema.org/government-documents#drivers-license>"
    },
    "properties": {
      "S": "{\"documentName\":{\"type\":\"string\"},\"dateOfBirth\":{\"type\":\"string\"}}"
    }
  }
}

The error

Error: Inconsistent conditional result types

  on dynamodb-table.tf line 173, in resource "aws_dynamodb_table_item" "dynamodb_table_item":
 173:   for_each   = var.ENABLE_TABLE ? local.items : {}
    ├────────────────
    │ local.items is object with 13 attributes
    │ var.ENABLE_TABLE is a bool, known only after apply

The true and false result expressions must have consistent types. The given expressions are object and object, respectively.

----------
Error: Unsupported attribute

  on dynamodb-table.tf line 174, in resource "aws_dynamodb_table_item" "dynamodb_table_item":
 174:   table_name = module.dynamodb_table.table_name
    ├────────────────
    │ module.dynamodb_table is a list of object, known only after apply

This value does not have any attributes.
Grubhold avatar
Grubhold

I’ve tried many options to get past this error, even changing the variable type from bool to object. If I remove the condition in for_each and just pass local.items, the aws_dynamodb_table_item tries to create regardless of the depends_on, and it fails of course because table_name is returned empty, due to count = module.dynamodb_label.enabled ? 1 : 0 in the dynamodb_table module.

I want the aws_dynamodb_table_item to be skipped if var.ENABLE_TABLE is set to false.

What am I missing here?

Matt Gowie avatar
Matt Gowie

Your main issue is this:

The given expressions are object and number, respectively

Try something like this:

resource "aws_dynamodb_table_item" "dynamodb_table_item" {
  for_each   = var.ENABLE_TABLE ? local.items : {}
  table_name = module.dynamodb_table.table_name
  hash_key   = "schema"
  item       = each.value

  depends_on = [module.dynamodb_table]

}
Grubhold avatar
Grubhold

@Matt Gowie Thanks for your reply. I’m sorry I failed to include this; I have actually tried that, and I just updated the post. I’m actually using the following, and the value of ENABLE_TABLE is false, of type bool

resource "aws_dynamodb_table_item" "dynamodb_table_item" {
  for_each   = var.ENABLE_TABLE ? local.items : {}
  table_name = module.dynamodb_table.table_name
  hash_key   = "schema"
  item       = each.value

  depends_on = [module.dynamodb_table]

}

I still get this error.

Error: Inconsistent conditional result types

  on dynamodb-table.tf line 173, in resource "aws_dynamodb_table_item" "dynamodb_table_item":
 173:   for_each   = var.ENABLE_TABLE ? local.items : {}
    ├────────────────
    │ local.items is object with 13 attributes
    │ var.ENABLE_TABLE is a bool, known only after apply

The true and false result expressions must have consistent types. The given expressions are object and object, respectively.
Matt Gowie avatar
Matt Gowie

@Grubhold check out https://github.com/hashicorp/terraform/issues/27877 and https://github.com/hashicorp/terraform/issues/23364

They describe your issue and how you can work around it.

Grubhold avatar
Grubhold

@Matt Gowie Thanks for your reply. Unfortunately none of that worked; I had already checked them. It’s a very poor error: Terraform fails to say exactly what type it needs, and even claims the types are consistent (“object and object”). local.items and {} are both objects, but they don’t have the same attributes, and therefore they are not the same object type.

Grubhold avatar
Grubhold

I finally found a way to get around this obscure error by adding this condition: for_each = { for k,v in local.items : k => v if var.ENABLE_TABLE }. For anyone with the same requirement, this is a gem that I’d missed; you might find it useful as well.
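
Putting the thread together, a sketch of the working shape; note the [0] index, since count on the module turns module.dynamodb_table into a list (which the second error message hinted at):

```hcl
resource "aws_dynamodb_table_item" "dynamodb_table_item" {
  # Filtering inside the for expression sidesteps the inconsistent-types
  # error and yields an empty map (no items) when ENABLE_TABLE is false.
  for_each   = { for k, v in local.items : k => v if var.ENABLE_TABLE }
  table_name = module.dynamodb_table[0].table_name
  hash_key   = "schema"
  item       = jsonencode(each.value)
}
```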

2022-04-07

Release notes from terraform avatar
Release notes from terraform
05:13:15 PM

v1.1.8 (April 07, 2022) BUG FIXES: cli: Fix missing identifying attributes (e.g. "id", "name") when displaying plan diffs with nested objects. (#30685) functions: Fix error when sum() function is called with a collection of string-encoded numbers, such as sum(["1", "2", "3"]). …

cli: Fix missing identifying attributes in diff by alisdair · Pull Request #30685 · hashicorp/terraformattachment image

When rendering a diff for an object value within a resource, Terraform should always display the value of attributes which may be identifying. At present, this is a simple rule: render attributes n…

Alibek avatar

hi, could anyone tell me when this bug https://github.com/cloudposse/terraform-aws-elasticache-redis/issues/155 will be fixed? i can’t deploy elasticache-redis via your terraform module

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

When planning an elasticache redis instance with a cluster_size = 2 and cluster_mode_enabled = false, using v0.42.0, a deprecation warning is issued.

Expected Behavior

No deprecation warnings are issued and you have a clean plan.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a redis instance with a cluster size of 2 and cluster mode disabled.
  2. Run ‘terraform plan’
  3. See error
│ Warning: Argument is deprecated
│ 
│   with aws_elasticache_replication_group.default,
│   on main.tf line 116, in resource "aws_elasticache_replication_group" "default":
│  116: resource "aws_elasticache_replication_group" "default" {
│ 
│ Use num_node_groups and replicas_per_node_group instead
│ 
│ (and 4 more similar warnings elsewhere)

Screenshots

N/A

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

• OS: OSx • Version 12.1

Additional Context

N/A

Alibek avatar

the page of this terraform module https://registry.terraform.io/modules/cloudposse/elasticache-redis/aws/latest says there are 21 required variables, but only 1 variable is marked as required. could anyone list the other required variables? i have problems deploying because i don’t know all the required variables.

2022-04-08

2022-04-10

Nimesh Amin avatar
Nimesh Amin

Hey everyone! I’m running into an issue where I’m trying to add an S3 bucket to my existing infra, but hitting what looks like a “resource depended on before it’s created” issue. However, I’ve verified in the AWS console that the mailer@comp policy exists. This role here is owned by my mailer, which should allow it access to the new S3 bucket I’m creating.

Is this error due to the IAM policy, or really due to the s3 bucket needing to be created first?

╷
│ Error: Invalid for_each argument
│
│   on .terraform/modules/iam-eks-roles.eks_iam_role/main.tf line 82, in resource "aws_iam_policy" "service_account":
│   82:   for_each    = length(var.aws_iam_policy_document) > 0 ? toset(compact([module.service_account_label.id])) : []
│     ├────────────────
│     │ module.service_account_label.id is "mailer@comp"
│     │ var.aws_iam_policy_document is a string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
╵
loren avatar

I’m guessing it’s the same issue reported here? https://github.com/terraform-aws-modules/terraform-aws-iam/issues/193

Description

When using iam-eks-role i cannot pass a role_policy_arn i created in the same module

Versions

• Terraform: 1.1.4 • Provider(s): • registry.terraform.io/hashicorp/aws: 3.74.0 • Module: • terraform-aws-modules/eks/aws: 18.2.7 • terraform-aws-modules/iam/aws//modules/iam-eks-role: 4.13.1

Reproduction

Using the code snippet below and

Code Snippet to Reproduce

resource "aws_iam_policy" "alb_load_balancer" {
    name        = "K8SALBController"
    path        = local.base_iam_path
    description = "Policy that allows k8s load balancer controller to provisione alb/elb"
    policy = file("${path.module}/policies/alb-policy.json")         
}

module "aws_alb_controller_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-eks-role"
  //source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "4.13.1"

  create_role      = true
  role_name = "aws-alb-controller"
  role_path = local.base_iam_path
  role_description = "IRSA role for load balancer controller"

  role_policy_arns               = [aws_iam_policy.alb_load_balancer.arn]
  cluster_service_accounts = {
    "${var.cluster_name}" = [
      "kube-system:aws-alb-controller"
    ]
  }
  depends_on = [
    module.eks.cluster_id
  ]
}

Expected behavior

Policy should be attached

Actual behavior

I get the error below

Terminal Output

│ Error: Invalid for_each argument
│
│   on .terraform\modules\eks.aws_node_termination_handler_role\modules\iam-eks-role\main.tf line 76, in resource "aws_iam_role_policy_attachment" "custom":
│   76:   for_each = toset([for arn in var.role_policy_arns : arn if var.create_role])
│     ├────────────────
│     │ var.create_role is true
│     │ var.role_policy_arns is list of string with 1 element
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.

Additional context

Using

resource "aws_iam_role_policy_attachment" "custom" {
  role       = module.aws_alb_controller_role.iam_role_name
  policy_arn = aws_iam_policy.alb_load_balancer.arn
}

I am able to attach a policy.
Since this is a common policy i can user terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks, but for a custom policy it wouldn’t work

1
1
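
The usual workaround for this class of error is to keep the for_each keys statically known (plain strings decided at plan time) and move the apply-time-only values, such as ARNs, to the value side. A sketch, reusing names from the issue above:

```hcl
locals {
  # Keys are literal strings known at plan time; values may be unknown
  # until apply, which is fine because for_each only inspects the keys.
  custom_policies = {
    alb_load_balancer = aws_iam_policy.alb_load_balancer.arn
  }
}

resource "aws_iam_role_policy_attachment" "custom" {
  for_each   = { for k, v in local.custom_policies : k => v if var.create_role }
  role       = module.aws_alb_controller_role.iam_role_name
  policy_arn = each.value
}
```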
Nimesh Amin avatar
Nimesh Amin

thank you!

2022-04-11

Chandler Forrest avatar
Chandler Forrest

Need some guidance on manipulating a data object. I have this list of maps:

 pet_store = [
        {
            animal_type = "cat"
            animal_name = "fluffy"
        },
        {
            animal_type = "cat"
            animal_name = "blah"
        },
        {
            animal_type = "dog"
            animal_name = "bingo"
        }
]

And I want to turn it into this:

pet_store2 = [
        {
            animal_type = "cat"
            animal_name = ["fluffy", "blah"]
        },
        {
            animal_type = "dog"
            animal_name = ["bingo"]
        }
]

I’ve played around with the for expressions, merge function, keys function, etc - > but I can’t quite get my output.

RB avatar

Try this

locals {
  # pet_store = {}
  unique_animals = [
    for store in local.pet_store :
    store.animal_type
  ]
  pet_store2 = [
    for animal_type in local.unique_animals :
    {
      animal_type = animal_type
      animal_names = [
        for store in local.pet_store :
        store.animal_name
        if store.animal_type = animal_type
      ]
    }
  ]
}
1
1
loren avatar

this:

  unique_animals = [
    for store in local.pet_store :
    store.animal_type
  ]

can just be:

  unique_animals = distinct(local.pet_store[*].animal_type)
cool-doge1
RB avatar

oh wow, even better

RB avatar

oh right, and i forgot the distinct. my original code just grabbed all the animal types and didn’t deduplicate. i admit i didn’t run the code above

loren avatar

i feel like it should be possible to use the grouping operator for this kind of use case, but it’s always hard for me to wrap my head around how it works… https://www.terraform.io/language/expressions/for#grouping-results

For Expressions - Configuration Language | Terraform by HashiCorpattachment image

For expressions transform complex input values into complex output values. Learn how to filter inputs and how to group results.

Chandler Forrest avatar
Chandler Forrest

I was trying to use the grouping operator … to deduplicate previously.

Chandler Forrest avatar
Chandler Forrest

question - is that syntax on the if statement after the for expression correct… terraform is complaining

loren avatar

what is the complaint? it looks right at first glance to me

loren avatar
For Expressions - Configuration Language | Terraform by HashiCorpattachment image

For expressions transform complex input values into complex output values. Learn how to filter inputs and how to group results.

Chandler Forrest avatar
Chandler Forrest
 Error: Invalid 'for' expression
│ 
│   on main.tf line 62, in locals:
│   60:       animal_names = [
│   61:         for store in local.pet_store :
│   62:         store.animal_name if store.animal_type = animal_type
│ 
│ Extra characters after the end of the 'for' expression.
loren avatar

==?

Chandler Forrest avatar
Chandler Forrest

solid ^ that was it

Chandler Forrest avatar
Chandler Forrest

Thank you @RB and @loren

loren avatar

i could see how grouping could get there if the input looked like this:

pet_store = {
  fluffy = { animal_type = "cat" }
  blah   = { animal_type = "cat" }
  bingo  = { animal_type = "dog" }
}

then you could get the list with:

pets_by_type = { for name, pet in local.pet_store : pet.animal_type => name... }

and that should result in:

{
  cat = ["fluffy", "blah"]
  dog = ["bingo"]
}
loren avatar

and i guess you could construct that data structure and transform from there, somehow
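For reference, a minimal sketch of that transform over the original pet_store list, using the grouping operator (local names taken from the example above):

locals {
  # Group animal names by type; the "..." grouping operator collects values
  # for duplicate keys into a list
  pets_by_type = {
    for pet in local.pet_store :
    pet.animal_type => pet.animal_name...
  }

  # Reshape the grouped map back into the desired list of objects
  pet_store2 = [
    for animal_type, names in local.pets_by_type :
    {
      animal_type = animal_type
      animal_name = names
    }
  ]
}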

2022-04-12

fulminato76 avatar
fulminato76

Hello everyone, I’d like to enable logging on elasticache-redis using https://registry.terraform.io/modules/cloudposse/elasticache-redis/aws/latest, but I can’t find documentation

Matt Gowie avatar
Matt Gowie

It doesn’t look like our module supports the log_delivery_configuration setting yet — https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster#log_delivery_configuration

You can implement it yourself by forking the module and then PRing back. Then you’ll have implemented it for your project and for others that may want that in the future. It is always appreciated!

Ping me if you implement it and I’ll check out your PR.

fulminato76 avatar
fulminato76

Thank you @Matt Gowie for your reply… I’ll talk with my team….

fulminato76 avatar
fulminato76

is it possible?

2022-04-13

Elleval avatar
Elleval

Hey guys, I was looking at the TF modules for ECS/Fargate but didn’t see a module for creating an ECS cluster. Have I missed it?

Elleval avatar
Elleval
resource "aws_ecs_cluster" "default" {
RB avatar

i don’t believe we have a module for it. just like you said, there’s not much to it, so we just take the cluster as an input instead of creating it
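For reference, the bare resource is indeed minimal; a hedged sketch (the name and the optional Container Insights setting are placeholders):

resource "aws_ecs_cluster" "default" {
  name = "example-cluster"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}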

Joshua avatar

Hello, I have a question concerning atmos. I am currently using terraform cloud for the remote state, planning, and applying. If I use atmos, can I still keep my workflow as in will tf cloud plan and apply, or is it strictly geo/atmos that has to apply stacks to my environments? I love the idea of the yaml and stacks, as it seems to make life easier, but our devs also like seeing what is planned/applied in tf cloud or spacelift. So I hope this makes sense. TY!

TL;DR does Atmos replace Terraform Cloud for planning/applying since it uses yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use atmos with Spacelift (to generate TF workspaces and varfiles)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in any case, atmos doesn’t care about the backend, be it S3 or TF Cloud

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the backend is configured in TF as usual

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos has the functionality to generate the TF backend file from YAML config, but you don’t have to use it

Joshua avatar

Okay, thanks, that makes sense. So I can only use TF Cloud as the remote state, since I have to plan/apply from within Atmos, because terraform needs a .tf file in the working directory to read from.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have an unmaintained module for TFE that probably doesn’t work any more https://github.com/cloudposse/terraform-tfe-cloud-infrastructure-automation

cloudposse/terraform-tfe-cloud-infrastructure-automation

Terraform Enterprise/Cloud Infrastructure Automation

Bhavik Patel avatar
Bhavik Patel

I created a tf module to provision an RDS database that has resources like security groups within the module. I’m wanting to change the naming that I setup without having to do terraform state mv module.old_resource module.new_resource for all the resources associated with the module. Anyone have ideas?

Alex Jurkiewicz avatar
Alex Jurkiewicz

there’s no other way
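Worth noting: since Terraform 1.1 there is also the moved block, which records a rename in configuration instead of requiring terraform state mv; a minimal sketch using the addresses from the question (one block per renamed module call):

moved {
  from = module.old_resource
  to   = module.new_resource
}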

Release notes from terraform avatar
Release notes from terraform
06:43:10 PM

v1.2.0-alpha20220413 1.2.0 (Unreleased) UPGRADE NOTES: The official Linux packages for the v1.2 series now require Linux kernel version 2.6.32 or later. When making outgoing HTTPS or other TLS connections as a client, Terraform now requires the server to support TLS v1.2. TLS v1.0 and v1.1 are no longer supported. Any safely up-to-date server should support TLS 1.2, and mainstream web browsers have required it since 2020. When making outgoing HTTPS or other TLS connections as a client, Terraform will no longer…

Elleval avatar
Elleval

Hello! I was looking at this module - https://github.com/cloudposse/terraform-aws-ecs-alb-service-task. I’m not clear on the relationship between the ECS service and the ALB. I’m interpreting the docs as: the ALB needs to be created first, and then its details are passed via https://github.com/cloudposse/terraform-aws-ecs-alb-service-task#input_ecs_load_balancers? It’s got me scratching my head, as I thought the target group would need to be created before the ALB. Any pointers appreciated.

Elleval avatar
Elleval

Having read a bit more, I now believe it works like this….

Elleval avatar
Elleval

… I create the ALB and IP target group separately. These are then referenced via ecs_load_balancers and linked to the ECS service.

nguyenbanguyen1993 avatar
nguyenbanguyen1993

Hi guys, i’m following this module https://github.com/cloudposse/terraform-aws-tfstate-backend for storing backend state on s3. The apply step works perfectly but when i do

terraform init -force-copy

No state file was uploaded to my s3 bucket. This is my state backend module and the “terraform init --force-copy” logs. Did I miss any step? Thank you.

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

nguyenbanguyen1993 avatar
nguyenbanguyen1993

Nvm, my silly mistake with

terraform_backend_config_file_path
terraform_backend_config_file_name

it should be a .tf file

RB avatar

glad you figured it out

2022-04-14

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

We’re looking to move our DR region, so we have to set up a new region from scratch. The previous DR and primary regions were not done via Terraform but by hand, so we’re trying to take this opportunity to change that. Looking to use the terraform-aws-vpc module, but looking at the various subnet modules trying to determine which would be best and how to follow our network design. We only have 1 security VPC that has the IGW, and we connect the rest of the VPCs to a TGW that has the default route going to the security VPC. Obviously I have to make sure the NGW settings are disabled, and can disable the IGW creation on all but the security VPC. Looking for pros/cons of the different subnet modules to help narrow down which to utilize.

Matt Gowie avatar
Matt Gowie

Interesting new tool for Terraform AWS IAM Permission change diffs — https://semdiff.io/

Semdiff

Semdiff makes reviewing terraform PRs easier.

jose.amengual avatar
jose.amengual

funny, people are asking what I think

jose.amengual avatar
jose.amengual

and the second time I see it

jose.amengual avatar
jose.amengual

they have atlantis integration docs

David Spedzia avatar
David Spedzia

Hey there, I am using the CloudPosse terraform modules cloudposse/vpc/aws and cloudposse/multi-az-subnets/aws. I have two CIDR ranges in the VPC, 10.20.0.0/22 and 10.21.0.0/18. The /22 is for public subnets in the VPC and the /18 is for private subnets. When I run terraform, the private subnets fail to create. I am able to create them manually in AWS, however. What is the limitation here?

loren avatar

was reviewing the release notes for the upcoming 4.10.0 release of the aws provider, and noticed a mention about custom policies for config rules, which led me to this feature. pretty neat… an alternative to writing lambda functions for custom config rules…

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_cfn-guard.html

Creating AWS Config Custom Policy Rules - AWS Config

Create AWS Config Custom Policy rules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

matt avatar

That’s pretty cool…thanks!


2022-04-15

sephr avatar

Hello, we’re using the CloudPosse terraform module cloudposse/eks-cluster/aws and we’re currently in the process of upgrading our eks cluster to 1.22. However when we run terraform to perform the upgrade we receive the following error during terraform plan :

│ Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
│
│   with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│  115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {

I have a general understanding of what’s happening here. Essentially terraform can’t access the k8s config map inside the cluster to verify it’s there, but I thought the whole point of setting aws_auth_ignore_changes[0] = true was to avoid this situation. Perhaps I’m misunderstanding something.

Is there a recommended process to navigate around this issue and upgrade my eks cluster without having to re-create the aws-auth cm?

Thanks!

RB avatar

try this

kube_exec_auth_enabled = !var.kubeconfig_file_enabled
RB avatar

also try this

  kube_data_auth_enabled = false
  # exec_auth is more reliable than data_auth when the aws CLI is available
  # Details at https://github.com/cloudposse/terraform-aws-eks-cluster/releases/tag/0.42.0
  kube_exec_auth_enabled = !var.kubeconfig_file_enabled
  # If using `exec` method (recommended) for authentication, provide an explicit
  # IAM role ARN to exec as for authentication to EKS cluster.
  kube_exec_auth_role_arn         = coalesce(var.import_role_arn, module.iam_roles.terraform_role_arn)
  kube_exec_auth_role_arn_enabled = true
  # Path to KUBECONFIG file to use to access the EKS cluster
  kubeconfig_path         = var.kubeconfig_file
  kubeconfig_path_enabled = var.kubeconfig_file_enabled
sephr avatar

Thanks so much for the response. Last night we identified that our dev cluster had successfully updated to 1.22 despite the terraform error described above.

Kind of frustrating, but after looking through all the cluster configs and tests, it seems to be running in a stable fashion, and of course, because it couldn’t access the aws-auth cm, the SSO roles in the cluster are all intact.

Of course when we perform the changes to our prod env I’ll be sure to keep the above information on hand. I’ll also take a read through the link in the comments.

Out of curiosity, is this an expected error that we can safely ignore or should I suspect the worst and continue to investigate?

Thanks again for the feedback. Really appreciated.

2022-04-16

2022-04-18

Greg avatar

Hi all, i’ve imported a previously-manually-created elastic beanstalk environment into terraform that i’d like to manage using cloudposse/elastic-beanstalk-environment/aws; the environment name uses caps but terraform wants to recreate it with a lowercase name. Is there any way to avoid that and maintain the current case?

Matt Gowie avatar
Matt Gowie

Does the label module lowercase your input name… I can’t remember.

Can you pass in the name as all caps so it matches your imported object?

Unfortunately, your other option would be to fork and add an ignore_changes for the name attribute, but that would be a shame and you wouldn’t get as much out of using the open source module.
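If you did go the fork route, the change would roughly be a lifecycle block on the environment resource; a hedged sketch (other arguments elided):

resource "aws_elastic_beanstalk_environment" "default" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Stop Terraform from recreating the environment over a case-only name difference
    ignore_changes = [name]
  }
}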

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hi all,

Based on feedback about the existing experiment, we don’t intend to graduate it to be a stable feature exactly as currently designed. Instead, we intend to make another design iteration which better integrates the idea of default values into the constraints system, and deals with some other concerns that folks raised in feedback on the existing proposal.

We don’t intend to take the experiment away until there’s a new experiment to replace it, but if you do wish to use it in production in spite of its experimental status I suggest keeping in mind that you will probably need to change your configuration at least a little in response to the changes for the second round of experiment.

The purpose of experiments is to gather feedback before finalizing a design, so in this case the experiment process worked as intended but since the feedback led to us needing another round of design we had to wait until there was time to spend on that design work, since the team was already wholly assigned to other work. We’ll share more here when there’s more to share.

loren avatar

Somehow I wasn’t tracking that issue yet; subscribed and gave it a thumbs-up.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best response:

loren avatar

Haha, when writing variable objects, it often occurs to me that I just want this, https://github.com/hashicorp/terraform/issues/19898#issuecomment-1084628191

I find the separate declaration of the types and the defaults a bit cumbersome and the “optional” keyword superfluous.

I think this has already been suggested, but the syntax for declaring the attributes of a variable is already defined, so it would be consistent to just make them recursive, eg.

variable "simple1" {
  default     = "blah"
  description = "A simple variable"
  type        = string
}
variable "complex1" {
  default = null
  description = "A complex variable"
  type = object({
    mystring1 = {
      default = "blah"
      description = "A string in an object variable (optional)"
      type = string
    }
    mystring2 = {
      description = "A string in an object variable (required)"
      sensitive = true
      type = string
    }
    mylist1 = {
      default = []
      description = "A list in an object variable (optional)"
      type = list(string)
    }
    mymap1 = {
      default = {}
      description = "A map in an object variable (optional)"
      type = map(string)
    }
    myobject1 = {
      default = null
      description = "An object within an object variable (optional)"
      type = object({
        ....
      })
    }
    mystring3 = { description = "Another string in an object variable (required)", type = string }
  })
}

Matt Gowie avatar
Matt Gowie

Jeremy’s point about that experiment dragging on from 0.14 onward is a good one. It’s sad to me how slow to react they’ve been on this considering it’s one of the top requested features. We’re really not asking for much it seems to me.

2022-04-19

2022-04-20

Release notes from terraform avatar
Release notes from terraform
01:53:13 PM

v1.1.9 1.1.9 (April 20, 2022) BUG FIXES: cli: Fix crash when using sensitive values in sets. (#30825) cli: Fix double-quoted map keys when rendering a diff. (#30855) core:…

cli: Fix plan diff for sensitive nested attributes by alisdair · Pull Request #30825 · hashicorp/terraform

When rendering diffs for resources which use nested attribute types, we must cope with collections backing those attributes which are entirely sensitive. The most common way this will be seen is th…

cli: Fix double-quoted map keys in diff UI by alisdair · Pull Request #30855 · hashicorp/terraform

A previous change added missing quoting around object keys which do not parse as barewords. At the same time we introduced a bug where map keys could be double-quoted, due to calling the displayAtt…

Ikana avatar

I’m trying to use the lambda module, but I’m getting some errors when trying to use the var.custom_iam_policy_arns

RB avatar

hmm this seems like a recent error

RB avatar

cc: @Ikana

RB avatar

cc: @jose.amengual you added this recently, no ?

Ikana avatar

It works if I create the policy with -target first

Ikana avatar

and then create everything else

Ikana avatar

also tried this:

Ikana avatar
custom_iam_policy_arns = toset([aws_iam_policy.exec.arn])
jose.amengual avatar
jose.amengual

mmmm

jose.amengual avatar
jose.amengual

I can try to reproduce in like 1 hour

jose.amengual avatar
jose.amengual

I will report back

loren avatar

when an arn in var.custom_iam_policy_arns is created in the same state, that for_each expression will generate that error because the arn is not known until apply (as it says), and for_each key expressions must all be fully resolved in the plan phase

loren avatar

I discussed this recently on one of Anton’s modules also… https://github.com/terraform-aws-modules/terraform-aws-iam/issues/193#issuecomment-1063421930


I don’t follow, do you have an example?

Yes, if you create a resource and also in the same config/state attempt to reference an attribute of that resource in the key of the for_each expression, then you will encounter the error mentioned in the OP:
The “for_each” value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.

Note the “key” of the for_each expression is what must be known. That is important. The problem in this specific case is that the policy is being created in the same state, and so the ARN is unknown in a first apply, and the ARN is used as the for_each key.

My solution is to avoid referencing attributes of resources that are likely to be created in the same config/state. Instead of this:

resource "aws_iam_role_policy_attachment" "custom" {
  for_each = toset([for arn in var.role_policy_arns : arn if var.create_role])

  role       = aws_iam_role.this[0].name
  policy_arn = each.key
}

I would use either a map, or an object variable, and construct the for_each key from values that may be known in advance. Using a map(string), the key would be anything the user sets, presumably a “name” of the policy, and the value would be the ARN of the policy:

variable "role_policy_arns" {
  type    = map(string)
  default = {}
}

resource "aws_iam_role_policy_attachment" "custom" {
  for_each = var.create_role ? var.role_policy_arns : {}

  role       = aws_iam_role.this[0].name
  policy_arn = each.value
}

Using an object, you have a lot of flexibility in the data structure. I like lists of objects because it works with the splat operator, but a map of objects would be fine also. Here is a list of objects:

variable "role_policy_arns" {
  type = list(object({
    name = string  # <-- used as for_each key, so cannot reference attribute of resource in same state
    arn  = string  # <-- only in for_each value, so it is fine to reference attribute of resource in same state
  }))
  default = []
}

resource "aws_iam_role_policy_attachment" "custom" {
  for_each = {for policy in var.role_policy_arns : policy.name => policy if var.create_role}

  role       = aws_iam_role.this[0].name
  policy_arn = each.value.arn
}

And here is a map of objects, where the map key is used directly as the for_each key expression, and is again an arbitrary value set by the user that presumably represents the policy name…

variable "role_policy_arns" {
  type = map(object({
    arn  = string  # <-- only in for_each value, so it is fine to reference attribute of resource in same state
  }))
  default = {}
}

resource "aws_iam_role_policy_attachment" "custom" {
  for_each = var.create_role ? var.role_policy_arns : {}

  role       = aws_iam_role.this[0].name
  policy_arn = each.value.arn
}

jose.amengual avatar
jose.amengual

ok, my original pr was better @RB LOL

jose.amengual avatar
jose.amengual

I was just using a string…..

jose.amengual avatar
jose.amengual

and no for_each

jose.amengual avatar
jose.amengual

we might have to switch back to one policy and pass the arn; if a user needs more than one policy, they can merge them into one before passing it

jose.amengual avatar
jose.amengual

@Ikana I can make a branch with the changes and maybe you can try it out?

loren avatar

you might want to also play with the moved block for that change, or otherwise it is backwards incompatible…
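A hedged sketch of what that could look like for the attachment resource, moving from the old count-based address to a for_each key (the "my-policy" key is hypothetical):

moved {
  from = aws_iam_role_policy_attachment.custom[0]
  to   = aws_iam_role_policy_attachment.custom["my-policy"]
}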

jose.amengual avatar
jose.amengual

it was merged last week and is broken

jose.amengual avatar
jose.amengual

so my guess is no one has been able to use it

jose.amengual avatar
jose.amengual

unless the policy already existed……

loren avatar

it’s only partially broken. you could have supplied an aws-managed-policy-arn as those always exist. or you could use -target. or you could create the policy in another tfstate.

jose.amengual avatar
jose.amengual

correct

Ikana avatar

sure, I’m down to try

Ikana avatar

@jose.amengual

jose.amengual avatar
jose.amengual

you will need to use the github source

Ikana avatar
╷
│ Error: Invalid count argument
│ 
│   on .terraform/modules/main.lambda/iam-role.tf line 77, in resource "aws_iam_role_policy_attachment" "custom":
│   77:   count      = local.enabled && length(var.custom_iam_policy_arn) > 0 ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the count depends on.
jose.amengual avatar
jose.amengual

yes because it can’t be calculated….

jose.amengual avatar
jose.amengual

can you add a depends_on to the policy you are creating?

jose.amengual avatar
jose.amengual

ahhh but the count will not work either..

jose.amengual avatar
jose.amengual

ahhh I have a bad logic there, I will PM you

jose.amengual avatar
jose.amengual
variable "custom_iam_policy_enabled" {
RB avatar

to allow attaching the policy using the module, the arn would have to be contrived instead of passed in

RB avatar

to allow attaching the policy outside the module, the arn can be passed in using the resource

RB avatar

We added 2 new outputs, role_name and role_arn, to attach a policy from outside, which gets around the issue you saw above
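A minimal sketch of the outside method using those outputs (the module name, policy name, and policy contents are placeholders):

# Custom policy created alongside the module
resource "aws_iam_policy" "custom" {
  name = "my-lambda-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = "*"
    }]
  })
}

# Attached from outside the module via the new role_name output
resource "aws_iam_role_policy_attachment" "custom" {
  role       = module.lambda.role_name
  policy_arn = aws_iam_policy.custom.arn
}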

jose.amengual avatar
jose.amengual

thanks @RB

loren avatar

That looks like it will be subject to a race condition on the creation of the policy, and its attachment to the lambda role?

RB avatar

I’d recommend the outside method personally, but the example serves as both a test and an example.

wait, how could there be a race condition? there’s a depends_on on the module for the inside policy to be complete

loren avatar

Ahh, I missed the depends_on. That is handling the race condition for most cases. I’ve come to avoid depends_on anywhere I can, especially at the module-level, so just overlooked it.

RB avatar

the inside method seems better for managed aws policies and the outside method seems best for custom policies. no depends_on required for either method

loren avatar

my personal preference, as a pattern for this kind of thing (not necessarily suggesting to change anything for this module), is still just to have the variable be a list of objects with name and arn, or a map of name => arn, instead of a list of arns. if i’m gonna for_each over it, the key can’t be a value that is likely to be an output of a resource

RB avatar

so instead of

  custom_policy_arns = [
    "arn1",
    "arn2",
  ]

it would be

  custom_policy_arns = {
     role1 = "arn1"
     role2 = "arn2"
   }

and this would avoid the for_each / count error?

loren avatar

yep that’s how it works. only the keys in the for_each expression need to be known. the values do not. so role1 and role2 in your example are the keys and arn1 and arn2 are the values. that’s what i was attempting to demonstrate in this comment… https://github.com/terraform-aws-modules/terraform-aws-iam/issues/193#issuecomment-1063421930


RB avatar

that makes sense. i wouldn’t be against a solution like that instead of a straight list of arns.

terraform code… so much of it is workarounds.. sheesh

loren avatar

there are way fewer rough edges than there used to be, but this is certainly one of them

jose.amengual avatar
jose.amengual

isn’t this supposed to be solved in one of the newer versions? I remember someone talking about a coming fix or solution

loren avatar

there’s work/thought ongoing about something like a “progressive” apply. it’s a bit messy though, since there are things that are not known explicitly up front, so the initial plan is not terribly accurate. what is it going to look like? an interactive, guided plan/apply/plan/apply until everything that can be known is known, or a real error is encountered? i’m not sure…

jose.amengual avatar
jose.amengual

well you could apply data resources first and then pull right away to get the values and then continue

loren avatar

they already do that. data sources are known values, unless the inputs to the data sources are themselves attributes of un-applied resources…

loren avatar

For a while now I’ve been wringing my hands over the issue of using computed resource properties in parts of the Terraform config that are needed during the refresh and apply phases, where the values are likely to not be known yet.

The two primary situations that I and others have run into are:

• Interpolating into provider configuration blocks, as I described in #2976. This is allowed by Terraform but fails in unintuitive ways when a chicken-and-egg problem arises.
• Interpolating into the count modifier on resource blocks, as described in #1497. Currently this permits only variables, but having it configurable from resource attributes would be desirable.

After a number of false-starts trying to find a way to make this work better in Terraform, I believe I’ve found a design that builds on concepts already present in Terraform, and that makes only small changes to the Terraform workflow. I arrived at this solution by “paving the cowpaths” after watching my coworkers and I work around the issue in various ways.


The crux of the proposal is to alter Terraform’s workflow to support the idea of partial application, allowing Terraform to apply a complicated configuration over several passes and converging on the desired configuration. So from the user’s perspective, it would look something like this:

$ terraform plan -out=tfplan
... (yada yada yada) ...

Terraform is not able to apply this configuration in a single step. The plan above
will partially apply the configuration, after which you should run "terraform plan"
again to plan the next set of changes to converge on the given configuration.

$ terraform apply tfplan
... (yada yada yada) ...

Terraform has only partially-applied the given configuration. To converge on
the final result, run "terraform plan" again to plan the next set of changes.

$ terraform plan -out=tfplan
... (yada yada yada) ...

$ terraform apply
... (yada yada yada) ...

Success! ....

For a particularly-complicated configuration there may be three or more apply/plan cycles, but eventually the configuration should converge.

terraform apply would also exit with a predictable exit status in the “partial success” case, so that Atlas can implement a smooth workflow where e.g. it could immediately plan the next step and repeat the sign-off/apply process as many times as necessary.

This workflow is intended to embrace the existing workaround of using the -target argument to force Terraform to apply only a subset of the config, but improve it by having Terraform itself detect the situation. Terraform can then calculate itself which resources to target to plan for the maximal subset of the graph that can be applied in a single action, rather than requiring the operator to figure this out.

By teaching Terraform to identify the problem and propose a solution itself, Terraform can guide new users through the application of trickier configurations, rather than requiring users to either have deep understanding of the configurations they are applying (so that they can target the appropriate resources to resolve the chicken-and-egg situation), or requiring infrastructures to be accompanied with elaborate documentation describing which resources to target in which order.

Implementation Details

The proposed implementation builds on the existing concept of “computed” values within interpolations, and introduces the new idea of a graph nodes being “deferred” during the plan phase.

Deferred Providers and Resources

A graph node is flagged as deferred if any value it needs for refresh or plan is flagged as “computed” after interpolation. For example:

• A provider is deferred if any of its configuration block arguments are computed.
• A resource is deferred if its count value is computed.

Most importantly though, a graph node is always deferred if any of its dependencies are deferred. “Deferred-ness” propagates transitively so that, for example, any resource that belongs to a deferred provider is itself deferred.

After the graph walk for planning, the set of all deferred nodes is included in the plan. A partial plan is therefore signaled by the deferred node set being non-empty.

Partial Application

When terraform apply is given a partial plan, it applies all of the diffs that are included in the plan and then prints a message to inform the user that it was partial before exiting with a non-successful status.

Aside from the different rendering in the UI, applying a partial plan proceeds and terminates just like an error occurred on one of the resource operations: the state is updated to reflect what was applied, and then Terraform exits with a nonzero status.

Progressive Runs

No additional state is required to keep track of partial application between runs. Since the state is already resource-oriented, a subsequent refresh will apply to the subset of resources that have already been created, and then plan will find that several “new” resources are present in the configuration, which can be planned as normal. The new resources created by the partial application will cause the set of deferred nodes to shrink – possibly to empty – on the follow-up run.


Building on this Idea

The write-up above considers the specific use-cases of computed provider configurations and computed “count”. In addition to these, this new concept enables or interacts with some other ideas:

• #3310 proposed one design for supporting “iteration” – or, more accurately, “fan out” – to generate a set of resource instances based on data obtained elsewhere. This proposal enables a simpler model where foreach could iterate over arbitrary resource globs or collections within resource attributes, without introducing a new “generator” concept, by deferring the planning of the multiple resource instances until the collection has been computed.
• #2976 proposed the idea of allowing certain resources to be refreshed immediately, before they’ve been created, to allow them to exist during the initial plan. Partial planning reduces the need for this, but supporting pre-refreshed resources would still be valuable to skip an iteration just to, for example, look up a Consul key to configure a provider.
• #2896 talks about rolling updates to sets of resources. This is not directly supported by the above, since it requires human intervention to describe the updates that are required, but the UX of running multiple plan/apply cycles to converge could be used for rolling updates too.
• The cycles that result when mixing create_before_destroy with not, as documented in #2944, could get a better UX by adding some more cases where nodes are “deferred” such that the “destroy” node for the deposed resource can be deferred to a separate run from the “create” that deposed it.
• #1819 considers allowing the provider attribute on resources to be interpolated. It’s mainly concerned with interpolating from variables rather than resource attributes, but the partial plan idea allows interpolation to be supported more broadly without special exceptions like “only variables are allowed here”, and so it may become easier to implement interpolation of provider.
• #4084 requests “intermediate variables”, where computed values can be given a symbolic name that can then be used in multiple places within the configuration. One way to support this would be to allow variable defaults to be interpolated and mark the variables themselves as “deferred” when their values are computed, though certainly other implementatio…

jose.amengual avatar
jose.amengual

ahhhhhh right, sorry, in this case it is the arn of the resource aws_iam_policy, which is not a data resource

jose.amengual avatar
jose.amengual

ufta that could get messy

jose.amengual avatar
jose.amengual

maybe it’s a feature that depends_on triggers, but no matter what, it could get messy

Ikana avatar
│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/main.lambda/iam-role.tf line 77, in resource "aws_iam_role_policy_attachment" "custom":
│   77:   for_each   = local.enabled && length(var.custom_iam_policy_arns) > 0 ? var.custom_iam_policy_arns : toset([])
│     ├────────────────
│     │ local.enabled is true
│     │ var.custom_iam_policy_arns is set of string with 1 element
│ 
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the for_each depends on.
Ikana avatar

This is where I add the arn: custom_iam_policy_arns = [aws_iam_policy.exec.arn]

2022-04-21

Manoj Jadhav avatar
Manoj Jadhav

I am about to create a new VPC. I see that for subnets there are 3 subnet repos:

  1. Dynamic
  2. Named subnets
  3. Multi AZ

I do not want to use the dynamic subnets as we have planned the CIDR range in a different way; it’s not an equal CIDR distribution for all subnets. If I used named subnets, would I be able to spin up subnets in different AZs, or am I to use the Multi AZ module itself? What are the differences?
ikar avatar

Hey all :scientist: …as our infra slowly deprecates and each infra part was deployed with a different terraform version (oldest is 0.12.x) - is there any recommended way to manage multiple tf installations? Quickly went through tfenv (https://github.com/tfutils/tfenv) but this seems a bit more complicated than needed. Here’s my idea of how such a tool would work:
• when running terraform apply, the tool automatically selects which terraform version should be used (from tfstate) - ideally it would use the highest compatible version
• when running terraform init the newest available terraform is (installed and) used
• terraform auto-complete works and ideally all commands are run with terraform
Any recommendations, pls?

Stephen Tan avatar
Stephen Tan
Quick Start - TFSwitch

A command line tool to switch between different versions of terraform (install with homebrew and more)

Stephen Tan avatar
Stephen Tan

just run tfswitch in the tf dir and it’ll switch to the latest supported tf version in that dir
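tfswitch can pick the version up from the required_version constraint in that directory’s terraform block, e.g.:

terraform {
  # tfswitch downloads and switches to a release satisfying this constraint
  required_version = "~> 1.1.0"
}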

ikar avatar

that looks good! will try

ikar avatar

ok, good for a start! ideally would love to skip the tfswitch part, but will stick with this for now. Thanks!

Stephen Tan avatar
Stephen Tan

it’s the best thing I’ve used so far - all the others are clunky and don’t work so well. I got used to running tfswitch quite quickly. Ideally though, one would use Terraform Cloud or Atlantis, right?

Matt Gowie avatar
Matt Gowie

@ikar AFAIK, there is nothing that does what you describe. Tfenv/tfswitch/etc are what is available.

Are you using .terraform-version files with tfenv? If not, I would check that out.

My suggestion would be to standardize on a recent version and then upgrade all your root modules to that version. Otherwise you create too much mental burden for your team, who have to remember which version supports what.

ikar avatar

Hey @Matt Gowie, we had nothing, now we have tfswitch and we’ll see how that works. Keeping .terraform-version in my notes, thanks!

jedineeper avatar
jedineeper

i use asdf for this problem and it works great.

ikar avatar

oh, that must be why asdf sits in one of my TODO browser tabs

Bhavik Patel avatar
Bhavik Patel

Anyone know how to get the static_route ip address from a site to site VPN ?

Bhavik Patel avatar
Bhavik Patel

For future travelers: I was able to use the aws_route data source, and I found the corresponding route table id through the aws cli via

aws ec2 describe-route-tables
Tyler Jarjoura avatar
Tyler Jarjoura

Hey all, I have a question about module design and security groups. Let’s say I have an ASG of workers and a database. Each is managed by a terraform module, with its own security group. The worker ASG needs to have an ingress rule to access the database. Does it make more sense to:

  1. Pass the worker sg_id into the database module, and create the security group ingress rule there (see the sketch after this message)
  2. Pass the database sg_id into the worker module, and create the ingress rule there

I guess the pro of the second option is that everything the worker needs is encapsulated in one module. But I noticed that most third-party modules opt for the first approach (you pass in who needs to access your resources). Thoughts?
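A minimal sketch of option 1, assuming a database security group resource inside the database module (the port and variable name are hypothetical):

variable "worker_security_group_id" {
  type        = string
  description = "SG of the worker ASG that needs database access"
}

resource "aws_security_group_rule" "from_worker" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.database.id # assumed to exist in this module
  source_security_group_id = var.worker_security_group_id
}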
Bhavik Patel avatar
Bhavik Patel

Sounds like you aren’t using a VPC?

I personally would prefer the second option. It follows the same pattern as what database does my application connect to.

managedkaos avatar
managedkaos

I think it also depends on a sort of chicken-and-egg situation… are you building both modules in the same state or separately?

In my totally humble opinion, I prefer to keep databases in their own state, away from the state where the app/web/presentation layer is handled. that way, i can whack the app and not worry about stepping around the data layer.

So with that approach, I would know the database SG ID and would pass that into the ASG module as a variable… or if you have good tagging and/or naming, you can look up the database SG as a data source and use that to wire up your ASG SG to the DB SG.

Yet another approach I have used, create an SG that is common to all apps and use that as a “backplane” network… kinda like the “default” SG in an AWS default VPC. So any resource attached to the backplane SG can talk to any other resource attached to the backplane SG. Of course, that changes security requirements, but if your traffic is localized, this type of “openness” might not be a problem.

Tyler Jarjoura avatar
Tyler Jarjoura

@Bhavik Patel Why does it sound like I’m not using a VPC?

@managedkaos Ya we have everything in separate states.

Bhavik Patel avatar
Bhavik Patel


create an SG that is common to all apps and use that as a “backplane” network
It seems the “best” practice is to create a security group for each resource. It’s what we’ve adopted at our company, and I kind of wish we hadn’t. Having generic security groups is a lot easier to maintain in some cases

managedkaos avatar
managedkaos

@Bhavik Patel you can do both. and yes, i do create an app/resource specific SG if i am using the default SG approach. the default SG is there for cases that can benefit from it. however, in most cases, the DBs and other data sources have their own, dedicated SG with specific rules.

Bhavik Patel avatar
Bhavik Patel

@Tyler Jarjoura I’ve just seen some companies set their security groups to accept all traffic within their VPC instead of the single resource they are using

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’ve been looking at all the subnet modules available. It seems to me that they all only support a “private” and “public” subnet… am I missing a more multi-tier setup? The network I’m working on really doesn’t use “public” subnets except in 1 VPC, and everything else is either “private” or “data”

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t think you’ll see too many modules supporting multiple different private subnet tiers – I don’t think it’s a very common pattern in cloud to have multiple networks like this. It’s a bit of a DC-anachronism

Alex Jurkiewicz avatar
Alex Jurkiewicz

simplest approach is probably to use the cloudposse named subnets module with a for_each containing explicit configuration for each subnet

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

what I’m dealing with here is most VPCs will have ‘private’ and ‘data’ subnets as there is no use of routable IP space as there are no NGW or IGW except in the security VPC that has the firewall. All VPCs are connected to a TGW with a route table allowing them to reach the security gateway in a hub/spoke model. Existing setup was done manually so we’re not trying to redo it with Terraform

Alex Jurkiewicz avatar
Alex Jurkiewicz

If the infra already exists, might be simplest to use terraformer to convert your existing infra into a 1:1 mapping of hardcoded resources

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

The existing setup was done manually, and by many hands at this point. We’re getting to deploy into a fresh region, giving us the chance to work it all out in a consistent manner. After it’s deployed and fully tested in the new region, we’ll be able to fail over to the new region and re-dress the old region.

Soren Jensen avatar
Soren Jensen

Can this module log to itself? https://github.com/cloudposse/terraform-aws-s3-log-storage I tried to set:

access_log_bucket_name   = random_pet.s3_logging_bucket_name.id
access_log_bucket_prefix = "${random_pet.s3_logging_bucket_name.id}/"

But getting an error that the bucket doesn’t exist

cloudposse/terraform-aws-s3-log-storage

This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

Elleval avatar
Elleval

Hello All, probably a daft question but I wondered why this module is on the init branch https://github.com/cloudposse/terraform-aws-sqs-queue. The master branch is an example module.

cloudposse/terraform-aws-sqs-queue
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah, I believe we didn’t finish it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what

• Initialize sqs-queue module
• Used list(string) instead of string due to new conventions seen in security-group module (see ref below)

why

• Wrap sqs queue with context to take advantage of name, namespace, environment, attributes, enabled, etc

references

https://github.com/cloudposse/terraform-aws-security-group/releases/tag/0.4.0

commands

$ make readme
$ cd test/src
$ make all
--- PASS: TestExamplesComplete (31.26s)
PASS
ok  	github.com/cloudposse/terraform-aws-sqs-queue	31.444s

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ let’s finish this one off

RB avatar

the pr is ready to be merged i think

RB avatar

last time we left off on it was whether or not to use the single item list inputs like in the sg module

Elleval avatar
Elleval
  redrive_policy                    = try(var.redrive_policy[0], null)
Elleval avatar
Elleval

Sorry, ignore my last 2 comments!

Elleval avatar
Elleval

What’s the correct format for redrive_policy ?

RB avatar

it’s a single item list(string) input

RB avatar

so you’ll have to input the value like this

module "sqs" {
  redrive_policy = [jsonencode({
    deadLetterTargetArn = aws_sqs_queue.terraform_queue_deadletter.arn
    maxReceiveCount     = 4
  })]
}
Elleval avatar
Elleval

Awesome, thanks.

2022-04-22

Grummfy avatar
Grummfy

hello, are there any tools that scan multiple terraform states and make a list of resources not created by terraform but that exist in the infrastructure (AWS in my case)?

RB avatar

if you tag all your terraform resources and your non-terraformed resources differently, you can use the awscli to retrieve the non-terraformed resources.

https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think what you are looking for is https://driftctl.com/

Catch Infrastructure Drift

driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.

Grummfy avatar
Grummfy

thanks

loren avatar

very cool enhancement coming in a future version of terraform… the ability to force a replacing update on a resource when there are changes in resources it depends on… https://github.com/hashicorp/terraform/pull/30900

Add a replace_triggered_by option in the lifecycle meta block for managed resources, which can be used to couple the lifecycle of resources which may need to be replaced based on conditions in the configuration. References to other managed resources can be used to force replacement of the containing resource, acting similarly to triggers in the ubiquitous null_resource, a manual taint, or a plan -replace operation.

The replace_triggered_by argument is a list of pseudo-expressions which refer to other managed resources. The syntax is limited to resources, resource instances, and attributes of a resource instance. The only variable values allowed in the expression are count.index and each.key, which can be used to tie individual instances of expanded resources together.

The references in the argument list are not evaluated as we typically do within Terraform, rather they are used to lookup changes in the plan. If any of the references do correspond to a change, the containing resource will be planned to be replaced.

The steps for determining if a replace_triggered_by is going to force replacement are as follows:

• Each expression provided is first split into the resource address and remaining attribute traversal. The address is used to locate the change (or changes in the case of multiple resource instances) in the plan.
• If the reference is to a whole resource with multiple instances, any Update, DeleteThenCreate, or CreateThenDelete action on any instance will force replacement.
• If the reference is to a resource instance with no remaining attribute traversal, a change action of Update, DeleteThenCreate, or CreateThenDelete will force replacement.
• If there is a remaining attribute traversal and the change action is Update, DeleteThenCreate, or CreateThenDelete, then the before and after cty values of the change will be compared for equality. Unknown After values are considered a change in value for this purpose as well.

TODO: This initial implementation uses the existing mechanism in place for forced replacement (-replace=addr) to trigger the correct plan. The diff output will show that replace_triggered_by is the reason for replacement, but we may want to make a new mechanism for handing these replacements and a more specific message about the change in the diff output.

TODO: Cut a release of hcl before the RC.

Docs will be added in a separate PR.

Closes #8099
Closes #11418
Closes #22572
Closes #30210

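A minimal sketch of the new lifecycle option (Terraform 1.2+; the resource names are hypothetical):

resource "aws_key_pair" "app" {
  key_name   = "app"
  public_key = file("~/.ssh/id_rsa.pub")
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  key_name      = aws_key_pair.app.key_name

  lifecycle {
    # Plan a replacement of this instance whenever the key pair resource changes
    replace_triggered_by = [aws_key_pair.app]
  }
}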

2022-04-23

2022-04-24

jeannich avatar
jeannich

Hi,

When using utils_deep_merge_yaml, do you know if there is a way to show the differences when the yaml files’ content changes :grey_question: I’d like to see the differences during a terraform plan, not after the apply.

Currently what I do is I output the merged yaml result into a file:

data "utils_deep_merge_yaml" "merged_config" {
  input = local.configs_files_content
}
resource "local_file" "merged_yaml_files" {
    content  = data.utils_deep_merge_yaml.merged_config.output
    filename = "merged_config.yaml"
}

It works great when the local file is kept between executions; terraform does show the yaml differences when there are any. But my run environment regularly deletes the file “merged_config.yaml”, so what I often end up with is this output:

  # local_file.merged_yaml_files has been deleted
  - resource "local_file" "merged_yaml_files" {
      - content              = <<-EOT
            foo:
              bar:
                key1: 1
                key2: 2

[.........]

  # local_file.merged_yaml_files will be created
  + resource "local_file" "merged_yaml_files" {
      + content              = <<-EOT
            foo:
              bar:
                key1: 1
                key2: 2

Is there any way to keep the merged yaml content in a terraform resource that does not output its content to a local file? I looked for a terraform plugin that could do that but could not find any.

Thanks for your suggestions !

RB avatar

couldn’t you use an output instead of a file?

jeannich avatar
jeannich

Correct me if I’m wrong but output is only visible after an apply I think. And I’d like to view the diff at plan time.

RB avatar

it should show the diff during plan time in terraform 1.x
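i.e., a sketch against the config above; in 1.x, terraform plan renders a “Changes to Outputs” diff for this:

output "merged_config" {
  value = data.utils_deep_merge_yaml.merged_config.output
}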

2022-04-25

bhavin vyas avatar
bhavin vyas

Hi team, I am new to terraform and trying to create a simple Security Group module using the below main.tf:

resource "aws_security_group" "ec2_VDS_securityGroup" {
  for_each    = var.security_groups
  name        = each.value.name
  description = each.value.description
  vpc_id      = var.aws_vpc_TRVPC_id

  dynamic "ingress" {
    for_each = each.value.ingress
    content {
      from_port       = ingress.value.from
      to_port         = ingress.value.to
      protocol        = ingress.value.protocol
      prefix_list_ids = var.aws_zpa_prefix_id
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

bhavin vyas avatar
bhavin vyas

The challenge I am facing is how to get the output of this security group’s ID and use it in the parent module as a reference for EC2?

Grummfy avatar
Grummfy

hello, look at https://www.terraform.io/language/values/outputs

output "some_id" {
  value = aws_security_group.ec2_VDS_securityGroup.id
}

and in the code that references the module you will use module.YOUR-MODULE-NAME.some_id

look at an existing module like https://github.com/cloudposse/terraform-aws-eks-cluster and you will see the outputs.tf (the name is not mandatory, it’s just easier)

Output Values - Configuration Language | Terraform by HashiCorp

Output values are the return values of a Terraform module.

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster
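Note: since the resource above uses for_each, a bare .id reference won’t resolve; a hedged sketch of a map-valued output instead:

output "securitygroup_id" {
  description = "Map of security group key => security group ID"
  value       = { for k, sg in aws_security_group.ec2_VDS_securityGroup : k => sg.id }
}

The parent can then reference a specific group as module.YOUR-MODULE-NAME.securitygroup_id["private"].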

2022-04-26

bhavin vyas avatar
bhavin vyas

Hi Grummfy, thanks for your help. It seems I am getting the below message while updating the terraform security module details in the EC2 instance.

bhavin vyas avatar
bhavin vyas
Matt Gowie avatar
Matt Gowie

Try wrapping it in an array. Something like the following —

instance_security_group_ids = [ module.security_group.securitygroup_id["private"] ] 
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here critical update - git.io deprecation 2022-04-29 means that build-harness links may stop working.

https://github.com/cloudposse/build-harness/issues/314

Elias Andres Machado Molleja avatar
Elias Andres Machado Molleja

Good afternoon!!! I need to ask you something about an issue I get with the cloudposse node-group module terraform-aws-eks-node-group, version “0.27.3”. I’m trying to create one with the option cluster_autoscaler_enabled set to false, but the ASG still has the tags with CAS enabled set to true (attach 1).

Here is the POC code we created (attach 2), based on the example in the README.

Could this issue be a tag problem? I’m new to using cloudposse modules. Could you help me with it?

loren avatar

New issue aimed at discussing options to resolve these annoying errors:

│ Error: Invalid for_each argument

and

│ Error: Invalid count argument

Very much worth reading and understanding! https://github.com/hashicorp/terraform/issues/30937

The idea of “unknown values” is a crucial part of how Terraform implements planning as a separate step from applying.

An unknown value is a placeholder for a value that Terraform (most often, a Terraform provider) cannot know until the apply step. Unknown values allow Terraform to still keep track of type information where possible, even if the exact values aren’t known, and allow Terraform to be explicit in its proposed plan output about which values it can predict and which values it cannot.

Internally, Terraform performs checks to ensure that the final arguments for a resource instance at the apply step conform to the arguments previously shown in the plan: known values must remain exactly equal, while unknown values must be replaced by known values matching the unknown value’s type constraint. Through this mechanism, Terraform aims to promise that the apply phase will use the same settings as were used during planning, or Terraform will return an error explaining that it could not.
(For a longer and deeper overview of what unknown values represent and how Terraform treats them, see my blog post _Unknown Values: The Secret to Terraform Plan_.)

The design goal for unknown values is that Terraform should always be able to produce some sort of plan, even if parts of it are not yet known, and then it’s up to the user to review the plan and decide either to accept the risk that the unknown values might not be what’s expected, or to apply changes from a smaller part of the configuration (e.g. using -target) in order to learn more final values and thus produce a plan with fewer unknowns.

However, Terraform currently falls short of that goal in a couple different situations:

• The Terraform language runtime does not allow an unknown value to be assigned to either of the two resource repetition meta-arguments, count and for_each.

In that situation, Terraform cannot even predict how many instances of a resource are being declared, and it isn't clear how exactly Terraform should explain that degenerate situation in a plan and so currently Terraform gives up and returns an error:
    │ Error: Invalid for_each argument
    │
    │ ...
    │
    │ The "for_each" value depends on resource attributes that cannot
    │ be determined until apply, so Terraform cannot predict how many
    │ instances will be created. To work around this, use the -target
    │ argument to first apply only the resources that the for_each
    │ depends on.
    
    
    │ Error: Invalid count argument
    │
    │ ...
    │
    │ The "count" value depends on resource attributes that cannot be
    │ determined until apply, so Terraform cannot predict how many
    │ instances will be created. To work around this, use the -target
    │ argument to first apply only the resources that the count depends
    │ on.
    
    

• If any unknown values appear in a provider block for configuring a provider, Terraform will pass those unknown values to the provider’s “Configure” function.

Although Terraform Core handles this in an arguably-reasonable way, we've never defined how exactly a provider ought to react to crucial arguments being unknown, and so existing providers tend to fail or behave strangely in that situation.

For example, some providers (due to quirks of the old Terraform SDK) end up treating an unknown value the same as an unset value, causing the provider to try to connect to somewhere weird like a port on localhost.

Providers built using the modern Provider Framework don't run into that particular malfunction, but it still isn't really clear what a provider ought to do when a crucial argument is unknown and so e.g. the AWS Cloud Control provider -- a flagship use of the new framework -- reacts to unknown provider arguments by returning an error, causing a similar effect as we see for `count` and `for_each` above.

Although the underlying causes for the errors in these two cases are different, they both lead to a similar problem: planning is blocked entirely by the resulting error and the user has to manually puzzle out how to either change the configuration to avoid the unknown values appearing in “the wrong places”, or alternatively puzzle out what exactly to pass to -target to select a suitable subset of the configuration to cause the problematic values to be known in a subsequent untargeted plan.

Terraform should ideally treat unknown values in these locations in a similar way as it does elsewhere: it should successfully produce a plan which describes what’s certain and is explicit about what isn’t known yet. The user can then review that plan and decide whether to proceed.

Ideally in each situation where an unknown value appears there should be some clear feedback on what unknown value source it was originally derived from, so that in situations where the user doesn’t feel comfortable proceeding without further information they can more easily determine how to use -target (or some other similar capability yet to be designed) to deal with only a subset of resources at first and thus create a more complete subsequent plan.


This issue is intended as a statement of a problem to be solved and not as a particular proposed solution to that problem. However, there are some specific questions for us to consider on the path to designing a solution:

• Is it acceptable for Terraform to produce a plan which can’t even say how many instances of a particular resource will be created?

That's a line we've been loathe to cross so far because the difference between a couple instances and tens of instances can be quite an expensive bill, but the same could be said for other values that Terraform is okay with leaving unknown in the plan output, such as the "desired count" of an EC2 autoscaling group. Maybe it's okay as long as Terraform is explicit about it in the plan output?

• Conversely, is it acceptable for Terraform to _automatically_ produce a plan which explicitly covers only a subset of the configuration, leaving the user to run `terraform apply` again to pick up where it left off?

This was the essence of the earlier proposal [#4149](https://github.com/hashicorp/terraform/issues/4149), which is now closed due to its age and decreasing relevance to modern Terraform. That proposal made the observation that, since we currently suggest folks work around unknown value errors by using `-target`, Terraform could effectively synthesize its own `-target` settings to carve out the maximum possible set of actions that can be taken without tripping over the two problematic situations above.

• Should providers (probably with some help from the Plugin Framework) be permitted to return an _entirely-unknown_ response to the `ReadResource`, `ReadDataSource`, and `PlanResourceChange` operations for situations where the provider isn't configured completely enough to even _attempt_ these operations?

These are the three operations that Terraform needs to be able to ask a partially-configured provider to perform. If we allow a provider to signal that it isn't configured enough to even try at those, what should Terraform Core do in response?

• We most frequently encounter large numbers of unknown values when planning the initial creation of a configuration, when nothing at all exists yet. That is definitely the most common scenario where these problems arise, but a provider can potentially return unknown values even as part of an in-place update if that is the best representation of the remote API's behavior -- for example, perhaps one of the output attributes is derived from an updated argument in a way that the provider cannot predict or simulate.

Do we need to take any extra care to deal…
Matt Gowie avatar
Matt Gowie

Ah that’d be a much appreciated fix! There are a lot of issues / workarounds in the CP modules for those errors.


2022-04-27

Tyler Jarjoura avatar
Tyler Jarjoura

Hey everybody, why is “name_prefix” used instead of just “name” in this module for certain resources (parameter group, option group)? https://github.com/cloudposse/terraform-aws-rds

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances

RB avatar

this resource doesn’t have an argument for name prefix

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance

but the others do


Tyler Jarjoura avatar
Tyler Jarjoura

Right. Is there some kind of policy to use name_prefix? It seems like name would be the more useful / human-friendly choice

RB avatar

maybe to prevent conflicts?
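One plausible reason (an assumption; the thread doesn’t confirm it): name_prefix lets resources like parameter groups be replaced with create_before_destroy, because the replacement gets a freshly generated name and never collides with the one it replaces. A hedged sketch of the pattern:

resource "aws_db_parameter_group" "example" {
  # AWS appends a unique suffix to name_prefix, so the replacement group
  # created by create_before_destroy never collides with the old name
  name_prefix = "rds-example-"
  family      = "postgres13"

  lifecycle {
    create_before_destroy = true
  }
}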

Release notes from terraform avatar
Release notes from terraform
07:33:13 PM

v1.2.0-beta1 1.2.0 (Unreleased) UPGRADE NOTES:

The official Linux packages for the v1.2 series now require Linux kernel version 2.6.32 or later.

When making outgoing HTTPS or other TLS connections as a client, Terraform now requires the server to support TLS v1.2. TLS v1.0 and v1.1 are no longer supported. Any safely up-to-date server should support TLS 1.2, and mainstream web browsers have required it since 2020.

When making outgoing HTTPS or other TLS connections as a client, Terraform will no…

mrwacky avatar
mrwacky

Did Hashicorp make it easier to update to v4.x AWS provider? I know they totally changed how S3 was managed, and then we the people collectively flipped our lids. Did they make it less painful?

RB avatar

yes they backported the new s3 resources with deprecation warnings

mrwacky avatar
mrwacky

ahh nice, so i can just blindly proceed to 4.x and worry about it at some future point? :D

RB avatar

probably best to use the latest if you’re upgrading
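For reference, a minimal sketch of opting into the v4 series (version constraint illustrative):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Later v4 releases relaxed the S3 bucket argument changes to
      # deprecation warnings (per the discussion above), easing migration
      version = "~> 4.0"
    }
  }
}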

2022-04-28

Roman Orlovskiy avatar
Roman Orlovskiy

Hi all. First of all, wanted to thank the Cloud Posse team for their amazing work! Really appreciate what you are doing for the community.

As for my question, do I understand correctly that the iam-primary-roles and iam-delegated-roles components from https://github.com/cloudposse/terraform-aws-components/tree/master/modules are not required when using AWS SSO (https://github.com/cloudposse/terraform-aws-sso)? Or is there still a need for them in some cases? And what would those cases be? Thanks in advance

cloudposse/terraform-aws-sso
z0rc3r avatar

How does Cloud Posse configure Renovate Bot, so it automatically updates terraform module versions in generated docs? Like here https://github.com/cloudposse/terraform-aws-eks-cluster/commit/22ab0dd1271d272b134b62682d275a73e07dc0fd

z0rc3r avatar

Oh I see, it’s another bot that updates documentation.

Elleval avatar
Elleval

Hey guys, I’m looping through a module to create multiple resources and getting this error: The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created.

Elleval avatar
Elleval
Full error: Error: Invalid count argument
│ 
│   on .terraform/modules/ecs_service_task_services_queue/main.tf line 225, in data "aws_iam_policy_document" "ecs_task_exec":
│  225:   count = local.create_exec_role ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first
│ apply only the resources that the count depends on.
Elleval avatar
Elleval
The code: module "ecs_service_task_services_queue" {
  source   = "github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=0.64.0"
  for_each = var.my_queue_task_def

  container_definition_json = jsonencode([
    module.my_queue_task_def["${each.key}"].json_map_object
  ])

  context            = module.sw-services-ecs-queue-label
  launch_type        = "FARGATE"
  ecs_cluster_arn    = aws_ecs_cluster.sw-services.arn
  task_exec_role_arn = module.role_ecs_task_exec.arn
  task_role_arn      = module.role_ecs_task
  vpc_id             = data.aws_vpc.selected.id
}
Elleval avatar
Elleval
variable "my_queue_task_def" {
  type = map(any)
  default = {
    "my-queue" = {
      container_image              = "nginx:latest"
      container_memory             = null
      container_memory_reservation = null

    }
loren avatar

i don’t think the problem is your for_each, it’s in how the module you are calling is determining the value of its internal local: local.create_exec_role

Elleval avatar
Elleval

Hi Loren, thanks. Do you know why it’s happening and if there is a workaround? Trying to keep it DRY

loren avatar

without looking, i am guessing it is related to:

  task_exec_role_arn = module.role_ecs_task_exec.arn

and it is testing var.task_exec_role_arn != null or something to determine whether or not to create the data source “aws_iam_policy_document” “ecs_task_exec”
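Without checking the actual module source, a hypothetical reconstruction of the internals loren is describing, showing why the count blows up (variable and local names are guesses, not the module’s real code):

variable "task_exec_role_arn" {
  type    = string
  default = null
}

locals {
  # Create the exec role (and its policy document) only when the caller
  # didn't supply one
  create_exec_role = var.task_exec_role_arn == null
}

data "aws_iam_policy_document" "ecs_task_exec" {
  # If var.task_exec_role_arn is unknown at plan time (e.g. the .arn of a
  # role that doesn't exist yet), the comparison above is also unknown,
  # so Terraform cannot resolve this count and raises
  # "Invalid count argument"
  count = local.create_exec_role ? 1 : 0

  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}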

loren avatar

you’ll have to look at the module source to see exactly how it is determining the value of that local

loren avatar

but once you know which variable it is, presuming it is an arn, you can construct the arn so it is completely known at plan time, instead of passing the attribute/output from the module that creates it

loren avatar

e.g. "arn:aws:iam:...:${role_name}"
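Concretely, a hedged sketch of that workaround (the account ID and role name variables are hypothetical inputs):

variable "account_id" {
  type = string
}

variable "exec_role_name" {
  type = string
}

locals {
  # Built entirely from values known at plan time, unlike
  # module.role_ecs_task_exec.arn, which is unknown until apply
  task_exec_role_arn = "arn:aws:iam::${var.account_id}:role/${var.exec_role_name}"
}

Passing local.task_exec_role_arn to the ECS module keeps its internal count resolvable during the plan.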

Elleval avatar
Elleval

Thanks again, I’ll take a look.

loren avatar

see also this issue for more context: https://github.com/hashicorp/terraform/issues/30937

Elleval avatar
Elleval

awesome, cheers.

Joe Perez avatar
Joe Perez

Hello All! I just wanted to share my terraform blog with everyone. Here is my latest post on using 1password CLI with Terraform: https://www.taccoform.com/posts/tfg_p5/

Securing Terraform Credentials With 1Password

Overview One of the first things you learn with Terraform is that you need a way to authenticate and how to pass those credentials to Terraform. You can use environment variables, the -var flag, or use a .tfvars file to pass sensitive information from you to the provider. These methods create a security gap because anyone with access to your computer can see the secrets. Against our best judgement, we sometimes store these credentials in our dotfiles, exchanging security for convenience.

Michael Galey avatar
Michael Galey

This is very interesting. I understand the setup now for a few individual secrets. Do you have any sense on how well it’d handle setting 100 or so secrets at once? e.g. to build an application’s env file. I use an ansible-vault at the moment.


Joe Perez avatar
Joe Perez

I think there’s support for building out an env file, let me try to look for that article

Joe Perez avatar
Joe Perez

I haven’t used it for more than a couple variables thus far

Joe Perez avatar
Joe Perez
Load Secrets Into Config Files | 1Password Developer Documentation

Learn how to use 1Password CLI to load secrets into config files without putting any plaintext secrets in code.

Michael Galey avatar
Michael Galey

ok interesting, so it’s reducing the calls by storing them all within one item. That implies to me it’d still be slow to pull in many secrets at once. Still cool

Joe Perez avatar
Joe Perez

I wonder if you can store the whole config in one secret, pull it once and then place it. Of course that’s less flexible than having individual items in a vault

David avatar

Hi Folks. Our deployment servers don’t have access to internet directly, only allowed out via a proxy which doesn’t have github whitelisted. We mirror repositories in our source control system. We tried to use the parameter store module which wants to clone the null label module directly from Github, but I cannot see a workaround currently. Am I missing something?

21:42:00  ╷
21:42:00  │ Error: Failed to download module
21:42:00  │ 
21:42:00  │ Could not download module "this" (context.tf:23) source code from
21:42:00  │ "git::<https://github.com/cloudposse/terraform-null-label?ref=0.25.0>": error
21:42:00  │ downloading
21:42:00  │ '<https://github.com/cloudposse/terraform-null-label?ref=0.25.0>':
21:42:00  │ /usr/bin/git exited with 128: Cloning into '.terraform/modules/this'...
21:42:00  │ fatal: unable to access
21:42:00  │ '<https://github.com/cloudposse/terraform-null-label/>': Received HTTP code
21:42:00  │ 403 from proxy after CONNECT
21:42:00  │ .
21:42:00  ╵
loren avatar

I use cloudposse modules pretty much always, but I’m working with a client that does not allow cloning git repos from outside the org. How can I package a module and all of its dependencies? Is there such a tool? Because, as you know, if I clone the ecs module I will have to clone like 30 dependencies manually, and I do not want to

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See the last few messages in that thread

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s now a proxy for the registry

loren avatar

need two tricks… a proxy for registry sources, and also gitconfig insteadOf for git sources

David avatar

Thanks, was hoping for an easier solution

loren avatar

no internet access === more complexity and $$$

David avatar

I’ve worked around this using insteadOf for the terraform-null-label module

git config --global url.git@your-git-server:path/to/registry/modules.insteadOf <https://github.com/cloudposse> 
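With that rewrite in place, module sources don’t need to change at all; for example, this source (taken from the error above) is cloned from the internal mirror transparently, assuming the mirror keeps the same repo name:

module "this" {
  # git applies the insteadOf rewrite before cloning, so this resolves to
  # git@your-git-server:path/to/registry/modules/terraform-null-label
  source = "git::https://github.com/cloudposse/terraform-null-label?ref=0.25.0"
}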
Elleval avatar
Elleval

Hey folks, I’m getting errors when running the terraform-aws-iam-role module.

Elleval avatar
Elleval

When running the example:

Elleval avatar
Elleval
Error: failed creating IAM Role (eg-prod-app): MalformedPolicyDocument: Could not parse the policy: Statement is not well formatted.
status code: 400, request id: 22c6c397-83a3-4498-b1fd-343b01b862dd
 
with module.role.aws_iam_role.default[0],
on .terraform/modules/role/main.tf line 29, in resource "aws_iam_role" "default":
29: resource "aws_iam_role" "default" {
Alex Jurkiewicz avatar
Alex Jurkiewicz

sounds like you are passing in an invalid policy document

Elleval avatar
Elleval

Hey Alex, I get it when running a really simple example too:

Elleval avatar
Elleval
module "role_ecs_task_exec" {
  source              = "github.com/cloudposse/terraform-aws-iam-role.git?ref=0.16.2"
  role_description = "test"
  name = "test"
  enabled             = true
}
Elleval avatar
Elleval
 Error: failed creating IAM Role (test): MalformedPolicyDocument: Could not parse the policy: Statement is not well formatted.
        status code: 400, request id: 4533990b-2ffd-41b8-bebc-1a2972bfc2d7
  
   with module.role_ecs_task_exec.aws_iam_role.default[0],
   on .terraform/modules/role_ecs_task_exec/main.tf line 29, in resource "aws_iam_role" "default":
   29: resource "aws_iam_role" "default" {

 Error: error creating IAM Policy test: MalformedPolicyDocument: Syntax errors in policy.
        status code: 400, request id: 87499c42-00da-4cf7-81bf-8ca5b6ea0366
  
   with module.role_ecs_task_exec.aws_iam_policy.default[0],
   on .terraform/modules/role_ecs_task_exec/main.tf line 45, in resource "aws_iam_policy" "default":
   45: resource "aws_iam_policy" "default" {
Elleval avatar
Elleval

In the simple example, no policies are being passed to the role and it errors. Should the simple example work?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I read the code, and it looks like you probably need to specify at least one trusted principal
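If that’s right (worth verifying against the module’s variables.tf), the simple example presumably needs a principals map, something like:

module "role_ecs_task_exec" {
  source           = "github.com/cloudposse/terraform-aws-iam-role.git?ref=0.16.2"
  name             = "test"
  role_description = "test"
  enabled          = true

  # Trust policy: who is allowed to assume the role. The variable name is
  # taken from the module's README; verify against the source
  principals = {
    Service = ["ecs-tasks.amazonaws.com"]
  }
}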

Elleval avatar
Elleval

I’ll check and confirm. Thanks.

Alex Jurkiewicz avatar
Alex Jurkiewicz

read the source luke

Elleval avatar
Elleval

Hi, is there an ECS task module, which supports the creation of a ECS/Fargate scheduled task? I had a look but couldn’t see one.

Matt Gowie avatar
Matt Gowie

There isn’t a cloud posse module AFAIK. But there are others out there that I’ve seen before. I’d Google it.

Elleval avatar
Elleval

Cheers, Matt. I’ll take a look.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Did Cloudposse invent the null-label module idea, or did it exist beforehand? Just curious about where the idea came from

joshmyers avatar
joshmyers

Invented @ CloudPosse AFAIK

joshmyers avatar
joshmyers

Seen lots of other orgs use their own version since.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, we were the first ones to create any sort of label module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the pattern has now caught on

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re also discussing internally how to generalize it so we can support any labels

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, we invented the context pattern for terraform

Alex Jurkiewicz avatar
Alex Jurkiewicz

that’s awesome!

2022-04-29

awl avatar

Anyone know how I can escape double quotes in an output statement but make them literal when the output is shown? For example:

output "quote_test" {
  value = "this is \"awesome\""
}

gives me this output:

quote_test = "this is \"awesome\""

I want it to show me:

quote_test = "this is "awesome""
Alex Jurkiewicz avatar
Alex Jurkiewicz

you can’t

Alex Jurkiewicz avatar
Alex Jurkiewicz

the output is in HCL format. You can view outputs in JSON format if you want, then you could easily parse and re-display in your preferred format

Joe Perez avatar
Joe Perez

not sure if this helps, but running terraform output with the -raw option might do what you want:

output.tf

output "text" {
    value = "awesome"
}

bash/zsh

$ export TEXT=$(terraform output -raw text)
$ echo "this is \"${TEXT}\""
this is "awesome"
Pablo Silveira avatar
Pablo Silveira

Hello mates, how are you? Is there a possibility to call this module https://github.com/cloudposse/terraform-aws-vpn-connection without the creation of aws_customer_gateway.default? Can it be sent as a parameter?

cloudposse/terraform-aws-vpn-connection

Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network
