#terraform (2022-03)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-03-01

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know why when i cut a new release for my module, the terraform registry does not update to reflect it?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have had this problem too

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Usually something wrong with the webhooks for the repo. We end up having to manually trigger a refresh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt has tried to ask HashiCorp for support, but I believe there's been no response

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yeh that is super frustrating

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i have re-synced and the registry is updated, but now atlantis reckons it can’t find the version of the module

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it seems we get a {"errors":["payload signature check failed"]} on the repos webhook

Alex Jurkiewicz avatar
Alex Jurkiewicz

I really wish CloudPosse modules would all bump to 1.0. The lack of semver is really annoying when upgrading. I have to carefully read each release’s changelog and try to determine if a change is backwards-compatible myself. For instance, this changelog (picking on myself): v0.29.0

Only specify ttl block if ttl_enabled is true @alexjurkiewicz (#95)
Is this backwards-compatible from 0.28.0? Who knows! I have to either read the PR’s diff or upgrade and check the plan.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are talking about it

loren avatar

i once read that if something is used for critical/production use cases, then it is 1.0 regardless of whether anyone thinks it is ready for that, and it should be versioned accordingly. that got me over all my hesitance. but i also make no promises about long-term support for any given version. if a change is backwards incompatible, no matter how minor, that gets a major version bump.

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, I know some people don’t like big major version numbers for aesthetic reasons; they want to get the design right and release a v1 which never increments. But Terraform and the AWS provider make that impossible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so right now, I believe we need to do it, it’s more about how to do it “at scale”. I hate that expression, but it really makes sense. Most companies doing releases have less than a handful to worry about. We have hundreds, more than any release manager can keep track of - but let’s discuss.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looking at major projects, e.g. kubernetes, they create a branch release-x.y for every release, so that patches can be made. Istio, et al follow this exact convention.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then look at Terraform and all their providers. They don’t do this! The branch approach seems like huge mental overhead, but I’m nonetheless intrigued why they don’t.

loren avatar

i believe terraform does it also… e.g. https://github.com/hashicorp/terraform/tree/v1.1

loren avatar

which makes sense if you intend to patch older versions. though i guess you don’t have to persist the branch… you can always recreate it from a tag

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah. I don’t think you need to change the support timeline after bumping to 1.0. Everything about CloudPosse modules is really great at the moment, all I care about is the lack of semantic version numbers

Alex Jurkiewicz avatar
Alex Jurkiewicz

I see the current auto-release logic relies on adding a label to each PR to define if it’s a major, minor, or patch release

Alex Jurkiewicz avatar
Alex Jurkiewicz

You mentioned recently you don’t like chatops. One approach would be to change the merge process from “click the green merge button” to “apply the ‘merge’ label and one of ‘major/minor/patch’ labels, and github actions will process the PR merge”

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m happy to prototype this in a new repo if you’d like to try it

loren avatar

mergify can implement that workflow also, e.g. “merge if label exists and required jobs pass”
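A sketch of what that rule could look like in a .mergify.yml (the check name here is hypothetical):

pull_request_rules:
  - name: merge when labeled and required checks pass
    conditions:
      - label=merge
      - check-success=test
    actions:
      merge:
        method: squash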

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


You mentioned recently you don’t like chatops. One approach would be to change the merge process from “click the green merge button” to “apply the ‘merge’ label and one of ‘major/minor/patch’ labels, and github actions will process the PR merge”
That’s an interesting idea.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Dylan @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I don’t see the advantage there. Currently the release defaults to a minor version increment unless otherwise labeled. I do not like giving Mergify the permission to do the merges because it increases the risk of malware or inadvertent breaking changes being released.

(We already have this problem when a dependent module has a breaking change and Renovate updates the module to use the new version and Mergify auto-approves and merges it. This problem will be resolved when all our modules go past major version zero, as we will prohibit automatic updates of major version updates, but for the near future this will remain a problem.)

@Erik Osterman (Cloud Posse) I think if we can get comfortable with eventually having version 126.0.0 we can switch to full SemVer with the next breaking release and otherwise not bother with ongoing support of earlier versions. In practice we rarely make breaking changes, so the major versions should not increment that fast, and we rarely update old versions, instead forcing people to accept the breaking change if they want new features. It’s not the best customer experience, but it is at the limit of what Cloud Posse can do for free, so all that would really change is that we would be making our level of support more explicit.

If we found we wanted/had to update an old version, say to patch a security issue, we could create a branch at that time. I expect that will be a rare occurrence, at least until Terraform v2 comes out.


2022-03-02

mfridh avatar

hmm… so the https://github.com/cloudposse/terraform-yaml-config module … I have a question on the “variability” of list entries …

cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps

mfridh avatar

main.tf:

module "yaml_config" {
  source = "cloudposse/config/yaml"
  map_config_local_base_path = "./config"

  map_config_paths = [
    "regsync.yaml"
  ]

  context = module.this.context
}

./config/regsync.yaml:

sync:
  - source: consul
    target: docker.local/mirror/consul
    type: repository
    tags:
      allow:
        - "latest"
        - "1\\.9.*"

  - source: cr.l5d.io/linkerd/grafana
    target: docker.local/mirror/linkerd/grafana
    type: repository
    tags:
      allow:
        - "stable-2\\.10\\..*"

  - source: tricksterproxy/trickster
    target: docker.local/mirror/tricksterproxy/trickster
    type: repository

That last entry in the sync list, which doesn’t have a tags key, fails the deep merge…

│ Error: Invalid function argument
│ 
│   on .terraform/modules/yaml_config/modules/deepmerge/depth.tf line 43, in locals:
│   43:           for key in keys(item["value"]) :
│     ├────────────────
│     │ item["value"] is tuple with 12 elements
│ 
│ Invalid value for "inputMap" parameter: must have map or object type.
╵
╷
│ Error: Invalid function argument
│ 
│   on .terraform/modules/yaml_config/modules/deepmerge/depth.tf line 55, in locals:
│   55:           for key in keys(item["value"]) :
│     ├────────────────
│     │ item["value"] is tuple with 12 elements
│ 
│ Invalid value for "inputMap" parameter: must have map or object type.
╵
Releasing state lock. This may take a few moments...
cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps

mfridh avatar

If I add a tags: {} on that last one it is fine.

mfridh avatar

I should maybe just use file() + yamldecode() as I don’t really utilize the config module properly anyway…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all entries must have the same type and number of elements

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how the TF code that we use for deep-merge works

mfridh avatar

Yeah… Thanks. I just wanted to start using the yaml-config module, so I used it even though I didn’t actually need it right here. Converted to a yamldecode(file()) now.
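For reference, a minimal sketch of that conversion, assuming the same ./config/regsync.yaml path from above:

locals {
  regsync = yamldecode(file("${path.module}/config/regsync.yaml"))
}

# entries can then be iterated regardless of whether "tags" is present
output "sync_sources" {
  value = [for entry in local.regsync.sync : entry.source]
}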

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are just reading YAML files and converting to TF structures, you don’t need the module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s only for deep-merge of maps which TF merge does not support

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(and for remotely reading YAML)

Release notes from terraform avatar
Release notes from terraform
07:53:15 PM

v1.1.7 1.1.7 (March 02, 2022) BUG FIXES: terraform show -json: Improve performance for deeply-nested object values. The previous implementation was accidentally quadratic, which could result in very long execution time for generating JSON plans, and timeouts on Terraform Cloud and Terraform Enterprise. (#30561) cloud: Update go-slug for…

jsonplan: Improve performance for deep objects by alisdair · Pull Request #30561 · hashicorp/terraform

When calculating the unknown values for JSON plan output, we would previously recursively call the unknownAsBool function on the current sub-tree twice, if any values were unknown. This was wastefu…

mrwacky avatar
mrwacky

Is there a good argument for or against adding remote_state entries in a Terraform module? We’re developing one internally, and having to have all the callers pass in information they find in remote state feels kludgy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With our terraform framework we use it compulsively because it’s so easy - just as easy as reading from SSM. So if you expect to be integrating with things outside of terraform, build a remote state framework around SSM; if you’re just working with terraform, then remote state is fine.
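A minimal sketch of the SSM side of that idea (the parameter names and module output are hypothetical):

# the producing stack writes the value...
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/platform/vpc/vpc_id" # hypothetical naming scheme
  type  = "String"
  value = module.vpc.vpc_id # hypothetical module output
}

# ...and any consumer (terraform or otherwise) reads it back
data "aws_ssm_parameter" "vpc_id" {
  name = "/platform/vpc/vpc_id"
}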

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think you will hear some counter-arguments, but what are the alternatives? You can do a terralith and just access all settings directly - terraliths are anti-patterns. You can copy-pasta all the settings - that’s not DRY and very error-prone. You can use data sources, but not everything can be looked up that way.

mrwacky avatar
mrwacky

got an example module?

mrwacky avatar
mrwacky

nevermind, I learned a new skill today: search!

mrwacky avatar
mrwacky

ok, yeah, based on this and this I’d say I’m about to implement basically what you’re doing

mrwacky avatar
mrwacky

actually, is this repo currently used? almost a year without any commits, seems unlike you.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It is indeed, but we are behind on upstreaming components

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You’ll see many of the components are more recently updated and many open PRs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
module "eks" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "0.22.0"

  component = "eks"

  context = module.this.context
}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, be advised: our remote-state implementation is based on our stack configurations, so if you’re not using our stack configurations, the remote state lookups won’t work.

Mohammed Yahya avatar
Mohammed Yahya

I would prefer data sources - it’s like querying your aws resources to get their ids

and I don’t need to worry about outputs from other state files, or how stacks (or a single stack) are being used

Mohammed Yahya avatar
Mohammed Yahya

and if you need to cover an extra, non-standard use case you can stick with SSM

2022-03-03

Tyler Jarjoura avatar
Tyler Jarjoura

Hi everybody, I was wondering if I could get some input here. I am attempting to use your terraform-aws-rds-cluster module to manage some of our postgres aurora clusters. These clusters already exist, and I will need to import them into Terraform. The subnet group already has a name (which was autogenerated by Cloudformation; it is not pretty), which does not match the module.this.id pattern the module is using. The problem with this is that changing the name causes the subnet group to be recreated, which in turn will cause the database to be recreated (which we want to avoid). Are there any suggested workarounds here? Would it be possible to add a “subnet_group_name” variable to this module, to solve for cases like this? Thanks!

RB avatar

it makes sense to add a new input variable to support importing existing databases into the rds module

RB avatar

before submitting the pr, please check if you can fully import the database and set the appropriate inputs so the module returns “no changes”
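A sketch of that verification loop, with hypothetical module-internal addresses (the real addresses come from the module source and terraform plan):

terraform import 'module.rds_cluster.aws_rds_cluster.primary[0]' existing-cluster-identifier
terraform import 'module.rds_cluster.aws_db_subnet_group.default[0]' existing-subnet-group-name
terraform plan   # aim for "No changes" once the inputs match the imported reality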

Tyler Jarjoura avatar
Tyler Jarjoura

Here is the PR (not sure who to ping about it) https://github.com/cloudposse/terraform-aws-rds-cluster/pull/133

what

• Allow the user to specify the db_subnet_group name, rather than using the default label ID

why

• If importing an existing database cluster and subnet group, we need to be able to set the subnet group name to what it already has, otherwise the subnet group will be recreated. This in turn will cause the database cluster to be recreated, which we don’t want.

references

https://sweetops.slack.com/archives/CB6GHNLG0/p1646336110444589

RB avatar

Thank you @Tyler Jarjoura for the contribution.

This has been released as https://github.com/cloudposse/terraform-aws-rds-cluster/releases/tag/0.50.2

Brent Garber avatar
Brent Garber

Anyone know a way around feeding aws_iam_policy_documents into a for_each? Complains about The "for_each" value depends on resource attributes that cannot be determined until apply, which (to me) doesn’t make much sense, because that data source is just a way to specify a blob of json

Brent Garber avatar
Brent Garber

aha, instead of putting a for_each in the aws_iam_role_policy to attach the multiple policies, just did another aws_iam_policy_document with source_policy_documents set to the list

Brent Garber avatar
Brent Garber

then passed that to aws_iam_role_policy
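A minimal sketch of that workaround (the document names, role, and statement are hypothetical):

data "aws_iam_policy_document" "s3" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}

data "aws_iam_policy_document" "combined" {
  # merges the rendered JSON of the listed documents into one document
  source_policy_documents = [data.aws_iam_policy_document.s3.json]
}

resource "aws_iam_role_policy" "this" {
  name   = "combined" # hypothetical
  role   = aws_iam_role.this.id # hypothetical role
  policy = data.aws_iam_policy_document.combined.json
}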

2022-03-06

2022-03-07

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

anyone else tried using the tfstate-backend module v0.38.1 ? appears it’s broken and undeployable

Michael Galey avatar
Michael Galey

I just deployed this 5 mins ago and all good here. Terraform’s aws provider version 4 breaks everything that uses s3 though. I set it up with aws version 3 - I think you’d have to do that; only modules that support v4 specifically would work, unless you can use multiple versions of a provider.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

okay let me see which aws version it has

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

thanks @Michael Galey that seems to have been it… I hadn’t version locked hashicorp/aws and it grabbed v4

Michael Galey avatar
Michael Galey

yea same, that’s been the source of errors across my 40 state files for the last week, glad it worked out!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect we’ll have a v4 version soon

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Yeah I’ve got a bunch of my own terraform IaC that needs to be looked at for v4 compatibility as well

jon avatar

Trying to use “cloudtrail-s3-bucket” - getting 2 of these messages: This object does not have an attribute named "enable_glacier_transition". I’m sure it’s a UFU but I don’t know where to look

Michael Galey avatar
Michael Galey

same issue as the above message talking about s3 depending on the aws provider version?

jon avatar

interesting.. ty!

David avatar

Getting an aws-auth exception when I try to make changes to the eks cluster, like updating a security group, and apply; I tried to apply loading from local config but that also did not work. Any workaround for this? I can make changes to the node group though.

Exception:

null_resource.add_custom_tags_to_asg: Refreshing state... [id=5885391824448464606]
╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
│
│   with module.eks_cluster.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks_cluster/auth.tf line 135, in resource "kubernetes_config_map" "aws_auth":
│  135: resource "kubernetes_config_map" "aws_auth" {
│
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
module "eks_cluster" {
  source                                    = "cloudposse/eks-cluster/aws"
  version                                   = "0.45.0"

module "eks_node_group" {
  source                                   = "cloudposse/eks-node-group/aws"
  version                                  = "0.27.3"
  namespace                                = var.namespace
  stage                                    = var.stage
RB avatar

I believe the eks cluster module uses its own kubernetes provider so you may have conflicting providers here between your consumer module and the consumed module

https://github.com/cloudposse/terraform-aws-eks-cluster/blob/b745ed18d8832c7e8e53966264687d2ee1d64e1a/auth.tf#L88

RB avatar

see the example here which doesnt use the kubernetes provider in the consumer/root module https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf and relies on the eks cluster module’s kubernetes provider

RB avatar

cc: @Jeremy G (Cloud Posse)

David avatar

removed the provider as you suggested but still it does not work…

null_resource.add_custom_tags_to_asg: Refreshing state... 
╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
│
│   with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│  115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│
╵
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@David Are you using kube_exec_auth_enabled ? As explained in the release notes for eks-cluster v0.42.0, authentication is an ongoing problem, for which “exec auth” is our preferred workaround.
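For reference, a sketch of what exec-based auth looks like in a raw kubernetes provider block (assuming the AWS CLI is available on the runner; the cluster_name variable is hypothetical):

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # fetch a fresh token at plan/apply time instead of caching one in state
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}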

RB avatar

maybe @David you could be tripping over this issue https://github.com/cloudposse/terraform-aws-eks-cluster/issues/143

see the workarounds

2022-03-08

Ross Rochford avatar
Ross Rochford

Hi everyone, I’m currently building a Terraform-like application for declarative cloud provisioning, using python/django. The reason for this is that Terraform cannot be used reliably within an automated system, for example when launching managed services as part of a SaaS offering. I was wondering if anyone else here is looking at solving the same problem? I am looking for contributors.

Alex Jurkiewicz avatar
Alex Jurkiewicz

is this a “the way to get tech support online is to say something obviously wrong” technique question? Terraform is perfect for managing PaaS infra

Ross Rochford avatar
Ross Rochford

No, not at all. Terraform is great but it isn’t reliable if you are launching resources in customers’ accounts or VPCs, where numerous problems can arise

Ross Rochford avatar
Ross Rochford

Imagine launching thousands of services across thousands of clients, being able to fail gracefully, destroy resources, report errors, negotiate availability and so on.

venkata.mutyala avatar
venkata.mutyala

Starting from scratch seems a bit questionable. Have you considered: https://www.pulumi.com ? I haven’t used it myself but I believe it’s python friendly. There is a k8s project called Crossplane that might also be worth looking at too. Also, have you thought about just forking the TF provider code base? I think cloudposse does this with the AWS provider.

Pulumi - Modern Infrastructure as Code

Pulumi’s open source infrastructure as code SDK enables you to create, deploy, and manage infrastructure on any cloud, using your favorite languages.

loren avatar

Or maybe crossplane… https://crossplane.io/

Crossplane

Compose cloud infrastructure and services into custom platform APIs

Ross Rochford avatar
Ross Rochford

Thank you Venkata and Loren. I will take a closer look at Pulumi. As for crossplane, it looks interesting but I’m not so interested in being tied to k8s. I basically want a terraform clone for creating resources.

Ross Rochford avatar
Ross Rochford

Running the commands for creation, deletion etc can be done with the cloud providers’ python clients. But managing the state changes, retries, error reporting etc seems like something that I would need to build?

Joe Niland avatar
Joe Niland

Hey @Ross Rochford super late but I was involved in something similar a few years ago. We used SaltStack with SNS and cloudwatch events (now called EventBridge I believe).

Salt has a REST API and can do everything on your list above.

I have not done anything with salt since then and I don’t know the state of the FOSS project, especially since VMware bought them.

loren avatar

I’m not a fan of k8s, generally, but crossplane is interesting in that I think the idea is to declare the desired state and let crossplane use k8s to converge to it

loren avatar

Otherwise, what you’re talking about sounds like something any of the TACOS might help with, e.g. Terraform Cloud, Spacelift, Env0, Atlantis, Scalr, etc…

managedkaos avatar
managedkaos


Imagine launching thousands of services across thousands of clients, being able to fail gracefully, destroy resources, report errors, negotiate availability and so on.
I think this is a perfect reason to use a tool like Terraform or Pulumi on your back end.

Your webapp could just be a wrapper with permission to call the correct commands on the backend.

But question, @Ross Rochford: how much variance is contained in the resources you would be deploying across so many clients? If it’s more than 2-3 variations, you might need to place more effort in customer support to tweak all the changes needed per client. But if you are deploying just a few different collections of resources, you would be well served to code up a module in TF or Pulumi that describes the collection, test the heck out of it, and then have your webapp (or a TACO or other “Infra as code” manager) deploy your configurations.

I have another sidebar question: who would be using the webapp? You or the clients? I’m just trying to get an idea of who would benefit from the experience and what the needs are.


2022-03-09

othman issa avatar
othman issa

Hello everyone,

othman issa avatar
othman issa

I have a question about for_each

othman issa avatar
othman issa

I have a cluster role binding:

resource "kubernetes_cluster_role_binding" "example" {
  metadata {
    name = "terraform-example"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = "*"
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
}

othman issa avatar
othman issa

subject {
  kind      = "User"
  namespace = "*"
  for_each  = toset(["mike", "david", "adam", "ranne", "ken"])
  name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#" - "${each.value}" - "@microsoft.com"
}

othman issa avatar
othman issa

I feel not right, can anyone help plz ?

othman issa avatar
othman issa

thank you

2022-03-10

Jim G avatar

maybe you want a dynamic block instead?
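A sketch of that suggestion applied to the snippet above (the exact name format is a guess, since the original quoting was garbled):

resource "kubernetes_cluster_role_binding" "example" {
  metadata {
    name = "terraform-example"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  # generates one subject block per user
  dynamic "subject" {
    for_each = toset(["mike", "david", "adam", "ranne", "ken"])
    content {
      kind      = "User"
      namespace = "*"
      name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#${subject.value}@microsoft.com"
    }
  }
}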

Ross Rochford avatar
Ross Rochford

@managedkaos I will take a closer look at Pulumi but my experience with Terraform suggests that it is simply not designed for my use case. It often fails without grace or runs into problems with its state. There are various hacks, but in the long run I don’t see it as reliable in the way I need it to be. When a failure occurs it is important that we can trace exactly what the issue was and that we have full access to the API responses from the cloud provider.

In terms of variance, we would like this problem to be solved in general, for many use cases, resource types and environments. The business is a marketplace for managed services, we mediate between developers who provision and deploy services, and customers who want to run them with the support of people who have expertise in those services (say for example Redis clusters). This mediation involves providing convenient APIs to developers, so a robust terraform-like declarative API would be a key part of our offering.

managedkaos avatar
managedkaos


There are various hacks, but in the long run I don’t see it as reliable in the way I need it to be.
Got it. I can agree that everything isn’t for everyone.
This mediation involves providing convenient APIs to developers, so a robust terraform-like declarative API would be a key part of our offering.
It sounds like you’re set on building a very viable solution. I think in the end, you will likely have something that is on par with terraform and wonder if you would consider offering your solution as a business along with the business built on top of it.

If external developers are using your solution, I think that changes your focus a bit since it’s very likely the developers will want new features, updates to match changes in cloud provider APIs, support for any problems and so on. I totally get your point about not using TF as a solution and I’m on board with you figuring out your path without it, but I feel like using a third party solution like TF or Pulumi would save you from having to do a lot of the heavy lifting that’s already being done with those technologies.

I wish you all the best and look forward to hearing more about your solution!

Ross Rochford avatar
Ross Rochford

Yes, it is definitely a large project to take on and merits some caution on not reinventing the wheel.

On the other hand, my initial prototype suggests that this problem has a lot of repeated functionality that can be reused across all resources and providers. The webapp does the declarative->imperative mapping magic, so implementing a new resource simply involves adding 1-2 DB tables and 3-5 custom methods (create, list, get, delete, update). These are typically fairly short implementations because they can avail of the cloud provider’s python API client.

Ross Rochford avatar
Ross Rochford

I’ll keep you posted, would be great to demo it to this slack channel.


2022-03-12

Остап Василькевич avatar
Остап Василькевич

Error creating SSM activation: ValidationException: Nonexistent role or missing ssm service principal in trust policy (module: beanstalk environment). What is wrong?

Matt Gowie avatar
Matt Gowie

If it works on the second apply then I would assume that your trust policy is referencing a role (or something similar) that it is not dependent on. Therefore it is trying to create the policy prior to the referenced resource being created and then :boom: .

You can try to find that dependent resource and explicitly define the relationship using depends_on OR you can reference the dependent resource directly in the policy and it will create that relationship for you (the preferred approach).

This is a guess from a small amount of information, but that is a common problem.
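A sketch of the second option, with hypothetical names - referencing the role resource directly means Terraform orders the creation for you:

data "aws_iam_policy_document" "ssm_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ssm.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ssm" {
  name               = "ssm-activation-role" # hypothetical
  assume_role_policy = data.aws_iam_policy_document.ssm_trust.json
}

resource "aws_ssm_activation" "this" {
  iam_role = aws_iam_role.ssm.name # direct reference creates the dependency
  # depends_on = [aws_iam_role_policy_attachment.ssm_core] # sometimes still needed for attachments (hypothetical resource)
}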

Остап Василькевич avatar
Остап Василькевич

It appears after the first apply. When I run terraform apply the next time, all works fine

2022-03-13

2022-03-14

Almondovar avatar
Almondovar

Hi colleagues, i need to import a key pair but our infra is in terraform cloud, any ideas of how i can get terminal access on terraform cloud please?

Stef avatar
SSH Keys - API Docs - Terraform Cloud and Terraform Enterprise | Terraform by HashiCorp

Terraform is an open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure.

Almondovar avatar
Almondovar

thank you for your response, i have been able to take cli control with the terraform login command, but it seems it can’t understand the state already used in terraform cloud and it wants to recreate everything. do i understand well that terraform import is not possible when using terraform cloud? found also many other enterprise users struggling with it, and the clear answer from tf employees is “sorry it works as it should” here

Matt Gowie avatar
Matt Gowie

@Almondovar Are you making sure to select the correct workspace ( e.g. terraform workspace select) that Terraform Cloud is using? I could imagine that being your issue.

Almondovar avatar
Almondovar

Thank you Matt, i am not sure these are applicable to terraform cloud, because although i am connected with the token correctly

Terraform must now open a web browser to the tokens page for app.terraform.io.

If a browser does not open this automatically, open the following URL to proceed:
    <https://app.terraform.io/app/settings/tokens?source=terraform-login>

i cant see my workspaces

> terraform workspace list      
* default

2022-03-15

momot.nick avatar
momot.nick

How would you set up IIS with Terraform in a Windows EC2 instance?

I know that there’s the user_data argument that can be passed to the launch_configuration but I’m unsure of where to go from there.

The goal is to have instances set up for hosting without having to set up envs by hand
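For what it’s worth, a minimal sketch of the user_data route (the AMI id and site layout are hypothetical; on Windows instances, content wrapped in <powershell> tags runs at first boot):

resource "aws_instance" "iis" {
  ami           = "ami-0123456789abcdef0" # hypothetical Windows Server AMI
  instance_type = "t3.medium"

  user_data = <<-EOT
    <powershell>
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools
    New-WebApplication -Name "app" -Site "Default Web Site" -PhysicalPath "C:\inetpub\app"
    </powershell>
  EOT
}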

Jim G avatar

That’s how we do it - packer builds a base image with all of the prereqs, then user data runs PowerShell to configure IIS.

Jim G avatar

Could also use DSC or similar, which would be a little cleaner, but I haven’t gone down that road.

momot.nick avatar
momot.nick

@Jim G

The way it’s going for me so far is:

  1. Running Install-WindowsFeature to install all the IIS modules
  2. Calling New-WebApplication to set up a site

How do you handle the IIS config? Like setting headers, and HTTPS redirects?
Jim G avatar

Sorry, to clarify:

• We use packer to install IIS and any other prereqs or configuration that can be done in the image.

• user_data is just used to bootstrap the image (configure Splunk, Cloudwatch Agent) and install our deployment system agent to phone home (in our case, Octopus Deploy)

• The deployment system actually deploys the website content and configures IIS.

Jim G avatar

I wouldn’t want to do all of that in user_data - it would be slow and hard to debug.

momot.nick avatar
momot.nick

Ah, that’s a good point; I’ve noticed an error during the user_data call can mess with the SSM agent - other stuff too, probably

momot.nick avatar
momot.nick

I don’t know much about packer other than that it deals with os images; what’s the major advantage vs creating a custom AMI w/ AWS?

Jim G avatar

speed. You can do all of the installs, config, whatever once when baking the AMI. Then during terraform apply, it’s already in the image - you’re not waiting for 10,000 lines of user_data to execute.

Jim G avatar

and with Windows instances, that adds up pretty quickly

Jim G avatar

it also helps if you’re trying to build immutable infrastructure - we don’t patch our windows instances. Once a month, we build a new (patched) AMI with packer, then do a rolling replacement of all of our instances.

momot.nick avatar
momot.nick

that’s pretty cool!

Can you point me to any resources for creating an IIS image w/ packer?

Samuel Crudge avatar
Samuel Crudge

Hi all, I’m trying to make changes to the load balancer SG on CloudPosse ElasticBeanstalk v0.40.0 by using loadbalancer_managed_security_group, but when referencing either a SG id or arn I’m getting this error response:

Error: Error waiting for Elastic Beanstalk Environment (e-tpbxpwqcnp) to become ready: 2 errors occurred:
│       * 2022-03-15 14:30:44.562 +0000 UTC (e-tpbxpwqcnp) : Service:AmazonCloudFormation, Message:[/Resources/AWSEBV2LoadBalancer/Type/SecurityGroups] 'null' values are not allowed in templates
│       * 2022-03-15 14:30:44.704 +0000 UTC (e-tpbxpwqcnp) : Failed to deploy configuration.

When starting this, the environment is in an OK state.

Any help would be appreciated

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment

Samuel Crudge avatar
Samuel Crudge

If there’s any information i’ve omitted that might help understand where i sit please let me know


jose.amengual avatar
jose.amengual

I’m having an issue with the awscc provider. I just added the provider to required_providers but I’m not using it yet, and after init I can’t plan anymore and I get this error

jose.amengual avatar
jose.amengual
Error: Could not load plugin


Plugin reinitialization required. Please run "terraform init".

Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.

Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".

Failed to instantiate provider "registry.terraform.io/hashicorp/awscc" to
obtain schema: Incompatible API version with plugin. Plugin version: 6, Client
versions: [5]
jose.amengual avatar
jose.amengual

any ideas?

jose.amengual avatar
jose.amengual
terraform version
Terraform v0.13.7
+ provider registry.terraform.io/-/random v3.1.0
+ provider registry.terraform.io/hashicorp/aws v3.74.3
+ provider registry.terraform.io/hashicorp/awscc v0.14.0
+ provider registry.terraform.io/hashicorp/local v2.2.2
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.2
jose.amengual avatar
jose.amengual

I tried TF 0.14, 1.1.7, deleting .terraform and nothing, same problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

delete the lock file in the same folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

.terraform.lock.hcl

jose.amengual avatar
jose.amengual

I do not have a lock file

jose.amengual avatar
jose.amengual

can the awscc and aws provider coexist? I would imagine that they should

loren avatar

did you try terraform providers?

loren avatar

also, do you use a .terraformrc file, or set the env TF_PLUGIN_CACHE_DIR?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to delete .terraform.d folder in HOME dir

jose.amengual avatar
jose.amengual

I have no .terraform* files at all

jose.amengual avatar
jose.amengual

I can run terraform providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you should have the folder in ~/.terraform.d

jose.amengual avatar
jose.amengual

let me check that one

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your HOME

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not in the repo

jose.amengual avatar
jose.amengual

yes, I just deleted it

jose.amengual avatar
jose.amengual

does awscc work with 0.13.7?

jose.amengual avatar
jose.amengual

I wonder if it has to be 1.x

jose.amengual avatar
jose.amengual

could be that the state is 0.13.7 and is an older version so it needs to be updated?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after deleting the folder, did you run terraform init again?

jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

otherwise you can’t plan, the problem happens on plan only

jose.amengual avatar
jose.amengual

so I cleaned up the code

jose.amengual avatar
jose.amengual

my main.tf is empty

jose.amengual avatar
jose.amengual

new dir

jose.amengual avatar
jose.amengual
 terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/awscc versions matching "0.13.0"...
- Installing hashicorp/awscc v0.13.0...
- Installed hashicorp/awscc v0.13.0 (signed by HashiCorp)
jose.amengual avatar
jose.amengual

just the one provider

jose.amengual avatar
jose.amengual

same thing :

 terraform plan

Error: Could not load plugin


Plugin reinitialization required. Please run "terraform init".

Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.

Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".

Failed to instantiate provider "registry.terraform.io/hashicorp/awscc" to
obtain schema: Incompatible API version with plugin. Plugin version: 6, Client
versions: [5]
jose.amengual avatar
jose.amengual

ok so

jose.amengual avatar
jose.amengual

I did the same, but with tf 1.1.7 and it works

jose.amengual avatar
jose.amengual

I think the minimum version required is 0.15

jose.amengual avatar
jose.amengual

yep

jose.amengual avatar
jose.amengual

0.15.0 is the minimum required version

jose.amengual avatar
jose.amengual
rm -rf ~/.terraform.d .terraform .terraform.lock.hcl
prompt> tfenv use 0.15.0
Switching default version to v0.15.0
Switching completed
prompt> terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/awscc versions matching "0.13.0"...
- Installing hashicorp/awscc v0.13.0...
- Installed hashicorp/awscc v0.13.0 (self-signed, key ID 34365D9472D7468F)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
<https://www.terraform.io/docs/cli/plugins/signing.html>

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
prompt> terraform plan

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your configuration and the remote system(s). As a result, there are no actions to take.
prompt> rm -rf ~/.terraform.d .terraform .terraform.lock.hcl
prompt> tfenv use 0.14.11
Switching default version to v0.14.11
Switching completed
prompt> terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/awscc versions matching "0.13.0"...
- Installing hashicorp/awscc v0.13.0...
- Installed hashicorp/awscc v0.13.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
prompt> terraform plan

Error: Could not load plugin


Plugin reinitialization required. Please run "terraform init".

Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.

Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".

Failed to instantiate provider "registry.terraform.io/hashicorp/awscc" to
obtain schema: Incompatible API version with plugin. Plugin version: 6, Client
versions: [5]


prompt [1]> 
jose.amengual avatar
jose.amengual

where is that in the docs?

loren avatar

i was looking for a min tf version for the awscc provider but didn’t see one. i do think that 0.15.0 is the last time they made changes to the state schema though, so kinda makes sense

jose.amengual avatar
jose.amengual

The provider requires a minimum version of Terraform 0.15.0; I could not find that in the docs, and the “use provider” link shows 0.13+ usage as an example.



Terraform CLI and Terraform AWS Cloud Control Provider Version

Terraform v0.14.11
+ provider registry.terraform.io/hashicorp/awscc v0.13.0

Terraform Configuration Files

empty main.tf, no resources added

jose.amengual avatar
jose.amengual

please vote

Matt McCredie avatar
Matt McCredie

I’m using “cloudposse/elastic-beanstalk-environment/aws” (v0.46.0) with loadbalancer_type = "classic" and tier = "WebServer" and I’m getting a bunch of modifications to elb settings every time I run plan (This is just one of the settings that changes):

      - setting {
          - name      = "HealthCheckInterval" -> null
          - namespace = "aws:elasticbeanstalk:environment:process:default" -> null
          - value     = "10" -> null
        } 

These changes are on an embedded resource of the module, so I don’t think there is a way to use lifecycle.ignore_changes. Are there any recommendations for reducing the noise in the output of terraform plan?

Matt McCredie avatar
Matt McCredie

Note I don’t have this problem when using a network load balancer. I’m looking for recommendations to reduce the noise in my plan output.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s difficult to give a recommendation. What we saw with elastic beanstalk is that the API returns the data in a different order, and TF can’t compare it correctly. This issue has been going on for years

Matt McCredie avatar
Matt McCredie

In this case it seems like the module is specifying a bunch of parameters that aren’t used, so the platform returns nulls. Then the plan tries to update them again to what is specified.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, that’s the case

Matt McCredie avatar
Matt McCredie

So, is this a bug in the module? And if so, should I file it? I mean, it doesn’t solve my immediate problem, but it seems like it should be fixed.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not easy to fix (there are many combinations of settings) - unless we don’t add any settings to the module and let the caller provide whatever they want

Alex Jurkiewicz avatar
Alex Jurkiewicz

for this reason we don’t use the CloudPosse module. Instead, we create the environments directly and pass in the exact settings we need. Elastic Beanstalk’s resources in Terraform are not great. EB is very opinionated, and its opinions don’t work well with Terraform. It needs to be handled with care and close attention, for better or worse

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we prob need to make all the settings optional and provide a variable to override all of them

2022-03-16

Lachlan Wells avatar
Lachlan Wells

Hi, does anyone have any tips on how to change the origin of the Default behavior when using this terraform-aws-cloudfront-s3-cdn module? The default behavior routes traffic to the S3 origin - I’d like it routed elsewhere.

Lachlan Wells avatar
Lachlan Wells

It seems as though this is the local I need to change - unsure if I can do so from my workspace.

    target_origin_id           = local.origin_id
jedineeper avatar
jedineeper

Sorry for the late reply but if you don’t want to use s3 i think this module might be a better fit for your use case https://github.com/cloudposse/terraform-aws-cloudfront-cdn

cloudposse/terraform-aws-cloudfront-cdn

Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin.

Lachlan Wells avatar
Lachlan Wells

Thanks @jedineeper for the reply. I am using S3 as one origin, however not as the default. Are you aware of a way to utilise the module in this way?

jedineeper avatar
jedineeper

I think that the module i linked allows you to define your own origin structure, be it s3 or something else

jedineeper avatar
jedineeper

pretty sure the s3-cdn module uses it as a base so you could probably look into THAT module to see how the origins are defined as I’m not 100% sure

2022-03-18

Andrew Nazarov avatar
Andrew Nazarov

Hi! Are there any docs or best-practices about how you folks @ CloudPosse do TF refactoring? I mean suppose you have some functionality in the root module and you realise that some parts should go to a dedicated module. I’m mostly interested in how you manipulate the state during the refactoring of modules that are in use. What’s the workflow, who is responsible for making changes to the state, how you control this, etc. I do remember the discussion about when one should decide to write a module. However, I don’t remember any discussions about refactoring:) Now we have this cool moved {} possibility, probably it’s a great help here. But last time we tried, it didn’t work for modules outside of the current repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, we have no guidance/recommendations for this. Refactoring is a very small part of what we do. Possibly because we create very small components already built on our terraform modules, which are also pretty small and single purpose.

Where we still get bit is when the provider interface changes (e.g. S3 buckets) and for that, Terraform provides little to make it less painful.

We are grappling with one area of refactoring that’s very painful: how we handle security groups.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can’t recall if @ looked into using moved to help with that

Andrew Nazarov avatar
Andrew Nazarov

As for moved, we are dealing with a cross-package move by making the module local to the root module first; the second step is to make it external.
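For reference, the moved block for that make-it-local-first step looks roughly like this (hypothetical addresses):

moved {
  from = aws_security_group.this
  to   = module.security_group.aws_security_group.this
}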

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That makes sense. Tedious but probably no better way

Ramon de la Cruz Ariza avatar
Ramon de la Cruz Ariza

Hello! I’m using the module from Cloudposse to create waf acl / rules. I think the module is not supporting the “RuleLabels” feature when adding a new rule based on countries - can someone help me? https://github.com/cloudposse/terraform-aws-waf#input_geo_match_statement_rules https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#rule_label Thanks!!!

RB avatar

could you create a ticket on the repo? this will help us to track it

RB avatar

if you’re feeling ambitious, you could even submit a pr for the addition

Ramon de la Cruz Ariza avatar
Ramon de la Cruz Ariza

Thanks for answering!! i can create a ticket yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @matt

2022-03-19

2022-03-20

hkaya avatar

Hi everyone! I am interested in recommended approaches for managing infrastructure with terraform involving providers not accessible from a cloud based pipeline via public internet. My use case here looks like this:

  1. cloud based pipeline creates initial AWS stack (VPC, EKS, IAM stuff, etc.)
  2. another (cloud based?) pipeline using TF with Kubernetes, Vault and other providers creates resources in the cluster

Now, this all works fine when the Kubernetes API, Vault and other involved services are publicly accessible. However, if the Kubernetes API and Vault are only accessible from within the cluster (or VPC), the 2nd pipeline concept breaks, as TF can’t manage resources using the Vault provider (the Kubernetes API connection might get worked around with whitelisting or such).

Also, my understanding is that some web hook based tooling might also break, since GitHub would not be able to trigger anything inside the cluster. Are my assumptions correct? If so, are there any best practices or blueprints for how to set things up in these scenarios? Appreciate any input here. Thanks!

2022-03-21

Evan avatar

Hello world. Does anyone have any good examples of managing AWS landing zones with AWS Control Tower? Or landing zones in general?

Waqar Ahmed avatar
Waqar Ahmed

Hi, there was a fully fledged Terraform landing zone (https://www.hashicorp.com/resources/aws-terraform-landing-zone-tlz-accelerator), though this has been replaced by Terraform integration with AWS Control Tower: https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/.

AWS Terraform Landing Zone (TLZ) Accelerator

Watch Amazon announce and demo their HashiCorp Terraform Landing Zone (TLZ) AWS Accelerator preview at HashiConf.

New – AWS Control Tower Account Factory for Terraform | Amazon Web Services

AWS Control Tower makes it easier to set up and manage a secure, multi-account AWS environment. AWS Control Tower uses AWS Organizations to create what is called a landing zone, bringing ongoing account management and governance based on our experience working with thousands of customers. If you use AWS CloudFormation to manage your infrastructure as […]

Evan avatar

Thanks! Is my understanding correct, that I need a TF cloud account for TLZ?

Waqar Ahmed avatar
Waqar Ahmed

TF Cloud or Enterprise would be best in my experience; using S3/Dynamo in a multi account setup can be difficult to manage with growing account numbers.

Typically the Account Vending Machine sits in the Management account, though ultimately it depends on your multi aws account design.

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

It used to be the case you needed TF Cloud or Enterprise for landing zones, but the Control Tower folks added support for OSS. Obviously we’d suggest folks use Cloud (even the free version) as the workflows are better, but it’s not necessary.

Juan Soto avatar
Juan Soto

Hello people, I need to “copy and paste” all the resources from one AWS account to another. I am planning to try https://github.com/GoogleCloudPlatform/terraformer - do you have any experience with that? What’s your feedback?

GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code

jose.amengual avatar
jose.amengual

I have used it many times


jose.amengual avatar
jose.amengual

the problem is the name of the resources

jose.amengual avatar
jose.amengual

I do not know if it is possible to add prefixes etc so that resource names are a bit more human readable

jose.amengual avatar
jose.amengual

but you can script that after the fact

jose.amengual avatar
jose.amengual

the other issue is that when you want to use modules instead of plain resources you will have to do a lot of terraform state mv commands

jose.amengual avatar
jose.amengual

but it is the best tool for doing this kind of thing

Juan Soto avatar
Juan Soto

Do you know if AWS has a legacy tool for doing this? I couldn’t find one yet

jose.amengual avatar
jose.amengual

no they do not

jose.amengual avatar
jose.amengual

terraform is basically the competitor to Cloudformation

Juan Soto avatar
Juan Soto

yes, I know that

jose.amengual avatar
jose.amengual

just last year they seemed to partner with HashiCorp on CDK for Terraform (CDKTF) efforts and such

Juan Soto avatar
Juan Soto

humn…

jose.amengual avatar
jose.amengual

because people use TF more

jose.amengual avatar
jose.amengual

and Google has an interest in such a tool so that people can migrate stuff to their cloud

jose.amengual avatar
jose.amengual

etc

jose.amengual avatar
jose.amengual

anyhow, the tool is good; do not use the others out there

zeid.derhally avatar
zeid.derhally

Note: terraformer just vomits out Terraform, so be prepared to do a lot of post-processing of the generated code
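
For reference, a typical terraformer invocation looks something like this (the resource types, region, and profile are illustrative):

terraformer import aws --resources=vpc,subnet,sg --regions=us-east-1 --profile=source-account

After generation, expect to rename resources and use terraform state mv to reshape the code into modules, as noted above.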

2022-03-23

Pipo avatar

Hello! I am facing an issue… I developed a Terraform module in a private repo on GitHub; it has:

examples     main.tf      tests        variables.tf

However, I am failing at the moment of calling the module. I am using a for_each to iterate over the different services that need the module, and the issue is: I can’t put a provider in the module because of the for_each. If I don’t put the provider in the module, Terraform tries to use a source that doesn’t exist (it should use ‘DataDog/datadog’, but it tries ‘hashicorp/datadog’). Nowhere in the module did I declare ‘hashicorp/datadog’.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the issue is, terraform does not support ANY provider configuration in modules used with for_each - this is a TF limitation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in some of our components, we instantiate a module many times with different providers. Or you can write some code (in bash, Python, etc.) to generate that TF code

Pipo avatar

but the provider is always the same; it’s at the root, and it’s the one the module should use. I don’t need multiple providers…

Pipo avatar

It should work, but I don’t know why it is using a non-existent provider that I never declared

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, TF modules implicitly inherit the top-level providers

Pipo avatar

but it is inheriting a provider that I didn’t declare…

Pipo avatar

I declare

terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.9.0"
    }
  }
}

And I get

│ Could not retrieve the list of available versions for provider
│ hashicorp/datadog: provider registry registry.terraform.io does not have a
│ provider named registry.terraform.io/hashicorp/datadog
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you also have this ?

provider "datadog" {
  api_key  = local.datadog_api_key
  app_key  = local.datadog_app_key
  validate = local.enabled
}
Pipo avatar

yes, I have that too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in cases like this, always try to delete

.terraform
.terraform.lock.hcl
/Users/xxxxxx/.terraform.d
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

might help

Pipo avatar

Thanks, I’ve already tried that, without success

Pipo avatar

I found the issue: I had to declare the provider requirement at the module root

terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.9.0"
    }
  }
}

and the same at the root of my code, but only at the root can I declare the

provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}

In the module, the provider configuration can’t be there; even an empty block like

provider "datadog" {}

will fail.

2
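
For reference, a minimal sketch of the layout that resolves this (paths and service names are illustrative): the child module pins its own required_providers so Terraform does not fall back to hashicorp/datadog, while the provider block lives only at the root.

# modules/datadog-monitor/versions.tf (child module: requirements only, no provider block,
# so it stays compatible with for_each)
terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.9.0"
    }
  }
}

# root main.tf: the only place the provider is configured
provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}

module "monitors" {
  source   = "./modules/datadog-monitor"
  for_each = toset(["api", "worker"]) # hypothetical services
  # ...per-service inputs
}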
Michael Galey avatar
Michael Galey

Anyone have thoughts on how to handle circular dependencies between security groups, across modules? Modules seem to need to be created as a package. So if I use a cloudposse-style module for my application, it creates a security group in the application module. I want to also pass that security group to the Elasticsearch module for ingress access, and then use the Elasticsearch security group in the application’s egress rule.

Michael Galey avatar
Michael Galey

nm I can just add on the rules in the application module I bet

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, that would be the way. It does mean that the destruction of resources matters a lot, but that is typically a lesser concern.

1
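
For reference, a minimal sketch of that approach (names, ports, and variables are illustrative): create both security groups without inline rules, then attach the cross-referencing rules as standalone resources so neither group depends on the other at creation time.

resource "aws_security_group" "app" {
  name   = "app"
  vpc_id = var.vpc_id
}

resource "aws_security_group" "elasticsearch" {
  name   = "elasticsearch"
  vpc_id = var.vpc_id
}

# Elasticsearch allows ingress from the application...
resource "aws_security_group_rule" "es_from_app" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.elasticsearch.id
  source_security_group_id = aws_security_group.app.id
}

# ...and the application allows egress to Elasticsearch.
resource "aws_security_group_rule" "app_to_es" {
  type                     = "egress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.app.id
  source_security_group_id = aws_security_group.elasticsearch.id
}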
Robert Berger avatar
Robert Berger

Is there a way to have subject_alternative_names that are not in the same Route53 zone as the zone_name when using "cloudposse/acm-request-certificate/aws"?

For instance, if I set:

domain_name               = "foo.bar.example.com"
subject_alternative_names = ["foo.example.com"]
zone_name                 = "bar.example.com"

bar.example.com and example.com are two different Route53 zones. The module will only try to create the certificate validation records in the zone specified by zone_name, and thus will fail while waiting for the certificate to be issued.

RB avatar

cc: @Robert Berger

RB avatar

use input subject_alternative_names

Robert Berger avatar
Robert Berger

I’m not clear on what you are suggesting. I am using subject_alternative_names (actually passing in a list, ["foo.example.com"]).

The problem is that when it tries to do the DNS validation, it does it in the zone specified by zone_name (bar.example.com); for it to work it needs to do one validation there for foo.bar.example.com, but it also needs to do a DNS validation in the example.com zone.

RB avatar

oh i see, so it might be getting stuck on validating the domain

RB avatar
  count                   = local.process_domain_validation_options && var.wait_for_certificate_issued ? 1 : 0
RB avatar

try setting wait_for_certificate_issued to false

RB avatar

then you could try to update the records for example.com in order to get DNS validation to work correctly

Robert Berger avatar
Robert Berger

That would probably make the run complete, but the certificate would never be validated. Like you just said, I guess a workaround would be to update the DNS validation with a resource outside of the terraform-aws-acm-request module, but I would somehow need to get the validation values out of the module as well. Looks like that is an output of the module: domain_validation_options

RB avatar

do you think it would be possible to modify the module to allow it to work for your use case ?

RB avatar

or would it be better to disable validation within the module, create the records outside the module, and perform the validation outside the module?

RB avatar

this sounds like it’s worth a ticket, at least, in the repo issues section :)

Robert Berger avatar
Robert Berger

Well, it’s “just software”, so I suspect it’s possible, but I’m still ramping up my Terraform fu. It would take some string processing and some logic to do it in the module. Or possibly optional input variables that would just tell it what to do. Probably easier to just document this use case and handle it outside the module.

Robert Berger avatar
Robert Berger

I’ll file a ticket this weekend…

RB avatar

i think one issue is that we’d need to know the zone id of the SANs in order to create the record which would get complicated

Robert Berger avatar
Robert Berger

Thanks for the help. I worked around my immediate block by just dropping the requirement for the alternate name for now.

RB avatar

if the other SANs are in route53, we could allow multiple zone ids to be passed in. that might be one avenue

Robert Berger avatar
Robert Berger

Yes, if the module would do it, it would need to know maybe the alternate name and the zone for the alternate names

RB avatar
output "domain_validation_options" {
Robert Berger avatar
Robert Berger

Yes, I see now that that could be used to set up the DNS validation outside of the acm module

Robert Berger avatar
Robert Berger

I would have to play with it to see if domain_validation_options would need further processing, as it would still have the wrong base zone / DNS name for one of the names

RB avatar

oh true. the module would require some refactoring for sure
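
A rough sketch of the workaround discussed above, assuming the module’s domain_validation_options output mirrors the aws_acm_certificate attribute of the same name (the zone lookups and domain names are illustrative):

data "aws_route53_zone" "bar" {
  name = "bar.example.com"
}

data "aws_route53_zone" "root" {
  name = "example.com"
}

locals {
  # assumption: map each validated domain to the zone that should hold its record
  zone_for_domain = {
    "foo.bar.example.com" = data.aws_route53_zone.bar.zone_id
    "foo.example.com"     = data.aws_route53_zone.root.zone_id
  }
}

resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in module.acm_request_certificate.domain_validation_options :
    dvo.domain_name => dvo
  }

  zone_id = local.zone_for_domain[each.key]
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  ttl     = 60
  records = [each.value.resource_record_value]
}

With wait_for_certificate_issued disabled in the module, these records would complete the DNS validation in both zones.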

2022-03-24

azec avatar

Curious what TF provider folks are using for provisioning things in PostgreSQL?

azec avatar

I need to deploy some AWS extensions in PgSQL … aws_commons & aws_lambda to allow triggering of Lambda from PostgreSQL …

azec avatar

Historically we have been using https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs but I absolutely hate how it handles grants for DB resources.

RB avatar

yep this is the one we use as well

Matt Gowie avatar
Matt Gowie

Yeah — I think this is the one everyone uses. Unfortunately, I think the SQL model and Terraform don’t mix well. I have similar sentiments against this provider. I’m not at the point where I’m going to stop using it yet… but I’m close. If there was another option for better managing a PGSQL DB via IaC then I’d go that route.

Tyrone Meijn avatar
Tyrone Meijn

https://github.com/ariga/atlas maybe this is interesting @Matt Gowie?

ariga/atlas

A database toolkit

Matt Gowie avatar
Matt Gowie

@Tyrone Meijn — That’s very interesting, thanks for sharing. I don’t see support for managing roles / users, but if it adds that then I’d be on board.

2022-03-28

Release notes from terraform avatar
Release notes from terraform
10:43:14 AM

v1.2.0-alpha-20220328 1.2.0 (Unreleased) NEW FEATURES: precondition and postcondition check blocks for resources, data sources, and module output values: module authors can now document assumptions and assertions about configuration and state values. If these conditions are not met, Terraform will report a custom error message to the user and halt further evaluation. Terraform now supports run tasks, a Terraform Cloud…

Run Tasks - Workspaces - Terraform Cloud and Terraform Enterprise | Terraform by HashiCorp

Terraform is an open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Odd, these releases generally come on Wednesdays!

2022-03-29

Steffan avatar
Steffan

hi guys, please, I need help with any pointers. I am trying to write Terraform for a Route53 health check, and it requires fqdn or ip_address:

resource "aws_route53_health_check" "example" {
  fqdn              = "example.com"
  port              = 80
  type              = "HTTP"
}

The problem is that I am getting the fqdn dynamically from the output of the API Gateway stage (the invoke URL), which is returned as https://example.com. Do you know any function I can use to get rid of the https:// so that only example.com goes into my health check resource, or how else can I achieve this? (edited)

RB avatar

sounds like you could use the split function or you could use a provider to parse the url

https://github.com/matthewmueller/terraform-provider-url

mrwacky avatar
mrwacky

or regex function

regex - Functions - Configuration Language | Terraform by HashiCorp

The regex function applies a regular expression to a string and returns the matching substrings.

1
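
For reference, a minimal sketch combining these suggestions (the stage resource name is an assumption):

locals {
  # assumption: the invoke URL comes from an API Gateway stage resource
  invoke_url = aws_api_gateway_stage.example.invoke_url

  # regex with a capture group returns the captured substrings;
  # this grabs everything between the scheme and the first slash
  fqdn = regex("^https?://([^/]+)", local.invoke_url)[0]

  # alternatively, replace() works when the scheme is always https://
  # fqdn = replace(local.invoke_url, "https://", "")
}

resource "aws_route53_health_check" "example" {
  fqdn = local.fqdn
  port = 80
  type = "HTTP"
}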
Jonathan Eid avatar
Jonathan Eid

heyy lowww everyone

Jonathan Eid avatar
Jonathan Eid

I had a question about the cloudposse/elasticsearch/aws terraform module

Does it support provisioning a cluster on the nvme ssds of the i3.2xlarge instances?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "instance_type" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Supported instance types in Amazon OpenSearch Service - Amazon OpenSearch Service

Amazon OpenSearch Service supports the following instance types. Not all Regions support all instance types. For availability details, see Amazon OpenSearch Service pricing .

Jonathan Eid avatar
Jonathan Eid

yes I see that, but those instances take some extra steps to mount the NVMe SSD they come with onto the EC2 instance, and then it’s another step to make sure the Elasticsearch cluster is actually using the NVMe

Jonathan Eid avatar
Jonathan Eid

I wonder if the Terraform module already did the heavy lifting in that area

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use this working example https://github.com/cloudposse/terraform-aws-elasticsearch/tree/master/examples/complete, specify the instance type and test

Jonathan Eid avatar
Jonathan Eid

Nah it doesn’t, can I add that as a feature request somewhere?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, you can open an issue anytime

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what does it not do? When you add the instance type, does it throw any errors?

Jonathan Eid avatar
Jonathan Eid

never mind, I had this project confused with something else. This spins up the official AWS Elasticsearch service, not a bare-metal Elasticsearch on EC2

Wédney Yuri avatar
Wédney Yuri

Hi there, what do you do when you need something that hasn’t been implemented in provider terraform-provider-aws yet? I’m missing this merge request https://github.com/hashicorp/terraform-provider-aws/pull/21766

RB avatar

we also accept prs to the cloudposse aws utils provider https://registry.terraform.io/providers/cloudposse/utils/latest

Wédney Yuri avatar
Wédney Yuri

Thanks @jose.amengual, I didn’t know about awscc! Took a look, but it does not yet support WAFv2 web ACLs.

jose.amengual avatar
jose.amengual

it’s worth a try

Wédney Yuri avatar
Wédney Yuri

Yes, looks promising!

Wédney Yuri avatar
Wédney Yuri

@RB I’m not sure it would make sense to add functionality to this provider that will at some point be available inside terraform-provider-aws. Do you think it would fit this use case?

Alex Jurkiewicz avatar
Alex Jurkiewicz

The PR might take months or years before it’s merged. It might never get merged – there are a lot of good PRs that languish forever because they are too complex and there’s not enough demand.

Because of this, I think the only reasonable approach is to either manage that resource attribute yourself (if you can use lifecycle { ignore_changes }), or completely eject from Terraform for this resource. For example, there might be support in CloudFormation. And as a worst case, clickops it.

Any other approach is going to bitrot over time. And IMO it’s worse to deal with exotic bitrot than to deal with documented clickops.

1
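
A minimal sketch of the first option, managing the attribute out-of-band and ignoring its drift (the resource and the ignored block are illustrative; the actual attribute depends on the linked PR):

resource "aws_wafv2_web_acl" "example" {
  name  = "example-acl"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example-acl"
    sampled_requests_enabled   = true
  }

  lifecycle {
    # assumption: the unsupported piece is managed outside Terraform,
    # so ignore drift on the rule blocks
    ignore_changes = [rule]
  }
}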
Wédney Yuri avatar
Wédney Yuri

Good point, so I think I’ll configure the WAF via Cloudformation.

Do you think it would be valid in this case to manage the “aws_cloudformation_stack” through terraform?
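
A minimal sketch of that approach (the template is an illustrative bare-bones web ACL, not the attribute from the linked PR):

resource "aws_cloudformation_stack" "waf" {
  name = "wafv2-managed-by-cfn"

  # assumption: extend Properties with whatever the AWS provider does not support yet
  template_body = jsonencode({
    Resources = {
      WebAcl = {
        Type = "AWS::WAFv2::WebACL"
        Properties = {
          Name          = "example-acl"
          Scope         = "REGIONAL"
          DefaultAction = { Allow = {} }
          VisibilityConfig = {
            SampledRequestsEnabled   = true
            CloudWatchMetricsEnabled = true
            MetricName               = "example-acl"
          }
        }
      }
    }
  })
}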

2022-03-30

Ross Hettel avatar
Ross Hettel

hi all, using the cloudposse rds-cluster module and running into some issues trying to perform a major engine version upgrade. Made a bug ticket here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/134, but the tl;dr is that when the plan is applied, AWS returns this error:

Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.

I don’t believe I have the ability to set that parameter group name (or even use the default), so I’m at a loss on the workaround here

RB avatar
  name_prefix = "${module.this.id}${module.this.delimiter}"
RB avatar

perhaps you need to modify the cluster family input var to upgrade the db?

Ross Hettel avatar
Ross Hettel

yeah, I saw that it’s creating a param group, and the lifecycle is even set to create-before-destroy, so I’m not sure why

Ross Hettel avatar
Ross Hettel

i did set cluster family input as well as engine version

Ross Hettel avatar
Ross Hettel

can post my module config and terraform plan if that’s helpful

RB avatar

yes please create an issue with a reproducible example and then link to it here

Ross Hettel avatar
Ross Hettel

i created an issue here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/134 i’ll add a comment with my terraform plan output shortly

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Attempting to do a major version upgrade of an Aurora Postgres instance from 11.13 to 12.9, on module version 0.50.2 with AWS provider 3.63.0. Below is my module config:

module "postgres" {
  source                      = "cloudposse/rds-cluster/aws"
  version                     = "0.50.2"
  name                        = "api-db"
  engine                      = "aurora-postgresql"
  cluster_family              = "aurora-postgresql12"
  engine_version              = "12.9"
  allow_major_version_upgrade = true
  apply_immediately           = true
  cluster_size                = 1
  admin_user                  = data.aws_ssm_parameter.db_admin_user.value
  admin_password              = data.aws_ssm_parameter.db_admin_password.value
  db_name                     = "api"
  db_port                     = 5432
  instance_type               = "db.t3.medium"
  vpc_id                      = var.vpc_id
  security_groups             = concat([aws_security_group.api.id], var.rds_security_group_inbound)
  subnets                     = var.rds_subnets
  storage_encrypted           = true
}

When running apply I get the error:

 Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.

Environment (please complete the following information):

• OS: OSX • Module version: 0.50.2 • AWS provider: 3.63.0

RB avatar

when you run terraform version, what aws provider version are you using?

RB avatar

i see this issue with the aws provider was resolved in version 3.63.0

https://github.com/hashicorp/terraform-provider-aws/issues/17357

Ross Hettel avatar
Ross Hettel

we were using 3.28.0, but saw that issue you linked - upgraded to 3.64.0 and saw the same error message

Ross Hettel avatar
Ross Hettel

just added a comment to my issue with the terraform plan output

Ross Hettel avatar
Ross Hettel

thank you for your help

1
RB avatar

so now you’re using the latest aws version and it’s still giving you the same issue? it’s possible this could be another bug with the provider then

Ross Hettel avatar
Ross Hettel

ah hmm, could be. Not the latest version of the AWS provider, just the one past where that issue was resolved, but I can try later versions of the provider and see if that changes anything
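
For reference, pinning a newer provider for such a test is a small change in the required_providers block (the constraint below is an assumption; set it to whichever release you want to try):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.64.0"
    }
  }
}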

2022-03-31

Alan Kis avatar
Alan Kis

Hi Terraformers, how do you organize your local variables? Do you use a single file, e.g. locals.tf, where you define all local variables, or do you have locals all over the files?

I tend to have a locals.tf at least for the root modules to increase readability, but now I am facing a case where I need to have a locals {} in a separate file, well, to increase readability in conjunction with aws_organizations_policy.

Example as follows, please ignore generic resource names. Should serve just as an example.

locals {
  policy_name = "example-backup-policy" # hypothetical name; the original snippet referenced it without declaring it
  policy_template = templatefile("${path.module}/templates/template.tftpl", {
    {...}
  })
}

resource "aws_organizations_policy" "policy" {
  name    = local.policy_name
  content = local.policy_template
  type    = "BACKUP_POLICY"
}

RB avatar

we just stick it at the top of main.tf

1
RB avatar

sometimes it’s also in other files but most of the time we have 4 files: main, outputs, variables, context

Alan Kis avatar
Alan Kis

I agree with sticking it at the top of the files. For child modules it’s easy to put all the local variables in a separate file; for root modules I would also stick to this pattern, but in some rare cases it breaks the context when a tightly coupled local lives in a separate file.

Trade-offs, trade-offs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I think atop the file where they are used is one of the best places to stick them. When they’re used by multiple files, sticking them in a locals.tf would make sense

Gaurav Kohli avatar
Gaurav Kohli

Hey all, while using the cloudposse terraform-aws-efs module https://github.com/cloudposse/terraform-aws-efs, I have stumbled on an issue and am trying to figure out if there is a workaround for it.

So I have an EFS file system created, and within it I have 5 access points defined (one for each microservice, so they have restricted access to subdirectories). Now, if I give the EFS file system a name using the name variable, all the access points also get the same name. From the code it looks like this is because of https://github.com/cloudposse/terraform-aws-efs/blob/master/main.tf#L86, where the access points use the same set of tags as the EFS file system. Wouldn’t it make sense to use “${each.key}” or something dynamic so each access point can have a different name?

cloudposse/terraform-aws-efs
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the tags are the same for all resources created by the module

cloudposse/terraform-aws-efs
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the tags are not related to name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you change name, all resources provisioned by the module will have different IDs/names generated by the label module in the format <namespace>-<environment>-<stage>-<name>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is by design. name is part of tags. Why is this causing a problem?

Gaurav Kohli avatar
Gaurav Kohli

that is correct, but in this case we can create more than one access point for the same EFS file system, and it would be nice to be able to name them differently so they have a logical name. As you can see here, all the access points have the same name, efs-test

Gaurav Kohli avatar
Gaurav Kohli

whereas I would name them data, common, and media if I could
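
A hypothetical sketch of the suggested change inside the module, giving each access point a distinct Name tag (the variable and resource names are illustrative, not the module’s actual code):

resource "aws_efs_access_point" "default" {
  for_each       = var.access_points
  file_system_id = aws_efs_file_system.default.id

  # keep the shared tags but override Name with a per-access-point suffix
  tags = merge(module.this.tags, {
    Name = "${module.this.id}-${each.key}"
  })
}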

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One more consideration, where are the microservices running? e.g. EKS or ECS?
