#terraform (2022-02)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-02-01

Shrivatsan Narayanaswamy avatar
Shrivatsan Narayanaswamy

Hi Team, I would like to add the network ACL ID to the outputs of https://github.com/cloudposse/terraform-aws-dynamic-subnets, so that I could use them to add ACL rules to block and unblock IPs. I would like to know the procedure for making my contributions to Cloud Posse modules.

GitHub - cloudposse/terraform-aws-dynamic-subnets: Terraform module for public and private subnets provisioning in existing VPC

Terraform module for public and private subnets provisioning in existing VPC - GitHub - cloudposse/terraform-aws-dynamic-subnets: Terraform module for public and private subnets provisioning in exi…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Shrivatsan Narayanaswamy thanks. Fork the repo, make the changes, then run from the root of the repo

make init
make github/init
make readme

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then open a PR against our repo, we’ll review, approve and merge

Simon avatar

We currently don’t really use Terraform dynamically. We’re just using a template to deploy our instances; we destroy via the console atm.

I want to upgrade our controller from v0.12 to v1.1 and going through the gradual updates doesn’t really make sense since we don’t really care about their states. Would uninstalling 0.12 and installing 1.1 pose any issues? Like any syntax changes that would require our deployment template to change too?

Matt Gowie avatar
Matt Gowie

Between 0.12 => 1.1 there are very few changes to the syntax. The only ones that I can think of off the top of my head are the map and list functions, which were replaced by tomap and tolist. https://www.terraform.io/language/functions/map https://www.terraform.io/language/functions/list

There likely are others, but that is one of the main ones. In all upgrades of this nature you typically just need to go for it and then work through the errors that show up in your code from the new version.

1
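For reference, a minimal sketch of that rename (values illustrative):

# Terraform 0.12 style -- list() and map() were deprecated in 0.12 and removed in 0.13:
locals {
  azs  = list("us-east-1a", "us-east-1b")
  tags = map("Team", "platform")
}

# Terraform 1.x equivalents:
locals {
  azs  = tolist(["us-east-1a", "us-east-1b"])
  tags = tomap({ Team = "platform" })
}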
Matt Gowie avatar
Matt Gowie

That said: of course you should start managing your infra via TF. You’re only getting about 30% of the benefit TF has to offer by just creating the infrastructure.

Simon avatar

Hope to eventually lol

julie avatar

I highly, highly recommend using these upgrade guides. There are breaking changes to look out for: https://www.terraform.io/language/upgrade-guides/0-13

1
julie avatar

Terraform supports upgrade tools and features only for one major release upgrade at a time, so if you are currently using a version of Terraform prior to v0.14 please upgrade through the latest minor releases of all of the intermediate versions first, reviewing the previous upgrade guides for considerations relevant to your use case.

julie avatar

I work in support and see many sad users who did YOLO upgrades; their infra and state files are spaghetti.

julie avatar

Reading your comment again, you are not concerned with state since you use a manual destroy process. Hmm. Well, it’s still good practice to use the upgrade guides in case you plan to enhance your Terraform usage and fully commit to infra-as-code.

Justin Smith avatar
Justin Smith

I’m attempting to incorporate the Cloudposse terraform-aws-iam-s3-user module into a module that I’m writing. After I add it and attempt to run a sample scenario to try it out, Terraform throws the error: The argument "region" is required, but was not set. However, region is, in fact, set in the AWS provider, and if I comment out the terraform-aws-iam-s3-user module in the code, the error goes away. I’m mystified.

1
Justin Smith avatar
Justin Smith

I didn’t check the release notes for iam-system-user to see the breaking change. Doy.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

awsutils provider is required now
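A minimal sketch of declaring it (the source address is the Cloud Posse registry namespace; the version constraint is hypothetical):

terraform {
  required_providers {
    awsutils = {
      source  = "cloudposse/awsutils"
      version = ">= 0.11.0" # hypothetical constraint
    }
  }
}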

Leon Garcia avatar
Leon Garcia

hello, has anyone seen this issue with the terraform-aws-ec2-client-vpn module? Basically at the end of the apply I get

 InvalidClientVpnEndpointId.NotFound Endpoint <id> does not exist

I confirmed that the endpoint was created. I am using an existing VPC, so I am just passing the information of my current VPC and the client CIDRs. I was not able to track down the issue but it seems related to this resource:

data "awsutils_ec2_client_vpn_export_client_config" "default" {
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this happens when you are missing a provider configuration for the awsutils provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think you need to have two providers

provider "aws" {
  region = var.region
}

provider "awsutils" {
  region = var.region
}
Leon Garcia avatar
Leon Garcia

thanks for your reply, I have this:

provider "awsutils" {
  region = "us-east-2"
}

which is the region I am using

Leon Garcia avatar
Leon Garcia

I have both

Leon Garcia avatar
Leon Garcia

both pointing to the same region

Leon Garcia avatar
Leon Garcia

ahh OK, found it. Thanks for the hint. I am assuming roles on the aws provider, so I just added the same assume role to the awsutils provider and it seems to work

1
this1
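A minimal sketch of that fix, assuming the awsutils provider accepts the same assume_role block as the AWS provider (the role ARN is hypothetical):

provider "aws" {
  region = "us-east-2"
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terraform" # hypothetical role
  }
}

provider "awsutils" {
  region = "us-east-2"
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terraform" # same role as the aws provider
  }
}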
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Glad that helped!

Mike Crowe avatar
Mike Crowe

Hi folks, quick tfstate-backend questions:
• I’m assuming you run thru the process to initialize a new backend for each environment, right? So run once for root, once for dev, once for prod?
• When you create a new component, do you need to copy the backend.tf file from the tfstate-backend folder into the new component, or is there a more direct process? I think, but I’m not sure, I’m seeing actual local state in a terraform.tfstate.d folder

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Negative. We generally provision a single backend bucket and use workspace key prefixes instead.
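A minimal sketch of the single-bucket approach using the S3 backend’s workspace_key_prefix setting (bucket, table, and prefix names hypothetical):

terraform {
  backend "s3" {
    bucket               = "acme-tfstate"      # hypothetical shared state bucket
    key                  = "terraform.tfstate"
    workspace_key_prefix = "my-component"      # hypothetical per-component prefix
    region               = "us-east-1"
    dynamodb_table       = "acme-tfstate-lock" # hypothetical lock table
  }
}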

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If it’s a must, use the first tfstate backend as the backend state for all other state buckets, like a tree

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s supported by atmos too

Muhammad Badawy avatar
Muhammad Badawy

You can use #Terragrunt to run all the backends in one go and avoid repeating pieces of code.

Mike Crowe avatar
Mike Crowe

I haven’t gotten there yet, but I was curious how the subaccounts access the root account’s bucket/DynamoDB tables. Or is TF smart enough to use different credentials for the state than you are using in the current plan?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Backends support role assumption

Mike Crowe avatar
Mike Crowe

@Erik Osterman (Cloud Posse) – we are using Control Tower to manage the accounts, so all our administrator/user roles have been auto set up (so I was trying to avoid using iam-primary-roles/iam-delegated-roles if possible). If I have both profiles active in Leapp, is it possible to specify a profile for the state that is different from the profile for the component? (Or is that a stupid question because I haven’t wrapped my head around this yet?)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not sure the best way. @Jeremy G (Cloud Posse)?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

btw, have you checked out AFT yet?

Mike Crowe avatar
Mike Crowe

AFT? No, because I don’t recognize that acronym

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Role or profile for backend can be specified in backend.tf. In our case we always set it to ...-root-gbl-terraform. Role or profile for provisioning AWS resources is set in providers.tf in the AWS provider configuration. We usually set it to <account>-gbl-terraform, although there are exceptions (see privileged=true and account-map/modules/iam-roles)

We create a single state bucket in the root account, accessed via the ...root-gbl-terraform role and a separate Terraform role (with Admin privileges, since it has to be able to create IAM roles) in each stage account.

We log in via Leapp and set AWS_PROFILE to a role in the identity account that is allowed to assume the various terraform roles. Then everything else happens automatically.

1
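A minimal sketch of that split (account IDs, bucket names, and role names hypothetical):

# backend.tf -- state is always read/written via the root account role
terraform {
  backend "s3" {
    bucket         = "acme-root-tfstate"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    role_arn       = "arn:aws:iam::111111111111:role/acme-root-gbl-terraform"
    dynamodb_table = "acme-root-tfstate-lock"
  }
}

# providers.tf -- resources are provisioned via the target account role
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/acme-dev-gbl-terraform"
  }
}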
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

AWS SDK will not go looking for credentials to assume a role ARN. Instead, it will use whatever it considers to be the “current” or “active” credentials to try to assume that role. You can, instead of using Role ARNs, use Profile names, and then the SDK will go looking for credentials as specified in the profile. This way you can log into multiple roles via Leapp and have them used automatically, but it can require extra setup in that you may have to define all the profiles in ~/.aws/config

2022-02-02

jose.amengual avatar
jose.amengual

What is the latest on Terraform and Lambda canary deploys with 10% traffic shifting? Has anyone implemented this?

2
jose.amengual avatar
jose.amengual

no one?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt should have our lambda module coming out soon

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have a code deploy module, but we haven’t tied the two together yet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or are you thinking canaries with API Gateway?

Alex Jurkiewicz avatar
Alex Jurkiewicz

or canaries with lambda alias weighting? There are a lot of options, I’m curious how you can do any of them with Terraform

jose.amengual avatar
jose.amengual

that is the problem

jose.amengual avatar
jose.amengual

there are many ways

jose.amengual avatar
jose.amengual

there is the alias weight, but you seem to need that with CodeDeploy

jose.amengual avatar
jose.amengual

and there is a lambda you can deploy to do this

jose.amengual avatar
jose.amengual

a lambda that controls the weight

jose.amengual avatar
jose.amengual

we are thinking of doing it in TF but using CodeDeploy to do the canary

Alex Jurkiewicz avatar
Alex Jurkiewicz

We tried managing partial deploys with publish = true and aws_lambda_alias in pure Terraform; it was a big failure. Terraform can’t handle publish = true’s side effects, and the alias weights were non-deterministic
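For reference, a minimal sketch of the pure-Terraform alias-weighting approach described here, i.e. the pattern that proved problematic (the function reference is hypothetical):

resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.example.function_name # hypothetical function
  function_version = "1" # stable version

  routing_config {
    additional_version_weights = {
      "2" = 0.1 # shift 10% of invocations to version 2
    }
  }
}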

jose.amengual avatar
jose.amengual

since the artifact is already there

jose.amengual avatar
jose.amengual

ahhh interesting to know

jose.amengual avatar
jose.amengual

so it seems that it will be: deploy once, do not touch Terraform again, and use CodeDeploy

Alex Jurkiewicz avatar
Alex Jurkiewicz

how do you measure if the new lambda is succeeding?

jose.amengual avatar
jose.amengual

there seems to be a CloudWatch alarm you can tie in with the deployment, but I’m unsure yet if it will give you errors based on the versions deployed

jose.amengual avatar
jose.amengual

that CloudWatch alarm is supposed to feed CodeDeploy for automatic rollback

Release notes from terraform avatar
Release notes from terraform
09:03:15 PM

v1.1.5 1.1.5 (February 02, 2022) ENHANCEMENTS: backend/s3: Update AWS SDK to allow the use of the ap-southeast-3 region (#30363) BUG FIXES: cli: Fix crash when using autocomplete with long commands, such as terraform workspace select (#30193)


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I started a huddle

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Manage AWS Accounts Using Control Tower Account Factory for Terraform | Terraform - HashiCorp Learn

Use the AWS Control Tower Account Factory for Terraform to create a pipeline for provisioning and customizing AWS accounts in Control Tower. Create a new account and learn more about AWS Control Tower governance.

2
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’d heard about this after re:Invent but had only spent a fleeting amount of time looking into it. Haven’t looked at implementing in my own CT multi-org deployment


sjl2024 avatar
sjl2024

What benefit does CT offer over and/or with orgs?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

CT is complementary to operating in AWS with orgs. CT is for managing your accounts and provisioning a secure baseline. It makes it very easy to follow the many foundational best practices for organizing AWS accounts with CloudTrail, OUs, SCPs, SecurityHub, etc. Cloud Posse typically provisions all this with terraform rather than CT, but it’s taken us years to build out the modules and tools to be able to do it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, CT itself has no API and therefore no terraform support.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

the new CT AFT is targeted at bridging that gap, allowing you to make use of CT with the ease of git commits and using Terraform. I just haven’t dived too deeply into it yet. As I understand the implementation, it is meant to be deployed after CT and replace the standard method of using CloudFormation through Service Catalog

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this is right, though I believe it is not mutually exclusive with CFT, it’s in addition to.

Mike Crowe avatar
Mike Crowe

CT seems overly complicated to me, and I’m not sure I’d advocate it when you have a reasonable TF solution that manages accounts/roles similarly

sjl2024 avatar
sjl2024

That’s helpful to know as I continue to wade through all of this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Mike Crowe it’s targeting a different demographic / use-case

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

imagine you’re a larger enterprise with a sizable CT presence already. Teams are constantly pushing for using terraform and you want to get out of their way. This is how you can make amends.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You still have all the guardrails of CT and can continue to manage the enterprise the way you have been, but can open up the option of provisioning baseline infra inside of individual accounts with terraform.

2022-02-03

aimbotd avatar
aimbotd

Hey friends. If this is true, should all the appropriate pieces be built in place to support the cluster-autoscaler service? https://github.com/cloudposse/terraform-aws-eks-node-group#input_cluster_autoscaler_enabled. I have mine set to true but I am not actually seeing it in action. I had to go and deploy the chart and set up the role to use the OIDC connection.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TBH, we usually deploy the autoscaler with helm, so this might not be thoroughly tested.

1
sjl2024 avatar
sjl2024

Hey friends. Starting our GitOps journey. Would love to set up AWS-SSO but struggling to understand how it ties into terraform. Does one typically set up AWS-SSO by hand and stick to TF for everything else (IAM roles and resources)? I did see that Cloud Posse has a module named terraform-aws-sso but wasn’t sure how mainstream it is

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can’t represent the AWS SSO connection with your SSO provider in Terraform code.

But you can represent users, groups and permission sets in Terraform. You probably want to

sjl2024 avatar
sjl2024

Okay, that makes sense. So basically we’ll need to set up some initial SSO configuration, partly on the IDP side, before being able to dive in. Thanks!

Alex Jurkiewicz avatar
Alex Jurkiewicz

you don’t normally need to specify users or groups. I mentioned it only for completeness. I meant to say “you probably want to create permission sets with Terraform or some similar tool”

1
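A minimal sketch of managing a permission set in Terraform (the set name and attached policy are hypothetical):

data "aws_ssoadmin_instances" "this" {}

resource "aws_ssoadmin_permission_set" "readonly" {
  name         = "ReadOnly" # hypothetical
  instance_arn = tolist(data.aws_ssoadmin_instances.this.arns)[0]
}

resource "aws_ssoadmin_managed_policy_attachment" "readonly" {
  instance_arn       = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  managed_policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
  permission_set_arn = aws_ssoadmin_permission_set.readonly.arn
}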
sjl2024 avatar
sjl2024

On a side note, does one generally operate Terraform with an access key pair owned by a root AWS account? Or is it better practice to create TF-specific credentials by hand after the fact and use those?

aimbotd avatar
aimbotd

My company’s and my own method is to have an AWS account with your users. We use a deploy user. We then have n accounts for services. Each service account leverages role-based authentication.

Our deploy user’s credentials are associated with the build pipeline via protected env vars. In our provider, we specify an assume_role block with the desired deploy role.

sjl2024 avatar
sjl2024

Hey, thanks for the response. Sorry if this sounds silly, but what parts must be done by hand before I can go “full” IaC? Does it make sense to create an account dedicated to all the things that are self-managed and needed for terraform? prerequisite roles, buckets for state, dynamodb, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So generally, use of IAM users is discouraged. There are a few exceptions, such as necessary integrations with third-party services which only crudely support AWS credentials. Generally, we’d recommend going with AWS SSO or Federated IAM; however, as newcomers to AWS, these will be a bit more to set up.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you are looking for some guidance, AWS publishes the Well-Architected Framework which is what we generally like to follow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Our deploy user’s credentials are associated with the build pipeline via protected env vars.
This is where the answer will depend on your implementation. The ideal way is to avoid any hardcoded ENVs and instead use instance profiles or OIDC. GitHub Actions supports this https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services

1
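On the AWS side, a minimal sketch of registering GitHub’s OIDC issuer so a workflow role can be assumed without static keys (the thumbprint shown is illustrative; look up the current one before use):

resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # illustrative; verify before use
}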
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Does it make sense to create an account dedicated to all the things that are self-managed and needed for terraform?
We wouldn’t recommend this for any business because there’s poor auditability. You want to be able to trace everything back to an individual via CloudTrail logs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


We then have n accounts for services.
What you really want are IAM Roles that are assigned to services (not IAM users). ECS and EKS make this very easy.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(actually, I misread, you’re talking about AWS accounts; yes, you should have multiple AWS accounts broken out depending on how you need to isolate your workloads)

sjl2024 avatar
sjl2024

Awesome. So one aws account for images and logs + IAM roles? Then one account per environment? Get short lived access tokens through SSO, aws-vault, etc. and assume a role when needing to deploy to a specific environment (AWS account)?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We typically have root (top level), audit for all logs, security which operates on audit, identity for federated IAM, network for transit gateways, artifact for ECR and S3 artifacts, dns as the registrar, auto for automation, and then platform level accounts like dev, staging, prod, and sandbox. This is pretty much the minimal number of accounts we would provision.

sjl2024 avatar
sjl2024

Ah. Control Tower. Of course, I replied to a thread about this elsewhere and now it comes full circle. Makes sense. And wow, what a list. A great reference

1
sjl2024 avatar
sjl2024

https://sweetops.slack.com/archives/CB6GHNLG0/p1643949425613539?thread_ts=1643944436.230679&channel=CB6GHNLG0&message_ts=1643949425.613539

So I get the SSO and aws_iam_instance_profile thing for humans. What about service accounts? Imagine in house software that reads from an s3 bucket. I was planning to manage in terraform via IAM user and download the keys to set as env variables. Is there a more secure or compliant way?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aws_iam_instance_profile is for “service accounts” at the EC2 instance level (not humans)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
IAM roles for service accounts - Amazon EKS

You can associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses that service account. With this feature, you no longer need to provide extended permissions to the

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Service-linked role for Amazon ECS - Amazon Elastic Container Service

How to use the service-linked role to give Amazon ECS access to resources in your AWS account.

sjl2024 avatar
sjl2024

Hmm. Ok, so the ecs instance gets the credentials needed automatically

sjl2024 avatar
sjl2024

Or eks, or ec2. I guess for local development one could generate a dedicated key pair

sjl2024 avatar
sjl2024

Or wait, maybe I’m missing the point. Sounds like if one logs in through AWS-SSO, gets the right role, that solves the local development key issue. Nice

1
sjl2024 avatar
sjl2024

Zero to devops in 60 minutes. Really appreciate your time and insight

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


ecs instance
* ECS task

aimbotd avatar
aimbotd

Solid advice here. There’s a good amount to grok, but you’ll capture that through more use.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
(Shameless plug for Cloud Posse: as a DevOps accelerator, this is how we help companies, by quickly coming in and laying the foundation with Terraform and GitOps.) Also, check out <#C01799EFEN7>, our weekly webinar where we answer all kinds of questions.
2
sjl2024 avatar
sjl2024

If the service in question is not currently on AWS, is a programmatic (terraform-managed) IAM user an okay compromise? Hopefully it’s on ECS soon enough

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


If the service in question is not currently on AWS
Yes, this is more often where you might make a compromise. Nonetheless, the same best practices apply surrounding regular key rotation.

1

2022-02-04


2022-02-06

Mike Crowe avatar
Mike Crowe

Can anybody point me to some more details regarding remote-state? I can’t seem to get it to work correctly:
• I can download the state from S3 and see the output I want from the other module (the remote S3 state has the wildcard_certificate_arn output I want)
• NOTE: I have to specify profile in backend.tf.json (due to using Control Tower)
• Error message is:

│ Error: Unsupported attribute
│   on main.tf line 19, in module "saml_cognito":
│   19:   certificate                            = module.dns_delegated.wildcard_certificate_arn
│     │ module.dns_delegated is a object, known only after apply
│ This object does not have an attribute named "wildcard_certificate_arn".

• My remote-state.tf is simply:

module "dns_delegated" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "0.22.0"
  component = "dns-delegated"
}

I think this is borked up because I have to use profile in my backend specification, but I’m not positive.

RB avatar

can you comment out everything in the component and only leave the remote state, then output the entire remote state?

my guess is that you haven’t deployed dns delegated since adding this new output so when you try to use the remote state output in your new component, it fails

RB avatar

usually we use only the role ARN in the backend configuration so it’s more explicit. But I don’t think using either profile or role in the backend would change the above results

Mike Crowe avatar
Mike Crowe

Good idea – main.tf is now blank, and outputs.tf now has:

output "dns_delegated" {
  value = module.dns_delegated
}

Error:

│ Error: stack name pattern must be provided in 'stacks.name_pattern' config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable
│ 
│   with module.dns_delegated.data.utils_component_config.config,
│   on .terraform/modules/dns_delegated/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like atmos.yaml is not correct, or atmos can’t find it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can DM me your code to take a look

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Also, ATMOS_BASE_PATH ENV var needs to be set

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

To the absolute path to the root of the repo

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is what we do in geodesic

cd /localhost/
cd <path to the infrastructure repo on localhost>
export ATMOS_BASE_PATH=$(pwd)
Mike Crowe avatar
Mike Crowe

@Andriy Knysh (Cloud Posse) – found the issue: I needed to add stack to my remote-state.tf file:

  stack = join("-", [var.environment, var.stage])

Does this seem right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mike Crowe it does not sound correct. I’ll look at the code that you DM-ed me

Markus Muehlberger avatar
Markus Muehlberger

Asking the obvious from the initial error: Aren’t you missing an outputs?

module.dns_delegated.outputs.wildcard_certificate_arn

That’s usually how the remote-state modules work. The Terraform error messages sometimes are misleading.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mike Crowe was missing context in the remote state (which provides environment and stage)

module "dns_delegated" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "0.22.0"
  component = "dns-delegated"
  context = module.this.context
}
1
Mike Crowe avatar
Mike Crowe

Correct, that was the root of my problem, however even with that context I’m still having the issue. I think it’s cuz I’m using something similar to the standard setup but not exactly because I’m using control tower for the account management

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it works like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "dns_delegated" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "0.22.0"
  component = "dns-delegated"
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

will call the CloudPosse utils provider (which uses atmos code)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the provider will get the context from context (environment, stage, etc.) + the component name, and using atmos.yaml it will find the stack config for the provided component/environment/stage

Mike Crowe avatar
Mike Crowe

Yeah, I switched to this inside and still have the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then from the stack config, it will get the s3 backend config and call TF remote backend data source to get the outputs for the component

Mike Crowe avatar
Mike Crowe

Is there any way to print the context to make sure I’m getting it in the module correctly?

RB avatar

you can output it

output "context" {
  value = module.this.context
}
Mike Crowe avatar
Mike Crowe

That only works if you can plan. This won’t plan

Mike Crowe avatar
Mike Crowe

I’ve even tried TF_LOG=debug but that doesn’t really show me what is wrong

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the issue you’re facing now, @Mike Crowe?

RB avatar

(you can comment out the malfunctioning resources/data sources and only comment in a specific output)

Mike Crowe avatar
Mike Crowe

At a high level, my core issue is:
• My accounts/environment/roles were set up by Control Tower, so I’m using multiple AWS_PROFILE entries to point to different accounts/roles (I am not using delegated-roles). So root points to the main account (profile=root in backend.tf at the top level) where the state is stored, and dev is the current environment (for the two components above) that I’m developing
• I have one component deployed in a stack (ue1-dev) working fine and storing state remotely in S3, properly in the root S3 bucket/DynamoDB tables just as expected. I’ve downloaded the JSON from this table for this component and it looks perfect
• The second component is attempting to access the first component’s state using the remote-state.tf pattern
• I have fixed all my context statements (as far as I can tell), but I cannot seem to get remote-state.tf working unless I add stack = join("-", [var.environment, var.stage]) (which should not be needed) in the second component. When I do a plan w/o stack=..., I get:

Error: stack name pattern must be provided in 'stacks.name_pattern' config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable

I’ve confirmed (with Andriy’s help) my base_path, and I believe everything is set up correctly in atmos (after all, it’s working properly in the first component). Even setting ATMOS_STACKS_NAME_PATTERN manually in the environment doesn’t resolve the error. Here’s the working remote-state.tf in the second component that I’m using:

module "dns_delegated" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "0.22.0"
  component = "dns-delegated"
  stack = join("-", [var.environment, var.stage])
  context = module.this.context
}

I’m not technically blocked (per se), but it concerns me that something in my setup isn’t quite right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the error above means the utils provider could not find atmos.yaml file

Mike Crowe avatar
Mike Crowe

FYI:

 ✗ . [none] (HOST) infrastructure ⨠ echo $ATMOS_BASE_PATH 
/localhost/Programming/Pinnacle/infrastructure
RB avatar

what’s the output of this:

find . -name atmos.yaml
Mike Crowe avatar
Mike Crowe

It’s in the root of this folder

Mike Crowe avatar
Mike Crowe

To be clear, atmos and the first component work fine and are properly configured. The core issue is the second component accessing the first components remote state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mike Crowe please place it at this path https://github.com/cloudposse/atmos/blob/master/examples/complete/rootfs/usr/local/etc/atmos/atmos.yaml , or in home dir (~/.atmos)

atmos/atmos.yaml at master · cloudposse/atmos

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, etc) - atmos/atmos.yaml at master · cloudposse/atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and test

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the issue is, the utils provider gets called from the components’ folders

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it can’t see the root of the repo anymore

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(and remote-state calls the utils provider to process YAML config to get the remote state for the component in the stack)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it did not find it in /usr/local/etc/atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it did not find it in home dir (~/.atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and it did not find it in the current directory since the current dir is the component folder

Mike Crowe avatar
Mike Crowe

So, it can go in /conf in geodesic?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in geodesic, we put it into rootfs/usr/local/etc/atmos/atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it’s visible to any process from any folder

Mike Crowe avatar
Mike Crowe

Ding, Ding, Ding – we have a winner. Looks like you need atmos.yaml in both the root of your project and at one of the global locations referenced above. Adding it to geodesic at rootfs/usr/local/etc/atmos/ fixed my problem. Thanks @Andriy Knysh (Cloud Posse) @RB for excellent help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need it only in one location, which should be visible to all processes in all folders (the root of the repo is not one of them; although atmos itself worked when atmos.yaml was in the root of the repo b/c you ran atmos commands from the root and it was the “current dir”)

2022-02-07

2022-02-08

Brad Alexander avatar
Brad Alexander

I’m trying to use https://github.com/cloudposse/terraform-aws-datadog-integration and I’d like to set up integrations with multiple aws accounts. I don’t see any specific mention of it in the docs, do multiple instances of this module play well together? anyone have an example?

GitHub - cloudposse/terraform-aws-datadog-integration: Terraform module to configure Datadog AWS integration

Terraform module to configure Datadog AWS integration - GitHub - cloudposse/terraform-aws-datadog-integration: Terraform module to configure Datadog AWS integration

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it should be deployed once per account

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @

msharma24 avatar
msharma24

Hi there,

Neovim with terraformls autocompletion and tflint checking as you type resource configuration. Here is my editor config: https://mukeshsharma.dev/2022/02/08/neovim-workflow-for-terraform.html

ikar avatar

oh my, that looks so great!

1
Bryan Dady avatar
Bryan Dady
07:53:58 PM

Hi @Erik Osterman (Cloud Posse) I just discovered this Inc Mgmt / Opsgenie module and am excited to get it set up for our team.

I haven’t yet found any docs or description of how to think about or use existing_users vs. users.yaml (users).

Fresh off the press: https://github.com/cloudposse/terraform-yaml-config

We’re using YAML more and more to define configuration in a portable format that we use with terraform. This allows us to define that configuration from both local and remote sources (via https). For example, we use it for opsgenie escalations, datadog monitors, SCP policies, etc.

Bryan Dady avatar
Bryan Dady

In practice, is it just a distinction between creating new users vs a data lookup of existing users?


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in short, it is

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but… companies can create users in Opsgenie using SAML

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so those users are “existing” to the terraform code

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to use those users in other code, you need to read them using data source

Bryan Dady avatar
Bryan Dady

Thank you @Andriy Knysh (Cloud Posse) That is exactly our case. An IT team adds users to our Confluence/Jira/Opsgenie (and other apps), but I want to manage the teams, rotations, services, and relationships between which teams ‘own’ which services etc. So in this case, I should use existing_users over users, right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

existing_users will be looked up using the data source

1
Bryan Dady avatar
Bryan Dady

And then just match the username (email address) of each user as their .id in the list of members in the teams. Is that right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we usually do

data "opsgenie_user" "team_members" {
  for_each = local.enabled ? {
    for member in var.members :
    member.user => member
  } : {}

  username = each.key
}

module "members_merge" {
  source  = "cloudposse/config/yaml//modules/deepmerge"
  version = "0.8.1"

  count = local.enabled ? 1 : 0

  maps = [
    # Existing members
    data.opsgenie_user.team_members,
    # New members
    local.members
  ]

}

module "team" {
  source  = "cloudposse/incident-management/opsgenie//modules/team"
  version = "0.15.0"

  team = {
    name           = module.this.name
    description    = var.description
    members        = try(module.members_merge[0].merged, [])
    ignore_members = var.ignore_team_members
  }

  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform-opsgenie-incident-management/main.tf at master · cloudposse/terraform-opsgenie-incident-management

Terraform module to provision Opsgenie resources from YAML configurations using the Opsgenie provider,, complete with automated tests - terraform-opsgenie-incident-management/main.tf at master · cl…

Josh B. avatar
Josh B.

Can we get an example for ordered cache on the module https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/tree/master/examples/complete? I could be wrong, but it seems to require some info that should be optional. I could also very well be doing it wrong. I think it’s mainly the function ARN, which I am not using (no Lambda or cloud function at all), so I’m confused a tad.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe there’s no way to have optional parameters as of Terraform 1.1 without using the experimental default(...) function, which is not advisable yet for production. Maybe try passing null to the parameters that aren’t needed.

Josh B. avatar
Josh B.

Thanks, @Erik Osterman (Cloud Posse) tried that, but still the same result; no worries appreciate the response!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Yonatan Koren

Yonatan Koren avatar
Yonatan Koren

@Josh B. can you please give me a snippet of what you’re trying to do?

Yonatan Koren avatar
Yonatan Koren

Or a permalink to the lines in variables.tf that you think are problematic?

Josh B. avatar
Josh B.

So I am trying to do something along these lines with ordered cache.

ordered_cache = [
  {
    path_pattern     = "/avatars/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = module.s3_bucket_assets.bucket_domain_name

    trusted_signers    = ["self"]
    trusted_key_groups = []

    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400

    forward_query_string              = false
    forward_header_values             = [""]
    forward_cookies                   = ""
    forward_cookies_whitelisted_names = [""]

    cache_policy_id            = ""
    origin_request_policy_id   = ""
    response_headers_policy_id = ""

    lambda_function_association = [{
      event_type   = "viewer-request"
      include_body = false
      lambda_arn   = ""
    }]

    function_association = [{
      event_type   = "viewer-request"
      function_arn = null
    }]
  }
]
Josh B. avatar
Josh B.

It seems function_arn is the main culprit, but I would need to dig deeper.

Yonatan Koren avatar
Yonatan Koren

@Josh B. in your original message you wrote
I think mainly the function arn which I am not using any sort of lambda or cloud function so confused a tad.
So you’re not using Lambda at all? You should set lambda_function_association and function_association both to []

Josh B. avatar
Josh B.

Correct, no lambda at all. Okay, let me give that a try! (could have sworn it made me add it, but maybe not)

Yonatan Koren avatar
Yonatan Koren

Let me know how it goes

Yonatan Koren avatar
Yonatan Koren

and yes, intuitively you’d just want to omit anything having to do with Lambda if you’re not using Lambda. However, as Erik mentioned, default(), aka optional object attributes, is still opt-in only (I was surprised they didn’t throw it into Terraform 1.0.0 as fully supported out of the box).

Yonatan Koren avatar
Yonatan Koren

If it were enabled by default in Terraform, then what we’d do is give those aforementioned variables a default of [], so users with use cases such as yours just wouldn’t have to think about those variables and wouldn’t run into this problem.

Josh B. avatar
Josh B.

Ahh, okay, that makes total sense. Thanks for the explanation. So I think that for sure fixed this issue; I think I am running into something like https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/188 but can work through it now. Thanks so much for the help Cloudposse

1
np1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Yonatan Koren

np1
Yonatan Koren avatar
Yonatan Koren

@Josh B.
I think I am running into something like https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/188 but can work through it now
Yes going to try to merge some of these open PRs soon

1

2022-02-09

Grubhold avatar
Grubhold

Hi folks, any idea why I’m suddenly getting this error with the WAF module? It has been working fine and no changes were made:

│ Error: InvalidParameter: 1 validation error(s) found.
│ - minimum field size of 20, AssociateWebACLInput.ResourceArn.
│ 
│ 
│   with module.dk_waf.aws_wafv2_web_acl_association.default[1],
│   on modules/aws-waf/main.tf line 1, in resource "aws_wafv2_web_acl_association" "default":
│    1: resource "aws_wafv2_web_acl_association" "default" {
Grubhold avatar
Grubhold

This is the part it’s referring to:

resource "aws_wafv2_web_acl_association" "default" {
  count = module.this.enabled && length(var.association_resource_arns) > 0 ? length(var.association_resource_arns) : 0

  resource_arn = var.association_resource_arns[count.index]
  web_acl_arn  = join("", aws_wafv2_web_acl.default.*.arn)
}
RB avatar

maybe the web acl arn input is empty?

justin.dynamicd avatar
justin.dynamicd

ResourceArn is < 20 chars, so either it’s empty or a bad validation rule got added, making your life difficult.

Can you create an output {} and dump the attribute to make sure it’s populated/valid?

2
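A sketch of throwaway debug outputs for that, placed inside the aws-waf module (output names hypothetical):

output "debug_association_resource_arns" {
  value = var.association_resource_arns
}

output "debug_web_acl_arn" {
  value = join("", aws_wafv2_web_acl.default.*.arn)
}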
Grubhold avatar
Grubhold

Thanks for the reply folks, I will check and report.

loren avatar

I am getting the sense that the v4.0.0 release of the AWS provider is imminent… based on comments on issues I’m following, and PR tags… Thought it might be a good time to preview the upgrade guide… https://github.com/hashicorp/terraform-provider-aws/blob/main/website/docs/guides/version-4-upgrade.html.md

terraform-provider-aws/version-4-upgrade.html.md at main · hashicorp/terraform-provider-aws

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

fb-wow1
loren avatar

The changes to the aws_s3_bucket resource are huge, in particular. That’s likely a reasonably big rewrite of most modules using s3 buckets


loren avatar

Ugh, plus needing to import all the new, separate s3 resources

RB avatar

oh boy

RB avatar

maybe we need to set our versions from >= 3.0 to ~> 3.0

1
RB avatar

cc: @Erik Osterman (Cloud Posse)

loren avatar

I wouldn’t, in reusable modules. I’d instead encourage users to do that in their root config

jose.amengual avatar
jose.amengual

ohhhh my……those are big changes

jose.amengual avatar
jose.amengual

tons of refactor in many resources

loren avatar
Release v4.0.0 · hashicorp/terraform-provider-aws

BREAKING CHANGES: data-source/aws_connect_hours_of_operation: The hours_of_operation_arn attribute is renamed to arn (#22375) resource/aws_batch_compute_environment: No compute_resources configura…

1
loren avatar
Terraform AWS Provider 4.0 Refactors S3 Bucket Resource

Version 4.0 of the HashiCorp Terraform AWS provider brings usability improvements to data sources and attribute validations along with a refactored S3 bucket resource.

1
jose.amengual avatar
jose.amengual

ohhhhh my…..I wasn’t expecting this the NEXT DAY……

1
mrwacky avatar
mrwacky


These changes, along with other minor updates, are aimed at simplifying your configurations and improving the overall experience of using the Terraform AWS provider.

But at what cost? AT WHAT COST

jose.amengual avatar
jose.amengual

The cost of making us indispensable

mrwacky avatar
mrwacky

haha. yay, I guess.

2022-02-10

matt avatar

Just wanted to give everyone a heads-up that HashiCorp just released a new major version (v4.0.0) of the Terraform AWS provider within the last hour. There are a number of breaking changes. If you happen to see anything that was previously working that is no longer working, please let us know. You can pin the provider version to v3 to work around the issue for now.

3
2
2
1
2
matt avatar

You can pin the provider version by updating versions.tf in the component to:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
Kirill I. avatar
Kirill I.

Hello everybody. How can I fix this:

╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc_peering_cross_account.data.aws_route_table.accepter[0],
│   on .terraform/modules/vpc_peering_cross_account/accepter.tf line 67, in data "aws_route_table" "accepter":
│   67: data "aws_route_table" "accepter" {
╵

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform-aws-vpc-peering-multi-account/examples/vpc-only at master · cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - terraform-aws-vpc-peering-multi-account/examples/vpc-only at master · cloudposse…

Kirill I. avatar
Kirill I.

Do I understand right that I should define VPCs and subnets explicitly? And do this with your modules?

Kirill I. avatar
Kirill I.

Thank you for the link, but without any additional comments it is not really helpful

Kirill I. avatar
Kirill I.

I did as described in the registry:

Provision Instructions Copy and paste into your Terraform configuration, insert the variables, and run terraform init:

module "vpc-peering-multi-account" {
  source  = "cloudposse/vpc-peering-multi-account/aws"
  version = "0.5.0"
  # insert the 4 required variables here
}

but it doesn’t work

Kirill I. avatar
Kirill I.

I put 4 required variables

Kirill I. avatar
Kirill I.

What am I doing wrong?

Kirill I. avatar
Kirill I.

Do I understand right that your example is for one region? I need cross-region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use diff providers for the regions

provider "aws" {
  region = var.second_region
  alias  = "second"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use the provider for the resources in that region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform-aws-vpc-peering-multi-account/main.tf at master · cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - terraform-aws-vpc-peering-multi-account/main.tf at master · cloudposse/terraform…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you provide requester_region and accepter_region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and role ARNs for the accepter and requester

Kirill I. avatar
Kirill I.

I did exactly this way

Kirill I. avatar
Kirill I.

And now I got this error. Everything is working except peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kirill I. if you share the code (in Slack or by email), we’ll take a look (it’s difficult to say anything w/o looking at the code)

Kirill I. avatar
Kirill I.

variable "accepter_region" {
  default = "us-west-1"
}

variable "requester_region" {
  default = "eu-west-2"
}

terraform {
  required_providers {
    aws = {
      version = "3.74.1"
    }
  }
}

provider "aws" {
  # London
  region = "eu-west-2"
}

provider "aws" {
  alias = "eu-west-2"
  # London
  region = "eu-west-2"
}

provider "aws" {
  alias = "us-west-1"
  # California
  region = "us-west-1"
}

# Create VPC in us-west-1
resource "aws_vpc" "vpc_uswest" {
  provider             = aws.us-west-1
  cidr_block           = "10.10.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Create VPC in eu-west-2
resource "aws_vpc" "vpc_euwest" {
  provider             = aws.eu-west-2
  cidr_block           = "10.11.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_iam_role" "test_role" {
  name = "cross_region_vpc_peering"

  # Terraform's "jsonencode" function converts a
  # Terraform expression result to valid JSON syntax.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        "AWS" : "arn:aws:iam::<account-id>:user/kiril"
      }
    }]
  })
}

module "vpc-peering-multi-account" {
  source = "cloudposse/vpc-peering-multi-account/aws"
  // version = "0.5.0"
  # insert the 4 required variables here

  accepter_region               = var.accepter_region
  requester_aws_assume_role_arn = aws_iam_role.test_role.arn
  requester_region              = var.requester_region
}

resource "aws_subnet" "subnet_1_us" {
  provider = aws.us-west-1
  # availability_zone = element(data.aws_availability_zones.azs.names, 0)
  vpc_id     = aws_vpc.vpc_uswest.id
  cidr_block = "10.10.1.0/24"
}

resource "aws_subnet" "subnet_1_eu" {
  // provider       = aws.eu-west-2
  vpc_id            = aws_vpc.vpc_euwest.id
  cidr_block        = "10.11.1.0/24"
  availability_zone = "eu-west-2a"
}

resource "aws_route_table" "route_table_us" {
  provider = aws.us-west-1
  vpc_id   = aws_vpc.vpc_uswest.id
  // route {
  //   cidr_block = "0.0.0.0/0"
  //   gateway_id = aws_internet_gateway.igw-us.id
  // }
  // route {
  //   cidr_block                = "10.11.1.0/24"
  //   vpc_peering_connection_id = aws_vpc_peering_connection.eu-us-peering.id
  // }

  lifecycle {
    ignore_changes = all
  }
  tags = {
    Name = "US-Region-RT"
  }
}

resource "aws_main_route_table_association" "set-us-default-rt-assoc" {
  provider       = aws.us-west-1
  vpc_id         = aws_vpc.vpc_uswest.id
  route_table_id = aws_route_table.route_table_us.id
}

resource "aws_route_table" "route_table_eu" {
  provider = aws.eu-west-2
  vpc_id   = aws_vpc.vpc_euwest.id
  /*
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw-eu.id
  }
  route {
    cidr_block                = "10.0.1.0/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.eu-us-peering.id
  }
  */
  lifecycle {
    ignore_changes = all
  }
  tags = {
    Name = "EU-Region-RT"
  }
}

resource "aws_main_route_table_association" "set-eu-default-rt-assoc" {
  provider       = aws.eu-west-2
  vpc_id         = aws_vpc.vpc_euwest.id
  route_table_id = aws_route_table.route_table_eu.id
}
Kirill I. avatar
Kirill I.

terraform plan

╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.accepter[0],
│   on .terraform/modules/vpc-peering-multi-account/accepter.tf line 67, in data "aws_route_table" "accepter":
│   67: data "aws_route_table" "accepter" {
╵
╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.accepter[1],
│   on .terraform/modules/vpc-peering-multi-account/accepter.tf line 67, in data "aws_route_table" "accepter":
│   67: data "aws_route_table" "accepter" {
╵
╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.requester[1],
│   on .terraform/modules/vpc-peering-multi-account/requester.tf line 121, in data "aws_route_table" "requester":
│   121: data "aws_route_table" "requester" {
╵
╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.requester[0],
│   on .terraform/modules/vpc-peering-multi-account/requester.tf line 121, in data "aws_route_table" "requester":
│   121: data "aws_route_table" "requester" {
╵
╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.requester[2],
│   on .terraform/modules/vpc-peering-multi-account/requester.tf line 121, in data "aws_route_table" "requester":
│   121: data "aws_route_table" "requester" {
╵

Kirill I. avatar
Kirill I.

@Andriy Knysh (Cloud Posse) any thoughts?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when we deploy this module, we usually have the VPCs already provisioned

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think it’s not possible to provision the VPCs at the same time as the VPC peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to separate the VPCs into a separate component and provision it first

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use remote state to get the VPC IDs and provide them to the peering component
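A minimal sketch of that pattern with the plain terraform_remote_state data source (bucket, key, and output names hypothetical):

data "terraform_remote_state" "vpcs" {
  backend = "s3"
  config = {
    bucket = "acme-tfstate"           # hypothetical state bucket
    key    = "vpcs/terraform.tfstate" # hypothetical state key
    region = "us-east-1"
  }
}

module "vpc_peering_cross_account" {
  source = "cloudposse/vpc-peering-multi-account/aws"
  # ... other required variables as in the example below

  requester_vpc_id = data.terraform_remote_state.vpcs.outputs.requester_vpc_id
  accepter_vpc_id  = data.terraform_remote_state.vpcs.outputs.accepter_vpc_id
}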

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, in your code

module "vpc-peering-multi-account" {
  source  = "cloudposse/vpc-peering-multi-account/aws"
//  version = "0.5.0"
  # insert the 4 required variables here
    accepter_region                          = var.accepter_region
    requester_aws_assume_role_arn             = aws_iam_role.test_role.arn
    requester_region                          = var.requester_region
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I don’t see

requester_vpc_id = var.requester_vpc_id
accepter_vpc_id  = var.accepter_vpc_id
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t provide the VPC IDs, then that’s the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you just omitted it in the code snippet, try to provision the VPCs first (either in a separate component, or using terraform plan/apply --target=xxx)
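That is, something along these lines (a sketch; every resource the module's data sources read, such as the subnets and route tables, would need to exist before the full apply):

# create the underlying network first, one -target per dependency
terraform apply -target=aws_vpc.vpc_uswest -target=aws_vpc.vpc_euwest
# then run a full apply for the peering
terraform apply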

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

take a look at the example, all variables need to be provided to the module https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account/blob/master/examples/complete/main.tf

provider "aws" {
  region = var.region
}

module "vpc_peering_cross_account" {
  source = "../../"

  requester_aws_assume_role_arn             = var.requester_aws_assume_role_arn
  requester_region                          = var.requester_region
  requester_vpc_id                          = var.requester_vpc_id
  requester_allow_remote_vpc_dns_resolution = var.requester_allow_remote_vpc_dns_resolution

  accepter_enabled                         = var.accepter_enabled
  accepter_aws_assume_role_arn             = var.accepter_aws_assume_role_arn
  accepter_region                          = var.accepter_region
  accepter_vpc_id                          = var.accepter_vpc_id
  accepter_allow_remote_vpc_dns_resolution = var.accepter_allow_remote_vpc_dns_resolution

  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to provide a role (which could specify the same account or a different account), a region, and a VPC ID, for both the accepter and the requester

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kirill I.

Kirill I. avatar
Kirill I.

I looked at the GitHub page and set only those settings which were listed as mandatory.

Kirill I. avatar
Kirill I.

Does your module support the depends_on directive?
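For reference, module-level depends_on has been supported since Terraform 0.13; a sketch using the resource names from the code above:

module "vpc-peering-multi-account" {
  source = "cloudposse/vpc-peering-multi-account/aws"
  # ...

  # make sure the VPCs exist before the module's data sources read them
  depends_on = [
    aws_vpc.vpc_uswest,
    aws_vpc.vpc_euwest,
  ]
}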

Kirill I. avatar
Kirill I.

Can I use explicit values instead of variables?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module accepts the variables as shown in the example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to provide all those variables for the module to work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have all the info in your code; you just did not provide some vars to the module (e.g. the VPC IDs were not provided)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Add the VPC ID variables to the module, with the values from the VPCs that are already configured in your code

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kirill I. can you add the variables to the module (check which one is the accepter and which one is the requester, the code below is an example), and test it

requester_vpc_id = aws_vpc.vpc_uswest.id
accepter_vpc_id  = aws_vpc.vpc_euwest.id
Kirill I. avatar
Kirill I.

sure will do and get back to you

Kirill I. avatar
Kirill I.

│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.accepter[0],
│   on .terraform/modules/vpc-peering-multi-account/accepter.tf line 67, in data "aws_route_table" "accepter":
│   67: data "aws_route_table" "accepter" {
│
╵
╷
│ Error: error reading EC2 VPC: UnauthorizedOperation: You are not authorized to perform this operation.
│   status code: 403, request id: a70a1d6b-15b0-4df2-9f39-4277450eb88d
│
│   with module.vpc-peering-multi-account.data.aws_vpc.requester[0],
│   on .terraform/modules/vpc-peering-multi-account/requester.tf line 99, in data "aws_vpc" "requester":
│   99: data "aws_vpc" "requester" {
│
╵

Kirill I. avatar
Kirill I.

errors anyway

Kirill I. avatar
Kirill I.

fixed the policy and got the same error:

Kirill I. avatar
Kirill I.

│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.accepter[0],
│   on .terraform/modules/vpc-peering-multi-account/accepter.tf line 67, in data "aws_route_table" "accepter":
│   67: data "aws_route_table" "accepter" {
│
╵
╷
│ Error: query returned no results. Please change your search criteria and try again
│
│   with module.vpc-peering-multi-account.data.aws_route_table.requester[0],
│   on .terraform/modules/vpc-peering-multi-account/requester.tf line 121, in data "aws_route_table" "requester":
│   121: data "aws_route_table" "requester" {

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

instead of manually creating all those VPCs, subnets and route tables (prob something is missing when creating all of that manually), I’d recommend using some existing modules to create them, for example as shown here https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account/blob/master/examples/vpc-only/main.tf

provider "aws" {
  region = var.region
}

module "requester_vpc" {
  source     = "cloudposse/vpc/aws"
  version    = "0.21.1"
  cidr_block = "172.16.0.0/16"

  context = module.this.context
}

module "requester_subnets" {
  source               = "cloudposse/dynamic-subnets/aws"
  version              = "0.38.0"
  availability_zones   = var.availability_zones
  vpc_id               = module.requester_vpc.vpc_id
  igw_id               = module.requester_vpc.igw_id
  cidr_block           = module.requester_vpc.vpc_cidr_block
  nat_gateway_enabled  = true
  nat_instance_enabled = false

  context = module.this.context
}

module "accepter_vpc" {
  source     = "cloudposse/vpc/aws"
  version    = "0.21.1"
  cidr_block = "172.17.0.0/16"

  context = module.this.context
}

module "accepter_subnets" {
  source               = "cloudposse/dynamic-subnets/aws"
  version              = "0.38.0"
  availability_zones   = var.availability_zones
  vpc_id               = module.accepter_vpc.vpc_id
  igw_id               = module.accepter_vpc.igw_id
  cidr_block           = module.accepter_vpc.vpc_cidr_block
  nat_gateway_enabled  = true
  nat_instance_enabled = false

  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then the VPC IDs outputs from the modules can be used as inputs to the peering module as shown here https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account/blob/master/examples/complete/main.tf

provider "aws" {
  region = var.region
}

module "vpc_peering_cross_account" {
  source = "../../"

  requester_aws_assume_role_arn             = var.requester_aws_assume_role_arn
  requester_region                          = var.requester_region
  requester_vpc_id                          = var.requester_vpc_id
  requester_allow_remote_vpc_dns_resolution = var.requester_allow_remote_vpc_dns_resolution

  accepter_enabled                         = var.accepter_enabled
  accepter_aws_assume_role_arn             = var.accepter_aws_assume_role_arn
  accepter_region                          = var.accepter_region
  accepter_vpc_id                          = var.accepter_vpc_id
  accepter_allow_remote_vpc_dns_resolution = var.accepter_allow_remote_vpc_dns_resolution

  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use the first example as one component and provision it first

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use remote state to get the VPC ID outputs and provision the peering component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(the issue could be: 1) something is missing when creating all of the VPC resources; 2) Terraform can’t provision the peering and the VPCs in one step (so either provision the two components separately, or use -target); or 3) both)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this terratest https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account/blob/master/test/src/examples_complete_test.go provisions the first component to create the VPCs/subnets/route tables, then provisions the second component to create the peering connection (and it’s a working test that deploys the resources on AWS)

package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

// Test the Terraform module in examples/complete using Terratest.
func TestExamplesComplete(t *testing.T) {
	terraformVpcOnlyOptions := &terraform.Options{
		// The path to where our Terraform code is located
		TerraformDir: "../../examples/vpc-only",
		Upgrade:      true,
		// Variables to pass to our Terraform code using -var-file options
		VarFiles: []string{"fixtures.us-east-2.tfvars"},
		Targets: []string{"module.requester_vpc", "module.requester_subnets", "module.accepter_vpc", "module.accepter_subnets"},
	}

	defer func() {
		terraform.Init(t, terraformVpcOnlyOptions)
		terraform.Destroy(t, terraformVpcOnlyOptions)
	}()

	// This will run `terraform init` and `terraform apply` to create VPCs and subnets, required for the test
	terraform.InitAndApply(t, terraformVpcOnlyOptions)
	requesterVpcId := terraform.Output(t, terraformVpcOnlyOptions, "requester_vpc_id")
	acceptorVpcId := terraform.Output(t, terraformVpcOnlyOptions, "accepter_vpc_id")

	terraformOptions := &terraform.Options{
		// The path to where our Terraform code is located
		TerraformDir: "../../examples/complete",
		Upgrade:      true,
		// Variables to pass to our Terraform code using -var-file options
		VarFiles: []string{"fixtures.us-east-2.tfvars"},
		Vars: map[string]interface{}{
			"requester_vpc_id": requesterVpcId,
			"accepter_vpc_id": acceptorVpcId,
		},
	}

	defer terraform.Destroy(t, terraformOptions)

	// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
	terraform.InitAndApply(t, terraformOptions)

	println(terraform.OutputAll(t, terraformOptions))

	// Run `terraform output` to get the value of an output variable
	requesterConnectionId := terraform.Output(t, terraformOptions, "requester_connection_id")
	// Verify we're getting back the outputs we expect
	assert.Contains(t, requesterConnectionId, "pcx-")

	// Run `terraform output` to get the value of an output variable
	acceptorConnectionId := terraform.Output(t, terraformOptions, "accepter_connection_id")
	// Verify we're getting back the outputs we expect
	assert.Contains(t, acceptorConnectionId, "pcx-")

	// Run `terraform output` to get the value of an output variable
	acceptorAcceptStatus := terraform.Output(t, terraformOptions, "accepter_accept_status")
	// Verify we're getting back the outputs we expect
	assert.Equal(t, "active", acceptorAcceptStatus)

	// Run `terraform output` to get the value of an output variable
	requesterAcceptStatus := terraform.Output(t, terraformOptions, "requester_accept_status")
	// Verify we're getting back the outputs we expect
	assert.Equal(t, "pending-acceptance", requesterAcceptStatus)
}

Zeph avatar

Hi everyone, getting a strange error when trying to import a redis cluster already created with the module into another state (trying to consolidate our environment) and seeing this:

terragrunt import 'module.redis.aws_elasticache_replication_group.default' example-redis

│ Error: Invalid index
│
│   on …/elasticache_redis_cluster.redis/main.tf line 80, in locals:
│   80: elasticache_member_clusters = module.this.enabled ? tolist(aws_elasticache_replication_group.default.0.member_clusters) : []
│     ├────────────────
│     │ aws_elasticache_replication_group.default is empty tuple

Any ideas?

2022-02-11

Vignesh avatar
Vignesh

New Terraform AWS provider version 4.0 was released with some breaking changes.

For example, a few aws_s3_bucket attributes were made read-only.

https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.0.0

Release v4.0.0 · hashicorp/terraform-provider-aws

BREAKING CHANGES: data-source/aws_connect_hours_of_operation: The hours_of_operation_arn attribute is renamed to arn (#22375) resource/aws_batch_compute_environment: No compute_resources configura…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am getting the sense that the v4.0.0 release of the AWS provider is imminent… Based on comments on issues I’m following, and PR tags… Thought it might be a good time to preview the upgrade guide… https://github.com/hashicorp/terraform-provider-aws/blob/main/website/docs/guides/version-4-upgrade.html.md

1
Don avatar

Since 4.0 I’m using https://github.com/cloudposse/terraform-aws-s3-bucket/releases/tag/0.47.0, and even if I set s3_replication_enabled to false I get the following error: 168: for_each = local.s3_replication_rules == null ? [] : local.s3_replication_rules ("This object does not have an attribute named "s3_replication_rules"")

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll have to review and fix the module to support the provider v4. You can also pin the provider in your code to ~> 3

RB avatar
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3"
    }
  }
}
RB avatar

You can pin the provider version by updating versions.tf in the component to:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
Don avatar

has anyone else experienced this?

Tyler Jarjoura avatar
Tyler Jarjoura

+1 I get the same thing

Grummfy avatar
Grummfy

warning, you could have some issues for some people https://github.com/hashicorp/terraform-provider-aws/issues/23110

Community Note

• Please vote on this issue by adding a :+1: reaction to the original issue to help the community and maintainers prioritize this request
• Please do not leave “+1” or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

v1.1.2

Affected Resource(s)

The provider itself

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

provider "aws" {
  region = "us-east-2"

  assume_role {
    role_arn = "<redacted>"
  }
}

Debug Output

Panic Output

Expected Behavior

Terraform should plan and run using the EC2 metadata.

Actual Behavior

| Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│ 
│ Please see <https://registry.terraform.io/providers/hashicorp/aws>
│ for more information about providing credentials.
│ 
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│ 
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on configuration.tf line 11, in provider "aws":
│   11: provider "aws" {

Steps to Reproduce

  1. terraform plan

Important Factoids

Today when switching to v4.0, we discovered we could no longer run Terraform on EC2 instances that use the AWS Instance Metadata service. Running v4.0 locally works fine. But running the same terraform on an EC2 instance (such as for CICD) results in the error shown above.

Rolling back to 3.74.1 fixes the issue and all works as planned.

The instances in question are running both v1 and v2 of the Instance Metadata service.

2022-02-12

mikesew avatar
mikesew

Certification Question: I’m in Canada (Vancouver). Appears my certification-provider is PSI. Are there any physical test-center locations I can take the terraform associate exam, or is it online-proctor-only?

2022-02-14

Rhys Davies avatar
Rhys Davies

hey guys, is there any way to validate a variable based on input from another block? I know it’s not explicitly allowed in a validation block, but I was wondering if there are any patterns for saying something in Terraform effectively similar to this sentence: “If this other argument to my terraform module is true, then this string I am evaluating must not be null or empty”?

loren avatar

there isn’t anything super clean, but you can use this approach: https://github.com/plus3it/terraform-null-validate-list-item/blob/master/main.tf#L1-L13

locals {
  # test that the specified item is in the valid_items list
  is_valid = contains(var.valid_items, var.item)
}

resource "null_resource" "invalid_item" {
  # forces/outputs an error when var.item is invalid
  count = !local.is_valid ? 1 : 0

  triggers = {
    assert_is_valid = local.is_valid == false ? file("ERROR: ${var.name} validation test failed: ${var.item}. Must be one of: ${join(", ", var.valid_items)}") : null
  }
}
Rhys Davies avatar
Rhys Davies

hm interesting, thank you - will play around with it

loren avatar

another idea might be to use an object for the variable type that includes both attributes? then your variable validation block would have access to both values

Rhys Davies avatar
Rhys Davies

That’s so weird I was just taking a break for coffee and was thinking about doing that, grouping the top level arguments together in an object just as I got your message ping

Rhys Davies avatar
Rhys Davies

I think that’s much nicer, gives the API caller a bit more of an idea of what’s going on without having to read descriptions and just looking at how it’s called

loren avatar

heh, great minds!

1
joshmyers avatar
joshmyers

Yeah but objects cannot have default values for particular attributes (for now)

joshmyers avatar
joshmyers

https://www.terraform.io/language/functions/defaults is a thing but requires experimental flags which output warnings on every plan/apply

defaults - Functions - Configuration Language | Terraform by HashiCorp

The defaults function can fill in default values in place of null values.
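For reference, a minimal sketch of the experiment being discussed, assuming Terraform 0.14+ (all names here are illustrative):

terraform {
  # opt in to optional object attributes; this prints the experiment
  # warning on every plan/apply
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name = string
    port = optional(number)
  })
}

locals {
  # defaults() fills null attributes with the given values
  settings = defaults(var.settings, {
    port = 8080
  })
}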

loren avatar

The object and the attributes can have default values, but if the user specifies the object at all, then they must specify all values

joshmyers avatar
joshmyers

Yup which is very annoying

loren avatar

Doesn’t bother me much. I find I prefer the strong typing and improved error messages, and clear and easy documentation

loren avatar

It’ll be nice if the optional experiment makes it GA, of course

Rhys Davies avatar
Rhys Davies

I used the optional type and enabled the experiment for it, so far so good it has not eaten my lunch

1
joshmyers avatar
joshmyers

Aye, but them warnings were

Rhys Davies avatar
Rhys Davies

yeah big yellow blob definitely annoying, luckily most of my terraforming is automated these days so I don’t see it that often

Rhys Davies avatar
Rhys Davies

To give an example of what I’m doing: there are some circumstances where I don’t want to attach a load balancer to my ECS service in the module I’m writing, but if I choose to attach one, I would like to validate that the container name and port are not null.

Rhys Davies avatar
Rhys Davies

I can’t seem to figure out how to write that fairly simple bit of logic. It’s not a huge problem for me; my code works without this validation, and I have some docs written to explain it, but I would like to explore options for constraining the states my module can be configured in.
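A minimal sketch of the object-grouping idea applied to a case like this; the variable shape is illustrative, not from any particular module:

variable "load_balancer" {
  type = object({
    enabled        = bool
    container_name = string
    container_port = number
  })
  default = {
    enabled        = false
    container_name = null
    container_port = null
  }

  validation {
    # when the LB is enabled, the container name and port must be set
    condition = (
      var.load_balancer.enabled == false ||
      (var.load_balancer.container_name != null && var.load_balancer.container_port != null)
    )
    error_message = "When load_balancer.enabled is true, container_name and container_port must be set."
  }
}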

2022-02-15

Alyson avatar

Hi, I’m provisioning an AWS MSK (kafka) cluster using the “terraform-aws-msk-apache-kafka-cluster” module, version 0.8.3.

I noticed that when I set client_tls_auth_enabled = true, the cluster is destroyed and created again on every apply, even without my having made any modification to the Terraform code.

Sorry, I’m not that good at English

Yonatan Koren avatar
Yonatan Koren
# Migration from 0.7.x to 0.8.x

Version `0.8.0` of this module introduces breaking changes that, without taking additional precautions, will cause the MSK
cluster to be recreated.

This is because version `0.8.0` relies on the [terraform-aws-security-group](<https://github.com/cloudposse/terraform-aws-security-group>)
module for managing the broker security group. This changes the Terraform resource address for the Security Group, which will
[cause Terraform to recreate the SG](<https://github.com/hashicorp/terraform-provider-aws/blob/3988f0c55ad6eb33c2b4c660312df9a4be4586b9/internal/service/kafka/cluster.go#L90-L97>). 

To circumvent this, after bumping the module version to `0.8.0` (or above), run a plan to retrieve the resource addresses of
the SG that Terraform would like to destroy, and the resource address of the SG which Terraform would like to create.

First, make sure that the following variable is set:

security_group_description = "Allow inbound traffic from Security Groups and CIDRs. Allow all outbound traffic"


Setting `security_group_description` to its "legacy" value will keep the Security Group from being replaced, and hence the MSK cluster.

Finally, change the resource address of the existing Security Group.

$ terraform state mv "…aws_security_group.default[0]" "…module.broker_security_group.aws_security_group.default[0]"


This will result in an apply that will only destroy SG Rules, but not the SG itself or the MSK cluster.

Yonatan Koren avatar
Yonatan Koren

In a meeting, will respond more later.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Alyson Please post the output of terraform plan

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, what TF state are you using? If you are using local state, it could be wiped out and TF will want to create everything again. You need to use s3 for remote state

this1
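For reference, a minimal s3 backend sketch (bucket and lock-table names are hypothetical):

terraform {
  backend "s3" {
    bucket         = "example-tf-state" # hypothetical bucket
    key            = "msk/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks" # hypothetical lock table
    encrypt        = true
  }
}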
Alyson avatar

@Jeremy G (Cloud Posse)

follow the result of my terraform plan

https://pastebin.com/JzXGy1g7

- Pastebin.com

Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.

Alyson avatar
module "kafka" {
  source      = "cloudposse/msk-apache-kafka-cluster/aws"
  version     = "0.8.3"
  label_order = ["namespace", "name", "stage"]
  //label_key_case = "lower"

  namespace                            = "iot"
  stage                                = "prod"
  name                                 = "msk"
  vpc_id                               = data.aws_vpc.selected.id
  subnet_ids                           = ["subnet-xxxxxxx", "subnet-xxxxxx", "subnet-xxxxxxx"]
  kafka_version                        = "2.6.2"
  number_of_broker_nodes               = 3 # this has to be a multiple of the number of subnet_ids
  client_broker                        = "TLS"
  broker_instance_type                 = "kafka.t3.small"
  broker_volume_size                   = 20
  storage_autoscaling_max_capacity     = 90
  storage_autoscaling_target_value     = "70"
  encryption_in_cluster                = true
  client_tls_auth_enabled              = true // set to true to enable client TLS authentication
  cloudwatch_logs_enabled              = true
  cloudwatch_logs_log_group            = "/aws/msk/iot"
  encryption_at_rest_kms_key_arn       = "arn:aws:kms:us-east-1:xxxxxxxxxx:key/mrk-043a72c1c35549e29a1a92921150f42b"
  enhanced_monitoring                  = "PER_TOPIC_PER_BROKER"
  node_exporter_enabled                = false
  s3_logs_bucket                       = "iot-msk-prod"
  s3_logs_enabled                      = true
  s3_logs_prefix                       = ""
  security_group_description           = "Allow inbound traffic from Security Groups and CIDRs. Allow all outbound traffic"
  security_group_create_before_destroy = true
  //security_group_name = ["msk-iot-prod"]
  //allowed_security_group_ids = [""]
  // security groups to put on the cluster itself
  //associated_security_group_ids = ["sg-XXXXXXXXX", "sg-YYYYYYYY"]
  //zone_id                = "Z14EN2YD427LRQ"
  tags = var.DEFAULT_TAGS
}
Alyson avatar

@Andriy Knysh (Cloud Posse) I’m using TF remote!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

This is the issue, but I wonder if it is an AWS provider bug. What do you think, @Andriy Knysh (Cloud Posse)?

~ client_authentication {
    + tls { # forces replacement }

Terraform thinks that client_authentication has not been set up on the target cluster.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Alyson Have you successfully applied the changes once, so that the cluster was destroyed and recreated with client TLS authentication enabled?

Alyson avatar

@Jeremy G (Cloud Posse)

Yes, “terraform apply” was successfully executed!

Then I ran “terraform plan” again and it said that the cluster would be destroyed and recreated again.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I’m not sure what the issue is, but this code looks inconsistent to me (it might not be related to the issue at all)

dynamic "client_authentication" {
    for_each = var.client_tls_auth_enabled || var.client_sasl_scram_enabled || var.client_sasl_iam_enabled ? [1] : []
    content {
      dynamic "tls" {
        for_each = var.client_tls_auth_enabled ? [1] : []
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Andriy Knysh (Cloud Posse) The code looks OK to me. The tls section is a subsection of client_authentication .

@Alyson We have not tested this code much because we do not have a cluster to test on. Please review this open issue on the Terraform AWS provider and let us know if you think it applies to your situation.



Hi All,


In September Amazon launched an update that allows updating the authentication mechanism in a cluster https://aws.amazon.com/about-aws/whats-new/2021/09/amazon-msk-multiple-authentication-modes-tls-encryption-settings using the update-security API https://docs.aws.amazon.com/msk/latest/developerguide/msk-update-security.html. I tried to update my MSK cluster to enable TLS authentication, but instead of just updating the security mechanism it tries to replace the whole cluster (I don’t want the old cluster to be deleted, just to update the security mechanism)


Any idea why this is happening?


Terraform CLI and Terraform AWS Provider Version provider.terraform-provider-aws_v3.61.0_x5 Terraform/0.15.1


Affected Resource(s) aws_msk_cluster


Expected Behavior Update the security settings from the cluster that already exist


Actual Behavior Try to replace the cluster (delete the current one and create a new one with the given settings)


Steps to Reproduce



1. Create a cluster without any authentication mechanism using terraform

2. Update the aws_msk_cluster resource to add client_authentication

3. Run the terraform file again



I am using the latest provider version. We need this setting to enable authentication in a cluster that already exists but does not have authentication. @rocioemera - This is what this change aims to resolve. I am still waiting on a response back from the AWS MSK team; however I believe I have enough information now to progress my PR. Will try to make these changes over the weekend.

Hi, any progress on this topic? I tried this weekend to update the security settings to enable TLS in my current cluster but I am still getting the destructive behaviour.

Any idea when this will be solved?


2022-02-16

Brent Garber avatar
Brent Garber

Is there a way to basically do AWS IPAM, but just in TF? I.e., when expanding the list of services, they automagically grab the next non-overlapping IP range for their subnets?
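One pattern that gets part of the way there without IPAM is deriving each range from a list index with cidrsubnet, so every new entry gets the next non-overlapping block; a sketch with hypothetical names:

locals {
  services = ["api", "web", "worker"]

  # each service gets its own /24 carved out of a shared /16;
  # appending a service to the list yields the next free block
  service_cidrs = {
    for i, name in local.services :
    name => cidrsubnet("10.0.0.0/16", 8, i)
  }
}

One caveat: removing an item from the middle of the list shifts every later CIDR, so the list is effectively append-only.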

Release notes from terraform avatar
Release notes from terraform
06:53:12 PM

v1.1.6 1.1.6 (February 16, 2022) BUG FIXES: cli: Prevent complex uses of the console-only type function. This function may only be used at the top level of console expressions, to display the type of a given value. Attempting to use this function in complex expressions will now display a diagnostic error instead of crashing. (#30476)…

cli: Prevent complex uses of the console-only `type` function by alisdair · Pull Request #30476 · hashicorp/terraform

The console-only type function allows interrogation of any value&#39;s type. An implementation quirk is that we use a cty.Mark to allow the console to display this type information without the usua…

2022-02-17

Eric Berg avatar
Eric Berg

Do I remember correctly that Cloudposse modules sometimes pull in config from YAML and drop it into the root dir as auto.tfvars.json files? Where do you create those files?

This seems to solve the problem of using variables, for typing and validation of yaml that’s pulled into locals.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
module "yaml_config" {
  source  = "cloudposse/config/yaml"
  version = "0.4.0"

  list_config_local_base_path = path.module
  list_config_paths           = var.service_control_policy_paths

  context = module.this.context
}

data "aws_caller_identity" "this" {}

module "service_control_policies" {
  source = "../../"

  service_control_policy_statements  = module.yaml_config.list_configs
  service_control_policy_description = var.service_control_policy_description
  target_id                          = data.aws_caller_identity.this.account_id

  context = module.this.context
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps

Eric Berg avatar
Eric Berg

I’ll check that out. I got the pattern of pulling in YAML files via locals from you, but this looks easy. I’d like to validate the input YAML files directly — or pass the product of yamldecode(file(xxx)) as a TF variable in the root mod. I generally pull the yaml in the root and pass all or some of what’s in those files as resource or module params

1
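A minimal sketch of the yamldecode-in-locals pattern being described, with a hypothetical file and keys:

locals {
  # hypothetical config file pulled in at the root
  config = yamldecode(file("${path.module}/config.yaml"))
}

module "service" {
  source = "./modules/service" # hypothetical module

  # pass only the pieces the module needs
  name     = local.config.service.name
  replicas = local.config.service.replicas
}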
Eric Berg avatar
Eric Berg

I’m thinking — or recall your having said — that you pull yaml files in via locals and write them to the root dir as auto.tfvars.json files. Does that work?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

our yaml config module supports local files and remote files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for customers who want their configs local, we copy and commit them. it’s not something we use terraform for.

2022-02-18

Zachary Loeber avatar
Zachary Loeber

https://github.com/kube-champ/terraform-operator <– A tf operator that is not simply a front for using tf cloud, could be worth poking at

kube-champ/terraform-operator
Matt Gowie avatar
Matt Gowie

@Jeremy G (Cloud Posse) — Many thanks on these solid migration notes — They were much appreciated. https://github.com/cloudposse/terraform-aws-eks-node-group/blob/780163dacd9c892b64b988077a994f6675d8f56d/MIGRATION.md

Migration to v0.25.0 New Features

With v0.25.0 we have fixed a lot of issues and added several requested features.

• Full control over block device mappings via block_device_mappings
• Ability to associate additional security groups with a node group via associated_security_group_ids
• Ability to specify additional IAM Policies to attach to the node role
• Ability to set whether or not the AmazonEKS_CNI_Policy is attached to the node role
• Ability to provide your own IAM Role for the node group so you have complete control over its settings
• Ability to specify node group placement details via placement
• Ability to enable Nitro Enclaves on Nitro instances
• Ability to configure Terraform create, update, and delete timeouts

We also take advantage of improved AWS support for managed node upgrades. Now things like changing security groups or disk size no longer require a full replacement of the node group but instead are handled by EKS as rolling upgrades. This release includes support for the new update_config configuration that sets limits on how many nodes can be out of service during an upgrade.

See the README for more details.

Breaking changes in v0.25.0

Releases v0.11.0 through v0.20.0 of this module attempted to maintain compatibility, so that no code changes were needed to upgrade and node groups would not likely be recreated on upgrade. Releases between v0.20.0 and v0.25.0 were never recommended for use because of compatibility issues. With the release of v0.25.0 we are making significant, breaking changes in order to bring this module up to current Cloud Posse standards. Code changes will likely be needed and node groups will likely need to be recreated. We strongly recommend enabling create_before_destroy if you have not already, as in general it provides a better upgrade path whenever an upgrade or change in configuration requires a node group to be replaced.

Terraform Version

Terraform version 1.0 is out. Before that, there was Terraform version 0.15, 0.14, 0.13 and so on. The v0.25.0 release of this module drops support for Terraform 0.13. That version is old and has lots of known issues. There are hardly any breaking changes between Terraform 0.13 and 1.0, so please upgrade to the latest Terraform version before raising any issues about this module.

Behavior changes

• Previously, EBS volumes were left with the default value of delete_on_termination, which is true for EKS AMI root volumes. Now the default EBS volume has it set to true explicitly.
• Previously, the Instance Metadata Service v1 (IMDSv1) was enabled by default, which is considered a security risk. Now it is disabled by default. Set metadata_http_tokens_required to false to leave IMDSv1 enabled.
• Previously, a launch template was only generated and used if the specified configuration could only be accomplished by using a launch template. Now a launch template is always generated (unless a launch template ID is provided) and used, and anything that can be set in the launch template is set there rather than in the node group configuration.
• When a launch template is generated, a special security group to allow ssh access is also created if an ssh access key is specified. The name of this security group has changed from previous versions, to be consistent with Cloud Posse naming conventions. This will cause any previously created security group to be deleted, which will require the node group to be updated.
• Previously, if a launch template ID was specified, the instance_types input was ignored. Now it is up to the user to make sure that the instance type is specified in the launch template or in instance_types but not both.
• Did you want to exercise more control over where instances are placed? You can now specify placement groups and more via placement.
• Are you using Nitro instances? You can now enable Nitro enclaves with enclave_enabled.

Input Variable Changes

• enable_cluster_autoscaler removed. Use cluster_autoscaler_enabled instead.
• worker_role_autoscale_iam_enabled removed. Use an EKS IAM role for service account for the cluster autoscaler service account instead, or add the policy back in via node_role_policy_arns.
• source_security_group_ids renamed ssh_access_security_group_ids to reflect that the specified security groups will be given ssh access (TCP port 22) to the nodes.
• existing_workers_role_policy_arns renamed node_role_policy_arns.
• existing_workers_role_policy_arns_count removed (was ignored anyway).
• node_role_arn added. If supplied, this module will not create an IAM role and instead will assign the given role to the node group.
• permissions_boundary renamed to node_role_permissions_boundary.
• disk_size removed. Set custom disk size via block_device_mappings. Default mapping has value 20 GB.
• disk_type removed. Set custom disk type via block_device_mappings. Default mapping has value gp2.
• launch_template_name replaced with launch_template_id. Use data "aws_launch_template" to get the id from the name if you need to.
• launch_template_disk_encryption_enabled removed. Set via block_device_mappings. Default mapping has value true.
• launch_template_disk_encryption_kms_key_id removed. Set via block_device_mappings. Default mapping has value null.
• kubernetes_taints changed from a key-value map of <key> = "<value>:<effect>" to a list of objects to match the resource configuration format.
• metadata_http_endpoint removed. Use metadata_http_endpoint_enabled instead.
• metadata_http_tokens removed. Use metadata_http_tokens_required instead.
• The following optional values used to be string type and are now list(string) type. An empty list is allowed. If the list has a value in it, that value will be used, even if empty, which may not be allowed by Terraform. The list may not have more than one value.

• `ami_image_id`
• `ami_release_version`
• `kubernetes_version`
• `launch_template_id`
• `launch_template_version`
• `ec2_ssh_key` renamed `ec2_ssh_key_name`
• `before_cluster_joining_userdata`
• `after_cluster_joining_userdata`
• `bootstrap_additional_options`
• `userdata_override_base64`
• `kubelet_additional_options` was changed from `string` to `list(string)` but can contain multiple values, allowing you to specify options individually rather than requiring that you join them into one string (which you may still do if you prefer to).

Migration Tasks

In most cases, the changes you need to make are pretty easy.

Review behavior changes and new features

• Do you want node group instance EBS volumes deleted on termination? You can disable that now.
• Do you want Instance Metadata Service v1 available? This module now disables it by default, and EKS and Kubernetes all handle that fine, but you might have scripts that curl the instance metadata endpoint that need it.
• Did you have the “create before destroy” behavior disabled? The migration to v0.25.0 of this module is going to cause your node group to be destroyed and recreated anyway, so take the opportunity to enable it. It will save you an outage some day.
• Were you supplying your own launch template, and stuck having to put an instance type in it because the earlier versions of this module would not let you do otherwise? Well, now you can leave the instance type out of your launch template and supply a set of types via the node group to enable a spot fleet.
• Were you unhappy with the way the IAM Role for the nodes was configured? Now you can configure a role exactly the way you like and pass it in.
• Were you frustrated that you had to copy a bunch of rules from one security group to the node group’s security group? Now you can just …

1
Maximiliano Moretti avatar
Maximiliano Moretti

Hello, team!

Maximiliano Moretti avatar
Maximiliano Moretti

Is somebody using this module? It is broken on my side

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please have a look in the history ;)

Maximiliano Moretti avatar
Maximiliano Moretti

I tried it, let me check again

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am getting the sense that the v4.0.0 release of the AWS provider is imminent… Based on comments on issues I’m following, and PR tags… Thought it might be a good time to preview the upgrade guide… https://github.com/hashicorp/terraform-provider-aws/blob/main/website/docs/guides/version-4-upgrade.html.md

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since 4.0 I’m using https://github.com/cloudposse/terraform-aws-s3-bucket/releases/tag/0.47.0, and even if I set s3_replication_enabled to false I get the following error: 168: for_each = local.s3_replication_rules == null ? [] : local.s3_replication_rules ("This object does not have an attribute named "s3_replication_rules"")

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

New Terraform AWS provider version 4.0 was released with some breaking changes.

For example, a few aws_s3_bucket attributes were made read-only.

https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.0.0

Maximiliano Moretti avatar
Maximiliano Moretti

Thank you @Erik Osterman (Cloud Posse). It fixed the issue!

Maximiliano Moretti avatar
Maximiliano Moretti

you are the best

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:-) glad it was an easy fix.

Maximiliano Moretti avatar
Maximiliano Moretti

Hackerman!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We will be releasing a new module, but it will have breaking changes

Dogers avatar

Is there any chance of getting the module updated prior to that, just saying it needs provider < 4? That way there’s a working version prior to the new “breaking changes” one. I threw this onto GitHub the other day: https://github.com/cloudposse/terraform-aws-cloudfront-cdn/issues/83

RB avatar

looks like the s3 website module depends on the https://github.com/cloudposse/terraform-aws-s3-log-storage module, which uses raw s3 bucket resources instead of our upstream s3 bucket module. we’d have to convert that to use our module, get that merged, then bump the s3 website module to use the latest log storage so the s3 website module works with the v4 aws provider

Maximiliano Moretti avatar
Maximiliano Moretti
module "website" {
  source = "cloudposse/s3-website/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  namespace = "eg"
  stage     = "prod"
  name      = "app"
  hostname  = "docs.prod.cloudposse.org"

  deployment_arns = {
    "arn:aws:s3:::principal1" = ["/prefix1", "/prefix2"]
    "arn:aws:s3:::principal2" = [""]
  }
}
David Karlsson avatar
David Karlsson

Slack Community

Describe the Bug

Fails to apply module, complaining about logs config

╷
│ Error: Unsupported attribute
│ 
│   on .terraform/modules/this.website_with_cname.logs/main.tf line 26, in resource "aws_s3_bucket" "default":
│   26:       for_each = var.enable_glacier_transition ? [1] : []
│ 
│ This object does not have an attribute named "enable_glacier_transition".
╵
╷
│ Error: Unsupported attribute
│ 
│   on .terraform/modules/this.website_with_cname.logs/main.tf line 40, in resource "aws_s3_bucket" "default":
│   40:       for_each = var.enable_glacier_transition ? [1] : []
│ 
│ This object does not have an attribute named "enable_glacier_transition".
╵


Expected Behavior

Vanilla module usage should succeed.

Steps to Reproduce

Steps to reproduce the behavior:

module "website_with_cname" {
  source = "cloudposse/s3-website/aws"
  version = "v0.17.1"
  context        = module.this
  hostname       = "shipment-2.${var.domain}"
  parent_zone_id = var.frontend_zone_id
  logs_enabled   = true
  logs_expiration_days = 10
}

with:

Terraform v1.0.3
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.1.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0

run:

terraform init
terraform apply

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

• OS: OSX

Maximiliano Moretti avatar
Maximiliano Moretti
This object does not have an attribute named "enable_glacier_transition".
IK avatar

Anyone have a way of mapping a “friendly” name to an AWS account ID? Thinking of some sort of central repo with a simple YAML file that would be maintained as we add/remove accounts. The idea would be that users of our TF code (of which we have many modules) can specify the target account by name (as opposed to the account ID). Or am I overthinking this and just have them reference the account ID instead?

pjaudiomv avatar
pjaudiomv

Why not just use the account alias?

IK avatar

Sorry, this was for being able to assume a role in the target account; we SSO to our management account, then assume roles into the relevant accounts via an account_id variable that we substitute into the ARN of the provider’s assume_role block. I guess I’m looking for a way to replace this with a “friendlier” name and then map it to the account_id in some way, to then be used in the provider config

pjaudiomv avatar
pjaudiomv

Ah, I did this based off workspaces, with the workspace being the account alias; a map of aliases and account IDs would then look up the correct account ID from the workspace. But that’s pretty opinionated. You could just use a var and do the same thing
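A minimal sketch of that mapping with a plain variable instead of workspaces (the account IDs, region variable, and role name are hypothetical):

variable "account_name" {
  type = string
}

locals {
  # friendly-name-to-ID map, maintained in one place
  accounts = {
    dev  = "111111111111"
    prod = "222222222222"
  }
}

provider "aws" {
  region = var.region
  assume_role {
    role_arn = "arn:aws:iam::${local.accounts[var.account_name]}:role/terraform"
  }
}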

RB avatar

you could have a remote state with an output map for all your accounts where each key is the account name

RB avatar

that’s how we do it with our account-map component

pjaudiomv avatar
pjaudiomv

yes this is the way, then it can be updated in one place.

IK avatar

Trying to avoid workspaces; I’ve shot myself in the foot with it a few times. We’ve standardised on terragrunt which is working really well.

2
IK avatar

Thanks @RB will check it out

pjaudiomv avatar
pjaudiomv

oh, the foot has been shot; workspaces sound like a good idea until it’s not

1
RB avatar

we use workspaces using atmos but it’s completely automated so we never have to even select the workspace

RB avatar

if you haven’t used atmos yet, it’s a pretty nice tool that converts deep merged yaml into terraform module input vars

pjaudiomv avatar
pjaudiomv

I actually just checked it out for first time a couple weeks ago, need to take a deeper dive.

pjaudiomv avatar
pjaudiomv

Thanks

2022-02-21

Almondovar avatar
Almondovar

hi team, how can i hide the password from being asked in terraform?

Error: provider.aws: aws_db_instance: : "password": required field is not set

or, can i put something temporary into the code and somehow enable “change password on first login”? Thanks!

RB avatar

you can generate the password using the random provider and store it as an ssm parameter
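A minimal sketch of that approach (the parameter path and resource names are hypothetical):

resource "random_password" "db" {
  length  = 24
  special = false
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/prod/db/password" # hypothetical parameter path
  type  = "SecureString"
  value = random_password.db.result
}

resource "aws_db_instance" "default" {
  # ... engine, instance_class, and the rest ...
  password = random_password.db.result
}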

Almondovar avatar
Almondovar

thank you very much for your advice

hasinireddybitla2404 avatar
hasinireddybitla2404

Hi everyone! I have created the MSK cluster using the aws_msk_cluster resource, but after creating it, “unauthenticated access” is not enabled; only IAM access is enabled. Please tell me how we can enable “unauthenticated access” using the Terraform resource.

2022-02-22

Grubhold avatar
Grubhold

Hi folks, I recently deployed AWS Backup plans for DocumentDB and DynamoDB using the https://github.com/cloudposse/terraform-aws-backup module. It’s working great and deployed everything as needed. I just want to understand how the process of restoring would work. Is that something that needs to be done from the console itself in a disaster scenario, etc.? If I restore from the console, how will the module act when running terraform apply again? Would appreciate your two cents about this.

hasinireddybitla2404 avatar
hasinireddybitla2404

Hello everyone. I’m trying to create the MSK cluster with Terraform. I’m able to create it, but “unauthenticated access” is not enabled. I’m sharing the script; please let me know where I need to set “unauthenticated access” to true. Thanks in advance.

resource "aws_msk_configuration" "config" {
  kafka_versions    = ["2.6.2"]
  name              = var.cluster_name
  description       = "Manages an Amazon Managed Streaming for Kafka configuration"
  server_properties = <<PROPERTIES
auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=1
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
PROPERTIES
}

resource "aws_msk_cluster" "example" {
  depends_on = [aws_msk_configuration.config]

  cluster_name           = var.cluster_name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.number_of_broker_nodes

  broker_node_group_info {
    instance_type   = var.broker_instance_type
    ebs_volume_size = var.broker_volume_size
    client_subnets  = var.subnet_ids
    security_groups = var.associated_security_group_ids
  }

  encryption_info {
    encryption_at_rest_kms_key_arn = var.encryption_at_rest_kms_key_arn
    encryption_in_transit {
      client_broker = var.client_broker
      in_cluster    = "true"
    }
  }

  configuration_info {
    arn      = aws_msk_configuration.config.arn
    revision = aws_msk_configuration.config.latest_revision
  }

  enhanced_monitoring = var.enhanced_monitoring

  open_monitoring {
    prometheus {
      jmx_exporter {
        enabled_in_broker = false
      }
      node_exporter {
        enabled_in_broker = false
      }
    }
  }

  logging_info {
    broker_logs {
      cloudwatch_logs {
        enabled = false
      }
      firehose {
        enabled = false
      }
      s3 {
        enabled = false
      }
    }
  }

  tags = {
    Environment = var.Environment
  }
}

output "zookeeper_connect_string" {
  value = aws_msk_cluster.example.zookeeper_connect_string
}

output "bootstrap_brokers_tls" {
  description = "TLS connection host:port pairs"
  value       = aws_msk_cluster.example.bootstrap_brokers_tls
}

RB avatar

Hi @hasinireddybitla2404 it would be easier to help if you could clean up the terraform code in your post by formatting it using triple backticks

RB avatar

This is the module we use to provision kafka / msk https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster

cloudposse/terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK

RB avatar

Not sure what you mean by unauthenticated access

RB avatar

If you set all of these to false, authentication looks like it will be disabled

var.client_tls_auth_enabled
var.client_sasl_scram_enabled
var.client_sasl_iam_enabled
hasinireddybitla2404 avatar
hasinireddybitla2404

Hi @RB, thanks for the reply. I’m asking about the “Access control methods”, where we select two access controls: one is “Unauthenticated access” and the other is “IAM role-based authentication”. I want to enable both of these access control methods, but with my script I can only enable “IAM role-based authentication”, not “unauthenticated access”. I’m stuck here; I’m sure I’m missing some logic in the script, but I’m not getting it.
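If memory serves, newer AWS provider versions expose an unauthenticated flag alongside sasl in the aws_msk_cluster client_authentication block; worth double-checking against the resource docs for your provider version, but a sketch would be:

resource "aws_msk_cluster" "example" {
  # ... existing settings from the script above ...

  client_authentication {
    # allow both unauthenticated and IAM clients
    unauthenticated = true
    sasl {
      iam = true
    }
  }
}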

2022-02-23

Grubhold avatar
Grubhold

Hi folks, when I’m importing a specific module such as terraform import module.dynamodb_document_table.aws_dynamodb_table.default disaster-pre-document

I’m getting the following error from another module, and the DynamoDB table is failing to import. How do we get around all the for_each errors when importing a module? Both modules are CloudPosse defaults.

Error: Invalid for_each argument
│ 
│   on modules/aws-vpc-endpoints/main.tf line 29, in module "interface_endpoint_label":
│   29:   for_each   = local.enabled ? data.aws_vpc_endpoint_service.interface_endpoint_service : {}
│     ├────────────────
│     │ data.aws_vpc_endpoint_service.interface_endpoint_service will be known only after apply
│     │ local.enabled is true

The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the
-target argument to first apply only the resources that the for_each depends on.
Manolo Scardino avatar
Manolo Scardino

Hey everyone how are you?

I’m trying to experiment with something in Terraform that I’ve never done before: duplicating resource groups based on a list. It will be something like this.

locals {
  vm_host = yamldecode(file("./my-variables.yaml"))["virtual_machines"]
  vm_host_map = flatten([for vm in local.vm_host : {
    "environment"  = vm.environment
    "location"     = vm.location
    "name"         = vm.name
    "tech_type"    = vm.tech_type
    "network_name" = vm.networks[*].loadbalancers
    "count"        = vm.count
  }
  #if contains(i.environment, terraform.workspace)
  ])
}

resource "azurerm_resource_group" "rg" {
  count    = local.vm_host_map[0].count
  name     = format("rg-%s-%s-%03s", local.vm_host_map[0].tech_type, local.vm_host_map[0].environment, count.index + 1)
  location = local.vm_host_map[0].location
}

virtual_machines:
  - name: docker-internal
    tech_type: docker
    type: Standard_F32s_v2
    count: 10
    location: westeurope
    environment:
      - prod
      - dev

The thing is that when I try to create the resources with multiple environments I have the following error message

Error: "name" may only contain alphanumeric characters, dash, underscores, parentheses and periods
│
│   with azurerm_resource_group.rg[0],
│   on main.tf line 44, in resource "azurerm_resource_group" "rg":
│   44: name = format("rg-%s-%v-%03s", local.vm_host_map[0].tech_type, local.vm_host_map[0].environment, count.index+1)

Can anyone please help me?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please use codeblocks

Joe Niland avatar
Joe Niland

Can you show an example of the name you want output?

Chandler Forrest avatar
Chandler Forrest

I’m currently using terragrunt to generate providers in my root terragrunt.hcl file. I define a default aws provider and an aliased aws provider. I want to call another module that has a required_provider block included. This results in duplicate required provider blocks, which threw an error. I found links to an overriding behavior that allows me to merge terraform blocks, but in this case it also fails because the aliased provider in my terragrunt.hcl isn’t referenced by the module I’m referencing. https://stackoverflow.com/questions/66770564/using-terragrunt-generate-provider-block-causes-conflicts-with-require-providers

What is the appropriate pattern for handling required_provider blocks in the root module when they are also defined in child module resources? Example terragrunt.hcl:

generate "provider" {
  path      = "provider_override.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "${local.provider_version}"
    }
  }
}
provider "aws" {
  region = "${local.aws_region}"
  assume_role {
    role_arn     =  "role.this"
  }
}

provider "aws" {
  region = "${local.aws_region}"
  alias = "that"
  assume_role {
    role_arn     =  "role.that"
  }
EOF
}  

Error message:

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Missing base provider configuration for override
│ 
│   on provider_override.tf line 26:
│   26: provider "aws" {
│ 
│ There is no aws provider configuration with the alias "that". An override
│ file can only override an aliased provider configuration that was already
│ defined in a primary configuration file.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Try #terragrunt instead

2022-02-24

Almondovar avatar
Almondovar

hi guys, i want to spin up an EC2 instance with 100 GB of storage. I used aws_volume and aws_volume_attachment and it created a second volume that I don’t want. How can I have just one volume of 100 GB, please?

ikar avatar
resource "aws_instance" "laura" {
  instance_type = "r5.xlarge"

  ...

  root_block_device {
    volume_type = "gp3"
    volume_size = 100 # GB
  }
}
1
Rhys Davies avatar
Rhys Davies

Hi all! Got an auth question around Terraform and AWS. I would like to restrict people on my team from being able to terraform apply|destroy (and most probably plan) from their local machines, so that people use our automated CI (env0); though it would be really great if people could still use plan locally, it’s not a dealbreaker right now. BUT I would like these same users to still have the same level of access that they do now in the AWS console, so they can try stuff out and administer other concerns as needs be

Rhys Davies avatar
Rhys Davies

How would I go about doing this? RBAC so that only our automated CI users can mutate the Terraform State that’s in S3? Or maybe do something with DynamoDB?
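One way to sketch the S3 side, assuming state lives in a dedicated bucket and CI runs under a known role (the bucket name and role ARN are hypothetical):

data "aws_iam_policy_document" "state_bucket" {
  statement {
    sid       = "DenyStateWritesExceptCI"
    effect    = "Deny"
    actions   = ["s3:PutObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::example-tf-state/*"] # hypothetical bucket

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "StringNotLike"
      variable = "aws:PrincipalArn"
      values   = ["arn:aws:iam::111111111111:role/ci-runner"] # hypothetical CI role
    }
  }
}

resource "aws_s3_bucket_policy" "state" {
  bucket = "example-tf-state"
  policy = data.aws_iam_policy_document.state_bucket.json
}

Be careful with broad Deny statements (they can lock out admins too), and note that plan still needs read access to the state and, by default, write access to the DynamoDB lock table.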

2022-02-25

Wilson Mar avatar
Wilson Mar

Glad to be part of this. I’ve been working on a way to learn Terraform quickly and surely using Terragoat: https://wilsonmar.github.io/terraform/#adoptionlearning-strategy

Terraform

Immutable declarative versioned Infrastructure as Code (IaC) and Policy as Code provisioning into AWS, Azure, GCP, and other clouds using Terragoat, Bridgecrew, and Atlantis team versioning GitOps

2