#terraform (2022-03)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2022-03-01
does anyone know why when i cut a new release for my module, the terraform registry does not update to reflect it?
We have had this problem too
Usually something wrong with the webhooks for the repo. We end up having to manually trigger a refresh
@matt has tried to ask HashiCorp for support, but I believe no response
yeh that is super frustrating
i have re-synced the registry, it is updated, and now atlantis reckons it can’t find the version of the module
it seems we get a {"errors":["payload signature check failed"]} on the repo’s webhook
I really wish CloudPosse modules would all bump to 1.0. The lack of semver is really annoying when upgrading. I have to carefully read each release’s changelog and try to determine if a change is backwards-compatible myself. For instance, this changelog (picking on myself):
v0.29.0
Only specify ttl block if ttl_enabled is true @alexjurkiewicz (#95)
Is this backwards-compatible from 0.28.0? Who knows! I have to either read the PR’s diff or upgrade and check the plan.
i once read that if something is used for critical/production use cases, then it is 1.0 regardless of whether anyone thinks it is ready for that, and it should be versioned accordingly. that got me over all my hesitance. but i also make no promises about long-term support for any given version. if a change is backwards incompatible, no matter how minor, that gets a major version bump.
yes I know some people don’t like big major version numbers for aesthetic reasons, they want to get the design right and release v1 which never increments. But Terraform/AWS provider make that impossible
so right now, I believe we need to do it, it’s more about how to do it “at scale”. I hate that expression, but it really makes sense. Most companies doing releases have less than a handful to worry about. We have hundreds, more than any release manager can keep track of - but let’s discuss.
Looking at major projects, e.g. kubernetes, they create a branch release-x.y for every release, so that patches can be made. Istio, et al follow this exact convention.
Then look at Terraform and all their providers. They don’t do this! The branching seems like huge mental overhead, but I’m nonetheless intrigued why they don’t.
i believe terraform does it also… e.g. https://github.com/hashicorp/terraform/tree/v1.1
which makes sense if you intend to patch older versions. though i guess you don’t have to persist the branch… you can always recreate it from a tag
Yeah. I don’t think you need to change the support timeline after bumping to 1.0. Everything about CloudPosse modules is really great at the moment, all I care about is the lack of semantic version numbers
I see the current auto-release logic relies on adding a label to each PR to define if it’s a major, minor, or patch release
You mentioned recently you don’t like chatops. One approach would be to change the merge process from “click the green merge button” to “apply the ‘merge’ label and one of ‘major/minor/patch’ labels, and github actions will process the PR merge”
I’m happy to prototype this in a new repo if you’d like to try it
mergify can implement that workflow also, e.g. “merge if label exists and required jobs pass”
You mentioned recently you don’t like chatops. One approach would be to change the merge process from “click the green merge button” to “apply the ‘merge’ label and one of ‘major/minor/patch’ labels, and github actions will process the PR merge”
That’s an interesting idea.
cc @Dylan @Jeremy G (Cloud Posse)
I don’t see the advantage there. Currently the release defaults to a minor version increment unless otherwise labeled. I do not like giving Mergify the permission to do the merges because it increases the risk of malware or inadvertent breaking changes being released.
(We already have this problem when a dependent module has a breaking change and Renovate updates the module to use the new version and Mergify auto-approves and merges it. This problem will be resolved when all our modules go past major version zero, as we will prohibit automatic updates of major version updates, but for the near future this will remain a problem.)
@Erik Osterman (Cloud Posse) I think if we can get comfortable with eventually having version 126.0.0 we can switch to full SemVer with the next breaking release and otherwise not bother with ongoing support of earlier versions. In practice we rarely make breaking changes, so the major versions should not increment that fast, and we rarely update old versions, instead forcing people to accept the breaking change if they want new features. It’s not the best customer experience, but it is at the limit of what Cloud Posse can do for free, so all that would really change is that we would be making our level of support more explicit.
If we found we wanted/had to update an old version, say to patch a security issue, we could create a branch at that time. I expect that will be a rare occurrence, at least until Terraform v2 comes out.
2022-03-02
hmm… so the https://github.com/cloudposse/terraform-yaml-config module … have a question on “variability” of list entries …
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps
main.tf:
module "yaml_config" {
source = "cloudposse/config/yaml"
map_config_local_base_path = "./config"
map_config_paths = [
"regsync.yaml"
]
context = module.this.context
}
./config/regsync.yaml:
sync:
  - source: consul
    target: docker.local/mirror/consul
    type: repository
    tags:
      allow:
        - "latest"
        - "1\\.9.*"
  - source: cr.l5d.io/linkerd/grafana
    target: docker.local/mirror/linkerd/grafana
    type: repository
    tags:
      allow:
        - "stable-2\\.10\\..*"
  - source: tricksterproxy/trickster
    target: docker.local/mirror/tricksterproxy/trickster
    type: repository
That last entry in the sync list, which doesn’t have a tags key, fails the deep merge:
│ Error: Invalid function argument
│
│ on .terraform/modules/yaml_config/modules/deepmerge/depth.tf line 43, in locals:
│ 43: for key in keys(item["value"]) :
│ ├────────────────
│ │ item["value"] is tuple with 12 elements
│
│ Invalid value for "inputMap" parameter: must have map or object type.
╵
╷
│ Error: Invalid function argument
│
│ on .terraform/modules/yaml_config/modules/deepmerge/depth.tf line 55, in locals:
│ 55: for key in keys(item["value"]) :
│ ├────────────────
│ │ item["value"] is tuple with 12 elements
│
│ Invalid value for "inputMap" parameter: must have map or object type.
╵
Releasing state lock. This may take a few moments...
If I add a tags: {} on that last one it is fine.
I should maybe just file + yamldecode as I don’t really utilize the config module properly anyway…
all entries must have the same type and number of elements
this is how the TF code we are using for deep-merge works
Yeah… Thanks. I just wanted to start using the yaml-config module so I used it even though I didn’t actually need it right here . Converted to a yamldecode(file()) now .
if you are just reading YAML files and converting to TF structures, you don’t need the module
it’s only for deep-merge of maps, which TF’s merge does not support
(and for remotely reading YAML)
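A minimal sketch of that simpler pattern, assuming the same ./config/regsync.yaml path from above:
locals {
  regsync = yamldecode(file("${path.module}/config/regsync.yaml"))
}

# e.g. local.regsync.sync[*].source gives the list of source images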
v1.1.7 1.1.7 (March 02, 2022) BUG FIXES: terraform show -json: Improve performance for deeply-nested object values. The previous implementation was accidentally quadratic, which could result in very long execution time for generating JSON plans, and timeouts on Terraform Cloud and Terraform Enterprise. (#30561) cloud: Update go-slug for…
When calculating the unknown values for JSON plan output, we would previously recursively call the unknownAsBool function on the current sub-tree twice, if any values were unknown. This was wastefu…
Is there a good argument for or against adding remote_state entries in a Terraform module? We’re developing one internally, and having to have all the callers pass in information they find in remote state feels kludgy
With our terraform framework we use it compulsively because it’s so easy. Just as easy as reading from SSM. So if you expect to be integrating with things outside of terraform, build a remote state framework around SSM and if just working with terraform, then remote state is fine.
I think you will hear some counter arguments, but what are the alternatives? You can do a terralith and just access all settings directly. Terraliths are anti patterns. You can copy pasta all the settings. That’s not DRY and very error prone. You can use data sources, but not everything can be looked up that way.
got an example module?
nevermind, I learned a new skill today: search!
actually, is this repo currently used? almost a year without any commits, seems unlike you.
It is indeed, but we are behind upstreaming components
You’ll see many of the components are more recently updated and many open PRs
Here’s an example: https://github.com/cloudposse/terraform-aws-components/blob/master/modules/alb-controller/remote-state.tf
module "eks" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "0.22.0"
component = "eks"
context = module.this.context
}
However, be advised, our remote-state implementation is based on our stack configurations, so if you’re not using our stack configurations, the remote state lookups won’t work.
I would prefer data sources; it’s like querying your AWS resources to get their IDs
and I don’t need to worry about outputs from other state files or how stacks or a single stack are being used
and if you need extra non-normal use case you can stick with SSM
2022-03-03
Hi everybody, I was wondering if I could get some input here. I am attempting to use your terraform-aws-rds-cluster module to manage some of our postgres aurora clusters. These clusters already exist, and I will need to import them into Terraform. The subnet group already has a name (which was autogenerated by Cloudformation, it is not pretty), which does not match the module.this.id pattern the module is using. The problem with this is that changing the name causes the subnet group to be recreated, which in turn will cause the database to be recreated (which we want to avoid). Are there any suggested workarounds here? Would it be possible to add a “subnet_group_name” variable to this module, to solve for cases like this? Thanks!
it makes sense to add a new input variable to support importing existing databases into the rds module
before submitting the pr, please check if you can fully import the database and set the appropriate inputs so the module returns “no changes”
Here is the PR (not sure who to ping about it) https://github.com/cloudposse/terraform-aws-rds-cluster/pull/133
what
• Allow the user to specify the db_subnet_group name, rather than using the default label ID
why
• If importing an existing database cluster and subnet group, we need to be able to set the subnet group name to what it already has, otherwise the subnet group will be recreated. This in turn will cause the database cluster to be recreated, which we don’t want.
references
• https://sweetops.slack.com/archives/CB6GHNLG0/p1646336110444589
Thank you @Tyler Jarjoura for the contribution.
This has been released as https://github.com/cloudposse/terraform-aws-rds-cluster/releases/tag/0.50.2
Anyone know a way around feeding aws_iam_policy_documents into a for_each? Complains about The "for_each" value depends on resource attributes that cannot be determined until apply, which (to me) doesn’t make much sense, because that data source is just a way to specify a blob of json
aha, instead of putting a for_each in the aws_iam_role_policy to attach the multiple policies, just did another aws_iam_policy_document with source_policy_documents set to the list, then passed that to aws_iam_role_policy
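A hedged sketch of that pattern; the two source documents, the role, and the policy name are hypothetical stand-ins:
# Combine several generated documents instead of for_each-ing over them
data "aws_iam_policy_document" "combined" {
  source_policy_documents = [
    data.aws_iam_policy_document.s3_access.json,
    data.aws_iam_policy_document.sqs_access.json,
  ]
}

resource "aws_iam_role_policy" "this" {
  name   = "combined"
  role   = aws_iam_role.example.id
  policy = data.aws_iam_policy_document.combined.json
}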
2022-03-06
2022-03-07
anyone else tried using the tfstate-backend module v0.38.1 ? appears it’s broken and undeployable
I just deployed this 5 mins ago and all good here. Terraform’s aws provider version 4 breaks everything that uses s3 though. I set it up with aws version 3, I think you’d have to do that, only modules that support v4 specifically would work, unless you can use multiple versions of a provider.
aws = {
  source  = "hashicorp/aws"
  version = "~> 3.0"
}
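For anyone following along, that pin sits in the root module’s terraform block; a minimal sketch:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}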
okay let me see which aws version it has
thanks @Michael Galey that seems to have been it… I hadn’t version locked hashicorp/aws and it grabbed v4
yea same, that’s been the source of errors across my 40 state files for the last week , glad it worked out!
I suspect we’ll have a v4 version soon
Yeah I’ve got a bunch of my own terraform IaC that needs to be looked at for v4 compatibility as well
Trying to use “cloudtrail-s3-bucket” - getting 2 messages: This object does not have an attribute named "enable_glacier_transition". I’m sure it’s a UFU but I don’t know where to look
same issue as the above message talking about s3 depending on the aws provider version?
interesting.. ty!
Getting an aws-auth exception when I try to make changes to the EKS cluster, like updating a security group, and apply; tried to use apply with load from local config but that also did not work. Any workaround for this? I can make changes to the node group though.
Exception:
null_resource.add_custom_tags_to_asg: Refreshing state... [id=5885391824448464606]
╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/eks_cluster/auth.tf line 135, in resource "kubernetes_config_map" "aws_auth":
│ 135: resource "kubernetes_config_map" "aws_auth" {
│
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.cluster.token
}
module "eks_cluster" {
source = "cloudposse/eks-cluster/aws"
version = "0.45.0"
module "eks_node_group" {
source = "cloudposse/eks-node-group/aws"
version = "0.27.3"
namespace = var.namespace
stage = var.stage
I believe the eks cluster module uses its own kubernetes provider so you may have conflicting providers here between your consumer module and the consumed module
see the example here which doesn’t use the kubernetes provider in the consumer/root module https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf and relies on the eks cluster module’s kubernetes provider
cc: @Jeremy G (Cloud Posse)
removed the provider as you suggested but still it does not work…
null_resource.add_custom_tags_to_asg: Refreshing state...
╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│ on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│ 115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│
╵
@David Are you using kube_exec_auth_enabled? As explained in the release notes for eks-cluster v0.42.0, authentication is an ongoing problem, for which “exec auth” is our preferred workaround.
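A minimal sketch of enabling that input on the module from the snippet above; other inputs are unchanged and omitted here:
module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "0.45.0"

  # use exec-based auth for the module's internal Kubernetes provider
  kube_exec_auth_enabled = true

  # ... remaining inputs unchanged ...
}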
maybe @David you could be tripping over this issue https://github.com/cloudposse/terraform-aws-eks-cluster/issues/143
see the workarounds
2022-03-08
Hi everyone, I’m currently building a Terraform-like application for declarative cloud provisioning, using python/django. The reason for this is that Terraform cannot be used reliably within an automated system, for example when launching managed services as part of a SaaS offering. I was wondering if anyone else here is looking at solving the same problem? I am looking for contributors.
is this a “the way to get tech support online is to say something obviously wrong” technique question? Terraform is perfect for managing PaaS infra
No not at all. Terraform is great but it isn’t reliable if you are launching resources in customer’s accounts or VPCs, where numerous problems can arise
Imagine launching thousands of services across thousands of clients, being able to fail gracefully, destroy resources, report errors, negotiate availability and so on.
Starting from scratch seems a bit questionable. Have you considered: https://www.pulumi.com ? I haven’t used it myself but I believe it’s python friendly. There is a k8s project called cross plane that might also be worth looking at too. Also, have you thought about just forking the TF provider code base? I think cloudposse does this with the AWS provider.
Pulumi’s open source infrastructure as code SDK enables you to create, deploy, and manage infrastructure on any cloud, using your favorite languages.
Or maybe crossplane… https://crossplane.io/
Compose cloud infrastructure and services into custom platform APIs
Thank you Venkata and Loren. I will take a closer look at Pulumi. As for crossplane, it looks interesting but I’m not so interested in being tied to k8s. I basically want a terraform clone for creating resources.
Running the commands for creation, deletion etc can be done with the cloud providers’ python clients. But managing the state changes, retries, error reporting etc seems like something that I would need to build?
Hey @Ross Rochford super late but I was involved in something similar a few years ago. We used SaltStack with SNS and cloudwatch events (now called EventBridge I believe).
Salt has a REST API and can do everything on your list above.
I have not done anything with salt since then and I don’t know the state of the FOSS project, especially since VMware bought them.
I’m not a fan of k8s, generally, but crossplane is interesting in that I think the idea is to declare the desired state and let crossplane use k8s to converge to it
Otherwise, what you’re taking about sounds like something any of the TACOS might help with, e.g. Terraform Cloud, Spacelift, Env0, Atlantis, Scalr, etc…
Imagine launching thousands of services across thousands of clients, being able to fail gracefully, destroy resources, report errors, negotiate availability and so on.
I think this is a perfect reason to use a tool like Terraform or Pulumi on your back end.
Your webapp could just be a wrapper with permission to call the correct commands on the backend.
But question, @Ross Rochford, how much variance is contained in the resources you would be deploying across so many clients? If it’s more than 2-3, you might need to place more effort in customer support to tweak all the changes needed per client. But if you are deploying just a few different collections of resources, you would be well served to code up a module in TF or Pulumi that describes the collection, test the heck out of it, and then have your webapp (or a TACO or other “Infra as code” manager) deploy your configurations.
I have another sidebar question: who would be using the webapp? You or the clients? I’m just trying to get an idea of who would benefit from the experience and what the needs are.
2022-03-09
Hello everyone,
I have a question about for_each
I have a cluster role binding:
resource "kubernetes_cluster_role_binding" "example" {
  metadata {
    name = "terraform-example"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = ""
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
  subject {
    kind      = "User"
    namespace = "*"
    name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#[email protected]"
  }
}
and what I tried with for_each:
subject {
  kind      = "User"
  namespace = "*"
  for_each  = toset(["mike", "david", "adam", "ranne", "ken"])
  name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#${each.value}@microsoft.com"
}
I feel not right, can anyone help plz ?
thank you
2022-03-10
maybe you want a dynamic block instead?
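A hedged sketch of that dynamic-block suggestion; the name format is an assumption based on the question above:
resource "kubernetes_cluster_role_binding" "example" {
  metadata {
    name = "terraform-example"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  # one subject block per user instead of repeating the block by hand
  dynamic "subject" {
    for_each = toset(["mike", "david", "adam", "ranne", "ken"])
    content {
      kind      = "User"
      namespace = "*"
      name      = "https://sts.windows.net/3e3f1922-84b0-4718-a307-431aa543dae5/#${subject.value}@microsoft.com"
    }
  }
}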
@managedkaos I will take a closer look at Pulumi but my experience with Terraform suggests that it is simply not designed for my use case. It often fails without grace or runs into problems with its state. There are various hacks, but in the long run I don’t see it as reliable in the way I need it to be. When a failure occurs it is important that we can trace exactly what the issue was and that we have full access to the API responses from the cloud provider.
In terms of variance, we would like this problem to be solved in general, for many use cases, resource types and environments. The business is a marketplace for managed services, we mediate between developers who provision and deploy services, and customers who want to run them with the support of people who have expertise in those services (say for example Redis clusters). This mediation involves providing convenient APIs to developers, so a robust terraform-like declarative API would be a key part of our offering.
There are various hacks, but in the long run I don’t see it as reliable in the way I need it to be.
Got it. I can agree that everything isn’t for everyone.
This mediation involves providing convenient APIs to developers, so a robust terraform-like declarative API would be a key part of our offering.
It sounds like you’re set on building a very viable solution. I think in the end, you will likely have something that is on par with terraform, and I wonder if you would consider offering your solution as a business along with the business built on top of it.
If external developers are using your solution, I think that changes your focus a bit since it’s very likely the developers will want new features, updates to match changes in cloud provider APIs, support for any problems and so on. I totally get your point about not using TF as a solution and I’m on board with you figuring out your path without it, but I feel like using a third party solution like TF or Pulumi would save you from having to do a lot of the heavy lifting that’s already being done with those technologies.
I wish you all the best and look forward to hearing more about your solution!
Yes, it is definitely a large project to take on and merits some caution on not reinventing the wheel.
On the other hand, my initial prototype suggests that this problem has a lot of repeated functionality that can be reused across all resources and providers. The webapp does the declarative->imperative mapping magic, so implementing a new resource simply involves adding 1-2 DB tables and 3-5 custom methods (create, list, get, delete, update). These are typically fairly short implementations because they can avail of the cloud provider’s python API client.
2022-03-12
Error creating SSM activation: ValidationException: Nonexistent role or missing ssm service principal in trust policy module beanstalk environment.. What is wrong?
If it works on the second apply then I would assume that your trust policy is referencing a role (or something similar) that it is not dependent on. Therefore it is trying to create the policy prior to the referenced resource being created and then :boom: .
You can try to find that dependent resource and explicitly define the relationship using depends_on
OR you can reference the dependent resource directly in the policy and it will create that relationship for you (the preferred approach).
This is a guess from a small amount of information, but that is a common problem.
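A hedged illustration of both suggestions; the resource and role names are hypothetical:
# Preferred: reference the role directly so Terraform orders creation correctly
data "aws_iam_policy_document" "ssm_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ssm.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ssm_activation" {
  name               = "ssm-activation"
  assume_role_policy = data.aws_iam_policy_document.ssm_assume.json
}

resource "aws_ssm_activation" "this" {
  iam_role = aws_iam_role.ssm_activation.name
  # or, if the reference can't be direct, declare the relationship explicitly:
  # depends_on = [aws_iam_role.ssm_activation]
}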
It appears after the first apply. When I run terraform apply the next time, all works fine
2022-03-13
2022-03-14
Hi colleagues, i need to import a key pair but our infra is in terraform cloud, any ideas of how i can get terminal access on terraform cloud please?
Terraform is an open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure.
thank you for your response, i have been able to take cli control with the terraform login command, but it seems it can’t understand the state already used in terraform cloud and it wants to recreate everything. do i understand correctly that terraform import is not possible when using terraform cloud? i also found many other enterprise users struggling with it, and the clear answer from tf employees is “sorry it works as it should” here
@Almondovar Are you making sure to select the correct workspace (e.g. terraform workspace select) that Terraform Cloud is using? I could imagine that being your issue.
Thank you Matt, i am not sure these are applicable to terraform cloud, because although i am connected with the token correctly
Terraform must now open a web browser to the tokens page for app.terraform.io.
If a browser does not open this automatically, open the following URL to proceed:
<https://app.terraform.io/app/settings/tokens?source=terraform-login>
i cant see my workspaces
> terraform workspace list
* default
2022-03-15
How would you setup IIS with Terraform in a Windows EC2 instance?
I know that there’s the user_data argument that can be passed to the launch_configuration but I’m unsure of where to go from there.
The goal is to have instances setup for hosting without having to setup envs by hand
That’s how we do it - packer builds a base image with all of the prereqs, then user data runs PowerShell to configure IIS.
Could also use DSC or similar, which would be a little cleaner, but I haven’t gone down that road.
@Jim G
The way it’s going for me so far is:
- Running Install-WindowsFeature to install all the IIS modules
- Calling New-WebApplication to set up a site
How do you handle the IIS config? Like setting headers, and HTTPS redirects?
Sorry, to clarify:
• We use packer to install IIS and any other prereqs or configuration that can be done in the image.
• user_data is just used to bootstrap the image (configure Splunk, Cloudwatch Agent) and install our deployment system agent to phone home (in our case, Octopus Deploy)
• The deployment system actually deploys the website content and configures IIS.
I wouldn’t want to do all of that in user_data - it would be slow and hard to debug.
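A hedged sketch of that split; the AMI name filter and the bootstrap script contents are illustrative assumptions, not anyone’s actual setup:
# Packer-built Windows/IIS image, looked up by a naming convention
data "aws_ami" "iis_base" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["windows-iis-base-*"]
  }
}

resource "aws_launch_template" "web" {
  name_prefix   = "iis-web-"
  image_id      = data.aws_ami.iis_base.id
  instance_type = "t3.large"

  # user_data only bootstraps agents; IIS and app config are baked or deployed later
  user_data = base64encode(<<-EOT
    <powershell>
    # start monitoring agents and register with the deployment system here
    </powershell>
  EOT
  )
}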
Ah, that’s a good point; I’ve noticed an error during the user_data call can mess with the SSM agent - other stuff too probably
I don’t know much about packer other than that it deals with os images, what’s the major advantage vs creating a custom AMI w/ AWS?
speed. You can do all of the installs, config, whatever once when baking the AMI. Then during terraform apply, it’s already in the image - you’re not waiting for 10,000 lines of user_data to execute.
it also helps if you’re trying to build immutable infrastructure - we don’t patch our windows instances. Once a month, we build a new (patched) AMI with packer, then do a rolling replacement of all of our instances.
that’s pretty cool!
Can you point me to any resources for creating an IIS image w/ packer?
Hi all, trying to make changes to the load balancer SG on CloudPosse ElasticBeanstalk v0.40.0. I’m trying to do this by using loadbalancer_managed_security_group, but when referencing either a SG id or arn I’m getting this error response:
Error: Error waiting for Elastic Beanstalk Environment (e-tpbxpwqcnp) to become ready: 2 errors occurred:
│ * 2022-03-15 14:30:44.562 +0000 UTC (e-tpbxpwqcnp) : Service:AmazonCloudFormation, Message:[/Resources/AWSEBV2LoadBalancer/Type/SecurityGroups] 'null' values are not allowed in templates
│ * 2022-03-15 14:30:44.704 +0000 UTC (e-tpbxpwqcnp) : Failed to deploy configuration.
When starting this the environment is in an OK state.
Any help would be appreciated
Terraform module to provision an AWS Elastic Beanstalk Environment
If there’s any information i’ve omitted that might help understand where i sit please let me know
I’m having an issue with the awscc provider. I just added the provider to required_providers but I’m not using it yet, and after init I can’t plan anymore and I get this error
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provider "registry.terraform.io/hashicorp/awscc" to
obtain schema: Incompatible API version with plugin. Plugin version: 6, Client
versions: [5]
any ideas?
terraform version
Terraform v0.13.7
+ provider registry.terraform.io/-/random v3.1.0
+ provider registry.terraform.io/hashicorp/aws v3.74.3
+ provider registry.terraform.io/hashicorp/awscc v0.14.0
+ provider registry.terraform.io/hashicorp/local v2.2.2
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.2
I tried TF 0.14, 1.1.7, deleting .terraform and nothing, same problem
delete the lock file in the same folder: .terraform.lock.hcl
I do not have a lock file
can the awscc and aws providers coexist? I would imagine that they should
did you try terraform providers?
also, do you use a .terraformrc file, or set the env TF_PLUGIN_CACHE_DIR?
try to delete the .terraform.d folder in your HOME dir
I don’t have any .terraform* files
I can run terraform providers
you should have the folder in ~/.terraform.d
let me check that one
in your HOME
not in the repo
yes, I just deleted it
does awscc work with 13.7?
I wonder if it has to be 1.x
could be that the state is 0.13.7 and is an older version, so it needs to be updated?
after deleting the folder, did you run terraform init again?
yes
otherwise you can’t plan, the problem happens on plan only
so I cleaned up the code
my main.tf is empty
new dir
terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/awscc versions matching "0.13.0"...
- Installing hashicorp/awscc v0.13.0...
- Installed hashicorp/awscc v0.13.0 (signed by HashiCorp)
just the one provider
same thing :
terraform plan
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provider "registry.terraform.io/hashicorp/awscc" to
obtain schema: Incompatible API version with plugin. Plugin version: 6, Client
versions: [5]
ok so
I did the same, but with tf 1.1.7 and it works
I think the minimum version required is 0.15
yep
0.15.0 is the minimum required version
rm -rf ~/.terraform.d .terraform .terraform.lock.hcl
prompt> tfenv use 0.15.0
Switching default version to v0.15.0
Switching completed
prompt> terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/awscc versions matching "0.13.0"...
- Installing hashicorp/awscc v0.13.0...
- Installed hashicorp/awscc v0.13.0 (self-signed, key ID 34365D9472D7468F)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
<https://www.terraform.io/docs/cli/plugins/signing.html>
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
prompt> terraform plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your configuration and the remote system(s). As a result, there are no actions to take.
prompt> rm -rf ~/.terraform.d .terraform .terraform.lock.hcl
prompt> tfenv use 0.14.11
Switching default version to v0.14.11
Switching completed
prompt> terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/awscc versions matching "0.13.0"...
- Installing hashicorp/awscc v0.13.0...
- Installed hashicorp/awscc v0.13.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
prompt> terraform plan
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provider "registry.terraform.io/hashicorp/awscc" to
obtain schema: Incompatible API version with plugin. Plugin version: 6, Client
versions: [5]
prompt [1]>
where is that in the docs?
i was looking for a min tf version for the awscc provider but didn’t see one. i do think that 0.15.0 is the last time they made changes to the state schema though, so kinda makes sense
The provider requires a minimum version of Terraform 0.15.0; I could not find that in the docs, and the use-provider link shows 0.13+ usage as an example.
Terraform CLI and Terraform AWS Cloud Control Provider Version
Terraform v0.14.11
+ provider registry.terraform.io/hashicorp/awscc v0.13.0
Terraform Configuration Files
empty main.tf, no resources added
I’m using “cloudposse/elastic-beanstalk-environment/aws” (v0.46.0) with loadbalancer_type = "classic" and tier = "WebServer", and I’m getting a bunch of modifications to elb settings every time I run plan (this is just one of the settings that changes):
- setting {
    - name      = "HealthCheckInterval" -> null
    - namespace = "aws:elasticbeanstalk:environment:process:default" -> null
    - value     = "10" -> null
  }
These changes are on an embedded resource of the module, so I don’t think there is a way to use lifecycle.ignore_changes. Are there any recommendations for reducing the noise in the output of terraform plan?
Note I don’t have this problem when using a network load balancer. I’m looking for recommendations to reduce the noise in my plan output.
it’s difficult to give a recommendation. What we saw with Elastic Beanstalk is that the API returns the data in a different order, and TF can’t compare it correctly. This issue has been going on for years
In this case it seems like the module is specifying a bunch of parameters that aren’t used, so the platform returns nulls. Then the plan tries to update them again to what is specified.
yes, that’s the case
So, is this a bug in the module? And if so, should I file it? I mean, it doesn’t solve my immediate problem, but it seems like it should be fixed.
it’s not easy to fix (there are many combinations of settings), unless we don’t add any settings to the module and let the caller provide whatever they want
for this reason we don’t use the CloudPosse module. Instead, we create the environments directly and pass in the exact settings we need. Elastic Beanstalk’s resources in Terraform are not great. EB is very opinionated, and its opinions don’t work well with Terraform. It needs to be handled with care and close attention, for better or worse
we prob need to make all the settings optional and provide a variable to override all of them
2022-03-16
Hi, does anyone have any tips on how to change the origin of the Default behavior when using this terraform-aws-cloudfront-s3-cdn module? The default behavior routes traffic to the S3 origin - I’d like it routed elsewhere.
It seems as though this is the local I need to change - unsure if I can do so from my workspace.
target_origin_id = local.origin_id
Sorry for the late reply but if you don’t want to use s3 i think this module might be a better fit for your use case https://github.com/cloudposse/terraform-aws-cloudfront-cdn
Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin.
Thanks @jedineeper for the reply. I am using S3 as one origin, however not as the default. Are you aware of a way to utilise the module in this way?
I think that the module i linked allows you to define your own origin structure, be it s3 or something else
pretty sure the s3-cdn module uses it as a base so you could probably look into THAT module to see how the origins are defined as I’m not 100% sure
2022-03-18
Hi! Are there any docs or best practices about how you folks @ CloudPosse do TF refactoring? I mean suppose you have some functionality in the root module and you realise that some parts should go to a dedicated module. I’m mostly interested in how you manipulate the state during the refactoring of modules that are in use. What’s the workflow, who is responsible for making changes to the state, how you control this, etc. I do remember the discussion about when one should decide to write a module. However, I don’t remember the discussions about the refactoring:) Now we have this cool moved {} possibility, probably it’s a great help here. But last time we tried it didn’t work for modules outside of the current repo.
Unfortunately, we have no guidance/recommendations for this. Refactoring is a very small part of what we do. Possibly because we create very small components already built on our terraform modules, which are also pretty small and single purpose.
Where we still get bit is when the provider interface changes (e.g. S3 buckets) and for that, Terraform provides little to make it less painful.
We are grappling with one area of refactoring that’s very painful: how we handle security groups.
I can’t recall if @ looked into using moved to help with that
As for moved, we are dealing with a cross-package move by making a module local to the root module first. And the second step is to make it external.
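A minimal sketch of that two-step move, with hypothetical module names:
# Step 1: copy the code into the root package (e.g. ./modules/bucket) and record
# the move, since a moved block cannot reference a module in another package.
moved {
  from = module.s3_bucket_inline
  to   = module.s3_bucket
}

# Step 2, in a later change: point module "s3_bucket" at the external source;
# the addresses stay the same, so no further moved block is needed.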
That makes sense. Tedious but probably no better way
Hello! I’m using the module from Cloudposse to create waf acl / rules, i think module is not supporting the “RuleLabels” feature when adding a new rule based on countries, can someone help me? https://github.com/cloudposse/terraform-aws-waf#input_geo_match_statement_rules https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#rule_label Thanks!!!
Thanks for answering!! i can create a ticket yes
cc @matt
2022-03-19
2022-03-20
Hi everyone! I am interested in recommended approaches for managing infrastructure with terraform involving providers not accessible from a cloud based pipeline via public internet. My use case here looks like this:
- cloud based pipeline creates initial AWS stack (VPC, EKS, IAM stuff, etc.)
- another (cloud based?) pipeline using TF with Kubernetes, Vault and other providers creates resources in the cluster
Now, this all works fine when the Kubernetes API, Vault and other involved services are publicly accessible. However, if the Kubernetes API and Vault are only accessible from within the cluster (or VPC), the 2nd pipeline concept breaks as TF can’t manage resources using the Vault provider (the Kubernetes API connection might get worked around with whitelisting or such).
Also, my understanding is that some webhook based tooling might also break since GitHub would not be able to trigger anything inside the cluster. Are my assumptions correct? If so, are there any best practices or blueprints for how to set things up in these scenarios? Appreciate any input here. Thanks!
2022-03-21
Hello world. Does anyone have any good examples in managing AWS landing zones with AWS control tower? Or Landing Zones in general?
Hi , There was a fully fledged Terraform landing zone (https://www.hashicorp.com/resources/aws-terraform-landing-zone-tlz-accelerator) , though this has been replaced with Terraform integration with AWS Control Tower https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/.
Watch Amazon announce and demo their HashiCorp Terraform Landing Zone (TLZ) AWS Accelerator preview at HashiConf.
AWS Control Tower makes it easier to set up and manage a secure, multi-account AWS environment. AWS Control Tower uses AWS Organizations to create what is called a landing zone, bringing ongoing account management and governance based on our experience working with thousands of customers. If you use AWS CloudFormation to manage your infrastructure as […]
Thanks! Is my understanding correct, that I need a TF cloud account for TLZ?
TF Cloud or Enterprise would be best in my experience, using S3/Dynamo in multi account setup can be difficult to manage with growing account numbers.
Typically the Account Vending Machine sits in the Management account, though it ultimately depends on your multi AWS account design.
It used to be the case that you needed TF Cloud or Enterprise for landing zones, but the Control Tower folks added support for OSS. Obviously we’d suggest folks use Cloud (even the free version) as the workflows are better, but it’s not necessary.
Hello people, I need to “copy and paste” all the resources from one AWS account to another. I am planning to try https://github.com/GoogleCloudPlatform/terraformer Do you have any experience on that? whats your feedback?
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code
I have used many times
the problem is the name of the resources
I do not know if it is possible to add prefixes etc so that resource names are a bit more human readable
but you can script that after the fact
the other issue is that when you want to use modules instead of plain resources you will have to do a lot of terraform state mv commands
but it is the best tool for doing this kind of thing
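For example, moves of the shape below (addresses are hypothetical) are what ends up being scripted:
terraform state mv 'aws_s3_bucket.assets' 'module.assets_bucket.aws_s3_bucket.this'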
Do you know if AWS has a legacy tool for doing this? I couldn’t find any yet
no they do not
terraform is basically the competitor to Cloudformation
yes, I know that
just last year they seemed to partner with hashicorp on CDK efforts and such
humn…
because people use TF more
and Google has an interest in such a tool so that people can migrate stuff to their cloud
etc
anyhow, the tool is good, do not use the others out there
Note, terraformer just vomits out terraform, be prepared to do a lot of post processing of the generated terraform
2022-03-23
Hello! I am facing an issue… I developed a terraform module on a private repo on GitHub; it has:
examples main.tf tests variables.tf
However, I am failing at the moment of calling the module; I am using a for_each to iterate over different services that need the module. The issue is: I can’t put a provider in the module due to the for_each. If I don’t put the provider in the module, terraform tries to use a source that doesn’t exist (I have to use ‘DataDog/datadog’, but it tries to use ‘hashicorp/datadog’). Nowhere in the module did I declare ‘hashicorp/datadog’.
the issue is, terraform does not support ANY provider in modules with for_each - this is a TF limitation
in some of our components, we instantiate a module many times with diff providers. Or you can write some code (in bash, Python, etc.) to generate that TF code
but the provider is always the same, it’s at the root, and it’s the one the module should use. I don’t need multiple providers…
It should work, but I don’t know why it is using a non-existent provider that I never declare
yes, TF modules implicitly inherit the top-level providers
but it is inheriting a provider that I didn’t declare…
I declare
terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.9.0"
    }
  }
}
And I get
│ Could not retrieve the list of available versions for provider
│ hashicorp/datadog: provider registry registry.terraform.io does not have a
│ provider named registry.terraform.io/hashicorp/datadog
do you also have this ?
provider "datadog" {
api_key = local.datadog_api_key
app_key = local.datadog_app_key
validate = local.enabled
}
yes, I have that too
in cases like this, always try to delete
.terraform
.terraform.lock.hcl
/Users/xxxxxx/.terraform.d
might help
Thanks, I’ve already tried that, without success
I found the issue, I had to declare the provider in the module root
terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.9.0"
    }
  }
}
and the same at the root of my code, but only in the root can I declare the
provider "datadog" {
api_key = var.datadog_api_key
app_key = var.datadog_app_key
}
In the module, the provider configuration can’t be there; even if it is blank, like provider "datadog" {}, it will fail.
Anyone have thoughts on how to handle circular dependencies within security groups, across modules? Modules seem to need to be created as a package. So if I use a cloudposse-style module for my application, it creates a security group in the application module. I want to also pass in that security group to the elasticsearch module for ingress access, and then use the elasticsearch security group in the applications egress access rule.
Yup, that would be the way. It does mean that the destruction of resources matters a lot, but that is typically a lesser concern.
Is there a way to have subject_alternative_names that are not in the same route53 zone as the zone_name when using cloudposse/acm-request-certificate/aws?
For instance if I set:
domain_name = foo.bar.example.com
subject_alternative_names = foo.example.com
zone_name = bar.example.com
bar.example.com and example.com are two different route53 zones. The module will only try to set the certificate validation in the zone specified by zone_name and thus will fail waiting for it to be approved
subject_alternative_names = var.subject_alternative_names
cc: @Robert Berger
use input subject_alternative_names
I’m not clear what you are suggesting. I am using subject_alternative_names (actually passing in a list ["foo.example.com"]).
The problem is that when it tries to do the DNS validation, it does it in the zone specified by zone_name (bar.example.com), and for it to work it needs to do one validation in there for foo.bar.example.com but also needs to do a DNS validation in zone example.com
oh i see, so it might be getting stuck on validating the domain
count = local.process_domain_validation_options && var.wait_for_certificate_issued ? 1 : 0
try setting wait_for_certificate_issued to false
then you could try to update the records for example.org in order to get dns validation to work correctly
That would probably make the run complete but the certificate would never be validated.
Like you just said, I guess a workaround would be to update the dns validation with a resource outside of the terraform-aws-acm-request module, but would somehow need to get the validation values out of the module as well. Looks like that is an output of the module: domain_validation_options
do you think it would be possible to modify the module to allow it to work for your use case ?
or would it be better to disable validation within the module, create the records outside the module, and perform the validation outside the module?
this sounds like it’s worth a ticket, at least, in the repo issues section :)
Well, its “Just software” I suspect its possible, but I’m just ramping my terraform fu. It would take some string processing and some logic to do it in the module. Or possibly have optional input variables that would just tell it what to do. Probably easier to just document this use case and do it outside the module.
I’ll file a ticket this weekend…
i think one issue is that we’d need to know the zone id of the SANs in order to create the record which would get complicated
Thanks for the help. I worked around my immediate block by just dropping the requirement for the alternate name for now.
if the other SANs are in route53, we could allow multiple zone ids to be passed in. that might be one avenue
Yes, if the module would do it, it would need to know maybe the alternate name and the zone for the alternate names
looks like you could also get the validation info as an output from the module too
output "domain_validation_options" {
Yes, I see that now that could be used to do set up the DNS Validation outside of the acm module
Would have to play with it to see if domain_validation_options would need further processing, as it would still have the wrong base zone / dns name for one of the names
oh true. the module would require some refactoring for sure
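A hedged sketch of the workaround discussed above: let the module request the certificate without waiting, then create the validation record for the SAN in its own zone outside the module. The zone lookup and the filtering expression are illustrative assumptions:
module "acm_request_certificate" {
  source = "cloudposse/acm-request-certificate/aws"

  domain_name                 = "foo.bar.example.com"
  subject_alternative_names   = ["foo.example.com"]
  zone_name                   = "bar.example.com"
  wait_for_certificate_issued = false
}

data "aws_route53_zone" "parent" {
  name = "example.com."
}

# Create the validation record for the SAN in the example.com zone ourselves
resource "aws_route53_record" "san_validation" {
  for_each = {
    for dvo in module.acm_request_certificate.domain_validation_options :
    dvo.domain_name => dvo if dvo.domain_name == "foo.example.com"
  }

  zone_id = data.aws_route53_zone.parent.zone_id
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  records = [each.value.resource_record_value]
  ttl     = 60
}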
2022-03-24
Curious what TF provider folks are using for provisioning things in PostgreSQL ?
I need to deploy some AWS extensions in PgSQL … aws_commons & aws_lambda to allow triggering of Lambda from PostgreSQL …
Historically we have been using https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs but I absolutely hate how it handles grants for DB resources.
yep this is the one we use as well
Yeah — I think this is the one everyone uses. Unfortunately, I think the SQL model and Terraform don’t mix well. I have similar sentiments against this provider. I’m not at the point where I’m going to stop using it yet… but I’m close. If there were another option for better managing a PGSQL DB via IaC then I’d go that route.
https://github.com/ariga/atlas maybe this is interesting @Matt Gowie?
A database toolkit
@Tyrone Meijn — That’s very interesting, thanks for sharing. I don’t see support for manage roles / users, but if it adds that then I’d be on board.
2022-03-28
v1.2.0-alpha-20220328 1.2.0 (Unreleased) NEW FEATURES: precondition and postcondition check blocks for resources, data sources, and module output values: module authors can now document assumptions and assertions about configuration and state values. If these conditions are not met, Terraform will report a custom error message to the user and halt further evaluation. Terraform now supports run tasks, a Terraform Cloud…
Terraform is an open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure.
Odd, these releases generally come on wednesdays!
2022-03-29
hi guys please i need help with any pointers. I am trying to write terraform for route53 healthcheck and it requires fqdn or ip_address
resource "aws_route53_health_check" "example" {
fqdn = "[example.com](http://example.com)"
port = 80
problem is i am getting the fqdn dynamically from the output of the api gateway stage (the invoke url) which is returned as https://example.com. Do you know any function i can use to get rid of https:// so it will only input example.com to my health check resource, or how can i achieve this (edited)
sounds like you could use the split function or you could use a provider to parse the url
or regex function
The regex function applies a regular expression to a string and returns the matching substrings.
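To make that concrete, something like this should work with just built-in functions (the variable and resource names here are made up):
variable "invoke_url" {
  type        = string
  description = "e.g. https://abc123.execute-api.us-east-1.amazonaws.com/prod"
}

locals {
  # trimprefix() drops the scheme, split() drops the stage path.
  # regex("^https?://([^/]+)", var.invoke_url)[0] would also work.
  api_fqdn = split("/", trimprefix(var.invoke_url, "https://"))[0]
}

resource "aws_route53_health_check" "example" {
  fqdn              = local.api_fqdn
  port              = 443
  type              = "HTTPS" # API Gateway invoke URLs are HTTPS-only
  resource_path     = "/"
  failure_threshold = 3
  request_interval  = 30
}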
heyy lowww everyone
I had a question about the cloudposse/elasticsearch/aws Terraform module
Does it support provisioning a cluster on the nvme ssds of the i3.2xlarge instances?
there is a variable to specify the instance type https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/variables.tf#L55
variable "instance_type" {
Amazon OpenSearch Service supports the following instance types. Not all Regions support all instance types. For availability details, see Amazon OpenSearch Service pricing .
yes I see that, but those instances take some extra steps to mount the nvme ssd they come with onto the ec2 instance, then it’s another step to make sure the elasticsearch cluster is actually using the nvme
I wonder if the Terraform module already did the heavy lifting in that area
you can use this working example https://github.com/cloudposse/terraform-aws-elasticsearch/tree/master/examples/complete, specify the instance type and test
Nah it doesn’t, can I add that as a feature request somewhere?
yes, you can open an issue anytime
What does it not do? When you add the instance type, does it throw any errors?
never mind, i had this project confused with something else. This spins up the managed AWS Elasticsearch service, not a bare-metal Elasticsearch on EC2
Hi there, what do you do when you need something that hasn’t been implemented in provider terraform-provider-aws yet? I’m missing this merge request https://github.com/hashicorp/terraform-provider-aws/pull/21766
we also accept prs to the cloudposse aws utils provider https://registry.terraform.io/providers/cloudposse/utils/latest
Thanks @jose.amengual, I didn’t know about awscc! Took a look, but it does not yet support wafv2 web acl.
it’s worth a try
Yes, looks promising!
@RB I’m not sure it would make sense to add functionality to this provider that will become available inside terraform-provider-aws at some point in the future. Do you think it would fit into this use case?
The PR might take months or years before it’s merged. It might never get merged – there are a lot of good PRs that languish forever because they are too complex and there’s not enough demand.
Because of this, I think the only reasonable approach is to either manage that resource attribute yourself (if you can use lifecycle { ignore_changes }), or completely eject from Terraform for this resource. For example, there might be support in CloudFormation. And as a worst case, clickops it.
Any other approach is going to bitrot over time. And IMO it’s worse to deal with exotic bitrot than to deal with documented clickops.
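A tiny sketch of the ignore_changes route; the attribute listed is purely a stand-in for whatever you end up managing out-of-band:
resource "aws_wafv2_web_acl" "example" {
  name  = "example"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "example"
    sampled_requests_enabled   = false
  }

  lifecycle {
    # "rule" is just a placeholder: whichever argument you manage outside
    # Terraform gets listed here so plans stop trying to revert it
    ignore_changes = [rule]
  }
}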
Good point, so I think I’ll configure the WAF via Cloudformation.
Do you think it would be valid in this case to manage the “aws_cloudformation_stack” through terraform?
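I think so, that’s a pretty common pattern. A minimal sketch, with the stack name, template path, and parameters as placeholders:
resource "aws_cloudformation_stack" "waf" {
  name          = "wafv2-web-acl" # placeholder stack name
  template_body = file("${path.module}/templates/wafv2.yaml")

  parameters = {
    Environment = "prod" # placeholder parameter
  }

  # Only needed if the template creates IAM resources
  # capabilities = ["CAPABILITY_NAMED_IAM"]
}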
2022-03-30
hi all, using the cloudposse rds-cluster module and running into some issues trying to perform a major engine version upgrade. made a bug ticket here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/134 but the tl;dr is when the plan is applied AWS returns this error:
Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.
don’t believe i have the ability to set that parameter group name (or even use the default), so at a loss on the workaround here
the module creates a param group
name_prefix = "${module.this.id}${module.this.delimiter}"
perhaps you need to modify the cluster family input var to upgrade the db?
yeah, i saw that it’s creating a param group, and even lifecycle is set to create before destroy, so not sure why
i did set cluster family input as well as engine version
can post my module config and terraform plan if that’s helpful
yes please create an issue with a reproducible example and then link to it here
i created an issue here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/134 i’ll add a comment with my terraform plan output shortly
Found a bug? Maybe our Slack Community can help.
Describe the Bug
Attempting to do a major version upgrade of an Aurora Postgres instance from 11.13 to 12.9. On the latest module version (0.50.2) and 3.63.0 of the AWS provider. Below is my module config:
module "postgres" {
source = "cloudposse/rds-cluster/aws"
version = "0.50.2"
name = "api-db"
engine = "aurora-postgresql"
cluster_family = "aurora-postgresql12"
engine_version = "12.9"
allow_major_version_upgrade = true
apply_immediately = true
cluster_size = 1
admin_user = data.aws_ssm_parameter.db_admin_user.value
admin_password = data.aws_ssm_parameter.db_admin_password.value
db_name = "api"
db_port = 5432
instance_type = "db.t3.medium"
vpc_id = var.vpc_id
security_groups = concat([aws_security_group.api.id], var.rds_security_group_inbound)
subnets = var.rds_subnets
storage_encrypted = true
}
When running apply I get the error:
Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.
Environment (please complete the following information):
Anything that will help us triage the bug will help. Here are some ideas:
• OS: OSX
• Module version: 0.50.2
• AWS provider: 3.63.0
when you run terraform version, what aws provider version are you using?
i see this issue with the aws provider, which was resolved in 3.63.0
https://github.com/hashicorp/terraform-provider-aws/issues/17357
we were using 3.28.0, but saw that issue you linked - upgraded to 3.64.0 and saw the same error message
just added a comment to my issue with the terraform plan output
so now you’re using the latest aws version and it’s still giving you the same issue? it’s possible this could be another bug with the provider then
ah hmm could be. not the latest version of the AWS provider, just the one past where that issue was resolved, but i can try later versions of the aws provider and see if that changes anything
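For context, at the raw provider level a major engine upgrade like this usually pairs allow_major_version_upgrade with db_instance_parameter_group_name on aws_rds_cluster; whether and how the module exposes that is basically what the issue is about. A bare-provider sketch with placeholder values:
resource "aws_rds_cluster" "example" {
  cluster_identifier          = "api-db"
  engine                      = "aurora-postgresql"
  engine_version              = "12.9"
  allow_major_version_upgrade = true

  # During a major engine upgrade, AWS requires an explicit instance parameter
  # group; this can be the default group for the new family or a custom one
  db_instance_parameter_group_name = "default.aurora-postgresql12"

  master_username     = "postgres"  # placeholder
  master_password     = "change-me" # placeholder
  skip_final_snapshot = true
}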
2022-03-31
Hi Terraformers, how do you organize your local variables? Do you use a single file, e.g. locals.tf, where you define all local variables, or do you have locals all over the files?
I tend to have a locals.tf, at least for the root modules, to increase readability, but now I am facing an issue where I need to have locals {} in a separate file, again to increase readability, in conjunction with aws_organizations_policy.
Example as follows, please ignore generic resource names. Should serve just as an example.
locals {
policy_template = templatefile("${path.module}/templates/template.tftpl", {
    # template variables omitted
})
}
resource "aws_organizations_policy" "policy" {
name = local.policy_name
content = local.policy_template
type = "BACKUP_POLICY"
}
sometimes it’s also in other files but most of the time we have 4 files: main, outputs, variables, context
I agree with sticking them at the top of the files. For child modules it’s easy to put all the local variables in a separate file; for root modules I would also stick to this pattern, but in some rare cases moving a tightly coupled local into a separate file improves visibility while breaking the context around it.
Trade-offs, trade offs
Yea, I think sticking them atop the file where they are used is one of the best places. When they are used by multiple files, sticking them in a locals.tf would make sense
Hey all, while using the cloudposse terraform-aws-efs module https://github.com/cloudposse/terraform-aws-efs, I have stumbled on an issue and trying to figure out if there is a workaround for it.
So I have an EFS file system created and within it I have 5 access points defined (one for each microservice, so they have restricted access to subdirectories). Now if I add a name to the EFS file system using the name variable, then all the access points also get the same name.
And from the code it looks like this is because of https://github.com/cloudposse/terraform-aws-efs/blob/master/main.tf#L86, where the access points use the same set of tags as the EFS file system.
Doesn’t it make sense to use "${each.key}" or something dynamic so each access point can have a different name?
the tags are the same for all resources created by the module
the tags are not related to name
if you change name, all resources provisioned by the module will have different IDs/names generated by the label module in the format <namespace>-<environment>-<stage>-<name>
this is by design. name is part of tags. why is this causing a problem?
that is correct, but in this case we can create more than one access point for the same EFS file system, and it would be nice to be able to name them differently so they have a logical name. As you can see here, all the access points have the same name efs-test, whereas I could name them data, common, and media if I could
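Something like this is what I’m after, purely as a sketch of the idea (the variables here are placeholders, not the module’s actual internals):
variable "name" {
  type    = string
  default = "efs-test" # placeholder base name
}

variable "tags" {
  type    = map(string)
  default = {}
}

resource "aws_efs_file_system" "this" {
  tags = merge(var.tags, { Name = var.name })
}

resource "aws_efs_access_point" "this" {
  for_each = toset(["data", "common", "media"]) # the logical names I'd want

  file_system_id = aws_efs_file_system.this.id

  root_directory {
    path = "/${each.key}"
  }

  # Keep the shared tags, but give every access point its own Name
  tags = merge(var.tags, { Name = "${var.name}-${each.key}" })
}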
One more consideration, where are the microservices running? e.g. EKS or ECS?