#pr-reviews (2024-06)
Pull Request Reviews for Cloud Posse Projects
2024-06-03
Hey everyone, could someone please take a look at the AWS account used for terratest? Specifically, tests for https://github.com/cloudposse/terraform-aws-rds-cluster/pull/184 are failing because the VPC quota is exhausted: https://github.com/cloudposse/terraform-aws-rds-cluster/pull/184#issuecomment-1968534802.
I tried to re-trigger the tests, but the bot ignores my comments (good, because I’m not a project member or anything)
Hi @Maurice Just asked our engineers to have a look at this
I am still looking into the VPC issue, but regardless, I think we are not going to approve this PR. It is a breaking change with insufficient justification for it. Null values are already allowed, and no changes are needed for that.
The VPC error was in Feb, and the situation causing it was resolved in the beginning of April. Nothing left to do.
@Gabriela Campana (Cloud Posse) Thanks!
@Jeremy G (Cloud Posse) I can’t retrigger the tests, so I’d need some help with this. Anyways, as already discussed in the PR, I’ll update it with more context about why it is needed - ideally I’ll realize that it isn’t and can close it out.
2024-06-04
Hi All, could someone take a look at https://github.com/cloudposse/terraform-aws-rds-cluster/pull/213? I had to rebase my branch.
@Gabriela Campana (Cloud Posse) can you help follow up with this one?
Asked our team to follow-up on the error message
https://github.com/cloudposse/actions/actions/runs/9371790219/job/25801835125#step:9:terratest
Process completed with exit code 2.
terratest
Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3.
OK, I fixed the problem and did another push. It’s waiting for another approval: https://github.com/cloudposse/terraform-aws-rds-cluster/actions/runs/9372514860
Bumped this up internally today
2024-06-05
hey everyone, just submitted my first PR! https://github.com/cloudposse/terraform-aws-rds-cluster/pull/216 would love a review of the change itself & the question in the description about a strange default value I noticed
what
• Adds restore_to_time as an attribute of the restore_to_point_in_time parameter, allowing restores to a specific datetime rather than the latest restorable time.
why
• Fixes missing functionality that’s supported by the aws_rds_cluster resource.
references
• Closes #163
questions
I noticed while making this PR that source_cluster_identifier defaults to "120m", which looks like an inadvertent copy-and-paste from another parameter. In the aws_rds_cluster resource, this parameter is required, so it doesn’t make sense to have a default anyway. Would you recommend leaving this parameter as-is, or should we fix it?
Thanks for creating this PR! This looks good to me, but we can make use of the optional variable syntax to easily support both types of object, with use_latest_restorable_time or with restore_to_time
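For context, a minimal sketch of what that optional attribute syntax could look like (the variable shape and attribute names are assumed from this discussion, not the actual module code):

variable "restore_to_point_in_time" {
  # Sketch only: optional() lets callers supply just the attributes they need,
  # so use_latest_restorable_time and restore_to_time can coexist in one type.
  type = list(object({
    source_cluster_identifier  = string
    restore_type               = optional(string)
    use_latest_restorable_time = optional(bool)
    restore_to_time            = optional(string)
  }))
  default = []
}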
left a comment on the PR. Feel free to @ me here!
thanks! pushing up the change in a sec :slightly_smiling_face: wasn’t sure if the lack of using optional params there was legacy or intentional
likely legacy
any thoughts on the default for source_cluster_identifier being 120m? It’s a required parameter in the aws_rds_cluster resource, and it looks like it was unintentionally copied and pasted from the timeouts block above (i.e. should we just make it required and remove the optional / default value?)
oh I see! Yeah that’s definitely a mistake. Looking, one second
how about making source_cluster_identifier not optional with no default?
yeah that’s what I was thinking
it shouldn’t be a breaking change either because there’s no way anyone is using the default value lol
yeah agreed
cool, thanks for the review! the PR is up to date now and ready for another look
np, great! tests are running now. Will approve once / if they pass
thanks! I also noticed that the default value for restore_type (copy-on-write) differs from the default in the aws_rds_cluster resource (full-copy). Is it worth noting that difference in the description?
I think full-copy is required for restoring from a continuous backup
It wouldn’t hurt to mention, but it’s not required
thanks again for the review! this was my first open-source contribution. Is the release process automated?
ah i see it’s already out, nice
I have to revisit this tomorrow :weary: when instantiating the module, I specify a timestamp for restore_to_time, and it conflicts with use_latest_restorable_time regardless of whether I set it explicitly to false, or null, in which case it defaults to true
I was assuming that Terraform would detect that it’s not a conflict if you specify the default value explicitly (in this case, the default for use_latest_restorable_time is false but it conflicts unless you set it to null)
oh I see. Yeah we’ll probably have to dynamically create each block based on which input is specified
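For instance, a rough sketch of that dynamic-block idea (the variable and attribute wiring here are assumptions for illustration, not the final module code):

resource "aws_rds_cluster" "default" {
  # ... other arguments omitted ...

  # Sketch: emit the block only when configured, and null out
  # use_latest_restorable_time whenever restore_to_time is set, so the
  # provider never sees the two conflicting arguments together.
  dynamic "restore_to_point_in_time" {
    for_each = var.restore_to_point_in_time == null ? [] : [var.restore_to_point_in_time]
    content {
      source_cluster_identifier  = restore_to_point_in_time.value.source_cluster_identifier
      restore_type               = restore_to_point_in_time.value.restore_type
      restore_to_time            = restore_to_point_in_time.value.restore_to_time
      use_latest_restorable_time = restore_to_point_in_time.value.restore_to_time == null ? restore_to_point_in_time.value.use_latest_restorable_time : null
    }
  }
}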
Took another stab at this and tested all of the permutations locally with a project I’m working on https://github.com/cloudposse/terraform-aws-rds-cluster/pull/217
what
• Fixes handling of use_latest_restorable_time and restore_to_time while preserving existing default values for backwards compatibility
why
• Original attempt at fixing this (#216) was insufficient
• The previous default value of true for the use_latest_restorable_time attribute prevented using restore_to_time. To use restore_to_time, use_latest_restorable_time must be null. Even if you set this value to null, the existing default overrode it with true instead.
references
• Follow-up to original attempt: #216
• Correctly fixes #163
Hey all, I have another PR for review: https://github.com/cloudposse/terraform-aws-eks-iam-role/pull/58
what
Add support for attaching managed policies to the EKS IAM role.
why
The module currently only allows a single policy JSON to be attached, but we can have multiple IAM policies that we need to attach to the role.
references
Closes #57
2024-06-06
2024-06-07
Hello, could you review this PR please? It adds support for AL2023 to the EKS Node Group module: https://github.com/cloudposse/terraform-aws-eks-node-group/pull/185
what
Added support for the AL2023 worker node AMI, based on the latest module version, without adding a breaking change
why
We need it
references
cc @Gabriela Campana (Cloud Posse)
Hi @Quentin BERTRAND Just asked our engineers to have a look at this
@Quentin BERTRAND Thank you for this PR.
As you probably know, PRs #178, #180, and #182 have already started on this, but were missing a lot.
I think you have this most of the way there, although I do see some things that need fixing. You did get it far enough down the field that I think I can take it the rest of the way. Follow up on Thursday if you don’t see progress by then.
Go ahead and take a look at the initial support for EKS on AL2023 and share your constructive comments.
New Features, Breaking Changes
The major new feature in this release is support for Amazon Linux 2023 (AL2023). EKS support for AL2023 is still evolving, and this module will evolve along with that. Some detailed configuration options (e.g. KubeletConfiguration JSON) are not yet supported, but the basic features are there.
The other big improvement is in selecting AMIs, as explained below.
Along with that, we have dropped some outdated support, resulting in minor breaking changes that we expect will not affect many users.
Changes in AMI selection
Previously, unless you specified a specific AMI ID, this module picked the “newest” AMI that met the selection criteria, which in turn was based on the AMI Name. The problem with that was that the “newest” might not be the latest Kubernetes version. It might be an older version that was patched more recently, or simply finished building a little later than the latest version.
Now that AWS explicitly publishes the AMI ID corresponding to the latest (or, more accurately, “recommended”) version of their AMIs via SSM Public Parameters, the module uses that instead. This is more reliable and should eliminate the version regression issues that occasionally happened before.
• The ami_release_version input is now obsolete, because it was based on the AMI name, which is different from the SSM Public Parameter path.
• The new ami_specifier takes the place of ami_release_version, and is specifically whatever path element in the SSM Public Parameter path replaces “recommended” or “latest” in order to find the AMI ID. Unfortunately, the format of this value varies by OS.
Examples of AMI specifier based on OS:
• AL2: amazon-eks-node-1.29-v20240117
• AL2023: amazon-eks-node-al2023-x86_64-standard-1.29-v20240605
• Bottlerocket: 1.20.1-7c3e9198
• Windows:
The main utility of including a specifier rather than an AMI ID is that it allows you to have a consistent version across multiple regions without having to have region-specific configuration.
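As an illustration, the SSM Public Parameter lookup looks roughly like this (the path follows AWS’s published convention; the Kubernetes version and specifier are examples):

data "aws_ssm_parameter" "eks_ami" {
  # "recommended" can be swapped for an ami_specifier such as
  # "amazon-eks-node-al2023-x86_64-standard-1.29-v20240605" to pin a release.
  name = "/aws/service/eks/optimized-ami/1.29/amazon-linux-2023/x86_64/standard/recommended/image_id"
}

locals {
  # The returned AMI ID is region-specific, but the specifier itself is not,
  # which is what makes it usable across regions.
  ami_id = nonsensitive(data.aws_ssm_parameter.eks_ami.value)
}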
Kubernetes Version No Longer Inferred from AMI
Previously, if you specified an AMI ID, the Kubernetes version would be deduced from the AMI ID name. That is not sustainable as new OSes are launched, so the module no longer tries to do that. If you do not supply the Kubernetes version, the EKS cluster’s Kubernetes version will be used.
Special Support for Kubernetes Cluster Autoscaler removed
This module used to take some steps (mostly labeling) to try to help the Kubernetes Cluster Autoscaler. As the Cluster Autoscaler and EKS native support for it evolved, the steps taken became either redundant or ineffective (or were always ineffective), so they have been dropped.
• cluster_autoscaler_enabled has been deprecated. If you set it, you will get a warning in the output, but otherwise it has no effect.
AWS Provider v5.8 or later now required
Previously, this module worked with AWS Provider v4, but no longer. Now v5.8 or later is required.
Special Thanks
This PR builds on the work of @Darsh8790 (#178 and #180) and @QuentinBtd (#182 and #185). Thank you to both for your contributions.
what
• Add initial support for EKS under Amazon Linux 2023 (AL2023)
• Improve AMI selection process
• Deprecate Kubernetes Cluster Autoscaler support
why
• Amazon Linux 2023 (AL2023) is the latest offering from Amazon
• Previously, AMIs were selected by name and date, which occasionally led to undesirable results
• The support was either redundant or ineffective
references
• Supersedes and closes #178
• Supersedes and closes #180
• Supersedes and closes #182
• Supersedes and closes #185
Nice! I just saw last week that replacing the image_id with a public SSM param was the way to keep AMIs up to date. Very nice to see it happening in the eks module too
@Quentin BERTRAND You said:
Instances created with a launch template and whose AMI is specified in the template do not have kubelet/nodeadm configuration, so do not join the cluster
Could you be more specific? Which AMI, which AMI Type, which K8s version, which region? This may not be something the module can do anything about.
The module creates a launch template with an AMI ID unless you supply a launch template yourself. The Terratest (and other testing) confirm that these nodes join the cluster, with EKS supplying the configuration if you do not include any userdata. If you do include userdata for AL2023, then you have to include all the required elements of the NodeConfig as part of it.
Other possibilities include Security Group issues or using the wrong service CIDR (it’s confusingly vague).
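To illustrate that last point, a hypothetical minimal AL2023 userdata carrying a complete NodeConfig (every spec value below is a placeholder you would replace with your cluster’s details):

locals {
  # Sketch of a nodeadm NodeConfig for a bring-your-own-AMI launch template.
  al2023_userdata = <<-EOT
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        name: my-cluster
        apiServerEndpoint: https://EXAMPLE.gr7.eu-west-3.eks.amazonaws.com
        certificateAuthority: <base64-encoded CA data>
        cidr: 172.20.0.0/16  # the cluster *service* CIDR, not the VPC CIDR
  EOT
}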
Hello,
My cluster is running K8s version 1.30 in the eu-west-3 region.
Here are some of my values:
Here are some of my values:
ami_specifier = "amazon-eks-node-al2023-x86_64-standard-1.30-v20240605"
ami_type = "AL2023_x86_64_STANDARD"
associate_cluster_security_group = false
associated_security_group_ids = [
dependency.sg_default.outputs.id,
dependency.sg_eks_nodes.outputs.id
]
node_group_terraform_timeouts = [{
create = "1h"
update = "1h"
delete = "1h"
}]
block_device_mappings = [
{
"delete_on_termination" : true,
"device_name" : "/dev/xvda",
"encrypted" : true,
"volume_size" : 20,
"volume_type" : "gp3"
}
]
tags = {
"k8s.io/cluster-autoscaler/node-template/label/nodetype" = "default"
}
I have no problem with the 2.12.0 module version.
I tried to create a new node group with min size and desired capacity set to 0; the creation of the node group ends in failure:
NodeCreationFailure Instances failed to join the kubernetes cluster
I set ami_type = "AL2023_x86_64_STANDARD", but in the EKS console the AMI type is CUSTOM
@Jeremy G (Cloud Posse) Thanks for your GitHub comment:
If you supply an AMI ID in the launch template, then EKS does not add anything to the userdata, since it cannot know what the AMI is expecting. If you do not supply an AMI ID in the launch template, EKS creates a copy of the launch template, adding the AMI ID and its userdata to set up the node.
Maybe the module could let us choose between setting ami_specifier or ami_version?
I’m going to revert the change from ami_release_version to ami_specifier, because ami_release_version is an input to aws_node_group. (Even though I wrote most of this code, it was 4 years ago and I forgot a lot.)
@Quentin BERTRAND The AL2023 support PR is ready to be reviewed again.
@Jeremy G (Cloud Posse) Thank you for this!
Tested on a staging env with AL2023, it works as expected
cloudposse/eks-node-group/aws v3.0.0 released with AL2023 support and several other minor improvements and bug fixes.
2024-06-09
Hey all! Appreciate a review on PR https://github.com/cloudposse/terraform-aws-components/pull/1063 for adding backup configs to RDS cluster in the components repo
what
The CloudPosse RDS cluster module https://github.com/cloudposse/terraform-aws-rds-cluster has these backup configurations that would be helpful to have!
why
- Allows me to set backup configurations
- Without these inputs, the cluster is forced to use the default configs from the module
- Allows me to set these variable values to null or empty when I don’t want any backups happening
- Without these inputs to set empty values, there’s a conflict when using RDS clusters with AWS Backup plans, because the RDS resource maintains its backup values while AWS Backup plans want other values.
references
• CloudPosse’s RDS module
• AWS Backup conflict with default backup values unless explicitly set: https://github.com/hashicorp/terraform-provider-aws/issues/33465
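For example, with inputs like these exposed by the component, the RDS-managed backup settings could be nulled out so AWS Backup owns them (variable names assumed to mirror the upstream rds-cluster module, shown tfvars-style):

# Hypothetical component inputs
retention_period = null  # let the AWS Backup plan manage retention
backup_window    = null  # no RDS-managed backup window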
Hi @Yangci Ou Just asked our team to review this PR
Can someone please review this one? https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/235
what
• Add support for Serverless Redis instances
why
• Supporting Redis Serverless for demanding applications
notes
• This upgrades the required version of the AWS provider from “>= 5.27.0” to “>= 5.32”, as this is the first release where aws_elasticache_serverless_cache was introduced
Hi @Frank Just asked our team to review this PR
2024-06-10
terraform-cloudflare-zone !43 is ready for review
Remove deprecated filter and firewall resources
what
Support for the following resources, which are deprecated, is removed from this module:
• cloudflare_firewall_rule
• cloudflare_filter
why
Cloudflare converted existing firewall rules into WAF custom rules
The Firewall Rules API and the associated Cloudflare Filters API are now deprecated. These APIs will stop working on 2024-07-01
Relevant changes for Terraform users
The following Terraform resources from the Cloudflare provider are now deprecated:
- cloudflare_firewall_rule
- cloudflare_filter
Removing the functionality related to the deprecated resources from this module addresses multiple issues, including indirectly supporting #40 and #20
references
• closes #25
@Hans D, @RB the GitHub automation tagged you as reviewers for this PR.
The code looks good to me.
The tests are failing due to what looks like a permission limitation on the Cloudflare API token. I added a comment and cc’ed other Cloud Posse members who have more permissions on Cloudflare
Thanks.
@Andriy Knysh (Cloud Posse) do you suppose this is something you can fix with the cloudflare token, or is there anything else I might do to make this test pass?
2024-06-11
2024-06-12
2024-06-13
2024-06-14
Hey folks! Nice to be around, and congrats on the wonderful work!
I am working with the https://github.com/cloudposse/terraform-opsgenie-incident-management module and I’ve crafted some PRs with some improvements and also an abstraction suggestion:
Improvements:
• [Notification Policy] Improve module
• [escalation] Single recipient per rule
• [Existing Resources] Add Team and Schedule
• [schedule_rotation] Allow “none” as rotation participant
New abstraction:
• [config] Add squad intermediate abstraction
Let me know what you think about it and if I need to make changes
Hey folks, any update regarding these PRs? Do you want me to make any changes to them, or to their flows?
@Gabriela Campana (Cloud Posse)
Hey @Diego Rabatone Oliveira sorry for the hold up, taking a look at these now
@Ben Smith (Cloud Posse) it seems https://github.com/cloudposse/terraform-opsgenie-incident-management/pull/44 failed test/terratest/opentofu
what
Added many missing structures, which also required a bit of “refactoring”.
why
references
• https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/resources/notification_policy#de_duplication_action
• https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/resources/notification_policy#delay_action
That link gave a 404, which PR are you referring to?
@Diego Rabatone Oliveira
• [escalation] Single recipient per rule #58 has tests failing
Commit pushed to Escalation PR. I’ll take a look at the other one.
Regarding the Notification Policy PR, I don’t get what is wrong over there.
It seems it failed in an unrelated test. I wonder if that was just an opportunistic failure or something like that.
Yeah, the notification PR I’m working on a fix for - there’s an outdated test. Will update the test suite, then re-review and update PRs
Oh! Makes sense…
PRs updated (with some lint fixes):
• https://github.com/cloudposse/terraform-opsgenie-incident-management/pull/58
• https://github.com/cloudposse/terraform-opsgenie-incident-management/pull/61
• https://github.com/cloudposse/terraform-opsgenie-incident-management/pull/60
• https://github.com/cloudposse/terraform-opsgenie-incident-management/pull/62
@Ben Smith (Cloud Posse)
I know you eventually still need to handle the CI issue btw
Just merged in a PR that will run all tests, only ones that we cannot test are Escalations, config, and some others. but should be a bit more comprehensive
PRs updated
Regarding https://github.com/cloudposse/terraform-opsgenie-incident-management/pull/58#pullrequestreview-2194658191 - I was thinking of doing this in the test PR but decided it was out of scope to fix. Right now we have some inconsistencies - config/escalations uses rules but modules/escalation uses rule without a dynamic, which it looks like you are fixing; we’ll want to update rule to rules across the board if this PR is willing to take that on
Hey Ben, sure thing!
It does make sense, and I probably missed this change while I was splitting the changes into multiple PRs.
I’ll review it to enforce rules both on config and on the module itself.
Btw, over there I have a doubt, and it is an hcl doubt to be honest.
Within the dynamic "rules" block we have another block, recipient, with the type attribute. To fill it, should I use rules.value.recipient.type or rules.value.recipient["type"]? (locally I saw an issue with it)
should be rules.value.recipient.type. We might also want to set up enforced types, but perhaps that’s another day.
did you see issues with rules.value.recipient.type?
│ 19: type = rules.value.recipient.type
│ ├────────────────
│ │ rules.value is tuple with 7 elements
│
│ This value does not have any attributes.
what is the for_each line for the dynamic rules?
for_each = try([each.value.rules], [])
that should probably be for_each = try(each.value.rules, []) since rules is now an array type (I changed that yesterday because we didn’t have it as an array but an object in config)
I’m wondering if the mistake is in the yaml definition, and if it should be:
recipient:
- type: ...
name: ...
instead of
recipient:
type: ...
name: ...
But, indeed, you are right, the for_each is wrong.
yeah
rules:
- condition: if-not-acked
notify_type: default
delay: 0
recipient:
type: team
team_name: acme.dev.some-service
Rules is an array (we have many) and is a dynamic; recipient is a singular object, which we found out by trying to set multiple and getting an opsgenie error
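Putting the thread together, a hypothetical shape for that dynamic block (attribute names are illustrative, taken only from the yaml above):

dynamic "rules" {
  for_each = try(each.value.rules, [])  # rules is a list, so no extra [ ] wrapper
  content {
    condition   = rules.value.condition
    notify_type = rules.value.notify_type
    delay       = rules.value.delay
    recipient {
      # recipient is a single object per rule, so reference it directly
      type = rules.value.recipient.type
    }
  }
}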
Reviewed most of the PRs. Couple of questions, comments, or updates for each
OMG! I didn’t know about the opsgenie-team module beforehand
I spent a lot of time building that component. In general, look to terraform-aws-components for pre-built stuff (it’s opinionated though); modules are our building blocks
It uses this module VERY heavily.
I can imagine!
Well, I’ll answer there anyway, thanks for the reviews, feedback and attention. As soon as I have the PRs updated I’ll let you know
I’ll try to get to them - I’m on PTO next week though, so I’ll be looking throughout this week and in August
No problem at all! Just make good use of your PTO if I can’t finish before then.
I did my best over there - although I have to be honest, I’m totally disappointed with myself for not finding the opsgenie-team module before
(taking the opportunity…
I saw on opsgenie-team an example of a “business hours” definition as:
# 9-5 Mon-Fri
business_hours: &business_hours
type: "weekday-and-time-of-day"
restrictions:
- start_hour: 9
start_min: 00
start_day: "monday"
end_hour: 17
end_min: 00
end_day: "friday"
This is technically wrong, right? This reads as from Monday 9h until Friday 17h instead of from Monday to Friday, from 9h to 17h.
@Ben Smith (Cloud Posse)
I’m totally disappointed with myself for not finding the opsgenie-team before
Maybe we could do a better job advertising it
—
You might be right. I assumed it was 9h -> 17h on each day from monday -> friday, but I could see how it could be as you said. I haven’t checked in a while whether that’s how it deploys; if it does deploy incorrectly then we should update the example to be
# 9-5 Mon-Fri
business_hours: &business_hours
type: "weekday-and-time-of-day"
restrictions:
- start_hour: 9
start_min: 00
start_day: "monday"
end_hour: 17
end_min: 00
end_day: "monday"
- start_hour: 9
start_min: 00
start_day: "tuesday"
end_hour: 17
end_min: 00
end_day: "tuesday"
...
I avoided doing that, hoping the other way is how it worked
2024-06-16
2024-06-17
2024-06-18
2024-06-19
Hi. Could you please review this PR to update the test framework in terraform-aws-documentdb-cluster:
https://github.com/cloudposse/terraform-aws-documentdb-cluster/pull/100
what
Update this module’s test framework. Reduce boilerplate and make reusable functions.
Changelog:
- Create framework_test.go, following how it’s done in https://github.com/cloudposse/terraform-aws-eks-node-group/blob/e9f908c026d8ca5dc30190a050de68a510ff3983/test/src/framework_test.go
- Add functions generateUniqueID, createTestFolder and buildTerraformOptions as part of the test framework and to reduce boilerplate.
- Reorganize test structure into:
  - input variables,
  - terraform initialization,
  - deferred steps,
  - terraform commands,
  - terraform output and results validation.
- Remove enabled=true from fixtures and add a new test TestExamplesCompleteDisabled for when enabled=false.
why
As a result of the necessity of updating the framework in #94 (comment) and following the instructions from #94 (comment)
references
• #94
• https://github.com/cloudposse/terraform-aws-eks-node-group/tree/e9f908c026d8ca5dc30190a050de68a510ff3983/test/src
• https://github.com/cloudposse/terraform-aws-dynamic-subnets/tree/8528dadc5cbdcf657faa1d917c9f069c254f41a3/test/src
Hi @Eligio Mariño https://github.com/cloudposse/terraform-aws-documentdb-cluster/pull/100 is approved
what
Update this module’s test framework. Reduce boilerplate and make reusable functions.
Changelog:
- Upgrade go to 1.21.
- Update test/Makefile and test/src/Makefile from https://github.com/cloudposse/terraform-aws-eks-node-group/tree/e9f908c026d8ca5dc30190a050de68a510ff3983/test
- Create framework_test.go and default_test.go, following how it’s done in https://github.com/cloudposse/terraform-aws-eks-node-group/blob/e9f908c026d8ca5dc30190a050de68a510ff3983/test/src/framework_test.go
- Remove enabled=true from fixtures and add a new test TestExamplesCompleteDisabled for when enabled=false
- Add Test_ExistingDeployment for a faster test cycle, and modify the Makefile accordingly
- Update dependencies
- Migrate the test DB engine from 3.6.0 to 5.0.0
why
• Enable testing with OpenTofu
• Support automated maintenance
references
• Closes #88
• Closes #98
• #94
• https://github.com/cloudposse/terraform-aws-eks-node-group/tree/e9f908c026d8ca5dc30190a050de68a510ff3983/test/src
• https://github.com/cloudposse/terraform-aws-dynamic-subnets/tree/8528dadc5cbdcf657faa1d917c9f069c254f41a3/test/src
Thank you @Gabriela Campana (Cloud Posse) for the update. It was mentioned that we’ll wait a few more days for a few changes from another contributor. I’m looking forward to the merge of this intermediate PR, so we can focus on my original PR https://github.com/cloudposse/terraform-aws-documentdb-cluster/pull/94
@Eligio Mariño Thank you for this PR. It looks pretty good. I would like to add to it over the weekend if you don’t mind waiting.
Sure. Go ahead. Thank you so much for taking it into consideration.
I actually got it updated already. @Eligio Mariño Please take a look and let me know if you have any ideas for improving it.
I took away some of your utility functions, as I didn’t see them as likely to be used. Instead I have a simple testRunner which supplies defaults and a more complex explicitTestRunner which allows (and requires) more configuration.
Thank you @Jeremy G (Cloud Posse)! Your changes look great.
I misunderstood the instructions: I saw the testRunner and thought the intention was to split it into smaller reusable functions so the test inputs, outputs, and cleanup were readable in only one function. My approach was to remove those two functions per test and leave only one, like TestExamplesComplete and testExamplesComplete, which are necessary once testRunner is introduced. I understand your reasoning, and I think they make sense because you are already using testRunner in other modules.
Your changes look good to me. I also saw that the PR is approved, but it was mentioned that we’ll wait for other changes. I’m really looking forward to all the improvements and merging this intermediate PR, to focus again on my original PR.
what
This PR adds the argument allow_major_version_upgrade that was released in https://github.com/hashicorp/terraform-provider-aws/releases/tag/v5.21.0
When testing, I noticed that -get-plugins is not supported anymore, as mentioned in the Terraform documentation, so I removed TF_CLI_ARGS_init in the test Makefile.
test/src/go.mod was updated as a result of go mod tidy when setting up to run the tests in test/src.
why
When upgrading the engine_version to a new major version, allow_major_version_upgrade needs to be enabled for AWS to apply the upgrade.
references
• https://github.com/hashicorp/terraform-provider-aws/releases/tag/v5.21.0
• https://awscli.amazonaws.com/v2/documentation/api/latest/reference/docdb/modify-db-cluster.html
• https://docs.aws.amazon.com/documentdb/latest/developerguide/docdb-mvu.html
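A minimal usage sketch of the new argument (module source shortened and other inputs omitted; the engine versions are the ones from this thread):

module "documentdb_cluster" {
  source = "cloudposse/documentdb-cluster/aws"
  # ... other required inputs omitted ...

  # Moving from DocumentDB 3.6 to 5.0 is a major version upgrade, so AWS
  # only applies it in place when this flag is enabled.
  engine_version              = "5.0.0"
  allow_major_version_upgrade = true
}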
@Eligio Mariño PR 100 has been merged.
Amazing! Thank you @Jeremy G (Cloud Posse).
2024-06-20
2024-06-21
2024-06-24
Please re-run terratest for this PR (the issue in the dependency should be fixed): https://github.com/cloudposse/terraform-aws-ecs-web-app/pull/258
@Gabriela Campana (Cloud Posse) could you please take a look?
@Erik Osterman (Cloud Posse) I can see you already escalated this via https://github.com/cloudposse/terraform-aws-ecs-web-app/pull/258#issuecomment-2100641731
Please check the test failures.
worth trying again?
Hi @Moritz @Igor Rodionov re-ran terratest on Thursday, and it seems terratest/opentofu and terratest/terraform are failing.
what is the way forward in that case?
So the automatic update by Dependabot shows that upgrading to the v5 provider is a breaking change.
It will require someone to open a PR to address the underlying changes. Cloud Posse predominantly performs services on behalf of customers, so if this is something you need, we recommend opening a PR to address the breaking changes.
(we don’t have a customer sponsoring this work for the ECS app)
I can take a look, it might also be an issue with the “complete” example used for the test.
Could someone please run terratest for some of the module dependency PRs (and merge if it passes)? The bot ignored the trigger comment by mergify:
https://github.com/cloudposse/terraform-aws-ecs-web-app/pull/279
https://github.com/cloudposse/terraform-aws-ecs-web-app/pull/272
this might also be an intermittent issue/race condition present in older provider versions: https://github.com/hashicorp/terraform-provider-aws/issues/427 https://github.com/hashicorp/terraform/issues/10737
It’s even documented here: https://github.com/cloudposse/terraform-aws-components/blob/main/deprecated/aws/ecs/README.md#objectnotfoundexception-n[…]ered-for-service-namespace
Could someone please re-run terratest for the v5 provider PR?
it’s looking good now
This PR contains the following updates:
Release Notes
hashicorp/terraform-provider-aws (aws)
v5.43.0
FEATURES:
• New Data Source: aws_resourceexplorer2_search (#36560)
• New Data Source: aws_servicecatalogappregistry_application (#36596)
• New Resource: aws_cloudfrontkeyvaluestore_key (#36534)
• New Resource: aws_devopsguru_notification_channel (#36557)
• New Resource: aws_ec2_instance_metadata_defaults (#36589)
• New Resource: aws_lakeformation_resource_lf_tag (#36537)
• New Resource: aws_m2_application (#35399)
• New Resource: aws_m2_deployment (#35408)
• New Resource: aws_m2_environment (#35311)
• New Resource: aws_redshiftserverless_custom_domain_association (#35865)
• New Resource: aws_servicecatalogappregistry_application (#36277)
ENHANCEMENTS:
• data-source/aws_cloudfront_function: Add key_value_store_associations attribute (#36585)
• data-source/aws_db_snapshot: Add original_snapshot_create_time attribute (#36544)
• resource/aws_cloudfront_function: Add key_value_store_associations argument (#36585)
• resource/aws_ec2_host: Add user configurable timeouts (#36538)
• resource/aws_glacier_vault_lock: Allow policy to have leading whitespace (#36597)
• resource/aws_iam_group_policy: Allow policy to have leading whitespace (#36597)
• resource/aws_iam_policy: Allow policy to have leading whitespace (#36597)
• resource/aws_iam_role: Allow assume_role_policy and inline_policy.*.policy to have leading whitespace (#36597)
• resource/aws_iam_role_policy: Allow policy to have leading whitespace (#36597)
• resource/aws_iam_user_policy: Allow policy to have leading whitespace (#36597)
• resource/aws_kinesisanalyticsv2_application: Add support for FLINK-1_18 runtime_environment value (#36562)
• resource/aws_media_store_container_policy: Allow policy to have leading whitespace (#36597)
• resource/aws_ssoadmin_permission_set_inline_policy: Allow inline_policy to have leading whitespace (#36597)
• resource/aws_transfer_access: Allow policy to have leading whitespace (#36597)
• resource/aws_transfer_user: Allow policy to have leading whitespace (#36597)
• resource/aws_vpc_ipam: Add tier argument (#36504)
BUG FIXES:
• data-source/aws_cur_report_definition: Direct all API calls to the us-east-1 endpoint as this is the only Region in which AWS Cost and Usage Reports is available (#36540)
• resource/aws_applicationinsights_application: Make ACTIVE a valid create target status (#36615)
• resource/aws_autoscaling_group: Don’t attempt to remove scale-in protection from instances that don’t have the feature enabled (#36586)
• resource/aws_cur_report_definition: Direct all API calls to the us-east-1 endpoint as this is the only Region in which AWS Cost and Usage Reports is available (#36540)
• resource/aws_elasticsearch_domain_policy: Handle delayed domain status propagation, preventing a ValidationException. (#36592)
• resource/aws_iam_instance_profile: Detect when the associated role no longer exists (#34099)
• resource/aws_instance: Replace an instance when an instance_type change also requires an architecture change, such as x86_64 to arm64 (#36590)
• resource/aws_opensearch_domain_policy: Handle delayed domain status propagation, preventing a ValidationException. (#36592)
• resource/aws_quicksight_dashboard: Fix failure when updating a dashboard takes a while (#34227)
• resource/aws_quicksight_template: Fix “Invalid address to set” errors (#34227)
• resource/aws_quicksight_template: Fix “a number is required” errors when state contains an empty string (#34227)
• resource/aws_redshift_cluster: Fix InvalidParameterCombination errors when updating only skip_final_snapshot (#36635)
• resource/aws_route53_zone: Prevent re-creation when name casing changes (#36563)
• resource/aws_secretsmanager_secret_version: Fix to handle versions deleted out-of-band without raising an InvalidRequestException (#36609)
• resource/aws_ssm_parameter: Force create a new SSM parameter when data_type is updated. (#35960)
FEATURES:
• New Data Source: aws_redshift_producer_data_shares (#36481)
• New Resource: aws_devopsguru_event_sources_config (#36485)
• New Resource: aws_devopsguru_resource_collection (#36489)
• New Resource: aws_dynamodb_table_export (#30399…
Will merge now
great work, thank you again
Hi, could you review this PR please?
It adds a depends_on for the EB env:
https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/267
what
Add depends_on in elastic_beanstalk_environment
why
If the instance profile is created after the environment, the creation of the environment will fail.
references
I just encountered the problem:
The instance profile my-env-eb-ec2 associated with the environment does not exist.
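A rough sketch of the kind of fix being proposed (resource addresses assumed for illustration, not the module’s actual names):

resource "aws_elastic_beanstalk_environment" "default" {
  # ... other arguments omitted ...

  # Ensure the instance profile exists before the environment is created,
  # so the environment never references a not-yet-created profile.
  depends_on = [aws_iam_instance_profile.ec2]
}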
@Andriy Knysh (Cloud Posse)