#pr-reviews (2024-06)

Pull Request Reviews for Cloud Posse Projects

2024-06-03

Maurice avatar
Maurice

Hey everyone, could someone please take a look at the AWS account used for terratest? Specifically, tests for https://github.com/cloudposse/terraform-aws-rds-cluster/pull/184 are failing because the VPC quota is exhausted: https://github.com/cloudposse/terraform-aws-rds-cluster/pull/184#issuecomment-1968534802.

I tried to re-trigger the tests, but the bot ignores my comments (good, because I’m not a project member or anything)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I am still looking into the VPC issue, but regardless, I think we are not going to approve this PR. It is a breaking change with insufficient justification for it. Null values are already allowed, and no changes are needed for that.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The VPC error was in Feb, and the situation causing it was resolved in the beginning of April. Nothing left to do.

Maurice avatar
Maurice

@Gabriela Campana (Cloud Posse) Thanks!

@Jeremy G (Cloud Posse) I can’t retrigger the tests, so I’d need some help with this. Anyway, as already discussed in the PR, I’ll update it with more context about why it’s needed; ideally I’ll realize it isn’t and can close it out.

1

2024-06-04

Ray Finch avatar
Ray Finch

Hi All, could someone take a look at https://github.com/cloudposse/terraform-aws-rds-cluster/pull/213? I had to rebase my branch.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse) can you help follow up with this one?

2
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Asked our team to follow up on the error message:
https://github.com/cloudposse/actions/actions/runs/9371790219/job/25801835125#step:9:terratest
Process completed with exit code 2. terratest
Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3.

Ray Finch avatar
Ray Finch

OK, I fixed the problem and did another push. It’s waiting for another approval: https://github.com/cloudposse/terraform-aws-rds-cluster/actions/runs/9372514860

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Bumped this up internally today

Ray Finch avatar
Ray Finch

Thanks @Gabriela Campana (Cloud Posse) looks like it was merged!

1

2024-06-05

el avatar

hey everyone :wave: just submitted my first PR! https://github.com/cloudposse/terraform-aws-rds-cluster/pull/216 would love a review of the change itself & the question in the description about a strange default value I noticed

#216 Support restoring to a specific time

what

• Adds restore_to_time as an attribute of the restore_to_point_in_time parameter, allowing you to restore to a specific datetime rather than to the latest restorable time.

why

• Fixes missing functionality that’s supported by the aws_rds_cluster resource.

references

• Closes #163

questions

I noticed while making this PR that source_cluster_identifier defaults to "120m", which looks like an inadvertent copy-and-paste from another parameter. In the aws_rds_cluster resource, this parameter is required, so it doesn’t make sense to have a default anyway. Would you recommend leaving this parameter as-is, or should we fix it?

1
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Thanks for creating this PR! This looks good to me, but we can make use of the optional variable syntax to easily support both shapes of the object: with use_latest_restorable_time or with restore_to_time

left a comment on the PR. Feel free to @ me here!
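
For readers following along, here is a minimal sketch of the optional() object syntax Dan is referring to (Terraform 1.3+); the attribute names match the PR, but the exact variable shape is illustrative, not the module’s definition:

variable "restore_to_point_in_time" {
  type = list(object({
    # Set exactly one of these two; both are optional so either shape validates
    use_latest_restorable_time = optional(bool)
    restore_to_time            = optional(string) # e.g. "2024-06-01T00:00:00Z"
  }))
  default = []
}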

el avatar

thanks! pushing up the change in a sec :slightly_smiling_face: wasn’t sure if the lack of using optional params there was legacy or intentional

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

likely legacy

el avatar

any thoughts on the default for source_cluster_identifier being 120m? It’s a required parameter in the aws_rds_cluster resource and it looks like it was unintentionally copied and pasted from the timeouts block above

el avatar

(i.e. should we just make it required and remove the optional / default value?)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

oh I see! Yeah that’s definitely a mistake. Looking, one second

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

how about making source_cluster_identifier not optional with no default?

el avatar

yeah that’s what I was thinking

el avatar

it shouldn’t be a breaking change either because there’s no way anyone is using the default value lol

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

yeah agreed
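
Sketched as a before/after, the agreed fix on the attribute inside the restore_to_point_in_time variable (hypothetical fragment; other attributes elided):

variable "restore_to_point_in_time" {
  type = list(object({
    # Before: optional(string, "120m") -- a default accidentally copied
    # from the timeouts block. After: required, with no default.
    source_cluster_identifier = string
  }))
  default = []
}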

el avatar

cool, thanks for the review! the PR is up to date now and ready for another look

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

np, great! tests are running now. Will approve once / if they pass

el avatar

thanks! I also noticed that the default value for restore_type (copy-on-write) differs from the default in the aws_rds_cluster resource (full-copy). Is it worth noting that difference in the description?

el avatar

I think full-copy is required for restoring from a continuous backup

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

It wouldn’t hurt to mention, but it’s not required

el avatar

gotcha, will leave it as-is then thanks!

np1
el avatar

thanks again for the review! this was my first open-source contribution. Is the release process automated?

el avatar

ah i see it’s already out, nice

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

no problem and thanks for helping out!

1
el avatar

I have to revisit this tomorrow :weary: when instantiating the module, I specify a timestamp for restore_to_time, and it conflicts with use_latest_restorable_time regardless of whether I set that explicitly to false or to null (in which case it defaults to true)

el avatar

I was assuming that Terraform would detect that it’s not a conflict if you specify the default value explicitly (in this case, the default for use_latest_restorable_time is false but it conflicts unless you set it to null)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

oh I see. Yeah we’ll probably have to dynamically create each block based on which input is specified

1
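
A hedged sketch of what “dynamically create each block” could look like inside the aws_rds_cluster resource; the iterator shape follows the variable sketched earlier, not necessarily the module’s internals:

resource "aws_rds_cluster" "default" {
  # ...engine, credentials, and other cluster arguments elided...

  dynamic "restore_to_point_in_time" {
    for_each = var.restore_to_point_in_time
    content {
      source_cluster_identifier = restore_to_point_in_time.value.source_cluster_identifier

      # Only one of the two conflicting arguments may be non-null:
      # prefer restore_to_time when set, otherwise fall back to the flag.
      restore_to_time            = restore_to_point_in_time.value.restore_to_time
      use_latest_restorable_time = restore_to_point_in_time.value.restore_to_time == null ? restore_to_point_in_time.value.use_latest_restorable_time : null
    }
  }
}
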
el avatar

Took another stab at this and tested all of the permutations locally with a project I’m working on https://github.com/cloudposse/terraform-aws-rds-cluster/pull/217

#217 Fix conflicting `restore_to_point_in_time` attributes

what

• Fixes handling of use_latest_restorable_time and restore_to_time while preserving existing default values for backwards compatibility

why

• Original attempt at fixing this (#216) was insufficient
• The previous default value of true for the use_latest_restorable_time attribute prevented using restore_to_time. To use restore_to_time, use_latest_restorable_time must be null. Even if you set this value to null, the existing default overrode it with true instead.

references

• Follow-up to original attempt: #216
• Correctly fixes #163

1
Ray Finch avatar
Ray Finch
#58 add managed_policy_arns to eks iam role

what

Add support for attaching managed policies to the EKS IAM role.

why

The module currently only allows a single policy JSON to be attached, but we can have multiple IAM policies that we need to attach to the role.

references

Closes #57

1

2024-06-06

2024-06-07

Quentin BERTRAND avatar
Quentin BERTRAND

Hello, could you review this PR please? It adds support for AL2023 to the EKS Node Group module: https://github.com/cloudposse/terraform-aws-eks-node-group/pull/185

#185 Feat: add support of AL2023

what

Added support for the AL2023 worker node AMI based on the latest module version, without adding a breaking change

why

We need it

references

https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType

1
Quentin BERTRAND avatar
Quentin BERTRAND

cc @Gabriela Campana (Cloud Posse)

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Quentin BERTRAND Just asked our engineers to have a look at this

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Quentin BERTRAND Thank you for this PR.

As you probably know, PRs

  • #178
  • #180
  • #182

have already started on this, but were missing a lot.

I think you have this most of the way there, although I do see some things that need fixing. You did get it far enough down the field that I think I can take it the rest of the way. Follow up on Thursday if you don’t see progress by then.

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Go ahead and take a look at the initial support for EKS on AL2023 and share your constructive comments.

#186 Add support for AL2023

New Features, Breaking Changes

The major new feature in this release is support for Amazon Linux 2023 (AL2023). EKS support for AL2023 is still evolving, and this module will evolve along with that. Some detailed configuration options (e.g. KubeletConfiguration JSON) are not yet supported, but the basic features are there.

The other big improvement is in selecting AMIs, as explained below.

Along with that, we have dropped some outdated support, resulting in minor breaking changes that we expect will not affect many users.

Changes in AMI selection

Previously, unless you specified a specific AMI ID, this module picked the “newest” AMI that met the selection criteria, which in turn was based on the AMI Name. The problem with that was that the “newest” might not be the latest Kubernetes version. It might be an older version that was patched more recently, or simply finished building a little later than the latest version.

Now that AWS explicitly publishes the AMI ID corresponding to the latest (or, more accurately, “recommended”) version of their AMIs via SSM Public Parameters, the module uses that instead. This is more reliable and should eliminate the version regression issues that occasionally happened before.
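
For illustration, the kind of lookup this implies, as a sketch (the path follows AWS’s published convention for AL2023, with the Kubernetes version segment shown literally here):

# Resolve the "recommended" AL2023 EKS AMI for Kubernetes 1.29
# in the current region via AWS's SSM public parameter.
data "aws_ssm_parameter" "al2023_recommended" {
  name = "/aws/service/eks/optimized-ami/1.29/amazon-linux-2023/x86_64/standard/recommended/image_id"
}

# data.aws_ssm_parameter.al2023_recommended.value then holds the
# region-specific AMI ID.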

• The ami_release_version input is now obsolete, because it was based on the AMI name, which is different from the SSM Public Parameter Path.
• The new ami_specifier takes the place of ami_release_version, and is specifically whatever path element in the SSM Public Parameter Path replaces “recommended” or “latest” in order to find the AMI ID. Unfortunately, the format of this value varies by OS.

Examples of AMI specifier based on OS:

• AL2: amazon-eks-node-1.29-v20240117
• AL2023: amazon-eks-node-al2023-x86_64-standard-1.29-v20240605
• Bottlerocket: 1.20.1-7c3e9198
• Windows:

The main utility of including a specifier rather than an AMI ID is that it allows you to keep a consistent version across multiple regions without needing region-specific configuration.

Kubernetes Version No Longer Inferred from AMI

Previously, if you specified an AMI ID, the Kubernetes version would be deduced from the AMI name. That is not sustainable as new OSes are launched, so the module no longer tries to do that. If you do not supply the Kubernetes version, the EKS cluster’s Kubernetes version will be used.

Special Support for Kubernetes Cluster Autoscaler removed

This module used to take some steps (mostly labeling) to try to help the Kubernetes Cluster Autoscaler. As the Cluster Autoscaler and EKS native support for it evolved, the steps taken became either redundant or ineffective (or were always ineffective), so they have been dropped.

cluster_autoscaler_enabled has been deprecated. If you set it, you will get a warning in the output, but otherwise it has no effect.

AWS Provider v5.8 or later now required

Previously, this module worked with AWS Provider v4, but no longer. Now v5.8 or later is required.

Special Thanks

This PR builds on the work of @Darsh8790 (#178 and #180) and @QuentinBtd (#182 and #185). Thank you to both for your contributions.



what

• Add initial support for EKS under Amazon Linux 2023 (AL2023)
• Improve AMI selection process
• Deprecate Kubernetes Cluster Autoscaler support

why

• Amazon Linux 2023 (AL2023) is the latest offering from Amazon
• Previously, AMIs were selected by name and date, which occasionally led to undesirable results
• The support was either redundant or ineffective

references

• Supersedes and closes #178
• Supersedes and closes #180
• Supersedes and closes #182
• Supersedes and closes #185

1
RB avatar

Nice! I just saw last week that replacing the image_id with a public ssm param was the way to keep amis up to date. Very nice to see it happening in the eks module too

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Quentin BERTRAND You said:
Instances created with a launch template and whose AMI is specified in the template do not have kubelet/nodeadm configuration, so do not join the cluster
Could you be more specific? Which AMI, which AMI Type, which K8s version, which region? This may not be something the module can do anything about.

The module creates a launch template with an AMI ID unless you supply a launch template yourself. The Terratest (and other testing) confirms that these nodes join the cluster, with EKS supplying the configuration if you do not include any userdata. If you do include userdata for AL2023, then you have to include all the required elements of the NodeConfig as part of it.

Other possibilities include Security Group issues or using the wrong service CIDR (it’s confusingly vague).
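
For reference, a minimal sketch of the nodeadm NodeConfig that AL2023 userdata must carry when you bring your own userdata (all cluster values are placeholders; in a launch template this typically travels inside a MIME multi-part document):

locals {
  # Placeholders throughout -- substitute your cluster's real name,
  # endpoint, CA data, and service CIDR.
  al2023_node_config = <<-EOT
    ---
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        name: my-cluster
        apiServerEndpoint: https://EXAMPLE.gr7.eu-west-3.eks.amazonaws.com
        certificateAuthority: BASE64-ENCODED-CA-DATA
        cidr: 172.20.0.0/16
  EOT
}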

Quentin BERTRAND avatar
Quentin BERTRAND

Hello, my cluster is running Kubernetes 1.30 in the eu-west-3 region.

Here are some of my values:

ami_specifier         = "amazon-eks-node-al2023-x86_64-standard-1.30-v20240605"
ami_type              = "AL2023_x86_64_STANDARD"

associate_cluster_security_group = false

associated_security_group_ids = [
  dependency.sg_default.outputs.id,
  dependency.sg_eks_nodes.outputs.id
]

node_group_terraform_timeouts = [{
  create = "1h"
  update = "1h"
  delete = "1h"
}]

block_device_mappings = [
  {
    "delete_on_termination" : true,
    "device_name" : "/dev/xvda",
    "encrypted" : true,
    "volume_size" : 20,
    "volume_type" : "gp3"
  }
]

tags = {
  "k8s.io/cluster-autoscaler/node-template/label/nodetype" = "default"
}

I have no problem with module version 2.12.0.

I tried to create a new node group with min size and desired capacity set to 0, and the creation of the node group ended in failure:

NodeCreationFailure	Instances failed to join the kubernetes cluster
Quentin BERTRAND avatar
Quentin BERTRAND

ami_type = "AL2023_x86_64_STANDARD"

But in the EKS console, the AMI type is CUSTOM

Quentin BERTRAND avatar
Quentin BERTRAND

@Jeremy G (Cloud Posse) Thanks for your GitHub comment:
If you supply an AMI ID in the launch template, then EKS does not add anything to the userdata, since it cannot know what the AMI is expecting. If you do not supply an AMI ID in the launch template, EKS creates a copy of the launch template, adding the AMI ID and its userdata to set up the node.
Maybe the module could let us choose between setting ami_specifier or ami_version?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I am going to revert the change from ami_release_version to ami_specifier, because ami_release_version is an input to aws_eks_node_group. (Even though I wrote most of this code, it was 4 years ago and I forgot a lot.)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Quentin BERTRAND The AL2023 support PR is ready to be reviewed again.

Quentin BERTRAND avatar
Quentin BERTRAND

@Jeremy G (Cloud Posse) Thank you for this!

Tested in a staging env with AL2023; it works as expected

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

cloudposse/eks-node-group/aws v3.0.0 released with AL2023 support and several other minor improvements and bug fixes.

1
1

2024-06-09

Yangci Ou avatar
Yangci Ou

Hey all! I’d appreciate a review of PR https://github.com/cloudposse/terraform-aws-components/pull/1063, which adds backup configs to the RDS cluster in the components repo

#1063 feat(aurora-postgres): backup configs

what

The CloudPosse RDS cluster module https://github.com/cloudposse/terraform-aws-rds-cluster has these backup configurations that would be helpful to have!

why

  1. Allows me to set backup configurations
  2. Without these inputs, the cluster is forced to use the default configs from the module
  3. Allows me to set these variable values to null or empty when I don’t want any backups happening
  4. Without these inputs to set them as empty values, there’s a conflict when using RDS clusters with AWS Backup plans, because the RDS resource maintains its backup values while AWS Backup plans want other values.

references

• CloudPosse’s RDS module

image - AWS Backup conflict with default backup values unless explicitly set: https://github.com/hashicorp/terraform-provider-aws/issues/33465

1
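
As a rough sketch of what the new inputs enable (variable names follow the upstream terraform-aws-rds-cluster module; the version pin and values are illustrative):

module "aurora_postgres" {
  source  = "cloudposse/rds-cluster/aws"
  version = "~> 1.9" # illustrative pin

  # Backup configuration now passed through instead of being
  # forced to the module defaults.
  backup_window    = "07:00-09:00" # daily window, UTC
  retention_period = 5             # days

  # ...engine, networking, and credentials elided...
}
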
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Yangci Ou Just asked our team to review this PR

Frank avatar
#235 feat: add support for redis serverless

what

• Add support for Serverless Redis instances

why

• Supporting Redis Serverless for demanding applications

notes

• This upgrades the required version of the AWS provider from “>= 5.27.0” to “>= 5.32” as this is the first release where aws_elasticache_serverless_cache was introduced

1
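
The provider bump in the notes corresponds to a constraint like this in versions.tf (sketch):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.32" # first release with aws_elasticache_serverless_cache
    }
  }
}
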
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Frank Just asked our team to review this PR

2024-06-10

Bryan Dady avatar
Bryan Dady

terraform-cloudflare-zone #43 is ready for review

#43 feat! remove firewall rules

Remove deprecated filter and firewall resources

what

Support for the following resources, which are deprecated, has been removed from this module.

• cloudflare_firewall_rule
• cloudflare_filter

why

Cloudflare converted existing firewall rules into WAF custom rules

The Firewall Rules API and the associated Cloudflare Filters API are now deprecated. These APIs will stop working on 2024-07-01

Relevant changes for Terraform users

The following Terraform resources from the Cloudflare provider are now deprecated:

  • cloudflare_firewall_rule
  • cloudflare_filter

Removing the functionality related to the deprecated resources from this module addresses multiple issues, including indirectly supporting #40 and #20

references

• closes #25
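
For anyone migrating, a hedged sketch of the WAF custom rule that replaces a cloudflare_firewall_rule + cloudflare_filter pair (expression, names, and variable are illustrative):

resource "cloudflare_ruleset" "zone_custom_firewall" {
  zone_id = var.zone_id
  name    = "zone custom firewall"
  kind    = "zone"
  phase   = "http_request_firewall_custom"

  # Each rule combines what previously required a separate
  # cloudflare_filter (expression) and cloudflare_firewall_rule (action).
  rules {
    action      = "block"
    expression  = "(ip.src eq 198.51.100.4)"
    description = "block a single abusive IP"
    enabled     = true
  }
}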

Bryan Dady avatar
Bryan Dady

@Hans D, @RB the GitHub automation tagged you as reviewers for this PR.

RB avatar

The code looks good to me.

The tests are failing due to what looks like a permission limitation on the Cloudflare API token. I added a comment and cc’ed other Cloud Posse members who have more permissions in Cloudflare

1
1
Bryan Dady avatar
Bryan Dady

Thanks.

@Andriy Knysh (Cloud Posse) do you suppose this is something you can fix with the cloudflare token, or is there anything else I might do to make this test pass?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us check this, we’ll let you know

cool-doge1

2024-06-11

2024-06-12

2024-06-13

2024-06-14

Diego Rabatone Oliveira avatar
Diego Rabatone Oliveira

Hey folks! Nice to be around, and congrats on the wonderful work!

I am working with the https://github.com/cloudposse/terraform-opsgenie-incident-management module and I’ve crafted some PRs with improvements and also an abstraction suggestion:

Improvements:

[Notification Policy] Improve module

[escalation] Single recipient per rule

[Existing Resources] Add Team and Schedule

[schedule_rotation] Allow “none” as rotation participant

New abstraction:

[config] Add squad intermediate abstraction

Let me know what you think about it and if I need to make any changes

2024-06-16

2024-06-17
