#terraform-aws-modules (2021-12)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2021-12-06

Nikhil Owalekar avatar
Nikhil Owalekar

Hi, I’m trying to upgrade the Redis engine on my AWS ElastiCache cluster from 5.0.5 to 6.2. I’m using this module: https://github.com/cloudposse/terraform-aws-elasticache-redis/releases/tag/0.41.2 and seeing this error:

Error: error updating ElastiCache Replication Group (elasticache-env5-redis): InvalidParameterCombination: Must specify a parameter group for the engine major version upgrade from redis 5.0 to 6.2 status code: 400, request id: 6811f0fc-9f44-4e3b-beff-a973135174ac
with module.redis.aws_elasticache_replication_group.default[0]
on .terraform/modules/redis/main.tf line 104, in resource "aws_elasticache_replication_group" "default":
resource "aws_elasticache_replication_group" "default" {

Completely stumped. Where can I find help?

RB avatar

Please include your inputs in this thread

RB avatar
terraform-aws-elasticache-redis/main.tf at 7293a0e34195637ce11ae6e157620bca48b88e46 · cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster

RB avatar

it’s possible you have a mismatch between the engine_version supplied and the replication group

Nikhil Owalekar avatar
Nikhil Owalekar

While pasting my inputs here, I think I have found the problem

Nikhil Owalekar avatar
Nikhil Owalekar

I updated the engine_version to 6.x, but the family is still set to my default value of redis5.0

Nikhil Owalekar avatar
Nikhil Owalekar

Let me try updating that. If it still fails, I’ll post the inputs.

Nikhil Owalekar avatar
Nikhil Owalekar
module "redis" {
  source                       = "git::<https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=tags/0.41.2>"
  availability_zones           = ["us-east-1a", "us-east-1b", "us-east-1c"]
  namespace                    = var.namespace
  stage                        = var.env
  name                         = "redis"
  vpc_id                       = data.terraform_remote_state.network.outputs.vpc_id
  create_security_group        = false
  associated_security_group_ids = length(var.external_sg_list) != 0 ? concat([data.terraform_remote_state.platform_networking.outputs.elasticache_security_group], var.external_sg_list) : [data.terraform_remote_state.platform_networking.outputs.elasticache_security_group]
  subnets                      = data.terraform_remote_state.network.outputs.private_subnets
  instance_type                = var.instance_type
  apply_immediately            = true
  automatic_failover_enabled   = true
  multi_az_enabled             = true
  transit_encryption_enabled   = true
  at_rest_encryption_enabled   = true
  cluster_mode_enabled         = true
  cluster_mode_num_node_groups = var.cluster_mode_num_node_groups
  cluster_mode_replicas_per_node_group = 1
  replication_group_id         = "${var.namespace}-${var.env}-redis"
  cluster_size                 = 2
  engine_version               = "6.x"
  family                       = "redis6.0"
  snapshot_retention_limit     = 5
  snapshot_window              = "06:30-07:30"
}

RB avatar

a family of redis6.0 should prevent the error

Nikhil Owalekar avatar
Nikhil Owalekar

Now it tries to delete and recreate the parameter group, but it’s already in use by the same cluster

Error: error deleting ElastiCache Parameter Group (elasticache-env5-redis): InvalidCacheParameterGroupState: One or more cache clusters are still members of this parameter group elasticache-env5-redis, so the group cannot be deleted. status code: 400, request id: 5b7f4b2e-1273-4c28-85b9-dc714205a002

RB avatar

try a full destroy and then run terraform apply

Nikhil Owalekar avatar
Nikhil Owalekar

Oh, so an in-place upgrade is not possible?

Nikhil Owalekar avatar
Nikhil Owalekar

That might be a problem in prod

RB avatar

ah yes, I thought this was a new cluster

RB avatar

perhaps just remove the old param group from the state and allow it to create a new param group
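
A minimal sketch of that state surgery, assuming the cloudposse module tracks the group at the address below (run terraform state list to confirm the real address in your state; the state commands also work against Terraform Cloud’s remote state from a local CLI):

# Find the parameter group's address in the state
terraform state list | grep parameter_group

# Forget the old group without destroying it; the next apply then plans a new one.
# Caveat: if the module reuses the same group name, AWS will reject the create
# while the old group still exists, so a rename (or manual delete) is needed too.
terraform state rm 'module.redis.aws_elasticache_parameter_group.default[0]'
terraform apply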

Nikhil Owalekar avatar
Nikhil Owalekar

I’m (spoiled by) using TF Cloud. Not sure whether I can modify the state

RB avatar

you could try creating a new param group, associating the existing ElastiCache cluster with the new param group, then upgrading the cluster to the version specified
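
Outside Terraform, that sequence looks roughly like this with the AWS CLI (the new group name is a placeholder; redis6.x is the parameter group family AWS documents for Redis 6, and 6.x was the engine version string ElastiCache expected for this upgrade at the time):

# 1. Create a fresh parameter group for the target major version
aws elasticache create-cache-parameter-group \
  --cache-parameter-group-name elasticache-env5-redis-v6 \
  --cache-parameter-group-family redis6.x \
  --description "Redis 6 parameters for env5"

# 2. Attach it and bump the engine in the same call -- this is exactly the
#    "must specify a parameter group" combination the original error demands
aws elasticache modify-replication-group \
  --replication-group-id elasticache-env5-redis \
  --cache-parameter-group-name elasticache-env5-redis-v6 \
  --engine-version 6.x \
  --apply-immediately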

RB avatar

but you may have to do state manipulations

RB avatar

cc: @Max Lobur (Cloud Posse) (in case you have seen this before)

Nikhil Owalekar avatar
Nikhil Owalekar

I just tried deleting the parameter group from the state file. I would also need to delete the actual resource, and I’d need to create a duplicate param group before I can delete the desired one. And I can’t seem to find a copy command. All this feels like too much manual work. At this point I might as well propose recreating the ElastiCache Redis cluster with the new engine version/family.
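
On the missing copy command: there is indeed no copy API for ElastiCache parameter groups. The closest workaround is to dump the non-default values from the old group and replay them onto the new one, along these lines (group names and the sample parameter are placeholders, and redis5.0 parameters don’t all carry over 1:1 to the redis6.x family):

# Dump only the parameters that were changed from the engine defaults
aws elasticache describe-cache-parameters \
  --cache-parameter-group-name elasticache-env5-redis \
  --source user

# Replay each one onto the new group
aws elasticache modify-cache-parameter-group \
  --cache-parameter-group-name elasticache-env5-redis-v6 \
  --parameter-name-values ParameterName=maxmemory-policy,ParameterValue=volatile-lru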

2021-12-09

neil avatar

Hey, not sure if I needed to ping in here to get a PR reviewed, but here it goes: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/188

Fix: ordered_cache_behavior forwarded_values dynamic block by neilbartley · Pull Request #188 · cloudposse/terraform-aws-cloudfront-s3-cdn

what: Corrects logic for ordered_cache_behavior -> forwarded_values. Should be: If a cache policy or origin request policy is specified, we cannot include a 'forwarded_values' block at all…

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

Hi Neil! Post it to #pr-reviews. I left a comment inside.

2021-12-16

DaniC (he/him) avatar
DaniC (he/him)

Heads up: there’s a giant PR on the EKS module https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1680 What caught my attention is:
• Drop support for managing the aws-auth configmap
◦ Drop requirement of aws-iam-authenticator
◦ Drop requirement of the Kubernetes Terraform provider
as I’ve bumped into it recently (it has been very nicely documented by @Erik Osterman (Cloud Posse) and his team, btw)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse) You scared me there, I thought at first this was a PR for our EKS module. I think it is a mistake to drop support for managing the aws-auth ConfigMap. It needs to be managed, and AWS is hopefully going to provide an AWS (as opposed to Kubernetes) API for managing it, so I think removing it is not a good long-term plan. Our module mostly handles it OK for now, and of course has the option to not manage it if you want to go that way.

That said, this Kubernetes operator looks like it is worth investigating.

[EKS] [request]: Manage IAM identity cluster access with EKS API · Issue #185 · aws/containers-roadmap

Tell us about your request: CloudFormation resources to register IAM roles in the aws-auth ConfigMap. Which service(s) is this request for? EKS. Tell us about the problem you're trying to solve. …

GitHub - cloudposse/terraform-aws-eks-cluster: Terraform module for provisioning an EKS cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

GitHub - gp42/aws-auth-operator: Kubernetes operator to manage aws-auth ConfigMap for AWS EKS

Kubernetes operator to manage aws-auth ConfigMap for AWS EKS
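
For readers who haven’t touched it: the aws-auth object under discussion is an ordinary ConfigMap in kube-system that maps IAM principals to Kubernetes users and groups, which is why managing it today goes through the Kubernetes Terraform provider or out-of-band tooling. A quick way to inspect it (the role ARN below is a made-up example):

# Show the current IAM-to-Kubernetes mappings on an EKS cluster
kubectl -n kube-system get configmap aws-auth -o yaml

# A typical mapRoles entry in that ConfigMap:
#   - rolearn: arn:aws:iam::111122223333:role/eks-node-group-role
#     username: system:node:{{EC2PrivateDNSName}}
#     groups:
#       - system:bootstrappers
#       - system:nodes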
