#terraform-aws-modules (2024-02)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2024-02-01

Aayush Harwani avatar
Aayush Harwani

@Andriy Knysh (Cloud Posse), can you please review this PR - https://github.com/cloudposse/terraform-aws-components/pull/971 ?

#971 Update vpc-flow-logs.tf

AWS Flow Logs can be created only if the VPC flag is enabled; without a VPC, creating them results in an error.

what

• In vpc-flow-logs.tf, I have updated the condition so that VPC flow logs are created only if both enabled and vpc_flow_logs_enabled are true.

why

• AWS Flow Logs can be created only if the VPC flag is enabled; without a VPC, creating them results in an error.

references

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Aayush Harwani what error are you seeing? Asking b/c what you added in the PR is already implemented here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/main.tf#L5

  vpc_flow_logs_enabled                 = local.enabled && var.vpc_flow_logs_enabled
Aayush Harwani avatar
Aayush Harwani

@Andriy Knysh (Cloud Posse), my bad, I was getting an error because of this: in remote-state.tf, var is used instead of local. - https://github.com/cloudposse/terraform-aws-components/pull/972

#972 Update remote-state.tf in vpc module

what

• replace var.vpc_flow_logs_enabled with local.vpc_flow_logs_enabled in remote-state.tf

why

• Because it was giving an error when the VPC was disabled (when the enabled flag was set to false).

references
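For reference, a minimal, hypothetical sketch of the kind of change PR #972 describes: gate the flow-logs remote-state lookup on the local (which already folds in enabled, as shown in main.tf above) instead of on the raw variable. The module source, version, and argument names below are assumptions for illustration, not the component's actual code.

# main.tf already defines (see the link above):
locals {
  enabled               = module.this.enabled
  vpc_flow_logs_enabled = local.enabled && var.vpc_flow_logs_enabled
}

# remote-state.tf (sketch): gate the lookup on the local instead of the raw
# variable, so a disabled component never reads the flow-logs bucket state.
module "vpc_flow_logs_bucket" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state" # assumed source
  version = "1.5.0"                                              # assumed version

  component = "vpc-flow-logs-bucket" # assumed component name

  # before (assumed): enabled = var.vpc_flow_logs_enabled
  enabled = local.vpc_flow_logs_enabled

  context = module.this.context
}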

Aayush Harwani avatar
Aayush Harwani

I faced this issue while trying out Atmos.

Aayush Harwani avatar
Aayush Harwani

@Andriy Knysh (Cloud Posse), can you help me with this - https://github.com/cloudposse/terraform-aws-components/pull/973

#973 SQS module - replace relative path

In the sqs-queue module, changed the relative path to an absolute path, as it was giving an error when used with Atmos after pulling it with vendor pull.

what

• in the sqs-queue module, changed the relative path to an absolute path

why

• it was giving an error when used with Atmos after pulling it with vendor pull

references

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Aayush Harwani this is not related to Atmos, this is pure Terraform. The component uses a local module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/sqs-queue/main.tf#L6

  source = "./modules/terraform-aws-sqs-queue"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

This is a correct and commonly used Terraform pattern. Why do you think it’s a problem?

Aayush Harwani avatar
Aayush Harwani

Whenever I pull this component using vendor pull, the modules folder is not pulled and it gives an error.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    included_paths:
      - "**/**"
      # The glob library does not treat the ** above as including the sub-folders if the sub-folders is not also explicitly included
      - "**/modules/**"
 
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please add this to the included_paths in vendor.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(the Go lib that Atmos uses to download the sources does not create sub-folders automatically, they need to be specified in included_paths)
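For illustration, a minimal vendor.yaml entry showing where those included_paths go. The component name, version, source URI, and target path are placeholders; check the Atmos vendoring docs for the exact schema.

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example-vendor-config   # placeholder
spec:
  sources:
    - component: "sqs-queue"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/sqs-queue?ref={{.Version}}"
      version: "1.396.0"        # placeholder version
      targets:
        - "components/terraform/sqs-queue"
      included_paths:
        - "**/**"
        # the glob library does not include sub-folders unless they are listed explicitly
        - "**/modules/**"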

2024-02-02

2024-02-03

pjf719 avatar

Hi all - I’m trying to use this beanstalk module to spin up some infra https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment My requirement is that the beanstalk environment attaches itself to port 443 so that it’s on SSL.

Here is my configuration:

module "alb" {
  source  = "cloudposse/alb/aws"
  version = "1.10.0"

  namespace          = "tpo"
  name               = "elastic-beanstalk"
  vpc_id             = data.aws_vpc.default.id
  subnet_ids         = data.aws_subnets.private.ids
  internal           = true
  certificate_arn    = data.aws_acm_certificate.cert.arn
  security_group_ids = [module.security_groups.alb_sg]

  http_enabled                            = false
  https_enabled                           = true

  enabled = true
  
  stage                                   = "prod"
  access_logs_enabled                     = true
  access_logs_prefix                      = "tpo-prod"
  alb_access_logs_s3_bucket_force_destroy = true

  # This additional attribute is required since both the `alb` module and `elastic_beanstalk_environment` module
  # create Security Groups with the names derived from the context (this would conflict without this additional attribute)
  attributes = ["shared"]

}


module "elastic_beanstalk_application" {
  source  = "cloudposse/elastic-beanstalk-application/aws"
  version = "0.11.1"
  enabled = true

  for_each = toset(var.EB_APPS)

  name = each.value

}

module "elastic_beanstalk_environment" {
  source   = "cloudposse/elastic-beanstalk-environment/aws"
  for_each = toset(var.EB_APPS)
  enabled = true

  region = var.REGION

  elastic_beanstalk_application_name = each.value
  name                               = "prod-${each.value}-tpo"
  environment_type                   = "LoadBalanced"
  loadbalancer_type                  = "application"
  loadbalancer_is_shared             = true
  shared_loadbalancer_arn            = module.alb.alb_arn
  loadbalancer_certificate_arn       = data.aws_acm_certificate.cert.arn

  tier          = "WebServer"
  force_destroy = true

  instance_type = "t4g.xlarge"

  vpc_id               = data.aws_vpc.default.id
  loadbalancer_subnets = data.aws_subnets.private.ids
  application_subnets  = data.aws_subnets.private.ids
  application_port = 443
  allow_all_egress = true

  additional_security_group_rules = [
    {
      type                     = "ingress"
      from_port                = 0
      to_port                  = 65535
      protocol                 = "-1"
      source_security_group_id = data.aws_security_group.vpc_default.id
      description              = "Allow all inbound traffic from trusted Security Groups"
    }
  ]
  solution_stack_name = "64bit Amazon Linux 2 v5.8.10 running Node.js 14"

  additional_settings = [
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "NODE_ENV"
      value     = "prod"
    },
    {
      namespace = "aws:elbv2:listenerrule:${each.value}"
      name      = "HostHeaders"
      value     = "prod-${each.value}-taxdev.io"
    }
  ]
  env_vars = {
    "NODE_ENV" = "prod"
  }

  enable_stream_logs = true
  extended_ec2_policy_document = data.aws_iam_policy_document.minimal_s3_permissions.json
  prefer_legacy_ssm_policy     = false
  prefer_legacy_service_policy = false

}
pjf719 avatar

What ends up happening is that the beanstalk application tries to map to port 80 rather than 443, and the whole thing errors out

pjf719 avatar

The only way I can get it to finish successfully is if I set http_enabled = true but then my beanstalk app ends up on the wrong listener port

pjf719 avatar

Does anybody have any ideas how I can get this working on listener port 443 instead of 80?

Joe Niland avatar
Joe Niland

Can you show the error?

pjf719 avatar
Error: creating Elastic Beanstalk Environment (prod-adjustments-tpo): ConfigurationValidationException: Configuration validation exception: Invalid option value: 'default' (Namespace: 'aws:elbv2:listener:80', OptionName: 'Rules'): The load balancer you specified doesn't have a listener on port 80. Specify listener options only for existing listeners.
│       status code: 400, request id: c06c4cab-f781-445b-8bc4-2158de99f923
│ 
│   with module.elastic_beanstalk_environment["adjustments"].aws_elastic_beanstalk_environment.default[0],
│   on .terraform/modules/elastic_beanstalk_environment/main.tf line 602, in resource "aws_elastic_beanstalk_environment" "default":
│  602: resource "aws_elastic_beanstalk_environment" "default" {
pjf719 avatar

There you go @Joe Niland

pjf719 avatar

Would you happen to know how to add multiple “Rules” in the configuration, like this?

pjf719 avatar

I’m finding that the following rule is required to avoid the error shown above…

{
      namespace = "aws:elbv2:listener:443"
      name      = "Rules"
      value     = "default"
 },

However, when I use that rule, I am unable to add a secondary rule as shown in the screenshot.
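For anyone hitting the same wall, a hedged, untested sketch of how the two settings could be combined: on a shared load balancer, the Rules option of the aws:elbv2:listener:<port> namespace takes a comma-separated list of rule names, and each named rule is defined under its own aws:elbv2:listenerrule:<name> namespace. The names below mirror the configuration above and are illustrative.

additional_settings = [
  {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "NODE_ENV"
    value     = "prod"
  },
  # register both the default rule and the host-header rule on the 443 listener
  {
    namespace = "aws:elbv2:listener:443"
    name      = "Rules"
    value     = "default,${each.value}"
  },
  {
    namespace = "aws:elbv2:listenerrule:${each.value}"
    name      = "HostHeaders"
    value     = "prod-${each.value}-taxdev.io"
  }
]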

2024-02-04

2024-02-10

Hans D avatar

https://github.com/cloudposse/terraform-aws-iam-account-settings/blob/78e9718eabbeca8e8c66bcf387e09b3c3333d411/main.tf#L3-L7 I don’t believe the account alias is used anywhere in the Cloud Posse components, but I wanted to double-check whether there could be any unforeseen issues if we update these with different values.

resource "aws_iam_account_alias" "default" {
  count = module.this.enabled == true ? 1 : 0

  account_alias = module.this.id
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I believe that if you didn’t use it before, it should not affect you (but you should test it).


2024-02-12

Matthew Reggler avatar
Matthew Reggler

Quick question about the lambda component. The version of the cloudposse/lambda-function/aws module listed in the repo (https://github.com/cloudposse/terraform-aws-components) is 0.4.1. The current version is 0.5.3 and, importantly, contains the fixes to the names of function log groups introduced in 0.5.1.

The lambda component is still regularly updated (e.g. v1.396.0, two weeks ago). Is there a reason for the listed version of the module being behind? I’m not clear why the module version used here in the main repo has drifted behind its source, as in most cases bumping the version after running atmos vendor causes no problems.

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

There isn’t any particular reason that the module version hasn’t been updated. It was likely a case of pushing a quick fix for a requirement and neglecting to upgrade the module version at the same time.

If you do upgrade that version and everything works as expected, please feel free to submit a PR to the component!


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(this is more of a refarch question, as it’s about components)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do not automatically update dependencies in terraform-aws-components right now because we lack automated testing. We want/need these to be vetted and working modules. They will therefore lag behind, because we upgrade them mostly when we need to pick up new functionality.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In parallel, we’re in the early stages of building out refarch v2 components. These will add terratests (we’re adding a helper for terratest to support Atmos). All v2 components will have tests out-of-the-box. This means we can accept more contributions, automatically update component dependencies, and ensure everything continues to work.

Hans D avatar

I’m using TF overrides for these kinds of updates. So far, all working very nicely.
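The kind of override Hans mentions, as a minimal hypothetical sketch: drop a *_override.tf file next to the vendored component and Terraform merges it over the original configuration, so a pinned module version can be bumped without editing vendored code. The module block name below is assumed; it must match the one used inside the component.

# versions_override.tf (hypothetical), placed alongside the vendored component
module "lambda" {
  # only the arguments listed here are overridden; everything else is unchanged
  source  = "cloudposse/lambda-function/aws"
  version = "0.5.3" # newer than the version pinned in the component
}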


2024-02-14

2024-02-15

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

We have been eagerly anticipating the day when we could manage EKS cluster authentication via AWS APIs rather than the Kubernetes ConfigMap. That day is here, but there are still some bugs to be worked out. Please upvote this issue that, when resolved, will make upgrading to the new APIs significantly easier.

#35824 [Bug]: bootstrap configuration should be ignored for existing EKS clusters

Terraform Core Version

1.7.3

AWS Provider Version

5.36.0

Affected Resource(s)

aws_eks_cluster

Expected Behavior

Setting bootstrap_cluster_creator_admin_permissions on an existing cluster should have no effect.

Actual Behavior

Setting bootstrap_cluster_creator_admin_permissions = true (the default value) on an existing cluster forces the cluster to be replaced.

  ~ access_config {
      ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
      ~ bootstrap_cluster_creator_admin_permissions = false -> true # forces replacement
    }

Terraform Configuration Files

resource "aws_eks_cluster" "default" {
  # create the cluster without `access_config` and then uncomment the following
  /* 
  access_config {
    authentication_mode                         = "API_AND_CONFIG_MAP"
    bootstrap_cluster_creator_admin_permissions = true
  }
  */
  ...
}

Steps to Reproduce

• Create an EKS cluster without specifying access_config at all • Modify the configuration, adding the access_config block and setting authentication_mode = "API_AND_CONFIG_MAP" and bootstrap_cluster_creator_admin_permissions = true

Important Factoids

The setting of bootstrap_cluster_creator_admin_permissions only matters during cluster creation. It should be ignored for existing clusters.

References

From containers-roadmap:

Note: The value that you set for bootstrapClusterCreatorAdminPermissions on cluster creation is not returned in the response of subsequent EKS DescribeCluster API calls. This is because the value of that field post cluster creation may not be accurate. Further changes to access control post cluster creation will always be performed with access entry APIs. The ListAccessEntries API is the source of truth for cluster access post cluster creation.

Would you like to implement a fix?

No

Luis Longo avatar
Luis Longo

I was going to ask for this, and found your message… I think this will simplify things

https://aws.amazon.com/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/

A deep dive into simplified Amazon EKS access management controls | Amazon Web Services

Introduction Since the initial Amazon Elastic Kubernetes Service (Amazon EKS) launch, it has supported AWS Identity and Access Management (AWS IAM) principals as entities that can authenticate against a cluster. This was done to remove the burden—from administrators—of having to maintain a separate identity provider. Using AWS IAM also allows AWS customers to use their […]

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, this is actively under development. In fact, the coding is mostly done, but the upgrade path is complicated and there is a lot of testing to do and documentation to write. I plan to release a pre-release version next week to get feedback.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Luis Longo Please try eks-cluster v4.0.0-rc1 (PR 206) and let us know what you think. It (finally!) ditches all the hacks needed to update the aws-auth ConfigMap and uses the new AWS API for access control. Be sure to read the migration doc, as there are a few manual steps needed to upgrade, and a lot of deprecated features have finally been removed.

Note: at present, v4.0.0-rc1 is not available via the Terraform registry. Use a git ref instead:

source = "github.com/cloudposse/terraform-aws-eks-cluster?ref=v4.0.0-rc1"
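For illustration, a module block pinning the release candidate via that git ref; every input other than source is a placeholder, not a complete configuration:

module "eks_cluster" {
  source = "github.com/cloudposse/terraform-aws-eks-cluster?ref=v4.0.0-rc1"

  # ... your existing inputs stay the same ...
  context = module.this.context
}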
#206 Use AWS API for EKS authentication and authorization

Major Breaking Changes

Warning

This release has major breaking changes and requires significant manual intervention
to upgrade existing clusters. Read the migration document
for more details.

what

• Use the AWS API to manage EKS access controls instead of the aws-auth ConfigMap • Remove support for creating an extra security group, deprecated in v2 • Add IPv6 service CIDR output • Update test framework to go v1.21, Kubernetes 1.29, etc.

why

• Remove a large number of bugs, hacks, and flaky behaviors • Encourage separation of concerns (use another module to create a security group) • Requested and authored by @colinh6 • Stay current

references

New API for EKS access control • Obsoletes and closes #148 • Obsoletes and closes #155 • Obsoletes and closes #167 • Obsoletes and closes #168 • Obsoletes and closes #193 • Obsoletes and closes #202 • Fixes #203 • Supersedes and closes #173 • Supersedes and closes #194 • Supersedes and closes #195 • Supersedes and closes #196 • Supersedes and closes #197 • Supersedes and closes #198 • Supersedes and closes #199 • Supersedes and closes #200 • Supersedes and closes #201

Quentin BERTRAND avatar
Quentin BERTRAND

@Jeremy G (Cloud Posse) Hi !

Thanks for the work.

I’m testing it and I have this error during terraform plan:

│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused

I can’t figure out why.

My values comply with the migration guide

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Quentin BERTRAND Verify that you really are on v4.0.0-rc1. (It is now available via the Terraform Registry.) That error comes from settings that have been removed. If you are on the right version, and have removed the auth-map as detailed in the migration guide, and are still having this issue, then you may need to manually remove ...null_resource.wait_for_cluster[0] the way you removed the auth-map resource. Please let me know if this helps, and if it does not, please let me know what settings you previously had for all the inputs that start with “kube”.

Quentin BERTRAND avatar
Quentin BERTRAND

@Jeremy G (Cloud Posse) Hello,

Verify that you really are on v4.0.0-rc1

I use source = "github.com/cloudposse/terraform-aws-eks-cluster?ref=v4.0.0-rc1" in my terragrunt.hcl :white_check_mark:

I manually removed ...null_resource.wait_for_cluster[0]

Now I have these states:

❯ terragrunt state list
data.aws_iam_policy_document.assume_role[0]
data.aws_iam_policy_document.cluster_elb_service_role[0]
data.aws_partition.current[0]
data.tls_certificate.cluster[0]
aws_eks_cluster.default[0]
aws_iam_openid_connect_provider.default[0]
aws_iam_policy.cluster_elb_service_role[0]
aws_iam_role.default[0]
aws_iam_role_policy_attachment.amazon_eks_cluster_policy[0]
aws_iam_role_policy_attachment.amazon_eks_service_policy[0]
aws_iam_role_policy_attachment.cluster_elb_service_role[0]
aws_security_group_rule.managed_ingress_cidr_blocks[0]
aws_security_group_rule.managed_ingress_security_groups[0]
kubernetes_config_map.aws_auth_ignore_changes[0]

I keep getting the error during terraform plan/apply:

Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused

Do I have to delete kubernetes_config_map.aws_auth_ignore_changes[0]?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

:face_palm: Yes, of course, my mistake. You need to delete (terraform state rm):

kubernetes_config_map.aws_auth_ignore_changes[0]
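For reference, the corresponding command, with the address as listed in the state output above (prefix it with the module path, e.g. module.eks_cluster., if the cluster module is nested):

terraform state rm 'kubernetes_config_map.aws_auth_ignore_changes[0]'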
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Quentin BERTRAND Also, if you have kubernetes in your required_providers block (usually in versions.tf) and/or have provider "kubernetes" in your providers.tf, you need to remove those, too.
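The blocks to look for are along these lines (shapes and versions are illustrative; your versions.tf and providers.tf will differ):

# versions.tf -- drop the kubernetes entry from required_providers
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
  }
}

# providers.tf -- drop the kubernetes provider configuration entirely
provider "kubernetes" {
  # ... connection settings pointing at the EKS cluster ...
}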

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Quentin BERTRAND Did you not get this error:

│ Error: Provider configuration not present
│ 
│ To work with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0] (orphan) its original provider configuration at
│ module.eks_cluster.provider["registry.terraform.io/hashicorp/kubernetes"] is required, but it has been removed. This occurs when a provider
│ configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy
│ module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0] (orphan), after which you can remove the provider configuration again.

or something almost exactly like it? If not, I’m curious how, because I cannot reproduce the case where that error does not show up.

Quentin BERTRAND avatar
Quentin BERTRAND

@Jeremy G (Cloud Posse) Hello, it’s better after deleting kubernetes_config_map.aws_auth_ignore_changes[0]. Thank you!

2024-02-19

gusse avatar

Hey! Any chance this could be looked into https://github.com/cloudposse/terraform-aws-elasticache-redis/issues/194? It even has a proposed fix done as a PR

#194 random_password as auth_token rotation causes destroy and create because of transit_encryption handling != null

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

If one tries to set a new dynamic auth_token, the way this module is written will cause a destroy and create of the Redis cache. The upstream aws provider added this support here.

The way transit_encryption_enabled is handled causes a destroy/create when the auth_token is changed with a random_password as the source. Any dynamic input for auth_token causes this; a string literal will not replicate the issue, because a string literal is known at plan time. For the workaround, the random_password has to already be in state for us to get modify-only behavior.

Because of the conditional var.transit_encryption_enabled || var.auth_token != null, when the auth_token is rotated but not yet known, Terraform assumes the worst, and I believe this causes the destroy/create. (Credits to apparentlymart in the hangops Slack. Thread here)

Expected Behavior

Using random_password to generate an auth_token and rotating it, I expected the Redis cache to be modified, not destroyed and recreated. This is more disruptive than it has to be.

Steps to Reproduce

See this gist as a reference.

  1. Apply random_password.password, aws_elasticache_replication_group.default and module.redis.
  2. At the same time, uncomment the code for random_password.password2 and use it in both redis blocks (you’ll see the plain resource works fine while the module destroys and recreates). The key is that, in the same plan/apply, we are adding a dynamic auth_token value which isn’t fully known at plan time.
  3. You’ll see the redis cache get destroyed and recreated

The workaround with the code as it is written is:

  1. Apply random_password.password, aws_elasticache_replication_group.default and module.redis.
  2. Plan and apply just random_password.password2 but do not reference it in the redis blocks.
  3. Once password2 is known in state, use it as the auth_token in both redis blocks
  4. This will only modify the cache instead of destroying and recreating it, because the auth_token will be fully known since it’s in the state.


Environment (please complete the following information):

OSX
terraform 1.3.1
aws provider 4.63.0
random provider 3.5.1


gusse avatar

if that isn’t a good solution, I could take a stab at it


Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse) Who is our best resource for ElastiCache?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe @Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Unfortunately, no. I’ve not worked on ElastiCache beyond the basics of setting it up.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

I should have raised my hand sooner, but missed the DMs about this till now.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

reading the PR

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

so, from what I’ve gathered, our module doesn’t use random_password. In fact, the code explicitly requests a variable. This means that the behavior is out of scope.

To be clear, I empathize with the issue. I would recommend using the -target flag, which lets you apply the state of just one resource in a stack. If you need to rotate the password, just use something like taint, and then run another -target apply.
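An illustrative version of that flow; the resource address is a placeholder for whatever random_password resource feeds the module's auth_token variable:

terraform taint random_password.auth_token           # mark the existing password for regeneration
terraform apply -target=random_password.auth_token   # apply just the password so the new value lands in state
terraform apply                                      # the full plan now sees a known auth_token and modifies in place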

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)
Target resources | Terraform | HashiCorp Developer

Apply changes to an AWS S3 bucket and bucket objects using resource targeting. Target individual resources, modules, and collections of resources to change or destroy. Explore how Terraform handles upstream and downstream dependencies.

2024-02-20

Brian avatar

I am adding support for the latest Karpenter. I noticed the IAM role for nodes provisioned or managed by Karpenter is created by the eks/cluster component instead of eks/karpenter. I also noticed this is the “recommended” configuration. I cannot understand why, though. Can someone help me understand?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The tl;dr is that we want to ensure that when you destroy an EKS cluster via terraform destroy it does not leave orphaned resources like the Karpenter IAM role. If eks/karpenter creates the role, and then you destroy the EKS cluster, you then don’t have a good way to destroy the Karpenter IAM role. You cannot terraform destroy the eks/karpenter component at that point because the cluster is gone and you get errors.

Previously, we ran into even greater issues with bringing up a new cluster with the orphaned role still lying around, the exact details of which I don’t recall, but it had something to do with the special relationship between IAM roles and EC2 instance profiles.

Anyway, it just works out better for eks/cluster to be responsible for all the IAM roles in the cluster that are part of the cluster infrastructure (rather than the applications).

Also, updated Karpenter support is high on our to-do list. It has been held up by our work on updating eks/cluster to use the new AWS API for access control, but now that we have a release candidate for that, we hope to have updated Karpenter support “real soon now”

Brian avatar

Thank you @Jeremy G (Cloud Posse)! I did not think about the race condition or pseudo-circular dependency between eks/karpenter and its IAM role.

Brian avatar


Also, updated Karpenter support is high on our to-do list. It has been held up by our work on updating eks/cluster to use the new AWS API for access control, but now that we have a release candidate for that, we hope to have updated Karpenter support “real soon now”
I would love to see this support too. I am hopeful that EKS Pod Identity can improve EKS deployment, upgrade, and management experiences.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Brian I encourage you to try out eks-cluster v4.0.0-rc1 to validate our approach to migrating to the new AWS API for EKS access control, and give us any feedback before we release a production version.

#206 Use AWS API for EKS authentication and authorization

Major Breaking Changes

Warning

This release has major breaking changes and requires significant manual intervention
to upgrade existing clusters. Read the migration document
for more details.

what

• Use the AWS API to manage EKS access controls instead of the aws-auth ConfigMap • Remove support for creating an extra security group, deprecated in v2 • Add IPv6 service CIDR output • Update test framework to go v1.21, Kubernetes 1.29, etc.

why

• Remove a large number of bugs, hacks, and flaky behaviors • Encourage separation of concerns (use another module to create a security group) • Requested and authored by @colinh6 • Stay current

references

New API for EKS access control • Obsoletes and closes #148 • Obsoletes and closes #155 • Obsoletes and closes #167 • Obsoletes and closes #168 • Obsoletes and closes #193 • Obsoletes and closes #202 • Fixes #203 • Supersedes and closes #173 • Supersedes and closes #194 • Supersedes and closes #195 • Supersedes and closes #196 • Supersedes and closes #197 • Supersedes and closes #198 • Supersedes and closes #199 • Supersedes and closes #200 • Supersedes and closes #201

Brian avatar

Okay, I’ll check it out. I am in the process of upgrading EKS Kubernetes clusters from v1.27 to v1.29 using the blue-green upgrade pattern, so this is probably good timing.

Matthew Reggler avatar
Matthew Reggler

One-line bug in the CloudPosse terraform-aws-cloudwatch-logs module: https://github.com/cloudposse/terraform-aws-cloudwatch-logs/issues/52

This bug affects how this module is called by the lambda module (and in fact any other module) for an AWS resource where an underscore is a valid character in the resource name.

#52 Malformed log group name when used with terraform-aws-lambda-function

Describe the Bug

When using terraform-aws-lambda-function to create a lambda and associated log group, the resulting log groups can in some cases not match the function name, which results in no logs being sent to the created group.

Expected Behavior

I create a lambda using the terraform-aws-lambda-function module, and the name of this lambda contains an underscore

module "lambda" {
  source  = "cloudposse/lambda-function/aws"
  version = "0.5.3"

  function_name      = "my_function_name"
  ...

  context = module.this.context
}

The resultant log group created by this module should be called /aws/lambda/my_function_name.

Instead the log group created is called /aws/lambda/myfunctionname, as the label for the log group contains a regex_replace_chars rule that does not allow for underscores.

This means AWS creates its own log group (with default config, e.g. no expiration) for the lambda, and the log group created by this module is orphaned.

https://github.com/cloudposse/terraform-aws-cloudwatch-logs/blob/f622326cce042d0e49b2613cc994ab710355ac7f/main.tf#L5C1-L13C2

Steps to Reproduce

Invoke the lambda module with var.function_name set to a value that includes an underscore.

Screenshots

Screenshot 2024-02-20 at 16 32 36

Environment

v0.6.6 – version of the module used in the lambda module – IS AFFECTED
v0.6.8 – current version of the module – IS AFFECTED

Additional Context

This fix requires a bump to the version of the cloudwatch logs module used in the lambda module (and/or anywhere else in the CloudPosse module/component libraries that supports the creation of resources with underscores in their names).
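A minimal sketch of the kind of one-line change the issue asks for: let the label that names the log group keep underscores so the group name matches the function name. The block below is illustrative only; see the linked main.tf for the module's actual label.

module "log_group_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  # before (per the issue): a regex that strips underscores, e.g. "/[^a-zA-Z0-9-]/"
  # after: keep underscores so /aws/lambda/my_function_name is preserved
  regex_replace_chars = "/[^a-zA-Z0-9._-]/"

  context = module.this.context
}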

Brian avatar
#983 feat: add support for karpenter 0.34.0 or newer

what

• update eks/karpenter and eks/karpenter-provisioner to support Karpenter v0.34.0 • reduce code smell

why

• Karpenter v0.34.0 introduces multiple breaking changes • moved chart values from settings.aws to settings • replaced the Provisioner resource with NodePool • replaced AWSNodeTemplate with EC2NodeClass

references

• closes #982
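An illustrative sketch of the settings.aws flattening described above, in Helm values form; the keys shown are examples only, so consult the Karpenter chart for the full schema.

# before (older charts): cluster settings nested under settings.aws
settings:
  aws:
    clusterName: example-cluster
---
# after (v0.34+): the aws block is flattened directly into settings
settings:
  clusterName: example-cluster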

2024-02-21

Brian avatar
#985 feat: add support latest alb controller

what

• add support for the aws-load-balancer-controller helm chart v1.7.1 • add a resources configuration to the snippet in the readme • remove the controller’s dependency on the EC2 metadata service • move the chart value aws.region to its expected location, region • add the chart value vpcId

why

• support the latest alb controller and its helm chart

references

• n/a
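An illustrative set of chart values reflecting those changes; the values are placeholders, so verify the keys against the aws-load-balancer-controller chart's values schema.

clusterName: example-cluster
region: us-east-1                # previously supplied as aws.region
vpcId: vpc-0123456789abcdef0     # lets the controller skip the EC2 metadata lookup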

Brian avatar
#986 docs: improve external-dns snippet in readme

what

• update the eks/external-dns component example in the readme • set the latest chart version • set the resource configuration properly • add the txt_prefix var to the snippet

why

• help future engineers deploying or updating external-dns

references

• n/a
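A hypothetical stack configuration along the lines of the readme snippet being updated; the chart version, prefix, and resource sizes are placeholders, so check the component's variables before copying.

components:
  terraform:
    eks/external-dns:
      vars:
        enabled: true
        chart_version: "6.33.0"         # placeholder; pin to the latest chart
        txt_prefix: "eks-external-dns-" # placeholder ownership TXT record prefix
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi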

2024-02-23
