#terraform-aws-modules (2022-01)
Terraform Modules
Discussions related to https://github.com/terraform-aws-modules
Archive: https://archive.sweetops.com/terraform-aws-modules/
2022-01-04
Quick question on versions.tf - is it necessary or desired to include a versions.tf in both the child and the root modules? It seems like child modules should declare their own versions.tf, and it would not be necessary in the root module unless there was some need to override the versions, or am I misunderstanding? Thank you.
usually, in a low-level module we include some restrictions on TF and provider versions in versions.tf, but those restrictions can be very loose. For example, for the TF version we can use >= 0.15.0 because the child module works with any TF version greater than or equal to 0.15 (but not 0.14 or lower). In a top-level module, we can additionally restrict the versions, e.g. TF >= 1.0.0 if we want the top-level module to work with TF 1.0 only. This way, the low-level module restrictions don't affect anything we want to use in top-level modules (the low-level modules just declare the lower bounds, for example)
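A sketch of the pattern described above (module paths and version bounds are illustrative, not from the thread):

```hcl
# Child (low-level) module: modules/network/versions.tf
# Loose lower bound only, so consumers are free to use newer Terraform.
terraform {
  required_version = ">= 0.15.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
  }
}
```

```hcl
# Root (top-level) module: versions.tf
# Tighter constraint; the effective requirement is the intersection of
# the constraints declared by the root and every child module.
terraform {
  required_version = ">= 1.0.0"
}
```

Terraform combines all `required_version` constraints across modules, so the root can only tighten, never loosen, what the child modules declare.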
Thank you, makes sense. I like that you call it a low-level module too. I feel like the "child" terminology is confusing, as the child module exists before the parent, which would be weird in real life.
2022-01-17
Hey, how is it possible to update the node group in terraform-aws-eks-node-group when the Kubernetes version updates? I had an EKS cluster (v1.20) with a "bottlerocket-aws-k8s-1.20-x86_64-v1.5.2-1602f3a8" AMI node group. After upgrading the cluster to 1.21, I can't get the nodes to update their AMI to "bottlerocket-aws-k8s-1.21-x86_64-v1.5.2-1602f3a8". Even hard-coded, terraform doesn't see any change in the module. Any advice?
@Maya Aravot did you set variable "kubernetes_version"
in https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/variables.tf#L243 ?
I think if the version is set, TF will see the change and try to update the node group to the new version. If the var is not set, then we probably need to follow https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html to manually initiate the upgrade (because EKS does not know that you want to upgrade even if the control plane is at a different version)
yes,
```hcl
module "eks_node_group_imported2" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.27.0"

  enabled             = true
  cluster_name        = module.eks_cluster.eks_cluster_id
  kubernetes_version  = ["1.21"]
  ami_type            = "BOTTLEROCKET_x86_64"
  ami_release_version = ["1.5.2-1602f3a8"]
}
```
I have opened a detailed ticket about it as well - https://github.com/cloudposse/terraform-aws-eks-node-group/issues/104
there is some logic here which you can review/play with https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/ami.tf#L23
Terraform module to provision a fully managed AWS EKS Node Group - terraform-aws-eks-node-group/ami.tf at master · cloudposse/terraform-aws-eks-node-group
looking at this code https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/main.tf#L72, if this condition is true

```hcl
length(compact(concat([local.launch_template_ami], var.ami_release_version))) == 0
```

then `var.kubernetes_version` is not taken into account
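A rough sketch of that precedence (the variable names follow the module, but the local names here are illustrative, not the module's actual code):

```hcl
# If either an explicit launch-template AMI or ami_release_version is
# provided, the module pins that AMI and ignores kubernetes_version.
locals {
  ami_is_pinned = length(compact(concat([local.launch_template_ami], var.ami_release_version))) > 0

  # Only when nothing pins the AMI does the cluster version drive the AMI lookup.
  lookup_kubernetes_version = local.ami_is_pinned ? null : var.kubernetes_version[0]
}
```

So leaving `ami_release_version` empty is what allows a new `kubernetes_version` to trigger an AMI change.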
I managed to fix it by specifying only these vars:

```hcl
ami_type           = "BOTTLEROCKET_x86_64"
kubernetes_version = [module.eks_cluster.eks_cluster_version]
```

which means I didn't define var.ami_release_version. After that, the upgrade worked
@Andriy Knysh (Cloud Posse) thank you for your help!
thanks @Maya Aravot
2022-01-27
:wave: How come you got rid of security_group_rules
from https://github.com/cloudposse/terraform-aws-ecs-alb-service-task ?
Guess I can add my own rules using the module's output SG ID
@Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)
That bit is fine, but now I'm hitting:
```
Error: Invalid count argument
│
│ on .terraform/modules/ecs/main.tf line 125, in data "aws_iam_policy_document" "ecs_task":
│ 125: count = local.create_task_role ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
╵
╷
│ Error: Invalid count argument
│
│ on .terraform/modules/ecs/main.tf line 203, in data "aws_iam_policy_document" "ecs_ssm_exec":
│ 203: count = local.create_task_role && var.exec_enabled ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
╵
```
Am passing in task_role_arn. Assuming it's because task_role_arn hasn't been created yet: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf#L4
this module was not updated to use the latest SG module; it uses the resources instead https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf#L294
terraform does not like `length` in `count`:

```hcl
create_task_role = local.enabled && length(var.task_role_arn) == 0
```
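The shape of the failure, sketched generically (the resource names here are illustrative, not from the thread): any `count` whose value depends on an attribute known only after apply fails at plan time.

```hcl
# The role ARN is an apply-time attribute, so the module's internal
# length(var.task_role_arn) == 0 check cannot be evaluated during plan.
resource "aws_iam_role" "task" {
  name = "my-task-role" # illustrative
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

module "ecs" {
  source = "cloudposse/ecs-alb-service-task/aws"

  # Unknown until apply -> "Invalid count argument" inside the module:
  task_role_arn = [aws_iam_role.task.arn]
  # ... other required arguments omitted
}
```

Passing a literal ARN, applying the role first with `-target`, or omitting `task_role_arn` so the module creates the role itself all avoid the unknown value.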
@Andriy Knysh (Cloud Posse) Sure - I don't want to allow all egress though, I want to allow egress to specific SGs, which is fine; I can use aws_security_group_rule external to the module call and use the SG ID output
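A sketch of that approach, adding an egress rule outside the module via its security group output (the output name `service_security_group_id`, the port, and the peer SG are assumptions for illustration):

```hcl
# Allow egress from the ECS service SG only to a specific peer SG,
# instead of opening all egress inside the module.
resource "aws_security_group_rule" "ecs_to_db" {
  type                     = "egress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = module.ecs.service_security_group_id # assumed output name
  source_security_group_id = aws_security_group.database.id       # illustrative peer SG
}
```

Keeping the rule outside the module means the module's SG stays minimal and each consumer declares exactly the egress it needs.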
`count = local.create_task_role ? 1 : 0`
Yes, we reverted the use of cloudposse/terraform-aws-security-group v0.3.0 because it had a lot of problems. We have not upgraded ecs-alb-service-task to security-group v0.4.x because it involves a lot of breaking changes and we do not have a good test environment for it at this time. One of the broad categories of fixes we are making as part of the v0.4.x upgrade is eliminating or greatly reducing the occurrences of
`..."count" value depends on resource attributes that cannot be determined until apply...`
Until then, it looks like you have found a workaround that works for you. Let us know if you need further help getting the existing module to work.
Yeah, I couldn't pass a role in, so I had to rely on the role creation in the module