#terraform-aws-modules (2024-06)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2024-06-04

2024-06-17

Jackie Virgo avatar
Jackie Virgo

Has anyone used the terraform-aws-s3-bucket module to set up bi-directional replication?
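
For reference, a minimal sketch (not from the thread) of one direction of cross-region replication using the raw AWS provider resources; the terraform-aws-modules/s3-bucket module wraps the same configuration behind its replication_configuration input. The bucket names, provider aliases, and IAM role below are hypothetical, and bi-directional replication is just a second, mirror-image configuration pointing back the other way.

```
# Sketch only: one direction of a bi-directional replication pair.
# Assumes provider aliases aws.east / aws.west and a pre-existing IAM role
# (aws_iam_role.replication_east) with the usual s3:Replicate* permissions.
resource "aws_s3_bucket" "east" {
  provider = aws.east
  bucket   = "example-replication-east" # hypothetical name
}

resource "aws_s3_bucket" "west" {
  provider = aws.west
  bucket   = "example-replication-west" # hypothetical name
}

# Versioning must be enabled on both source and destination buckets;
# only the east bucket is shown here.
resource "aws_s3_bucket_versioning" "east" {
  provider = aws.east
  bucket   = aws_s3_bucket.east.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_replication_configuration" "east_to_west" {
  provider   = aws.east
  depends_on = [aws_s3_bucket_versioning.east]

  bucket = aws_s3_bucket.east.id
  role   = aws_iam_role.replication_east.arn

  rule {
    id     = "east-to-west"
    status = "Enabled"

    filter {} # empty filter = replicate the whole bucket

    delete_marker_replication {
      status = "Enabled"
    }

    destination {
      bucket        = aws_s3_bucket.west.arn
      storage_class = "STANDARD"
    }
  }
}

# A mirror-image aws_s3_bucket_replication_configuration on the west bucket,
# pointing back at the east bucket, makes the replication bi-directional.
```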

Evgenii Vasilenko avatar
Evgenii Vasilenko

Hi team, can someone explain how the cloudposse/label/null module works and how to use it properly? In this example https://github.com/cloudposse/terraform-aws-eks-cluster/blob/main/examples/complete/main.tf I see that almost every module has `context = module.this.context`. What fields does it include? I'm curious because I connected Infracost and got this error:

`Missing mandatory tags: Service, Environment`, even though I added these tags in the label module like this:

```
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace  = "eg"
  stage      = "dev"
  name       = "work"
  attributes = ["cluster"]
  delimiter  = "-"

  tags = {
    "Environment" = "Dev",
    "Service"     = "EKS Cluster"
  }

  context = module.this.context
}
```

provider "aws" {
  region = var.region
}

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  attributes = ["cluster"]

  context = module.this.context
}

data "aws_caller_identity" "current" {}

data "aws_iam_session_context" "current" {
  arn = data.aws_caller_identity.current.arn
}

locals {
  enabled = module.this.enabled

  private_ipv6_enabled = var.private_ipv6_enabled

  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/
  # https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/deploy/subnet_discovery.md
  tags = { "kubernetes.io/cluster/${module.label.id}" = "shared" }

  # required tags to make ALB ingress work https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
  public_subnets_additional_tags = {
    "kubernetes.io/role/elb" : 1
  }
  private_subnets_additional_tags = {
    "kubernetes.io/role/internal-elb" : 1
  }

  # Enable the IAM user creating the cluster to administer it,
  # without using the bootstrap_cluster_creator_admin_permissions option,
  # as a way to test the access_entry_map feature.
  # In general, this is not recommended. Instead, you should
  # create the access_entry_map statically, with the ARNs you want to
  # have access to the cluster. We do it dynamically here just for testing purposes.
  access_entry_map = {
    (data.aws_iam_session_context.current.issuer_arn) = {
      access_policy_associations = {
        ClusterAdmin = {}
      }
    }
  }

  # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-cni-latest-available-version
  vpc_cni_addon = {
    addon_name               = "vpc-cni"
    addon_version            = null
    resolve_conflicts        = "OVERWRITE"
    service_account_role_arn = one(module.vpc_cni_eks_iam_role[*].service_account_role_arn)
  }

  addons = concat([
    local.vpc_cni_addon
  ], var.addons)
}

module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "2.2.0"

  ipv4_primary_cidr_block = "172.16.0.0/16"
  tags                    = local.tags

  context = module.this.context
}

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.4.2"

  availability_zones              = var.availability_zones
  vpc_id                          = module.vpc.vpc_id
  igw_id                          = [module.vpc.igw_id]
  ipv4_cidr_block                 = [module.vpc.vpc_cidr_block]
  ipv6_cidr_block                 = [module.vpc.vpc_ipv6_cidr_block]
  ipv6_enabled                    = true
  max_nats                        = 1
  nat_gateway_enabled             = true
  nat_instance_enabled            = false
  tags                            = local.tags
  public_subnets_additional_tags  = local.public_subnets_additional_tags
  private_subnets_enabled         = true
  private_subnets_additional_tags = local.private_subnets_additional_tags

  context = module.this.context
}

module "eks_cluster" {
  source = "../../"

  subnet_ids                   = concat(module.subnets.private_subnet_ids, module.subnets.public_subnet_ids)
  kubernetes_version           = var.kubernetes_version
  oidc_provider_enabled        = var.oidc_provider_enabled
  enabled_cluster_log_types    = var.enabled_cluster_log_types
  cluster_log_retention_period = var.cluster_log_retention_period

  cluster_encryption_config_enabled                         = var.cluster_encryption_config_enabled
  cluster_encryption_config_kms_key_id                      = var.cluster_encryption_config_kms_key_id
  cluster_encryption_config_kms_key_enable_key_rotation     = var.cluster_encryption_config_kms_key_enable_key_rotation
  cluster_encryption_config_kms_key_deletion_window_in_days = var.cluster_encryption_config_kms_key_deletion_window_in_days
  cluster_encryption_config_kms_key_policy                  = var.cluster_encryption_config_kms_key_policy
  cluster_encryption_config_resources                       = var.cluster_encryption_config_resources

  addons            = local.addons
  addons_depends_on = [module.eks_node_group]

  access_entry_map = local.access_entry_map
  access_config = {
    authentication_mode                         = "API"
    bootstrap_cluster_creator_admin_permissions = false
  }

  # This is to test `allowed_security_group_ids` and `allowed_cidr_blocks`
  # In a real cluster, these should be some other (existing) Security Groups and CIDR blocks to allow access to the cluster
  allowed_security_group_ids = [module.vpc.vpc_default_security_group_id]
  allowed_cidr_blocks        = [module.vpc.vpc_cidr_block]

  kubernetes_network_ipv6_enabled = local.private_ipv6_enabled

  context = module.this.context

  cluster_depends_on = [module.subnets]
}

module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "2.12.0"

  subnet_ids        = module.subnets.private_subnet_ids
  cluster_name      = module.eks_cluster.eks_cluster_id
  instance_types    = var.instance_types
  desired_size      = var.desired_size
  min_size          = var.min_size
  max_size          = var.max_size
  kubernetes_labels = var.kubernetes_labels

  context = module.this.context
}
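
For what it's worth, a minimal sketch (not from the thread) of how null-label context propagation answers the question above: the context output carries namespace, tenant, environment, stage, name, attributes, delimiter, tags, label_order, and related settings, and a child label merges its own inputs into whatever it receives. The module names here are hypothetical.

```
# Sketch only: how tags and naming inputs flow through context.
module "base_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"
  stage     = "dev"
  name      = "work"

  tags = {
    Environment = "Dev"
    Service     = "EKS Cluster"
  }
}

module "cluster_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  attributes = ["cluster"]

  # Inherits namespace, stage, name, tags, delimiter, label_order, etc.
  # from the parent label and merges the attributes above into them.
  context = module.base_label.context
}

output "cluster_id" {
  value = module.cluster_label.id # "eg-dev-work-cluster" with the default label_order
}

output "cluster_tags" {
  # Environment and Service from the parent label plus generated tags such as Name
  value = module.cluster_label.tags
}
```

If Infracost still reports missing mandatory tags after this, the tags are most likely not reaching the context that the flagged resources' modules actually receive.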

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Evgenii Vasilenko do you need further support here?

Evgenii Vasilenko avatar
Evgenii Vasilenko

nope, I’m good thanks


2024-06-18

2024-06-19

2024-06-24

Shirisha Sudhakar Rao avatar
Shirisha Sudhakar Rao

We are working with a customer that requires the development environment to be created in the AWS commercial cloud but the production environment to be in GovCloud. We are investigating whether the atmos framework can be used to provision stacks in both commercial and GovCloud at the same time. I looked around the various channels on this forum but could not find any mention of GovCloud integration. Is it possible to integrate account creation and role-based access via the aws-teams and aws-team-roles components so that we can create and access accounts in both commercial and GovCloud AWS at the same time?

Is this level of integration across commercial and GovCloud accounts possible within the atmos framework structure? Has anyone completed this integration successfully?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Atmos is very generic and can be used with anything.

Most of our Terraform modules will work with GovCloud. Some have built-in assumptions that they are working in the standard aws partition, and for those we can easily accept PRs that use the “current” partition instead of a hard-coded aws partition.
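
A minimal sketch (not from the thread) of the "current partition" pattern Jeremy describes, so ARNs resolve in both the standard aws partition and aws-us-gov; the policy and bucket name are hypothetical.

```
# Sketch only: derive the partition instead of hard-coding "arn:aws:...".
data "aws_partition" "current" {}

locals {
  # "aws" in commercial regions, "aws-us-gov" in GovCloud
  partition = data.aws_partition.current.partition
}

resource "aws_iam_policy" "example" {
  name = "example-s3-read"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = ["arn:${local.partition}:s3:::example-bucket/*"]
    }]
  })
}
```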

Our reference architecture and components are built on the assumption that all the accounts (and therefore all the teams) are in the same AWS Organization. I believe we have people using them in GovCloud, but AFAIK those people are running everything exclusively in GovCloud. I personally have no experience with GovCloud, but I expect that you cannot have a single AWS Org that has some accounts in aws and some accounts in aws-us-gov.

Enhancing our reference architecture to support 2 organizations at the same time would be possible, but would be beyond the scope of what I would expect a customer to do. Contact @Erik Osterman (Cloud Posse) if you want to discuss hiring Cloud Posse technical services to make this enhancement.

Marat Bakeev avatar
Marat Bakeev

Does anyone use https://github.com/cloudposse/terraform-aws-vpn-connection together with atmos? -_- Maybe some kind soul can share configs to make it work with the rest of the components?

cloudposse/terraform-aws-vpn-connection

Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network
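
It doesn't look like an off-the-shelf component exists (see the replies below), but a minimal sketch of the usual pattern is to wrap the module in a small component of your own and drive it from an atmos stack via components.terraform.<component>.vars. The component name and the variable names below are illustrative assumptions and must be checked against the module's actual inputs.

```
# components/terraform/vpn-connection/main.tf (hypothetical component)
# Assumes the standard Cloud Posse context.tf is vendored in alongside
# this file, as in other components, so module.this is available.

variable "vpc_id" {
  type = string
}

variable "customer_gateway_ip_address" {
  type = string
}

variable "customer_gateway_bgp_asn" {
  type    = number
  default = 65000
}

variable "vpn_connection_static_routes_only" {
  type    = bool
  default = true
}

module "vpn_connection" {
  source = "cloudposse/vpn-connection/aws"
  # pin a version in real use

  vpc_id                            = var.vpc_id
  customer_gateway_ip_address       = var.customer_gateway_ip_address
  customer_gateway_bgp_asn          = var.customer_gateway_bgp_asn
  vpn_connection_static_routes_only = var.vpn_connection_static_routes_only

  context = module.this.context
}
```

The stack side is then just the usual vars block for this component in your stack YAML.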

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ben Smith (Cloud Posse) do we have a component for this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse)

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Marat, we are discussing this internally and will get back to you asap

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Marat Bakeev doesn’t appear we’ve used this module in any recent engagements


2024-06-25
