#terraform (2022-08)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-08-01

Isaac Campbell avatar
Isaac Campbell

What did I do

• Add detailed monitoring flag to the launch template of EC2 nodes

Why did I do this

• Some compliance tools will flag nodes used by this module because they don't have detailed monitoring enabled. This also allows metrics to be reported every minute as opposed to at five-minute intervals

Helpful references

More AWS Documentation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please use #pr-reviews

Isaac Campbell avatar
Isaac Campbell

pog ty

2022-08-03

Sam Skynner avatar
Sam Skynner

Any chance this could get merged? It was approved 8 days ago: https://github.com/cloudposse/terraform-aws-ssm-tls-self-signed-cert/pull/14 It would be incredibly helpful

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please use #pr-reviews

Release notes from terraform avatar
Release notes from terraform
05:43:32 PM

v1.3.0-alpha20220803 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…
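The truncated example, completed; this is the form documented for the Terraform 1.3 optional() type constraint syntax:

variable "with_optional_attribute" {
  type = object({
    a = string                # a required attribute
    b = optional(string)      # an optional attribute
    c = optional(number, 127) # an optional attribute with a default value
  })
}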

Brent Farand avatar
Brent Farand

Hello! Our organization has been making use of your https://github.com/cloudposse/terraform-aws-components as a starting point for our infrastructure. I notice that the iam-primary-roles and iam-delegated-roles components have been replaced by the aws-teams and aws-team-roles components respectively. I was planning on moving to these new components, but it doesn’t look like the account-map component has a module that they refer to - team-assume-role-policy. I also see a reference to an aws-saml component in the documentation and code that also doesn’t appear to be present in the repo.

Is there an ETA on when these pieces will make their way to the main branch of the repo? Thank you!

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

RB avatar

Sounds like account-map needs to be updated

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

RB avatar

The aws-saml is the new name for the sso component

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

account-map/modules/team-assume-role-policy is the new name for account-map/modules/iam-assume-role-policy. Sorry we have not been keeping pace with upgrades to terraform-aws-components. @Erik Osterman (Cloud Posse) what can we say about timelines for that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

These should be published already, but if not @Dan Miller (Cloud Posse) will be publishing them soon.

1
Sean avatar

What are y’all doing these days for feeding data (such as outputs) between root modules?

• a) tagging resources in the source module and using data resources from the target (this works within providers, such as looking up with AWS tags)

• b) remote state

• c) terragrunt

• d) something else

Joe Niland avatar
Joe Niland

All of the above, including SSM param store

Mohammed Yahya avatar
Mohammed Yahya

data sources

Alex Mills avatar
Alex Mills

A and SSM (when required by things like Serverless Framework)

Sean avatar

Tell me more about your use of SSM.

Using a simple config store with outputs pushed to SSM?

Val Naipaul avatar
Val Naipaul

a) tagging and using data sources
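For reference, a minimal sketch of the SSM option discussed above, with hypothetical parameter names: the source root module pushes an output to Parameter Store, and the target root module reads it back with a data source.

# In the source root module: publish the output
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/shared/vpc_id" # hypothetical parameter name
  type  = "String"
  value = module.vpc.vpc_id
}

# In the target root module: read it back
data "aws_ssm_parameter" "vpc_id" {
  name = "/shared/vpc_id"
}

locals {
  vpc_id = data.aws_ssm_parameter.vpc_id.value
}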

2022-08-04

mog avatar

does anyone have any experience setting up GuardDuty in an AWS Org? i’m a bit confused about what the difference is between aws_organizations_delegated_administrator and aws_guardduty_organization_admin_account

Hosfm avatar

The org admin account is the master account in your AWS organisation. It's typically used for managing other accounts and SSO, and in the past it was used as the master in the master-member model for GuardDuty and Security Hub. You can delegate GuardDuty administration to an account which is not the org master account, such as an infosec account
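A sketch of how the two resources relate (the account ID is hypothetical): aws_guardduty_organization_admin_account is the GuardDuty-specific designation, while aws_organizations_delegated_administrator is the generic Organizations-level delegation; both are applied from the management (org master) account.

# Designate an infosec account as the GuardDuty admin for the org
resource "aws_guardduty_organization_admin_account" "this" {
  admin_account_id = "111111111111" # hypothetical infosec account ID
}

# The generic Organizations mechanism for delegating a service's administration
resource "aws_organizations_delegated_administrator" "guardduty" {
  account_id        = "111111111111"
  service_principal = "guardduty.amazonaws.com"
}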

2022-08-05

Adarsh Hiwrale avatar
Adarsh Hiwrale

Hi everyone! I am trying to attach multiple load balancers to an ECS service; ecs-alb-service-task and ecs-container-definition are the modules I am using. Is it possible to attach multiple load balancers, an application LB for internal use and a network LB for external use?

2022-08-08

Bradley Peterson avatar
Bradley Peterson

Hi! Anyone know how to work around this bug? I hit it when using cloudposse/ecs-web-app/aws, same as the reporter. https://github.com/cloudposse/terraform-aws-alb-ingress/issues/56

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Despite using terraform 1.0.7, I get “count” error:

│ Error: Invalid count argument
│ 
│   on .terraform/modules/proxy_service.alb_ingress/main.tf line 50, in resource "aws_lb_listener_rule" "unauthenticated_paths":
│   50:   count = module.this.enabled && length(var.unauthenticated_paths) > 0 && length(var.unauthenticated_hosts) == 0 ? length(var.unauthenticated_listener_arns) : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To
│ work around this, use the -target argument to first apply only the resources that the count depends on.

Also, the minimum version should be updated to 0.14:

terraform-aws-alb-ingress/versions.tf

Line 2 in ab6033c

Expected Behavior

TF shouldn’t complain about “count” and fail.

Steps to Reproduce

Steps to reproduce the behavior:

I’m using alb-ingress indirectly:

module "proxy_service" {
  source  = "cloudposse/ecs-web-app/aws"
  version = "0.65.2"

  launch_type = "FARGATE"
  vpc_id      = local.vpc_id

  desired_count    = 1
  container_image  = module.proxy_ecr.repository_url
  container_cpu    = 256
  container_memory = 512
  container_port   = local.container_port

  codepipeline_enabled = false
  webhook_enabled      = false
  badge_enabled        = false
  ecs_alarms_enabled   = false
  autoscaling_enabled  = false

  aws_logs_region        = data.aws_region.current.name
  ecs_cluster_arn        = aws_ecs_cluster.proxy.arn
  ecs_cluster_name       = aws_ecs_cluster.proxy.name
  ecs_private_subnet_ids = local.public_subnets # misleading name, can be public

  alb_security_group = module.proxy_alb.security_group_id
  alb_arn_suffix     = module.proxy_alb.alb_arn_suffix

  alb_ingress_healthcheck_path                 = "/"
  alb_ingress_health_check_timeout             = 3
  alb_ingress_health_check_healthy_threshold   = 2
  alb_ingress_health_check_unhealthy_threshold = 2
  alb_ingress_health_check_interval            = 30

  # All paths are unauthenticated
  alb_ingress_unauthenticated_paths         = ["/*"]
  alb_ingress_unauthenticated_listener_arns = module.proxy_alb.listener_arns

  context = module.proxy_label.context
}

NOTE: Commenting out alb_ingress_unauthenticated_paths = ["/*"] removes the error, but then no aws_lb_listener_rule is created.

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

% terraform -version

Terraform v1.0.7
on darwin_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v3.60.0
+ provider registry.terraform.io/hashicorp/external v2.1.0
+ provider registry.terraform.io/hashicorp/github v3.0.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Additional Context

Add any other context about the problem here.

1
Bradley Peterson avatar
Bradley Peterson

Okay, what worked was to comment out alb_ingress_unauthenticated_paths and deploy everything else (particularly the ALB) and then uncomment that parameter and deploy again.

RB avatar

Basically a targeted apply and then a full apply
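With the configuration above, that would look roughly like:

terraform apply -target=module.proxy_alb
terraform apply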

RB avatar

We see this issue come up a lot. It's a limitation in terraform. There are some methods to get around it but they are tricky. Several functions will set it off, like length(), distinct(), and sort(). I think in this case, it's length().

Bradley Peterson avatar
Bradley Peterson

Yes, the error message tells you what to do, but it was hard for me to figure out what I needed to target in the apply.

RB avatar

Eh, I disagree. It doesn't actually solve the root problem. The root problem is that the module cannot determine how many ARNs are passed in before the dependent module is fully applied. There is a way to do it, but I'm not the best at solving these.

cc: @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse) any ideas/guidance on how to fix the The "count" value depends on resource attributes that cannot be determined until apply error for terraform-aws-alb-ingress ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a tough case. Sometimes count works (if it does not use the collection functions that can change the list items, like distinct()), sometimes for_each works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we “solved” the issue before in some of the modules by providing an explicit var, e.g. arn_count, and using it in count (if you know the number of ARNs, you can do it, but terraform does not know it)

1
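A sketch of that workaround (variable names hypothetical): the caller, who knows how many ARNs it is passing, supplies the count as a plain number, so it is known at plan time even when the ARN values are not.

variable "listener_arns" {
  type        = list(string)
  description = "Listener ARNs; the values may be unknown until apply"
}

variable "listener_arn_count" {
  type        = number
  default     = 0
  description = "How many ARNs are in var.listener_arns; a literal number known at plan time"
}

variable "target_group_arn" {
  type = string
}

resource "aws_lb_listener_rule" "paths" {
  # Known at plan time even when the ARN values themselves are not
  count        = var.listener_arn_count
  listener_arn = var.listener_arns[count.index]

  action {
    type             = "forward"
    target_group_arn = var.target_group_arn
  }

  condition {
    path_pattern {
      values = ["/*"]
    }
  }
}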
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we had the issue even in 2017-2018, and it's still not solved by TF b/c it's not possible to solve for all cases even in theory

The fix:

  1. Remove dynamic counts (provide explicit counts if possible)
  2. Or remove maps from counts
  3. Or try to remove the data source (could work in some cases)
  4. apply in stages with -target (not a pretty solution)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what

• Document the error
module.ec2_instance.module.label.null_resource.tags_as_list_of_maps: null_resource.tags_as_list_of_maps: value of ‘count’ cannot be computed

• Document Terraform issues with counts in maps

why

Terraform (in the current incarnation) is very sensitive to these two things:

  1. Dynamic counts across modules - when you have a dynamic count (calculated by some expression with input params) in one module and then use the module from other modules
  2. It does not especially like those dynamic counts in maps and lists

Some known issues about that:
hashicorp/terraform#13980
hashicorp/terraform#10857
hashicorp/terraform#12570
hashicorp/terraform#17048

I know this issue has been discussed time and again (Ex: #12570) and that if a module has a map variable and has interpolation inside this map variable, count inside a module results in value of ‘count’ cannot be computed. What puzzles me is that this error occurs when terraforming a new environment but not any existing environment!

In our case:

Here the count depends on the map and the input var.tags
https://github.com/cloudposse/terraform-null-label/blob/master/main.tf#L23

And here var.tags depends on the map, the other inputs and on the data provider
https://github.com/cloudposse/terraform-aws-ec2-instance/blob/master/main.tf#L68

This circular dependency breaks TF.

It's very difficult to say for sure what's going on, because it could work in some cases and in some environments, but not in others
(see the complaints above).

I know this is not a good explanation, but they have been discussing the issue for years and can't explain it either.
Because nobody understands it.

The fix:

  1. Remove dynamic counts (provide explicit counts if possible)
  2. Or remove maps from counts
  3. Or try to remove the data source (could work in some cases)
  4. apply in stages with -target (not a pretty solution)
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The solution is to ensure that Terraform can compute the length of the variables at plan time:

length(var.unauthenticated_paths) > 0 && length(var.unauthenticated_hosts) == 0 ? length(var.unauthenticated_listener_arns) : 0

All 3 are lists by design, so you do not need to know the values at plan time, but you need to know how many paths and hosts and listener ARNs there are. This means you need to create the lists without applying functions like compact, distinct, or (due to a design flaw) sort on the lists.
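For example (listener resource names are hypothetical):

# Length known at plan time: a literal list whose element values may be
# unknown, but whose element count is fixed in the configuration.
alb_ingress_unauthenticated_listener_arns = [
  aws_lb_listener.http.arn,
  aws_lb_listener.https.arn,
]

# Length unknown at plan time: compact()/distinct()/sort() over computed
# values prevents Terraform from predicting the element count.
# alb_ingress_unauthenticated_listener_arns = compact(module.proxy_alb.listener_arns)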

2022-08-09

Adam Kenneweg avatar
Adam Kenneweg

Hi! For https://github.com/cloudposse/terraform-aws-eks-cluster, is there a way to automatically update my kubeconfig (like aws eks update-kubeconfig) so I can apply kubectl manifests later?

I manually made a resource

resource "null_resource" "updatekube" {
  depends_on = [module.eks_cluster]
  provisioner "local-exec" {
    command = format("aws eks update-kubeconfig --region %s --name %s", var.region, module.eks_cluster.eks_cluster_id)
  }
}

but it breaks because eks_cluster.eks_cluster_id is delayed in its creation and the value is wrong, so it takes multiple terraform applies to work, messing up the rest of my terraform stuff

2022-08-10

Abdelaziz Besbes avatar
Abdelaziz Besbes

Hello all!

A while ago, I created a backup module as follows and deployed it with Terraform:

module "backup" {

  source  = "cloudposse/backup/aws"
  version = "0.12.0"
...
}

When I try to redeploy my stack again, it states that my recovery points have been changed outside of Terraform (which is logical):

# module.backup.aws_backup_vault.default[0] has changed
  ~ resource "aws_backup_vault" "default" {
      ~ recovery_points = 2 -> 78
...
    }

How could I add a lifecycle rule that ignores this change on the module side? Thank you all!

RB avatar

We’d have to make a change to the module it would seem

Eric Berg avatar
Eric Berg

@Abdelaziz Besbes, this looks like the new “this has changed” plan output section that just shows state diffs, not the planned changes, right?

I find that output to be confusing sometimes, but potentially useful for seeing changes in unmanaged resource params.

So if there's no proposed change based on that drift, suppressing the info message is just cosmetic.

The issue, btw, that forces an update of the module, is that lifecycle.ignore_changes entries must be static references. I ran into a case, where our app is handling some interaction with TF-managed resources, that required an entry in ignore_changes, but only for some instances of that resource type. That required duplicating the module and using the hacked version just for one instance.

Someone in an #office-hours session suggested updating the module to include both the standard resource invocation and the one with the ignore_changes entry hard-coded. Then use a boolean var with count to toggle between them. Not a solution for a public module, but much cleaner than some other options.
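A sketch of that toggle pattern (resource and variable names hypothetical): because ignore_changes must be static, the module declares the resource twice and selects one instance with count.

variable "ignore_tag_changes" {
  type    = bool
  default = false
}

variable "vault_name" {
  type = string
}

resource "aws_backup_vault" "default" {
  count = var.ignore_tag_changes ? 0 : 1
  name  = var.vault_name
}

resource "aws_backup_vault" "ignore_changes" {
  count = var.ignore_tag_changes ? 1 : 0
  name  = var.vault_name

  lifecycle {
    # ignore_changes entries must be static, so they are hard-coded here
    ignore_changes = [tags]
  }
}

locals {
  # Exactly one of the two instances exists
  vault_arn = one(concat(aws_backup_vault.default[*].arn, aws_backup_vault.ignore_changes[*].arn))
}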

Release notes from terraform avatar
Release notes from terraform
06:13:30 PM

v1.2.7 1.2.7 (August 10, 2022) ENHANCEMENTS: config: Check for direct references to deprecated computed attributes. (#31576) BUG FIXES: config: Fix a crash if a submodule contains a resource whose implied provider local name contains invalid characters, by adding additional validation rules to turn it into a real error. (<a…

validate deprecated attributes from static traversals by jbardin · Pull Request #31576 · hashicorp/terraformattachment image

We can’t validate if data from deprecated nested attributes is not used in the configuration, but we can at least catch the simple case where a deprecated attribute is referenced directly. Partial …

emem avatar

Hello all, I recently had this error while applying terraform for github_repository_webhook. I don't know if anyone has come across something like this before while working on CodePipeline

│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to
│ module.xxxx.aws_codepipeline_webhook.main, provider
│ "module.xxxxx.provider[\"<http://registry.terraform.io/hashicorp/aws\|registry.terraform.io/hashicorp/aws\>"]"
│ produced an unexpected new value: Root resource was present, but now
│ absent.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.


Error: POST https://api.github.com/repos/xxx/test/hooks: 422 Validation Failed [{Resource:Hook Field: Code:custom Message:The "push" event cannot have more than 20 hooks}]
loren avatar

The error message looks clear. I’d open an issue on the provider github repo, or see if one already exists

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sometimes errors like this appear when the AWS API accepts multiple types of value for the field, but terraform expects just one format.

For example, you see this bug with some resources where you specify KMS key. Terraform usually expects a key ARN, but you can often use a key alias too, and the provider barfs

2022-08-11

Frank avatar

It seems that terraform-aws-ses has been broken since Terraform 1.2.7 because of its dependency terraform-aws-iam-system-user: it throws unexpected parameter_write and context errors when running terraform validate on TF 1.2.7, which works fine on 1.2.6.

Anyone else experienced this? Based on their changelog it is likely due to this change: https://github.com/hashicorp/terraform/issues/31576

We can’t validate if data from deprecated nested attributes is not used in the configuration, but we can at least catch the simple case where a deprecated attribute is referenced directly.

Partial fix for #7569

Isaac Campbell avatar
Isaac Campbell

https://github.com/cloudposse/terraform-aws-eks-cluster there currently is no way to create custom ingress rules on this, correct? I see the aws_security_group_rule.ingress_security_groups in the module, but no actual inputs in the module to adjust for HTTPS or custom ports

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster

JoseF avatar

Hello Sweet Ops Genies.

I was wondering how to create 2 different rds instances at the same time (same file), using cloudposse/rds/aws

If I create the first one with this block

    vpc_id                               = module.vpc.vpc_id
    security_group_ids                   = [module.vpc.vpc_default_security_group_id, module.sg.id]
    associate_security_group_ids         = [module.vpc.vpc_default_security_group_id, module.sg.id]
    subnet_ids                           = module.subnets.public_subnet_ids
    engine                               = var.engine
    engine_version                       = var.engine_version
    major_engine_version                 = var.major_engine_version
    instance_class                       = var.instance_class
    db_parameter_group                   = var.db_parameter_group
    multi_az                             = var.multi_az
    dns_zone_id                          = var.dns_zone_id
    host_name                            = "rds-${var.namespace}-${var.environment}-auth-${var.stage}-${var.name}"
    publicly_accessible                  = var.publicly_accessible
    database_name                        = var.database_name
    database_user                        = var.database_user
    database_password                    = var.database_password
    database_port                        = var.database_port
    auto_minor_version_upgrade           = var.auto_minor_version_upgrade
    allow_major_version_upgrade          = var.allow_major_version_upgrade
    deletion_protection                  = var.deletion_protection
    storage_type                         = var.storage_type
    iops                                 = var.iops
    allocated_storage                    = var.allocated_storage
    storage_encrypted                    = var.encryption_enabled
    kms_key_arn                          = var.kms_key_arn 
    snapshot_identifier                  = var.snapshot_identifier_auth
    performance_insights_enabled         = var.encryption_enabled
    performance_insights_kms_key_id      = var.kms_key_arn
    performance_insights_retention_period= var.performance_insights_retention_period
    monitoring_interval                  = var.monitoring_interval
    monitoring_role_arn                  = aws_iam_role.enhanced_monitoring.arn
    apply_immediately                    = var.apply_immediately
    backup_retention_period              = var.backup_retention_period
    context = module.this.context

And create the second one with the same module block except

host_name                            = "rds-${var.namespace}-${var.environment}-api-${var.stage}-${var.name}"

The terraform apply fails with the error Error creating DBInstances, the dbsubnetgroup already exists. The module creates the 2 subnet groups with the same naming convention (name, stage, environment), so they collide.

Inside my head that was perfect (what a dreamer I was). Clearly the logic is different. I was looking to do it with a for_each, but I am stuck. What I am trying to do is create 2 RDS instances with the same parameters, inside the same VPC, subnets and so on. Any clue how I should approach this? The reason? Simple: have 2 isolated RDS instances sharing the same parameters, for auth/api respectively.

Isaac Campbell avatar
Isaac Campbell

Easy way to do this is probably with a for_each, and have a variable which is a list(object({})) so you can define what params you want to generate (like subnet) vs things you care about, like naming

variable "cow_farms" {
  type = list(object({
     cow_name = string
}
Isaac Campbell avatar
Isaac Campbell

Can then do a

for_each   = { for cow in var.cow_farms : cow.cow_name => cow }
source     = "git::https://{SOURCE}.git"
name       = each.value.cow_name
subnet_ids = var.subnet
Isaac Campbell avatar
Isaac Campbell

by calling

cow_farms = [
  {
    cow_name = "fred"
  },
  {
    cow_name = "frida"
  }
]
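Put together, a sketch of the whole pattern (module source and variable names are placeholders):

variable "cow_farms" {
  type = list(object({
    cow_name = string
  }))
  default = []
}

variable "subnet_ids" {
  type = list(string)
}

module "rds_instance" {
  source   = "cloudposse/rds/aws"
  for_each = { for cow in var.cow_farms : cow.cow_name => cow }

  name       = each.value.cow_name
  subnet_ids = var.subnet_ids
  # ...the rest of the arguments shared by both instances, as in the block above
}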
JoseF avatar

I see. Then I should be capable of doing something like name = "${var.name}-${each.value.cow_name}"? Considering that I would like to not lose the entire content of the variable, but keep it and concat the cow_name. With name = "mr", name = "${var.name}-${each.value.cow_name}" would give me mr-fred or mr-frida

Isaac Campbell avatar
Isaac Campbell

yeah

JoseF avatar

Wonderful. Let me try that and keep you posted. Thanks mate for the enlightenment.

JoseF avatar

I would like to confirm that it worked flawlessly. Many thanks.

Just out of curiosity, if I were looking to create the RDS instances from snapshots, and there are 2 different snapshots to be used (one for auth, another for api), which modification would the for_each require? Considering that I have a different variable for each snapshot_identifier ARN.

Isaac Campbell avatar
Isaac Campbell

You could add it to the object

Isaac Campbell avatar
Isaac Campbell
variable "cow_farms" {
  type = list(object({
     cow_name = string
     cow_snapshot = string
}

cow_farms = [
 {
  cow_name = "fred"
  cow_snapshot = "1"
},
{
cow_name = "frida"
cow_snapshot = "2"
}
]
JoseF avatar

I see… That's awesome. Many thanks. It worked flawlessly… Also this gave me the picture about many other things. I owe you a beer. Thanks mate.

1
Clemens avatar
Clemens

Hi together, I'm currently playing around with Elastic Beanstalk and writing a module myself to understand the behaviour. Just creating a simple ebs app and environment, using aws provider version "~> 4.5.0" and terraform version 1.1.7. The ebs app was created by another module I've written and is working, but when creating the ebs environment there is an error at the apply step, not at the plan step, which is the following: Error: ConfigurationValidationException: Configuration validation exception: Invalid option specification (Namespace: 'VPCId', OptionName: 'aws:ec2:vpc'): Unknown configuration setting.

Referencing the AWS docs, the attribute should be correct. I double-checked the plan output of the VPC ID against the actual state in the AWS UI. I'll post the code as a comment

General options for all environments - AWS Elastic Beanstalk

Configure globally available options for your Elastic Beanstalk environment.

Clemens avatar
Clemens

Code:

resource "aws_elastic_beanstalk_environment" "main" {
  name                = var.env_name
  application         = var.ebs_application_name
  solution_stack_name = "64bit Amazon Linux 2 v3.4.0 running PHP 8.1"

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "Availability Zones"
    value     = var.availability_zones
  }

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = var.asg_min_instances
  }

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = var.asg_max_instances
  }

  setting {
    name      = "aws:ec2:vpc"
    namespace = "VPCId"
    value     = var.vpc_id
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     = var.associate_public_ip_address
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = var.public_subnets_join
  }

….. }

General options for all environments - AWS Elastic Beanstalk

Configure globally available options for your Elastic Beanstalk environment.

Clemens avatar
Clemens

Output of plan:

+ setting {
    + name      = "SecurityGroups"
    + namespace = "aws:autoscaling:launchconfiguration"
    + value     = "sg-0dd7945fac577b9b2"
  }
+ setting {
    + name      = "ServiceRole"
    + namespace = "aws:elasticbeanstalk:environment"
    + value     = "eb-service"
  }
+ setting {
    + name      = "Subnets"
    + namespace = "aws:ec2:vpc"
    + value     = "subnet-059483d3fdb776d1c,subnet-033f1ae5c7fecf260,subnet-038522e3a8bd63b96"
  }
+ setting {
    + name      = "aws:ec2:vpc"
    + namespace = "VPCId"
    + value     = "vpc-0ef5769a1eaeb4beb"
  }
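The error and the plan output both point at the same thing: in the aws:ec2:vpc / VPCId setting block, the name and namespace arguments are swapped. A corrected block would read:

setting {
  namespace = "aws:ec2:vpc"
  name      = "VPCId"
  value     = var.vpc_id
}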

2022-08-12

Alex Mills avatar
Alex Mills

When using cloudposse/ecr/aws is there a way to force-delete ecr repositories that contain images?

Chris Dobbyn avatar
Chris Dobbyn

There’s no force_delete setting so no. Someone would have to PR it in.

Christopher Wade avatar
Christopher Wade

Hello all! $Job had a period of time over the last two days where a set of deployments failed seemingly because a single reference to cloudposse/label/null 0.24.1 failed to download for a usage of cloudposse/route53-alias/aws 0.12.0 within one of our internal modules, despite that same reference downloading several other times for other usages in the same template. In at least one of the impacted pipelines, we can confirm that no other changes were present between the working and non-working runs. This coincidentally started after Terraform 1.2.7 was released, but has now started working again with no other changes.

RB avatar

Is it possible that it had something to do with the terraform registry?

RB avatar

Did you try changing the source from the registry format to the git format to skip the registry?

Christopher Wade avatar
Christopher Wade

That has been my thought as well. Some transient issue with the registry. We’re currently expanding our usage of JFrog Artifactory, so I will probably just configure pull-through caching of the registry. The part that worries me is this was a silent failure from the perspective of terraform init.

RB avatar

We had another client who, very recently after the TF 1.2.7 update, had similar weird issues with the registry

RB avatar

I guess the best solution is to create some kind of pass-through private registry in Nexus or JFrog so that if the registry is experiencing issues, you won't notice them

Christopher Wade avatar
Christopher Wade

“Good” to know we weren’t alone in that regard. And yes, it’s on our roadmap, but this may help re-prioritize it. Thank you!

1
RB avatar
HashiCorp Services Status - Incident History

HashiCorp Services’s Incident and Scheduled Maintenance History

RB avatar

Thread from the #aws channel

Regarding usage of terraform modules

RB avatar

Cc @Shawn Stout

RB avatar

Let’s use this thread

1
Shawn Stout avatar
Shawn Stout

ok

Shawn Stout avatar
Shawn Stout

i'm just creating the folders and code and such

RB avatar

I would follow that blog post by spacelift, create the necessary directories and then you can go through the workflow

Shawn Stout avatar
Shawn Stout

the terraform code, what's the normal file extension?

RB avatar

You may not have a terraform backend setup so your terraform state may be saved locally

RB avatar

The extension is .tf

RB avatar

You may also want to run through the hashicorp terraform tutorial

RB avatar

There's a decent amount to learn before it all makes sense

Shawn Stout avatar
Shawn Stout

ok so with the blog you mentioned, where do i find it at?

Shawn Stout avatar
Shawn Stout

i'll start there

RB avatar
What Are Terraform Modules and How to Use Them - Tutorialattachment image

Terraform modules are a way of extending your present Terraform configuration with already existing parts of reusable code, to reduce the amount of code you have to develop for similar infrastructure components. Others would say that the module definition is a single or many .tf files stacked together in their own directory. Both sides would be right.

RB avatar

Before that blog post, the basics would be good to look over too

https://learn.hashicorp.com/terraform

Terraform Tutorials - HashiCorp Learnattachment image

Learn to provision infrastructure with HashiCorp Terraform

Shawn Stout avatar
Shawn Stout

Just to make sure, this is a template to standardize and automate codebuild projects, correct?

Shawn Stout avatar
Shawn Stout

i want to make sure at the end of this, it does what i'm looking for

RB avatar

A module is a directory of reusable terraform code that can be used to bulk create resources. Its akin to a function with a set of inputs and outputs

RB avatar

The codebuild module you referenced, i'm not super familiar with, but it seems like it might be at least 1 module you may need to bootstrap what you're trying to accomplish

RB avatar

You won't be able to solve the task at hand without understanding the underlying terraform cli tool and how it works tho..

Shawn Stout avatar
Shawn Stout

ok

Shawn Stout avatar
Shawn Stout

looks like i need to install terraform

RB avatar

Lol yes definitely

Shawn Stout avatar
Shawn Stout

i was able to get it to build ok

Shawn Stout avatar
Shawn Stout

but during plan, i received some errors

Shawn Stout avatar
Shawn Stout

oh i think i know what's wrong

Shawn Stout avatar
Shawn Stout
 Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root
│ module and configure the provider's required arguments as described in the provider documentation.
│
╵
╷
│ Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: c9801cb8-34ae-4688-bdae-548d0e1d2acf, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on <empty> line 0:
│   (source code not available)
Shawn Stout avatar
Shawn Stout

aws configure

Shawn Stout avatar
Shawn Stout

my session ended

RB avatar

yep, gotta make sure your aws is logged in

RB avatar

aws sts get-caller-identity is the command i run most often

Shawn Stout avatar
Shawn Stout

sso is a pain that way

Shawn Stout avatar
Shawn Stout

yeah me too, find out what role i am under

Shawn Stout avatar
Shawn Stout

ok, i think it built, going to check it out

Shawn Stout avatar
Shawn Stout

i have to update quite a few things, but this is great

party_parrot1
Shawn Stout avatar
Shawn Stout

off to a good start

1
Shawn Stout avatar
Shawn Stout

here

2022-08-16

Isaac Campbell avatar
Isaac Campbell
05:51:01 PM

bump

https://github.com/cloudposse/terraform-aws-eks-cluster there currently is no way to create custom ingress rules on this, correct? I see the aws_security_group_rule.ingress_security_groups in the module, but no actual inputs in the module to adjust for HTTPS or custom ports

Isaac Campbell avatar
Isaac Campbell

i'm probably going to make a PR against this to support a for_each for custom ingress rules

Alex Jurkiewicz avatar
Alex Jurkiewicz

IMO, adding awsutils provider as a dependency of (what will ultimately be) many of your modules is a mistake: https://github.com/cloudposse/terraform-aws-iam-system-user/releases/tag/0.23.0

The use-case for this functionality (automatically expiring access keys) is not needed for the majority of these modules, which makes the requirement extra weird.

2
Chris Dobbyn avatar
Chris Dobbyn

And it’s extra annoying when you’ve got multiple regions across hundreds of states.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am not sure I follow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do it because the aws provider doesn’t support what we are doing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the alternative is local exec

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

awsutils is so much more than expiring access keys

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and we only use awsutils when it’s needed. if you can point me to some examples where it’s not needed, happy to take a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, looks like we’ve just implemented a workaround for defaulting the region:

https://github.com/cloudposse/terraform-provider-awsutils/pull/30#pullrequestreview-1073422213

Alex Jurkiewicz avatar
Alex Jurkiewicz

What I mean is, the feature which needs awsutils in the IAM system user module is not used by other cloudposse modules. But it still results in a new provider being added as a dependency to our configurations

Alex Jurkiewicz avatar
Alex Jurkiewicz

From my perspective, it’s really poor DX that will slow down terraform init for any stack using cloudposse modules

Chris Dobbyn avatar
Chris Dobbyn

From a user experience standpoint, the extra functionality is almost never worthwhile except for a few niche cases, while the impact of implementing it is quite jarring. As Alex said, it also results in a slower init. There's also an impact to plans/applies.

People are going to be opinionated about it though. It's also confusing for newbies since there's also an awsutils module from Cloud Posse. It would be better if the niche cases were forked rather than implementing a required provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Per your guys’ feedback and some internal discussion, we’re going to rethink how/when we use awsutils provider in regular child modules and consider restricting our usage of it to our terraform root modules (https://github.com/cloudposse/terraform-aws-components)

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) will be deprecating the usage of this provider in terraform-aws-iam-system-user

Chris Dobbyn avatar
Chris Dobbyn

Thanks for the update @Erik Osterman (Cloud Posse)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The original use-case for this rotation is lessened by the fact that platforms like GitHub Actions and CircleCI support OIDC authentication with AWS for short-lived credentials. Supporting this feature at the expense of all the confusion it has caused is not worth it.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Love the explanation and update! Thank you

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Alex Jurkiewicz @Erik Osterman (Cloud Posse) terraform-aws-iam-system-user v1.0.0 has been released, which removes the feature of expiring AWS IAM access keys so it can then remove the dependency on the cloudposse/awsutils Terraform provider. Also bug fixes and enhancements.

terraform-aws-iam-s3-user v1.0.0 has been released, upgrading to use iam-system-user v1.0.0 and related bug fixes and enhancements.

terraform-aws-s3-bucket v3.0.0 is ready to be reviewed. Besides dropping the cloudposse/awsutils requirement, it also implements the previously broken features for configuring the S3 bucket to act as a static website, and implements a feature request from Feb 2019: saving the AWS IAM key in SSM.

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

what a pile of work! Nice one mate

Alex Jurkiewicz avatar
Alex Jurkiewicz

I wish you could publish pre-release versions to TF registries. It would make downstream user testing so much simpler

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You can test by using a GitHub ref

  source = "github.com/cloudposse/terraform-aws-s3-bucket?ref=remove-awsutils"
Module Sources | Terraform by HashiCorpattachment image

The source argument tells Terraform where to find child modules’s configurations in locations like GitHub, the Terraform Registry, Bitbucket, Git, Mercurial, S3, and GCS.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You also can publish pre-release versions to the TF registry. We did it for terraform-aws-security-group so we can get experience with the new defaults.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Alex Jurkiewicz

Alex Jurkiewicz avatar
Alex Jurkiewicz

oh cool! I didn't realise the registry supported non-x.y.z version formats. Thanks for the tip

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

terraform-aws-s3-bucket version 3.0.0 has been released.

1

2022-08-17

Release notes from terraform avatar
Release notes from terraform
03:43:30 PM

v1.3.0-alpha20220817 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…

4
Isaac Campbell avatar
Isaac Campbell

sweet lord that's what we've been waiting for

GM avatar

Anyone got a svelte solution for replacing rather than updating a resource if there are any changes? I am wondering if I can just use replace_triggered_by and point to a string of input variables that could potentially change.

Warren Parad avatar
Warren Parad

Use a different provider? Use a different module that does the right thing? The only guaranteed way is to change the name of the TF resource

Alex Jurkiewicz avatar
Alex Jurkiewicz

Right, this new Terraform 1.2 feature is what you want. You may need to use it in conjunction with a null_resource, if you want to replace when a variable changes value

GM avatar

yep - that’s what I ended up doing
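A sketch of that combination (resource and variable names hypothetical): the null_resource is replaced whenever its triggers change, which in turn forces replacement of the resource that references it.

variable "revision" {
  type        = string
  description = "Change this value to force replacement"
}

resource "null_resource" "replace_trigger" {
  # Replaced whenever var.revision changes...
  triggers = {
    revision = var.revision
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI
  instance_type = "t3.micro"

  lifecycle {
    # ...which forces this resource to be replaced too (Terraform >= 1.2)
    replace_triggered_by = [null_resource.replace_trigger]
  }
}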

2022-08-18

jose.amengual avatar
jose.amengual

a while ago someone posted a tool to create a diagram of resources in TF, even better if it does network diagrams, does anyone remember?

jose.amengual avatar
jose.amengual

I know cloudcraft

jose.amengual avatar
jose.amengual

but I can’t use it due to sec restrictions

RB avatar
InfraMap - the awesome open-source cloud diagram maker that's going to save YOU timeattachment image

InfraMap is an open-source and free tool that allows you to automatically generate cloud architecture diagrams from your .tfstate files.

Grummfy avatar
Grummfy

does someone know if it supports multiple tfstates?

2022-08-19

Adnan avatar

How do you deal with the aws_db_instance password attribute?

Chris Dobbyn avatar
Chris Dobbyn

I create a Secrets Manager secret with a value of "initial", then rotate it before first use. Depending on your confidence, this can be done with a Lambda and done periodically over time.

Warren Parad avatar
Warren Parad

You can create Secrets Manager secrets and have AWS automatically generate the value and auto-rotate it

Chris Dobbyn avatar
Chris Dobbyn

Just be aware that what you use in the db_instance resource will be present in your terraform state. I highly recommend rotating before first use outside of terraform.
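A sketch of the pattern described above (names hypothetical). As noted, the generated password still lands in the Terraform state, so rotating it outside Terraform after the first apply is the safer move.

resource "random_password" "db" {
  length  = 32
  special = false
}

resource "aws_secretsmanager_secret" "db_password" {
  name = "rds/app/password" # hypothetical secret name
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = random_password.db.result
}

resource "aws_db_instance" "this" {
  identifier          = "app-db"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app"
  password            = random_password.db.result # stored in state; rotate after first use
  skip_final_snapshot = true
}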

Adnan avatar

thanks, sounds great. rotation won't cause drift?

Mikhail Naletov avatar
Mikhail Naletov

Hi there. It seems this pattern does not work for records pointing to multiple destinations: https://github.com/cloudposse/terraform-cloudflare-zone/blob/master/main.tf#L9-L14 Shouldn't we add value to the map index? Or maybe introduce a new resource handling multiple values per record?

  records = local.records_enabled ? {
    for record in flatten(var.records) :
    format("%s%s",
      record.name,
      record.type,
    ) => record
RB avatar

Hmm, that's the way we used to do sg rule keys as well. We have since changed this model in the sg module to override the key of each sg rule, just like we could for each cloudflare zone record.

https://github.com/cloudposse/terraform-aws-security-group/blob/a7ff89ba6103964d830b683b3c01985c70257307/normalize.tf#L62

  records = local.records_enabled ? {
    for record in flatten(var.records) :
    format("%s%s",
      record.name,
      record.type,
    ) => record
RB avatar

cc: @Jeremy G (Cloud Posse) since you made the changes to the sg module, instead of generating the keys from the inputs and then md5ing them, do you think overriding the resource key in the cloudflare module is better?

Mikhail Naletov avatar
Mikhail Naletov

I’m ready to help with the module but would like to know what solution I should use here

Mikhail Naletov avatar
Mikhail Naletov
    format("%s-%s",
      rule.action,
      md5(rule.expression),
    ) => rule
RB avatar

Right, the md5 approach and concatenating the inputs I believe are deprecated, since in both cases if not all inputs are used to define the keys, then we have conflicting keys. I believe that may have been the reason to opt to override the key. I defer to Jeremy on that. He may also have some guidance here on the best way to move forward on the module

Mikhail Naletov avatar
Mikhail Naletov

Great, thanks. Can I ask one more question? https://github.com/cloudposse/terraform-cloudflare-zone/blob/master/variables.tf#L95: here we have list(any). If I don't set products at all, it fails because it has to be a list of strings in the resource argument. The same behaviour for a null value. Of course I can't set it to [] because this is a list and all values should be the same type. Should we rework it to list(object({...})), or is this a bug in the Cloudflare provider?

variable "firewall_rules" {
RB avatar

See the sg module. We would most likely follow the same approach and i believe that sg module has a similar interface to what you desire in the cloudflare module

1
Mikhail Naletov avatar
Mikhail Naletov

Thanks!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Mikhail Naletov Because we use the keys in local.records in for_each here, the keys must be completely known at terraform plan time. So we have to be careful about what inputs we use in generating keys.

Seems this pattern does not work for records pointing to multiple destinations Presumably the name is under full user control, so why doesn't this work for multiple destinations?

Regarding https://github.com/cloudposse/terraform-cloudflare-zone/blob/master/variables.tf#L95, firewall_rules is list(any) so that you do not have to supply values for optional inputs. Unless action = "bypass" you should be able to set products = null. If that does not work, I would report that as a bug in the Cloudflare provider.

Mikhail Naletov avatar
Mikhail Naletov

@Jeremy G (Cloud Posse) hi there. Thank you for your answer.
Presumably the name is under full user control, so why doesn't this work for multiple destinations?
Example: I want to delegate test.example.com to Google Cloud DNS/AWS Route53. I have to create 4 NS records:

test.example.com NS aws-resolver1.blabla
test.example.com NS aws-resolver2.blabla
test.example.com NS aws-resolver21.blabla
test.example.com NS aws-resolver88.blabla

Using the module, all 4 records end up with the same key in the for_each loop:

test.example.comNS
test.example.comNS
test.example.comNS
test.example.comNS

Related to the Cloudflare provider: I've already created an issue. It's strange, but it seems the products argument depends not on action but on paused.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

OK, if it is normal to have records of the same name and type, the obvious options are

  1. Switch from for_each to count so that the records do not need unique keys. This means that changes to the inputs may have a ripple effect, but it should not be a significant issue to have all the records deleted and immediately recreated.
  2. Add a user-defined unique key per record. We tried that with Security Group rules. It does not work terribly well, in that the user has to define keys that are valid at plan time, and often ends up simply creating the same kind of index that is automatically created with count.

@Mikhail Naletov I'm in favor of option 1. What do you think?

Regarding Cloudflare, sounds like the solution is for you to provide a valid products value.

RB avatar

The issue with option 1, using count, would mean that if a record was added in the middle of the list, then it would cause a bunch of recreations.

I’d personally be more a fan of option 2 where a key is generated (which is currently the case but not all inputs are used in the key) and the key can be optionally overridden. This would at least prevent recreations of resources if a new record is added in the middle of the inputs

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@RB Yes, using count causes unnecessary recreation of resources, but it has fewer failure modes, so it is worth considering the impact of that recreation versus the complete failure that occurs in the current case, or cases where the keys cannot be computed at plan time. We also need to consider the likelihood that someone will generate keys with something like

rmap = { for i, v in var.records : i => v }

which has exactly the same problem as using count, but requires extra effort on the part of the user.

What we are talking about deleting and recreating are DNS entries, which probably should have negligible impact.

RB avatar

err… deleting dns entries. I think that would cause a lot of impact, no?

Mikhail Naletov avatar
Mikhail Naletov

I'd prefer not to use count. I agree with @RB here, recreating records may cause serious downtime.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The other option would be to add an optional key to the records objects, and use the key if present (falling back to the current synthetic key if it is absent). I do not want to add value to the synthetic key because there is a good chance the value will not be known at plan time, in which case terraform plan will fail and there would be no way to fix it.

This would be backward compatible, and the key would only need to be specified in cases where either the name or type is not known at plan time or there are duplicates. Of course, this would require users to create keys that can be computed at plan time, which could be a problem. Do you think this will work for you, @Mikhail Naletov?

RB avatar

The current interface

  records = [
    {
      name  = "bastion"
      value = "192.168.1.11"
      type  = "A"
      ttl   = 3600
    },
    {
      name  = "api"
      value = "192.168.2.22"
      type  = "A"
      ttl   = 3600
    }
  ]

how about an interface like this where the keys are provided?

  records = {
    bastion = {
      name  = "bastion"
      value = "192.168.1.11"
      type  = "A"
      ttl   = 3600
    },
    api = {
      name  = "api"
      value = "192.168.2.22"
      type  = "A"
      ttl   = 3600
    }
  }
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)


how about an interface like this where the keys are provided?
That would

  1. not be backward compatible
  2. make the keys required

which is why I prefer my proposal: it would be backward compatible by making the keys optional, and, with the keys being optional, would not impose the added burden of creating keys on users who do not need them.
RB avatar

the proposal you’re referring to is the count proposal, no?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

No.
The other option would be to add an optional key to the records objects, and use the key if present (falling back to the current synthetic key if it is absent)

RB avatar

ok so to be clear, you’re proposing an interface like this where key is optional

  records = [
    {
      key   = "bastion_A"
      name  = "bastion"
      value = "192.168.1.11"
      type  = "A"
      ttl   = 3600
    },
    {
      key   = "api_A"
      name  = "api"
      value = "192.168.2.22"
      type  = "A"
      ttl   = 3600
    },
    {
      key   = "test.example.com_NS_1"
      name  = "test"
      value = "aws-resolver1.blabla"
      type  = "NS"
      ttl   = 3600
    },
    {
      key   = "test.example.com_NS_2"
      name  = "test"
      value = "aws-resolver2.blabla"
      type  = "NS"
      ttl   = 3600
    },
  ]
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Key would be optional, but if present, must be present for all the records in the list. That is a Terraform requirement.

1
RB avatar

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

Need to create 4 records with the same value with different targets

i.e.

test.example.com NS aws-resolver1.blabla
test.example.com NS aws-resolver2.blabla
test.example.com NS aws-resolver21.blabla
test.example.com NS aws-resolver88.blabla

This results in resource conflicts because the value is not added to the key

test.example.comNS
test.example.comNS
test.example.comNS
test.example.comNS

Expected Behavior

Unique keys per record

Proposal

  records = [
    {
      key   = "bastion_A"
      name  = "bastion"
      value = "192.168.1.11"
      type  = "A"
      ttl   = 3600
    },
    {
      key   = "api_A"
      name  = "api"
      value = "192.168.2.22"
      type  = "A"
      ttl   = 3600
    },
    {
      key   = "test.example.com_NS_1"
      name  = "test"
      value = "aws-resolver1.blabla"
      type  = "NS"
      ttl   = 3600
    },
    {
      key   = "test.example.com_NS_2"
      name  = "test"
      value = "aws-resolver2.blabla"
      type  = "NS"
      ttl   = 3600
    },
  ]

This would

• Allow key to be optional, but if it is set, it is mandatory for all items in the list
• Backwards compatibility

references

https://sweetops.slack.com/archives/CB6GHNLG0/p1660906495013409

2022-08-22

András Sándor avatar
András Sándor

Hi, I’m looking for some clarification on a tf behavior I don’t fully understand. I’m using terraform-aws-ecs-alb-service-task (as a submodule of terraform-aws-ecs-web-app), and running plan/apply on it creates an update even when nothing was modified, something like this:

module.ecs_web_app.module.ecs_alb_service_task.aws_ecs_service.default[0] will be updated in-place
...
task_definition                    = "dev-backend:29" -> "dev-backend:8"
...

Task definition revision number is the only thing that's modified by TF, and the revisions themselves are identical. Anyone met this behaviour before?

Joe Niland avatar
Joe Niland

It looks like you have an existing task def 29, presumably created by ECS deployments, whereas the TF state has 8. You can avoid TF seeing this as a change by setting the ignore_changes_task_definition var to true.

1
András Sándor avatar
András Sándor

how do you usually manage changes to the task definition (env vars, cpu resources, etc.)?

Joe Niland avatar
Joe Niland

It’s still managed by terraform but you will need to update your service to use the latest task def in another way

András Sándor avatar
András Sándor

thanks, appreciate your help

Joe Niland avatar
Joe Niland

No problem. I usually use CD tools such as GitHub Actions or CodeDeploy to update it. Or ecs-deploy for manual cli usage.

CodeDeploy in particular can be a pain when you update the task def in Terraform but CodeDeploy uses the last version it knew about to create a new task definition, effectively skipping the one that TF created.

el avatar

Any recommendations for how to handle enums in Terraform? Should I just use local variables and refer to those?
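One common approach (values hypothetical) is a string variable constrained with a validation block, optionally paired with a local map from enum names to concrete values:

variable "environment" {
  type        = string
  description = "One of: dev, staging, prod"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}

locals {
  # Map enum names to concrete values
  instance_type_by_environment = {
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "m5.large"
  }

  instance_type = local.instance_type_by_environment[var.environment]
}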

ghostface avatar
ghostface

I have a list(map(string)) like this:

Changes to Outputs:
  + internal_ranges_map = [
      + {
          + "CIDR"        = "10.1.xx.54/32"
          + "description" = "x"
        },
      + {
          + "CIDR"        = "10.2.9.1xxxx/32"
          + "description" = "x"
        },
      + {
          + "CIDR"        = "172.22.16.0/23"
          + "description" = "xx"
        },
      + {
          + "CIDR"        = "172.22.18.0/23"
          + "description" = "xxx"
        },
      + {
          + "CIDR"        = "172.22.1x.0/23"
          + "description" = "xx"
        },
      + {
          + "CIDR"        = "172.22.1xxx.0/23"
          + "description" = "sharxxxx1_az1"
        },
      + {
          + "CIDR"        = "172.22.xx.0/23"
          + "description" = "sxxxxxuse1_az4"
        },
      + {
          + "CIDR"        = "xx.xx.xx/23"
          + "description" = "sxxx"
        },
    ]

how do i form a list of each CIDR from internal_ranges_map?

loren avatar

does this get you what you want? internal_ranges_map[*].CIDR?

1
1
ghostface avatar
ghostface

it does, thank you.

taking your advice above leads me onto my next question:

i have this:

ingress_db = concat(local.vpc_cidrs, var.internal_ranges_map[*].CIDR)

i want to reference those descriptions from the map in the below

resource "aws_security_group_rule" "rds" {
  for_each          = var.ingress_db
  from_port         = var.config["port"]
  protocol          = "tcp"
  security_group_id = aws_security_group.rds.id
  to_port           = var.config["port"]
  type              = "ingress"
  cidr_blocks       = [each.key]
  description = ""
}

but i’m struggling with how to get the description from the map

loren avatar

ahh well. in that case, you don’t. instead, you build up a list of objects with all the attributes you want. and then you use that in your for_each expression…

loren avatar

the easiest way is to change your local.vpc_cidrs to match the structure of var.internal_ranges_map… e.g.

locals {
  vpc_cidrs = [
    {
      CIDR = "..."
      description = "..."
    }
  ]
}

and then you can just do:

rules = { for item in concat(local.vpc_cidrs, var.internal_ranges_map) : item.CIDR => item }

and:

resource "aws_security_group_rule" "rds" {
  for_each          = local.rules
  from_port         = var.config["port"]
  protocol          = "tcp"
  security_group_id = aws_security_group.rds.id
  to_port           = var.config["port"]
  type              = "ingress"
  cidr_blocks       = [each.key]
  description       = each.value.description
}
ghostface avatar
ghostface

thanks for your reply @loren

is each.key correct?

ghostface avatar
ghostface

for the cidr ?

ghostface avatar
ghostface

here are my locals

locals {
  vpc_cidrs = [
    {
      "CIDR"        = data.terraform_remote_state.vpc.outputs.vpc_cidr_block
      "description" = "Primary Self VPC",
    },
    {
      "CIDR"        = data.terraform_remote_state.vpc.outputs.secondary_cidr_block
      "description" = "Secondary Self VPC",
    }
  ]

  ingress_db = { for item in concat(local.vpc_cidrs, var.internal_ranges_map) : item.CIDR => item }
}
ghostface avatar
ghostface

here is where i use the ingress_db

module "rds" {
  source      = "../modules/rds"
  config      = merge(local.rds, var.rds)
  environment = var.environment
  ingress_db  = local.ingress_db
  subnet_ids  = data.terraform_remote_state.vpc.outputs.private_subnets
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id
}
ghostface avatar
ghostface

and here is the sg rule inside the module

resource "aws_security_group_rule" "rds" {
  for_each          = var.ingress_db
  from_port         = var.config["port"]
  protocol          = "tcp"
  security_group_id = aws_security_group.rds.id
  to_port           = var.config["port"]
  type              = "ingress"
  cidr_blocks       = [each.key]
  description = each.value.description
}
ghostface avatar
ghostface
│ Error: "" is not a valid CIDR block: invalid CIDR address: 
│ 
│   with module.rds.aws_security_group_rule.rds[""],
│   on ../modules/rds/security.tf line 21, in resource "aws_security_group_rule" "rds":
│   21:   cidr_blocks       = [each.key]
│ 
╵
loren avatar

in this expression:

{ for item in concat(local.vpc_cidrs, var.internal_ranges_map) : item.CIDR => item }

when used as a for_each value, referencing each.key gives you the value of item.CIDR (the expression between the : and the => becomes the map key, and item becomes the map value)
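Illustrated with a small runnable example (hypothetical values):

locals {
  example = [{ CIDR = "10.0.0.0/16", description = "vpc" }]
  keyed   = { for item in local.example : item.CIDR => item }
  # local.keyed is { "10.0.0.0/16" = { CIDR = "10.0.0.0/16", description = "vpc" } }
  # so with for_each = local.keyed, each.key is "10.0.0.0/16"
  # and each.value.description is "vpc".
}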

loren avatar

what your error means, is that somewhere in your object, you have a CIDR that has a value of ""

ghostface avatar
ghostface

hmmm

ghostface avatar
ghostface

do maps have a ‘compact’ feature to remove potential empty values?

loren avatar

hmmm, well you can test for it in the for expression…

{ for item in concat(local.vpc_cidrs, var.internal_ranges_map) : item.CIDR => item if item.CIDR != "" }
ghostface avatar
ghostface

@loren thanks, i’ll check this.

ghostface avatar
ghostface

hi @loren the above worked great.

here’s my code:

  vpc_cidrs = [
    {
      "CIDR"        = data.terraform_remote_state.vpc.outputs.vpc_cidr_block
      "description" = "Primary Self VPC"
    },
    {
      "CIDR"        = data.terraform_remote_state.vpc.outputs.secondary_cidr_block
      "description" = "Secondary Self VPC"
    }
  ]

  ingress_db = {
    for item in concat(local.vpc_cidrs, var.ingress_db, var.internal_ranges_map) : item.CIDR => item if item.CIDR != ""
  }

where local.vpc_cidrs , var.ingress_db and var.internal_ranges_map are the same format.

sometimes these variables may have duplicate CIDRs which ends up resulting in this issue.

Error: Duplicate object key
│ 
│   on main.tf line 41, in locals:
│   40:   ingress_db = {
│   41:   for item in concat(local.vpc_cidrs, var.ingress_db, var.internal_ranges_map) : item.CIDR => item if item.CIDR != ""
│   42:   }
│     ├────────────────
│     │ item.CIDR is "10.1.xx.xx/32"

do you know of a way to remove duplicates after the concat ?

loren avatar

no, that means it is not an appropriate value for the key in the for_each expression, and you have to devise a new expression for the key

loren avatar

i’ll often have a “name” attribute in the data structure. and that attribute value, i require to be unique. that makes it appropriate for use as the for_each key

loren avatar

e.g.

  vpc_cidrs = [
    {
      name          = "primary"
      "CIDR"        = data.terraform_remote_state.vpc.outputs.vpc_cidr_block
      "description" = "Primary Self VPC"
    },
    {
      name          = "secondary"
      "CIDR"        = data.terraform_remote_state.vpc.outputs.secondary_cidr_block
      "description" = "Secondary Self VPC"
    }
  ]

and then

  ingress_db = {
    for item in concat(local.vpc_cidrs, var.ingress_db, var.internal_ranges_map) : item.name => item if item.CIDR != ""
  }
ghostface avatar
ghostface

@loren when you say you require it to be unique, is that enforced somehow?

loren avatar

well, as you saw, the key expression for for_each must be unique, and plan will error if it is not…

loren avatar

so, that’s my enforcement. as far as how i create or manage the unique values, they’re typically a user input or hardcoded in a local value

loren avatar

we’re kinda starting to divert into some more general theory and practice on how to manage and structure data/config… i am somewhat particular about these things, particularly in light of how for_each works, and i go out of my way when writing modules to manage inputs in a way that i can be reasonably sure will work from a cold start (empty state). it’s not necessarily for everyone though, requires a fair bit from the user, rather than being an easy button

loren avatar

this is a very good section on for_each. well worth reading and re-reading. i often learn something new when re-reading the docs, after playing around a little bit… https://www.terraform.io/language/meta-arguments/for_each#limitations-on-values-used-in-for_each

The for_each Meta-Argument - Configuration Language | Terraform by HashiCorp

The for_each meta-argument allows you to manage similar infrastructure resources without writing a separate block for each one.

OliverS avatar
OliverS

Does anyone have an easy way to determine the current version of modules in use, vs the latest available? I’m thinking like a list of module names, URL, version installed by init, and version available in the registry.

loren avatar

check whether i have any prs open by dependabot/renovate?

OliverS avatar
OliverS

@loren where?

loren avatar

you mean where are my pull requests? wherever the repo is hosted, typically github, or gitlab, or codecommit

Warren Parad avatar
Warren Parad

Is there something specific you are trying to do?

Denis avatar

I think Oliver isn’t aware of dependabot. If that’s the case, Oliver, you can read about it here. Basically, GitHub Actions run as you configure them, and Dependabot can be set to create a PR that bumps to the newer versions. It’s on you to fix any breaking changes that come with the newer versions, of course. But it will give you insight into what’s behind the latest version, and it encourages better operational discipline.

OliverS avatar
OliverS

I’ll rephrase my original question: given a root module, is there a way to get a list of third-party modules used, with the current version installed by terraform init, and the latest available version in registry?

Eg if my tf module is using cloudposse lambda module v1.0, and v1.1 is available, such command should show (something like) cloudposse/lambda current = v1.0 available = v1.1

OliverS avatar
OliverS

I just saw @Denis answer about dependabot let me check this out

1
Denis avatar

Yup, you should read through it, and terraform is supported. So it should fit your need. It’s a great tool.

Warren Parad avatar
Warren Parad

Can I ask why, do you need to do this?

OliverS avatar
OliverS

Just need to update dependencies, and it’s not something I want to automate, too risky atm. So I’m looking for a command to show me the list of modules that could be upgraded.

Warren Parad avatar
Warren Parad

“too risky, always”

loren avatar

i like both dependabot and renovate, and only automate the creation of the pr. we don’t merge the pr until we’ve tested it separately

OliverS avatar
OliverS

Right. So the idea is that if dependabot is run regularly, say daily, the module version updates will be minimal and frequent… and only accepted if manual verifications on the PR branch succeed…

Andrew Nazarov avatar
Andrew Nazarov

We are using Renovate. And yes for us it just creates PRs/MRs which we double check and merge manually

1
loren avatar

honestly i’ve found too frequent checks to be more annoying than anything. most updates are minor things that impact no feature i’m using. i am usually proactively aware if something i am doing requires an updated module, so i just update it myself. as far as just staying current, i don’t need prs to update modules any more often than once a month

1
Warren Parad avatar
Warren Parad

this is genius advice

OliverS avatar
OliverS

Dependabot seems to require github? which my client is not using (they use bitbucket)

OliverS avatar
OliverS

(still checking if that conclusion is correct)

loren avatar

my deepest condolences

OliverS avatar
OliverS

pays the bills

Warren Parad avatar
Warren Parad

Yikes

loren avatar

the dependabot service does require github. but you can run the dependabot container yourself, against several other hosts

loren avatar

# frozen_string_literal: true

require "dependabot/shared_helpers"
require "excon"

module Dependabot
module Clients
class Bitbucket
  class NotFound < StandardError; end

  class Unauthorized < StandardError; end

  class Forbidden < StandardError; end

  #######################
  # Constructor methods #
  #######################

  def self.for_source(source:, credentials:)
    credential =
      credentials.
      select { |cred| cred["type"] == "git_source" }.
      find { |cred| cred["host"] == source.hostname }

    new(credentials: credential)
  end

  ##########
  # Client #
  ##########

  def initialize(credentials:)
    @credentials = credentials
    @auth_header = auth_header_for(credentials&.fetch("token", nil))
  end

  def fetch_commit(repo, branch)
    path = "#{repo}/refs/branches/#{branch}"
    response = get(base_url + path)

    JSON.parse(response.body).fetch("target").fetch("hash")
  end

  def fetch_default_branch(repo)
    response = get(base_url + repo)

    JSON.parse(response.body).fetch("mainbranch").fetch("name")
  end

  def fetch_repo_contents(repo, commit = nil, path = nil)
    raise "Commit is required if path provided!" if commit.nil? && path

    api_path = "#{repo}/src"
    api_path += "/#{commit}" if commit
    api_path += "/#{path.gsub(%r{/+$}, '')}" if path
    api_path += "?pagelen=100"
    response = get(base_url + api_path)

    JSON.parse(response.body).fetch("values")
  end

  def fetch_file_contents(repo, commit, path)
    path = "#{repo}/src/#{commit}/#{path.gsub(%r{/+$}, '')}"
    response = get(base_url + path)

    response.body
  end

  def commits(repo, branch_name = nil)
    commits_path = "#{repo}/commits/#{branch_name}?pagelen=100"
    next_page_url = base_url + commits_path
    paginate({ "next" => next_page_url })
  end

  def branch(repo, branch_name)
    branch_path = "#{repo}/refs/branches/#{branch_name}"
    response = get(base_url + branch_path)

    JSON.parse(response.body)
  end

  def pull_requests(repo, source_branch, target_branch)
    pr_path = "#{repo}/pullrequests"
    # Get pull requests with any status
    pr_path += "?status=OPEN&status=MERGED&status=DECLINED&status=SUPERSEDED"
    next_page_url = base_url + pr_path
    pull_requests = paginate({ "next" => next_page_url })

    pull_requests unless source_branch && target_branch

    pull_requests.select do |pr|
      pr_source_branch = pr.fetch("source").fetch("branch").fetch("name")
      pr_target_branch = pr.fetch("destination").fetch("branch").fetch("name")
      pr_source_branch == source_branch && pr_target_branch == target_branch
    end
  end

  # rubocop:disable Metrics/ParameterLists
  def create_commit(repo, branch_name, base_commit, commit_message, files,
                    author_details)
    parameters = {
      message: commit_message, # TODO: Format markup in commit message
      author: "#{author_details.fetch(:name)} <https://github.com/dependabot/dependabot-core/blob/main/common/lib/dependabot/clients/bitbucket.rb#{author_details.fetch(:email)}>",
      parents: base_commit,
      branch: branch_name
    }

    files.each do |file|
      absolute_path = file.name.start_with?("/") ? file.name : "/" + file.name
      parameters[absolute_path] = file.content
    end

    body = encode_form_parameters(parameters)

    commit_path = "#{repo}/src"
    post(base_url + commit_path, body, "application/x-www-form-urlencoded")
  end
  # rubocop:enable Metrics/ParameterLists

  # rubocop:disable Metrics/ParameterLists
  def create_pull_request(repo, pr_name, source_branch, target_branch,
                          pr_description, _labels, _work_item = nil)
    reviewers = default_reviewers(repo)

    content = {
      title: pr_name,
      source: {
        branch: {
          name: source_branch
        }
      },
      destination: {
        branch: {
          name: target_branch
        }
      },
      description: pr_description,
      reviewers: reviewers,
      close_source_branch: true
    }

    pr_path = "#{repo}/pullrequests"
    post(base_url + pr_path, content.to_json)
  end
  # rubocop:enable Metrics/ParameterLists

  def default_reviewers(repo)
    path = "#{repo}/default-reviewers?pagelen=100&fields=values.uuid,next"
    reviewers_url = base_url + path

    default_reviewers = paginate({ "next" => reviewers_url })

    reviewer_data = []

    default_reviewers.each do |reviewer|
      reviewer_data.append({ uuid: reviewer.fetch("uuid") })
    end

    reviewer_data
  end

  def tags(repo)
    path = "#{repo}/refs/tags?pagelen=100"
    response = get(base_url + path)

    JSON.parse(response.body).fetch("values")
  end

  def compare(repo, previous_tag, new_tag)
    path = "#{repo}/commits/?include=#{new_tag}&exclude=#{previous_tag}"
    response = get(base_url + path)

    JSON.parse(response.body).fetch("values")
  end

  def get(url)
    response = Excon.get(
      url,
      user: credentials&.fetch("username", nil),
      password: credentials&.fetch("password", nil),
      # Setting to false to prevent Excon retries, use BitbucketWithRetries for retries.
      idempotent: false,
      **Dependabot::SharedHelpers.excon_defaults(
        headers: auth_header
      )
    )
    raise Unauthorized if response.status == 401
    raise Forbidden if response.status == 403
    raise NotFound if response.status == 404

    if response.status >= 400
      raise "Unhandled Bitbucket error!\n"\
            "Status: #{response.status}\n"\
            "Body: #{response.body}"
    end

    response
  end

  def post(url, body, content_type = "application/json")
    response = Excon.post(
      url,
      body: body,
      user: credentials&.fetch("username", nil),
      password: credentials&.fetch("password", nil),
      idempotent: false,
      **SharedHelpers.excon_defaults(
        headers: auth_header.merge(
          {
            "Content-Type" => content_type
          }
        )
      )
    )
    raise Unauthorized if response.status == 401
    raise Forbidden if response.status == 403
    raise NotFound if response.status == 404

    response
  end

  private

  def auth_header_for(token)
    return {} unless token

    { "Authorization" => "Bearer #{token}" }
  end

  def encode_form_parameters(parameters)
    parameters.map do |key, value|
      URI.encode_www_form_component(key.to_s) + "=" + URI.encode_www_form_component(value.to_s)
    end.join("&")
  end

  # Takes a hash with optional `values` and `next` fields
  # Returns an enumerator.
  #
  # Can be used a few ways:
  # With GET:
  #     paginate ({"next" =&gt; url})
  # or
  #     paginate(JSON.parse(get(url).body))
  #
  # With POST (for endpoints that provide POST methods for long query parameters)
  #     response = post(url, body)
  #     first_page = JSON.parse(response.body)
  #     paginate(first_page)
  def paginate(page)
    Enumerator.new do |yielder|
      loop do
        page.fetch("values", []).each { |value| yielder &lt;&lt; value }
        break unless page.key?("next")

        next_page_url = page.fetch("next")
        page…
OliverS avatar
OliverS

yeah just saw a mention of dependabot core…

loren avatar

here’s a starting point for running dependabot yourself… https://github.com/dependabot/dependabot-script

dependabot/dependabot-script

A simple script that demonstrates how to use Dependabot Core

OliverS avatar
OliverS

Thanks I’ll check it out…

Tyrone Meijn avatar
Tyrone Meijn

https://github.com/keilerkonzept/terraform-module-versions This one works for me in a project where I have not yet configured Renovate.

keilerkonzept/terraform-module-versions

CLI tool that checks Terraform code for module updates. Single binary, no dependencies. linux, osx, windows. #golang #cli #terraform

OliverS avatar
OliverS

Thanks @Tyrone Meijn, based on the docs it might be exactly what I was looking for, I’ll definitely try it out

Charles Smith avatar
Charles Smith

Hello, @Brent Farand and I are making use of your https://github.com/cloudposse/terraform-aws-components and trying to follow much of your foundational reference architecture at our organisation. (Thank you BTW, it’s some truly awesome work). We’re deploying EKS with your components and I’ve found the components eks-iam, external-dns, and alb-controller for deploying IAM service accounts and the necessary k8s controllers. I’m curious, however, if you have a module that handles the deployment of a cluster autoscaler controller? I can see reference to an autoscaler in a number of components but haven’t been able to find one that actually deploys it. Am I missing something, or should we just use the external-dns component as a starting place and create our own cluster-autoscaler component?

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

RB avatar

Very cool you folks are setting this up yourselves


RB avatar

We probably just haven’t upstreamed the cluster-autoscaler component yet

RB avatar

The eks-iam component has been deprecated since the iam roles are now created within each specific helm chart component

RB avatar

One thing to be mindful of is the number of months it’s been since a component has been updated

Charles Smith avatar
Charles Smith

Thank you. I should have read the code closer but that makes sense now that you mention it. We’ll start with those other 2 helm components and see how it goes.

2022-08-23

Soren Jensen avatar
Soren Jensen

Hi all, I’ve got an issue automating our deployment of an ECS Service. The service consists of 1 task with 3 containers. Network mode is HOST, as one of the containers is running an ssh service where we can log in. (Should be an EC2 instance instead, but out of my control to change this right now)

Terraform apply gives me the following error:

service demo-executor was unable to place a task because no container instance met all of its requirements. The closest matching container-instance 5afff47b0b0e4199a97644b7a050d368 is already using a port required by your task

How do I get terraform to destroy my service before it attempts to redeploy it?

jose.amengual avatar
jose.amengual

you do not need to destroy the service

jose.amengual avatar
jose.amengual

you need to make sure the task is using the available ram from the ec2 instance

jose.amengual avatar
jose.amengual

which is NOT the total amount of ram of the instance type

jose.amengual avatar
jose.amengual

you need to leave room for the ecs agent etc so if you tune the ram of each task plus task count, TF will stop the task and then start it again
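A sketch of that tuning (hypothetical numbers and file path; the point is to reserve well under the instance’s registered capacity so the agent has headroom and a replacement task can be placed):

resource "aws_ecs_task_definition" "demo" {
  family       = "demo-executor"
  network_mode = "host"

  # Reserve less than the instance's full memory/CPU: the ECS agent needs
  # headroom, and a rolling deployment needs room for a second task.
  cpu    = 512
  memory = 900

  container_definitions = file("${path.module}/containers.json")
}

Note that with host networking and a fixed container port, a second copy of the task still can’t be placed on the same instance, which is what the original placement error was about.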

Soren Jensen avatar
Soren Jensen

The joy of inheriting code… The project had a few issues; the main reason it wouldn’t deploy the new service was that the desired count of running containers was set to 1 and the CPU request was 4 vCPUs for all 3 containers, so there was no room to spin up the new version in parallel with the old one.

Soren Jensen avatar
Soren Jensen

Thanks for the input

jose.amengual avatar
jose.amengual

np

2022-08-24

Release notes from terraform avatar
Release notes from terraform
02:43:32 PM

v1.2.8 1.2.8 (August 24, 2022) BUG FIXES: config: The flatten function will no longer panic if given a null value that has been explicitly converted to or implicitly inferred as having a list, set, or tuple type. Previously Terraform would panic in such a situation because it tried to “flatten” the contents of the null value into the result, which is impossible. (https://github.com/hashicorp/terraform/issues/31675)

go.mod: go get github.com/zclconf/[email protected] by apparentlymart · Pull Request #31675 · hashicorp/terraform

This fixes a possible panic in what Terraform calls the “flatten” function in situations where a user passes in a null value of a sequence type. The function will now treat that the same as a null …

Eric Berg avatar
Eric Berg

Regarding the terraform-aws-rds-cluster module, I’d like to apply db_role tags, as appropriate, to the writer and readers. Anybody know how to make that happen?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:tada: You can ask questions now on StackOverflow and tag #cloudposse

https://stackoverflow.com/tags/cloudposse/info

'cloudposse' tag wiki
Stack Overflow: The World’s Largest Online Community for Developers
4

2022-08-25

2022-08-26

Nikhil Purva avatar
Nikhil Purva

Hi Team, regarding the terraform-aws-waf module, I would like to use and_statement, or_statement, and not_statement of different types. Is it possible to do that? If yes, please let me know how we can achieve this.

Pierre-Yves avatar
Pierre-Yves

What do you mean by “different types”? You can use those together, but I’m not sure I understand the “different types” part.

Nikhil Purva avatar
Nikhil Purva

My bad I meant something like this

statement {
  or_statement {
    statement {
      ip_set_reference_statement {
        arn = 
      }
    }
    statement {
      byte_match_statement {
      }
    }
  }
}
Pierre-Yves avatar
Pierre-Yves

oh alright, I think that in your case, the “and_statement” should encapsulate the statement {not_statement{}}.
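For example, with the raw aws_wafv2 rule syntax that nesting would look something like this (a sketch; the ARN reference is a placeholder and the byte_match_statement arguments are elided):

statement {
  and_statement {
    statement {
      not_statement {
        statement {
          ip_set_reference_statement {
            arn = aws_wafv2_ip_set.example.arn # placeholder
          }
        }
      }
    }

    statement {
      byte_match_statement {
        # search_string, field_to_match, positional_constraint,
        # text_transformation ... elided
      }
    }
  }
}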

Pierre-Yves avatar
Pierre-Yves

oh, your message has been edited

Nikhil Purva avatar
Nikhil Purva

yeah, made it smaller and closer to what I want to achieve

Pierre-Yves avatar
Pierre-Yves

It should work like this

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t believe it’s possible with the cloudposse module. The WAF resources are hard to build a module around, it can be easier to use the resources directly for exactly this reason

Pierre-Yves avatar
Pierre-Yves

Hah, I was not aware whether it was with or without the module ^^. I don’t use the cloudposse module for our WAF, so I don’t know.

Stephen Bennett avatar
Stephen Bennett

Any ideas on what’s wrong with this external data call? It returns an output of “23445234234”, so I would expect that to be seen as a string.

data "external" "cognito" {
program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '.UserPoolClients | .[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId'"] 
}

but i get an error:

│ The data source received unexpected results after executing the program.
│ 
│ Program output must be a JSON encoded map of string keys and string values.
│ 
│ If the error is unclear, the output can be viewed by enabling Terraform's logging at TRACE level. Terraform documentation on logging: <https://www.terraform.io/internals/debugging>
│ 
│ Program: /bin/sh
│ Result Error: json: cannot unmarshal string into Go value of type map[string]string
Alex Jurkiewicz avatar
Alex Jurkiewicz

The problem is clear in the error message: external data output can’t be a plain string. It must be a JSON map of string keys to string values.
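One way to satisfy that contract (a sketch using the same command; the client_id key name is arbitrary, and this assumes exactly one matching client) is to have jq emit an object instead of a bare string:

data "external" "cognito" {
  program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '{ client_id: (.UserPoolClients[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId) }'"]
}

# referenced afterwards as data.external.cognito.result.client_id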

2022-08-27

2022-08-28

Alex Jurkiewicz avatar
Alex Jurkiewicz

Using cloudposse’s null label module, how can I zero the attributes when inheriting a context? For example:

module "label" {
  source  = "cloudposse/label/null"
  name        = "waf"
  attributes  = ["regional"]
}

module "label_cloudfront" {
  source  = "cloudposse/label/null"
  context    = module.label.context
  attributes = ["cloudfront"]
}

I end up with labels waf-regional and waf-regional-cloudfront, when I want the second one to be waf-cloudfront

RB avatar

It doesn’t seem possible but i could be mistaken

It’s gross, but this may be a workaround; there may be a better way:

module "label" {
  source  = "cloudposse/label/null"
 
  name        = "waf"
  attributes  = ["regional"]
}
 
module "label_cloudfront" {
  source  = "cloudposse/label/null"

  name       = "waf"
  attributes = ["cloudfront"]
}

@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) thoughts?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’ve seen the same behavior and I came to the conclusion that when passing the context from one label it isn’t possible to reset the values and they’re additive. Someone from Cloud Posse may know a way but I haven’t found one myself.

RB avatar

I think only the attributes are additive whereas the other arguments can be overridden

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@rb possibly correct; I think attributes is the most common one, so the most obvious. If I need multiple labels with different base attributes, I just set them up in the root module then pass the respective context. It gets messy/ugly but it works. To obtain the naming schema requested by my manager, I’ve then had to tweak the label_order and label_as_tags.

Joe Niland avatar
Joe Niland

@Alex Jurkiewicz I would suggest just making another “regional” label and remove the attributes from the existing module.label

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

By design, attributes and tags are additive, but the explicitly named labels can be overridden.

In the next version of null-label, the labels will be upgraded to a map so you can use any key/name, not just the pre-defined ones we have now. Until then, you might prefer to use environment = "regional" and environment = "cloudfront" instead, or create a base label to inherit from:

module "label_base" {
  source = "cloudposse/label/null"
  name   = "waf"
}

module "label" {
  source      = "cloudposse/label/null"
  context     = module.label_base.context
  attributes  = ["regional"]
}

module "label_cloudfront" {
  source     = "cloudposse/label/null"
  context    = module.label_base.context
  attributes = ["cloudfront"]
}

2022-08-29

Nikhil Purva avatar
Nikhil Purva

Hi Team, I am using cloudposse’s waf module and trying to use byte_match_statement_rules with single_header, but getting the error The given value is not suitable for module.waf.var.byte_match_statement_rules declared at modules/terraform-aws-waf/variables.tf:46,1-38: all list elements must have the same type. Below is my template

field_to_match = {
  single_header = [{
    name = "Host"
  }]
}
cloudposse/terraform-aws-waf
Alex Jurkiewicz avatar
Alex Jurkiewicz

you asked a very similar question yesterday. The CloudPosse module doesn’t support what you want to do

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t believe it’s possible with the cloudposse module. The WAF resources are hard to build a module around, it can be easier to use the resources directly for exactly this reason

Nikhil Purva avatar
Nikhil Purva

my last question was different; it was about having and_statement. Today I am asking about https://github.com/cloudposse/terraform-aws-waf/blob/master/rules.tf#L172-L178, which is mentioned in the module and is giving the error

                dynamic "single_header" {
                  for_each = lookup(field_to_match.value, "single_header", null) != null ? [1] : []

                  content {
                    name = single_header.value.name
                  }
                }
Jonas Steinberg avatar
Jonas Steinberg

Does anyone have any opinions on how best to go about introducing aws terraform tagging standards across a ton of github repos that contain application code (not really important), as well as all relevant terraform for that service (key factor)?

• The brute force approach I suppose would be to clone all repos to local (already done) and then write a script that handles iterating over them, traversing into the relevant directories and files where the terraform source is, and writing a tagging object

• Another approach would be to mandate to engineering managers that people introduce a pre-determined tagging standard

• Any other ideas? I’m new to trying out ABAC across an org.

loren avatar

can you use a tag policy in aws organizations?

loren avatar
We’re Opensourcing Terratag to Make Multicloud Resource Tagging Easier | env0 blog

env0 is open sourcing Terratag - a CLI tool that enables users of Terraform to automatically create and maintain tagging across their entire set of AWS, Azure, and GCP resources. It enables you to easily add dynamic tags to your existing Infrastructure-as-Code.

1
loren avatar

or if you go the brute force approach, there is git-xargs by the gruntwork team… https://github.com/gruntwork-io/git-xargs

gruntwork-io/git-xargs

git-xargs is a command-line tool (CLI) for making updates across multiple Github repositories with a single command.

cool-doge1
loren avatar

in the end, for enforcement, i’d strongly consider a CI test that fails a pull request if the tag standard is not met

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

How many configurations are you talking? It’s not a very sexy answer but if the answer is under 100, it will take you less than an hour to update them all by hand

Alex Jurkiewicz avatar
Alex Jurkiewicz

I recently did a few hundred by hand. It’s not something you’ll do frequently enough to bother investing in automation. Elbow grease

Jonas Steinberg avatar
Jonas Steinberg

@loren Tbh I don’t know the answer to any of those questions, but I sure do appreciate the input and I’m looking into all three now, actually, thanks.

@Alex Jurkiewicz I appreciate the pragmatism of your input, thank you, and that may be a viable approach, as yes it’s probably less than 100 repos. Maybe. It could be 200, I’m not totally sure. I think the bigger concern is what we do after which would be using something like rapid7’s CloudInsightSec tool to then crawl across our AWS accounts and enforce compliance on AWS resources.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Do you have a centralised deployment system for your Terraform? That’s where to introduce compliance gates

this1
Jonas Steinberg avatar
Jonas Steinberg

@Alex Jurkiewicz Yep – Terraform Cloud (and CircleCI does a lot as well).

Jim Park avatar
Jim Park

@loren question, do you know how terratag compares to using default_tags in the aws provider?

loren avatar

not really. they feel like very different mechanisms. default_tags currently has a number of issues that make it appropriate for only a very limited use case.

loren avatar

default_tags, as currently implemented, are only “default” at the provider level. at the resource level, they’re effectively “mandatory” tags and tag-values. if you try to override a tag-key with a different value at the resource level, you’ll either get an error or a persistent diff. very annoying
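For reference, default_tags are set once on the provider block (a minimal sketch; the tag keys are just examples), and the persistent-diff problem appears when a resource’s own tags reuse one of these keys with a different value:

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "dev"
      ManagedBy   = "terraform"
    }
  }
}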

Jonas Steinberg avatar
Jonas Steinberg

default_tags seems like both a specific and a painfully limited use case.

fwiw, it turns out that AWS Organizations, combined with AWS Resource Groups, has actually evolved to offer a fair amount of functionality today. For anyone considering broad tag-based controls, it’s definitely at least worth reading up on.

1
omry avatar

@Jonas Steinberg if you have any questions about Terratag let us know (Disclaimer - I am the co-founder and CTO of env0, which owns and maintains Terratag). We have quite a lot of customers using it to create a standard in terms of tagging resources.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jonas Steinberg we discussed this yesterday on office hours. Make sure to check out the recording.

Jonas Steinberg avatar
Jonas Steinberg

@Erik Osterman (Cloud Posse) very nice as usual

2022-08-30

Aumkar Prajapati avatar
Aumkar Prajapati

Question all, does this module support MSK serverless? https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster

cloudposse/terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can check the underlying terraform resources for msk serverless, then see if the required resources/attributes are set in the module’s code

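For what it’s worth, recent v4 AWS providers do ship an aws_msk_serverless_cluster resource, which a module would need to use for serverless; a minimal sketch (the variable names are placeholders):

resource "aws_msk_serverless_cluster" "example" {
  cluster_name = "example"

  vpc_config {
    subnet_ids         = var.subnet_ids
    security_group_ids = var.security_group_ids
  }

  # Serverless clusters authenticate clients with IAM
  client_authentication {
    sasl {
      iam {
        enabled = true
      }
    }
  }
}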

2022-08-31

Release notes from terraform avatar
Release notes from terraform
01:33:32 PM

v1.3.0-beta1 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…

2
Michał Woś avatar
Michał Woś

Any chance of having cloudposse/s3-bucket/aws bumped to at least 2.0.1 here https://github.com/cloudposse/terraform-aws-s3-log-storage/pull/72 and merged?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

My apologies. This was held up for what was supposed to be a short time due to expected changes on the roadmap, and when the roadmap priorities were changed, I failed to release the hold. It has now been updated and released as v0.28.2. (An update using s3-bucket 2.0.0 was accidentally released as v0.28.1.)

Michał Woś avatar
Michał Woś

Thank you for taking a second look and for the prompt execution!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Thank you for bringing this delay to our attention.

Release notes from terraform avatar
Release notes from terraform
05:03:32 PM

v1.3.0-beta1 1.3.0 (Unreleased) NEW FEATURES:

Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…

Release v1.3.0-beta1 · hashicorp/terraform

1.3.0 (Unreleased) NEW FEATURES: Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual at…

1