#terraform-aws-modules (2024-10)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/


2024-10-02

Erik Parawell

@Ben Smith (Cloud Posse) I heard you are the expert on the Ecspresso partial task definition stuff. When we change something in the ECS Service Terraform component, it should upload to the S3 bucket and not touch the current live deployment, yes? Right now, when we do an atmos terraform apply, it clobbers the live deployment with a new taskdef that isn’t merged with the JSON in the bucket. We made sure s3_mirror_name is set and that files are being uploaded to the bucket.

Erik Parawell

Maybe in the ecs_alb_service_task module invocation, adjust the ignore_changes_task_definition variable to look at the s3_mirroring_enabled flag:

ignore_changes_task_definition = local.s3_mirroring_enabled ? true : lookup(local.task, "ignore_changes_task_definition", false)

When s3_mirroring_enabled is true, ignore_changes_task_definition is set to true, so it will ignore any changes to the task definition but will still upload the template JSON to the bucket.
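
For reference, that flag would end up in the component’s ecs_alb_service_task module invocation roughly like this (a sketch only; the source and surrounding arguments are illustrative, not the component’s exact code):

module "ecs_alb_service_task" {
  source = "cloudposse/ecs-alb-service-task/aws"

  # ... other component inputs ...

  # When the S3 mirror is enabled, ignore drift on the task definition
  # instead of overwriting the live revision on every apply.
  ignore_changes_task_definition = local.s3_mirroring_enabled ? true : lookup(local.task, "ignore_changes_task_definition", false)
}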

Erik Parawell

But that feels wrong, like a hacky workaround. I feel like something else is wrong, either with the component or with my implementation/usage.

Ben Smith (Cloud Posse)


When we change something in the ECS Service Terraform component, it should upload to the S3 bucket and not touch the current live deployment, yes?
So Terraform apply should still deploy to ECS, but after the first apply (once a file exists in S3) it should use that file entirely. (This creates a slight hiccup: when you change the TF config, you apply it, see no changes, and then have to cut a new release for the changes to be picked up.)
Right now, when we do an atmos terraform apply, it clobbers the live deployment with a new taskdef that isn’t merged with the JSON in the bucket.
This seems like the core of the problem. I’d check that the task template is what your TF configures, and that the deploy/taskdef.json in your repository has some basic stuff like:

{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "{{ must_env `IMAGE` }}"
    }
  ],
  "cpu": "256",
  "memory": "1024"
}

Then see what GHA is uploading to S3. Finally, double-check that all of these files exist in S3 under the path s3://<your mirror bucket>/acme-plat-<short-region>-<stage>-ecs-platform/<ecs-service-id>/task-definition.json
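
For context, the component reads that mirrored file back and decodes it into local.task_definition_s3, which the merge logic discussed below consumes. A rough sketch of that mechanism, assuming an aws_s3_object data source and the s3_mirror_name variable mentioned above (the data source name and key here are illustrative, not the component’s exact code):

data "aws_s3_object" "mirror" {
  bucket = var.s3_mirror_name
  # Illustrative key; the real object lives under the prefix shown above.
  key    = "<ecs-service-id>/task-definition.json"
}

locals {
  task_definition_s3 = jsondecode(data.aws_s3_object.mirror.body)
}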

Erik Parawell

My task-definition.json and my task-template.json look correct as well.

Erik Parawell

Since it is clobbering the ECS taskdef, it seems to me almost as if it isn’t merging in the TF. Am I correctly specifying the ECS Service in my YAML? (BTW, I have modified the ECS Service component to add some things like map_asm_secrets, repository_credentials, iam_exec_policy_statements, and iam_policy_statements.)

Erik Parawell

Update: I went ahead and updated the S3 task-definition.json by executing the GHA again, and the TF now seems to be picking up the remote S3 definition. However, I am running into this error:

│ Error: Invalid index
│
│   on main.tf line 125, in locals:
│  125:     local.container_aliases[item.name] => { container_definition = item }
│     ├────────────────
│     │ item.name is "datadog-agent"
│     │ local.container_aliases is object with 2 attributes
│
│ The given key does not identify an element in this collection value.

Erik Parawell

@Ben Smith (Cloud Posse) I think this current issue is being caused by the Datadog integration: https://github.com/cloudposse/terraform-aws-components/blob/26a33bd5392991921129b63b2bec64d9bc580d81/modules/ecs-service/main.tf#L118-L126

  container_aliases = {
    for name, settings in var.containers :
    settings["name"] => name if local.enabled
  }

  container_s3 = {
    for item in lookup(local.task_definition_s3, "containerDefinitions", []) :
    local.container_aliases[item.name] => { container_definition = item }
  }

Basically, when local.container_aliases[item.name] is accessed, local.container_aliases only contains the containers that we specify in our Atmos configuration. In my case I specify two container definitions, but the remote definition has four containers because the Datadog integration injects extra containers into our taskdef. This creates a mismatch in expectations.

Erik Parawell

As a workaround, this works:

  # Aliases for the containers declared in the Atmos configuration.
  base_containers = {
    for name, settings in var.containers :
    settings["name"] => name
  }

  # Aliases for the sidecars injected by the Datadog integration, which are
  # absent from var.containers but present in the mirrored task definition.
  datadog_containers = var.datadog_agent_sidecar_enabled ? {
    "datadog-agent"      = "datadog-agent"
    "datadog-log-router" = "datadog-log-router"
  } : {}

  container_aliases = merge(local.base_containers, local.datadog_containers)

I’m not familiar enough with what it expects to make a proper fix, but I might spend some more time on it to do one.
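
For example, one direction might be to make the alias lookup tolerant of containers that Terraform does not manage, so injected sidecars fall back to their own name instead of raising the Invalid index error (a sketch only, not tested against the component):

  # A sketch: containers absent from container_aliases (e.g. the Datadog
  # sidecars) fall back to their own name instead of raising an error.
  container_s3 = {
    for item in lookup(local.task_definition_s3, "containerDefinitions", []) :
    lookup(local.container_aliases, item.name, item.name) => { container_definition = item }
  }

That would keep the injected containers in container_s3 under their original names rather than requiring them to be declared up front.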

Gabriela Campana (Cloud Posse)

@Ben Smith (Cloud Posse)

