#terraform (2022-01)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-01-03

2022-01-04

joshmyers avatar
joshmyers

What are people doing to get around https://github.com/hashicorp/terraform/issues/28803 ? “Objects have changed outside of Terraform” in > 1.0.X

A way to hide certain expected changes from the "refresh" report ("Objects have changed outside of Terraform") · Issue #28803 · hashicorp/terraform

After upgrading to 0.15.4 terraform reports changes that are ignored. It is exactly like commented here: #28776 (comment) Terraform Version Terraform v0.15.4 on darwin_amd64 + provider registry.ter…

loren avatar

one thing that i think helps is to always run a refresh after apply

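(For reference: since 0.15.4 that post-apply refresh can be run as a refresh-only apply, which reviews the out-of-band changes and persists them to state so the next plan is quiet:)

terraform apply
terraform apply -refresh-only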

joshmyers avatar
joshmyers

Hmm, don’t think that is gonna cut it in a lot of cases…

loren avatar

perhaps, but it helps an awful lot of the time

joshmyers avatar
joshmyers

Won’t help in places we have ignore_changes
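
(For context, a minimal sketch of the kind of block meant here; ignore_changes suppresses the planned diff, but the "Objects have changed outside of Terraform" report still lists the drift. The data-source reference is hypothetical:)

resource "aws_cloudwatch_log_resource_policy" "default" {
  policy_name     = "userservices-ecs-cluster"
  policy_document = data.aws_iam_policy_document.trust.json # hypothetical

  lifecycle {
    ignore_changes = [policy_document]
  }
}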

joshmyers avatar
joshmyers

Currently seeing

joshmyers avatar
joshmyers
Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # aws_cloudwatch_log_resource_policy.default has been changed
  ~ resource "aws_cloudwatch_log_resource_policy" "default" {
        id              = "userservices-ecs-cluster"
      ~ policy_document = jsonencode( # whitespace changes
            {
                Statement = [
                    {
                        Action    = [
                            "logs:PutLogEvents",
                            "logs:CreateLogStream",
                        ]
                        Effect    = "Allow"
                        Principal = {
                            Service = [
                                "events.amazonaws.com",
                                "delivery.logs.amazonaws.com",
                            ]
                        }
                        Resource  = "arn:aws:logs:us-east-2:789659335040:log-group:/userservices/events/ecs/clusters/userservices-ecs-cluster:*"
                        Sid       = "TrustEventBridgeToStoreLogEvents"
                    },
                ]
                Version   = "2012-10-17"
            }
        )
        # (1 unchanged attribute hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────────────────────

No changes. Your infrastructure matches the configuration.
joshmyers avatar
joshmyers

Not helpful at all

loren avatar

well that’s a very specific case, and not the majority

joshmyers avatar
joshmyers

I’ve run 5 plans, 4 have a confusing “Objects have changed outside of Terraform”, some for “whitespace changes”, some not

joshmyers avatar
joshmyers

Folks will just start to ignore all plans if they don’t understand what is going on

loren avatar

the whitespace changes are something the aws provider has been working on pretty actively the last few releases. it’s getting better, but there is still work to do there
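
(One partial workaround, sketched here only: build the configured document with jsonencode() rather than a heredoc, so it is stored in the minified shape the provider reads back and whitespace-only drift has less to diff against. It does not replace the provider-side fixes mentioned above:)

resource "aws_cloudwatch_log_resource_policy" "default" {
  policy_name = "userservices-ecs-cluster"

  # jsonencode() emits minified JSON, avoiding some (not all) whitespace-only diffs
  policy_document = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "TrustEventBridgeToStoreLogEvents"
      Effect    = "Allow"
      Action    = ["logs:PutLogEvents", "logs:CreateLogStream"]
      Principal = { Service = ["events.amazonaws.com", "delivery.logs.amazonaws.com"] }
      Resource  = "arn:aws:logs:eu-west-1:789659335040:log-group:/userservices/events/ecs/clusters/userservices-ecs-cluster:*"
    }]
  })
}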

joshmyers avatar
joshmyers
Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # aws_cloudwatch_log_group.default has been changed
  ~ resource "aws_cloudwatch_log_group" "default" {
        id                = "/userservices/events/ecs/clusters/userservices-ecs-cluster"
        name              = "/userservices/events/ecs/clusters/userservices-ecs-cluster"
        tags              = {
            "Environment"       = "global"
            "Name"              = "userservices-global-userservices-ecs-cluster"
            "Namespace"         = "userservices"
            "bamazon:app"       = "userservices-ecs-cluster"
            "bamazon:env"       = "global"
            "bamazon:namespace" = "bamtech"
            "bamazon:team"      = "userservices"
        }
      + tags_all          = {
          + "Environment"       = "global"
          + "Name"              = "userservices-global-userservices-ecs-cluster"
          + "Namespace"         = "userservices"
          + "bamazon:app"       = "userservices-ecs-cluster"
          + "bamazon:env"       = "global"
          + "bamazon:namespace" = "bamtech"
          + "bamazon:team"      = "userservices"
        }
        # (2 unchanged attributes hidden)
    }
  # aws_cloudwatch_log_resource_policy.default has been changed
  ~ resource "aws_cloudwatch_log_resource_policy" "default" {
        id              = "userservices-ecs-cluster"
      ~ policy_document = jsonencode( # whitespace changes
            {
                Statement = [
                    {
                        Action    = [
                            "logs:PutLogEvents",
                            "logs:CreateLogStream",
                        ]
                        Effect    = "Allow"
                        Principal = {
                            Service = [
                                "events.amazonaws.com",
                                "delivery.logs.amazonaws.com",
                            ]
                        }
                        Resource  = "arn:aws:logs:eu-west-1:789659335040:log-group:/userservices/events/ecs/clusters/userservices-ecs-cluster:*"
                        Sid       = "TrustEventBridgeToStoreLogEvents"
                    },
                ]
                Version   = "2012-10-17"
            }
        )
        # (1 unchanged attribute hidden)
    }
  # aws_ecs_cluster.default has been changed
  ~ resource "aws_ecs_cluster" "default" {
        id                 = "arn:aws:ecs:eu-west-1:789659335040:cluster/userservices-ecs-cluster"
        name               = "userservices-ecs-cluster"
        tags               = {
            "Environment"       = "global"
            "Name"              = "userservices-global-userservices-ecs-cluster"
            "Namespace"         = "userservices"
            "bamazon:app"       = "userservices-ecs-cluster"
            "bamazon:env"       = "global"
            "bamazon:namespace" = "bamtech"
            "bamazon:team"      = "userservices"
        }
      + tags_all           = {
          + "Environment"       = "global"
          + "Name"              = "userservices-global-userservices-ecs-cluster"
          + "Namespace"         = "userservices"
          + "bamazon:app"       = "userservices-ecs-cluster"
          + "bamazon:env"       = "global"
          + "bamazon:namespace" = "bamtech"
          + "bamazon:team"      = "userservices"
        }
        # (2 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }
  # aws_cloudwatch_event_rule.default has been changed
  ~ resource "aws_cloudwatch_event_rule" "default" {
        id             = "userservices-ecs-cluster"
        name           = "userservices-ecs-cluster"
        tags           = {
            "Environment"       = "global"
            "Name"              = "userservices-global-userservices-ecs-cluster"
            "Namespace"         = "userservices"
            "bamazon:app"       = "userservices-ecs-cluster"
            "bamazon:env"       = "global"
            "bamazon:namespace" = "bamtech"
            "bamazon:team"      = "userservices"
        }
      + tags_all       = {
          + "Environment"       = "global"
          + "Name"              = "userservices-global-userservices-ecs-cluster"
          + "Namespace"         = "userservices"
          + "bamazon:app"       = "userservices-ecs-cluster"
          + "bamazon:env"       = "global"
          + "bamazon:namespace" = "bamtech"
          + "bamazon:team"      = "userservices"
        }
        # (5 unchanged attributes hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────────────────────

No changes. Your infrastructure matches the configuration.
joshmyers avatar
joshmyers

lol

loren avatar

oof, the tags_all stuff is implemented so poorly. very annoying
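
(Background: tags_all is the read-only union of a resource’s own tags with any provider-level default_tags; the attribute was added to all taggable resources around AWS provider v3.38, which is why upgraded states suddenly show + tags_all. A sketch of the provider block that feeds it:)

provider "aws" {
  region = "eu-west-1"

  # merged into every taggable resource and surfaced via its tags_all attribute
  default_tags {
    tags = {
      Namespace = "userservices" # illustrative keys
      Team      = "userservices"
    }
  }
}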

DaniC (he/him) avatar
DaniC (he/him)

just bumped into https://github.com/boltops-tools/terraspace and thought i should share it in case folks are not aware of it.

Josh B. avatar
Josh B.

ruby

Sebastian Macarescu avatar
Sebastian Macarescu

Hi team. I’ve opened a bug here https://github.com/cloudposse/terraform-provider-awsutils/issues/26 For me the awsutils_default_vpc_deletion resource deletes an unknown vpc and then reports that no default vpc was found.

awsutils_default_vpc_deletion does nothing · Issue #26 · cloudposse/terraform-provider-awsutils

Describe the Bug I'm trying to delete the default VPC using awsutils_default_vpc_deletion but nothing happens on apply. After apply it said it removed the default vpc with id vpc-caf666b7 but m…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use it


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

matt avatar

The AWS v4 provider allows you to delete the default VPC…

matt avatar
Implement Full Resource Lifecycle for Default Resources
Default resources (e.g. aws_default_vpc, aws_default_subnet) previously could only be read and updated. However, recent service changes now enable users to create and delete these resources within the provider. AWS has added corresponding API methods that allow practitioners to implement the full CRUD lifecycle.

In order to avoid breaking changes to default resources, you must upgrade to use create and delete functionality via Terraform, with the caveat that only one default VPC can exist per region and only one default subnet can exist per availability zone.
matt avatar

Obviously haven’t tested it, but that’s what the blog says
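
(Untested sketch, per matt’s caveat: under the v4 provider, adopting the default VPC and letting terraform destroy actually delete it should look roughly like this; force_destroy is taken from the v4 upgrade notes:)

resource "aws_default_vpc" "this" {
  # without force_destroy, destroy only removes the resource from state
  force_destroy = true
}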

Chris Picht avatar
Chris Picht

It used to be best practice to not delete the default VPC. Is that no longer true?

Chris Wahl avatar
Chris Wahl

It is still a best practice to remove the default VPC. In fact, AWS Security Hub will flag default VPCs in the standard compliance results as a threat to remediate.

Sebastian Macarescu avatar
Sebastian Macarescu

Anybody actually used that resource?

Release notes from terraform avatar
Release notes from terraform
09:53:16 PM

v1.1.0 1.1.0 (December 08, 2021) If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to the latest version as soon as possible. Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure to construct the apply-time graph can cause Terraform to incorrectly report success and save an empty state, effectively “forgetting” all existing infrastructure. Although configurations that already worked on previous releases should not encounter this problem, it’s possible that incorrect future…

Release v1.1.0 · hashicorp/terraform

1.1.0 (December 08, 2021) If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to the latest version as soon as possible. Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure …

Release notes from terraform avatar
Release notes from terraform
09:53:16 PM

v1.1.1 1.1.1 (December 15, 2021) If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to the latest version as soon as possible. Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure to construct the apply-time graph can cause Terraform to incorrectly report success and save an empty state, effectively “forgetting” all existing infrastructure. Although configurations that already worked on previous releases should not encounter this problem, it’s possible that incorrect future…

Release v1.1.1 · hashicorp/terraform

1.1.1 (December 15, 2021) If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to the latest version as soon as possible. Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure …

Alex Jurkiewicz avatar
Alex Jurkiewicz

hm, v1.1.2 already came out. I guess they edited the release notes to mention the major bug

loren avatar

yeah, someone just commented that in hangops (i assume they’re a Hashicorp employee)…
Hey all, those are just release notes updates that add some scare text to the top, to try to keep folks off those versions - nothing new to see here.

Brad McCoy avatar
Brad McCoy

Hey folks, we have a DevSecOps webinar coming up this week about Terraform and how you can use it safely in pipelines https://www.meetup.com/sydney-hashicorp-user-group/events/283063949/

Setting up Terraform guardrails with OPA and TFSEC | Meetup

Fri, Jan 7, 12:00 PM AEDT: Deploying Infrastructure with Infrastructure as code is great, but are you protected in case someone accidentally commits the wrong thing? In this webinar, Brad McCoy and B

Moritz avatar

is there any usable, open-source baseline ruleset yet?


Brad McCoy avatar
Brad McCoy

Hey @Moritz we have set up some common rego files that we use for gcp, azure, and aws. recording is here where we talk about it https://www.youtube.com/watch?v=V12785HySYM

2022-01-05

Almondovar avatar
Almondovar

Hi colleagues, i need to make AppStream work via terraform. do i understand correctly that the only way to do that is to use a 3rd-party provider that is not the official aws one?

appstream = {
  source  = "arnvid/appstream"
  version = "2.0.0"
}
ismail yenigul avatar
ismail yenigul
Release v3.67.0 · hashicorp/terraform-provider-aws

FEATURES: New Data Source: aws_ec2_instance_types (#21850) New Data Source: aws_imagebuilder_image_recipes (#21814) New Resource: aws_account_alternate_contact (#21789) New Resource: aws_appstream…

terraform-provider-aws/ROADMAP.md at main · hashicorp/terraform-provider-aws

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

ismail yenigul avatar
ismail yenigul
Feature Request: AppStream support · Issue #6508 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Almondovar avatar
Almondovar

thanks but i can’t make it work even if i use the aws 3.70 provider - any idea what is wrong?

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "3.70.0"...
- Finding latest version of hashicorp/appstream...
- Using previously-installed hashicorp/aws v3.70.0
╷
│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/appstream: provider registry registry.terraform.io
│ does not have a provider named registry.terraform.io/hashicorp/appstream
│ 
│ All modules should specify their required_providers so that external consumers will get the correct providers when using a
│ module. To see which modules are currently depending on hashicorp/appstream, run the following command:
│     terraform providers
╵
> terraform providers    

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] 3.70.0
├── provider[registry.terraform.io/hashicorp/appstream]
├── module.iam
│   └── provider[registry.terraform.io/hashicorp/aws]
└── module.tags

Providers required by state:

    provider[registry.terraform.io/hashicorp/aws]
Almondovar avatar
Almondovar
terraform {
  required_version = ">= 1.0.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.70.0"
    }
  }
}
Almondovar avatar
Almondovar

this is our config @ismail yenigul if you have any idea please

ismail yenigul avatar
ismail yenigul

clean your .terraform caches etc.

$ cat main.tf 
terraform {
  required_version = ">= 1.0.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.70.0"
    }
  }
}

$  terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "3.70.0"...
- Using hashicorp/aws v3.70.0 from the shared cache directory

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

try

$ rm -rf ~/.terraform.d
$ rm -rf .terraform
ismail yenigul avatar
ismail yenigul

I haven’t tested it yet btw. I am using cloudformation over terraform for appstream

ismail yenigul avatar
ismail yenigul
cat main.tf 
terraform {
  required_version = ">= 1.0.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.70.0"
    }
  }
}

resource "aws_appstream_fleet" "test" {
  name          = "test"
  image_name    = "Amazon-AppStream2-Sample-Image-02-04-2019"
  instance_type = "stream.standard.small"
  compute_capacity {
    desired_instances = 1
  }
}
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_appstream_fleet.test will be created
  + resource "aws_appstream_fleet" "test" {
      + arn                                = (known after apply)
      + created_time                       = (known after apply)
      + description                        = (known after apply)
      + disconnect_timeout_in_seconds      = (known after apply)
      + display_name                       = (known after apply)
      + enable_de
Almondovar avatar
Almondovar

i tried the delete commands but still get the same error

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/appstream]

something in the config is asking for the registry.terraform.io/hashicorp/appstream provider but i can’t figure out what it is…

ismail yenigul avatar
ismail yenigul
 Finding latest version of hashicorp/appstream..

can you check all your providers. this should not be there

ismail yenigul avatar
ismail yenigul

do you have something like following?

provider "appstream" {
  # Configuration options
}
Almondovar avatar
Almondovar

not really

ismail yenigul avatar
ismail yenigul

can you search for provider string in the codes

Almondovar avatar
Almondovar

sure, but only the screenshots that i provided before exist

ismail yenigul avatar
ismail yenigul

remove the .terraform.lock.hcl file and try again, or create a new directory and copy only the .tf files there

Almondovar avatar
Almondovar

once i delete the resource “appstream_stack”, the error disappears…

ismail yenigul avatar
ismail yenigul

it should be resource "aws_appstream_stack" "test"

ismail yenigul avatar
ismail yenigul

for the official aws provider

ismail yenigul avatar
ismail yenigul
New resource for AppStream Stack Fleet Association by coderGo93 · Pull Request #21484 · hashicorp/terraform-provider-aws

Added a new resource, doc and tests for AppStream Stack Fleet Association called aws_appstream_stack_fleet_association Community Note Please vote on this pull request by adding a reaction to the…

Almondovar avatar
Almondovar

indeed :tada: mate thank you so much!!! just a small missing aws_ prefix created so much confusion!!!

David avatar
resource "aws_acm_certificate" "this" {
  for_each = toset(var.vpn_certificate_urls)

  domain_name               = each.value
  subject_alternative_names = ["*.${each.value}"]
  certificate_authority_arn = var.certificate_authority_arn

  tags = {
    Name = each.value
  }

  options {
    certificate_transparency_logging_preference = "ENABLED"
  }

  lifecycle {
    create_before_destroy = true
  }

  provider = aws.so
}

Any tips on this? Error message on certificate in console: ‘The signing certificate for the CA you specified in the request has expired.’ https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate According to the docs above, you can create a cert signed by a private CA by passing the CA arn

Creating a private CA issued certificate
domain_name - (Required) A domain name for which the certificate should be issued
certificate_authority_arn - (Required) ARN of an ACM PCA
subject_alternative_names - (Optional) Set of domains that should be SANs in the issued certificate. To remove all elements of a previously configured list, set this value equal to an empty list ([]) or use the terraform taint command to trigger recreation.
Aziz avatar

Hello Guys - I used the bastion incubator helm chart from here, deployed it on K8s, and tried to connect to it using the commands below, but I always get permission denied.

1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=bastion-bastion" -o jsonpath="{.items[0].metadata.name}")
  echo "Run 'ssh -p 2222 127.0.0.1' to use your application"
  kubectl port-forward $POD_NAME 2222:22
➜ ssh -p 2211 azizzoaib786@127.0.0.1
The authenticity of host '[127.0.0.1]:2211 ([127.0.0.1]:2211)' can't be established.
RSA key fingerprint is SHA256:S44NDDfev4x8NCJHMVJgYXrhx4OS/SoYGer5TMGUgqg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2211' (RSA) to the list of known hosts.
azizzoaib786@127.0.0.1: Permission denied (publickey).

• I checked the Github API Key is correct
• I checked the SSH key in Github is there as well
• I also checked the users created in the github-authorized-keys & bastion containers, from the Github team configured in the values.yaml file

Is there anything missing from my end? Can you point me somewhere to fix the issue?

charts/incubator/bastion at master · cloudposse/charts

The “Cloud Posse” Distribution of Kubernetes Applications - charts/incubator/bastion at master · cloudposse/charts

Aziz avatar

This is what I get in logs.

Connection closed by authenticating user azizzoaib786 127.0.0.1 port 59712 [preauth]
Connection closed by authenticating user azizzoaib786 127.0.0.1 port 59730 [preauth]

Aziz avatar
bastion does not seems to work and returns permission denied. · Issue #269 · cloudposse/charts

I used the bastion incubator helm chart from here and deployed on K8s and tried to connect to it using below commands but getting permission denied always. 1. Get the application URL by running the…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re not really using this anymore

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve moved to using SSM agent with a bastion instance

Aziz avatar

So that means it’s a deprecated project?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

pending customer sponsorship, we’re not prioritizing investment into it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - cloudposse/terraform-aws-ec2-bastion-server: Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication.

Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication. - GitHub - cloudposse/terraform-aw…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is actively maintained

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem with the k8s bastion is if your k8s is truly hosed, even the bastion is unavailable.

2022-01-06

Frank avatar

Does anyone know how I can do this using Terraform? I have deployed a Lambda function, created a CF distribution, and associated the function with the origin-request event.. But right now it’s giving me a 503 because the “function is invalid or doesn’t have the required permissions”.

omry avatar

This is our setup for the Lambda:

data "archive_file" "edge-function" {
  type = "zip"
  output_path = "function.zip"
  source_file = "function.js"
}

data "aws_iam_policy_document" "lambda-role-policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type = "Service"
      identifiers = [
        "lambda.amazonaws.com",
        "edgelambda.amazonaws.com"
      ]
    }
  }
}

resource "aws_iam_role" "function-role" {
  name = "lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda-role-policy.json
}

resource "aws_iam_role_policy_attachment" "function-role-policy" {
  role = aws_iam_role.function-role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_lambda_function" "function" {
  function_name = "headers-function"
  filename = data.archive_file.edge-function.output_path
  source_code_hash = data.archive_file.edge-function.output_base64sha256
  role = aws_iam_role.function-role.arn
  runtime = "nodejs14.x"
  handler = "function.handler"
  memory_size = 128
  timeout = 3
  publish = true
}

This is the CF assignment of the Lambda:

lambda_function_association {
      event_type = "origin-response"
      lambda_arn = aws_lambda_function.function.qualified_arn
      include_body = false
    } 
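
(One gotcha that isn’t visible in the snippet: CloudFront only accepts Lambda@Edge functions created in us-east-1, so when the rest of the stack lives in another region the function needs a provider alias. A sketch reusing the resources above; the alias name is arbitrary:)

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1" # Lambda@Edge functions must live in us-east-1
}

resource "aws_lambda_function" "function" {
  provider         = aws.us_east_1
  function_name    = "headers-function"
  filename         = data.archive_file.edge-function.output_path
  source_code_hash = data.archive_file.edge-function.output_base64sha256
  role             = aws_iam_role.function-role.arn
  runtime          = "nodejs14.x"
  handler          = "function.handler"
  publish          = true # CloudFront requires a published version (the qualified_arn)
}
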
omry avatar

I bet there’s a module for that

Frank avatar

Thanks @omry. I am doing - more or less - the same yet the function doesn’t seem to work.. Just to check: do you see CF under Triggers in your Lambda function?

Frank avatar

But perhaps I’m looking in the wrong place and is such a trigger not needed. Haven’t needed Lambda@Edge until now

omry avatar

Why do you need it?

Frank avatar

One of our dev teams is working on a new application and they are using NextJS for that.

This uses some kind of proxy running within Lambda@Edge to determine whether to serve static content (from an S3 bucket) or pass the request along to the API gateway and the subsequent Lambdas attached to it that handle that particular request

omry avatar

According to the error it looks like the lambda is calling some other AWS services but might not have permission to access them

Frank avatar

It appears that it is not published as a Lambda@Edge function, just as a plain Lambda. That’s why CF can’t find it

Frank avatar

But in your L@E function, do you see anything at the Triggers page?

omry avatar

Not sure, but it’s presented in CF

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Let me take a look at my TF code, as I have an origin-request L@E function that handles the CSP headers for both my static site and API Gateway/Lambda function

OliverS avatar
OliverS

Does anyone use, or has anyone used Ansible enough to shed some light on when (what types of tasks) Ansible would definitely be better than Terraform? Context: cloud, not on-prem.

Basically wondering if I should invest some time learning Ansible. It’s yet another DSL, architecture, and system to manage (master, etc.), so to justify it there should be a significantly sized set of tasks that are significantly easier to do with it than with Terraform.

steenhoven avatar
steenhoven

For me Ansible is super usable for creating machine images (together with packer), but other than that I never fall back on Ansible anymore.

Ralf Pieper avatar
Ralf Pieper

I use Terraform or Ansible, but try to avoid using both. I lean towards using Terraform myself, not very good at writing YAML.

loren avatar

os-level system management is all i do with ansible, e.g. files, configs, packages, services, etc. nothing specifically cloud

steenhoven avatar
steenhoven

That ^^

loren avatar

which has its place. i think both are worth learning, but i wouldn’t focus on the cloud portions of ansible

OliverS avatar
OliverS

I get the sense that Ansible is strongest in configuration management. One concern though, when it’s applied to an OS, is updates: it does not reset the state of the OS, e.g. if you did something manually to the OS, that change will remain.

OliverS avatar
OliverS

But I could see creating a fresh OS image (like an AMI) by creating a temp machine via Ansible, configuring it, capturing it as an image, and tearing it down. Then when you need to update a system, you treat it as stateless and use terraform to apply the new AMI (i.e. there is no post-startup saved state on there except in data volumes that can be remounted after changing to the new AMI).

IK avatar

Ansible is a configuration management tool, mutable and stateless by design. Definitely worth learning. Re your concern on OS updates; Ansible will just run the playbook to check for updates and install them as required.. if someone manually installs updates after this, no problem; the next time Ansible runs, those updates won’t be available to install so it’ll come back with nothing to do

IK avatar

In an ideal world, you should be using Terraform and Ansible together. For example, you’d use Terraform to deploy your ec2 instance and Ansible to configure it after the fact (change hostname, make local changes to the host, join domain etc.). Whilst you can use things like SSM documents in Terraform to achieve some of those things (e.g. joining the domain), they become immutable, and when you do things like a Terraform destroy, Terraform won’t disjoin the machine from the domain so you’ll end up with orphaned objects in AD. As the saying goes; if a hammer is your only tool, every problem begins to look like a nail.. hope this helps

OliverS avatar
OliverS

Thanks @IK exactly what I was looking for, in particular your example about using ssm via terraform leading to orphaned objects in AD.

If you (or anyone else!) has any other examples where there is a clear advantage of using ansible over the equivalent in terraform, it would help me make a case for it.

loren avatar

if your requirement is to build images, i would use packer+ansible. packer is purpose built for that, and addresses a ton of use cases and scenarios around interacting with the cloud provider. and packer has an ansible provisioner plugin to make it easy to use them together
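
(A minimal sketch of the packer+ansible combination; region, base AMI, and playbook are placeholders:)

packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
    ansible = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/ansible"
    }
  }
}

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "base" {
  region        = "us-east-1"             # placeholder
  source_ami    = "ami-0123456789abcdef0" # placeholder base image
  instance_type = "t3.micro"
  ssh_username  = "ec2-user"
  ami_name      = "base-ansible-${local.timestamp}"
}

build {
  sources = ["source.amazon-ebs.base"]

  # ansible runs over ssh against the temporary build instance, then packer
  # snapshots the result into a new AMI and tears the instance down
  provisioner "ansible" {
    playbook_file = "./playbook.yml" # placeholder
  }
}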

IK avatar

i think the advantages are pretty clear.. i mean ultimately, terraform is not a configuration management tool and when you try to make it one, it quickly falls over (as per the SSM example i gave).. the way i see it, terraform is a day-0 tool (provisioning) and ansible is a day1+ tool (configuration management)

loren avatar

terraform is fantastic at continuing to manage the things it provisions. defining “configuration management” only as what happens inside an operating system is a very very limited and restricted definition of the term!

OliverS avatar
OliverS

@IK Yeah but it’s the concrete examples that are hard to come by. I’ve seen the exact same statements about day 0 and 1 on the web, mutabilty, configuration management vs provisioning, but it’s the substantive examples that give weight to these statements

IK avatar

@loren I agree.. would be interested to hear your thoughts on what more configuration management is

loren avatar

oh i just feel like most everything has configuration. the tags applied to a vpc are configuration. the subnets associated to a load balancer are configuration. i use terraform to manage and update that configuration, and if somehow it was changed i often use terraform to tell me and to set it back

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Release notes from terraform avatar
Release notes from terraform
10:03:15 PM

v1.1.3 1.1.3 (January 06, 2022) BUG FIXES: terraform init: Will now remove from the dependency lock file entries for providers not used in the current configuration. Previously it would leave formerly-used providers behind in the lock file, leading to “missing or corrupted provider plugins” errors when other commands verified the consistency of the installed plugins against the locked plugins. (<a…

Dependency Lock File (.terraform.lock.hcl) - Configuration Language | Terraform by HashiCorp

Terraform uses the dependency lock file .terraform.lock.hcl to track and select provider versions. Learn about dependency installation and lock file changes.

Jim Park avatar
Jim Park

I recently took a stand that our teams should avoid using terraform to configure Datadog Monitors and Dashboards (despite having written a module to configure the Datadog - AWS integration).

I’ll say more about why in the thread, but the relevant TL;DR is that part of the workflow for configuring Datadog is to contextualize the monitors and dashboards with historical data. Doing so via a manifest doesn’t make sense.

What do you think? Do you agree? Have you seen examples where Datadog via code is super useful?

Jim Park avatar
Jim Park

Here’s a snippet of my argument:
I do not believe that Datadog Monitors (and other configurations) are suited to be configured from API.

In more detail, I believe that despite the advantage of being able to use Terraform’s for_each or module functionality to templatize certain Datadog resource patterns, doing so introduces an interweaving of dependencies that raises the question of updating the larger pattern every time an incremental improvement is made. I believe this linking to templates reduces the ability of engineers to tune the Datadog Monitors and Dashboards to their needs, especially as Slideshare’s infrastructure is being heavily refactored. I do not believe that this advantage outweighs the need for monitoring to be continuously updated with the latest context about operational concerns. The toil associated with adding notes, adjusting thresholds, etc. should be minimized with the utmost priority.

Using terraform directly for configuring Datadog severely curtails the usability of Datadog, since part of the workflow for configuring Datadog is to contextualize the monitors and dashboards with historical data. Doing so via a manifest would require actually using the web interface to produce the configuration, then exporting it to JSON, then translating that to HCL.

Having two places where changes may be introduced creates a sort of dual-writer problem, would require periodic reconciliation, and would result in confusion and non-beneficial work.

I’d rather not introduce a release process to a SaaS service, especially one that benefits greatly from using the web interface.

RB avatar

I disagree. We’ve been configuring datadog monitors using yaml and terraform and it’s wonderful. We can recreate standard monitors for specific services and we can create generic monitors with standardized messages using code.

if we used clickops, it would be a lot more challenging for consistency

Andy Miguel avatar
Andy Miguel

@Nathaniel Selzer interesting thread re: our recent internal conversations

RB avatar

this might help make it easier to use.

so we don’t have to mess around with custom hcl per monitor, we have a generic module that reads it from yaml. here’s an example yaml catalog.

https://github.com/cloudposse/terraform-datadog-platform/blob/master/catalog/monitors/amq.yaml

terraform-datadog-platform/amq.yaml at master · cloudposse/terraform-datadog-platform

Terraform module to configure and provision Datadog monitors, custom RBAC roles with permissions, Datadog synthetic tests, Datadog child organizations, and other Datadog resources from a YAML confi…

RB avatar

we also allow deep merging of this yaml to setup default monitor inputs.
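
(The shape of that pattern, reduced to a sketch; the yaml keys here are illustrative, not the module’s actual schema:)

locals {
  # monitor name => definition, straight from the yaml catalog
  monitors = yamldecode(file("${path.module}/catalog/monitors/amq.yaml"))
}

resource "datadog_monitor" "catalog" {
  for_each = local.monitors

  name    = each.value.name
  type    = each.value.type
  query   = each.value.query
  message = each.value.message
  tags    = try(each.value.tags, [])
}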

Eric Berg avatar
Eric Berg

We also use TF to deploy DD monitors. We deploy the same monitors to multiple similar environments (client A prod, client B prod, client A UAT, etc.), across multiple services. We do this rather than use multi-monitors, because we need different thresholds in each environment/service and that’s not supported.

I broke it out so each monitor is in its own yaml file, and once you create the monitor, we configure environments.yaml so the monitor is deployed either to selected environments or to all of them. In my last job, we used Python and JSON to persist monitors, which was awful for multi-line messages.

This was my initiation to flatten()-ing environments, services, and monitor defs into a data structure suitable for for_each processing.

We have several kinds of monitors that monitor different types of things at different levels. Currently, our TF only supports deployment of monitors at the client environment/service level. We have plans to add one-offs and enhance the inclusion/exclusion functionality.
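
(An illustrative reconstruction of that flattening, with hypothetical variable shapes: one object per environment/monitor pair so each pair can carry its own threshold:)

variable "environments" {
  type = map(object({
    thresholds = map(number) # per-env overrides, keyed by monitor name
  }))
}

variable "monitors" {
  type = map(object({
    type              = string
    message           = string
    query_format      = string # hypothetical: query template with a %v threshold slot
    default_threshold = number
  }))
}

locals {
  env_monitors = flatten([
    for env_name, env in var.environments : [
      for mon_name, mon in var.monitors : {
        key       = "${env_name}.${mon_name}"
        monitor   = mon
        threshold = try(env.thresholds[mon_name], mon.default_threshold)
      }
    ]
  ])
}

resource "datadog_monitor" "per_env" {
  for_each = { for m in local.env_monitors : m.key => m }

  name    = each.key
  type    = each.value.monitor.type
  message = each.value.monitor.message
  query   = format(each.value.monitor.query_format, each.value.threshold)
}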

Jim Park avatar
Jim Park

I could see the use case if Datadog Monitors are being deployed consistently across multiple client environments. This essentially turns Datadog configuration into part of the product. Makes sense to version control then, since delivering updates to the monitoring suite becomes more important.

To elaborate, at my organization there are only a couple of environments, and I found that some engineers were using terraform to codify Datadog Monitors and Dashboards to be applied once. This was hampering their ability to make changes without invoking a terraform release process, so we’re moving away from that.

@RB, I’m curious, what kind of environment(s) are you working on that work well with the terraform-datadog pattern?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s all tradeoffs. Picking which battles you want to fight based on what resources you have available. e.g. the same argument can be made for so many things. It’s so easy, after all, to clickops an EKS cluster and deploy mongodb with a helm chart. Maybe < 30 minutes of work. Now “operationalizing” it takes 10x more effort.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it’s okay if teams want to click ops some datadog monitors to get unblocked, but as a convention, I don’t like it. Maybe for a small company, with 1-3 devs with small scale infra.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We just revamped our entire strategy for how to handle datadog/opsgenie. Major refactor on the opsgenie side. On the datadog side, we did the refactor last year some time, moving to the catalog convention.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When we develop a new monitor, we still develop it in the UI, but then codify it via the YAML as a datadog query.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Does this UX suck? ya, it’s a tradeoff, but one that we can always walk back. We know how to deploy any previous version. We have a smooth CD process with terraform via spacelift.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the reality is very few companies I think practice good gitops when it comes to monitoring and incident management. when going through our rewrite, i reflected on this. i think it’s because it’s so hard to get it right with the right level of abstraction in the right places. i think now we finally have the patterns down to solve it (literally we’re rolling it out this week). check back with me in a couple months to see how it’s going.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


To elaborate, at my organization there are only a couple of environments, and I found that some engineers were using terraform to codify Datadog Monitors and Dashboards to be applied once. This was hampering their ability to make changes without invoking a terraform release process, so we’re moving away from that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is the terraform release process cumbersome?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the other thing to think about with monitoring is how easy it is to screw it up. someone can go change the thresholds and make an honest mistake of the wrong unit (E.g. 1s vs 1000ms)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

not every engineer will have the same level of experience. the PR process helps level the playing field making it possible for more people to contribute.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

with the spacelift drift detection, if changes are made in the datadog UI, that’s cool, but we’ll get a drift detected in our terraform plans within 24 hours. Then we can go remediate it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lastly, we’ve probably spent 2000 hours (20+ sprints) on datadog/opsgenie work in terraform just in the past year. so is it easy to get it right? nope. but is it possible? i believe it is, which is why we do what we do.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can certainly manage some subset of your monitors with clickops. don’t get in the way of developers, if you don’t have the resources in place to handle it in better ways. let them develop the monitors, burn them in and mature. then document them in code. tag monitors managed by terraform as such, so you know which are clickops and which are automated.

IK avatar

@Erik Osterman (Cloud Posse) It's so easy, after all, to clickops an EKS cluster and deploy mongodb with a helm chart. Maybe < 30 minutes of work. Now "operationalizing" it takes 10x more effort. can you expand on this please? Particularly the 10x more effort part

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Operationalizing it is about taking it over the line into a production-ready configuration.

  1. DR, Backups, restores
  2. Environment consistency (e.g. via GitOps)
  3. Logging
  4. Monitoring & incident management
  5. Upgrading automation of mongo (not all helm upgrades are a cakewalk; if the charts change it can lead to destruction of resources)
  6. Upgrading the EKS cluster
  7. Detecting environment drift, remediation
  8. Security Hardening
  9. Scaling, or better yet autoscaling (storage and compute), for mongodb
  10. Managing mongodb collections, indexes, etc.

…just some things that come to mind
IK avatar

Great points, thanks for that. I totally agree.. i’m actually of two minds about deploying a simple ECS cluster with TF vs clickops.. will take me about 15mins via clickops but likely longer via TF.. was wondering if the effort using TF was worthwhile.. after thinking about some of the points you raise, i can see why i’d spend the extra time doing it in TF

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, it sort of hurts when you think about it. You’re right, it takes just 15 minutes with clickops. And it seems stupid sometimes that we spend all this time on automating it. With IAC, there’s this dip. You are first much less productive, until you become tremendously efficient. It’s not until you have the processes implemented that you start achieving greater efficiencies. So if you’re building a POC that will be thrown away, think twice about the IAC (if you don’t already have it). And if it’s inevitably going to reach production, then push back and do it the right way. Also, as consultants, what we see all the time is that something was done quick and dirty, but now we need to come in and fix it, but it’s a lot more gnarly because nothing was documented. There’s no history to understand why something was set up the way it was setup.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

let’s discuss on #office-hours

Eric Berg avatar
Eric Berg

I just got this error when trying to plan the complete example of the Spacelift cloud-infrastructure-automation module. What’s the preferred way to report this?

│ Error: Plugin did not respond
│
│   with module.example.module.yaml_stack_config.data.utils_spacelift_stack_config.spacelift_stacks,
│   on .terraform/modules/example.yaml_stack_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│    1: data "utils_spacelift_stack_config" "spacelift_stacks" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain
│ more details.
╵

Stack trace from the terraform-provider-utils_v0.17.10 plugin:

panic: interface conversion: interface {} is nil, not map[interface {}]interface {}

goroutine 55 [running]:
github.com/cloudposse/atmos/pkg/stack.ProcessConfig(0xc000120dd0, 0x6, 0xc00011e348, 0x17, 0xc000616690, 0x100, 0x0, 0x0, 0xc00034bcf0, 0xc00034bd20, ...)
        github.com/cloudposse/[email protected]/pkg/stack/stack_processor.go:276 +0x42ad
github.com/cloudposse/atmos/pkg/stack.ProcessYAMLConfigFiles.func1(0xc000120dc0, 0x0, 0x0, 0xc0005130e0, 0x1040100, 0xc0005130d0, 0x1, 0x1, 0xc00050d0e0, 0x0, ...)
        github.com/cloudposse/[email protected]/pkg/stack/stack_processor.go:72 +0x3f9
created by github.com/cloudposse/atmos/pkg/stack.ProcessYAMLConfigFiles
        github.com/cloudposse/[email protected]/pkg/stack/stack_processor.go:39 +0x1a5

Error: The terraform-provider-utils_v0.17.10 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please open an issue

Eric Berg avatar
Eric Berg

Thanks, @Andriy Knysh (Cloud Posse). Already started the ticket, but the message on that page suggests reaching out here, so…

jonjitsu avatar
jonjitsu

Anyone know of a working json2hcl2 tool? I’ve tried kvx/json2hcl but it’s hcl1.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, terraform supports pure json

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so no need to convert anything

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we exploit this fact to generate “HCL” (in json) code, but prefer to leave it in JSON so it’s clear it’s machine generated.

Chris Fowles avatar
Chris Fowles

jsondecode() ?

loren avatar

Heh, pretty much, just don’t. If it’s in json, and you want tf to process it, just save the file extension as .tf.json…
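
(For example, a machine-generated main.tf.json; the bucket resource is purely illustrative of Terraform’s JSON syntax:)

{
  "resource": {
    "aws_s3_bucket": {
      "generated": {
        "bucket": "machine-generated-example"
      }
    }
  }
}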

2022-01-07

Almondovar avatar
Almondovar

hi colleagues, i am working on terraforming AppStream on aws, but i can’t find anywhere in the terraform code the lines for the appstream image registry - am i missing something?

mikesew avatar
mikesew

I have a terraform workspace upgrade question. Using self-hosted Terraform Enterprise. My workspace is tied to Github VCS (no jenkins or other CI), and set to remote execution, not local. That means the workspace will use whatever tf version is config’d (ie. 0.12.31). I’m trying to upgrade to 0.13. However, I’m trying to test whether the plan using the new (0.13) binary runs clean or not. Am I doing this right?

tfenv use 0.12.31
terraform plan
  # it makes the workspace run the plan.. using version 0.12.31
tfenv list-remote | grep 0.13
  0.13.6
tfenv install 0.13.6
tfenv use 0.13.6
terraform init
terraform 0.13upgrade
terraform plan
  # it STILL runs the workspace's terraform version, 0.12.31!! NOT my local 0.13 binary.

^^ terraform plans whatever version is set by the workspace. Do I have any other options to test the upgrade locally?

Fizz avatar

If you want to test it locally, you’ll need to set the workspace to local, or change the backend to something else and use an alternate state file.
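
(A sketch of the alternate-backend option, assuming a throwaway copy of state; override files (*_override.tf) take precedence over the backend declared in the main config:)

# backend_override.tf (hypothetical file name)
terraform {
  backend "local" {
    path = "upgrade-test.tfstate"
  }
}

# then re-init against the new backend and plan with the 0.13 binary:
#   terraform init -reconfigure
#   terraform plan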

Fizz avatar

In remote, tf plans on tf cloud, and streams the output to your local machine

mikesew avatar
mikesew

Thanks, confirms my suspicions. If I change execution mode to “local”, it breaks my CI/CD since I now have to supply variables somehow. So I’ve come up with:

  1. Ensure tf apply runs clean
  2. Create feature branch
  3. Tfenv install 0.13.7
  4. Tfenv use 0.13.7
  5. Terraform 0.13upgrade
  6. Fix syntax
  7. In TFE UI, make sure auto apply = OFF. Set version to 0.13.7 and branch to our feature branch
  8. Run plan. Do NOT APPLY.
  9. If any syntax errors , fix in feature branch, git push.
  10. Run plan in TFE UI again until it plans clean .
  11. After this, point of no return.
  12. Change workspace back to original (master) branch
  13. Do a PR to master with new 0.13 syntax
  14. Run tfe apply, should run clean. Fix errors and iterate until done

.. am I off track here ?

Fizz avatar

Seems reasonable.

Florin Andrei avatar
Florin Andrei

Using git::git@github.com:cloudposse/terraform-aws-msk-apache-kafka-cluster.git?ref=tags/0.6.0

I need to enable monitoring on Kafka (AWS MSK). While running terragrunt plan I noticed the server_properties from the aws_msk_configuration resource was going to be deleted. I will try to figure out whether the properties were added later manually, or whether those are some defaults by the module or by AWS itself, etc.

But if anyone knows the default behavior of this module, and the best practices for the server properties, that would be useful to know.

Question 2 (related): Let’s say I decide it’s best to freeze server_properties in our Gruntwork templates. AWS MSK appears to store that variable as a plain text file, one key/value per line, no spaces, which I can retrieve with AWS CLI and base64 --decode it:

auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
...

I’ve tried to pass all that as a Terraform map into inputs / properties for the Kafka cluster module, but it gets deleted / rewritten with spaces by terragrunt plan:

auto.create.topics.enable = true
default.replication.factor = 3
min.insync.replicas = 2
...

I’ve tried to include it as a template file, with the contents of the template just literally the properties file from AWS:

properties = templatefile("server.properties.tpl", { })

But then I get this error:

Error: Extra characters after expression

  on <value for var.properties> line 1:
  (source code not available)

An expression was successfully parsed, but extra characters were found after
it.

What is a good way to force terragrunt to inject that variable into AWS exactly as I want it?

RB avatar
GitHub - cloudposse/terraform-aws-msk-apache-kafka-cluster: Terraform module to provision AWS MSK

Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.

RB avatar

have you tried using that? @Florin Andrei

Florin Andrei avatar
Florin Andrei

I ended up passing the parameters I need as a map via the properties input, and that seems to work. AWS Support told me I only need to declare the non-default values there.
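
(For anyone searching later, the map form would look roughly like this, listing only the non-default values per AWS Support:)

properties = {
  "auto.create.topics.enable"  = "true"
  "default.replication.factor" = "3"
  "min.insync.replicas"        = "2"
}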

2022-01-09

Ryan avatar

Most organizations have at least 1 of these infrastructure problems. How are you solving them?

- Broken Modules Tearing Down Your Configurations
- Drifting Away From What You Had Defined
- Lack of Security & Compliance
- Troublesome Collaboration
- Budgets Out of Hand

RB avatar

those are a lot of issues

RB avatar

the first 2 are solved with terraform deploy services like spacelift and atlantis

RB avatar

3rd is solved with appropriate usage of compliance and aws services like guardduty, shield, inspector, config, etc

RB avatar

4th is solved with git

RB avatar

5th, if related to terraform, can be solved by rightsizing and using the tool infracost

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
DaniC (he/him) avatar
DaniC (he/him)

i gotta watch the recording, interesting topic!

Ben Dubuisson avatar
Ben Dubuisson

Hi! Using https://github.com/cloudposse/terraform-aws-sso/. It creates iam roles through permission sets and I wonder if anybody has figured out how to get access to the IAM role name (want to save it as an SSM parameter).

It seems to follow the pattern: AWSReservedSSO_{permissionSetName}_{someRandomHash}

GitHub - cloudposse/terraform-aws-sso: Terraform module to configure AWS Single Sign-On (SSO)

Terraform module to configure AWS Single Sign-On (SSO) - GitHub - cloudposse/terraform-aws-sso: Terraform module to configure AWS Single Sign-On (SSO)

Ben Dubuisson avatar
Ben Dubuisson
Using service-linked roles for AWS SSO - AWS Single Sign-On

Learn how the service-linked role for AWS SSO is used to access resources in your AWS account.

loren avatar

far as i know, the role name is not returned by the permission set api calls…

loren avatar

i think you could use the iam_roles data source, with a regex pattern specific enough to always match just the one role… probably need to use depends_on to force the dependency. and also, it can take a bit of time for the permission set to populate the role to the account, so might need to use the time provider to delay the call a bit…
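
(A sketch of that data-source lookup; the permission set name is hypothetical, and the path prefix is where AWS SSO creates its roles. The propagation-delay caveat above still applies:)

data "aws_iam_roles" "sso_admin" {
  name_regex  = "AWSReservedSSO_AdministratorAccess_.*" # hypothetical permission set
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}

output "sso_admin_role_name" {
  # expects exactly one match; tighten the regex if multiple permission sets collide
  value = one(data.aws_iam_roles.sso_admin.names)
}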

loren avatar

instead of depends_on, you could parse the name from the arn attribute of the permission set. that would be somewhat better, in terms of letting terraform manage the dependency graph

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, I think @loren’s suggestion is correct. When we wrote the module, the IAM data source didn’t support a regex pattern, but that is a nice addition.

loren avatar

The trouble is the account where the permission set is created is not necessarily the account(s) where the role is created, which means you likely need a second provider alias to perform the lookup in the correct account

2022-01-10

Jens Lauterbach avatar
Jens Lauterbach

Hi :wave:,

I started looking at the Terraform AWS EC2 Client VPN module and got everything deployed based on the complete example. That worked well so far. I downloaded the client configuration and imported it in the OpenVPN client (which should be supported based on the AWS documentation).

But that’s when my luck runs out. I can’t connect to the VPN and the client provides following error:

Transport Error: DNS resolve error on 'cvpn-endpoint-.....prod.clientvpn.eu-central-1.amazonaws.com' for UDP session: Host not found.

So this appears to be a “networking issue”. My computer can’t resolve the endpoint address. So it appears I missed something in my VPN setup?

Any suggestions what I might be doing wrong?

shamb0 avatar

what OS are you using?

Jens Lauterbach avatar
Jens Lauterbach

macOS Monterey

shamb0 avatar

are you using tunnelblick? if yes, disable it

Jens Lauterbach avatar
Jens Lauterbach

FWIW: I am only using the OpenVPN connect client. I don’t think it has anything to do with the VPN tools. The endpoint is just not accessible from the net somehow.

shamb0 avatar

sorry, ya, just realised that after I typed it

shamb0 avatar
Troubleshooting Client VPN - AWS Client VPN

The following topic can help you troubleshoot problems that you might have with a Client VPN endpoint.

shamb0 avatar
Jens Lauterbach avatar
Jens Lauterbach

I used the Terraform module provided by Cloud Posse to set up the VPN. But that help article probably contains the answer to my problem

Jens Lauterbach avatar
Jens Lauterbach

I’ll try modifying the DNS as suggested in the article.

shamb0 avatar

ya I get it… this is “close” to me because I’ve been fighting with a bunch of issues on my cvpn deployments… it’s always had nothing to do with terraform and almost always had to do with missing things in aws documentation tbh

Matt Gowie avatar
Matt Gowie

Yeah — the AWS Client VPN hostnames are not resolvable by all DNS servers AFAIR. I remember having to have clients add additional DNS servers to properly resolve the VPN hostname when I last set one up. An annoying issue.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Leo Przybylski might have some ideas

Leo Przybylski avatar
Leo Przybylski

@Jens Lauterbach You cannot use the VPN endpoint address as is. It must be prefixed with a subdomain, like acme.cvpn-endpoint-.....prod.clientvpn.eu-central-1.amazonaws.com

In the client config, you will see something like

remote cvpn-endpoint-.....prod.clientvpn.eu-central-1.amazonaws.com 443
remote-random-hostname

remote-random-hostname is not supported by openvpn AFAIK, so the client tries to connect directly rather than using a remote random hostname. You will need to add this yourself. FYI, you could just use the AWS VPN client which does recognize it.
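
For example (endpoint ID left elided, as above), prepending an arbitrary label gives each client a unique hostname to resolve, which is what remote-random-hostname would otherwise do; this is illustrative only:

remote abc123.cvpn-endpoint-.....prod.clientvpn.eu-central-1.amazonaws.com 443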

Disclaimer: It was difficult getting openvpn to work. I don’t believe the AWS client vpn client configuration is fully supported by openvpn; therefore, I don’t recommend it. If possible, use https://aws.amazon.com/vpn/client-vpn-download/ instead

@Matt Gowie I also encountered what you were running into in separate instances. One was when I finally got my VPN client to connect, I could not resolve DNS entries. The reason for this is that unless you are using AWS for DNS service, it will not work. For this, I had to enable this option https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_client_vpn_endpoint#split_tunnel

What split_tunnel does is allow traffic outside the VPN. In this case, it allows traffic to travel to DNS servers outside AWS. If you are using DNS servers elsewhere, or your application needs to communicate with services outside AWS infrastructure, this is important.
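
For reference, a minimal sketch of that flag on the underlying resource (all other required arguments omitted):

resource "aws_ec2_client_vpn_endpoint" "example" {
  # server_certificate_arn, client_cidr_block, authentication_options, etc. omitted

  # let traffic not destined for the VPC (e.g. external DNS) bypass the tunnel
  split_tunnel = true
}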

1
Leo Przybylski avatar
Leo Przybylski

@Jens Lauterbach Regarding the remote-random-hostname in the client config, are you using the client configuration exported from the terraform module or from the AWS console? I ask because the terraform module automatically produces a client configuration that will connect with openvpn. If it does not, I’d like to discuss further and resolve that. (See: https://github.com/cloudposse/terraform-aws-ec2-client-vpn#output_client_configuration)

GitHub - cloudposse/terraform-aws-ec2-client-vpnattachment image

Contribute to cloudposse/terraform-aws-ec2-client-vpn development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

Pretty interesting email that just lit up all of my inboxes from AWS on a Terraform provider fix… I’m very surprised they went this far to announce a small fix. I wonder if it is bogging down their servers for some reason.
Hello,

You are receiving this message because we identified that your account uses Hashicorp Terraform to create and update Lambda functions. If you are using the V2.x release with version V2.70.1, or V3.x release with version V3.41.0 or newer of the AWS Provider for Terraform you can stop reading now.

As notified in July 2021, AWS Lambda has extended the capability to track the state of a function through its lifecycle to all functions [1] as of November 23, 2021 in all AWS public, GovCloud and China regions. Originally, we informed you that the minimum version of the AWS Provider for Terraform that supports states (by waiting until a Lambda function enters an Active state) is V2.40.0. We recently identified that this version had an issue where Terraform was not waiting until the function enters an Active state after the function code is updated. Hashicorp released a fix for this issue in May 2021 via V3.41.0 [2] and back-ported it to V2.70.1 [3] on December 14, 2021.

If you are using V2.x release of AWS Provider for Terraform, please use V2.70.1, or update to the latest release. If you are using V3.x version, please use V3.41.0 or update to the latest release. Failing to use the minimum supported version or latest can result in a ‘ResourceConflictException’ error when calling Lambda APIs without waiting for the function to become Active.
If you need additional time to make the suggested changes, you can delay states change for your functions until January 31, 2022 using a special string (awsopt-out) in the description field when creating or updating the function. Starting February 1, 2022, the delay mechanism expires and all customers see the Lambda states lifecycle applied during function create or update. If you need additional time beyond January 31, 2022, please contact your enterprise support representative or AWS Support [4].

To learn more about this change refer to the blog post [5]. If you have any questions, please contact AWS Support [4].

[1] https://docs.aws.amazon.com/lambda/latest/dg/functions-states.html
[2] https://newreleases.io/project/github/hashicorp/terraform-provider-aws/release/v3.41.0
[3] https://github.com/hashicorp/terraform-provider-aws/releases/tag/v2.70.1
[4] https://aws.amazon.com/support
[5] https://aws.amazon.com/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions

Sincerely,
Amazon Web Services

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow, that is surprising

J Norment avatar
J Norment

Does anyone know how I would be able to determine what version of TLS is used by TF when making calls to AWS APIs?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What problem are you trying to solve?

J Norment avatar
J Norment

Security appears to need the information for some kind of corporate security control.

Raim avatar

Hello everyone and good evening. I’m trying to set up https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account for a VPC peering between two regions in a single AWS account.

module "vpc_peering" {
  source    = "cloudposse/vpc-peering-multi-account/aws"
  version   = "0.17.1"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name

  requester_vpc_id                          = var.requester_vpc_id
  requester_region                          = var.requester_region
  requester_allow_remote_vpc_dns_resolution = true
  requester_aws_assume_role_arn             = aws_iam_role.vpc_peering_requester_role.arn
  requester_aws_profile                     = var.requester_profile

  accepter_enabled                         = true
  accepter_vpc_id                          = var.accepter_vpc
  accepter_region                          = var.accepter_region
  accepter_allow_remote_vpc_dns_resolution = true
  accepter_aws_profile                     = var.accepter_profile

  requester_vpc_tags = {
    "Primary" = false
  }

  accepter_vpc_tags = {
    Primary = true
  }
}

This is how I’m defining the module right now.

I’ve run terraform init no problems, but when I try to create a plan I get:

│ Error: no matching VPC found
│ 
│   with module.vpc_peering_west_east.module.vpc_peering.data.aws_vpc.accepter[0],
│   on .terraform/modules/vpc_peering_west_east.vpc_peering/accepter.tf line 43, in data "aws_vpc" "accepter":
│   43: data "aws_vpc" "accepter" {
╷
│ Error: no matching VPC found
│ 
│   with module.vpc_peering_west_east.module.vpc_peering.data.aws_vpc.requester[0],
│   on .terraform/modules/vpc_peering_west_east.vpc_peering/requester.tf line 99, in data "aws_vpc" "requester":
│   99: data "aws_vpc" "requester" {

The module is:

module "vpc_peering_west_east" {
  source    = "../modules/vpc_peering"
  namespace = "valley"
  stage     = terraform.workspace
  name      = "valley-us-east-2-to-us-west-2-${terraform.workspace}"

  accepter_vpc    = "vpc-id-1"
  accepter_region = "us-west-2"
  accepter_profile  = "valley-prod-us-west-2"

  requester_vpc_id = "vpc-id-2"
  requester_region = "us-east-2"
  requester_profile = "valley-prod-us-east-2"
  
  vpc_peering_requester_role_name = "valley-us-west-2-to-us-east-2-${terraform.workspace}"
}

terraform version output is:

Terraform v1.1.3
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.68.0
+ provider registry.terraform.io/hashicorp/null v3.1.0

Both VPCs exist, and if I try a simple data block they are found by their IDs. What am I missing, or what have I not read about this? Thanks in advance for any help.

The referenced profiles do exist, and they are the ones that were used to build the existing infrastructure in the respective regions.

Lloyd O'Brien avatar
Lloyd O'Brien

HNY all just wondering, can a .tfvars file reference a file (EC2 userdata file) as input, or can it only take strings? TIA team

jose.amengual avatar
jose.amengual

no, you can’t do things like imports or such

1
Lloyd O'Brien avatar
Lloyd O'Brien

Cheers, Pepe

1

2022-01-11

DevOpsGuy avatar
DevOpsGuy

Hi All, I am trying to install a MySQL database on Windows Server 2016 (64 bit) using terraform. This is not going to be RDS. I am not sure where to start on how to install MySQL on a Windows Server 2016 (64 bit) EC2 in AWS using terraform. Can someone provide me some insight?

Raim avatar

I don’t think you’d use Terraform to do something like that. A playbook with Chef/Ansible/Puppet might be better.

this1
Jim G avatar

you can do it in user_data, but terraform will not have any visibility into the results - it just blindly executes the script. Probably a better solution would be to bake an AMI with Packer first, then use terraform to spin up an instance based on it?
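
A minimal sketch of that flow, assuming the Packer build produces AMIs named windows-mysql-*:

data "aws_ami" "mysql" {
  most_recent = true
  owners      = ["self"]

  # match the AMIs the Packer build publishes
  filter {
    name   = "name"
    values = ["windows-mysql-*"]
  }
}

resource "aws_instance" "db" {
  ami           = data.aws_ami.mysql.id
  instance_type = "m5.large" # hypothetical size
}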

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, agree with the recommendations above

DevOpsGuy avatar
DevOpsGuy

@Jim G Thank you. Now I have an AMI ready. From here how can I use that AMI to spin up using terraform? Is that to create a new ec2 based on that AMI?

jose.amengual avatar
jose.amengual

@DevOpsGuy try not to double post

Balazs Varga avatar
Balazs Varga

Ansible is much better for this task.

DevOpsGuy avatar
DevOpsGuy

@jose.amengual These are two different issues and this is terraform group and I posted in that context only.

jose.amengual avatar
jose.amengual

yes, but now I know you are using an instance instead of an RDS instance, that is why I mentioned it

1
DevOpsGuy avatar
DevOpsGuy

I got something helpful here https://thepracticalsysadmin.com/create-windows-server-2019-amis-using-packer/ In case it may help others cheers

Create Windows Server 2019 AMIs using Packerattachment image

There are quite a few blog posts out there detailing this, but none of them seem to be up to date for use with the HCL style syntax, introduced in Packer 1.5, which has a number of advantages over …

Jas Rowinski avatar
Jas Rowinski

Hi, I was wondering what the official process is to add enhancements to Cloudposse git repos? I wanted to enable ebs_optimized on your eks_node_group, but it seems you require fork, branch -> PR? First time wanting to contribute to a public repo so not sure if this is a standard way of doing it.

Is it possible to become a contributor or is this the only way to handle updates from the community?

GitHub - cloudposse/terraform-aws-eks-node-group: Terraform module to provision a fully managed AWS EKS Node Groupattachment image

Terraform module to provision a fully managed AWS EKS Node Group - GitHub - cloudposse/terraform-aws-eks-node-group: Terraform module to provision a fully managed AWS EKS Node Group

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Jas Rowinski, fork, branch -> PR is the correct way of doing it

GitHub - cloudposse/terraform-aws-eks-node-group: Terraform module to provision a fully managed AWS EKS Node Groupattachment image

Terraform module to provision a fully managed AWS EKS Node Group - GitHub - cloudposse/terraform-aws-eks-node-group: Terraform module to provision a fully managed AWS EKS Node Group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we all do branch -> PR at Cloud Posse (we don’t need to fork since it’s our own repo)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thank you for your contribution

Jas Rowinski avatar
Jas Rowinski

no problem, just put a PR in. Tested on my own cluster and it worked as intended

Next up is scheduling capacity with on-demand and spot types

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks, I reviewed, a few comments

1
1
Jas Rowinski avatar
Jas Rowinski

Thanks for the review and merge!

1
Lloyd O'Brien avatar
Lloyd O'Brien

Greetings! Is there a good way to pass an EC2 user data file that is stored (in the same repo obvs) in another folder via the production.tfvars file? “hardcoding” isn’t an option as the EC2 module is called in another capacity also. How do you manage passing user data files? TIA

jose.amengual avatar
jose.amengual

you should use a template
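
A minimal sketch of that approach (variable and file names hypothetical): a tfvars file can only carry plain values, so pass the path and let the config render the file:

# production.tfvars:
#   user_data_template = "../userdata/web.sh.tpl"

variable "user_data_template" {
  type = string
}

resource "aws_instance" "web" {
  # ami, instance_type, etc. omitted

  # render the file at plan time; the template vars here are made up
  user_data = templatefile(var.user_data_template, { role = "web" })
}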

jose.amengual avatar
jose.amengual
terraform-aws-ec2-bastion-server/amazon-linux.sh at master · cloudposse/terraform-aws-ec2-bastion-serverattachment image

Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication. - terraform-aws-ec2-bastion-server…

jose.amengual avatar
jose.amengual
terraform-aws-ec2-bastion-server/main.tf at master · cloudposse/terraform-aws-ec2-bastion-serverattachment image

Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication. - terraform-aws-ec2-bastion-server…

2022-01-12

Matt Gowie avatar
Matt Gowie

Sad to see this conversation in the sops repo considering it’s such an essential tool for at least my own Terraform workflow. Wanted to bring it up here to get more eyes on it and see if anyone knows folks at Mozilla so they can bug them.

https://github.com/mozilla/sops/discussions/927

New maintainers · Discussion #927 · mozilla/sopsattachment image

It’s quite apparent to me that neither @ajvb nor me currently have enough time to maintain the project, with PRs sitting unreviewed. I think it’s time to look for some new maintainers. I do…

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow, this is almost as if the libcurl maintainers said they don’t think they can maintain it anymore… sops is integral to many tools

New maintainers · Discussion #927 · mozilla/sopsattachment image

It’s quite apparent to me that neither @ajvb nor me currently have enough time to maintain the project, with PRs sitting unreviewed. I think it’s time to look for some new maintainers. I do…

10002
venkata.mutyala avatar
venkata.mutyala

Looks like Mozilla isn’t going to let this die just yet. Ref: https://github.com/mozilla/sops/discussions/927#discussioncomment-2183834

Hi all, I’m the new Security Engineering manager here at Mozilla. I wanted to update the community on our current status and future plans for the SOPS tool.

While the project does appear stagnant at this time, this is a temporary situation. Like many companies we’ve faced our own resource constraints that have led to SOPS not receiving the support many of you would have liked to see us provide, and I ask that you bear with us a bit longer as we pull this back in. I’m directing some engineer resources towards SOPS now and growing my team as well (see below, we’re hiring!), so we expect to work on the SOPS issue backlog and set our eyes to its future soon.

I realize there is interest in the community taking over SOPS. It may go that way eventually, but at the moment SOPS is so deeply integrated throughout our stack that we’re reluctant to take our hands completely off the wheel. In the longer term we’ll be evaluating the future for SOPS as we modernize and evolve Mozilla’s tech stack. At present it serves some important needs however, so for at least the next year you can expect Mozilla to support both the tool and community involvement in its development.

Lastly, as I noted above I am growing my team! If working on tools like SOPS or exploring other areas of security involving cloud, vulnerability management, fraud, crypto, or architecture sounds interesting to you, see our job link below and apply! I have multiple roles open and these are fully remote across most of US, Canada, and Germany.
https://www.mozilla.org/en-US/careers/position/gh/3605345/

Thank you,
rforsythe

2022-01-13

Shilpa avatar

Hi everyone, I’m a newbie in Terraform and have started doing GitHub administration through Terraform. I am creating repos and setting all the required config. Now I want to upload files present in the working directory, or fetch them from a different repo/S3, to the newly created repo. Any pointers to achieve this? Thank you

Jim Park avatar
Jim Park

Correct me if I’m mistaken, but I think you are saying you are currently using terraform to manage the GitHub configuration of your git repositories, and now you wish to use terraform to add files to the git repository.

Could you explain in more detail why you would wish to do this?

Shilpa avatar

What I am trying to achieve: when developers need to create a repo, we run terraform and enter the repo name; it then creates the repo, required access, events, webhooks, labels, etc. Say it is a Java project using Gradle - I want to create/upload all the default files using terraform.

Shilpa avatar

so the template will be ready for devs; they just need to add their code to start CI

jimp avatar

Do template repositories serve this purpose? There is the ability to create the repository from a template: https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository#template-repositories

I don’t think there is a way to manage template updates afterward, so this is only a solution for the start of the lifecycle.
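
A minimal sketch with the integrations/github provider (org and repo names hypothetical):

resource "github_repository" "service" {
  name        = "my-new-service"
  description = "Created from the org-wide Gradle template"

  # copy the contents of the template repository at creation time
  template {
    owner      = "my-org"
    repository = "java-gradle-template"
  }
}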

2
Shilpa avatar

Thanks @Jim Park, will have a look.

Jonás Márquez avatar
Jonás Márquez

Hi everyone! I am trying to use the terraform-null-label module and I get an error with map(). Terraform recommends using tomap, but I have been testing passing the keys and values in various ways and I can’t get it to work. Has anyone had the same problem? Here is an example of the error; thanks in advance to all!

│ Error: Error in function call
│
│   on .terraform/modules/subnets/private.tf line 8, in module "private_label":
│    8:     map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "private"))
│     ├────────────────
│     │ var.subnet_type_tag_key is a string, known only after apply
│     │ var.subnet_type_tag_value_format is a string, known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ...
│ }) syntax to write a literal map.
╵
╷
│ Error: Error in function call
│
│   on .terraform/modules/subnets/public.tf line 8, in module "public_label":
│    8:     map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "public"))
│     ├────────────────
│     │ var.subnet_type_tag_key is a string, known only after apply
│     │ var.subnet_type_tag_value_format is a string, known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ...
│ }) syntax to write a literal map.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

check that you are using the latest versions of the modules

Jonás Márquez avatar
Jonás Márquez

yes, version 0.25.0 of module cloudposse/label/null

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the error above is in some subnets module

Jonás Márquez avatar
Jonás Márquez
GitHub - cloudposse/terraform-aws-ecs-atlantis at 0.24.1attachment image

Terraform module for deploying Atlantis as an ECS Task - GitHub - cloudposse/terraform-aws-ecs-atlantis at 0.24.1

Jonás Márquez avatar
Jonás Márquez

I use the latest subnet module version 0.39.8

Jonás Márquez avatar
Jonás Márquez
Release v0.39.8 · cloudposse/terraform-aws-dynamic-subnetsattachment image

Enhancements Bump providers @nitrocode (#146) what Bump providers why so consumers don’t see errors based on new features used by this module references Closes #145

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-aws-ecs-atlantis was not updated for a while and prob uses the map() function
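
For reference, the deprecated call from the error above and its modern equivalent - a dynamic key needs parentheses (tomap({ ... }) works the same way for literal keys):

# Terraform <= 0.11 style, since removed:
#   map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "private"))

# 0.12+ object-constructor syntax (the attribute name is illustrative):
tags = {
  (var.subnet_type_tag_key) = format(var.subnet_type_tag_value_format, "private")
}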

jose.amengual avatar
jose.amengual

uff yes that module is very very old

jose.amengual avatar
jose.amengual

and it was created for a forked version of atlantis

Jonás Márquez avatar
Jonás Márquez

Jonás Márquez avatar
Jonás Márquez

Do you have any recommendations for deploying atlantis in HA mode?

joshmyers avatar
joshmyers

There is no HA mode.

1
jose.amengual avatar
jose.amengual

@joshmyers is right, there is no HA for atlantis

jose.amengual avatar
jose.amengual

you can run multiple atlantis instances with a WAF or something in between that matches the repo URL from the webhook request body and sends it to the appropriate atlantis server, but there is no concept of load balancing or anything

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(just to be clear, the lack of HA is an atlantis limitation; fault tolerance can be achieved with healthchecks and automatic recovery)

this1
Jonás Márquez avatar
Jonás Márquez

Thank you all for the information!

Jonás Márquez avatar
Jonás Márquez

Jim Park avatar
Jim Park

Did you start a terragrunt run, but then spam Ctrl + C out of cowardice, like me?

Of course you didn’t, because you know you’d end up with a bajillion locks. If you need some guidance on clearing those locks in dynamodb, reference this gist.

1
1
1
Jim Park avatar
Jim Park

(I foot-gun’d months ago, but neglected to share with y’all. Recommend bookmarking just in case =))

1

2022-01-14

greg n avatar

Heya, we’ve got all our terraform in a ./terraform repo subdirectory. Does pre-commit support passing args to tflint like:

repos:
  - repo: <https://github.com/gruntwork-io/pre-commit>
    rev: v0.1.17 # Get the latest from: <https://github.com/gruntwork-io/pre-commit/releases>
    hooks:
      - id: tflint
        args:
          - "--config ./terraform/.tflint.hcl"
          - "./terraform"

Looking at this, I’m guessing not ? https://github.com/gruntwork-io/pre-commit/blob/master/hooks/tflint.sh#L14 Thanks

pre-commit/tflint.sh at master · gruntwork-io/pre-commitattachment image

A collection of pre-commit hooks used by Gruntwork tools - pre-commit/tflint.sh at master · gruntwork-io/pre-commit

RB avatar

not sure about that one but have you looked at this one

https://github.com/antonbabenko/pre-commit-terraform#terraform_tflint

GitHub - antonbabenko/pre-commit-terraform: pre-commit git hooks to take care of Terraform configurationsattachment image

pre-commit git hooks to take care of Terraform configurations - GitHub - antonbabenko/pre-commit-terraform: pre-commit git hooks to take care of Terraform configurations

RB avatar

you can use args to configure it
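
For example, a minimal sketch of that hook config (the rev is whatever release you pin; __GIT_WORKING_DIR__ is the placeholder the hooks expand to the repo root):

repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.62.3
    hooks:
      - id: terraform_tflint
        args:
          - --args=--config=__GIT_WORKING_DIR__/terraform/.tflint.hcl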

greg n avatar

Thanks! I hadn’t seen that

RB avatar

see the review dog version too

https://github.com/reviewdog/action-tflint

GitHub - reviewdog/action-tflint: Run tflint with reviewdog on pull requests to enforce best practicesattachment image

Run tflint with reviewdog on pull requests to enforce best practices - GitHub - reviewdog/action-tflint: Run tflint with reviewdog on pull requests to enforce best practices

greg n avatar

Cool, I’m gonna go with the Anton Babeko stuff for now

1
RB avatar

the difference is that reviewdog comments the pr inline whereas the anton one will just fail if a rule is broken

greg n avatar

nice feature

2022-01-15

Zeeshan S avatar
Zeeshan S

whats the best ci/cd pipeline for terraform these days ?

tim.davis.instinct avatar
tim.davis.instinct

A lot of the standard CI/CD tools these days can execute TF, but you’re better off looking at the TACoS platforms (Terraform Automation and COllaboration Software). These platforms are purpose built for IaC lifecycle Automation. env0, Terraform Cloud, Spacelift, and Scalr.

Disclaimer, I’m the DevOps Advocate for env0

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

The Spacelift docs (which I recommend) have a great explanation of why dedicated systems for Terraform are worth it: https://docs.spacelift.io/#do-i-need-another-ci-cd-for-my-infrastructure

Hello, Spacelift!

Take your infra-as-code to the next level

Mohammed Yahya avatar
Mohammed Yahya

• env0

• scalr

• spacelift

• TFC ( enterprise is pricey)

• DIY Pipeline - I prefer this, as I can use GitHub Actions or GitLab or Bitbucket or (my best choice) CircleCI to implement all needed stages with fine-grained control.

Mohammed Yahya avatar
Mohammed Yahya

now with options like OIDC between AWS and Github Actions runners, no need to share credentials or worry about access

Mohammed Yahya avatar
Mohammed Yahya

I used most of these, and found that Terraform Cloud is best for newcomers and small teams (< 5)

Zeeshan S avatar
Zeeshan S

Thanks everyone.

@Mohammed Yahya among the DIY pipelines, which one’s the best?

Mohammed Yahya avatar
Mohammed Yahya

there is no best one; I just found CircleCI feature-rich and speedy

But it depends on the organisation’s choice of tools

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - suzuki-shunsuke/tfcmt: Fork of mercari/tfnotify. tfcmt enhances tfnotify in many ways, including Terraform >= v0.15 support and advanced formatting optionsattachment image

Fork of mercari/tfnotify. tfcmt enhances tfnotify in many ways, including Terraform >= v0.15 support and advanced formatting options - GitHub - suzuki-shunsuke/tfcmt: Fork of mercari/tfnotify. t…

1
Zeeshan S avatar
Zeeshan S

mostly for provisioning aws

Zeeshan S avatar
Zeeshan S

Thanks. Have heard of spacelift but wasn’t sure if it was mature enough. I’ll have a look

2022-01-16

2022-01-17

Jas Rowinski avatar
Jas Rowinski

Was wondering how people have set up their EKS clusters when it comes to Node Groups (EKS managed or Self Managed). I’m running EKS managed, but trying to find a way to achieve mixed_instances_policy when it comes to SPOT & ON_DEMAND instance types.

Using the node group Cloudposse module currently. But after reviewing others, it seems that mixed_instances_policy can only be done via Self Managed. Is that correct or am I missing something?

Looking at this module, they offer 3 different strategies when it comes to node groups and mixed instances. But like I said before, it seems to be only Self Managed. Anyone else manage to get this to work with EKS managed node groups?

Jas Rowinski avatar
Jas Rowinski

Seems to be an open ticket for AWS for this: https://github.com/aws/containers-roadmap/issues/1297

[EKS] [request]: Spot & On-demand Mixed Policy for Managed Node Group · Issue #1297 · aws/containers-roadmapattachment image

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

1
Zach avatar

I’ve switched to using a small Managed Node Group to bootstrap the cluster and then using Karpenter to provision instances for the rest of the workload. Karpenter can have multiple profiles for the nodes and can easily mix spot/on-demand types

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s cool - you’re already using karpenter?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…in production too?

Zach avatar

No just dev, we’re transitioning to k8s

2022-01-18

aimbotd avatar
aimbotd

Hey friends. I’m running into an odd issue. I can run a terraform plan successfully from my user. I cannot run it from the user in our pipeline, who has the same permissions/policies. That pipeline user keeps hitting this error, but I have no idea why. This is deployed with cloudposse/eks-cluster/[email protected]

This cluster was originally deployed with this pipeline user as well.

module.eks_cluster.aws_iam_openid_connect_provider.default[0]: Refreshing state... [id=arn:aws:iam::amazonaws.com/id/<REDACTED>]
module.eks_cluster.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: Get "https://eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps/aws-auth": getting credentials: exec: executable aws failed with exit code 255
│ 
│   with module.eks_cluster.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks_cluster/auth.tf line 132, in resource "kubernetes_config_map" "aws_auth":
│  132: resource "kubernetes_config_map" "aws_auth" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is this the same IAM role/user that created the cluster? If not, is that role/user in the auth config map? (by default, only the role/user that created the cluster has access to it until you add additional roles/users to the auth config map)

aimbotd avatar
aimbotd

It should’ve been. However, I’ve added the user to the mapUsers and the associated roles to the mapRoles. Maybe I’m missing a role.

aimbotd avatar
aimbotd

I’ll dig a bit more here.

aimbotd avatar
aimbotd

Do you happen to know what the appropriate groups should be?

aimbotd avatar
aimbotd

would - system:masters be viable?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, system:masters is for admins

aimbotd avatar
aimbotd

The user here can authenticate to the cluster, it can access the cluster via the roles, but it can’t do anything via terraform due to the error above. Do you see anything I’m inherently missing?

  mapRoles: |
    - groups:
      - system:masters
      rolearn: arn:aws:iam::123456789012:role/DevRole
      username: DevRole
    - groups:
      - system:masters
      username: DepRole
      rolearn: arn:aws:iam::123456789012:role/DepRole
    - groups:
      - system:masters
      rolearn: arn:aws:iam::123456789012:role/OrgAccountAcc
      username: OrgAccountAcc
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::123456789012:role/dev-cluster-workers
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::098765432123:user/deploy.user
      username: deploy.user
      groups:
      - system:masters
aimbotd avatar
aimbotd

I found the issue. It was this: kube_exec_auth_role_arn = var.map_additional_iam_roles[0].rolearn. Let’s not taco ’bout it.

1
aimbotd avatar
aimbotd

Thanks for your sanity checks. I appreciate you.

Kartik V avatar
Kartik V

Hello everyone, I am trying to achieve VPC cross-region peering using terragrunt; I need some ideas/suggestions on using a provider alias in the root terragrunt.hcl

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe try #terragrunt

1
chris avatar

I am trying to get my copy of reference-architectures up and running again — I know it is outdated but we already have 1 architecture built using it and we need another and the team wants them to be consistent — I believe I have everything resolved except that template_file is not available for the M1 (at least as far as I can find) so I need to update

data "template_file" "data" {
  count = "${length(keys(var.users))}"

  # this path is relative to repos/$image_name
  template = "${file("${var.templates_dir}/conf/users/user.tf")}"

  vars = {
    resource_name    = "${replace(element(keys(var.users), count.index), local.unsafe_characters, "_")}"
    username         = "${element(keys(var.users), count.index)}"
    keybase_username = "${element(values(var.users), count.index)}"
  }
}

resource "local_file" "data" {
  count    = "${length(keys(var.users))}"
  content  = "${element(data.template_file.data.*.rendered, count.index)}"
  filename = "${var.output_dir}/overrides/${replace(element(keys(var.users), count.index), local.unsafe_characters, "_")}.tf"
}

I think I have to use templatefile() but can’t figure out how to re-write it.

Thanks in advance
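
For context, a rough sketch of what the templatefile() rewrite could look like, assuming var.users is a map of username => keybase_username and user.tf interpolates resource_name, username, and keybase_username:

resource "local_file" "data" {
  for_each = var.users

  # templatefile() replaces the template_file data source
  content = templatefile("${var.templates_dir}/conf/users/user.tf", {
    resource_name    = replace(each.key, local.unsafe_characters, "_")
    username         = each.key
    keybase_username = each.value
  })

  filename = "${var.output_dir}/overrides/${replace(each.key, local.unsafe_characters, "_")}.tf"
}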

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you’re running inside of geodesic amd64 it should “just work”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(with the emulation under the m1)

chris avatar

I didn’t realize I could run this repo inside geodesic… I guess I will try that

chris avatar

I guess that makes sense though. Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep! use geodesic as your base toolbox image for all devops. that way it works uniformly on all workstations

chris avatar

No luck getting geodesic built… bumped into a build issue.

Did a fresh clone and checked out the latest version, but when running make all I end up with a build error

 => ERROR [stage-2  7/30] RUN apk add --update $(grep -h -v '^#' /etc/apk/packages.txt /etc/apk/packages-alpine.txt) &&     mkdir -p /etc/bash_completion.  6.1s
------
 > [stage-2  7/30] RUN apk add --update $(grep -h -v '^#' /etc/apk/packages.txt /etc/apk/packages-alpine.txt) &&     mkdir -p /etc/bash_completion.d/ /etc/profile.d/ /conf &&     touch /conf/.gitconfig:
#13 0.223 fetch <https://dl-cdn.alpinelinux.org/alpine/v3.15/main/aarch64/APKINDEX.tar.gz>
#13 1.483 fetch <https://dl-cdn.alpinelinux.org/alpine/v3.15/community/aarch64/APKINDEX.tar.gz>
#13 3.843 fetch <https://apk.cloudposse.com/3.13/vendor/aarch64/APKINDEX.tar.gz>
#13 4.584 fetch <https://alpine.global.ssl.fastly.net/alpine/edge/testing/aarch64/APKINDEX.tar.gz>
#13 5.378 fetch <https://alpine.global.ssl.fastly.net/alpine/edge/community/aarch64/APKINDEX.tar.gz>
#13 6.044 ERROR: unable to select packages:
#13 6.044   awless (no such package):
#13 6.044     required by: world[awless]
#13 6.044   aws-iam-authenticator (no such package):
#13 6.044     required by: world[aws-iam-authenticator]
#13 6.044   chamber (no such package):
#13 6.044     required by: world[chamber]
...
...
chris avatar

Calling an evening for now and will have to pick this up tomorrow. Thanks for the suggestion and I will keep picking away at this

chris avatar

Thought: maybe because I am on M1 and not emulating correctly…

chris avatar

Realized I may not need to build it and can just do a docker run going to try that first

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can just use our geodesic public image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

use FROM cloudposse/geodesic:debian-latest

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use github actions to build the docker image and there could be other things needed to get it to build

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

those packages come from our cloudposse package host. They must be installed after configuring the cloudsmith registry

aimbotd avatar
aimbotd

How does one use the kubernetes_taints variable? I’m trying the following but with no success: │ The given value is not suitable for child module variable "kubernetes_taints" defined at .terraform/modules/eks_node_group/variables.tf:142,1-29: map of string required.

kubernetes_taints = [
  {
    key    = "foo.gg/emr"
    effect = "NO_SCHEDULE"
    value  = null
  }
]
Jim G avatar

I don’t know much about the module, but looking at the error, it wants a map of strings - perhaps it doesn’t like the null value.

Jim G avatar

try an empty string or "null"?

aimbotd avatar
aimbotd

Thanks. I’ve tried that, along with "", and even omitting the value key altogether. Though that one threw the error I expected.

aimbotd avatar
aimbotd

Update…I learned that my version for the node group was not up to date with the docs I was looking at.

1
Danish Kazi avatar
Danish Kazi

Hello, I am trying to create custom modules for mongodbatlas using the verified mongodbatlas provider https://registry.terraform.io/providers/mongodb/mongodbatlas/1.2.0 but I get the below error on “terraform init”

Initializing the backend...

Initializing provider plugins…

  • Finding latest version of hashicorp/mongodbatlas…
  • Reusing previous version of mongodb/mongodbatlas from the dependency lock file
  • Using previously-installed mongodb/mongodbatlas v1.2.0

╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/mongodbatlas: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/mongodbatlas
│
│ Did you intend to use mongodb/mongodbatlas? If so, you must specify that source address in each module which requires that provider. To see which modules are currently
│ depending on hashicorp/mongodbatlas, run the following command:
│   terraform providers

Is this because mongodbatlas provider is a “verified” provider and not a “published” provider ?
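
The error message itself points at the likely fix: each module that uses the provider (the root module and every custom child module) needs a required_providers entry, otherwise Terraform assumes hashicorp/mongodbatlas. A minimal sketch:

terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.2"
    }
  }
}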

2022-01-19

SlackBot avatar
SlackBot
01:47:20 PM

This message was deleted.

Release notes from terraform avatar
Release notes from terraform
07:03:19 PM

v1.1.4 1.1.4 (January 19, 2022) BUG FIXES: config: Non-nullable variables with null inputs were not given default values when checking validation statements (#30330) config: Terraform will no longer incorrectly report “Cross-package move statement” when an external package has changed a resource from no count to using count, or…

Handle null variable input with nullable=false by jbardin · Pull Request #30330 · hashicorp/terraformattachment image

v1.1 targeted fix for #30307. The variable handling has been overhauled in v1.2 already, but we want to head off any possible incorrect use of null variables slipping into validation when nullable=…

Brij S avatar

Hi all, has anyone attempted to set WARM_IP_TARGET when using the terraform-eks module ? Im having a tough time finding a way to set that up

2022-01-20

Almondovar avatar
Almondovar

hi colleagues, a few weeks ago I found somewhere a tool that detects infra missing from terraform and gives output like “50% terraformed” - but I can’t recall its name, any ideas? ~(it’s not driftctl)~

omry avatar
GitHub - GoogleCloudPlatform/terraformer: CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Codeattachment image

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GitHub - GoogleCloudPlatform/terraformer: CLI tool to generate terraform files from e…

Almondovar avatar
Almondovar

Thank you for your response, no, it was more like terraform checker

1
Almondovar avatar
Almondovar

the name of the tool is driftctl, if someone is interested

1
omry avatar


(its not driftctl)
@Almondovar you mentioned in the original message that it isn’t driftctl

Almondovar avatar
Almondovar

true!! it was indeed a troll

2
andylamp avatar
andylamp

hi there! I am trying to create an additional authorisation rule when using the terraform-aws-ec2-client-vpn repo. I create the authorization rule as per the example, like so:

  authorization_rules = [{
    name                 = "Authorise ingress traffic"
    access_group_id      = "-"
    authorize_all_groups = true
    description          = "Authorisation for traffic using this VPN to VPC resources"
    target_network_cidr  = "0.0.0.0/0"
  }]

However, this creates an error saying

"access_group_id": only one of `access_group_id,authorize_all_groups` can be specified, but `access_group_id,authorize_all_groups` were specified.

If I remove the access_group_id complains that it is required - does anyone know how to resolve this issue?

RB avatar

can you try setting one of those to null

1
andylamp avatar
andylamp

actually, I just did that just after posting

andylamp avatar
andylamp

it worked
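
For reference, the shape that worked - the unused attribute set explicitly to null:

authorization_rules = [{
  name                 = "Authorise ingress traffic"
  access_group_id      = null # only one of access_group_id / authorize_all_groups may be set
  authorize_all_groups = true
  description          = "Authorisation for traffic using this VPN to VPC resources"
  target_network_cidr  = "0.0.0.0/0"
}]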

andylamp avatar
andylamp

it was not clear from the documentation

RB avatar

nice!

RB avatar

it’s a terraform thing

andylamp avatar
andylamp

do you accept PR’s for such things?

andylamp avatar
andylamp

it’d be great to document that!

RB avatar

if we documented it here, we’d have to document it in every module

andylamp avatar
andylamp

oh, a pain

RB avatar

it’s a restriction in terraform when we explicitly set the object definition in the variable type

RB avatar

one way around it is to set the type to any and use lookups to retrieve the values; then we can document the appropriate keys in the description
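
A rough sketch of that pattern (names hypothetical):

variable "authorization_rules" {
  type        = any
  default     = []
  description = "List of objects with keys: name, description, target_network_cidr, and either access_group_id or authorize_all_groups"
}

locals {
  # lookup() supplies a default when a caller omits a key
  normalized_rules = [for rule in var.authorization_rules : {
    name                 = rule.name
    access_group_id      = lookup(rule, "access_group_id", null)
    authorize_all_groups = lookup(rule, "authorize_all_groups", false)
  }]
}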

RB avatar

the other way, eventually, is when terraform allows the optional argument without the experiment enabled

andylamp avatar
andylamp

I see, that’s great when that happens

RB avatar

if you want to put in a pr, you can put one in to switch the type definition to any

RB avatar

you’d also have to define the object body in the description

RB avatar

i can’t promise that it would be merged but it would spark discussion and would be reviewed and maybe merged

andylamp avatar
andylamp

thing is, I think it’s more about knowing the expected behaviour, as the object definition does not clearly imply that one of them should be null for this to work properly. The error messages do not help either

andylamp avatar
andylamp

more like, I would like to add an example rule, for each case, so others can refer to that!

RB avatar

there is also type validation that can be added

RB avatar

@Leo Przybylski

andylamp avatar
andylamp

that’s something I can try and do! so others avoid the pitfall that I hit

1
Isaac avatar

Has anyone used the control tower account factory for terraform? I’m about to explore setting up multi-account AWS environments. HashiCorp Teams with AWS on New Control Tower Account Factory for Terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re very interested in AFT. Haven’t yet kicked the tires.

Isaac avatar

I’ll give it a shot this weekend and share first impressions.

Josh B. avatar
Josh B.

No. It’s like the only thing I did manually

Mohammed Yahya avatar
Mohammed Yahya

BTW you can still achieve 95% automation using Terraform only; I’m not sure if this overcomplicated thing is still needed, maybe I should test it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


95% automation using Terraform only
this is true, however, 0% control tower

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, to be clear to others, AFT has nothing to do with managing control tower itself with terraform; it’s about provisioning resources inside an account managed by control tower

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and AFT itself must be manually provisioned, like @Josh B. said. The terraform parts of it run with TFC/Spacelift/scalr/env0/etc. But any accounts needed by AFT are still clickops in control tower.

Isaac avatar

Ah, I never reported back. So I did try it that weekend, it was frustrating. AWS really wants you to use CodeCommit. I tried several times to do it using GitHub and it would work up to the point that mattered, which was vending an account. So I gave up and did it the official way with CodeCommit and that worked first time. It is indeed quite a manual process to setup, and you need to read the docs carefully, but I can see it being smooth sailing once you’re done with the initial setup.

Isaac avatar

Here’s the tutorial I followed for the GitHub attempt. Would really love to hear if anyone succeeded following it. I’d try again but I don’t want to create a whole new bunch of AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Shaun Wang

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Shaun Wang has been working on updating the tutorial for AFT and already fixed a lot of issues with it. But not sure if he tested the GitHub integration yet.

Shaun Wang avatar
Shaun Wang

Hey! didn’t see this earlier Isaac, will reach out to you

Vucomir Ianculov avatar
Vucomir Ianculov

Hey, I’m using terraform-aws-cloudfront-cdn and trying to use an Origin Access Identity for my default origin, is it possible?

RB avatar

how are you trying this?

RB avatar

are you trying to reuse an existing access identity? if so, I don’t think it’s possible with this module since it creates its own

RB avatar
terraform-aws-cloudfront-cdn/main.tf at 9f0b0654a221131d2ab6b6d855c70748dd771f98 · cloudposse/terraform-aws-cloudfront-cdnattachment image

Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - terraform-aws-cloudfront-cdn/main.tf at 9f0b0654a221131d2ab6b6d855c70748dd771f98 · cloudposse/terraform-aws-…

Vucomir Ianculov avatar
Vucomir Ianculov

I’m not trying to use an existing access identity, I would like to use the one created by the module

Vucomir Ianculov avatar
Vucomir Ianculov
module "cdn_test" {
source = "cloudposse/cloudfront-cdn/aws"
  version = "0.24.1"

  name                            = "s3-test"
  aliases                         = ["test.com"]
  origin_domain_name              = module.s3_test-origin.bucket_domain_name
  origin_path                     = "/current"
  price_class                     = "PriceClass_All"
  dns_aliases_enabled             = false
  viewer_protocol_policy          = "redirect-to-https"
  viewer_minimum_protocol_version = "TLSv1.2_2021"
  
   
  acm_certificate_arn = module.acm_staging.arn
  logging_enabled = false
  comment = "test"
  tags = var.default_tags
}
Vucomir Ianculov avatar
Vucomir Ianculov

when the CDN is deployed, the default origin is of type Custom Origin, not an S3 origin that uses an Origin Access Identity

RB avatar
GitHub - cloudposse/terraform-aws-cloudfront-s3-cdn: Terraform module to easily provision CloudFront CDN backed by an S3 originattachment image

Terraform module to easily provision CloudFront CDN backed by an S3 origin - GitHub - cloudposse/terraform-aws-cloudfront-s3-cdn: Terraform module to easily provision CloudFront CDN backed by an S3…

Vucomir Ianculov avatar
Vucomir Ianculov

@RB Thanks

2022-01-21

andylamp avatar
andylamp

hi there again :slightly_smiling_face: - are there any concrete examples of ConfigMaps to use in https://registry.terraform.io/modules/cloudposse/eks-cluster/aws? like, for example, adding new users etc?

RB avatar
GitHub - cloudposse/terraform-aws-eks-cluster: Terraform module for provisioning an EKS clusterattachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

RB avatar

list(object({
  rolearn  = string
  username = string
  groups   = list(string)
}))

RB avatar
map_additional_iam_roles = [
  {
    rolearn  = ""
    username = ""
    groups   = []
  }
]
andylamp avatar
andylamp

ah, yes right - that’s helpful! I created the cluster and when I tried to refresh the state I got

│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp 127.0.0.1:80: connect: connection refused
│ 
│   with module.my_eks.module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/my_eks.eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│  115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
RB avatar
terraform-aws-eks-cluster/main.tf at b745ed18d8832c7e8e53966264687d2ee1d64e1a · cloudposse/terraform-aws-eks-clusterattachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

andylamp avatar
andylamp

that’s what I am trying to use, however I am adding custom IAM roles

andylamp avatar
andylamp

and users

andylamp avatar
andylamp

to be allowed

andylamp avatar
andylamp

I have only taken the bits (almost verbatim) from the node group and eks cluster examples - as I need to use a different VPC rather than create a new one.

andylamp avatar
andylamp

the cluster is created and it shows correctly, however when I view it on AWS it says that I don’t have access (even though I have provided the arn and the username, with empty groups, for the users I want to have access). It also caused this error on refresh.

andylamp avatar
andylamp

I could solve this by using

tf state rm "module.my_eks.module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]"

and then use targeted destroy on the module

andylamp avatar
andylamp

however, I do not think recreating will solve the issue

RB avatar

seems like you don’t have access to the eks cluster api so it’s defaulting to localhost

andylamp avatar
andylamp

yeah, I am using the template you’ve provided, but I am also adding, as I said, additional IAM users.

andylamp avatar
andylamp

through the provided map

andylamp avatar
andylamp

nothing else…

RB avatar

try just using the example, does it work correctly?

RB avatar

are you using your own subnets or also creating the subnets and vpc from the example?

andylamp avatar
andylamp

so, which user is that supposed to give access to - just by running the example?

andylamp avatar
andylamp

in order to stay safe, I will use the example and peer the VPC later on with the one I want to target.

andylamp avatar
andylamp

that should be a safe bet, right?

RB avatar

every pr validates the example using terratest so the full example should work

RB avatar

the test does a plan and apply and then destroy

RB avatar

ya that sounds like a safe bet

andylamp avatar
andylamp

alright - cool

andylamp avatar
andylamp

so, by running the example - which user owns the cluster?

andylamp avatar
andylamp

the one used the AWS keys to create it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yes, the user that runs terraform apply

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or the role

andylamp avatar
andylamp

right - alright, thanks! am modifying my code to first do the example verbatim, then add the new IAM users, and then do the peering. Fingers crossed that it works as expected!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)



Error: Get “http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth”: dial tcp 127.0.0.1:80: connect: connection refused

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a networking issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

could be several reasons for that:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. The cluster is deployed into private subnets only, and you can’t access the API server from your local computer
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Terraform can’t access the cluster to get the kubeconfig. The AWS provider is written in such a way that if it can’t access the remote cluster, it tries localhost
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

either way, the error in almost all cases (maybe not in all) is about networking and access, not about IAM

andylamp avatar
andylamp

right, however AWS said when I browsed EKS that the user I am using (which was the one I used to create the cluster) did not have access - is this normal?

andylamp avatar
andylamp

to be concrete, this is the message that I saw:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, this is related to IAM

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there could be two diff issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try our example first

andylamp avatar
andylamp

aye, thanks so much regardless both your and Ronak’s help have been great!

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it could be both networking and IAM issues b/c when you provision the cluster and you get

Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp 127.0.0.1:80: connect: connection refused
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you are still the same user/role with all the permissions, so it should not be related to IAM

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform-aws-eks-cluster/variables.tf at master · cloudposse/terraform-aws-eks-clusterattachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

andylamp avatar
andylamp

so, upon further inspection it seems that map_additional_iam_users is either not doing something correctly, or I am not setting it up properly.

andylamp avatar
andylamp
map_additional_iam_users = [
  {
    userarn  = "arn:aws:iam::1231132967555:user/myuser"
    username = "myuser"
    groups   = []
  }
]
andylamp avatar
andylamp

is this a correct way to place it?

andylamp avatar
andylamp

the user arn is the one that IAM shows in the dashboard

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to add some groups

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

- system:bootstrappers
- system:nodes
- system:masters
andylamp avatar
andylamp

oh, lol - I thought this was AWS groups

andylamp avatar
andylamp

andylamp avatar
andylamp

so what would be the groups to give full access to the cluster - would this be putting all the ones you provided as a list?

andylamp avatar
andylamp

btw, even using the full example as you provided, when I refresh something in the cluster (say, change its name) I still get the same error as before.

andylamp avatar
andylamp

wrt to Auth

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

system:masters

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for admins
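
So the earlier snippet becomes, with the group added:

map_additional_iam_users = [
  {
    userarn  = "arn:aws:iam::1231132967555:user/myuser"
    username = "myuser"
    groups   = ["system:masters"] # full admin on the cluster
  }
]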

Tyler Pickett avatar
Tyler Pickett

Howdy everyone, I opened a PR against terraform-aws-ecs-web-app and realized that there was probably some process that I missed. Is there a process for contributing documented somewhere?

feat: Expose underlying service task module's ignore_changes_desired_count by tpickett66 · Pull Request #180 · cloudposse/terraform-aws-ecs-web-appattachment image

what This exposes the ignore_changes_desire_count flag from terraform-aws-ecs-alb-service-task why When a service has scaled up the desired count will be changed by the Autoscaling process but a…

Tyler Pickett avatar
Tyler Pickett

It looks like my contributing question has been asked/answered before here but that is old enough that the history has been truncated

jose.amengual avatar
jose.amengual
SweetOps Slack Archive

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Tyler Pickett avatar
Tyler Pickett

Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we have a bug right now in our archive where it’s only showing Y22)

jose.amengual avatar
jose.amengual

who cares about 2019/2020/2021 erase them

jose.amengual avatar
jose.amengual

lol

Tyler Pickett avatar
Tyler Pickett

for real

mikesew avatar
mikesew

RDS question: what does everybody include for their lifecycle ignore_changes block? Some considerations:

• Db Engine version : we dont enable auto minor version upgrade, as we want to control outages.

• Storage allocation: we enable auto scaling, so..dont manage size w terraform?

2022-01-23

Mike Crowe avatar
Mike Crowe

Can somebody point me to how to use a http backend state with atmos? I’m trying:

terraform:
  vars: {}
  remote_state_backend_type: http
  remote_state_backend:
    http:
      config:
        address: <https://gitlab.com/api/v4/projects/>...
        lock_address: <https://gitlab.com/api/v4/projects/.../lock>
        unlock_address: <https://gitlab.com/api/v4/projects/.../lock>
        username: ...
        password: ...
        lock_method: POST
        unlock_method: DELETE
        retry_wait_min: 5  

but that doesn’t seem to work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t support http backend in atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please open an issue, we’ll get to it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for s3 backend, atmos just generates a file like this

{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "xxx-ue2-root-tfstate",
        "dynamodb_table": "xxx-ue2-root-tfstate-lock",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "us-east-2",
        "role_arn": xxxx,
        "workspace_key_prefix": "xxx"
      }
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so atmos itself does not process any backends, it lets terraform do it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but we’ll review the http backend and what could be done there

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mike Crowe

Mike Crowe avatar
Mike Crowe

OK, thanks @Andriy Knysh (Cloud Posse) – let me try that

Mike Crowe avatar
Mike Crowe

OK, based on that, this should work:

terraform:
  vars: {}
  backend_type: http
  backend:
    http:
      address: <https://gitlab.com/api/v4/projects/>...
      lock_address: <https://gitlab.com/api/v4/projects/>...
      unlock_address: <https://gitlab.com/api/v4/projects/>...
      username: ...
      password: ...
      lock_method: POST
      unlock_method: DELETE
      retry_wait_min: 5    
  
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos does not process backend type http

Mike Crowe avatar
Mike Crowe

So, is there a way to set variables like TF_HTTP_ADDRESS via the stack variables?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can set ENV vars globally or per component. The env section is a first-class section like vars - it gets deep-merged in this order: globals -> terraform globals -> base components -> component
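
As an illustration of that merge order, a minimal stack-config sketch (hypothetical component and variable names, following the structure of the test-component-override examples below):

terraform:
  env:                  # terraform-level global
    TEST_ENV_VAR1: val1
    TEST_ENV_VAR2: val2

components:
  terraform:
    my-component:
      env:              # component level is merged last, so it wins on conflicts
        TEST_ENV_VAR1: val1-override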

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos/test-component-override-3.yaml at master · cloudposse/atmosattachment image

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, etc) - atmos/test-component-override-3.yaml at master · cloudposse/atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those ENV vars will be detected and set in the shell before executing terraform commands https://github.com/cloudposse/atmos/pull/77

Detect ENV vars in YAML stack config and set them for command execution. Make `workspace_key_prefix` config DRY by aknysh · Pull Request #77 · cloudposse/atmosattachment image

what Detect ENV vars in YAML stack config and set them for command execution Make workspace_key_prefix config DRY Don&#39;t delete the generated terraform varfiles after each command Update tests …

Mike Crowe avatar
Mike Crowe

@Andriy Knysh (Cloud Posse) – just to confirm, I can do the following, right? stacks/pinnsg/ue1/dev.yaml:

import:
  - globals/pinnsg-globals
  - globals/ue1-globals
  - catalog/terraform/sso

vars:
  stage: dev

terraform:
  vars: {}  
  env:
    TF_HTTP_ADDRESS: https://gitlab.com/api/v4/projects/{gitlab_project}/terraform/state/{state_key}-{stage}
    TF_HTTP_LOCK_ADDRESS: https://gitlab.com/api/v4/projects/{gitlab_project}/terraform/state/{state_key}-{stage}/lock
    TF_HTTP_UNLOCK_ADDRESS: https://gitlab.com/api/v4/projects/{gitlab_project}/terraform/state/{state_key}-{stage}/lock

where gitlab_project and state_key are defined in catalog/terraform/sso

Mike Crowe avatar
Mike Crowe

Questions:
• Do we persist/show environment variables somehow?
• Terraform says the environment variable is TF_HTTP_ADDRESS. Atmos isn’t prefixing with TF_VAR by chance, is it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mike Crowe we don’t replace tokens in ENV vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when executing atmos commands, it detects the ENV vars, adds them to the shell that executes the commands, and shows them in the output

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Detect ENV vars in YAML stack config and set them for command execution. Make `workspace_key_prefix` config DRY by aknysh · Pull Request #77 · cloudposse/atmosattachment image

what Detect ENV vars in YAML stack config and set them for command execution Make workspace_key_prefix config DRY Don&#39;t delete the generated terraform varfiles after each command Update tests …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Command info:
Terraform binary: /usr/local/bin/terraform
Terraform command: plan
Arguments and flags: []
Component: test/test-component-override
Base component: test/test-component
Stack: tenant1/ue2/dev
Working dir: ./examples/complete/components/terraform/test/test-component

Using ENV vars:
TEST_ENV_VAR3=val3-override
TEST_ENV_VAR1=val1-override
TEST_ENV_VAR2=val2
TEST_ENV_VAR4=val4
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Using ENV vars: shows the detected ENV vars for the component; they are already set in the executing shell

Mike Crowe avatar
Mike Crowe

@Andriy Knysh (Cloud Posse) – are ENV vars used in the terraform init pass? I’m getting no ENV vars when I run:

❯ atmos terraform plan sso --stack=pinnsg-ue1-dev 

Variables for the component 'sso' in the stack 'pinnsg/ue1/dev':

enabled: true
environment: ue1
namespace: psg
region: us-east-1
stage: dev
state_key: pinnsg
tenant: pinnsg

Writing the variables to file:
components/terraform/sso/pinnsg-ue1-dev-sso.terraform.tfvars.json

Executing command:
/usr/bin/terraform init

It pulls my vars fine – but I don’t get any environment variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

env section is processed on all commands and is printed in the console

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

show me your YAML config @Mike Crowe

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also check that you are using the latest version of atmos

Mike Crowe avatar
Mike Crowe
❯ atmos version
v1.3.23
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for the backend type http, can’t we just serialize it the way we do for s3?

Mike Crowe avatar
Mike Crowe

I don’t think so. As best I can figure, it looks like the http backend does not support terraform workspaces.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(oh this was a question for andriy)

Mike Crowe avatar
Mike Crowe

I get this error:

Executing command:
/usr/bin/terraform workspace select pinnsg-ue1-dev

workspaces not supported

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We’ll have to review the http backend and update atmos to support it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I guess the issue is that we dynamically manage the workspace parameter as part of the backend configuration

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so that’s why it needs to be handled per backend

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe we could generalize it; we didn’t think about it since we only used the s3 backend to generate backend files (we have other backends, but they are custom)

Mike Crowe avatar
Mike Crowe

I’m going to move our backend into S3 to avoid this whole issue

2022-01-24

SlackBot avatar
SlackBot
01:12:24 PM

This message was deleted.

2022-01-25

Kevin Kenny avatar
Kevin Kenny

I am new to TFE and am currently working on automating TFE workspaces, including adding the env variables. Is there any source code I can take as an example? I saw the example below, but I’m not sure how they add the env vs regular variables to the workspace: https://github.com/cloudposse/terraform-tfe-cloud-infrastructure-automation

GitHub - cloudposse/terraform-tfe-cloud-infrastructure-automation: Terraform Enterprise/Cloud Infrastructure Automationattachment image

Terraform Enterprise/Cloud Infrastructure Automation - GitHub - cloudposse/terraform-tfe-cloud-infrastructure-automation: Terraform Enterprise/Cloud Infrastructure Automation

Matt Gowie avatar
Matt Gowie

tfe-cloud-infra-automation is pretty out of date / abandoned. TFE isn’t the best solution on the market today, so Cloud Posse didn’t continue to invest in it, plus we don’t have many folks contributing to that particular module.

Unfortunately, I don’t know of another module out there that automates TFE, so don’t know where to direct you. I would read through the TFE provider resources and see if any of them do what you want to do.

GitHub - cloudposse/terraform-tfe-cloud-infrastructure-automation: Terraform Enterprise/Cloud Infrastructure Automationattachment image

Terraform Enterprise/Cloud Infrastructure Automation - GitHub - cloudposse/terraform-tfe-cloud-infrastructure-automation: Terraform Enterprise/Cloud Infrastructure Automation

Kevin Kenny avatar
Kevin Kenny

Thanks @Matt Gowie, appreciate your response.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, unfortunately, we weren’t able to get any customers on board for TFE, so this is on pause until such a time as that changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - cloudposse/terraform-spacelift-cloud-infrastructure-automation: Terraform module to provision Spacelift resources for cloud infrastructure automationattachment image

Terraform module to provision Spacelift resources for cloud infrastructure automation - GitHub - cloudposse/terraform-spacelift-cloud-infrastructure-automation: Terraform module to provision Spacel…

mikesew avatar
mikesew

Terraform question: how do I get the first/single item out of a set? I see the docs show the one() function, but that’s 0.15+ only. I was hoping for a more general/backwards-compatible method (I’m on 0.14)

loren avatar

Do you have try()?
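
(Presumably suggesting a guard along these lines; a sketch with a hypothetical variable, not loren’s actual code. try() has been available since Terraform 0.12.20:)

variable "some_set" {
  type = set(string)
}

locals {
  # convert the set to a list and take the first element;
  # fall back to null instead of erroring when the set is empty
  first_item = try(tolist(var.some_set)[0], null)
}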

RB avatar

couldnt you use a 0 index? like myset[0]

mikesew avatar
mikesew

@RB: I thought [0] indexes were only for lists… sets don’t have any order, from what I’m reading? I’ll try now though, hold

## This will give me a set of roles, of which there
## should be only ONE element (i.e. one role). Just want
## that role =]
data "aws_iam_roles" "readonly" {
  name_regex  = "AWSReservedSSO_AWSReadOnly_.*"
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}
mikesew avatar
mikesew

… I think I solved it. First convert it to a list with tolist(), then get the 1st element of that list with element()

# tf0.15 syntax: one() takes the set and returns its single element directly
readonly_role_name = one(data.aws_iam_roles.readonly.names)

# tf0.14 syntax
readonly_role_name = element(tolist(data.aws_iam_roles.readonly.names), 0)
RB avatar

nice!

this would work too

tolist(data.aws_iam_roles.readonly.names)[0]
1

2022-01-26

JB avatar

Having some issues reconciling the account-map and the account components from the terraform-aws-components repo. It looks like PR #363 updated account-map to a version not compatible with the currently published account component. In particular, I am trying to sort out how to reverse engineer the required account_info_map output and would greatly appreciate a nudge in the right direction ;)

2022-01-27

Bhavik Patel avatar
Bhavik Patel

anyone have any examples of how to make certain resources within a module optional?

Alex Jurkiewicz avatar
Alex Jurkiewicz
variable "enable_iam_role" {
  type = bool
}

resource "aws_iam_role" "this" {
  count = var.enable_iam_role ? 1 : 0
  # ...
}

?

3
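
One follow-on worth noting with the count pattern (an editorial sketch, not from the thread): the resource becomes a list, so downstream references need an index, guarded by the same flag:

output "iam_role_arn" {
  # aws_iam_role.this is a list when count is used; [0] only exists
  # when the flag is true, so guard the index with the same condition
  value = var.enable_iam_role ? aws_iam_role.this[0].arn : null
}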
Bhavik Patel avatar
Bhavik Patel

Thank you sir!

1

2022-01-28

Zachary Loeber avatar
Zachary Loeber

Taking a peek at Atmos after a long break (in IT time, so like a few months heh) and I’m totally impressed by a very focused and well thought out implementation of a declarative manifest processing engine. I’m curious, was there ever any thought of using a config library like Cuelang to better manage schema level updates and changes?

Zachary Loeber avatar
Zachary Loeber

Context: I’ve come up with an engine for my current client that does a large amount of what Atmos does but targeted squarely at multi-team hashicorp vault deployments (so like an internal SaaS for the many Vault secret/auth engines a team may want to use). I made the schema itself some yaml files as well but using Python and Yamale for validation.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are constantly improving atmos and have a lot of new features to implement

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and yes, schema validation is one of them

Zachary Loeber avatar
Zachary Loeber

But as I add more features, I find the manifest schema the most arcane part to manage. Additionally, maturation of the app could lead to parts of your manifest becoming REST or GraphQL queries. This leads down the line of openapi/swagger/et cetera and figuring out how declarative manifests can be created/managed at the start of a project so that the underlying schema threads through the component parts

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we started w/o validation, but we need to add it since it will solve many possible issues with invalid config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for pointing it out, we’ll review Cuelang (this one looks good at first glance)

Zachary Loeber avatar
Zachary Loeber

I wish I were skilled enough to make Cuelang my b**ch but to really use it one needs more Golang acumen than I possess ATM

Zachary Loeber avatar
Zachary Loeber

I did use it as a generic yaml schema validation task in a pipeline via its cli though, that wasn’t super hard.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we were thinking about using Cuelang modules in our code; we’ll implement validation either with Cuelang or without it)

Zachary Loeber avatar
Zachary Loeber

Perhaps integrating it for the validation bits would be something to look into as it claims to have automatic backwards compatible schema capabilities and such

Zachary Loeber avatar
Zachary Loeber

I believe istio uses it

1
Zachary Loeber avatar
Zachary Loeber

I was going to offer up my toddler-level work in using it for basic schema validation, but I’m certain istio has far more interesting examples….

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you have any istio links showing the examples of using Cuelang?

Zachary Loeber avatar
Zachary Loeber

I think they only use it for schema definition for translation into OpenAPI; it was pretty complex, as they have several repos working in concert

tamsky avatar

What is Cloudposse’s current “top pick” for centralized terraform operations/collaboration? Atlantis / Scalr / Spacelift / TF Cloud/Enterprise / Something-else?

1
RB avatar

spacelift

1
2
RB avatar
Customer Success Story | Cloud Posseattachment image

Read how Cloud Posse discovered the power of Spacelift and any IaC implementation for a customer includes Spacelift as part of the final DevOps IaC environment.

RB avatar
GitHub - cloudposse/terraform-spacelift-cloud-infrastructure-automation: Terraform module to provision Spacelift resources for cloud infrastructure automationattachment image

Terraform module to provision Spacelift resources for cloud infrastructure automation - GitHub - cloudposse/terraform-spacelift-cloud-infrastructure-automation: Terraform module to provision Spacel…

tamsky avatar

Thanks for all that!

Beyond the success story above — I’d be curious to hear approximately how many clients are having a good outcome with Spacelift?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

more than 10 of our clients are using Spacelift now

2
tamsky avatar

Thanks again!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in fact, we love it so much that it’s part of atmos, our command-line tool

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos/pkg/spacelift at master · cloudposse/atmosattachment image

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, etc) - atmos/pkg/spacelift at master · cloudposse/atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform-yaml-stack-config/modules/spacelift at main · cloudposse/terraform-yaml-stack-configattachment image

Terraform module that loads an opinionated &quot;stack&quot; configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform-yaml-stack-config/examples/spacelift at main · cloudposse/terraform-yaml-stack-configattachment image

Terraform module that loads an opinionated &quot;stack&quot; configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

Alex Jurkiewicz avatar
Alex Jurkiewicz

I use spacelift (not part of cloudposse) and am a big fan

1
Matt Gowie avatar
Matt Gowie

Yeah Spacelift gets two thumbs up from me as well

1

2022-01-29

Muhammad Badawy avatar
Muhammad Badawy

Hello, I’m trying to understand Cloud Posse terraform components and how they are connected/invoked: the modules, labels, contexts, etc. Any high-level explanation/architecture? Thanks in advance

pjaudiomv avatar
pjaudiomv

There’s a YouTube video that explains the contexts here https://youtu.be/V2b5F6jt6tQ

2
Matt Gowie avatar
Matt Gowie

Check out the docs on Atmos as well — https://docs.cloudposse.com/

1
Muhammad Badawy avatar
Muhammad Badawy

Thanks guys

2022-01-30

Mike Crowe avatar
Mike Crowe

How would you integrate a root Route53 domain where you purchased the domain in AWS (and thus the Route53 zone was auto-created)? I’m trying to understand how to wire that up to dns-primary (or, more specifically, dns-delegated)

Alex Jurkiewicz avatar
Alex Jurkiewicz

import your existing zone & SOA record
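
A sketch of what that import might look like (hypothetical resource addresses; the zone ID comes from the Route53 console, and record imports use the ZONEID_RECORDNAME_TYPE format):

terraform import aws_route53_zone.primary Z1D633PJN98FT9
terraform import aws_route53_record.soa Z1D633PJN98FT9_example.com_SOA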

RB avatar

usually dns-primary is provisioned in the dns account, and then dns-delegated is provisioned in a member account with a subdomain delegated from the hosted zone in the primary

Mike Crowe avatar
Mike Crowe

@RB Don’t you have to create the name server records in the dns primary zone for the delegated dns?

RB avatar

i believe that’s what dns-delegated does
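
Mechanically, that delegation boils down to publishing the child zone’s NS records in the parent zone. A generic sketch (hypothetical zone names, not the component’s actual code; the parent zone is assumed to already exist in the dns account):

resource "aws_route53_zone" "dev" {
  name = "dev.example.com" # child zone in the member account
}

resource "aws_route53_record" "delegation" {
  zone_id = aws_route53_zone.parent.zone_id # parent zone, assumed to exist
  name    = "dev.example.com"
  type    = "NS"
  ttl     = 300
  records = aws_route53_zone.dev.name_servers
}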

RB avatar
terraform-aws-components/main.tf at e28ac31f7333843184c6996474d75b45cdca9c67 · cloudposse/terraform-aws-componentsattachment image

Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/main.tf at e28ac31f7333843184c6996474d75b45cdca9c67 · cloudposse/terraform-aws-co…

IK avatar

Hey guys… does anyone have a way to propagate the TGW attachment Name tag to the TGW owner account? We are creating TGW attachments using TF in spoke accounts, which is fine; however, it would be great for the attachment in the TGW owner account to also be named and tagged appropriately. Cheers!

RB avatar

you’d need a lambda to do this, I’d imagine

RB avatar

unfortunately there isn’t a generic aws_tag resource for tagging arbitrary resources; if there were, we would be able to create a resource with an aliased provider to create these tags cross-account

IK avatar

Yeah, I figured. I thought there was a way of doing this by using an aliased provider, assuming a role in the owner account, and making changes there, but there doesn’t seem to be a way to modify an existing TGW attachment
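
For what it’s worth, the AWS provider does have a narrower aws_ec2_tag resource that manages a single tag on an EC2-scoped resource (which includes TGW attachments). Combined with an aliased provider assuming a role in the owner account, a speculative sketch might look like this (the role ARN and attachment reference are hypothetical):

provider "aws" {
  alias  = "tgw_owner"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/tgw-admin" # hypothetical
  }
}

resource "aws_ec2_tag" "attachment_name" {
  provider    = aws.tgw_owner
  resource_id = aws_ec2_transit_gateway_vpc_attachment.spoke.id
  key         = "Name"
  value       = "spoke-attachment"
}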

2022-01-31

Nick Kocharhook avatar
Nick Kocharhook

I’m attempting to use ecs-web-app to set up a Fargate instance which handles traffic from the internet. When I attempt to apply on Terraform Cloud, I get this error:

Error: error creating ECS service (ocs-staging-myproj): InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-2:xxxxxxxxxxxx:targetgroup/ocs-staging-osprey/721e8ee36076b407 does not have an associated load balancer.

  with module.project_module.module.web_app.module.ecs_alb_service_task.aws_ecs_service.ignore_changes_task_definition[0]
  on .terraform/modules/project_module.web_app.ecs_alb_service_task/main.tf line 326, in resource "aws_ecs_service" "ignore_changes_task_definition":

And when I look on AWS, I can see two target groups: ocs-staging-default and ocs-staging-myproj. The first has a load balancer of “ocs-staging”, and the second indeed says “None associated.” The code I’m using is in the thread>>

Nick Kocharhook avatar
Nick Kocharhook

I’m creating a cloudposse vpc, dynamic-subnets and alb, as well as an aws_ecs_cluster. And then I have this ecs-web-app module:

module "web_app" {
  source = "cloudposse/ecs-web-app/aws"
  version     = "0.67.1"

  region      = var.region
  name        = var.project_name

  launch_type = "FARGATE"
  vpc_id      = module.vpc.vpc_id

  container_environment = [
    {
      name  = "LAUNCH_TYPE"
      value = "FARGATE"
    },
    {
      name  = "VPC_ID"
      value = module.vpc.vpc_id
    }
  ]

  # Container
  alb_container_name = module.this.name
  container_image   = "${var.aws_account_id}.dkr.ecr.${var.region}.amazonaws.com/${var.ecr_repository_name}:${data.aws_ecr_image.flask.id}"
  container_cpu     = 256
  container_memory  = 512
  container_port    = 80
  build_timeout     = 10

  # CI/CD
  codepipeline_enabled = true
  github_oauth_token    = "/Prod/GITHUB_OAUTH_TOKEN"
  github_webhooks_token = var.GITHUB_TOKEN
  repo_owner            = var.repository_owner
  repo_name             = var.repository_name
  branch                = var.repository_branch

  badge_enabled        = false
  ecs_alarms_enabled   = false

  # Autoscaling
  autoscaling_enabled               = false
  autoscaling_dimension             = "cpu"
  autoscaling_min_capacity          = 1
  autoscaling_max_capacity          = 2
  autoscaling_scale_up_adjustment   = 1
  autoscaling_scale_up_cooldown     = 60
  autoscaling_scale_down_adjustment = -1
  autoscaling_scale_down_cooldown   = 300

  # ECS
  aws_logs_region         = var.region
  desired_count           = 1
  ecs_cluster_arn         = aws_ecs_cluster.default.arn
  ecs_cluster_name        = aws_ecs_cluster.default.name
  ecs_security_group_ids  = [module.vpc.vpc_default_security_group_id]
  ecs_private_subnet_ids  = module.subnets.private_subnet_ids

  alb_security_group                              = module.alb.security_group_id
  alb_target_group_alarms_enabled                 = true
  alb_target_group_alarms_3xx_threshold           = 25
  alb_target_group_alarms_4xx_threshold           = 25
  alb_target_group_alarms_5xx_threshold           = 25
  alb_target_group_alarms_response_time_threshold = 0.5
  alb_target_group_alarms_period                  = 300
  alb_target_group_alarms_evaluation_periods      = 1

  alb_arn_suffix = module.alb.alb_arn_suffix

  alb_ingress_healthcheck_path = "/"

  # NOTE: Cognito and OIDC authentication only supported on HTTPS endpoints; here we provide `https_listener_arn` from ALB
  alb_ingress_authenticated_listener_arns       = [module.alb.https_listener_arn]
  alb_ingress_authenticated_listener_arns_count = 1

  # Unauthenticated paths (with higher priority than the authenticated paths)
  alb_ingress_unauthenticated_paths             = ["/", "/test"]
  alb_ingress_listener_unauthenticated_priority = 50

  # Authenticated paths
  alb_ingress_authenticated_paths             = ["/api/v1/*"]
  alb_ingress_listener_authenticated_priority = 100

  aws_logs_prefix   = module.this.name

  # authentication_type                        = "COGNITO"
  # authentication_cognito_user_pool_arn       = var.cognito_user_pool_arn
  # authentication_cognito_user_pool_client_id = var.cognito_user_pool_client_id
  # authentication_cognito_user_pool_domain    = var.cognito_user_pool_domain

  context = module.this.context
}
Shrivatsan Narayanaswamy avatar
Shrivatsan Narayanaswamy

Maybe you could associate the load balancer with the service, e.g. like this:

resource "aws_ecs_service" "nginx_deployment" {
  name            = "nginx-deployment"
  task_definition = aws_ecs_task_definition.nginx_deployment_definition.arn
  cluster         = data.terraform_remote_state.platform.outputs.ecs_cluster_id
  desired_count   = "1"
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = data.terraform_remote_state.platform.outputs.private_subnet_ids
    security_groups = [aws_security_group.ecs.id]
  }

  load_balancer {
    container_name   = "nginx-deployment"
    container_port   = "80"
    target_group_arn = aws_alb_target_group.nginx_deployment_green.id
  }
}

Nick Kocharhook avatar
Nick Kocharhook

Hi, thanks for the reply! Doesn’t the ecs-web-app module already set up all the required connections? I thought it would make and hook up the service and any necessary target groups.

I definitely see that, even if I remove the target groups on AWS, the target groups get created. (They are both for 80/HTTP though, oddly.) It’s just that one of them isn’t associated with the load balancer. I am not sure which module is creating this target group but then not associating it with the alb.

Nick Kocharhook avatar
Nick Kocharhook

OK, so I diagnosed this by adding attributes = ["webapp"] to the web app declaration and “alb” to the ALB. And that showed me that the errant target group was being created by the web app module. So then I found that creating the target group in the web app is configurable, so I turned it off:

  alb_ingress_enable_default_target_group = false
  alb_ingress_target_group_arn = module.alb.default_target_group_arn

And now there’s a single target group (which is what I wanted), and it’s properly connected to the ALB.

Nick Kocharhook avatar
Nick Kocharhook

I wonder if ecs-web-app has changed, because it seems that this behavior means the web app examples as written don’t work.

András Sándor avatar
András Sándor

For posterity, if someone else is getting the same error: I had the same error using the ecs-web-app module only (I did NOT use the other cloudposse modules; I had created the VPC, subnets, ECS cluster and ALB separately). My issue was that ecs-web-app creates the aws_lb_listener_rule in the cloudposse/alb-ingress/aws child module conditionally, based on the var.alb_ingress_unauthenticated_hosts, var.alb_ingress_unauthenticated_paths, etc. variables. If the relevant variable is empty, the listener rule is not created, so the target group will throw the error with no ALB attached to it. Spent more time with this than I care to admit; hope no one else has to.
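
In other words (a sketch using the variable names from the message above, not a complete configuration): make sure at least one of the host/path conditions is non-empty so the listener rule, and with it the target group’s ALB association, actually gets created:

module "web_app" {
  source = "cloudposse/ecs-web-app/aws"
  # ...all other inputs elided...

  # at least one condition must be non-empty, otherwise the
  # aws_lb_listener_rule in the alb-ingress child module is skipped
  # and the target group has no associated load balancer
  alb_ingress_unauthenticated_paths = ["/*"]
}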

Marcelo avatar
Marcelo

Hi guys… this is my first message here… I would like to know if what I am trying to do is possible using just terraform…

I am trying to create an ECR module and get the repository_url output to use as a variable in the Task Definition module…

The task_definition container_image variable is trying to get the ECR output :/

container_image = module.ecr_container_tiktok_api.ecr_tiktok_api.*.repository_url

I am Using AWS environment.

Nick Kocharhook avatar
Nick Kocharhook

What does the repository_url output from the ecr module look like?

Marcelo avatar
Marcelo

Hi Nick … tks for your help … :slightly_smiling_face:

the content of repository_url is: 20xxxxxxxxx1.dkr.ecr.us-east-1.amazonaws.com/tiktok-front

Nick Kocharhook avatar
Nick Kocharhook

Cool, so you should be able to do something like this:

container_image = "${module.ecr_tiktok_api.repository_url}:my_image_tag"
1
Nick Kocharhook avatar
Nick Kocharhook

I’m not entirely sure why you have two ecr modules declared?

Marcelo avatar
Marcelo

I am creating a template module based on the ecr module… to give our devs the ability to create their own “infrastructure”… based just on the template module, which is simpler than the full ECR module!!! This is just a test…

Nick Kocharhook avatar
Nick Kocharhook

sounds good. hopefully that will work for you

1
Marcelo avatar
Marcelo

thank you very much for your help

1
mrwacky avatar
mrwacky

Is it correct to say that the Terraform state format hasn’t changed between 0.14 and 1.1? The 0.14 upgrade guide says:
Terraform v0.14 does not support legacy Terraform state snapshot formats from prior to Terraform v0.13, so before upgrading to Terraform v0.14 you must have successfully run terraform apply at least once with Terraform v0.13 so that it can complete its state format upgrades.
But newer versions are silent on this issue.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect there can be subtle changes. The reason it’s probably not discussed is b/c terraform has promised all versions of 1.x will be backwards compatible, just not always forwards compatible.
You should be able to upgrade from any v1.x release to any later v1.x release. You might also be able to downgrade to an earlier v1.x release, but that isn’t guaranteed: later releases may introduce new features that earlier versions cannot understand, including new storage formats for Terraform state snapshots
https://www.terraform.io/language/v1-compatibility-promises

Terraform v1.0 Compatibility Promises | Terraform by HashiCorpattachment image

From Terraform v1.0 onwards the Terraform team promises to preserve backward compatibility for most of the Terraform language and the primary CLI workflow, until the next major release.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is your concern more about reading remote state from within 0.14 root modules to some modules provisioned with 1.1?

mrwacky avatar
mrwacky

Just wondering if I can go from 0.14 -> 1.1 directly. The v1.0 upgrade notes say

mrwacky avatar
mrwacky

(all signs point to: just try it)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, honestly - that’s your best course of action

mrwacky avatar
mrwacky

Ah - they are explicit about it https://www.terraform.io/language/upgrade-guides/1-0#remote-state-compatibility
If you are upgrading from Terraform v0.14 or Terraform v0.15 to Terraform v1.0 then you can upgrade your configurations in any order, because all three of these versions have intercompatible state snapshot formats.

1
mrwacky avatar
mrwacky

Thank you sir
