#terraform (2021-12)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-12-01

Alex Ware avatar
Alex Ware

hi, I’m trying to use terraform-aws-ec2-client-vpn, but I’m getting Error creating Client VPN endpoint: InvalidParameterValue: Certificate arn::see_no_evil: does not have a domain, which has confused me slightly, as the parameters for the VPN module don’t seem to accept a domain for the cert or allow one to be given by ARN

Release notes from terraform avatar
Release notes from terraform
10:43:40 PM

v1.1.0-rc1 1.1.0-rc1 (Unreleased) UPGRADE NOTES:

Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.

The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed…

2021-12-02

Ryan Ryke avatar
Ryan Ryke

hey guys and gals, has anyone done the work to allow for awsfirelens in fargate https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/variables.tf#L522

terraform-aws-ecs-web-app/variables.tf at master · cloudposse/terraform-aws-ecs-web-appattachment image

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - terraform-aws-ecs-web-app/variables.tf at master · cloudposse/terraf…

RB avatar

Yep, at the moment you can only do it if you use each of those modules directly

RB avatar

Feel free to submit a PR to override the log configuration with a custom one

Ryan Ryke avatar
Ryan Ryke

tell me what you think of this approach @RB

  log_configuration = var.cloudwatch_log_group_enabled ? {
    logDriver = var.log_driver
    options = {
      "awslogs-region"        = coalesce(var.aws_logs_region, data.aws_region.current.name)
      "awslogs-group"         = join("", aws_cloudwatch_log_group.app.*.name)
      "awslogs-stream-prefix" = var.aws_logs_prefix == "" ? module.this.name : var.aws_logs_prefix
    }
    secretOptions = null
  } : {
      logDriver = var.log_driver
      options = {
        "papertrail_port" = "40723"
        "papertrail_host" = "logsn.papertrailapp.com"
        "@type" = "papertrail"
      }
      secretOptions = null
  }
Ryan Ryke avatar
Ryan Ryke

except for options I’ll just have a new var called log_options

Ryan Ryke avatar
Ryan Ryke

then use the defaults as the cwl ones
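
For context, a rough sketch of what that could look like (the log_options variable and its wiring are hypothetical here, not something the module exposes today):

  variable "log_options" {
    type        = map(string)
    default     = null
    description = "Optional override for the log_configuration options map; null keeps the CloudWatch Logs defaults"
  }

  # hypothetical wiring: fall back to the CloudWatch Logs defaults when no override is given
  log_configuration = {
    logDriver = var.log_driver
    options = var.log_options != null ? var.log_options : {
      "awslogs-region"        = coalesce(var.aws_logs_region, data.aws_region.current.name)
      "awslogs-group"         = join("", aws_cloudwatch_log_group.app.*.name)
      "awslogs-stream-prefix" = var.aws_logs_prefix == "" ? module.this.name : var.aws_logs_prefix
    }
    secretOptions = null
  }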

Ryan Ryke avatar
Ryan Ryke

its super messy

RB avatar

why not just override all of log_configuration

Ryan Ryke avatar
Ryan Ryke

was trying to minimize the amount of changes

Ryan Ryke avatar
Ryan Ryke

but thinking about it more i might just create the container def outside the web-app module

Ryan Ryke avatar
Ryan Ryke

and leave it be

RB avatar

ya that would work too

Ryan Ryke avatar
Ryan Ryke

sorta wish i would have thought of that earlier

RB avatar

lol me too. i forgot we can override the entire container

Ryan Ryke avatar
Ryan Ryke

while I have you, any thoughts on removing the github provider from the web app module?

Ryan Ryke avatar
Ryan Ryke

its in the codepipeline module

Ryan Ryke avatar
Ryan Ryke

it breaks my for_each loop, and from what I was reading it’s moving away from best practice

Ryan Ryke avatar
Ryan Ryke

seems to me the way the options are configured wouldn’t allow it https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L95

terraform-aws-ecs-web-app/main.tf at master · cloudposse/terraform-aws-ecs-web-appattachment image

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - terraform-aws-ecs-web-app/main.tf at master · cloudposse/terraform-a…

2021-12-03

DaniC (he/him) avatar
DaniC (he/him)

hi folks, not sure if this is the right channel to ask:

how do you manage the K8s objects?

  1. tf (which will deploy eks ) + helm + charts ?
  2. tf (which will deploy eks) + argocd + helm charts ?
  3. tf + kustomize (no helm)?
  4. others? Anyone have any feedback on choosing helm vs kustomize?
RB avatar
GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam roleattachment image

Create helm release and common aws resources like an eks iam role - GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam role
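
For a concrete picture of option 1, a minimal sketch using the plain helm provider directly (the linked Cloud Posse module wraps helm_release plus supporting AWS resources; the module.eks outputs, auth data source, and chart below are placeholders):

  provider "helm" {
    kubernetes {
      host                   = module.eks.eks_cluster_endpoint
      cluster_ca_certificate = base64decode(module.eks.eks_cluster_certificate_authority_data)
      token                  = data.aws_eks_cluster_auth.this.token
    }
  }

  resource "helm_release" "metrics_server" {
    name       = "metrics-server"
    repository = "https://kubernetes-sigs.github.io/metrics-server/"
    chart      = "metrics-server"
    namespace  = "kube-system"
  }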

DaniC (he/him) avatar
DaniC (he/him)

thanks @RB

np1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would say it depends

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do use this terraform module for backing services including deploying argocd

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And argocd for custom apps

DaniC (he/him) avatar
DaniC (he/him)

right, I see. The challenge I have is that I have a mix of different services / infra:

• infra: AWS VPC / TGW / RDS / EC2 / ECS / EKS spread across N accounts

• apps: containers running in ECS / EKS, some 3rd-party helm apps, lambdas, API GW, step functions, etc. Ideally I’d like to stick to one CI and one CD to drive it all via TF. I’ve tried, and learnt through pain, that the simple GHA + TF apply-after-merge-to-main approach doesn’t work (even though the majority of the web raves about it, except folks in this place).

will read more on Argo to see if it can drive non-K8s apps using TF.

Mauricio Wyler avatar
Mauricio Wyler

I had used Argo + kustomize, and they work perfectly.

Going back to the question of when to choose helm or kustomize: I’m on a project with a couple of applications (nothing complex), and they made charts for them.

Is it a good idea to use Argo to deploy and update the charts? Or would it be better to go back to kustomize and a central repo?

Thanks

RB avatar

Please upvote if you want a progress bar in terraform https://github.com/hashicorp/terraform/issues/28512

Estimate provisioning time · Issue #28512 · hashicorp/terraformattachment image

Current Terraform Version v0.15 Use-cases Terraform command like plan/apply/destroy can tell users an estimation of the time needed to finish the operation, so that users can have a sense on how lo…

1

2021-12-04

Evair Marinho Vilas Boas Porfirio avatar
Evair Marinho Vilas Boas Porfirio

Hey guys! I hope you are all doing well! I would like to know what tasks the terraform-provider-awsutils provider performs. Would someone be able to answer that for me?

MrAtheist avatar
MrAtheist

Anyone tried to tf apply an org cloudtrail and got denied because the “cloudtrail service” hasn’t been enabled for the org? Any hack to automate this step and not have to resort to manual clicking in the console?

• seems like it’s not recommended to enable cloudtrail as part of the org…

• but then i don’t see any “toggle” under cloudtrail besides literally creating a new trail and checking “enable for all accounts in my organization”

MrAtheist avatar
MrAtheist
1
Evair Marinho Vilas Boas Porfirio avatar
Evair Marinho Vilas Boas Porfirio

Reading the messages, it’s really confusing.

MrAtheist avatar
MrAtheist

isn’t that the case for everything in aws…?

MrAtheist avatar
MrAtheist

seems like this is the trick with cli

aws organizations enable-aws-service-access --service-principal cloudtrail.amazonaws.com

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-an-organizational-trail-by-using-the-aws-cli.html#clo[…]vice

Creating a trail for an organization with the AWS Command Line Interface - AWS CloudTrail

Create, update, and manage a trail for an organization with the AWS Command Line Interface.

Evair Marinho Vilas Boas Porfirio avatar
Evair Marinho Vilas Boas Porfirio

Issue opportunity!

Evair Marinho Vilas Boas Porfirio avatar
Evair Marinho Vilas Boas Porfirio

aws_organizations_enable_aws_service_access

MrAtheist avatar
MrAtheist

hmm come to think of it the cli is still initiating from “organizations” perspective (i.e. aws organizations….) and not from cloudtrail. So i guess it’s the same button as the one in this screenshot i attached

aws organizations enable-aws-service-access --service-principal cloudtrail.amazonaws.com
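
If the organization itself is managed in Terraform, one way to make that same toggle declarative (assuming you’re comfortable managing or importing the org resource) is the aws_service_access_principals argument:

  resource "aws_organizations_organization" "this" {
    feature_set = "ALL"

    aws_service_access_principals = [
      "cloudtrail.amazonaws.com",
    ]
  }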

2021-12-05

2021-12-06

David avatar

Is anyone familiar with a method to get this dynamic statement

  dynamic "statement" {
    for_each = var.principals_lambda

    content {
      effect  = "Allow"
      actions = [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]

      principals {
        type        = "Service"
        identifiers = ["lambda.amazonaws.com"]
      }

      condition {
        test     = "StringLike"
        values   = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
        variable = "aws:sourceArn"
      }
    }
  }

to return the following:

+ {
                      + Action    = [
                          + "ecr:GetDownloadUrlForLayer",
                          + "ecr:BatchGetImage",
                        ]
                      + Condition = {
                          + StringLike = {
                              + aws:sourceArn = [
                                  + "arn:aws:lambda:*:222222222222:function:*",
                                  + "arn:aws:lambda:*:333333333333:function:*"
                                ]
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "lambda.amazonaws.com"
                        }
                      + Sid       = "LambdaECRImageCrossAccountRetrievalPolicy"
                    },

rather than

+ {
                      + Action    = [
                          + "ecr:GetDownloadUrlForLayer",
                          + "ecr:BatchGetImage",
                        ]
                      + Condition = {
                          + StringLike = {
                              + aws:sourceArn = [
                                  + "arn:aws:lambda:*:222222222222:function:*",
                                  + "arn:aws:lambda:*:333333333333:function:*"
                                ]
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "lambda.amazonaws.com"
                        }
                      + Sid       = ""
                    },
                  + {
                      + Action    = [
                          + "ecr:GetDownloadUrlForLayer",
                          + "ecr:BatchGetImage",
                        ]
                      + Condition = {
                          + StringLike = {
                              + aws:sourceArn = [
                                  + "arn:aws:lambda:*:222222222222:function:*",
                                  + "arn:aws:lambda:*:333333333333:function:*"
                                ]
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "lambda.amazonaws.com"
                        }
                      + Sid       = ""
                    },
loren avatar

why use the dynamic block at all?

loren avatar

you are not accessing the dynamic values, so it seems unnecessary

loren avatar

e.g. statement.key and statement.value

David avatar
feat: Cross account ECR for lambda functions by dsme94 · Pull Request #88 · cloudposse/terraform-aws-ecrattachment image

what With the introduction of cross-account ECR for lambda functions, I have put together the necessary code to allow for this functionality why Cross-account ECR is a feature many would use as …

David avatar

this should help provide the reasoning for the dynamic statement

loren avatar

i don’t actually see any reasoning described in that pr. just code

David avatar

so we might not necessarily want to add the lambda statement in, i.e not allowing lambda access to the ECR repo

David avatar

but for other repositories that statement might be reversed and we do want lambda to have access to it

David avatar

using a dynamic block will allow us to control whether or not the statement should be added

loren avatar

ahh, ok. so you only want to add the statement if the user specifies a value for var.principals_lambda, but otherwise do not add it?

David avatar

exactly

David avatar

principals_lambda is a list of arns

David avatar

well account ids

loren avatar

but you don’t care what the value is, and don’t need to create multiple statements?

David avatar

yes, looking to avoid multiple statements as it’ll clutter the policy needlessly and has the potential to grow out of control and breach the policy length limit set by AWS

loren avatar

try this:

  dynamic "statement" {
    for_each = length(var.principals_lambda) > 0 ? [1] : []

    content {
      effect  = "Allow"
      actions = [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]

      principals {
        type        = "Service"
        identifiers = ["lambda.amazonaws.com"]
      }

      condition {
        test     = "StringLike"
        values   = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
        variable = "aws:sourceArn"
      }
    }
  }
David avatar

jesus if this works… i didn’t think about len

David avatar

give me 2 mins

loren avatar

sorry for the run around. just needed to confirm that the shape of var.principals_lambda was not actually pertinent to the number of statements, and that it really was just a 1 or 0 requirement

David avatar

it’s all good

David avatar

but it looks to be working

David avatar

just checking with principals_lambda set to null

David avatar

yes

David avatar

this lgtm

David avatar
dynamic "statement" {
    for_each = length(var.principals_lambda) > 0 ? [1] : []

    content {
      sid = "LambdaECRImageCrossAccountRetrievalPolicy"
      effect  = "Allow"
      actions = [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]

      principals {
        type        = "Service"
        identifiers = ["lambda.amazonaws.com"]
      }

      condition {
        test     = "StringLike"
        values   = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
        variable = "aws:sourceArn"
      }
    }
  }
David avatar

boom

loren avatar

if the user actually passes var.principals_lambda = null that will bomb, but if the user lets it pick up a default value of [] or passes [] explicitly then it works

David avatar

yep

David avatar

default is set to []

David avatar

will update MR

David avatar

what’s your github username

David avatar

don’t want to steal your thunder

loren avatar

i’ve seen some people test for null to handle that, but personally i like to let it explode, or use variable validation to improve the error message
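
As a sketch of that validation idea (the error message wording is just an example):

  variable "principals_lambda" {
    type        = list(string)
    default     = []
    description = "Account IDs allowed cross-account Lambda access to the repository"

    validation {
      condition     = var.principals_lambda != null
      error_message = "principals_lambda must be a list of account IDs; pass [] (not null) to omit the statement."
    }
  }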

loren avatar

@lorengordon

David avatar

thank you mate, tagged you

1
David avatar

for_each is acting as intended, but we only want it to loop once

justin.dynamicd avatar
justin.dynamicd

did you try updating your foreach to simply run on a match condition instead of iterating?

for_each = var.principals_lambda != {} ? ["statement"] : []
loren avatar
dynamic "statement" {
    for_each = length(var.principals_lambda) > 0 ? [1] : []

    content {
      sid = "LambdaECRImageCrossAccountRetrievalPolicy"
      effect  = "Allow"
      actions = [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]

      principals {
        type        = "Service"
        identifiers = ["lambda.amazonaws.com"]
      }

      condition {
        test     = "StringLike"
        values   = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
        variable = "aws:sourceArn"
      }
    }
  }
Vikram Yerneni avatar
Vikram Yerneni

Fellas, I do have a module that basically creates S3 bucket (using typical terraform resource "aws_s3_bucket") with given terraform code that includes properties like “lifecycle_rule, server_side_encryption_configuration & logging” added in my main.tf file and we use Gitlab for the CI setup. However, I am trying to setup my Gitlab merge request process only goes through when ONLY RUNS WHEN THE ABOVE PROPERTIES ARE ADDED TO THE MAIN.TF FILE. I am not sure if this can be controlled at Terraform plan level or even before. Anyone here in the community set this type of setup before?

2021-12-07

Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

If you have time and don’t mind helping promote it, a :thumbsup: on this PR for a new terraform aws resource, rds_cluster_activity_stream, that one of my colleagues is trying to get merged would be greatly appreciated.

Thank you in advance. https://github.com/hashicorp/terraform-provider-aws/pull/22097

New Resource aws_rds_cluster_activity_stream by jdstuart · Pull Request #22097 · hashicorp/terraform-provider-awsattachment image

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…

Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

It is doubtful it will get acted on anytime soon with 491 PRs, but any thumbs up can’t hurt.

Adam McKinley avatar
Adam McKinley

Hi all. I’ve got this code in my main.tf:

module "user" {
  source = "cloudposse/iam-user/aws"
  version = "0.8.1"

  name = "adam"
  user_name = "[email protected]"
  pgp_key = "keybase:awmckinley"
  groups = []
}

Getting this error message:

Do you want to perform these actions in workspace "gbl-root"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes 

module.user.aws_iam_user.default[0]: Creating...
 
Error: Error creating IAM User [email protected]: InvalidClientTokenId: The security token included in the request is invalid
        status code: 403, request id: e55b8264-73ea-47d8-865b-712c193054fb

Any suggestions?

1
aimbotd avatar
aimbotd

AWS is still pooping all over the bed. I wouldn’t really attempt to work on AWS until all issues are resolved.

Adam McKinley avatar
Adam McKinley

Got it.

Adam McKinley avatar
Adam McKinley

Didn’t see the news about the outage.

Adam McKinley avatar
Adam McKinley

Thanks!

aimbotd avatar
aimbotd

Oh yea. It’s been a dumpster fire all morning.

Adam McKinley avatar
Adam McKinley

Looking at the Personal Health Dashboard shows issues only in us-east-1

aimbotd avatar
aimbotd

Yes and also very much no.

aimbotd avatar
aimbotd

Various services have dependencies on services within us-east-1, such as STS, iam, auth, etc.

this1
jose.amengual avatar
jose.amengual

us-east-1 is the center of everything pretty much

jose.amengual avatar
jose.amengual

what he said

jose.amengual avatar
jose.amengual

S3 auth, SSL certs, Cloudfront TLS and a bunch of other stuff

aimbotd avatar
aimbotd

I’m in us-east-2, but I put a freeze on all deployments because there’s a chance that a deployment to prod may not come online due to some unseen dependencies within AWS.

aimbotd avatar
aimbotd

For instance, the aws console was down for everyone in every region today for the better part of the day. 5 hours at the very least.

Adam McKinley avatar
Adam McKinley

Dang. Well, thanks for educating me about the hidden dependencies on us-east-1. Didn’t know.

loren avatar

my favorite is that within the console (if you had a session from before the outage began), it started reporting that the us-east-1 region was “invalid”

2
loren avatar

they fixed that maybe 30 minutes ago. i’m really glad that they found it

aimbotd avatar
aimbotd

We always joke about regional failure and what to do, and more often than not people are like, well, if an entire region goes out, we’re probably at war.

aimbotd avatar
aimbotd

And then AWS comes to the party and says ha, heres your regional outage ya plebs.

Adam McKinley avatar
Adam McKinley

lol

aimbotd avatar
aimbotd

For instance, amazon warehouses and deliveries that were supposed to be out by 8am PST today have been grounded since then.

aimbotd avatar
aimbotd

https://www.cnbc.com/2021/12/07/amazon-web-services-outage-causes-issues-at-disney-netflix-coinbase.html
The outage also brought down critical tools used inside Amazon. Warehouse and delivery workers, along with drivers for Amazon’s Flex service, reported on Reddit that they couldn’t access the Flex app or the AtoZ app, making it impossible to scan packages or access delivery routes.

loren avatar

maybe it’s a reinvent hangover

this1
aimbotd avatar
aimbotd

I was going to share some tweets about it but they appear to all have been deleted.

aimbotd avatar
aimbotd

At least the ones I had from earlier.

Alex Jurkiewicz avatar
Alex Jurkiewicz

next time someone mentions how “simple” multi-region redundancy is, or how it’s “critical” for your business to grow to the next level, point out AWS has probably 0.1% annual downtime because of us-east-1, and they are doing fine

1
aimbotd avatar
aimbotd

To be fair, as long as you don’t consider the data requirement, it is pretty simple.

Alex Jurkiewicz avatar
Alex Jurkiewicz

what do you mean by the data requirement?

aimbotd avatar
aimbotd

Where does the data live and how are you going to replicate it across regions.

Alex Jurkiewicz avatar
Alex Jurkiewicz

oh yes! It must be a lot easier to run a service without state

aimbotd avatar
aimbotd

Even then, you can fix that. A gluster cluster with cross region failover can be configured relatively painlessly. It could be pretty expensive though depending on requirements.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I completely agree, if by “relatively painless” you mean “relative to medieval torture” or perhaps “relative to dental work without anaesthetic”

Alex Jurkiewicz avatar
Alex Jurkiewicz

actually, I think it might be in the same “pain ballpark” as the latter

aimbotd avatar
aimbotd

Nah. Not even a day’s worth of work, and most of it can be automated via ansible or the like.

aimbotd avatar
aimbotd

Ive done a number of cross region redundancy deployments. The largest hurdle has always been cost. Hot instances, vs warm, vs cold. Smaller footprint with the ability to scale at the touch of a button.

I’d claim that if cost and bias of services wasn’t a factor, setting up redundancy is easy. We don’t live in such a world though.

aimbotd avatar
aimbotd

Even if you do consider it, RDS has cross region replication.

aimbotd avatar
aimbotd

its not that hard … but its definitely a time/effort/cost triangle.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

jose.amengual avatar
jose.amengual

Just do not use Databases

jose.amengual avatar
jose.amengual

it is 2021!!!!

2021-12-08

Adam McKinley avatar
Adam McKinley

Hi all. I’m still getting the same problem as last night. So no one has to scroll: I’ve got this code in my main.tf:

module "user" {
  source = "cloudposse/iam-user/aws"
  version = "0.8.1"

  name = "adam"
  user_name = "[email protected]"
  pgp_key = "keybase:awmckinley"
  groups = []
}

Getting this error message:

Do you want to perform these actions in workspace "gbl-root"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes 

module.user.aws_iam_user.default[0]: Creating...
 
Error: Error creating IAM User [email protected]: InvalidClientTokenId: The security token included in the request is invalid
        status code: 403, request id: e55b8264-73ea-47d8-865b-712c193054fb

I’m using the root account with an access key ID and secret key. I know it’s not best practice, but is there any reason this should fail besides the AWS outage?

RB avatar

looks like an issue either with authentication or with aws because the upstream module resource for the iam user is very very simple

https://github.com/cloudposse/terraform-aws-iam-user/blob/5d953db7244b2cf81bb6f29813a03ccbe76b8684/main.tf#L1-L9

terraform-aws-iam-user/main.tf at 5d953db7244b2cf81bb6f29813a03ccbe76b8684 · cloudposse/terraform-aws-iam-userattachment image

Terraform Module to provision a basic IAM user suitable for humans. - terraform-aws-iam-user/main.tf at 5d953db7244b2cf81bb6f29813a03ccbe76b8684 · cloudposse/terraform-aws-iam-user

RB avatar

you can try copying and pasting that resource in its own main.tf and try applying it and im sure you would be able to reproduce the issue
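
A minimal standalone repro of that resource (assuming default credentials and any region) might look like:

  provider "aws" {
    region = "us-west-2"
  }

  resource "aws_iam_user" "test" {
    name = "test-user"
  }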

Adam McKinley avatar
Adam McKinley

Other modules seem to work. For example:

module "tfstate_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.33.0"

  force_destroy                 = var.force_destroy
  prevent_unencrypted_uploads   = var.prevent_unencrypted_uploads
  enable_server_side_encryption = var.enable_server_side_encryption

  context = module.this.context
}

module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"
  version = "0.44.1"
  acl                      = "private"
  enabled                  = true
  user_enabled             = false
  versioning_enabled       = false
  allowed_bucket_actions   = ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"]
  name                     = "bar432"
  stage                    = "root"
  namespace                = "foo253"
}

Ran fine just now with this result:

Do you want to perform these actions in workspace "uw2-root"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.s3_bucket.aws_s3_bucket.default[0]: Creating...
module.s3_bucket.aws_s3_bucket.default[0]: Creation complete after 5s [id=foo-root-bar]
module.s3_bucket.data.aws_iam_policy_document.bucket_policy[0]: Reading...
module.s3_bucket.data.aws_iam_policy_document.bucket_policy[0]: Read complete after 0s [id=561002259]
module.s3_bucket.aws_s3_bucket_public_access_block.default[0]: Creating...
module.s3_bucket.data.aws_iam_policy_document.aggregated_policy[0]: Reading...
module.s3_bucket.data.aws_iam_policy_document.aggregated_policy[0]: Read complete after 0s [id=561002259]
module.s3_bucket.aws_s3_bucket_public_access_block.default[0]: Creation complete after 1s [id=foo-root-bar]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Creating...
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Still creating... [10s elapsed]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Still creating... [20s elapsed]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Still creating... [30s elapsed]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Creation complete after 30s [id=2021-12-08T15:55:08Z]
module.s3_bucket.aws_s3_bucket_ownership_controls.default[0]: Creating...
module.s3_bucket.aws_s3_bucket_ownership_controls.default[0]: Creation complete after 1s [id=foo-root-bar]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

tfstate_backend_dynamodb_table_arn = "arn:aws:dynamodb:us-west-2:{redacted}:table/{redacted}-uw2-root-tfstate-lock"
tfstate_backend_dynamodb_table_id = "{redacted}-uw2-root-tfstate-lock"
tfstate_backend_dynamodb_table_name = "{redacted}-uw2-root-tfstate-lock"
tfstate_backend_s3_bucket_arn = "arn:aws:s3:::{redacted}-uw2-root-tfstate"
tfstate_backend_s3_bucket_domain_name = "{redacted}-uw2-root-tfstate.s3.amazonaws.com"
tfstate_backend_s3_bucket_id = "{redacted}-uw2-root-tfstate"

Is there anything I can check about my AWS settings to see why IAM user creation fails?

RB avatar

not all modules are the same tho

RB avatar

you can try to create an iam user from the cli with similar inputs and see if you get a failure

Adam McKinley avatar
Adam McKinley

aws iam create-user --user-name [email protected] resulted in

An error occurred (InvalidClientTokenId) when calling the CreateUser operation: The security token included in the request is invalid
Adam McKinley avatar
Adam McKinley

I’m using an access key for the root user on an almost brand-new account (created Monday). Are there any possibilities besides AWS outage-related issues?

RB avatar

im not sure tbh. their status page shows all green checks

RB avatar

what does aws sts get-caller-identity return for you ?

Adam McKinley avatar
Adam McKinley
√ . [default] app ⨠ aws sts get-caller-identity
{
    "UserId": "158459863977",
    "Account": "158459863977",
    "Arn": "arn:aws:iam::158459863977:root"
}
RB avatar

wow that looks right

RB avatar

must be aws api issues

Adam McKinley avatar
Adam McKinley

Ok. I’ll reach out to AWS support. Thanks for all your help!

2
J Norment avatar
J Norment

I have a situation where I need to manually set up the .terraform directory before locally running init and validate. I’d like init to not change (at least two of) the modules that I set up. Is there a way to do that?

RB avatar

if you run terraform init again, it shouldn’t overwrite it unless you run terraform init -upgrade

RB avatar

(i think)

J Norment avatar
J Norment

Part of the issue is that I’m not able to run init the first time, for a few of the modules, which are pulled from repos where the key isn’t available locally, and mangling .terraform/modules.json to match. But, for some reason, even though I think this tactic worked once before, this time, it erased the repo I had set up when init was run.

J Norment avatar
J Norment

Can’t edit the above. I’m going to try again, that didn’t quite come out right.

J Norment avatar
J Norment

.. isn’t available locally. So, instead, I tried mangling ..

RB avatar

so the root issue is that terraform init doesn’t work without updating the module json file

RB avatar

doesn’t make sense but i can’t say more without looking at the code

J Norment avatar
J Norment

I can’t provide code in this case. How does it work, as far as you know?

J Norment avatar
J Norment

Maybe I missed a small detail.

RB avatar

¯\_(ツ)_/¯

J Norment avatar
J Norment

Fair enough.

RB avatar

i can’t really say tbh, sorry. the modules can be pulled directly from the registry if they are posted there

RB avatar

if they are private modules then they need to be sourced using git ssh

RB avatar

either way, tf init should work out of the box

J Norment avatar
J Norment

It’s a corporate thing: policies prevent direct local access to the repo used in the pipeline, which is the repo that init attempts to pull from. Can get around it with *_override.tf files, but that doesn’t work for child modules.

J Norment avatar
J Norment

** from the repo that init attempts to pull locally

Alex Jurkiewicz avatar
Alex Jurkiewicz

maybe instead of using a locally-inaccessible remote path for the module, use a directory path. Then for local dev, you can stub the module, and in your real pipeline, you can manually install the module to the same path

Alex Jurkiewicz avatar
Alex Jurkiewicz

eg from

module "foo" {
  source = "github.com/privateorg/foo"

to

module "foo" {
  source "./foo-module"
J Norment avatar
J Norment

I think the main issue is that it’s child modules that are causing the problems. That, and the (human) policies at the company where I’m working, where the child modules aren’t readable locally. I don’t think I can see the above approach working for child modules without changing the calling modules in the pipeline. But it’s been a long day. I’ll revisit this tomorrow to see if I missed something. Thanks!

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah, you are right, you’d have to change the modules including those child modules

Release notes from terraform avatar
Release notes from terraform
09:43:37 PM

v1.1.0 1.1.0 (December 08, 2021) Terraform v1.1.0 is a new minor release, containing some new features and some bug fixes whose scope was too large for inclusion in a patch release. NEW FEATURES:

moved blocks for refactoring within modules: Module authors can now record in module source code whenever they’ve changed the address of a resource or resource instance, and then during planning Terraform will automatically migrate existing objects in the state to new addresses. This therefore avoids the…

DaniC (he/him) avatar
DaniC (he/him)

hi folks, in case someone has real exp with TF and CDK, would you mind sharing your thoughts on why you’d embrace one or the other?

Bit of context:

Am familiar with CFN and TF but not much with CDK (even though i know the closer you get to L1 constructs, the more you hit the same issues as with CFN yaml), and i’d like to try it out and not judge based on:

• CDK is vendor lock-in -> not a concern, as we’re an AWS-only shop for now and forever

• marketing or personal preference in picking one or the other. In essence, i’m thinking: for a CDK AWS shop, what would be required to move away from it to TF, and what business value would that bring (not just for the sake of doing it because… it’s cool)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

These don’t need to be mutually exclusive.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s how we describe it in our “4 layers of infrastructure”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To do everything we do at cloudposse, we have no choice but to use something like Terraform or Pulumi, since we’re provisioning way more than what’s in CDK. In our model, we acknowledge that for different purposes better tools exist. Primarily, this affects layer-4, which is application deployments where developers are using other tools like the serverless framework, or CDK. That’s fine. Everything can co-exist.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

My belief is you need a strong foundation, the sort of which we deliver for our customers. We use terraform for that, since we have 180+ terraform modules for that today. But just because we use terraform for the foundation and platform, doesn’t mean it’s required all the way to the top of the stack.

1
DaniC (he/him) avatar
DaniC (he/him)

i see, never thought of having CDK at the top layer for applications; i’ve always seen it in very close proximity to TF, overlapping with it. The separation of duties between tools is hard work, i’d say…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t see that necessarily as the case. In this model, any parameters you need to share between them, you store in SSM. Anything you deploy via CDK/CFT would have a much, much smaller scope than what’s been deployed with terraform. It’s building on top of, rather than injecting in the middle. Deploying lambdas with terraform has historically sucked. On the other hand, serverless and SAM were built to make this very easy. Typically you’ll just need to know things like account IDs, VPC IDs, and the rest should plug and play.
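
As a small illustration of that hand-off (names are made up): the Terraform side publishes an SSM parameter that a CDK/serverless stack can later look up.

  # Terraform (platform layer): publish the VPC ID for app-layer stacks to consume
  resource "aws_ssm_parameter" "vpc_id" {
    name  = "/platform/vpc_id"
    type  = "String"
    value = module.vpc.vpc_id
  }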

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would say terraform is a great language for platform engineering teams. It definitely works for everyone, but once you get into the realm of software development, opinions on languages vary, and I like to stay out of it. For example, deploying containers on kubernetes, we don’t dictate that every image must only deploy rust apps on alpine. Instead, we provide a platform that enables anyone to ship a container, regardless of what’s inside.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Much the same, for things outside of kubernetes & containers, there should be similar analogs to other technologies. That’s why I like to say we provision a solid foundation with terraform, for everyone else to build on, however they need to build it.

DaniC (he/him) avatar
DaniC (he/him)

Wow, you’ve touched so many points, thanks for opening yourself on all this topics, i can see your philosophy here.

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Where do stateful services like database & s3 fit into your infrastructure layers? Are they just part of “Backing services”?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It depends, a database is frequently shared by more than one service, therefore it’s a platform service and deployed in a separate phase

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

S3 buckets can be deployed with the service, but then there’s a theoretical question: if the service is the deleted, is the bucket deleted? if not, then it’s a platform-level service, since the lifecycle is different

3

2021-12-09

Laurynas avatar
Laurynas

Hi, how can I output the IP addresses of an NLB created with terraform?

mikesew avatar
mikesew

Question about the CloudWatch Log Groups that get created when you enable RDS log exports (i.e. ["alert", "audit", "listener", "trace"]): do those log groups get tagged by terraform? I’m finding that they’re not. Is there a param I’m supposed to set?

resource "aws_db_instance" "default" {
  enabled_cloudwatch_logs_exports = ["alert", "audit", "listener", "trace"]
  tags                            = var.tags
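
One likely explanation (not confirmed in this thread) is that those export log groups are created by the RDS service itself, so Terraform-managed tags never land on them; a common workaround is to pre-create the log groups with tags so RDS reuses them. A sketch, assuming the instance identifier is mydb:

  resource "aws_cloudwatch_log_group" "rds" {
    for_each          = toset(["alert", "audit", "listener", "trace"])
    name              = "/aws/rds/instance/mydb/${each.value}"
    retention_in_days = 30
    tags              = var.tags
  }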
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
HashiCorp shares rise after one of top software IPOs of 2021 values company at over $14 billionattachment image

Almost all of the company’s revenue comes from subscriptions, but just 7% comes from cloud-based services, although that’s the fast-growing part of the company.

DaniC (he/him) avatar
DaniC (he/him)

OMG, it must be a very good time for all the Hashi employees who joined a while ago… after hard work and high risk, a well-deserved reward

3
Alex Jurkiewicz avatar
Alex Jurkiewicz

instant billionaire. I hope they sell a lot of shares before it slumps. Can’t see the value staying so high

Laurynas avatar
Laurynas

Yeah 14 billion is a lot. I wonder what ARR they currently have?

2021-12-10

Jamie K avatar
Jamie K

This looks like a breaking change was introduced where the replica DNS record for RDS clusters is only created if the configuration is serverless? https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L331 would this have been intentional for any sort of reason? trying to pull my configuration forward; it’s not serverless, so the plan will remove the DNS record.

Additional inputs by nitrocode · Pull Request #124 · cloudposse/terraform-aws-rds-clusterattachment image

what Add performance_insights_retention_period Add ca_cert_identifier Add preferred_maintenance_window to instances Add timeout to instances why Performance insights retention Add a ca cert iden…

terraform-aws-rds-cluster/main.tf at master · cloudposse/terraform-aws-rds-clusterattachment image

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - terraform-aws-rds-cluster/main.tf at master · cloudposse/terraform-aws-rds-cluster

RB avatar

nice catch

RB avatar
Create dns record if not serverless by nitrocode · Pull Request #128 · cloudposse/terraform-aws-rds-clusterattachment image

what Restore original logic why Previous logic was to create the record when module was not serverless references Previous PR #124

RB avatar

the orig logic was to create the dns record only if the engine is not serverless

Jamie K avatar
Jamie K

sweet!

RB avatar
Releases · cloudposse/terraform-aws-rds-clusterattachment image

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

DaniC (he/him) avatar
DaniC (he/him)

wow @RB kudos for being so responsive

2
1
this1
Jamie K avatar
Jamie K

looks like its going to work! thanks so much!!

1

2021-12-12

Benjamin Boyle avatar
Benjamin Boyle

Hi! I’m looking for someone who can help me with small (paid) ad-hoc projects involving terraform and kubernetes. I’m a devops beginner launching a new product. The terraform/kubernetes side of it is really small and simple, but occasionally challenges come up that I can’t solve (or which take much too long for me to learn how to solve).

For example, right now, I have kubernetes ingress working fine with the AWS Load Balancer Controller. (AWS load balancers are created to serve the k8s ingresses.) But I’m having trouble installing cert-manager and solving letsencrypt http challenges. The first project would be to get the tls features working.

Onboarding is really fast. It’s all on github with a remote backend, and I’ve set up a docker container command-tools environment that has everything you need in just a couple of minutes of onboarding.

I’d really appreciate (and enjoy) having a friend/consultant to help out when it’s too hard for me. Thank you in advance for reaching out.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I wonder if you’d find it easier to use AWS ACM certificates, which you can very easily create programatically with Terraform

steenhoven avatar
steenhoven

Any way to do that from within kube?

Benjamin Boyle avatar
Benjamin Boyle

@Alex Jurkiewicz, that’s a very nice suggestion. I believe ACM can also renew certificates automatically? How would one then use the certificate in a kubernetes ingress?

Benjamin Boyle avatar
Benjamin Boyle

@steenhoven, yes, one can set it up using yaml and kubectl. But then history/configuration is not clear to others coming later to the project, hence my desire to use terraform for it.

Benjamin Boyle avatar
Benjamin Boyle

(Mostly I’m interested in finding someone willing to help out as a paid consultant).

steenhoven avatar
steenhoven

Right. You say it’s possible to create an ACM cert from a kube manifest?

ismail yenigul avatar
ismail yenigul

@Benjamin Boyle you can create ACM in terraform and set ACM certificate ARN in ingress annotations.

alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:xyz:certificate/bfbfa4ab-6b51-4575-92f1-56e2a31f0fbd

full example

  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: app=platform
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:xyz:certificate/bfbfa4ab-6b51-4575-92f1-56e2c21f0fbd
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}, {"HTTP": 8080}, {"HTTPS": 8443}]'
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.type=lb_cookie,stickiness.lb_cookie.duration_seconds=172800,load_balancing.algorithm.type=least_outstanding_requests
spec:
  rules:
  - http:
      paths:
ismail yenigul avatar
ismail yenigul
GitHub - cloudposse/terraform-aws-acm-request-certificate: Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validationattachment image

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - GitHub - cloudposse/terraform-aws-acm-request-certifi…
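
For reference, a minimal sketch of the same idea with plain resources (domain name and zone ID are placeholders), which is roughly what those modules wrap:

  resource "aws_acm_certificate" "this" {
    domain_name       = "example.com"
    validation_method = "DNS"
  }

  resource "aws_route53_record" "validation" {
    for_each = {
      for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => {
        name   = dvo.resource_record_name
        type   = dvo.resource_record_type
        record = dvo.resource_record_value
      }
    }

    zone_id = "Z123EXAMPLE" # the hosted zone for example.com
    name    = each.value.name
    type    = each.value.type
    ttl     = 60
    records = [each.value.record]
  }

  resource "aws_acm_certificate_validation" "this" {
    certificate_arn         = aws_acm_certificate.this.arn
    validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
  }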

steenhoven avatar
steenhoven

I was just trying with this one, but it doesn’t validate: https://github.com/terraform-aws-modules/terraform-aws-acm

GitHub - terraform-aws-modules/terraform-aws-acm: Terraform module which creates and validates ACM certificateattachment image

Terraform module which creates and validates ACM certificate - GitHub - terraform-aws-modules/terraform-aws-acm: Terraform module which creates and validates ACM certificate

steenhoven avatar
steenhoven

@ismail yenigul does the module create the required CNAME records for the validation?

ismail yenigul avatar
ismail yenigul

I am using another terraform module for acm, but for this module, yes it does: https://github.com/terraform-aws-modules/terraform-aws-acm/blob/master/main.tf#L34 https://github.com/terraform-aws-modules/terraform-aws-acm/blob/master/variables.tf#L7 Double-check that you provide the correct zone ID.

terraform-aws-acm/main.tf at master · terraform-aws-modules/terraform-aws-acmattachment image

Terraform module which creates and validates ACM certificate - terraform-aws-acm/main.tf at master · terraform-aws-modules/terraform-aws-acm

terraform-aws-acm/variables.tf at master · terraform-aws-modules/terraform-aws-acmattachment image

Terraform module which creates and validates ACM certificate - terraform-aws-acm/variables.tf at master · terraform-aws-modules/terraform-aws-acm

steenhoven avatar
steenhoven

Thanks

Benjamin Boyle avatar
Benjamin Boyle

This is the .tf code I’m having trouble with (as you can see, it’s very small) https://github.com/FastFinTech/FFT.Signals.GitOps/blob/main/ingress.tf

FFT.Signals.GitOps/ingress.tf at main · FastFinTech/FFT.Signals.GitOpsattachment image

Terraform definition for the FFT.Signals infrastructure - FFT.Signals.GitOps/ingress.tf at main · FastFinTech/FFT.Signals.GitOps

batman_93 avatar
batman_93

Hi, Can someone please take a look at this reddit post and help me out? https://www.reddit.com/r/Terraform/comments/rf2473/access_value_from_map_of_list/ I am trying to remove a duplicate var in the main.tf.

Access value from map of list

Hello Folks, ​ I am working on developing an S3 module based on the TF community module. As part of that, I am trying to access a value…

2021-12-13

Mark Garringer avatar
Mark Garringer

Is it possible to create multiple Client VPNs in the same region with different VPCs using https://github.com/cloudposse/terraform-aws-ec2-client-vpn? I’m using the SSM functionality to store the cert information and it seems like there’s no way to specify new key names for the SSM keys, and collisions occur?

GitHub - cloudposse/terraform-aws-ec2-client-vpnattachment image

Contribute to cloudposse/terraform-aws-ec2-client-vpn development by creating an account on GitHub.

RB avatar

Have you tried changing the context inputs for each module ref ?

for example

module "ec2_client_vpn_blue" {
  source  = "cloudposse/ec2-client-vpn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  name = "blue"
  # ... etc

}

module "ec2_client_vpn_orange" {
  source  = "cloudposse/ec2-client-vpn/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  name = "orange"
  # ... etc

}
Mark Garringer avatar
Mark Garringer

I have, in my digging, the only thing that would actually change the SSM key names is the secret_path_format in https://github.com/cloudposse/terraform-aws-ssm-tls-self-signed-cert but I don’t think I can affect that without the client vpn module supporting it directly?

GitHub - cloudposse/terraform-aws-ssm-tls-self-signed-cert: This module creates a self-signed certificate and writes it alongside with its key to SSM Parameter Store (or alternatively AWS Secrets Manager).attachment image

This module creates a self-signed certificate and writes it alongside with its key to SSM Parameter Store (or alternatively AWS Secrets Manager). - GitHub - cloudposse/terraform-aws-ssm-tls-self-si…

RB avatar
terraform-aws-ssm-tls-self-signed-cert/ssm.tf at f7bc31ea9a4f40bdf209a1c8be51bbf3932e89fb · cloudposse/terraform-aws-ssm-tls-self-signed-certattachment image

This module creates a self-signed certificate and writes it alongside with its key to SSM Parameter Store (or alternatively AWS Secrets Manager). - terraform-aws-ssm-tls-self-signed-cert/ssm.tf at …

RB avatar

the name contains module.this.name

RB avatar

so if you put a diff name for each module ref of ec2 client vpn, it would feed a diff name to each ssm tls self signed cert, which would create a separate ssm resource

Mark Garringer avatar
Mark Garringer

I thought I’d tried it, but did it again just to double check and it doesn’t appear to affect it… am I missing something? Trying both with context & name… The private key name is still /self-signed-cert-server.key

RB avatar

hmmm that’s very strange. it’s like the name is completely skipped from the formatting

RB avatar

could you create an issue in the client vpn github repo and our sme will get to it ? or if you figure it out, feel free to put in a pr :)

Mark Garringer avatar
Mark Garringer

will do…

Mark Garringer avatar
Mark Garringer

is this overriding whatever I pass in and ‘hardcoding’ it? https://github.com/cloudposse/terraform-aws-ec2-client-vpn/blob/master/main.tf#L24

terraform-aws-ec2-client-vpn/main.tf at master · cloudposse/terraform-aws-ec2-client-vpnattachment image

Contribute to cloudposse/terraform-aws-ec2-client-vpn development by creating an account on GitHub.

Mark Garringer avatar
Mark Garringer

as well as lines 58, 93, etc

Mark Garringer avatar
Mark Garringer

That appears to have been it. Didn’t see it earlier, https://github.com/cloudposse/terraform-aws-ec2-client-vpn/pull/24

Self Signed Certs all have hardcoded names, not allowing for multiple Client VPNs in a region by garrinmf · Pull Request #24 · cloudposse/terraform-aws-ec2-client-vpnattachment image

what The certificate names are all hardcoded, not allowing modification via context. why In order to have multiple Client VPNs in the same region, the keys stored in SSM need to be unique, the hard…

1
RB avatar

cc: @Leo Przybylski

jonjitsu avatar
jonjitsu

What are the options for keeping secrets out of the state file?

Lloyd O'Brien avatar
Lloyd O'Brien

Would anyone have a working example of an AWS IAM module that uses the resource aws_iam_instance_profile and is able to produce a password? Ideally with pgp and not Keybase

jonjitsu avatar
jonjitsu
gpg --gen-key
gpg --export MyKey | base64 > pgp_key

then in TF:

resource "aws_iam_user_login_profile" "me" {
   pgp_key = file("./pgp_key")
   # ...
}
Lloyd O'Brien avatar
Lloyd O'Brien

i’ll give that a go, tyvm!

Lloyd O'Brien avatar
Lloyd O'Brien

this worked, appreciate the response a lot. Cheers!

jose.amengual avatar
jose.amengual

@Andriy Knysh (Cloud Posse) is it possible to create resources without tags? ( using cloudposse module that uses context)

RB avatar

@jose.amengual did you try setting tags = {} ?

jose.amengual avatar
jose.amengual

module.this.tags always returns tags

jose.amengual avatar
jose.amengual

I did

RB avatar

what if you create a label outside of the module and then pass that label’s context into the module ?

jose.amengual avatar
jose.amengual

because the null label appends the tags based on namespace etc.

jose.amengual avatar
jose.amengual

it is already embedded

RB avatar

what happens if you pass in

  tags = {
    pepe = "true"
  }

does it overwrite the tags or append them to each resource ?

jose.amengual avatar
jose.amengual

append

jose.amengual avatar
jose.amengual

well merge

RB avatar
terraform-null-label/context.tf at 488ab91e34a24a86957e397d9f7262ec5925586a · cloudposse/terraform-null-labelattachment image

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - terraform-null-label/context.tf at 488ab91e34a24a86957e397d9f7262ec5925586a · cloudposse/terraf…

jose.amengual avatar
jose.amengual

I have not

RB avatar

maybe try setting it to ["unset"]

jose.amengual avatar
jose.amengual

labels_as_tags = [] did it

2
jose.amengual avatar
jose.amengual

awesome
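
For reference, a minimal sketch of that (module and inputs are illustrative):

  module "bucket" {
    source  = "cloudposse/s3-bucket/aws"
    # version = "x.x.x"

    name           = "example"
    labels_as_tags = [] # don't generate Name/Namespace/Stage/etc. tags
    tags           = {} # and don't add any extra tags
  }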

Lloyd O'Brien avatar
Lloyd O'Brien

Hi all, does anyone know the correct process for creating a Terraform AWS keypair for a Windows instance? I have the resource/parameters correct, but consistently get an error:

error importing EC2 Key Pair (KP-production-Management-0): MissingParameter: The request must contain the parameter PublicKeyMaterial
Lloyd O'Brien avatar
Lloyd O'Brien

I am just unsure of what I’m missing in regards to PublicKeyMaterial TIA

managedkaos avatar
managedkaos

@Lloyd O’Brien are you using https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/key_pair?

To answer your question, though, the PublicKeyMaterial is the content of the public key associated with a key pair. so you need to provide the actual contents of the file containing the public key that you want to associate with the private key.

You can paste the contents of the file directly into your terraform or you can reference the file externally.

managedkaos avatar
managedkaos

The docs give a good example of adding the public key material to the terraform code:

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 [email protected]"
}
Lloyd O'Brien avatar
Lloyd O'Brien

Hey @managedkaos thanks for taking a look at my post. Yes I am using that resource you linked. I think my issue is generating the key (for Windows) and having that key meet one of the 3 bullet points in the doc you linked. The example seems to be for SSH, but hard to find material on keys for Windows.

managedkaos avatar
managedkaos

The key material will be the same. How are you generating your key?

Ideally you’d use ssh-keygen and do something like:

/usr/bin/ssh-keygen -t ed25519 -C "This is a comment" -f this_is_the_key_name
managedkaos avatar
managedkaos

the .pub part of that output is what you provide.

For Windows servers in AWS, you provide the private key when you decrypt the admin password. So it’s not really an SSH operation, but the features of the key are used for encryption/decryption.

Anyway, I hope you get it worked out!

Lloyd O'Brien avatar
Lloyd O'Brien

i got this sorted out, thanks a mill for explaining all that!

1

2021-12-14

Sam Bishop avatar
Sam Bishop

We really need some kind of Cloudposse-published artifact list to help with https://github.com/cloudposse/terraform-external-module-artifact “broken defaults”… because https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder/issues/13 seems to be back, but since the module doesn’t log the actual curl request anywhere, I have no record of the URL it attempted, and also no way to know the correct one to try setting explicitly… because it’s just an S3 bucket, and if I don’t know the key… well, then I just get a 404.

GitHub - cloudposse/terraform-external-module-artifact: Terraform module to fetch any kind of artifacts using curl (binary and text okay)attachment image

Terraform module to fetch any kind of artifacts using curl (binary and text okay) - GitHub - cloudposse/terraform-external-module-artifact: Terraform module to fetch any kind of artifacts using cur…

404 error when loading lambda.zip · Issue #13 · cloudposse/terraform-aws-ses-lambda-forwarderattachment image

When running the example hcl from the readme I'm getting the following error: Error: failed to execute "curl": curl: (22) The requested URL returned error: 404 on .terraform/modules/s…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think the better way forward for us on this module is to rewrite it to use a dockerized lambda

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and publish a public ECR image for it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we do have some plans for developing a module for that in the next month or so as part of another requirement.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Leo Przybylski you are having this problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This turns out to be a problem with the github action

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s failing on deploy to S3. something changed with how the aws cli runs under GHA. exploring options.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
fix: Adding environment variable by r351574nc3 · Pull Request #35 · cloudposse/terraform-aws-ses-lambda-forwarderattachment image

what Setting environment variable AWS_EC2_METADATA_DISABLED: true as a solution why github actions is unable to push artifacts to s3 because of an error with the awscli. references aws/aws-cli…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for the record, the published artifact should always match the commit of the release. if there’s no artifact, there’s a problem with the pipeline.

Sam Bishop avatar
Sam Bishop

“dynamic subgraph encountered errors: failed to execute “curl”: curl: (22) The requested URL returned error: 404” is not a lot to go on for debugging…

jonjitsu avatar
jonjitsu

Any thoughts/opinions on kitchen-terraform?

2021-12-15

Ben Dubuisson avatar
Ben Dubuisson

Hi, using the https://github.com/cloudposse/terraform-aws-sso repo and getting some strange errors when applying changes:

ConflictException: Could not delete because PermissionSet has ApplicationProfile associated with it

Has anyone ever seen those?

RB avatar

hmm interesting. could you create an issue with all of your inputs?

Ben Dubuisson avatar
Ben Dubuisson

Hey it looks like permission sets were associated manually to accounts OUTSIDE of TF, which caused the issue, sorry nothing to see here

1
1
Release notes from terraform avatar
Release notes from terraform
09:43:40 PM

v1.1.1 1.1.1 (December 15, 2021) BUG FIXES: core: Fix crash with orphaned module instance due to changed count or for_each value (#30151) core: Fix regression where some expressions failed during validation when referencing resources expanded with count or for_each (#30171)…

core: Fix crash with orphaned module instance by alisdair · Pull Request #30151 · hashicorp/terraformattachment image

Fixes #30110. These commits are also on a shared working branch, which I&#39;ve rebased and squashed so that we don&#39;t have a broken commit on main if this is merged. From @apparentlymart&#39;s …

use `cty.DynamicVal` for expanded resources during validation by jbardin · Pull Request #30171 · hashicorp/terraformattachment image

Revert the evaluation change from #29862. While returning a dynamic value for all expanded resources during validation is not optimal, trying to work around this using unknown maps and lists is cau…

1

2021-12-16

J Norment avatar
J Norment

Hi. I need to iterate through a list of objects that I’m using to build WAF ACLs. The trick here is that the order in the list absolutely matters. I’ve tried googling a number of keyword combinations of “for_each preserve order”, “for_each dynamic guarantee order”, and so on, and haven’t really found anything that can answer my question. How might I go about doing this? Any code that includes for / for_each and dynamic inside of a resource should give me what I need to get it implemented in my use-case.

loren avatar

What have you tried that is not preserving order the way you expect?

IK avatar

maps/objects are unordered by design.. you may need to use a list/tuple

J Norment avatar
J Norment

I was under the impression that for_each didn’t handle lists, that toset() needed to be used.

IK avatar

correct, for_each cannot.. do you have the option of using a list?

J Norment avatar
J Norment

Is that not true for dynamic blocks? I wasn’t able to verify that it wasn’t true for dynamic blocks.

J Norment avatar
J Norment

Option of using a list? Yes. But how to set that up?

J Norment avatar
J Norment

Well, I don’t know. It needs to be within a dynamic block.

IK avatar

otherwise, create an intermediate map and specify an ordered key
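
For example (a hedged sketch; the variable and local names are illustrative), you can key the intermediate map by a zero-padded list index so the keys sort in the original list order:

locals {
  # format("%03d", idx) pads the index so lexical key ordering matches list ordering
  ordered_rules = { for idx, rule in var.rules : format("%03d", idx) => rule }
}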

J Norment avatar
J Norment

I considered doing that. But I really don’t want to force future users to manually order the keys in a map in order to preserve the order …

IK avatar

yeah unfortunately not much else you can do

J Norment avatar
J Norment

Hmm… in that case, is there a way to abstract that kind of ugly behavior, so that the user doesn’t need to order the list elements into a map?

IK avatar

any chance you can share what your object looks like?

J Norment avatar
J Norment

I just found evidence that a dynamic block for_each can accept lists. It seems that I’ve confused the syntax.

1
J Norment avatar
J Norment

Thanks!

Alex Jurkiewicz avatar
Alex Jurkiewicz

yeah, although they look similar, dynamic block for_each and resource for_each can accept different data types

Alex Jurkiewicz avatar
Alex Jurkiewicz

What do you need to preserve order for with WAF?

J Norment avatar
J Norment

The rules are processed top down.

J Norment avatar
J Norment

First one to match terminating action wins.

J Norment avatar
J Norment

So the order matters.

J Norment avatar
J Norment

( that’s how I understand it, anyway )

loren avatar

correct, a for_each expression on a dynamic block will accept a list, and if you use a list then the resulting generated blocks will maintain the order of the list

1
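
For the WAF case specifically, here is a rough sketch (a simplified aws_wafv2_web_acl using managed rule groups, not the Cloud Posse module) where the list index both preserves block order and doubles as the rule priority, so the list order is what AWS evaluates first:

variable "managed_rules" {
  description = "Managed rule groups, in evaluation order (first match wins)"
  type = list(object({
    name        = string
    vendor_name = string
  }))
}

resource "aws_wafv2_web_acl" "this" {
  name  = "example"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  dynamic "rule" {
    # Iterating a list: blocks are generated in list order, and rule.key is the index
    for_each = var.managed_rules

    content {
      name     = rule.value.name
      priority = rule.key # earlier in the list => lower priority number => evaluated first

      override_action {
        none {}
      }

      statement {
        managed_rule_group_statement {
          name        = rule.value.name
          vendor_name = rule.value.vendor_name
        }
      }

      visibility_config {
        cloudwatch_metrics_enabled = true
        metric_name                = rule.value.name
        sampled_requests_enabled   = true
      }
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example"
    sampled_requests_enabled   = true
  }
}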
Alex Jurkiewicz avatar
Alex Jurkiewicz

good to know. We use WAF but only with managed rulesets, so I’ve never noticed that

1
J Norment avatar
J Norment

Thanks for the confirmation, Loren.

1
J Norment avatar
J Norment

I’m implementing WAF for custom rule groups, and wanted to be able to allow for more strictly managed rule groups to be combined with custom rule groups, as needed.

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Under some conditions, you can replace dynamic with primitive types btw (link):

resource aws_foo bar {
  block {
    name = "one"
  }
  block {
    name = "two"
  }
}

equivalent to

resource aws_foo bar {
  block = [
    { name = "one" },
    { name = "two" },
  ]
}

Not sure if that still works, or if it ever worked globally or was implemented per-provider or even per-resource. I often wish we could do this when writing hairy dynamic logic that is hard to understand

loren avatar

i dearly miss that syntax Alex. they’ve definitely been moving away from supporting attribute assignment as an alternative syntax for blocks

loren avatar

their reasoning being something about making the json format a first-class citizen and distinguishing between null values vs absence, blah blah blah. still annoys me

Alex Jurkiewicz avatar
Alex Jurkiewicz

oh yeah, what’s the issue with json representation for assignment syntax? I can’t see it immediately

loren avatar

it was too deep in the internals for me to really understand it all, especially not well enough to regurgitate, hence blah blah blah

1
loren avatar
Attributes as Blocks - Configuration Language | Terraform by HashiCorpattachment image

For historical reasons, certain arguments within resource blocks can use either block or attribute syntax.

1
IK avatar

i will admit i haven’t done much with dynamic blocks.. i’ve gotten around all my problems by creating maps as required.. maybe i should start looking into dynamic blocks..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - cloudposse/terraform-aws-wafattachment image

Contribute to cloudposse/terraform-aws-waf development by creating an account on GitHub.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This also supports ordered lists of rules

IK avatar

hey guys.. with authoring modules, do you prefer putting the complexity of your module in the front end or the back end? E.g. assume I’m writing a module to create security groups. The expectation is that another team member can directly reference this module and pass it a YAML file with all the ingress/egress rules and a vpc_id. Should I build my module to handle a single vpc_id and expect the user/consumer to handle all the front-end logic, i.e. create for_each loops within their reference to the module, or should I make the front end simple (where they can pass me either a single vpc_id or a list of vpc_ids) and then do my for_each loops etc. within the module? I keep switching back and forth between the two approaches.. part of me wants to keep the front end simple so consuming it is easy, but then I’m having to write additional logic to handle the various permutations, so I’m thinking of shifting the complexity back to the user/consumer.. any ideas? cheers guys!

RB avatar

we’ve gone a separate route where we use yaml to define inputs, use the tool atmos to convert the yaml inputs into a terraform varfile string/map/list/etc, then define individual arguments for a specific terraform root module, which then consumes modules like our sg module to define rules.

RB avatar

anyway long story short, you should build the module to take a single vpc id

IK avatar

thanks RB. I think it’s reasonable to expect consumers of our modules to have some understanding of Terraform so if our module is built to take a single vpc_id, the consumer of that module should align to that.
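
A hedged sketch of the consumer side under that design (the module path and its vpc_id/rules inputs are assumptions for illustration): the caller keeps the fan-out logic, and the module itself only ever sees one VPC.

variable "vpc_ids" {
  type = list(string)
}

module "security_groups" {
  source   = "./modules/security-group"
  for_each = toset(var.vpc_ids)

  # One module instance per VPC; the module interface stays single-vpc_id
  vpc_id = each.value
  rules  = yamldecode(file("${path.module}/rules.yaml"))
}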

2021-12-17

zeid.derhally avatar
zeid.derhally

Anyone here using Terraform Enterprise? We are migrating from Atlantis to TFE and I was wondering how people deal with the issue where you can’t do targeted applies when using VCS integration. Sometimes Terraform has issues with planning and needs some help with targeted applies.

Release notes from terraform avatar
Release notes from terraform
10:03:18 PM

v1.1.2 1.1.2 (December 17, 2021) If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to this new version as soon as possible. Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure to construct the apply-time graph can cause Terraform to incorrectly report success and save an empty state, effectively “forgetting” all existing infrastructure. Although configurations that already worked on previous releases should not encounter this problem, it’s possible that incorrect future…

2
2
DaniC (he/him) avatar
DaniC (he/him)

the joy of being a public company … pressure to “show” new stuff ….

2021-12-18

2021-12-19

OZZZY avatar

Good morning, how can I change configuration items that were not created by Terraform, using Terraform?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The best you can hope to achieve is to define the resources in Terraform and run terraform import on those resources.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform is not optimized to modify resources managed outside of terraform

steenhoven avatar
steenhoven

only after you import them https://www.terraform.io/cli/import

Import | Terraform by HashiCorpattachment image

Terraform can import and manage existing infrastructure. This can help you transition your infrastructure to Terraform.

1
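
For reference, the flow is roughly: write the resource block first, then import the existing object into state, then plan to reconcile any drift. A minimal sketch using a hypothetical bucket created outside Terraform:

resource "aws_s3_bucket" "existing" {
  bucket = "my-existing-bucket" # hypothetical name of a bucket created outside Terraform
}

then:

terraform import aws_s3_bucket.existing my-existing-bucket
terraform plan   # review the diff before changing anything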

2021-12-20

jose.amengual avatar
jose.amengual

is there a way to pass a permission boundary to TF provider instead of to a resource?

loren avatar

i’m not really sure what that means exactly… can you expand on the desired outcome?

jose.amengual avatar
jose.amengual

ok, nevermind, this only applies to role creation

jose.amengual avatar
jose.amengual
GitHub - cloudposse/terraform-aws-ec2-instance: Terraform module for provisioning a general purpose EC2 hostattachment image

Terraform module for provisioning a general purpose EC2 host - GitHub - cloudposse/terraform-aws-ec2-instance: Terraform module for provisioning a general purpose EC2 host

loren avatar

yeah, it’s an argument for iam principals, so roles and users

jose.amengual avatar
jose.amengual

I was having trouble creating some resources and I thought it was related to the boundary (which is very restrictive)

jose.amengual avatar
jose.amengual

but it was not

loren avatar

it’s kinda an interesting idea to be able to apply a permissions boundary to an assume-role call though…

loren avatar

so i wasn’t sure where you were going…

jose.amengual avatar
jose.amengual

I mean, if there were a lot of resources that needed the boundary then it would make sense to pass it as a parameter on the provider, but there aren’t

loren avatar

ahh, not quite what i mean… you can set the policy on an assume-role call. so regardless of what the role is, you can pass a more restrictive policy… but, what if you could pass a permissions-boundary? that way you could say, instead, basically, “this temporary credential should be constrained by the permissions in this boundary policy”….

loren avatar

i think that would be great for CI use cases

1
jose.amengual avatar
jose.amengual

ahhhhhh I c

jose.amengual avatar
jose.amengual

interesting

jose.amengual avatar
jose.amengual

but how does the boundary get applied to the role? via AWS Organizations?

jose.amengual avatar
jose.amengual

(I have not done it myself)

loren avatar

the boundary right now is only an argument of the actions iam:CreateRole and iam:CreateUser. a similar but different feature in organizations is “service control policies” or SCPs

loren avatar

so right now, the permissions boundary is at the account level, on every role or user where you want the boundary

jose.amengual avatar
jose.amengual

ahhhh I thought boundaries were made in a central location

jose.amengual avatar
jose.amengual

I do not see the usefulness of them in that case

jose.amengual avatar
jose.amengual

it seems to be made to avoid creating multiple policies in one account, but no matter what you need to manage the boundary per account, so there is not much difference

loren avatar

yeah, both permissions boundaries and SCPs have a lot of warts when it comes to the user experience

loren avatar

i believe the primary use case for SCPs is to disable entire services and regions

loren avatar

and the primary use case for permissions boundaries is to prevent privilege escalation. you have to grant developers the ability to create roles/users (so a blanket SCP deny won’t work), but you also want to prevent them from granting more permissions than they have themselves. so you write their role such that they have to attach the boundary to any role/user they create
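
A rough sketch of that pattern (the sid and boundary policy ARN are placeholders): the developers’ policy only allows iam:CreateRole / iam:CreateUser when the request attaches a specific permissions boundary.

data "aws_iam_policy_document" "require_boundary" {
  statement {
    sid       = "AllowPrincipalCreationOnlyWithBoundary"
    effect    = "Allow"
    actions   = ["iam:CreateRole", "iam:CreateUser"]
    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "iam:PermissionsBoundary"
      # Placeholder ARN of the boundary policy developers must attach
      values   = ["arn:aws:iam::111111111111:policy/developer-boundary"]
    }
  }
}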

jose.amengual avatar
jose.amengual

ahhhhh I see ok

jose.amengual avatar
jose.amengual

so that they do not make themselves admins

2021-12-21

bricezakra avatar
bricezakra

Hello guys, I am working on a Postgres Flexible Server Terraform module on Azure. I have found some documentation on Terraform but I am a little bit lost. Can anybody help me with that please? Has anybody ever worked on the same or a similar project? Thanks

w avatar

hi. I’m using https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn and wondering about error_document. The doc says the value used here is used for all 4xx errors. When I set this value to a string like “error.html”, however, it doesn’t seem to do anything.

What is the expected result when using this input?

GitHub - cloudposse/terraform-aws-cloudfront-s3-cdn: Terraform module to easily provision CloudFront CDN backed by an S3 originattachment image

Terraform module to easily provision CloudFront CDN backed by an S3 origin - GitHub - cloudposse/terraform-aws-cloudfront-s3-cdn: Terraform module to easily provision CloudFront CDN backed by an S3…

2021-12-22

SlackBot avatar
SlackBot
09:55:01 PM

This message was deleted.

2021-12-29

OliverS avatar
OliverS

has anyone used https://registry.terraform.io/providers/paultyng/sql/latest/docs/resources/migrate to initialize a DB (in RDS or Redshift) after creating it in AWS?

It seems a much nicer solution, state based, than using a local-exec (which relies on the local OS to provide a db client and ssh client) or remote-exec (which requires an EC2 instance to ssh into that must have a db client like psql; not really appropriate for a bastion).

I’m guessing the provider tracks migrations via a table in the db but I have not checked the code.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’ve not used that provider before but taking a quick look at it I can definitely ask “where has it been all my life?” Looking at the schema example, and not having delved into the Go code yet, I’m going to assume that it’s storing the statements like anything else in state and using that to determine if they’ve been executed or not if something changes. I could see where that might have some downfalls though if the statement itself is modified after having been executed rather than a new migration block being created

OliverS avatar
OliverS

good points

there is also the issue that typically, the DB is not reachable from the machine on which terraform runs

I would still have to create a tunnel through a bastion, but at least no host OS dependency / need to install DB client locally

OliverS avatar
OliverS

like a null_resource with local exec for the ssh tunnel, then the sql migrate with a dependency on that null resource
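
Something along the lines of this sketch; the hostnames and ports are placeholders, and the sql_migrate schema shown is an assumption based on the provider docs linked above, not a verified interface:

# Open an SSH tunnel through the bastion before any SQL runs
resource "null_resource" "db_tunnel" {
  provisioner "local-exec" {
    # -f: background, -N: no remote command; forwards localhost:15432 to the DB endpoint
    command = "ssh -f -N -L 15432:mydb.internal.example.com:5432 [email protected]"
  }
}

resource "sql_migrate" "schema" {
  depends_on = [null_resource.db_tunnel]

  # Assumed shape of the paultyng/sql migrate resource's migration block
  migration {
    id   = "create_users_table"
    up   = "CREATE TABLE users (id serial PRIMARY KEY);"
    down = "DROP TABLE users;"
  }
}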

OliverS avatar
OliverS

(and would need different solution if we did not already have a bastion and did not want to create one just for this)

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

yeah, network topology would need to be accounted for. Digging around in the code a bit, I saw:

func completeMigrationsAttribute() *tfprotov5.SchemaAttribute {
	return &tfprotov5.SchemaAttribute{
		Name:     "complete_migrations",
		Computed: true,
		Description: "The completed migrations that have been run against your database. This list is used as " +
			"storage to migrate down or as a trigger for downstream dependencies.",
		DescriptionKind: tfprotov5.StringKindMarkdown,
...

in the internal/provider/resource_migrate_common.go code so it does appear it’s keeping track somewhere in the code

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

looks like it copies internally from the migrations struct to complete_migrations while processing, so I’m assuming it would then compare the latter against the former to decide if it should be executed again or not due to change. Whether that accounts for the up/down SQL changing or not I’ve not determined yet

Alec Fong avatar
Alec Fong

Hello, I am creating and deleting EKS clusters using the https://github.com/cloudposse/terraform-aws-eks-cluster complete example and have run into this error multiple times. Is there any way to resolve this properly? My workaround has been terraform state rm module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes .

module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
│ 
│   with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│  115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
GitHub - cloudposse/terraform-aws-eks-cluster: Terraform module for provisioning an EKS clusterattachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Zach avatar

if you get that when deleting yah that’s about the only solution that I’ve found

Alec Fong avatar
Alec Fong

got it thanks

Zach avatar

@DaniC (he/him) that’s an entirely different module though

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) do we have any work arounds for this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that is a very common error, and it could be anything related to kubeconfig

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. can’t connect to the cluster to load KUBECONFIG, and then the aws provider tries to connect to the local cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also depends on the module version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Alec Fong did you try to use the latest example (which uses the latest module) https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete ?

terraform-aws-eks-cluster/examples/complete at master · cloudposse/terraform-aws-eks-clusterattachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the example gets executed by terratest automatically every time we open a PR, and it gets deleted after that (so it’s working, but prob does not cover all the use-cases you could encounter)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this latest PR https://github.com/cloudposse/terraform-aws-eks-cluster/pull/138 was working ok for both creating the cluster and destroying it after the test

Update to use the Security Group module by aknysh · Pull Request #138 · cloudposse/terraform-aws-eks-clusterattachment image

what Update to use the Security Group module Add migration doc Update README and GitHub workflows why Standardize on using https://github.com/cloudposse/terraform-aws-security-group in all modu…

Zach avatar

That’s pretty much how mine is setup (minus the vpc setup and adding in some security group and IAM stuff) but I’ve had the aws-auth issue on destroy very recently, on 0.44.0 of the cluster and 0.27.0 of the node-group modules

Zach avatar

the recent versions did solve the random “can’t find the aws-auth” issue on plan/apply though

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the error you see is (almost always) b/c the provider could not access the cluster to load KUBECONFIG to get the keys and creds, and by default the provider (if you check the Go code) will try to access a (non-existing) local cluster (dial tcp [::1]:80)
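
A common way to avoid the localhost fallback is to configure the kubernetes provider directly from EKS data sources instead of a kubeconfig file. This is a hedged sketch; the eks_cluster_id output name is an assumption about the module’s interface:

data "aws_eks_cluster" "this" {
  name = module.eks_cluster.eks_cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

provider "kubernetes" {
  # Credentials come from the cluster itself, so the provider never falls back to localhost
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}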

Zach avatar

sure, just only see that on a destroy now. I don’t know if terraform is removing something in an unexpected sequence or what

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what TF version are you using?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(destroy was always an issue with TF with count logic, it was extremely bad in TF 0.13)

Zach avatar

1.0.11 (held off on 1.1 after encountering some very nasty bugs in it)

2021-12-30

mikesew avatar
mikesew

Q: Does anybody have any examples of AWS RDS Events to Pagerduty terraform code/modules? Lot of examples I see are RDS cloudwatch alarms, but not necessarily RDS events. Am I supposed to search for “SNS to Pagerduty” instead?

Zach avatar

right, once the event is on SNS you just need to wire that to PagerDuty. Doesn’t matter that it’s an RDS or EC2 or Lambda event at that point
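
A minimal sketch of that wiring (the names and the PagerDuty integration URL are placeholders):

resource "aws_sns_topic" "rds_events" {
  name = "rds-events"
}

# Publish RDS instance events (failures, failovers, etc.) to the topic
resource "aws_db_event_subscription" "this" {
  name             = "rds-events-to-pagerduty"
  sns_topic        = aws_sns_topic.rds_events.arn
  source_type      = "db-instance"
  event_categories = ["failure", "failover", "availability"]
}

# PagerDuty's SNS/CloudWatch integration endpoint confirms the subscription itself
resource "aws_sns_topic_subscription" "pagerduty" {
  topic_arn              = aws_sns_topic.rds_events.arn
  protocol               = "https"
  endpoint               = "https://events.pagerduty.com/integration/PLACEHOLDER/enqueue"
  endpoint_auto_confirms = true
}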
