#terraform (2022-12)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-12-02

Soren Jensen avatar
Soren Jensen

I could use a bit of help here. I’m trying to create a list of buckets my antivirus module is using. The list should contain all upload buckets + 2 extra buckets. I’m using the cloudposse module for creating the upload buckets

# Create the upload_bucket module
module "upload_bucket" {
  for_each = toset(var.upload_buckets)

  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "3.0.0"
  enabled = true

  bucket_name = random_pet.upload_bucket_name[each.key].id
}

I’m in the same module trying to create this list for buckets to scan:

# Use the concat() and values() functions to combine the lists of bucket IDs
av_scan_buckets = concat([
  module.temp_bucket.bucket_id,
  module.db_objects_bucket.bucket_id
  ],
  [LIST OF UPLOAD BUCKETS]
)

As an output this works

value = { for k, v in toset(var.upload_buckets) : k => module.upload_bucket[k].bucket_id }

Gives me

upload_bucket_ids = {
  "bucket1" = "upload-bucket-1"
  "bucket2" = "upload-bucket-2"
}

But if I use the same as input to the list it obviously doesn’t work, as I need to change the map to a list. Anyone who can tell me how to get this working?

Denis avatar

which input list are you exactly referring to?

Soren Jensen avatar
Soren Jensen

What do you mean?

Denis avatar

you say “if I use the same as input to the list”

Denis avatar

which list is that ?

Soren Jensen avatar
Soren Jensen

I tried to use the same for loop as for the output, where I got “LIST OF UPLOAD BUCKETS”

Soren Jensen avatar
Soren Jensen

But it generates a map, not a list with only the bucket IDs

Denis avatar

are you saying this line doesn’t work for you?

bucket_name = random_pet.upload_bucket_name[each.key].id

if yes you can use something like

values(module.my_module.upload_bucket_ids)
Denis avatar

of course change the my_module

Soren Jensen avatar
Soren Jensen

I tried it like this with and without the tolist()

av_scan_buckets = concat([
  module.temp_bucket.bucket_id,
  module.db_objects_bucket.bucket_id
  ],
  tolist(values(module.upload_bucket.bucket_id))
)

Got this error:

│ Error: Unsupported attribute
│ 
│   on antivirus.tf line 49, in module "s3_anti_virus":
│   49:     tolist(values(module.upload_bucket.bucket_id))
│     ├────────────────
│     │ module.upload_bucket is object with 2 attributes
│ 
│ This object does not have an attribute named "bucket_id".

Makes sense, module.upload_bucket has 2 objects as I’m creating 2 buckets with for_each = toset(var.upload_buckets)

Soren Jensen avatar
Soren Jensen

Solved, this works

av_scan_buckets = concat([
  module.temp_bucket.bucket_id,
  module.db_objects_bucket.bucket_id
  ],
  [ for k in toset(var.upload_buckets) : module.upload_bucket[k].bucket_id ]
)
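
An equivalent form, for reference (a sketch that relies on module.upload_bucket being the for_each’d module map from above): values() turns the map of module instances into a list, and the splat collects each bucket_id.

av_scan_buckets = concat([
  module.temp_bucket.bucket_id,
  module.db_objects_bucket.bucket_id
  ],
  values(module.upload_bucket)[*].bucket_id
)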
OliverS avatar
OliverS

How does terrateam compare to atlantis? The comparison page by terrateam shows it has several important additional capabilities over atlantis, but I’m looking for something a little deeper: https://terrateam.io/docs/compare

RB avatar

I think they have drift detection and some other features but afaik it’s a saas Atlantis offering

OliverS avatar
OliverS

i thought atlantis was a saas… you have to install atlantis on your own machines? (real or virtual)

RB avatar

Atlantis is self hosted

OliverS avatar
OliverS

Ah good to know

2022-12-05

Gabriel avatar
Gabriel

Does anybody know of a page listing all AWS resource naming restrictions?

Kurt Dean avatar
Kurt Dean

I’m not aware of a centralized place. Depending on why you’re looking for this info, another piece to consider is that you may be using name_prefix for some resources (which can have a much smaller length limit, for example).

1
Gabriel avatar
Gabriel

“can have a much smaller length limit” if prefix is used. do you have a docs link and/or example for this?

Kurt Dean avatar
Kurt Dean

I’m on mobile right now, but if you look at AWS load balancers you’ll find that in Terraform the prefix can currently be at most 6 characters (last I checked).

It’s useful to supply a prefix instead of a complete name because LB names are unique (per account?), and you typically want to spin up a new load balancer before tearing down your old one with Terraform lifecycle create-before-destroy.

Gabriel avatar
Gabriel

Yes. I do like prefixes. Thanks for the tip.

Gabriel avatar
Gabriel

I wonder how Cloud Posse and null/label deal with these restrictions. Where do you make sure that the null/label output id fits as name for a resource and do you use prefixes or do they become irrelevant with null/label? @Erik Osterman (Cloud Posse)

jonjitsu avatar
jonjitsu

Is there something like TF_PLUGIN_CACHE_DIR but for modules downloaded from GitHub? I’ve got 80 services using the same module (I copy-pasted the source = “”) and terraform redownloads it each time.

Alex Jurkiewicz avatar
Alex Jurkiewicz

no, sadly

2022-12-06

Ron avatar

what you guys recommend to store state on-prem ?

jsreed avatar

3.5” floppies

Ron avatar

dont have floppies

loren avatar

consul?

loren avatar

gitlab?

1
Ron avatar

I’ve never used consul. sounds new to me. I’ll try that

Ron avatar

thanks
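
For reference, a minimal Consul backend config looks roughly like this (the address and path are placeholders):

terraform {
  backend "consul" {
    address = "consul.lab.local:8500" # placeholder address
    scheme  = "http"
    path    = "lab/terraform/state"   # key under which state is stored
  }
}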

Ron avatar

some context: that’s for my personal lab. I’m creating vms on libvirt/kvm and installing rke2 on that. Still learning terraform

loren avatar

i mean, if it’s just you and you’re keeping it all local, you can encrypt the state with sops and commit it to your repo lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform Cloud state storage is free

Alex Jurkiewicz avatar
Alex Jurkiewicz

on-prem? It probably depends what technologies you have available. If you have something that can emulate a local filesystem, the default backend would be simplest

mrwacky avatar
mrwacky


Terraform Cloud state storage is free
Is it faster/slower than S3? S3 feels really slow to me

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


some context, thats for my personal lab.
oh, right, TFC is not on-prem, but it’s totally suitable for a personal lab, provided it’s not airgapped.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We don’t use TFC for state storage, so I cannot comment. We use S3, and any slowness seems to be more closely tied to the number of resources under management within any given root module and the number of parallel threads.

2022-12-07

Tushar avatar

Hi Team,

I’m trying to follow the https://github.com/cloudposse/terraform-aws-vpc-peering module to create VPCs and set up peering between them.

I’m following the example in the “/examples/complete” directory, and while generating the plan I get the following error:

Error: Invalid count argument
│ 
│   on ../../main.tf line 62, in resource "aws_route" "requestor":
│   62:   count                     = module.this.enabled ? length(distinct(sort(data.aws_route_tables.requestor.0.ids))) * length(local.acceptor_cidr_blocks) : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the count depends on.

and getting same for resource "aws_route" "acceptor".

I’m looking for help to understand the following items:

  1. What should I improve?
  2. Is there a different process to use this module?
  3. Is there anything I’m missing?
cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account.

RB avatar

Seems like a bug with our example. Our tests use the terraform version minimum set in the examples versions.tf file.

I wonder if you’re seeing this issue since you may be using a newer version locally?

cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account.

RB avatar

Try using 0.13.7 terraform, just to see if you can reproduce this issue the way our tests would

https://github.com/cloudposse/terraform-aws-vpc-peering/blob/c9316cb2f9cd6e472808f0c95ab930e62f18de34/examples/complete/versions.tf#L2
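
As a stopgap, the two-phase apply that the error message itself suggests would look like this (the module addresses are placeholders for whatever the example names them):

# Phase 1: create only the resources the count depends on
terraform apply -target=module.requestor_vpc -target=module.acceptor_vpc
# Phase 2: full apply, now that the route table IDs are known
terraform apply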

Release notes from terraform avatar
Release notes from terraform
03:13:30 PM

v1.4.0-alpha20221207 1.4.0 (Unreleased) UPGRADE NOTES:

config: The textencodebase64 function when called with encoding “GB18030” will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.

config: The textencodebase64 function when called with encoding “GBK” or “CP936” will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this…

Release v1.4.0-alpha20221207 · hashicorp/terraform

2022-12-08

Krushna avatar
Krushna

Hi, I am trying to use cloudposse/terraform-aws-transit-gateway module to connect 2 different VPC on different regions, Are there any examples. The multiaccount example posted (https://github.com/cloudposse/terraform-aws-transit-gateway/tree/master/examples/multi-account) is within the same region.

Elleval avatar
Elleval

Hey, I haven’t used this module before so I’m not 100% sure, but what you’re after may be configured through the providers: https://github.com/cloudposse/terraform-aws-transit-gateway/search?q=provider&type=code

https://github.com/cloudposse/terraform-aws-transit-gateway/search?q=aws.prod

Joe Perez avatar
Joe Perez

Hello All! I recently have had to work with AWS PrivateLink and found the documentation to be a bit lacking, so I created a blog post about my experience with the technology. I’m also planning a follow-up post with a terraform example. Has anyone had a chance to use AWS PrivateLink? And have you leveraged other technologies to accomplish the same thing?

https://www.taccoform.com/posts/aws_pvt_link_1/

AWS PrivateLink Part 1

Overview Your company is growing and now you have to find out how to allow communication between services across VPCs and AWS accounts. You don’t want to send traffic over the public Internet and maintaining VPC Peering isn’t a fun prospect. Implementing an AWS supported solution is the top priority and AWS PrivateLink can be a front-runner for enabling your infrastructure to scale. Lesson What is AWS PrivateLink? PrivateLink Components Gotchas Next Steps What is AWS PrivateLink?

1

2022-12-09

Elleval avatar
Elleval

Hi Everyone, I’m hitting the following issue when using cloudposse/terraform-aws-alb and specifying access_logs_s3_bucket_id = aws_s3_bucket.alb_s3_logging.id.

Elleval avatar
Elleval
Error: Invalid count argument
│ 
│   on .terraform/modules/alb.access_logs/main.tf line 2, in data "aws_elb_service_account" "default":
│    2:   count = module.this.enabled ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
Elleval avatar
Elleval
data "aws_elb_service_account" "default" {
Elleval avatar
Elleval

But module.this.enabled value is available in the defaults so I’m not sure why it’s complaining.

RB avatar

That is odd. This error is everyone’s least favorite…

Could you open a ticket with all of your inputs and a sample of your hcl? It would help if it was possible to reproduce it

RB avatar

This issue comes up when you pass another uncreated resource’s attribute into a module. It’s the first time I’ve seen it for the module.this.enabled flag tho

Elleval avatar
Elleval

thanks, I’ve raised a bug here: https://github.com/cloudposse/terraform-aws-alb/issues/126 hope that’s OK.

Elleval avatar
Elleval

Yeah, it works if the bucket is created before you run terraform apply with logging enabled and the custom bucket access_logs_s3_bucket_id = aws_s3_bucket.alb_s3_logging.id

Elleval avatar
Elleval

@RB this works when the literal name of the S3 bucket is used. I’ve updated the issue.

RB avatar

This issue is common and one of the most hated in terraform.

one reason we haven’t hit it is because we create a new logging bucket per alb by not specifying the access_logs_s3_bucket_id and allowing the module to create the bucket for you

And if we want a shared s3 bucket across ALBs and pass it in, we would create that s3 bucket in its own root terraform directory and apply it; then go to the root terraform directory for our ALB, retrieve the s3 bucket from a data source, and the ALB would apply correctly

RB avatar

One question that comes up, if you’re creating an s3 bucket for logging and passing it in to the alb in the same root terraform directory, then why not let the module handle it?

Elleval avatar
Elleval

Thanks, @RB. Basically, I’d like to have fine-grained control over the bucket name. I did intend to let the module handle it but have 2 environments named prod in different regions and ended up with a name clash. Yes, hindsight… :)

RB avatar

You should take advantage of our null label. All of the modules use it.

https://github.com/cloudposse/terraform-null-label

{namespace}-{environment}-{stage}-{name}-{attributes}
Namespace=acme
Environment=use1
Stage=dev
Name=alb
Attributes=[blue]

These will then get joined by a dash to form a unique name

acme-use1-dev-alb-blue
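
A minimal sketch of using the label module directly (the version shown is illustrative; pin to a current release):

module "alb_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "acme"
  environment = "use1"
  stage       = "dev"
  name        = "alb"
  attributes  = ["blue"]
}

# module.alb_label.id renders as "acme-use1-dev-alb-blue"; pass it (or the
# module's context) to the bucket so each environment gets a unique name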
Elleval avatar
Elleval

Also, when enabling logs (https://github.com/cloudposse/terraform-aws-alb#input_access_logs_enabled), is it possible to give the s3 bucket that is created a custom name? It seems to inherit the labels of the ALB. S3 buckets have a global namespace, which is causing a clash across environments that are in different accounts/regions but use the same seed variable.

2022-12-11

2022-12-12

aimbotd avatar
aimbotd

Regarding the node groups module. Let’s say I’m running the instance type m6id.large, which is provisioned with 118G of SSD ephemeral storage. In order to use that in the cluster, what should I be doing? Do I provision it via the block_device_mappings? Is it already available to pods in the cluster?

2022-12-13

Jonas Steinberg avatar
Jonas Steinberg

If anyone has any preferred ways of terraforming secrets I’d love to hear about it. Right now I’m creating the secret stubs (so no sensitive data) in terraform and allowing people to clickops the actual secret data from UI; I’m also creating secrets through APIs where possible, e.g. datadog, circleci, whatever, and then reading them from those products and writing them over to my secret backend. I’m using a major CSP secret store; I am not using Vault and am not going to use Vault. I am aware of various things like SOPS to some extent. I’m just curious if anyone has any ingenious ideas for allowing for full secret management via terraform files; something like using key cryptography locally and then committing encrypted secrets in terraform might be a bit advanced for my developers. But fundamentally I’m open to anything slick. Thank you!

loren avatar

i haven’t seen a “great” solution yet… i feel things like sops are the best way, if only because the secrets remain encrypted in terraform state. especially since remote state is synced to local disk in plaintext…

loren avatar

otherwise, if you can, push secrets to a secret store out of band of terraform, use the “path” to the secret in terraform, and pull the secret from the store in your app

RB avatar

Hmm, I did not know you can keep secrets encrypted in tf state using sops.

Do you have an example or docs i could look over?

loren avatar

i didn’t mean to imply it was “integrated”… to keep them encrypted, you have to reference only the encrypted strings in tf configs. so basically tfstate becomes the secret store, and you pass the encrypted string to your app, and decrypt the string in the app
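
A rough sketch of that pattern (the variable name and SSM path are hypothetical): terraform only ever handles the ciphertext, so state never contains the plaintext secret.

variable "db_password_sops" {
  description = "sops-encrypted ciphertext; never decrypted by terraform"
  type        = string
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db_password_sops" # hypothetical path
  type  = "String"
  value = var.db_password_sops # stays encrypted in tfstate; the app decrypts at runtime
}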

David Karlsson avatar
David Karlsson
segmentio/chamber

CLI for managing secrets

RB avatar

@David Karlsson chamber saves to ssm/secrets manager but if you use a data source, it will still grab the secret and put it in plain-text in (an albeit encrypted) tfstate

edit: I use chamber today and like it a lot. Did not mean to sound dismissive. Thank you for sharing.

RB avatar

@loren how do you structure your sops today ? do you save all your secrets in ssm/secrets manager, then pull it down, encrypt it with sops, and then save the encrypted string into terraform? then when you deploy your app, your app knows to retrieve the key (kms?) to decrypt the sops key for the app?

1
David Karlsson avatar
David Karlsson

I haven’t done it personally, but segment describes a slightly different way of using chamber in prod, at least if you run containers… 1 sec

David Karlsson avatar
David Karlsson

In order to populate secrets in production, chamber is packaged inside our docker containers as a binary and is set as the entrypoint of the container

https://segment.com/blog/the-right-way-to-manage-secrets/

1
loren avatar

no no, sorry, i haven’t had cause to use sops myself in this manner. i keep waiting for something native in terraform. i guess in general the only idea i’ve seen is to keep the secrets out of terraform, by encrypting them with something like sops, or to use an external secret store and manage only the path to the secret in terraform. either way, you are handling the secret itself in your app, instead of passing the secret value to the app with terraform

1
Michael Dizon avatar
Michael Dizon
#22 update(ssm_log_bucket): use source_policy_documents

what

• using source_policy_documents for bucket policy

why

• bucket was created, but no policy was applied

references

• Use closes #21 #21

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Michael Dizon thank you. I’ve left a comment if you can take a look at it


Michael Dizon avatar
Michael Dizon

on it

Michael Dizon avatar
Michael Dizon

@Andriy Knysh (Cloud Posse) do the tests need to be kicked off manually?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

started the tests

1
Michael Dizon avatar
Michael Dizon

strange, it’s still not picking up that ami

Michael Dizon avatar
Michael Dizon

i updated the PR to use an ami from us-east-2

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

approved and merged, thanks again

Michael Dizon avatar
Michael Dizon

np!

shamwow avatar
shamwow

hello, had a question about validations for list(string) variables. I was trying this but it doesn’t seem to work:

variable "dns_servers" {
  description = "List of DNS Servers, Max 2"
  type        = list(string)
  default     = ["10.2.0.2"]

  validation {
    condition = length(var.dns_servers) > 2
    error_message = "Error: There can only be two dns servers MAX"
  }
}

but when I run it, it just errors on that rule. Probably something obvious but I’m not able to find any solution

RB avatar

Do you need to change length(var.dns_servers) > 2 to length(var.dns_servers) <= 2 ?

RB avatar

nope, i think im mistaken.

RB avatar

I always get the condition confused.

RB avatar
Input Variables - Configuration Language | Terraform | HashiCorp Developerattachment image

Input variables allow you to customize modules without altering their source code. Learn how to declare, define, and reference variables in configurations.

RB avatar

What is the current default value youre sending as an input ?

shamwow avatar
shamwow

the exact one above

shamwow avatar
shamwow

like I didn’t add another dns server or different ones… if that’s what you mean?

RB avatar

sorry, I mean I see that the default is a single dns, but what are you passing as the input ?

RB avatar

are you just using the default and the default isn’t working?

shamwow avatar
shamwow

yes, thats it

RB avatar

the other issue i see is that the validation should probably be length(var.dns_servers) <= 2

RB avatar

haha, i think I had it right in my first comment. You want to allow only dns servers with a count of 1 or 2

shamwow avatar
shamwow

correct yes!

RB avatar

try the suggested change for the validation and it should work

shamwow avatar
shamwow

ok… lemme try that

1
shamwow avatar
shamwow

Worked! oh man I feel so dumb… ok… ty so much!

np1
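For reference, the working variable block ends up as:

variable "dns_servers" {
  description = "List of DNS Servers, Max 2"
  type        = list(string)
  default     = ["10.2.0.2"]

  validation {
    # the condition states what valid input looks like
    condition     = length(var.dns_servers) <= 2
    error_message = "Error: There can only be two dns servers MAX"
  }
}
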
RB avatar

it’s tricky. I suppose we can think of it as: the validation condition needs to be what we expect. If it’s true, continue. If it’s false, then the error message will be thrown.

shamwow avatar
shamwow

right… the end state we want in other words ya

1
RB avatar

I often think I need the validation to be what I do NOT expect in order to get the error condition. I have to keep reminding myself that the validation is the if (true) {} block and the error_message is the else {} block

1
Jonas Steinberg avatar
Jonas Steinberg

Is creating api keys, application keys, private key pairs, and similar product or cloud resources for which there are terraform resources a bad practice? It winds secrets up in the state, but why would providers create these resources if it were an antipattern? Note here I am not talking about creating plaintext secrets or something of that nature – obviously that is nuts. I have some workflows that involve creating key pairs and then reading them into other places.

I don’t think it’s possible to avoid having secrets in state, is it?

Sergey avatar

Good afternoon, I encountered a small problem (bug). Can anyone help fix it and commit the fix? I plan to use this code in production.

#37 The policy is created without the asterisk symbol

The policy is created simply with the ARN, without the “:*” suffix, which is necessary to create the correct policy for the role.
Without this “:*” suffix, the policy is created, but it does not work correctly.
This error was discovered when I tried to create a cloudwatch group in the cloudtrail module.
I got the response “Error: Error updating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Verify in IAM that the role has adequate permissions.”
After studying the code, I realized that I need to add the “:*” construct in a couple of lines.
My solution looks like this; I need to replace these lines in the file:

This line:
join("", aws_cloudwatch_log_group.default.*.arn),
should be replaced by
"${join("", aws_cloudwatch_log_group.default.*.arn)}:*"
You need to do this in both identical lines.

Perhaps you can suggest a better solution; I’m new to Terraform.

2022-12-14

vicentemanzano6 avatar
vicentemanzano6

Hello, I have this lb listener rule

resource "aws_lb_listener_rule" "https_443_listener_rule_2" {
  listener_arn = aws_lb_listener.https_443.arn
  priority     = 107

  action {
    type             = "forward"
     forward {
        target_group {
          arn    = aws_lb_target_group.flo3_on_ecs_blue_tg.arn
          weight = 100
        }

        target_group {
          arn    = aws_lb_target_group.flo3_on_ecs_green_tg.arn
          weight = 0
        }

         stickiness {
           enabled  = false
           duration = 1
         }
        }
  
  }

I currently use codedeploy to make blue/green deployments into ECS. However, after a deployment, the weights of each target group change, and terraform wants to change them back to the scripted configuration, which makes traffic go to a target group with no containers. What is the best way to tackle this so that, regardless of which target group currently has weight 100, terraform does not try to update it?

Joe Perez avatar
Joe Perez

check out this tutorial, the TLDR is that they use the -var flag to switch between blue and green apps https://developer.hashicorp.com/terraform/tutorials/aws/blue-green-canary-tests-deployments

Use Application Load Balancers for Blue-Green and Canary Deployments | Terraform | HashiCorp Developerattachment image

Configure AWS application load balancers to release an application in a rolling upgrade with near-zero downtime. Incrementally promote a new canary application version to production by building a feature toggle with Terraform.

1
vicentemanzano6 avatar
vicentemanzano6

Thank you!
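
One common approach to the weight-drift problem above (a sketch, assuming CodeDeploy owns the weights after the initial creation) is to ignore drift on the action block:

resource "aws_lb_listener_rule" "https_443_listener_rule_2" {
  listener_arn = aws_lb_listener.https_443.arn
  priority     = 107

  action {
    # ... forward config as above ...
    type = "forward"
  }

  lifecycle {
    # CodeDeploy shifts target group weights during blue/green deployments,
    # so don't try to reconcile them back to the scripted values
    ignore_changes = [action]
  }
}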

ANILKUMAR K avatar
ANILKUMAR K

How do I set up an AWS MSK cluster using SASL/IAM authentication with a basic MSK cluster in Terraform?

ANILKUMAR K avatar
ANILKUMAR K

Could you please help me in configuring this

2022-12-15

ANILKUMAR K avatar
ANILKUMAR K

Actually we have created the cluster and we are using SASL/IAM authentication. Also I have provided the following policies in the instance profile role: kafka-cluster:Connect, kafka-cluster:Describe*, kafka-cluster:ReadData, kafka-cluster:Alter*, kafka-cluster:Delete*, kafka-cluster:Write*

Ports are opened on the EC2 instance and the cluster, i.e. 9092, 9098, 2181, 2182, both inbound and outbound

We are trying to connect with the role and we are running the following command: aws kafka describe-cluster --cluster-arn <arn>

Getting an error like: “Connection was closed before we received a valid response from endpoint.”

shamwow avatar
shamwow

sounds like a connectivity issue and less a terraform issue, have you tried in the AWS channel?

David Karlsson avatar
David Karlsson

Security groups: what ingress and egress do you have?

Jonas Steinberg avatar
Jonas Steinberg

Does anyone have a good hack, whether it be a script or a tool (but probably not something crazy like “use TFC or Spacelift”), for preventing cross-state blunders in multi-account state management? In other words, a good hack or script or tool for validating that, for example, a plan that has been generated is about to be applied to the correct cloud account id (or similar)? Thanks.

Denis avatar

the farthest I’ve gone in this direction is setting the account and region in the cloud provider config block in terraform. And terraform errors out if you are running it anywhere else.
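
Concretely, that guard looks like this (the account ID is a placeholder):

provider "aws" {
  region = "us-east-1"

  # terraform refuses to run if the active credentials resolve to a
  # different account, catching cross-account blunders before any apply
  allowed_account_ids = ["111111111111"] # placeholder
}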

Chris Dobbyn avatar
Chris Dobbyn
State: Locking | Terraform | HashiCorp Developerattachment image

Terraform stores state which caches the known state of the world the last time Terraform ran.

Jonas Steinberg avatar
Jonas Steinberg

@Chris Dobbyn state locking doesn’t prevent state from the wrong account from being written to state storage, it just prevents collisions that would otherwise occur when concurrent state commits were happening.

Chris Dobbyn avatar
Chris Dobbyn

Yep, I misread; Denis’s answer is correct.

Jonas Steinberg avatar
Jonas Steinberg

This is a different ask.

2022-12-16

deepakshi avatar
deepakshi

:wave: Hello, team!

deepakshi avatar
deepakshi

I’m getting this error, can anyone suggest how to resolve it?

deepakshi avatar
deepakshi

│ Error: CacheParameterGroupNotFound: CacheParameterGroup ps-prod-redis-cache not found. │ status code: 404, request id: ccbf450d-4b2d-410e-95a9-2797c6d184d2 │

Damian avatar

Hi team. I wonder if there is a way to provide activemq xml config using this module? https://registry.terraform.io/modules/cloudposse/mq-broker/aws/latest . I am new to terraform so I might be doing something wrong, but basically I’d like to modify some destination policies like this

<destinationPolicy> ... </destinationPolicy>

If I used barebones aws_mq_broker I would do it like this:

resource "aws_mq_broker" "example" {
  broker_name = "example"

  configuration {
    id       = aws_mq_configuration.example.id
    revision = aws_mq_configuration.example.latest_revision
  }
...
}
resource "aws_mq_configuration" "example" {
  description    = "Example Configuration"
  name           = "example"
  engine_type    = "ActiveMQ"
  engine_version = "5.15.0"

  data = <<DATA
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<broker xmlns="http://activemq.apache.org/schema/core">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <policyEntry queue=">" gcInactiveDestinations="true" inactiveTimoutBeforeGC="600000" />
            </policyEntries>
        </policyMap>
    </destinationPolicy>
</broker>
DATA
}

Can I attach such configuration when I use Cloudposse mq-broker module?

2022-12-17

Alcp avatar

I am running into an error with the helm module, installing the calico-operator

│ Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "default" namespace: "" from "": no matches for kind "APIServer" in version "operator.tigera.io/v1"
│ ensure CRDs are installed first, resource mapping not found for name: "default" namespace: "" from "": no matches for kind "Installation" in version "operator.tigera.io/v1"
│ ensure CRDs are installed first]
│ 
│   with module.calico_addon.helm_release.this[0],
│   on .terraform/modules/calico_addon/main.tf line 58, in resource "helm_release" "this":
│   58: resource "helm_release" "this" {
│ 

here is the root module

module "calico_addon" {
  source  = "cloudposse/helm-release/aws"
  version = "0.7.0"

  name                 = "" # avoids hitting length restrictions on IAM Role names
  chart                = var.chart
  description          = var.description
  repository           = var.repository
  chart_version        = var.chart_version
  kubernetes_namespace = join("", kubernetes_namespace.default.*.id)
  wait                 = var.wait
  atomic               = var.atomic
  cleanup_on_fail      = var.cleanup_on_fail
  timeout              = var.timeout
  create_namespace     = false
  verify               = var.verify

  iam_role_enabled            = false
  eks_cluster_oidc_issuer_url = replace(module.eks.outputs.eks_cluster_identity_oidc_issuer, "https://", "")

  values = compact([
    # hardcoded values
    yamlencode(yamldecode(file("${path.module}/resources/values.yaml"))),
    # standard k8s object settings
    yamlencode({
      fullnameOverride = module.this.name,
      awsRegion        = var.region
      autoDiscovery = {
        clusterName = module.eks.outputs.eks_cluster_id
      }
      rbac = {
        serviceAccount = {
          name = var.service_account_name
        }
      }
    }),
    # additional values
    yamlencode(var.chart_values)
  ])

  context = module.introspection.context
}

2022-12-18

Simon Weil avatar
Simon Weil

:wave: Hello, team!

Great to be here and thank you for useful Terraform modules!

1
Simon Weil avatar
Simon Weil

I am using the https://github.com/cloudposse/terraform-aws-sso/ module and have 2 issues:

  1. Depends on issue, opened a PR for it: https://github.com/cloudposse/terraform-aws-sso/pull/33
  2. Deprecation warnings for AWS provider v4: https://github.com/cloudposse/terraform-aws-sso/issues/34 As the first issue has got no attention, I did not open a PR for the second one… Any chance to get a review for the first one and a fix for the second one? I’m willing to open a PR for the second one if it will get attention

Please don’t see my message as criticism, I’m very grateful for your open source work and modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Simon Weil thanks for the PR, please see the comment

Simon Weil avatar
Simon Weil

Thank you, will do it tomorrow

Simon Weil avatar
Simon Weil

Updated the PR as required, although it did nothing. And opened a new PR for the deprecation warnings

Simon Weil avatar
Simon Weil

Is there anything the PRs are still waiting for? can they get merged?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#33 feat: allow to safly depend on other resources to read from the identity store

what

This adds a workaround for the depends_on issue with modules and data sources.

• Added a “wait for” variable
• Added a null_resource to use for depends_on for the data resource

If the PR is acceptable, we can add an example usage to avoid the recreation of resources.

why

• When creating a user group via an external source that syncs with AWS SSO, we need to wait for it to finish before reading the groups from the identity store
• Adding a depends_on to a module can create a situation where every change to the dependee will recreate ALL the resources of the module, which is super bad

In my case I have the following code:

data "okta_user" "this" {
  for_each = toset(local.users_list)

  user_id = each.value
}

resource "okta_group" "this" {
  for_each = local.accounts_list

  name        = each.value.group_name
  description = "description"
}

resource "okta_group_memberships" "this" {
  for_each = local.accounts_list

  group_id = okta_group.this[each.key].id
  users    = [for u in each.value.users : data.okta_user.this[u].id]
}


module "permission_sets" {
  source  = "cloudposse/sso/aws//modules/permission-sets"
  version = "0.6.1"

  permission_sets = [
    for a in local.accounts_list : {
      name               = a.permission_set_name
      description        = "some desc"
      relay_state        = ""
      session_duration   = "PT2H"
      tags               = local.permission_set_tags
      inline_policy      = ""
      policy_attachments = ["arn:aws:iam::aws:policy/XXXXX"]
    }
  ]
}

module "account_assignments" {
  source  = "cloudposse/sso/aws//modules/account-assignments"
  version = "0.6.1"

  depends_on = [
    okta_group.this,
  ]

  account_assignments = concat([
    for a in local.accounts_list : {
      account             = a.id
      permission_set_arn  = module.permission_sets.permission_sets[a.permission_set_name].arn
      permission_set_name = "${a.name}-${a.role}"
      principal_type      = "GROUP",
      principal_name      = a.group_name
    }
  ])
}

Whenever I need to change the local.accounts_list it causes ALL the assignments to be recreated, disconnecting users and causing mayhem…

With the proposed change I need to change the account_assignments module and now I can add or remove accounts safely:

module "account_assignments" {
  source = "path/to/terraform-aws-sso/modules/account-assignments"

  for_each = local.accounts_list

  wait_group_creation = okta_group.this[each.value.name].id

  account_assignments = [
    {
      account             = each.value.id
      permission_set_arn  = module.permission_sets.permission_sets[each.value.permission_set_name].arn
      permission_set_name = "${each.value.name}-${each.value.role}"
      principal_type      = "GROUP",
      principal_name      = each.value.group_name
    }
  ]
}

references

https://itnext.io/beware-of-depends-on-for-modules-it-might-bite-you-da4741caac70
https://medium.com/hashicorp-engineering/creating-module-dependencies-in-terraform-0-13-4322702dac4a

Simon Weil avatar
Simon Weil

Thanks, will attend to it next week

Simon Weil avatar
Simon Weil

I tried the requested name change but it failed. Please tell me what the next step is; what do you want to do next?

Simon Weil avatar
Simon Weil

see my comment in the PR

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please address the last comment and it should be ok, thank you

Simon Weil avatar
Simon Weil

Done, tested and pushed, tell me if anything else is needed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

approved and merged, thanks

Simon Weil avatar
Simon Weil

Thank you!

Simon Weil avatar
Simon Weil

Next PR is ready for review/merge

Simon Weil avatar
Simon Weil

Any thoughts on the second PR?

Simon Weil avatar
Simon Weil

Great, thank you

2022-12-19

OliverS avatar
OliverS

Odd bug (I think):

I have a stack that has an ec2 instance with an AMI taken from the stock of AWS public AMIs. There is a data source in the stack which checks for latest AMI based on some criteria. I have been updating the stack every few weeks and I can see that when a newer AMI is available from AWS, the terraform plan shows a replacement of the EC2 instance will occur. All good so far.

Today I changed the aws_instance.ami attribute to override it manually with “ami-xxxxx” (an actual custom AMI that I created), as part of some testing. Oddly, terraform plan did NOT show that the ec2 instance would be replaced. I added some outputs to confirm that my manually overridden value is seen by the var used for aws_instance.ami.

Any ideas what might cause this?

I worked around the issue by tainting the server, and in that case the plan showed that the ami was going to be changed. But I’m still puzzled as to why AMI ID change works sometimes (in this case AWS public AMIs) but not always (here for custom AMIs).

Paula avatar
data "aws_ami" "latest" {
  most_recent = true
  owners      = [var.owner]

  filter {
    name   = "name"
    values = ["${var.default_ami[var.ami]["name"]}"]
  }
  filter {
    name   = "image-id"
    values = ["${var.default_ami[var.ami]["ami_id"]}"]
  }
}

may be this https://stackoverflow.com/questions/65686821/terraform-find-latest-ami-via-data

Terraform find latest ami via data

I’m trying to implement some sort of mechanism where someone can fill in a variable which defines if it’s going to deploy an Amazon Linux machine or a self-created packer machine. But for some reas…

OliverS avatar
OliverS

Thanks but that part works (see “all good so far”) The problem is in paragraph 2.

Paula avatar

Sorry, I misunderstood. Maybe you can manually taint the instances, forcing a replacement, but it’s not the most correct solution for sure

OliverS avatar
OliverS

yes well that’s what I describe in paragraph 4

thanks for trying though

1
Soren Jensen avatar
Soren Jensen

Is there any chance your EC2 instance is in an ASG? If so you are probably only updating the launch template.

OliverS avatar
OliverS

No @Soren Jensen no ASG in this stack!

Fizz avatar

Can you upload your code and specify which version of the aws provider you are using?

OliverS avatar
OliverS

I’ll try to pare it down

bricezakra avatar
bricezakra

Hello everyone, how do I move my aws codepipeline from one environment to another?

2022-12-20

Jonas Steinberg avatar
Jonas Steinberg

Has anyone ever successfully implemented terraform in CI (not talking TFC, Spacelift or similar) where you came up with a way of preventing the canceling of the CI job from potentially borking TF state? Currently up against this issue. Solutions I can think of right off the top are:

  1. Have the CI delegate to some other tool, like a cloud container that runs terraform
     a. don’t like this really because it’s outside the CI
  2. Have a step in the CI that requires approval for apply
     a. don’t like this really because “manual CI”
  3. Do nothing and just run it in CI
  4. Try to implement a script that somehow persists the container on the CI backend? I don’t have control of this so I highly doubt this is possible.
RB avatar


preventing the canceling of the CI job from potentially borking TF state
what does this mean? you mean you cancelled your ci job and then how did your tfstate get borked? if you version your tfstate, then couldn’t you go back to the last version?

Jonas Steinberg avatar
Jonas Steinberg

@RB this wouldn’t be me canceling the job.

our terraform is often embedded into the service repos and may run with application pull requests, and I’m a big fan of this approach. Developers may occasionally kill a job and not realize that a tf apply is in the middle of it. If whatever tf was doing at the time was very involved, like building a database, or doing something in production, this could actually cause a serious issue. And apparently we’re not the only people to have encountered this issue.

state is versioned, but rolling back state sounds scary. I’m not convinced that will be something to rely on and a seamless workflow that wouldn’t ultimately require manually blowing a bunch of stuff away. just like using destroy is not a straightforward operation.

what would be better is if there was a way, as TFC does, that I could implement a graceful shutdown or something. that doesn’t need to be the solution, but it’s one thing that comes to mind. but of course I don’t control the CI’s backend so I’m assuming if a user hits Cancel on the UI that the container runtime is going to send a KILL to every container running, of which terraform apply will be one.

RB avatar


developers may occasionally kill a job
why do they do this?

Jonas Steinberg avatar
Jonas Steinberg

because they’re human. why does anyone make mistakes. you make mistakes lol.

RB avatar

no no, I understand that they occasionally kill a job, but I’m wondering what is the reason they feel compelled to ssh into a server and run kill -9 on the terraform process?

is terraform taking too long?

do they think they made a mistake and want to save time?

there must be a reason other than that they are human

Jonas Steinberg avatar
Jonas Steinberg

oh – ha, sorry.

Jonas Steinberg avatar
Jonas Steinberg

because they realize that something is wrong with the job.

Jonas Steinberg avatar
Jonas Steinberg

for example it could be in a lower environment.

Jonas Steinberg avatar
Jonas Steinberg

and they might realize “oh crap – actually that’s [the wrong value]”

Jonas Steinberg avatar
Jonas Steinberg

and cancel the job. I’ve done this before, although usually it’s with crappy development workflow.

Jonas Steinberg avatar
Jonas Steinberg

they might not be aware that there is a step in the CI workflow that is running terraform

Jonas Steinberg avatar
Jonas Steinberg

like for example they may be running something in a lower environment and iterating over and over again

RB avatar

how long does the terraform apply take for the jobs that developers are likely to kill ?

Jonas Steinberg avatar
Jonas Steinberg

I have no way of tracking that, although I wish that the CI had a graceful shutdown option.

Jonas Steinberg avatar
Jonas Steinberg

this is circleci

RB avatar

in terraform automation, you can usually exit terraform runs early, even gracefully, and that is similar to a kill <task> instead of a kill -9 <task>

RB avatar

if the reason they are doing this is that they mess up and the feedback loop is too long so they try to kill the process in order to rerun it with the correct param, then the solution is to reduce the size (number of resources managed) of your terraform root directories

Jonas Steinberg avatar
Jonas Steinberg

it wouldn’t be on the terraform end

RB avatar

other than that, all you can do is educate your developers to not do this or put in a policy within circleci to prevent any shutdown of the terraform apply task.

Jonas Steinberg avatar
Jonas Steinberg

why they are canceling

Jonas Steinberg avatar
Jonas Steinberg

they’d be canceling for some reason due to their application I think, not realizing that tf is running as a sub-workflow

Jonas Steinberg avatar
Jonas Steinberg

I could use a dynamic step to trigger a centralized pipeline maybe

Jonas Steinberg avatar
Jonas Steinberg

but they only allow one dynamic step per pipeline lmfao so it’s like if I use that one dynamic step for my little terraform hack that’s a pretty poor reason for doing so

2022-12-21

Jonas Steinberg avatar
Jonas Steinberg

Is anyone running Terraform in their CI workflow? Not Spacelift, TFC or other terraform management tools, but actual CI like CircleCI, Gitlab, Jenkins, Codedeploy, Codefresh, etc? If so: how do you handle for the potential for an apply to be accidentally canceled mid-run or other complications?

Soren Jensen avatar
Soren Jensen

We deploy all terraform code to prod accounts from GitHub actions. We haven’t taken action on avoiding a deployment to be cancelled. So far we haven’t had any issues.

Jonas Steinberg avatar
Jonas Steinberg

@Soren Jensen Thanks!

so you handle linting, plans and everything from github actions?

What are the costs like? Are you deploying multiple times across a good number of services daily or infrequently on a small number of things or?

Sudhakar Isireddy avatar
Sudhakar Isireddy

We do use Gitlab. The few times we had to cancel a deployment in the middle, we had TF state issues… which we simply resolve using force-unlock from our laptops, or by going into DynamoDB and deleting the lock
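
For reference, the unlock command (the lock ID is printed in the state-lock error message):

terraform force-unlock <LOCK_ID>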

Jonas Steinberg avatar
Jonas Steinberg

Nice one @Sudhakar Isireddy thanks.

mike avatar

I have a list of Okta application names and would like to convert them to a list of Okta application IDs. I have this working:

variable "okta_app_names" {
    type        = list(string)
    default     = ["core_wallet", "dli-test-app"]
}


data "okta_app_oauth" "apps" {
    for_each    = toset(var.okta_app_names)
    label       = each.key
}


resource "null_resource" "output_ids" {
    for_each = data.okta_app_oauth.apps
    provisioner "local-exec" {
        command = "echo ${each.key} = ${each.value.id}"
    }
}

The output_ids null_resource will print out each ID. However, I need this in a list, not just printed like this. The list is expected by another Okta resource.

Anyone know of a way to get this into a list? Thanks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# Something like this

output "okta_app_ids" {
  value = values(data.okta_app_oauth.apps)[*].id
}
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Splat Expressions - Configuration Language | Terraform | HashiCorp Developerattachment image

Splat expressions concisely represent common operations. In Terraform, they also transform single, non-null values into a single-element tuple.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, why do you need resource "null_resource" "output_ids" and print it to the console?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use another output which would output key = value pairs

mike avatar

Thank you! That worked. I most definitely do not need the output_ids null resource. I was just using that to illustrate what I was trying to do.

1

2022-12-22

Rik avatar

Hi, trying to make use of cloudposse/platform/datadog//modules/monitors to create monitors in Datadog. I’d like to add some tags (which are visible in DD) from a variable. I cannot figure out how to get those included?

Basically the same behaviour as alert_tags variable for normal tags..

Tried tags = but this makes no difference in the monitor ending up in datadog:

module "datadog_monitors" {
  source = "cloudposse/platform/datadog//modules/monitors"

  version          = "1.0.1"
  datadog_monitors = local.monitor_map
  alert_tags       = local.alert_tags
  tags             = { "BusinessUnit" : "XYZ" }
}
RB avatar

The tags are defined in the yaml

Rik avatar

yes, but how can I re-use the same tags over many monitors without duplication

Rik avatar

you mean the catalog/monitor.yaml?

RB avatar
    for tagk, tagv in lookup(each.value, "tags", module.this.tags) : (tagv != null ? format("%s:%s", tagk, tagv) : tagk)
RB avatar

You can use yaml anchors

1
RB avatar

After looking at the code, it does look like it reads from var.tags. if that doesn’t work, i would file a ticket with your inputs

RB avatar

module.this.tags is filled by var.tags

Rik avatar

I just tried; if I remove the tags: from my monitor.yaml it inserts the tags from the tf. That’s not very clear from the docs

RB avatar

Agreed. We’re always looking for contributions. Please feel free to update our docs :)

Durai avatar

Hi, trying to use cloudposse/terraform-aws-cloudwatch-events to create a cloudwatch event rule with an SNS target. I’m facing an issue with the cloudwatch event rule pattern while creating it. We use terragrunt to deploy our resources.

inputs = {
  name                              = "rds-maintenance-event"
  cloudwatch_event_rule_description = "Rule to get notified rds scheduled maintenance"
  cloudwatch_event_target_arn       = dependency.sns.outputs.sns_topic_arn
  cloudwatch_event_rule_pattern = {
    "detail" = {
      "eventTypeCategory" = ["scheduledChange"]
      "service"           = ["RDS"]
    }
    "detail-type" = ["AWS Health Event"]
    "source"      = ["aws.health"]
  }
}

Error received

Error: error creating EventBridge Rule (nonprod-rds-maintenance-event): InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object
 at [Source: (String)""{\"detail\":{\"eventTypeCategory\":[\"scheduledChange\"],\"service\":[\"RDS\"]},\"detail-type\":[\"AWS Health Event\"],\"source\":[\"aws.health\"]}""; line: 1, column: 2]

Please suggest how to resolve it.

Patrice Lachance avatar
Patrice Lachance

Hi, I’m trying to upgrade from https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.44.0 to https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.44.1 and get the following error message:
Error: Get “http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth”: dial tcp 127.0.0.1 connect: connection refused
I saw other replies mentioning network-related issues, but that shouldn’t be the case here because I’m running the command from the same host, same terminal session, same environment variables…

I can’t figure out by looking at the diff why this problem happens and hope someone will be able to help me!

1
Fizz avatar

Where are you expecting to find the cluster? Host;port. Can you post the config for your providers?

RB avatar

You probably want to use the 2.x version of the eks cluster module to get around that issue

Patrice Lachance avatar
Patrice Lachance

@RB you were right! Using the 2.0.0 version and setting create_security_group = true fixed the issue. Now using the latest version of the module. Thanks for the quick support!

1

2022-12-23

Sam avatar

Hello Everyone!

I’m working on creating an AWS Organization with the following (dev, staging, and prod), but I don’t know what would be the best folder structure in Terraform.

  1. Is it better to create a separate directory for each env or to use Workspaces?
  2. Is it best to use modules to share resources in between envs?
  3. The .tfstate file, should that be in the root folder structure, or in each env’s folder? I know it should be stored inside S3 with locks.

Your help would be greatly appreciated.

Kurt Dean avatar
Kurt Dean

There are many (slightly) different ways to go about this. When comparing any of them, I would think about the following (one common layout is sketched after the list):

• keeping your IaC DRY (typically resources are same/similar across environments)

• how do you add a new account?

• how do you add a new service/project?

• how do you avoid drift?
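
One layout that addresses these points (a sketch; the naming is illustrative):

infrastructure/
├── modules/          # shared, environment-agnostic modules (VPC, app, …)
└── envs/             # one root module, and one state file, per environment
    ├── dev/
    │   ├── main.tf
    │   └── backend.tf # points at an S3 key unique to this environment
    ├── staging/
    └── prod/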

Sam avatar

The current environment is managed through the aws console. That is one of the reasons why I’m moving to IaC using Terraform. I’m currently running an application using the following resources (Beanstalk, RDS, Route53, Cloudfront). Now do I create a separate directory for these services and another directory for the modules (VPC, Subnets, Security)?

2022-12-26

Dhamodharan avatar
Dhamodharan

Hi All, I am new to Terraform Cloud. I would like to automate my tf CLI commands with TF Cloud to provision resources in AWS; can someone help with the proper documentation? I have gone through the official Terraform documentation, but I couldn’t follow it as I am new to this… please point me to any other documents if you come across them…

regards,

Fizz avatar

Not sure what you are trying to do but here are some tutorials on getting started with terraform cloud. https://developer.hashicorp.com/terraform/tutorials/cloud-get-started

Terraform Cloud | Terraform | HashiCorp Developerattachment image

Collaborate on version-controlled configuration using Terraform Cloud. Follow this track to build, change, and destroy infrastructure using remote runs and state.

Dhamodharan avatar
Dhamodharan

Hi @Fizz, thanks for your support. Actually I’m planning to automate the tf deployment: I have my code on my local machine, which I want to push to TF Cloud to provision the resources in AWS.

Dhamodharan avatar
Dhamodharan

this is my requirement

Fizz avatar

So assuming you will upload your code to a source code repository like GitHub, the tutorials cover your use case. I’d suggest going through them.

Fizz avatar

And even if you plan to keep your code local, the tutorials get you most of the way there.

Dhamodharan avatar
Dhamodharan

@Fizz thanks, I am going through the document, hope it helps. I will check and come back if I get stuck.

2022-12-27

Dhamodharan avatar
Dhamodharan

I am trying to create an AWS task definition, passing the env variables for the container definition in a different file, but I am getting the below error while planning the tf code.

│ Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition
│
│   with aws_ecs_task_definition.offers_taskdefinition,
│   on ecs_main.tf line 13, in resource "aws_ecs_task_definition" "app_taskdefinition":
│   13:   container_definitions    = "${file("task_definitions/ecs_app_task_definition.json")}"

My resource snippet is:

resource "aws_ecs_task_definition" "app_taskdefinition" {
  family                   = "offers"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 2048
  memory                   = 8192
  task_role_arn            = "${aws_iam_role.ecstaskexecution_role.arn}"
  execution_role_arn       = "${aws_iam_role.ecstaskexecution_role.arn}"
  container_definitions    = "${file("task_definitions/ecs_app_task_definition.json")}"
}

I defined the image spec in the JSON file; it works when I deploy manually.

Can someone help with this?

Denis avatar

maybe share the task def json?

RB avatar

You may want to use the module to generate the container JSON

https://github.com/cloudposse/terraform-aws-ecs-container-definition

Then you can use it in your ecs task definition like this

  container_definitions = module.container_definition.json_map_encoded_list
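
For context, the error above means the JSON file contains a single object where ECS expects an array of container definitions. A minimal sketch of the module wiring (name, image, env var, and version are placeholders):

module "container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.58.1" # pin to whatever release is current

  container_name  = "offers"       # placeholder
  container_image = "nginx:latest" # placeholder

  environment = [
    {
      name  = "APP_ENV" # hypothetical env var
      value = "dev"
    }
  ]
}

# json_map_encoded_list emits the JSON array form that ECS expects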
Eric avatar

Not sure if this is the right place to ask, but can a new release be cut for cloudtrail-s3-bucket? There was a commit made to fix (what I assume was) a deprecation message but it was never released to the registry: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/commit/93050ec4f032edc32fed7b77943f3d43e9baeccd

Eric avatar

hmm actually (and I should have checked this), the deprecation message I got (coming from s3-log-storage deep down) isn’t fixed by 0.26.0 either

Eric avatar

so I’ll file an issue in cloudtrail-s3-bucket to increment that dep

Eric avatar
#68 Upgrade s3-log-storage dependency to 1.0.0

Describe the Bug

Using this module introduces a deprecation message regarding the use of “versioning” attribute of the s3 bucket created by this module.

Expected Behavior

No deprecation messages appear when using the latest release of this module

Steps to Reproduce

module "cloudtrail_s3_bucket" {
  source  = "cloudposse/cloudtrail-s3-bucket/aws"
  version = "0.23.1"
}

Screenshots

╷
│ Warning: Argument is deprecated
│ 
│   with module.cloudtrail_s3_bucket.module.s3_bucket.aws_s3_bucket.default[0],
│   on .terraform/modules/cloudtrail_s3_bucket.s3_bucket/main.tf line 1, in resource "aws_s3_bucket" "default":
│    1: resource "aws_s3_bucket" "default" {
│ 
│ Use the aws_s3_bucket_versioning resource instead
│ 
│ (and 4 more similar warnings elsewhere)

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:
MacOS 11.7.1
Terraform 1.2.7

Additional Context

This can be fixed by incrementing the version of the depedency “cloudposse/s3-log-bucket” to 1.0.0 (to get AWS v4 provider support). It is possible that an interim version might also work but the release notes for your own module say not to use non 1.0 releases of s3-log-bucket.

Matt Richter avatar
Matt Richter

Building out some light-brownfield terraform infra. I would love to make use of this module https://github.com/cloudposse/terraform-aws-tfstate-backend,

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

Matt Richter avatar
Matt Richter
#118 terraform apply completes successfully with "Warning: Argument is deprecated"

Describe the Bug

terraform apply completed successfully. However, there is a warning in the log that will need attention in future:


│ Warning: Argument is deprecated

│ with module.terraform_state_backend.module.log_storage.aws_s3_bucket.default,
│ on .terraform/modules/terraform_state_backend.log_storage/main.tf line 1, in resource “aws_s3_bucket” “default”:
│ 1: resource “aws_s3_bucket” “default” {

│ Use the aws_s3_bucket_logging resource instead

│ (and 21 more similar warnings elsewhere)

Expected Behavior

No deprecated argument warning.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Add the below to my main.tf
module "terraform_state_backend" {
  source      = "cloudposse/tfstate-backend/aws"
  version     = "0.38.1"
  namespace   = "versent-digital-dev-kit"
  stage       = var.aws_region
  name        = "terraform"
  attributes  = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
  2. Run ‘terraform apply -auto-approve’
  3. See warning in console output


Matt Richter avatar
Matt Richter

I may take a whack at improving the outstanding PR at some point

Matt Richter avatar
Matt Richter

unless someone more familiar with the space has a chance to look

Jonas Steinberg avatar
Jonas Steinberg

Wow – shocked I haven’t heard of this before! It does not come up at all when googling TACOS or things of that nature; it took accidentally seeing it mentioned in a reddit comment (of course):

https://github.com/AzBuilder/terrakube

cc @Erik Osterman (Cloud Posse)

AzBuilder/terrakube

Open source tool to handle remote terraform workspace in organizations and handle all the lifecycle (plan, apply, destroy).

RB avatar

I’ve seen this before and haven’t met anyone using it yet. Have you tried it?

I see very few forks and stars so I’d be hesitant to use in production.

https://github.com/AzBuilder/terrakube/network/members

It does look like possibly bridgecrew uses it according to the list of forks


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It does look pretty nice. It hit our radar back on May 27th, but haven’t had a chance to look at it.

Mohammed Yahya avatar
Mohammed Yahya

you can quickly test it with docker-compose and Postman here:

https://github.com/AzBuilder/terrakube-docker-compose

Seems promising for people who use k8s to manage their operations, AKA an operations k8s cluster

AzBuilder/terrakube-docker-compose

Docker compose to run a standalone Terrakube Platform

Jonas Steinberg avatar
Jonas Steinberg

There is a desperate need for a feature-rich, production-grade taco, I have absolutely no doubt about that. And in fact I’ve committed to deprecating the taco at my current shop because the benefits are simply not justifying the cost. I’m not crazy about the idea of running k8s for terraform because I’d rather do it with a much simpler container scheduling platform like ecs fargate, but I genuinely hope terrakube enters CNCF.

RB avatar

Have you considered the other TACOS? We’ve had a lot of good experiences with spacelift

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(Jonas was at Bread)

1
Jonas Steinberg avatar
Jonas Steinberg


We’ve had a lot of good experiences with spacelift
@RB As Erik mentioned, I was at Bread and worked with Andriy to implement spacelift there. There is nothing wrong per se with the TACOS; it’s just that aside from state and workspace management I’m not sure what benefit they truly bring? Yes they can run sequential and parallel stacks, yes they can run arbitrary shell commands against jobs, yes they can do drift detection, and they give a convenient hook for governance and auditing, but actually these features aren’t really worth that much right now. And they’re definitely not worth 500K - 1M USD, which is about what the average big corporate TFC contract is.

I should reach out to env0 and spacelift and see what a contract with them for between 10K - 15K applies/month would be.

Ryan Cartwright avatar
Ryan Cartwright

@Jonas Steinberg happy to chat again with a previous customer. Let’s connect offline.

We don’t charge on the number of applies/month so it won’t be apples to apples, but I can tell you the existing customers that we have who migrated from TFE and TFC are quite happy.

It definitely won’t be 500K - 1M USD.

Grab some time with me at https://calendly.com/ryan-spacelift or anyone else interested. Happy to provide transparent and simple pricing and discussion on this.

Ryan Cartwright

Spacelift is a sophisticated and compliant infrastructure delivery platform for Terraform, CloudFormation, Pulumi, and Kubernetes. Free Trial: https://spacelift.io/free-trial.

Jonas Steinberg avatar
Jonas Steinberg

Thanks Ryan.

2022-12-28

2022-12-29

Roman Kirilenko avatar
Roman Kirilenko

hi everyone, I’m trying to do an upgrade of MSK with terraform but have an issue with “Configuration is in use by one or more clusters. Dissociate the configuration from the clusters.” Is it possible to bypass that step so the configuration is not even touched?

2022-12-31

aimbotd avatar
aimbotd

Hey all, looking for some guidance here. I’d like to create a node group per AZ, I’d like the name of the group to contain the AZ in it. I’m struggling a bit. Any suggestions would be appreciated

module "primary_node_groups" {
  source   = "cloudposse/eks-node-group/aws"
  version  = "2.6.1"
  for_each = { for idx, subnet_id in module.subnets.private_subnet_ids : idx => subnet_id }

  subnet_ids         = [each.value]
  attributes         = each.value # this doesn't get me what I want
}
RB avatar

The attributes are a list. Wouldn’t you have to do this?

  attributes = [each.value]
aimbotd avatar
aimbotd

You know, that’s a good call. …curious as to why it’s not mad at that.

aimbotd avatar
aimbotd

Ah, because I was actually using tenant.

aimbotd avatar
aimbotd

I changed it for this because I was thinking about doing attributes. That said, am I going about this the right way?

RB avatar

I think you’re doing it correctly. You just need to pass in the subnet AZ as an attribute

aimbotd avatar
aimbotd

I think that’s where I’m struggling. How would I go about grabbing the AZ w/ the subnet from the dynamic subnets module?

RB avatar

Check the outputs first by using this

output "subnets" { value = module.subnets }
RB avatar

Then see where you can collect the AZ

RB avatar

If it’s not outputted then you could use a data source
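
A sketch of the data-source route (assuming module.subnets exposes private_subnet_ids as in the snippet above):

data "aws_subnet" "private" {
  for_each = toset(module.subnets.private_subnet_ids)
  id       = each.value
}

module "primary_node_groups" {
  source   = "cloudposse/eks-node-group/aws"
  version  = "2.6.1"
  for_each = data.aws_subnet.private

  subnet_ids = [each.value.id]
  # availability_zone is e.g. "us-east-1a", so the AZ lands in the group name
  attributes = [each.value.availability_zone]
}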

aimbotd avatar
aimbotd

Ahh, great call on the data source. I typically forget about using them. I’ll give that a whirl.

aimbotd avatar
aimbotd

Thanks a bunch.

1
aimbotd avatar
aimbotd

That worked like a charm. Thanks for that suggestion!

1