#terraform (2022-12)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2022-12-02
![Soren Jensen avatar](https://secure.gravatar.com/avatar/dcf3b210fa777d0f497eab0ec539be35.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0008-72.png)
I could use a bit of help here… I’m trying to create a list of buckets my antivirus module is using. The list should contain all upload buckets plus 2 extra buckets. I’m using the Cloud Posse module for creating the upload buckets:
# Create the upload_bucket module
module "upload_bucket" {
  for_each = toset(var.upload_buckets)

  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "3.0.0"

  enabled     = true
  bucket_name = random_pet.upload_bucket_name[each.key].id
}
I’m in the same module trying to create this list for buckets to scan:
# Use the concat() and values() functions to combine the lists of bucket IDs
av_scan_buckets = concat(
  [
    module.temp_bucket.bucket_id,
    module.db_objects_bucket.bucket_id
  ],
  [LIST OF UPLOAD BUCKETS]
)
As an output this works
value = { for k, v in toset(var.upload_buckets) : k => module.upload_bucket[k].bucket_id }
Gives me
upload_bucket_ids = {
"bucket1" = "upload-bucket-1"
"bucket2" = "upload-bucket-2"
}
But if I use the same as input to the list it obviously doesn’t work, as I need to change the map to a list… Anyone who can tell me how to get this working?
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
which input list are you exactly referring to?
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
What do you mean?
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
you say “if I use the same as input to the list”
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
which list is that ?
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
I tried to use the same for loop as for the output, where I got “LIST OF UPLOAD BUCKETS”
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
But it generates a map, not a list with only the bucket IDs
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
are you saying this line doesn’t work for you?
bucket_name = random_pet.upload_bucket_name[each.key].id
if yes you can use something like
values(module.my_module.upload_bucket_ids)
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
of course change the my_module
![Soren Jensen avatar](https://secure.gravatar.com/avatar/dcf3b210fa777d0f497eab0ec539be35.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0008-72.png)
I tried it like this, with and without the tolist():
av_scan_buckets = concat(
  [
    module.temp_bucket.bucket_id,
    module.db_objects_bucket.bucket_id
  ],
  tolist(values(module.upload_bucket.bucket_id))
)
Got this error:
│ Error: Unsupported attribute
│
│ on antivirus.tf line 49, in module "s3_anti_virus":
│ 49: tolist(values(module.upload_bucket.bucket_id))
│ ├────────────────
│ │ module.upload_bucket is object with 2 attributes
│
│ This object does not have an attribute named "bucket_id".
Makes sense: module.upload_bucket has 2 objects, as I’m creating 2 buckets with for_each = toset(var.upload_buckets)
![Soren Jensen avatar](https://secure.gravatar.com/avatar/dcf3b210fa777d0f497eab0ec539be35.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0008-72.png)
Solved, this works:
av_scan_buckets = concat(
  [
    module.temp_bucket.bucket_id,
    module.db_objects_bucket.bucket_id
  ],
  [for k in toset(var.upload_buckets) : module.upload_bucket[k].bucket_id]
)
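Since a module called with for_each is itself a map of objects, an equivalent (untested) sketch avoids the toset() round-trip by splatting over values():

```hcl
av_scan_buckets = concat(
  [
    module.temp_bucket.bucket_id,
    module.db_objects_bucket.bucket_id
  ],
  # values() turns the map of module instances into a list of objects,
  # and the splat pulls out each instance's bucket_id
  values(module.upload_bucket)[*].bucket_id
)
```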
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
How does terrateam compare to atlantis? The comparison page by terrateam shows it has several important additional capabilities over atlantis, but I’m looking for something a little deeper: https://terrateam.io/docs/compare
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I think they have drift detection and some other features but afaik it’s a saas Atlantis offering
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
i thought atlantis was a saas… you have to install atlantis on your own machines? (real or virtual)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Atlantis is self hosted
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
Ah good to know
2022-12-05
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Anybody knows about a page listing all AWS resource naming restrictions?
![Kurt Dean avatar](https://secure.gravatar.com/avatar/472d23a227e5f04df7a7b0620f37b8eb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I’m not aware of a centralized place. Depending on why you’re looking for this info, another piece to consider is that you may be using name_prefix for some resources (which can have a much smaller length limit, for example).
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
“can have a much smaller length limit” if prefix is used. do you have a docs link and/or example for this?
![Kurt Dean avatar](https://secure.gravatar.com/avatar/472d23a227e5f04df7a7b0620f37b8eb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I’m on mobile right now, but if you look at AWS load balancers you’ll find that in Terraform the prefix can be at most 6 characters (last I checked).
It’s useful to supply a prefix instead of a complete name because LB names are unique (per account?), and you typically want to spin up a new load balancer before tearing down your old one with Terraform’s create_before_destroy lifecycle.
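A minimal sketch of the pattern described above; var.public_subnet_ids is an assumed variable. The aws_lb resource limits name_prefix to 6 characters, and create_before_destroy lets Terraform stand up the replacement LB before destroying the old one:

```hcl
resource "aws_lb" "example" {
  name_prefix        = "app-" # at most 6 characters for load balancers
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids # assumed variable

  lifecycle {
    create_before_destroy = true
  }
}
```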
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Yes. I do like prefixes. Thanks for the tip.
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
I wonder how Cloud Posse and null/label deal with these restrictions. Where do you make sure that the null/label output id fits as name for a resource and do you use prefixes or do they become irrelevant with null/label? @Erik Osterman (Cloud Posse)
![jonjitsu avatar](https://secure.gravatar.com/avatar/8d4169f1a7fd27bd9a11a25ccfed62c8.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Is there something like TF_PLUGIN_CACHE_DIR but for modules downloaded from github? I got 80 services using the same module (I copy pasted the source = “”) and terraform redownloads it each time.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
no, sadly
2022-12-06
![Ron avatar](https://secure.gravatar.com/avatar/a74324f34889f29c1aaa2d6fb82698d3.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
what do you guys recommend for storing state on-prem?
![jsreed avatar](https://avatars.slack-edge.com/2022-12-06/4491361948977_169d2199777bd480b3dd_72.png)
3.5” floppies
![Ron avatar](https://secure.gravatar.com/avatar/a74324f34889f29c1aaa2d6fb82698d3.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
dont have floppies
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
consul?
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
![Ron avatar](https://secure.gravatar.com/avatar/a74324f34889f29c1aaa2d6fb82698d3.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I’ve never used consul. sounds new to me. I’ll try that
![Ron avatar](https://secure.gravatar.com/avatar/a74324f34889f29c1aaa2d6fb82698d3.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
thanks
![Ron avatar](https://secure.gravatar.com/avatar/a74324f34889f29c1aaa2d6fb82698d3.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
some context: that’s for my personal lab. I’m creating VMs on libvirt/KVM and installing rke2 on that. Still learning Terraform.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
i mean, if it’s just you and you’re keeping it all local, you can encrypt the state with sops and commit it to your repo lol
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Terraform Cloud state storage is free
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
on-prem? It probably depends what technologies you have available. If you have something that can emulate a local filesystem, the default backend would be simplest
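For a self-hosted option, Terraform’s built-in consul backend is one sketch (the address and KV path here are hypothetical):

```hcl
terraform {
  backend "consul" {
    address = "consul.lab.internal:8500" # hypothetical Consul address
    scheme  = "http"
    path    = "terraform/lab/state" # KV path where state is stored
    lock    = true                  # Consul also provides state locking
  }
}
```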
![mrwacky avatar](https://avatars.slack-edge.com/2018-08-22/423003208646_5ad1b1ba6be6b00306b3_72.jpg)
Terraform Cloud state storage is free
Is it faster/slower than S3? S3 feels really slow to me
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
some context, thats for my personal lab.
oh, right, TFC is not on-prem, but it’s totally suitable for a personal lab, provided it’s not airgapped.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
We don’t use TFC for state storage, so I cannot comment. We use S3, and any slowness seems to be more closely tied to the number of resources under management within any given root module and the number of parallel threads.
2022-12-07
![Tushar avatar](https://secure.gravatar.com/avatar/73eab0c29c18da43992350f422af1bca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0021-72.png)
Hi Team,
I’m trying to follow the https://github.com/cloudposse/terraform-aws-vpc-peering module to create VPCs and set up the peering between them.
I’m following the example in “/examples/complete”, and while generating the plan I get the following error:
Error: Invalid count argument
│
│ on ../../main.tf line 62, in resource "aws_route" "requestor":
│ 62: count = module.this.enabled ? length(distinct(sort(data.aws_route_tables.requestor.0.ids))) * length(local.acceptor_cidr_blocks) : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the count depends on.
and I’m getting the same for resource "aws_route" "acceptor".
I’m looking for help to understand the following:
- What should I improve?
- Is there a different process for using this module?
- Is there anything I’m missing?
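The workaround the error message itself suggests is a two-step apply; the module addresses below are illustrative, not the actual names in the example:

```shell
# First apply only the resources the count depends on (e.g. the VPCs),
# then run a normal full apply.
terraform apply -target=module.requestor_vpc -target=module.acceptor_vpc
terraform apply
```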
Terraform module to create a peering connection between two VPCs in the same AWS account.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Seems like a bug with our example. Our tests use the terraform version minimum set in the examples versions.tf file.
I wonder if you’re seeing this issue since you may be using a newer version locally?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Try using 0.13.7 terraform, just to see if you can reproduce this issue the way our tests would
![Release notes from terraform avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.4.0-alpha20221207 1.4.0 (Unreleased) UPGRADE NOTES:
config: The textencodebase64 function when called with encoding “GB18030” will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding “GBK” or “CP936” will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this…
2022-12-08
![Krushna avatar](https://secure.gravatar.com/avatar/3c79d141bb8d5fb7099ad9f8961f2654.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Hi, I am trying to use cloudposse/terraform-aws-transit-gateway module to connect 2 different VPC on different regions, Are there any examples. The multiaccount example posted (https://github.com/cloudposse/terraform-aws-transit-gateway/tree/master/examples/multi-account) is within the same region.
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Hey, I haven’t used this module before so I’m not 100% sure, but what you’re after may be configured through the providers: https://github.com/cloudposse/terraform-aws-transit-gateway/search?q=provider&type=code
https://github.com/cloudposse/terraform-aws-transit-gateway/search?q=aws.prod
![Joe Perez avatar](https://avatars.slack-edge.com/2022-11-09/4361990079457_b06c12666181bb7ec599_72.jpg)
Hello All! I recently have had to work with AWS PrivateLink and found the documentation to be a bit lacking, so I created a blog post about my experience with the technology. I’m also planning a follow-up post with a terraform example. Has anyone had a chance to use AWS PrivateLink? And have you leveraged other technologies to accomplish the same thing?
Overview Your company is growing and now you have to find out how to allow communication between services across VPCs and AWS accounts. You don’t want send traffic over the public Internet and maintaining VPC Peering isn’t a fun prospect. Implementing an AWS supported solution is the top priority and AWS PrivateLink can be a front-runner for enabling your infrastructure to scale. Lesson What is AWS PrivateLink? PrivateLink Components Gotchas Next Steps What is AWS PrivateLink?
2022-12-09
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Hi Everyone, I’m hitting the following issue when using cloudposse/terraform-aws-alb
and specifying access_logs_s3_bucket_id = aws_s3_bucket.alb_s3_logging.id
.
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Error: Invalid count argument
│
│ on .terraform/modules/alb.access_logs/main.tf line 2, in data "aws_elb_service_account" "default":
│ 2: count = module.this.enabled ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Seems to be related to this - https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/master/main.tf#L1
data "aws_elb_service_account" "default" {
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
But module.this.enabled
value is available in the defaults so I’m not sure why it’s complaining.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
That is odd. This error is everyone’s least favorite…
Could you open a ticket with all of your inputs and a sample of your hcl? It would help if it was possible to reproduce it
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
This issue comes up when you pass another uncreated resource’s attribute into a module. It’s the first time I’ve seen it for the module.this.enabled flag though
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
thanks, I’ve raised a bug here: https://github.com/cloudposse/terraform-aws-alb/issues/126 hope that’s OK.
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Yeah, it works if the bucket is created before you run terraform apply
with logging enabled and the custom bucket access_logs_s3_bucket_id = aws_s3_bucket.alb_s3_logging.id
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
@RB this works when the literal name of the S3 bucket is used. I’ve updated the issue.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
This issue is common and one of the most hated in Terraform.
One reason we haven’t hit it is because we create a new logging bucket per ALB, by not specifying access_logs_s3_bucket_id and allowing the module to create the bucket for you.
If we want a shared S3 bucket across ALBs, we create that bucket in its own root Terraform directory and apply it first; then, in the root Terraform directory for our ALB, we retrieve the bucket from a data source, and the ALB applies correctly.
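A sketch of that two-root-module approach (the bucket name and module inputs are hypothetical): because the bucket already exists by the time the ALB root module plans, its id is known at plan time and the count error goes away:

```hcl
# In the ALB root module, after the bucket has been applied elsewhere:
data "aws_s3_bucket" "alb_logs" {
  bucket = "acme-use1-prod-alb-logs" # hypothetical, created in its own root module
}

module "alb" {
  source = "cloudposse/alb/aws"
  # ... other inputs ...

  access_logs_enabled      = true
  access_logs_s3_bucket_id = data.aws_s3_bucket.alb_logs.id
}
```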
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
One question that comes up, if you’re creating an s3 bucket for logging and passing it in to the alb in the same root terraform directory, then why not let the module handle it?
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Thanks, @RB. Basically, I’d like to have fine-grained control over the bucket name. I did intend to let the module handle it but have 2 environments named prod in different regions and ended up with a name clash. Yes, hindsight… :)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
You should take advantage of our null label. All of the modules use it.
https://github.com/cloudposse/terraform-null-label
{namespace}-{environment}-{stage}-{name}-{attributes}
Namespace=acme
Environment=use1
Stage=dev
Name=alb
Attributes=[blue]
These will then get joined by a dash to form a unique name
acme-use1-dev-alb-blue
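As a sketch, those label inputs map onto the module like this (the version pin is an example):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "acme"
  environment = "use1"
  stage       = "dev"
  name        = "alb"
  attributes  = ["blue"]
}

# module.label.id yields "acme-use1-dev-alb-blue"
```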
![Elleval avatar](https://secure.gravatar.com/avatar/f47308b2beb224dedb5b4805867964ca.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Also, when enabling logs (https://github.com/cloudposse/terraform-aws-alb#input_access_logs_enabled), is it possible to give the S3 bucket that is created a custom name? It seems to inherit from the labels of the ALB. S3 buckets have a global namespace, which is causing a clash across environments that are in different accounts/regions but use the same seed variable.
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Hey, pretty cool update for ASG’s. https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-ec2-specifying-instance-types-selection-ec2-spot-fleet-auto-scaling/
2022-12-11
2022-12-12
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Regarding the node groups module: let’s say I’m running the instance type m6id.large, which is provisioned with 118G of SSD ephemeral storage. In order to use that in the cluster, what should I be doing? Do I provision it via block_device_mappings? Is it already available to pods in the cluster?
2022-12-13
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
If anyone has any preferred ways of terraforming secrets I’d love to hear about it. Right now I’m creating the secret stubs (so no sensitive data) in terraform and allowing people to clickops the actual secret data from UI; I’m also creating secrets through APIs where possible, e.g. datadog, circleci, whatever, and then reading them from those products and writing them over to my secret backend. I’m using a major CSP secret store; I am not using Vault and am not going to use Vault. I am aware of various things like SOPS to some extent. I’m just curious if anyone has any ingenious ideas for allowing for full secret management via terraform files; something like using key cryptography locally and then committing encrypted secrets in terraform might be a bit advanced for my developers. But fundamentally I’m open to anything slick. Thank you!
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
i haven’t seen a “great” solution yet… i feel things like sops are the best way, if only because the secrets remain encrypted in terraform state. especially since remote state is synced to local disk in plaintext…
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
otherwise, if you can, push secrets to a secret store out of band of terraform, use the “path” to the secret in terraform, and pull the secret from the store in your app
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Hmm, I did not know you could keep secrets encrypted in TF state using sops.
Do you have an example or docs I could look over?
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
i didn’t mean to imply it was “integrated”… to keep them encrypted, you have to reference only the encrypted strings in tf configs. so basically tfstate becomes the secret store, and you pass the encrypted string to your app, and decrypt the string in the app
![David Karlsson avatar](https://avatars.slack-edge.com/2021-10-27/2652800695378_c689974d782ce64494aa_72.png)
CLI for managing secrets
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
@David Karlsson chamber saves to ssm/secrets manager but if you use a data source, it will still grab the secret and put it in plain-text in (an albeit encrypted) tfstate
edit: I use chamber today and like it a lot. Did not mean to sound dismissive. Thank you for sharing.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
@loren how do you structure your sops today ? do you save all your secrets in ssm/secrets manager, then pull it down, encrypt it with sops, and then save the encrypted string into terraform? then when you deploy your app, your app knows to retrieve the key (kms?) to decrypt the sops key for the app?
![David Karlsson avatar](https://avatars.slack-edge.com/2021-10-27/2652800695378_c689974d782ce64494aa_72.png)
I haven’t done it personally, but Segment describes a slightly different way of using chamber in prod, at least if you run containers… 1 sec
![David Karlsson avatar](https://avatars.slack-edge.com/2021-10-27/2652800695378_c689974d782ce64494aa_72.png)
In order to populate secrets in production, chamber is packaged inside our docker containers as a binary and is set as the entrypoint of the container
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
no no, sorry, i haven’t had cause to use sops myself in this manner. i keep waiting for something native in terraform. i guess in general the only idea i’ve seen is to keep the secrets out of terraform, by encrypting them with something like sops, or to use an external secret store and manage only the path to the secret in terraform. either way, you are handling the secret itself in your app, instead of passing the secret value to the app with terraform
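A rough sketch of the pattern described above, assuming the ciphertext was produced out of band with sops: Terraform only ever handles the encrypted string, so the state never contains the plaintext:

```hcl
variable "db_password_ciphertext" {
  type        = string
  description = "sops-encrypted ciphertext; the app decrypts it at startup"
}

# The parameter (and the state) hold only the ciphertext; the app holds
# the decryption key, e.g. via KMS.
resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db_password" # hypothetical path
  type  = "String"
  value = var.db_password_ciphertext
}
```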
![Michael Dizon avatar](https://avatars.slack-edge.com/2021-01-15/1664383757488_b5214d00b8fce4726a7c_72.jpg)
made a quick PR here https://github.com/cloudposse/terraform-aws-ssm-patch-manager/pull/22
what
• using source_policy_documents
for bucket policy
why
• bucket was created, but no policy was applied
references
• Use closes #21
#21
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
@Michael Dizon thank you. I’ve left a comment if you can take a look at it
![Michael Dizon avatar](https://avatars.slack-edge.com/2021-01-15/1664383757488_b5214d00b8fce4726a7c_72.jpg)
on it
![Michael Dizon avatar](https://avatars.slack-edge.com/2021-01-15/1664383757488_b5214d00b8fce4726a7c_72.jpg)
@Andriy Knysh (Cloud Posse) do the tests need to be kicked off manually?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![Michael Dizon avatar](https://avatars.slack-edge.com/2021-01-15/1664383757488_b5214d00b8fce4726a7c_72.jpg)
strange, it’s still not picking up that ami
![Michael Dizon avatar](https://avatars.slack-edge.com/2021-01-15/1664383757488_b5214d00b8fce4726a7c_72.jpg)
i updated the PR to use an ami from us-east-2
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
approved and merged, thanks again
![Michael Dizon avatar](https://avatars.slack-edge.com/2021-01-15/1664383757488_b5214d00b8fce4726a7c_72.jpg)
np!
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
hello, had a question about validations for list(string) variables. I was trying this but it doesn’t seem to work:
variable "dns_servers" {
  description = "List of DNS Servers, Max 2"
  type        = list(string)
  default     = ["10.2.0.2"]

  validation {
    condition     = length(var.dns_servers) > 2
    error_message = "Error: There can only be two dns servers MAX"
  }
}
but when I run it, it just errors on that rule. Probably something obvious, but I’m not able to find any solution.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Do you need to change length(var.dns_servers) > 2 to length(var.dns_servers) <= 2?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
nope, i think im mistaken.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I always get the condition confused.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
looks okay according to this …
https://developer.hashicorp.com/terraform/language/values/variables#custom-validation-rules
![attachment image](https://developer.hashicorp.com/og-image/terraform.jpg)
Input variables allow you to customize modules without altering their source code. Learn how to declare, define, and reference variables in configurations.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
What is the current value you’re sending as an input?
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
the exact one above
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
like I didn’t add another dns server or different ones… if that’s what you mean?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
sorry, I mean I see that the default is a single DNS server, but what are you passing as the input?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
are you just using the default, and the default isn’t working?
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
yes, thats it
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
the other issue i see is that the validation should probably be length(var.dns_servers) <= 2
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
haha, i think I had it right in my first comment. You want to allow only dns servers with a count of 1 or 2
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
correct yes!
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
try the suggested change for the validation and it should work
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
it’s tricky. I suppose we can think of it this way: the validation condition needs to state what we expect. If it’s true, continue; if it’s false, the error message is thrown.
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I often think I need the validation to be what I do NOT expect in order to get the error condition. I have to keep reminding myself that the condition is the if (true) {} block and the error_message is the else {} block
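Putting the thread together, the corrected variable from the original question would read like this (the condition now states what is allowed):

```hcl
variable "dns_servers" {
  description = "List of DNS Servers, Max 2"
  type        = list(string)
  default     = ["10.2.0.2"]

  validation {
    # True (1 or 2 servers) passes; false triggers error_message.
    condition     = length(var.dns_servers) <= 2
    error_message = "Error: There can only be two dns servers MAX"
  }
}
```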
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Is creating API keys, application keys, private key pairs, and similar product or cloud resources, for which there are Terraform resources, a bad practice? It winds secrets up in the state, but why would providers create these resources if it was an antipattern? Note I am not talking about creating plaintext secrets or anything of that nature – obviously that is nuts. I have some workflows that involve creating key pairs and then reading them into other places.
I don’t think it’s possible to avoid having secrets in state, is it?
![Sergey avatar](https://secure.gravatar.com/avatar/3c8c7a37e3b442ab65b402d6060417de.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Good afternoon, I ran into a small problem (bug). Could someone help me fix it and commit the change? I plan to use this code in production.
The policy is created with the bare ARN, without the “:*” suffix that is needed for the role’s policy to be correct.
Without the “:*” suffix the policy is still created, but it does not work correctly.
I discovered the error when I tried to create a CloudWatch log group in the cloudtrail module.
I got the response “Error: Error updating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Verify in IAM that the role has adequate permissions.”
After studying the code, I realized I need to append “:*” in a couple of lines.
My solution is to replace these lines in the file:
This line:
join("", aws_cloudwatch_log_group.default.*.arn),
becomes
"${join("", aws_cloudwatch_log_group.default.*.arn)}:*"
You need to do this in both identical lines.
Perhaps you can suggest a better solution; I’m new to Terraform.
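Dropped into a CloudTrail resource, the proposed change would look roughly like this (a sketch only; the resource names and bucket are illustrative, not the module’s actual code):

```hcl
# CloudTrail validates the log group ARN including its log streams, so the
# ARN is passed with a ":*" suffix rather than the bare group ARN.
resource "aws_cloudtrail" "default" {
  name           = "example-trail"        # illustrative
  s3_bucket_name = "example-trail-bucket" # illustrative

  cloud_watch_logs_group_arn = "${join("", aws_cloudwatch_log_group.default.*.arn)}:*"
  cloud_watch_logs_role_arn  = aws_iam_role.cloudtrail.arn # assumed role resource
}
```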
2022-12-14
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hello, I have this lb listener
resource "aws_lb_listener_rule" "https_443_listener_rule_2" {
listener_arn = aws_lb_listener.https_443.arn
priority = 107
action {
type = "forward"
forward {
target_group {
arn = aws_lb_target_group.flo3_on_ecs_blue_tg.arn
weight = 100
}
target_group {
arn = aws_lb_target_group.flo3_on_ecs_green_tg.arn
weight = 0
}
stickiness {
enabled = false
duration = 1
}
}
}
I currently use CodeDeploy to make blue/green deployments into ECS. However, after a deployment the weights of each target group change, and Terraform wants to change them back to the scripted configuration, which sends traffic to a target group with no containers. What is the best way to ensure that, regardless of which target group currently has weight 100, Terraform does not try to update it?
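One workaround worth testing (my suggestion, not something stated in the thread) is to let CodeDeploy own the weights by telling Terraform to ignore drift on the whole action block:

```hcl
resource "aws_lb_listener_rule" "https_443_listener_rule_2" {
  listener_arn = aws_lb_listener.https_443.arn
  priority     = 107

  action {
    type = "forward"
    forward {
      # initial weights only; CodeDeploy rewrites these during blue/green shifts
      target_group {
        arn    = aws_lb_target_group.flo3_on_ecs_blue_tg.arn
        weight = 100
      }
      target_group {
        arn    = aws_lb_target_group.flo3_on_ecs_green_tg.arn
        weight = 0
      }
      stickiness {
        enabled  = false
        duration = 1
      }
    }
  }

  lifecycle {
    # don't plan changes when CodeDeploy has swapped the weights
    ignore_changes = [action]
  }
}
```

The trade-off is that intentional changes to the action then also need a manual step (e.g. taint or a temporary removal of the lifecycle block).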
![Joe Perez avatar](https://avatars.slack-edge.com/2022-11-09/4361990079457_b06c12666181bb7ec599_72.jpg)
check out this tutorial; the TL;DR is that they use the -var flag to switch between blue and green apps https://developer.hashicorp.com/terraform/tutorials/aws/blue-green-canary-tests-deployments
![attachment image](https://developer.hashicorp.com/og-image/terraform.jpg)
Configure AWS application load balancers to release an application in a rolling upgrade with near-zero downtime. Incrementally promote a new canary application version to production by building a feature toggle with Terraform.
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Thank you!
![ANILKUMAR K avatar](https://secure.gravatar.com/avatar/1e51a8e169a0bcebe0a0f4e6a377d44f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
How do I set up an AWS MSK cluster with SASL/IAM authentication, starting from a basic MSK cluster, in Terraform?
![ANILKUMAR K avatar](https://secure.gravatar.com/avatar/1e51a8e169a0bcebe0a0f4e6a377d44f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
Could you please help me configure this?
2022-12-15
![ANILKUMAR K avatar](https://secure.gravatar.com/avatar/1e51a8e169a0bcebe0a0f4e6a377d44f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
Actually we have created the cluster and we are using SASL/IAM authentication. I have attached the following policies to the instance profile role: kafka-cluster:Connect, kafka-cluster:Describe*, kafka-cluster:ReadData, kafka-cluster:Alter*, kafka-cluster:Delete*, kafka-cluster:Write*
Ports 9092, 9098, 2181, and 2182 are open on both the EC2 instance and the cluster, inbound and outbound
We are trying to connect with the role using the command aws kafka describe-cluster --cluster-arn
Getting an error like: “Connection was closed before we received a valid response from endpoint.”
![shamwow avatar](https://avatars.slack-edge.com/2022-12-13/4504463368086_966a683e4d72e74619c9_72.png)
sounds like a connectivity issue and less a terraform issue, have you tried in the AWS channel?
![David Karlsson avatar](https://avatars.slack-edge.com/2021-10-27/2652800695378_c689974d782ce64494aa_72.png)
security groups what ingress and egress do you have
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Does anyone have a good hack, whether it be a script or a tool (but probably not something crazy like “use TFC or Spacelift”), for preventing cross-state blunders in multi-account state management? In other words, a good hack or script or tool for validating that, for example, a plan that has been generated is about to be applied to the correct cloud account ID (or similar)? Thanks.
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
the farthest I’ve gone in this direction is setting the account and region in the cloud provider config block in terraform. And terraform errors out if you are running it anywhere else.
![Chris Dobbyn avatar](https://avatars.slack-edge.com/2021-03-01/1814062195588_ba73798ef7efdbd3021e_72.jpg)
Use state locking.
![attachment image](https://developer.hashicorp.com/og-image/terraform.jpg)
Terraform stores state which caches the known state of the world the last time Terraform ran.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
@Chris Dobbyn state locking doesn’t prevent state from the wrong account from being written to state storage, it just prevents collisions that would otherwise occur when concurrent state commits were happening.
![Chris Dobbyn avatar](https://avatars.slack-edge.com/2021-03-01/1814062195588_ba73798ef7efdbd3021e_72.jpg)
Yep, I misread; Denis’s answer is correct.
![Chris Dobbyn avatar](https://avatars.slack-edge.com/2021-03-01/1814062195588_ba73798ef7efdbd3021e_72.jpg)
https://registry.terraform.io/providers/hashicorp/aws/latest/docs#allowed_account_ids
Obviously this is dependent on aws provider.
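For reference, a minimal provider block using that guard might look like this (region and account ID are placeholders):

```hcl
provider "aws" {
  region = "us-east-1" # placeholder

  # Terraform refuses to plan/apply if the active credentials resolve
  # to any account other than this one.
  allowed_account_ids = ["111111111111"] # placeholder
}
```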
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
This is a different ask.
2022-12-16
![deepakshi avatar](https://secure.gravatar.com/avatar/49b078e01b01cc53a434db8567b0a4ea.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0020-72.png)
Hello, team!
![deepakshi avatar](https://secure.gravatar.com/avatar/49b078e01b01cc53a434db8567b0a4ea.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0020-72.png)
I’m getting this error, can anyone suggest how to resolve it?
![deepakshi avatar](https://secure.gravatar.com/avatar/49b078e01b01cc53a434db8567b0a4ea.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0020-72.png)
│ Error: CacheParameterGroupNotFound: CacheParameterGroup ps-prod-redis-cache not found. │ status code: 404, request id: ccbf450d-4b2d-410e-95a9-2797c6d184d2 │
![Damian avatar](https://secure.gravatar.com/avatar/d185fc8114caa4c6c8430aafdb94a5a4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Hi team. I wonder if there is a way to provide ActiveMQ XML config using this module: https://registry.terraform.io/modules/cloudposse/mq-broker/aws/latest . I am new to Terraform so I might be doing something wrong, but basically I’d like to modify some destination policies like this
<destinationPolicy> ... </destinationPolicy>
If I used barebones aws_mq_broker
I would do it like this:
resource "aws_mq_broker" "example" {
broker_name = "example"
configuration {
id = aws_mq_configuration.example.id
revision = aws_mq_configuration.example.latest_revision
}
...
}
resource "aws_mq_configuration" "example" {
description = "Example Configuration"
name = "example"
engine_type = "ActiveMQ"
engine_version = "5.15.0"
data = <<DATA
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<broker xmlns="http://activemq.apache.org/schema/core">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry queue=">" gcInactiveDestinations="true" inactiveTimoutBeforeGC="600000" />
</policyEntries>
</policyMap>
</destinationPolicy>
</broker>
DATA
}
Can I attach such configuration when I use Cloudposse mq-broker
module?
2022-12-17
![Alcp avatar](https://secure.gravatar.com/avatar/bb2a467fb8f95b8f63ba5d9d570223cc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
I am running into an error with the helm module, installing the calico-operator
│ Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "default" namespace: "" from "": no matches for kind "APIServer" in version "operator.tigera.io/v1"
│ ensure CRDs are installed first, resource mapping not found for name: "default" namespace: "" from "": no matches for kind "Installation" in version "operator.tigera.io/v1"
│ ensure CRDs are installed first]
│
│ with module.calico_addon.helm_release.this[0],
│ on .terraform/modules/calico_addon/main.tf line 58, in resource "helm_release" "this":
│ 58: resource "helm_release" "this" {
│
here is the root module
module "calico_addon" {
source = "cloudposse/helm-release/aws"
version = "0.7.0"
name = "" # avoids hitting length restrictions on IAM Role names
chart = var.chart
description = var.description
repository = var.repository
chart_version = var.chart_version
kubernetes_namespace = join("", kubernetes_namespace.default.*.id)
wait = var.wait
atomic = var.atomic
cleanup_on_fail = var.cleanup_on_fail
timeout = var.timeout
create_namespace = false
verify = var.verify
iam_role_enabled = false
eks_cluster_oidc_issuer_url = replace(module.eks.outputs.eks_cluster_identity_oidc_issuer, "https://", "")
values = compact([
# hardcoded values
yamlencode(yamldecode(file("${path.module}/resources/values.yaml"))),
# standard k8s object settings
yamlencode({
fullnameOverride = module.this.name,
awsRegion = var.region
autoDiscovery = {
clusterName = module.eks.outputs.eks_cluster_id
}
rbac = {
serviceAccount = {
name = var.service_account_name
}
}
}),
# additional values
yamlencode(var.chart_values)
])
context = module.introspection.context
}
2022-12-18
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
I am using the https://github.com/cloudposse/terraform-aws-sso/ module and have 2 issues:
- Depends on issue, opened a PR for it: https://github.com/cloudposse/terraform-aws-sso/pull/33
- Deprecation warnings for AWS provider v4: https://github.com/cloudposse/terraform-aws-sso/issues/34 As the first issue got no attention, I did not open a PR for the second one… Any chance to get a review for the first one and a fix for the second one? I’m willing to open a PR for the second one if it will get attention
Please don’t see my message as criticism, I’m very grateful for your open source work and modules
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
@Simon Weil thanks for the PR, please see the comment
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Thank you, will do it tomorrow
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Updated the PR as required, although it did nothing. And opened a new PR for the deprecation warnings
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Is there anything the PRs are still waiting for? can they get merged?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
what
This adds a workaround for the depends_on issue with modules and data sources.
• Added a wait-for variable
• Added a null_resource to use for depends_on for the data resource
If the PR is acceptable, we can add an example usage to avoid the recreation of resources.
why
• When creating a user group via an external source that syncs with AWS SSO, we need to wait for it to finish before reading the groups from the identity store
• Adding a depends_on to a module can create a situation where every change to the dependency recreates ALL the resources of the module, which is super bad
In my case I have the following code:
data "okta_user" "this" {
for_each = toset(local.users_list)
user_id = each.value
}
resource "okta_group" "this" {
for_each = local.accounts_list
name = each.value.group_name
description = "description"
}
resource "okta_group_memberships" "this" {
for_each = local.accounts_list
group_id = okta_group.this[each.key].id
users = [for u in each.value.users : data.okta_user.this[u].id]
}
module "permission_sets" {
source = "cloudposse/sso/aws//modules/permission-sets"
version = "0.6.1"
permission_sets = [
for a in local.accounts_list : {
name = a.permission_set_name
description = "some desc"
relay_state = ""
session_duration = "PT2H"
tags = local.permission_set_tags
inline_policy = ""
policy_attachments = ["arn:aws:iam::aws:policy/XXXXX"]
}
]
}
module "account_assignments" {
source = "cloudposse/sso/aws//modules/account-assignments"
version = "0.6.1"
depends_on = [
okta_group.this,
]
account_assignments = concat([
for a in local.accounts_list : {
account = a.id
permission_set_arn = module.permission_sets.permission_sets[a.permission_set_name].arn
permission_set_name = "${a.name}-${a.role}"
principal_type = "GROUP",
principal_name = a.group_name
}
])
}
Whenever I need to change local.accounts_list, it causes ALL the assignments to be recreated, disconnecting users and causing mayhem…
With the proposed change I modify the account_assignments module call, and now I can add or remove accounts safely:
module "account_assignments" {
source = "path/to/terraform-aws-sso/modules/account-assignments"
for_each = local.accounts_list
wait_group_creation = okta_group.this[each.value.name].id
account_assignments = [
{
account = each.value.id
permission_set_arn = module.permission_sets.permission_sets[each.value.permission_set_name].arn
permission_set_name = "${each.value.name}-${each.value.role}"
principal_type = "GROUP",
principal_name = each.value.group_name
}
]
}
references
• https://itnext.io/beware-of-depends-on-for-modules-it-might-bite-you-da4741caac70
• https://medium.com/hashicorp-engineering/creating-module-dependencies-in-terraform-0-13-4322702dac4a
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Thanks, will attend to it next week
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
I tried the requested name change but it failed. Please tell me what the next step is and what you want to do next.
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
see my comment in the PR
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
please address the last comment and it should be ok, thank you
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Done, tested and pushed, tell me if anything else is needed
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
approved and merged, thanks
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Thank you!
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Next PR is ready for review/merge
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Any thoughts on the second PR?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this one https://github.com/cloudposse/terraform-aws-sso/pull/35 merged, thanks again
what
Fix the deprecation warnings as described here: https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.40.0
Based on PR #33 so that should be merged first.
why
Otherwise there are deprecation warnings…
references
• https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.40.0
• https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/identitystore_group#filter
• closes #34
![Simon Weil avatar](https://avatars.slack-edge.com/2022-12-18/4528338364150_1308db832598c8c29f20_72.jpg)
Great, thank you
2022-12-19
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
Odd bug (I think):
I have a stack with an EC2 instance whose AMI is taken from the stock of AWS public AMIs. There is a data source in the stack which checks for the latest AMI based on some criteria. I have been updating the stack every few weeks, and I can see that when a newer AMI is available from AWS, the terraform plan shows a replacement of the EC2 instance will occur. All good so far.
Today I changed the aws_instance.ami attribute to override it manually with “ami-xxxxx” (an actual custom AMI that I created), as part of some testing. Oddly, terraform plan did NOT show that the EC2 instance would be replaced. I added some outputs to confirm that my manually overridden value is seen by the var used for aws_instance.ami.
Any ideas what might cause this?
I worked around the issue by tainting the server, and in that case the plan showed that the AMI was going to be changed. But I’m still puzzled as to why an AMI ID change is detected sometimes (here, for AWS public AMIs) but not always (here, for custom AMIs).
![Paula avatar](https://avatars.slack-edge.com/2022-09-13/4070142320726_24f91e7b54e97b142967_72.jpg)
data "aws_ami" "latest" {
most_recent = true
owners = [var.owner]
filter {
name = "name"
values = ["${var.default_ami[var.ami]["name"]}"]
}
filter {
name = "image-id"
values = ["${var.default_ami[var.ami]["ami_id"]}"]
}
}
may be this https://stackoverflow.com/questions/65686821/terraform-find-latest-ami-via-data
I’m trying to implement some sort of mechanism where someone can fill in a variable which defines if it’s going to deploy an Amazon Linux machine or a self-created packer machine. But for some reas…
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
Thanks, but that part works (see “all good so far”). The problem is in paragraph 2.
![Paula avatar](https://avatars.slack-edge.com/2022-09-13/4070142320726_24f91e7b54e97b142967_72.jpg)
Sorry, I misunderstood. Maybe you can manually taint the instances, forcing a replacement, but it’s not the most correct solution for sure
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
Is there any chance your EC2 instance is in an ASG? If so you are probably only updating the launch template.
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
No @Soren Jensen no ASG in this stack!
![Fizz avatar](https://secure.gravatar.com/avatar/77649846c46dd6a47fccd66b77ef609c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
Can you upload your code and specify which version of the aws provider you are using?
![OliverS avatar](https://avatars.slack-edge.com/2020-04-30/1107989667377_3841766be8721753183c_72.jpg)
I’ll try to pare it down
![bricezakra avatar](https://secure.gravatar.com/avatar/0c510e42d1d451e0c1ca8112a63e3e63.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Hello everyone, how do I move my AWS CodePipeline from one environment to another?
2022-12-20
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Has anyone ever successfully implemented Terraform in CI (not talking TFC, Spacelift or similar) where you came up with a way of preventing the canceling of the CI job from potentially borking TF state? Currently up against this issue. Solutions I can think of right off the top are:
- Have the CI delegate to some other tool, like a cloud container that runs terraform (don’t like this really because it’s outside the CI)
- Have a step in the CI that requires approval for apply (don’t like this really because “manual CI”)
- Do nothing and just run it in CI
- Try to implement a script that somehow persists the container on the CI backend? I don’t have control of this, so I highly doubt it’s possible.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
preventing the canceling of the CI job from potentially borking TF state
what does this mean? you mean you cancelled your ci job and then how did your tfstate get borked? if you version your tfstate, then couldn’t you go back to the last version?
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
@RB this wouldn’t be me canceling the job.
our terraform is often embedded into the service repos and may run with application pull requests and I’m a big fan of this approach. developers may occasionally kill a job and not realize that a tf apply is in the middle of it. if whatever tf was doing at the time was very involved, like building a database, or doing something in production, this could actually cause a serious issue. and apparently we’re not the only people to have encountered this issue.
state is versioned, but rolling back state sounds scary. I’m not convinced that will be something to rely on and a seamless workflow that wouldn’t ultimately require manually blowing a bunch of stuff away. just like using destroy
is not a straightforward operation.
what would be better is if there was a way, as TFC does, that I could implement a graceful shutdown or something. that doesn’t need to be the solution, but it’s one thing that comes to mind. but of course I don’t control the CI’s backend so I’m assuming if a user hits Cancel
on the UI that the container runtime is going to send a KILL
to every container running, of which terraform apply
will be one.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
developers may occasionally kill a job
why do they do this?
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
because they’re human. why does anyone make mistakes. you make mistakes lol.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
no no, I understand that they occasionally kill a job, but I’m wondering what is the reason they feel compelled to ssh into a server and run kill -9 on the terraform process?
is terraform taking too long?
do they think they made a mistake and want to save time?
there must be a reason other than that they are human
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
oh – ha, sorry.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
because they realize that something is wrong with the job.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
for example it could be in a lower environment.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
and they might realize “oh crap – actually that’s [the wrong value]”
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
and cancel the job. I’ve done this before, although usually it’s with crappy development workflow.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
they might not be aware that there is a step in the CI workflow that is running terraform
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
like for example they may be running something in a lower environment and iterating over and over again
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
how long does the terraform apply take for the jobs that developers are likely to kill ?
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
I have no way of tracking that, although I wish that the CI had a graceful shutdown option.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
this is circleci
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
in terraform automation, you can usually exit terraform runs early, even gracefully, and that is similar to a kill <task>
instead of a kill -9 <task>
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
if the reason they are doing this is that they mess up and the feedback loop is too long so they try to kill the process in order to rerun it with the correct param, then the solution is to reduce the size (number of resources managed) of your terraform root directories
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
it wouldn’t be on the terraform end
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
other than that, all you can do is educate your developers to not do this or put in a policy within circleci to prevent any shutdown of the terraform apply task.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
why they are canceling
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
they’d be canceling for some reason due to their application I think, not realizing that tf is running as a sub-workflow
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
I could use a dynamic step to trigger a centralized pipeline maybe
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
but they only allow one dynamic step per pipeline lmfao so it’s like if I use that one dynamic step for my little terraform hack that’s a pretty poor reason for doing so
2022-12-21
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Is anyone running Terraform in their CI workflow? Not Spacelift, TFC or other terraform management tools, but actual CI like CircleCI, Gitlab, Jenkins, Codedeploy, Codefresh, etc? If so: how do you handle for the potential for an apply to be accidentally canceled mid-run or other complications?
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
We deploy all Terraform code to prod accounts from GitHub Actions. We haven’t taken any action to prevent a deployment from being cancelled. So far we haven’t had any issues.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
@Soren Jensen Thanks!
So you handle linting, plans, and everything from GitHub Actions?
What are the costs like? Are you deploying multiple times daily across a good number of services, or infrequently on a small number of things?
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
We use GitLab. The few times we had to cancel a deployment in the middle, we had TF state issues… which we simply resolve with force-unlock from our laptops, or by going into DynamoDB and deleting the lock
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Nice one @Sudhakar Isireddy thanks.
![mike avatar](https://secure.gravatar.com/avatar/f9a43785673682a201673b40d92efd25.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
I have a list of Okta application names and would like to convert them to a list of Okta application IDs. I have this working:
variable "okta_app_names" {
type = list(string)
default = ["core_wallet", "dli-test-app"]
}
data "okta_app_oauth" "apps" {
for_each = toset(var.okta_app_names)
label = each.key
}
resource "null_resource" "output_ids" {
for_each = data.okta_app_oauth.apps
provisioner "local-exec" {
command = "echo ${each.key} = ${each.value.id}"
}
}
The output_ids null_resource will print out each ID. However, I need the IDs in a list, not just printed like this; the list is expected by another Okta resource.
Anyone know of a way to get this into a list? Thanks.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
# Something like this
output "okta_app_ids" {
value = values(data.okta_app_oauth.apps)[*].id
}
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![attachment image](https://developer.hashicorp.com/og-image/terraform.jpg)
The values function returns a list of the element values in a given map.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![attachment image](https://developer.hashicorp.com/og-image/terraform.jpg)
Splat expressions concisely represent common operations. In Terraform, they also transform single, non-null values into a single-element tuple.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
also, why do you need resource "null_resource" "output_ids"
and print it to the console?
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you can use another output which would output key = value
pairs
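That key = value output could be sketched like this (the output name is made up):

```hcl
output "okta_app_ids_by_name" {
  # maps each app label to its ID, e.g. { "core_wallet" = "0oa..." }
  value = { for name, app in data.okta_app_oauth.apps : name => app.id }
}
```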
![mike avatar](https://secure.gravatar.com/avatar/f9a43785673682a201673b40d92efd25.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Thank you! That worked. I most definitely do not need the output_ids
null resource. I was just using that to illustrate what I was trying to do.
2022-12-22
![Rik avatar](https://avatars.slack-edge.com/2022-11-07/4318830255911_da56bb8ef9258058745f_72.jpg)
Hi,
Trying to make use of cloudposse/platform/datadog//modules/monitors to create monitors in Datadog.
I’d like to add some tags (which are visible in DD) from a variable, but I cannot figure out how to get those included.
Basically the same behaviour as the alert_tags variable for normal tags.
I tried tags =, but it makes no difference in the monitor that ends up in Datadog:
module "datadog_monitors" {
source = "cloudposse/platform/datadog//modules/monitors"
version = "1.0.1"
datadog_monitors = local.monitor_map
alert_tags = local.alert_tags
tags = { "BusinessUnit" : "XYZ" }
}
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
The tags are defined in the yaml
![Rik avatar](https://avatars.slack-edge.com/2022-11-07/4318830255911_da56bb8ef9258058745f_72.jpg)
yes, but how can I re-use the same tags across many monitors without duplication?
![Rik avatar](https://avatars.slack-edge.com/2022-11-07/4318830255911_da56bb8ef9258058745f_72.jpg)
you mean the catalog/monitor.yaml?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
for tagk, tagv in lookup(each.value, "tags", module.this.tags) : (tagv != null ? format("%s:%s", tagk, tagv) : tagk)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
After looking at the code, it does look like it reads from var.tags. if that doesn’t work, i would file a ticket with your inputs
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
module.this.tags is filled by var.tags
![Rik avatar](https://avatars.slack-edge.com/2022-11-07/4318830255911_da56bb8ef9258058745f_72.jpg)
hmm i see, looking for this i found: https://github.com/cloudposse/terraform-datadog-platform/pull/36
![Rik avatar](https://avatars.slack-edge.com/2022-11-07/4318830255911_da56bb8ef9258058745f_72.jpg)
I just tried: if I remove the tags: key from my monitor.yaml, it picks up the tags from the Terraform side. That’s not very clear from the docs.
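For reference, a minimal sketch of the working setup implied by the thread (local names assumed): keep the shared tags in the `tags` input and omit the `tags:` key from `catalog/monitor.yaml` so the module's fallback applies them.

```hcl
module "datadog_monitors" {
  source  = "cloudposse/platform/datadog//modules/monitors"
  version = "1.0.1"

  # monitors loaded from catalog/monitor.yaml, with no per-monitor tags: key
  datadog_monitors = local.monitor_map
  alert_tags       = local.alert_tags

  # module.this.tags is filled from var.tags and used as the fallback
  # whenever a monitor's YAML entry defines no tags of its own
  tags = { "BusinessUnit" : "XYZ" }
}
```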
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Agreed. We’re always looking for contributions. Please feel free to update our docs :)
![Durai avatar](https://secure.gravatar.com/avatar/1af77db776b243b36ab7e85a9c0e447e.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Hi, I’m trying to use cloudposse/terraform-aws-cloudwatch-events to create a CloudWatch event rule with an SNS target, but I’m running into an issue with the event rule pattern when creating it. We use Terragrunt to deploy our resources.
inputs = {
  name                              = "rds-maintenance-event"
  cloudwatch_event_rule_description = "Rule to get notified rds scheduled maintenance"
  cloudwatch_event_target_arn       = dependency.sns.outputs.sns_topic_arn

  cloudwatch_event_rule_pattern = {
    "detail" = {
      "eventTypeCategory" = ["scheduledChange"]
      "service"           = ["RDS"]
    }
    "detail-type" = ["AWS Health Event"]
    "source"      = ["aws.health"]
  }
}
Error received
Error: error creating EventBridge Rule (nonprod-rds-maintenance-event): InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object
at [Source: (String)""{\"detail\":{\"eventTypeCategory\":[\"scheduledChange\"],\"service\":[\"RDS\"]},\"detail-type\":[\"AWS Health Event\"],\"source\":[\"aws.health\"]}""; line: 1, column: 2]
Please suggest how to resolve it.
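One hedged reading of the error: the doubled quotes around the payload (`""{\"detail\":…}""`) suggest the pattern reached the provider JSON-encoded twice. Terragrunt passes inputs as `TF_VAR_…` environment variables, and when the receiving variable is loosely typed the map can arrive as an already-serialized string that the module then encodes again. A minimal experiment to isolate Terragrunt is to call the module from plain Terraform with the same native HCL map (module name and remaining inputs assumed; check the variable's declared type in the module source):

```hcl
module "rds_maintenance_event" {
  source = "cloudposse/cloudwatch-events/aws"

  name                              = "rds-maintenance-event"
  cloudwatch_event_rule_description = "Rule to get notified rds scheduled maintenance"
  cloudwatch_event_target_arn       = var.sns_topic_arn

  # passed as a native map so the module applies exactly one layer of encoding
  cloudwatch_event_rule_pattern = {
    "detail" = {
      "eventTypeCategory" = ["scheduledChange"]
      "service"           = ["RDS"]
    }
    "detail-type" = ["AWS Health Event"]
    "source"      = ["aws.health"]
  }
}
```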
![Patrice Lachance avatar](https://secure.gravatar.com/avatar/2333d4aa63cfb2808781a7eedf8551e8.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Hi, I’m trying to upgrade from https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.44.0 to https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.44.1 and get the following error message:
Error: Get “http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth”: dial tcp 127.0.0.1 connect: connection refused
I saw other replies mentioning network-related issues, but that shouldn’t be the case here because I’m running the command from the same host, same terminal session, same environment variables…
I can’t figure out by looking at the diff why this problem happens and hope someone will be able to help me!
![Fizz avatar](https://secure.gravatar.com/avatar/77649846c46dd6a47fccd66b77ef609c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
Where are you expecting to find the cluster (host:port)? Can you post the config for your providers?
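As background: a “localhost … connection refused” from the Kubernetes provider usually means the provider fell back to its defaults because it received no cluster endpoint or credentials. A typical explicit configuration looks roughly like this (a sketch; the data source and module output names are assumptions, adjust to your setup):

```hcl
data "aws_eks_cluster" "this" {
  name = module.eks_cluster.eks_cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```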
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
You probably want to use the 2.x version of the eks cluster module to get around that issue
![Patrice Lachance avatar](https://secure.gravatar.com/avatar/2333d4aa63cfb2808781a7eedf8551e8.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
@RB you were right! Using the 2.0.0 version and setting create_security_group = true fixed the issue. Now using the latest version of the module. Thanks for the quick support!
2022-12-23
![Sam avatar](https://secure.gravatar.com/avatar/7f7f5d75c3ec0ae933ea33f1b2c3737d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
Hello Everyone!
I’m working on creating an AWS Organization with dev, staging, and prod environments, but I don’t know what the best folder structure in Terraform would be.
- Is it better to create a separate directory for each env, or to use workspaces?
- Is it best to use modules to share resources between envs?
- Where should the .tfstate file live: in the root of the folder structure, or in each env’s folder? I know it should be stored in S3 with locks.
Your help would be greatly appreciated.
![Kurt Dean avatar](https://secure.gravatar.com/avatar/472d23a227e5f04df7a7b0620f37b8eb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
There are many (slightly) different ways to go about this. When comparing any of them, I would think about:
• keeping your IaC DRY (typically resources are same/similar across environments)
• how do you add a new account?
• how do you add a new service/project?
• how do you avoid drift?
![Sam avatar](https://secure.gravatar.com/avatar/7f7f5d75c3ec0ae933ea33f1b2c3737d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
The current environment is managed through the AWS console, which is one of the reasons I’m moving to IaC with Terraform. I’m currently running an application using Beanstalk, RDS, Route53, and CloudFront. So do I create a separate directory for these services and another directory for the modules (VPC, subnets, security)?
2022-12-26
![Dhamodharan avatar](https://secure.gravatar.com/avatar/d22d4a23a167ba07fe9ad7e7708694e2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0017-72.png)
Hi all, I am new to Terraform Cloud. I would like to automate my tf CLI commands with TF Cloud to provision resources in AWS. Can someone point me to the right documentation? I have gone through the official Terraform documentation, but I couldn’t follow it as I’m new to this. If you have come across any other documents, please share.
Regards,
![Fizz avatar](https://secure.gravatar.com/avatar/77649846c46dd6a47fccd66b77ef609c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
Not sure what you are trying to do but here are some tutorials on getting started with terraform cloud. https://developer.hashicorp.com/terraform/tutorials/cloud-get-started
![attachment image](https://developer.hashicorp.com/og-image/terraform.jpg)
Collaborate on version-controlled configuration using Terraform Cloud. Follow this track to build, change, and destroy infrastructure using remote runs and state.
![Dhamodharan avatar](https://secure.gravatar.com/avatar/d22d4a23a167ba07fe9ad7e7708694e2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0017-72.png)
Hi @Fizz, thanks for your support. I’m planning to automate the tf deployment: I have my code on my local machine, which I want to push to TF Cloud to provision the resources in AWS.
![Dhamodharan avatar](https://secure.gravatar.com/avatar/d22d4a23a167ba07fe9ad7e7708694e2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0017-72.png)
this is my requirement
![Fizz avatar](https://secure.gravatar.com/avatar/77649846c46dd6a47fccd66b77ef609c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
So assuming you will upload your code to a source code repository like GitHub, the tutorials cover your use case. I’d suggest going through them.
![Fizz avatar](https://secure.gravatar.com/avatar/77649846c46dd6a47fccd66b77ef609c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
And even if you plan to keep your code local, the tutorials get you most of the way there.
![Dhamodharan avatar](https://secure.gravatar.com/avatar/d22d4a23a167ba07fe9ad7e7708694e2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0017-72.png)
@Fizz thanks, I am going through the document and hope it helps. I will check and come back if I get stuck.
2022-12-27
![Dhamodharan avatar](https://secure.gravatar.com/avatar/d22d4a23a167ba07fe9ad7e7708694e2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0017-72.png)
I am trying to create an AWS task definition, passing the env variables for the container definition in a separate file, but I am getting the below error when planning the tf code.
│ Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition
│
│ with aws_ecs_task_definition.offers_taskdefinition,
│ on ecs_main.tf line 13, in resource "aws_ecs_task_definition" "app_taskdefinition":
│ 13: container_definitions = "${file("task_definitions/ecs_app_task_definition.json")}"
My resource snippet is:
resource "aws_ecs_task_definition" "app_taskdefinition" {
  family                   = "offers"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 2048
  memory                   = 8192
  task_role_arn            = aws_iam_role.ecstaskexecution_role.arn
  execution_role_arn       = aws_iam_role.ecstaskexecution_role.arn
  container_definitions    = file("task_definitions/ecs_app_task_definition.json")
}
The image spec is defined in the JSON file, and it works when I deploy manually.
Can someone help with this?
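For what it’s worth, the `cannot unmarshal object into Go value of type []*ecs.ContainerDefinition` error usually means the JSON file contains a single object at the top level, while `container_definitions` must be a JSON array of container objects. A sketch of the expected shape (container name, image, and values are hypothetical):

```json
[
  {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    "essential": true,
    "environment": [
      { "name": "ENVIRONMENT", "value": "dev" }
    ],
    "portMappings": [
      { "containerPort": 8080, "protocol": "tcp" }
    ]
  }
]
```

Note the enclosing `[ … ]` — even a single container definition must be wrapped in an array.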
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
maybe share the task def json?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
You may want to use the module to generate the container JSON
https://github.com/cloudposse/terraform-aws-ecs-container-definition
Then you can use it in your ecs task definition like this
container_definitions = module.container_definition.json_map_encoded_list
![Eric avatar](https://secure.gravatar.com/avatar/45be41b39dd85ffe663684d29f2448ce.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Not sure if this is the right place to ask, but can a new release be cut for cloudtrail-s3-bucket? There was a commit to fix (what I assume was) a deprecation message, but it was never released to the registry: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/commit/93050ec4f032edc32fed7b77943f3d43e9baeccd
![Eric avatar](https://secure.gravatar.com/avatar/45be41b39dd85ffe663684d29f2448ce.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hmm, actually (and I should have checked this), the deprecation message I got (coming from s3-log-storage deep down) isn’t fixed by 0.26.0 either.
![Eric avatar](https://secure.gravatar.com/avatar/45be41b39dd85ffe663684d29f2448ce.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
So I’ll file an issue in cloudtrail-s3-bucket to increment that dependency.
![Eric avatar](https://secure.gravatar.com/avatar/45be41b39dd85ffe663684d29f2448ce.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Found a bug? Maybe our Slack Community can help.
Describe the Bug
Using this module introduces a deprecation message regarding the use of “versioning” attribute of the s3 bucket created by this module.
Expected Behavior
No deprecation messages appear when using the latest release of this module
Steps to Reproduce
module "cloudtrail_s3_bucket" {
  source  = "cloudposse/cloudtrail-s3-bucket/aws"
  version = "0.23.1"
}
Screenshots
╷
│ Warning: Argument is deprecated
│
│ with module.cloudtrail_s3_bucket.module.s3_bucket.aws_s3_bucket.default[0],
│ on .terraform/modules/cloudtrail_s3_bucket.s3_bucket/main.tf line 1, in resource "aws_s3_bucket" "default":
│ 1: resource "aws_s3_bucket" "default" {
│
│ Use the aws_s3_bucket_versioning resource instead
│
│ (and 4 more similar warnings elsewhere)
Environment (please complete the following information):
Anything that will help us triage the bug will help. Here are some ideas:
MacOS 11.7.1
Terraform 1.2.7
Additional Context
This can be fixed by incrementing the version of the dependency “cloudposse/s3-log-bucket” to 1.0.0 (to get AWS v4 provider support). It is possible that an interim version might also work, but the release notes for your own module say not to use non-1.0 releases of s3-log-bucket.
![Matt Richter avatar](https://secure.gravatar.com/avatar/a6e5798964d1a06c34f0674c0a46e7fa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
Building out some light-brownfield terraform infra. I would love to make use of this module https://github.com/cloudposse/terraform-aws-tfstate-backend,
Terraform module that provisions an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.
![Matt Richter avatar](https://secure.gravatar.com/avatar/a6e5798964d1a06c34f0674c0a46e7fa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
but this issue is a bit annoying: https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/118
![attachment image](https://user-images.githubusercontent.com/86274629/179146938-f68316f9-041b-4fd6-8c0d-f8dc00e5bbae.png)
Describe the Bug
terraform apply
completed successfully. However, there is a warning in the log that will need attention in future:
╷
│ Warning: Argument is deprecated
│
│ with module.terraform_state_backend.module.log_storage.aws_s3_bucket.default,
│ on .terraform/modules/terraform_state_backend.log_storage/main.tf line 1, in resource “aws_s3_bucket” “default”:
│ 1: resource “aws_s3_bucket” “default” {
│
│ Use the aws_s3_bucket_logging resource instead
│
│ (and 21 more similar warnings elsewhere)
Expected Behavior
No deprecated argument warning.
Steps to Reproduce
Steps to reproduce the behavior:
- Add the below to my main.tf
module "terraform_state_backend" {
  source     = "cloudposse/tfstate-backend/aws"
  version    = "0.38.1"
  namespace  = "versent-digital-dev-kit"
  stage      = var.aws_region
  name       = "terraform"
  attributes = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
- Run ‘terraform apply -auto-approve’
- See warning in console output
Screenshots
![Matt Richter avatar](https://secure.gravatar.com/avatar/a6e5798964d1a06c34f0674c0a46e7fa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
i may take a whack at improving the outstanding PR at some point
![Matt Richter avatar](https://secure.gravatar.com/avatar/a6e5798964d1a06c34f0674c0a46e7fa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0018-72.png)
unless someone more familiar with the space has a chance to look
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Wow – shocked I haven’t heard of this before! It doesn’t come up at all when googling TACOS or things of that nature; it took accidentally seeing it mentioned in a Reddit comment (of course):
https://github.com/AzBuilder/terrakube
cc @Erik Osterman (Cloud Posse)
Open source tool to handle remote terraform workspace in organizations and handle all the lifecycle (plan, apply, destroy).
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I’ve seen this before and haven’t met anyone using it yet. Have you tried it?
I see very few forks and stars, so I’d be hesitant to use it in production.
https://github.com/AzBuilder/terrakube/network/members
It does look like possibly bridgecrew uses it according to the list of forks
Open source tool to handle remote terraform workspace in organizations and handle all the lifecycle (plan, apply, destroy).
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
It does look pretty nice. It hit our radar back on May 27th, but haven’t had a chance to look at it.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
you can quickly test it with docker-compose and Postman here:
• https://github.com/AzBuilder/terrakube-docker-compose seems promising for people who use k8s to manage their operations AKA operations k8s cluster
Docker compose to run a standalone Terrakube Platform
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
There is a desperate need for a feature-rich, production-grade TACO; I have absolutely no doubt about that. In fact, I’ve committed to deprecating the TACO at my current shop because the benefits simply don’t justify the cost. I’m not crazy about the idea of running k8s for Terraform, because I’d rather use a much simpler container scheduling platform like ECS Fargate, but I genuinely hope Terrakube enters the CNCF.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Have you considered the other TACOS? We’ve had a lot of good experiences with spacelift
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
We’ve had a lot of good experiences with spacelift
@RB As Erik mentioned, I was at Bread and worked with Andriy to implement Spacelift there. There’s nothing wrong per se with the TACOS; it’s just that, aside from state and workspace management, I’m not sure what benefit they truly bring. Yes, they can run sequential and parallel stacks; yes, they can run arbitrary shell commands against jobs; yes, they can do drift detection and give a convenient hook for governance and auditing. But these features aren’t really worth that much right now, and they’re definitely not worth 500K - 1M USD, which is about what the average big corporate TFC contract is.
I should reach out to env0 and Spacelift and see what a contract with them for between 10K - 15K applies/month would cost.
![Ryan avatar](https://avatars.slack-edge.com/2024-05-18/7139260260259_c50140f382b7db30f7e3_72.png)
@Jonas Steinberg happy to chat again with a previous customer. Let’s connect offline.
We don’t charge on the number of applies/month so it won’t be apples to apples, but I can tell you the existing customers that we have who migrated from TFE and TFC are quite happy.
It definitely won’t be 500K - 1M USD.
Grab some time with me at https://calendly.com/ryan-spacelift or anyone else interested. Happy to provide transparent and simple pricing and discussion on this.
Spacelift is a sophisticated and compliant infrastructure delivery platform for Terraform, CloudFormation, Pulumi, and Kubernetes. Free Trial: https://spacelift.io/free-trial.
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Thanks Ryan.
2022-12-28
2022-12-29
![Roman Kirilenko avatar](https://avatars.slack-edge.com/2022-12-29/4572910991749_72c8ca3039434043d059_72.jpg)
Hi everyone, I’m trying to do an upgrade of MSK with Terraform but hit an issue: “Configuration is in use by one or more clusters. Dissociate the configuration from the clusters.”
Is it possible to bypass that step so the configuration is not even touched?
2022-12-30
2022-12-31
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Hey all, looking for some guidance here. I’d like to create a node group per AZ, with the AZ in the group’s name. I’m struggling a bit; any suggestions would be appreciated.
module "primary_node_groups" {
  source   = "cloudposse/eks-node-group/aws"
  version  = "2.6.1"
  for_each = { for idx, subnet_id in module.subnets.private_subnet_ids : idx => subnet_id }

  subnet_ids = [each.value]
  attributes = each.value # this doesn't get me what I want
}
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
The attributes are a list. Wouldn’t you have to do this?
attributes = [each.value]
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
You know, that’s a good call. …curious as to why it’s not mad at that.
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Ah, because I was actually using tenant.
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
I changed it for this because I was thinking about doing attributes. That said, am I going about this the right way?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I think you’re doing it correctly. You just need to pass in the subnet AZ as an attribute
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
I think that’s where I’m struggling. How would I go about grabbing the AZ w/ the subnet from the dynamic subnets module?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Check the outputs first by using this:
output "subnets" { value = module.subnets }
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Then see where you can collect the AZ
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
If it’s not outputted then you could use a data source
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Ahh, great call on the data source. I typically forget about using them. I’ll give that a whirl.
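A sketch of the data-source approach: look up each private subnet to get its availability zone, then feed the AZ in as an attribute (the `module.subnets` output name is from the thread; the rest follows the `aws_subnet` data source):

```hcl
data "aws_subnet" "private" {
  for_each = toset(module.subnets.private_subnet_ids)
  id       = each.value
}

module "primary_node_groups" {
  source   = "cloudposse/eks-node-group/aws"
  version  = "2.6.1"
  for_each = data.aws_subnet.private

  subnet_ids = [each.value.id]

  # e.g. ["us-east-1a"] — the AZ ends up in the node group's name
  attributes = [each.value.availability_zone]
}
```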
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
![aimbotd avatar](https://secure.gravatar.com/avatar/d51859828a3c951900042fa7effd24ff.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)