#terraform (2023-06)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-06-01
Is there an example out there of how to put my entire org’s tfstate into a specific account? Multiple buckets is fine, but ideally I’d like to house it all in a specific account. I see that the tfstate-backend module has a tfstate_account_id input; should I be wrapping that in another module to pass the account ID from the account-map module, or is there a better way? Using atmos if that matters.
I’ve got the docs pulled up for atmos’ first AWS environment and the bootstrap process for tfstate-backend, but there are some knowledge gaps around how to bootstrap multiple accounts to a single backend.
Best place to ask is in refarch
Unfortunately, we do not provide the cold-start documentation publicly at this time. It’s part of our bootcamp, and jumpstart tracks. cloudposse.com/services
Will ask there, thanks!
Hello,
I am using terraform-aws-eks-workers and I would like to bump its version from v0.20.0 to v1.0.0. Unfortunately, I was using the input security_groups like so:
security_groups = [aws_security_group.eks_nodegroups_security_group.id, aws_security_group.eks_alb_ingress.id]
and I cannot figure out the new right way to configure my worker module. Any ideas?
I think the release notes contain a migration guide. At least we try to do that.
Can you check?
I didn’t see any release note on this repository. But maybe I am blind ^^
Hrmm… seems like it.
@Jeremy G (Cloud Posse)
To be honest, I tried a lot of different things and couldn’t figure it out. In the end I had issues with the aws provider used under the hood and its tag system, so I gave up
@Arthur Unfortunately, v1.0.0 was released accidentally. The current recommended release is v0.18.4. In v0.18.4 or v1.0.0, use additional_security_group_ids instead of security_groups.
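For illustration, the change on the module call might look like this (a sketch assuming the security groups from the question; all other inputs unchanged):

```hcl
module "eks_workers" {
  source  = "cloudposse/eks-workers/aws"
  version = "0.18.4"

  # v0.20.0 used:
  #   security_groups = [aws_security_group.eks_nodegroups_security_group.id, aws_security_group.eks_alb_ingress.id]
  # v0.18.4 / v1.0.0 rename the input:
  additional_security_group_ids = [
    aws_security_group.eks_nodegroups_security_group.id,
    aws_security_group.eks_alb_ingress.id,
  ]

  # ... remaining inputs as before
}
```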
Oh ok, nice to know! Can’t there be a warning somewhere? Also, do you have any advice for Apple silicon chips? I cannot find a way to install the template_file provider, unfortunately, because 0.20.0 is not supported on arm64
Ok, managed to do it by using
export TFENV_ARCH="amd64"
Hrmmm, we distribute an arm64 version of template_file
Maybe we didn’t update the module to use our fork.
Thanks, finally got it working
@Erik Osterman (Cloud Posse) This is one of the modules not yet updated to our current security-group
module, which, BTW, needs updating itself.
@Gabriela Campana (Cloud Posse) ticket to update the terraform-aws-eks-workers module’s security-group to the latest. Heads up @Arthur, not sure when we will fix it - just creating a ticket to track it.
@Erik Osterman (Cloud Posse) @Gabriela Campana (Cloud Posse) Ticket already in backlog (DEV-485)
2023-06-02
Hello! Could you please take a look at this PR? https://github.com/cloudposse/terraform-aws-ecs-cluster/pull/9 It’s really important, because it doesn’t work with the current version of the module.
This PR contains the following updates:
Release Notes
cloudposse/terraform-aws-ec2-autoscale-group
v0.32.0
Add support for instance reuse policy and fix bug for tag for v4 @linhkikuchi (#101) what
• Fix bug for https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/issues/100 • Add support for instance reuse policy for warm pool
why
• My module needs this new feature for warm pool
references
Configuration
Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
Automerge: Disabled by config. Please merge this manually once you are satisfied.
Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
Ignore: Close this PR and you won’t be reminded about this update again.
☐ If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.
Hi @Mikhail
We’ll review it
Thank you! Hope it will be fast)
Hi! I saw that the previous PR was closed, so I opened a new one: https://github.com/cloudposse/terraform-aws-ecs-cluster/pull/15 Any info on the ETA?
This PR contains the following updates:
Release Notes
cloudposse/terraform-aws-ec2-autoscale-group
v0.34.2
:bug: Bug Fixes Restore tags output @Nuru (#114) what
• Restore autoscaling_group_tags
output removed in #113
why
• Maintain backwards compatibility
Sync github @max-lobur (#110)
Rebuild github dir from the template
:rocket: Enhancements Support AWS Provider V5 @max-lobur (#113) what
• Support AWS Provider V5 • Linter fixes • Bump tf version
why
resource/aws_autoscaling_group: Remove deprecated tags attribute (https://github.com/hashicorp/terraform-provider-aws/issues/30842)
references
https://github.com/hashicorp/terraform-provider-aws/releases/tag/v5.0.0
• No changes
To support custom alarms with extended_statistic @linhkikuchi (#102) what
• Current module only support statistic
, not extended_statistic
for custom_alarms
why
• extended_statistic
is useful for metric TargetResponseTime
references
• Link to any supporting github issues or helpful documentation to add some context (e.g. stackoverflow).
• Use closes #​123
, if this PR closes a GitHub issue #123
Add support for instance reuse policy and fix bug for tag for v4 @linhkikuchi (#101) what
• Fix bug for https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/issues/100 • Add support for instance reuse policy for warm pool
why
• My module needs this new feature for warm pool
references
Configuration
Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
Automerge: Disabled by config. Please merge this manually once you are satisfied.
Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
Ignore: Close this PR and you won’t be reminded about this update again.
☐ If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.
Hi @Mikhail
One of our engineers reviewed PR #9 on the same day and closed it because we need to update to 0.34.2 as @ragumix mentioned.
We will review PR #15
Will have an update today
@Mikhail @Dan Miller (Cloud Posse) commented on PR#15
Heya guys, got a quick ask for help, I’m still pretty new with Terraform. I have this local variable in which I’m trying to build a listing of host, internal_ip and external_ip. The last octet of the internal_ip is derived from external_ip, I’ve got mostly everything working except I can’t seem to wrap my head around how to construct the internal ip.
I know I can use split(".", ip)[3] to extract the last octet from the IP, but I can’t seem to figure out/understand how to combine that with join(".", slice(split(".", data.aws_subnet.sending[k].cidr_block), 0, 3)) to construct a complete IP address
sending_ip_map = flatten([
for k, v in var.sending_ips : [
for ip in v : [
{
host = k,
external_ip = ip,
private_ip = join(".", slice(split(".", data.aws_subnet.sending[k].cidr_block), 0, 3))
}
]
]
])
With the current code, my output looks like
debug = [
+ {
+ external_ip = "44.194.111.252"
+ host = "eis1"
+ private_ip = "10.0.234"
},
+ {
+ external_ip = "44.194.111.254"
+ host = "eis1"
+ private_ip = "10.0.234"
},
+ {
+ external_ip = "44.194.111.253"
+ host = "eis2"
+ private_ip = "10.0.234"
},
+ {
+ external_ip = "44.194.111.255"
+ host = "eis2"
+ private_ip = "10.0.234"
},
]
what about
private_ip = format("%s.%s", join(".", slice(split(".", data.aws_subnet.sending[k].cidr_block), 0, 3)), split(".", ip)[3])
or similar (the expression is complex and can prob be simplified)
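As one possible simplification, Terraform’s built-in cidrhost() could express the same idea more directly (an untested sketch, only equivalent when the subnets are /24s):

```hcl
# cidrhost(prefix, hostnum) returns the hostnum-th address within the prefix,
# e.g. cidrhost("10.0.234.0/24", 252) = "10.0.234.252"
private_ip = cidrhost(data.aws_subnet.sending[k].cidr_block, tonumber(split(".", ip)[3]))
```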
that worked
ok, last question…. I also need to create a list of private ips per host that I can use later in a for_each loop (with the host being the key). I re-used the expression given earlier to convert the external IP to a private IP, but the list being generated doesn’t have a key.
sending_ips = {
eis1 = ["44.194.111.252", "44.194.111.254"],
eis2 = ["44.194.111.253", "44.194.111.255"]
}
private_ip_list = [
for k, v in var.sending_ips : [
for ip in v : format("%s.%s", join(".", slice(split(".", data.aws_subnet.sending[k].cidr_block), 0, 3)), split(".", ip)[3])
]
]
the output is
debug = [
[
"10.0.234.252",
"10.0.234.254",
],
[
"10.0.234.253",
"10.0.234.255",
],
]
private_ip_list = [
for k, v in local.sending_ips : {
for ip in v : k => format("%s.%s", join(".", slice(split(".", data.aws_subnet.sending[k].cidr_block), 0, 3)), split(".", ip)[3])
}
]
Thanks… it was close, I was getting Two different items produced the key "eis2" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
with that one, I had to flip things around a little.
This did the trick.
private_ip_list = {
for k, v in var.sending_ips : k => [
for ip in v : format("%s.%s", join(".", slice(split(".", data.aws_subnet.sending[k].cidr_block), 0, 3)), split(".", ip)[3])
]
}
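For completeness, a hypothetical consumer of that map - for_each over a map keys the resource by host, so each.key is the host and each.value its list of private IPs:

```hcl
resource "aws_route53_record" "internal" {
  for_each = local.private_ip_list # keyed by host, e.g. "eis1"

  zone_id = var.zone_id # hypothetical hosted zone
  name    = each.key
  type    = "A"
  ttl     = 300
  records = each.value # the list of private IPs for that host
}
```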
Still trying to wrap my head around for loops with terraform, but these examples help a lot for future reference.
glad you found the correct way to do it
Yep. Thanks a lot for the help, it’s a bit clearer to me how these loops work seeing a few working examples.
2023-06-03
2023-06-05
v1.5.0-rc2 (June 5, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…
Anyone else had weird behaviour using the helm provider? I just ran a plan, one in TF Cloud and one locally, and they’re both wildly different. The code is the same, the providers look the same too, same backend, everything. I can’t work out why this would be. Anyone got any idea?
@Max Lobur (Cloud Posse) maybe you can help
For TF Cloud - are you configuring the helm provider yourself there, or are the settings preset? Are you also setting helm/k8s versions yourself for cloud?
There might be some cache in a cloud that holds a different version for you
2023-06-06
I have a question: when you create an environment in Elastic Beanstalk with a load balancer, a security group gets created. I have a custom security group named mysql-secgrp. My question is: how can I automatically add the security group ID of the Elastic Beanstalk environment to my custom security group?
I’m trying to explore Terraform “data” and “output” but I’m stuck…
you can use the EB SG ID to add to your SG rules (https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/main/examples/complete/outputs.tf#L36)
output "elastic_beanstalk_environment_security_group_id" {
or the other way around, you can add your SG to the EB SG rules https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/main/examples/complete/main.tf#L75
additional_security_group_rules = [
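For the first approach, a standalone rule can attach the EB environment’s security group to the custom one (a sketch; the resource names and the module output name are assumptions - check the module’s outputs for the exact name):

```hcl
resource "aws_security_group_rule" "mysql_from_eb" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.mysql_secgrp.id
  source_security_group_id = module.elastic_beanstalk_environment.security_group_id
}
```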
2023-06-07
Hello! Are there any GitHub managers here? Our infra is blocked from upgrading the Elastic Beanstalk module to newer versions due to an issue raised. A solution has been offered but there has been no movement.
Is there any chance of getting this looked at?
you can fork the module yourself as a temporary workaround
try #terraform-aws-modules for this feedback
Thanks Alex
if you or the original author fix the conflicts, I can review
That’s done, thanks! I reached out to him
@jose.amengual I’ve got the original author to fix the conflicts
now we need to update the examples/complete with the new versions of the vpc module
Doesn’t the fix actually match the examples and documentation now? Sorry, I might have missed something there.
terratest runs the example/complete folder
Ahh ok
which instantiates the module; they need to be updated
Thank you
Hi @Samuel Crudge Igor requested changes 2 weeks ago on PR #229
Okay thanks, i’ll take a look
Can’t write to repo, i’m not Dawid. can i be added as a contributor?
or you can clone the repo and create your own PR, just make sure you mention this PR
Finally did it. My first OSS contribution too - hope I did it right
what
Suggested changed by @goruha.
Default to empty string if aws_security_group.id is null. Empty string will then get filtered out by compact().
why
Null value caused sort() function to crash, making it impossible to set create_security_group to false
Implementing changes suggested in #229 by @goruha.
references
Fixed sort function crashing when create_security_group=false #229
Option create_security_group can’t be false currently #216
Can someone explain why trim doesn’t trim this hyphen, but replace operates on it as expected?
> trim("foo-bar", "-")
"foo-bar"
> replace("foo-bar", "-" , "")
"foobar"
trim
removes the specified set of characters from the start and end of the given string.
The trim function removes the specified set of characters from the start and end of a given string.
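A couple of illustrative terraform console examples (behavior per the description above):

```hcl
# trim only strips characters from the ends, so interior "-" survive:
> trim("-foo--bar--", "-")
"foo--bar"
# replace substitutes every occurrence:
> replace("foo-bar", "-", "")
"foobar"
```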
…………………… ok. tyvm. BASH > Terraform
echo "foo-bar" | tr -d -
foobar
I do see it actually says start and end now. I lazy-eye skimmed right past it. And I know I’ve looked at this in the past.
many languages handle “trim” in different ways i think. can’t take the behavior for granted
That’s fair.
It’s disappointing when a company with transparency at its core, and integrity and kindness as values conducts layoffs in an obscure way: employees learning about this from Twitter DMs.
HashiCorp just let go 8%. Investors listening to earnings heard it before most employees.
It is a shame for HashiCorp
The company onboarded some incompetent HR/managers during pandemic…
2023-06-08
Hello guys, do you have any ideas on how to create a warm pool with the terraform-aws-eks-workers module?
just checked the code, warm pools are not supported yet
• And warm pool got a limit for EKS: If you try using a warm pool with an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group, instances that are still initializing might register with your Amazon EKS cluster. As a result, the cluster might schedule jobs on an instance as it is preparing to be stopped or hibernated.
2023-06-09
Hi, I recall that carrying out an AWS ASG rolling update through Terraform was not previously possible (due to no update policy API being exposed). This was something comparatively easily achievable in CloudFormation through an update policy. Is this ASG rolling update achievable now in TF? Thanks in advance!
Specify how to handle updates for the following resources with the UpdatePolicy attribute: AWS::AutoScalingGroup, AWS::AutoScalingGroup, AWS::ReplicationGroup, AWS::Domain, AWS::Domain, or AWS::Alias.
yes by default Terraform also sets instance refresh strategy to rolling updates https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group#strategy
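A sketch of the corresponding configuration (attribute names per the AWS provider docs linked above; values are illustrative):

```hcl
resource "aws_autoscaling_group" "example" {
  # ... name, min_size, max_size, launch template, etc.

  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90 # % of desired capacity that must stay in service
    }
  }
}
```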
Thanks @Dan Miller (Cloud Posse), would the TF property min_healthy_percentage achieve the equivalent behaviour of the CFN property MaxBatchSize to have instances replaced in parallel? Ta
no I believe that would be max_size
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group#max_size
max_size is a property directly on the ASG, so it seems achieving the same behaviour through TF would require two Terraform runs: one to introduce the new batch through a higher max_size, and once deployed, one to reduce the instance group size - in an attempt to mimic the out-of-the-box behaviour of a CFN update policy.
2023-06-12
v1.5.0 (June 12, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…
2023-06-13
2023-06-14
Hi, just wondering if it is advisable to use only an S3 lock feature without DynamoDB for the state files. We are trying to get rid of DynamoDB to save some costs.
No S3 feature provides a sufficient lock guarantee
If you are referring to Terraform state locking, there is no S3 lock feature; you need DynamoDB to provide locks. DynamoDB in pay-per-request mode should cost much less than $5/month and is worth much more than that.
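For reference, a typical S3 backend with DynamoDB locking (bucket and table names are placeholders; the DynamoDB table needs a LockID string hash key):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks" # enables state locking
  }
}
```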
Hey guys - had a quick question about Terraform and was wondering if anyone knew about this.
The Arbitrary Expressions with Argument Syntax docs page implies that all blocks are supported as arrays (which was welcome news to me, having hated blocks ever since they were introduced), meaning this should work if settings was usually defined in block syntax and api_settings was an object:
settings = [var.api_settings]
However, I get Unsupported Argument. Is support for this dependent on the provider, or am I misunderstanding the docs?
my understanding is that it depends on the provider, and whether they define that argument as a block, or use the special mode that page is talking about, attr-as-block, where the argument is then an attribute…
ugggghhh that would make sense, thanks. I really can’t be bothered to open another issue on the AWS provider GitHub, so I’ll just make a whole bunch of non-future-proofed boilerplate, which Terraform is great at forcing you to write
In the 0.12 days, they were pushing to move more to the block definition of things. But since then, I think they’ve gotten more feedback and pushback that users/providers/developers really like the flexibility of assignment that you get with the attribute syntax. So I think maybe the guidance is shifting again
When it is defined as a block, you can kinda get there using dynamic blocks, and then your module input can be a list or map of things, and the caller can use any arbitrary expression to build up that object
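That pattern might look like this (a hypothetical module input plus an Elastic Beanstalk-style setting block, purely for illustration):

```hcl
# Caller can build this list with any arbitrary expression
variable "settings" {
  type = list(object({
    namespace = string
    name      = string
    value     = string
  }))
  default = []
}

resource "aws_elastic_beanstalk_environment" "example" {
  # ... name, application, solution_stack_name, etc.

  # one setting block is generated per element of var.settings
  dynamic "setting" {
    for_each = var.settings
    content {
      namespace = setting.value.namespace
      name      = setting.value.name
      value     = setting.value.value
    }
  }
}
```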
I really hope so. I ranted into the void many times that blocks were a really bad idea to force developers to use. They look great if you’re hardcoding your values, but if you want to allow any kind of dynamism you end up with horrific dynamic workarounds.
Can you even generate arbitrary attributes in a block? Like, does this work?
locals {
settings = {
metrics_enabled = true
logging_level = "info"
}
}
settings {
for key, value in local.settings : key => value
}
or do I have to manually write support for every potential attribute that can be in the block?
locals {
settings = {
metrics_enabled = true
logging_level = "info"
}
}
settings {
metrics_enabled = local.settings.metrics_enabled
logging_level = local.settings.logging_level
}
dynamic only makes things easier for the caller of the module, not the developer
Sadly I’ve been using it daily since 0.10 and have dozens of modules with horrible hacks to get around blocks. I’m rather hoping I have the opportunity to work with one of the more programmatic IaC solutions in the future…
yeah. The existence of both maps and blocks is a weird HCL wart. I wish they would stop leaning into blocks
I’ve encountered something with cloudposse/label/null that feels like an issue to me, or at least a potential feature, and wanted to get some opinions before I try fixing it. When setting label_key_case, the Name tag is still affected the same as every other tag. Its value is treated as a special case (getting the id), but not the key itself. I know it’s kind of AWS specific, but it feels to me like Name should always be title-cased regardless of label_key_case.
Yes, it’s an “AWS Wart”
AWS requires this tag to be exactly Name for it to appear properly in the Web UI, which is why it behaves this way.
I forget if we have a workaround.
@Jeremy G (Cloud Posse)
@Nat Williams @Erik Osterman (Cloud Posse) Sorry, we do not have a workaround or fix for this. In v2 we probably should not have a “name” input, and the “Name” tag would be thoroughly special-cased.
The fix for this is not that difficult, but since we use it almost literally everywhere, the ripple effect of even a small change is huge. The primary use case for label_key_case is for people using null-label outside of AWS. As such, I do not think it is worth the effort to change it right now. If you want to take the time to open an issue on the topic, that will help ensure the change makes it into the next release (which may not be until next year).
@Jeremy G (Cloud Posse) Is the release schedule for null-label such that even if I put up a PR that everyone loved, it might not get released until next year?
That is ultimately up to @Erik Osterman (Cloud Posse), but I would say “yes”. Updating null-label is just too disruptive, and none of our current or past paying customers have complained about this.
Also, because null-label is so central to our framework and so complicated, we generally do not accept PRs for it anyway. The best contribution you can make is to open an issue with a good example.
OK, good to know. I was on the fence about trying to make this work or just accepting the disgusting idea of title-cased tags.
Out of curiosity are you using AWS or on another cloud?
/* our big picture plan is to overhaul null-label and eliminate any opinionated label parameters or things like this so it's entirely generalized */
I’m using AWS
Hrm… and so I understand your use-case, you do not care for pretty/meaningful resource names in AWS Web Console?
(personally, I love kubernetes style labels, so get the desire to force tags to a given convention; just in this case I also prefer AWS to have meaningful resource names in the UI, or it’s something AWS generates)
I mean, I would say I do care for pretty tag names, in that I want them all to be lower case. But I acknowledge the reality that Name is a special case.
oh, sorry, I misread that
no, I want the good resource names, which, yeah, means setting Name
2023-06-15
Hi, how do you disable encryption_at_rest on the Cloud Posse DocumentDB Terraform module? There seems to be no option to do that
it might not be supported. Why do you want this option?
I don’t know about the Cloud Posse DocumentDB Terraform module, but I think AWS disabled the option for non-encryption on almost all of its services.
You need to use some KMS key (AWS-managed or CMK)
If supported, we would accept a PR to feature-flag it. Post in #pr-reviews
Who wants a brain teaser? Getting this error with terraform-aws-config, but my config appears (to my eyes) to be valid.
probably should be subscriber['key']
?
The error appears from a cloudposse module https://github.com/cloudposse/terraform-aws-config
This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
yeah, the variable passed in should be subscriber['key'], not subscriber
Sorry, I don’t understand. I’m not passing in a variable called subscriber. I have subscribers as an input.
oh, can you pass in the variable subscribers directly?
or can try:
Yeah it’s an input variable https://github.com/cloudposse/terraform-aws-config#input_subscribers
{ for key, value in var.subscribers : key => {
    protocol = value["protocol"]
    endpoint = value["endpoint"]
    ...
  }
}
I didn’t write the code on the left, it’s a cloudposse module
oh yeah hmm what is the module version being used?
0.18.0
ok, latest one
Looks like an old bug https://github.com/cloudposse/terraform-aws-config/pull/44
Shame this fix wasn’t ever merged
oh yeah, I found the same link lol
Left a comment
oh, you can try with the updates from the PR and give it a test under the .terraform folder
Looks like an old bug https://github.com/cloudposse/terraform-aws-config/pull/44
Thanks for the quick feedback
Anything I can do to move the needle on this @Erik Osterman (Cloud Posse)? Will happily take a maintainer role.
@Linda Pham (Cloud Posse) @Gabriela Campana (Cloud Posse)
@Hao Wang @James A fix just got pushed. It was based on that PR ^. Please use latest tag.
Note that as part of this update we also bumped the aws provider to v5. Let us know if that conflicts with your current aws provider so we can roll out a similar fix for that as well.
Great, thanks a lot
I would appreciate it if you could take a look at the PR
It’s for firewall-manager - shield_advanced.tf
@Andriy Knysh (Cloud Posse) @Dan Miller (Cloud Posse)
one ci test failed, but you should just need to run this locally:
make init
make readme
Also edited the docs/terraform.md file
great! waiting for tests to pass and then we should be good
would you mind fixing this typo too? it’s not part of your changes, but this is causing the tests to fail https://github.com/cloudposse/terraform-aws-firewall-manager/blob/main/examples/complete/main.tf#L22
shiled_advanced_policies = var.shield_advanced_policies
should be shield_advanced_policies
pushed another PR to fix this. I’ll get these tests fixed and merged into main shortly https://github.com/cloudposse/terraform-aws-firewall-manager/pull/29
what
• Fixed a typo with shield_advanced_policies • Bumped the vpc module version
why
• This example is broken and is causing tests to fail in other PRs
references
Ok, thanks @Dan Miller (Cloud Posse)
Done
What’s the up-to-date module version? (it was 0.3.0 if I’m not mistaken)
fixed now
@Dan Miller (Cloud Posse) I need to open another PR with the same thing but now for WAFv2. Will do it later today and ping you.
Sounds good. This time should be much easier now that the module is updated
2023-06-16
Hi guys, thanks for having me here. Does anybody know if OpenSearch support for terraform-aws-elasticsearch is going to be available soon? I’m planning an upgrade but wanted to continue using the Cloud Posse module.
We are not working on it yet. We are kind of surprised no customer has asked for it. A customer has done the work, but has not upstreamed it yet
Hi @Gabriela Campana (Cloud Posse),
We are planning to deploy a new AWS Elasticsearch cluster using terraform-aws-elasticsearch. Would you be so kind as to share your experience on how customers upgrade to OpenSearch?
As you said above, there is no direct way to deploy OpenSearch using that module.
Hi @Sergei
Maybe @Steven Hopkins can share his experience
I only modified the module to install OpenSearch. Our use case doesn’t require the data to be persistent, as our apps build the data on startup, so I cannot comment on migrating data, if that’s the main guidance you are looking for.
Hey @Steven Hopkins, concerning the PR to add OpenSearch to the Cloud Posse module for Elasticsearch: do you mind if I go ahead and work on fixing up the tests for this and move it from draft to ready? We have a customer who finished getting this working, and I believe we can finalize the change
what
• Adds support for opensearch domains
why
• Amazon OpenSearch Service is the successor to Amazon Elasticsearch Service and supports OpenSearch and legacy Elasticsearch OSS (up to 7.10, the final open source version of the software).
references
• Link to any supporting github issues or helpful documentation to add some context (e.g. stackoverflow). • https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/opensearch_domain
That would be awesome, if it’s still relevant and useful, absolutely work it
2023-06-18
2023-06-19
anyone know how these files are getting generated (via terraform / terragrunt)? I’ve inherited a couple of legacy aws accounts and noticed that this bucket is getting exploded with all of this gibberish (10GB+), but I’m not sure where it’s coming from and how to stop it.
Check if this bucket has any bucket policy and maybe you need to place some control there.
there’s this bucket policy, but it doesn’t tell much…
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "RootAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::1234567890:root"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::terraform-state-1234567890",
"arn:aws:s3:::terraform-state-1234567890/*"
]
}
]
}
nothing in lamba, ecs schedule, or eventbridge rule
Were the logs written to the same S3 bucket? Maybe check the log settings of this S3 bucket
something is writing to this bucket… and all the settings for this bucket are default or empty
Could this S3 bucket be used by CloudWatch somewhere else, as a log bucket?
I don’t see it… is there some sort of terraform magic that’s constantly checking state files and dumping into a bucket? seems like that’s what it’s doing…
Yeah, looks/sounds like access logs, maybe from another s3 bucket?
maybe search for the bucket name in your repo?
no luck, I’ll probably have to dig into the state file and see what’s actually getting brewed…
Have you opened any of those objects?
yes, but got denied since the KMS key used to encrypt the objects is nowhere to be found… I’m very tempted to just nuke the entire account and forget about this…
then you’ll lose the fun of finding the cause lol
2023-06-20
Hey guys, I am seeking feedback from those who have begun utilizing check blocks in their Terraform configurations.
- What has your experience been like so far?
- Could you highlight any specific use-cases where you found the application of check blocks particularly beneficial?
- Have you encountered any potential pitfalls, challenges, or concerns in their implementation? Any feedback or insights would be greatly appreciated
Check custom requirements for variables, outputs, data sources, and resources and provide better error messages in context.
Great to know this feature is implemented; before, I had to use a null resource to run a shell command to see if a service is up
Is my answer going to become content for a blog post you are writing? That’s what it feels like
EDIT: I see you work with @Matt Gowie. Not a blow-in
Yes, we have started using check resources.
So far, I’m using them to check that combinations of input variables are valid. For example, if var.eb_environment_type == "single-instance", ensure that var.eb_asg_max is null.
(Actually, I’m making up this example, our real uses are harder to explain.)
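As a sketch, that made-up example as a Terraform >= 1.5 check block (variable names taken from this thread, otherwise hypothetical):

```hcl
check "single_instance_has_no_asg_max" {
  assert {
    condition     = var.eb_environment_type != "single-instance" || var.eb_asg_max == null
    error_message = "eb_asg_max must be null when eb_environment_type is \"single-instance\"."
  }
}
```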
We previously implemented this with a precondition on the aws_elasticbeanstalk_environment resource. Preconditions have the advantage that they stop execution, but it can be confusing to check variable validity on a resource, especially if the variable gets used in many places. For example, we also use var.eb_asg_max in some aws_cloudwatch_alarm resources. Which resource should we put the precondition on?
One problem is that checks don’t stop execution. In our cases that’s generally fine – the check is replicating a validation check in the AWS API that Terraform’s provider doesn’t include, so the check block acts as a time-saver rather than a guardrail.
@Hao Wang yeah, that totally makes sense. Thanks!
@Alex Jurkiewicz haha, you’re right! I’m looking for some good examples where the check feature could bring infrastructure validation to the next level or solve a painful issue. I see how it helped you here, thanks for the detailed answer!
Are you testing your Terraform configuration with other tools, e.g. Terratest? If so, do you see check as a potential substitute for it?
We test modules, but not our root configurations. We do use tflint and otherwise rely on terraform plan
I remember you posting about this earlier:
One problem is that checks don’t stop execution.
Why do you think it’s implemented this way?
i find it funny they are described this way (which is pretty accurate) but the truth is if one of the checks fails, execution continues
I asked Martin “Terraform dev” Atkins in another Slack. Let me pull up his reply
My understanding is that some folks reported that they don’t want to use preconditions and postconditions for certain situations because they just want to know that the thing is failed but not be blocked from making changes unrelated to the thing that failed.
If you do want it to fail then preconditions and postconditions are still there for you.
—
The sort of use-cases I heard about were things like TLS certificate expiration where something unavoidably “changes outside of Terraform” (in this case, the current time is what’s changed, which of course Terraform cannot control!) and for some teams it’s desirable to not be blocked by a surprising failure if they were intending to apply something unrelated to it.
(Certificate expiration is probably not the best example but it was the one that came to my mind first)
Checks don’t run at any particular defined time in Terraform’s execution. They are treated as just another block for scheduling according to the dependency graph.
So if they did halt execution, they would halt it at indeterminate times during execution. Hence, they are only used as warnings.
According to the docs Terraform should evaluate assertions as the last step of the operation:
assert blocks, which execute at the end of the plan and apply stages and produce warnings to notify you of problems within your infrastructure.
So checks serve more as an advisory tool – that was my understanding.
@Alex Jurkiewicz have you noticed another behaviour in terms of execution?
Thanks for the info! I haven’t read the official docs page yet. That link doesn’t work though. I haven’t tested this myself, I was paraphrasing another comment from Martin:
Raising an error isn’t super useful if you can’t control what gets guarded by the error. Preconditions and postconditions can do this because they have a well-defined position in the sequence of operations – either before or after the action for a particular object – so you can make sure the error gets raised before taking whatever harmful action you were trying to prevent.
These new check blocks cannot “guard” anything so it would be questionable for them to raise an error because the bad thing would already have happened anyway and so it wouldn’t be any different than a warning.
Sorry - fixed the link
That all makes sense - thanks a lot for sharing!
good correction, thanks too!
2023-06-21
v1.5.1 1.5.1 (June 21, 2023) BUG FIXES: core: plan validation would fail for providers using nested set attributes with computed object attribute (#33377)
If a set contains partially known values, Length will be unknown, which causes assertPlannedObjectValid to fail valid plans. Revert to the old method of using LengthInt for the set lengths, which ret…
Hi folks, I have a question. Let’s say I’m building a library of terraform modules and publishing them to a private registry, e.g. citizen. I have an internal developer portal that effectively calls these individual modules through pipelines to stand up infra resources. What would be better: establishing a means of downloading the module from the private registry, or calling the module in a .tf file within the examples directory? (But then how do I dynamically control the version of the module?)
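On the version question: a sketch of how a registry call is usually pinned (host, namespace, and module name here are made up). Note that Terraform requires the version argument to be a literal string, so the usual pattern is to pin a constraint and let tooling like Renovate bump it, rather than passing the version in dynamically:

```hcl
# Hypothetical module call against a private registry.
module "network" {
  source  = "registry.example.internal/platform/network/aws"
  version = "~> 1.4" # any 1.4.x release; a bot bumps this line on new releases
}
```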
maybe something for #office-hours
2023-06-22
I wanted to share this article. It’s a result of a lot of thinking and talking with many Terraform users about where IaC and Terraform should go in the next few years. It’s very opinionated. Would be great to hear people’s thoughts even if they strongly disagree with the post. The tl;dr version of it is that ClickOps + HCL is the future, continuing to use HCL is a good thing because humans and computers can read, write, and modify HCL, and it enables Terraform users to get powerful tooling, especially around GUIs that could consume and produce HCL. Additionally, we believe CDKs are the wrong direction.
https://terrateam.io/blog/the-future-of-terraform-is-clickops
interesting argument, I agree with it overall. Terraform is mature technology in cloud native orgs, CDKs have no adoption, and developers in general are learning AWS but not Terraform.
I don’t see a world where public clouds invest in their IaC story. What customers would it bring? Newbies don’t care about IaC, and advanced users don’t migrate workloads between clouds.
I also don’t see a world where Terraform is replaced. IMO, terraform’s moat is the AWS provider. AWS have shown no interest in the “cloud control API” to get rid of this moat. Terraform will live as long as the public clouds do
The vision of a transactional AWS Console is enticing. I would love that.
I’ve often thought about giving devs full access to manage resources in a dev environment, with an IAM condition to ensure new resources have a service tag defined. You can then periodically run terraformer to generate per-service IaC definitions for production
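A rough sketch of that IAM-condition idea, assuming a service tag key (the tag key and the action list are illustrative; the Null operator makes the Allow apply only when the tag is supplied with the request):

```hcl
# Illustrative: only allow instance launches that include a "service" tag.
data "aws_iam_policy_document" "require_service_tag" {
  statement {
    sid       = "AllowRunInstancesWithServiceTag"
    effect    = "Allow"
    actions   = ["ec2:RunInstances"]
    resources = ["*"]

    condition {
      test     = "Null"
      variable = "aws:RequestTag/service"
      values   = ["false"] # i.e. the tag must be present in the request
    }
  }
}
```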
Whether or not one thinks Hashicorp’s HCL is good at expressing infrastructure, it is, at the very least, a very limited language, which makes it hard to write convoluted code
agree to disagree… imo HCL is the most convoluted way to repeat yourself without external libs to help you out…
https://deezer.io/terragrunt-dont-repeat-your-terraform-variables-358-times-b6cdb4e09e9f
At Deezer, the majority of our infrastructure is hosted on site and it fits most of our needs. But maintaining a world-class music…
Terragrunt introduces yet another tool just to help manage the dependency graph of multi-module terraform projects. I don’t like it though; I would use a Makefile or atmos to manage such projects.
Repetition is the opposite of convolution; terragrunt adds complexity
After reading this article, it made me think of cloudformation templates that are publicly facing which can be pasted via clickops into aws cloudformation console. You can then enter any parameters you want, hit the deploy button, and the infrastructure is spun up.
I don’t know how I feel about ClickOps, but the first half of the article seems dead on the money.
I worked with CDKTF which is admittedly still alpha, the solutions we produced “worked”, but were basically impossible to read and understand. Partly because our code wasn’t amazing, but mainly because the code abstracts away the logic too much.
Terraform is great because HCL is really easy to read.
I’ve recently started using Terraspace and it’s honestly fantastic. You write regular HCL and then add in some Ruby glue. I’ve used it to create an AWS Organisation with member accounts, something that is incredibly difficult to do elegantly with vanilla HCL.
The Terraform Framework
Can someone remind me what terraform does with “looped” resources if the order in the list changes?
my.tfvars
buckets = {
one = {
replication_enabled = true
}
two = {
replication_enabled = true
}
}
main.tf
module "bucket" {
  for_each = var.buckets

  name                = each.key
  replication_enabled = each.value.replication_enabled
}
Terraform probably destroys and recreates the resources if I change my.tfvars to:
buckets = {
two = {
replication_enabled = true
}
one = {
replication_enabled = true
}
}
But maybe someone can confirm it.
you can test it with null_resource. But the answer is that Terraform won’t modify the resources:
locals {
buckets = {
two = {
replication_enabled = true
}
one = {
replication_enabled = true
}
}
}
resource "null_resource" "test" {
for_each = { for k, v in local.buckets: k => v }
provisioner "local-exec" {
command = "echo ${each.value.replication_enabled}"
}
}
Thanks sorry for stupid question, could’ve tested it myself
I think you were probably confusing for_each with count. for_each in this example is safe, but count is based on numeric indexes and could cause unintended changes in the infrastructure, especially if the list size changes.
Are there any decent ways to generate half-way decent graphs of TF resources? Use case - putting a pretty picture in our TF modules READMEs so our co-workers can get a better idea what a module does without being experts
remember there is a project someone mentioned in office hour, but found this one, https://github.com/cycloidio/inframap
Read your tfstate or HCL to generate a graph specific for each provider, showing only the resources that are most important/relevant.
oh yeah, this is the one lol
I tried this, but it didn’t work well for modules - I ran into this issue where it would generate empty graphs. oh well
My team vends many Terraform modules out to the larger organization. Because they’re modules, they don’t have state. I would love to be able to generate a graph of the resources created by these modules, and how they are interconnected.
I write *.tf files that pair with my Terratest *.go code, so I could probably invoke the module to get the state of the test file and generate from that, but it would be nice to be able to pass my raw *.tf files (perhaps with some annotations in comments? I’m still learning the HashiCorp HCL Go package to understand what’s possible) and generate a nice graph for the README and my end-users that is dramatically more user-friendly than terraform graph.
I’ve tested this tool on a few of my modules, and some have some data, and some come up blank.
Thanks!
What command did you use?
I came across this the other day. https://github.com/asannou/tfmermaid-action
Github Action for converting an output of Terraform graph to Mermaid’s syntax.
Haven’t tried it yet, but want to.
Since GitHub supports Mermaid natively in markdown, this seems the most promising to me.
Hi folks,
Anyone around who is publishing modules into a private registry - TFC or others - and then using Renovate for upgrades?
If so, do you have a sample of how that configuration looks? Note I’m trying to do the following:
• the API token to access the TFC private registry is / should be in a GH org secret or at least repo secrets
• in renovate.json I’d like to use the secret
• upgrades for child modules defined in root modules (easy to achieve), but equally for child-of-child modules
while researching I bumped into https://github.com/renovatebot/renovate/discussions/22779 which suggests the 2nd bullet point won’t work with the hosted app; the devil’s in the details
You can’t upgrade child of child modules with renovate. You need to use renovate on the repo containing child module instead
"hostRules": [
  {
    "matchHost": "spacelift.io",
    "encrypted": {
      "token": "xxx"
    }
  }
]
This works for us
Note that you can use an encrypted blob for the password in renovate.json
i see, thank you Alex
Hi @Alex Jurkiewicz I stumbled upon this and cant seem to get spacelift working with renovate. Is this still working for you?
Yes
Check the renovate debug logs perhaps
You could try asking Spacelift to figure it out and add a page to their docs
I think renovate was just being weird – i got it working now. Also I think the API key must have Admin not just Reader
2023-06-23
Hello! Cloud Posse team, could you please review this PR? https://github.com/cloudposse/terraform-aws-alb-ingress/pull/68
what
• Add an ability to create rules with *fixed-response* action
why
• There was no ability to create such rules
Can anyone help me to understand why adding
filter {
name = "instance-state-name"
values = ["running"]
}
in the data source for aws_instance
starts complaining saying no matching EC2 instance found. The total code block is
data "aws_instance" "syslog" {
filter {
name = "tag:Name"
values = ["xx-yy"]
}
filter {
name = "tag:ManagedBy"
values = ["Terraform"]
}
filter {
name = "instance-state-name"
values = ["running"]
}
}
If I remove instance-state-name it works, but finds a terminated instance.
seems you will need to bring up an instance somewhere else
Well the instance is indeed running. And yet it fails to find it. All tags are correct. Strange!
Can you confirm it works with the same filters in the AWS CLI, on the same tags?
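For reference, a CLI check mirroring the data source’s filters would look something like this (tag values are taken from the example above; it should print exactly one instance ID):

```shell
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=xx-yy" \
            "Name=tag:ManagedBy,Values=Terraform" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId'
```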
2023-06-24
2023-06-26
Hi all. We’re looking to implement a TACOS solution like atlantis, spacelift, env0, etc (solution hasn’t been selected yet). Most of the solutions appear to handle plan review and apply in git PRs. While I’m sure this is fine for most changes, I’m concerned that complex changes, new modules, etc would be awkward to test and troubleshoot via PR comments. For those using these kinds of solutions:
- Is troubleshooting complex changes via PR comments painful? Is there another workflow available in the tool you use?
- Do you allow any “power users” to run terraform manually when needed, trusting them to commit changes properly at the end?
I’m thinking of use cases like importing complex legacy resources that don’t currently match the “new” architecture. Today, I’d be iterating through many runs of terraform plan to identify differences, and resolving them through module updates or even ClickOps, until I could run an apply at the end and get everything fully synced. Similarly, when developing a new component, it may take a lot of iteration to get it right.
i don’t think the tacos necessarily block local execution… ought to be able to do both, gated basically only by permissions the user has to the providers and the backends
Very true, that’s not necessary - but it is one of our goals. We want to avoid provisioning advanced AWS privileges to every developer - let the tacos tool have those privileges instead, and gate merges behind peer or admin review.
ok? then you know how to get to both… lol. so, what’s the real question?
Yeah, you’re right. I’ll clarify. I guess I have three questions:
- Have others run into situations where debugging terraform code via git PRs is inefficient/unusable?
- If my devops team, for example, have the ability to run terraform by hand, will our workload increase as we find lots of cases where it’s needed?
- If we allow this for some users, will we run into problems keeping the git workflows working? How careful do we need to be about who has this access, and when/how it is used?
i personally think it is prudent to restrict permissions to prod environments, but like to have one or more dev/test environments where i can exercise my terraform configs/updates/upgrades manually, especially when it comes to changes that require moved or import blocks
these days, terraform does a pretty decent job of telling me what i screwed up in the plan (way better than in the past). of course, i still have to read and interpret the messages correctly (true regardless whether it’s local or PR output). but i do find that devops teams still tend to open a PR and then disappear, and leave it to a reviewer to tell them something failed. it’s a little more in their face when it fails locally
i’m not sure there are great solutions for such things, other than trying to build the team culture over time, and guiding/correcting folks towards the desired workflow
when terraform’s messaging is not sufficient to debug, yes, it is quite valuable to be able to escalate permissions to get access for local execution
have you looked into tooling/workflows for temporarily elevating permissions?
• https://aws.amazon.com/blogs/security/temporary-elevated-access-management-with-iam-identity-center/
AWS recommends using automation where possible to keep people away from systems—yet not every action can be automated in practice, and some operations might require access by human users. Depending on their scope and potential impact, some human operations might require special treatment. One such treatment is temporary elevated access, also known as just-in-time access. […]
An open source privileged access management framework which makes requesting access a breeze.
Oh, here’s the doc page on temporary elevated access. I knew I had seen this recently…
• https://docs.aws.amazon.com/singlesignon/latest/userguide/temporary-elevated-access.html
All access to your AWS account involves some level of privilege. Sensitive operations, such as changing configuration on a high-value resource, for example, a production environment, require special treatment due to scope and potential impact. Temporary elevated access (also known as just-in-time access) is a way to request, approve, and track the use of a permission to perform a specific task during a specified time. Temporary elevated access supplements other forms of access control, such as permission sets and multi-factor authentication.
Thanks @loren! Yeah, we have been considering temporary elevation. It’s on the project list to tackle someday. I hadn’t considered it specifically in the scope of this project. I’ll have to think on that.
Thanks for your insight!
Curious if anyone else has a different perspective, or different experiences working with TACOS.
seems we have similar feelings when using Terraform or open-source projects: troubleshooting takes most of the time
Have others run into situations where debugging terraform code via git PRs is inefficient/unusable?
Speaking as an Atlantis user, I don’t really hit this. If I’m developing a module, I point my module definition to the local path on disk, terraform plan my way through the development, and then when I’m ready for code review I’ll update the source = ".." bit to the git ref, get a coworker’s review, and then promote up.
If my devops team, for example, have the ability to run terraform by hand, will our workload increase as we find lots of cases where it’s needed?
We have a sandbox AWS account that we give power-users the ability to run terraform plan/apply locally. We do a “per-env-per-app” state, so when they are running terraform against their sandbox app, the blast radius is small. To get anything out of the sandbox account it has to be tagged/reviewed/applied via atlantis.
Also, if your terraform plan fails and it posts it to the PR, that now means you can share the PR around with a full error log and the context in which that error was produced, so it’s easier to help or get help from other folks
We use Spacelift.
Is troubleshooting complex changes via PR comments painful
Spacelift doesn’t post plans as comments (by default), there’s a pretty UI which shows the plan. It’s no harder to troubleshoot in this environment than normal.
One thing which changes is your iteration times goes up. It’s slower to push and wait for TACOS to notice/init/plan than to simply re-run plan locally. I don’t really notice this as painful. But sometimes we do run plan locally as I’ll describe below.
Do you allow any “power users” to run terraform manually when needed
Yes, absolutely. For two reasons:
- We want to be able to run Terraform if our TACOS breaks, and to be able to eject from our TACOS if needed
- Some things we run on the stack aren’t plans. They might be imports or other statefile manipulation, or they might be diagnostic (terraform state show ...).
More broadly, I don’t see TACOS as a useful security gate. If you can write Terraform code, you can do anything, because TF will happily run arbitrary logic provided to it during planning. If people can submit PRs, they can do anything they want.
I do see TACOS as a useful governance/policy gate. It’s nice to give developers access to see the TF runs for their infra, and to submit PRs changing e.g. an RDS instance class. But for the ops/SRE/whatever team, I don’t think it’s useful to try to enforce TACOS as “the only way to run Terraform”. You can only make it convenient enough that they use it 99% of the time.
I’ve spent a lot of time over the past 4-5 months importing legacy infra into Terraform, and dealing with the tweak-plan-revise-repeat cycle you describe. I’ve never really found that the 30secs added to iteration time by TACOS is significant. Terraform is so slow in the first place.
It’s not like running make test and waiting 5 seconds. It’s more like 2 vs 2.5 mins – I’ve context-switched anyway.
@Andy Wortman I’d also encourage you to check out Terrateam https://terrateam.io if you like the Atlantis-style workflow. We can also do custom features if that’s a thing you want.
Wow! very cool project @loren https://github.com/common-fate/common-fate
Automate permissions to your cloud and critical applications.
Their granted-cli is a great project also. And they have a responsive slack channel for questions
2023-06-27
hello, I am trying to use cloudposse/terraform-aws-firewall-manager version 0.4.0 but I am running into a problem. I am trying to define multiple waf_v2_policies, and I would like to also enforce a policy against CloudFront distributions, but I know they’re a special case and need to be provisioned “globally” (aka in us-east-1). Does anyone know how to go about doing that successfully? When I try to pass a list of two policies (one for ALBs and one for CloudFront distros), I get this error:
“Error: creating FMS Policy: InvalidInputException: Resource [“AWS::CloudFront::Distribution”] can not be used in region: us-east-2.”
I’m not sure how to create a CloudFront-specific policy that gets set in the global region, while everything else gets provisioned in my otherwise default region of us-east-2. Has anyone been able to do this? Once I figure it out I am happy to add an example to the module for future reference. thanks!!
here’s my example code:
module "firewall_manager" {
source = "cloudposse/firewall-manager/aws"
version = "0.4.0"
providers = {
aws.admin = aws.admin
aws = aws
}
security_groups_usage_audit_policies = []
waf_v2_policies = [
{
name = "linux-policy"
resource_type_list = ["AWS::ElasticLoadBalancingV2::LoadBalancer", "AWS::ApiGateway::Stage"]
include_account_ids = {
accounts = ["1234567890", "2345678901"]
}
policy_data = {
default_action = "allow"
override_customer_web_acl_association = false
pre_process_rule_groups = [
{
"managedRuleGroupIdentifier" : {
"vendorName" : "AWS",
"managedRuleGroupName" : "AWSManagedRulesLinuxRuleSet",
"version" : null
},
"overrideAction" : { "type" : "NONE" },
"ruleGroupArn" : null,
"excludeRules" : [],
"ruleGroupType" : "ManagedRuleGroup"
}
]
}
},
{
name = "cloudfront-policy"
resource_type_list = ["AWS::CloudFront::Distribution"]
include_account_ids = {
accounts = ["1234567890", "2345678901"]
}
policy_data = {
default_action = "allow"
override_customer_web_acl_association = false
pre_process_rule_groups = [
{
"managedRuleGroupIdentifier" : {
"vendorName" : "AWS",
"managedRuleGroupName" : "AWSManagedRulesLinuxRuleSet",
"version" : null
},
"overrideAction" : { "type" : "NONE" },
"ruleGroupArn" : null,
"excludeRules" : [],
"ruleGroupType" : "ManagedRuleGroup"
}
]
}
}
]
}
This error is because of the use of resource_type_list with a list of 1. The API seems to support both, but the module doesn’t.
╷
│ Error: Invalid value for input variable
│
│ on main.tf line 69, in module "firewall_manager":
│ 69: waf_v2_policies = local.waf_v2_policies
│
│ The given value is not suitable for
│ module.firewall_manager.var.waf_v2_policies declared at
│ .terraform/modules/firewall_manager/variables.tf:204,1-27: all list
│ elements must have the same type.
╵
here’s my modified code (swapping out resource_type_list for resource_type):
module "firewall_manager" {
source = "cloudposse/firewall-manager/aws"
version = "0.4.0"
providers = {
aws.admin = aws.admin
aws = aws
}
security_groups_usage_audit_policies = []
waf_v2_policies = [
{
name = "linux-policy"
resource_type = "AWS::ElasticLoadBalancingV2::LoadBalancer"
include_account_ids = {
accounts = ["1234567890", "2345678901"]
}
policy_data = {
default_action = "allow"
override_customer_web_acl_association = false
pre_process_rule_groups = [
{
"managedRuleGroupIdentifier" : {
"vendorName" : "AWS",
"managedRuleGroupName" : "AWSManagedRulesLinuxRuleSet",
"version" : null
},
"overrideAction" : { "type" : "NONE" },
"ruleGroupArn" : null,
"excludeRules" : [],
"ruleGroupType" : "ManagedRuleGroup"
}
]
}
},
{
name = "cloudfront-policy"
resource_type = "AWS::CloudFront::Distribution"
include_account_ids = {
accounts = ["1234567890", "2345678901"]
}
policy_data = {
default_action = "allow"
override_customer_web_acl_association = false
pre_process_rule_groups = [
{
"managedRuleGroupIdentifier" : {
"vendorName" : "AWS",
"managedRuleGroupName" : "AWSManagedRulesLinuxRuleSet",
"version" : null
},
"overrideAction" : { "type" : "NONE" },
"ruleGroupArn" : null,
"excludeRules" : [],
"ruleGroupType" : "ManagedRuleGroup"
}
]
}
}
]
}
and here’s the second error:
╷
│ Error: creating FMS Policy: InvalidInputException: Resource ["AWS::CloudFront::Distribution"] can not be used in region: us-east-2.
│
│ with module.firewall_manager.aws_fms_policy.waf_v2["cloudfront-policy"],
│ on .terraform/modules/firewall_manager/waf_v2.tf line 11, in resource "aws_fms_policy" "waf_v2":
│ 11: resource "aws_fms_policy" "waf_v2" {
│
╵
it seems like this is because the WAF resources need to be created in us-east-1, because that’s where CloudFront is managed from, but I don’t know how to do that, since you can only have one waf_v2_policies config block and there doesn’t seem to be a way to set the region in that config block
I think you maybe need to instantiate the module twice
once with a provider for us-east-1, then once for your other regions
cool - I will give that a try and report back
quick follow-up here - I was able to get it working using a second provider against us-east-1. thanks for your suggestion @jose.amengual!
awesome , glad to hear
did you have to instantiate the module twice, or just once?
apologies for the delay - twice did the trick, with mostly the same options in my case.
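For anyone landing here later, the shape of the fix was roughly the following. This is an illustrative sketch, not the poster’s exact code: the provider alias names and the trimmed-down policy bodies are made up, and an aws.admin alias for the FMS administrator account is assumed to be defined elsewhere.

```hcl
provider "aws" {
  region = "us-east-2" # default region for everything else
}

provider "aws" {
  alias  = "use1"
  region = "us-east-1" # CloudFront policies must be created here
}

# Instance 1: regional resource types (ALBs, API Gateway stages, ...)
module "firewall_manager_regional" {
  source  = "cloudposse/firewall-manager/aws"
  version = "0.4.0"

  providers = {
    aws       = aws
    aws.admin = aws.admin
  }

  waf_v2_policies = [ /* ALB / API Gateway policies */ ]
}

# Instance 2: the global (CloudFront) policy, pinned to us-east-1
module "firewall_manager_global" {
  source  = "cloudposse/firewall-manager/aws"
  version = "0.4.0"

  providers = {
    aws       = aws.use1
    aws.admin = aws.admin
  }

  waf_v2_policies = [ /* AWS::CloudFront::Distribution policy */ ]
}
```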
2023-06-28
https://github.com/terraform-aws-modules/terraform-aws-solutions#cloudwatch-log-retention-manager - a ready-to-use solution for dealing with CloudWatch log groups without proper retention
yes, it’s a Lambda, but packaged in a neat terraform module
Hello Team, I am here to seek help. I am new to Cloud Posse. I am trying to implement datadog-synthetics-private-location and followed the instructions on this page. If this is not the right place to seek help, please point me in the right direction. Downloaded the atmos tool.
- I started with the atmos tutorial and it works fine.
- Created a similar folder structure for datadog-synthetics-private-location. In the terminal I cd’d into the datadog-synthetics-private-location folder and ran this command: atmos terraform plan datadog-synthetics-private-location --stack=test. This gives me the error below. I have an atmos.yaml file. What am I missing here?
│ Error:
│ 'atmos.yaml' CLI config files not found in any of the searched paths: system dir, home dir, current dir, ENV vars.
│ You can download a sample config and adapt it to your requirements from <https://raw.githubusercontent.com/cloudposse/atmos/master/examples/complete/atmos.yaml>
│
│ with module.iam_roles.module.account_map.data.utils_component_config.config,
│ on .terraform/modules/iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
you have atmos.yaml in the wrong place
In the previous step, we’ve decided on the following:
@Andriy Knysh (Cloud Posse) Thank you. I created atmos.yaml on my mac /usr/local/etc/atmos/atmos.yaml
removed the file from the above location. I am now hitting the above error.
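The error message mentions that env vars are one of the searched paths; if I remember right, atmos can be pointed at the config explicitly. A sketch (the path below is just an example, and check the atmos docs for whether the variable expects the directory or the file):

```shell
# Tell atmos where to look for atmos.yaml via its env var.
export ATMOS_CLI_CONFIG_PATH=/usr/local/etc/atmos
echo "$ATMOS_CLI_CONFIG_PATH"
```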
@Radhika you can DM me your code, I’ll review it (it would be much faster to find any issues)
Thank you. Sent you DM.
2023-06-29
I’ve got a question regarding the terraform-aws-sso module. Anyone else here use this to manage access to their AWS infra?
Crosspost from #aws //sweetops.slack.com/archives/CCT1E7JJY/p1688052257621759>
Hey. Earlier this year we began using the terraform-aws-sso module to manage our human access to our AWS accounts. It works really well and has been a lifesaver, so upfront, thank you to everyone who contributed to it.
However, I think I am missing something, as only recently did we have a need to make a new account assignment, and because I have a depends_on for my Okta module to make sure the Okta groups are created before the account assignment is attempted, terraform is forcing a replacement of all account assignments despite the code only adding one.
Removing the depends_on fixes it in my plan, but I worry it will fail because it isn’t aware of the dependency on my Okta module.
I did some searching and I think that this PR addressed this issue already by adding a variable to handle the dependency issue.
The variable identitystore_group_depends_on description states the value should be “a list of parameters to use for data resources to depend on”.
I don’t understand what parameters it’s referring to. Is it a list of all the Okta groups I create?
Anyone else having issues installing TF providers right now?
github is down
Real-time Github status and problems. Is Github down, not working properly or are you getting error messages? Here you see what is going on.
Uffs.
Thanks
Welcome to GitHub’s home for real-time and historical data on system performance.
Github is notorious for taking forever to update their status page
hey guys,
i need some help with https://registry.terraform.io/modules/cloudposse/elasticache-redis/aws/latest?tab=inputs version 0.52.0
I encountered errors related to https://github.com/cloudposse/terraform-aws-security-group/blob/main/normalize.tf
Error: Invalid function argument
│
│ on .terraform/modules/redis.aws_security_group/normalize.tf line 32, in locals:
│ 32: self = lookup(rule, "self", null) == true ? true : null
│
│ Error: Invalid function argument
│
│ on .terraform/modules/redis.aws_security_group/normalize.tf line 27, in locals:
│ 27: source_security_group_id = lookup(rule, "source_security_group_id", null)
│
│ Invalid value for "inputMap" parameter: lookup() requires a map as the first argument.
│ Error: Invalid function argument
│
│ on .terraform/modules/redis.aws_security_group/normalize.tf line 20, in locals:
│ 20: description = lookup(rule, "description", local.default_rule_description)
│
│ Invalid value for "inputMap" parameter: lookup() requires a map as the first argument.
│ Error: Unsupported attribute
│
│ on .terraform/modules/redis.aws_security_group/normalize.tf line 19, in locals:
│ 19: protocol = rule.protocol
│
│ Can't access attributes on a primitive-typed value (string).
like this, and I don’t know what to fix here.
Please let me know if I am missing anything.
aws = {
source = "hashicorp/aws"
version = ">= 2.17.0"
}
###########
Terraform v1.5.2
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.5.0
What are the input variable values?
vpc_id = "vpc-0c429383ahdheb"
subnets = ["subnet-0129838abdvvh", "subnet-0e2dsb7dsdv97", "subnet-073bs7dvcsjk"]
allowed_security_groups = ["sg-02aadbbbahh", "sg-0000aaa0sss"]
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
instance_type = "cache.t3.medium"
cluster_size = 2
family = "redis6.x"
engine_version = "6.x"
maintenance_window = "wed:03:00-wed:04:00"
zone_id = "ZZZZZZZZZZ"
cluster_mode_enabled = false
apply_immediately = true
automatic_failover_enabled = true
at_rest_encryption_enabled = true
transit_encryption_enabled = true
cloudwatch_metric_alarms_enabled = false
@Hao Wang
@Andriy Knysh (Cloud Posse) can you look at this ?
seems both rules and rules_map are not defined
^^ in sg module
in the redis module, additional_security_group_rules is also used - can we try giving it a value?
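e.g. something shaped like this (values are illustrative; the point is that each rule has to be an object/map, not a bare string, or the security-group module’s lookup() calls will fail exactly as in the errors above):

```hcl
additional_security_group_rules = [
  {
    type        = "ingress"
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # example internal range
    description = "Allow Redis from internal network"
  }
]
```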
or use aws provider version 4 instead of version 5?
2023-06-30
Can anyone help me with the above issue?
Thanks in advance
did you try the example https://github.com/cloudposse/terraform-aws-elasticache-redis/tree/main/examples/complete ?
(it’s automatically provisioned on AWS on each PR)
@Ashish Singh did you try the example? If you did, did it produce the same error?