#terraform (2021-04)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-04-01
terraform-aws-ecs-alb-service-task - looking at this, it does not create the ALB. Am I right? Any particular reason why?
The argument [ecs_load_balancers](https://github.com/cloudposse/terraform-aws-ecs-alb-service-task#input_ecs_load_balancers)
takes the name of the existing ALB
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
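For reference, wiring the service to an existing ALB looks roughly like this (a minimal sketch; the module's other required inputs are omitted and the target group/container names are illustrative):
module "ecs_alb_service_task" {
  source = "cloudposse/ecs-alb-service-task/aws"
  # ... other required inputs ...

  ecs_load_balancers = [{
    container_name   = "app"
    container_port   = 80
    elb_name         = null
    target_group_arn = aws_lb_target_group.app.arn
  }]
}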
Hi Guys, Is there an example of how to create a node group based on bottlerocket ami using this module - https://github.com/cloudposse/terraform-aws-eks-node-group ? Thanks!
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
you will need the EKS workers module to use a custom AMI
yea that part is a given. I was talking about whether there is support for that in the module, since Bottlerocket is different in many ways from the Amazon EKS node AMI
you may want to give this module a try, https://registry.terraform.io/modules/cloudposse/eks-workers/aws/latest
along with the bottlerocket ami or the ami you build with packer based on bottlerocket ami
Hey all, this PR has been waiting 6 months for review, and it's a very elegant way to secure an S3 bucket: https://github.com/cloudposse/terraform-aws-s3-bucket/pull/49 Could you work some magic and merge it?
what Adds enable flag to allow only ssl/https bucket uploads. Includes logic to merge other policies enabled by the user such as the string policy passed in via the policy variable and the other e…
we are still waiting for a response from the contributor
if you want this sooner you can create a PR with the same code
This is a rebase of PR #49 what Adds enable flag to allow only ssl/https bucket uploads. Includes logic to merge other policies enabled by the user such as the string policy passed in via the poli…
please look at the tests @Piotr Perzyna
@jose.amengual Could you try now?
merged
Thank you!
np
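For the archive, the SSL-only pattern that PR adds boils down to a Deny statement on non-TLS requests, something like this sketch (the bucket ARN is illustrative):
data "aws_iam_policy_document" "ssl_only" {
  statement {
    sid     = "ForceSSLOnlyAccess"
    effect  = "Deny"
    actions = ["s3:*"]
    resources = [
      "arn:aws:s3:::my-bucket",
      "arn:aws:s3:::my-bucket/*",
    ]
    principals {
      type        = "*"
      identifiers = ["*"]
    }
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
  }
}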
has anyone seen this before …
cloud-nuke defaults-aws
INFO[2021-04-01T13:40:37+01:00] Identifying enabled regions
ERRO[2021-04-01T13:40:37+01:00] session.AssumeRoleTokenProviderNotSetError AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
github.com/gruntwork-io/[email protected]/errors/errors.go:81 (0x16a1565)
runtime/panic.go:969 (0x1036699)
github.com/aws/[email protected]/aws/session/session.go:318 (0x1974a25)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:50 (0x19749ca)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:66 (0x1974b36)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:86 (0x1974ce6)
github.com/gruntwork-io/cloud-nuke/commands/cli.go:281 (0x199506c)
github.com/gruntwork-io/[email protected]/errors/errors.go:93 (0x16a175e)
github.com/urfave/[email protected]/app.go:490 (0x1691402)
github.com/urfave/[email protected]/command.go:210 (0x169269b)
github.com/urfave/[email protected]/app.go:255 (0x168f5e8)
github.com/gruntwork-io/[email protected]/entrypoint/entrypoint.go:21 (0x1996478)
github.com/gruntwork-io/cloud-nuke/main.go:13 (0x19966a7)
runtime/proc.go:204 (0x10395e9)
runtime/asm_amd64.s:1374 (0x106b901)
error="AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set."
looking for some guidance here …
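For the archive: this panic comes from the AWS Go SDK when the active profile requires assuming a role with MFA. One common workaround (assuming you use a credential helper like aws-vault; the profile name is illustrative) is to mint session credentials first, so cloud-nuke never has to assume the role itself:
# aws-vault performs the MFA-gated AssumeRole, then hands plain session
# credentials to the child process
aws-vault exec my-profile -- cloud-nuke defaults-aws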
Do people normally wrap https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms and https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack together?
If so, do you configure all this as part of your RDS module or have it separate?
Also does anyone have an example out in slack from https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack ?
Hi All,
I'm having issues creating a Terraform module for RabbitMQ. Terraform supports RabbitMQ with AWS provider version 3.34.0 (the latest version) — RabbitMQ support was released in November 2020 — but in our organization we are using AWS provider 2.67.0. I'm encountering the error below.
Error: expected engine_type to be one of [ACTIVEMQ], got RabbitMQ

  on .terraform/modules/amazon-mq/amazon-mq/main.tf line 63, in resource "aws_mq_broker" "mq":
  63: resource "aws_mq_broker" "mq" {

Error: expected deployment_mode to be one of [SINGLE_INSTANCE ACTIVE_STANDBY_MULTI_AZ], got CLUSTER_MULTI_AZ
Bite the bullet… upgrade your provider. I did it last week, it wasn’t too painful. 2.67.0 is way out of date and you’ll miss out on new features and resources.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade is really helpful.
Check your plan outputs with a fine-toothed comb… I successfully deregistered all instances on an ALB because I missed the warning here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group
I have tried updating the version to 3.34.0 in the Terraform root config.tf file, but I'm facing issues in other modules regarding the version change — the issue below is with an S3 module which is pinned to AWS provider 2.57.
Could it be related to the older version pins for the S3 bucket module in the root module?
For those of you with a couple seconds to spare — this issue could always use another round of :+1:'s: https://github.com/hashicorp/terraform-provider-aws/pull/15966
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
Hey all, is there a way to do dynamic blocks in terraform 0.11?
To give a concrete example: I’m stuck on an old version of Terraform and have never done the upgrade from HCL1 to HCL2 and 0.12 so a bit hesitant to attempt it. Writing an ECS Service module with a dynamic load balancer block would do me wonders right now in cleaning up the code
but I’m a bit lost on how to achieve a similar result to dynamic blocks in 0.12+ or if it’s even possible?
2021-04-02
I’m not aware of such. I’d rather ask what’s blocking you from migrating to tf0.12/HCL2?
@Rhys Davies the upgrade isn’t too scary… There’s a 0.12upgrade helper Terraform command that works pretty well. If your Terraform code is split up into small modular stacks, you can use tfenv to help make sure you’re using the correct Terraform version and avoid needing to upgrade everything at once.
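The rough flow is something like this sketch (assumes tfenv is installed; the exact point-release versions are illustrative):
tfenv install 0.11.15 && tfenv use 0.11.15
terraform plan                   # confirm the stack is clean on 0.11 first
tfenv install 0.12.31 && tfenv use 0.12.31
terraform 0.12upgrade            # rewrites HCL1 to HCL2 in place
terraform init && terraform plan # review the diff carefully before applying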
Guys, I'm not able to pass environment variables correctly in my container definition
Can someone help? We can get on a call
Did you figure this out?
I’m not affiliated with Cloud Posse, just a member here. Around 1.5y of experience with Terraform and Cloud Posse modules, just shy of 10y of experience overall.
Do you have a code snippet you’re looking to improve @hrishidkakkad
Thanks for the reply guys, ok good to know that I wasn’t just missing some ancient secret Terraform feature. Yeah, looks like I’m gonna gear up to do the upgrade to 0.12 and up. I guess I was a bit reticent because the delta is gonna be massive with all those syntax changes!
Guess I can feel good about pumping those Github -/+’s numbers
2021-04-03
2021-04-04
Anyone using TF Enterprise here? (the on-prem version) We’re working on TFE integration and would appreciate feedback on the user instructions we’re publishing.
Hey guys, Happy Easter Monday. I need an assist with debugging the terraform_provider_azurerm on my local - trying to get started to see if I can help the community and increase my understanding of terraform.. I have posted the question on stackoverflow. https://stackoverflow.com/questions/66945925/attempting-to-debug-the-terraform-provider-azurerm-so-that-i-can-contribute-to-t Any help will be much appreciated!
Introduction Hi guys, I am trying to get started with contributing to the terraform-provider-azurerm. I have noticed a problem with the azurerm_firewall_network_rule_collection I have reported it …
I’ve got an interesting idea that I would like to see if anyone has any experience or advice. I work for a non-profit and am automating out project deployments at the moment. To ensure we’re as cost-optimal as possible, I’ve decided that non-production projects will share as many AWS resources as possible. These projects are essentially their own ECS Service and will share 1 ECS cluster and 1 RDS database. With this multi-tenant approach, I’m wondering what the best way to manage creation of multiple databases/users. I’m using Terragrunt and wanted to see if I could have these db “migrations” executed per terragrunt.hcl/project. My first thought was to create a terraform module that contained a lambda function or perhaps even a docker container.
You might want to look at Aurora Serverless for this. It may come out cheaper and less complicated, because you’d still have separation between projects but no dedicated instances.
2021-04-05
has someone tried using the EKS node group module to deploy bottlerocket-based workers?
Contribute to cloudandthings/terraform-pretty-plan development by creating an account on GitHub.
guys, does somebody know why https://github.com/cloudposse/terraform-aws-kops-efs is not supported now? is there another solution to make EFS work with kops — maybe a kops addon or something?
Terraform module to provision an IAM role for the EFS provider running in a Kops cluster, and attach an IAM policy to the role with desired permissions - cloudposse/terraform-aws-kops-efs
@Igor Rodionov @joshmyers @Maxim Mironenko (Cloud Posse) maybe you can help find an answer — sorry guys for the direct mention
I will send you a few links today
thank you! So I did it like this:
- created the needed IAM permissions using kops edit cluster
- created EFS with https://github.com/cloudposse/terraform-aws-efs — thank you guys for this module, it is wonderful!
- deployed the EFS driver with helm
Hi there! I’m new to Terraform (and DevOps in general). I’m trying to automate infra on a project I’m working on and I’m a bit confused about the logical separation of modules/resources. At the moment I’m refactoring the initial version (I’m splitting state per env) and wondering what could be a better approach to doing VPC configuration (I’m on Hetzner, not AWS). Right now I store all IP ranges in separate variables, grouped by purpose (app, backoffice), then I create the subnets and refer to those in my prod/main.tf
when building instances, but this feels awkward. I’m wondering if it makes more sense to have smaller configuration units and move each subnet into its specific service module (or something like that). I broke my code and am looking for a smarter approach to this. Maybe someone has an existing project with similar configuration that I could take a look at?
use modules ?
you mean module per subnet?
when I use TF to do VPC/Subnets I create all that in a separate component/module and then I search/find the subnet to use in each app/service
the search/find = could be remote state share, data lookup or using outputs
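As an illustration of the remote-state flavor (a sketch; the bucket/key and output names are made up, and other backends work the same way):
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tfstate-bucket"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# then e.g. data.terraform_remote_state.network.outputs.app_subnet_ids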
just to confirm, so you’d have something like subnets/app_mysql/, subnets/app_cache/, etc. file structure?
not really
I have a vpc/network project
and the app use those resources on their repos
I do not mix app and networking
A vpc and subnet is a lower level than the app so I separate those
it is very strange to “delete” a subnet when you delete an app unless you have a vpc per app
ohh ok, thanks, that clears it up a bit
so you’d use something like
module "vpc" {
source = "github.com/someuser/vpc"
}
and use its outputs within the app project?
so if you look at it from the point of view of a cold start on a new AWS account: the first thing you get is a VPC, because without it you can’t do anything, but usually you create your own because the default is not on your CIDR range. So you set up the foundation for your app to work, and after that you do not modify the connectivity. Much like at a company there is a network department
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
similar to the example
That makes sense to me, so you define all of the networking in a separate project. But then let’s say you have a subnet for databases and another for an app — do you just hardcode the IPs in the app’s project so it knows where the db servers are? There won’t be that many, and I can just reference them in the networking project, so maybe it makes sense to just copy-paste from one project to another? I’m talking about the subnet “10.8.0.0/24” strings.
Or would your networking project generate an “output file” that your app project could pick up and use variables instead?
Sorry if I’m not being clear, this is all new and a little confusing:) wondering what works better in practice
you can use outputs for sure
if you are deploying app and RDS in a separate subnet then you will need outputs for each
usually you create many private/public subnets per VPC and app and rds use the same subnets but secured by a security group
and sometimes you will have a DMZ where you will allow traffic based on ACLs, but all that could be in this networking project
most of the time, you do it once and once connectivity is good you do not touch it again
got it, thank you!
Probably worth mentioning that I’ll be generating an Ansible inventory file from all of this.
Hi all. Anyone know of a tool to automatically generate test code for terraform classes? At the moment writing tests seems very mechanical, and I’d like to take the heavy lifting out of the equation.
Hi guys - curious about how ignore_changes is implemented in providers? We’re trying to debug part of the azure provider that doesn’t seem to respect this. Anyone have pointers?
2021-04-06
Hello :wave:
I can’t configure log_configuration in terraform-aws-ecs-container-definition. I need to configure the CloudWatch (awslogs) log driver — could someone direct me to an example please?
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
Fix this error: Error: Invalid function argument …
is there a way to override https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack/blob/master/main.tf#L13 so that my lambdas aren’t called default every time?
Yep, pass context or part of context that will name your lambda
Not directly a Terraform question, but does anyone know how to get in touch with Hashicorp sales? Specifically Terraform Cloud. We’ve reached out to support who routed us to the sales email, but haven’t heard back yet. Is anyone here from hashicorp or know any of the sales folks there? Thanks in advance!
Strange, you’d think part of the business model is selling to potential customers
You can DM me
just give me your email
Sending over now. Thanks!
does someone know a good tf module for reverse proxy?
i have a weird issue where manually firing messages into SNS fires my lambda to slack perfectly
however, rds event subscriptions do not seem to be adding messages to SNS
I have created a gist https://gist.github.com/swade1987/c80cef29079255f052099ca232c0d96c
Apologies for possibly not understanding. You’re saying that when you manually fire off an event into SNS, it successfully fires a lambda that sends a message to your slack DM/channel/whatever?
Yes but when I make any changes to RDS nothing happens. The gist above is configured to send events to SNS but it doesn’t seem to be doing it
I’ll try to get around to this sometime tonight if nobody else can pitch in and you’re still having issues. On baby duty right now
i have manually rebooted the RDS instance loads of times but nothing fires
does anyone have any ideas as I am running out myself
the issue is 100% the KMS policy
as soon as I remove encryption everything starts working
Hi all, question. I want to check out a git repo in TF (I can do this with null_resource), then I want to read a YAML file from that repo into a TF var. Anyone know if null_resource is the only way to accomplish this? Also, what is the future of null_resource, as it's flagged as deprecated? It seems to me that there are still use cases that locals don't solve (like this one — the repo doesn't exist when locals are parsed).
You can just use a module block with a source reference pointing to the repo (including the git ref). On init, terraform will pull down the repo. You can reference the files from the .terraform directory
Ah, good call, thanks!
You don’t get a tfvar that way, exactly, but you can use the file and yamldecode functions to pull the value(s) into a local
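A sketch of the idea (the repo URL, ref, and file name are illustrative; it relies on terraform vendoring the module under .terraform/modules on init):
module "config_repo" {
  source = "git::https://github.com/example-org/config.git?ref=v1.2.3"
}

locals {
  settings = yamldecode(file("${path.root}/.terraform/modules/config_repo/settings.yaml"))
}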
Hmm, can’t use variables in a source declaration though. Might have to re-think this. I’m calling a module and passing it variables for the repo to check out (have multiple repos depending on the config).
If the git repo or the ref are variables, then no this doesn’t work, but probably nothing else will either. You’re looking at some kind of wrapper, at that that point
Yeah, my initial solution was calling a script with the local-exec provisioner that parsed the vars to check out the repo.
I think Loren might be saying you can wrap your TF code. Maybe have a script that creates the tf file with the module source. Run that script before running terraform plan/apply/etc
Yeah, I understand. Trying to see if I can avoid that.
roger that!
Sidebar question: how many repos are you working with? I know it might not be scalable but what about pulling down all the repos and then using a variable to reference the one for the current configuration?
Right, exactly, cdktf or terragrunt or other external tooling and generating the tf might be preferable to trying to hack the workflow from within a terraform config
Yeah that might work, pulling all the repos. The way I have it set up now is each repo has a google cloud pipeline in it. One module/pipeline/repo that calls another module that handles parsing which repo(s) to check out. I think for now I can do something along the lines of just grabbing all of them every time, it's only 6 repos atm.
Or take the opportunity to rethink the larger workflow, really take advantage of the declarative nature of terraform
Otherwise wrapper script or makefile it is
Thanks for the help
Are you sure you even need to clone the repo? can you just use the raw url?
That’s the pattern we use in https://github.com/cloudposse/terraform-yaml-config
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
ah… never thought of that. No I don’t need the repo, I just need the file.
Thanks @Erik Osterman (Cloud Posse)! Saved me a bunch of time probably dealing with a wrapper or writing a custom module.
2021-04-07
Hi guys, I am trying to create AWS routes dynamically for each route table and each peering connection that I specify. I've done it eventually, but I have a feeling that there could/should be a smoother way to do it. Generally I had quite a headache with map/list manipulation. Is there a better approach to achieve something like this?
locals {
  route_table_ids = ["rtb-1111111111111", "rtb-2222222222222", "rtb-333333333333333"]
  cidr_peerings = [
    {
      "cidr_block"     = "10.180.0.0/21"
      "vpc_peering_id" = "pcx-1111111111111111111"
    },
    {
      "cidr_block"     = "10.184.0.0/21"
      "vpc_peering_id" = "pcx-2222222222222222"
    },
  ]
  routes = {
    for i in setproduct(local.route_table_ids, local.cidr_peerings) :
    "${i[0]}_${i[1].vpc_peering_id}" => merge(i[1], { route_table_id : i[0] })
  }
}

resource "aws_route" "this" {
  for_each                  = local.routes
  route_table_id            = each.value.route_table_id
  destination_cidr_block    = each.value.cidr_block
  vpc_peering_connection_id = each.value.vpc_peering_id
}
Question for a conditional create. I query the instance type like this:
data "aws_instance" "instancetoapplyto" {
filter {
name = "tag:Name"
values = [var.instancename]
}
}
this gives back: data.aws_instance.instancetoapplyto.instance_type
now I would like to use this in a conditional create context: if the value matches t3.*, then set count to 1
what is the difference between using
resource "aws_autoscaling_attachment" "asg" {
count = length(var.load_balancers)
autoscaling_group_name = aws_autoscaling_group.asg.name
elb = element(var.load_balancers, count.index)
}
and just setting load_balancers = [] directly in the ASG config?
- order of operations - i.e. do you have all the information needed when creating the ASG to also attach the LB at that time?
- separation of responsibility - i.e. are there different teams/configs responsible for or maintaining the ASG vs the LB?
yes i can
i have this weird issue at present
whereby on first tf execution the attachment works fine
then if i re-execute tf again (with no changes) it wants to remove it
i can’t work out why :point_up: is happening and i have a suspicion it's the aws_autoscaling_attachment
are you passing an empty list here: https://gist.github.com/swade1987/33780145d1052fadc05a0331e4ef5c30#file-asg-calling-L15
if so, change it to null
it’s like security group inline rules vs rule attachments. only one should be used for a given SG
i may be misunderstanding, because i don’t see an aws_autoscaling_group resource in the gist
i see the attachment, not the ASG definition… but going back to the original question, use only one option for attaching the LB to ASG… either the attachment resource OR the ASG resource
if you pass an empty list to an attribute, that typically implies exclusive control of the attribute… so load_balancers = [] means remove all LBs. to get the default behavior of ignoring the attribute, use load_balancers = null
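In other words (illustrative):
load_balancers = []    # exclusive control: detach every LB on each apply
load_balancers = null  # attribute unmanaged: attachment resources own the links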
interesting
although the issue seems to be when i am specifying the id
see https://gist.github.com/swade1987/33780145d1052fadc05a0331e4ef5c30#file-ingress-node-group-L7 and then the terraform screenshot
when scaling up the ingress node count in the ASG, it's removing the load balancer from the ASG configuration, which makes no sense to me as nothing else has changed.
The solution is:
count = format("%.1s", data.aws_instance.instancetoapplyto.instance_type) == "t" ? 1 : 0
Hey all – I have a general question about using cloudposse components and modules. I’ve been through the tutorials on atmos and Geodesic and both make good sense. I feel like I’m still missing something, however – specifically, a step-by-step for building a master account module, or just a ‘my first stack’ tutorial. Wondering if such a thing might exist? Or am I missing something very obvious? Cheers –
Capturing some first impressions as I ramp up, in hopes of pulling together a doc for future rampers.
unfortunately, no such document exists yet
we’re working on some tutorials - i think the next one will be on creating a vpc and EKS cluster
TBH, we probably won’t tackle a multi-account tutorial for a while since it’s a master class in terraform and aws
cc: @Matt Gowie
Maybe I should start with something smaller, like a doc on creating a small internal stack rather than a full blown master account.
Is there a baseline or example template I should start from when building my first stack?
I noticed the terraform-yaml-stack-config repo, which looks promising. I’ll start there.
Hey Marc — This is definitely on our roadmap but the full-blown “Here is how you provision many accounts + all the infra for those accounts” with stacks is still a bit of a ways out. That’s what stacks are built for, but it’s a TON of information to convey honestly. We’re trying to do it piece by piece and we’ll eventually work up to that topic, but it’s advanced and, similar to what Erik mentioned, it’s a masterclass unto itself.
That said — We are putting together more AWS + Stack example tutorials soon and they should be launching within the coming weeks.
There is no example stack template that is a perfect example, but you can look here for a simpler example: https://github.com/cloudposse/terraform-yaml-stack-config#examples
Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …
Thanks Matt! That’s perfect for me. Cheers –
@Marcin Brański might have something to share as well, having just gone through it
I started from the ground up with atmos for my client recently (less than 2 weeks ago). I don’t think I’m experienced enough to share tips, but if I have time (which currently I don’t) and the client agrees, then I can share snippets or the whole configuration that we did.
I, personally, used the tutorial and the example from atmos, read variant, and it kinda clicked.
Thanks Marcin – understand about your time. Happy to curate anything you can share. I enjoy that kind of work.
@Marcin Brański it would definitely help — I’m at the same point as @marc but it hasn’t clicked for me yet. Maybe it’s because I’ve been using TF, YAML and some Python wrappers in a completely different way, but although I followed the tutorials I can’t figure out how to, for example, create just a pair of accounts (i.e. master + prod) and a VPC/EKS cluster in the prod one
Got a VPC running with a master account last night using atmos and geodesic, so I should be good to go. Started a supplemental “tutorial.md” which I’ll submit as a PR to the terraform-yaml-stack-config. Thanks all, for being so welcoming! Cheers –
Yeah! Good for you! Have you already thought of CICD?
I’ve been considering a few options on that side. 1.) Terraform cloud. 2.) JenkinsX.
Contribute to cloudposse/terraform-spacelift-cloud-infrastructure-automation development by creating an account on GitHub.
@marc slayton I believe a number of us here (and the wider community is coming to the same conclusion) would suggest against Terraform Cloud. Their pricing model past their “Team” tier is highway robbery and there are better solutions out there. Spacelift, as Erik pointed out, is one of them.
I’m jumping in to say I’m a little confused with just trying to use the YAML stack config. I don’t need the full atmos stuff, just want to use the YAML configuration for setting the backend and components, and I’m not having much luck.
Is there any other example of using the terraform-yaml-stack-config repo, and how the command with variant or the CLI actually processes it to set the backend?
I have a go build script doing some of this but I’d love to avoid rework on this if I’m just misunderstanding how to use the yaml stack config option.
Basically what I have right now is each folder loading the designated stack, and the resulting output for each “root” module component looks like this
module "vpc" {
source = "cloudposse/vpc/aws"
version = "0.18.1"
enable_default_security_group_with_custom_rules = module.yaml_config.map_configs.components.terraform.vpc.vars.enable_default_security_group_with_custom_rules
cidr_block = module.yaml_config.map_configs.components.terraform.vpc.vars.cidr_block
instance_tenancy = module.yaml_config.map_configs.components.terraform.vpc.vars.instance_tenancy
This doesn’t seem to match the more simple output I’ve observed on the other projects.
Can I ask a question here about Sandals?
@marc slayton Nothing stopping you from asking a question! I would suggest starting a new thread if it’s about a new topic.
LOL – thanks for the reply. I was actually just figuring out how to use the Foqal plugin. It looked pretty interesting. Seems like it might be useful in capturing general trends about the types of questions being asked, so you can prioritize certain types of documentation.
Ah I don’t know much about that plugin — I haven’t found the responses useful, but if you’re finding it useful then more power to ya!
@vlad knows more about foqal
Hello,
I have a question on source = "cloudposse/rds/aws", version = "0.35.1". I've been getting the error DBName must begin with a letter and contain only alphanumeric characters, although my database_name contains only alphanumerics and hyphens and is less than 64 characters long. I haven't seen any support thread on this yet, or wasn't able to find one. Any help/info is much appreciated.
that error is from Terraform, not the module
maybe it's related to this: https://github.com/hashicorp/terraform-provider-aws/issues/1137
Hi, Terraform Version Terraform v0.9.11 Affected Resource(s) aws_db_instance Terraform Configuration Files data "aws_availability_zones" "available" {} resource "aws_vpc&qu…
hey @jose.amengual
Thanks for responding.
I was checking the module and it seems that under the aws_db_instance default, it's specifying the identifier as module.this.id. Sorry, I could be heading down the wrong path…
module.this.id is the label module we use to name resources, so you might want to check what you passed for name, attributes, stage, environment, etc.
Let's say I have verified the data mapping between the module and my values. Would you be able to point me to any other potential mishaps? Or is there a missing gap between what the module expects and my values?
I will have to look at your plan
but we use this module extensively and we have no issues
That's my thought also, since it's not a widely reported issue. Must be something within the configuration.
seems like hyphens may have been the cause. I removed them from the name, and it's throwing another issue now, but it seems to be getting past the DBName portion.
v0.15.0-rc2 0.15.0 (Unreleased) BUG FIXES: core: Fix crash when rendering JSON plans containing iterable unknown values (#28253)
v0.14.10 is out also, strange sync with wed meeting
Welcome to DeepSource Documentation
2021-04-08
Hey all – I’m putting together my first atmos build using terraform. I’ve just added a ‘vpc’ module, one of two I found on the cloudposse site. The vpc builds with the new config, but it’s giving me WARNING errors like the following:
This looks like a warning related to Atmos
Are you talking about this Atmos? https://github.com/simplygenius/atmos
It doesn’t look like the authors are in this Slack, so you are possibly asking in the wrong place for help?
Breathe easier with terraform. Cloud system architectures made easy - simplygenius/atmos
wow, had never seen that
Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos
The root module does not declare a variable named "vpc_flow_logs_enabled" but
a value was found in file "uw2-dev-vpc.terraform.tfvars.json". To use this
value, add a "variable" block to the configuration.
I’m curious whether this is a known issue, or perhaps I’m using the wrong vpc module? I’ve declared all the above variables in the stacks/globals.yaml config. The warning seems to come from terraform itself, and might be related to newer versions of terraform 0.14. Is this a known issue?
The warning is saying that you are providing a variable which your Terraform configuration isn’t using. For example, terraform plan -var foo=bar in a Terraform configuration with no variable "foo" { ... } block.
This isn’t related to the VPC module, but to your root (top-level) Terraform configuration
Yep, I was misinterpreting the warning. Thanks for setting me straight!
For what it's worth, Hashicorp said they are removing that warning and they’ll allow you to have whatever you want declared in tfvars now.
Should for sure be in v0.15, I can’t recall if they added it to the recent 0.14 patches
0.14.9 still shows that. I just today upgraded to 0.14.10, so not sure about that one, but fmt has definitely changed
oh what changed in fmt, I missed that note
Hi all
trying to use the Cloud Posse AWS backup module — it works well under Terraform Enterprise, but when I re-launch plan/apply, I get this issue:
Error: Provider produced inconsistent final plan
When expanding the plan for module.backup.aws_backup_plan.default[0] to
include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for .rule:
planned set element
cty.ObjectVal(map[string]cty.Value{"completion_window":cty.NumberIntVal(240),
"copy_action":cty.SetVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"destination_vault_arn":cty.UnknownVal(cty.String),
"lifecycle":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"cold_storage_after":cty.UnknownVal(cty.Number),
"delete_after":cty.UnknownVal(cty.Number)})})})}),
"lifecycle":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"cold_storage_after":cty.NullVal(cty.Number),
"delete_after":cty.NumberIntVal(2)})}),
"recovery_point_tags":cty.MapVal(map[string]cty.Value{"Name":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda_kpi-tenantfda"),
"Namespace":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda")}),
"rule_name":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda_kpi-tenantfda"),
"schedule":cty.StringVal("cron(0 3 * * ? *)"),
"start_window":cty.NumberIntVal(60),
"target_vault_name":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda_kpi-tenantfda")})
does not correlate with any element in actual.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
this is how i use the module:
# Cloudposse backup module
module "backup-idp-env" {
  source = "tfe.xxx.xxx.com/techsol-devops/backup/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version                  = "0.6.1"
  namespace                = var.workspace_name
  name                     = var.rds_identifier-idp
  delimiter                = "_"
  backup_resources         = [module.rds_dbserver-odp.db_instance_arn]
  schedule                 = "cron(0 3 ? *)"
  start_window             = 60
  completion_window        = 240
  delete_after             = 2
  destination_vault_arn    = data.aws_backup_vault.dr_idp.arn
  copy_action_delete_after = 7
}
the backup vault is created by an external local-exec process with some aws cli commands, so the backup vault is not impacted when we want to destroy the infra, because it is not in the state
has anyone already had this issue please? thank you
is there an easy way to move terraform state to a different s3 key?
i want to move from "us-east-1/rules-engine-prd/env-01/terraform.tfstate"
to "us-east-1/rules-engine-prd/XXX/terraform.tfstate"
In the original config, do an init. Then change the config and hit init again — terraform will prompt you that it detected the backend config has changed and ask if it should copy the state to the new location
it does not remove the old location!
i can then safely remove the old one manually?
yup
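As a sketch, using the paths from the question above:
# 1. with the old key still configured, make sure state is current
terraform init
# 2. edit the backend "s3" block, changing key to the new path, then:
terraform init    # terraform detects the change and offers to copy the state
# 3. after verifying the new location, remove the old object yourself
aws s3 rm s3://<bucket>/us-east-1/rules-engine-prd/env-01/terraform.tfstate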
is it possible to use aws s3 as backend state storage for azure, although i know there’s one for azure (blob storage)?
Hm, I don't see why not - the backend is supposed to be separate from the rest of the terraform config.
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
    ## you'd have to supply your AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY either here or during terraform init
  }
}
then you’d provision your azure resources in your standard main.tf… i’m literally just pulling this from the provider docs.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}
so your ‘backend’ could be anything.. aws, terraform cloud, terraform enterprise, consul
ok - i was thinking that due to using the azurerm provider you’d be stuck with using azure blob for the backend store and not aws s3 https://www.terraform.io/docs/language/settings/backends/azurerm.html
Terraform can store state remotely in Azure Blob Storage.
pls…
I’m reviewing some code and am curious about a choice made in it. Would there be a reason to use node_pools = zipmap(local.node_pool_names, tolist(toset(var.node_pools))) instead of node_pools = zipmap(local.node_pool_names, var.node_pools)? The var.node_pools type is list(map(string)). I’m basically curious why someone would convert the list of maps to a set and then convert it back to a list.
converting a list to a set removes duplicates; for_each takes a set or a map as input, so I don’t see a reason to convert the set back to a list.
Converting back to a list might be necessary for zipmap.
The local node_pools would be used in a for_each block creating node pools for a GKE cluster.
From my experiments it looks like the way the code is written avoids destructive modifications if the order of the var.node_pools list changes. I don’t understand why that happens though. Any thoughts on why it works?
zipmap builds a map out of your list — the index is then based on a map key and not on a list position.
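As an illustration (pool names are made up): once the resources are keyed by name, reordering var.node_pools doesn't change the keys, so nothing gets destroyed and recreated:
locals {
  node_pool_names = ["general", "high-mem"]
  node_pools      = zipmap(local.node_pool_names, var.node_pools)
}

# for_each addresses become ...node_pool["general"], ...node_pool["high-mem"]
# rather than positional ...node_pool[0], ...node_pool[1]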
Any free-tier provider like Spacelift or env0 that now covers PR integration with Azure DevOps? Spacelift didn’t seem to have that yet, so just checking for any recent updates. Right now I'm using dynamic backend config similar to the atmos approach, with a Go-based app I’ve been fiddling with.
DevOps Advocate with env0 here. We are currently working on the Azure DevOps integration, including CD and PR Plans. You can use ADO now, but it is just a simple repo hook. We’re just not quite there with the CD / PR plan hooks just yet. I’ve reached out to our CTO to try and get a code-commit date for you.
If you want to DM me your contact info, or just let me know DM’s here are fine, I can keep you in the loop as we make progress.
I built some PR comment additions into the plan myself, so I'm partway there — just figuring better lifecycle handling in the tool itself would be good. No rush, just exploring options as I haven’t caught up on recent updates. Thanks for keeping me posted
To clarify, do you mean Azure DevOps as a VCS provider?
I believe they were. We can support it today just as a basic VCS provider. It’s just the extra webhook stuff like PR Plan comments and CD that we don’t have yet. Do y’all have that yet for ADO @marcinw?
Nope.
Sounds like we both have a solid feature request on the board
Separate discussion… I get that terraform doesn’t fit into the typical CI/CD workflow very well, at least out of the box.
To be fair, though, if these tools such as Terraform Cloud, Spacelift, and env0 are in essence running the same CLI tool that you can run in your own CI/CD job that preserves the plan artifact, what do you feel is the actual substantial difference for the core part of terraform plans?
Don’t get me wrong, I love working with stuff like Terraform Cloud, but I guess I’m still struggling to see the value in that if you write a pipeline that handles plan artifacts
At env0, it’s not about the deployment job and state to us. It’s about the entire lifecycle of the environment from deploy to destroy. We get compared to a CI/CD tool a lot, but we don’t do CI. We can do CD… But it’s really the day 2+ operational stuff that we do that makes the value come out. Setting TTL’s, environment scheduling, RBAC, Policy enforcement, cost monitoring. All of us TACoS really focus on the whole IaC lifecycle, not just running a pipeline and shoving the state somewhere.
Here are some issues to deal with:
- planfile is sensitive (secrets likely stored within)
- order of applies matter, should always apply oldest plan first
- one-cancels-all: once the planfile for a given root module is applied, any planfiles prepared before that need to be discarded - so if you’re treating planfiles as artifacts, it’s complicated
- generally want to approve before apply and many systems don’t do this well (e.g. github actions) - and I’m not talking about atlantis style chatops, but a bonafide approval mechanism
- policy enforcement - where you want a policy that when one project changes, you want another project to trigger
2021-04-09
Hey all – I’m looking into initializing remote tfstates in conjunction with atmos. To initialize a remote tfstate, I need to execute a command equivalent to: “terraform apply -auto-approve” – only from within the atmos wrapper. It’s not entirely clear how to construct this command. I’ve tried a few intuitive combinations using the docs, but they do not seem to work as expected. Does anyone have a quick example of how to run atmos with ‘terraform apply -auto-approve’ and then ‘init -force-copy’ as one-time commands to initialize a remote tfstate?
Hey Marc — does atmos terraform deploy not do what you’re looking for?
Yes, this did the trick! Sorry for the newbie question. Must have missed it in the docs. Cheers –
Well to your credit, the docs on initializing remote state via tfstate-backend isn’t in the docs yet and that will be coming very shortly (PR should be up in the coming couple day).
But glad that did the trick. Let us know if you have any other questions.
Thanks, Matt! – So far, so good. :0)
Hey all, I would like to do arithmetic based on the value of a variable
now I have this:
resource "aws_ebs_volume" "backup" {
count = var.tier != "PROD" ? 0 : 1
availability_zone = var.aws_az
size = "${var.homesize} * 1.5"
type = "standard"
}
but this does not seem to work
this does it
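(The fix, presumably, is to do the arithmetic outside the string interpolation — "${var.homesize} * 1.5" is just a literal string. A sketch:)
size = ceil(var.homesize * 1.5) # EBS size must be a whole number of GiB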
Any users of Infracost here? Can you share feedback on the tool?
We’ve been looking at integrating it with Spacelift. It looks pretty decent and the CLI does not seem to leak anything to the API server. The obvious limitation is that usage-based cost estimation is only as good as the input you provide, but flat fees are generally well recognized, at least for AWS, and broken into components.
I’d suggest using it on static artifacts (state and plan) rather than have the wrapper run Terraform for you, because it feels messy and likely duplicates your work.
That said, I’ve mostly looked at it from the integration perspective - inputs, outputs, security and API availability. The CLI feels a bit awkward and inconsistent, especially if you’re outputting machine-readable JSON, but it’s generally not a blocker - you should be able to do what you want after some trial and error.
The team behind it is super nice and very responsive, too.
Thank you Marcin, that is very helpful. Would love to also hear of users using it on a regular basis. We are a vendor, like you, (not competing) and are thinking of integrating Infracost.
Sure thing. Always happy to talk shop and compare notes ;)
@Erik Osterman (Cloud Posse) gonna be very helpful https://www.terraform.io/docs/language/functions/defaults.html
The defaults function can fill in default values in place of null values.
Quite possibly
One thing we have encountered is that strict typing can make two otherwise compatible modules totally incompatible due to types. We encountered this with our null label module and have subsequently changed our context object to any.
Still, this defaults function is welcome
I’ve been thinking of reducing the number of variables for any module by using this via one variable object with optional(); gonna test this and see the pros and cons
I found defaults() hard to use and reason about when I tried the experiment in 0.14… now, the optional() marker for complex variable objects — that worked perfectly and was very easy
Maybe they’ve fixed defaults though, it was a few months ago and very much an experiment
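For reference, the optional() marker looks like this (a sketch; in 0.14 it still requires the module_variable_optional_attrs experiment, and the variable shape is illustrative):
terraform {
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name = string
    size = optional(number) # null when the caller omits it
  })
}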
2021-04-10
Hey all – I ran into a couple module bugs I’d really like to submit a PR for. To debug, I’m looking for a way to print out the objects being passed from one module to another during a ‘terraform plan’. Not quite sure how to manage this from within atmos/Geodesic. The terraform console seems a bit awkward in this context as well. Pointers on how to delve into debugging would be much appreciated!
what’s awkward about the terraform console?
the other thing is you can read your statefile
I did manage to get this going, but not from within atmos. In the end, I rendered the objects I needed using the ‘terraform output’ command. Looks like my multi-account build is working now. I’ll submit a PR with the changes to terraform-aws-components, and also some notes that may help others.
Thanks for the advice, Alex! Cheers –
Extension for Visual Studio Code - tfsec integration for VSCode
2021-04-11
hey, random question:
https://www.terraform.io/docs/language/functions/fileset.html
we are using the above function and then for_each over a bunch of files to create some resources. The problem is, it's sequential and a bit slow. Any idea on how to make it async?
The fileset function enumerates a set of regular file names given a pattern.
- create a sub-module from all the resources
- Use for_each on the submodule and give it the set of files (a sketch follows)
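Something like this sketch (module path and file pattern are illustrative; for_each on a module block needs Terraform 0.13+):
module "per_file" {
  source   = "./modules/resource-bundle"
  for_each = fileset(path.module, "configs/*.yaml")

  config = yamldecode(file("${path.module}/${each.value}"))
}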
right, so you are saying a module is async, interesting.
@Andriy Knysh (Cloud Posse) This looks to have decreased the run time by 30 to 40 percent… Thanks for the pro tip. Still testing but the first result looks good.
ok, it's really hard to tell and I think I am reading it wrong
but maybe a 1 minute reduction — 10 mins to 9
oh, well. Was worth a try
2021-04-12
Good morning, I’m struggling with Monday morning fog. Can somebody please suggest a quick way of converting this
myconfig = {
  "/ErrorPages"    = "mys3bucket"
  "/client-assets" = "mys3bucket"
}
into
mys3bucket = ["/ErrorPages", "/client-assets"]
I’ve tried merge and using the (…) expansion, but I think I’m overcomplicating this, as I assume it should be as simple as a for loop, but for the life of me I can’t get the syntax correct.
the type of example I feel it should be, but it isn’t working or syntactically correct:
locals {
newlist = tolist([
for k, v local.myconfig : v.value {
tolist(v)
}
])
}
try just keys(local.myconfig)?
Hello Loren, thank you, that’s led me to almost get what I needed. Looks like my initial structure is actually nested. So
myconfig = {
  "api" = {
    "/ErrorPages"    = "mys3bucket"
    "/client-assets" = "mys3bucket"
  }
}
Using
flatten([for k,v in local.myconfig :distinct(values(v))])
Gets me to
[
"mys3bucket",
]
but every time I try and pull this together to something like
{for k,v in local.mys3_configs : flatten(distinct(values(v))) => tolist(keys(v))}
I crash and burn
all i know is that based on what you’ve exposed here and what you said you wanted as a result, keys() is the answer. can’t offer more without seeing the whole data structure
totally understand Loren, sorry. There isn’t really any more to the structure, but let me reframe the data above to be less confusing.
locals {
  mys3bucket = keys(local.myconfig)
}
okay, I think the confusion is the naming in my example. So let me frame it. Sorry about this
> local.mys3_configs
{
  "api" = {
    "/ErrorPages"    = "assets.bucket.dev"
    "/client-assets" = "assets.bucket.dev"
  }
}
keys(local.mys3_configs)
[
"api",
]
What I need to do is get a distinct list of the values (aka assets.bucket.dev) and use that value to make a new list which will contain all the keys of local.mys3_configs.api
This gets me close to what I want.
{for k,v in local.mys3_configs : "randmonword" => { "another_randmon_word" = tolist(keys(v))}}
{
"randmonword" = {
"another_randmon_word" = [
"/ErrorPages",
"/client-assets",
]
}
}
I’m falling down when I try and make this type of structure
flatten(distinct(values(v)))
represents the dynamic but distinct list of values covered by values(v) aka assets.bucket.dev
tolist(keys(v)) represents the dynamic list of keys I want to add in to a single list. aka [ "/ErrorPages", "/client-assets"]
{for k,v in local.mys3_configs : flatten(distinct(values(v))) => tolist(keys(v))}
{
  "assets.bucket.dev" = [
    "/ErrorPages",
    "/client-assets",
  ]
}
Still as clear as mud
playing around a little more has got me to:
{for k,v in local.mys3_configs : element(flatten(distinct(values(v))), 0) => tolist(keys(v))}
{
  "assets.bucket.dev" = [
    "/ErrorPages",
    "/client-assets",
  ]
}
But while this gives me what I want, is it correct? or have I just fluked it and a different approach would be safer?
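For the archive, a sketch of an inversion that doesn't rely on element(..., 0) grabbing the first bucket — it groups the keys under every distinct bucket value, in case a config ever maps paths to more than one bucket:
locals {
  buckets = {
    for bucket in distinct(flatten([for k, v in local.mys3_configs : values(v)])) :
    bucket => flatten([for k, v in local.mys3_configs : [for path, b in v : path if b == bucket]])
  }
}

# => { "assets.bucket.dev" = ["/ErrorPages", "/client-assets"] }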
Hey we’re looking for a maintainer of our popular beanstalk module — if you use Beanstalk and this module and would be interested in being a contributor then reach out and let us know!
https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment#searching-for-maintainer
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
I’m used to the Terraform remote backend. I’m using dynamo + s3 and hit lots of lock issues (I’m the only one running it), as it seems to get stuck easily at times. Ideally I’d like to have my backend configure itself with its own state backend on initialization, like Terraform Cloud makes easy — so either TF Cloud, env0, or Spacelift depending on what I evaluate, just for backend state simplification only, not for runners at this time.
Am I using this stuff wrong and it’s normally easy to initialize and go, or would I be better served by a remote backend that creates itself on initialization to simplify that part?
does anyone have a way to obtain AWS RAM share ARNs using terraform and not the awscli?
can you just use the data source https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ram_resource_share
hmm, another AWS account has sent me a RAM invite and I can see it pending, however, the following isn’t working -
data "aws_ram_resource_share" "example" {
name = "aws-test-VPN"
resource_owner = "OTHER-ACCOUNTS"
}
I tried using SELF as well, but I get the following error:
Error: No matching resource found: %!s(<nil>)
on main.tf line 13, in data "aws_ram_resource_share" "example":
13: data "aws_ram_resource_share" "example" {
it may only work after you accept the request
but not sure
does the cli work
yeah it seems like it's only for after accepting
aws ram get-resource-shares --name "aws-test-VPN" --resource-owner OTHER-ACCOUNTS
let me try
nope
{
"resourceShares": []
}
gotcha
There are a ton of bugs in the terraform ram share accepter, you’re almost better off dealing with it manually
2021-04-13
Is there a pattern to resolve the “value depends on resource attributes that cannot be determined until apply” error when the resource a variable refers to is created in the same state as the calling terraform? Example in thread.
Yes and no. You can deal with it but maybe not the way you would like to.
Just dealt with one example today.
I used for_each on a module and tried to compute its output in locals. So terraform errored out on me. I fixed it by iterating over module.xxx.output in the resource instead of iterating over the computed local.
Hmm, I’m not sure I understand and whether that method would apply to my circumstance. My immediate example involves creating a private Route53 zone and sending that zone into a module which will create a DNS entry if the zone id exists, using count: https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/83bd076d932b3bac8203fe9b3a70cac43d8d36db/main.tf#L169
Terraform doesn't know if it will exist until apply. Usually I deal with this with a feature flag, i.e. route_53_enabled = true. In this case it's not my module, it's Cloud Posse's, so I'm wondering if there is a better way for conditional resource creation.
ultimately, the for_each key must depend only on user inputs and not on any computed values derived from resources in the same state
@Tom Dugan you can use -target with plan/apply to work around the problem. looking at that module, the way it is depending on var.zone_id in the enabled variable, and how enabled is used in count, there is no other workaround when you create the zone in the same tfstate
you can certainly move them into different tfstates, and manage them separately, and that will work also
(caveat: i haven’t used this module, so there may be some detail i am unaware of that would support your use case. i’ll defer to any of the cloudposse folks if they chime in)
ah yeah, the -target option — I was hoping to avoid that.
Pretty much this problem is creeping up when we are testing a module which calls that module. To test our module we create the Private zone during the test. In the real call the zone is created in a different state.
I think I will just extract the route53 logic to bypass this issue. Thanks for the insight!
i would suggest, instead of depending on var.zone_id in the enabled variable, the module should accept a var.create_dns_record boolean. i’m not sure how to do that in a backwards-compatible manner though, so not sure it would be accepted
That’s in line with my typical design pattern so I would agree with that approach. The backward compatibility problem is a good note; I’m not sure there is a clean way to solve that.
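(A sketch of that shape — create_dns_record is a hypothetical input here; the point is that count then depends only on a plain user input, while computed values like zone_id stay out of it:)
variable "create_dns_record" {
  type    = bool
  default = true
}

resource "aws_route53_record" "default" {
  count   = var.create_dns_record ? 1 : 0
  zone_id = var.zone_id # computed values are fine here, just not in count
  # ...
}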
does anyone have a recommended way of running tflint on a monorepo of modules?
I’m sure there’s a better way but I do this: for D in */; do cd "${D}" && tflint && cd ..; done
That will only do one level deep though
pre-commit — https://github.com/antonbabenko/pre-commit-terraform
pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform
are there any examples of tflint.hcl, as i would like to enable a few rules from https://github.com/terraform-linters/tflint/tree/master/docs/rules
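A minimal .tflint.hcl sketch (rule names taken from that docs page; adjust to taste):
config {
  module = true
}

rule "terraform_naming_convention" {
  enabled = true
}

rule "terraform_documented_variables" {
  enabled = true
}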
i’m running into the following error when running terragrunt with the azure provider. Has anyone come across this? Seems like a possible bug i may have encountered?
I’m running tf version 0.14.9 and tg version 0.28.20
azurerm_role_definition.default: Creating...
Error: rpc error: code = Unavailable desc = transport is closing....
2021/04/12 19:38:12 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/azurerm\"] (close)" errored, so skipping
2021/04/12 19:38:12 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-04-12T19:38:12.459-0700 [DEBUG] plugin: plugin exited
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.
[1]: https://github.com/hashicorp/terraform/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERRO[0079] Hit multiple errors:
Hit multiple errors:
exit status 1
Figured it out
Hi all, I’m trying to dynamically obtain the ARNs of AWS resource share invitations. I found that the data source for RAM doesn’t really support this. I’m attempting to mimic this example instead, and I’ve been able to retrieve the ARNs using the awscli command below:
aws ram get-resource-share-invitations \
--query 'resourceShareInvitations[*]|[?contains(resourceShareName,`prefix`)==`true`].resourceShareInvitationArn' \
--region us-east-1
However, I’m not sure how I can get it into the correct format that data.external requires. Ideally I'd want the output to be:
{ resourceShareName: resourceShareInvitationArn }
Pipe to jq and create a map
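A sketch of that (the jq shaping and the `prefix` filter are illustrative; data.external requires the program to print a flat JSON object of strings):
data "external" "ram_invites" {
  program = ["bash", "-c", <<-EOT
    aws ram get-resource-share-invitations --region us-east-1 \
      --query 'resourceShareInvitations[?contains(resourceShareName, `prefix`)]' \
      --output json |
    jq 'map({(.resourceShareName): .resourceShareInvitationArn}) | add // {}'
  EOT
  ]
}

# then: data.external.ram_invites.result["my-share-name"]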
sidenote: If there’s a terraform github issue you were able to find about this data-source incompatibility, I’ll happily give it an upvote.
i believe it is more that it’s a whole different api, rather than an incompatibility, exactly… the data source aws_ram_resource_share is based on get-resource-share, but the invitation is returned by get-resource-share-invitations. however, the share accepter resource takes the actual resource_share_arn, which IS returned by aws_ram_resource_share, and then the share accepter looks up the invite arn for you using that. the share accepter does not accept the invite arn
yeah that makes sense
i believe you can get the share arn from the invite though, so your approach will work, but you’ll need to adjust the query
yeah that's what i’m trying to figure out, but jmespath gets ugly, fast
OR you can use a multi-provider approach… use the aws_ram_resource_share data source with a provider that has permissions to read the ram share from the owner account
any ideas?
Wrote a short blog post about drift and Terraform, specifically in the case of AWS IAM: https://indeni.com/blog/identifying-iam-configuration-drift/
Would love to hear more examples from people here about drift issues you care about. I’m hearing more and more about the need to identify drift, and would like to focus is on specific use cases (vs all drift). Thoughts anyone?
So, your team, or even possibly your entire organization, has decided to standardize on using infrastructure-as-code to define IAM entities within cloud environments. For example, […]
nice! i like that SCP… now, make sure the trust policy for the IaC role is locked down so only your CI system can assume it… and/or have an explicit deny in all other policies on being able to AssumeRole the IaC role
that’s also the first good use case i’ve seen of paths on iam entities… interesting…
We used to use paths for IAM roles, but then some AWS (don’t remember which) service barfed when it wasn’t /
sounds about right @mrwacky
I’ve taken a cursory glance, but can’t find where regex_replace_chars
is used in https://github.com/cloudposse/terraform-null-label (or any callers). Am I missing something? Have y’all ever used this?
To clarify: I can’t find nor imagine an instance where I’d want a different regex than the default.
Ah! That’s a different question. I can’t think of an example right now.
The reason we have this is to support the use-case where a user does not want us to normalize/strip out characters. We (cloudposse) don’t have any such use-case since we’re strict about how we name things.
2021-04-14
Learn how to set up and use Secrets Automation to secure, orchestrate, and manage your company’s infrastructure secrets.
Use the 1Password Connect Terraform Provider to reference, create, or update items in your 1Password Vaults. - 1Password/terraform-provider-onepassword
v0.15.0 0.15.0 (April 14, 2021) UPGRADE NOTES AND BREAKING CHANGES: The following is a summary of each of the changes in this release that might require special consideration when upgrading. Refer to the Terraform v0.15 upgrade guide for more details and recommended upgrade steps.
“Proxy configuration blocks” (provider blocks with only alias set) in shared modules are now replaced with a more explicit…
@eric you called it, they released right before office hours
Every week
It doesn’t seem like a significant version. Will test it tomorrow.
it will get auto published today in our packages distribution
hey guys! i forgot if you can do this… or how
i need to get a module output: s3_replica_bucket_arn = module.secondary.module.stuff
this module is initialised (it’s in ../some_other_folder)
but i get
A managed resource "secondary" "module" has not been declared in
module.secondary.
A managed resource "secondary" "module" has not been declared in
module.primary.
I only want the one module from the parent
anyone know if i am barking up the wrong tree?
Try running terraform state list
to get a list of all the modules in your project. If there are a lot of resources you might want to try grepping for module.secondary
:
terraform state list | grep module.secondary
That might help you narrow in on the path you need to use to get the value you are looking for.
also note that if you are trying to use a value from a module, the module must publish that value as an output. Check the source for the module to confirm all the things being published. One easy way to do this (especially if you don’t have access to the source) is to print the entire module as output in your project’s outputs.tf:
output "secondary" {
value = module.secondary
}
Then run terraform refresh
to see all the outputs from the module. if the value you are trying to get to is not in there, you can’t get it unless you update the module to publish the value.
thanks, there’s an output, i didn’t know terraform state list
(well i did once run it)
i’ll give that a go, cheers
np!
Hi folks, who is in charge of reviewing the PRs on cloudposse / terraform-aws-rds-cloudwatch-sns-alarms ?
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
Hey @Matthew Tovbin bring this up in #pr-reviews and somebody will get to it. That module gets left behind a little bit if I remember correctly, so us maintainers just need a bit of a nudge and #pr-reviews is the best place for that.
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
Thanks!
It would be so, so great if that someone could have a look at the several open PRs and give some of them a go
2021-04-15
Why does terraform-aws-iam-user
require a PGP key? o.O
Hmm.. yeah, that makes the module pretty much useless for me. Darn.
Don’t get me wrong, awesome craftsmanship though.
if you’re trying to make a service user that has programmatic access only, they have a different module for it
Just making a list of users that can log into the dashboard. I just used the aws_iam_user
resource with a for_each
loop inside.
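For anyone searching later, a minimal sketch of that pattern (variable name and values hypothetical):
variable "dashboard_users" {
  type    = set(string)
  default = ["alice", "bob"] # hypothetical
}

resource "aws_iam_user" "dashboard" {
  for_each = var.dashboard_users
  name     = each.value
}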
moved into #terragrunt Avoided terragrunt for a long time.
I’m now in a place where I don’t have access to github, using Azure Repos. I need to deploy multiple clones of an environment and managing state is annoying. I’m doing a lot of work to realize I’m basically writing my own Go implementation of terragrunt sorta.
Considering Atlantis runs with az repos, I need to simplify my layout, I’m working with Go developers, limited on github stuff, and terraform cloud and others most likely aren’t options at this moment (I’d have to roll my own with azure pipelines otherwise)…
I tried the yaml config and dove in deep, but since this is basically only terraform, the abstraction and debugging for my use case wasn’t ideal, though it was pretty cool!
Is there any major reason I shouldn’t just go ahead and use terragrunt for this type of workflow?
… moving this into #terragrunt didn’t realize dedicated channel.
Hello Folks, just wondering if any of you have already gone through this ... I want to force pre-generated secrets into RDS using locals:
locals {
  your_secret = jsondecode(
    data.aws_secretsmanager_secret_version.creds.secret_string
  )
}
and then ...
# Set the secrets from AWS Secrets Manager
username = "${local.your_secret.username}"
password = "${local.your_secret.password}"
but Terraform insists the values are not set … tried several combinations of quoting and ${} interpolation … but starting to feel like I am doing the wrong thing here … any directions please?
and also checked : https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1#bebe — got a feeling the module is not able to access the locals !???
One of the most common questions we get about using Terraform to manage infrastructure as code is how to handle secrets such as passwords…
This suggests you may need to add more to your data
call, excerpt:
jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["key1"]
Error: provider.aws: aws_db_instance: : “password”: required field is not set
it still does not get a value it seems …
but thanks anyway … this is tripping me out !
perhaps the data object isn’t getting the right secret? i.e.
data "aws_secretsmanager_secret_version" "example" {
  secret_id = data.aws_secretsmanager_secret.example.id
}
anyways, good luck!
yeah … I am going on that direction now … take care my man !!!
I was able to sort it out! The admin folks are blocking the password field from being set. I went ahead and HARDCODED a string and it was still NOT being read … ALL SORTED !!! MUCH APPRECIATED FOR YOUR SUPPORT !!!
2021-04-16
I’m trying to write a parser for tfstate files. Version 4 sounds doable, but version 3 is quite hard to normalize. Is there a way I can automatically migrate version 3 to version 4 without doing a full upgrade on the codebase?
That’s a pretty low level question — I’m not sure if anyone will really know off the top of their heads. I’d ask in the HashiCorp Terraform discourse board: https://discuss.hashicorp.com/
HashiCorp
Will try to check on the discourse board too
I’m working on a fun open-source tool for Terraform DevOps that should be useful
there are more and more projects operating on the terraform state files. i’d look at how they are doing it
A web dashboard to inspect Terraform States - camptocamp/terraboard
Any terratest
users here? I’m wondering if anyone has hacked/played with integrating it with KinD to get a Kubernetes cluster on the fly inside Terratest
2021-04-17
a quick and dirty idea: do you think using TOML instead of YAML or JSON for passing tfvars to a Terraform stack would make sense?
no, YAML and JSON have their limitations, but they are supported by Terraform’s standard library. The benefit of TOML is much smaller than the cost of the added complexity to your process.
yea, IMO we don’t need another config format.
Hi everyone
Just a baby learning terraform here. Will watch your channel and ask questions as they come
2021-04-18
Hi everyone, just wanted to see if anyone had a clever way of doing the following; I’d like to turn the following into a module (which is the easy part )
resource "vault_auth_backend" "aws" {
type = "aws"
}
resource "vault_aws_auth_backend_role" "example" {
backend = vault_auth_backend.aws.path
bound_account_ids = ["123456789012"]
bound_iam_role_arns = ["arn:aws:iam::123456789012:role/MyRole"]
}
If multiple account id’s are required then I can pass in a list to bound_account_ids
and use count
to iterate through it, however, if I wanted the IAM role name to be different for some of the account ids how could I achieve this? for_each
?
@Brij S when you say “different for some of the account ids” things get kind of odd. yeah, you could use a for_each
on a list of account ids, but if you want to vary the IAM role, you’ll need to change the data to a map. that way each item in the for_each
loop will have an id and an IAM role associated with it.
Another option would be creating a module for each IAM role, that way you can associate the IDs with the module that has the IAM role they need.
there are a few ways you can approach this. just need to figure out which one is best for your use.
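A rough sketch of the map-based version (variable shape, names, and values hypothetical):
variable "bound_accounts" {
  description = "account id => IAM role ARN"
  type        = map(string)
  default = {
    "123456789012" = "arn:aws:iam::123456789012:role/MyRole"    # hypothetical
    "210987654321" = "arn:aws:iam::210987654321:role/OtherRole" # hypothetical
  }
}

resource "vault_aws_auth_backend_role" "example" {
  for_each = var.bound_accounts

  backend             = vault_auth_backend.aws.path
  role                = "example-${each.key}"
  bound_account_ids   = [each.key]
  bound_iam_role_arns = [each.value]
}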
Hey all — this Terraform issue could use some attention. It’s been around for almost 2 years and causes a lot of confusion with modules in the registry (which the Cloud Posse module library of course gets hit by): https://github.com/hashicorp/terraform/issues/21417
See my module for example: https://registry.terraform.io/modules/JustinGrote/azure-function-powershell/azurerm/ azure_active_directory_id is default=null, so it is an optional variable, but shows a…
Maybe ping mitchellh on twitter? Seems like it should be an easy one, and it seems like he’s able to get an eyeball on usability things like that more than comments on stale issues :/
Not a bad idea. I’ll do so.
Always feel like an ass doing that type of thing but went ahead and did it anyway.
2 years seems like a reasonable period to wait before escalating to a more drastic measure
Yeah agreed. It’s also the type of thing that likely burns people constantly, but the vast majority of module consumers aren’t going to actually look up this issue and give it a thumbs-up.
I believe this is a limitation of HCL2 not of Terraform, per se
(you might get a response which asks you to re-raise in a different repo)
2021-04-19
how likely (in time) would it be that if I created a PR for https://github.com/cloudposse/terraform-aws-documentdb-cluster that it would be merged and tagged?
Terraform module to provision a DocumentDB cluster on AWS - cloudposse/terraform-aws-documentdb-cluster
Like with any PR, it can be a few hours up to infinity. It depends on how complex the PR is, whether best practices are followed, tests (if any), and whether the added functionality makes sense.
What do you want to implement? If you need guidance let me know
@Steve Wade (swade1987) we’re pretty good about providing guidance so if you put something up then it’ll likely get eyes on it and you’ll get direction if something needs to change. If nobody responds quickly then feel free to ping in #pr-reviews or ping me directly.
i realised that what i was wanting to do doesn’t make sense for the module
so we just forked what we needed
Checkov 2.0 is released ! A ton of work went into this from the Bridgecrew team (and from you all) and we’re super excited for this milestone for the project. TL;DR the update includes:
• A completely rearchitected graph-based Terraform scanning framework. This allows for multi-resource queries with improved variable resolution and drastically increases performance.
• Checkov can now scan Dockerfiles for misconfigurations.
• We’ve added nearly 250 new out-of-the-box policies, including existing attribute-based ones and new graph-based ones. To learn more, check out:
• The Bridgecrew blog post: https://bridgecrew.io/blog/checkov-2-0-release
Introducing our biggest update to Checkov 2.0 yet including an all-new graph-based framework, 250 new policies, and Dockerfile support.
hello. is there an open issue to address the deprecated use of null_data_source: https://github.com/cloudposse/terraform-aws-ec2-instance/blob/4f28ecce852107011f66bf74bb6b32691605b368/main.tf#L153 ? i didn’t find anything and can submit a PR. thanks.
Terraform module for provisioning a general purpose EC2 host - cloudposse/terraform-aws-ec2-instance
Doesn’t look like it @Brandon Metcalf — Feel free to submit a PR and post it here or #pr-reviews and I’ll give it a review.
Hopefully a simple question…
Is it possible to do multiple comparisons like this?
cookie_behavior = local.myvalue == "none" || "whitelist" || "all" ? local.myvalue : null
This currently errors, so I assume not. Continuing with that assumption, I assume the only real option is to use a regular expression, or:
cookie_behavior = local.myvalue == "none" || local.myvalue == "whitelist" || local.myvalue == "all" ? local.myvalue : null
yes but they are separate comparison expressions when you set it up like that:
cookie_behavior = (local.myvalue == "none" || local.myvalue == "whitelist" || local.myvalue == "all") ? local.myvalue : null
or you can use a list with contains():
cookie_behavior = contains(["none", "whitelist", "all"], local.myvalue) ? local.myvalue : null
Thank you Loren,
Has anybody run into this issue before changing number of nodes with the msk module? https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/issues/17
Found a bug? Maybe our Slack Community can help. Describe the Bug Invalid index error when changing number_of_broker_nodes variable from 2 to 4. (The # of AZ's is 2 instead of 3 like the exampl…
Learn how to use the Terraform Cloud Operator for Kubernetes to manage the infrastructure lifecycle through a Kubernetes custom resource.
Can anyone recommend an upstream Elasticsearch service module? It needs to handle single and multi-node setups, with instance and EBS storage options
Does https://github.com/cloudposse/terraform-aws-elasticsearch do what you want?
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
I have created my own and used it for a long time but it doesn’t fit my current clients use case as it needs to be more flexible
Good evening, has anybody got a suggestion as to the problem here:
terraform 0.13.5 is exiting with this error:
Error: rpc error: code = Unavailable desc = transport is closing
When trying to apply an aws_cloudfront_origin_request_policy I’ve made.
if anybody has the same issue here. my problem was
resource "aws_cloudfront_origin_request_policy" "example" {
name = "example-policy"
comment = "example comment"
cookies_config {
cookie_behavior = "none"
cookies {
items = []
}
}
leaving the cookies {} section in place when none is set caused the error. Same with either of headers_config & query_strings
Now to find a way to use dynamic to exclude those sections completely if they are set to none.
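For anyone who wants the dynamic version anyway, a sketch (variable names hypothetical; the inner cookies block is emitted only when the behavior isn’t "none"):
variable "cookie_behavior" {
  type    = string
  default = "none"
}

variable "cookie_items" {
  type    = list(string)
  default = []
}

resource "aws_cloudfront_origin_request_policy" "example" {
  name = "example-policy"

  cookies_config {
    cookie_behavior = var.cookie_behavior

    dynamic "cookies" {
      # no cookies block at all when behavior is "none"
      for_each = var.cookie_behavior == "none" ? [] : [var.cookie_items]
      content {
        items = cookies.value
      }
    }
  }

  headers_config {
    header_behavior = "none"
  }

  query_strings_config {
    query_string_behavior = "none"
  }
}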
***removed, looks like the example did work but didn’t paste correctly. Above issue must be with my inputs. Sorry to have wasted peoples time.
2021-04-20
Morning everyone,
I have a resource that creates aws_cloudfront_origin_request_policy. Which I then later reference in a locals section
cf_custom_request_policy_map = { for k, v in aws_cloudfront_origin_request_policy.this : k => v.id if length(aws_cloudfront_origin_request_policy.this) > 0 }
and then merge with
all_policy_maps = merge(local.cf_managed_policy_map, local.cf_custom_request_policy_map)
The resource is togglable so won’t always be there. Everything looks to work, but I do a lot of sanity checking / viewing of outputs in the console when I’m trying to debug my code, and when trying to view local.all_policy_maps I get
Error: Result depends on values that cannot be determined until after "terraform apply".
Which makes sense but my question now is…
Is there a better way I should be referencing the output of the resource?
If this was a module I’d normally use an output but its not part of a module, the resource and local are all within the same tf script and are part of the same single apply.
Welcome all comments and thank you all in advance.
Hi, I am using this module (terraform-aws-elasticsearch), where I was looking to enable fine-grained access control in Amazon Elasticsearch Service. Based on my understanding, can I say that:
advanced_security_options_internal_user_database_enabled = true
will enable it?
I did that using v0.30.0 of that module.
Add these options alongside the other required parameters specified in the docs:
advanced_security_options_enabled                        = true
advanced_security_options_internal_user_database_enabled = true
advanced_security_options_master_user_name               = "master"
advanced_security_options_master_user_password           = "pass"
thank you!
If not, then I want to know how to enable “fine-grained access control” for ES using the above module.
Hello Folks, how do I use multiple managed rules in the below aws config module?
module "example" {
source = "cloudposse/config/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
create_sns_topic = true
create_iam_role = true
managed_rules = {
account-part-of-organizations = {
description = "Checks whether AWS account is part of AWS Organizations. The rule is NON_COMPLIANT if an AWS account is not part of AWS Organizations or AWS Organizations master account ID does not match rule parameter MasterAccountId.",
identifier = "ACCOUNT_PART_OF_ORGANIZATIONS",
trigger_type = "PERIODIC"
enabled = true
}
}
}
Look at our strategy for CIS: https://github.com/cloudposse/terraform-aws-config/tree/master/modules/cis-1-2-rules
This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config
I’m trying the below approach but getting an error
module "config" {
source = "cloudposse/config/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
s3_bucket_arn = module.s3_config_bucket.bucket_arn
s3_bucket_id = module.s3_config_bucket.bucket_id
global_resource_collector_region = "ap-south-1"
create_sns_topic = false
create_iam_role = true
managed_rules = {
access-keys-rotated = {
description = "Checks if the active access keys are rotated within the number of days specified in maxAccessKeyAge. The rule is NON_COMPLIANT if the access keys have not been rotated for more than maxAccessKeyAge number of days.",
identifier = "ACCESS_KEYS_ROTATED",
trigger_type = "PERIODIC"
enabled = true
input_parameters = [
{
maxAccessKeyAge = 90
}
]
},
acm-certificate-expiration-check = {
description = "Checks if AWS Certificate Manager Certificates in your account are marked for expiration within the specified number of days. Certificates provided by ACM are automatically renewed. ACM does not automatically renew certificates that you import",
identifier = "ACM_CERTIFICATE_EXPIRATION_CHECK",
trigger_type = "Configuration changes"
enabled = true
input_parameters = [
{
daysToExpiration = 15
}
]
}
}
}
Maybe this has already been discussed, but I could not find anything useful so I figured it might be worth asking anyway. Is there a known best way or practice to deal with a larger number of helm_releases, ideally in a dynamic fashion? My use case looks like this:
• one pipeline builds a release from repository 1 and pushes helm charts to an artifactory folder
◦ the number of the helm charts can vary from 1 to >50 (can also grow over time)
• another pipeline gets triggered by the first and starts a terraform run, where some helm_release
resources get deployed; the idea was to look up the list of services from the chart repo (can be done with the jfrog cli in the pipeline) and use this list for some kind of iteration (see the sketch below) over either
◦ a module where the service parameters are fed into along with the service name from the list
◦ another method based on terraform, unknown to me so far
Or am I going down the wrong path trying to solve this with terraform when it should be done with native helm?
Thank you for your suggestions.
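One Terraform-native shape for that iteration, for what it’s worth (a sketch; the variable layout and repository URL are hypothetical, with the first pipeline rendering the chart list into this map, e.g. as a tfvars file):
variable "services" {
  # chart name => release settings, produced by the first pipeline
  type = map(object({
    chart_version = string
    values_yaml   = string
  }))
  default = {}
}

resource "helm_release" "service" {
  for_each = var.services

  name       = each.key
  repository = "https://artifactory.example.com/artifactory/api/helm/charts" # hypothetical
  chart      = each.key
  version    = each.value.chart_version
  values     = [each.value.values_yaml]
}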
sharing for the (terraform) culture!
alias moduleinit='touch {main,variables,outputs}.tf && wget https://raw.githubusercontent.com/github/gitignore/master/Terraform.gitignore -O .gitignore'
Here is my similar bash hackery around this — https://github.com/Gowiem/DotFiles/blob/master/terraform/functions.zsh#L70
Though I honestly don’t use that much anymore.
Gowiem DotFiles Repo. Contribute to Gowiem/DotFiles development by creating an account on GitHub.
nice.
incoming n00b question: I am trying to work out what zone awareness means in AWS Elasticsearch. Do you have to use it when using a multi-node setup?
Probably a good question for #aws or read through AWS docs on the subject.
2021-04-21
can anyone here help me use this project: https://github.com/cloudposse/terraform-aws-config ?
This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config
What help do you need @Kim?
Ask some specific questions and I’m sure folks can help out. Definitely be sure to check the examples/complete
directory if you need direction on how to use it generally.
In fact, i need to know how to start using the project step by step
@Kim I’d check out some tutorials from HashiCorp Learn — The modules track might help https://learn.hashicorp.com/collections/terraform/modules
Going step by step through how to execute Terraform and use a module / talk through the AWS process is a bit much to go into in a forum like this unfortunately, so doing some research on your own and then circling back with specific questions like “Hey why am I getting this error” will allow me or others to help you.
Learn how to provision, secure, connect, and run any infrastructure for any application.
Hi All! I deployed the following terraform config in one account and it works fine. Currently I’m trying to deploy the same code in another account and am facing the error below; Elasticsearch gets stuck in the Loading state. I checked that STS is enabled in my region, so this is not the case: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html#es-vpc-sts
module "elasticsearch-app" {
source = "../../../external_modules/cloudposse/terraform-aws-elasticsearch"
stage = var.environment
name = "elasticsearch-ap"
// TODO: setup DNS zone for elasticsearch-app
// dns_zone_id = "Z14EN2YD427LRQ"
security_groups = [module.stage_vpc.default_security_group_id, module.eks.worker_security_group_id]
vpc_id = module.stage_vpc.vpc_id
subnet_ids = [module.stage_vpc.public_subnets[0]]
availability_zone_count = 1
zone_awareness_enabled = "false"
elasticsearch_version = "7.9"
instance_type = "t2.small.elasticsearch"
instance_count = 1
ebs_volume_size = 10
// TODO: create strict policies for elastic assumed roles
iam_role_arns = ["*"]
iam_actions = ["es:ESHttpGet"] #, "es:ESHttpPut", "es:ESHttpPost", "es:ESHttpHead", "es:ESHttpDelete"]
encrypt_at_rest_enabled = "false"
kibana_subdomain_name = "kibana-es-apps"
# Disable option: Require HTTPS for all traffic to the domain
# Required as global-search service doesn't work with https
domain_endpoint_options_enforce_https = false
advanced_security_options_internal_user_database_enabled = true
advanced_security_options_master_user_name = "elasticuser"
advanced_security_options_master_user_password = aws_ssm_parameter.elasticsearch_apps_password.value
// Required as workaround: <https://github.com/cloudposse/terraform-aws-elasticsearch/issues/81>
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
}
Error:
module.elasticsearch-app.aws_elasticsearch_domain.default[0]: Still creating... [59m51s elapsed]
module.elasticsearch-app.aws_elasticsearch_domain.default[0]: Still creating... [1h0m1s elapsed]
Error: Error waiting for ElasticSearch domain to be created: "arn:aws:es:us-east-1:11111111111111:domain/stage-elasticsearch-ap": Timeout while waiting for the domain to be created
on ../../../external_modules/cloudposse/terraform-aws-elasticsearch/main.tf line 100, in resource "aws_elasticsearch_domain" "default":
100: resource "aws_elasticsearch_domain" "default" {
Learn how to identify and solve common Amazon Elasticsearch Service errors.
@O K the AWS ElasticSearch service is known to be slow / crappy. I’ve had plenty of issues with it taking hour+ deploy times and terraform timing out. I would guess that your issue is due to AWS ElasticSearch and not due to the module.
I’d suggest trying again tomorrow
Thank you, man
changed instance from t2.small -> t2.medium and it deployed in 15 min! probably more memory is needed or AWS knows the stuff
AWS is weak: They allow you to use t2.small instances, but they basically don’t work. Like I’ve had issues like you just ran into and just general day to day memory failures due to trying to use too small of instances. It’s like AWS wants to show off that you can keep your ES clusters cheap, but in reality…. that shit don’t work.
ES is extremely memory hungry so I’d rather them just come out and say: Hey we don’t allow you to use cheap instances because ES is just way too memory intensive. You have to use these expensive boxes over here.
BTW, what do you think about https://aws.amazon.com/blogs/opensource/introducing-opensearch/
Today, we are introducing the OpenSearch project, a community-driven, open source fork of Elasticsearch and Kibana. We are making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from […]
Not sure if I have a fully formed opinion. But I hope that causes them to put more effort into ES as a managed service as I’ve been pretty unhappy with it so far. I’ve lost clusters and data because clusters can just get into a failed state and the AWS documentation just says “Hey contact support to fix”. That’s BS in my mind. So if they start addressing issues like that then I’ll be happier.
This is the 0.12 version: https://github.com/cloudposse/terraform-aws-elasticsearch. I wonder what I should check to solve this error
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
Is there a way to set up terraform-aws-tfstate-backend so the state file is saved to a folder in an existing S3 bucket? The way I have been using this module there is a bucket for each state file and it is getting pretty cluttered. Was hoping there was a way to do this for better organization.
You can specify a folder path in the key
param when specifying the s3 state backend
backend "s3" {
region = "us-east-1"
bucket = "< the name of the S3 state bucket >"
key = "some/folder/structure/terraform.tfstate"
dynamodb_table = "< the name of the DynamoDB locking table >"
profile = ""
role_arn = ""
encrypt = true
}
Thanks!
Yup, this is the best way
Regarding https://github.com/cloudposse/terraform-aws-alb and https://github.com/cloudposse/terraform-aws-nlb - is there a particular reason the module “access_logs” in nlb can’t look like alb? I’m more than happy to submit the PR. I didn’t know if I was missing something.
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
Terraform module to provision a standard NLB for TCP/UDP/TLS traffic https://cloudposse.com/accelerate - cloudposse/terraform-aws-nlb
send a PR; they should be similar
Ya, they should be similar.
The NLB module was a contribution from some other organization
Did you hear about AWS’s new policy validation API and wished you could use it with your Terraform code? Now there’s a way: https://indeni.com/blog/integrating-awss-new-policy-validation-with-terraform-in-ci-cd/
Is there a non-saas version?
Not today. The API calls are made directly from our own AWS account. What’s your thinking?
any ideas how to detect configuration drift, e.g. resources created manually without terraform? Any tools/vendors for this kind of task?
Alex - there’s driftctl, but you need to define what you consider drift. In TF, you can use the ignore_changes meta-argument to intentionally ignore changes. Also, resources are often created out of events (e.g. a lambda function creates an s3 bucket). What’s your definition of config drift?
driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.
Terraform by HashiCorp
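(The meta-argument mentioned above is lifecycle ignore_changes; a minimal sketch, resource and AMI id hypothetical:)
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical
  instance_type = "t3.micro"

  lifecycle {
    # out-of-band edits to tags are no longer reported as drift
    ignore_changes = [tags]
  }
}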
in this case I mean that I need to detect any resources in the AWS account which were not created by the terraform code @Charles Kim
driftctl
seems good, thanks! @Charles Kim
Try out driftctl. It has some issues but the team is rather responsive. disclosure: i work at Cloudrail and we’re focusing on solving security issues resulting from config drift. Here’s a sample: https://indeni.com/blog/identifying-iam-configuration-drift/
So, your team, or even possibly your entire organization, has decided to standardize on using infrastructure-as-code to define IAM entities within cloud environments. For example, […]
@Charles Kim did you use driftctl with Atlantis
I’m working on a new project that will be released soon! Would love to hear your feedback, let me know your Github ID if you would like a preview before release https://twitter.com/mazen160/status/1383475198544936964
My side-project for the weekend, tfquery: a tool for SQL-like queries in your Terraform State files. My goal is to be able to say: $> select * from resources where type = “aws_s3_bucket” and “is_encrypted” = false
Will try to open-source it since I didn’t find a good solution
This sounds really cool. Will you be able to query the entire S3 backend, or just one state file at a time?
@Igor You can sync your S3 backend locally, and then run a query on all tfstate files at the same time; it’s been really helpful for me
Let me know if I can add you! Hopefully I can hear thoughts from you if you get a chance
I don’t have any immediate use cases for this, but I’ll keep it in mind. Wishing you success with the launch.
Hi, I’m working with https://github.com/cloudposse/terraform-aws-documentdb-cluster and I wish I could disable TLS on DocumentDB, but I can’t find how. Is it possible with this module?
Terraform module to provision a DocumentDB cluster on AWS - cloudposse/terraform-aws-documentdb-cluster
if something is missing in the module that the aws_docdb_cluster resource supports, you can open an issue or a PR
ok, thanks so much!
TLS is possible to enable/disable using https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster_parameter_group
parameter {
  name  = "tls"
  value = "enabled"
}
which in the module is this var
https://github.com/cloudposse/terraform-aws-documentdb-cluster/blob/master/variables.tf#L84
ok, I’ll try with the variable "cluster_parameters"
Sorry, one question: if I use for example:
variable "parameters" {
  type = list(object({
    apply_method = string
    name         = string
    value        = string
  }))
  default     = [{ "apply_method" = "true", "name" = "tls", "value" = "disabled" }]
  description = "List of parameters for documentdb"
}
How could I use it in my main.tf? Do I get the values with for_each?
module "documentdb_cluster" {
source = "......"
version = x.x.x
cluster_size = var.cluster_size
cluster_parameters = [
{
apply_method = ""
name = "tls"
value = "disabled"
}
]
or
module "documentdb_cluster" {
source = "......"
version = x.x.x
cluster_size = var.cluster_size
cluster_parameters = var.parameters
the module does for_each
on the list of objects
ok,thanks!
2021-04-22
I made a PR to build up statistics on TFSec findings, to filter results by check type.
Should make analyzing Terraform vulnerabilities much easier
If you’re into cloud security, would be happy to connect with you on Twitter https://twitter.com/mazen160 :)
The latest Tweets from Mazin Ahmed (@mazen160). Hacker | Builder. I talk about Web Security, Security Engineering, DevSecOps, and Tech Startups. Founder @FullHunt. Ex-@ProtonMail. Blue by Day. Red by Night |
Looks amazing!
Hi, I am using the elasticsearch module, which is trying to create an IAM user.
➜ tf apply -auto-approve
module.elasticsearch.aws_security_group.default[0]: Refreshing state... [id=sg-0e56e3767a5b60fe7]
module.elasticsearch.aws_security_group_rule.ingress_cidr_blocks[0]: Refreshing state... [id=sgrule-3053224398]
module.elasticsearch.aws_security_group_rule.egress[0]: Refreshing state... [id=sgrule-3045719721]
module.elasticsearch.aws_iam_role.elasticsearch_user[0]: Creating...
module.elasticsearch.aws_elasticsearch_domain.default[0]: Creating...
module.elasticsearch.aws_elasticsearch_domain.default[0]: Still creating... [10s elapsed]
Error: Error creating IAM Role es-msf-gplsmzapp-1-user: AccessDenied: User: arn:aws:iam::XXXX:test is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::330153026934:role/es-msf-gplsmzapp-1-user with an explicit deny
status code: 403, request id: 87e0551b-3953-4e56-b364-a02b26065841
on .terraform/modules/elasticsearch/main.tf line 68, in resource "aws_iam_role" "elasticsearch_user":
68: resource "aws_iam_role" "elasticsearch_user" {
Error: Error creating ElasticSearch domain: ValidationException: You must specify exactly one subnet.
on .terraform/modules/elasticsearch/main.tf line 100, in resource "aws_elasticsearch_domain" "default":
100: resource "aws_elasticsearch_domain" "default" {
Just curious, can I skip the user or role creation process?
(Originally I thought that in another env I was able to provision ES/Kibana without creating this user, but I was wrong; I found that it creates a new role in the other env too. So role creation is the default process.)
Want to know: can we skip role creation?
Can someone guide me?
Hey Amit, check out the variable here: https://github.com/cloudposse/terraform-aws-elasticsearch#input_create_iam_service_linked_role
That enables / disables creation of the role.
The elasticsearch_user resource that is created can be found here: https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/main.tf#L68
There is logic which you could use to disable that resource if you want.
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
Hi lads, I have an issue with cloudposse/elasticsearch/aws module
although I set create_iam_service_linked_role = "false"
and there is nothing in the plan related to AWSServiceRoleForAmazonElasticsearchService, apply is throwing
Error: Error creating service-linked role with name es.amazonaws.com: InvalidInput: Service role name `AWSServiceRoleForAmazonElasticsearchService` has been taken in this account, please try a different suffix.
status code: 400, request id: 9c27ff1d-5ec9-496c-8290-cf65180ffb69
on iam.tf line 49, in resource "aws_iam_service_linked_role" "es":
49: resource "aws_iam_service_linked_role" "es" {
any idea what it can be?
That value should be bool
— Are you trying to pass it as a string?
If it’s still trying to create that role then I’d look at the logic behind that variable and try to trace it back. If there is a bug, either open an issue or put up a PR and then post in #pr-reviews.
thanks, I already did as bool, I will dig deeper
Seems like there’s an issue open for it: https://github.com/terraform-providers/terraform-provider-aws/issues/5218
Actually, I misread, but does give some insight into the parameter.
For reference, the code is here: https://github.com/cloudposse/terraform-aws-elasticsearch/blob/0541281379ae1b916fe4e19c884336fd10a328f5/main.tf#L62
Definitely could see how it’d fail as a string even if specified as "false"
(non-empty string is truthy, I assume). Looks like using = false
should work. Good luck!
Hi folks, I’m using the Cloud Posse ECS task module. Do you have other modules we can use to export the docker labels below on a container to Datadog logs as tags?
Is it possible to chain Terraform depends_on
for a list? Basically I’m trying to do something like
resource "google_secret_manager_secret_version" "main" {
count = length(var.secret_data)
secret = google_secret_manager_secret.main.id
secret_data = var.secret_data[count.index].secret_data
enabled = var.secret_data[count.index].enabled
depends_on = [google_secret_manager_secret_version.main[count.index - 1]]
}
to create secret manager versions in a specific order. When I try it, I get an error on the depends_on
line A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here.
Does
resource "google_secret_manager_secret_version" "main" {
count = length(var.secret_data)
secret = google_secret_manager_secret.main.id
secret_data = var.secret_data[count.index].secret_data
enabled = var.secret_data[count.index].enabled
depends_on = [google_secret_manager_secret_version.main]
}
Not do what you want?
no you can’t use depends_on
on the same resource block to order the creation… you can order them, but only with separate resource blocks. another “kinda” option is to use -parallelism=1
I get a cycle error when I use depends_on
on the same resource block. I’ll try to use parallelism=1
to see if that maintains order of the list.
I just tested with parallelism=1. It doesn’t preserve the order of the list, so secret version 7 is created first instead of secret version 1.
i can imagine a module to try to make it easier, but it would still involve multiple resources, each using depends_on for the prior one. say 10 resources, accepting a list of up to 10 secrets. obvs the number is arbitrary. and if you have more than that number of secrets, invoke the module more than once using module-level depends_on for ordering
I think you’re right. I’ll need to have the logic for the depends_on
on a module level. Then the user invokes the secret-version
module multiple times with a chain of depends_on
.
The only other way I see doing it is to create a module for a secret_version and let the user chain the depends_on
in the module calls.
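Roughly what that call-site chaining would look like (a sketch; the module path and its inputs are hypothetical):
module "secret_v1" {
  source      = "./modules/secret-version" # hypothetical
  secret_id   = google_secret_manager_secret.main.id
  secret_data = var.secret_data[0].secret_data
}

module "secret_v2" {
  source      = "./modules/secret-version" # hypothetical
  secret_id   = google_secret_manager_secret.main.id
  secret_data = var.secret_data[1].secret_data

  # module-level depends_on enforces the creation order
  depends_on = [module.secret_v1]
}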
Question regarding the https://github.com/cloudposse/terraform-aws-mq-broker module. It looks to me like the ingress SGs are messed up. I’m seeing from_port and to_port set to 0
for ingress but I don’t see protocol set to -1
to get All TCP traffic. This means that all ingress are getting the 0
port but that just won’t work for connecting to a broker. Does that sound right?
Terraform module for provisioning an AmazonMQ broker - cloudposse/terraform-aws-mq-broker
Or maybe I don’t understand how port 0
is supposed to work?
Unless I’m mistaken, from and to port “0” should mean all ports. The protocol is “tcp”, which will allow only TCP traffic: https://github.com/cloudposse/terraform-aws-mq-broker/blob/master/sg.tf
Part of the code that I’m seeing
resource "aws_security_group_rule" description = "Allow inbound traffic from existing Security Groups"
from_port = 0
to_port = 0
protocol = "tcp"
type = "ingress"
}
whelp I guess I’m mistaken…for wide open ports we’re using from 0 to 65535. I have to be going crazy
Doesn’t protocol need to be “all” or “-1” for this to work?
I’m testing a RabbitMQ broker now with nc -z -v <hostname> 5671
and it doesn’t work with “tcp” and port 0
.
but works with port set to 5671
explicitly.
looks like you’re right
Okay, I thought I was losing it. I’ll submit a PR shortly.
FYI: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/sg.tf may show what they were attempting
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
might be more suited to this particular project (specifying a single port)
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
Yeah, in my issue I alluded to refining the ports further:
Even this is a bit wide open IMO when it should be restricting traffic to only the ports needed but at least this will fix it.
Describe the Bug When using the allowed_security_groups or allowed_cidr_blocks inputs they are setting port 0 traffic as allowed. This doesn't work since any broker queue (ActiveMQ or RabbitMQ)…
There’s only 2 supported brokers I think, ActiveMQ and RabbitMQ. Maybe my PR should include the specific ports, you think?
Or I could fix the immediate issue and follow up w/a 2nd PR to restrict ‘em.
I personally think it should be restricted to a default port like they do with the RDS cluster.
In a previous PR I was conversing with “Gowiem” and he said he was using this module so now I’m wondering how he’s using it in this state
what RabbitMQ support via dynamic logs block RabbitMQ general logs cannot be enabled. hashicorp/terraform-provider-aws#18067 why Support engine_type = "RabbitMQ" references has…
2021-04-23
henlo
I’m here because of the call for maintainers: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment
Are you interested in becoming a maintainer?
Cool. There is some info on the docs site here: https://docs.cloudposse.com/community/faq/
I’ll kick off the conversation with the contributor team and we’ll get back to you.
sure
I have several workloads running on Elastic Beanstalk and I’m relying on this Terraform module.
Hey Alex! That’s great. Let’s talk week after next. I will DM you.
does anyone have a recommended approach for setting up guardduty in a multi account setup from a terraform perspective?
i have seen a lot of modules but wondered if there was any alignment
Hi guys, do you have a sample repository which installs the kubernetes cluster autoscaler and works properly with TF 0.15? The one I was using from cookie labs broke after the upgrade. Thanks, Leia https://www.linkedin.com/in/leia-renee/
Hi all, I want to set up a greenfield AWS project using the CP resources as intended. What is the best place to start? Is it to build the geodesic environment and follow the instructions in cloudposse/reference-architectures? The readme says to just clone the repo, set up AWS, and run make root, but that target doesn’t exist.
Also says:
Update the configuration for this account by editing the configs/master.tfvar
file
That dir doesn’t exist in that repo
Looking through https://github.com/cloudposse/tutorials, probably will figure it out
Contribute to cloudposse/tutorials development by creating an account on GitHub.
@Ryan docs.cloudposse.com is the spot you want to be. Feel free to ask me any question. Reference arch is out of date, I wouldn’t look there.
Thanks, going through the aws bootstraping tutorial. Makes sense so far. For some reason migrating tfstate to the s3 backend is failing though, not sure why.
✗ . [none] tfstate-backend ⨠ aws-vault exec badger-dev -- aws s3 ls
Enter passphrase to unlock /conf/.awsvault/keys/:
2021-04-24 19:50:47 acme-ue2-tfstate-useful-satyr
✗ . [none] tfstate-backend ⨠ aws-vault exec badger-dev -- terraform init
Enter passphrase to unlock /conf/.awsvault/keys/:
Initializing modules...
Initializing the backend...
Error: Error inspecting states in the "s3" backend:
S3 bucket does not exist.
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
Error: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: R9BK7CGDC6ADW18R, host id: OGOwOANW5lgRhvUmtI8exraeV7GBAyW45XlTtuelQLMWDFxnfNYAPlgvbNYtmCPyFknapGRRAUQ=
TF not finding the state bucket
I can ls, cp, rm from the bucket so it’s definitely there
Huh, possibly a region issue? I’d compare the region of the bucket vs the stacks/catalog/globals.yaml backend config + backend.json.tf file.
Also, I believe atmos
will pass the -reconfigure
flag when doing a tf init
to deal with issues of the backend config changing… but I’m not 100% sure on that so maybe try deleting your .terraform
directories.
If you can find out what went wrong, please let me know. I’ve gone through that workflow a few times when writing it up.
@Matt Gowie I think the script that invokes yq
sets the region to uw2, when it created the state files in ue2. I was going to submit a PR to the tutorial. I hit this too.
Ah damn, maybe I missed that one in a last-minute switch to show dealing with more than one region. @Joe Hosteny thanks for the info.
No problem. I think workspace_key_prefix sounds like it needs to be set too? I discussed with @jose.amengual in another thread. Haven’t checked into it this weekend any more, but I noticed the literal “env:” in the state file s3 path
@Joe Hosteny — I put up a PR for this. Mind giving it a review since you should be able to approve? https://github.com/cloudposse/tutorials/pull/5
what Updates random-pet.sh script to properly use ue2 when doing a substitution of the bucket + dynamo table names. why This was causing issues with folks not being able to find their bucket. r…
Thanks Approved - that’s what I did as well
Thanks @Joe Hosteny!
Thanks. I’ll test tomorrow.
Thanks Ryan!
2021-04-24
2021-04-25
2021-04-26
Is it possible to alter hostnames per instance in an ASG?
if you mean the hostname of the instances, sure the ASG doesn’t care…
if you mean multiple DNS names resolving to one ASG, sure… the certificate needs to be a wildcard or have a SAN for each valid name, and you need to set up the target group rules to match on the host
I would like each instance to have a unique hostname containing the AZ
If all you want is a TAG then you can configure that in the AWS config (see the doc). If you want a fully qualified domain name, then you may have to set something up to read the instance data and create DNS entries. https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html
Add or remove tags for your Auto Scaling groups and Amazon EC2 instances to organize your resources.
You can add multiple tags to each Auto Scaling group. Additionally, you can propagate the tags from the Auto Scaling group to the Amazon EC2 instances it launches.
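A sketch of the read-the-instance-data approach via user_data (launch template and names hypothetical; the IMDSv1 endpoint is shown for brevity):
resource "aws_launch_template" "example" {
  name_prefix   = "az-hostname-"
  image_id      = var.ami_id # hypothetical
  instance_type = "t3.micro"

  # $${...} escapes the shell variable inside a Terraform heredoc
  user_data = base64encode(<<-EOT
    #!/bin/bash
    AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    hostnamectl set-hostname "web-$${AZ}-$(hostname -s)"
  EOT
  )
}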
When setting up guarduty with a master/member setup do you always send the slack notifications from the master account or have it setup to notify from each of the member accounts?
v0.15.1 0.15.1 (April 26, 2021) ENHANCEMENTS:
config: Various Terraform language functions now have more precise inference rules for propagating the “sensitive” characteristic values. The affected functions are chunklist, concat, flatten, keys, length, lookup, merge, setproduct, tolist, tomap, values, and zipmap. The details are a little different for each of these but the general idea is to, as far as possible, preserve the sensitive characteristic on individual element or attribute values in…
Version 0.15.1
For those who missed it - HashiCorp were hit by the Codecov issue, and so all TF versions starting 0.12 had their signing key updated. Suggest you update to the most recent patch on the minor version you use. HashiCorp said they don’t foresee anyone being able to use this to deliver mal-providers, but it’s a good step to take anyway.
2021-04-27
v0.15.1 0.15.1 (April 26, 2021) ENHANCEMENTS:
config: Various Terraform language functions now have more precise inference rules for propagating the “sensitive” characteristic values. The affected functions are chunklist, concat, flatten, keys, length, lookup, merge, setproduct, tolist, tomap, values, and zipmap. The details are a little different for each of these but the general idea is to, as far as possible, preserve the sensitive characteristic on individual element or attribute values in…
Hi folks, do you have a group Sentinel policy here?
2021-04-28
Open-source project release
Feedback are welcome! Thank you everyone for the support. https://github.com/mazen160/tfquery
tfquery: Run SQL queries on your Terraform infrastructure. Query resources and analyze its configuration using a SQL-powered framework. - mazen160/tfquery
Hello guys, I’m working on elastic beanstalk using the Cloudposse module: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment I know this repo is looking for a maintainer and it’s not a main priority to update it, but could it be possible to take a closer look at this PR? https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/170. I just tested it and it looks clean to me, and could fix an issue in the module.
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Checking it out @Florian SILVA
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
We have a potential community member who reached out (@Alex Renoki) who is interested in helping maintain this module so hopefully that module starts getting more attention from folks who are actually using it soon. Problem we have today is that none of the current maintainers are big into beanstalk so it’s hard to review PRs.
Sounds good to me, it can be merged. I understand the second problem: since beanstalk is quite complex, it’s not easy to review these PRs. I’m currently working on it, doing some modifications depending on my needs and seeing if some issues are linked, so I’m a bit aware of some things. Just trying to fix minor things when they annoy me ^^
@Florian SILVA I might as well help reviewing some issues if needed. I’m not sure how to help with this maintenance issue
From what I saw, there are interesting and easy PRs that have been open. The main issue with this module for me is that it mainly works with the application load balancer; I would not recommend using this module for other cases. I got some issues with the classic load balancer, and the network type doesn’t seem to work well enough, but I saw some PRs if I remember well, so starting by reviewing those could be a good beginning.
Just saw it thank you for the efficiency
Np — I can usually get to things if people complain loudly enough, but overall there is a continuous flood of PRs each week so it’s easy to miss em. Particularly for modules that we / I don’t actively use.
does anyone have any recommended cloudwatch alarms for redshift?
depends a lot on your use-case. Personally, we only alarm on the cluster health metric
if a service wants more specific alarms, it’s done at the application level
makes sense i am trying to work out the healthy alarm
specifically the threshold
resource "aws_cloudwatch_metric_alarm" "unhealthy_status" {
alarm_actions = [aws_sns_topic.this.arn]
alarm_description = "The database has been unhealthy for the last 10 minutes."
alarm_name = "${var.redshift_cluster_name}_reporting_unhealthy"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "1"
metric_name = "HealthStatus"
namespace = "AWS/Redshift"
ok_actions = [aws_sns_topic.this.arn]
period = "600"
statistic = "Maximum"
threshold = "0"
dimensions = {
ClusterIdentifier = var.redshift_cluster_id
}
}
is this right? it does not feel right
You want > threshold, not >=
unhealthy is zero though
Any value below 1 for HealthStatus is reported as 0 (UNHEALTHY).
oops, I had it backwards. Well 1 is healthy then. So you want Less Than a threshold of 1
or Less Than Or Equal To 0
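So, putting the thread together, the fix to the alarm above would be either of:
comparison_operator = "LessThanThreshold"
threshold           = "1"

# or equivalently
comparison_operator = "LessThanOrEqualToThreshold"
threshold           = "0"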
makes sense
thanks man
hope you’re well too
ty u2
Afternoon, can anybody suggest why my ASG created via terraform-aws-modules/autoscaling/aws/4.1.0 doesn’t force a new ASG to be built when the launch configuration changes?
paraphrased terraform apply output…
# module.asg.aws_autoscaling_group.this[0] will be updated in-place
  ~ launch_configuration = "my-asg-2000001" -> (known after apply)
module.asg.aws_autoscaling_group.this[0]: Modifications complete after 2s
But at no point are the existing instances replaced. I can see the module has create_before_destroy in it. Any idea what I’m missing?
looks like this is expected. Although, I’m unclear if this behaviour is due to a difference in functionality between launch configurations and launch templates. I vaguely remember reading that one of them can’t be updated in place. I’ve always used a custom module that changes the ASG when the template changes.
I’ll have to look at the difference between our custom module and the registry-based one. Would still welcome any comments or pointers while I’m doing this comparison.
Looks like my custom module uses the hack mention here: https://github.com/hashicorp/terraform-provider-aws/issues/4100
name = "asg-${aws_launch_configuration.this.name}"
but I assume I can’t do that on the module because it will be circular.
For anybody that has the same problem/question: I’ve found a way to do it. Basically, call the module twice. Once to create the launch configuration or launch template, with the module set not to create the ASG:
module "asg_config" {
source = "terraform-aws-modules/autoscaling/aws"
version = "~> 4.0"
create_lc = true
create_asg = false
name = "${var.client}-${var.service}-asg"
then use a second module to create the auto scaling group
module "asg" {
source = "terraform-aws-modules/autoscaling/aws"
version = "~> 4.0"
launch_configuration = module.asg_config.launch_configuration_name
use_lc = true
create_asg = true
I still seem to have an issue where the module creates the new ASG but doesn’t wait for the node to come up before destroying the old one.
module.asg.aws_autoscaling_group.this[0]: Creation complete after 2s
module.asg.aws_autoscaling_group.this[0]: Destroying...
So not perfect but I’ll keep looking for a solution on that one. Hope this helps somebody else.
Did you ever try configuring the instance_refresh block?
Hi Tim, apologies, only just seen this. I’ve not configured the instance_refresh block on this module but have used it before. What’s your question?
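For reference, a minimal sketch of what that could look like, assuming the module version in use exposes an instance_refresh input that maps onto the aws_autoscaling_group block of the same name:
module "asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "~> 4.0"
  # ...

  instance_refresh = {
    strategy = "Rolling" # replace instances in place, in batches
    preferences = {
      min_healthy_percentage = 50
    }
  }
}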
TIL, if trying to see all the validation warnings instead of the summarized “9 more similar warnings elsewhere”:
terraform validate -json | jq '.diagnostics[] | {detail: .detail, filename: .range.filename, start_line: .range.start.line}'
n00b question incoming … can someone explain to me exactly what the below actually means please …
Requester VPC (vpc-03a0a62a6d1e42513) peering connection attributes:
DNS resolution from accepter VPC to private IP
Enabled
2021-04-29
hey, hopefully an easy one to answer! although i can’t get the correct syntax
I have the following resource outputs:
aws_efs_file_system.jenkins-efs.id
aws_efs_access_point.jenkins-efs.id
I need to string them together so they appear in the following format in the deployed resource:
volume_handle = aws_efs_file_system.jenkins-efs.id::aws_efs_access_point.jenkins-efs.id
can you try:
volume_handle = format("%s::%s", aws_efs_file_system.jenkins-efs.id, aws_efs_access_point.jenkins-efs.id)
but tf doesn’t like the ::
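FWIW plain interpolation should also work here, since the :: is just literal text inside the string (same resource names as above):
volume_handle = "${aws_efs_file_system.jenkins-efs.id}::${aws_efs_access_point.jenkins-efs.id}"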
Has anybody done anything with the aws resource aws_transfer_server and EFS? I can see support was added in provider Release v1.36.22 https://github.com/hashicorp/terraform-provider-aws/issues/17022 but the documentation on it is non-existent and I’m currently getting
Error: Unsupported argument
on transfer_server.tf line 42, in resource "aws_transfer_server" "this":
42: domain = "EFS"
An argument named "domain" is not expected here.
i am trying to get my head around guardduty master to member relationship. is my understanding below true …
if we have account X (a member account) which uses region 1 and 2 that means in the master account we need to enable a detector in region 1 and 2
Question: In the master account do we need to set up aws_guardduty_member per region for account X?
AFAIU, yes.
The per region part is painful. I know that the Cloud Posse folks have automated some of that via turf: https://github.com/cloudposse/turf
CLI Tool to help with various automation tasks (mostly all that stuff we cannot accomplish with native terraform) - cloudposse/turf
Because doing so via Terraform is very painful supposedly.
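For a sense of why: a minimal sketch of the pure-Terraform approach needs one provider alias plus one detector and member resource per region (account details below are illustrative):
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

resource "aws_guardduty_detector" "use1" {
  provider = aws.use1
  enable   = true
}

resource "aws_guardduty_member" "x_use1" {
  provider    = aws.use1
  detector_id = aws_guardduty_detector.use1.id
  account_id  = "111111111111" # member account X (hypothetical)
  email       = "[email protected]"
  invite      = true
}

# ...and the same blocks again for every other enabled region.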
Does atmos have the ability to run terraform workflows in parallel? (ie, sibling root modules that aren’t dependent on each other)
Good question — I don’t believe so. But might be a good one to add to the feature request list? cc @Andriy Knysh (Cloud Posse) + @Erik Osterman (Cloud Posse)
We have some scripts/ansible to run multiple components in parallel, but it’s ugly
Yeah — I can imagine so. I would think that atmos very likely could support that considering it’s a golang binary under the hood (created by Variant) and doing things in parallel like that is one of golang’s biggest selling points, but I’m pretty sure it’s not supported today.
That’s what I thought, but wasn’t sure if that was just due to the documentation being very new and in-progress
it supports running one workflow at a time with sequential steps
I would say our primary focus is using CD platforms to parallelize the runs. Atmos is primarily focused on local execution during development.
Parallel execution is limited by policies. E.g. in spacelift, we use rego policies to determine executions. We don’t want to re-implement that in Atmos - out of scope.
This made me smile
“No schema found …” warning removed, as schema is far more likely to be available now (#454)
https://github.com/hashicorp/terraform-ls/releases/tag/v0.16.0
Hey all. If I was using this example and added another worker group, what do I need to do to ensure some pods only deploy to worker group alpha while the others go to worker group bravo?
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
is that where i’d set kubernetes_labels?
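A sketch of the label side, assuming the node groups come from cloudposse/eks-node-group/aws (label key and values are illustrative):
module "node_group_alpha" {
  source = "cloudposse/eks-node-group/aws"
  # ...

  kubernetes_labels = {
    "node-group" = "alpha"
  }
}

# Pods then select the group with a matching nodeSelector in their spec
# (nodeSelector: { node-group: alpha }); add taints/tolerations if the
# separation needs to be enforced rather than best-effort.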
Hola, friends – Ran into an interesting issue while spinning up a multi-account architecture with atmos/terraform. The master account was left out due to a misconfiguration of the tfstate-backend component, so I’ve been trying to import it. Technically, this should be possible, but when you try with atmos, using a command like:
aws-vault exec master-root -- atmos terraform import account aws_organizations_account.organization_accounts[\"master\"] XXXXXXXXXXXX -i -s master
Produces an error like this:
Error: Unsupported attribute on /modules/terraform/terraform-core.variant line 223:
This object does not have an attribute named "region".
[...]
Error: 1 error occurred:
* step "write provider override": job "terraform write override": config "override-contents": source 0: job "terraform provider override": /modules/terraform/terraform-core.variant:223,27-34: Unsupported attribute; This object does not have an attribute named "region"., and 1 other diagnostic(s)
This error seems to come from atmos. The variant file in that location is definitely trying to add a provider with a ‘region’ variable in the config.
I have two questions, really. #1 - Is it possible this is a general problem with doing imports via Atmos? #2 - Is there an easier way to work around the issue where your master account state did not make it into the multi-account tfstate file, but every other account did?
@marc slayton I ran into this issue with an import, and wound up working around it by modifying atmos
If you used the example project here: https://github.com/cloudposse/atmos/tree/master/example, then you can just change the imports at the bottom of cli/main.variant to use your own fork of the terraform module
Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos
I just removed the region option from the terraform-core.variant file in the “terraform provider override” step
Issue was opened here: https://github.com/cloudposse/atmos/issues/36
Found a bug? Maybe our Slack Community can help. Describe the Bug The following error is observed when attempting a terraform import: atmos terraform import account aws_organizations_organizational…
Perhaps tomorrow you can fill me in on how you got iam-primary-roles to apply? I am having some trouble with it, though I am not working from a fresh set of accounts. I am porting our accounts from the old reference architecture standup
Issues with assuming roles, and I am curious how you configured things
RE: iam-primary-roles – sure, happy to. Which version of terraform are you using? Might be this weekend or early next week, but I’m game. Thanks for the clue, btw. I was thinking more-or-less the same thing with regard to the variant wrapper, but I hadn’t dug in that deeply yet. Good to see the community is right there on top of these things.
I’ll have to check exactly which patch version, but it is TF 0.14.x. Thanks! Just looking for some broad guidance if you’ve gotten further along. I’ve been making progress, but a bit slow
I should mention that I’m standing up an identity account, and it is having difficulty assuming the correct role
Honestly, it hasn’t been as bad as I’d thought. I’ve been taking a lot of notes and working on some tutorial material for friends and co-workers. That part has been a little slow, but overall I’d say the Atmos approach has given me a net savings. It takes vastly less time to configure and deploy 90%-prebuilt modules than to write custom ones from scratch.
When you say the identity account is having trouble assuming the proper role, I’m not sure I understand. From your earlier post, I’m assuming you mean you have something like an automated system account (e.g. ci/cd) which is trying to assume an IAM role defined within the identity account. Primary roles are basically just roles that are not designed to be delegated to another user. They are roles that an automated service might take on, e.g. to run a delivery pipeline, or build more infrastructure. These are generally things you don’t do in the same identity account, however. Instead, they are roles your system user assumes to take on work in another account that trusts your service with permissions to do the necessary task. LMK if this is all really obvious. I’ve been writing tutorials all week, so forgive me if I am belaboring trivial points. :0)
@marc slayton thanks - I am actually not at the point of running ci/cd to do this. This is a manual bootstrap, which is perhaps a bit confused by the fact that it is running against the existing accounts previously stood up with the ref arch (except that the identity and dns accounts are new). I think I am doing something stupid - do you have, even in draft form, your tutorial I could look at, or a set of example stack files that got you all the way to the VPC?
Yes, I’m putting that together hopefully this weekend. It’s a side project for me to get some better reference docs going. RE: your problem with assuming iam-primary-roles: When you try to assume a role with your ci/cd user, what happens? Do you get an error message? Have you tried it manually using the awscli? Usually, the error message is a good clue as to what’s happening.
Also might help to see a plan file. That can sometimes reveal the issue.
Apologies, my comments were not very clear. It was a long day. I was doing this from the root account, with the primary_account_id set to the identity account, and the stage set to identity. I am currently in an assumed admin role (from the existing infra), called crl-root-admin. When I attempt to plan the iam-primary-roles component, the backend is unable to be configured:
error configuring S3 Backend: IAM Role (arn:aws:iam:::role/crl-gbl-root-terraform) cannot be assumed
I think I just need to read this component a bit more today
This looks like an issue with tfstate_assume_role being defaulted to true, though I thought I had tried disabling that before and using the existing admin role
One potential problem I see is your role definition has no account in it, e.g.:
arn:aws:iam:::role/crl-gbl-root-terraform
To be assumable, this should be a fully qualified ARN with the account number filled in.
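For reference, a filled-in role ARN looks like this (account number illustrative; note that IAM ARNs always leave the region field empty):
arn:aws:iam::123456789012:role/crl-gbl-root-terraform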
You might want to check your info under the stack definition.
Right - I don’t think it should even be attempting to use that one though. I’m trying to force it to use the admin role arn (with account id and region) that I am currently assumed as. I’m not sure where the tfstate-context is being lost
This was the proximate cause: https://sweetops.slack.com/archives/CB84E9V54/p1619810307034700?thread_ts=1614076395.005800&cid=CB84E9V54. I used the same workaround. I ran into a number of other issues that I’ve been able to resolve so far, and apply this component. I am making notes on those for tickets, or to add to any tutorial if you are sharing publicly.
@Matt Gowie this is one of the things I ran into and had a question about as well. I did the same thing as Mathieu. Is the longer term intent to do something like terraform-null-label but for the tfstate-context?
2021-04-30
i am struggling to work out how to fix a circular dependency between an SQS queue and the policy it uses, because the policy needs the ARN of the SQS queue itself
can you construct the arn, or is there randomness in the arn on creation?
i guess i can construct the arn
it’s quite hacky but it works
that’s usually how i address these things
why didn’t i think of that
thanks dude, you’re the best!
lol, you had the right default, always prefer to reference an attribute! this is just an edge case where that doesn’t work…
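For reference, a minimal sketch of the construct-the-ARN approach (queue name and the SNS principal are illustrative):
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

locals {
  queue_name = "example-queue" # hypothetical
  queue_arn  = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${local.queue_name}"
}

data "aws_iam_policy_document" "queue" {
  statement {
    actions   = ["sqs:SendMessage"]
    resources = [local.queue_arn] # built by hand, so no cycle

    principals {
      type        = "Service"
      identifiers = ["sns.amazonaws.com"]
    }
  }
}

resource "aws_sqs_queue" "this" {
  name   = local.queue_name
  policy = data.aws_iam_policy_document.queue.json
}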
We had the same problem with secrets and KMS. You can have the principal be * and use a condition.
Look at how AWS does it: https://docs.aws.amazon.com/kms/latest/developerguide/services-secrets-manager.html
Learn how AWS Secrets Manager uses AWS KMS to encrypt secrets.
i’ll second that, the default aws kms policies are really great to study how resource policies work
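A sketch of that principal-*-plus-condition pattern as a policy statement, modeled on the AWS-managed Secrets Manager key policy linked above:
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_iam_policy_document" "key" {
  statement {
    actions   = ["kms:Decrypt", "kms:GenerateDataKey*"]
    resources = ["*"] # in a key policy, "*" means this key

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    # only callers from this account...
    condition {
      test     = "StringEquals"
      variable = "kms:CallerAccount"
      values   = [data.aws_caller_identity.current.account_id]
    }

    # ...and only via Secrets Manager
    condition {
      test     = "StringEquals"
      variable = "kms:ViaService"
      values   = ["secretsmanager.${data.aws_region.current.name}.amazonaws.com"]
    }
  }
}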
Hello. I am starting to build a new environment from scratch so I can migrate my old one into an area that doesn’t have the quirks of a console buildout. I am using cloudposse/vpc/aws and cloudposse/dynamic-subnets/aws. Currently only two subnets have public access; the rest go through 1 of 2 NATGWs. I also don’t need a public-facing subnet for each private one. I don’t think, in terms of money, this would end up costing that much; curious if others have considered this.
It may be a big nothing burger, happy to hear that as well.
NAT gateways are like $20/mo. Is that really an issue to pay for three?
Like, even if it represents a big percentage of your bill, is it that much money out of your total operational budget?
Nope, it’s not, that’s why I asked here. Thank you @Alex Jurkiewicz. I appreciate when someone has that number in their head. Exactly what I was thinking.