#terraform (2023-05)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-05-01
2023-05-02
Hi, I am looking at how to dynamically create resources in multiple regions, but as far as I can see it is not supported yet by Terraform. Has anyone tried any workaround, as I have over 3000 resources to create across multiple regions?
Terraform Version
v0.11.1
Terraform Configuration Files
variable "regions" {
default = [
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"ap-northeast-1",
"ap-northeast-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-south-1",
"sa-east-1"
]
}
provider "aws" {
count = "${length(var.regions)}"
alias = "${element(var.regions, count.index)}"
region = "${element(var.regions, count.index)}"
profile = "defualt"
}
resource "aws_security_group" "http-https" {
count = "${length(var.regions)}"
provider = "aws.${element(var.regions, count.index)}"
name = "http-https"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
Expected Behavior
Creating a security group in each AWS region.
Actual Behavior
Planning/Applying fails with
Error: Error asking for user input: 1 error(s) occurred:
* aws_security_group.http-https: configuration for aws.${element(var.regions, count.index)} is not present; a provider configuration block is required for all operations
Steps to Reproduce
terraform init
terraform apply
best option i can think of would be something like cdktf to generate the .tf configs for you
are you creating the same set of resources in many regions? Or put another way, do you have one set of resources which are created repeatedly with only minor differences?
The best approach would be to split up your resources into many stacks. 3000 resources is too many for a single stack, and you will suffer a lot of operational pain if you keep them together.
Apologies, I meant 300. Yes, I am creating the same resources in many regions. A typical example of what I am trying to do is shown below. I know that we cannot use count or for_each in providers - is there any workaround?
variable "regions" {
default = [
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"ap-northeast-1",
"ap-northeast-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-south-1",
"sa-east-1"
]
}
provider "aws" {
count = "${length(var.regions)}"
alias = "${element(var.regions, count.index)}"
region = "${element(var.regions, count.index)}"
profile = "defualt"
}
resource "aws_security_group" "http-https" {
count = "${length(var.regions)}"
provider = "aws.${element(var.regions, count.index)}"
name = "http-https"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
here is a module that is setup for multi-region. give it a good study to see how to do multi-region in pure terraform… https://github.com/nozaq/terraform-aws-secure-baseline
Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations and AWS Foundational Security Best Practices.
Thank you. I will check this out
In your example, the only thing that is changing is the region, so you could run the same code multiple times and have the provider accept a variable for the region, with a different value on each run.
You'll need to take care with the state file, as you'll end up overwriting it if all you change is the region; but if you use terraform workspaces, then you can have a state file per region.
You also would not need to use a provider alias in this case, as each state file would only contain the resources of one account/region, all created by one provider.
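A minimal sketch of that approach (assuming one workspace per region; variable and resource names here are illustrative):

variable "region" {
  type = string
}

provider "aws" {
  region = var.region
}

resource "aws_security_group" "http_https" {
  name = "http-https"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then run something like terraform workspace new us-east-1 followed by terraform apply -var region=us-east-1 for each region, so every region gets its own state file.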
Thank you for this
2023-05-03
New Podcast - theiacpodcast.com Hi all, my name is Ohad Maislish and I am the CEO and co-founder of www.env0.com We launched our new podcast about IaC yesterday, and 3 episodes are already live - with amazing guests such as the CEO of Infracost.io and the CTO of aquasec.com (tfsec+trivy OSS)
Hey Ohad. Cool, I’ll check it out. Super relevant to what we’re working on (in fact, been looking into env0 as well)
2023-05-04
@Erik Osterman (Cloud Posse) I see there are already 2 open PRs on the S3 bucket module. The issue is blocking new deployments. It would be much appreciated if one of the 2 solutions were merged into main. https://github.com/cloudposse/terraform-aws-s3-bucket/pulls
│ Error: error creating S3 bucket ACL for bucket: AccessControlListNotSupported: The bucket does not allow ACLs
What you can do is upgrade the s3-logs version in the module's main.tf internally, from 0.26.0 to 1.1.0, until the PR is merged.
Or the hard way:
• terraform init
• sed -i "s/0.26.0/1.1.0/" …
• terraform init
• terraform apply
Thanks @JoseF
@Max Lobur (Cloud Posse) can we prioritize rolling out the release branch manager to this repo
(and any S3 repos)
@Jeremy G (Cloud Posse) let’s discuss on ARB today
Modules terraform-aws-s3-bucket and terraform-aws-s3-log-storage have been updated to work with the new AWS S3 defaults. Other modules dependent on them should be updated soon.
Thanks @Jeremy G (Cloud Posse)
Thanks for the quick response and fix
Running into a weird error when creating a bucket.
module "s3_bucket" {
source = "cloudposse/s3-bucket/aws"
version = "3.1.0"
acl = "private"
context = module.this.context
kms_master_key_arn = module.kms_key.alias_arn
sse_algorithm = var.sse_algorithm
}
│ Error: error creating S3 bucket (xxxx) accelerate configuration: UnsupportedArgument: The request contained an unsupported argument.
│ status code: 400, request id: XZMWPBJNR1DEBXJQ, host id: CViaAfJ5ZhqVM2t6XViRKzlz+SATKo38dDxSISOQ3nihJM3K6qyWoBVizpP+ywZPrugDBbii/wQ=
│
│ with module.s3_bucket.aws_s3_bucket_accelerate_configuration.default[0],
│ on .terraform/modules/s3_bucket/main.tf line 48, in resource "aws_s3_bucket_accelerate_configuration" "default":
│ 48: resource "aws_s3_bucket_accelerate_configuration" "default" {
Yes, that is weird, and I cannot reproduce it, so I will need more information if you want me to investigate further.
Reading this https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html#transfer-acceleration-requirements
Get faster data transfers to and from Amazon S3 with Amazon S3 Transfer Acceleration.
i don’t see govcloud listed
I guess that means it’s not supported?
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) Please review and approve s3-bucket PR 180 to fix @Michael Dizon’s problem above.
what
• Revert change to Transfer Acceleration from #178
why
• Transfer Acceleration is not available in every region, and the change in #178 (meant to detect and correct drift) does not work (throws API errors) in regions where Transfer Acceleration is not supported
@Michael Dizon I hope this is fixed in v3.1.1. Please try it out and report back.
@Jeremy G (Cloud Posse) yeah that worked!! thank you! can terraform-aws-s3-log-storage get that bump also?
The PR is approved, but right now cannot be merged and a new release cut because of another GitHub outage
terraform-aws-vpc-flow-logs-s3-bucket
v1.0.1 released
terraform-aws-lb-s3-bucket
v0.16.4 released
AWS Lattice support ready! https://sweetops.slack.com/archives/CHDR1EWNA/p1683141452849139
Description
Support for recently announced VPC Lattice
• https://aws.amazon.com/blogs/aws/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/
• https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonvpclatticeservices.html
• https://awscli.amazonaws.com/v2/documentation/api/latest/reference/vpc-lattice/index.html?highlight=lattice
Requested Resource(s) and/or Data Source(s)
☑︎ aws_vpclattice_service
☑︎ aws_vpclattice_service_network
☑︎ aws_vpclattice_service_network_service_association
☑︎ aws_vpclattice_service_network_vpc_association
☑︎ aws_vpclattice_listener
☑︎ aws_vpclattice_listener_rule
☑︎ aws_vpclattice_target_group
☑︎ aws_vpclattice_access_log_subscription
☑︎ aws_vpclattice_auth_policy
☑︎ aws_vpclattice_resource_policy
☑︎ aws_vpclattice_target_group_attachment
Potential Terraform Configuration
TBD
References
• https://aws.amazon.com/blogs/aws/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/
• https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonvpclatticeservices.html
• https://awscli.amazonaws.com/v2/documentation/api/latest/reference/vpc-lattice/index.html?highlight=lattice
Would you like to implement a fix?
None
v1.5.0-alpha20230504 (May 4, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…
2023-05-05
v1.5.0-alpha20230504 (May 4, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…
i find it funny they are described this way (which is pretty accurate) but the truth is if one of the checks fails, execution continues
https://github.com/cloudposse/terraform-aws-elasticache-memcached
curious to know why we can’t modify the security group created by this module. (everything should be known at plan time)
Terraform Module for ElastiCache Memcached Cluster
2023-05-06
2023-05-08
Anyone familiar with the Confluent Terraform provider? I am unable to get confluent_kafka_cluster_config resources working. I always get this error:
error creating Kafka Config: 400 Bad Request: Altering resources of type BROKER is not permitted
Some information if anyone else runs into this issue: https://github.com/confluentinc/terraform-provider-confluent/issues/251#issuecomment-1541025164. I am not sure how to set settings on a cluster that is not of type dedicated. I suppose there might be a way to call the REST API to do so.
Oddball question: what are all the possible resource actions for a plan?
When a terraform plan is reported, it includes what will happen to each resource, i.e.:
# aws_security_group_rule.ec2-http will be created
# azurerm_container_group.basics will be destroyed
# azurerm_container_group.basics will be replaced
With emphasis on created, destroyed, and replaced.
Are there any other options?
Would you happen to know the source file in the repo that contains the options? (I’ll be digging into the repo in a sec)
I believe that’s it, although there are interesting sub-types, like how dangling resources get deposed
I asked my good friend Chat, Chat GPT, and this is what he came back with. The response sounds reasonable but I have yet to validate…
In Terraform, there are several resource actions that can be reported in a plan. The most common ones are:
• Created: A new resource will be created.
• Updated: An existing resource will be updated.
• Replaced: An existing resource will be replaced with a new one.
• Destroyed: An existing resource will be destroyed.
• No changes: The resource has not changed since the last Terraform run.
Additionally, there are a few less common actions that may appear in a plan:
• Tainted: A resource has been marked as tainted and will be recreated on the next Terraform run.
• Imported: An existing resource has been imported into Terraform state.
• Ignored: A resource has been ignored due to the ignore_changes setting in the configuration.
• Moved: A resource has been moved to a different location within the infrastructure.
It’s worth noting that some of these actions, such as “tainted” and “ignored”, are specific to Terraform and not used in other infrastructure-as-code tools.
@managedkaos just curious, what are you building?
A plan parser for GitHub actions. I came across one and didn’t like it so I thought I’d put one together. Doesn’t have to be API complete but wanted to cover as much ground as possible.
(That was going to be my guess!)
If you publish it, share it!
I came across one and didn’t like it
I have seen quite a few of them, though…
I will admit I didn’t look too far.
But yeah, I’ll definitely share when it comes together.
It’ll be a while before i have a working “action” (i’m just scripting in a workflow at the moment) but here’s a preview:
clearly my summaries are off
Sharing some of my related links:
GitHub actions for terraform
This article was originally published in November 2022 on my GitHub io blog here
Purpose After working with GitHub Actions as my Terraform CI pipeline over the past year, I started looking for potential methods to clean up the Plan outputs displaye…
Also, @loren shared this one before: https://suzuki-shunsuke.github.io/tfcmt/
Build Status
Based on https://github.com/mercari/tfnotify
A CLI command to parse Terraform execution result and notify it to GitHub
@managedkaos did you end up building anything? we’re looking into this right now.
I did not build a complete, deployable action… only some awk code that parses the plan and translates it to Markdown for display in a GitHub Actions job summary. Will see if I can find the code for reference…
I'm doing all the plan analysis in plain text, but I know it could be better using a JSON plan. Just had to move on to another project.
Hi All. I'm trying to put tags on an autoscaling group. I would like them to propagate to the underlying EC2 instances. So I put this block in:
tag = {
  key                 = "service"
  value               = "prometheus_server"
  propagate_at_launch = true
}
you’re getting confused between attributes and blocks, one of Terraform’s warts (IMO)
attributes are set in a resource like:
key = val
blocks are set like:
block {
# attribs...
}
tag is documented as being a block, so remove the =
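In other words, something like this (a sketch - the rest of your aws_autoscaling_group arguments stay as they are):

resource "aws_autoscaling_group" "prometheus-server" {
  # ... existing arguments ...

  tag {
    key                 = "service"
    value               = "prometheus_server"
    propagate_at_launch = true
  }
}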
uuughh.. thanks… I figured it was something I was doing. Just didn't know what.
On the aws_autoscaling_group resource.
It complained that
╷
│ Error: Unsupported argument
│
│ on prometheus.tf line 93, in resource "aws_autoscaling_group" "prometheus-server":
│ 93: tag = {
│
│ An argument named "tag" is not expected here. Did you mean "tags"?
So I used tags instead, but then I got
│ Warning: Argument is deprecated
│
│ with aws_autoscaling_group.prometheus-server,
│ on prometheus.tf line 93, in resource "aws_autoscaling_group" "prometheus-server":
│ 93: tags = [{
│ 94: key = "service"
│ 95: value = "prometheus_server"
│ 96: propagate_at_launch = true
│ 97: }]
│
│ Use tag instead
Versions I'm using:
Terraform v1.4.6
on darwin_arm64
+ provider registry.terraform.io/grafana/grafana v1.36.1
+ provider registry.terraform.io/hashicorp/aws v4.66.1
+ provider registry.terraform.io/hashicorp/helm v2.9.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.20.0
+ provider registry.terraform.io/hashicorp/tls v4.0.4
does `which terraform` version return 1.4.6?
Maybe something we can bring up on #office-hours today
2023-05-09
2023-05-10
wondering if anyone has thoughts or design references for creating 2 S3 buckets - one source bucket and one replication bucket - in a single module, using the terraform-aws-s3-bucket module.
Generally you want the target bucket to be in a separate region, which means a separate AWS provider, so you have to decide how you want to organize that. You can have a root module with 2 providers, pass the 2nd one into s3-bucket to create the replication destination, then pass the output of that into another s3-bucket instantiation using the default AWS provider to create the primary bucket.
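Roughly like this (a sketch - the replication input names are from memory, so double-check them against the module's variables):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "replica"
  region = "us-west-2"
}

# Destination bucket in the replica region
module "replica_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "3.1.0"

  providers = {
    aws = aws.replica
  }

  name    = "example-replica"
  context = module.this.context
}

# Primary bucket replicating to the destination
module "source_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "3.1.0"

  name                   = "example-source"
  s3_replication_enabled = true
  s3_replica_bucket_arn  = module.replica_bucket.bucket_arn
  context                = module.this.context
}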
@Erik Osterman (Cloud Posse)
since the target bucket accepts the outputted replication_role_arn from the source, and the s3_replication_rules references the target bucket, which doesn’t yet exist
Folks who use Atlantis for Terraform Self Service - what pains you the most?
We are building an Open Source GitOps tool for Terraform (https://github.com/diggerhq/digger) and are looking for what’s missing. We also read & asked around. We found the following pain points already, curious for more:
- In Atlantis, anyone who can run a plan can exfiltrate your root credentials. This was talked about by others and was highlighted at the Defcon 2021 conference. (CloudPosse)
- “Atlantis shows plan output, if it’s too long it splits it to different comments in the PR which is not horrible, just need to get used to it.” (User feedback)
- Anyone that stumbles upon your Atlantis instance can disable apply commands, i.e. stopping production infrastructure changes. This isn’t obvious at all, and it would be a real head scratcher to work out why Atlantis suddenly stopped working! (Loveholidays blog)
- “Atlantis does not have Drift Detection.” (Multiple users)
- “The OPA support in atlantis is very basic.” (Multiple users) As CloudPosse themselves explain - “Atlantis was the first project to define a GitOps workflow for Terraform, but it’s been left in the dust compared to newer alternatives.” The problem though is that none of the newer alternatives are Open Source, and this is what we want to change. Would be super grateful for any thoughts/insights and pain points you have faced.
Running a Terraform plan on untrusted code can lead to RCE and credential exfiltration.
Here at loveholidays we are heavily dependent on Terraform. All of our Google Cloud infrastructure is managed using Terraform, along with a…
Digger is an open source Terraform Cloud alternative. Digger allows you to run Terraform plan / apply in your CI. No need for separate CI tool, comes with all batteries included
Support for CodeCommit would be nice
1.- anyone that can run terraform can exfiltrate your root credentials, not an atlantis problem per se
i think it’s a CI problem. if i’m running locally, i already have credentials… but if i’m getting the CI credentials, that’s not great
also not sure what might be done to stop that, outside of config parsing and policy enforcement tool
3.- put the UI under OIDC etc. and give access only to the people that need it
4.- it is being developed, and there is a github action you could test https://github.com/cresta/atlantis-drift-detection
Detect terraform drift in atlantis
5.- you can run conftest , what is basic about it?
Ask Cloudposse again about the status of Atlantis - I'm pretty sure they have changed their minds. Check the releases as well; we have been updating the code pretty often
by the way I’m one of the Atlantis maintainers
Reading the documentation of Digger, they have a comparison with open source and enterprise tools
That's the best thing
If they work on solving the issues with both, soon they will attract people to use it
Does Digger have authentication control for VCS providers?
Like Bitbucket user authentication, which right now is not available in Atlantis
They only support GitHub
Terragrunt integration?
Does Digger provide PR automation like Atlantis does?
Hello, I have set up provisioned concurrency with scheduled scaling for my lambda. However, successive terraform runs cause the error: Error updating lambda alias: ResourceConflictException: Alias can't be used for provisioned concurrency configuration on an already provisioned version. Is this something anyone else has run into?
Hello all, I was wondering if anyone has had success delivering developer permissions within AWS SSO, with proper guardrails for permissions in a build system that runs terraform. I also acknowledge it is slower to iterate on Terraform changes when you have to check in a change and run a build each time. Maybe others have found success in the balance between security and speed
I like my aws SSO setup and permissions, but as a small company we just have permissions like DataTeam/Engineer/Admin for different aws orgs that are our environments, where engineer is read/write to most things we use.
no experience running terraform in a build system as I’m the only one that does it
@Michael Galey thank you for the input, I’m definitely of the mind to keep things simple as possible
as for running/not running in a build system, I get it - for your use case, as the only one doing ops-y things, it might not be a priority until the team grows
there’s also compliance/security stuff, but that’s another thing that needs to be prioritized
for my own auditing/backup info, i also log all runs and send diffs as events into datadog, the diffs sometimes are beyond the char limit tho
oh, that’s very cool, just a homegrown wrapper around terraform?
yea, i also wrap other terraform stuff to use SSM parameters/aws sso
from my ‘tfa’ (terraform apply)
so it also catches/warns of destructive stuff, and asks for an extra confirm later on
the beauty of bash, very cool stuff, I have a similar alias, but for aws sso+aws-vault+terraform
what do you need aws-vault for? sso means it doesn’t have to vault anything right? or you store even the temp creds in the vault?
I think it’s more habit than anything now to keep aws-vault around. Used it prior to AWS SSO
I have to fix my blog, but I created a post about terraform wrappers https://github.com/jperez3/taccoform-blog/blob/master/hugo/content/posts/TF_WRAPPER_P1.md
+++
title = "Terraform Wrappers - Simplify Your Workflow"
tags = ["terraform", "tutorial", "terraform1.x", "wrapper", "bash", "envsubst"]
date = "2022-11-09"
+++
Overview
Cloud providers are complex. You’ll often ask yourself three questions: “Is it me?”, “Is it Terraform?”, and “Is it AWS?” The answer will be yes to at least one of those questions. Fighting complexity can happen at many different levels. It could be standardizing the tagging of cloud resources, creating and tuning the right abstraction points (Terraform modules) to help engineers build new services, or streamlining the IaC development process with wrappers. Deeper understanding of the technology through experimentation can lead to amazing breakthroughs for your team and business.
Lesson
• Terraform Challenges
• Terraform Wrappers
• Creating Your Own Wrapper
• Wrapper Example
Terraform Challenges
As you experiment more with Terraform, you start to see where things can break down. Terraform variables can’t be used in the backend configuration, inconsistencies grow as more people contribute to the Terraform codebase, dependency differences between provisioners, etc.
Terraform Wrappers
You may or may not be familiar with the fact that you can create a wrapper around the terraform binary to add functionality, or that several open source terraform wrappers have existed for several years already. The most well-known terraform wrapper is terragrunt, which was ahead of its time, filling in gaps in Terraform's features and providing things like provisioning entire environments. I tried using terragrunt around 2016 and found the documentation to be confusing and incomplete. I encountered terragrunt again in 2019 and found it to be confusing and frustrating to work on. I didn't see the utility in using a wrapper and decided to steer away from wrappers, favoring "vanilla" terraform. I created separate service workspaces/modules and leaned heavily into tagging/data-sources to scale our infrastructure codebase. In 2022, we've started to support developers writing/maintaining their own IaC with our guidance. In any shared repo, you will notice that people naturally have different development techniques and/or styles. We're all about individuality when it comes to interacting with people, but cloud providers are less forgiving. Inconsistencies across environments slow down teams, destroy deployment confidence, and make it very difficult to debug problems when they arise.
Creating Your Own Wrapper
It may be difficult to figure out at first, but only you and your team know the daily pain points when dealing with terraform. You also know how organized (or disorganized) your IaC can be. At the very least, the following requirements should be met:
- You have a well-defined folder structure for your terraform workspaces. This will allow you to cherry-pick information from the folder path and predictably source scripts or other files.
- Your modules and workspaces have a 1:1 mapping, which means for every folder with terraform files, you're only deploying one terraform module. No individual resource definitions are created. This helps with keeping consistency across environments.
Once you've gotten the prereqs out of the way, you can start thinking about what you want the wrapper to do that isn't already built into the terraform binary. Start by picking one or two features and your programming language of choice. You can jump right into using something like python or go, but I would actually recommend starting with bash. It will work on most computers, so you don't have to worry about specific dependencies if you want a teammate to kick the tires on your terraform wrapper. If and when your terraform wrapper blows up with functionality, then you can decide to move it to a proper programming language and think about shoving it into a container image.
Wrapper Example
Organization and Requirements Gathering
I've created a repo called terraform-wrapper-demo and inside I've created a service called burrito. The burrito service has a well-organized folder structure:
burrito
├── modules
│ └── base
│ └── workspace-templates
├── scripts
└── workspaces
├── dev
│ └── base
└── prod
└── base
I also have a 1:1 mapping between my base workspaces and modules. The burrito module is very basic and for demonstration purposes only includes an s3 bucket. If this were a real service, it would have more specifics on compute, networking, and database resources.
Ok, this setup is great, but even with the 1:1 mapping of workspaces to modules, we're still seeing inconsistencies across environments. Some inconsistencies are small, like misspelled tags, and others are big, like a security group misconfiguration. As a member of the team who contributes to the burrito service's codebase, I want things to be consistent across environments. Advancing changes across nearly identical environments gives a developer confidence that the intended change will be uneventful once it reaches production.
It sounds like templates can help mitigate fears of inconsistency across environments. Let’s put together some requirements:
- The wrapper should be similar to the existing terraform workflow to make it easy to use
- Workspace templates should be pulled from a centralized location, injected with environment specific variables, and placed into their respective workspaces.
Starting The Wrapper Script
We want the wrapper script to act similar to the terraform command. So the script will start with a command (the script name), and we'll call it tee-eff.sh. We'll also expect it to take a subcommand. If you're familiar with Terraform, this is stuff like init, plan, apply. Using the script would look something like tee-eff.sh plan.
- Ok, now to start the script; let's begin with the input:
tee-eff.sh
#!/bin/bash
SUBCOMMAND=$1
• Now any argument supplied to the script will be set as the SUBCOMMAND variable.
- Now we can focus on the variables we need to interpolate by looking at the provider.tf file:
terraform {
  backend "s3" {
    bucket = "$TF_STATE_BUCKET_NAME-$ENV"
    key    = "$REPO_NAME/$SERVICE_PATH/terraform.tfstate"
    region = "$BUCKET_REGION"
  }
}

provider "aws" {
  region = "$AWS_REGION"

  default_tags {
    tags = {
      Terraform_Workspace = "$REPO_NAME/$SERVICE_PATH"
      Environment         = "$ENV"
    }
  }
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
  required_version = "~> 1.0"
}
• We'll want to replace any variables denoted with a $ at the beginning with values from our tee-eff.sh script. The same goes for variables in the burrito_base.tf file, which can be found below:
burrito_base.tf
module "$SERVICE_$MODULE_NAME" {
source = "../../../modules/$MODULE_NAME"
env = "$ENV"
}
Note: Things like the backend values and module name cannot rely on terraform variables because those variables are loaded too late in the terraform execution process to be used.
- After we've tallied up the required variables, we can come back to the tee-eff.sh script to set those variables as environment variables:
tee-eff.sh
#!/bin/bash
SUBCOMMAND=$1

echo "SETTING VARIABLES"

# Terraform Backend S3 Bucket
export TF_STATE_BUCKET_NAME='taccoform-tf-backend'

# current working directory matches …
I like wrappers, but also want to be aware of how tangled they can become
thanks for sharing! agreed, my setup would be unfortunate to teach a second person, even though it’s technically cool when understood
I think your setup is good and teachable to another engineer. I think the bigger hurdle is organizing the terraform in a way where you two can actively work in the same environment without stepping on each other's toes
yea for sure
2023-05-11
Hey all, I'm trying to use the terraform module cloudposse/firewall-manager/aws on version 0.3.0. I can't find the right way to add the logging_configuration block in order to use an S3 bucket as the waf_v2_policies direct log destination.
@Andriy Knysh (Cloud Posse) @Dan Miller (Cloud Posse)
@Elad Levi I guess the logging_configuration configuration is part of this variable https://github.com/cloudposse/terraform-aws-firewall-manager/blob/master/variables.tf#L233
logging_configuration:
which then used here https://github.com/cloudposse/terraform-aws-firewall-manager/blob/master/waf_v2.tf#L51
loggingConfiguration = lookup(each.value.policy_data, "logging_configuration", local.logging_configuration)
which, according to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/fms_policy#managed_service_data, can be used to set the logging bucket
something like shown here https://docs.aws.amazon.com/fms/2018-01-01/APIReference/API_SecurityServicePolicyData.html (it’s ugly)
Details about the security service that is being used to protect the resources.
search for loggingConfiguration in the wall of JSON there
you will need to provide a similar JSON string to logging_configuration in variable "waf_v2_policies"
having said that, the module was not updated in 1.5 years and prob needs some love
Example: WAFV2 - Logging configurations
"{\"type\":\"WAFV2\",\"preProcessRuleGroups\":[{\"ruleGroupArn\":null, \"overrideAction\":{\"type\":\"NONE\"},\"managedRuleGroupIdentifier\": {\"versionEnabled\":null,\"version\":null,\"vendorName\":\"AWS\", \"managedRuleGroupName\":\"AWSManagedRulesAdminProtectionRuleSet\"} ,\"ruleGroupType\":\"ManagedRuleGroup\",\"excludeRules\":[], \"sampledRequestsEnabled\":true}],\"postProcessRuleGroups\":[], \"defaultAction\":{\"type\":\"ALLOW\"},\"customRequestHandling\" :null,\"customResponse\":null,\"overrideCustomerWebACLAssociation\" :false,\"loggingConfiguration\":{\"logDestinationConfigs\": [\"arn:aws:s3:::aws-waf-logs-example-bucket\"] ,\"redactedFields\":[],\"loggingFilterConfigs\":{\"defaultBehavior\":\"KEEP\", \"filters\":[{\"behavior\":\"KEEP\",\"requirement\":\"MEETS_ALL\", \"conditions\":[{\"actionCondition\":\"CAPTCHA\"},{\"actionCondition\": \"CHALLENGE\"}, {\"actionCondition\":\"EXCLUDED_AS_COUNT\"}]}]}},\"sampledRequestsEnabledForDefaultActions\":true}"
Thanks @Andriy Knysh (Cloud Posse) The problem for me was that I used this block:
loggingConfiguration = ({
  "logDestinationConfigs" = ["arn:aws:s3:::aws-waf-logs-bucket-name-01"]
  "redactedFields"        = []
  "loggingFilterConfigs"  = null
})
And I should have used this:
logging_configuration = ({
  "logDestinationConfigs" = ["arn:aws:s3:::aws-waf-logs-bucket-name-01"]
  "redactedFields"        = []
  "loggingFilterConfigs"  = null
})
As far as I can see, there is no need to write it as a JSON string, because the loggingConfiguration will be inside jsonencode anyway, as you see here:
managed_service_data = jsonencode({
  type                  = "WAFV2"
  preProcessRuleGroups  = lookup(each.value.policy_data, "pre_process_rule_groups", [])
  postProcessRuleGroups = lookup(each.value.policy_data, "post_process_rule_groups", [])
  defaultAction = {
    type = upper(each.value.policy_data.default_action)
  }
  overrideCustomerWebACLAssociation = lookup(each.value.policy_data, "override_customer_web_acl_association", false)
  loggingConfiguration              = lookup(each.value.policy_data, "logging_configuration", local.logging_configuration)
})
2023-05-12
just had a question regarding custom terraform modules - is it generally considered best practice to "pin" things like the terraform version and provider versions in the module? I feel like that's where it should be done, but I'm just looking for some advice
As the module usage in the README.md files and the examples suggest, yes. You should have pinned versions and a systematic way to control their upgrades, to avoid disruptions due to the latest changes.
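For example, a typical pinning block inside a module looks like this (a minimal sketch; pick constraints that fit your upgrade policy):

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # allow minor/patch upgrades, block the next major version
      version = ">= 4.0, < 5.0"
    }
  }
}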
ok cool, ty @JoseF
there was talk of doing this from terragrunt, which we are using to implement said modules, and I feel like that's very dangerous
Hi, does anyone know how to implement something like this where the user calling the module can say if they want to install the EKS addon and optionally provide configuration for it?
try function?
The issue is you can’t use dynamic in module input params @Hao Wang. Not sure if it’s possible to do differently, but that gives an idea of what I’m trying to accomplish.
Yeah, it is possible - you can pass cluster_addons as a variable; you may need to use locals to compose it
So compose the value in a local, then pass that local to the module
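Something along these lines (a sketch - the module source, addon names, and the install_ebs_csi flag are assumptions; adjust for the EKS module you're actually calling):

locals {
  # compose the addon map conditionally instead of using dynamic blocks
  cluster_addons = merge(
    {
      coredns = {}
    },
    var.install_ebs_csi ? { # hypothetical feature flag
      aws-ebs-csi-driver = {
        most_recent = true
      }
    } : {}
  )
}

module "eks" {
  source = "terraform-aws-modules/eks/aws" # assumed module

  # ...other cluster settings...

  cluster_addons = local.cluster_addons
}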
I’ll have to give it a go, new to Terraform/hcl
got it
Awesome, thanks for the suggestion
I am not having any luck with this at all
Looking to dynamically generate cluster_addons, if anyone has any ideas please let me know
2023-05-13
2023-05-14
2023-05-15
v1.5.0-beta1 (May 15, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…
Hey guys, I am using the EMR Cluster module and it is creating all my master and core instances with the same name. Is there a way to give them unique names to easily identify them?
I set up EMR before but haven't used different names
After reviewing the code, core/master nodes use different labels
module "label_core" {
module "label_master" {
and they use different attributes already in the module
May need to pass a Name tag
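e.g. something like (hypothetical tag value):

tags = {
  Name = "emr-cluster-core" # hypothetical value
}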
I will try it out. Thank you.
np
2023-05-16
Terraform Cloud’s Free tier now offers new features — including SSO, policy as code, and cloud agents — while new paid offerings update scaling concurrency and more.
Wow! Includes cloud agents. Amazing. I’ve been lobbying for this as the free tier was unusable as implemented because it required hardcoding AWS admin credentials as environment variables.
Smart move! cc @Jake Lundberg (HashiCorp)
yeah pretty sweet. might make me think about taking another look at TFE. Even with GitHub Actions et al. you have to hard code creds. Self-hosted runners and dedicated build agents with IAM roles get you around that though. I guess this is Hashi getting on board.
Even with GitHub Actions et. al. you have to hard code creds.
Negative.
Configure AWS credential environment variables for use in other GitHub Actions.
OK ok, I will concede to the OpenID configuration. I guess I am thinking about armchair devs/ops
Yes, the dynamic credentials in TFE/C looks quite a bit like Github cloud credentials.
also, when i say "hard code" i mean repo secrets… which leads to the configure-aws-credentials action
i guess my trade offs for the casual dev/ops people are getting them up and running with an AWS key in secrets or going through another hour or so of training to explain and set up the openid configuration.
But overall the pricing model is moving more towards scale of operations which is why we’ll see more traditional paid features be free for smaller teams. And no SSO tax…that’s table stakes these days.
still no list pricing for larger org plans, I guess it’s hard to give up the spice of “how much do we think you can afford”
Would you pay the list price as a larger org?
yes, with more private workers / concurrency
Just my luck, signed up for a paid TFC plan a few months ago because we needed SSO and Teams management.
2023-05-17
Good Morning! I’m new here so I’m looking for a place where I can jump in and get started. Is this still the best place to start? https://github.com/cloudposse/reference-architectures
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.
Not sure, looks a little old, and some providers had decent sized changes in the last 2 years. If you know terraform already, I found it helpful to read some of their modules and compare to my own that did similar things, understanding how they use the ‘this’ object/context object and pass that around etc.
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.
@Michael Galey Thank you so much!
We launched Digger v4.0 on Product Hunt today! It has been quite a journey from Digger Classic (v1.0), to AXE (v2.0), to Trowel (v3.0) and finally to our current version.
Read more about our iterative journey in the blog and please share your feedback (good and bad) on Product Hunt
Digger is an open source tool that helps you run Terraform in the CI system you already have, such as GitHub Actions.
great, will give it a try, can I use gmail to join slack?
Digger is an open source tool that helps you run Terraform in the CI system you already have, such as GitHub Actions.
Absolutely! This is the link
Cool, joined
What I feel is that your pro features should be open source first, because there are already tools available which support these
Like terrakube, which has already implemented it
Drift detection, OPA integration
Atlantis with custom workflows can add these features
Scalr is also providing all features unlimited for small-business use with a limited number of runs, but they are not restricting you to use their features and buy in
@Utpal Nadiger let me know if I missed anything - I need something I can pick up without pro, but your tool is a competitor for all the open source tools
Which are already fighting to be the alternative to Terraform Cloud
Even Terraform Cloud has enabled features in the free tier, with unlimited user accounts and RBAC agents
And many more
I’m really excited to release Terrateam Self-Hosted today. Full feature parity with our Cloud version. This is our first step to making Terrateam open source. Looking forward to community feedback, feature requests, etc.
https://github.com/terrateamio/terrateam https://terrateam.io/blog/terrateam-self-hosted
Great move!
Thanks!
Terrateam should support Bitbucket
First off, don't depend on GitHub as the only git provider
Congratulations terrateam
2023-05-18
Thanks @kunalsingthakur – We have seriously thought about supporting BitBucket (also GitLab) but our journey hasn’t taken us there yet. I’m curious though. Are you doing anything now with Terraform + BitBucket?
Just chiming into the conversation.
i support one customer running a workload with Bitbucket + TF + AWS.
The Bitbucket pipeline tooling is working pretty well for triggers, branch specification (for build targets), and environments for deployment tracking.
i do like the cloudspend feature of Terrateam though!
Hello, I’m trying to get started with CloudPosse and SweetOps and want to make sure I’m starting at square one. This is the old reference architecture https://github.com/cloudposse/reference-architectures. Has anything replaced it and if not where is a good place to start? Thank you
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.
So our reference architecture is the one thing we hold back. We have some simple examples though
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process.
Here are some lessons on how to implement Atmos in a project.
Building Blocks
https://cloudposse.com/services/ ← our paid reference architecture
We have solutions that fit your scale and budget.
hey guys, I have an issue with the Elasticsearch Cloud Posse module not being able to enable view_index_metadata - is there any way someone can help me out?
or is it a version block, since I am at version 0.35.1 of the module
Managed to enable it, but the way it was done is just dumb, since nowhere did I manage to find the correlation between es:HTTPHead and view_index_metadata
2023-05-22
Hi! I’m looking for advice as I’m new to Terraform projects. My team and I are starting to migrate all our infrastructure to Infrastructure as Code (IaC) using Terraform. The Minimum Viable Product (MVP) consists of approximately 60 microservices. This is because these services have dependencies on certain databases. Currently, we have deployed some of this infrastructure in a development environment. These services are built using different technologies, have different environment variables, and require different permissions. We are using preexisting modules, but we have a separate folder for each service for the particular configurations of each one. We have started a monorepo with a folder for each environment. However, the apply process is taking around 15 minutes, and our project organization is structured as follows:
• Staging
  ◦ VPC
    ▪︎ VPC-1
      • ECS
        ◦ Service-1
        ◦ Service-2
        ◦ Service-3
        ◦ Service-N
      • RDS
        ◦ RDS-1
        ◦ RDS-2
        ◦ RDS-3
        ◦ RDS-N
    ▪︎ VPC-2
      • ECS
        ◦ Service-1
        ◦ Service-2
        ◦ Service-3
        ◦ Service-4
        ◦ Service-N
      • RDS
        ◦ RDS-1
    ▪︎ VPC-3
      • ECS
        ◦ Service-1
        ◦ Service-2
        ◦ Service-3
Can you give me your advice?
I'm very familiar with ECS, but feel it is better to migrate ECS to k8s
one step at time
k8s is not the solution for a lot of companies, it all depends on workload/knowledge level etc
Paula, what advice are you looking for?
if you go monorepo, I will advise making your rds, ecs, alb, etc… modules very flexible so you can stand up any of your services
you could look into using Atmos from cloudposse to make it easier to manage the inputs between all your services/modules
I think the question is “the apply process is taking around 15 minutes”
That's a very slow apply. Why is it slow? There are generally two reasons:
- Too many resources in the Terraform configuration
- Some resources are very slow to change, and perhaps you have several of these with dependencies, causing them to apply linearly
The way to improve things depends on the specifics of this answer.
Definitely option 1. There are many resources and I feel like I'm accidentally making a monolith. Inside each service folder I have the creation of the ECR, the task exec role, the container definition, the load balancer ingress, the service, the task, the pipeline, codebuild and the autoscaling (most of it using cloudposse's modules), with the specific configuration of each service
Answering about k8s: we don't have enough knowledge to migrate (startup mode activated). I'm gonna check out Atmos
gotcha. One way to think about how to split things up is by lifecycle. You may want to separate the Terraform which sets up the environment from the infrastructure which deploys the application. It sort of sounds like you’ve done that, mostly though. Perhaps you might want to split out the pipeline/codebuild to a second configuration, but that might be pointless complexity.
If these stacks are mostly concerned with things that don’t change often, taking 15mins to run is probably fine. How often do you change ingress rules or autoscaling thresholds? It sounds like a stack you spin up once per service and then change a few times a year
I will say from my experience RDS will take the longest, so maybe doing what Alex is suggesting with RDS and having a different deployment lifecycle could be a good option. In my company's projects we do not make RDS/Aurora deployments part of the app, because some actions can take a long time
We have encountered the same issue. Our team mitigates it by splitting the monolith into separate, independently deployed modules. You can use atmos by cloudposse or terragrunt to manage the dependency graph.
Then you don’t need to apply all resources, just apply the related ones when updating.
You can check out infrastructure reference architecture of cloudposse or gruntworks, which offer you insights to mitigate this issue.
@Paula I'm a bit late to the party, but I think the first issue to address is the configuration/folder hierarchy
The fundamental problem is you’ve got the relationship inside out. Instead of thinking of a terraform monorepo that contains configuration for many environments > networks > infrastructure > services…
You should think of services containing their own private infrastructure and deployment configuration (whether that's terraform, kubernetes, ecs service and task definitions, ci/cd config etc) in a service monorepo. Personally I'd put it with the service source code repo, but at least a repo that is privately owned and maintained by that service.
The problem you've got is that you've sliced things horizontally by what they are, not by the service. This is equivalent to a neighbourhood where each house contains one type of thing for everyone. For example, one house contains everybody's chairs. One house contains everybody's beds. One house contains everybody's tables, etc.
But of course in reality each house is independent and private to everyone else, as dictated by the owners/occupants of that house. And within each house things are arranged by use case, not furniture type. For example, a living room could contain chairs, tables, TVs, etc.
The same way, a service should contain and control its own infrastructure, arranged by use case. That's the whole point of service-oriented architecture: maintainers of a service can perform the things they need to do end to end without having to involve other people.
2023-05-23
How do you folks manage terraform provider updates? For example we have a lot of terraform and we prefer to bring everything to the same version across our many repos/state files. We have used lock files and/or manually pinned each provider but this has significant overhead as we need to go to each repo or module and make updates. Would love to know if anyone has found a more optimal solution to this.
I think there are two good options:
1. Just don't bother. Provider updates almost never fix bugs, and if you write code for a new feature it's quite painless to update to the latest version on-demand.
2. Renovate with autoMerge
Renovate documentation.
Thanks
wow Renovate is magic
we’re rolling it out now. But I’d caution that it might just be adding pointless busywork. Does it really matter if you use AWS provider 3.68 instead of 3.72? Is it a waste of time to review PRs upgrading such providers?
And if you enable autoMerge, how do you ensure that problematic upgrades are not merged? With Terraform, your CI probably runs a plan and reports if there are no errors. But “no errors” doesn’t mean “no changes”. A provider upgrade might make significant changes to your resources and it’s very difficult to detect this automatically.
Renovate with Terraform is a bit of a rabbit hole, I’m finding.
Agreed. If you’re concerned about compatibility with shared modules I’d argue to maybe consider the pros and cons of org wide modules.
So since I brought this question up: we had built our own little registry, thinking we could do a proxy that sat in front of the official terraform registry. The way it worked is our proxy allowed us to define what versions are actually available from the official terraform registry. However, where it doesn't work is within modules and anything else that is nested. All the modules we use (e.g. CloudPosse modules) would need to also be configured to pull from our custom provider registry. So we came up with another idea today: https://github.com/GlueOps/terraform-module-provider-versions/blob/main/README.md - we wrote a module that only defines our provider versions, and then reference that module across all our terraform directories.
It’s been 2 hours since we rolled it out and it’s working well so far…
terraform-module-provider-versions
Overview
To use this repo as your source of truth for provider versions, just add a file like this to your terraform directory:
provider_versions.tf:
module "provider_versions" {
  source = "git::https://github.com/GlueOps/terraform-module-provider-versions.git"
}
Note:
• GlueOps uses main for production across all repositories. So please test compatibility as needed on a feature branch.
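Presumably the module body is little more than a required_providers block, something like this (a hypothetical sketch of what such a module could contain):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.67.0" # hypothetical pinned version
    }
  }
}

Since Terraform merges provider requirements from every module in the tree, updating the pin in this one repo constrains version selection everywhere the module is referenced.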
Oh yeah here is the proxy code we came up with: https://github.com/GlueOps/terraform-registry-proxy
versions that it allows right now: https://github.com/GlueOps/terraform-registry-proxy/blob/main/provider-versions.yml
^ we will probably be archiving this repo
Can you talk a bit about what benefit this brings you? What’s the motivation here
Definitely. We calculated we were spending about 3-6 hours a month on just updating all of our terraform directories/repos to use the same/latest terraform providers across all our repos. So speeding up this repetitive process was the primary motivation here.
It's an interesting pattern - I wouldn't have thought about using a module purely for required versions.
I guess to take a step back, what are you solving by making all your configs use the same provider versions?
Plus I assume you'll still need to tf init -upgrade and commit all your .terraform.lock.hcl files
That's a really good point. We will need to revisit if and how we want to handle the .terraform.lock.hcl files.
RE: standardized/updated configs.
So we actually update all our apps each month. We don’t do every last dependency/package but for all the layers we manage, we try to bring it up to the latest release/patch. Historically, we have worked on teams where we usually ignored updates and have found that falling behind on software updates (not specifically terraform) ends up being a huge nightmare at the most inconvenient times. So for a little over the past year we have been doing updates once a month. It takes about a full day for one of us to get it done.
Any hashicorp ambassadors able to promote this small, simple PR: https://github.com/hashicorp/terraform/pull/30121/files
• Enables the ability to create IAM policies that give roles access to state files based on tags in S3 for fine-grained access permissions.
• It’s a tiny/simple PR
fyi…
Hey @lorengordon I’m the community manager for the AWS provider. Someone from the team has assigned this to themselves for review, however, they’re currently focused on finishing up the last bit of work needed for the next major AWS provider release. Unfortunately I can’t provide an ETA on when this will be reviewed/merged due to the potential of shifting priorities. We prioritize by count of reactions and a few other things (more information on our prioritization guide if you’re interested).
Contributor documentation and reference for the Terraform AWS Provider
thanks @loren - it was worth a shot. The problem I have with rankings is it’s not weighted. Some of the issues are level-10 effort, while this is level-0.
cc @Jeremy G (Cloud Posse) @matt
yeah agree
maybe it'll help to react to the linked issue also…? https://github.com/hashicorp/terraform/issues/30054
Current Terraform Version
1.0.11
Use-cases
In my company we'd like to simplify terraform backend management and use a single s3 bucket for all projects.
We already have a custom role per project, but to secure access by folder we've reached a limit in aws bucket policies.
The best solution would be to tag the s3 objects to better handle access.
Proposal
Looking at the code for remote-state/s3, the tags would work a lot like the current ACL support. It would be another option in the s3 backend config, with a small impact on the client.
https://github.com/hashicorp/terraform/blob/main/internal/backend/remote-state/s3/client.go#L175
i wonder if “participants” can be tracked as a metric in addition to reactions. a lot of folks will comment, without posting a reaction
could possibly try to frame it as a security enhancement, at least for advanced users, with fine-grained ABAC policies…
I was a lot more excited about this feature before I found out that you cannot use tags to control write-access. You still need a path-based restriction to prevent people who can write to some part of the S3 bucket from writing to (overwriting, deleting) another part of the bucket.
Hello Community, looking for a TF module that caters for SQS, SNS and CW alerts. I am somewhat new getting into the tool. Any pointers greatly appreciated.
What do you mean “caters for”?
Sorry, I meant builds those 3 resources together.
I feel like I am asking a dumb question though
it is a good question - I came across issues before where combining multiple modules caused dependency problems, and I needed to run with -target first to create some resources
This Slack is run by the CloudPosse company, who release many open source modules. Searching the repository list at https://github.com/cloudposse/ is a good place to start. There are a couple of hits if I search for sqs and sns. One of them might be suitable.
Otherwise, we’re going to need more info. What are you trying to accomplish with SQS, SNS and CW alarms?
I was looking for a solution to build an SQS queue, with an CW Alert that monitors a DLQ configured to send to an SNS topic. I’ve tried creating my own module as follows:
resource "aws_sqs_queue" "sqs_queue" {
name = var.sqs_queue_name
}
resource "aws_sqs_queue" "dlq_queue" {
name = var.dlq_queue_name
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.dlq_queue.arn
maxReceiveCount = 3
})
}
resource "aws_cloudwatch_metric_alarm" "dlq_alarm" {
alarm_name = "dlq-message-received-alarm"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = 1
metric_name = "ApproximateNumberOfMessagesVisible"
namespace = "AWS/SQS"
period = 60
statistic = "SampleCount"
threshold = var.cloudwatch_alarm_threshold
alarm_description = "This alarm is triggered when the DLQ receives a message."
alarm_actions = [
var.sns_topic_arn
]
}
Not the cleanest of solutions.
that looks clean to me! What don’t you like about it?
Heya. I’m trying to use a cloudposse module with Terraform Cloud for the first time (terraform-aws-elasticache-redis). I keep running into a problem with things named after module.this.id . I’m looking into the this
module and see it should be set to an ID, but it seems to be set to an empty string.
Is this something that’s known? I’m passing enabled = true
to the module, which should pass it to the null context module as well.
the context must have at least one of the following attributes set (ideally all)
namespace = "eg"
stage = "test"
name = "redis-test"
enabled = true
region = "us-east-2"
availability_zones = ["us-east-2a", "us-east-2b"]
namespace = "eg"
stage = "test"
name = "redis-test"
# Using a large instance vs a micro shaves 5-10 minutes off the run time of the test
instance_type = "cache.m6g.large"
cluster_size = 1
family = "redis6.x"
engine_version = "6.x"
at_rest_encryption_enabled = false
transit_encryption_enabled = true
zone_id = "Z3SO0TKDDQ0RGG"
cloudwatch_metric_alarms_enabled = false
the final ID/name will be calculated as
{namespace}-{environment}-{stage}-{name}-{attributes}
to make all AWS resource names unique and consistent
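For example, a minimal sketch (version pin omitted; the non-context inputs are elided):
module "redis" {
  source = "cloudposse/elasticache-redis/aws"
  # version = "x.x.x"  # pin to a release

  # null-label context inputs; module.this.id becomes "eg-test-redis-test"
  namespace = "eg"
  stage     = "test"
  name      = "redis-test"
  enabled   = true

  # ... plus the module-specific inputs (vpc_id, subnets, instance_type, etc.) ...
}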
Ah I see, thank you.
2023-05-24
v1.5.0-beta2 (May 24, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…
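For a feel of the new syntax, a minimal sketch (URL and names are illustrative; the http data source comes from the hashicorp/http provider):
check "health" {
  data "http" "app" {
    url = "https://example.com/healthz"
  }

  assert {
    condition     = data.http.app.status_code == 200
    error_message = "The app health endpoint did not return HTTP 200."
  }
}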
2023-05-25
Hello team. Any suggestions on how to do blue/green RDS environments with cloudposse modules? Let’s use the simpler one for this suggestion: https://github.com/cloudposse/terraform-aws-rds. Ideas? Thanks.
Terraform module to provision AWS RDS instances
We don’t support it
Open to supporting it though
I just did some blue/green RDS upgrades recently, the API/Terraform support was lacking, so had to do it mostly manually, and update our Terraform after the fact to reflect the infra changes. Also, fun fact: you can’t use blue/green with RDS Proxy (yet).
Also be careful with RDS blue green on especially busy databases. It can have a lot of trouble acquiring a lock and eventually just timeout after you initiate the update.
This is my experience on Aurora MySQL at least
@Josh Pollara was this using the AWS managed “Amazon RDS Blue/Green Deployments”? Interesting insight…
Yes it was
Against a very busy database
Additionally, it wasn’t a one time event. We tried multiple times without luck.
OK, so it’s a no-go for the time being. It was just an idea, but now I have the facts for why not to. Thanks…
What we need is amazon to offer https://neon.tech/ as a managed service
Postgres made for developers. Easy to Use, Scalable, Cost efficient solution for your next project.
(this is what vercel is now offering)
Also https://planetscale.com/ for MySQL
PlanetScale is the MySQL-compatible, serverless database platform.
I’m using the terraform-aws-ec2-autoscale-group module at the moment, but in an attempt to do some cost saving I’d like to switch to spot instances. I see there is an option to set instance_market_options but I can’t get the syntax right. The documentation says:
object({
market_type = string
spot_options = object({
block_duration_minutes = number
instance_interruption_behavior = string
max_price = number
spot_instance_type = string
valid_until = string
})
})
I tried this with no luck:
instance_market_options = [
{
market_type = "spot",
spot_options = [
{
spot_instance_type = "one-time"
}
]
}
]
│ The given value is not suitable for module.autoscale_group.var.instance_market_options declared at .terraform/modules/autoscale_group/variables.tf:93,1-35: object required.
variable "instance_market_options"
is an object, but you use it as a list of objects
instance_market_options = {
market_type = "spot",
spot_options = {
spot_instance_type = "one-time"
}
}
same with spot_options
2023-05-26
I have a question regarding atmos stacks and cloudposse/terraform-aws-components/account
.
The email format in the example is something+%[email protected], but if the account name has hyphens in it like foo-bar, you would have an account email of [email protected].
My question is, will this cause issues with email routing, and if so, is there a simple way to replace hyphens with dots without forking the account component?
Please ask in refarch
(as this relates to our reference architecture)
It won’t cause problems
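fwiw, if you ever did want dots instead of hyphens, a replace() in a local would do it without forking the component (the format string and domain here are illustrative):
locals {
  # e.g. account name "foo-bar" -> "something+foo.bar@example.com"
  account_email = format("something+%s@example.com", replace(var.account_name, "-", "."))
}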
2023-05-27
Can somebody help me with cloudposse/ssm-tls-ssh-key-pair/aws
please? I’m trying to create a keypair and store the output in SSM. This module creates the SSH keys in SSM properly, but I don’t see how to then use it:
• In terraform-aws-modules/key-pair/aws
If I use public_key = module.ssm_tls_ssh_key_pair.public_key
, I get the error: InvalidKey.Format: Key is not in valid OpenSSH public key format
(doing an ECDSA key)
• If I use cloudposse/key-pair/aws
, it expects the key to be in a file (which defeats the whole purpose of using SSM, right?)
I’m sure this is obvious, but I’m missing it
Hmm, OK, so an ECDSA key isn’t output in a public key format EC2 accepts? If I change to RSA, it works as expected
You can use Amazon EC2 to create an RSA or ED25519 key pair, or you can use a third-party tool to create a key pair and then import the public key to Amazon EC2.
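fwiw, EC2 key pairs only accept RSA or ED25519 public keys, which is why the ECDSA key was rejected. A rough sketch of the wiring that works (context inputs and algorithm settings elided; key name illustrative):
# Generate the key and store it in SSM via the module (use RSA or ED25519)
module "ssm_tls_ssh_key_pair" {
  source = "cloudposse/ssm-tls-ssh-key-pair/aws"
  # ... context inputs (namespace/stage/name) and key algorithm settings ...
}

# Register the stored public key as an EC2 key pair
resource "aws_key_pair" "this" {
  key_name   = "my-app-key"
  public_key = module.ssm_tls_ssh_key_pair.public_key
}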
2023-05-29
(probably?) outdated docs:
https://registry.terraform.io/modules/cloudposse/vpc/aws/latest
cidr_block = "10.0.0.0/16"
not valid anymore… it’s ipv4_cidr_block now, and it’s a list of strings
report it as an issue against the github repository. Or if you can’t do that, try #terraform-aws-modules
Describe the Bug
The option:
cidr_block = "10.0.0.0/16"
is not valid anymore… it has been changed to ipv4_cidr_block, and it’s now a list of strings, not a single string.
Expected Behavior
cidr_block = "10.0.0.0/16"
should work as per docs and examples
Steps to Reproduce
try to use latest main
branch version of the module
Screenshots
No response
Environment
• Module: latest main
branch commit
• TF version: tested with v1.4.5
Additional Context
No response
2023-05-30
Hello, I have a Terraform issue to overcome. I am creating Azure key vaults with secrets inside them, generated with a random password generator. It is working fine. The problem starts when I add another key vault to the list to be provisioned: when I run Terraform again, all the passwords get regenerated because of the random function I’m using. Is there a way to persist the passwords for the already-created key vaults without regenerating them?
The key vaults are created with a for_each statement from the list.
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "kv" {
for_each = toset(var.vm_names)
name = format("%s", each.value)
location = azurerm_resource_group.db_rg.location
resource_group_name = azurerm_resource_group.db_rg.name
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = var.soft_delete_retention_days
purge_protection_enabled = false
sku_name = var.kv_sku_name
# TODO: add group with permissions to manage key vaults
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
]
secret_permissions = [
"Get",
"List",
"Set",
]
storage_permissions = [
]
}
}
secrets.tf (example secret resources):
resource "random_password" "password" {
count = 11
length = 11
special = false
min_upper = 3
min_numeric = 3
min_lower = 3
}
resource "azurerm_key_vault_secret" "secret1" {
depends_on = [azurerm_key_vault.kv]
for_each = toset(var.vm_names)
name = "name1"
value = random_password.password[0].result
key_vault_id = azurerm_key_vault.kv[each.value].id
}
resource "azurerm_key_vault_secret" "secret2" {
depends_on = [azurerm_key_vault.kv]
for_each = toset(var.vm_names)
name = "name2"
value = random_password.password[1].result
key_vault_id = azurerm_key_vault.kv[each.value].id
}
seems it is still a limitation of terraform, going back to 2017: https://github.com/hashicorp/terraform/issues/13417#issuecomment-297562588
Hey @zbikmarc! Sorry for the long delay on a response to this. It confounded me for a bit, and I kept chasing red herrings trying to find a root cause.
This is essentially the same bug as #13763, which itself is a manifestation of #3449. Basically, because you’re using element
, Terraform sees that the list is changed, and assumes everything changes. @apparentlymart explained it much better than I can in #13763, so I’m just going to borrow his explanation:
The problem is that currently Terraform isn’t able to understand that
element
only refers to thecount.index
element of the list, and so it assumes that because the list contains something computed the result must itself be computed, causing Terraform to think that the name has updated for all of your addresses.
A workaround that worked in my reproduction of the issue is to run
terraform plan -out=tfplan -target=random_id.server-suffix
And make sure it only intends to add the new random_id. Then run terraform apply tfplan
. Then a normal plan/apply cycle should only add the new instance, because Terraform already has the ID, so the list isn’t changing.
I’m going to go ahead and close this issue, but if you have questions or this doesn’t work for you, feel free to comment. Also, it looks like @apparentlymart has a fix for this opened as hashicorp/hil#51, so if that gets merged, this will become unnecessary in the future.
Hm, do you have to run this multiple times??
One approach you can consider is appending a small hash after the name of the keys.
ex.: key-name-j3hd1
And create a separate workspace and repo (I’m assuming you are using Terraform cloud) and only run if necessary.
Even if anyone accidentally runs it, it will create a new set of keys and won’t change the ones you created before and are probably already using.
I think it solves for now but creates a technical debt with the automation gods.
Also, you could use azurecaf
provider for naming conventions and random
provider for the small hash.
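A rough sketch of that suffix idea with the random provider (resource and secret names are illustrative):
# One stable 5-character suffix per VM; it only changes if that VM's entry is recreated
resource "random_string" "suffix" {
  for_each = toset(var.vm_names)
  length   = 5
  upper    = false
  special  = false
}

locals {
  # e.g. secret "name1" for VM "vm-1" becomes "name1-j3hd1"
  secret1_names = { for vm in var.vm_names : vm => "name1-${random_string.suffix[vm].result}" }
}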
Could you add the plan output where it wants to re-create stuff? As you are using a list, there could be an issue with lexical ordering. TF sorts the elements alphabetically when a list is converted to a set for for_each, so you can’t rely on ordinal order. You could work around this by using maps, or by adding a numeric prefix to each element in the list, so you can control the order TF loops through it and how it stores it in state.
Thank you for the responses, I’ll check your suggestions.
More info about the setup: it is part of a bigger TF script. I’m trying to provision VMs on Azure that are supposed to host databases. Together with each VM (from the list), a key vault is created with secrets for that database. Secrets should be created once.
I need to be able to run it multiple times, because later on there will be a need to remove one of the hosts or add another. I’m using for_each so I don’t rely on count indexes. When I add a new VM to the list or remove one, the secrets related to that machine are created/destroyed, but the rest, related to other machines that are supposed to stay unchanged, are regenerated again.
Probably using a different state for each key vault would do the trick, but each one is strictly tied to its VM: when I remove a VM, I would like its key vault and secrets to be removed as well.
It seems that this issue is fixed, thanks. It was a dependency created because I was injecting those passwords into "cloud-init" scripts.
Anyway, I’m facing another issue now.
variable "vm_names" {
type = list(string)
default = [
"vm-1-2ds31",
"vm-2-412ae"
]
}
resource "random_password" "password" {
count = 2
length = 11
special = false
min_upper = 3
min_numeric = 3
min_lower = 3
}
resource "azurerm_key_vault_secret" "secret1" {
depends_on = [azurerm_key_vault.kv]
for_each = toset(var.vm_names)
name = "name1"
value = random_password.password[0].result
key_vault_id = azurerm_key_vault.kv[each.value].id
}
resource "azurerm_key_vault_secret" "secret2" {
depends_on = [azurerm_key_vault.kv]
for_each = toset(var.vm_names)
name = "name2"
value = random_password.password[1].result
key_vault_id = azurerm_key_vault.kv[each.value].id
}
I need to create a set of two different random passwords for each VM from the list. Right now with this setup, it creates only 2 passwords and propagates them to all the key vaults that are created.
key vaults are created like this
resource "azurerm_key_vault" "kv" {
for_each = toset(var.vm_names)
name = format("%s", each.value)
.
.
.
if I add for_each to random_password, then I cannot specify how many passwords need to be created.
The solution might be another random_password resource so each password is a different object, but I cannot hardcode that, as it should be created dynamically in relation to the var.vm_names list
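One pattern that might get you there, based on the code above: keep one random_password resource per secret, but key each by VM name with for_each, so the count follows var.vm_names automatically and each VM gets its own pair:
# Two independent passwords per VM, keyed by VM name; adding/removing a VM
# only creates/destroys that VM's passwords
resource "random_password" "password1" {
  for_each    = toset(var.vm_names)
  length      = 11
  special     = false
  min_upper   = 3
  min_numeric = 3
  min_lower   = 3
}

resource "random_password" "password2" {
  for_each    = toset(var.vm_names)
  length      = 11
  special     = false
  min_upper   = 3
  min_numeric = 3
  min_lower   = 3
}

resource "azurerm_key_vault_secret" "secret1" {
  for_each     = toset(var.vm_names)
  name         = "name1"
  value        = random_password.password1[each.value].result
  key_vault_id = azurerm_key_vault.kv[each.value].id
}

resource "azurerm_key_vault_secret" "secret2" {
  for_each     = toset(var.vm_names)
  name         = "name2"
  value        = random_password.password2[each.value].result
  key_vault_id = azurerm_key_vault.kv[each.value].id
}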
@Adrian Rodzik, sorry for the late reply here. Was the latest issue fixed?
2023-05-31
v1.5.0-rc1 (May 31, 2023) NEW FEATURES:
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The new independent check blocks must specify at least one assert block, but possibly many, each one with a condition expression and an error_message expression matching the existing…