#terraform (2020-10)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-10-01
As more and more people are switching to using infrastructure-as-code (like Terraform) to manage their cloud environments, we’re seeing an increase in the desire to do security reviews of the IaC code files. There’s a bunch of tools out there, and a couple of big challenges. Would appreciate your thoughts on the matter. Please see a blog post we’ve just published:
https://indeni.com/blog/identifying-security-violations-in-the-cloud-before-deployment/
Treating your cloud infrastructure as code (IaC) enables you to handle the growth in demand for your applications. Additionally, you are adopting IaC to scale
Good to see you mentioning Checkov here - but this tool can definitely be used for both build and runtime; especially if you look at BridgeCrew’s SaaS offering which will hook back into your repos and remediate both operational issues, as well as your original code. Relationships between modules deffo still an issue.
I like the bridgecrew offering. Definitely a good option.
For sure it’s not OSS, but it rounds off the LEFT <-> RIGHT
I’m thinking DevSecOps should be followed here. For my TF projects I use unit tests (terraform fmt, terraform validate and tflint), then integration testing using Terratest. For security I’m using Checkov and terraform-compliance. All of this should fall into a pipeline
Just me or since Terraform moved the docs to the registry, google search for resources documentation is rubbish…keep getting random mirror sites (SEO f*cked?)
I noticed the same thing actually
Yeah, it’s degraded for sure.
i mentioned it a couple days ago in the hangops terraform channel, there are a number of maintainers there. they’re aware, and trying to figure it out. it was fine when they first switched, but something happened on the google side and now they need to request a re-index or something
One thing I was thinking about this is that it seems they’re trying to reference resources without the prefix — like aws_instance would now just be instance under the aws provider. But folks are continuing to search aws_instance, so it’s harder to find, it seems.
for now, it’s actually faster to search for “terraform aws” and then use the left side scroll bar to find the resource. it’s much better than it used to be, now that they group by service. https://registry.terraform.io/providers/hashicorp/aws/latest/docs
Yup dumpster fire
Yeah, to loren’s point — That does the trick OR “terraform aws_YOUR_RESOURCE” does help narrow it down.
This is getting worse and worse:
someone noted in the hangops channel that their robots.txt is blocking everyone
is there a way to get analytics on terraform modules usage ?
Usage in your environment or usage globally?
globally would be nice
could also answer this question @Matt Gowie
https://sweetops.slack.com/archives/CBW699XE0/p1601574548034000
Any other users of https://github.com/cloudposse/terraform-aws-eks-cluster having trouble with first spin up and the aws-auth configmap already being created? I’ve run into it twice now and have been forced to import. Wondering what I’m doing wrong there.
So each module has a download count here: https://registry.terraform.io/browse/modules
it’s possible that we could add a new terraform module with a null resource for analytics. maybe default enable_analytics=true
and people can turn it off if they like
But we also pulled it into an Excel file recently that’s easy to sift through:
ah but that is only for public terraform modules
i was thinking of something like homebrew analytics
@RB the git clone / download metrics from GH would be the way to do it for CP modules I believe.
ooo cool. how did you build this excel file?
The registry has an API
So built a script to scrape it into a CSV
Was very useful for our development efforts to know what modules are most common and we should support
ah very cool
would your team consider building it into a google spreadsheet instead ?
that way it can be updated on a cron
It’s actually in a google sheet, but we can’t share documents directly from the Drive (org policy). Can potentially find a place to put it and auto-update.
Hi. I am new to Terraform and have been struggling with something for the last few days. I’m trying to deploy an AWS ECS Fargate cluster for my Django application. In my setup I have a task definition with two containers: one for the Django app and one for nginx. My problem is I haven’t been able to get the Django static files to work. I usually do this via docker-compose, using volumes in the definition like this. If someone can point me in the right direction on how to do this with terraform, that would be awesome:
version: '3.0'
services:
  web:
    build: .
    command: >
      sh -c "echo yes | python manage.py collectstatic
      && gunicorn wormhole.wsgi:application --bind 0.0.0.0:8080"
    volumes:
      - ./:/usr/src/app/
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8080
  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile-nginx
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 8000:8000
    depends_on:
      - web
volumes:
  static_volume:
  media_volume:
Where’s the source of the data?
That you’re trying to load into those volumes
in the project’s directory
I copy it in the docker file
I fixed it; the docker configuration had a couple typos. Thanks
are there any good terraform modules for ecs scheduled tasks ?
Anyone know what terraform cloud for business is charging? (This is the one that supports on prem runners)
nah - i’ve asked a couple of times and got the typical “let’s sit down and talk about your needs and we’ll work out a price”
i complain about it every time i talk to anyone there
pressuring a couple of partners to push back on it too
it’s really frustrating - especially at a tier targeted at small/medium business
It seems I’m not able to create a new aws_ecs_task_definition revision, even if I force parts of the definition, like tags, to change. I have the below as part of the code; if someone can help, that would be great. Really looking for a pure terraform solution.
resource "aws_ecs_task_definition" "ecs-service-taskdef" {
  family                = "${local.name}-${var.task_definition_name}"
  container_definitions = data.template_file.startup.rendered

  dynamic "volume" {
    for_each = var.td_volumes
    content {
      name      = volume.value["name"]
      host_path = volume.value["host_path"]
    }
  }

  // For new builds the images will change and force the task to change. Earlier code was: tags = local.tags
  tags = merge(
    local.tags,
    { "app-image" = element(split(":", local.json_data_images), 1) }
  )

  lifecycle { create_before_destroy = true }
}

### Create service data
data "template_file" "startup" {
  template = file("task-definitions/${var.task_file}")
  vars = {
    name      = var.domainname
    image     = local.json_data_images
    API       = local.json_data_ecs.appsn
    APP       = var.appname
    ENV_NAME  = var.environment
    ENV       = var.environment
    awsstnm   = local.awsstackname
    CommonEnv = regex("^[a-z]+", var.environment)
  }
}
2020-10-02
Just found this gem: https://github.com/flosell/iam-policy-json-to-terraform Hope it’s useful to some folks!
Small tool to convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document - flosell/iam-policy-json-to-terraform
terraform-aws-eks-fargate-profile — is it compatible with the aws 3.* provider? any plans to change the version restrictions?
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
https://www.scalr.com/blog/announcing-public-beta/
Dearest Terraform Community,
It is with great pleasure that I stand before your virtual selves to publicly present the fruit of the last 18 months of our labor: Scalr, a remote backend for Terraform to compete with Terraform Cloud and Terraform Enterprise.
- Sebastian Stadil, CEO
Would love to hear some early feedback / comparison if anybody tries it out.
i signed up for an account for the promo, anyway. but similar, would love to hear others’ experiences
@Sebastian Stadil can probably help with that maybe we can do a demo on office hours
Demo would be awesome. This seems to have struck a nerve — the reception on Reddit looks positive: https://www.reddit.com/r/Terraform/comments/j3c225/scalr_public_beta_is_live/
Today we have our most exciting news yet… After 18 months of hard work, growing a huge waitlist and getting tremendous feedback during private…
The reception has been incredibly encouraging. Thanks, all you guys.
We were thinking about doing an AMA too, if anyone thinks that could be valuable.
the most fair/transparent pricing in the history of SaaS pricing:
• no contact sales link (e.g. terraform cloud!)
• no SSO tax (https://sso.tax)
• no fee for idle users (similar to slack)
If only more companies would adopt this minimum level of transparency in pricing.
https://www.reddit.com/r/devops/comments/j3sdj8/terraform_cloudenterprise_alternative/
check out the comment by danekan
• You e-mailed me that password in plain text.
Here is a quick update on our journey to build a Terraform Cloud/Enterprise alternative with open standards, transparent pricing and no SSO…
(btw, Scalr is an alternative for terraform cloud)
They also have an interesting feature with the ‘template registry’ as a self-serve infra launch
Do you manually update the versions.tf in each module with the required version? Are there any tools to automate that?
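For context, a minimal versions.tf sketch of the file in question (the constraints shown are placeholders, not a recommendation):
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
  }
}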
Does anyone know how to generate the GitHub OAuth token specifically for a private org repo? I’m trying to spin up a CodePipeline and it’s asking for a token for the source stage. While TF is trying to set up the GitHub hook, it’s complaining that the repo doesn’t exist since it’s targeting ${my_github_username}/${repo} and not the org. Anyone know a trick to this?
Error: POST https://api.github.com/repos/$user/$repo/hooks: 404 Not Found []
  on codepipeline.tf line 128, in resource "github_repository_webhook" "webhook":
 128: resource "github_repository_webhook" "webhook" {
Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks
yes, that is because the token that github needs requires admin access for webhooks on repos
hmm i am… lemme double check
and if you remove the access then it will try to create the webhook again even though it’s still there
mind me asking is this sort of thing documented anywhere…? this is really uh… convoluted
this is not a module problem
this is a github provider thing
which is incredibly annoying
github changes their APIs all the time, version 3.0 made a huge amount of changes
… just as annoying as terraform in general…
and as you know, the provider is always behind the api releases
I would not blame TF in this case
go check cloudformation and you will feel better about TF
i prefer cdk
AWS apis are crap too
cdk is cloudformation under the covers
I dunno, but my exp with tf has been nothing but headaches… tf syntax comes and goes with each version and there’s a delay to support the latest from aws. I haven’t found a “golden bible” for best practices, and all of these open source tf modules are all over the place. that’s just my 2c ¯\_(ツ)_/¯
and yes, I’m full admin on the repo, but still it’s trying to target the personal namespace and not the org
2020-10-03
2020-10-04
Hey all, has anyone had any success with dynamically passing a provider to a module? It doesn’t quite seem like it’s possible from what I’ve read so far, but figured I would check as this would be such a powerful feature
pretty sure no expressions are allowed in the providers attribute of a module block
I believe that was what I found when I tried to that.
Yeah, was just wondering if there’s some sort of hacky way to accomplish the result
use terragrunt, with generate block(s) to create the provider configs
ah, so that’s where terragrunt comes in.
I would still have n number of module blocks though, yeah?
one place, anyway
i could imagine being able to also generate a root config that contains the module blocks. that would be interesting. haven’t tried that
cdktf might be another option
cdktf might be a rabbit hole. The last time I tried working with it, it was pretty frustrating.
well cdktf is really new. there’s going to be some raw edges, and a learning curve. but fundamentally, the issue here is that this part of the tf/hcl config must be static by the time terraform is actually executing. so to make it dynamic, you have to generate/template the tf files. terragrunt can do that, cdktf is built to do that, or you can write your own using any template language you like
That’s useful info. @Sean Turner, I didn’t mean to hijack your thread. Just wanted to give you a heads-up that cdktf might lead to some time-creep.
No worries. I’m going to keep an eye out for a github issue around this and follow it closely as this would be a massive paradigm. Not too keen on trying tfcdk at the moment either, definitely going to give it a little more time to grow :)
@loren, do you have an example that shows how terragrunt can be used for generation? I’ve looked into it twice now, and didn’t get as far as seeing how it could be done. The example I found is too fancy. I’m looking for something simple so I can grok it quickly.
i don’t think you can use terragrunt just for generation… you have to buy into the whole terragrunt approach
That’s what I thought. And I haven’t had time to make that purchase yet .. which is why I’ve skipped over it twice now. But I think I have an idea how it works, I just want to see the end-game for an easy example to justify the jump.
We have at least temporary (feasible) work-arounds for all of the following things. What I really want are more control structures and templating options, that TF doesn’t include, sometimes by design.
in the terragrunt generate block, the contents attribute accepts an expression that must evaluate to a string. you have access to all terraform functions in the terragrunt.hcl file, such as templatefile
and the hcl template language. here’s an example where i accepted a complex object as input, and transformed it to another complex object written to a tfvars file (in order to avoid some issue i no longer even remember ) https://github.com/plus3it/wrangler-watchmaker/blob/master/release/copy-bucket/terragrunt.hcl#L13-L25
Manages buckets and files needed for the public/default watchmaker configuration - plus3it/wrangler-watchmaker
if you’re not familiar with terraform string templates, https://www.terraform.io/docs/configuration/expressions.html#string-templates
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
I’m using the string templates. Thanks for the link to your example!
that’s kind of the basis for the solution i’m envisioning here… using the for loop in a string template to dynamically construct provider blocks, referencing values from terragrunt locals… and the locals could be sourced from a yaml file, if that’s your jam, with yamldecode()
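roughly what that could look like, as a sketch only; the account names and role ARNs are illustrative:
# terragrunt.hcl
locals {
  # illustrative input; could just as well come from yamldecode(file("accounts.yaml"))
  accounts = {
    dev  = "arn:aws:iam::111111111111:role/terraform"
    prod = "arn:aws:iam::222222222222:role/terraform"
  }
}

generate "providers" {
  path      = "providers.tf"
  if_exists = "overwrite"
  contents  = <<-EOF
    %{~ for alias, role_arn in local.accounts ~}
    provider "aws" {
      alias = "${alias}"
      assume_role {
        role_arn = "${role_arn}"
      }
    }
    %{~ endfor ~}
  EOF
}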
2020-10-05
Hi, does someone know the difference between: https://github.com/cloudposse/terraform-aws-eks-workers/ https://github.com/cloudposse/terraform-aws-eks-node-group And if any, which one should I use? Deploying a new infra
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
The terraform-aws-eks-workers module uses the original self-managed auto-scaling groups with EC2s
the terraform-aws-eks-node-group module implements the fully managed node groups
note, you can mix and match. we’ve deployed clusters that use both.
we used the eks-workers for running jenkins and the eks-node-group for everything else (for example)
Please don’t cross post unless you’re cross-posting the link:
https://sweetops.slack.com/archives/CDYGZCLDQ/p1601895829001800
Hi, does someone know the difference between: • https://github.com/cloudposse/terraform-aws-eks-workers/ • https://github.com/cloudposse/terraform-aws-eks-node-group And if any, which one should I use? Deploying a new infra
(or we get multiple answers all over the place)
Thank you for the response!
:wave: I’m trying to automate terraform, specifically running terraform plan when a PR is opened on github. I’m having trouble finding what a least-privileged IAM policy would look like to run terraform plan where the backend is on S3
Terraform can store state remotely in S3 and lock that state with DynamoDB.
ah nice, thanks! I’m guessing if you only need to run terraform plan, a least-privileged read-only policy would grant:
• s3:ListBucket
• s3:GetObject
• dynamodb:GetItem
• dynamodb:PutItem
• dynamodb:DeleteItem
assuming even terraform plan tries to grab a lock on the state before running, so it needs ddb access too
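As a sketch, those grants expressed as a policy document (the bucket and table names are placeholders):
data "aws_iam_policy_document" "plan" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::my-tf-state"] # placeholder state bucket
  }
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-tf-state/*"]
  }
  statement {
    # plan takes and releases the state lock, hence the writes on the lock table
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
    resources = ["arn:aws:dynamodb:*:*:table/my-tf-locks"] # placeholder lock table
  }
}
Note this only covers the backend; the refresh that runs during plan also reads the resources in the state, which matches the “same access as apply but RO” point below.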
when it comes time for terraform apply, I’m guessing the policy will pretty much need to be full admin in order to create resources?
the plan probably needs same access as apply but RO
and apply needs access to any resources that you are creating
there’s certain aws services i don’t use or don’t want my pipelines to have access to, so i basically have an admin user with explicit denies
I’m trying to track down all of the TF .12 preview blog posts. Does anyone know if there’s a place where these are listed in a linear way? Hashi’s blog seems engineered for distraction.
Their CHANGELOG is where I look
@Jaeson what’s your objective?
I often find myself trying to understand the for..each functionalities that were introduced in TF 12. The only real documentation that seems usable to me (from hashicorp) for this is embedded in one of their blog posts. … I’m usually pretty focused (or fighting to be focused) on solving a task, so it didn’t dawn on me until this morning that I’ve been kind of getting the information in pieces – for example, there seems to be no link from one post to the next in the series… you kind of just have to hunt them all down.
TLDR; I guess I was hoping for a better way to understand how to use the changes brought into TF12.
Aha, yea, I follow.
Has anyone had terraform crash … seemingly permanently? I can’t get it to run anymore.
might help to set the TF_DEBUG environment variable so you can get some more details
TF_LOG? I didn’t find anything when looking for TF_DEBUG. Not being a jerk, just confirming.
Nvm .. it’s in the log:
Use TF_LOG=TRACE to see Terraform’s internal logs
It doesn’t seem to give me much more than I already had.
haha, sorry - maybe I got it wrong.
np, I appreciate the hint. I think it might be something I did to my variables file.
Hi guys, I’d like to know if it’s a good practice to use/execute a module directly, or have code that uses it, even if this code only has this module … and maybe 2-3 more resources? … (more details in the thread)
I have a few modules; they are to install kubernetes and some products in the cluster, one module per product. My code is to install K8s and one or multiple of these products, using the modules. My first approach for this design is to have a directory for the modules and one directory for the code using the modules. The second design idea is to have one directory per product, and they can be used as modules or not; this way the only reused code (which may be the real module) is the code to provision the K8s cluster.
The question is: What’s the best practice with Terraform using modules?:
1. Have a directory or repo for the module, then the code using this module … and maybe other module(s)
2. Have a directory for the module to be used by other code … and use the same module code to be executed directly, not as a module
Using option (1) my repo would be like this:
terraform
├── all_products
├── p1
├── p2
├── p3
└── modules
    ├── k8s
    ├── p1
    ├── p2
    └── p3
So, the code in all_products will use all the modules. The code in p? uses the module k8s and the module for p?
Using option (2) the repo would be like this:
terraform
├── all_products
├── p1
├── p2
├── p3
└── modules
    └── k8s
The code in all_products uses the code in p* as modules, and the code in p? uses the module k8s … if other external code wants to install p?, it uses the code in p? as a module
What do you think is better: (1), (2), both are accepted or is there a 3rd option?
you seem to be doing it like it was a monorepo
I personally do not like the approach
I prefer modules that are instantiated by a project from a main.tf file; this file pulls the modules needed and then runs TF somehow
Thanks @jose.amengual , it’s the same or similar answer I’ve received from other people (different Slack)
np
Quick question about dynamic content for TF 12 – all of the content above (and the grant block) should be skipped, right? I’m having difficulties because TF seems intent on processing that block of code.
Close — you want to provide an empty array:
for_each = false ? [1] : []
I believe that will do the trick for you.
You might be mixing count / for_each.
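i.e. roughly this shape, a minimal sketch using the grant block mentioned above (the bucket name and toggle variable are illustrative):
resource "aws_s3_bucket" "this" {
  bucket = "example-bucket" # placeholder

  # renders zero grant blocks when the toggle is off
  dynamic "grant" {
    for_each = var.grants_enabled ? [1] : []
    content {
      type        = "Group"
      uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      permissions = ["WRITE"]
    }
  }
}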
ah.
That was it. Thanks so much. I guess I didn’t have the brain for this today – I’ve confused three things as I’ve been working at this: count, array bounds, and even flipped the logic. …
Happens to the best of us!
New to me; looks like it could be useful: https://github.com/sysdogs/tfmodvercheck
Contribute to sysdogs/tfmodvercheck development by creating an account on GitHub.
So on the one hand I feel like it’s not great to create a module that creates only one type of resource. On the other hand, I think there is a good amount of benefit behind only using cross account providers with modules. Thoughts? Take aws_route53_record for example: I think it’s semi-worth creating a module around only this resource, as it provides nice isolation for things that have a large blast radius (especially when cross account), and also allows for templating to dynamically render alias blocks as needed.
i’ve done it, makes a lot of sense for cross-account workflows (or cross-region, cross-provider)
usually i have a larger module around it, and this is a nested module in that project, sometimes with a dedicated module for the cross-account workflow
another benefit i’ve been finding recently is with module-level for_each… putting the resource in a module lets me document the variables cleanly. then i can manage multiples with module-level for_each and complex objects
here’s an example for a ram-share… the top-level module handles all the “owner” config for a new ram-share, and there is a nested module for the cross-account principal association workflow, https://github.com/plus3it/terraform-aws-tardigrade-ram-share
Terraform module to manage a resource share. Contribute to plus3it/terraform-aws-tardigrade-ram-share development by creating an account on GitHub.
Totally agreed with module-level for_each documentation. It took me a second to wrap my head around the new way of doing it, but now it feels much cleaner as everything is var.foo instead of each.value.foo
2020-10-06
locals {
endpoint_config = local.is_test ? {"writer": writer_endpoint} : {"writer": writer_endpoint, "reader": reader_endpoint }
}
Is there some way to refactor this so I don’t have to repeat the “writer” config? I can think of this:
locals {
endpoint_config = merge({"writer": writer_endpoint}, local.is_test ? {} : {"reader": reader_endpoint })
}
But it’s a little ugly IMO
I also struggle to see a better way.
IMO this is cleaner as it’s very descriptive. It’s very similar to what you wrote in your second solution, but more verbose.
locals {
writer_config = { "writer": writer_endpoint }
reader_config = { "reader": reader_endpoint }
endpoint_config = local.is_test ? local.writer_config : merge( local.reader_config, local.writer_config )
}
2020-10-07
The inability to put extra newlines in Terraform expressions can really hurt readability. Especially since Terraform syntax highlighting / editor support is so poor
Yes. It would be nice if you could break up lines with a backslash like in shell code. Perhaps there is an open ticket with terraform? If not, it would be a good one to write up
I already have enough “will never get worked on” open issues in that repo
it’s all about the number of issues with the most emojis
https://github.com/hashicorp/terraform/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc
and most commented
https://github.com/hashicorp/terraform/issues?q=is%3Aissue+is%3Aopen+sort%3Acomments-desc
don’t be discouraged!
in many places, you can use parens to get multi-line support, works fine in the ternary ? : also… e.g.
foo = (
local.test) ? true : (
local.anothertest) ? false : (
local.end
)
just note the careful placement of opening and closing parens
what version of terraform are you talking about?
this is perfectly valid:
service_dependency = {
for l in chunklist(flatten([
for k, v in local.technical_services :
[
for s in v.depends_on :
[k, s]
] if can(v.depends_on)
]), 2) : join("_", l) => l
}
Does anyone have experience using the Terraform ACME provider with AWS Route 53?
My problem is that I have a module B that uses the ACME provider to make a certificate. Module B is included in module A, and has its provider injected, like this:
Module A:
provider "aws" {
alias = "dns"
...
}
module "b" {
providers = { aws = aws.dns }
...
}
Module B:
...
resource "acme_certificate" "certificate" {
...
dns_challenge {
provider = "route53"
}
}
When the ACME provider in B performs the challenge, it doesn’t use the role and credentials from module A’s “dns” provider. Instead it seems to use whatever credentials I have in the shell where I run terraform. I know that I can provide a “config” blob to the “dns_challenge” block, but I only have temporary credentials, so how would I extract those from module A’s provider?
Has anyone had this problem?
Hmm. The Foqal bot linked me to some disappointing documentation: https://www.terraform.io/docs/providers/acme/r/certificate.html#relation-to-terraform-provider-configuration
Provides a resource to manage certificates on an ACME CA.
Does anyone have a good workaround to avoid a setup like:
dns_challenge {
provider = "route53"
config = {
AWS_ACCESS_KEY_ID = "${var.aws_access_key}"
AWS_SECRET_ACCESS_KEY = "${var.aws_secret_key}"
AWS_DEFAULT_REGION = "us-east-1"
}
}
i don’t think it’s possible to avoid this… the acme provider resources can’t access the aws provider credentials. any such creds need to come from the config of the acme resource
i found a workaround where i use a local-exec provisioner on a null resource to assume the role i want with the aws cli. then i grab the output and parse it
but it’s a hack
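a sketch of that idea, swapping the null resource for the external data source so the parsed credentials are addressable in config; the role ARN and session name are illustrative:
data "external" "dns_role" {
  # assume the DNS role and emit the temporary credentials as a JSON object,
  # already keyed the way the dns_challenge config expects
  program = [
    "sh", "-c",
    "aws sts assume-role --role-arn arn:aws:iam::111111111111:role/dns-admin --role-session-name acme-dns-challenge --query 'Credentials.{AWS_ACCESS_KEY_ID: AccessKeyId, AWS_SECRET_ACCESS_KEY: SecretAccessKey, AWS_SESSION_TOKEN: SessionToken}' --output json"
  ]
}

resource "acme_certificate" "certificate" {
  # ...
  dns_challenge {
    provider = "route53"
    config = merge(
      data.external.dns_role.result,
      { AWS_DEFAULT_REGION = "us-east-1" }
    )
  }
}
still a hack, since the temporary credentials end up in the plan/state, but it avoids hardcoding them in vars.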
yeah sure, you don’t actually have to use vars
When trying to set up dual replication between two sets of buckets in different regions, TF informed me that I had a cycle error. I tried to control through a variable by making the block that caused the error dynamic – I didn’t really mind the thought of running it twice, once for each replication direction. But TF seems a little pessimistic. Did I do something wrong in my configuration, or is TF just really not going to let me control this through a variable?
I found this post which describes one way to handle this, but I’m wondering if there is a better way?
nvm. When I took a closer look at this, I decided to set up the replication from DR to prod as part of the step to switch over to DR.
Hello guys. Does anyone have recommendations for testing terraform modules, like say testing the AWS EKS module? We are currently using https://github.com/cloudposse/terraform-aws-eks-cluster and testing it manually, but wanted to know what other people are doing. I am aware that we can use terratest, but the problem with that is it will provision the resources on aws cloud, which is time consuming, plus it will cost money
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Won’t cover all AWS services, and I personally have not used this (just found it recently) but you could check into LocalStack https://github.com/localstack/localstack
A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline! - localstack/localstack
there’s a way to configure terraform provider for it
thanks Zach, had a look at localstack; the EKS module is not supported in the community version
I’d just like to qualify expensive. For most businesses, the most expensive resource is the human resource. E.g. A business is paying $1M/year for AWS and $20M a year for humans/payroll. AWS is letting you pay for fractional usage to run some tests. This is a very scalable way to manage the cost of testing. Instead, reimplementing this with something like localstack trades this predictable cost of using AWS with a highly variable, unpredictable cost of using an AWS emulator. You’re going to need to manage the additional associated techdebt and solve for the inevitable inconsistencies and bugs.
thanks
v0.14.0-alpha20201007 0.14.0 (Unreleased) UPGRADE NOTES: configs: The version argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. (https://github.com/hashicorp/terraform/issues/26135)
Terraform by HashiCorp
The version argument is deprecated in Terraform v0.14 in favor of required_providers and will be removed in a future version of terraform (expected to be v0.15). The provider configuration document…
Hi, I have the following terraform structure where I use diferent variable files for different environments.
├── main.tf
├── mock_api.tf
├── variables.tf
└── vars
    ├── dev.tfvars
    └── prod.tfvars
During the deployment I simply run:
terraform init -reconfigure -backend-config="bucket=
and then terraform plan -out=api_tfplan -var-file=${tf_var_file}
However, recently I added mock_api.tf and it only needs to be applied to the dev environment. What is the best way to do that? I could use if env != prod in the resources field, but mock_api doesn’t have resources. I purposely didn’t use modules because dev/prod (and other) environments needed to be the same, just with different variables.
Let me know if you have some ideas about how mock_api.tf can be applied only for the dev env.
use count or for_each on all the resources in mock_api.tf, and in the expression test a variable to turn the resources on/off
or with tf0.13, drop mock_api.tf into a subdirectory, modules/mock-api/main.tf, and call it with a module reference from ./main.tf, and use count on the module reference with a variable to turn it on/off. this way you don’t mess with every resource in the module, just the reference to the module
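the 0.13 approach as a sketch, with a hypothetical toggle variable that only dev.tfvars would set to true:
variable "mock_api_enabled" {
  type    = bool
  default = false
}

# ./main.tf: only the module reference is gated, not every resource inside it
module "mock_api" {
  source = "./modules/mock-api"
  count  = var.mock_api_enabled ? 1 : 0
}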
2020-10-08
hi guys! first of all, it’s my first comment, so let me thank you for your repos, they are very helpful for me. I’m reading the one for autoscaling but I have a doubt about how to use it with custom metrics. I have created policies for predefined metrics, but I’m not sure about how to use it when creating custom metrics. Any tip? Thank you and keep up the hard work!
Terraform module to autoscale ECS Service based on CloudWatch metrics - cloudposse/terraform-aws-ecs-cloudwatch-autoscaling
This is a pretty simple module with a few canned policies.
If you want to do anything more, it’s probably better just to use the raw resources
hey, just using it as a guide, but my question is more about functionality: how could I use the aws_appautoscaling_policy with custom metrics?
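not from the module itself, but a sketch of the raw resource with a custom metric; the metric name, namespace, dimensions, and values are all illustrative, and it assumes the matching aws_appautoscaling_target is registered separately:
resource "aws_appautoscaling_policy" "custom" {
  name               = "custom-metric-tracking"
  policy_type        = "TargetTrackingScaling"
  resource_id        = "service/my-cluster/my-service" # placeholder
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"

  target_tracking_scaling_policy_configuration {
    target_value = 100

    # customized_metric_specification replaces the canned predefined metrics
    customized_metric_specification {
      metric_name = "QueueDepth" # your custom CloudWatch metric
      namespace   = "MyApp"      # your custom namespace
      statistic   = "Average"

      dimensions {
        name  = "QueueName"
        value = "jobs"
      }
    }
  }
}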
Anyone know if Terraform Cloud Agents support a healthcheck endpoint or health check command? e.g. something like https://www.terraform.io/docs/enterprise/admin/monitoring.html
No mention of it in the documentation https://www.terraform.io/docs/cloud/workspaces/agent.html
-name <name>
An optional user-specified name for the agent. This name may be used in
the Terraform Cloud user interface to help easily identify the agent.
Default: The agent's ephemeral ID, assigned during boot.
Environment variable: TFC_AGENT_NAME
-log-level <level>
The log verbosity expressed as a level string. Level options include
"trace", "debug", "info", "warn", and "error".
Default: info
Environment variable: TFC_AGENT_LOG_LEVEL
-data-dir <path>
The path to a directory to store all agent-related data, including
Terraform configurations, cached Terraform release archives, etc. It is
important to ensure that the given directory is backed by plentiful
storage.
Default: ~/.tfc-agent
Environment variable: TFC_AGENT_DATA_DIR
-single
Enable single mode. This causes the agent to handle at most one job and
immediately exit thereafter. Useful for running agents as ephemeral
containers, VMs, or other isolated contexts with a higher-level scheduler
or process supervisor.
Default: false
Environment variable: TFC_AGENT_SINGLE
-disable-update
Disable automatic core updates.
Default: false
Environment variable: TFC_AGENT_DISABLE_UPDATE
-address <addr>
The HTTP or HTTPS address of the Terraform Cloud API.
Default: https://app.terraform.io
Environment variable: TFC_ADDRESS
-token <token>
The agent token to use when making requests to the Terraform Cloud API.
This token must be obtained from the API or UI. It is recommended to use
the environment variable whenever possible for configuring this setting due
to the sensitive nature of API tokens.
Required, no default.
Environment variable: TFC_AGENT_TOKEN
-h
Display this message and exit.
-v
Display the version and exit.
no subcommand available to test health
For monitoring the health of the agents, right?
I’d like to use for_each to create a set of resources defined by the product of two arrays:
resource "aws_iam_role_policy_attachment" "this" {
  for_each   = setproduct(["role1", "role2"], ["policy1", "policy2", "policy3"])
  role       = each.key[0]
  policy_arn = each.key[1]
}
(Would create six role policy attachments.) But this gives an error:
The given "for_each" argument value is unsuitable: the "for_each" argument
must be a map, or set of strings, and you have provided a value of type list
of tuple.
Suggestions for how to implement this? The list of policies I’m attaching is hardcoded, so I could create three resource blocks. But surely there’s a better way!!
I came up with this, which is pretty ugly:
locals {
  combinations    = setproduct(["role1", "role2"], ["policy1", "policy2", "policy3"])
  combination_map = { for item in local.combinations : "${item[0]}-${item[1]}" => item }
}
resource "aws_iam_role_policy_attachment" "this" {
  for_each   = local.combination_map
  # each.key is the "role1-policy1" string; the tuple lives in each.value
  role       = each.value[0]
  policy_arn = each.value[1]
}
You can remove the intermediate variables:
resource "aws_iam_role_policy_attachment" "this" {
  for_each   = { for item in setproduct(["role1", "role2"], ["policy1", "policy2", "policy3"]) : "${item[0]}-${item[1]}" => item }
  role       = each.value[0]
  policy_arn = each.value[1]
}
Hi, I cannot come with a better solution than your last code block…
makes sense to me. note that you can multi-line it to make it more readable
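for instance, the same map comprehension spread over multiple lines:
resource "aws_iam_role_policy_attachment" "this" {
  for_each = {
    for item in setproduct(
      ["role1", "role2"],
      ["policy1", "policy2", "policy3"]
    ) : "${item[0]}-${item[1]}" => item
  }

  role       = each.value[0]
  policy_arn = each.value[1]
}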
2020-10-09
Hi @Erik Osterman (Cloud Posse), what is the best way to learn terraform and integrate it with a CI and CD pipeline?
I know you referred to Erik, but I’ll add what helped me: Terraform course in Udemy (not expensive at all) and a free account with AWS and CircleCI. (or TF Cloud, if you prefer)
That’s a great question! I don’t think I can answer you as comprehensively as I’d like to… I think @antonbabenko would probably able to direct you to the best learning materials.
Here’s my “high level” pitch:
• Learn by doing, not just by reading. First identify what you want to achieve (because you need a goal), then read and research enough to get started and go from there.
• Study our terraform modules. I’d like to think every single one of our modules is a reference example for how to design and implement composable, re-usable, testable modules.
• Get started early writing tests. It’s a hard habit to introduce later. We use terratest and every one of our modules has a simple example of that.
• HashiCorp has invested heavily in their online curriculum and even offers certifications now. Their docs are free, check them out here: https://learn.hashicorp.com/terraform
• For Terraform CI (github actions are sufficient to test). For a proper terraform CD workflow, I think your best bet is to start with a SaaS solution and learn from that. Your options are Terraform Cloud, Scalr, Spacelift, and maybe Env0 (haven’t checked these guys out yet). Terraform CD is non-trivial to do well. You can easily stick it in any pipeline, but a well-built terraform CD pipeline will have a terraform plan → planfile → approval → apply workflow. You’ll need to stash the planfile somewhere, and the planfile may contain secrets.
• Check out our weekly #office-hours → cloudposse.com/office-hours (podcast.cloudposse.com and youtube.com/c/cloudposse); they are free and you can ask questions and get answers from our community of experts.
• Hangout in watering holes like this one. You’ll learn a lot in a short amount of time.
That is a great and very detailed answer; it’s the one I usually give people myself, but in a shorter way. I personally see a lot of value in reading documentation from A to Z (or to K-Keywords) when I am learning something.
Also, looking at open-source projects and trying to contribute there is a very good learning point… though it requires more than one PR to start enjoying the process and seeing value in it.
You can also do workshop materials yourself for free - https://github.com/antonbabenko/terraform-best-practices-workshop
Terraform Best Practices - workshop materials. Contribute to antonbabenko/terraform-best-practices-workshop development by creating an account on GitHub.
thanks a lot
Anyone have a solution to making use of variables in backend.tf to define statefiles via tfvars?
Yes/no. Strictly speaking, it’s not possible.
You have a few options though. Terraform supports passing backend parameters via environment variables. We used to use this extensively.
The other option is that you can easily generate your backend.tf as a JSON file (e.g. backend.tf.json). This is the route we’re taking today because it’s easier for developers to understand.
Got any examples of that first solution?
Transform environment variables for use with Terraform (e.g. HOSTNAME ⇨ TF_VAR_hostname) - cloudposse/tfenv
I don’t have any issues with doing it as an env var
We used this simple cli to make it easier
Oh, and lastly, #terragrunt can also do this for you. But if you’re not using it, it’s a heavy handed solution for simply managing the backend config.
Yeah, my intention here is to keep things as simple and straightforward as possible as it’s fairly minor in usage
Here was our original post on this pattern: https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/
One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…
Here’s the terraform docs https://www.terraform.io/docs/commands/environment-variables.html#tf_cli_args-and-tf_cli_args_name
Terraform uses environment variables to configure various aspects of its behavior.
SweetOps Slack archive of #terraform for January, 2019. Discussions related to Terraform or Terraform Modules
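both approaches lean on Terraform’s partial backend configuration; a minimal sketch (the bucket/key values are placeholders, supplied at init time or via environment variables):
# backend.tf: intentionally left partial; values arrive at init time, e.g.
#   terraform init \
#     -backend-config="bucket=my-state-bucket" \
#     -backend-config="key=env/dev/terraform.tfstate" \
#     -backend-config="region=us-east-1"
terraform {
  backend "s3" {}
}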
Finally got some interest in a gitops workflow for security group whitelisting. I can plug this in with terraform cloud. Would like to know if there is anything someone has done to post the preview of changes from terraform cloud as a comment on GitHub pull requests. The person that will approve the pull requests doesn’t have access to terraform cloud, so I’d like to show the plan output similar to Atlantis, directly in the pull request.
Haven’t seen anything for that - not sure if it’s possible natively. Of course, anything can be engineered using APIs, etc. I don’t think that’s your goal though?
Right. I’m trying to ensure it’s as accessible as possible to reduce friction while still promoting a clean git history and workflow. I’ll see what I can swing, just was hoping for something. I might preview the terraform command cli library I used for packer as it calls the api without terraform cli directly. It might be able to help.
You could have Atlantis just comment the plan on a per-PR basis and have tf cloud execute it
Sounds like more infra to manage. :-) I found a github action and with terraform api and libraries I bet I could figure out how to extract this from terraform cloud. Wish me luck
2020-10-10
I have adopted your label Terraform modules and your namespace-environment-name-attribute naming style. I have added an additional naming rule: when the resource is global, such as S3 buckets, I specify namespace, using an abbreviation for my company name (sometimes adding my department). When the resource is not global, such as for EC2 instances, I omit namespace. What do you think of my additional rule?
if it suits your needs, and the names are consistent and unique, should be fine
Thank you for your reply. Yes, it does seem to work fine…so far. I’m trying to see if anyone out there can see a flaw in my plan that I cannot.
we use namespace to identify the company. We add it to all the resources for consistency
We use company-prd(or stg etc)-region(or global)-thing.
At the risk of wearing out my welcome today, I’ll ask another question. I just can’t find any documentation, only lots of examples on GitHub. What is module.this?
this is our standard pattern to provide standard inputs to all modules, and simplify module instantiation
so now all standard inputs (those that we use in all modules) are in one place (in context.tf)
you don’t have to provide namespace, environment, stage, name when calling modules https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L57
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
you can override any of the standard inputs anytime, e.g. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L7
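the shape of the pattern, as a trimmed sketch (version pins omitted; the namespace/stage/name values are illustrative):
# context.tf drops a module "this" (the null label) into every module and example,
# so the standard inputs are declared once and forwarded everywhere
module "this" {
  source    = "cloudposse/label/null"
  namespace = "eg"
  stage     = "test"
  name      = "cluster"
}

module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws"

  # all the standard inputs (namespace, stage, name, tags, ...) flow through at once
  context = module.this.context
}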
Thank you for your answers. I like your label modules and this is a great application.
2020-10-12
Hi, what’s the best way to output everything from a terraform module?
for example if I have module "alb" {} I can use
output "alb" {
value = module.alb
}
Yes you can output the entire module as an output
The module that uses the above module would then be able to reference the output using module.whatever.alb.whateveroutputname
but that makes accessing outputs a bit strange:
listener_arn = data.terraform_remote_state.alb.outputs.alb.alb_https_listener_arn
Hello. Can anyone suggest a workaround for managing a WAFv2 ACL with Terraform when the ACL is nested more than 3 levels, please? https://github.com/terraform-providers/terraform-provider-aws/issues/15580#issuecomment-706613897
This issue was originally opened by @jpatallah as hashicorp/terraform#26530. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version terra…
We ran into this bug and the conclusion was “we can’t”. The WAFv2 resources are a bit hacky; as I’m guessing you found out, they can’t be infinitely nested the way the AWS API supports
There is a depressing discussion in one of the tickets about this limitation, where it was suggested to convert the Terraform resource representation from HCL blocks to inline JSON. The hashicorp response was “this would fix the problem but it’s ugly, so we won’t do it”
Thanks, Alex!! That’s good to know.
Hi everyone. I’m running into an issue trying to redeploy CloudTrail at the organizational level, but keep running into an issue getting the module to apply successfully. The credentials that I am using have administrator access, but I keep running into a permissions issue. Anyone have any idea?
module "aws_cloudtrail" {
source = "cloudposse/cloudtrail/aws"
version = "0.11.0"
module.aws_cloudtrail.aws_cloudtrail.default[0]: Creating… Error: Error creating CloudTrail: InsufficientEncryptionPolicyException: Insufficient permissions to access S3 bucket cloudtrail-bucket or KMS key <<KMS_ARN>> on .terraform/modules/aws_cloudtrail/main.tf line 13, in resource “aws_cloudtrail” “default”: 13: resource “aws_cloudtrail” “default” { Releasing state lock. This may take a few moments… ERROR: Job failed: exit code 1
found out what that error message is.
*InsufficientEncryptionPolicyException* This exception is thrown when the policy on the S3 bucket or KMS key is not sufficient. HTTP Status Code: 400
please look at this module for the required permissions https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket
S3 bucket with built in IAM policy to allow CloudTrail logs - cloudposse/terraform-aws-cloudtrail-s3-bucket
and this is a working example of using both Cloudtrail and Cloudtrail bucket modules https://github.com/cloudposse/terraform-aws-cloudtrail/blob/master/examples/complete/main.tf
Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs - cloudposse/terraform-aws-cloudtrail
thank you @Andriy Knysh (Cloud Posse)
Anyone know how to add pagerduty subscribers to an escalation policy/service in pagerduty? I can’t figure out the provider for this. I tried adding someone and they weren’t able to be an observer because of “admin cannot take team role observer”, despite the fact they aren’t an admin in this service.
not quite sure what you’re asking, but does this answer your question?
data pagerduty_user default {
email = "[email protected]"
}
resource pagerduty_escalation_policy default {
name = "My Escalation Policy"
teams = [
data.pagerduty_team.default.id
]
# Notify Alex
rule {
escalation_delay_in_minutes = 5
target {
id = data.pagerduty_user.default.id
}
}
}
you can then create a pagerduty_service resource and specify the escalation policy’s id as the value for escalation_policy
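i.e. something like this (service name illustrative):
resource "pagerduty_service" "default" {
  name              = "My Service"
  escalation_policy = pagerduty_escalation_policy.default.id
}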
I want to add the subscribers only, not the actual responders. Basically, I see a lot of cc’d folks in chat when activity is on pagerduty already. I’d like to be able to define the subscriber/observer list so they get updated on new incidents but not as a responder.
gotcha. We wanted this too. It’s difficult with how Pagerduty works. What you are after is to set a response play on your service
sadly, this isn’t available via the Terraform provider yet
I saw we found the same GitHub ticket @sheldonh
Great minds think alike
2020-10-13
locals {
  sns_list = toset(["first", "second", "third"])
}

# SNS
resource "aws_sns_topic" "this" {
  for_each = local.sns_list
  name     = "${each.key}-${var.environment}"
  tags = merge(
    {
      Environment = var.environment
      Terraform   = "true"
    },
    var.tags
  )
}
data "aws_iam_policy_document" "this" {
for_each = local.sns_list
policy_id = "__default_policy_ID"
statement {
actions = [
"SNS:Publish",
"SNS:GetTopicAttributes"
]
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
aws_sns_topic.this[each.value].arn,
]
sid = "__default_statement_ID"
}
depends_on = []
}
resource "aws_sns_topic_policy" "this" {
for_each = local.sns_list
arn = aws_sns_topic.this[each.value].arn
policy = data.aws_iam_policy_document.this[each.value].json
}
Error: Invalid index

  on main.tf line 150, in resource "aws_sns_topic_policy" "this":
 150:   policy = data.aws_iam_policy_document.this[each.value].json
    |----------------
    | data.aws_iam_policy_document.this is object with 2 attributes
    | each.value is "third"

The given key does not identify an element in this collection value.
Hi all, does this look like a bug? This works ok if I create from scratch, but if I want to change a resource name or add a new one, it will complain with this error.
Anyone got a pro/con of Terraform Cloud vs e.g. Terraform + Atlantis? Other than pricing. What gotchas are there with TF Cloud? Easy to move infra into, hard to move out of/workspaces/runners etc…..
• Atlantis lacks triggers, you cannot have one workspace trigger another. TFC does.
• Atlantis lacks webhooks, so you cannot easily integrate it with other pipelines or continuous delivery frameworks. TFC does.
• Atlantis can only be self-hosted. TFC is also a SaaS.
• TFC cannot comment on PRs (Sometimes this is nice). Atlantis can.
• Moving in/out of TFC is equally difficult.
• TFC does not let you bring your own container without using on-prem runners.
Atlantis was a phenomenal first stab at gitops with terraform. It was pioneering software in a time when most terraform was run locally. However, as release engineering has evolved, there’s a clear need for coordinated rollouts spanning tool chains (standard app deployments, db migrations, serverless deployments, infrastructure deployments, etc). Coordinating this via multiple disjoint PRs is not scalable as teams grow. Thus the atlantis flavor of gitops (operations by github comments) isn’t as optimal in larger team environments with lots of services.
• Atlantis lacks triggers, you cannot have one workspace trigger another. TFC does. How does this work? workspace 1 triggers a plan/apply of workspace 2 after workspace 1 has been applied?
won’t auto apply though
Gotcha
Cheers @Erik Osterman (Cloud Posse), hope you’re good!
yes! had many fun projects this year…. the tfc being among them
@Erik Osterman (Cloud Posse) if atlantis supported functionality similar to TF cloud for workspaces, plus commented back on the PR, would that be enough to make atlantis better?
I wonder if TFC workspaces are lacking features that would be nice to have, but since I do not use it I’m unaware
Main thing I think Atlantis is missing is triggering projects via API and not just PR
How do TFC workspaces work differently from normal workspaces? Guessing this means once you’re in TFC land, there is no easy getting back out…
I should probably have a poke myself…
Also with Atlantis ALL THE COMMENTS, 0.14 with concise diffs should help this though
we also have 2 forthcoming PRs for TFE/TFC and for the TFC agents.
there is an intention to add api calls
but the lack of more active maintainers makes it pretty slow
that’s cool! hadn’t seen that PR. it’s from back in august though… not optimistic.
maybe it’s time to fork atlantis into another project with another name for good
heh, call it atlanta
peplantis
lol
That answer should be pinned imho.
Good rundown, @Erik Osterman (Cloud Posse)
Any idea how tfc enterprise pricing works? Experimenting with the tfc free plan, and secret management across multiple workspaces is toilsome considering we have 50-odd state files (directories -> tfc workspaces). An agent like your cloudposse module is ideal. No access/secret keys..
no one knows about prices; that is a huge problem with Hashicorp
@kskewes we found out a few things, and trying our best to publish at https://remotebackendpricing.com/ (pending addition of best community understanding of Hashi prices)
First, Terraform Enterprise is priced on number of workspaces, and as a product is being phased out in favor of Terraform Cloud (Business Tier).
Second, Terraform Cloud (Business Tier) is priced per user, plus a fee for a number of applies per month, plus a fee for the number of concurrent runs you want.
Third, they discount / bundle things into an offering based on their guessed ability to pay, i.e. your funding level, or size of your company, which is why prices fluctuate so much from one company to the next.
If you get a quote and want to help others know pricing, PRs are welcome at https://github.com/Scalr/remote-backend-pricing
Cheers, sounds like we need to contact the sales team. FWIW, I see private scalr pricing is also hidden - is there a runner we can use with the published pricing?
so basically they have a way to screw over big customers because they have more money, instead of making the prices transparent
@kskewes price will be published before the end of next week
When working at EA I remember we asked about Vault enterprise and when we told them we were EA the tone in the conversation changed and they sent us a quote for 2 Million a year
(and yes it pains me to not have it out in the open yet)
Cool cool, no biggie. I understand that at some point customers will engineer ways to reduce cost by circumventing published processes - shared users/etc.
TFC almost looks like it could be per-minute pricing like Gitlab CI minutes
maybe plus per user that needs console access/etc, and if need be some base fee. But I have no idea of their costs and constraints, just throwing something out there based on a similar “remote execution as a service”.
“plus a fee for a number of applies per month” any idea what this is?
You mean how much it is?
Yes
Atlantis is the hotness
what provisioner is running in that null_resource?
Not sure if this is the right place to post this, but I was curious if I can get some eyes on: https://github.com/cloudposse/terraform-aws-eks-node-group/pull/36
what Surface variable for boolean flag of launch_template_disk_encryption Use launch_template_disk_encryption to flip flag of generated launch_template.ebs.encryption why Allow EBS encryption r…
Your best spot for these is #pr-reviews. But happy to check it out.
Thanks
@Cody Moore unfortunately, it looks like there are issues with running our tests against this repo due to the module targeting 0.13 but we’re using 0.12 to run the tests. This is the first I’ve seen this, so I’ve brought it up with the rest of the contributor team. We’ll get this merged once I can get that sorted out and those tests pass.
Awesome thanks!
We’ve released the first version of our terraform module for the Terraform Cloud (TFC) agent for kubernetes: https://registry.terraform.io/modules/cloudposse/tfc-cloud-agent/kubernetes/latest
This enables Terraform Cloud for Business users to run their plans using a custom docker image (or the official one), as well as using IRSA
2020-10-14
ANYONE watching the keynote????? HashiConf Digital thread here
HCP Consul
Consul 1.9
(this is great! keep ‘em coming)
Zero trust focused products
Watching, but not stoked on anything yet.
What Armon is talking about now is interesting though..
Simple and secure remote access — to any system anywhere based on trusted identity.
haha #called-it
Now I’m excited.
Seems like AWS SSM Session Manager, but cloud agnostic.
Though it does more protocols. Looks like it can do anything TCP (redis, postgres, ssh, etc.) https://www.boundaryproject.io/
Boundary is an open source solution that automates secure identity-based user access to hosts and services across environments.
Yeah, which will be sweet! I can throw away my ugly make target that creates a SSM session to port forward database access.
I love Zero Trust
but one of the principles of Zero Trust is the identification of the machine/computer itself, and without a way to catalog, for example, a user’s laptop, there is no way to reconcile the identity of the machine after a registry change (because new software was installed), which is key for this concept to work. The way we reconcile instances is by using packer and other tools to build AMIs, or by using containers, but what about a user’s phone or laptop?
oh man and we just signed with banyan’s zerotrust solution
oh well. it’s not like boundary is the first open source zt solution
Ask @Erik Osterman (Cloud Posse) about open source IDPs
lol
Curious how it compares to https://gravitational.com/teleport/
Teleport allows you to implement industry-best practices for SSH and Kubernetes access, meet compliance requirements, and have complete visibility into access and behavior.
Nomad goes 1.0 on the 27th, and namespaces will be open source in 1.0
really nice keynote for nomad
Got Boundary up and running in AWS and was able to connect from my laptop
so basic functionality seems to work fine
though it’s missing 3rd-party IdP support
for now only local user/password
nomad is free? I never used it and I thought it was all paid
it is free, and there is an enterprise version that has some additional features
Namespaces in OSS is great news
boundary looks great - it covers off (on the label) a bunch of the use cases i’m looking at for access control atm
Next Steps
For Boundary's upcoming releases, we have 3 key product themes we're focused on delivering:
Bring your own identity. We feel strongly that Boundary's identity-based controls should use the same identity that users have for their other applications. To do so, we'll progressively add support for new auth methods for Boundary. Our first step will be in delivering an OpenID Connect (OIDC) auth method.
Just-in-time access. A just-in-time access posture will be enforced at multiple levels within Boundary. Upcoming releases will offer integration with Vault or your preferred secret management solution of choice to generate ephemeral credentials for Boundary sessions.
Target discovery. To manage dynamic infrastructure users will need a way to discover and add newly provisioned hosts to targets while enforcing existing access policies on new instances. With Boundary 0.1, you can provision these targets and access policies dynamically with the Boundary Terraform provider. In the releases following launch we'll give administrators the ability to define dynamic host catalogs to discover new hosts based on predefined rules or tags for Consul, each of the major cloud platforms, and Kubernetes.
New Keynote is going on now and the website for the new product is up!
HashiCorp Digital Day 2 — Continuing yesterday’s thread
it is something else
seems to be a replacement for the bash scripts we’re all making to glue things together
but let’s see
Yeah. It’s still push-based which is interesting considering that the bleeding edge now is GitOps where you have something running in the cluster, pulling the latest info about what to deploy
I like that it’s using Buildpacks
remember otto ?
will waypoint go like otto ?
Please keep mentioning otto (in the comments). @RB What is that?
it’s an old very dead hashicorp project
almost like waypoint is a rebranded otto
I’m late , waypoint is like atlantis?
ohhhh it’s a ci-cd…..
another ci tool
It seems to be only a CD tool. You do execute builds with it, but it isn’t for CI.
So you might execute a waypoint up as part of your CI tool, it seems.
Otto looked cool at first when it appeared, but then people soon realised how utopian it was))
I’m not sold yet. This is an area where people do a lot of customizations to fit their workflows
so let’s see the deep dive section
also, not sure what the game plan is there. As a company you’re supposed to make money. I can understand tools like packer that do not provide any profit but are useful for the company itself. This one is something else…
and not really their area
hm yah they’ve already got a github action for it in fact
waypoint entrypoint = ngrok but opensource
One of the Terraform 0.14 major functionality targets is “Concise Diff” — I like the sound of that.
+1
waypoint up, links to test envs, test to terraform - it looks like all those releases are Pulumi inspired
trying to take away reasons to move from HashiCorp ecosystem
“test to terraform” — What’re you referring to there? What’d I miss? I tuned out after 2-3 sessions.
terraform 0.14 will have a test provider that you can use to run HCL-defined tests for terraform modules
Damn, sounds awesome… how did I miss that.
Did you see any information released about that?
heh, sigh… exciting news about the test provider. but my level of interest in rewriting all of our tests is zero
I couldn’t find any info in writing. They showed it as an experimental feature during the presentation. Probably more info will come later
waypoint i think you can think of like a super powered hcl makefile
personally i’m keen to start playing with it because i’ve typically found makefiles to be non-transparent to those that are unfamiliar with them
nice! Thanks for sharing @Erik Osterman (Cloud Posse)
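For anyone who hasn’t looked yet, a waypoint.hcl from the launch-era docs is a small sketch like this (“pack”, “docker”, and “kubernetes” are the documented built-in plugins; project/app names here are hypothetical):
project = "example-project"

app "web" {
  build {
    # Cloud Native Buildpacks, no Dockerfile needed
    use "pack" {}

    registry {
      use "docker" {
        image = "example/web"
        tag   = "latest"
      }
    }
  }

  deploy {
    use "kubernetes" {}
  }

  release {
    use "kubernetes" {}
  }
}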
@antonbabenko talks about the testing provider https://youtu.be/nphb0utdKEY?t=666
IMO looks like it is complementary to terratest
In its current form - yes, I agree, but later the Terraform testing framework (to be developed) should be able to check everything that is in the tfstate, and allow running a “plugin” which does, via HCL, a subset of the assertions that can currently be done in terratest.
Let’s come back to this in 2-3 months. ;)
Watched the walkthrough — good stuff! Happy to get some more information on that subject. I could definitely see that pattern being valuable. If that was built first class into Terraform instead of through a provider then that would be sweet. I’ll wait for 0.15.
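For reference, this is roughly the shape the experimental HCL-based testing took per the experimental docs that followed: a test configuration instantiates the module and declares assertions with the builtin test provider. Treat it as illustrative only; module and output names here are hypothetical and the feature was explicitly subject to change:
# tests/defaults/test_defaults.tf (hypothetical module and output names)
terraform {
  required_providers {
    test = {
      source = "terraform.io/builtin/test"
    }
  }
}

module "main" {
  source = "../.."

  name = "test"
}

resource "test_assertions" "outputs" {
  component = "outputs"

  equal "name" {
    description = "the name output echoes the input"
    got         = module.main.name
    want        = "test"
  }
}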
Hey does anyone know how to see the user_data in a plan with terraform 0.11:
-/+ aws_launch_configuration.master-us-east-1c-masters-... (new resource required)
id: "master-us-east-1c.masters....648338700000004" => <computed> (forces new resource)
...
user_data: "39e5e6f604706....43600e3513aaa2616" => "093492cc54eea....7c0a89df99fa72783" (forces new resource)
It is one of the reasons for the new resource, so just seeing the hash of the script is not very helpful.
userdata is base64 encoded
so whatever value you get you will need to decode it
I tried base64 -d and putting in the value, but the result is nonsense. I’m thinking the user_data number shown is a hash into some table of text data. Maybe that data is base64 encoded.
maybe it is truncating it
ohhhh that is on a plan?
mmm I think the output will be truncated, I guess you will have to compare at git level
This user_data is generated by kops. I just noticed (first time!) that a folder gets created in the module, /data, that has the scripts created by kops, and those are referenced by user_data in the launch configs. Indeed we do save those in git, so I was able to see what changed. It would have been nice to get such a diff in the terraform plan, but git diff is better than just the hash I was seeing before!
Thanks for your help @jose.amengual
np
2020-10-15
v0.14.0-beta1 Version 0.14.0-beta1
oh boy, we’re on to betas already?
v0.14.0-beta1 0.14.0 (Unreleased) NEW FEATURES:
terraform init: Terraform will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (#26524) If you wish to retain the previous behavior of always taking the newest version allowed…
This follows on from some earlier work that introduced models for representing provider dependency "locks" and a file format for saving them to disk. This PR wires the new models and beha…
well that’s an interesting feature… @Erik Osterman (Cloud Posse) @antonbabenko plays into the versioning discussion from office hours yesterday… https://github.com/hashicorp/terraform/pull/26524
that’s neat! thanks for pointing it out.
wow, beta1 already??
Ever since Terraform has had support for installing external dependencies (first external modules in early Terraform, and then separately-released providers in Terraform 0.10) it has used a hidden directory under .terraform as a sort of local, directory-specific “lock” of the selected versions of those dependencies. If you ran terraform init again in the same working directory then Terraform would select those same versions again. Unfortunately this strategy hasn’t been sufficient for today’s m…
The first iteration of this for Terraform 0.14 covers only provider dependencies. Tracking version selections for external modules will hopefully follow in a later release, but requires some deeper design due to Terraform’s support for a large number of different installation methods for external modules.
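For the curious, the generated file is .terraform.lock.hcl in the configuration directory, and its entries look roughly like this (checksums elided):
# .terraform.lock.hcl, generated by "terraform init" and meant to be committed
provider "registry.terraform.io/hashicorp/aws" {
  version     = "3.11.0"
  constraints = ">= 2.0.0"
  hashes = [
    # one or more "h1:"/"zh:" checksums recorded here
  ]
}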
@Eric Berg heads up: our opsgenie module is now updated with support for services and teams
Today we’ve released Terraform 0.14.0-beta1, which marks the start of the prerelease testing period for Terraform v0.14. We plan to publish at least one more beta release and one release candidate before the final 0.14.0. During this period, we’d be very grateful if folks could try out the new features in this release and let us know if you see any unusual behavior. We do not recommend using beta releases in production. While many of these features have seen some alpha testing prior to these b…
quick release cycle compared to 0.13
they are just reminding us that anything that is painful, we should do more of until it’s not painful any more
e.g. it’s painful to update core version pinning on 100s of modules
0.13 was a beast of an overhaul of internals
a lot of the work done in 0.13 was done to make future work easier - it was a bit of a big techdebt clean up from what i understand
i recall a few people asking about creating acm dns-validated certs with tf, but can’t recall who… we worked on this a while back and encountered some limitations requiring janky/hacky workarounds in tf 0.12 and v2 of the aws provider. just updated today for tf 0.13 and v3 of the aws provider, and now it seems pretty solid. we can now handle multiple SANs, proper resource cycles, no occasional random diff on future plans, etc… here’s the updated module we’re using, very straightforward now… https://github.com/plus3it/terraform-aws-tardigrade-acm/blob/master/main.tf
if anyone has feedback or sees any patterns we can improve, please let me know! (or open a pr )
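For anyone skimming the thread, the heart of that module is the DNS-validation pattern the aws provider v3 upgrade guide documents: a for_each over domain_validation_options (now a set). A condensed sketch, with variable names assumed:
resource "aws_acm_certificate" "this" {
  domain_name               = var.domain_name
  subject_alternative_names = var.subject_alternative_names
  validation_method         = "DNS"
}

resource "aws_route53_record" "validation" {
  # one validation record per domain, keyed by domain name (set-safe in aws v3)
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => dvo
  }

  zone_id         = var.zone_id
  name            = each.value.resource_record_name
  type            = each.value.resource_record_type
  ttl             = 60
  records         = [each.value.resource_record_value]
  allow_overwrite = true
}

resource "aws_acm_certificate_validation" "this" {
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
}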
It would be pretty neat to export the cert ARN as a thing of its own, instead of the entire certificate object. My IntelliJ can’t really see that the object will be of the cert type, and thus can’t do autocompletion for it
oh that’s interesting. i’ve never relied on autocompletion in modules, so never occurred to me. seems like a pain to keep up with and document all the attributes someone might want though
i wonder if autocompletion of object attributes is something the language server might be able to offer
found an issue pretty close, asked for clarification on this specific use case… https://github.com/hashicorp/terraform-ls/issues/93#issuecomment-710057056
I could not find any issues or references to it, but does autocompletion work from modules? We use a lot of them, and it does not seem to work for us.
2020-10-16
Hello,
i have added the deployment timestamp in a terraform tag:
current_date = formatdate("YYYYMMDD hh:mm:ss", timestamp())
is there a way to tell terraform plan to not display this change at each plan?
Is there a way to limit the @ignore_changes to just a single tag, e.g. a VERSION tag, as opposed to every single tag for a particular resource? I have a VERSION tag which is updated regularly by Jenki…
thanks! the information will be useful, but the point was to keep the change on the timestamp tag quiet, not to prevent it.
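Note that ignore_changes both silences and prevents the update, so it is stronger than just quieting the plan; still, for reference, since 0.12 it can target a single map key (a minimal sketch; the resource and tag key are hypothetical):
resource "aws_instance" "example" {
  # ...ami/instance_type elided...

  tags = {
    deployed_at = local.current_date
  }

  lifecycle {
    # only this one tag is ignored; changes to other tags still appear in plans
    ignore_changes = [tags["deployed_at"]]
  }
}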
Hi I have just found your terraform-aws-tfstate-backend module and I have a newbie question.
I have followed the documentation to create a remotely managed state with s3. Works well on the environment that initially ran the terraform script.
But I am trying to figure out how to bootstrap a fresh environment, on a different computer.
When I try, and run terraform plan, terraform keeps on trying to recreate the existing bucket/dynamodb table, etc.
is there another way than downloading the tfstate file and dropping it into the current directory?
Hello @Luis Muniz,
if you switch from one tfstate to another you should use terraform init -reconfigure
if the point is fetching the remote state again, terraform refresh will get the update.
by default terraform plan will do a refresh
Hi, thanks for replying Pierre-Yves. The issue is that the backend is not configured, because the module wants to add the s3 bucket
i’m stuck in this pre-initialization limbo
refresh does not do anything, because the backend has not yet been switched to s3
it’s stuck in the local backend, when I run the plan, it tries to create an s3 bucket and dynamo tables that already exist
refresh does not do anything, because the backend has not yet been switched to s3
that’s something you do by editing the .tf file and updating the backend
exactly. if I have resources in two terraform states, to switch from one to another I do: terraform init -reconfigure -backend-config=
Thanks, I was misunderstanding how to use the module, and trying to re-generate the terraform.tf file (it was not under source control)
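For anyone bootstrapping the same way: once the bucket and lock table exist, a fresh machine only needs the backend stanza committed in the repo, and terraform init then attaches to the existing state in S3 instead of trying to recreate the resources (names here are hypothetical):
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tfstate-lock"
    encrypt        = true
  }
}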
Latency and availability issues Oct 16, 18:14 UTC Monitoring - We’ve rolled out some changes to the URL service that we expect to stabilize it, but it is running in a partially degraded state so there may still be latency or other minor delays.Oct 16, 18:06 UTC Update - Following user reports on GitHub, we’ve identified an issue with the Waypoint URL service that occasionally results in high latency for connections to waypoint.run hosts, as well as scenarios where a new deployment will return the “Couldn’t find a Waypoint…
HashiCorp Services’s Status Page - Latency and availability issues.
Waypoint URL Service latency and availability issues Oct 16, 18:14 UTC Monitoring - We’ve rolled out some changes to the URL service that we expect to stabilize it, but it is running in a partially degraded state so there may still be latency or other minor delays.Oct 16, 18:06 UTC Update - Following user reports on GitHub, we’ve identified an issue with the Waypoint URL service that occasionally results in high latency for connections to waypoint.run hosts, as well as scenarios where a new deployment will return the “Couldn’t find a Waypoint…
HashiCorp Services’s Status Page - Waypoint URL Service latency and availability issues.
early adoption signs, which is good lol
First-class problems definitely a good sign
Hi all, our customer onboarding process will in the future require launching the infrastructure in a segregated VPC just for that customer. We’ll be using terraform to automate deploying the infra, but we don’t have a devops person just yet, so I need help architecting the best possible solution for maintaining state files per customer from an operations perspective. The terraform code won’t change often, so: would you use a master directory holding the tf code and a workspace per customer for segregating the customer state files, or more like a repo per customer in a git server and then trigger deployments from those repos? I’m just thinking out loud, since I don’t know the standard practice for this use case
if you are going to use TF cloud, then use workspaces, but if you do not then you could use the same repo with multiple backend configs or a repo per customer
it depends on how similar your customers are
It also depends on what will happen when you have TF code updates. Will you update all customers, or find yourself with different customers running different versions?
I’d also point out that versioning the parent code, and calling those specific versions for your customer execution plans, is important. Gives you an easy mechanism to perform selective applies.
@Yoni Leitersdorf (Indeni Cloudrail) yes sir , I can think of use cases where customers might have different versions running @Chris Wahl thanks for pointing that out
@jose.amengual thanks for always replying and helping me. Can you provide some more insight into how to approach rolling out updates for a customer if we only have one repo? Will the concept of having different backend configs work? And can you think of any scalability issues with this?
this question has many ways to answer it right
I will tell you my preference
for example lets say you have customer with wordpress+mysql
you have a module to build wordpress and another one for mysql
if you want to manage customers you could have one repo for all your customers that have that combination
and you could have a tree structure like
terraform-aws-wordpress-mysql
clients
- clientA
-- main.tf
-- variables.tf
-- backend.tf
and then you instantiate the main.tf per customer using the modules you already have (using versions)
and you init terraform by doing terraform init --backend-config=clientA/backend.tf
obviously you might have more parameters, like vars file etc
that is one way
you can have a backend.tf per customer, or a backend file for all customers that match the criteria wordpress+mysql
but that will get real messy if you have many customers with slightly different configs
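a minimal sketch of that per-client layout (module sources and versions here are hypothetical):
# clients/clientA/main.tf, instantiating the shared modules at pinned versions
module "wordpress" {
  source = "git::https://github.com/example-org/terraform-aws-wordpress.git?ref=tags/1.2.0"

  name = "clientA"
}

module "mysql" {
  source = "git::https://github.com/example-org/terraform-aws-mysql.git?ref=tags/2.0.1"

  name = "clientA"
}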
I do not like that much, I like the CloudPosse way where there are modules and meta modules
so I will prefer to have a mysql+wordpress module and have a meta module terraform-aws-wordpress-mysql
that uses those other modules and is flexible enough to accept slightly different configs per customer, and you can use the same way to init the module for all your customers and use a custom backend.tf file that can be templated by an automated job, with the data coming from a key/value store or db or whatever
just like this module https://github.com/cloudposse/terraform-aws-ecs-web-app
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
and if you need environments per customers you can have another module for the environment itself that will call the meta module etc and so on
it’s like a chain of module dependencies
but the shallower the dependency chain the better
that is my humble opinion
there must be others that have done this for customers/many teams with different workflows
Thanks again
Hi @nileshsharma.0311 @jose.amengual, I am using one project and one tfstate per tool, and one tfstate for each env. This makes for smaller tfstates and limits access to a project to only the allowed people
yes, that is another thing too, the bigger the state the longer deployments take and the bigger the blast radius
CLI tool that checks Terraform (0.10.x - 0.12.x) code for module updates. Single binary, no dependencies. linux, osx, windows. #golang #cli #terraform - keilerkonzept/terraform-module-versions
Waypoint URL Service latency and availability issues Oct 16, 22:26 UTC Resolved - This incident has been resolved. The Waypoint URL service should be functioning without issue.Oct 16, 18:14 UTC Monitoring - We’ve rolled out some changes to the URL service that we expect to stabilize it, but it is running in a partially degraded state so there may still be latency or other minor delays.Oct 16, 18:06 UTC Update - Following user reports on GitHub, we’ve identified an issue with the Waypoint URL service that occasionally results in high latency for…
HashiCorp Services’s Status Page - Waypoint URL Service latency and availability issues.
New #hashicorp #terraform releases this week:
- terraform v0.14.0-beta1
- terraform-aws-provider v3.11.0
Links: - https://lnkd.in/ev8r4Ka - https://lnkd.in/eAeP8fn
2020-10-17
#azure #kubernetes #terraform was anyone able to enable admin group object IDs using terraform on an Azure AKS cluster?
dynamic "role_based_access_control" {
  for_each = list(coalesce(each.value.rbac_enabled, false))
  content {
    enabled = role_based_access_control.value
    dynamic "azure_active_directory" {
      for_each = var.ad_enabled != false ? list(var.ad_enabled) : []
      content {
        managed                = true
        admin_group_object_ids = var.admin_group_object_ids
      }
    }
  }
}
it throws the following error
Error: Missing required argument
on ....\modules\Kubernetes\main.tf line 140, in resource "azurerm_kubernetes_cluster" "this": 140: content {
The argument "server_app_secret" is required, but no definition was found.
Error: Missing required argument
on ....\modules\Kubernetes\main.tf line 140, in resource "azurerm_kubernetes_cluster" "this": 140: content {
The argument "client_app_id" is required, but no definition was found.
Error: Missing required argument
on ....\modules\Kubernetes\main.tf line 140, in resource "azurerm_kubernetes_cluster" "this": 140: content {
The argument "server_app_id" is required, but no definition was found.
Error: Unsupported argument
on ....\modules\Kubernetes\main.tf line 141, in resource "azurerm_kubernetes_cluster" "this": 141: managed = true
An argument named "managed" is not expected here.
Error: Unsupported argument
on ....\modules\Kubernetes\main.tf line 142, in resource "azurerm_kubernetes_cluster" "this": 142: admin_group_object_ids = var.admin_group_object_ids
An argument named "admin_group_object_ids" is not expected here.
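FWIW, those “Unsupported argument” errors are what an azurerm version without managed AAD support produces. On an azurerm 2.x release that does support it, the block is expected to look roughly like this (a sketch, untested here; variable names taken from the snippet above):
resource "azurerm_kubernetes_cluster" "this" {
  # ...name/location/node pool arguments elided...

  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed                = true
      admin_group_object_ids = var.admin_group_object_ids
    }
  }
}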
2020-10-18
Hi, I have been trying to use the terraform AWS Elastic Beanstalk environment module, but I have an issue regarding S3 bucket creation. I have tried different names but it still does not work. Below is the error when running terraform apply. Please help
Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
status code: 409, request id: A97AF26F4D7B367B, host id: 5VvycnT8xomwlLocjMwYlFK7cFAQ8JWFXgTVQ9Y/uz4e17aOnLY4In0dxiLg9enmSDiNQ1u9fek=
on .terraform/modules/elastic_beanstalk_environment/main.tf line 935, in resource "aws_s3_bucket" "elb_logs":
935: resource "aws_s3_bucket" "elb_logs" {
What are the bucket names you’ve tried? Can you use a name_prefix instead?
Bucket names are global and must be unique
Unless someone is actively trying to sabotage you, they are usually free..
I am using the v0.31.0 module from https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment and the error line is from the main.tf in that module.
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
I see. I guess your name, namespace etc combination clashes with what someone else is already using then. I think that’s probably something that should be made overrideable in that module.
Any recommended reading on VPC best design practices? I have one account where 90 of the various apps all use the same vpc. Noticing that a lot of ECS, EKS stuff expects its own vpc.
I’ve avoided using some public open source modules because of that actually
some of them just assume you get a new vpc everytime
That’s the main thing that slowed me down trying to figure out all the right subnets to use. Terraform wasn’t used to manage the VPC deployment so it wasn’t quite so straightforward to quickly deploy
we have specific vpcs and just use data lookups on them by name in all our modules
I tried that and it was mostly ok, but one had duplicate names. Maybe I could get the subnets dynamically by filtering on the public/private attribute
yup! we do that too. Add a tag to the subnet ‘tier’ and look it up that way
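e.g. something like this (the “tier” tag scheme is an assumption; any consistent tag works):
# look up the VPC by name, then its private subnets by tag
data "aws_vpc" "main" {
  tags = {
    Name = "main-vpc"
  }
}

data "aws_subnet_ids" "private" {
  vpc_id = data.aws_vpc.main.id

  tags = {
    tier = "private"
  }
}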
One of the guides that’s good to read: https://gruntwork.io/guides/networking/how-to-deploy-production-grade-vpc-aws/ Another good topic is the placement of ec2/ecs/k8s/lambda etc. and interconnecting it. Maybe the aws guide for architect pro will have some answers for you?
Learn how to configure subnets, route tables, Internet Gateways, NAT Gateways, NACLs, VPC Peering, and more.
Hello everyone! I’ve been using the terraform-aws-acm-request-certificate module these days along with terraform-aws-cloudfront-s3-cdn. Everything was fine, but with the latest update, terraform-aws-acm-request-certificate/0.8.0, I started getting a number of error messages like this one:
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 30, in resource "aws_route53_record" "default":
30: name = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
|----------------
| count.index is 1
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
This is the entire output for the plan command:
[x80486@uplink:~/Workshop/Development/aws-static-website]$ make plan
Initializing modules...
Downloading git::<https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=tags/0.8.0> for acm_request_certificate...
- acm_request_certificate in .terraform/modules/acm_request_certificate
Downloading git::<https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=tags/0.35.0> for cloudfront_s3_cdn...
- cloudfront_s3_cdn in .terraform/modules/cloudfront_s3_cdn
Downloading git::<https://github.com/cloudposse/terraform-aws-route53-alias.git?ref=tags/0.8.2> for cloudfront_s3_cdn.dns...
- cloudfront_s3_cdn.dns in .terraform/modules/cloudfront_s3_cdn.dns
Downloading git::<https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.14.0> for cloudfront_s3_cdn.logs...
- cloudfront_s3_cdn.logs in .terraform/modules/cloudfront_s3_cdn.logs
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for cloudfront_s3_cdn.logs.this...
- cloudfront_s3_cdn.logs.this in .terraform/modules/cloudfront_s3_cdn.logs.this
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for cloudfront_s3_cdn.origin_label...
- cloudfront_s3_cdn.origin_label in .terraform/modules/cloudfront_s3_cdn.origin_label
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for cloudfront_s3_cdn.this...
- cloudfront_s3_cdn.this in .terraform/modules/cloudfront_s3_cdn.this
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding hashicorp/template versions matching ">= 2.0.*"...
- Finding hashicorp/aws versions matching ">= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*"...
- Finding hashicorp/local versions matching ">= 1.2.*, >= 1.2.*, >= 1.2.*, >= 1.2.*"...
- Finding hashicorp/null versions matching ">= 2.0.*, >= 2.0.*, >= 2.0.*"...
- Installing hashicorp/null v3.0.0...
- Installed hashicorp/null v3.0.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v3.11.0...
- Installed hashicorp/aws v3.11.0 (signed by HashiCorp)
- Installing hashicorp/local v2.0.0...
- Installed hashicorp/local v2.0.0 (signed by HashiCorp)
Terraform has been successfully initialized!
Switched to workspace "sandbox".
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 30, in resource "aws_route53_record" "default":
30: name = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
|----------------
| count.index is 1
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 30, in resource "aws_route53_record" "default":
30: name = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
|----------------
| count.index is 0
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
31: type = lookup(local.domain_validation_options_list[count.index], "resource_record_type")
|----------------
| count.index is 1
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
31: type = lookup(local.domain_validation_options_list[count.index], "resource_record_type")
|----------------
| count.index is 0
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 32, in resource "aws_route53_record" "default":
32: records = [lookup(local.domain_validation_options_list[count.index], "resource_record_value")]
|----------------
| count.index is 1
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 32, in resource "aws_route53_record" "default":
32: records = [lookup(local.domain_validation_options_list[count.index], "resource_record_value")]
|----------------
| count.index is 0
| local.domain_validation_options_list is set of object with 2 elements
This value does not have any indices.
I didn’t create an issue in GitHub because I’m not sure if this is caused by something on my end or something specific to the 0.8.0 update
Hey @x80486 — I was the one that merged and released 0.8.0. It passed our tests, but maybe something cropped up that our tests didn’t catch.
Can you check out the below PR / code and see if targeting that will fix this issue that you’ve run into? https://github.com/cloudposse/terraform-aws-acm-request-certificate/pull/27
what handles breaking change for terraform provider 3.0 https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade#domain_validation_options-changed-from-list-to-se…
I’m getting this one now:
Error: Unsupported attribute
on .terraform/modules/acm_request_certificate/main.tf line 44, in resource "aws_acm_certificate_validation" "default":
44: validation_record_fqdns = aws_route53_record.default.*.fqdn
This object does not have an attribute named "fqdn".
What I did was to copy over the files from that branch (that’s in a merge request now), and changed the source value by pointing to the directory under .terraform/.. (the following file snippet has the correct URL and ref tag for 0.8.0, which I didn’t use for testing the changes you asked me to)
This is my super-tiny Terraform configuration:
locals {
tags = {
Environment = terraform.workspace
Terraform = "true"
}
}
provider "aws" {
profile = terraform.workspace
region = var.aws_region
}
resource "aws_route53_zone" "default" {
name = var.domain_name
tags = local.tags
}
module "acm_request_certificate" {
source = "git::https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=tags/0.8.0"
depends_on = [aws_route53_zone.default]
domain_name = var.domain_name
process_domain_validation_options = true
subject_alternative_names = ["*.${var.domain_name}"]
wait_for_certificate_issued = var.wait_for_certificate_issued
zone_name = var.domain_name
tags = local.tags
}
module "cloudfront_s3_cdn" {
source = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=tags/0.35.0"
depends_on = [module.acm_request_certificate]
acm_certificate_arn = module.acm_request_certificate.arn
aliases = [var.domain_name, "www.${var.domain_name}"]
allowed_methods = ["GET", "HEAD", "OPTIONS"]
compress = true
dns_alias_enabled = true
error_document = "not_found.html"
namespace = var.company_prefix
name = var.name
origin_force_destroy = true
parent_zone_id = aws_route53_zone.default.zone_id
stage = var.stage
use_regional_s3_endpoint = true
website_enabled = true
tags = local.tags
}
The variables.tf file:
variable "aws_region" {
description = "The AWS region to deploy to"
type = string
default = "us-east-1"
}
variable "company_prefix" {
description = "The company's name prefix"
type = string
default = "tld-domain"
}
variable "domain_name" {
description = "The FQDN name of the Website (e.g.: example.com)"
type = string
default = null
}
variable "force_destroy" {
description = "Delete all objects from the bucket so that the bucket can be destroyed without error"
type = bool
default = false
}
variable "name" {
description = "The identifier/name of the application or solution"
type = string
default = null
}
variable "stage" {
description = "Stage, e.g.: prod, staging, test, dev, sandbox, etc."
type = string
default = null
}
variable "wait_for_certificate_issued" {
description = "Whether to wait for the certificate to be issued by ACM (status change from PENDING_VALIDATION to ISSUED)"
type = bool
default = false
}
The .tfvars file:
force_destroy = true
domain_name = "domain.tld"
name = "domain"
stage = "sandbox"
wait_for_certificate_issued = true
My plan target looks like this:
.PHONY: plan
plan: init workspace init ## Create a Terraform execution plan
@terraform $@ -compact-warnings -input=false -lock=true -out=$(WORKSPACE)-plan.out -refresh=true -var-file=$(WORKSPACE).tfvars
## Private Zone
.PHONY: init
init:
@terraform $@ -input=false -lock=true -verify-plugins=true
.PHONY: workspace
workspace:
@terraform $@ select $(WORKSPACE) || terraform $@ new $(WORKSPACE)
@Matt Gowie, did you have a chance to take a look at this one?
2020-10-19
Which terraform plugin is best for vscode? Any any suggestions for vim?
Hello, I use the plugin “terraform” from “Anton Kulikov”. I don’t know if it’s the best, but my need was support for tf version 0.12
Hashicorp has taken over. Update to their latest one as it’s an officially maintained plugin now.
Are there any recommended module for running a ecs cluster w/ blue/green codedeploy (instrumented via code pipeline + code build)?
Anyone have any experience using terraform with spinnaker/managing pipelines/etc? Any useful docs or projects would be much appreciated!
I’m sure someone else has a much better setup than I do, but essentially on each PR we run: terraform fmt -recursive -check -diff and terraform validate
after merging to master we output the plan and have to approve before applying
manually running tfsec and other tools as needed. trying to figure out how to automate these - daily, weekly, etc..
links: https://github.com/antonbabenko/pre-commit-terraform https://github.com/tfsec/tfsec
pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform
Static analysis powered security scanner for your terraform code - tfsec/tfsec
We have a Makefile that’s used to call different targets: lint, validate, plan, apply, tfsec, etc…
We only have a handful of parent TF projects so far, and it works well for all of our app deployments right now. It seems like this is a scalable solution for IaC. No complaints so far.
Do you have Spinnaker running terraform in pipelines MattyB?
Wrt OP.. we use jsonnet for pipeline code etc. and the spin CLI to upsert Spinnaker. There’s no state, so we have to do manual removals. We haven’t wired this to CI yet, as we haven’t set up x509 or basic auth to Spinnaker. Manually updating with the Makefile leverages our gcloud oauth.
today’s great Terraform error:
The true and false result expressions must have consistent types. The given
expressions are tuple and tuple, respectively.
there is not a good way of dealing with that. If you have complex types on both sides of the ternary operator, both sides need to be of exactly the same types
you can 1) specify in the variable definition AND provide the exact object types on both sides (and it means not only the types of the objects, but the types of all the fields)
2) use jsonencode and jsondecode to work with strings
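i.e., something like this sketch (locals/variables hypothetical): both branches become plain strings, so the ternary’s type check always passes, and jsondecode restores the structure afterwards:
locals {
  rules = jsondecode(
    length(var.override_rules) > 0
      ? jsonencode(var.override_rules)
      : jsonencode(local.default_rules)
  )
}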
this is not a solved case yet https://github.com/hashicorp/terraform/issues/22405
Terraform Version Terraform v0.12.6 Terraform Configuration Files variable "default_rules" { default = [] } output "test" { value = var.default_rules != [] ? var.default_rules :…
Yes, it was a mismatched sub-type. The error was real, but the error message is worse than old GCC
on the plus side, they’ve been getting better and better with the error messages. this was way worse before!
Is it possible to configure a Terraform stack so that if a variable’s value changes, a certain resource is forced to be destroyed & rebuilt?
I am deploying an AWS Elastic Beanstalk environment with settings configured by input variables. Some of these settings cannot be changed after environment creation, and the AWS API will return an error if you try. The Terraform AWS provider doesn’t handle this case, so I could change a variable that plans fine but fails on apply. It would be great if I could configure terraform so that it shows “re-creation required” in the plan output.
Hello, if you change a resource name it will be destroyed first and recreated
I don’t want the resource name to depend on these variables
I haven’t experienced it myself; I would suggest seeing if there is a trick with a null_resource to do this https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource
Calculate some kind of hash for those variables and append it as a postfix to a field that will recreate the resource you want. I’m not using elasticbeanstalk much so I don’t know which field would be best, but name would be fine imo
the hash is a good idea. i’m also unfamiliar with EB, but i know if you are using ec2 you can force recreation by modifying the userdata, even with just a comment in the script/cloud-init config…
The only attribute that forces recreation is name, and a lot of our infra glue depends on that being predictable. I was hoping for some cool Terraform workaround
Interesting. What is the exact resource?
The cloudformation equivalent mentions a few properties that require replacement, so maybe you can work a replacing update implementation around one of those? https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html
The AWS::ElasticBeanstalk::Environment resource is an AWS Elastic Beanstalk resource type that specifies an Elastic Beanstalk environment.
If not, you can always taint the resource, though that’s a bit outside the gitops-style workflow I try to prefer
Yeah, good idea. I will try and use cname prefix – the DNS name can be weird
yep. I can set cname_prefix like:
resource "random_id" "eb_cname_prefix" {
  keepers = {
    use_shared_alb = var.use_shared_alb
  }
  byte_length = 1 # Increase this as we add more keepers
}

resource "aws_elastic_beanstalk_environment" "foo" {
  # name/application/solution stack arguments elided

  # new prefix whenever a keeper changes, forcing environment re-creation
  cname_prefix = "myapp-${random_id.eb_cname_prefix.hex}"

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerIsShared"
    value     = var.use_shared_alb
  }
}
Thanks for the ideas all!
2020-10-20
I am trying to run Terratest in gitlab CI
I have added the necessary AWS environment variables but get the following error ..
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
Is there something specific I need to do to get this to use the environment variables?
what environment variables have you added? Do they work with the aws CLI?
AWS_ACCESS_KEY_ID AWS_DEFAULT_REGION AWS_SECRET_ACCESS_KEY
i think i needed to also export AWS_REGION to get terratest to work (without setting the values in the provider config). i do not know why. i briefly inspected the source but couldn’t find a reference
i export all these in my local profile:
AWS_SDK_LOAD_CONFIG=1
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
@Jørgen Vik can you put that in a thread? it’s a bit long
@jose.amengual Yeah, sorry. I think I just solved it actually
np, what was it?
The aws_acm_certificate.cert.domain_validation_options.0.resource_record_name works in 2.70.0, but this doesn't work in 3.0.0. Checking the latest 3.0.0 document, now terraform use the for_each…
I solved it by no longer using a for_each, since I only had one record per cert anyway
However, I do have another problem, which is the reason why I joined this slack. I’m trying to use the cloudposse codepipeline-to-ecs module. The checkout from github and the build step work fine, but the deploy step hangs forever. It seems like a new task definition is successfully created, but the deploy just hangs.
module "ecs_push_pipeline" {
source = "git::https://github.com/cloudposse/terraform-aws-ecs-codepipeline.git?ref=0.17.0"
name = var.api_ecs_service_name
namespace = "eg"
stage = "test"
github_oauth_token = var.pipeline_git_pat
github_webhooks_token = var.pipeline_git_pat
repo_owner = var.pipeline_git_repo_owner
repo_name = var.api_pipeline_git_repo_name
branch = var.api_pipeline_git_pipeline_branch
service_name = var.api_ecs_service_name
ecs_cluster_name = var.ecs_cluster_name
privileged_mode = "true"
region = var.aws_region
image_repo_name = var.api_docker_repo
build_image = var.pipeline_build_image
environment_variables = var.api_pipeline_env_variables
s3_bucket_force_destroy = true
}
did you check the ecs console ?
you could have a problem with the container not starting and continuously deploying
The deployment is added to the list, but it has a 0 pending and 0 running count. Nothing is logged out in the console and no new task is started
Is there any message in the events tab?
Actually no. All the events seems to be from before the deployment
weird.
Yes indeed. Not really sure how to troubleshoot it
so is the deployment % 0?
what happens if you bump it up to 50%?
I’m not sure which % you are referring to. It looks like this in AWS
Are you talking about the minimum and maximum healthy percentage?
If so: yes, that was it
Seems like it’s spinning up now after i set it to 100% and 200%
yes that is what I was talking about
deployment healthy min/max percentage
if it is 0 it does not do anything
Running a test deployment now to make sure that it is fixed
That was it! Thanks
2.5 hours of my life gone, but it works after all
you are working on ECS, expect 50% of your life gone troubleshooting this AMAZING service
am i crazy, or was it possible in terraform 0.12 to run terraform init without an explicit backend definition located in a .tf file, as long as you passed in the required backend configs via command line args? It appears that with terraform 0.13 you need at least the following to be in some kind of .tf file now:
terraform { backend "s3" { } }
This was still required in 0.11, 0.12 at a minimum. We have these annoying stubs as well
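for reference, with that empty stub in place the actual values can still come from a key/value file passed at init time (a hypothetical example, consumed via terraform init -backend-config=backend.hcl):
# backend.hcl, fills in the empty backend "s3" {} stub
bucket         = "example-tfstate"
key            = "stacks/app.tfstate"
region         = "us-east-1"
dynamodb_table = "example-tf-locks"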
Is it possible to load the rules of an AWS ALB Listener as a data source in Terraform? There’s no data source equivalent of the aws_lb_listener_rule resource, and the aws_lb_listener data source doesn’t seem to include rules
Hey everyone I’m new to this community and also relatively new to Terraform and all the goodness is brings. I’m in need of some guidance and was hoping someone could help with my problem.
I am trying to turn a map of subnets:
variable "storage_account_network_rule_set_subnets" {
type = map(object({
name = string
vnet_name = string
resource_group_name = string
}))
default = {}
description = "The Subnet ID(s) which should be able to access this Storage Account."
}
with:
data "azurerm_subnet" "module" {
for_each = { for s in var.storage_account_network_rule_set_subnets : s.name => s }
name = each.value.name
virtual_network_name = each.value.vnet_name
resource_group_name = each.value.resource_group_name
}
into a list of subnet ids for the azurerm_storage_account
provider:
resource "azurerm_storage_account" "example" {
name = "storageaccountname"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
network_rules {
default_action = "Deny"
ip_rules = ["100.0.0.1"]
<< vv INSERT THAT LIST HERE vv >>
virtual_network_subnet_ids = [azurerm_subnet.example.id]
<< ^^ INSERT THAT LIST HERE ^^ >>
}
tags = {
environment = "staging"
}
}
What magic piece of Terraform code will produce that list for me and insert it into that list. Link to the provider: https://www.terraform.io/docs/providers/azurerm/r/storage_account.html#network_rules
something like data.azurerm_subnet.module[*].id
I guess. See https://www.terraform.io/docs/configuration/expressions.html#splat-expressions
This was the solution in the end: [for subnet in data.azurerm_subnet.module : subnet.id]
Thanks for the suggestion!
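Putting that together with the resource from above, the block ends up as (a sketch of the final shape):
resource "azurerm_storage_account" "example" {
  name                     = "storageaccountname"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  network_rules {
    default_action             = "Deny"
    ip_rules                   = ["100.0.0.1"]
    # the for expression turns the for_each data source map into a list of IDs
    virtual_network_subnet_ids = [for subnet in data.azurerm_subnet.module : subnet.id]
  }
}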
2020-10-21
Hi, I am looking to setup a brand new collection of AWS accounts. I was looking for some guidance around this and stumbled across https://github.com/cloudposse/reference-architectures. I am curious if this is still the recommended approach? The docs seems to have marked this topic as archived
(https://docs.cloudposse.com/reference-architectures/introduction/), which leads to the above question.
Thanks heap.
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
This repo is out of date AFAIK. The CP folks are looking to update it with their latest and greatest, but it hasn’t come to fruition yet. Erik has mentioned an EOY target.
Yea, so we’ve gutted the old reference architecture to make sure no one follows that.
We have a new one coming out this year, with bits and pieces trickling out.
Thanks for the reply. I’ll wait for the new architecture. Cheers
Hello, for kubernetes, do you use the terraform kubernetes provider or helm? which one would you recommend?
does anyone know if https://github.com/terraform-providers/terraform-provider-aws/issues/5218 is still an issue?
@Pierre-Yves i use a mixture of both to bootstrap EKS with Flux
Thanks Steve, that is my point. I’ll probably use AKS & Terraform and will use helm at most (mainly because there are more examples and documentation)
you will need both to create the namespace and secret for flux to use to communicate with your repo
I didn’t know about flux (so I am currently reading the docs). what will be the benefit vs a standard helm pipeline? my understanding is:
• a classic pipeline will push a release to kubernetes, vs
• flux running on kubernetes will watch and fetch the version to deploy on kubernetes (should move the thread to #kubernetes but I don’t know how)
Hey guys, I’m experimenting with the cloudposse eks / fargate module but am getting these errors on terraform 0.13.4. Any ideas why this isn’t up to date, or does it require a version closer or equal to 0.12?
on .terraform/modules/eks_cluster.label/versions.tf line 2, in terraform:
2: required_version = "~> 0.12.0"
Module module.eks_cluster.module.label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.4. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
I think the fargate module was not updated to TF 0.13 requirements. We’ll get to it ASAP
No rush! Was just curious!
Figured it was updated as the requirements reflected terraform 0.14
@Aumkar Prajapati this latest release fixes the Fargate module https://github.com/cloudposse/terraform-aws-eks-fargate-profile/releases/tag/0.6.0, should work with TF 0.13
Update to context.tf. Correctly pin Terraform providers. Add GitHub Actions @aknysh (#9) what Update to context.tf Correctly pin Terraform providers to support TF 0.13 Add GitHub Actions Use uni…
look at the complete example on how to use it https://github.com/cloudposse/terraform-aws-eks-fargate-profile/tree/master/examples/complete
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
Sweet! Thank you!
It seemed to fit within the documented requirements, but the error said otherwise
v0.13.5 0.13.5 (October 21, 2020) BUG FIXES: terraform: fix issue where the provider configuration was not properly attached to the configured provider source address by localname (#26567) core: fix a performance issue when a resource contains a very large and deeply nested schema (<a…
The ProviderConfigTransformer was using only the provider FQN to attach a provider configuration to the provider, but what it needs to do is find the local name for the given provider FQN (which ma…
2020-10-22
Hello, which tool do you use to read a previously generated terraform plan (saved with the -out option)? I have found some tools, but would like your opinion.
Command line utility and JavaScript API for parsing stdout from “terraform plan” and converting it to JSON. - lifeomic/terraform-plan-parser
Terraform plan file to JSON. Contribute to palantir/tfjson development by creating an account on GitHub.
From the tools linked I guess you want the output to be JSON? In that case terraform show -json [path-to-file], but I think that is only valid for the most recent plan…
$ terraform plan -out=plan.tfplan > /dev/null && terraform show -json plan.tfplan > plan.json
Can someone look at https://github.com/cloudposse/terraform-aws-rds-cluster/pull/88
what DNS was changing when it shouldn't have been, it was the value of: ${var.name}-${local.cluster_dns_name} I think this may have changed in https://github.com/cloudposse/terraform-aws-rou…
2020-10-23
Terraform Cloud runs delayed Oct 23, 10:53 UTC Resolved - We had a brief delay on processing Terraform Cloud runs due to a network change that has been identified and resolved.
HashiCorp Services’s Status Page - Terraform Cloud runs delayed.
2020-10-26
can anyone help me with the below please
# Ref: <https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain#warm_type>
variable "ultra_warm_instance_type" {
type = string
description = "The instance type for the ultra warm nodes."
default = "ultrawarm1.medium.elasticsearch"
validation {
condition = can(var.ultra_warm_instance_type == "ultrawarm1.medium.elasticsearch") || can(var.ultra_warm_instance_type == "ultrawarm1.large.elasticsearch") || can(var.ultra_warm_instance_type == "ultrawarm1.xlarge.elasticsearch")
error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch"
}
}
ultra_warm_enabled = var.elasticsearch_configuration["ultra_warm_enabled"]
ultra_warm_instance_count = var.elasticsearch_configuration["ultra_warm_instance_count"]
ultra_warm_instance_type = var.elasticsearch_configuration["ultra_warm_instance_type"]
elasticsearch_configuration = {
ultra_warm_enabled = true
ultra_warm_instance_count = 2
ultra_warm_instance_type = "ultrawarm1.medium.elasticsearch"
}
Error: Invalid validation error message
on .terraform/modules/data_platform.es_application_logging/modules/elasticsearch/variables.tf line 79, in variable "ultra_warm_instance_type":
79: error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch"
Validation error message must be at least one full English sentence starting
with an uppercase letter and ending with a period or question mark.
error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch."
Try ending the error message with a period, if that doesn’t work maybe the other periods in the string are causing a problem
makes sense
This is the code from Terraform’s actual repo about this:
// looksLikeSentence is a simple heuristic that encourages writing error
// messages that will be presentable when included as part of a larger
// Terraform error diagnostic whose other text is written in the Terraform
// UI writing style.
//
// This is intentionally not a very strong validation since we're assuming
// that module authors want to write good messages and might just need a nudge
// about Terraform's specific style, rather than that they are going to try
// to work around these rules to write a lower-quality message.
func looksLikeSentences(s string) bool {
if len(s) < 1 {
return false
}
runes := []rune(s) // HCL guarantees that all strings are valid UTF-8
first := runes[0]
last := runes[len(runes)-1]
// If the first rune is a letter then it must be an uppercase letter.
// (This will only see the first rune in a multi-rune combining sequence,
// but the first rune is generally the letter if any are, and if not then
// we'll just ignore it because we're primarily expecting English messages
// right now anyway, for consistency with all of Terraform's other output.)
if unicode.IsLetter(first) && !unicode.IsUpper(first) {
return false
}
// The string must be at least one full sentence, which implies having
// sentence-ending punctuation.
// (This assumes that if a sentence ends with quotes then the period
// will be outside the quotes, which is consistent with Terraform's UI
// writing style.)
return last == '.' || last == '?' || last == '!'
}
thanks guys appreciated
Basically, it must start with an uppercase letter (when it starts with a letter) and end with a period, question mark, or exclamation point.
also, you can dramatically simplify the validation condition using contains()
…
and the can()
is totally unnecessary here…
contains()
with just a list?
yep… condition = contains([<list of valid values>], var.ultra_warm_instance_type)
The contains function determines whether a list or set contains a given value.
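putting that together, the variable block from the original question shrinks to:
variable "ultra_warm_instance_type" {
  type        = string
  description = "The instance type for the ultra warm nodes."
  default     = "ultrawarm1.medium.elasticsearch"

  validation {
    condition = contains([
      "ultrawarm1.medium.elasticsearch",
      "ultrawarm1.large.elasticsearch",
      "ultrawarm1.xlarge.elasticsearch",
    ], var.ultra_warm_instance_type)
    error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch."
  }
}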
i am trying to turn the below into a valid variable type
can anyone help please?
elasticsearch_configuration = {
instance_node_count = 3
instance_node_type = "i3.xlarge.elasticsearch"
master_node_count = 3
master_node_type = "c5.large.elasticsearch"
ultra_warm_enabled = true
ultra_warm_instance_count = 2
ultra_warm_instance_type = "ultrawarm1.medium.elasticsearch"
}
type = object({
instance_node_count = number
instance_node_type = string
...
})
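spelled out fully for the example map above, that becomes:
variable "elasticsearch_configuration" {
  type = object({
    instance_node_count       = number
    instance_node_type        = string
    master_node_count         = number
    master_node_type          = string
    ultra_warm_enabled        = bool
    ultra_warm_instance_count = number
    ultra_warm_instance_type  = string
  })
}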
Hi all,
I’ve been working with the EKS terraform modules, and I ran into an issue with scaling nodegroups from this repo - https://github.com/cloudposse/terraform-aws-eks-node-group
So the problem is that I try to increase desired_size by specifying a higher value, however the changes to desired_size are being ignored because of the following code in the main.tf
lifecycle {
create_before_destroy = false
ignore_changes = [scaling_config[0].desired_size]
}
Can anyone explain why desired_size has to be ignored in this situation ?
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
You should manage desired size outside of the stack. It is usually a variable with such a high variability that it’s usually not desirable to manage it in the base Terraform stack.
It makes sense, I see the reasoning now. When enable_cluster_autoscaler is enabled the desired count changes, and that could lead to breaking changes on new applies/deployments. I have enable_cluster_autoscaler disabled, so it didn’t pop into my mind at first. Thanks for the reply.
a followup question - when having the autoscaler disabled, how does one manage the desired count? I am not really keen on manually adjusting variables in aws cli/console
Perhaps by setting the min_size and max_size fields to your desired number?
We do this (for one pool that CAS doesn’t handle).
Yes, the desired size is meant for an autoscaler to adjust. If you are not using an autoscaler, you adjust the size by adjusting min_size and max_size (in general, when not using an autoscaler, I recommend min_size == max_size)
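A minimal sketch of that recommendation, assuming the module’s sizing inputs as documented in the repo (other required inputs omitted):
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  # with no autoscaler managing desired_size, pin the pool size
  min_size     = 3
  max_size     = 3
  desired_size = 3

  # ... cluster_name, subnet_ids, and other required inputs
}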
I’m trying to set up a github_team_repository but have this be optional based on the input. Looks like I need team_id instead of team name, so I’m thinking of a for_each = repos | WHERE type of approach and just filtering down the for_each using an expression. Any examples of a simple “where” clause to filter the for_each inline so I can run on none/matched results in the collection?
Manages the associations between teams and repositories.
for_each = { for key, val in object/map : key => val if <condition> }
or for a set/list:
for_each = toset([ for val in set/list : val if <condition> ])
that if syntax is in the docs for for expressions… https://www.terraform.io/docs/configuration/expressions.html#for-expressions
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
thank you. I’m testing that now. good to see a practical example as I was looking at an older hashicorp blog post
Always fun to try and find a post about “if, where or filter” expressions….
it’s not for_each, but here’s an example of the “if” in a for expression…
• https://github.com/plus3it/terraform-aws-tardigrade-transit-gateway/blob/master/modules/cross-account-vpc-attachment/main.tf#L31 (the idea with those is we use two aws providers for the cross-account use case, and we have a set of routes where we need to distinguish which routes to pass to which provider)
and here is the corresponding variable definition for vpc_routes… https://github.com/plus3it/terraform-aws-tardigrade-transit-gateway/blob/master/modules/cross-account-vpc-attachment/variables.tf#L88-L103
usage as a for_each expression is not any different, since it’s really a modifier of the for expression instead of for_each…
Perfect! The simple example solved my main issue.
[for repo in local.repos: repo if repo.settings.additional_teams == "dev-team-1"]
just needed simple filtering and that solved it. I find the expression syntax pretty confusing in docs and all so still working on that. Thanks again!
yeah, it helped me that it was a pretty familiar pattern in python
Great point. The => is not a common expression in .NET. I’ve noticed, as I’ve slowly been learning Go, that many of the decisions in terraform for more advanced usage make total sense if you know Go, but to someone with a different background the syntax seems really strange.
I wrote up a bit on this if you are interested sometime. I’m assuming that structure is a common expression format in Python, but in PowerShell nothing like that exists. C# has Linq expressions, but I never use them in PowerShell.
Disclaimer Newbie Gopher. Much of what I observe is likely to be half right. I’ll probably look back at the end of the year and shake my head, but gotta start the journey somewhere, right? My Background I’ve learned my development skills primarily in the dotnet world. Coming from SQL Server performance, schema, and development, I transitioned into learning PowerShell and some C#. I found for the most part the “DevOps” nature of what I was doing wasn’t a good fit for focusing on C# and so have done the majority of my development in PowerShell.
heh, no the => is foreign, that’s a go construct. python has list comprehensions, which use a similar [ for ... ] construct but it’s not identical
but the => isn’t too bad. the left side is an expression where the result is the key, the right side is another expression where the result is the value….
good article. for sure, terraform and hcl started making a whole lot more sense once i dove into the source code, took a stab at a pr or two, and learned how to compile from source
So with output, no issue. With for_each I’m still having problems.
for_each = [ for repo in local.repos :repo => if repo.settings.additional_teams == "dev-team-1" ]
i tried with {} as well. Extra characters after the end of the 'for' expression.
Any idea what silly thing I’m doing?
for_each = [ for repo in local.repos : if repo.settings.additional_teams == "dev-team-1" ]
This is using examples in for expressions
for_each = { for repo in local.repos : repo => repo if repo.settings.additional_teams == "dev-team-1" }
If I use [] then it’s a list; otherwise with {} it says it creates an object, which requires the =>. Both are having issues.
yes, {} generates a map, [] generates a list
for_each only works with maps and sets
so if you use [] you need to wrap it in toset()
if you use {} then you need to use the => syntax
it’s easiest when first starting to take the for_each out of the picture for a bit, and just output the expression so you can see the data structure
taking your example…
for_each = { for repo in local.repos : repo => repo if repo.settings.additional_teams == "dev-team-1"}
this won’t work because repo is a map. the left side of => becomes the key in the map. your expression is assigning the entire map as the key…
i already did this with console. I’m returning an object collection.
I basically want to foreach on each object returned but filter
try this:
for_each = { for name, repo in local.repos : name => repo if repo.settings.additional_teams == "dev-team-1"}
in particular, note the structure { for name, repo in ...
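Pulling it together for the original github_team_repository question, a minimal sketch (the team id variable and permission are hypothetical; the additional_teams attribute follows the examples above):
resource "github_team_repository" "dev_team_1" {
  # keep only the repos assigned to dev-team-1
  for_each = { for name, repo in local.repos : name => repo if repo.settings.additional_teams == "dev-team-1" }

  team_id    = var.dev_team_1_id
  repository = each.key
  permission = "push"
}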
Trying now! Go syntax again lol.
heh, and again this part is familiar from python
In powershell foreach($Object in $Objects) { $obj.Name}
for example
and that works too, when your object has a name attribute that happens to also be the key of the map
I’m learning the other paradigm is this key/value which is very common in Go, but I rarely need to use in .NET in that manner.
Basically it looks similar to the concept of for k, v := range struct/slice {} in Go.
i’ll often construct a list of objects in terraform, instead of a map…
list(object({
name = string
attr1 = string
attr2 = bool
...
}))
then convert to a map with:
for_each = { for item in var.thing : item.name => item }
I work with JSON input objects fairly often. I’ll have to play around more with the explicit object casting as it would make life easier
you don’t really need to cast it explicitly… this would work too…
list(map(any))
but if an item in the list does not have a name key then it will be upset when you try to reference that attribute…
So the “name” is actually important? I.e., if I have properties called “key” but not “name”, that’s the root of the issue? Gotta be kidding me. That would simplify things if I was just missing “name” as an actual property that I needed to include.
“name” is important only because I referenced that attribute in my example, i.e. item.name
Basically your code defines the attributes of the object that are important to your code
Is there an easy way to upgrade a GKE cluster and the nodes to a new kubernetes version w/ terraform? I can’t find any documentation / tuts.
Has anyone had any luck deploying an ssm document by using yamlencode()? I’m able to cut and paste the terraform plan content output and manually make a document that way, however terraform throws an error Error: Error updating SSM document: InvalidDocument
resource "aws_ssm_document" "this" {
...
document_format = "YAML"
content = yamlencode(templatefile(
"${path.module}/assets/documents/blah.yaml",
{
organisation_name = lower(var.organisation_name)
parameter_name = foo
}
))
}
are you sure your templated yaml is valid?
Yeah, I’m able to cut and paste it in the console
If it’s already a yaml file, you wouldn’t need yamlencode, would you?
Ah yep. Tried that as well. Noticed the plan was slightly different. Might be related to how terraform acts when document_format = YAML
yamlencode should take an hcl object and serialize it as a yaml string…
have you tested this in terraform console? Might be worthwhile
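Since the template file is already YAML, a sketch of the fix suggested above is to drop yamlencode() and pass the templatefile() result straight through (the name and document_type here are hypothetical):
resource "aws_ssm_document" "this" {
  name            = "blah"
  document_type   = "Command"
  document_format = "YAML"

  # the template already renders YAML, so no yamlencode() needed
  content = templatefile("${path.module}/assets/documents/blah.yaml", {
    organisation_name = lower(var.organisation_name)
    parameter_name    = "foo"
  })
}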
2020-10-27
on terragrunt plan I am getting the following errors. I have updated the modules to the latest tags as I wish to update my eks version; any help would be much appreciated. Error: Provider configuration not present
To work with module.eks.data.aws_region.current its original provider configuration at provider["registry.terraform.io/-/aws"] is required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy module.eks.data.aws_region.current, after which you can remove the provider configuration again.
Releasing state lock. This may take a few moments…
are you upgrading from the old module versions?
this is a known issue with the latest versions of TF
I’m refactoring some Terraform modules and am getting: Error: Provider configuration not present To work with module.my_module.some_resource.resource_name its original provider configuration at m…
Terraform Version Terraform v0.12.0 With terraform version 0.11.10, the files below work as expected. Terraform Configuration Files initially the providers section in the config below was absent, b…
in some of the modules, a provider was specified in the module itself
TF does not allow that anymore
solution depends on your environments
ohh how can i resolve?
environment in what sense?
- If you can just destroy the resources and redeploy the new version, it’s the easiest path (not good for prod though)
- Otherwise, use the old code, remove just the resources with the old provider (using -target), then add new versions of the modules and provision
look at the links above ^
Terraform Version Terraform v0.12.9 + provider.aws v2.28.1 Terraform Configuration Files sample module provider "aws" { alias = "us-east-1" region = "us-east-1" } reso…
@Andriy Knysh (Cloud Posse) i tried using 0.12.24; terragrunt plan is working fine, but on terragrunt apply it’s going to replace the cluster and I am getting the following error:
Error: error creating EKS Cluster (dev_cluster): ResourceInUseException: Cluster already exists with name: dev_cluster
{
  RespMetadata: {
    StatusCode: 409,
    RequestID: "6a650024-bdab-4965-9940-d15506218621"
  },
  ClusterName: "dev_cluster",
  Message_: "Cluster already exists with name: dev_cluster"
}
on .terraform/modules/eks/cluster.tf line 9, in resource "aws_eks_cluster" "this":
  9: resource "aws_eks_cluster" "this" {
My plugin_cache_dir is now at 7.7G. There doesn’t seem to be any clean command. Anybody have any guidance on best practices for cleaning out this cache? One thing I noticed in the docs is that the dirs (i.e., "${plugin_cache_dir}/darwin_amd64") must exist before TF will cache files there.
Delete whatever you want? It’s a cache so will be recreated. Bandwidth expensive? Then delete by date or version numbers as you see fit.
Any ideas for how to use the hashicorp-provided providers to make an API POST call with specific parameters? Looked at the http provider, but it’s only GET. (I don’t want to rely on curl or wget locally)
You could use the Shell Provider, but it will still require something like curl to be installed locally
Which I’m trying to avoid
@mumoshu is doing some interesting things with a project called Shoal in his Helmfile Provider. Shoal automatically downloads and uses missing dependencies when running Terraform
The providers obviously do this all the time via their Go capabilities, but I am trying not to write a provider.
I think the Shell provider would be your best bet then. It can install curl as part of the process if that helps any pain.
Thanks Matt. I can’t rely on the shell unfortunately. Looks like Go code is in my future.
Ah bummer. Worth asking around a bit more then possibly. Reddit or Terraform community forums might have a better option. I just don’t know of one unfortunately.
Then again — Go is a fun language to dive into.
golang
A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi
Great find!
There’s an open issue for getting it into the Terraform registry. Hopefully they’ll get there soon since it’s been pending for a few months.
Very cool and agreed: great find! Starring that one for later for sure.
With Terraform 0.13 there shouldn’t need to be an “official” addition to terraform registry
Needed to do this:
terraform {
required_providers {
restapi = {
source = "fmontezuma/restapi"
version = "~> 1.14.0"
}
}
}
Because “mastercard” hasn’t published theirs as a provider in the registry.
(I’m not fmontezuma)
Yep, this is the new way of doing it. Even the “official” ones are done this way, like "hashicorp/aws".
Looks like Mastercard didn’t want to give Hashi access to their repo: https://github.com/Mastercard/terraform-provider-restapi/issues/85 (see comments in Sep), so Fabio had to do it separately.
Not surprised that held them up with them being Mastercard and all.
Hey folks :wave: I was testing out https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment and bumped into this problem.
# main.tf
module "vpc" {...}
module "subnets" {...}
module "rds_instance" {...}
module "redis" {...}
module "elastic_beanstalk_application" {...}
module "elastic_beanstalk_environment" {...}
data "aws_iam_policy_document" "minimal_s3_permissions" {...}
When I run it, I am getting this error: Error: Error creating Security Group: InvalidGroup.Duplicate: The security group 'NAME' already exists for VPC 'vpc-ID', failing on elastic_beanstalk_environment.
Has anybody else had a similar issue or know what might cause this? Additionally, after each run, I am getting a bunch of Error: Duplicate variable declaration errors, and removing .terraform and re-init helps.
Are you following the example implementation? We validate that the module works using terratest before every merge to master
Yep, precisely. Copy paste from the README for the latest tag, but combining that with RDS/Redis.
I’ve tried twice, destroying and reapplying but still getting stuck on the same things.
Btw, you’ve got a really nice collection of modules. Thanks for providing and maintaining those
@tair thanks! sorry for the troubles. did you setup your remote state correctly before running? is there a chance that something was already provisioned?
I am not using remote state yet, is that required? Just getting started with Terraform actually.
And on a clean AWS account
aha, so chances are that you might have some orphaned resources
using remote state is definitely a requirement if anyone else will ever work on the project
(or even if you lose your laptop, for instance)
yep for sure, I thought of moving it later, first tried to get up-n-running. I will then destroy, remove local state and reprovision to see if it makes any difference. Will report here, thank you
@tair you prob got into the issue of using a few top-level modules, each of which creates a SG. Since you provide the same namespace-stage-name to each module, a few SGs get created with the same name
try to add attributes=["1"] to one of the modules (e.g. RDS)
the names will be unique, but all the names for RDS will end with -1
try that
the modules can be updated to take care of that, we can get into it when we get time
Ah this explains it, thanks @Andriy Knysh (Cloud Posse). I am wondering, shouldn’t they all be part of the same SG? If they are not, how, for instance, would the Beanstalk instance communicate with RDS or ElastiCache?
Also, do you want me to add different attributes to each dependent module, like 1 for RDS, 2 for Cache, 3 for Beanstalk?
one module can be w/o the additional attributes; the other two, yes, try to add 1 and 2 correspondingly
you then connect the SGs together by using the security_groups variables to add one SG as ingress of the other
Cool. I will give it a spin. Do you happen to have an example on how to use multiple modules together with respect to SGs?
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
security_groups = [module.elastic_beanstalk.security_group_id]
all CloudPosse modules support this concept
also, all modules output the created SG, so you can add any additional ingress rules to them
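A minimal sketch of the wiring described above, assuming the Beanstalk module exposes a security_group_id output (as in the one-liner above) and the RDS module accepts a security_groups input:
module "rds_instance" {
  source = "cloudposse/rds-cluster/aws"

  # unique attribute keeps the generated SG name from colliding
  attributes = ["1"]

  # allow ingress from the Beanstalk environment's SG
  security_groups = [module.elastic_beanstalk_environment.security_group_id]

  # ... namespace, stage, name, and other required inputs
}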
thx! Probably a newbie question, but does that mean the elastic beanstalk module should come before RDS, or does the order not matter? would be really good to have an example of combined modules for popular stacks on GitHub
the order does not matter, Terraform will handle the order of creation even within modules
if the order matters to you for any reason, you can use TF 0.13 depends_on on the modules, so module A will always be created before any resources in module B
gotcha
Nice, looks like everything succeeded this time. I will still have to check if connections work. But I am still bumping into a bunch of Error: Duplicate output definition errors on subsequent runs, coming from .terraform modules. Basically following https://github.com/cloudposse/terraform-aws-tfstate-backend and am on step 4
Terraform module that provisions an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
Anyone know of a terraform extension in vs code I can use for fmt? The ones I’ve found are not supported
Potentially unpopular opinion: IMO, while VS Code is awesome for a number of languages, it seems pretty shit for Terraform. I’ve fully switched over to IntelliJ for my Terraform work
I use sublime . Supports terraform syntax highlighting and terraform fmt when saving file
Yeah, the VS Code plugin is pretty awful. Even now being maintained by HashiCorp. I’m just waiting for the day when they finally get it right and being bull-headed until then.
Yeah yeah, we know it’s vastly superior
+1 on the IntelliJ plugin. It’s great. I’m sure HashiCorp will get the VSCode plugin fixed up in the near future as well though.
i’m oldschool. pretty decent: https://github.com/hashivim/vim-terraform
@Mr.Devops HashiCorp Terraform
I have that extension installed as well but it doesn’t seem to auto perform the fmt function
Ah nvm next time I should rtfm lol
Thx @Igor
You can enable format on save in TF
It also won’t format if there are syntax errors
Got it thx
Sometimes I need to convert a resource from a single instance to count = local.create_instance ? 1 : 0, and Terraform wants to re-create the resource in this case.
How can I avoid this re-creation?
I can perform manual state manipulation terraform state rm myresource ; terraform import ..., but it would be nice if there was a solution that didn’t require out-of-band state fiddling.
The terraform state mv command moves items in the Terraform state.
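A sketch of that approach with a hypothetical aws_instance.example: moving the existing object to the new indexed address keeps Terraform from re-creating it when the resource gains count:
terraform state mv 'aws_instance.example' 'aws_instance.example[0]'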
While researching the above question, I found this 1 month old message from a Terraform developer saying “no major new features until 1.0 comes out next year”: https://github.com/hashicorp/terraform/issues/24476#issuecomment-700368878 Sad, there are some warts with TF
2020-10-28
Hey folks — Could use more :+1:’s on the below issue and corresponding PR. They’ve totally stalled out (open for 8+ months) and I have a couple projects that I would like to upgrade off of a custom terraform fork (made a mistake thinking those would be merged by now). Any help appreciated!
looks like whoever opened the PR has abandoned it… has conflicts and has not been updated with the new requirements
you can open a new PR based on the current one, and clean it up. might get more traction that way
He’s been very responsive to questions / bugs that I’ve had in the issue — I feel like he would update to get it moving forward. Maybe I should ping him on that.
But that is a good point… Maybe I ping him too to pull in the latest.
I know enough go to probably take it over and botch my way through it… but I have a talk coming up in December that I am focused on at the moment.
I hope these merge some time. We use cloudformation to manage some amplify resources currently
Done
I created one issue as well recently!! Hope it gets some traction
https://github.com/terraform-providers/terraform-provider-aws/issues/15855
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
Hello, when using helmfile provider and helmfile_release resource, is there a way to manage helm repositories?
Maybe someone in #helmfile will have more experience
yes, already asked there
but i finally decided to use helmfile_release_set and reuse helmfile.yaml
In terms of debugging I am using the classic ‘export TF_LOG=DEBUG’ but I would like to have a way to see the values of variables
For that I think you should use outputs. You can manage them to get the value of the variables, resource parameters or locals you want to see.
Output values are the return values of a Terraform module.
Yes already using it but the problem is that when debugging something that is not working
You do not get the output
I want the output value right before terraform crashes
ahh, got it. Is tf crashing because the value you are passing to the resource is not in the right format and you want to see the value before it gets passed to the resource or it’s a terraform engine error?
I want to see the value before it gets passed to the resource
I’m creating an array and passing it to an openstack resource block
locals {
extra_network_interface = flatten([
for interface in var.extra_network_interface: [
for instance in range(var.instance_count): {
"name": format("%s0%s-v-%s.%s", var.cluster_name, instance + 1, interface.name, var.cluster_domain), "subnet": interface.ipam_cidr
}
]
])
}
Creation of the array
resource "openstack_compute_instance_v2" "server" {
count = var.instance_count
name = format("%s0%s-v.%s", var.cluster_name, count.index + 1, var.cluster_domain)
image_id = var.os_image
flavor_name = var.flavor_name
key_pair = openstack_compute_keypair_v2.keypair.name
security_groups = var.security_group
network {
name = var.network_name_id
fixed_ip_v4 = ipam_ip_allocation.ip_allocation[count.index].ip_addr
}
dynamic "network" {
for_each = {
for interface in ipam_ip_allocation.extra_ip_allocation : interface.name => interface if interface.vm_name == format("%s0%s-v-%s.%s", var.cluster_name, count.index + 1, interface.vm_name, var.cluster_domain)
}
content {
name = network.network_name_id
fixed_ip_v4 = ipam_ip_allocation.extra_ip_allocation[count.index].ip_addr
}
}
and then execution in the Openstack resource block
have you tried using the terraform console to print the local.extra_network_interface value and make sure the array has the format you want?
The terraform console command provides an interactive console for evaluating expressions.
Yes I’m trying right now
but wanted to know if there is something else
not sure what I’m doing on this console ^^
if you open the console in the same path where the file with the locals is, just type local.extra_network_interface and it should output the list value.
Unfortunately, I don’t know a way of doing that dynamically with a breakpoint like debugging in python. I don’t think there is one
What error are you getting on crash?
Like a good old print() in Python
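If it helps, a throwaway output can act as that print() (the output name here is arbitrary), though it only renders after a successful run, so terraform console remains the better option when the apply is crashing:
output "debug_extra_network_interface" {
  # surface the computed local so its structure can be inspected
  value = local.extra_network_interface
}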
Upgrade Maintainence Oct 28, 19:44 UTC Investigating - The URL Service is going to be undergoing upgrade maintenance. Users may experience some errors using the service.
HashiCorp Services’s Status Page - Upgrade Maintainence.
v0.14.0-beta2 (This describes the changes since v0.13.4, rather than since v0.14.0-beta1.) NEW FEATURES:
Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.
terraform init: Terraform will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future.
This experiment relieves a major pain point with complex typed objects! module_variable_optional_attrs
module_variable_optional_attrs: When declaring an input variable for a module whose type constraint (type argument) contains an object type constraint, the type expressions for the attributes can be annotated with the experimental optional(…) modifier.
Marking an attribute as “optional” changes the type conversion behavior for that type constraint so that if the given value is a map or object that has no attribute of that name then Terraform will silently give that attribute the value null, rather than returning an error saying that it is required. The resulting value still conforms to the type constraint in that the attribute is considered to be present, but references to it in the receiving module will find a null value and can act on that accordingly.
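A minimal sketch of the experiment’s usage, based on the changelog text above (variable and attribute names are illustrative); note the module has to opt in:
terraform {
  # opt in to the experiment inside the module that declares the variable
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name = string
    tags = optional(map(string)) # may be omitted; reads as null
  })
}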
Wow. Excellent.
@Dan Meyers @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)
Upgrade Maintainence Oct 28, 22:03 UTC Resolved - This incident has been resolved.Oct 28, 22:02 UTC Update - Maintenance has finished. The system appears to be stable. Regular system monitoring will continue.Oct 28, 19:44 UTC Investigating - The URL Service is going to be undergoing upgrade maintenance. Users may experience some errors using the service.
Hi guys,
What do you think when you need to create an SNS-SQS subscription cross-account in AWS with terraform? It’s painful, isn’t it?! You have to set up multiple providers, to have access to both accounts, for the terraform apply command to work successfully.
A friend of mine looked inside terraform-provider-aws to find out why this is necessary, because when doing the same procedure using the AWS console, you don’t need access to both accounts.
That said, he opened a pull request that solves this problem and allows the current behavior to continue to work. So, if you are interested in this solution, leave a :+1: to help this improvement get into the next version as soon as possible.
Thanks!
• https://github.com/terraform-providers/terraform-provider-aws/pull/15633
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
Anybody using renovatebot? I am really upset with the onboarding experience. I enabled it for all repos, like mergify, only renovatebot is now opening 350 PRs and adding a public deploy key to all of our repos. Had no idea this was going to happen.
Each PR says they will only ever open 2 per hour or something and 20 total
We have hundreds now
Hadn’t seen that one, but I’ve had ok luck with Dependabot
Except no HCL2 support :-)
Ahhh I was wondering what the difference was
And wanted to avoid the one off GitHub action to run the patched fork
Yea, dependabot is very alpha for the gradle ecosystem too. Seems like the main diff I found between the two is that:
• Dependabot is very “batteries included”
• Renovatebot is more easily customizable
But I could be wrong based on my limited exposure to both
2020-10-29
Hello, I was about to submit a bug for this module: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster but the bug template suggested to ask here first
Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.
out of the box I see this error when trying to plan it
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
Error: Reserved argument name in module block
on main.tf line 129, in module "hostname":
129: count = var.number_of_broker_nodes > 0 ? var.number_of_broker_nodes : 0
The name "count" is reserved for use in a future version of Terraform.
[terragrunt] 2020/10/29 14:47:58 Hit multiple errors:
exit status 1
I don’t think it’s a bug, the module doesn’t support 0.12.x
oh
I thought that was it
Is this a bug or expected when using tf 0.12.26?
I’m writing a CD pipeline which will run a plan stage, then run an apply stage only if the plan has changes. How can I detect if a plan file contains changes?
Running terraform show terraform.plan I can parse the output for Plan: 0 to add, 0 to change, 0 to destroy. but this feels very fragile (for example, I need a more complex check to detect if there are output-only changes).
terraform plan -detailed-exitcode will exit 0 if there are no changes, 2 if there are changes
yeah, i use -detailed-exitcode
yeh ditto
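A minimal pipeline sketch around those exit codes (0 = no changes, 1 = error, 2 = changes present); the plan file name is arbitrary:
terraform plan -out=tfplan -detailed-exitcode
rc=$?
if [ "$rc" -eq 2 ]; then
  terraform apply tfplan   # changes detected, apply the saved plan
elif [ "$rc" -ne 0 ]; then
  exit "$rc"               # plan itself failed
fi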
2020-10-30
Help me figure out how to organize a blue/green deployment with terraform. I need to create a new service, do the migration, and then switch traffic to the new service.
Do you mean for ECS, EKS or generic?
Not in terraform yet, but aws just released this feature for ALB… https://aws.amazon.com/blogs/devops/blue-green-deployments-with-application-load-balancer/
In a traditional approach to application deployment, you typically fix a failed deployment by redeploying an older, stable version of the application. Redeployment in traditional data centers is typically done on the same set of resources due to the cost and effort of provisioning additional resources. Applying the principles of agility, scalability, and automation capabilities […]
Thanks @loren this is great! Just as an aside, I wonder if there’s a way to set target group for a particular group of clients.
I imagine you could do something creative with path-based routing…
I shall try
aws just released this feature for ALB…
what feature is new in that? seems more of a ‘how to’ blog to me
Yeah i agree, it’s showing how to use Application Load Balancer’s weighted target group feature
sorry perhaps the wording threw me off:
Application Load Balancers now support weighted target groups routing
the use of “now” just made it sound like a new feature
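For reference, recent AWS provider versions do expose weighted target groups via the forward block of a listener action; a minimal sketch with hypothetical blue/green target groups:
resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.web.arn # hypothetical ALB
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"
    forward {
      # shift traffic gradually by adjusting the weights
      target_group {
        arn    = aws_lb_target_group.blue.arn
        weight = 90
      }
      target_group {
        arn    = aws_lb_target_group.green.arn
        weight = 10
      }
    }
  }
}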
If a TF-security tool asks you to send your TF plan and state to the cloud for analysis, what’s your reaction?
Appreciate your guys’ help with the above poll.
i think i’d want some kind of anonymization also… things like account ids aren’t strictly sensitive but can become so in aggregate
terraform 0.14 is also doing cool things with inputs by marking them as sensitive and propagating that through the state and plan
That’s useful. What secrets would you care about most? Usernames and passwords, key files, anything else?
Secret manager and parameter store arns
And same in task defs
if you set the RDS master password via terraform for example, that’s in the State
and yah just stuff like names of params or buckets suddenly tells you “oh this company uses X product/service”
Is it a problem to know if a specific company uses a specific service?
yes, huge
competitors could use that in a bad way
i’d not be super concerned about that myself
but i’d try to handle that with anonymization of data rather than needing to mark it out
Tags on resources would be another thing to handle
Great feedback everyone, thank you. I’ve put all of this into the ticket for this capability.