#terraform (2020-12)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-12-01
Running into the following error when using the terraform-aws-ecs-container-definition module
Error: Variables not allowed
on <value for var.environment> line 1:
(source code not available)
Variables may not be used here.
With a configuration that looks like this
{
name = "SPRING_PROFILES_ACTIVE"
value = "${var.spring_active_profile}"
},
Is that inside a template or something?
it is being passed directly to the module as value to environment
(fwiw, value = "${var.spring_active_profile}" is HCLv1 syntax. In HCL2, it should be value = var.spring_active_profile)
Is that extract taken from a tfvars file?
It is not
It is being passed directly to the module
Can you share a minimal example?
Here’s an example using the terragrunt style
Ah terragrunt.
Yes, it’s not possible to do that, since terragrunt is a wrapper and is just setting these inputs as TF_VAR_… when calling Terraform.
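For reference, a rough sketch of the distinction (module source and the other inputs are assumed, not taken from the original post):
# terragrunt.hcl -- inputs are evaluated by Terragrunt and passed as TF_VAR_* values,
# so Terraform's var.* references are not available here; values must be literals
# or Terragrunt expressions
inputs = {
  spring_active_profile = "staging"
}
# main.tf (plain Terraform) -- here var.* references work as expected
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # other required inputs omitted
  environment = [
    {
      name  = "SPRING_PROFILES_ACTIVE"
      value = var.spring_active_profile
    }
  ]
}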
The way I understand the purpose of terragrunt.hcl is that it is where you set the variable values.
Can you explain why you need to do it this way?
the values get passed in via a regular terraform module
that was just an example
I’d need to see the entire example with all files
Seems a bit odd to me that this would not be allowed?
Do you have sample code I can try locally?
can anyone recommend an Elasticache module upstream?
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
variable "replicas_per_node_group" {
type = number
default = 0
description = "Required when `cluster_mode_enabled` is set to true. Specify the number of replica nodes in each node group. Valid values are 0 to 5. Changing this number will force a new resource."
}
You can validate that the value is a number 0-5. I don’t think you can enforce a number that is not the default if and only if some other variable is some value
You could use a conditional in your terraform code that changes the value if the other value is true
You would in effect be setting a different default when the other value is true
You can’t do it with validation, as that can only reference the variable in question.
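For reference, a sketch of the per-variable check that is possible with a validation block (0.13+), covering only the 0-5 range and not the cross-variable rule:
variable "replicas_per_node_group" {
  type    = number
  default = 0

  validation {
    condition     = var.replicas_per_node_group >= 0 && var.replicas_per_node_group <= 5
    error_message = "The replicas_per_node_group value must be between 0 and 5."
  }
}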
There is another way to do this, but it’s a little weird. I personally use it heavily because I think it’s really useful to enforce conditions like this rather than let them generate hard to understand errors.
data "external" "validate_replicas" {
  # intentionally points at a nonexistent program; when the condition is true,
  # the failure surfaces the message below during plan/apply
  count   = var.cluster_mode_enabled && var.replicas_per_node_group > 5 ? 1 : 0
  program = ["Error: if cluster mode is enabled, replicas must be 5 or less."]
}
is there a way to do this with validation?
if you have a map/object and the key name needs to contain “:” character for backwards compatibility with my environment e.g.
"terraform:managed" = string
"terraform:root" = string
Can you? TF currently complains
"Object constructor map keys must be attribute names."
I’ve tried a variety of escape characters but it looks like this is a non-starter. Any ideas please?
i believe the expression syntax using parens may work?
("terraform:managed") = string
("terraform:root") = string
Thanks Loren, not working on my quick test. Did I misunderstand?
variable "configs" {
description = "TESTING of configs."
type = object ({
("terraform:managed") = string
("terraform:root") = string
})
}
Also just tried changing the "(" to "{":
{"terraform:managed"} = string
{"terraform:root"} = string
but no joy
i wasn’t certain it would work in a variable definition. the syntax works elsewhere in tf where the expression confuses the standard parser. wrap the expression in parens and the parser then knows what to do. but the colon may confuse it further, since both colon and equal are valid separator tokens for tf maps
Fair enough, and thank you. Just wanted to double check I’d not simply applied your suggestion wrongly. I can work around it for now, it was only the name of an AWS tag, so I can inject it later; I was just trying to get as much into my data structure as possible.
for example, this works:
$ cat main.tf
locals {
foo = {
("colon:test") = "bar"
}
}
output foo {
value = local.foo
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
foo = {
"colon:test" = "bar"
}
though, as a local like that, the parens are not needed around the key
Ah, that’s good to know. (you should share a “buy me a beer/coffee” link) I think I owe you many now.
haha, all good
might want to open an issue with this use case. the locals test shows that the colon is valid in the key. i think this means that the object constructor is not quite intelligent enough to handle this correctly
Good point, I’ll get one written up when I get home.
only related issue i can find is this one, indicating the same problem with a period in the key… https://github.com/hashicorp/terraform/issues/22681
Terraform Version 0.12.7 Terraform Configuration Files variable "some_variable" { type = map(object({ variable.1.thing = object({ variable.list = list(string) }) })) } output "output…
Thanks for the reference, I’m happy to try and report it. I’ve a few hours travel until home so will do it once I get there.
those of you that use terraform cloud for your modules i.e.
module "consul" {
source = "app.terraform.io/example-corp/k8s-cluster/azurerm"
version = "1.1.0"
}
how do you test changes to your modules before cutting a new version for it?
is the best approach to just point your reference of the module at the git repo source during local testing and change it back once it’s ready?
module "consul" {
source = "[email protected]:example-corp/terraform-azurerm-k8s-cluster.git?ref={new_changes}"
# source = "app.terraform.io/example-corp/k8s-cluster/azurerm"
# version = "1.1.0"
}
You can use a local directory as the source
right, so that means you also comment out the terraform cloud source while doing local development?
We use kitchen-terraform to test each version before releasing it.
2020-12-02
Greetings everyone. I am using terraform-aws-ecs-container-definition and trying to add volumes using following code
volumes_from = [
{
sourceContainer="applogs"
readOnly=false
}
]
mount_points = [
{
containerPath = "/app/log"
sourceVolume = "applogs"
}
]
But I am getting following error
Error: ClientException: Invalid 'volumesFrom' setting. Unknown container: 'applogs'.
on main.tf line 151, in resource "aws_ecs_task_definition" "this":
151: resource "aws_ecs_task_definition" "this" {
Can anyone help me figure out what I am doing wrong here? I was unable to find an example.
According to the link https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html I think the input is correct but I am unable to figure out the missing piece here. Any help is appreciated.
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
When using Docker volumes, the built-in local driver or a third-party volume driver can be used. Docker volumes are managed by Docker and a directory is created in /var/lib/docker/volumes on the container instance that contains the volume data.
hmm are you trying to use a docker volume or mount a volume from another container?
I want to use docker volume which can act as a fresh volume for this newly created container. Actually my use case is that I want to use a volume which is shared between host and container. I want to access files placed on a specific path inside the container from host.
Ah yeah so you should define your docker volume parameters in the task definition
then use the same volume name in the container definition
if you look at the example under docker volume configurations you’ll see how the docker volume is referenced; in your container definition using the Cloud Posse module you would just use sourceVolume to reference the name defined under volume.
module "container_def" {
mount_points = [
{
containerPath = "/app/log"
sourceVolume = "service-storage"
}
]
resource "aws_ecs_task_definition" "service" {
family = "service"
container_definitions = module.container_def.json_map_encoded_list
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
driver = "local"
driver_opts = {
"type" = "nfs"
"device" = "${aws_efs_file_system.fs.dns_name}:/"
"o" = "addr=${aws_efs_file_system.fs.dns_name},rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
}
}
}
}
So I’ll put
volume {
name = "service-storage"
host_path = "/ecs/service-storage"
}
inside task definition and inside my container definition module I’ll use
volumes_from = [
{
sourceContainer="service-storage"
readOnly=false
}
]
Got it. I overlooked task definition, My bad. Thanks @Tom Dugan
is anyone using a postgres provider to create databases and users?
Yep.
any specific one you’d recommend?
i essentially need to do the following …
-- Create required databases
CREATE DATABASE notaryserver;
CREATE DATABASE notarysigner;
CREATE DATABASE registry ENCODING 'UTF8';
-- Create harbor user
-- The helm chart limits us to a single user for all databases
CREATE USER harbor;
ALTER USER harbor WITH ENCRYPTED PASSWORD 'change-this-password';
-- Grant the user access to the DBs
GRANT ALL PRIVILEGES ON DATABASE notaryserver TO harbor;
GRANT ALL PRIVILEGES ON DATABASE notarysigner TO harbor;
GRANT ALL PRIVILEGES ON DATABASE registry TO harbor;
There is a terraform-providers/postgresql provider which is the standard AFAIK.
from what i saw it doesn’t handle user creation
Ah maybe you’re correct on that front. I’ve created roles through the psql provider but not users probably.
makes sense
You can create a login role with the postgres provider which should be what you want, afaik
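A rough sketch of how the SQL above might map onto that provider (resource names and arguments are from the terraform-providers/postgresql docs as I remember them, so verify against your provider version; the password variable is hypothetical):
resource "postgresql_database" "registry" {
  name     = "registry"
  encoding = "UTF8"
}

resource "postgresql_role" "harbor" {
  name     = "harbor"
  login    = true
  password = var.harbor_db_password # hypothetical variable
}

resource "postgresql_grant" "registry" {
  database    = postgresql_database.registry.name
  role        = postgresql_role.harbor.name
  object_type = "database"
  privileges  = ["CREATE", "CONNECT", "TEMPORARY"] # database-level equivalent of GRANT ALL PRIVILEGES
}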
this didn’t seem to work for me
i am getting the following error …
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused
on .terraform/modules/data_platform_core/modules/data-platform-core/harbor-postgres-configuration.tf line 10, in provider "postgresql":
10: provider "postgresql" {
is anyone using https://tf-registry.herokuapp.com/providers/winebarrel/mysql/latest ?
no but I am doing something similar with snowflake using https://tf-registry.herokuapp.com/providers/chanzuckerberg/snowflake/latest very positive experience
Just don’t try and manage users from the same Terraform configuration you create the rds resources
why?
( I do……what did I do wrong?)
The provider needs to be configured before resources are created. If you attempt to configure the db provider based on the dynamic hostname/credentials generated in the same Terraform stack this is impossible
It works if you create the cluster first and later add the db provider resources. But it will fail if you ever rebuild the stack
Even worse, the MySQL provider has silent defaults for credentials. If you try and load them from a non-static source, it will appear to work but really be using localhost for the hostname or whatever
I was planning to have a module that wrapped my existing rds module and then took the outputs from that and passed them to the provider. Are you advising against this?
Yes, exactly
Personally I break it up into two configurations. First configuration creates rds, second manages users and other objects within.
But other people keep it in a single configuration and use hardcoded variables plus apply -target runs. Both work.
That’s going to be difficult for us as we want to provision the DB when it’s created
in TF you can search the cluster and then run the user creation and grant
It’s not that difficult. Your repo has two terraform configurations: create-rds/main.tf and everything-else/main.tf. Run terraform apply twice in a row
ahhhhhhh ok , that is another way
v0.14.0 0.14.0 (December 02, 2020) NEW FEATURES:
Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.
terraform init will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (https://github.com/hashicorp/terraform/issues/26524)
This follows on from some earlier work that introduced models for representing provider dependency "locks" and a file format for saving them to disk. This PR wires the new models and beha…
finally no more worries about tfstate and changing minor versions
i’ll wait for some minor updates before moving to tf14 . it looks impressive so far
I’ve enjoyed using the new concise diff engine.
same here, I will wait for few updates
2020-12-03
Can someone point me to relevant material on writing Terraform code that reflects industry standards? I am struggling to structure my code in a way that avoids duplication and can be used to deploy to multiple accounts with multiple environments. I explored Terragrunt and it is one of the options I could use to remove duplication. So simply put, I am looking for:
- Industry standard large enterprise code structuring method for Terraform
- Avoid duplication
For now I simply create a new folder for each new use case. For example:
test-org-one-ecs-solution-production
- modules
- test-org-vpc
- main.tf
- rest of the files
- test-org-rds
- main.tf
- ...
- test-org-ecs-app
- main.tf (it has resources defined and it also call Terraform AWS modules to create a complete app solution of ECS for test-org)
- ...
test-org-one-ecs-solution-staging
- copy of above
Whereas the TF states are maintained in S3.
I’m interested in others’ responses as well. I have some of my own opinions and I’ll share some resources on the topic that have helped our organization develop our TF code structure. I would assume you are not using Terraform Cloud?
• Terraform Repository Best Practices
• Digital Ocean’s Take on Directory structure
On the topic of Terragrunt, I do not personally use it but colleagues of mine do use it successfully. The feedback is mostly positive. I will say that Terragrunt does abstract some vanilla Terraform features, which has resulted in some miscommunication of concepts between us
Learn how to standardize your Terraform code and eliminate duplicate Terraform code.
How do you scale your Terraform configuration as your team grows? In this post, we discuss approaches to structuring your Terraform configuration for improved testing, reusability, and scalability.
In this blog post, we’ll go over how we structure our IaC repositories at 2nd Watch with a particular focus on Terraform, an open-source tool by Hashicorp for provisioning infrastructure across multiple cloud providers with a single interface.
Structuring Terraform projects appropriately according to their use cases and perceived complexity is essential to ensure their maintainability and extensibility in day-to-day operations. In this tutorial, you’ll learn about structuring Terraform proj
Correct. I am not using TF cloud.
Thanks @Tom Dugan. I’ll look into the material. I personally want to avoid Terragrunt
I am interested in what you end up with!
Sure. I’ll share.
I use modules (local and published), plus a “base” folder that holds common configuration across environments. Then for each environment I just symlink to the common stuff in the “base” folder. In each specific environment I’ll use separate tfvars files to set the module settings for each env. Works fine for me. Just another idea for you.
Also, an “Environment” might be represented by 10s or hundreds of workspaces. It’s never just 1 giant workspace with everything in there.
Here’s a small example:
├── Makefile
├── README.md
├── modules
│ └── default_route_device
│ ├── main.tf
│ └── variables.tf
└── projects
├── account-base
│ ├── account-base-nonprod.auto.tfvars
│ ├── account-base-prod.auto.tfvars
│ ├── account-base.auto.tfvars
│ ├── datasources.tf
│ ├── main.tf
│ ├── providers.tf
│ ├── template.sh
│ └── variables.tf
├── blah-env-us-east-1
│ ├── account-base-nonprod.auto.tfvars -> ../account-base/account-base-nonprod.auto.tfvars
│ ├── account-base.auto.tfvars -> ../account-base/account-base.auto.tfvars
│ ├── backend.tf
│ ├── datasources.tf -> ../account-base/datasources.tf
│ ├── main.tf -> ../account-base/main.tf
│ ├── providers.tf -> ../account-base/providers.tf
│ ├── blah-nonprod-us-east-1.auto.tfvars
│ └── variables.tf -> ../account-base/variables.tf
└── blah-env-us-west-2
├── account-base-nonprod.auto.tfvars -> ../account-base/account-base-nonprod.auto.tfvars
├── account-base.auto.tfvars -> ../account-base/account-base.auto.tfvars
├── backend.tf
├── datasources.tf -> ../account-base/datasources.tf
├── main.tf -> ../account-base/main.tf
├── providers.tf -> ../account-base/providers.tf
├── blah-nonprod-us-west-2.auto.tfvars
└── variables.tf -> ../account-base/variables.tf
Thanks @Jonathan Le I’ll be taking a look into the shared approach while working on finalizing the approach that suits my organization.
NP. Your requirements might be different than mine, so just giving an example to think about. Good luck.
Good morning, I was curious if anyone knew of a better way to consume this module then how I currently am doing and wouldn’t mind sharing. I’m using https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/latest
Basically, my main.tf looks like this:
module "ssm_parameter_store" {
source = "cloudposse/ssm-parameter-store/aws"
version = "0.4.1"
parameter_write = var.parameter_write
kms_arn = data.aws_kms_key.ec2_ami_cmk.arn ## encrypts/decrypts secrets marked as "SecretString"
}
And I am passing in a tfvars file for each environment (dev, test, prod).
parameter_write = [
{
name = "/dev/app/us-east-1/path/to/secrets/foo"
value = "abc123"
type = "String"
overwrite = "true"
},
{
name = "/dev/app/us-east-1/path/to/secrets/password"
value = "def456"
type = "SecureString"
overwrite = "true"
}
]
This works just fine but as you can see, a portion of my path is something that can be programmatically filled in. I was thinking of using a local variable called prefix, which would cut down on having to type out the full path each time.
I was hoping I could do something like for_each for the parameter_write = part?
Maybe something like this? I’m trying to find more information on how this is achieved.
dynamic "parameter" {
for_each = [for param in properties: {
name = "${local.prefix}-${param.name}"
value = param.value
type = param.type
overwrite = param.overwrite
}]
}
I also noticed that the module is currently using count instead of for_each, so at this time I’m not sure if it would introduce any problems.
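One way to avoid repeating the prefix without touching the module internals is to build the parameter_write list with a for expression (a sketch; local.prefix and var.parameters are hypothetical names, not from the original post):
locals {
  prefix = "/dev/app/us-east-1/path/to/secrets"
}

module "ssm_parameter_store" {
  source  = "cloudposse/ssm-parameter-store/aws"
  version = "0.4.1"

  kms_arn = data.aws_kms_key.ec2_ami_cmk.arn

  # var.parameters is assumed to be a map of name suffix => { value, type }
  parameter_write = [
    for name, param in var.parameters : {
      name      = "${local.prefix}/${name}"
      value     = param.value
      type      = param.type
      overwrite = "true"
    }
  ]
}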
I don’t personally use this module but I would echo the last concern about it using count vs for_each; in that case I would opt to for_each the module if I were to use it. Saying that, I just for_each the parameter store resource itself and I do define a path as a local.
Ya, we’d accept PRs to update the module to use for_each - just nothing we got around to.
I’m also wondering about the beanstalk buckets you’re using. They look like the prod beanstalk?
does anyone have an example of firing a lambda when a RDS database (not aurora) is created or modified?
i am trying to configure one to provision the instance with users and databases on creation
i am trying to work out what the event_pattern would look like
Learn how to get events from CloudWatch Events and Amazon EventBridge events for Amazon RDS.
Get a notification by email, text message, or a call to an HTTP endpoint when an Amazon RDS event occurs using Amazon SNS.
{
"source": [
"aws.rds"
],
"detail-type": [
"RDS DB Instance Event"
],
"detail": { "EventCategories": ["creation"] }
}
@Steve Wade (swade1987) try above
Hi guys! Any recommendation of a good tool to import AWS resources into Terraform? Something better than the following, because I have lots of resources.
terraform import example_thing.foo abc123
Check out terraformer
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
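Typical usage looks something like this (a sketch; flags as documented in the terraformer README, so verify against the version you install):
terraformer import aws --resources=vpc,subnet --regions=us-east-1 --profile=default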
How the pros use #Terraform https://pbs.twimg.com/media/EnNIxacUwAEstDf.jpg
Thx Matt!
Does anyone have an opinion on the thread I post here - https://twitter.com/swade1987/status/1334554787711492097?s=21
Question … How are people handling the Go code they require for lambdas that are deployed as part of their terraform modules? (1/n)
I’d keep it separate… You can use a source-only module to retrieve the go project, and reference paths in .terraform to pass to your tf lambda resource
The go project doesn’t need any tf code at all
module "foo" {
  source = "git::https://....git?ref=<ver>"
}

module "lambda" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-lambda.git?ref=v1.30.0"
  ...
  source_path = "${path.module}/.terraform/modules/foo/..."
}
note the link between the label of the source-only module, foo, and the path .terraform/modules/foo/...
You’ve lost me, I can reference a zip file from a source code url?
i don’t know why you would create the zip. let the module do it
What am I passing to the module then just the binary itself?
https://github.com/terraform-aws-modules/terraform-aws-lambda
recommend reviewing the module readme
Interesting I’m using a different module at the moment can easily switch though
i don’t see any examples in the repo for golang, so it will involve some experimentation to figure out how to get the module to build it. or you have your golang devs build it as part of their release/version cycle, and you can pull down that artifact and have the module create the zip of it
I think the easiest option would be to release a zip file as part of the golang repo and reference it that way
It’s so trivial to do I just need to work out how to get in inside tf
i just so despise committing a zip file or any binary to a repo. makes me sick inside
Not committing it, having it as a release artifact
oh right
that makes sense, your golang pipeline can compile the code and create the zip at the same time. easy
Exactly
Then I’ll use a null resource to get it
i guess now you can instead publish a container with the package, instead?
might be even easier
With AWS Lambda, you upload your code and run it without thinking about servers. Many customers enjoy the way this works, but if you’ve invested in container tooling for your development workflows, it’s not easy to use the same approach to build applications using Lambda. To help you with that, you can now package and […]
The issue with that is from the looks of things it has to be an ECR image
sure, either way, you’ll need to publish the release artifact to somewhere… i guess as part of your tf config you could somehow mirror from another registry to ecr. or just push from the golang pipeline to ecr
Neat trick. Never had a reason to come up with the thought to use files from .terraform yet. For the very small lambda stuff so far, the .go file just sits next to the terraform files. Will keep this method in mind for future
it’s a new favorite pattern of mine. helps separate concerns across different teams and projects. takes advantage of how terraform init pulls all module sources to the local .terraform cache before generating the plan. so the files are guaranteed to be at that path
@mfridh do you create the zip file as part of the terraform code or is that in the module directory already?
Question regarding upgrading modules to 0.13: we usually don’t declare the providers in the module, and let them use the provider configuration from the calling terraform. But it seems 0.13 requires something like this in all module repos
terraform {
required_version = ">= 0.13"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 2.0"
}
local = {
source = "hashicorp/local"
version = "~> 1.2"
}
null = {
source = "hashicorp/null"
version = "~> 2.0"
}
}
}
is this required in 0.14? for some reason I think this was going to be required at some point
I think it might be
another quick question: doing the above to a sub module appears to actually initialize a provider instead of using the root one. i know in tf 0.12 this could cause problems when removing resources, or the provider itself, from the module. but is it generally okay now
├── provider[registry.terraform.io/hashicorp/aws] 3.16.0
├── module.vpc
│   └── provider[registry.terraform.io/hashicorp/aws]
each module would have its own aws provider
its just that before, we were initializing providers in each module, and then trying to remove the module caused errors if it had a separate aws provider in the module. i just wanna make sure im not making the same mistake
I’m not really sure what the behaviour here is in theory, but in practice you “shouldn’t” run into problems. You should create all providers at the root level and pass them to your modules, rather than relying on the modules creating them.
kk thanks again man
provider "aws" {}
module "rds-cluster" {}
will implicitly use the root aws provider
provider "aws" { alias = "aws1" }
provider "aws" { alias = "aws2" }
module "rds-cluster" {
providers = {
aws = aws.aws2
}
}
will explicitly pass a specified provider (the syntax is off the top of my head, check the docs for correct details)
provider "aws"
module "rds-cluster" {
...
}
will still use the root provider, even tho i put the required_providers in the terraform block of rds-cluster right?
i think i got it tho
it makes sense
thanks for all the help man
modules will create providers if one doesn’t exist that’s suitable in the root configuration. IMO this is bad, but it’s the way Terraform works. So just make sure to create the providers yourself and you won’t get surprised
otherwise it will try and use -/aws instead of hashicorp/aws
is this instantiating a new aws provider in the module, or just requiring the root tf have that version?
should i always have a block like this in ALL modules with relevant providers
is that best practice now?
docs are a little unclear if this is only if i want a module local provider
The above block is not “required”, but generally you should include a minimal version where you specify the source of each required provider. The source is a mapping from friendly name (“aws”, “pagerduty”) to the Hashicorp registry name (“hashicorp/aws” or “pagerduty/pagerduty”). The recommendation from Hashicorp is that top level configurations specify strict version strings (“~>” or “=”), while modules specify minimum versions only for providers (“>=”). And I recommend in this Slack you thread your messages and post more than one sentence per message :)
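A sketch of that recommendation side by side (version numbers are only illustrative):
# In a reusable module: minimum bound only
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0"
    }
  }
}

# In the root configuration: strict pin
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.20"
    }
  }
}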
ahhhhh
The way it appears to require -/aws is an artifact of the plan for the upgrade to tf 0.13 tfstate. After the upgrade apply, you shouldn’t see that anymore
this worked, doing a local init outside of tfe, was able to correctly update these providers
thanks man
no prob!
You can use terraform state replace-provider to fix it before the apply if you want. Just be aware that modifies your tfstate and may cause problems if you wanted to keep using the earlier version… https://www.terraform.io/docs/commands/state/replace-provider.html
The terraform state replace-provider command replaces the provider for resources in the Terraform state.
hey team @Brandon Wilson I am going through a bunch of your modules that I use but I am on TF 0.14….
https://github.com/cloudposse/terraform-aws-iam-system-user/pull/38 https://github.com/cloudposse/terraform-aws-dynamodb-autoscaler/pull/27 https://github.com/cloudposse/terraform-aws-route53-cluster-hostname/pull/29
Once these ones are in, i’ll do the next round. Thanks
make this module v14 compatible
Thanks @Jurgen - you beat us to it.
Let’s use #pr-reviews, but we’re standing by to help review.
ah, didn’t know the channel.. i’ll move
yeah, I just have a project that isn’t in prod yet and I am beating the fore front of all versions
so not afraid to have shit break on tf betas, etc.
2020-12-04
Has anyone ever used an aws_cloudformation_stack because cloudformation did something better, like resource updates?
I’ve never seen cloudformation do updates better. But I have used cloudformation when tf didn’t yet support the resource but cfn did
yes i use the aws_cloudformation_stack
but only because our 3rd party CICD tool provides that instead of a terraform module
I used it for autoscaling group rolling updates.
Primarily when EKS Managed group wasn’t introduced yet.
Anybody here using the new service_ipv4_cidr field in aws_eks_cluster? Or looking to use it?
Separate question: we use TFLint and it feels a bit light in the rules that it has. Is there a better tool out there? Are there specific things you normally run into that TFLint doesn’t cover?
No info for ya, but I’m interested in following along on this one. I’d like to roll out TFLint to a large client Terraform monorepo at some point soon.
What are you hoping for it to catch for you?
I’d just like to enforce more consistency before folks commit. I’ve got other infra engineers who follow the patterns I’ve put in place, but there are a ton of more dev focused folks on the team who are writing TF code now so I’d love to catch naming, quoting, and other linting style hiccups before their code gets into PR and I have to reject it.
Ah, TFLint can definitely do that for you. It is also capable of handling lint issues on a per provider basis - like look for specific errors people tend to make with the AWS provider.
Yeah, figured as much. Just need to implement it! Following along on the threads as I’m interested in hearing if anyone weighs in with good tips for you
Most probably you came across that already https://github.com/antonbabenko/pre-commit-terraform
pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform
Yep
Are you looking for the tools or the policies?
There’s conftest, which speaks rego, but haven’t found a catalog of policies yet.
Example Code along with the blog post at https://blokje5.dev - Blokje5/validating-terraform-with-conftest
The policies. Then I can try and figure out what tools cover them, or not.
I’m trying to start from “what are the policies and rules worth having”, and then see what I can use to enforce them.
Ya, it’s the hard part for sure. We’re creating catalogs for this kind of stuff (we have for SCP, AWS Config, Datadog, etc). Nothing yet for HCL policies.
Checkov perhaps?
That’s more of a security analysis than an operational analysis, no?
You can create “cost policies” using checkov if you mean that by operations.
like ec2 types etc
2020-12-05
2020-12-07
terraform is publishing a roadmap… hadn’t noticed that before… https://github.com/hashicorp/terraform-provider-aws/blob/master/ROADMAP.md
Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.
Lifecycle: Retain [Add ‘retain’ attribute to the Terraform lifecycle meta-parameter]
Issue: https://github.com/hashicorp/terraform-provider-aws/issues/902
Some resources (e.g. log groups) are intended to be created but never destroyed. Terraform currently does not have a lifecycle attribute for retaining such resources. We are curious as to whether or not retaining resources is a workflow that meets the needs of our community
Huh what would that do …. you run a destroy but it leaves that particular resource (and dependencies) alone and ‘dangling’ within AWS?
Hi there, Terraform Version all Affected Resource(s) Please list the resources as a list, for example: aws_s3_bucket, s3 is a sample, this feature should be applied to most resources. meta-paramete…
that’s the idea yeah
That could be pretty useful if you were trying to have your environments scale to zero.
What does ‘scale to zero’ mean?
Oh, kubernetes thing
Scale to zero is the idea that you don’t have running infra costs when no one is using it. So lambdas are a good example. But let’s say you’re running a Fargate cluster, you could scale your instances and any other bill-per-usage infra down to zero overnight for example and not have to pay for them.
Sure. How’s that play into this idea of resources left outside the state?
The retain functionality would be less helpful with actually scaling your compute cause that already has good scaling functionality, but could help with removing your elasticsearch cluster but not your VPC for example.
Ah
Good afternoon, is there a quick way to take a map and remove any duplicate values, while maintaining it as a map (or creating a new one), as I need the key later on? I’ve tried reversing the keys and values, e.g. making the value become the key, in the hope that the duplicate would then just replace what was there, but it looks like TF no longer allows that (might never have allowed it, but I thought it did). Equally, I tried converting it to a list and then running it through distinct, which works in terms of removing the duplicate values but obviously loses the key.
for the duplicates, are the key and value both the same?
Is it
{
"foo" = "bar"
"foo" = "bar"
}
or
{
"foo" = "bar"
"foo" = "baz"
}
Also, it may be possible to have a map with duplicate keys, but that is not the intention of map. Maps are supposed to be lookup tables of unique keys to values
oh hang on I see what you are saying. The keys are different but there are duplicate values
In the case of having duplicate values, how are you deciding which key to keep? The first instance? Or some other logic
Afternoon Andrew, the data set I have would be the second option
{
"foo" = "bar"
"foo" = "baz"
}
Is it? I’m understanding your question now as you having this kind of data:
{
"foo" = "bar"
"baz" = "bar"
}
Where the keys are different, but they contain the same value
In terms of which key to keep, it doesn’t actually matter in may case.
Although, in your above example I’d say the key was on the left. So, more your second option than that the one directly above.
{
"key1" = "url1"
"key2" = "url1"
"key3" = "url3"
}
So what I’d like to get to is
{
"key1" = "url1"
"key3" = "url3"
}
right
yep, we’re on the same page
I’m thinking something with the merge function https://www.terraform.io/docs/configuration/functions/merge.html
The merge function takes an arbitrary number maps or objects, and returns a single map or object that contains a merged set of elements from all arguments.
still pondering
I wondered if I could have done something via a for loop and contains, but that only looks to work if I make a copy of the map and then basically loop over it and say add it to map3 if the value is not contained in map2. TF complains if I try to refer back to myself while iterating through the loop. This might be the only way to do it; it just didn’t feel the most optimal or possibly even correct approach. I’ve a bad habit of overcomplicating things, when a simpler answer exists.
you stumped me
I’ve stumped myself on every twist and turn with terraform. So, its not an unusual feeling for me. Thank you for taking the time to make some suggestions.
i would stop and ask, why is my data model like this, and can i rework the data model to function better in the context of the tooling?
but, you can probably do something with the functions for set math. that can tell you which values are in both sets, which values are missing from a set, etc…
The setsubtract function returns a new set containing the elements from the first set that are not present in the second set
Hi Loren, as ever you do make a very valid point regarding the data set. I think the short answer is… it’s 100% my fault. I have a data structure (map of objects) that I loop over to gather some information, basically just 3 configurations that represent multiple sites
stuff = { for config_key, config in var.site_configs2 : config_key => config.s3_config.s3_bucketname if config.s3_config.s3_create == true }
stuff = {
authoring = "assets.mydomainname.test"
cms = "assets.mydomainanme.test"
maintenance = "assets.maintenance.mydomainname.test"
}
Some websites are only host headers but have some things that access S3, so they have the same s3 bucket specified as another config. Setting if config.s3_config.s3_create == true has worked around the problem as I can set it to false on the sites that are just headers, but I’m not sure I’ll be able to do that for ever. So I was looking to simply remove the duplicates.
The key is important only because the for_each creating the s3 bucket should be named based on the key, so I can easily reference it by the known name of the key when creating other resources.
Sorry, hope that makes some level of sense in terms of the explanation. Probably not my rationale for structuring it the way I have. I’ll have a look at setsubtract, thank you for the advice.
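If it helps, one trick for the dedup itself is the grouping operator in a for expression: invert the map so the URL becomes the key (collapsing duplicates because map keys are unique), then invert it back. A sketch; which original key survives a collision is simply the first one in the grouped list:
locals {
  stuff = {
    key1 = "url1"
    key2 = "url1"
    key3 = "url3"
  }

  # "url1" => ["key1", "key2"], "url3" => ["key3"]
  by_value = { for k, v in local.stuff : v => k... }

  # key1 => "url1", key3 => "url3"
  deduped = { for v, keys in local.by_value : keys[0] => v }
}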
On a different note to the above questions: can anyone please tell me if it’s possible yet to do a for_each within a resource and have it dynamically change regions? Based on https://github.com/hashicorp/terraform/issues/19932 it looks like it’s still not possible, but I was wondering if anybody has seen a workaround with TF 0.14?
Current Terraform Version Terraform v0.11.11 Use-cases In my current situation, I am using the AWS provider so I will scope this feature request to that specific provider, although this may extend …
It’s not possible
Thanks for confirming Alex
any lambda cloudwatch experts able to tell me why my lambda does not fire when my DB gets created …
resource "aws_cloudwatch_event_rule" "harbor_rds_creation_or_modification_event" {
name = "${var.team_prefix}-${var.environment}-harbor-db-event"
description = "Capture any event related to the ${var.team_prefix}-${var.environment} harbor database."
event_pattern = <<PATTERN
{
"source": [
"aws.rds"
],
"resources": [
"${module.harbor_postgres.database_arn}"
],
"detail-type": [
"RDS DB Instance Event",
"RDS DB Cluster Event"
]
}
PATTERN
}
resource "aws_cloudwatch_event_target" "harbor_rds_creation_or_modification_event" {
rule = aws_cloudwatch_event_rule.harbor_rds_creation_or_modification_event.name
arn = module.harbor_lambda.arn
}
resource "aws_lambda_permission" "harbor_rds_creation_or_modification_event" {
statement_id = "Allow-Harbor-Database-Provisioner-Execution-From-Cloud-Watch-Event"
action = "lambda:InvokeFunction"
function_name = module.harbor_lambda.name
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.harbor_rds_creation_or_modification_event.arn
}
@Steve Wade (swade1987) did you try the event pattern that I shared in the previous thread?
TIL — override.tf is a special file in Terraform: https://www.terraform.io/docs/configuration/override.html
Override files allow additional settings to be merged into existing configuration objects.
wow that would be confusing to debug if you didn’t know this was a thing
Yeah for real. I wouldn’t really want to use it, honestly. Seems like a way to put a bandaid on a larger problem.
imagine an environment without internet access, where module source urls need to be overridden to point at an internally accessible git remote…
That’s a good example… have you used this before / needed that pattern Loren? Do you not check in your override.tf in that case?
i check it in my root, not in public modules
and yes, exactly this use case
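As a sketch of that pattern (the internal mirror URL is made up), the public root config might declare:
module "vpc" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=v2.64.0"
}
and an override.tf kept only in the air-gapped copy of the root merges in a different source:
module "vpc" {
  # same module label, so this source wins when Terraform merges override files
  source = "git::https://git.internal.example.com/mirrors/terraform-aws-vpc.git?ref=v2.64.0"
}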
Interesting
Wow. It’s this newish?
Hi All. Question about the use of the https://github.com/cloudposse/terraform-aws-cloudformation-stack module. I’m trying to use some values in the parameters key-value map that are from local variables, e.g.
module "ecs_cloudwatch_prometheus" {
source = "git::<https://github.com/cloudposse/terraform-aws-cloudformation-stack.git?ref=tags/0.4.1>"
enabled = true
namespace = "eg"
stage = var.env_name
name = "cloudwatch-prometheus"
template_url = "<https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/ecs-task-definition-templates/deployment-mode/replica-service/cwagent-prometheus/cloudformation-quickstart/cwagent-ecs-prometheus-metric-for-awsvpc.yaml>"
parameters = {
ECSClusterName = "${var.env_name}-ecs-cluster"
CreateIAMRoles = false
ECSLaunchType = "fargate"
SecurityGroupID = "${local.security_group_ids}"
SubnetID = "${local.subnet_ids}"
TaskRoleName = var.env_name == "production" ? "ecs_task_execution_role" : "${var.env_name}_ecs_task_execution_role"
ExecutionRoleName = var.env_name == "production" ? "ecs_role" : "${var.env_name}_ecs_role"
}
capabilities = ["CAPABILITY_IAM"]
}
but I get the error
The given value is not suitable for child module variable "parameters" defined
at .terraform/modules/ecs_cloudwatch_prometheus/variables.tf:71,1-22: element
"SecurityGroupID": string required.
Perhaps I’m just misunderstanding how to use that key-value map. Could someone take a look at my syntax and see if there is an obvious problem? Thank you!
Terraform module to provision CloudFormation Stack - cloudposse/terraform-aws-cloudformation-stack
Hi @Garth, just by the variable name it looks like you’re passing an array instead of a single item. What’s in local.security_group_ids?
Here are my locals:
locals {
security_group_ids = concat(module.networking.security_groups_ids, [module.rds.db_access_sg_id])
subnet_ids = module.networking.private_subnets_ids
}
do i need to format the lists as strings that the cf template is expecting?
et voila! a join appears to do it.
Thanks @github140 for the hint!
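For the record, the working shape was roughly the following (a sketch; CloudFormation list-type parameters are passed as a single comma-delimited string):
parameters = {
  SecurityGroupID = join(",", local.security_group_ids)
  SubnetID        = join(",", local.subnet_ids)
  # ...
}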
anyone run into cycle errors running terraform destroy with eks + k8s/helm provider? If I remove the helm/kube resources by running terraform apply w/ the helm/k8s resources removed I can subsequently run terraform destroy on the eks cluster/worker. I’m running terraform on 0.12.29 still
sounds like you might benefit from 0.13 depends_on for modules
@Erik Osterman (Cloud Posse) looks like people are running into a lot of cycle issues in 0.13 too. I’ve been running this setup for a while now (deploy cluster + deploy helm/k8s resources in a single terraform apply + destroy w/o a problem) and it’s historically worked great. Seems like a possible regression towards the end of 0.12.x that’s also leaked into 0.13?
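For anyone not on 0.13 yet, the module-level depends_on mentioned above looks like this (a sketch with hypothetical module names):
module "eks" {
  source = "./modules/eks"
}

module "helm_releases" {
  source     = "./modules/helm-releases"
  depends_on = [module.eks] # new in 0.13: depends_on on module blocks
}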
2020-12-08
@here PSA: we’re working on some of the underlying scaffolding and tooling to better support 0.14 and future updates. 0.13 was a big pain, and we learned a lot. A few things are happening behind the scenes:
• We’re switching everything on our side over to the terraform registry notation so we can use renovatebot (which doesn’t like our ref=tags/... format).
• In order to support the registry notation, we needed to update our test-harness to support bats (this is done, but is not backwards compatible).
• We’re adding mergify to quickly merge automated PRs, but were quickly blocked by the next hurdle: mergify cannot be a CODEOWNER because it’s a GitHub App. So we need to upgrade our account. Working on that (but it’s expensive!)
• We’ve added the make targets to quickly convert a module to use the “new” (but not so new) provider and registry notation. But it turns out many have trouble running this with the build-harness natively, so we’re going to add a make docker/shell target to run it in a container and mount the cwd into the container. This will also help with make readme and other things like it.
• We’ve added the github actions to automatically rebuild the README nightly, but it’s blocked on the mergify issue above.
• We’ve added the github actions to automatically update the context.tf from the central copy in terraform-null-label
• We’ve added the make target to the build-harness which will update the lower-bound pinning for modules pinned to >= 0.12 to be >= 0.12.26 (to support new provider syntax)
• We’ve drafted the renovatebot configuration to automatically update module pinning and run tests, then merge when they pass.
All of this work will make all future upgrades of terraform breezy. Unfortunately, with so many changes, we ran into the inevitable rough edges.
This is all to say, we’re currently blocked on testing PRs because tests are failing due to our changes. We’re working to fix those. ETA is by end-of-the-week.
Does anyone do terraform tests against their root modules? I’m assuming no, but if anyone is I’d like to hear your experience.
yes
kinda? depends on what you mean by “terraform tests”…
Yes, indeed.
@jose.amengual @Chris Wahl do you derive value out of those tests? Are you going about it with the terratest process? Do you require passing before applying those root modules?
@loren referring to terratests tests I guess or other terraform testing framework tests.
ahh, no. our roots are comprised of modules, with logic tied together using locals or data sources… no actual resources. we use terratest to exercise each module independently.
That’s what I expected folks to do — Test your reusable modules, but root modules don’t get tested other than actually being used.
because there can be issues threading modules together with that logic, we do have a “mock” account for exercising the root config… the ci pipeline runs a plan against all accounts when a pr is opened for review. when merged to the main branch, the ci runs the apply against the mock account. if that succeeds, and if the “new release” condition is present, then it tags the repo. the ci pipeline picks up the tag event and runs the apply on all accounts
that review pipeline only has read permissions, so it can’t inadvertently do anything
We are using terratest and implementing aws-nuke, the idea is to build e2e integration tests
value=the thing works and here is the report
gotcha. Interesting, thanks gents.
you can’t ask us a question without telling us why you are asking
otherwise we charge for answers
Hahah just wondering what folks in this community do. I’m trying to get my mind around this for a client.
ahh ok
Similar to @loren - our root modules are mostly just calling other modules. I use tflint to check the version being snagged (e.g. “latest” versus a branch / version) and another test just to make sure the module path is still valid (based on a previous issue where someone moved a GitLab project and broke everything).
From Terraform 0.14 webinar — Lightly confirming we’re getting 1.0 after 0.15?
ohhhhhh interesting
v0.14.1 0.14.1 (December 08, 2020) ENHANCEMENTS: backend/remote: When using the enhanced remote backend with commands which locally modify state, verify that the local Terraform version and the configured remote workspace Terraform version are compatible. This prevents accidentally upgrading the remote state to an incompatible version. The check is skipped for commands which do not write state, and can also be disabled by the use of a new command-line flag, -ignore-remote-version.
Any github actions users starting to get a weird error? (started over the past hour or so)
Run hashicorp/setup-terraform@v1
internal/modules/cjs/loader.js:800
throw err;
^
Error: Cannot find module 'asn1.js'
Require stack:
- /home/runner/work/_actions/hashicorp/setup-terraform/v1…
Hello, very general question ( just curious) if anyone here used/heard about Terraboard (https://github.com/camptocamp/terraboard)? any thoughts you might have?
A web dashboard to inspect Terraform States - camptocamp/terraboard
v0.14.2 0.14.2 (December 08, 2020) BUG FIXES: backend/remote: Disable the remote backend version compatibility check for workspaces set to use the “latest” pseudo-version. (#27199) providers/terraform: Disable the remote backend version compatibility check for the terraform_remote_state data source. This check is unnecessary, because the…
Busy day for TF releases
hello all
I’m having an issue running some TF code from this PR: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1138
Signed-off-by: Kevin Lefevre [email protected] PR o'clock Description Enable the creation of a default launch template if needed to use with managed node pool. This enable the use of kube…
Error: Invalid for_each argument
on ../../../modules/terraform-aws-eks/modules/node_groups/launchtemplate.tf line 2, in data "template_file" "workers_userdata":
2: for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
the other guy says he can run the same code just fine … which is crazy — i tried the same version of TF that he’s on
are you both starting from a blank tfstate?
pretty sure he is — let me check mine
my $ terraform state list is empty
this kind of error is more common from a blank tfstate… double check the other one
how can i get rid of my blank tfstate?
or … more importantly, how do i avoid the error?
once you apply, tfstate will not be empty
apply fails too
the error is telling you, use -target to apply dependent resources that make up your for_each expression
dependent …
i’ll try that
basically, you have something in local.node_groups_expanded making it such that your k value is not known during the plan. if k is not known in the plan phase, then terraform cannot determine the resource label, and it fails with this error
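The general rule, as a sketch: for_each keys must be computable from configuration at plan time, not from attributes that only exist after apply.
# fine: keys come straight from configuration, known at plan time
for_each = { for k, v in var.node_groups : k => v }
# fails with this error: the subnet ids do not exist yet, so the keys are unknown until apply
for_each = toset(aws_subnet.private[*].id)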
so terraform apply -target=module.eks.modules.node_groups ?
that ran, applied nothing, still same error
i can’t give you the answer, i can only describe the condition under which that error occurs
using more specific targets just results in help message being printed
what do you mean by “more specific targets”?
terraform apply -target='module.eks.modules.node_groups.aws_launch_template.workers'
trying to apply this: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1138/files#diff-88d020257d5b3a657e10667cc97a9f3b0fa430b32bcd72d894d262d9c5351f3aR16
i only have 1 module defined in my main.tf — module.eks
— that module contains another, called node_groups
$ terraform apply -target='module.eks.module.node_groups' results in the same error
seems weird. The value of local.node_groups_expanded seems to only depend on variables and static references
yeah … so what the heck lol
is there something wrong with this syntax? for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }
syntax looks fine to me
node_groups_expanded also has a for k, v thing going on
can anyone else try running this for me?
try opening terraform console and see what value local.node_groups_expanded has
once i run terraform console, then what?
one thing you could try is getting rid of the template_file data source… you ought to be able to use the function templatefile() directly
user_data = base64encode(templatefile("${path.module}/templates/userdata.sh.tpl", {
  kubelet_extra_args = each.value["kubelet_extra_args"]
}))
template_file is deprecated anyway…
now it’s just complaining about the next block that uses that for_each
Error: Invalid for_each argument
on terraform-aws-eks/modules/node_groups/launchtemplate.tf line 8, in resource "aws_launch_template" "workers":
8: for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
“Terraform cannot predict how many instances will be created.”
I feel like we could easily count how many will be created using a function or something
@loren @cabrinha get a ~room~ thread
help me not hit this bug pls
2020-12-09
Is anyone using the GitHub pull request comment feature with terraform cli? Digging the preview in the PR.
I do want to figure out if I could use the Github deployment feature in actions to make that work smoother on the final merge and approval review.
The issue is that after merge to master I want the final plan to be approved. Right now I have it trigger a run to approve in Terraform Cloud, but because the call is synchronous it will error with a timeout unless promptly resolved.
I’d be interested in this too. We’re building a github action for TF security review and it will be commenting on the PR too.
Yesterday we released our catalog for the full suite of managed AWS Config rules (including those for CIS). https://github.com/cloudposse/terraform-aws-config https://github.com/cloudposse/terraform-aws-config/tree/master/catalog
This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config
Hi, I’m using terraform cloud and have published a private module for my org. Are there any tips on best practice for referencing the module from a separate repo that I use only to apply variables via the tfe_variable resources?
E.g if I commit my changes for all tfe_variables for my workspaces how can I have my repo reference my private module?
I use the AWS Redshift Terraform module, https://github.com/terraform-aws-modules/terraform-aws-redshift, and I got the error below:
Error: InvalidClusterSubnetGroupStateFault: Vpc associated with db subnet group redshift-subnet-group does not exist.
Per the documentation: redshift_subnet_group_name: The name of a cluster subnet group to be associated with this cluster. If not specified, new subnet will be created.
I use the module terraform-aws-modules/vpc/aws to provision a VPC with the following subnets:
private_subnets = var.private_subnets
public_subnets = var.public_subnets
database_subnets = var.database_subnets
elasticache_subnets = var.elasticache_subnets
redshift_subnets = var.redshift_subnets
Below is the redshift code:
module "redshift" {
source = "terraform-aws-modules/redshift/aws"
version = "2.7.0"
redshift_subnet_group_name = var.redshift_subnet_group_name
subnets = data.terraform_remote_state.vpc.outputs.redshift_subnets
cluster_identifier = var.cluster_identifier
cluster_database_name = var.cluster_database_name
encrypted = false
cluster_master_password = var.cluster_master_password
cluster_master_username = var.cluster_master_username
cluster_node_type = var.cluster_node_type
cluster_number_of_nodes = var.cluster_number_of_nodes
enhanced_vpc_routing = false
publicly_accessible = true
vpc_security_group_ids = [module.sg.this_security_group_id]
final_snapshot_identifier = var.final_snapshot_identifier
skip_final_snapshot = true
}
The error is gone if I comment out the line redshift_subnet_group_name = var.redshift_subnet_group_name. But why?
Terraform module which creates Redshift resources on AWS - terraform-aws-modules/terraform-aws-redshift
2020-12-10
I got errors below:
terraform validate
Error: Unsupported block type
on .terraform/modules/elasticsearch/main.tf line 105, in resource "aws_elasticsearch_domain" "default":
105: advanced_security_options {
Blocks of type "advanced_security_options" are not expected here.
Error: Unsupported argument
on .terraform/modules/elasticsearch/main.tf line 139, in resource "aws_elasticsearch_domain" "default":
139: warm_enabled = var.warm_enabled
An argument named "warm_enabled" is not expected here.
Error: Unsupported argument
on .terraform/modules/elasticsearch/main.tf line 140, in resource "aws_elasticsearch_domain" "default":
140: warm_count = var.warm_enabled ? var.warm_count : null
An argument named "warm_count" is not expected here.
Error: Unsupported argument
on .terraform/modules/elasticsearch/main.tf line 141, in resource "aws_elasticsearch_domain" "default":
141: warm_type = var.warm_enabled ? var.warm_type : null
An argument named "warm_type" is not expected here.
[terragrunt] 2020/12/10 14:11:49 Hit multiple errors:
Here is the code, main.tf:
module "elasticsearch" {
source = "git::<https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1>"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnets
zone_awareness_enabled = var.zone_awareness_enabled
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
#dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = var.kibana_hostname_enabled
domain_hostname_enabled = var.domain_hostname_enabled
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
context = module.this.context
}
Probably your aws provider version is too old
provider "aws" {
version = "2.55.0"
region = var.region
}
I changed aws provider to 3.20.0. It solves the problem.
module "this" {
source = "git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.22.0>"
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
context = var.context
}
# Copy contents of cloudposse/terraform-null-label/variables.tf here
variable "context" {
type = object({
enabled = bool
namespace = string
environment = string
stage = string
name = string
delimiter = string
attributes = list(string)
tags = map(string)
additional_tag_map = map(string)
regex_replace_chars = string
label_order = list(string)
id_length_limit = number
})
default = {
enabled = true
namespace = null
environment = null
stage = null
name = null
delimiter = null
attributes = []
tags = {}
additional_tag_map = {}
regex_replace_chars = null
label_order = []
id_length_limit = null
}
description = <<-EOT
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
EOT
}
variable "enabled" {
type = bool
default = true
description = "Set to false to prevent the module from creating any resources"
}
variable "namespace" {
type = string
default = "dev"
description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}
variable "environment" {
type = string
default = "dev-blue"
description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}
variable "stage" {
type = string
default = "dev-blue"
description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}
variable "name" {
type = string
default = "es-nsm-blue"
description = "Solution name, e.g. 'app' or 'jenkins'"
}
variable "delimiter" {
type = string
default = "-"
description = <<-EOT
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
EOT
}
variable "attributes" {
type = list(string)
default = []
description = "Additional attributes (e.g. `1`)"
}
variable "tags" {
type = map(string)
default = {}
description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}
variable "additional_tag_map" {
type = map(string)
default = {}
description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}
variable "label_order" {
type = list(string)
default = null
description = <<-EOT
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
EOT
}
variable "regex_replace_chars" {
type = string
default = null
description = <<-EOT
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
EOT
}
variable "id_length_limit" {
type = number
default = null
description = <<-EOT
Limit `id` to this many characters.
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
EOT
}
Is it possible to retrieve the ARN of a specific key in a secret using aws_secretsmanager_secret or the aws_secretsmanager_secret_version data source?
AWS docs say that the ARN to a specific key can be constructed this way
"arn:aws:secretsmanager:region:aws_account_id:secret:example-secret:example-key::"
I’m curious if it’s possible to just reference the ARN via a datasource
Rather than constructing a string myself, eg:
data "aws_secretsmanager_secret" "this" {
name = var.secrets_manager_secret
}
locals {
example_service_token_secret_arn = data.aws_secretsmanager_secret.this.arn
}
valueFrom = "${local.example_service_token_secret_arn}::example_service_token::"
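As far as I can tell, neither data source exposes a per-key ARN, only the secret-level ARN, so the key suffix still has to be appended; a minimal sketch using the names from the example above:
data "aws_secretsmanager_secret" "this" {
  name = var.secrets_manager_secret
}
locals {
  # ECS "valueFrom" format is <secret-arn>:<json-key>:<version-stage>:<version-id>
  example_service_token_secret_arn = "${data.aws_secretsmanager_secret.this.arn}:example_service_token::"
}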
Hi all, I’m using cloudposse/terraform-aws-elasticache-redis
and stuck with this error. I have no idea how to move forward. I tried with TF 0.13.5 and I'm getting the "state snapshot was created by Terraform v0.14.0, which is newer than current v0.13.5"
error. How can I resolve this issue?
you used tf 0.14 to do init ?
or 0.13.5?
if you switch you need to remove the .terraform directory before you try init again
I tried different ways, not knowing what to do, now not sure which version the init happened last. :sweat_smile: I’ll try deleting the .terraform
and doing an init again
I deleted the .terraform
and did an init again with 0.13.5
but still getting the state snapshot was created by Terraform v0.14.0, which is newer than current v0.13.5
error.
so your state was upgraded
because you used 0.14
you need to go back to 0.14
On 0.14 I’m getting this error as I posted
We are using cloudposse/terraform-aws-elasticache-redis
so you need to push a PR to relax the provider version
what: Updated the required provider versions to get this module working with the latest terraform 0.13 release. why: Without this patch this module does not work with terraform 0.13.4.
This one is supposed to relax the provider version of cloudposse/terraform-aws-elasticache-redis, right? It's not getting merged
make this module v14 compatible
2020-12-11
Hi everyone, we use the cloudposse/terraform-aws-route53-cluster-zone
and are seeing a race condition between the NS record created by Terraform and the NS record created by AWS. Has anyone else run into that? Is there a reason not to use the AWS-created NS records? You could achieve management over the resource by doing a data import instead of a creation.
We added allow_overwrite to the NS record and this solved it.
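For reference, a minimal sketch of what that looks like (the zone reference is illustrative):
resource "aws_route53_record" "ns" {
  zone_id         = aws_route53_zone.this.zone_id
  name            = aws_route53_zone.this.name
  type            = "NS"
  ttl             = 30
  records         = aws_route53_zone.this.name_servers
  allow_overwrite = true # take over the NS record AWS creates with the zone
}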
I provisioned Elasticsearch. I got URL outputs for "domain_endpoint", "domain_hostname", "kibana_endpoint" and "kibana_hostname". But I cannot hit any of these URLs; I get "This site can't be reached". Below is the code:
main.tf:
module "elasticsearch" {
source = "git::<https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1>"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
zone_awareness_enabled = var.zone_awareness_enabled
subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = var.kibana_hostname_enabled
domain_hostname_enabled = var.domain_hostname_enabled
allowed_cidr_blocks = ["0.0.0.0/0"]
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
context = module.this.context
}
terraform.tfvars:
enabled = true
region = "us-west-2"
namespace = "dev"
stage = "abcd"
name = "abcd"
instance_type = "m5.xlarge.elasticsearch"
elasticsearch_version = "7.7"
instance_count = 2
zone_awareness_enabled = true
encrypt_at_rest_enabled = false
dedicated_master_enabled = false
elasticsearch_subdomain_name = "abcd"
kibana_subdomain_name = "abcd"
ebs_volume_size = 250
create_iam_service_linked_role = false
dns_zone_id = "Z08006012JKHYUEROIPAD"
kibana_hostname_enabled = true
domain_hostname_enabled = true
this is provisioned in private subnets
slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
I believe you need to configure it without a VPC to allow public access, otherwise you need to create a reverse proxy or use a VPN.
If you do public, make sure you vary the access policy and/or IP restrictions
private subnets can’t be accessed from the internet
yes, agree with @Joe Niland
make sure you protect it (with IP restrictions or password) if you open it to the internet. The AWS IPs are constantly scanned by bots, and your cluster will be hacked in minutes
How about enabling NAT? Will that solve the problem?
NAT is from the subnets to the internet
so no, you will not be able to access the cluster behind NAT
So, I need to provision ES without a VPC.
How about placing ES in the public subnets? Would that solve the problem?
Have a look at this article @melissa Jenner https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html
Learn how to launch your Amazon ES domain within a VPC.
What is the best practice? If you were provisioning ES, how would you do it?
It depends on the access requirements. What are you trying to do?
The ideal case is to be able to access ES from behind the VPN.
I login to my company via VPN.
And I provision ES.
It looks like attaching access_policies may also solve the problem? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain_policy
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "es:*", "Resource": "${resource_arn}/*" } ] }
Will it solve the problem if I attach this policy?
That’s at a different layer. First you need to get access to it at the network level. If you’re using a VPN to connect to your VPC that should be ok, although ideally you would lock down the permissions to certain roles or whatever.
Yes. I am using a VPN to connect to my VPC. How do I add the access policy I posted above as Terraform code?
I added the line below:
iam_role_arns = ["*"]
But I got an error.
module.elasticsearch.aws_elasticsearch_domain_policy.default[0]: Creating...
Error: InvalidTypeException: Error setting policy: [{ “Version”: “2012-10-17”, “Statement”: [ { “Sid”: “”, “Effect”: “Allow”, “Resource”: [ “arnes12345678:domain/abcd-domain/”, “arnes12345678:domain/abcd-domain” ], “Principal”: { “AWS”: [ “arniam:role/abcd-domain-user”, “” ] } } ] }]
2020-12-12
Hi, I am working on a project where I need to deploy a Ruby application along with the infrastructure (created with Terraform) on ECS. I am using a CircleCI pipeline. The pipeline job creates the infra (RDS, Redis, ECR and my ECS services and cluster) through the Terraform CLI. Now I have a requirement that whenever the environment variables of the application change, I want to deploy new infrastructure so that each application works separately. The problem I'm facing is that the S3 backend configuration is not dynamic. If I could somehow provide the S3 key dynamically, the state file for each application could be maintained separately.
Simply put my use case is that whenever CircleCI pipeline is triggered, based on the environment variable either a new deployment along with the infrastructure is made if the environment variable file is new or it simply updates the old infra and deployment.
terraform {
backend "s3" {
encrypt = true
key = "./tfstates/staging/${var.something_dynamic}/ecr/terraform.tfstate"
region = "eu-west-1"
bucket = "mys3bucket"
profile = "default"
}
}
Very cool set up, I think you can solve your problem using backend partial config. https://www.terraform.io/docs/backends/config.html
Backends are configured directly in Terraform files in the terraform
section.
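In other words, leave the dynamic part out of the backend block and supply it at init time; a rough sketch using the values from the question (APP_NAME is a hypothetical pipeline variable):
terraform {
  backend "s3" {
    encrypt = true
    region  = "eu-west-1"
    bucket  = "mys3bucket"
    profile = "default"
  }
}
# in the CircleCI job, pass the key per application:
# terraform init -backend-config="key=tfstates/staging/${APP_NAME}/ecr/terraform.tfstate"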
Exactly the thing I was looking for. I need to read documentation more and more. Thanks @Tom Dugan
Consider the following object,
variable "data_sources" {
type = list(object({
  environment = string
  url         = string
}))
description = "An object containing data source URLs per environment"
default = [
  {
    environment = "beta"
    url         = "jdbc:postgresql://1.1.1.1:5432/db"
  }
]
}
I am attempting to retrieve the value of the URL and assign it to a local based on a user supplied variable called environment
Digging through the various function documentation for terraform there doesn’t appear to be many that operate on objects.
Any tips are welcomed.
locals {
data_source_url = var.environment != null ?
}
Is as far as I have gotten. The idea here is: if it doesn't match any of the values in data_sources.environment, fall back to beta, retrieve the value of url, and assign it to data_source_url.
If it does match an environment in the object, retrieve that environment's url value and assign it to data_source_url.
Is the object variable type the most optimal here?
Does this require me creating a map of the data_sources.environment in order to do the comparison?
This is what I came up with
locals {
  data_source_urls        = var.data_source_urls
  data_source_keys        = [for m in local.data_source_urls : lookup(m, "environment")]
  data_source_values      = [for m in local.data_source_urls : lookup(m, "url")]
  data_source_as_map      = zipmap(local.data_source_keys, local.data_source_values)
  default_data_source_url = local.data_source_as_map["beta"]
  data_source_url         = lookup(local.data_source_as_map, var.environment, local.default_data_source_url)
}
2020-12-13
2020-12-14
Hi Cloudposse!
Could someone explain to me why we have a pinned GitHub provider version? This pinned version blocks using count
for the module and for modules using this module as a dependency (ecs-codepipeline, ecs-web-app, ecs-service-web-task, etc)
https://github.com/cloudposse/terraform-github-repository-webhooks/blob/master/versions.tf#L5
Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks
From memory it was because of a breaking change in the provider: https://github.com/terraform-providers/terraform-provider-github/issues/502#issuecomment-651627517
Terraform Version Terraform version: 0.12.28 Provider version: 2.9.0 Affected Resource(s) Provider configuration Terraform Configuration Files terraform { required_version = "~> 0.12.0"…
The GitHub provider introduced a breaking change with the minor bump to version 2.9.0, by removing a number of configuration options. This included the 'anonymous' flag, which is expected t…
So, if we remove this flag and change this variable in all modules using github-repository-webhooks we will be able to unpin?
perhaps.. maybe a workaround is to set anonymous if token is absent, which is what it mentions here. The best way is to create a PR and test it.
Ahead of our next major release, this PR modifies the provider schema in the following ways: token becomes optional, with its absence signalling anonymous mode organization is no longer deprecated…
You can thank GitHub for changing the API again; that is why there is this breaking change and we had to pin
Thanks Pepe - ya I think this was our temporary workaround until we could solve it a better way. Otherwise, we’re not about strict pinning like this.
Hi, I am trying to mask sensitive information from plan
and apply
output. I tried a couple of ways:
• using the sensitive keyword, but https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.35.0 pins required_version = ">= 0.12.0, < 0.14.0" and the sensitive keyword is only available from >= 0.14.0
• tfmask doesn't seem to work with resources or values which are lists (I am trying to mask a helm values variable). Any suggestions on how to go about this?
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
Sounds like you need to submit a pull request to allow the module to work with 0.14
Ya, I think that’s your best bet.
PRs welcome. Post it in #pr-reviews for expedited reviews
Hey Guys, a tiny PR with the required changes: https://github.com/cloudposse/terraform-aws-rds/pull/80
what: Upgrading dependency modules to versions that support Terraform 0.14. why: For using this module with Terraform 0.14
@Jay <https://github.com/cloudposse/terraform-aws-rds-cluster>
was updated to support TF 0.14. Please, give it a shot now
@Maxim Mironenko (Cloud Posse) thanks I will check.
2020-12-15
anyone seen this before?
120:128: syntax error: A identifier can’t go after this “"”. (-2740)
aws-vault exec cloud-native-dev -- terraform validate
Success! The configuration is valid.
Terraform plan/apply work just fine
Did you copy/paste the command from somewhere? I’m thinking hidden quotes but idk
Not sure if it’s aws-vault throwing the error or what
I have been trying to find the quotes
this started happening after terraform 0.13upgrade was run
@jose.amengual I think the error comes from osascript. Do you have anything weird in ~/.aws/config?
mmmm
let me see
I had some empty duplicated profiles…
I’m checking
same error
yeah, empty profiles should be fine
I do not remember seeing this before
Hello can anyone please help me find the correct syntax of nested “for” to get this data structure into a for_each block?
test = {
cms = {
random_key_name1 = "[email protected]"
},
site2 = {
unknown_key_name1 = "[email protected]"
random_name3 = "[email protected]"
}
}
The data I wish to use in the resource block is the first key, e.g. cms, then the second key, e.g. random_key_name1, then the value associated with random_key_name1, e.g. "[email protected]"
I've been able to do something similar before, but I've always known the names of the keys at the second level; this time the keys could be named anything. I know I need to do something like this but I just can't find the right configuration.
[for mykey in keys(var.test) : {
for k,v in var.test[mykey] : k => v }
]
Resource block
resource "aws_ssm_parameter" "mailFromAddress" {
for_each = { CANT GET THE CORRECT FOR LOOP }
name = format("/%s/%s", cms, random_key_name1)
type = "String"
value = each.value aka "[email protected]"
}
this means the email IDs need to be unique across all sites
(I’m guessing your terminology here.)
If that’s true, why not change the initial data structure to:
emails = {
random_key_name1 = {
site = "cms"
email = "[email protected]"
}
}
Then it’s easier:
resource "aws_ssm_parameter" "mailFromAddress" {
for_each = local.emails
name = "${each.value.site}/${each.key}"
value = each.value.email
}
Hi Alex, The CMS, site2 would be unique, as would each of the keys at the second level. So, yes you could call them email IDs.
1 minute please, just writing a more detailed reply.
Here’s a solution to your exact structure, if you can’t do that:
> local.site_emails
{
"blog" = {
"blog_contactus" = "[email protected]"
"blog_nodeply" = "[email protected]"
}
"cms" = {
"cms_noreply" = "[email protected]"
}
}
> merge([for site, emails in local.site_emails : { for email_id, email_addr in emails : email_id => email_addr } ]...)
{
"blog_contactus" = "[email protected]"
"blog_nodeply" = "[email protected]"
"cms_noreply" = "[email protected]"
}
I think that might work on this simplified structure I created to ask the question but in reality my real structure is much bigger.
variable "site_configs" {
type = map(object({
configuration_name = string
brand = string
primary_url = string
subdomains = list(string)
domain_name = string
environment = string
ses_config = object({
create_ses_user = bool
allowed_email_recipients = list(string)
mailFromAddress_config = map(string)
})
cloudfront_config = object({
create_cloudfront = bool
})
firewall_config = object({
waf_enabled = bool
dedicated_waf_acl = bool
})
iis_config = object({
iis_site_name = string
site_type = string
site_language = string
})
s3_config = object({
s3_create = bool
s3_bucketname = string
})
ec2_config = object({
ec2_description = string
})
ssl_config = object({
create_ssl_cert = bool
primary_url = string
subdomains = list(string)
ssl_description = string
ssl_root_domain_names = map(string)
ssl_perform_validation = bool
ssl_validation_method = string
ssl_wait_for_validation = bool
ssl_allow_overwrite_dns = bool
ssl_transparency_logging = bool
ssl_certificate_import = bool
ssl_certificate_public_pem = string
ssl_certificate_chain_pem = string
ssl_private_key_key = string
})
}))
}
I’ve used
mailfromAddresses = { for config_key, config in var.site_configs : config_key => {
for mykey, myvalue in config.ses_config.mailFromAddress_config : mykey => myvalue
} if config.ses_config.create_ses_user == true
}
to generate the simplified output I posted for this question, which is what I was going to use as the input to the for_each, subject to suggestions from here.
Changes to Outputs:
+ test = {
+ cms = {
+ mailFrom2 = "[email protected]"
+ mailFromAddress = "[email protected]"
},
+ mysite2 = {
+ site2email = "[email protected]"
+ mailFromAddress = "[email protected]"
}
}
The data from the mailFromAddress_config section of the site_config is later used within userdata to perform build transform within other configuration files.
right. that makes sense. Then you can change the key to be a concatenation of site ID and email ID with something like
> merge([for site, emails in local.site_emails : { for email_id, email_addr in emails : "${site}-${email_id}" => email_addr } ]...)
{
"blog-blog_contactus" = "[email protected]"
"blog-blog_nodeply" = "[email protected]"
"cms-cms_noreply" = "[email protected]"
}
Then there’s no restriction on requiring the keys to be globally unique
okay, I had something similar in terms of the merged site name and other key in my own testing, but when I then try to overlay this onto the for_each I'm struggling to pull the three pieces of information out.
so in the aws_ssm_parameter resource I need the name value to be
"/blog/blog_contactus"
and the value to be "[email protected]". Would you suggest the last structure you supplied, but then to get the key "blog" back you simply split ${each.key} on the "-"?
maybe it would be better to generate a final structure which is a list:
[
{ site = "cms", id = "noreply", addr = "[email protected]" },
...
]
Then no need to worry about uniqueness constraint at all
okay, I think I’m following. I’ll run some tests on my data structure based on the above and see how far I can get.
I would never have got my head around this on my own, as I've not used the ... expression yet and I wouldn't have thought about merging the list together. So, thank you!
I can confirm
mailfromAddresses = merge([for configs in keys(var.site_configs) :
{ for a, b in var.site_configs[configs].ses_config.mailFromAddress_config : "${configs}-${a}" => b } if var.site_configs[configs].ses_config.create_ses_user == true
]...)
outputs
+ test = {
+ cms-mailFrom2 = "[email protected]"
+ cms-mailFromAddress = "[email protected]"
}
Which is great! I think I would be able to split on the "-" to get back to the key "cms". However, you made one further suggestion and I'm not sure how to use it. Did you mean I should pass the output from local.mailfromAddresses to another local / for loop to remap it, or did you mean I should modify the original local.mailfromAddresses to match your new structure in some way? Apologies if I'm missing the obvious.
The ...
operator is VERY poorly documented. In fact I just looked and couldn’t find it documented at all! Can anyone find a docs reference?
I was thinking something like this:
> flatten([for site, emails in local.site_emails : [ for email_id, email_addr in emails : {site = site, id = email_id, addr = email_addr} ] ])
[
{
"addr" = "[email protected]"
"id" = "blog_contactus"
"site" = "blog"
},
{
"addr" = "[email protected]"
"id" = "blog_nodeply"
"site" = "blog"
},
{
"addr" = "[email protected]"
"id" = "cms_noreply"
"site" = "cms"
},
]
Ah, okay, and the for_each would accept that in as an input? I didn’t think for_each could handle a list but I might not be thinking this through properly, it is getting rather late in the UK.
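Right, for_each needs a map (or a set of strings), so that list still has to be keyed into a map; a rough sketch building on the flattened structure above (names follow the thread's examples):
locals {
  emails_list = flatten([
    for site, emails in local.site_emails : [
      for email_id, email_addr in emails : { site = site, id = email_id, addr = email_addr }
    ]
  ])
}
resource "aws_ssm_parameter" "mailFromAddress" {
  for_each = { for e in local.emails_list : "${e.site}/${e.id}" => e }
  name     = format("/%s/%s", each.value.site, each.value.id)
  type     = "String"
  value    = each.value.addr
}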
@Alex Jurkiewicz the ...
operator has near-zero documentation, and it actually means different things in different contexts (function calls vs for expressions)… here is the original hcl2 spec that describes them… https://github.com/hashicorp/hcl2/blob/master/hcl/hclsyntax/spec.md#functions-and-function-calls
Former temporary home for experimental new version of HCL - hashicorp/hcl2
you can also find a reference in the tf docs on for expressions: https://www.terraform.io/docs/configuration/expressions/for.html
Finally, if the result type is an object (using {
and }
delimiters) then the value result expression can be followed by the ...
symbol to group together results that have a common key:
{for s in var.list : substr(s, 0, 1) => s... if s != ""}
Okay, penny has dropped and I’ve manged to test based around your examples. Can’t thank you enough for taking the time to go through that and making the suggestions. Your last example simplifies it a lot. thank you
Thanks Loren!
i can’t actually find a reference to the function call version of ...
in the tf docs…
oh wait, there it is: https://www.terraform.io/docs/configuration/expressions/function-calls.html#expanding-function-arguments
If the arguments to pass to a function are available in a list or tuple value, that value can be expanded into separate arguments. Provide the list value as an argument and follow it with the ...
symbol:
min([55, 2453, 2]...)
The expansion symbol is three periods (...), not a Unicode ellipsis character (…). Expansion is a special syntax that is only available in function calls.
i recently asked about exactly this operator in the hangops slack, so i feel your pain
but the question and the answer have aged out of their slack
so many *ops slacks! Is that one any good? Can you share an invite?
I feel like hangops is one of the oldest, but cloudposse is one of the best. I’m only in that one cuz some of the hashi folks occasionally are active
Anybody know of a written aws lambda that can instigate a RDS MSSQL restore from S3 backup? Looking for something to help bootstrap an MSSQL database in RDS with a baseline db.
or maybe a way of creating a gold ami but for RDS MSSQL? or a way of running stored procedure directly from terraform? ideally all controlled by terraform.
What I’ve done before is create the db one time, then snapshot it, then use the snapshot as the starting point for subsequent rds instances…
Remember to join us tomorrow (12/16) at 11:25am PST to learn about TACOS - Terraform Automation and Collaboration Software
We have speakers from:
• HashiCorp Terraform Cloud
• Env0
• Scalr
• Spacelift https://cloudposse.com/office-hours
2020-12-16
is there a way to create a lambda that just gets fired once when it gets created?
i basically want to create a lambda during the bootstrapping of an AWS account and then never needs to fire again
A Terraform Module. Contribute to plus3it/terraform-aws-org-new-account-trust-policy development by creating an account on GitHub.
in particular, the cloudwatch event pattern: https://github.com/plus3it/terraform-aws-org-new-account-trust-policy/blob/d79ef768c34a052dab10cf5e0e3f5353fd10fcf7/main.tf#L63-L79
the issue is the account will already exist
bootstrapping an account implies it is a new account or an invited account
it's going to be when we bootstrap the account with other stuff; I basically want to write a lambda that optionally adds the account to fugue.co
we have two TF roots
• tf-organisation where the accounts are listed
• tf-accounts where we add baseline stuff to the account
i want to create the lambda optionally from tf-accounts when we add the baseline config
create an event rule for the new account, use a schedule, set the schedule to run once
makes sense
Does it have to be complicated? A lambda can be triggered from terraform on demand…
Guess it would need some form of persistence after it did trigger for that particular account so it only happens the first run.
It could also be handled fully in terraform with triggers
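For example, the aws_lambda_invocation data source can call the function at apply time; a minimal sketch (the function reference and payload are hypothetical, and since the data source re-runs on every refresh the function should be idempotent):
data "aws_lambda_invocation" "register_account" {
  function_name = aws_lambda_function.register_account.function_name # hypothetical function
  input = jsonencode({
    account_id = var.account_id # hypothetical variable
  })
}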
I have a strange issue that I hope someone can point me in the right direction on. I inadvertently updated to tf 0.14.2 (new laptop, homebrew, and being dumb). Needless to say, all my tf states are in a bad, uhhh, state, which AFAIK cannot be reverted back to 0.13.x. I am using some of the cloudposse tf modules and a lot of them understandably are not ready for 0.14.x. So what does one do in a situation such as this? Naturally, fork every cloudposse terraform repo and hack up the code until every last tf error is gone. Mission accomplished! but…. my tf plan now says that it wants to replace most of my resources because the name changed (from cloudposse/label/null). It seems that the attribute ordering is different now. i.e.
~ name = "dt-prod-api-ecs-exec" -> "dt-prod-api-exec-ecs" # forces replacement
So I have 2 questions
- Does anyone know a way to revert a state back to 0.13.x?
- Is this attribute re-ordering situation something that anyone has encountered?
i don't recall having a reorder issue, but i also have not upgraded to 0.14
if you're using remote state with object versioning then try reverting the remote object to the previous version. you can also try starting with a fresh state, then terraform plan (do not apply) and terraform import the resource names returned by the plan. i don't think terraform refresh can save you in this case.
Howdy y'all. I know there are some checkov users here. One of the recent updates added the ability to run terraform plan analysis. So now it supports both static and dynamic analysis of terraform. More about it: https://www.checkov.io/2.Concepts/Evaluate%20Terraform%20Plan.html https://bridgecrew.io/blog/terraform-plan-security-scanning-checkov/
Learn how to leverage Checkov and Bridgecrew to scan both raw Terraform files and Terraform plan output for security and compliance errors.
Thanks for sharing
You can't populate lifecycle { ignore_changes }
with dynamic data. I have a module I'd like to consume in multiple places, but with different ignore_changes configuration for an internal resource, depending on the consumer.
The only way I can think of to do this is duplicating the resource with different lifecycle configuration and having a condition define which copy of the resource is actually created.
But this is really ugly. Anyone have a better idea?
That's the only way around it currently, as I understand it. I've had a similar situation with some upstream alb modules and the min/max/desired …
I'm learning to accept it and just live with the fact that if it's in a module at least it's "hidden", so why do I care once the module does what it's supposed to.
i’m pretty excited about this experiment… complex objects with optional attributes and default values! https://www.terraform.io/docs/configuration/functions/defaults.html
The defaults function can fill in default values in place of null values.
This will be great.
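A minimal sketch of the experiment (the variable and attribute names are made up; it requires opting in to module_variable_optional_attrs on Terraform 0.14):
terraform {
  experiments = [module_variable_optional_attrs]
}
variable "settings" {
  type = object({
    name = string
    size = optional(number)
  })
}
locals {
  # fills any attributes left null with the given defaults
  settings = defaults(var.settings, {
    size = 20
  })
}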
@Jeremy G (Cloud Posse)
2020-12-17
Hi guys, has anyone managed to schedule a daily cleanup job in terraform cloud? I have a workspace on which I would like to execute terraform destroy
as a daily job
does anyone have a recommended example to add some xml to an existing xml via a bash script?
this is probably not the best channel for this question as it's not terraform related, but have you checked out XMLStarlet?
xsltproc could be useful too, if you have an XSLT
i just want to add
<init-param>
<param-name>DisableTaskScheduler</param-name>
<param-value>FALSE</param-value>
</init-param>
Why bash?! Do it in a language which can parse XML and you will be a happier man
I guess bash is suitable if you want to append or prepend it. But as soon as you get to the “grep for a certain string and add my snippet after that” you are writing a future bomb IMNSHO
Anyone using the Kubernetes provider with EKS + Terraform Cloud? Any direct path to success for configuring the provider?
Moving a client onto Terraform Cloud and I believe my path forward involves including the K8s client certificates as the authentication mechanism + aws-auth role for my AWS Terraform CI creds. But if anyone has a “you only need to do X, Y, and Z” approach that’d be awesome.
Using an IAM role you can configure the kubernetes provider with something like:
data "aws_eks_cluster" "cluster" {
name = var.cluster_name
}
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_name
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
}
Ah good stuff @Tim Birkett — Thanks.
And regarding the IAM role — you're referring to the user / AWS creds provided to TFC having a role within the cluster, correct?
So you’d have an IAM role that your CI (TFC?) would use, then you’d need a role mapping in the aws-auth
config map as described here: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html#awsdocs-filter-selector:~:text=Add%20your%20IAM%20users%2C%20roles%2C%20or%20AWS%20accounts%20to%20the%20configMap
The aws-auth ConfigMap is applied as part of the guide which provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a sample Kubernetes application. It is initially created to allow your nodes to join your cluster, but you also use this ConfigMap to add RBAC access to IAM users and roles. If you have not launched nodes and applied the
v0.14.3 0.14.3 (December 17, 2020) ENHANCEMENTS:
terraform output: Now supports a new “raw” mode, activated by the -raw option, for printing out the raw string representation of a particular output value. (#27212) Only primitive-typed values have a string representation, so this formatting mode is not compatible with complex types. The…
So far the output command has had a default output format intended for human consumption and a JSON output format intended for machine consumption. However, until Terraform v0.14 the default output…
Hey all! I’m using the terraform-aws-rds-cluster module and am trying to setup a secondary(replica) of my primary cluster, but I’m running into an issue with the secondary cluster. 🧵
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
I’m getting the error
error creating RDS cluster: InvalidParameterCombination: Cannot specify user name for cross region replication cluster
I’m wondering if this is because I haven’t set the field global_cluster_identifier.
Anyone had this issue before?
are you trying to setup a global cluster ?
or a replica cluster
anything global_
is for global databases, which are different from replica clusters that use regular DB replication
I want to setup a regular db replica cluster
so you want to set up the replication_source_identifier
no user or password
you can leave them as = ""
ahhh I didn’t leave them as empty strings
maybe that’s what I’m missing
they are required but for the replica they need to be empty
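Something along these lines (a rough sketch only; check the module's variables.tf for the exact argument names in the version you pin):
module "replica" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.35.0"
  # point at the primary cluster and leave the credentials empty
  replication_source_identifier = "arn:aws:rds:us-east-1:111111111111:cluster:primary" # hypothetical ARN
  admin_user                    = ""
  admin_password                = ""
  # ... remaining cluster settings (engine, cluster_size, vpc, etc.)
}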
got it!
trying it out now
@jose.amengual is it possible to make this replica in the same region?
no
that is an AWS restriction
cool that’s what I thought.
Sorry, I think I was confused when I said i wanted a replica earlier. I want to establish a read replica. Does this template have the ability to do that?
yes
ohhhh read replica
I think I need this one https://github.com/cloudposse/terraform-aws-rds-replica
Terraform module that provisions an RDS replica. Contribute to cloudposse/terraform-aws-rds-replica development by creating an account on GitHub.
just change the size to 2
the size of the primary?
a read replica alongside the writer in the same region, vpc ?
yes, the use case is to have a primary DB that's core to the application and another DB that is fed all the data from the primary (like a read replica would do), and then I want to hook up the read replica to a BI reporting tool like Tableau
but you want a replica not a cluster replica
?
I believe I only need a replica. Just somethign that will always stay up to date with my primary DB
We use aurora postgres though and it doesn’t look like that is supported
Oh nevermind this cloudposse doc is old. Aurora does support it
yes, we use postgres replicas
there is more tendency to use cluster replicas than single instances
is there any reason why it has to be an instance?
I do not think there is much price difference
I think I'm going to just bump the number of instances in the aurora cluster up by 1
if you need to read from it, that is the fastest way to do it
Okay right on
you can add more than one read replica
Thanks this worked!
cool
@jose.amengual I have a quick question about the behavior of the reader node on aurora postgres. After following what you said yesterday I have a reader node that's now a part of my aurora cluster. It gives me a specific endpoint to connect to it, but doesn't have a master password or master username. Do you know how I can connect to specifically that node, or is that even possible?
the reader node uses the same creds as the writer
usually I create a user with select access grants
interesting when I use the same master credentials on the reader endpoint it tells me “that role doesn’t exist”
weird
unless it is still replicating….
hmm, I thought the connection string would be the "reader_endpoint" that's outlined in the terraform docs, but when going to the RDS console it looks different
they should be the same
one has ro in the name
the other one does not
that’s what I thought.
but in the console the endpoint looks a bit different.
so in the console if you click on the main cluster resource in rds it shows two endpoints: reader and writer, but if you click on the actual reader node the endpoint looks different.
it appends a -2 to the end of the string like “<RDS NAME>-1.<RDS ID>.<REGION>.rds.amazonaws.com
yes that is normal
one is the Cluster enpoint
the other is the Instance endpoint
each instance endpoint has its own
the cluster endpoint has two
r/o and r/w endpoints
it is always better to use the cluster endpoint
if you need to read you use the reader endpoint
okay gotcha
Heads up if you are attempting to apply a terraform repo using helm https://github.com/hashicorp/terraform-provider-helm/issues/645
As strange as it sounds, across 8 clusters with 0 local or remote state changes and with no module changes (confirmed with three engineers), any attempt to create a plan that should result in nothi…
anyone happen to know any magic for generating a random string that can be used in a for_each expression in the same state, without using -target
? i was trying to be cute with try()
but no love…
locals {
id = substr(uuid(),0,8)
}
resource null_resource id {
triggers = {
id = local.id
}
lifecycle {
ignore_changes = [
triggers,
]
}
}
resource null_resource this {
for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))
}
$ terraform apply
Error: Invalid for_each argument
on main.tf line 18, in resource "null_resource" "this":
18: for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
Can you create a random_string resource with for_each and then reference the results?
no, the random_string must be applied for its result to be known
only variables, data sources, and builtin functions are known before apply
hence the cuteness here with uuid()
and a null_resource
that also ignores its own triggers, to create a random value that does not change from one apply to the next
Hm maybe I misunderstood the use case?
You want to create a for_each resource block where the number of resources depends on the output of another resource?
well yes, but i know that doesn't work. but the use case is very targeted here. i need a random value that doesn't change from apply to apply, and to use that value as part of the for_each expression
i have some modules that use for_each with a list of objects, each with a “name” attribute. pretty standard. but in my tests, i want to give each object a generated id
in the code above, i generate an id using uuid()
, then i store it in null_resource.id
and ignore changes to the trigger to make it static. this works fine.
then i try to use a for_each expression to resolve the value from either null_resource.id
or local.id
. the first will not work on first apply but will subsequently. the second will work on first apply but will change every subsequent apply
for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))
but it seems terraform doesn’t even try to resolve the expression. it just sees the reference to null_resource.id
and gives up
So the use case is something like a module with an input variable number_of_rds_instances
and you want to create that many aws_rds_instance resources, with a random but fixed name for each
Why the need to use the random string in the for each? Pass the random_id to the name variable to all those modules? I don’t fully understand but I’m curious
I use random_string, random_id, random_password for quite a few things. It seems very static to me…
indeed, we got a little sidetracked from the question. the module is what the module is. it uses for_each on the name attribute of this list of objects. if the module is invoked in a way where the name attribute is set using the output of a random_*
resource, then you get the error “cannot be determined until apply”.
i am making zero claims disputing that random_*
resources generate static values, just that they do not work as inputs for the for_each key…
sorry. I don’t understand why this format won’t work:
variable "number_of_ec2_instances" {
  type = number
}
resource "random_id" "default" {
  count       = var.number_of_ec2_instances
  byte_length = 4 # byte_length is required by random_id
}
resource "aws_instance" "default" {
  count = var.number_of_ec2_instances
  # (ami, instance_type, etc. omitted)
  tags = {
    Name = random_id.default[count.index].hex
  }
}
i make no claims about that construction. that is using a static var with count. i am generating the value with terraform and using for_each
if i expose a variable, and generate the random value outside terraform, that certainly works. that’s my backup plan. i was just trying to avoid having the variable, and keeping it all within the tf config
The problem was uuid()
and my assumption that functions were resolved in the plan phase (apparently that’s not always true). Switched to a data source that has a random output and got something working. Will post code in the morning
here’s what i came up with… null_data_source
outputs a random
value, and it is resolved in the plan phase, so this works:
locals {
random_id = substr(md5(data.null_data_source.id.random),0,8)
id = try(null_resource.id.triggers.id, local.random_id)
}
data null_data_source id {}
resource null_resource id {
triggers = {
id = local.random_id
}
lifecycle {
ignore_changes = [
triggers,
]
}
}
resource null_resource this {
for_each = toset([local.id])
}
Is there a way to create a subset map from another map based on conditions? The map has elements with different data types, but I got around that by using a type of any.
For example, with the map below, is there a way to remove the check-name key completely by checking if it equals []
?
I’ve tried various things with for loops, e.g.
locals {
cleaned_pattern = {
for label, value in var.cloudwatch_event_rule_pattern :
label => value if coalesce(value) != null
}
}
Output is unchanged.
+ event_pattern = jsonencode(
{
+ check-name = []
+ detail = {
+ status = [
+ "ERROR",
+ "WARN",
]
}
+ detail-type = [
+ "Trusted Advisor Check Item Refresh Notification",
]
+ source = [
+ "aws.trustedadvisor",
]
}
)
ok very simple solution
cleaned_pattern = {
for label, value in var.cloudwatch_event_rule_pattern :
label => value if length(value) > 0
}
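The cleaned map can then be encoded straight into the rule, e.g. (a sketch; the rule name is made up):
resource "aws_cloudwatch_event_rule" "trusted_advisor" {
  name          = "trusted-advisor-notifications" # hypothetical name
  event_pattern = jsonencode(local.cleaned_pattern)
}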
2020-12-18
Hello, I have recreated my private cluster, and it tries to start creating namespaces before the cluster is created, despite the depends_on on azurerm_kubernetes_cluster. Do you have any input on how to do it one by one?
resource "kubernetes_namespace" "terra_test_namespace" {
...
depends_on = [azurerm_kubernetes_cluster.kube_infra, var.vnet_subnet_id]
}
I found the same error on aws eks with some tricks to fix it. I have tried them but they don't work for now (terraform plan fails, telling me the cluster is not available). Can you give me some guidelines on how to solve this?
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/943
I have issues I'm submitting a… bug report feature request support request - read the FAQ first! kudos, thank you, warm fuzzy What is the current behavior? I was unable to figure out how to d…
so far I have solved two issues:
• when recreating the cluster (due to changing server size), the terraform "kubernetes_namespace" resources were not destroyed from the tfstate, so I removed them manually
• I am using RBAC to filter access, and the above null_resource wait_for_cluster calls wget --no-check-certificate -O - -q API_ENDPOINT
and fails to authenticate
• so for now I just check that the cluster id exists before creating the namespace
Hi, Anybody know if it's possible, and probably more importantly advisable, to have the output of a lambda function as the input of a data source? Use case: I need to generate machine keys for IIS but the only way I've found to do this is via PowerShell. I don't believe I can use a local PowerShell provider as not all the members of my team run on Windows machines, and it would therefore create a dependency on installing PowerShell Core etc. The same could be said for the Jenkins pipelines. So I was thinking a lambda could generate the keys and a data source could read them in.
Side note: I know I could generate and inject the machine keys as part of the build process, but for historical reasons we've extracted security items from the build process and re-inject them at build time.
Are you planning to run it using aws_lambda_invocation
?
Hello Joe,
Honest answer is, I've not fully thought this through. I was starting by thinking: could I read the output from a Lambda in as a source? I hadn't given a thought as to how that Lambda would be executed, but looking at
aws_lambda_invocation
it looks the most logical, but I'm really open to any suggestions
The other option to achieve the same thing would be to have a data source that could make a web request and use the response as the input but I’m not aware of a way to do that either.
Hey Gareth, For the second one you can use https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http
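Something like this, assuming the lambda sits behind an HTTP endpoint (the URL and response shape are hypothetical):
data "http" "machine_keys" {
  url = "https://example.execute-api.eu-west-1.amazonaws.com/prod/machine-keys" # hypothetical endpoint
  request_headers = {
    Accept = "application/json"
  }
}
locals {
  machine_keys = jsondecode(data.http.machine_keys.body)
}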
Many thanks Joe, I’ve not come across that provider. Not really sure how I’ve overlooked it before but given what I’m trying to achieve, this might be the simplest way around my scenario. Thanks for taking the time to reply.
np, that provider is actually pretty simple but powerful!
so your Lambda will call WinRM on each server?
What I'll probably do is create an endpoint that points to the lambda and then either return the data for terraform to write to the SSM parameter store for later retrieval by the user data when a machine boots, or supply something along with the web request and get the lambda to write to the SSM parameter store itself.
Currently, I tend to pull most security items from the SSM parameter store on boot via userdata, which helps keep them out of the session state and away from the admins etc. Each instance of an ASG knows which products it's responsible for and then just reads in the required values.
There are probably a lot of reasons not to do it this way that I've not thought about, e.g. use a Chef-style agent etc., but it works for us.
That said, if somebody thinks I'm opening myself up for problems, feel free to shout. Always willing to listen to reason.
yeah calling them to generate and store then return the ssm key seems secure enough
Thanks again for the help
you’re welcome
Is it possible to output state from resources created by child modules but not declared as outputs in the child module?
For example, I would like to output the DB username when using the cloudposse/rds/aws
.
The values are available in the state as demonstrated by terraform state show 'module.rds_instance.aws_db_instance.default[0]'
However, an output rule like the following does not work…
output "aws_db_instance" {
value = module.rds_instance.aws_db_instance
}
results in an error
An output value with the name "aws_db_instance" has not been declared in
module.rds_instance.
Hi, I’m trying to use https://github.com/cloudposse/terraform-aws-dynamodb v0.23.0 with terraform 0.13 but I’m getting this output from terraform plan:
Error: Invalid count argument
on .terraform/modules/dynamodb_table.dynamodb_autoscaler/main.tf line 92, in resource "aws_appautoscaling_target" "read_target_index":
92: count = var.enabled ? length(var.dynamodb_indexes) : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
which I kinda understand but having to run plan -target
seems a bit hacky…
seems to have been reported here https://github.com/cloudposse/terraform-aws-dynamodb/issues/70 too
Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
Found a bug? Maybe our Slack Community can help. Describe the Bug Error: Invalid count argument for read_target_index & write_target_index with running plan or apply. on .terraform/modules/dyna…
that null_resource approach doesn’t seem necessary at all, anymore. could just use a for expression
Hi all - really enjoyed the recent cast on TACOS and I'm really interested in not having to manage my own Terraform or create the governance that I want around our infra on my own. Basically (and I understand that this is a really broad question, that I expect to differ between Terraform Cloud, Env0, Scalr, Spacelift) I would like to ask how you transition your self-hosted Terraform solution to one of these SaaS providers without downtime and, maybe more importantly, how your previous small-team customers have driven buy-in from their wider org that this stuff is really important (please don't sell me on it, I know it's critical)
I haven’t listened to it yet, but I think there was a lot of talk on this topic in the last #office-hours
Nevermind, sounds you listened to it based on your follow-up comment
@Rhys Davies For Scalr:
- Transition (state) - First you'll want to migrate existing state, which is straightforward and can be automated: https://docs.scalr.com/en/latest/migration.html
- Transition (workflow) - We would need to understand your current workflow, but whether it is CLI based or vcs based, either are straightforward. If CLI, just add a code snippet to your TF config files and it will start using Scalr to execute the runs. If VCS, just add the repository to Scalr and kick off a run or have it automatically execute on the next commit.
Neither of the above require downtime. You won’t need to reprovision infrastructure/services.
In terms of buy-in, this depends on the team you need it from, the area of most importance, or where the biggest pain point is.
- A few of our smaller customers have gone through SOC2 or other similar compliance reviews lately and Scalr accelerated that for them through audits, policy, etc. Leadership bought in quickly as soon as they saw they could accelerate it.
- Others have had major issues around an unorganized module process, which caused outages. The template, module registry, and OPA greatly improved their process and standards.
- The idea of a more efficient PR process or general workflow has been another big one. Many users did not want to babysit the existing DIY workflow. Really not much buy-in needed as the benefit is fairly obvious.
- Our larger customers get buy-in from the wider orgs through the idea of autonomy and self-service. Many of them call it an app “vending machine”. Teams sign up to use Scalr and they automatically get an environment or workspace created for them and then they are off and running on their own.
Hi @Rhys Davies - for env0:
- For migration you need to create a new template with your Terraform code at env0, connect your cloud account credentials, and add all the relevant variables, and create an environment with the same workspace name. You can read more here
- For us the benefits we see with small teams are the following:
• Gitops for continuous deployment and plan on PRs.
• Custom flows, that allows you to run everything in you Terraform pipeline.
• Self service environment management with TTL policies and Scheduling for cost reduction.
• Creating an environment for each PR; a lot of customers also use it with Kubernetes.
• Terragrunt support.
• Actual cost over time with correlation to deployments.
• RBAC and plan before apply, which creates a workflow that is similar to a PR for infrastructure changes. Bigger teams buy in to our SAML, Policies, Self service environment management capabilities, OPA, Teams management, environment limits and budget limits. You can read more about our use case here:
- IaC automation
- Teams and governance
- Managed self service Hope it helps, and let us know if you have any questions.
sorry for the late reply guys, I wrote something out and never hit enter. thank you all so much for the explanations and help, really enjoyed the podcast
Even if I don’t get a reply here, fascinating stream, really enjoyed and will be tuning in for the next one
2020-12-19
Hey morning. everyone! :wave:
Having a short question.
I define my local module in the modules
directory. Can modules in that directory reference another module from GitHub, for instance?
Hi, yes that’s possible.
Alright. Cool. Do you have any examples of where this is implemented? Like some GitHub repo? I am getting a warning saying that "the module cannot be found in the directory".
A Terraform Module for how to run Vault on AWS using Terraform and Packer - hashicorp/terraform-aws-vault
Thanks!
This should show it.
2020-12-21
Can anyone help with this issue please …
terraform {
backend "s3" {
...
}
required_version = "= 0.13.4"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.9.0"
}
fugue = {
source = "fugue/fugue"
version = "0.0.1"
}
}
}
provider "fugue" {
client_id = var.fugue_client_id
client_secret = var.fugue_client_secret
}
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/aws v3.9.0
- Finding fugue/fugue versions matching "0.0.1"...
- Finding latest version of hashicorp/fugue...
- Installing fugue/fugue v0.0.1...
- Installed fugue/fugue v0.0.1 (self-signed, key ID B14956EDEF9DD1A2)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
<https://www.terraform.io/docs/plugins/signing.html>
Error: Failed to install provider
Error while installing hashicorp/fugue: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/fugue
Why is it trying to find hashicorp/fugue
?
do you have existing tfstate? are you upgrading from tf 0.12? do you have modules using the fugue provider with incorrect provider/terraform blocks?
this is newly added
the module being called has the following …
provider "fugue" {
alias = "terraform-runner"
}
terraform providers
output?
Providers required by configuration:
.
├── provider[registry.terraform.io/fugue/fugue] 0.0.1
├── provider[registry.terraform.io/hashicorp/aws] ~> 3.9.0
└── module.aws_account
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[registry.terraform.io/hashicorp/fugue]
├── module.cloudtrail_bucket
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.default_account_roles
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.default_vpc_flowlog_key
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.terraform_runner
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.default_vpc_flowlogs
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.cloudtrail_key
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.default_vpc
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.s3_access_logs
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.default_vpc_flowlog_bucket
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.iam_password_policy
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.securityhub
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.awsconfig
│ ├── provider[registry.terraform.io/hashicorp/aws]
│ └── module.config_bucket
│ └── provider[registry.terraform.io/hashicorp/aws]
└── module.cloudtrail
└── provider[registry.terraform.io/hashicorp/aws]
Providers required by state:
provider[registry.terraform.io/hashicorp/aws]
module.aws_account
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[registry.terraform.io/hashicorp/fugue]
how can i delete that?
delete it?
if that module is not using any fugue resources, then you can delete it
is this not correct though in that module …
provider "aws" {
alias = "terraform-runner"
}
provider "fugue" {
alias = "terraform-runner"
}
as that module will optionally create resources via the fugue provider
how are you passing the provider from your root to aws_account?
let me create a gist
ok, so you have a single aws provider, and a single fugue provider
providers = {
aws = aws.terraform-runner
fugue = fugue.terraform-runner
}
yes
then the module itself is pretty much all aws apart from two fugue resources
then in your aws_account module, you do not need the provider block at all, you can remove this:
provider "aws" {
alias = "terraform-runner"
}
provider "fugue" {
alias = "terraform-runner"
}
ok let me try that
in this declaration, aws
and fugue
are the default unaliased providers…
providers = {
aws = aws.terraform-runner
fugue = fugue.terraform-runner
}
i am still getting the same error
➜ data-engineering-qa git:(configure-fugue-environment) ✗ terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] ~> 3.9.0
├── provider[registry.terraform.io/fugue/fugue] 0.0.1
└── module.aws_account
├── provider[registry.terraform.io/hashicorp/fugue]
├── provider[registry.terraform.io/hashicorp/aws]
i am still seeing this
is terraform providers
coming from state?
@alisdair Thanks for the information. However i did what you mentioned and it still does not work - ~/.terraform.d/plugins/kyma-project.io/kyma-incubator/terraform-provider-gardener/0.0.9/linux_amd…
annoying…
each module must declare its own set of provider requirements
so i am going to have to set this inside the module itself
so you need to add this to the module:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
fugue = {
source = "fugue/fugue"
}
}
}
that’s annoying
not the versions, just the source location
is there a convention for what to call that file within the module itself?
i normally call it config.tf
well since it is often used to manage provider versions, the tf upgrade utilities generally add it as versions.tf
perfect thanks for that
Hi! I have a bunch of variables (about 200) in AWS Parameter Store, all of them under the same path. Is there any way to create a kind of loop and get all of them in Terraform instead of going one by one?
i think you have to provide all the names, not just the path, but you can use for_each
on the data source to loop over them all
Do you have 200 vars stored in /path/to/200-vars/
or is it /path/to/var-1/
/path/to/var-2/
?
I think SSM doesn’t have the option to do this /path/to/200-vars/ you can have /path/var1 /path/var2 no other way.
locals {
ssm_path = "/hopin/dev/hopin/env"
envs_list = [ "var1", "var2"]
}
// Get all values from var1, var,2 ....
data "aws_ssm_parameter" "env_vars" {
for_each = local.envs_list
name = "${local.ssm_path}/${each.key}"
}
You could store a JSON object representing 200 KVs in one path, but I wouldn’t recommend it :laughing: That Terraform above would be my approach. Do you not have to toset
a list with a for_each
anymore?
yes, you are right, I think I need toset
ah ok, didn’t know if it was a 0.14 thing or not
aws ssm does have a “GetParameters” api that will return all parameters from a list in a single call, and also the “GetParametersByPath” api that works specifically for paths, but terraform does not currently offer a data source based on those APIs
• https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameters.html
• https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameters-by-path.html
Ohhhh… :disappointed: any idea for doing an API call to AWS to get the parameters? something with provisioner "local-exec"
?
people have done some crazy things to get local-exec to return values they can use, but it is pretty crazy and a little hard to recommend
Run (exec) a command in shell and capture the output (stdout, stderr) and status code (exit status) - matti/terraform-shell-resource
here’s a version that uses the external provider, but requires ruby… https://github.com/matti/terraform-shell-outputs
Contribute to matti/terraform-shell-outputs development by creating an account on GitHub.
or here’s a shell provider, probably the best option. haven’t tried this one… https://github.com/scottwinkler/terraform-provider-shell
Terraform provider for executing shell commands and saving output to state file - scottwinkler/terraform-provider-shell
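For reference, a rough sketch of what the external-data-source route could look like, assuming the aws CLI and jq are available on the machine running Terraform (the path is the one from the snippet above; the jq filter and names are illustrative, not tested):
locals {
  # Hypothetical: fetch every parameter under a path in one call and pass the
  # result back to Terraform as a single JSON string.
  get_params_script = <<-EOT
    aws ssm get-parameters-by-path --path "/hopin/dev/hopin/env" --with-decryption \
      | jq '{params: ([.Parameters[] | {(.Name): .Value}] | add | tostring)}'
  EOT

  # The external data source only returns string values, so the parameter map
  # comes back as a JSON string and is decoded here.
  ssm_env_vars = jsondecode(data.external.env_vars.result.params)
}

data "external" "env_vars" {
  program = ["bash", "-c", local.get_params_script]
}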
I am using aws ssm get-parameters-by-path
to get parameters under the same path, like @loren suggested. I am calling this in a CircleCI pipeline. I use jq
to process and set those as environment variables for later use.
2020-12-22
I use terraform for a scheduled Lambda function with CloudWatch. It worked perfectly 5 months ago, but now I came back to the code and when I run terraform plan with no changes I get this:
Error: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListTargetsByRuleInput.EventBusName.
What does it even mean? I don’t even use an event bus, I use resource "aws_cloudwatch_event_target" "start_alarms" {
Turns out it’s an issue with the Terraform AWS provider version. After updating to the latest it works. How do you deal with versioning of the aws provider? Do you have it pinned like version = "3.14.1"
?
This what we do. As you can see, if there is a version that has a bug rendering it incompatible with our TF we just drop in a quick !=
provider "aws" {
version = ">= 2.70.0, != 3.17.0"
}
in tf 0.12.29 or later, use the terraform block with required_providers, as version in the provider block is being deprecated…
terraform {
required_providers {
aws = {
source = "registry.terraform.io/hashicorp/aws"
version = ">= 2.70.0, != 3.17.0"
}
}
}
Thank you both! I’m just curious why do you both have != 3.17.0
in your providers?
oh, i just copied tom’s example :slightly_smiling_face: i’m currently pinning like this: "~> 3.18.0"
Ah, thanks loren for that insight! That syntax was because of a bug in that provider with GovCloud
Hi! I built a module which uses the restapi provider to create Kibana spaces/roles and users. The module needs two configurations for the restapi provider, and I am going to use provider aliases to pass the configuration from the root module.
provider "restapi" {
alias = "kibana"
uri = "<https://X.X.X.X:5601>"
username = "user"
password = "pass"
insecure = true
headers = {
"kbn-xsrf" = "true"
}
write_returns_object = true
}
provider "restapi" {
alias = "elastic"
uri = "<https://X.X.X.X:9200>"
username = "user"
password = "pass"
insecure = true
write_returns_object = true
}
The module will create the space/roles and user in one instance of Kibana, and I need to configure 3 instances of Kibana, which means I will need to define 6 provider configurations in the root module. What would be the best practice for this situation?
You can’t loop over providers in any way with Terraform. So the users of your module will need to define all 6 providers and call your module 3 times with differing configuration
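A rough sketch of what that ends up looking like in the root module (instance names and URIs are made up; the child module would declare restapi provider blocks with kibana and elastic aliases so they can be passed in):
provider "restapi" {
  alias = "kibana_a"
  uri   = "https://kibana-a.example.com:5601"
  # username, password, headers, etc.
}

provider "restapi" {
  alias = "elastic_a"
  uri   = "https://elastic-a.example.com:9200"
  # ...
}

# ...four more aliased provider blocks for instances B and C...

module "kibana_a" {
  source = "./modules/kibana-space" # hypothetical path

  providers = {
    restapi.kibana  = restapi.kibana_a
    restapi.elastic = restapi.elastic_a
  }
}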
Thanks @Alex Jurkiewicz
Hi Team,
you can use threads in this Slack
Cloudposse modules accept pull requests to add 0.14 support
@Alex Jurkiewicz I’m unable to push change to repository.
How do I go about pushing the PR?
Fork the repo, which will create a copy you own
Then you can push to your copy and create a pull request from your repo’s branch to the master
gotcha.
Initializing modules...
Error: Unsupported Terraform Core version
on .terraform/modules/ec2-bastion-server.dns.this/versions.tf line 2, in terraform:
2: required_version = ">= 0.12.0, < 0.14.0"
Module module.ec2-bastion-server.module.dns.module.this (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>)
does not support Terraform version 0.14.2. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
What’s the version of terraform you are using? It should be less than 0.14.0 and at least 0.12.0
I guess you might be running terraform 0.14
I’m running 0.14.2
I figured out the cause of the error.
I was trying to find a fix.
Surely downgrading my terraform is not a valid solution.
I guess you can fork this module, bump up the terraform version on it, and see if it works as expected with 0.14.2. If it’s working okay, you can create a PR upstream and someone should approve and merge it!
Done.
yay !
I get the above error message while using this module
Terraform Module to define a generic Bastion host with parameterized user_data - cloudposse/terraform-aws-ec2-bastion-server
2020-12-23
Hello all
Have some questions on a Terraform and best practices
Hey, wanted to see if people were here first
TL;DR - Don’t feel the need to “be polite” by asking if someone can help you and waiting in some sort of virtual queue for help. Just ask the question.
When it comes to things like Slack, it’s best to compose a nicely formatted list of questions or message for people to digest at their leisure in whatever timezone they are in - asking permission or vague things like: “Hey, anyone about to answer a question?” will likely get no interaction.
It may seem polite, but the polite thing is to give the reader context and enough info to make these decisions:
- Can I help this person with the knowledge and experience that I might have?
- Am I interested in helping this person? I have to give coaching all the time to people I work with who message things like: “Have you got a minute?” - hmm what for? Or even worse: “Hey, I’m getting an error!” - er, what are you doing to get it? what is it? what are you expecting?
“Have you got a minute?” - hmm what for? Or even worse: “Hey, I’m getting an error!”
Ugh, I feel this. My pet peeve is people who slack/skype me “Good Morning!“. They have a question for me but won’t ask it until I respond with some trite greeting.
I usually just give them the wave emoji:
Basically new to Terraform. I want to send people to different sites depending on whether they are on the test network behind a VPN; if the resources aren’t there, check the public site. Can I use failover routing to solve this, or is it better to use a Lambda?
Okay, a bit more information would be helpful:
• What is the site hosted on? AWS?
• How do requests get to the site? CDN? ALB? ELB? Kubernetes Ingress?
• Do you have split DNS when connected to the VPN? Is the VPN in some office, datacenter, cloud? This isn’t so much a Terraform question as it is a general network architecture question.
It’s hosted on AWS, deployed using Terraform, and uses a CloudFront CDN to deliver images and PDFs. The issue is they aren’t mirrored, so for testing, when the testing CDN is hit and doesn’t have the file, it should check the production CDN
Interesting… Is Cloudfront in front of S3 or something?
The assets live in S3
There’s probably a few things that you can try… The first thing that came to my mind was Cloudfront origin failover (See: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html)
You could set up your test Cloudfront distribution with a primary origin (test bucket) and a secondary origin which points to the production bucket for 404 responses from the primary origin?
Learn how to increase the availability of your website, application, or content with Amazon CloudFront origin failover and other features.
Outstanding will have a read now
There’s an example in the terraform docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_distribution#example-usage
It might be necessary for you to make use of dynamic blocks to configure the Cloudfront Distribution based on environment (test or production).
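A hedged sketch of the origin failover wiring (bucket references and origin IDs are placeholders; the rest of the distribution config is omitted):
resource "aws_cloudfront_distribution" "test" {
  # viewer_certificate, restrictions, etc. omitted

  origin {
    origin_id   = "test-bucket"
    domain_name = aws_s3_bucket.test_assets.bucket_regional_domain_name
  }

  origin {
    origin_id   = "production-bucket"
    domain_name = aws_s3_bucket.prod_assets.bucket_regional_domain_name
  }

  origin_group {
    origin_id = "assets-failover"

    failover_criteria {
      status_codes = [403, 404]
    }

    # primary origin first, fallback second
    member {
      origin_id = "test-bucket"
    }

    member {
      origin_id = "production-bucket"
    }
  }

  default_cache_behavior {
    target_origin_id = "assets-failover" # point the behavior at the origin group
    # ...
  }
}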
Thanks again
Hiya peeps! Has anyone used the terraform-aws-elastic-beanstalk-environment
and attached an RDS instance before? I have set the relevant aws:rds:dbinstance
namespace values and put that in the additional_options
but it doesn’t appear to be creating the Database when I look at the environment Configuration in the AWS console - is there something else I need to do to get the module to create the link? (Note this is for an RDS instance attached to the environment itself - not a separate RDS instance; this is only for an internal tool, not production)
Hi Team, having the following issue: https://github.com/cloudposse/terraform-aws-ec2-bastion-server/issues/52
Found a bug? Maybe our Slack Community can help. Describe the Bug module "bastion" { source = "cloudposse/ec2-bastion-server/aws" version = "0.17.0" ami = "ami-03…
Try setting id_length_limit
(it’s a feature of null label)
never mind - saw you posted: https://sweetops.slack.com/archives/CB6GHNLG0/p1608759390401500
I tried different values for
id_length_limit
Thank you was able to resolve this issue with that exact fix.
Error: expected length of name to be in the range (1 - 64), got
on .terraform/modules/bastion/main.tf line 9, in resource "aws_iam_role" "default":
9: name = module.this.id
on this module: https://github.com/cloudposse/terraform-aws-ec2-bastion-server
Terraform Module to define a generic Bastion host with parameterized user_data - cloudposse/terraform-aws-ec2-bastion-server
you tried the module context variable right ?
id_length_limit ? in your instantiation of the module?
you prob need to provide at least one of namespace
, environment
, stage
, name
the resource names and IDs are calculated from those 4 vars
module "bastion" {
source = "cloudposse/ec2-bastion-server/aws"
version = "0.17.0"
ami = "ami-03130878b60947df3"
instance_type = "t2.micro"
id_length_limit = 10
enabled = true
name = "${var.app_name}-bastion"
vpc_id = aws_vpc.main.id
associate_public_ip_address = true
subnets = aws_subnet.public.*.id
allowed_cidr_blocks = var.allowed_cidr_blocks
ssh_user = "user"
key_name = module.dispatch_key_pair.this_key_pair_key_name
user_data = ["sudo amazon-linux-extras enable postgresql11"]
tags = {
name = "${var.app_name}-bastion"
description = "Used to connect to db."
environment = var.env
}
}
Passing the name worked.
Would you happen to have any idea what could be the issue here?
I tried different values for
id_length_limit
I tried 0, 5 and default null
but always getting the same eror
I seem to have fixed it.
Need to pass enabled = true
enabled? where?
can you paste a code snippet?
along with a non-default value for id_length_limit
and a name
2020-12-24
Hi All
I have written Terraform modules for creating an EKS cluster and EKS node groups. Everything is running as expected, but the EC2 instances under the node groups do not have a name.
Maybe put a Name
tag?
This is what I have; it’s creating the name for the node groups but not for the EC2 instances.
resource "aws_eks_node_group" "node" {
cluster_name = aws_eks_cluster.aws_eks.name
node_role_arn = aws_iam_role.eks_nodes.arn
instance_types = "${var.eks_instance_type}"
node_group_name = "${var.generictag}-${var.env}-ec2-eksng"
tags = "${merge(var.tags,map("Name", "${var.generictag}-${var.env}-ec2-eks-nodes"))}"
subnet_ids = [ "${var.private_subnet_ids[0]}","${var.private_subnet_ids[1]}","${var.private_subnet_ids[2]}" ]
remote_access {
ec2_ssh_key = "${aws_key_pair.eks.key_name}"
source_security_group_ids = "${var.bastion_security_group_id}"
}
scaling_config {
desired_size = "${var.eks_asg_desir}"
max_size = "${var.eks_asg_max}"
min_size = "${var.eks_asg_min}"
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
]
}
Okay, looks like tags won’t propagate to instances, as per this https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html#tag-resources
ok, is there any other way i can tag my EC2 instances under the node groups?
@aaratn Thank You for your help it really helped. https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html#tag-resources
Any help
I have written the Terraform modules for creating EKS Cluster and EKS Node groups everything is running as expected but the EC2 instances under the node groups does not have a name.
2020-12-28
hi guys, I hope to get some advice on this. I am trying to create an extra db on an existing aws_db_instance so that my applications on Fargate can connect to it. However, I keep getting a connection timed out error during creation. Wondering if anyone has run into this, and how you went about it. My config looks like this:
# Create a database server
resource "aws_db_instance" "default" {
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t1.micro"
name = "initial_db"
username = "rootuser"
password = "rootpasswd"
# etc, etc; see aws_db_instance docs for more
}
# Configure the MySQL provider based on the outcome of
# creating the aws_db_instance.
provider "mysql" {
endpoint = "${aws_db_instance.default.endpoint}"
username = "${aws_db_instance.default.username}"
password = "${aws_db_instance.default.password}"
}
# Create a second database, in addition to the "initial_db" created
# by the aws_db_instance resource above.
resource "mysql_database" "app" {
name = "another_db"
}
unless that db instance has a public IP, you need a tunnel/VPN or something so that the computer running terraform can connect to the instance on 3306
just thinking aloud, do you think it will work if I set it to publicly accessible? I actually wish I wouldn’t have to go through that before it works. thanks for the pointer
yes it will work, just make sure open it to only your ip
but then it will have to be deployed in the public subnet and your app might not be able to reach it
my app and db both run in private subnet
so when we need to connect to the db we use a bastion. I was wondering, isn’t tf supposed to already be in that vpc to create resources? Plus I am using tf cloud to deploy (I don’t know if this counts)
then it will be easier to set up an ssh tunnel and then point the provider at the ssh-tunnel port
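Something along these lines (ports and host are illustrative): open the tunnel outside Terraform, e.g. ssh -L 13306:<rds-endpoint>:3306 user@bastion, then point the provider at the local end:
provider "mysql" {
  # 127.0.0.1:13306 is the local end of the SSH tunnel through the bastion
  endpoint = "127.0.0.1:13306"
  username = aws_db_instance.default.username
  password = aws_db_instance.default.password
}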
thanks, got it
2020-12-29
Does anyone know of a way for Terraform Cloud to connect to internal AWS resources without using the business tier hosted Terraform agents? I have a database root module where I manage multiple RDS DB instances and Amazon MQ vhosts. I’d like to make that as an automated Terraform Cloud workspace, but right now I manage accessing those private resources via port forwarding into a bastion host on the applier’s machine, which obviously isn’t possible for the TFC workspace.
Interesting, they don’t offer a solution for this?
They offer TFC Agents, which are self hosted runners… but to use them they require their business tier.
Super weak.
I imagine you could set a port forward and then allow TF cloud ips to it but that sounds pretty insecure
Or run local-exec to call a VPN client and connect to a VPN…
Yeah and I’d need to make my RDS / RabbitMQ / ElasticSearch clusters externally available… Can’t do that.
No one can
I’m curious about other people experiences with this
If you run Atlantis then you could do this
Yeah — for real. I will post on the Hashi discussion board if nobody gets back by EOD.
Yeah, trying to avoid more self-hosted tooling honestly. Atlantis is awesome, but my client doesn’t need to add another self-hosted tool.
I agree you already have a tool to do this
subscribing. I looked for a solution earlier this year and couldn’t find one
Yeah… honestly, I don’t think there is a solution, but I figure I should check thoroughly before giving up on that front.
isn’t this a very basic feature for IaC?
I mean they do have an option but you need to pay for it
I chatted with a Hashicorp account manager recently and asked about this, and I believe you need to pay
the only workaround I can think about is managing that particular workspace with your CI provider and setting it to local run and just use Terraform Cloud for remote state storage
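That would roughly mean keeping only the backend pointed at Terraform Cloud and setting the workspace’s execution mode to local, e.g. (organization and workspace names are placeholders):
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      name = "database-root-module"
    }
  }
}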
I spoke to Hashicorp AM when I was looking too. I was given the impression that Hashicorp simply isn’t interested in non-enterprise customers. If you aren’t looking to spend six figures, their product isn’t really for you.
FWIW, we did try Terraform Cloud for the great module directory and remote state storage only, like you suggest @btai. But it’s not really cost effective so we didn’t commit
I committed a bit early when it was newly released as free and moved everything over because I really liked those two things you mentioned (state storage / module directory). I’m a little bummed w/ my decision as their pricing model is not a model that my finance team appreciates (charging per successful terraform apply). For now, we’ve been able to get away with just doing local runs for workspaces that require a null resource script or matt’s use case.
Likewise. We had convo with Hashicorp team and options are pay for business or run terraform in Gitlab CI with agents in our AWS somewhere.
@btai what did you think of the spacelift presentation?
@Matt Gowie unfortunately, you need to pony up for TFC for business.
I haven’t found any workaround.
Spacelift pricing will be a lot more affordable. Pay per user. Pay per concurrency.
Scalr is also coming out with runners.
Scalr has an API similar to TFC and a terraform provider.
Yeah, everybody saying the same thing — I figured as much when asking the question, but it’s pretty weak.
What I love about spacelift is you don’t have to move state backend.
Definitely see myself recommending those other tools to clients in the future once they’re more mature.
fwiw, we have the agent ready: https://github.com/cloudposse/terraform-kubernetes-tfc-cloud-agent
Provision a Terraform Cloud Agent on an existing Kubernetes cluster. - cloudposse/terraform-kubernetes-tfc-cloud-agent
Spacelift looks really cool, but seems to be a very early stage startup and no public pricing. Bit risky surely?
Do you use that with any clients that have purchased the biz tier? Or was that a forward thinking thing?
Yeah — that was my thoughts exactly Alex: Spacelift looks awesome, but seems too early to invest into it now.
And I’m not sure I can get behind that they built their own configuration language for their SaaS. Seems esoteric and cumbersome. But maybe I’m being short sighted and that’s a necessary thing?
I share similar concerns. I’m also concerned about tooling fatigue. I already have a CI provider and Terraform Cloud. For every new tool I introduce, my (very small) team needs to learn yet another thing. This concern is separate from Spacelift (or the other 3 products attempting to solve similar problems). That’s not to say that I wasn’t excited about the demos, but I’m approaching yet another tool a little more cautiously because of the mistake I made w/ moving everything to TFC so quickly.
Also naively forgetting that while Hashicorp has provided a ton of tooling for free, that they are still first and foremost a business. A part of me initially thought we’d continue to see more features added to the free version of Terraform Cloud .
Yeah, that’s what I’d like to see. And honestly, I don’t need these things to be free — I’m sure companies would be happy to pay for them (I would), but not at crazy prices or prices I assume are crazy because why are you not showing them to me on your pricing page.
As Alex mentioned above: “given the impression that Hashicorp simply isn’t interested in non-enterprise customers”. I get the same sentiment and that sucks because there is no reason TFC couldn’t be leagues ahead of the up and comers, but it doesn’t seem they’re interested in doing so.
Hi all! I’m new here, but I wanted to ask: what do you use to test your Terraform modules? I have created some, but I’m looking for options on how to create some tests and maybe lint?
Hope all of you have a great christmas
Terratest
Nice I will try it out it looks really cool
@jose.amengual Do you know if terratest mock the creation of the resources?
mmmmm I do not know
I know it can create the resources for sure
Interesting, I will look into it. I would like to mock at least some resources
Nice thanks!
a quick question, can the ALB module support terraform 0.14?
right now the versions file has >= 0.12:
terraform {
required_version = ">= 0.12.0"
required_providers {
aws = ">= 2.0"
template = ">= 2.0"
null = ">= 2.0"
local = ">= 1.3"
}
}
it should support TF 0.13 and 0.14
>= 0.12.0
means 0.12 and up
The error message I got
Module module.vpc (from
git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.1>) does
not support Terraform version 0.14.3. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
it works after changing to source = "cloudposse/vpc/aws"
it seems the docs need an update
VPC and subnets work but it failed at ALB
Error: Unsupported Terraform Core version
on .terraform/modules/alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
2: required_version = ">= 0.12.0, < 0.14.0"
Module module.alb.module.access_logs.module.s3_bucket.module.this (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>)
does not support Terraform version 0.14.3. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
- alb.access_logs.s3_bucket in .terraform/modules/alb.access_logs.s3_bucket
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for alb.access_logs.s3_bucket.this...
- alb.access_logs.s3_bucket.this in .terraform/modules/alb.access_logs.s3_bucket.this
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for alb.access_logs.this...
seems it downloaded an old version of null-label plugin
if it uses sub-modules which were not converted yet, it will not work with TF 0.14
we are working on converting all modules
thanks Andriy
null-label
latest versions support TF 0.14
yeah, I’m looking for which file uses the old version lol
it’s the other modules that use the prev versions of null-label
that will cause issues
alb.access_logs.this
?
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
ah
this one for example ^
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
got it
we are working on converting all modules, but to speed up the process, PRs are welcome
cool, let me put up one if you don’t mind I steal from you lol
0.22.1
is the newest version of null-label
which supports 0.14
yes
in versions.tf, we use it like this
terraform {
required_version = ">= 0.12.26"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 2.0"
}
local = {
source = "hashicorp/local"
version = ">= 1.2"
}
null = {
source = "hashicorp/null"
version = ">= 2.0"
}
}
}
required_version
w/o upper limit
and the new syntax for required_providers
the current one is
terraform {
required_version = ">= 0.12.0"
required_providers {
aws = ">= 2.0"
template = ">= 2.0"
null = ">= 2.0"
local = ">= 1.3"
}
}
I can add source
to all if needed
we are using this format for all sources now
module "label" {
source = "cloudposse/label/null"
version = "0.22.1"
(not GitHub URLs)
oh, got it, I used this way just now
without git::
ok, will update this as well
how can I test if the change works?
I found the test
folder
and it asks me to install bats
a bit late here, will sync up tomorrow
thanks
@Andriy Knysh (Cloud Posse) good morning
how can I find the new source of package which is not git url?
for example, git::<https://github.com/cloudposse/terraform-aws-lb-s3-bucket.git?ref=tags/0.9.0>
source = "cloudposse/lb-s3-bucket/aws"
version = "0.9.0"
The syntax for specifying a registry module is <NAMESPACE>/<NAME>/<PROVIDER>. For example: hashicorp/consul/aws
The Terraform Registry makes it simple to find and use modules.
<NAMESPACE>
is cloudposse
<PROVIDER>
is aws
terraform-aws-lb-s3-bucket
becomes
cloudposse/lb-s3-bucket/aws
Anyone can publish and share modules on the Terraform Registry.
cool
got the PR: https://github.com/cloudposse/terraform-aws-alb/pull/67, please review
what Add Terraform 0.14 support Use new syntax in main.tf/versions.tf why This module needs to support Terraform 0.14
@Hao Wang reviewed, thanks for the PR. Looks good, just a few comments
cool, just updated
Add 0.14 support and update new syntax @snowsky (#67) what Add Terraform 0.14 support Use new syntax in main.tf/versions.tf why This module needs to support Terraform 0.14
thanks @Hao Wang
thanks Andriy
2020-12-30
I’m working with the “terraform-aws-eks-node-group” module https://github.com/cloudposse/terraform-aws-eks-node-group, and am having issues adding user data. I followed examples in the repo:
before_cluster_joining_userdata = var.before_cluster_joining_userdata
When I run a terraform plan
I’m getting
An argument named "before_cluster_joining_userdata" is not expected here.
I’m using terraform version 0.13.2
.
Has anyone else had this problem?
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Not sure what version of the module you are using, but the most recent one has
required_version = ">= 0.13.3"
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Potentially breaking changes Terraform 0.13.3 or later required This release requires Terraform 0.13.3 or later because it is affected by these bugs that are fixed in 0.13.3: hashicorp/terraform#2…
it may not be related still, but better to fit the requirements
Ah, good call out.
I created security groups for the EKS cluster and EKS nodes, and also created an ingress rule for the EKS cluster to add an inbound rule for port 443. It works fine, but if I run plan or apply a second time, the ingress security rule which was added previously gets deleted and is not added back. When I run it a third time it creates the ingress rule again, and if I run plan/apply again it deletes the rule, and so on. Any idea why it is behaving like that?
resource "aws_security_group" "eks_cluster_sg" {
name = "${var.generictag}-${var.env}-scg-ekscls"
description = "The eks cluster master security group"
vpc_id = "${var.vpc}"
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
description = "Allowed the inbound connection to VPC CIDR"
security_groups = ["${aws_security_group.bastion.id}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${merge(
var.tags,
map(
"Name", "${var.generictag}-${var.env}-scg-ekscls"
)
)}"
}
resource "aws_security_group" "eks_node_security_group" {
name = "${var.generictag}-${var.env}-scg-eks-node"
description = "The eks cluster master security group"
vpc_id = "${var.vpc}"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
description = "Allowed the inbound connection from bastion to eks nodes"
security_groups = ["${aws_security_group.bastion.id}"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
description = "Allowed the inbound connection from eks controle plane to eks nodes"
security_groups = ["${aws_security_group.eks_cluster_sg.id}"]
}
ingress {
from_port = 10250
to_port = 10250
protocol = "tcp"
description = "Allowed the inbound from eks controle plane to eks nodes for internal connectivity"
security_groups = ["${aws_security_group.eks_cluster_sg.id}"]
}
ingress {
from_port = 1025
to_port = 65535
protocol = "tcp"
description = "Allowed the inbound connection port range of eks nodes to itself"
self = "true"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${merge(
var.tags,
map(
"Name", "${var.generictag}-${var.env}-scg-eks-node"
)
)}"
}
resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"
description = "Allow Inbound rule in eks cluster to eks nodes"
security_group_id = "${aws_security_group.eks_cluster_sg.id}"
source_security_group_id = "${aws_security_group.eks_node_security_group.id}"
depends_on = [aws_security_group.eks_node_security_group,]
lifecycle {
create_before_destroy = "true"
}
}
output of plan:
# module.sg.aws_security_group.eks_cluster_sg will be updated in-place
~ resource "aws_security_group" "eks_cluster_sg" {
id = "sg-09b067394f3f35d99"
~ ingress = [
- {
- cidr_blocks = []
- description = ""
- from_port = 443
- ipv6_cidr_blocks = []
- prefix_list_ids = []
- protocol = "tcp"
- security_groups = [
- "sg-07e3a22f09247060c",
]
- self = false
- to_port = 443
},
# (1 unchanged element hidden)
]
name = "a0266d-prd-scg-ekscls"
tags = {
"Environment" = "prd"
"Name" = "a0266d-prd-scg-ekscls"
"Projectcode" = "a0266d"
"Terraformed" = "true"
}
# (6 unchanged attributes hidden)
}
# module.sg.aws_security_group_rule.eks_cluster-to-eks_worker_node will be updated in-place
~ resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
+ description = "Allow Inbound rule in eks cluster to eks nodes"
id = "sgrule-1645696446"
# (10 unchanged attributes hidden)
}
Plan: 0 to add, 4 to change, 0 to destroy.
You cannot combine rules inside an aws_security_group resource with individual _rule resources.
They will be competing so to speak…
Do i need to specify them in different module or what is the workaround for it?
Ok, I got it, thanks. Here is how it’s done: http://cavaliercoder.com/blog/inline-vs-discrete-security-groups-in-terraform.html
There are two ways to configure AWS Security Groups in Terraform. You may definerules inline with a aws_security_group resource or you may define additionaldiscrete aws_security_group_rule resources.
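A sketch of that workaround applied to the cluster SG above: keep the aws_security_group resource free of inline ingress/egress blocks and express every rule as its own aws_security_group_rule (rule resource names here are illustrative):
resource "aws_security_group" "eks_cluster_sg" {
  name        = "${var.generictag}-${var.env}-scg-ekscls"
  description = "The eks cluster master security group"
  vpc_id      = var.vpc
}

resource "aws_security_group_rule" "cluster_https_from_bastion" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_cluster_sg.id
  source_security_group_id = aws_security_group.bastion.id
}

resource "aws_security_group_rule" "cluster_egress_all" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.eks_cluster_sg.id
}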
hey, I ran into an issue and it may be an easy fix. When I use both terraform-aws-ecs-alb-service-task
and rds
together, they both create a security group with the same name so I got the error message like
Error creating Security Group: InvalidGroup.Duplicate: The security group 'eg-test-test' already exists for VPC 'vpc-0a4474b6d776a7b74'
I did some research and found both modules call cloudposse/label/null
and create the SG with the same ID
how could I use a different name for each module?
let me try pass attributes
to rds
you need to add something to the name, or add an attribute or something to make it different.
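For example (a sketch, assuming the rds module takes the usual null-label inputs; the label values are made up), adding an attribute to one module makes null-label generate a distinct ID and avoids the collision:
module "rds" {
  source = "cloudposse/rds/aws"
  # version pinned in real code

  namespace  = "eg"
  stage      = "test"
  name       = "test"
  attributes = ["rds"] # yields an ID like eg-test-test-rds instead of eg-test-test
  # ...other inputs...
}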
got it, thanks
for vpc module, how can I use custom security group rules?
enable_default_security_group_with_custom_rules
is a flag and enabled by default
you want a security group for the whole VPC?
that is usually not recommended
I’d like to have a security group for EC2 instance
then you create one and attach it to the instance
ok, thanks
Terraform module for provisioning a general purpose EC2 host - cloudposse/terraform-aws-ec2-instance
so you create the security group and then pass it to the instance when you are creating it
and that SG will be for that instance only
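Roughly like this (a sketch; the input names on the ec2-instance module are from memory and worth checking against its README, and the CIDR/AMI values are placeholders):
resource "aws_security_group" "ec2" {
  name   = "my-ec2-sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

module "ec2_instance" {
  source = "cloudposse/ec2-instance/aws"
  # version pinned in real code

  name            = "example"
  vpc_id          = module.vpc.vpc_id
  subnet          = module.subnets.private_subnet_ids[0]
  ami             = "ami-xxxxxxxxxxxx"
  security_groups = [aws_security_group.ec2.id]
}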
thanks for the details, it works
2020-12-31
This is pretty weak — terraform cloud does not support refresh
: https://github.com/hashicorp/terraform/issues/23247
Terraform Version Terraform v0.12.10 provider.aws v2.33.0 Hey guys, I'm using Terraform Cloud as a remote backend. For example, I've changed output and nothing else and need it to be update…
No response from Hashi or anything in that thread. And it doesn’t even seem to make that much sense… Why can’t terraform refresh work the same way against their remote backend?
some basic updates to the cloudtrail s3 bucket module… also Happy NYE and NYD to everyone in here
Updating this to the latest null-label module. Otherwise, it doesn't work with TF 14 what getting this error when trying to init with tf 14 Error: Unsupported Terraform Core version on .te…
@Ryan Ryke can you follow this guide ?
feel free to add to my pr, just wanted to let you know that it wasnt currently working
We do not merge PRs without passing tests or without following the new standards. We are very grateful for community contributions, but there are guidelines for contributing. If you want the module to be updated in your PR you can follow the steps; otherwise it will have to wait until we get to it. Sadly I don’t have time to do it right now.
@Ryan Ryke looks like you’re in luck: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/32
what Upgrade to support Terraform 0.14 and bring up to current Cloud Posse standard why Support Terraform 0.14
thanks… i saw it and closed mine
Does anyone have experience with blue/green deployments for EC2 using launch templates and an ASG? Basically, I’m trying to update the launch template and launch the new EC2 instances before spinning down the old ones. Looks like I need to duplicate the templates and ASGs and use a script to fail over, like these guys did: https://github.com/skyscrapers/terraform-bluegreen/blob/master/bluegreen.py
I was just hoping not to need to do that…
Terraform module to setup blue / green deployments - skyscrapers/terraform-bluegreen
You can use codedeploy to do that or app mesh
Otherwise you need to duplicate the ASG, TG, task def, etc.
Yeah which I was hoping to avoid. CodeDeploy good thinking… I was thinking about it only for ECS but it can be used in EC2 too.
Codedeploy is way older than ecs, we used quite a lot in the past
CodeDeploy does not work well with ASGs, forewarning
it doesn’t duplicate the entire config - they claim it does a blue/green but it’s sort of half-arsed
@Yoni Leitersdorf (Indeni Cloudrail) we do this in a blue-green manner, although you can’t pause/stop halfway, by tying the name of the ASG to the name of the AMI or the launch template version etc, and using ‘create_before_destroy’
terraform will then spin up a new ASG using all the rest of the infra ‘as is’ and only destroy the old one once the new ASG is stable
create_before_destroy on the ASG, not the template, right?
yup
Oh looks like we used the creation_time of the AMI actually
Where?
locals {
namespace_timestamp = "${local.namespace}-${formatdate("YYYYMMDDhhss", data.aws_ami.this.creation_date)}"
}
resource "aws_autoscaling_group" "this" {
name = local.namespace_timestamp
...
lifecycle {
create_before_destroy = true
}
We also add a dependency on the launch_template within the ASG after we found some weird dependency looping
depends_on = [aws_launch_template.this]
Cool thanks!
Cool trick with the ami creation date!
Codedeploy still has that problem with ASGs in EC2? I thought they fixed it
Nope, I even reached out to support some time ago
it makes an ASG, but doesn’t copy any of the autoscaling policies
and I think doesn’t register it with ALBs either
… frankly I don’t understand the value of the feature
so stupid
in ECS it did work
with fargate
and the documentation is incredibly vague about it
it is
Here’s what they told me, verbatim
AWS CodeDeploy doesn’t make a copy of the cloudwatch alarms at the moment. Hence does the following:
~ In the first approach, AWS CodeDeploy makes a copy of an Auto Scaling group. It, in turn, provisions new Amazon EC2 instances, deploys the application to these new instances, and then redirects traffic to the newly deployed code.
~ In the second approach, you use instance tags or an Auto Scaling group to select the instances that will be used for the green environment. AWS CodeDeploy then deploys the code to the tagged instances.
Furthermore, I have created a feature request on your behalf.
I can’t even really think of a situation where this feature is helpful, even for something like an ASG that just reads off a queue - w/o the alarms and scaling policies it will turn on with ‘min instances’ and then never budge from that
@endofcake has a post here: https://medium.com/@endofcake/using-terraform-for-zero-downtime-updates-of-an-auto-scaling-group-in-aws-60faca582664
A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…
Yup that outlines the approach I mentioned
We can force the ASG resource to be inextricably tied to the launch configuration. To do this, we reference the launch configuration name in the name of the Auto Scaling group.
I’m using launch templates though, which are versioned, so we used the ami creation-date as a way of forcing the b/g replacement
Anyone have experience adding before_cluster_joining_userdata
to an eks_node_group https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.9.0?
I want to add user data to my worker nodes without any downtime. Is this possible to do with this?
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
I deployed this in my dev environment but all my pods went down. I’d like to not have that happen in my prod environment.
I was thinking about creating a new instance of the worker node module, spin up the new worker node pool as part of one deployment and then spin down the old worker node pool as part of a separate changeset.
Does that make sense?
Yes it makes sense. So you can cordon and drain the old nodes at your leisure.
I think I may have hit a bug on the ACM certificate module. Any time I try and specify subject alternative names (even using the example code from the readme), I’m getting these errors. Environment and other details in thread.
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
31: name = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
|----------------
| count.index is 1
| local.domain_validation_options_list is empty list of dynamic
The given key does not identify an element in this collection value.
https://github.com/cloudposse/terraform-aws-acm-request-certificate
Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate
env:
Terraform v0.12.9
+ provider.aws v2.70.0
+ provider.local v1.4.0
+ provider.null v2.1.2
Your version of Terraform is out of date! The latest version
is 0.14.3. You can update by downloading from www.terraform.io/downloads.html
as well as the definition i’m executing:
module "acm_request_certificate" {
source = "git::<https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=tags/0.10.0>"
domain_name = "mytableauvnext.tableaucorp.com"
process_domain_validation_options = true
ttl = "300"
subject_alternative_names = ["mytableauvnext.ea.tableaucorp.com"]
zone_name = "mytableauvnext.ea.tableaucorp.com"
}
Can you bump the aws provider to 3.x
Or back the module version down
I tried module versions as far back as 0.3.0, and they all seemed to have the same issue.
I can also try the newer provider verison.
I have been using this module without any issues
maybe is related to the route53 zone lookup?
I’m guessing it’s the aws provider version
It was changed from a list to set in 3.x
I’m using the 3.x provider
actually, after looking over the module, it does not support 3.x, only 2.x
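For context, the 3.x pattern from the provider docs looks roughly like this (resource and data source names here are placeholders, not the module’s internals), since domain_validation_options became a set and is iterated with for_each:
resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id         = data.aws_route53_zone.this.zone_id
  name            = each.value.name
  type            = each.value.type
  ttl             = 300
  records         = [each.value.record]
  allow_overwrite = true
}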
looks like theres a ticket for it https://github.com/cloudposse/terraform-aws-acm-request-certificate/issues/26
looks like the ball is in the PR creator’s court
You can create a PR if you want to upgrade it, and we can close the other one
You can follow these instructions https://docs.cloudposse.com/community/updating-modules-for-terraform-14/
what Upgrade to support Terraform 0.14 and bring up to current Cloud Posse standard why Support Terraform 0.14 Support AWS Provider >= 3.x references https://registry.terraform.io/providers…