#terraform (2020-11)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-11-01
2020-11-02
I tried using 0.12.24. terragrunt plan works fine, but on terragrunt apply it's going to replace the cluster and I get the following error:
Error: error creating EKS Cluster (dev_cluster): ResourceInUseException: Cluster already exists with name: dev_cluster
{
  RespMetadata: {
    StatusCode: 409,
    RequestID: "6a650024-bdab-4965-9940-d15506218621"
  },
  ClusterName: "dev_cluster",
  Message_: "Cluster already exists with name: dev_cluster"
}

  on .terraform/modules/eks/cluster.tf line 9, in resource "aws_eks_cluster" "this":
   9: resource "aws_eks_cluster" "this" {
I don’t think a terraform plan will catch those kinds of conflicts.
Waypoint URL Service Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service
HashiCorp Services’s Status Page - Waypoint URL Service.
Waypoint URL Service Nov 2, 17:04 UTC Update - Service is experiencing partial outage and returning "Deployment not found" for some deployments. We are continuing to investigate. Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service
Waypoint URL Service Nov 2, 21:58 UTC Update - Service is experiencing degraded performance and may timeout for some deployments. We are continuing to investigate. Nov 2, 17:04 UTC Update - Service is experiencing partial outage and returning "Deployment not found" for some deployments. We are continuing to investigate. Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service
HashiCorp Services’s Status Page - Waypoint URL Service.
Has anyone ever used local-exec or data.external.this with aws-vault? I've tried a couple of times over the past 6 months and never had any luck :disappointed:
Our aws-vault setup is as follows:
aws-vault exec terraform-profile -- terraform apply
=> assume role in account listed in tf code to deploy to
from there, I would think that any aws cli or other sdk calls would fall under the assumed role, but this doesn't appear to be the case. I think it's using the role associated with terraform-profile, rather than the assumed role
I’m trying to run the following:
data "external" "this" {
program = ["go", "run", "${path.module}/../main.go", "-i", "i-12345678"]
}
main.go hits CloudWatch ListMetrics() for metrics on the provided instance ID, and creates a JSON response which is then, in theory, used to set up disk monitoring with for_each, without needing to input the fstype and device name when using aws_cloudwatch_metric_alarm in Terraform. The response would look like the following:
{
  "/": { "Device": "rootfs", "FSType": "rootfs" },
  "/boot": { "Device": "nvme0n1p1", "FSType": "ext4" }
}
Unfortunately, main.go returns an empty response (which errors by design) when Terraform runs the script, because it can't pull metrics for the provided instance ID. Thoughts?
Exactly right, you would need to configure vault to assume the role, not your terraform provider
Or write your script to assume the role, which we’ve done before. Gimme a sec to find an example….
here’s the setup in python: https://github.com/plus3it/terraform-aws-tardigrade-security-hub/blob/master/modules/accepter/security_hub_accepter.py#L106-L119
Cheers again mate. Was just thinking I’m going to make my error print the output of get caller identity
and here’s how we get that role_arn from the aws provider… https://github.com/plus3it/terraform-aws-tardigrade-security-hub/blob/master/modules/accepter/main.tf#L26-L35
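For reference, a rough sketch of that pattern in HCL: pass the role to assume into the external program and let the script perform the STS AssumeRole call itself (the -role-arn flag and var.assume_role_arn are hypothetical names, not from the linked module):

data "external" "this" {
  program = [
    "go", "run", "${path.module}/../main.go",
    "-i", "i-12345678",
    # hypothetical flag: the Go script would call sts:AssumeRole with this ARN
    # before hitting CloudWatch
    "-role-arn", var.assume_role_arn,
  ]
}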
So I got assume role working, which is great; I hadn't done that before. But now I think my problem is that I am returning a map of maps (see above), not a map of string values as specified in the docs. Looks like I will need to flatten things out, which is a bit messy. Thanks for this
Yep, doesn’t error out when I get rid of nesting
> data.external.this.result
{
  "Device" = "tmpfs"
  "FSType" = "tmpfs"
}
You can return a json-encoded object, then use jsondecode() on the result
Ah, even with nesting? I think all of the values need to be strings per the docs. Though this would be great! https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source
Yes, you create a map where all of its keys have json-encoded values, that makes them all strings!
Ooooh damnnnnn. Will give it a crack tomorrow
Or, at that point, a map with a single key and your entire complex structure json-encoded as the value… Maybe a bit easier to refactor that way
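A minimal sketch of that approach, assuming the script prints a single-key map like {"metrics": "<json-encoded object>"} (the metrics key is made up) and that the instance reports the CloudWatch agent's default disk metrics:

data "external" "disks" {
  program = ["go", "run", "${path.module}/../main.go", "-i", "i-12345678"]
}

locals {
  # decode the single string value back into a map of objects,
  # e.g. { "/" = { Device = "rootfs", FSType = "rootfs" }, ... }
  disks = jsondecode(data.external.disks.result.metrics)
}

resource "aws_cloudwatch_metric_alarm" "disk" {
  for_each            = local.disks
  alarm_name          = "disk-used-${replace(each.key, "/", "-")}"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "disk_used_percent"
  namespace           = "CWAgent"
  period              = 300
  statistic           = "Average"
  threshold           = 80

  dimensions = {
    InstanceId = "i-12345678"
    path       = each.key
    device     = each.value.Device
    fstype     = each.value.FSType
  }
}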
Smart :)
Didn’t feel like waiting, got it working, I owe you a beer :)
Glad it turned out
Waypoint URL Service Nov 3, 03:59 UTC Resolved - Services have normalized. Nov 2, 21:58 UTC Update - Service is experiencing degraded performance and may timeout for some deployments. We are continuing to investigate. Nov 2, 17:04 UTC Update - Service is experiencing partial outage and returning "Deployment not found" for some deployments. We are continuing to investigate. Nov 2, 15:44 UTC Investigating - Investigating observed issues with Waypoint URL Service
HashiCorp Services’s Status Page - Waypoint URL Service.
2020-11-03
Has anyone successfully updated EKS from 1.14 using Terraform/Terragrunt? Using terraform-root-modules
hi,
I’m interested in terraform-terraform-label module to re-label random ELB name in CloudWatch alarms, however it looks like it needs separate label module for each resource. Is that true or I misunderstood something?
For instance I have a ELB CloudWatch alert resource where I’d like to use a list of ELBs using [count.index]
:
module "elb-5xx-label" {
source = "git::<https://github.com/cloudposse/terraform-terraform-label.git>"
name = var.name
namespace = var.namespace
stage = var.stage
attributes = compact(concat(var.attributes, list("elb", "5xx")))
}
resource "aws_cloudwatch_metric_alarm" "elb-5xx-anomaly" {
count = length(var.monitored-elb-ids)
alarm_name = join("", ["ELB 5xx errors high - ", var.monitored-elb-ids[count.index]])
# alarm_name = join("", ["ELB 5xx errors high - ", module.elb-5xx-label.id])
comparison_operator = "LessThanLowerOrGreaterThanUpperThreshold"
evaluation_periods = "1"
threshold_metric_id = "e1"
alarm_description = "The number of HTTP 5XX errors originating from the ELB are out of band. This is not an error generated by the targets (backend)"
treat_missing_data = "notBreaching"
alarm_actions = [element(var.sns-topics.*.topic-id, 1)]
ok_actions = [element(var.sns-topics.*.topic-id, 1)]
metric_query {
id = "e1"
expression = "ANOMALY_DETECTION_BAND(m1, 1)"
label = "HTTPCode_ELB_5XX (expected)"
return_data = "true"
}
metric_query {
id = "m1"
return_data = "true"
metric {
metric_name = "HTTPCode_ELB_5XX"
namespace = "AWS/ELB"
period = "60"
stat = "Sum"
unit = "Count"
dimensions = {
LoadBalancerName = var.monitored-elb-ids[count.index]
}
}
}
}
First, I would recommend using terraform-null-label. We started terraform-terraform-label as an alternative to terraform-null-label that would not use the null_resource; however, since 0.12 shipped with many new features, we ended up dropping null_resource from the null label (so the name of the module is a bit of a misnomer now).
In the new null label module, we support a context variable. This makes it very easy to define many label names.
See this example: https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/nat-instance.tf#L1-L7
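A minimal sketch of the context pattern (the module ref and attributes are illustrative):

module "base_label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name
}

# a second label derived from the first; only the differing inputs are set
module "elb_5xx_label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"
  context    = module.base_label.context
  attributes = ["elb", "5xx"]
}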
Hi @Erik Osterman (Cloud Posse), thanks for your reply. I'm looking into the terraform-null-label module but can't find whether (and how) it will work in my scenario. When I want to have one resource:
resource "aws_cloudwatch_metric_alarm" "elb-5xx-anomaly"
with multiple alarm_name
taken from label module and multiple LBs defined in variable:
dimensions = {
LoadBalancerName = var.monitored-elb-ids[count.index]
How to configure it?
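The thread doesn't show a resolution, but one way to sketch it: keep a single label module and append the per-ELB identifier in the alarm name (a simplified alarm here, not the anomaly-detection one above):

resource "aws_cloudwatch_metric_alarm" "elb_5xx" {
  count               = length(var.monitored-elb-ids)
  alarm_name          = "${module.elb-5xx-label.id}-${var.monitored-elb-ids[count.index]}"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "HTTPCode_ELB_5XX"
  namespace           = "AWS/ELB"
  period              = 60
  statistic           = "Sum"
  threshold           = 0

  dimensions = {
    LoadBalancerName = var.monitored-elb-ids[count.index]
  }
}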
Hello! Good morning, I'm new to this Slack! I wanted to ask a question: I was trying to build a cluster with the cloudposse modules and they work perfectly. My problem is that I don't understand how to continue after that. I don't understand how to make the services work: I can bring up the cluster, but automating the listeners and target group part is what trips me up. I always get to a point (both with workers and node groups) where I get stuck.
Do you have any guidance, from scratch, on how to make all this work? Bring up the cluster and run the services
how to make all this work
yikes. How kubernetes works? That’s more than a few slack messages lol
or did you have anything a bit more specific
Excuse me, I explained it wrong; my English isn't native. I'll take the opportunity to say that if something seems rude, it's not my intention
I am trying to automate the deploy in terraform
the cloudposse modules helped me a lot
But, in my understanding, for an application to work I have to create a listener, give that listener a rule that points to a target group (I always did it this way)
My problem is that I think I'm going about it the wrong way, or I can't make it work well
For example, if I use node groups, I cannot give them the necessary security groups
If you have a question about something specific, like a particular module you are using, that will help get you the right answers. The best way to get the answers you are looking for on most OSS slack workspaces is to ask targeted questions that people are able to answer quickly. Asking “how does all this stuff work” generally just gets ignored since people don’t have time to walk you through everything
I think my problem is this type of networking within amazon or maybe the operation of eks itself, so I asked if there was an example of how to use the modules up to that point
I never asked a specific question, just asked if there was a more detailed guide
Ask and you shall receive
https://aws.amazon.com/blogs/apn/aws-networking-for-developers/ https://www.youtube.com/watch?v=hiKPPy584Mg http://aws-de-media.s3.amazonaws.com/images/AWS_Summit_2018/June7/Coral/AWS%20Networking%20Fundamentals.pdf
As far as the actual CloudPosse modules, the README that is in each repo is the documentation for that module
Most (all?) of them also have a very good example in an examples folder
Let me understand: I ask about specific Terraform modules, and EKS in particular; I explain that I know how to make it work by hand, but I don't understand how to make it work with the cloudposse modules, and you give me Amazon networking tutorials?
I can make it work by hand, that’s not a problem
Yes, the modules have good examples on how to create the cluster, but not how to make any services available
Does what I’m saying make sense? Or am I letting something go
• What kind of services are you running?
• How do you want them made available? Do you want them accessible from the outside internet, or something else?
• What do you want to use to get traffic in? ALB, ELB, something else? None of this stuff has anything to do with the cloudposse modules, they just build the cluster. This is all “how do I do this stuff in Kubernetes”
• Are you using any kind of service mesh? Or nginx-ingress controller? Or alb-ingress controller? Gloo? Traefik? Any of the other dozen ways to do ingress and routing in a k8s cluster?
Or just creating a k8s Service of type LoadBalancer? (this is the easiest way to start, but one of the most expensive)
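A minimal sketch of that easiest option, written with the hashicorp/kubernetes provider to stay in Terraform (names are illustrative):

resource "kubernetes_service" "web" {
  metadata {
    name = "web"
  }

  spec {
    # must match the labels on your Deployment's pods
    selector = {
      app = "web"
    }

    port {
      port        = 80   # what the load balancer listens on
      target_port = 8080 # what the pods listen on
    }

    # "LoadBalancer" makes EKS provision an ELB (Classic by default)
    type = "LoadBalancer"
  }
}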
@Sebastian Borrajo your question is how to deploy apps and services on the EKS cluster (including apps and system services like nginx-ingress etc.). This is a separate topic, but in short, you can use helm and helmfile to do it
can I use helm to solve these networking problems I’m having? thanks, it’s a very good way to keep going
@roth.andy for the moment I want to keep it as simple as possible: a simple web app, external, ALB
The Ghost helm chart is a simple blogging app. I’ve used it often when doing simple deployments to k8s. It has other optional stuff like databases and Ingress that you can enable if you want to test that stuff too
If you want to use ALB check out https://github.com/kubernetes-sigs/aws-load-balancer-controller
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
The one it comes with by default creates ELBs
a.k.a. Classic Load Balancer
I’m going to read about both options, thanks a lot to both! With alb ingress I tried a tutorial from amazon but referring to cf stacks and get stuck, this could work
now that ALB ingress v2 is released, I’d go straight for that one. .
@Sebastian Borrajo Do you speak Spanish?
@jose.amengual Yes! English is a bit hard for me, but I try to get by with the little I know
@Erik Osterman (Cloud Posse) Thank you! I’m going to try version 2
TLDR: to those English speakers, this is where the conversation will be encrypted in Spanish from now on
What my colleagues were telling you is that the cloudposse modules set up everything you need for an EKS cluster
in the examples/complete folder you will see an example that creates everything
including the VPC networking, etc.
after that you have to do a deployment, and you can do that with helm, which is the easiest way
but for any EKS cluster the minimum you need is a VPC, subnets, NAT gateways, and everything else on the networking side
and that's not only true for EKS
there is NO product in Amazon that doesn't need a VPC
Cloudposse also has modules for that, with examples
(i remember we started #terraform-es but it didn’t get much traction)
Yes! I set everything up with the cloudposse example, but my biggest obstacle was how to deploy the product afterwards; I understand that helm solves these routing problems, right?
(by these problems I mean that, for example, to deploy an app I have to create a listener and a listener rule, and attach a target to it; I understand that ALB ingress solves this problem)
yes, the helm chart does that part for you
but one way or another you'll have to read the manual on how to deploy apps in K8s, because at some point you'll have to modify the chart or the deployment to make it work the way you want
Thank you very much pepe, I'm going to dig into that
@Erik Osterman (Cloud Posse) yes, there is not much traction in it
one last question: by manual, which one do you mean? Is it a link? A book?
I mean the general documentation on how to do deployments in k8s
any tutorial will do
after you use the helm chart and see what it created
you'll have to understand what it did in order to modify it
sometimes the helm chart does everything you need
Hey, quick question about the terraform-s3-website module. Basically I'm trying to put up a route53 reference along with the website. The docs say to use parent_zone_name or id along with hostname, but I noticed the alias value it's creating is just s3-website. Any ideas what's going on here?
  + alias {
      + evaluate_target_health = false
      + name                   = "s3-website.ca-central-1.amazonaws.com"
      + zone_id                = "xxx"
    }
}
Here’s the module code
module "website" {
source = "git::<https://github.com/cloudposse/terraform-aws-s3-website.git?ref=0.12.0>"
delimiter = "."
region = var.region
namespace = var.name
stage = local.stage_namespace
name = local.cluster_domain
hostname = local.domain
versioning_enabled = "true"
cors_allowed_methods = ["GET", "HEAD"]
index_document = "index.html"
error_document = "index.html"
parent_zone_name = local.namespace_domain
tags = merge(
map("Country", substr(var.region, 0, 2)),
map("DataCenter", substr(var.region, 3, length(var.region) - 5))
)
}
local.namespace_domain represents the specific route53 zone.
it should create an A record in the DNS zone and point <local.domain.local.namespace_domain> to s3-website.ca-central-1.amazonaws.com
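Under the hood that is roughly this record (a sketch; Z1QDHH18159H29 is the published S3 website hosted zone ID for ca-central-1, verify it for your region):

resource "aws_route53_record" "website" {
  zone_id = var.parent_zone_id # the hosted zone for namespace_domain
  name    = var.hostname       # e.g. local.domain
  type    = "A"

  alias {
    name                   = "s3-website.ca-central-1.amazonaws.com"
    zone_id                = "Z1QDHH18159H29"
    evaluate_target_health = false
  }
}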
So this is working as expected then? Hmm, the site doesn’t seem to come up despite everything being there and the direct s3 endpoint working
do you see the A and AAAA records in the DNS zone?
A record
try to use some external tool like https://mxtoolbox.com/DnsLookup.aspx to check the record
Well I destroyed and recreated the whole thing and now it essentially just says Not found
Despite the s3 endpoint having files being hosted
DNS has TTL
Guess I gotta be a little patient haha
if it's cached somewhere, you'll not see the new record for the TTL duration
Alright new update, it just times out now
Any ideas @Andriy Knysh (Cloud Posse)?
how do you access the site using the A record?
can you share it?
I’ll dm ya
does anyone use or recommend a tool to view Terraform (0.13) plans in a prettier format? I’ve been looking at https://prettyplan.chrislewisdev.com/ which is 0.12 only, wondering if there are alternatives
Something I have used that has been pretty good is to just grep the output looking for hashes.
terraform plan | grep "#"
yea, there was also scenery, but it too was not updated.
i would be wary of any tool right now because the plan output is seemingly changing with every new version
yeah. I hoped some tools would use the JSON representation of plan, which I’m assuming hasn’t changed so much
That’s true…
I just tried 0.14 and the output is great though
Terraform will perform the following actions:

  # module.eb["770"].aws_elastic_beanstalk_environment.main will be updated in-place
  ~ resource "aws_elastic_beanstalk_environment" "main" {
        id   = "XXX"
        name = "XXX"
        # (17 unchanged attributes hidden)

      + setting {
          + name      = "HostHeaders"
          + namespace = "aws:elbv2:listenerrule:SharedAlbRedirect"
          + value     = "XXX"
        }
      - setting {
          - name      = "HostHeaders" -> null
          - namespace = "aws:elbv2:listenerrule:SharedAlbRedirect" -> null
          - value     = "YYY" -> null
        }
        # (86 unchanged blocks hidden)
    }
Those 86 unchanged blocks used to take up sooo much space! Now my diffs fit in a single terminal window.
that’s awesome! I can also be posted as comments now to PRs too since sensitive data will can be filtered out
I found this tool which creates a colored JS Graph which I can study in the browser. Added to a make target
visualize-plan:
	@terraform show -json plan.out > plan.json
	@mkdir -pv terraform-visual-report
	@docker run --rm -it --name terraform-visual-cli \
		--entrypoint terraform-visual \
		-v $$(pwd)/plan.json:/plan.json \
		-v $$(pwd)/terraform-visual-report:/terraform-visual-report/ \
		hieven/terraform-visual-cli:0.1.0-0.12.29 --plan plan.json
	@echo "\nOpen in your Browser:\n\t\tfile://$$(pwd)/terraform-visual-report/index.html"
I was using version 0.24 of https://github.com/cloudposse/terraform-aws-rds-cluster and I’m now upgrading to version 0.35 and Tf 0.13 and I’m getting
To work with
module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0] its
original provider configuration at
provider["registry.terraform.io/-/aws"].us_east_1 is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0], after
which you can remove the provider configuration again.
I think it's because of the removal of the label module's local provider?
I’m having trouble finding in the state the problematic resource so I can remove it
obviously I do not want to remove my cluster
I can go up to 0.31.0 version of the module so far no issues
so… here are the steps to solve this: (yes, it’s b/c of the removal of the provider from the very old label module)
- Copy the module locally
- In the module, add the new version of the label module w/o removing the old one
- Place the module into
modules
folder in your solution and reference the module (instead of the remote one)
- Update all the code to use the new label.id etc.
- terraform plan should shown no changes w/o destroying anything else
- terraform apply
- Delete the old label from the code
- terraform apply
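A sketch of the local-module step (paths and the commented ref are illustrative):

module "datamart_writer_cluster_us_east_1" {
  # was: source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.24.0"
  source = "./modules/terraform-aws-rds-cluster" # local copy you can edit while migrating the labels

  # ...same inputs as before...
}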
is there an easy way to find exactly which provider a submodule uses, like the label module in this case?
Yea search for this error in the archives
Someone else has a simpler solution I think
I am on my phone so it’s hard to search, but I recall seeing someone run some command to migrate the provider from - to HashiCorp
That’s the command I was thinking of
ahhh cool
I’m having a very weird issue with this project
if I use version 0.31.0 of the module with tf 0.13.5 I can run apply no problem
if I use version 0.32.0 the apply command hangs for ever trying to connect to mysql
it stays in a loop of waiting to connect
Error: Could not connect to server: dial tcp 127.0.0.1:3306: connect: connection refused
I use a mysql user provider
but if I lower the module version to 0.31 it works
very strange
Sounds like it’s related to this PR: https://github.com/cloudposse/terraform-aws-rds-cluster/pull/80
what Fix outputs when enabled=false Change Security Group rules from inline to resources why Fix outputs when enabled=false: coalesce will throw error when both parameters are empty Change Secur…
Using inline rules was "our bad" and shouldn't have made it through code review back in the day. Switching from inline rules to resources isn't supported by terraform.
You might need to delete those inline rules first.
interesting
but the weird thing is that the security group had not been changed yet
This PR #80 changed the Security Group rules from inline to resource-based. This is a good move since using inline SG rules is a "bad practice". Inline rules have many issues (one of them…
this fails on a plan, which I found weird
so the steps to reproduce is:
1.- run plan with module version 0.31.0 and confirm plan/apply works
2.- change module version to 0.32.0
3.- run terraform init
4.- run plan to confirm it times out again
I have yet to be able to apply with version > 0.31.0 of the module
Is this with atlantis on fargate?
no, this is from my laptop
I even thought it was my connection, restarted and such and same issue
I was going to run it on atlantis to see if is still a problem there
have you tried setting TF_LOG=debug?
yep, the mysql provider tries to connect to mysql all the time
until it times out after 5 min
and if I lower the module version everything works
it is VERY strange
the module does not have anything to do with the mysql user provider
I figured it out
I tried aws provider 2.7 and 3x and nothing
there seems to be a problem getting the .endpoint (hostname) of the cluster from the module: sometimes it could not resolve
so once I changed
provider "mysql" {
# endpoint = module.datamart_writer_cluster_us_east_1.endpoint
endpoint = "xxxxxx.us-east-1.rds.amazonaws.com"
username = var.datamart_db_user
password = random_string.db_password.result
}
the plan worked
now since the label module changed the cluster identifier changed so now it wants to destroy everything but that is another issue
So without hard coding the hostname, what hostname was it connecting to in the debug output?
localhost
which makes no sense since the module.endpoint was used
but now that I successfully ran plan, and since the plan wants to destroy the cluster because of a name change, my guess is there is a race condition: the plan tries to calculate the new endpoint name instead of using it from the state
or something along those lines
actually the reason the cluster is being recreated is the inline security group stuff, but I will follow Andriy's guide to fix it
mmm that was not it
the cluster_identifier seems to be the problem, but even if supplied it still wants to delete everything
@jose.amengual I recommend copying the module locally into the modules folder, referencing it from your code, and making changes locally to find the issues; this will allow you to iterate fast and find/fix the issues
I found the version that is breaking things: 0.32.0
we added context.tf
cluster_identifier = var.cluster_identifier == "" ? module.this.id : var.cluster_identifier
I think this could be it
now I know…
is this thing
module.datamart_writer_cluster_us_east_1.aws_rds_cluster.primary[0
the id in my state looks like module.datamart_writer_cluster_us_east_1.aws_rds_cluster.default[0]
By the way, what was the reason behind adding primary and secondary?
if Cluster is part of a Global Cluster, use the lifecycle configuration block ignore_changes argument to prevent Terraform from showing differences for replication_source_identifier argument instead of configuring this value (and we can't use dynamic since it's not supported in lifecycle blocks).
what Update to context.tf Add primary and secondary resource "aws_rds_cluster" why Standardization and interoperability Keep the module up to date If Cluster is part of a Global Cluste…
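That guidance corresponds to something like this on the secondary cluster (a sketch, not the module's exact code):

resource "aws_rds_cluster" "secondary" {
  cluster_identifier            = module.this.id
  engine                        = "aurora-mysql"
  global_cluster_identifier     = var.global_cluster_identifier
  replication_source_identifier = var.replication_source_identifier

  lifecycle {
    # dynamic isn't supported in lifecycle blocks, so this is hardcoded
    ignore_changes = [replication_source_identifier]
  }
}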
they are not created at the same time, only one or the other
it should not affect any names or IDs
primary and secondary are not very descriptive/good names for that
those are just for regular cluster (“primary” by itself) or a cluster that is part of a Global Cluster (hence “secondary”)
(any of those can have their own secondary read replicas)
I use Global RDS too, and I was using this module with global rds but I can see how it can make things easier
the primary and secondary were changed in the instance and cluster ids, and that is why my plan wants to destroy everything
2020-11-04
Hi, I am using https://github.com/cloudposse/terraform-aws-elasticsearch v0.24.1 to spin up a managed ElasticSearch domain. I want to add it to a previously created Route53 hosted zone. The ES cluster spins up fine, but when it starts creating the DNS record, I get the following error:
[ERR]: Error building changeset: AccessDenied: The resource hostedzone/XXXX can only be managed through AWS Cloud Map (arn:aws:servicediscovery:us-west-1:123456789:namespace/ns-xxxxxxx)
status code: 403, request id: 4d9a9437-3af1-4982-ad58-c766dc1d18d6
How can I overcome this error? Should I create the DNS records manually with the Cloud Map CLI, or is there a better solution? Thank you very much!
Sounds like you don’t have permission to modify that zone. I think there is a setting to disable automatic DNS. If not, we’ll accept any PRs to do that.
Thanks Erik. Yes, in the meantime I figured that out.
I was trying to create DNS records within a hosted zone created for service discovery (in the Elastic Container Service). Apparently this is not allowed since, as the error says, it can only be managed through Cloud Map.
A very simple workaround is to disable the hostname option when creating the module (it is there) and just use the Elasticsearch endpoint directly. Thanks anyway for the reply!
Hello everyone! Could someone tell me: do we REALLY have to have "~> 2.0" for the AWS provider here? https://github.com/cloudposse/terraform-aws-ssm-parameter-store/blob/master/versions.tf#L5
no, we can go with >= 2.0
you are welcome to create a PR
Thank you for the fast answer. I’ll definitely create a PR
should look like this https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/versions.tf
just add or delete the not needed providers
and post the link to your pr in the pr-reviews channel afterwards
what Update terraform and provider version requirements why 3.0 AWS provider is already here and we should be able to use the module with it
ssm module uses the same providers so just copy-paste
(wrong org okgolove)
miss click lol
lol
what Loosen the AWS provider requirement why Required versions was loosened for running Terraform 0.13 which wants to use the AWS v3 provider, allow it to do so. Otherwise we need to pin any module…
Aren’t the required_providers in https://github.com/cloudposse/terraform-aws-ssm-parameter-store/pull/19 going to break TF 12?
Ah, I see the Terratest output, you get a warning
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: Warning: Provider source not supported in Terraform v0.12
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: on ../../versions.tf line 4, in terraform:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 4: aws = {
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 5: source = "hashicorp/aws"
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 6: version = ">= 2.0"
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: 7: }
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: A source was declared for provider aws. Terraform v0.12 does not support the
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: provider source attribute. It will be ignored.
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121: (and 3 more similar warnings elsewhere)
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:
TestExamplesComplete 2020-11-04T17:55:07Z command.go:121:
are you guys in a race?
I will close this last PR then, is that ok?
@Mikhail Naletov
closed
(when we run tests, we just need to make sure to add the terraform/0.13 label)
ohhh I thought we did not have to do that since we were keeping compatibility with 0.12
I don’t feel we have to keep 0.12, it’s more a nice to have
since 0.14 is dropping any day, we need to drop 0.12 soon
Thanks folks!
Any chance of getting a release cut? ;D
I can do that
done
Thanks!
Thank you for merging the fix
hey there. was about to open an issue but the template seems to say i should bring it here first. https://github.com/cloudposse/terraform-aws-ecr
using it as such:
module "xxx_yyy" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecr.git?ref=tags/0.29.0>"
name = "xxx_yyy"
}
drops the underscore and yields xxxyyy for the repo name, which isn't desired. I need to keep the underscore in my use case to maintain a convention; confirmed Amazon supports it. Am I missing something from a tf escaping perspective, or is this a legitimate bug?
Did you try setting use_fullname to false?
will try that now. i also see regex_replace_chars which would explain this, but it seems to be defaulted to null
actually. that thing i just said seems to be the problem. the default is null in the table but it says:
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
so to close this out, a working example is
module "xxx_yyy" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecr.git?ref=tags/0.29.0>"
name = "xxx_yyy"
regex_replace_chars = "/[^a-zA-Z0-9-_]/"
}
i would expect that the default character set should be the amazon-supported characters
hi all - hoping someone can help me here. I'm using the aws_instance resource, and within my user data argument i'm using user_data = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh") - but this doesn't work. What I'd like to be able to do here is reference my script via a variable, as I have many scripts I'd like to reference for these types of instances I spin up.
What is not working? Can you share an error?
Hi it’s not that it’s not working. Since my user data reference a single script file I wanted a way where I can reference either script a or b depending on the type of instant I spin up.
Hope that clear
And did you try doing it with the method you mentioned?
I did but it’s not able to find the script path which I defined in my variable
So it’s giving you an error message?
Yes I’m not in front of my laptop unfortunately
We’ll need the error message to help you. I suggest you send it when you’re at your laptop.
although the error states invalid function i’m pretty sure i’m not using this correctly.
Error: Invalid function argument
on ..\..\..\..\..\terraform-aws-ec2\main.tf line 135, in resource "aws_instance" "linux":
135: user_data = filebase64("${path.module}/scripts/${var.linux_user_data}.sh")
|----------------
| path.module is "../../../../../terraform-aws-ec2"
| var.linux_user_data is ""
Invalid value for "path" parameter: no file exists at
..\..\..\..\..\terraform-aws-ec2\scripts\.sh; this function works only with
files that are distributed as part of the configuration source code, so if
this file will be created by a resource in this configuration you must instead
obtain this result from an attribute of that resource.
How did you define the variable?
in my variables.tf i’m using
variable "linux_user_data" {
description = "User data script"
default = ""
}
and the values are defined in my non-prod.tf
linux_user_data = "grafana"
and my module is set to use something like this
module "example_test01" {
source = "../../../../../terraform-aws-ec2"
Let’s do a sanity - remove the default from your variable. This will cause TF to break if it can’t find a value for that variable.
this is what i get
❯ terraform.exe plan -var-file="non-prod.tfvars"
Error: Missing required argument
on main.tf line 1, in module "example_test01":
1: module "example_test01" {
The argument "linux_user_data" is required, but no definition was found.
i also tried adding the var to the module and same error
actually this time it sees the file but it’s looking in the wrong path
Error: Invalid function argument
on ..\..\..\..\..\terraform-aws-ec2\main.tf line 135, in resource "aws_instance" "linux":
135: user_data = filebase64("${path.module}/scripts/${var.linux_user_data}.sh")
|----------------
| path.module is "../../../../../terraform-aws-ec2"
| var.linux_user_data is "granfana"
Invalid value for "path" parameter: no file exists at
..\..\..\..\..\terraform-aws-ec2\scripts\granfana.sh; this function works only
with files that are distributed as part of the configuration source code, so
if this file will be created by a resource in this configuration you must
instead obtain this result from an attribute of that resource.
it should be looking in the current dir .\scripts\
so i updated to used cwd
user_data = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh")
and now i get
Error: Invalid function argument
on ..\..\..\..\..\terraform-aws-ec2\main.tf line 135, in resource "aws_instance" "linux":
135: user_data = filebase64("${path.cwd}/scripts/${var.linux_user_data}.sh")
|----------------
| path.cwd is "D:/Users/sotath/Desktop/repo/mso/terraform-aws-ec2/environment/non-prod/example_test/test"
| var.linux_user_data is "granfana"
Invalid value for "path" parameter: no file exists at
D:\Users\sotath\Desktop\repo\mso\terraform-aws-ec2\environment\non-prod\example_test\test\scripts\granfana.sh;
this function works only with files that are distributed as part of the
configuration source code, so if this file will be created by a resource in
this configuration you must instead obtain this result from an attribute of
that resource.
it’s the correct location path now. but errors out
D:\Users\sotath\Desktop\repo\mso\terraform-aws-ec2\environment\non-prod\example_test\test\scripts\granfana.sh
exists?
it does
oh wait
why is it picking up granfana instead of linux_user_data = "grafana"?
spelling is off
That’s what you passed to your module
It’s visible in the above screenshot
from a few minutes ago
ah hell how did i miss that
i see it now
sweet! it works
omg!
sorry for the troubles
Happy to help
thx you
[Waypoint] Service Maintenance Nov 5, 01:30 UTC Investigating - The services are being upgraded to avoid conditions detected during the previous outages.
HashiCorp Services’s Status Page - [Waypoint] Service Maintenance.
[Waypoint] Service Maintenance Nov 5, 02:13 UTC Resolved - Services have been upgraded and are working properly. Nov 5, 01:30 UTC Investigating - The services are being upgraded to avoid conditions detected during the previous outages.
Working with @Yoni Leitersdorf (Indeni Cloudrail) on this project. We started receiving feedback, in particular on using the TF plan as part of our TF security scanning. Would love to get this group's input on passing the TF plan externally to tools like Cloudrail -> You can maintain anonymity with this google survey, but would love to chat!
As more and more people are switching to using infrastructure-as-code (like Terraform) to manage their cloud environments, we’re seeing an increase in the desire to do security reviews of the IaC code files. There’s a bunch of tools out there, and a couple of big challenges. Would appreciate your thoughts on the matter. Please see a blog post we’ve just published:
https://indeni.com/blog/identifying-security-violations-in-the-cloud-before-deployment/
Today, the management of Terraform environments has taken shape with varying security controls. We would like to understand how some of the security controls manifest for sensitive configuration files like Terraform Plan/State.
2020-11-05
hey, is it possible to update this https://github.com/cloudposse/terraform-aws-tfstate-backend to use the latest null-label module with context?
The delimiter setting doesn't work for the module now
what Use context feature and latest null-label module why Some settings like delimiter didn’t work for this module
@Andriy Knysh (Cloud Posse)
i’ll check it out
@Mikhail Naletov reviewed the PR, LGTM, a few comments
@Andriy Knysh (Cloud Posse) hello! I’ve fixed the PR. Could you review again?
bump
@Mikhail Naletov thanks again. I left a few comments
[Waypoint] Services not connecting Nov 5, 18:45 UTC Monitoring - We've rolled out a fix (appears to be a bug in reading random numbers? pretty weird, we agree). We'll keep an eye on it for the rest of the day. Nov 5, 17:53 UTC Identified - An old bug has reappeared! We're working on a fix.
^ could these move to another channel?
I’m going to keep the terraform releases though, since those are pretty seldom
Agree - releases are nice.
terraform import is broken. I can’t get it to import more than one element of a map. Anyone else run into this? Bugs have been opened and closed with Hashicorp without them ever admitting fault. I’m so frustrated. Here’s what the output looks like:
# terraform state list
data.aws_acm_certificate.amazon_issued_compeat_wc
aws_s3_bucket.beta_data_buckets["svc_feedback"]
aws_s3_bucket.frontend_beta_web_buckets["accounting"]
aws_s3_bucket.frontend_beta_web_buckets["integrations"]
aws_s3_bucket.frontend_beta_web_buckets["inventory"]
aws_s3_bucket.frontend_beta_web_buckets["portal"]
root@f30c14ba15f6:/tfroot/beta# terraform import aws_s3_bucket.beta_data_buckets[\"svc_imports\"] co-beta-service-imports
aws_s3_bucket.beta_data_buckets["svc_imports"]: Importing from ID "co-beta-service-imports"...
aws_s3_bucket.beta_data_buckets["svc_imports"]: Import prepared!
Prepared aws_s3_bucket for import
aws_s3_bucket.beta_data_buckets["svc_imports"]: Refreshing state... [id=co-beta-service-imports]
Error: Invalid index
on /tfroot/beta/resources.tf line 72, in locals:
72: aws_s3_bucket.beta_data_buckets[bucket].arn
|----------------
| aws_s3_bucket.beta_data_buckets is object with 1 attribute "svc_feedback"
The given key does not identify an element in this collection value.
I’m well aware that the given key doesn’t identify an element … that’s why I’m trying to import it!! Before I made the mistake of upgrading to 13 thinking that maybe it was fixed there, importing the first element added a minimal amount of config that I could maybe use to hack the file by copying for the other buckets, but at the latest 13, it appears to load a lot more into the config. Has anyone had to hack this file in order to get their TF working with objects created elsewhere like this? If so, what is the minimal amount of config that I need to hand-add for each bucket?
Curious if you’ve tried removing that object from the state file and importing?
Yeah. Going back and forth between import and state rm …
about to try destroying the entire state file. … that’s definitely not my preferred option, but I just tried to recreate it with tf 12, and it just worked …
What version of terraform are you on?
I’ve had problems with terraform 0.13.2 and import
. I think they’re known / potentially fixed in later patch versions of 0.13
Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/aws v3.13.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
I feel import is pretty badly broken in 0.13, especially and completely when using count/for_each on modules. they’ve closed issues indicating the bugs are fixed in master and will be part of the 0.14 release, but that’s small solace in the interim
I was able to fix my issue by importing with tf12 and then copying over the contents of the state file generated by tf12 into the working state file I was using with tf13. … this leaves me with working with unreproducible TF, but that seems to be the best they can do right now.
Interested to see where this goes, seems hashicorp is trying to improve visibility of their internal priorities and how they merge community contributions… https://www.github.com/hashicorp/terraform-provider-aws/tree/master/ROADMAP.md
Lulz didn’t realize I was still in thread…
I love the permission set support. Was just missing that recently.
The data has been accurate so far (2 quarters)
Hello, is anyone getting this error -
Error: InvalidParameter: 1 validation error(s) found. - minimum field size of 1, ListTargetsByRuleInput.EventBusName.
I just started getting this error in our pipeline when I tried to upgrade to the latest aws provider version
* hashicorp/aws: version = "~> 3.14.0"
check the release notes of versions between your previous version and the latest
however, that error looks like it comes direct from the AWS API
“InvalidParameter” is a common AWS API error string
Yes, what's weird is my previous provider version was 3.13.0 (October 29, 2020)
I will try running plan with TF_DEBUG=1 to rule out the issue
Is that the same as TF_LOG=trace? I also suggest -parallelism=1, which makes the logs much easier to read
yes I suppose
I use -parallelism=100; I will set it to 1 to debug
Looks like this was a provider bug https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.14.1
BUG FIXES resource/aws_cloudwatch_event_target: Prevent regression from version 3.14.0 with ListTargetsByRuleInput.EventBusName error (#16075)
Has anyone integrated custom providers into CD? Specifically, I’m looking to build a custom version of the AWS provider with some pull requests merged. I am wondering about the best way to add a custom provider binary to our CD process
Are you using TF 0.13? You can do something like this:
terraform {
  required_providers {
    restapi = {
      source  = "fmontezuma/restapi"
      version = "~> 1.14.0"
    }
  }
}
It does require you to register it.
or you can place the binary into terraform.d/plugins/linux_amd64, and TF will find it
this is the name format that TF expects: terraform-provider-shell_v0.1.3
It looks like the AWS provider is installed to
.terraform/plugins/registry.terraform.io/hashicorp/aws/3.7.0/darwin_amd64/terraform-provider-aws_v3.7.0_x5
I wonder if I can simply replace the binary with my own…
they are installed in that location, yes
but terraform.d works as well as a static location
I tried to put my custom compile of the aws provider in there as terraform.d/plugins/darwin_amd64/terraform-provider-aws_v3.14.0, but Terraform still installed the "real" version from the internet
you prob need to change the name or the version number so TF would not find it in the registry and look into terraform.d
(although we did it some time ago and things could have changed)
dang. Why is this so hard? Does Hashicorp benefit from keeping it difficult?
Also keep in mind there’s a hash there, it may be comparing it (I don’t know… didn’t read the code):
% cat selections.json
{
  "registry.terraform.io/fmontezuma/restapi": {
    "hash": "h1:dvLIvjzP1nGHcimSkM4mSLvuJ7yI+3aV/mZAWHu4EXs=",
    "version": "1.14.1"
  },
  "registry.terraform.io/hashicorp/aws": {
    "hash": "h1:3gkfYjOVSHc3g/eXnk/JnRuoYtoDRu1oV3YPmBnuVtY=",
    "version": "3.12.0"
  }
}
Figured it out. You can force install of packages using terraform init -plugin-dir - the plugin-dir argument is required to install "official namespace" packages from a local cache.
example main.tf:
# order doesn't matter
provider "azure" {} # only available online
provider "aws" {}   # available in local cache
Your local cache is a directory with the following file:
registry.terraform.io/hashicorp/aws/3.14.0/darwin_amd64/terraform-provider-aws_v3.14.0
Install aws provider:
$ terraform init -plugin-dir terraform.d/plugins
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/aws v3.14.0
- Finding latest version of hashicorp/azure...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/azure: provider registry.terraform.io/hashicorp/azure was not found
in any of the search locations
- terraform.d/plugins
Then you can install all other providers:
$ terraform init
(well, azure is a bad provider name, if you use a real provider things work )
2020-11-06
Quick question: does anyone know how I can set the Execution Timeout of a maintenance window task using terraform? I can set the ‘Delivery Timeout’ value in the run_command_parameters block, using the parameter name: timeout_seconds, but I don’t know the name of the ‘Execution Timeout’ parameter.
resource "aws_ssm_maintenance_window_task" "task" {
....
task_invocation_parameters {
run_command_parameters {
...
timeout_seconds = 600
}
}
}
Any insights/suggestions would be much appreciated.
One trick I sometimes leverage is configuring the resource in the console (by hand) and then importing the resource to see how it was configured. I was not able to find another timeout value in the provider documentation.
Where are the best module templates?
The internet
But more specifically you can try the CloudPosse GitHub org
I always expect a bit of cynicism and arrogance from all the narcissistic, genius people out there who have it all figured out. You, mate, are the product of the world we live in today..
thanks for the answer
General question: I found it normal to use like.. chtf
to switch between terraform versions as needed. But is there a way to allow for backwards compatibility from terraform 0.13 binary but for code meant for 0.12?
Pretty sure 0.12 binary for code with 0.11 just fails
(This came up after some conversations with friends on the topic. Just curious.)
are you asking if it’s possible to write terraform code that works with 0.12 and 0.13? It is
Best practice question: I have a handful of domains (zone / record data) that I’d like terraform to manage. Would you keep all this data in tf file itself (in the resource), or keep that data in a flat/json file and include it in the tf (resource)?
when you say a handful, is that like 10-20, or 100s?
if it is 10-20, I would make a list and put them all there, and if there is a module that creates the group of records for each domain, I would for-loop over the list and use the module, to make it really small and simple (see the sketch a couple of messages down)
but the same applies if you have 1000s in a json file; then you need an additional step to decode it and iterate
If you manage the data with another tool, json is much easier to deal with than HCL. For humans, you might as well keep it HCL so the system has fewer moving parts
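A sketch of the loop-over-a-list idea from above (zone names are hypothetical):

variable "domains" {
  type    = list(string)
  default = ["example.com", "example.org"] # hypothetical zones
}

resource "aws_route53_zone" "this" {
  for_each = toset(var.domains)
  name     = each.value
}

# one record per zone, e.g. www pointing at the apex
resource "aws_route53_record" "www" {
  for_each = aws_route53_zone.this
  zone_id  = each.value.zone_id
  name     = "www.${each.value.name}"
  type     = "CNAME"
  ttl      = 300
  records  = [each.value.name]
}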
Thanks for the advice!
2020-11-07
hi all, can anyone please show me how to get the ami id of the image made in packer, i wanted to use it for my ASG in terraform
tag your ami as part of your packer build then in your terraform use a datasource for the ami to look up the ami id
i think you need to use name actually for the terraform datasource (off the top of my head)
but the principle remains
2020-11-08
2020-11-09
@Chris Fowles (build-and-launch.sh):
#!/bin/bash
AMI_ID=$(packer build -machine-readable packer.json | awk -F, '$0 ~ /artifact,0,id/ {print $6}')
echo "variable \"AMI_ID\" { default = \"${AMI_ID}\" }" > amivar.tf
… use the shell script 'build-and-launch', which will first build the AMI, then extract the AMI_ID and put the extracted AMI_ID as a variable into amivar.tf, then run terraform apply
Thanks for this. I did try a similar approach, but it didn't work; maybe what I did is just wrong. So what I did: in the packer json file I have "ami_name":"ebs_backed_ami_by_packer", and in variables.tf I have

variable "ami_name_filter" {
  description = "Filter to use to find the AMI by name"
  default     = "EBS-*"
}
and created a separate ami.tf that have this
data "aws_ami" "ami" {
most_recent = true
owners = ["var.ami_owner"]
filter {
name = "name"
values = ["${var.ami_name_filter}*"]
}
so eventually was able to use it at terraform
resource "aws_launch_configuration" "ASGlaunchconfig" {
image_id = data.aws_ami.ami.id
it looks to work .
now my issue: when run in gitlab ci, I get this message: data.aws_ami.ami: Refreshing state...
Error: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: ebbbd4fe-097f-4f78-b3bd-9ab1abac9f49
Error: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: cd24c1fd-47b4-47c2-a4b6-8da6371efa9f
Cleaning up file based variables
00:01
ERROR: Job failed: exit status 1
i read somewhere that somehow i have not given packer enough permissions to let terraform use the ami i created. i added this on the iam user i used to exec packer and terraform: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "iam:*", "Resource": "*" } ] }
but I'm not sure I fully understand what it means. It's not working either
@Chris Fowles when you execute "sh build-and-launch.sh", you will see the last bit. Then you should see the AMI
@charlespogi please see my post(s) to @Chris Fowles hope it helps
Just curious as I can’t find one, but does cloudposse have a repo for wordpress?
Unfortunately not….
(we use kinsta.com!)
Thanks for letting me know! Based on the pricing I’ve seen from hosted solutions, it seems like a great business to be in. xD
It’s quite a tough business to be in. I used to run the infrastructure and hosting platform for ghost.org - managing 10’000s of application instances, databases and the customers that come along with them is interesting.
We’d built a scheduler, billing, routing and caching layer out of Ruby, NodeJS, LXC, MySQL and Redis that was, at times, quite funky. We usually got about 200 - 250 sites on a single instance. Databases were “sharded” ( blogId % 2 == 1 ? "db01" : "db02"
) over 2 sets of primary/secondary MySQL replicas…
About 1200 of the DBs were double encoded UTF8 in Latin1 which was fun to fix…
Business wise, there are many competitors including self-hosting and “1-click apps” like those on DigitalOcean. That’s not necessarily bad though as DigitalOcean love OSS and donate back to many of the apps that they provide.
Then backups… don’t get me started on backups…
@t_humphrey Thanks for that insight. That’s actually the site that led me to the question. Currently I run no where near that size of infrastructure, but on AWS am getting just under 100 sites on their own t2.micro instances for < $1 / instance, without reserve instances. The margin between that and Ghost’s basic package seems quite large..
Self hosting Wordpress is easy. Keeping it hardened and patched is what we pay for. If there’s ever any issue, they fix it and my weekend isn’t ruined cleaning up some hack :-)
Just dropping a complaint, but I'm sure everyone feels it. The change in how for loops are handled between v12 & v13 is really frustrating and seems to have broken A LOT of functionality
second the request to clarify. done a lot of upgrades from 0.12 to 0.13 and haven’t (yet) seen a big problem with for expressions…
Hmm.. I seem to be mistaken. The problem I was having was within the vpc module: the terraform-aws-alb example had pinned the reference to tags/0.8.1; changing that to master seems to have cleared the error I was having.
I am working through trying to set up a new infrastructure and have bumped into an issue. I am trying to set up chamber so I can use it for the secret store, but am having trouble finding how to do it with terraform >= 0.12.0. I used reference-architecture to get it "working", but the other modules I am using have been upgraded to 0.12, so I would like to make this work as well
setup chamber in which way?
you mean setting a user to use chamber or something else?
Yes. When using the reference architecture and creating the ‘child’ accounts it looks like the bucket and kms was setup for chamber but not the user. I am trying to add the user but not on the 0.11 versions that were used during the initial provisioning
Here is what I am having some issues with… I used the reference-architecture repo to get started so everything was based on terraform 0.11 But since then I have setup EKS and ECR which those repos are on TF 0.12+ so I modified my Dockerfile for the child stage to use the newer TF… so I am kind of version split and it is just awkward. Maybe I am just not doing it correct either…
I don’t want to have some of the directories in my conf/
to be TF 0.11 and some to be TF 0.12… that just seems wrong, but maybe that is expected at this point in time??
I use chamber in projects with tf 0.13 and 0.12
I do not use the chamber user module
I give access to the kms key to the ecs task/instance profile to be able to read the secrets
that is how I do it
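A minimal sketch of that approach, granting an ECS task role read access to the SSM parameters and the chamber KMS key (the role, key, and parameter path names here are illustrative):
data "aws_iam_policy_document" "chamber_read" {
  statement {
    actions   = ["ssm:GetParameters", "ssm:GetParametersByPath"]
    resources = ["arn:aws:ssm:us-east-1:111111111111:parameter/myapp/*"]
  }
  statement {
    actions   = ["kms:Decrypt"]
    resources = [aws_kms_key.chamber.arn]
  }
}

resource "aws_iam_role_policy" "chamber_read" {
  name   = "chamber-read"
  role   = aws_iam_role.ecs_task.id
  policy = data.aws_iam_policy_document.chamber_read.json
}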
Yeah makes sense… I am trying to use it with Codefresh so I don’t think that will work… I think I will have to have a user as well
that module is the one that is still in 0.11?
It looks like it… I forked it and started trying to upgrade it but bumped into some issues quickly, so figured I would ask what others were doing before going too far down the rabbit hole. Terraform is not my strong suit yet; I’ve done the basics for a few years but nothing like how cloudposse is doing things, so I am still climbing the steep learning curve
sorry I mean to say which module?
Well I started at https://github.com/cloudposse/terraform-aws-components/tree/master/deprecated/aws/chamber (which used to be terraform-aws-root-modules)
Service catalog of reusable Terraform components and blueprints for provisioning reference architectures - cloudposse/terraform-aws-components
did you notice the /deprecated/?
it has not been updated in a while
you are basically looking for an upgraded version of this module https://github.com/cloudposse/terraform-aws-iam-chamber-user
Terraform module to provision a basic IAM chamber user with access to SSM parameters and KMS key to decrypt secrets, suitable for CI/CD systems (e.g. TravisCI, CircleCI, CodeFresh) or systems which…
you could open a PR and we can review it
I am working on the PR, but having some issues with modules that are used inside this repo. I am going to have to back burner this for a little but will make note to come back to it.
Hi, I am interested in knowing how you organize your IaC; looking for ideas. Currently we are building our new k8s based infrastructure, thus requiring Terraform, helm, helmfiles and gitlab ci. What is a good pattern to combine all these elements? monorepo? repo with submodules? script/makefile magic? what if the helmfiles and charts repos also contain stuff for the infra and main application?
Hi Rel, I have a bit of experience with this. For now we are using a monorepo containing Terraform, Helm charts, Helmfile config and CI config.
If you make sure the configuration is separated well, for example:
repo/
terraform/
helm/
charts/
helmfile.yaml
ci/
then it’s not too difficult to filter git history and have your CI watch for changes based on path filters.
and how do you tag the repo?
Do you also combine infra helm(files) with application helm(files)?
We consider the state of the entire repo to represent the state of all the infra, so each commit can update TF, cluster config and Helm
Because of this I would keep application Helm charts in the application repos
But if the helm charts are in the application repos, the devs may check out a helm chart in a different version than the current tagged version available on master.
Yeah so you have to decide who has responsibility over the charts and release cycle - application developers or ops
Does anyone use terraform-aws-elasticache-redis? I got an error when using this module: Error: Error creating Cache Parameter Group: InvalidParameterValue: The parameter CacheParameterGroupName must be provided and must not be blank. Below is the code:
main_elasticache_redis.tf:
module "redis" {
source = "git::<https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=tags/0.25.0>"
availability_zones = data.aws_availability_zones.available.names
vpc_id = module.vpc.vpc_id
allowed_security_groups = [module.vpc.default_security_group_id]
subnets = module.vpc.private_subnets
cluster_size = var.redis_cluster_size #number_cache_clusters
instance_type = var.redis_instance_type
apply_immediately = true
automatic_failover_enabled = true
engine_version = var.redis_engine_version
family = var.redis_family
#enabled = var.enabled
cluster_mode_enabled = true
enabled = true
replication_group_id = var.replication_group_id
elasticache_subnet_group_name = var.elasticache_subnet_group_name
at_rest_encryption_enabled = var.at_rest_encryption_enabled
transit_encryption_enabled = var.transit_encryption_enabled
cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled
parameter = [
{
name = "notify-keyspace-events"
value = "lK"
}
]
context = module.this.context
}
data.tf:
provider "aws" {
version = ">= 2.55.0"
region = var.region
}
context.tf:
module "this" {
source = "git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>"
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
context = var.context
}
variable "context" {
type = object({
enabled = bool
namespace = string
environment = string
stage = string
name = string
delimiter = string
attributes = list(string)
tags = map(string)
additional_tag_map = map(string)
regex_replace_chars = string
label_order = list(string)
id_length_limit = number
})
default = {
enabled = true
namespace = null
environment = null
stage = null
name = null
delimiter = null
attributes = []
tags = {}
additional_tag_map = {}
regex_replace_chars = null
label_order = []
id_length_limit = null
}
description = <<-EOT
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
EOT
}
variable "enabled" {
type = bool
default = true
description = "Set to false to prevent the module from creating any resources"
}
variable "namespace" {
type = string
default = null
description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}
variable "environment" {
type = string
default = null
description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}
variable "stage" {
type = string
default = null
description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}
variable "name" {
type = string
default = null
description = "Solution name, e.g. 'app' or 'jenkins'"
}
variable "delimiter" {
type = string
default = null
description = <<-EOT
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
EOT
}
variable "attributes" {
type = list(string)
default = []
description = "Additional attributes (e.g. `1`)"
}
variable "tags" {
type = map(string)
default = {}
description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}
variable "additional_tag_map" {
type = map(string)
default = {}
description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}
variable "label_order" {
type = list(string)
default = null
description = <<-EOT
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
EOT
}
variable "regex_replace_chars" {
type = string
default = null
description = <<-EOT
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
EOT
}
variable "id_length_limit" {
type = number
default = null
description = <<-EOT
Limit `id` to this many characters.
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
EOT
}
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
try posting this code as a code snippet, it will be a little easier to read
Done. Thanks.
I suspect it’s because your top-level context module is being initialised without any useful values
eg, none of the variables that it loads are being set
From what I know of the null label module, you need to set at least one of namespace/environment/stage/name
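For reference, a minimal sketch of setting those on the top-level label module (the values are placeholders):
module "this" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"
  namespace = "eg"
  stage     = "dev"
  name      = "redis"
}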
Hi Alex. Thank you. I set all values for namespace/environment/stage/name. That error is gone. But, I got another error, Error: Error creating Elasticache Replication Group: InvalidParameterValue: Number of node groups cannot be less than 1. Do you have any idea?
I think this question can be answered by reading the README for the module: https://github.com/cloudposse/terraform-aws-elasticache-redis
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
2020-11-10
hey guys
is there a way to customize the metadata information using this module https://github.com/cloudposse/terraform-aws-ec2-autoscale-group ?
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
if it’s not in the module variables then most probably no, but it is easy to do with a dynamic block
PRs are welcome
Hi guys, I’m using the terraform-aws-cloudfront-s3-cdn terraform module for creating a cloudfront distribution… And I’ve managed to use it for a standard distribution…
But I also need to create a second distribution for a redirect, so I used the variable redirect_all_requests_to
with the url to redirect to….
all goes well, except the s3 bucket created is not configured as a redirect website…. just a standard bucket, as in the creation of a standard cloudfront distribution… Am I missing something? Do I need to configure the s3 bucket myself as a redirect website after the module finishes creating the cloudfront distribution?
mmm I have never done this but I guess you add the redirect rules in cloudfront instead of the bucket
Actually no, I’ve done it by hand, and you have to do it in the bucket for the redirect to work… In the terraform module I thought when you add the redirect_all variable the bucket would be created with the redirect, but it seems it doesn’t… So I don’t see what that option actually does…
I’m using the current master from https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
And I’m using the following variables:
Any help would be helpful… thanks
Am I reading this right? Is it true Terraform does not support cloning an RDS cluster? for real??????
what is it you’re reading?
oh it’s an aurora feature
interesting. i’ve used the snapshot and create workflow. clone is new to me
clone is hours faster
makes sense
I clone a 600GB cluster in 3 min
there is a linked pr implementing an underpinning piece… looks active as of a few days ago, so might get some traction on the clone issue… https://github.com/hashicorp/terraform-provider-aws/pull/7031
Relates #5286 Changes proposed in this pull request: Added support for Aurora point in time restore and clone. Output from acceptance testing: $ make testacc TEST=./aws TESTARGS="-run=TestAc…
ohhh I did not see that one
if that gets merged that will solve my problem
oh it wasn’t linked after all. two prs implementing the same feature. i found this one just by searching pulls for “clone”
@jose.amengual fyi aws provider v3.15.0 just dropped with support for the aurora rds point in time restore feature
resource/aws_db_instance: Add restore_to_point_in_time argument and latest_restorable_time attribute (#15969)
https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.15.0
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
NO FING WAY…..
I literally hacked my way to do this with local exec
I have been testing it…….
hate it when that happens lol
I was about to merge the PR
I guess I have more work to do
thanks for letting me know @loren
no prob!
this is a half done implementation
they do not allow you to specify the security group
it defaults to the default vpc security group
Dang. So close! Open a new issue I guess
I’m testing it, the docs are not so clear but I was able to pass the SGs
but I need to see if my data is there now
resource "aws_rds_cluster" "clone_us_west_2" {
count = var.clone_enabled ? 1 : 0
cluster_identifier = "${local.cluster_identifier_us_west_2}-clone"
vpc_security_group_ids = local.cluster_security_groups_ids_us_west_2
db_subnet_group_name = "${local.cluster_identifier_us_west_2}-clone"
db_cluster_parameter_group_name = "${local.cluster_identifier_us_west_2}-clone"
skip_final_snapshot = true
tags = local.complete_tags
restore_to_point_in_time {
source_cluster_identifier = local.cluster_identifier_us_west_2
restore_type = "copy-on-write"
use_latest_restorable_time = true
}
provider = aws.us_west_2
}
resource "aws_rds_cluster_instance" "clone_us_west_2" {
count = var.clone_enabled ? 1 : 0
identifier = "${local.cluster_identifier_us_west_2}-clone-1"
cluster_identifier = "${local.cluster_identifier_us_west_2}-clone"
instance_class = var.instance_type
db_subnet_group_name = "${local.cluster_identifier_us_west_2}-clone"
tags = local.complete_tags
engine = "aurora"
engine_version = "5.6.mysql_aurora.1.22.2"
provider = aws.us_west_2
depends_on = [aws_rds_cluster.clone_us_west_2]
}
it works
Nice! Fwiw, instead of depends_on in the instance resource, can you pass through the cluster_identifier from the cluster instance?
Yes, I think that is possible
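A sketch of that change against the snippet above; the reference to the cluster resource creates an implicit dependency, so depends_on can be dropped:
resource "aws_rds_cluster_instance" "clone_us_west_2" {
  count                = var.clone_enabled ? 1 : 0
  identifier           = "${local.cluster_identifier_us_west_2}-clone-1"
  # referencing the cluster resource makes the dependency implicit
  cluster_identifier   = aws_rds_cluster.clone_us_west_2[0].id
  instance_class       = var.instance_type
  db_subnet_group_name = "${local.cluster_identifier_us_west_2}-clone"
  tags                 = local.complete_tags
  engine               = "aurora"
  engine_version       = "5.6.mysql_aurora.1.22.2"
  provider             = aws.us_west_2
}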
hey guys i am using the https://github.com/cloudposse/terraform-aws-ec2-autoscale-group. I am following the example. I am able to plan it but getting the following error.
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
module.autoscale_group.aws_autoscaling_group.default[0]: Creating... Error: One of `id` or `name` must be set for `launch_template`
TF version I am running is 0.12.0 and I am also using 0.5.0 of the module.
Has anybody else had a similar issue or know what might cause this?
Did you try invoking the example from the examples/complete
?
Yes
@Erik Osterman (Cloud Posse) That’s exactly what i did but i removed the VPC & Subnets from it and then ran it.
@Jeremy G (Cloud Posse) error i am getting is this
module.autoscale_group.aws_autoscaling_group.default[0]: Creating... Error: One of `id` or `name` must be set for `launch_template`
@aravind.kandalam498 Where did you start? I mean, what is the root module, what module is it calling.
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
calling the above module.
You are using 0.7.3 with Terraform 0.13?
Yes i tried that and then i went back to 0.5.0 with TF .12
Are you setting mixed_instances_policy
? If so, to what?
well i was not planning to use mixed_instances_policy
but this is what i had in the example
mixed_instances_policy = {
instances_distribution = null
override = [{
instance_type = "t3.2xlarge"
weighted_capacity = null
}]
}
I could be wrong, but this seems to me to be an issue with your environment. In particular, cached modules, providers, and/or state. Clear out any .terraform or .module directories. Also, try module version 0.7.0 with the latest Terraform 0.13.x
sure, will try that again.
is there a version you recommend with TF 0.12?
Error: Unsupported Terraform Core version
on .terraform/modules/autoscale_group.label/versions.tf line 2, in terraform:
2: required_version = "~> 0.12.0"
Module module.autoscale_group.module.label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0.
Looks like 0.7.0 is using a null-label that needs 0.12.0 and can’t run with 0.13.0
Try 0.7.1 with TF 0.13, or 0.6.0 with TF 0.12
will do
Roll back all the way to 0.3.0 with TF 0.12 if you are still stuck.
k
We will fix the module if you find and isolate the bug, but so far the module looks good to me.
Thanks again for your help. i will try it out
@Jeremy G (Cloud Posse) you were correct.
Would you please be more specific? What fixed the issue for you?
Sure! I went ahead and deleted the .terraform folder, which had the modules as well. I then downloaded TF 0.12.29 using tfswitch and used 0.6.0 of the root module to build without the mixed_instances_policy.
I also verified modules were working with 0.12.29
2020-11-11
Hi, regarding my issue with the terraform-aws-cloudfront-s3-cdn
module I found the issue:
For the redirect_all_requests_to
option to work, I also need to set the website_enabled = true
variable… But the documentation does not say that.
Please update the documentation. I’ve created a bug report with it at: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/issues/111
Hi, I've asked on the terraform channel on slack, but got no answer. I'm creating a standard cloudfront distribution, with an s3 origin and all works ok. I then need to create a second clou…
Thanks and keep up the good work
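For anyone hitting the same thing, a minimal sketch of the combination that worked (the redirect target is a placeholder and the other required inputs are omitted):
module "redirect_cdn" {
  source                   = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=master"
  website_enabled          = true
  redirect_all_requests_to = "https://www.example.com"
  # plus the usual namespace/stage/name and DNS inputs
}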
v0.14.0-rc1 0.14.0 (Unreleased) NEW FEATURES:
Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.
terraform init will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (#26524)
This follows on from some earlier work that introduced models for representing provider dependency "locks" and a file format for saving them to disk. This PR wires the new models and beha…
Hooray!
Terraform will now support reading and writing all compatible state files, even from future versions of Terraform. This means that users of Terraform 0.14.0 will be able to share state files with future Terraform versions until a new state file format version is needed. We have no plans to change the state file format at this time.
finally
Yeah, this will be interesting. I wonder if the recommended approach of pinning an explicit terraform version in root modules will go away with v14
i believe they are deprecating that terraform hcl block
The version
argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. (#26135)
hmmm or perhaps just the version
arg for providers. i can’t find the terraform hcl block referenced anywhere for 0.14 or 0.15 changelog
Terraform by HashiCorp
The version argument is deprecated in Terraform v0.14 in favor of required_providers and will be removed in a future version of terraform (expected to be v0.15). The provider configuration document…
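A minimal sketch of the replacement syntax, declaring the constraint in required_providers instead of the provider block:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.15.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}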
can drive letters be assigned in Terraform
You can call out to a script using remote exec (assuming you need to do this on a server)
Thank you. As suggested, using an Azure VM Extension resource with a custom PowerShell script.
2020-11-12
Hi, I was trying to create a Kubernetes Ingress resource using Terraform. I don’t see an option to specify pathType like Prefix, Exact etc inside the kubernetes_ingress block… any idea on how to do that?
@Erik Osterman (Cloud Posse) hi! I always wanted to ask you one thing. Why don’t you use terraform registry source in cloudposse modules instead of specifying git https url?
Haha - mostly since when we started it wasn’t available
but the reason I laugh, is this literally came up at standup today
We’re probably going to start switching things over.
What’s the benefit?
Hurrah
IMHO it’s a bit more comfort to read in code
I think our module stats will go through the roof once we do that everywhere
2020-11-13
Question for those of you that use Terraform Cloud: is there a way to run the remote applies against AWS using AWS profile instead of using aws creds environment variables yet?
Yeah, quick Google search https://registry.terraform.io/providers/hashicorp/aws/latest/docs#shared-credentials-file
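A minimal sketch of what those docs describe, with the profile name and path as placeholders (see the clarification below about remote runs):
provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "myprofile"
}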
@rei to clarify this is to run remotely on terraform cloud (previously named terraform enterprise)
What about cloud agents?
Provision a Terraform Cloud Agent on an existing Kubernetes cluster. - cloudposse/terraform-kubernetes-tfc-cloud-agent
Using this, you can deploy it to Kubernetes with IRSA
This way it also runs in your VPC and can manage things like database users with the postgres/mysql providers
(this is an alternative to “using aws creds environment variables yet”, not necessarily a solution to using “profiles”, however, I think that could also work if you mount a configmap with the profile settings, since terraform uses the AWS SDK, it should “just work” )
@btai ok, got it
Anyone here using vscode for Terraform? Any of you have an actual working extension for popping up contextual docs that works? I remember the good old days when I had it… ever since the language server came in I never had it anymore.
If somebody has this working… I would love to hear about it.
I’ve had to disable the Terraform extension’s LS. The error throwing was getting annoying.
They’ve mostly fixed the error spam in the past couple of weeks. But yeah, it’s still borderline useless. I don’t get completions beyond the built in vscode “other tokens from this file”
It seems to assume your terraform code is in the root directory, and follows modern standards exactly (eg using the 0.13 provider version constraints, etc)
My colleague was informing me today how well his terraform things in intellij was working.. gee thanks
For me this is the main (if not only) reason to use intellij. There are some plugins for VSCode for Terraform and Terragrunt ( https://github.com/4ops/vscode-language-terraform ), but they are rather limited and provide limited static completion.
Adds support for the Terraform configuration language to Visual Studio Code - 4ops/vscode-language-terraform
I’ve also given IntelliJ a stab for these reasons. It beats having TF docs open in a browser or parsing the provider schema.
I just tried vscode again this morning and it managed to autocomplete (in order) lifecycle, ignore_changes = [ name ]. Give it another try everyone!
In a single repo workspace?
Most of my auto completion is via dictionary history and not due to functioning language support. Omg look at this though!!? https://github.com/hashicorp/terraform-ls/blob/master/CHANGELOG.md
Terraform Language Server. Contribute to hashicorp/terraform-ls development by creating an account on GitHub.
0.10.0 (Unreleased)
FEATURES:
Support module wide diagnostics (#288) Provide documentation on hover (#294) ENHANCEMENTS
I was testing an internal module repo. So a repo with terraform files in the root directory, and basic terraform init
having been run
I just tested a monorepo which has some terraform configurations in sub-directories, and no dice there
man it just keeps getting worse here … falling like a deck of cards hehe
the vscode terraform-ls just added support for docs on hover… https://github.com/hashicorp/terraform-ls/releases/tag/v0.10.0
FEATURES: Support module wide diagnostics (#288) Provide documentation on hover (#294) ENHANCEMENTS: Add support for upcoming Terraform v0.14 (#289) completion: Prompt picking type of provider/d…
It’s not great… Like cmon gimme a link please.
Haha I shouldn’t rag on them. They’re trying. I’m sure that’ll get better with time.
Haha yeah that’s not so helpful
still doesn’t work for monorepos. In fact it’s even worse – crashes repeatedly if you make the panels appear. It’s been months now so I guess they have no interest in supporting this
I completely gave up for now … here’s my new combination …
At least it works. And no longer randomly I get “You have no formatter for terraform installed” and other depressing things.
Hahaha not disappointed in you @mfridh — Disappointed in terraform and you having to go to such lengths to have a useable environment in the biggest editor of today. It’s weak on their part. They need to step it up.
It’s slowly getting better. For instance, it doesn’t crash constantly in monorepos now!!
I know what you meant @Matt Gowie .
The completion itself is pretty bad to nonexistent in my current setup but it’s ok. I take stable and predictable over the old situation.
Hey, do you even test your modules?
Error: error creating RDS cluster: InvalidParameterCombination: Aurora Serverless DB clusters are always encrypted at rest. Encryption can't be disabled.
status code: 400, request id: 6f8ac312-53d1-4d12-9602-e5fb64cc102f
Maybe that warning about using the master branch was a real one? I’m shocked.
are you referring to this var https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/variables.tf#L185 ?
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
yup. Am just adding a serverless ? conditional on it…
I think I found another bug in the example too …
family = "mysql5.7
change to:
family = "aurora-mysql5.7"
we build serverless using this module without issues
version 0.35.0
Oh, I can too if I explicitly set those variables. So the bugs are very minor, no problemo.
howdy PePe btw
hello
we automatically test on AWS only examples/complete
the rest of the examples are for reference only
thanks for finding the bug with the encryption for serverless
we need to update that example (set the var to true
)
like I said, no biggie, but can’t hurt to fix those!
yeah I’ll do that later.
technically the correct thing could be to have that value not set… would that mean passing an explicit null
in the case where this is a conditional?
if serverless set to null
if not set to current default
There’s no build-harness helper for the context.tf stuff?
nevermind… hacking some mods for it
Has anyone used terraform with the eks efs driver? I’m having a bit of trouble making the volume claim example here: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html work with the kubernetes_persistent_volume_claim resource (thread)
The Amazon EFS Container Storage Interface (CSI) driver provides a CSI interface that allows Kubernetes clusters running on AWS to manage the lifecycle of Amazon EFS file systems.
I’ve got this:
resource "kubernetes_persistent_volume" "sona_efs" {
metadata {
name = local.efs_name
}
spec {
access_modes = ["ReadWriteMany"]
capacity = {
storage = "50Gi"
}
volume_mode = "Filesystem"
storage_class_name = "efs-sc"
persistent_volume_reclaim_policy = "Retain"
persistent_volume_source {
}
csi = {
driver = "efs.csi.aws.com"
volume_handle = aws_efs_file_system.efs_file_system.id
}
}
}
but it doesn’t like the csi block (I also tried putting it inside of persistent_volume_source)
found a similar question here https://stackoverflow.com/questions/64630508/persistent-volume-source-configuration-for-terraform-kubernetes-persistent-volum#
I’m using EFS as a CSI driver in a k8s cluster. I would like to use Terraform to create a PV that will use the efs storage class. I verified that I can create the PV "manually". I would l…
I see there is a csi type: https://github.com/hashicorp/terraform-provider-kubernetes/blob/7cc826c9e55af33aee11165a425155e4f8cc2ae1/kubernetes/structure_persistent_volume_spec.go#L972 but for some reason terraform doesn’t like it
Terraform Kubernetes provider. Contribute to hashicorp/terraform-provider-kubernetes development by creating an account on GitHub.
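For what it’s worth, in the kubernetes provider csi is a nested block inside persistent_volume_source rather than a top-level attribute assigned with =, so a sketch along these lines (not verified against this provider version) may get past the parse error:
resource "kubernetes_persistent_volume" "sona_efs" {
  metadata {
    name = local.efs_name
  }
  spec {
    access_modes = ["ReadWriteMany"]
    capacity = {
      storage = "50Gi"
    }
    volume_mode                      = "Filesystem"
    storage_class_name               = "efs-sc"
    persistent_volume_reclaim_policy = "Retain"
    persistent_volume_source {
      # csi is a block, not an attribute, and lives inside persistent_volume_source
      csi {
        driver        = "efs.csi.aws.com"
        volume_handle = aws_efs_file_system.efs_file_system.id
      }
    }
  }
}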
2020-11-14
Please can I get help to find the best way to link two resources (aws Log_group & KMS_Key) with a for_each loop within each of them - More details inside Thread
Good evening, please can somebody help confirm if this is the best approach to what I’m trying to achieve? It is working, but I’ve seen similar things in past versions of TF with count that appear to work in a similar way and cause real issues when the source data is changed or reordered. I’m trying to use the below collection of objects as a data structure I use throughout my code; this is just a stripped back version of a much bigger structure. The aim is to say: for each one of the collect_list entries, create a log_group based on the keys and values, dynamically deciding if it needs a custom KMS key. If true, create one and then link the key ID to the log_group.
logs_map = {
files = {
collect_list = [
{
"file_path" = "C:\cms\logs\iis\W3SVC1\*"
"log_group_name" = "cms/{environment}/iis"
"log_stream_name" = "{instance_id}_{local_hostname}"
"retention_in_days" = 7
"use_custom_kms_key" = true
"auto_removal" = true
},
{
"file_path" = "C:\maint\logs\iis\W3SVC2\*"
"log_group_name" = "maint/{environment}/iis"
"log_stream_name" = "{instance_id}_{local_hostname}"
"retention_in_days" = 7
"use_custom_kms_key" = true
"auto_removal" = true
},
{
"file_path" = "C:\cms\website\App_Data\Logs\*"
"log_group_name" = "cms2/{environment}/umbraco"
"log_stream_name" = "{instance_id}_{local_hostname}"
"retention_in_days" = 7
"use_custom_kms_key" = true
"auto_removal" = true
},
]
}
}
resource "aws_cloudwatch_log_group" "loggroups" {
for_each = { for key, value in var.logs_map.files.collect_list: key => value }
name = replace(each.value.log_group_name, "{environment}", var.environment)
retention_in_days = each.value.retention_in_days
kms_key_id = each.value.use_custom_kms_key == true ? aws_kms_key.log_group[each.key].arn : ""
# tags = var.tags
}
resource "aws_kms_key" "log_group" {
for_each = { for key, value in var.logs_map.files.collect_list: key => value if value["use_custom_kms_key"] == true }
description = "KMS key used to encrypt log files. One per log group. ${element(split("/", each.value.log_group_name), 0)}"
deletion_window_in_days = 30
enable_key_rotation = true
#tags = merge({ Name = "${element(split("/", each.value.log_group_name), 0)}-default-kms" }, var.tags)
}
While this works, it feels far too risky and susceptible to error if the items in the list change order.
Am I mistaken when I thought terraform 0.13+ changed the way it stored items within the state? Rather than being array-index based:
aws_cloudwatch_log_group.loggroups[“0”]
aws_kms_key.log_group[“0”]
it used a name as a unique identifier instead? Assumedly something like:
aws_cloudwatch_log_group.loggroups[“mylog”]
aws_kms_key.log_group[“myotherlog”]
So when used within a for_each loop the resources created would be easier to refer to later on and it didn’t matter if the order changed.
Am I completely way off with my resources configuration or even my data layout? Should I be doing the above in a different way? Thank you for taking the time to read this. Sorry it was so long.
if you use count
on a list, and the items in the list change, then TF will try to recreate everything (this could be ok in some cases, but not acceptable in other cases)
for_each
does not have that problem
use for_each
whenever possible (especially when you create many instances of the same resource)
also, resources with count
are lists, so you have to use a list index to get one item
resource with for_each
are maps, so you have to use a map key to get one item
Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.
Hi Andriy thank you for taking the time to reply and for the information.
I get count and for_each are different and I’ve used them for a while but I’ve never need to link between the resources like I’ve done in my example.
Given what you’ve said, is it fair to assume that the each.key of the newly created KMS will always be in sync with the each value of the log group?
Sorry if it’s obvious, I’m a little paranoid that I set this up and somebody comes along and adds a new value to my data structure e.g log group number 4 and all of a sudden TF updates the existing resources to use the wrong keys.
To put you on the spot: would you be happy doing it as I have, or would you restructure it in some way?
Again thanks in advance.
i think your concern is coming from an ill-advised for_each
expression…
for key, value in var.logs_map.files.collect_list: key => value
key
is not an actual key in a map here. it is the index of the list. using the index of the list in this way makes the for_each subject to all the same problems as count
you could instead do something like:
for item in var.logs_map.files.collect_list: item.file_path => item
note the use of item.file_path
to create the key of the map. this becomes the resource label/identifier in the tfstate. you don’t have to use file_path
, just any attribute that is unique over all the items. in all lists of objects, i often require a name
attribute that must be unique, and use that as the for_each key
Okay, I’m going to need more coffee. So, are you suggesting that a better data structure would be to do away with the list which I used to symbolise each log_group. Eg one item for each log I want. And change it to map/object so I have a key rather than the numerical index. Which would help do the matching later to ensure “key A”match “log group A”.
Sorry Loren, if I missed your point.
no change to the data structure is necessary, just the for_each expression
for_each = { for item in var.logs_map.files.collect_list: item.file_path => item }
Okay, I’ll go have a go. Would that be in both log group and the kms key or just one or the other ?
both, they should use the same key so you can index into the map the way you have it: kms_key_id = each.value.use_custom_kms_key == true ? aws_kms_key.log_group[each.key].arn : ""
thank you, I’ll go do some more testing.
using file_path
as the key, this makes your resource address look like:
aws_cloudwatch_log_group.loggroups["C:\\cms\\logs\\iis\\W3SVC1\\*"]
if you do not like that resource address, then you might want to update the data structure with a new attribute that you use as the key… e.g.
logs_map = {
files = {
collect_list = [
{
"name" = "cms_w3svc1"
"file_path" = "C:\\cms\\logs\\iis\\W3SVC1\\*"
"log_group_name" = "cms/{environment}/iis"
"log_stream_name" = "{instance_id}_{local_hostname}"
"retention_in_days" = 7
"use_custom_kms_key" = true
"auto_removal" = true
},
...
]
}
}
which adds a unique name
attribute to each item. and then your for_each like:
for_each = { for item in var.logs_map.files.collect_list: item.name => item }
and your resource address would look like:
aws_cloudwatch_log_group.loggroups["cms_w3svc1"]
hopefully that better demonstrates the relationship between the for_each expression and your resource address…
@loren you have firmly hammered the nail into my thick skull, it’s finally clicked. Thank you for taking the time to answer the question, and my thanks also to @Andriy Knysh (Cloud Posse)
I can clearly see now the naming is working and that it is indeed no longer based on an index count, and it was my own doing that was naming it that way.
Hello, I have read the whole thread, and want to say thanks for taking the time to address the question and explain everything. I ran into the same need previously and solved it by changing the data structure to a map of maps
managing the map of objects directly certainly works also (instead of converting the list to a map with the for expression). i haven’t quite figured out why i prefer a list of objects, but something about it just works better for me
2020-11-15
No way to use a data source value in the mysql provider, right? … if I try it seems the value is just null.
that’s right. If you check the github issues this is a known bug. Specifically the bug is that all the values have a default, so it seems to work but doesn’t really
We wanted to create a MySQL server, then create users/databases within it, all in one Terraform configuration. This is not possible, and you have to split your configuration in two
Actually I have to do the opposite. I have to combine it… So that I create the initial RDS cluster and all services that need to use it from the same configuration
I wanted to fetch the endpoint, master username, password from parameter store to be able to split the creations…
i suggest you provide these values as variable inputs. The system you use that runs Terraform can load the parameter and pass in the data
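A minimal sketch of that pattern; the variable names are placeholders and the values would be supplied by whatever wrapper runs terraform (e.g. TF_VAR_db_endpoint or -var flags):
variable "db_endpoint" {}
variable "db_username" {}
variable "db_password" {}

provider "mysql" {
  endpoint = var.db_endpoint
  username = var.db_username
  password = var.db_password
}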
Yeah that’s the method I guess. These things always make me want to try pulimi…
Ok, I did some more digging.
You can use dynamic data in the provider. But it only works at initial creation. In the Refresh pass it does NOT populate the values at all.
I'm currently writing a provider to store the secret in pass (https://www.passwordstore.org/) (https://github.com/camptocamp/terraform-provider-pass) and it sometimes work and sometimes it look…
actually… seems like it’s maybe possible if I don’t use a module for the data reading… (I was using the cloudposse ssm parameter module)
I had issues loading the data from terraform remote state data source.
I want to advise you not to go down this path. It’s a rabbit hole and is not a workflow well supported by Terraform (configuring providers with dynamic data). If you read the Terraform docs, it’s specifically called out as something to avoid:
You can use expressions in the values of these configuration arguments, but can only reference values that are known before the configuration is applied. This means you can safely reference input variables, but not attributes exported by resources (with an exception for resource arguments that are specified directly in the configuration).
Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.
Hey :wave: @Alex Jurkiewicz I just needed a clarification on this thread. Do you mean we should avoid using terraform_remote_state
as it is not well supported by Terraform yet and it can cause problems?
I am using something like below.
data "terraform_remote_state" "rds" {
backend = "s3"
config = {
bucket = "somebucket"
key = "terraform.tfstate"
region = "eu-west-1"
profile = "default"
}
}
locals {
dbInstanceEndPoint = data.terraform_remote_state.rds.outputs.endpoint
}
module "container" {
source = "[email protected]:cloudposse/terraform-aws-ecs-container-definition.git?ref=0.44.0"
container_name = var.container_name
...
...
...
environment = [
{
name = "DATABASE_HOST"
value = local.dbInstanceEndPoint
}
]
privileged = false
}
I am planning to use terraform_remote_state
extensively in my use case. Like creating each service Like RDS, ASG, ALB separately and use the outputs via terraform_remote_state
into other services like ECS.
Thanks
Terraform remote state is great, use it lots. My advice is “only configure providers with static data”. That is, values derived only from input variables or hardcoded values. Don’t attempt to configure a provider with data source attributes.
It works, probably, if done in the root module. Doing it dynamically in a sub-module is explicitly advised against in terraform and results in horrible experiences… (unfortunately)
@mfridh I am not sure if I understand your point correctly. Are you talking about making provider configuration dynamic? Can you share a code example please?
Your example is fine and this thread was more about the case I mention with provider instantiation in sub modules. A practice I certainly in some cases would like if it worked, for flexibility reasons, but it’s just not possible in a sane way today.
Fresh off the press: https://github.com/cloudposse/terraform-yaml-config
We’re using YAML more and more to define configuration in a portable format that we use with terraform. This allows us to define that configuration from both local and remote sources (via https). For example, we use it for opsgenie escalations, datadog monitors, SCP policies, etc.
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
terraform >= 0.12 has actually just become a really good structured config conversion tool
yaml => json => hcl etc
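A minimal sketch of that conversion, assuming a config.yaml sitting next to the module:
locals {
  # parse YAML into native Terraform maps/lists
  config = yamldecode(file("${path.module}/config.yaml"))
}

output "config_as_json" {
  # round-trip the same structure out as JSON
  value = jsonencode(local.config)
}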
2020-11-16
2020/11/15 15:03:18 [ERROR] eval: terraform.evalReadDataRefresh, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: d2c30533-f6da-40e2-925c-58b5404cb356
2020/11/15 15:03:18 [ERROR] eval: terraform.EvalSequence, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: d2c30533-f6da-40e2-925c-58b5404cb356
Error: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: 2a7b0234-0255-496d-8deb-91877b5aad94
Error: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: d2c30533-f6da-40e2-925c-58b5404cb356
2020/11/15 15:03:18 [ERROR] eval: terraform.evalReadDataRefresh, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: 2a7b0234-0255-496d-8deb-91877b5aad94
2020/11/15 15:03:18 [ERROR] eval: terraform.EvalSequence, err: UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: 2a7b0234-0255-496d-8deb-91877b5aad94
2020-11-15T15:03:18.578Z [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
Cleaning up file based variables
00:00
ERROR: Job failed: exit status 1
my aim is to use the ami produced by packer in terraform,
{
"builders": [
{
"type": "amazon-ebs",
"access_key": "###",
"secret_key": "###",
"ami_name":"EBS-{{isotime | clean_resource_name}}",
"temporary_iam_instance_profile_policy_document": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:RunInstances",
"ec2:AssociateIamInstanceProfile",
"ec2:ReplaceIamInstanceProfileAssociation"
],
"Resource": "*"
},
{
"Effect" : "Allow",
"Action": "iam:PassRole",
"Resource": "*"
}]
},
"region": "us-east-1",
"ami_regions": ["us-east-1"],
"instance_type": "t2.micro",
"ssh_keypair_name": "SysOps2020",
"ssh_private_key_file": "/home/ubuntu/keys/SysOps2020.pem",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "amzn2-ami-hvm-2.0.*-x86_64*",
"root-device-type": "ebs"
},
"owners": ["amazon"],
"most_recent": true
},
"ssh_username": "ec2-user"
}
],
"provisioners": [
{
"type": "file",
"source": "scripts",
"destination": "/home/ec2-user/"
},
{
"type": "file",
"source": "code",
"destination": "/home/ec2-user/"
},
{
"type": "shell",
"script": "scripts/install.sh"
},
{
"type": "shell",
"script": "scripts/cleanup.sh"
}
]
}
any tips on what i missed?
is there a way of having an if
block inside a resource?
i basically want to switch the value of
logging {
target_bucket = "${var.org_namespace}-${var.environment}-access-logs"
}
depending upon the value of a variable
Take a look at dynamic blocks: https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks you’d be able to do it with that.
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
yeh
here’s an example from the config aggregator i was just working on:
resource aws_config_configuration_aggregator this {
name = var.name
tags = merge({ Name = var.name }, var.tags)
dynamic account_aggregation_source {
for_each = var.account_aggregation_source != null ? [var.account_aggregation_source] : []
content {
account_ids = account_aggregation_source.value.account_ids
all_regions = account_aggregation_source.value.all_regions
regions = account_aggregation_source.value.regions
}
}
dynamic organization_aggregation_source {
for_each = var.organization_aggregation_source != null ? [var.organization_aggregation_source] : []
content {
all_regions = organization_aggregation_source.value.all_regions
regions = organization_aggregation_source.value.regions
role_arn = organization_aggregation_source.value.role_arn
}
}
}
complex object types can be null
‘d! very convenient for this kind of thing
here are the variable defs that work with that config…
variable account_aggregation_source {
description = "Object of account sources to aggregate"
type = object({
account_ids = list(string)
all_regions = bool
regions = list(string)
})
default = null
}
variable organization_aggregation_source {
description = "Object with the AWS Organization configuration for the Config Aggregator"
type = object({
all_regions = bool
regions = list(string)
role_arn = string
})
default = null
}
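Applied back to the original logging question, a minimal sketch (the toggle variable and bucket naming are illustrative):
variable "logging_enabled" {
  type    = bool
  default = true
}

resource "aws_s3_bucket" "this" {
  bucket = "${var.org_namespace}-${var.environment}-bucket"

  # emit the logging block only when the toggle is on
  dynamic "logging" {
    for_each = var.logging_enabled ? [1] : []
    content {
      target_bucket = "${var.org_namespace}-${var.environment}-access-logs"
    }
  }
}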
Is someone using this with terragrunt? https://www.infracost.io/
I’m trying to use the tf-state
version but terragrunt stores states in different directories for each module (and I like it), so I’m trying to figure out a way of consolidating all states in a single one just for infracost (this should be the main question, actually).
Cost estimates for Terraform - in your pull requests
2020-11-17
Hi all. Quick sanity check on cloudposse/terraform-aws-s3-bucket - is there really no way to enable the lifecycle rule for aborting incomplete multipart uploads without also enabling full object deletion? I have buckets that are actively used where I want to keep objects around permanently, but want to ensure orphaned multiparts are cleaned up. Under v0.25.0, the object expiration rule is mandatory, while everything else can be disabled. Surely I’m misreading this somehow.
I’m pretty sure I added that in long time ago
This sure looks like it’s mandatory for expiration to always be on… https://github.com/cloudposse/terraform-aws-s3-bucket/blob/master/main.tf#L51-L53
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
yes, but no reason it couldn’t be in a dynamic. you could send a PR
or set expiration to 0, which I think is unlimited
Yeah, a dynamic seems more idiomatic with the rest of the module. Glad to know I wasn’t misreading things. Surprised me that others haven’t hit this before. Must not be a lot of users of lifecycle rules. :)
we use it
but we always set expiration
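A hypothetical sketch of what that dynamic could look like inside the module’s lifecycle_rule block (the toggle variable is made up, not an existing module input):
dynamic "expiration" {
  # skip the expiration rule entirely when not wanted
  for_each = var.expiration_enabled ? [1] : []
  content {
    days = var.expiration_days
  }
}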
I’d like to create multiple identical instances in TF 12 using a for loop and reference those instances in another loop to attach them to a LB. I’m having a lot of trouble figuring out how to get this pattern working. I’ve tried creating with count and references with for loops with local variables, direct for_each = aws_instance.my_instances
under the resource, and really, many more different things that I can’t think of now because I’ve been staring at this problem too long. I haven’t been able to find anything using google except for much more complex patterns, where the instances are defined by static maps, and sometimes even not homogenous. I feel like this should be a much simpler pattern …
do you have some code you can share? it’s easier to update example code to point you in the right direction than to come up with something totally from scratch
though, if you look at this thread, we’re basically talking about exactly this use case… https://sweetops.slack.com/archives/CB6GHNLG0/p1605395904242600
Please can I get help to find the best way to link two resources (aws Log_group & KMS_Key) with a for_each loop within each of them - More details inside Thread
That’s fair. This is how I’m doing it currently with count:
resource "aws_instance" "advprx_instance" {
count = var.proxy_counts[var.environment]
ami = var.override_ami != "" ? var.override_ami : var.AMI_map[var.environment]
instance_type = var.instance_type
vpc_security_group_ids = var.security_group_map[var.environment]
subnet_id = var.subnet_map[var.environment]
associate_public_ip_address = "false"
key_name = var.keypair_name_map[var.environment]
iam_instance_profile = aws_iam_instance_profile.win-ec2-profile.name
user_data = base64encode(data.template_file.advprx_user_data.rendered)
credit_specification {
cpu_credits = "standard"
}
# Give instance unique name
tags = merge(local.proxy_tags, { "Name" = "${var.host_name}-${count.index + var.override_start_index + 1}" } )
}
# Attach the generated instances to the appropriate target group
resource "aws_lb_target_group_attachment" "advprx_instance_alb_attachment" {
count = var.proxy_counts[var.environment]
# how do I use for_each or for for this?
target_group_arn = local.target_group_arn
# target_id = aws_instance.advprx_instance[each.key].id
target_id = aws_instance.advprx_instance[count.index].id
port = 80
}
and what is the object structure in var.proxy_counts
?
literally just a number?
if you are creating a literal multiple of identical things, then count
is a fine approach. for_each
makes more sense when each item has some unique identifier as one of its properties. in your code, we could construct that identifier for the “Name” tag, but it would still be based on the index and so may not avoid classic limitations of count
Yes, it’s literally a number.
Oh. It wouldn’t avoid the limitations of count? That’s what I was after. Specifically this limitation: if I decrease the count, or increase the count, I didn’t want resources recreated.
increasing the count by one will add one instance, decreasing the count by one will destroy one instance (the last one, you can’t pick)
Ok. Fair enough. Thanks!
fwiw, here’s one way of converting to for_each, but using a custom name
per instance…
locals {
instance_inputs = [
{
name = "foo"
},
{
name = "bar"
},
]
instance_defaults = {
ami = var.override_ami != "" ? var.override_ami : var.AMI_map[var.environment]
instance_type = var.instance_type
vpc_security_group_ids = var.security_group_map[var.environment]
subnet_id = var.subnet_map[var.environment]
associate_public_ip_address = "false"
key_name = var.keypair_name_map[var.environment]
iam_instance_profile = aws_iam_instance_profile.win-ec2-profile.name
tags = local.proxy_tags
user_data = base64encode(data.template_file.advprx_user_data.rendered)
target_group_arn = local.target_group_arn
port = 80
}
instances = [for instance in local.instance_inputs : merge(local.instance_defaults, instance)]
}
resource "aws_instance" "advprx_instance" {
for_each = { for instance in local.instances : instance.name => instance }
ami = each.value.ami
instance_type = each.value.instance_type
vpc_security_group_ids = each.value.vpc_security_group_ids
subnet_id = each.value.subnet_id
associate_public_ip_address = each.value.associate_public_ip_address
key_name = each.value.key_name
iam_instance_profile = each.value.iam_instance_profile
user_data = each.value.user_data
credit_specification {
cpu_credits = "standard"
}
# Give instance unique name
tags = merge(each.value.tags, { "Name" = each.key } )
}
resource "aws_lb_target_group_attachment" "advprx_instance_alb_attachment" {
for_each = { for instance in local.instances : instance.name => instance }
target_group_arn = each.value.target_group_arn
target_id = aws_instance.advprx_instance[each.key].id
port = each.value.port
}
great, thanks!
i also set it up to let you override any property by specifying it in the instance_inputs
object…
:wave: Is there a way to define the session expiration time for the role an ECS task assumes in terraform? The AWS docs state that the default is 6 hours. max_session_duration
for aws_iam_role
only sets the allowed max session but it looks like when changing that to 12 hours, the ECS task’s role still uses the default 6 hour session duration
There are timeouts for resource creation, but the IAM role has a session timeout that defaults to 1 hour and can be changed; I think 12 hours is the max
That is in IAM not terraform
What’s the reason you want to enforce this timeout?
The ECS task itself generates a S3 presigned URL that it passes on to a worker outside AWS. That work can take up to 12 hours and if the session that generated the presigned URL timeouts, the URL becomes invalid.
Experiences setting up innovative Terraform Workspaces using docker..
2020-11-18
Hi all, i’ve got a question about https://registry.terraform.io/modules/cloudposse/cloudtrail/aws/latest
When we store the cloudtrail logs in a different account, should the KMS key for encrypting the objects be a key from the source account, or from the account that stores the logs?
nobody?
Does anyone know how I could point a provider to a github repo/locally? I looked online and didn’t find any info on this. I’ve made some mods to a provider that I would like to try out
If you mean you changed some Go and recompiled a provider: not sure, beyond just copying it into the local folder (something like ~/.terraform.d/plugins/), giving it a version higher than exists in the online repo, then setting that as a constraint in the TF code
The general behavior of the Terraform CLI can be customized using the CLI configuration file.
Yeah, I do what Nick outlined for a fork of the aws-provider. It’s not pretty, but it gets the job done.
Here is my make target to do it:
amplify_aws_provider: $(GOEXE)
git clone --single-branch \
--branch amplify \
git@github.com:masterpointio/terraform-provider-aws.git \
./tmp/terraform-provider-aws
cd ./tmp/terraform-provider-aws && \
go build . && \
cp ./terraform-provider-aws $(PLUGINS_DIR)/$(AWS_PROVIDER_FILE_NAME)
PLUGINS_DIR = ~/.terraform.d/plugins
AWS_PROVIDER_FILE_NAME = terraform-provider-aws_$(AWS_PROVIDER_VERSION)_x4
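On Terraform 0.13+, another option is a provider_installation override in the CLI config, pointing at a local mirror directory laid out as registry.terraform.io/<namespace>/<type>/<version>/<os>_<arch>/; a sketch with the path as a placeholder:
# ~/.terraformrc
provider_installation {
  filesystem_mirror {
    path    = "/home/me/terraform-providers"
    include = ["registry.terraform.io/hashicorp/aws"]
  }
  direct {
    exclude = ["registry.terraform.io/hashicorp/aws"]
  }
}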
2020-11-19
Is it possible to attach elastic IP to the aws_spot_instance_request with terraform?
How do you section up your Terraform root modules?
00-base, 10-dns, 15-network, 20-eks, 30-iam-roles
Sorry, I mean the question can’t be answered that easily; it depends on the complexity of the project
Haha yeah, definitely a “it depends” answer. I just think it’s an interesting question and would like to hear where folks draw their dividing lines.
in my case I do it per application and AWS product too
so I will create a repo to deploy this app on ECS which will include all the necessary access (roles and such) plus any additional resources for the app to run, but the boundary of this TF repo is the container; so if for example the container needs to connect to other external services or a DB, that will go in a separate repo
I do not mix DBs with apps since the deployment process of DBs is very different and has a far bigger blast radius
the repo could start with the db, but almost every time the DB gets migrated to its own repo once things like replication/multi-region come into play, or there are other teams consuming the same data directly from the db
many factors to consider
Cloud Posse’s breakdown: https://github.com/cloudposse/terraform-aws-components
Catalog of reusable Terraform components and blueprints for provisioning reference architectures - cloudposse/terraform-aws-components
(btw, we’ve updated these in the past week - these are all our latest root modules)
Does anyone know where I can find some simple code for automating aws config? I’ve been getting this error for about a day
Creating Delivery Channel failed: InsufficientDeliveryPolicyException: Insufficient delivery policy to s3 bucket: terraform-20201119163429797100000001, unable to write to bucket, provided s3 key prefix is 'config'.
Actually looks like it’s an open issue https://github.com/hashicorp/terraform-provider-aws/issues/8655
This issue was originally opened by @stsraymond as hashicorp/terraform#21325. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version …v…
I have a new Windows image I want to use for terraform installs. Currently it is in an image gallery. Does the terraform call change when it’s an image gallery vs a storage account? I am trying to understand how those work together in terraform.
Launch-day support is pretty darn cool, https://www.hashicorp.com/blog/announcing-support-for-aws-network-firewall-in-the-terraform-aws-provider
The Terraform AWS provider has added support for the newly released AWS Network Firewall service.
Wow, I was just thinking about that last night and didn’t even search for it as I assumed they didn’t.
I should have more faith
Not sure I care about Network Firewall support… but agreed that is super cool. I hope that becomes the pattern instead of the exception.
It’s getting too much for me, what is the added value here compared to what we have already?
There are some organizations out there that need to do filtering based on FQDN or have real IPS capabilities. Usually required by some regulations. Until now, they had to deploy a third party firewall (like check point, palo alto, etc). Now they can use this instead.
Yes, some places we are required to filter egress traffic
Yes. I have to do the math but probably will deprecate our current solution
Hi, I am using the RDS module, and I was thinking about how I can reuse my existing VPC or subnet names. As of now I manually search for vpc_id, subnet_ids, and security_group_ids in the console, and then use them in terraform.tfvars.
I know we can fetch those using data sources, but I can’t find any example that uses the name of a VPC or subnet.
i.e. I need to provision the RDS db into an existing VPC that will always have the same name/tags. How can I refer to them instead of copying and pasting IDs from the AWS console, which is extra work?
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
you need to use the names of the tags with a filter
so it is the Tag named Name
for example
data "aws_vpc" "main_vpc_us_east_1" {
tags = {
provisioning = "terraform"
shared_vpc = "false"
}
}
data "aws_subnet_ids" "private_us_east_1" {
vpc_id = data.aws_vpc.main_vpc_us_east_1.id
tags = {
provisioning = "terraform"
Name = "*private*"
}
}
Thanks, let me try this example
For data sources referencing “unmanaged” resources, I like to add another tag to them like loaded_in_terraform = true
so there’s a little documentation about the link on the source side
Got following error:
module.db.module.db_subnet_group.aws_db_subnet_group.this[0]: Creating...
module.db.module.db_option_group.aws_db_option_group.this[0]: Creating...
module.db.module.db_option_group.aws_db_option_group.this[0]: Creation complete after 0s [id=db-migrate2-20201120063405956700000001]
Error: Error creating DB Subnet Group: InvalidParameterValue: Some input subnets in :[vpc-01e43736cca466036] are invalid.
status code: 400, request id: 9e1683c2-f9e6-456d-99fb-6e21890c6546
on .terraform/modules/db/modules/db_subnet_group/main.tf line 1, in resource "aws_db_subnet_group" "this":
1: resource "aws_db_subnet_group" "this" {
My mistake: the following value should be without []
subnet_ids = data.aws_subnet_ids.private2.ids
It’s working. Thank you!
@Alex Jurkiewicz
What is loaded_in_terraform = true? I did not understand its usage.
Should I add it into the data block?
Is using tags as variables good practice?
As of now, I have something like:
data "aws_vpc" "vpc" {
tags = {
Name = "mainvpc"
Environment = "dev"
}
}
data "aws_subnet_ids" "private2" {
vpc_id = data.aws_vpc.vpc.id
tags = {
Custom-tag = "*private_subnet2*"
Name = "sub_private_dev_*"
}
}
Can I use variables so the module can be reused, and the Name of the VPC or subnet can be changed?
data "aws_vpc" "vpc" {
tags = {
Name = var.vpcName
Environment = "dev"
}
}
data "aws_subnet_ids" "private2" {
vpc_id = data.aws_vpc.vpc.id
tags = {
Custom-tag = var.subnetCustomTag
Name = var.subnetNames
}
}
Variables:
subnetNames = "sub_private_dev_*"
subnetCustomTag = "*private_subnet2*"
vpcName = "mainvpc"
Any suggestions?
Yes you can use input variables to pass to the filters
ok. I will try that
2020-11-20
Hi,
I am using an API Gateway module. I want to add sub-paths into a path_part, like path/subpathA/subpathB. I’m trying to do it, but the module can’t do that. Does someone know how I can do this? Also, when I make this change manually in API Gateway, every time I refresh state it says I have changes to apply.
Cheers.
Terraform module to create Route53 resource on AWS for create api gateway with it’s basic elements. - clouddrove/terraform-aws-api-gateway
I’m getting an error about the Terraform Core version, but I’m using a version that is within the listed constraints…
I get it, so version 0.13+ is out
@David Napier’s question was answered by <@Foqal>
Hi, I’m looking for help with terraform’s built-in templating function. More details in thread if you can spare the time
I’d like to dynamically create the json configuration file for the cloudwatch agent based on a data structure in terraform like this. The reason being that I’d like to have only one place to add and remove logs, event viewer logs, and metrics, with metrics being the most frequently changed. Currently the log groups are created within TF, so having the ability to customise the agent config would save time. The terraform structure I use to create it:
agent_objects = {
  files = {
    collect_list = [
      {
        name               = "cms"
        create_log_group   = true
        create_log_stream  = false
        file_path          = "C:\\Connect\\cms\\logs\\iis\\W3SVC1\\"
        log_group_name     = "/myapp/test/iis/cms"
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
        kms_key_id         = "Blah"
        auto_removal       = true
      },
      {
        name               = "maint"
        create_log_group   = true
        create_log_stream  = true
        file_path          = "C:\\Connect\\maint\\logs\\iis\\W3SVC2\\"
        log_group_name     = "/myapp/test/iis/maint"
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
        kms_key_id         = "Blah"
        auto_removal       = true
      },
    ]
  }
  windows_events = {
    collect_list = [
      {
        name               = "System"
        event_format       = "xml"
        event_levels       = ["INFORMATION", "WARNING", "ERROR", "CRITICAL"]
        create_log_group   = true
        log_group_name     = "/myapp/test/System"
        create_log_stream  = false
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
      },
      {
        name               = "Application"
        event_format       = "xml"
        event_levels       = ["INFORMATION", "WARNING", "ERROR", "CRITICAL"]
        create_log_group   = true
        log_group_name     = "/myapp/test/Application"
        create_log_stream  = false
        log_stream_name    = "{instance_id}_{local_hostname}"
        retention_in_days  = 30
        use_custom_kms_key = true
      },
    ]
  }
  metrics = {
    metrics_collected = [
      {
        name                        = "LogicalDisk"
        measurement                 = ["% Free Space"]
        metrics_collection_interval = 60
        resources                   = ["*"]
      },
      {
        name                        = "Memory"
        measurement                 = ["% Committed Bytes In Use"]
        metrics_collection_interval = 60
        resources                   = [""]
      },
    ]
  }
}
With my very limited understanding of the template functionality, it appears that it should be possible to build the type of structure I need. An example of the cloudwatch configuration I need can be seen here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html (search for “The following is an example of a logs section”; the example log is just below). I have also read over the examples here: https://alexharv074.github.io/2019/11/23/adventures-in-the-terraform-dsl-part-x-templates.html#example-7-the-for-loop
locals {
  fruits = ["apple", "banana", "pear"]
}

output "fruits" {
  value = <<-EOF
    My favourite fruits are:
    %{ for fruit in local.fruits ~}
    - ${fruit}
    %{ endfor ~}
  EOF
}
I can see the templating is capable of looping over a terraform list and outputting values. My question is… am I being too ambitious for what the terraform language can do? I’ve failed to find any examples online of similar json creations. Is there a different approach / provider I should be investigating? My thanks in advance for any advice given.
well one option would be to construct it entirely as an hcl object, then use jsonencode()
to convert it to json
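e.g. a minimal sketch along those lines, using a slice of the structure above:
locals {
  agent_config = {
    metrics = {
      metrics_collected = [
        {
          name        = "Memory"
          measurement = ["% Committed Bytes In Use"]
        },
      ]
    }
  }
}

# render the whole object to json in one call
resource "local_file" "agent_config" {
  filename = "${path.module}/amazon-cloudwatch-agent.json"
  content  = jsonencode(local.agent_config)
}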
also, templates can use tf functions, see this section: https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template
The templatefile function reads the file at the given path and renders its content as a template.
Hi Loren, thank you for coming to my aid again :)
Regarding the first suggestion, I don’t fully understand, is there a provider to do jsonencoding? I know you can read a template file in and add .json to get an encoded version. Is that what you mean?
As for the link I’ll go read that now.
jsonencode()
is a builtin function in terraform. no provider needed, https://www.terraform.io/docs/configuration/functions/jsonencode.html
The jsonencode function encodes a given value as a JSON string.
Ah, I see the jsonencode now from the link. Thanks, I wasn’t aware of this.
it’s very powerful, give it a go!
I will; thank you for the point in the right direction
I used terraform 0.13.X to update my state, but the modules I’m using require TF ~> 0.12.X, is there a way to revert state to work with an older TF version?
you can attempt to pull the state and manually edit it to change the version it records. But I’m not sure if there were structural changes from 12 to 13 which mean this won’t work… Might be easier to upgrade those modules…
I wish I had the time to do that. Seems like a lot of the cloudposse modules have version constraints towards 0.12.X though.
@David Napier there’s an ongoing effort to fix version pinning. Which ones are you using?
Really good to know. terraform-aws-alb, terraform-aws-dynamic-subnets, and terraform-aws-vpc.
If you are using the s3 backend and have object versioning enabled, you can revert to the previous version
make sure you check the module versions that you are referencing in your source argument… the current alb module version does work with tf0.13… https://github.com/cloudposse/terraform-aws-alb/blob/master/versions.tf
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
dynamic-subnets is also ready for tf 0.13
and so is vpc
terraform-aws-tfstate-backend is the only one I’m using that’s not in that list
That looks like it was fixed in v0.28.0
Now getting an sts:GetCallerIdentity error. No idea where that’s coming from. Can still use the aws cli with np
Sorry, I know that’s not useful information
Ah, apparently the backend.tf file didn’t get the profile config
Okay, I finally got it, had to pin the version for each item rather than going with the front page examples
Right, front page example has master but you definitely have to pin to a specific tag for each module
Well, that’s definitely on me, but I really appreciate you guys being willing to look.
No worries. I remember it wasn’t obvious to me at first.
2020-11-21
Please can I get some help to get me over the line with the right syntax for my final for loop? More in thread…
Good afternoon, Thanks for all the help so far with getting my templates sorted, I’m almost there but I’m struggling with the last for loop or maybe it should be a for_each, feels like I’m missing an important lesson but I’ve played around with this for over two hours and I’m getting nowhere.
I have this structure (cut down for this example.)
variable "log_object" {
type = object({
metrics = object ({
metrics_collected = list(object ({
name = string
measurement = list(string)
metrics_collection_interval = number
resources = list(string)
})
)
})
})
}
With this set as the variable:
log_object = {
metrics = {
metrics_collected = [
{
name = "LogicalDisk"
measurement = ["% Free Space"]
metrics_collection_interval = 60
resources = ["*"]
},
{
name = "Memory"
measurement = ["% Committed Bytes In Use"]
metrics_collection_interval = 60
resources = [""] # Note Memory does not use a resource but TF can't have optional data types in objects see <https://github.com/hashicorp/terraform/issues/19898>
},
]
}
}
I’m trying to create a jsonencode block for use in my custom tpl file.
With the help of some kind people here, I’ve managed to get it mostly working but I need to tweak the outputted structure slightly from what I’m able to produce. Current output
"metrics": {
"metrics_collected": [
{
"measurement": ["% Free Space"],
"metrics_collection_interval": 60,
"name": "LogicalDisk",
"resources": ["*"]
},
{
"measurement": ["% Committed Bytes In Use"],
"metrics_collection_interval": 60,
"name": "Memory",
"resources": [""]
},
]
}
What I need:
metrics = {
metrics_collected = [
"LogicalDisk" = {
measurement = ["% Free Space"]
metrics_collection_interval = 60
resources = ["*"]
}
"Memory" = {
measurement = ["% Committed Bytes In Use"]
metrics_collection_interval = 60
resources = [""]
}
]
}
I have thought about changing my data structure to
type = object({
metrics = object ({
metrics_collected = list(object ({
name = object({
measurement = list(string)
metrics_collection_interval = number
resources = list(string)
})
})
)
})
})
}
But doesn’t this then fix everything to be called name rather than being dynamic? So it looks wrong. I want the flexibility to add and remove items from the overall list, so I want to avoid having to explicitly set each one by a name within the variable declaration.
I’ve been trying lots of different combinations of past suggestions but haven’t hit on the right thing yet. This is what’s working:
resource "local_file" "test" {
filename = "myconfig.json"
content = templatefile("${path.module}/file.json.tmpl", {
metrics_collected = jsonencode(
[ for metric in var.log_object.metrics.metrics_collected : {
name = metric.name
measurement = metric.measurement
metrics_collection_interval = metric.metrics_collection_interval
resources = metric.resources
}
]
)
})
}
I feel like this is closest but it gives a TF error
resource "local_file" "test" {
filename = "myconfig.json"
content = templatefile("${path.module}/file.json.tmpl", {
metrics_collected = jsonencode(
[ for metric in var.log_object.metrics.metrics_collected : metric.name => {
name = metric.name
measurement = metric.measurement
metrics_collection_interval = metric.metrics_collection_interval
resources = metric.resources
}
]
)
})
}
Error: Key expression is not valid when building a tuple.
I’ve also tried adding a for_each in the mix like
for_each = { for metric in var.log_object.metrics.metrics_collected : metric.name => metric }
but this errors as well.
I’m sure the answer is within my reach but I’m not sure how many more hours it’ll take me to find it on my own. So, any help you can offer would be appreciated.
Wondering if anybody has time to help with this, I fought with it most of the weekend and I’m still no closer
I think you are trying to do something too fancy here for your use case. But it’s a little difficult to understand because of your complex data structure. Can you create a simplified test case using simpler data structures and a null resource?
Many thanks for replying Alex. If that’s the case i guess I should reluctantly give up. I’ve tried to find the right combination of for_each and for or even just a for loop but I’m just not skilled enough in constructing the correct syntax. Which is driving me nuts as I’m used to programming with these types of data structure in other languages. I do understand that this is more a templating language but it really felt like I was almost there. Especially, as I’d managed to crack the first two parts with Loren’s kind help on the previous thread.
Putting terraform aside for a second and trying to explain in a different way, all I want to do is:
- Take my list,
- loop over it,
- for each object in the list, take the value inside the object called name, e.g. “LogicalDisk”, and use it as a key for the remaining items in the object, be this on the fly or by creating a new object structure to temporarily use.
- output:
"LogicalDisk" = {
  measurement                 = ["% Free Space"]
  metrics_collection_interval = 60
  resources                   = ["*"]
}
When all is said and done, if what I’m trying to do isn’t practical, I’ll shelve it. It’s clearly beyond my ability at the minute. Once again, thank you for replying and to everybody else for taking the time to read the question.
it sounds practical. But your examples are not easy to understand. You will get more help if you spend your time making the question clearer
like, when I’m looking at your code blocks they are indented so much they wrap on my screen. It’s really hard to make sense of
Thanks Alex, I’ll try again to reframe the question, not a problem to do so. After all, I’m the one asking for help. Apologies it’s not clear enough to start with. I do appreciate your time.
Re the wrapping on the screen, totally unaware of that, as they appeared fine on my screen. I’ll clean them up. Thank you for pointing it out.
2020-11-23
here’s a neat trick… we like to maintain iam policy templates as separate json files, and validate the json syntax using jq. but these are actually json templates, rendered with terraform’s templatefile(), and so we can use any sort of terraform function inside these templates. in particular, using jsonencode() within the template supports hcl lists without hacky joins to render the list to json. the problem with using terraform functions like this is the templates become invalid as json, and so fail jq validation.
so, came up with a simple tf config to render the template and ensure it serializes to json…
locals {
# specify all vars in the templates
template_vars = {
# foo = bar
}
}
data null_data_source this {
for_each = fileset(var.path, var.pattern)
inputs = {
# templatefile catches bad hcl syntax in interpolations
# encode/decode cycle catches bad json in the template
json = jsonencode(jsondecode(templatefile(each.value, local.template_vars)))
}
}
variable path {
type = string
default = "."
}
variable pattern {
type = string
default = "**/*.json.template"
}
then have your CI system run a plan on that config:
terraform init -backend=false <path/to/test/config>
terraform plan <path/to/test/config>
the config intentionally uses a null data source to avoid needing any aws credentials
Another “neat trick” that I implemented recently… we use Terraform to manage the versions of core pre-installed things on EKS, things like Kube Proxy, the CNI, pretty much replicated what AWS would have you do according to their upgrade docs.
We faced a problem when AWS changed their tags for kube-proxy to include eksbuild.1 in the tag. To try to combat this, we’ve used the docker_registry_image data source like:
# Validate images exist
data "docker_registry_image" "aws_node" {
count = var.manage_daemonsets ? 1 : 0
name = "602401143452.dkr.ecr.${var.region}.amazonaws.com/amazon-k8s-cni:${local.amazon-k8s-cni_image_tag}"
}
data "docker_registry_image" "coredns" {
count = var.manage_daemonsets ? 1 : 0
name = "602401143452.dkr.ecr.${var.region}.amazonaws.com/eks/coredns:${local.coredns_image_tag}"
}
data "docker_registry_image" "kube_proxy" {
count = var.manage_daemonsets ? 1 : 0
name = "602401143452.dkr.ecr.${var.region}.amazonaws.com/eks/kube-proxy:${local.kube-proxy_image_tag}"
}
Provider configuration:
data "aws_ecr_authorization_token" "login" {}
provider "docker" {
registry_auth {
address = "602401143452.dkr.ecr.${var.region}.amazonaws.com"
username = data.aws_ecr_authorization_token.login.user_name
password = data.aws_ecr_authorization_token.login.password
}
}
This results in an error being thrown way earlier in the Terraform run.
@Tim Birkett nice. iirc, one issue with data sources in provider configs is that it breaks imports. have you run into that? or maybe the situation is improved since i last tried it… the linked issue is pretty old: https://github.com/hashicorp/terraform/issues/13018. but the docs still say no data sources in providers… https://www.terraform.io/docs/commands/import.html#provider-configuration
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.
@loren can your witchery be used to terraform init a backend config file?
no backend config required with, terraform init -backend=false
so basically double init, one with no backend to create the config and another with the remote backend?
i’m doing this in a repo with only policy template files, no other tf config, so i specifically disable any backend setup. another repo references this one to pull down the template files for its own tf configs…
I understand, but imagine if the backend config is in json and is templated and generated by the first init?
I’m trying to find a way to have templates of backend config files, and when I saw this I thought I could maybe use this
oh i see, interesting. terragrunt can do this using its generate block. i imagine terraform could do it in a couple of steps, creating the file first with a targeted apply. but you’d need to be very careful with the vars you use for the backend template to avoid a resource cycle
yes, the source should be pretty locked down and tested before any input is passed
That’s very clever @loren
Could probably do something pretty similar to validate templated yaml files using yamldecode/yamlencode…
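e.g. swapping the encode/decode pair inside the same null data source, something like this (untested sketch):
inputs = {
  # yamldecode catches bad yaml in the rendered template
  yaml = yamlencode(yamldecode(templatefile(each.value, local.template_vars)))
}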
@loren Maybe worth adding to https://github.com/cloudposse/terraform-yaml-config?
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
I was also just thinking about this… Why do you store your policies in JSON instead of using the policy_document resource? I’ve always gone the policy_document approach as I thought that was the recommended way.
because the teams managing the policies do not want to deal with terraform directly, so i try to keep the hcl-ness to an absolute minimum :)
@loren totally valid reasoning
IMO policy document is much better than <<EOF, slightly worse than an external file. The underlying structure is JSON; it is simpler to reason about stuff which undergoes less transformation
You’re saying you prefer: External JSON file > Policy Document > EOF?
I like policy document because most of my policy_document resources include mentions to other resources in the root module or from remote state.
And with it being all HCL, you don’t need to jump through as much string templating hoops.
ah yeah. I didn’t think about that. You are right, for any IAM document with references to Terraform stuff, I would prefer the policy document data source
for this use case, it’s mostly some cyber security folks. i just want them focused on reviewing/approving the iam policy documents that are in scope of their team’s review. i don’t want them getting their knickers twisted on some intricacies of hcl and terraform. this is also why these policies are in their own repo, which now no longer has any tf code (other than some very light hcl templating in the policy files). i just pull the repo to my root module using a module block with a source arg and no other arguments, then reference paths to the policy files from the .terraform cache. it’s my new favorite trick, learned here in cloudposse slack, but i don’t remember from who
so it basically all looks like JSON to them, which they’re used to seeing and working with in the IAM console. just with a couple self-explanatory template var names that we pass in through templatefile()
Yeah I like it — Sounds very useful for a larger org. And sounds very similar to terraform-yaml-config.
i skimmed that module when it was posted recently, it seems super handy… i’m about to overhaul some of our root config management, will keep in it in mind to see if it fits… :)
Yeah, let me know how that goes. I haven’t used it yet, but it does look a solid tool. I like the new Cloud Posse thought process behind pushing more configuration into Yaml > HCL. I’m finding I do that more and more.
sorry to be a pest but I was hopeful that somebody might have time to look at my last question/thread (posted Saturday)?
Anyone know whether modules are able to be looped over? Did that make it into v0.13.X?
along with depends_on for modules
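Yes, 0.13 added both count/for_each and depends_on for modules. A minimal sketch (module paths and names made up):
module "service" {
  source   = "./modules/service"
  for_each = toset(["api", "worker"])

  name = each.key
}

module "consumer" {
  source     = "./modules/consumer"
  depends_on = [module.service]
}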
Terraform 0.14 is all about perfecting workflow. Our latest version enables practitioners to leverage more predictability and reliability in infrastructure automation. In meeting this goal, we’ve also added some key features to help organizations heavily invested in Terraform, continue to mature. Additionally, updates included in this release will be equally valuable to practitioners and teams just starting their journey with infrastructure as code. In this webinar, Terraform OSS Product Manager Petros Kolyvas and Technical Product Marketing Manager Kyle Ruddy will walk you through new Terraform features such as concise diff, sensitive variables, and the provider dependency lock file.
Looks like it’s still RC1 though
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
Maybe it will drop December 08 | 10:00 AM PST?
yeah, it hasn’t dropped yet. close though. my thinking is this could be one of the lead-up discussions, like how they published pre-release blog posts on tf 0.13 features
Yup
We’re bracing for the onslaught of 0.14 issues/prs
I like that they’re shipping new major versions soon… but it does cause a lot of havoc in terms of upgrading projects. Hopefully this upgrade is as easy as 0.13, and we’ve done enough minor-version-only pinning to make it easy on the CP module front.
i’m not expecting all that much in the way of breaking changes, from what i’m seeing. long as the version pins on modules are >=!
are they dropping support for old provider syntax?
(i know they are dropping the upgrade command)
Isn’t the old provider syntax already dropped in 0.13?
interesting, the master branch changelog is already collecting 0.15 changes…
the version argument of provider blocks is deprecated, but only results in an explicit deprecation warning. it has not been removed
the changelog for 0.14 does not have a “BREAKING CHANGES” section, where for 0.13 it did…
Their main push is the lock file, right? I would assume that wasn’t breaking.
That’d be awesome. Looking forward to trying that out.
i’m always of two minds on the lockfiles. we’ll see. i mean, i use terraform-bundle anyway, because i don’t want my CI downloading random things. and i haven’t generally seen any problems across teams in a looong while (outside TF core upgrades breaking tfstate)
backwards-compatible tfstate will be welcome, for sure, even though that problem is mostly addressed just by a strict pin on the version in the root config. some of those changes are filtering into the 0.13.x releases… https://github.com/hashicorp/terraform/blob/v0.13/CHANGELOG.md#0136-unreleased
Yeah good point — Though I didn’t mean to say lock files are what I’m looking forward to trying out. I’m excited about the move towards faster releases with minimal breaking changes as that sounds like a great direction for the tool.
It sounds like the next release will be 1.0, so I guess you can consider this 1.0rc1
Anyone have terraform syntax highlighting for json.tpl files working in vscode?
I think vscode has a way of associating a file extension to a plugin? Haven’t looked at that in a while…
You can associate the extension or override it manually in the bottom right corner of the ui where it says something like ‘json’ to identity the current syntax selected.
I know how to associate. But the hashicorp terraform plugin doesn’t add syntax langauge for “JSON with HCL templates”
Check for, or open, an issue?
I’ve resorted to adding the file’s correct extension to aid readability in text editors. Where we used to append .tmpl to everything, we now name files like values.tmpl.yaml. Of course in JSON, if you have any kind of linting turned on, templated variables will light up as errors. It makes YAML easier on the eye in my case.
2020-11-24
using https://github.com/cloudposse/terraform-aws-efs is it possible to turn off the creation of a security group and pass one in instead? it has
resource "aws_security_group" "efs" {
count = module.this.enabled ? 1 : 0
but not sure how to use it and nothing in the readme
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs
hi again not an expert, but that looks like a way to disable the entire module only, rather than a way to disable the SG only. You might need to submit a PR to add this functionality if you want it
hi again will take a look. thanks
always exciting when aws changes the values returned by their api… the resource aws_securityhub_member is currently a bit broken as a result… https://github.com/hashicorp/terraform-provider-aws/issues/16403
AWS released changes today to SecurityHub that changed the MemberStatus fields to contain a few different values than are currently supported by the terraform AWS provider. Because of this change th…
Taking the advice given about my previous question, I have tried to reframe it into a simpler example. I hope I’ve managed it. Please have a look within this new thread, and once again thank you all for your patience. Question: How do I perform a nested for loop on a list(object) variable in Terraform, without looping the resource?
My stripped down Terraform structure:
metrics = [
{
name = "LogicalDisk"
measurement = ["FreeSpace"]
},
{
name = "Memory"
measurement = ["CommittedBytes "]
},
]
Same structure but via JSON if that helps the readability:
{
"metrics_collected": [
{
"measurement": ["FreeSpace"],
"metrics_collection_interval": 60,
"name": "LogicalDisk"
},
{
"measurement": ["CommittedBytes"],
"metrics_collection_interval": 60,
"name": "Memory"
}
]
}
This is the structure I need to generate by looping over the above structure, to get a jsonencoded output like this:
{
"metrics_collected": {
"LogicalDisk": {
"measurement": ["FreeSpace"],
"interval": 60
},
"Memory": {
"measurement": ["CommittedBytes"],
"interval": 60
}
}
}
I believe I need to do something like:-
for_each = {
for metric in var.metrics: metric.name => metric
}
However, I don’t know how to do that without looping the whole resource block and making multiple templates. I suspect I need to use a null_resource to loop over the items and then maybe join them in some way afterward but I’m guessing.
I wish to then use the above structure in a local_file resource block as one of the input variables of content. I’ve mocked up what I’d like to do, but I can’t because the syntax is illegal.
resource "local_file" "test" {
filename = "myconfig.json"
content = templatefile("${path.module}/file.json.tmpl", {
# I use this working loop to create a similar structure to the one I want but for other inputs
# Left it here so you can see I need multiple inputs to the content and that I have got similar thing working
log_collect_list = jsonencode(
[ for logs in var.log_object.files.collect_list : {
"file_path" = logs.file_path
"log_group_name" = logs.log_group_name
"log_stream_name" = logs.log_stream_name
"auto_removal" = logs.auto_removal
}
] # closing for loop
) # closing bracket for jsonencode
# Now for the one I want but I can't establish how I use a for_each loop without duplicating the whole resource
# I've have also tried to use of "dynamic variable" name but again the syntax is rejected.
# Mockup of what I'd like to do
metrics_collected = jsonencode(
for_each = {
for metric in var.log_object.metrics.metrics_collected : metric.name => metric
name = metric.name
measurement = metric.measurement
metrics_collection_interval = metric.metrics_collection_interval
resources = metric.resources
}
}
) # closing of jsonencode
}) # closing of template
} # closing of resource
Any pointers would be appreciated.
here’s an example of how to convert from your input format to desired structure
locals {
in = [
{
name = "LogicalDisk"
measurement = ["FreeSpace"]
},
{
name = "Memory"
measurement = ["CommittedBytes"]
},
]
out = {
metrics_collected = {
for item in local.in : item.name => {
measurement = item.measurement
interval = 60
}
}
}
}
output "out" {
value = local.out
}
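From there, jsonencode(local.out) should serialize it into the JSON shape you were after.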
Hello Alex, thank you for taking the time to write an example. I appreciate the support you and others have given me. It’s enabled me to keep learning and achieve things I didn’t think were possible. I’ll be returning to the office shortly and look forward to trying this.
In the past week, I’ve upgraded probably a dozen or so root modules across two clients to TF 0.13 + aws provider 3.0 and both were heavily using dozens of CP modules. It was a breeze and that’s an awesome accomplishment by this community. Thanks to all you folks who contributed in that regard!
0.14 will be easier too
We’ve been receiving tons of PRs to loosen provider pinning (>=) and update to context.tf, which continues to make maintenance easier
It sounds like it should be.
Yeah, that’ll make everything pretty smooth. Happy to see it.
We’ve also published the terraform 0.14 RC1 to our packages repo
Cool — are you using it with anything yet or just in prep?
Hi everybody. In case you need help with tagging resources in AWS, Azure or GCP, please take a look at our new open-source http://github.com/env0/terratag that automatically and recursively tags all resources for you. (Disclaimer - I am co-founder and CEO of env0, the company that created this open-source)
Terratag is a CLI tool that enables users of Terraform to automatically create and maintain tags across their entire set of AWS, Azure, and GCP resources - env0/terratag
This is interesting… so this goes through all your *.tf files and knows which TF resources support tags and can add the tags {...} (or similar) to them. Is that right?
If so, does it preserve comments in the TF code?
@Erik Osterman (Cloud Posse) It will also go through any modules that you have and tag those resources as well.
(fwiw, this is why we have terraform-null-label, which we use in every one of our modules to ensure consistent, enforced tagging)
How do I create a health check for the wss protocol on Fargate?
I need to create a socket server as a service with one socket port.
hello, I’m having some issues with terraform-null-label. After upgrading to the latest version I’m always getting an empty ID, regardless of what I tried.
module "label" {
source = "git::<https://github.com/cloudposse/terraform-null-label.git?ref=0.21.0>"
context = module.this.context
enabled = true
id_length_limit = 10
}
output "ID" {
value = "'${module.label.id_full}'"
}
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
ID = ''
Am I missing anything? (the same happens when using module.label.id instead of module.label.id_full)
on another topic, did anyone figure out a way to use modules with for_each/count and different AWS regional providers?
Is there a more compact way to write this sort of expression?
length(random_shuffle.shared_alb.result) > 0 ? random_shuffle.shared_alb.result[0] : null
Get the first element of a list if it’s not empty. If the list is empty I don’t care what the value is
Not sure… both try and element return errors if the list is empty or doesn’t evaluate.
these days, i recommend using the same condition you use for the count expression, instead of the length
it’s not more compact, but it is more sane and better demonstrates the relationship between the resource and the local/output
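e.g., a sketch assuming the resource is gated on a hypothetical var.use_shared_alb flag:
resource "random_shuffle" "shared_alb" {
  count = var.use_shared_alb ? 1 : 0
  input = var.alb_arns # hypothetical list of candidates
}

locals {
  # same condition as the count expression, no length() check
  shared_alb = var.use_shared_alb ? random_shuffle.shared_alb[0].result[0] : null
}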
that makes a lot of sense. I’ll pull that condition out into a local
We have some random provider resources in our Terraform configuration. For instance a random_id resource: https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id
We want to change the keeper values but not change the output. Has anyone done this? I assume it’s possible with some statefile hackery, which I’m OK with.
looks like it’s pretty simple.
- Pull the state,
- (back up the state,)
- edit the resource so the input values match your new ones
- Increment state serial (up the top)
- push the state.
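roughly, with a made-up filename:
terraform state pull > state.json
# back it up, edit the keepers values, and bump the top-level "serial"
terraform state push state.json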
jsonencode looks to sort name/value pairs into alphabetical order
log_collect_list = jsonencode(
[ for logs in var.log_object.files.collect_list : {
"file_path" = logs.file_path
"log_group_name" = logs.log_group_name
"log_stream_name" = logs.log_stream_name
"auto_removal" = logs.auto_removal
}
]
)
auto_removal is listed at the bottom of the above for loop, but the resulting json string lists the items in this order:
"auto_removal": true,
"file_path": "W3SVC1\\*",
"log_group_name": "/test/iis/cms",
"log_stream_name": "{instance_id}_{local_hostname}"
While I appreciate the order isn’t normally an issue for the consuming application, and I think that might be true in my case, I was wondering if there is a way to tell jsonencode to respect the order in which it consumed the name/value pairs?
No, there’s not. Are you positive the order matters for your application?
I’m just testing the app (Cloud Watch agent), I’m sure it won’t matter but I’m not always that lucky, especially with some of this historical apps we run. So was curious if it was possible. Thank you for confirming.
There are ways to make the order what you want, but they are complicated. I think you should be 100% sure there’s an issue before you solve this problem
Is there a better way to write contains(keys(mymap), "asg")?
can(mymap.asg)?
If that works, may be a little too clever… contains is very straightforward. Could put the keys in a local to improve readability/intent
hopefully contains supports maps in 1.0
Hi! I’m using https://github.com/cloudposse/terraform-aws-tfstate-backend. Is it possible to have same s3 bucket to work with multiple state with unique locks? State file name can be given with terraform_state_file but lock name seems to come directly from bucket name -> same lock would be used when working with all states so you could not work with multiple states at the same time.
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
2020-11-25
Hello all, I have just stumbled across https://github.com/cloudposse/terraform-aws-tfstate-backend/ - does this module manage access permissions to the S3 bucket being created? I.e.: I would like to grant RW permission to person A and B so that nobody else can access the new S3 bucket.
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
Do I need to modify this module or could I add those permissions outside of this module ?
Yea, so permissions are not something in scope for this module. That’s more something you control using IAM roles associated with groups or users.
E.g. by default, no user has access to the bucket created.
thanks @Erik Osterman (Cloud Posse) a lot for explanation.
@Erik Osterman (Cloud Posse) but is this module per cluster? I mean, can I not create one S3 bucket for many clusters and differentiate using, i.e., DynamoDB keys?
The architecture of how you deploy TF state backends is entirely up to you.
In the past, we’ve done one per AWS account. Now we typically have one in the root AWS account.
api_pipeline_env_variables = [
{
name = "AWS_DEFAULT_REGION"
value = "eu-central-1"
},
{
name = "CONTAINER_NAME"
value = var.api_container_name <-- Rookie here. This is illegal, but how can I inject a variable into the list?
}
]
that should work fine… what version of terraform?
@loren Terraform v0.13.5
and what error are you getting?
$ cat main.tf
variable foo {}
output bar {
value = [
{
name = "foo"
value = var.foo
},
]
}
$ terraform apply -var foo=yoda
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
Terraform will perform the following actions:
Plan: 0 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
bar = [
{
"name" = "foo"
"value" = "yoda"
},
]
Error: Variables not allowed
on terraform.tfvars line 122: 122: value = var.api_container_name
Variables may not be used here.
Might be related to this I guess
Error: No value for required variable
on vars.tf line 278: 278: variable “client_pipeline_env_variables” {
The root module input variable “client_pipeline_env_variables” is not set, and has no default value. Use a -var or -var-file command line argument to provide a value for this variable.
ahh, in tfvars, no you cannot reference variable values. you have to do that in your .tf code
Ah, right
Pass it through a locals to combine or decide things based on two different vars freely.
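e.g. a sketch using the names from above, in a .tf file instead of tfvars:
locals {
  api_pipeline_env_variables = [
    {
      name  = "AWS_DEFAULT_REGION"
      value = "eu-central-1"
    },
    {
      # variable references are fine here, unlike in tfvars
      name  = "CONTAINER_NAME"
      value = var.api_container_name
    },
  ]
}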
@mfridh Allright. Thanks
before I add docs and pull request… good idea? https://github.com/cloudposse/terraform-aws-iam-role/compare/master...sultans-of-devops:master
I use it for example like this, to attach also managed policies:
module "ssm_service_role" {
source = "github.com/sultans-of-devops/terraform-aws-iam-role"
name = "SSMServiceRole"
namespace = var.namespace
stage = var.stage
policy_description = "SSM Service Role policy"
role_description = "IAM role with permissions SSM and CloudWatch Agent policies"
principals = {
Service = ["ssm.amazonaws.com"]
}
policy_attachments = [
"arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
"arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy",
]
}
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
Does cloudposse have any repos directed at vmware as a provider?
I’m not finding any, but just wanted to make sure I wasn’t overlooking it
nope… we’re hyper-specialized on AWS
(some stuff for github, datadog, opsgenie too)
do you recommend terraform-null-label
or terraform-terraform-label
these days? It’s not clear if one is preferred
yea
terraform-terraform-label
should be archived
I’m seeing this now. It is..
We’re using an old fork of null-label and we were passing null-label resources as module inputs, but you are much smarter
Using a for_each loop with the terraform-aws-route53-alias module, how would I specify a parent_zone_id from a resource? This (parent_zone_id = aws_route53_zone[each.key].zone_id) returns Invalid reference.
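A resource reference needs both the type and the resource name; assuming the zones were created under a resource named this, the reference would look like:
parent_zone_id = aws_route53_zone.this[each.key].zone_id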
Appreciate any thumbs up: https://github.com/hashicorp/terraform-config-inspect/issues/57
what: Add semver releases; attach precompiled binaries to the releases. why: Not everyone has a go setup; GitHub actions make it trivial to build and distribute binary releases (happy to provide example)
hi. please can anyone recommend a database UI tool on the web for managing databases? currently using DataGrip and HeidiSQL, but was looking for something that could be available in my web browser
Uhhh.. kinda afraid to ask this at this point, but do you guys put your provisioners (ansible, puppet, etc.) in folders within your terraform folders or do you keep them in an adjacent folder outside of the IaC?
Design question: I have terraform with multiple AWS VPCs and, inside them, SecGroups. Now I have created an “Operations” VPC that needs access to everywhere. Do I change each SG (there are hundreds), adding a rule to each one (not very scalable)? Do I create an SG module and use it everywhere I have an SG? Do I add a kind of “terraform linter” to the pipeline that detects if some SG doesn’t have the “Operations VPC rule”? WDYT?
In all other VPC’s you can create a single SG allow_from_operations, this SG has a rule to allow the SG from the Operations VPC, or netblock. Now you only need to add this allow_from_operations SG to the resources in that VPC.
@maarten Thx! Question: what do you mean with “Now you only need to add this allow_from_operations SG to the resources in that VPC”?
you probably have a few EC2 instances already; they currently have a Security Group attached to them as well.
In the remote VPC we have SG_remote and the SG_allow_from_operations you mentioned, now what?
You can add a second security group to the ec2 instance
what do you mean with remote VPC, this is where the bastion resides ?
but…are you sure that for any resource, EC2, RDS…we can add 2 SG’s?
yes
remote VPC would be where the EC2, RDS…are. the VPC where bastion is, I call it VPC operations
all clear, so as an example
Hey, I think that one EC2 only can have 1 SG
How much bitcoin do you want to bet
jaja, sorry, u are right
so, example
I’m checking RDS if can have multiple SG’s right now
yeah!
Hey, thanks for your help, valid solution. If you need anything from my side, I would help you if it is in my hands
1 rds has 2 security groups
rds_sg = allow mysql/postgres access from
allow_from_operations = allow traffic from vpc/bastion_sg
1 web ec2 has 2 security groups
web_sg = allow traffic from port 80/internet
allow_from_operations = allow traffic from vpc/bastion_sg
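in terraform the pattern is just a second entry in vpc_security_group_ids. A sketch (all names and vars hypothetical):
resource "aws_security_group" "allow_from_operations" {
  name   = "allow-from-operations"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [var.operations_vpc_cidr] # operations VPC netblock
  }
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  vpc_security_group_ids = [
    var.web_sg_id, # the existing app security group
    aws_security_group.allow_from_operations.id,
  ]
}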
with load balancers it’s a bit more complicated, but same principles apply
also keep in mind that you can create a security group in your bastion VPC, and you can refer to it in other VPC’s when they are connected
yes, but our peering is made with Transit GW, which still doesn’t recognize SGs from other VPCs
2020-11-26
Given an expression like this, how do I repeat elements in aws_subnet.db.*.id if num_nodes is greater than length(aws_subnet.db.*.id)?
subnet_ids = [for subnet in range(var.num_nodes) : aws_subnet.db[subnet].id]
Try the element() function? https://www.terraform.io/docs/configuration/functions/element.html#examples
The element function retrieves a single element from a list.
If the given index is greater than the length of the list then the index is “wrapped around” by taking the index modulo the length of the list:
Tried it, looks like it doesn’t do the trick
[for subnet in range(var.num_nodes) : element(aws_subnet.db.*.id, subnet)]
Seeing this, expecting the first element to get repeated, which it doesn’t look like it is:
subnet_ids = [
+ "subnet-01759xxxxx5199",
+ "subnet-02ea1xxxx2c019",
+ "subnet-091ff5xxx6a116f",
]
Does anyone have a hack they’d like to share to get around the limitation of recursive templatefile calls?
for e.g consider
./example.tmpl
example = ${foo}
${templatefile("common.tmpl", { bar = "bar" })}
will result in
templatefile("./example.tmpl", { foo = "bar" })

Error: Error in function call

on <console-input> line 1:
(source code not available)

Call to function "templatefile" failed: ./example.tmpl Error in
function call; Call to function "templatefile" failed: cannot recursively call
templatefile from inside templatefile call..
Pass the recursive value through as a template variable instead
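i.e. render the inner template first, then pass the result in as a plain variable (sketch):
locals {
  common = templatefile("${path.module}/common.tmpl", { bar = "bar" })
}

# outer call; example.tmpl now references ${common} instead of calling templatefile
# templatefile("${path.module}/example.tmpl", { foo = "bar", common = local.common })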
that’s a good idea
2020-11-27
qq for those of you that use terragrunt in CI. What does your pipeline look like essentially? going along the terragrunt path has led me down the mono repository layout
maybe try #terragrunt
one thing that I see being problematic is having a static collection of known directories and executing plan-all within a pipeline
2020-11-29
Is there some way to convert a tfvars file to json at the command line?
Maybe something like this?
Convert JSON to HCL, and vice versa. We don’t use json2hcl anymore ourselves, so we can’t invest time into it. However, we’re still welcoming PRs. - kvz/json2hcl
Also, this is a oneliner which I have used in the past if you just have key=values in terraform.tfvars
cat terraform.tfvars | jq -sR '[split("\n")[:-1][] | rtrimstr("\\r") | split("=") | {(.[0]): .[1]}]'
dang. Makes me think I should write my tfvars in json
comments are good though
Yep, it depends what you want to do. There’s the terraform.tfvars.json option as described here
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
2020-11-30
I’m trying to set up an elasticache redis cluster and grant access to a security group that is created independently by a vendor. I could use the security group ID of that vendor security group, but that ID could change if they redeploy their infra at some point in the future. Is there any way to use the security group name instead?
Specifically with the cloudposse/elasticache-redis/aws resource, can the allowed_security_groups contain a name or must it be an ID?
Or independent of the elasticache stuff with this resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule is there any way to provide the security group name instead of source_security_group_id?
Maybe use a data block to get the sg id by name then pass in the resultant sg id?
data "aws_security_group" "selected" {
name = "fancy-security-group"
}
module "redis" {
allowed_security_groups = [data.aws_security_group.selected.id]
}
huh TIL about data blocks
Thanks!
Is there a way to use a regex instead of an exact match? If I wanted to match nodes.dev.domain.com and nodes.prod.domain.com could I do
data "aws_security_group" "selected" {
filter {
name = "name"
regex = true
values = ["nodes.*.domain.com"]
}
}
looks like regex doesn’t work and neither does “name” as a field; using description worked somehow though
AWS tends to use a tag to define the name for things, meaning you have to use something formatted like this:
data "aws_ec2_transit_gateway" "tgw" {
filter {
name = "tag:Name"
values = ["wahlnetwork-tgw-prod"]
}
}
I run into this enough that I wrote a blog post earlier this year to remind myself. https://wahlnetwork.com/2020/04/30/filter-terraform-data-source-by-aws-tag-value/
A quick guide showing you how to filter a Terraform data source with the associated AWS tag.
thanks Chris! I was trying to use tag.Name at some point, good to know that tag:Name is what I wanted
I’m trying to use this https://github.com/cloudposse/terraform-aws-elasticache-redis module and I’m getting an error:
Error: expected length of replication_group_id to be in the range (1 - 40)
I’m not providing a replication_group_id as I want the terraform script to create one. Any ideas on what I’m doing wrong here?
here’s the way I’m using it https://github.com/kiva/protocol-redis
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
Terraform for persistent redis KV store for protocol - kiva/protocol-redis
Also, I tried using the module with terraform 0.13.5 and it told me that only 0.12.0 was supported (though the readme says it supports >= 0.12.0). When I use 0.12.0 I get a bunch of errors about Unsuitable value type, and when I use 0.12.29 I get this replication_group_id issue (which at least looks like a more promising error)
The work to change module requirements from ~> 0.12 to >= 0.12.0 is ongoing; AFAIK they would accept a patch for you to make this change yourself
Thanks Alex, I might make a PR if I can figure out how to use the module. By which I mean I’d feel better patching something if I know the patch didn’t break anything
@Anthony Voutas did you look at the example here https://github.com/cloudposse/terraform-aws-elasticache-redis/tree/master/examples/complete
it was tested on AWS (but not sure when the last time was). It was working with TF 0.12
yes, it also doesn’t set replication_group_id
so I’m not sure what I’m doing wrong
did you set all the variables https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/examples/complete/fixtures.us-east-2.tfvars ?
namespace, stage, name
So this module is not pinned to 0.12, but maybe another module in the ecosystem is…
Also I tried using the module with terrraform 0.13.5 and it told me that only 0.12.0 was supported
so I think we need to see the precise error message with context to suss it out.
the readme is generated using terraform-docs, which gets the supported version as found in versions.tf
the module uses only this other module https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/main.tf#L161
which is not pinned to 0.12 as well
this is one of the errors:
Error: Unsupported Terraform Core version
on .terraform/modules/vpc.label/versions.tf line 2, in terraform:
2: required_version = "~> 0.12.0"
Module module.vpc.module.label (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.14.0>)
does not support Terraform version 0.13.5. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
you’re prob using an old version of the vpc module
re: replication group id https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/main.tf#L83
Also on namespace, stage and name, I don’t have any fixtures set in my version
you can provide one, or the module will generate one from the variables namespace, stage, name
you need to provide those
the ID gets generated from them
see the example
I’m confused
They aren’t being provided as fields in the example
you need to provide some variables for the module to work
So they’re just global variables that need to be set for the module to work
I’m pretty new to terraform so that seems insane to me, but I’m learning
so, in the example, we provide a tfvars file
TF will get those variables from there
in your case, you can directly specify namespace, stage and name to the module
so I can specify them as fields or globally?
and it’ll be the same either way?
as fields to the module
okay sounds good, thanks Andriy I’ll try that
take a look at the older release https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/0.24.0/examples/complete/main.tf#L29
we are using the context.tf pattern for all new releases of all modules
that’s why the newest example is a bit confusing
IMO that “one of these fields is required” pattern CloudPosse modules use for determining the name is not well documented
Andriy could you share any docs on context.tf ? I’m learning so trying to absorb as much as I can
yes, it’s not well documented probably, and the README says not required https://github.com/cloudposse/terraform-aws-elasticache-redis#usage
that’s an artifact of the auto-generated readme
all those variables are optional, but you have to specify at least one for the module to work
okay setting those fields seems to have cleared the errors. Still waiting to see if the apply works, but progress is good
re: context.tf - all our modules include this file by default https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/context.tf
which is just a set of common variables used in all modules
(it’s not related to terraform per se, it’s our way of declaring a standard set of vars for all modules, so they all have a consistent input interface for the common stuff)
gotcha, thanks Andriy this is helpful to be aware of
https://github.com/cloudposse/terraform-null-label read the README
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
fwiw it feels like namespace, stage and name should be mandatory (even if technically they’re not always required)
I have a separate error now about security groups but that could probably be in a separate thread
it feels like namespace, stage and name should be mandatory
they are, if you want unique and consistent names for all AWS resources
I mean that if you don’t provide those fields, the error should state that they’re mandatory (I could be wrong that’s just what it seems to the untrained eye)
they are not really mandatory since in some cases people want to use just let’s say name
you have to enforce the naming convention for all your resources
so I could just use name? interesting
you can use any pattern
our pattern is using all of them, so we can have an ID like this
cp-ue2-prod-myapp
where the namespace is a unique abbreviation of our company
ue2
is environment (region in this case)
stage is prod
cp-ue2-dev-myapp
cp-ue2-staging-myapp
cp-uw2-prod-myapp
This is the security group error I have, it’s possible this is more of an AWS question than a terraform one. Why can’t I give ingress access to my redis cluster to a security group in another VPC?
Error: Error authorizing security group rule type ingress: InvalidGroup.NotFound: You have specified two resources that belong to different networks.
status code: 400, request id: cfa5dd4d-55c5-44dc-9c86-c18cd1b079c4
on .terraform/modules/elasticache-redis/main.tf line 22, in resource "aws_security_group_rule" "ingress_security_groups":
22: resource "aws_security_group_rule" "ingress_security_groups" {
it’s a question more for #aws yes. But the short answer is that security groups cannot refer to the ID of SGs in another VPC. You must use the external IP CIDR ranges, or peer the VPCs
when you deploy two VPCs in the same account, the only way for them to communicate with each other is if they are peered
it’s like two different switches and routers
Okay here is the terraform question haha: Is there a way to tell terraform to peer the VPC I’ve created with an existing VPC?
yes, and cloudposse has a module
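with plain resources it’s roughly this (a sketch assuming both VPCs are in the same account and region; you still need route table entries on both sides):
resource "aws_vpc_peering_connection" "ops" {
  vpc_id      = aws_vpc.redis.id         # hypothetical VPC you created
  peer_vpc_id = data.aws_vpc.existing.id # the existing VPC
  auto_accept = true
}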