#terraform (2020-01)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-01-01
For complex terraform modules spanning multiple environments, why do I almost always regret using modules? Is there some kind of rule of thumb about module complexity that should be followed?
like, rule 1: if you are spanning multiple providers maybe modules aren’t a good idea (or something like that)
we never manage more than one environment/account in one terraform plan/apply
in our case, with #geodesic , we actually have a one-to-one correlation between AWS accounts and git repos.
What’s the best learning resource for newbie engineers on terraform?
@chinedu2424 check out terraform up and running second edition http://shop.oreilly.com/product/0636920225010.do
Then choose something you know how to deploy without terraform, open a free account on Terraform cloud and start iterating with a personal AWS/GCP account (you can prob find a community module to get you started).
Or you can inherit a 10 thousand line complex terraform manifest like I did and learn ‘under pressure’
Need some inputs pls:
I have a VPC 10.0.0.0/16 with 2 public + 2 private subnets spanned across 2 AZs in a single region:
pub sub1 -> 10.0.0.0/24, pvt sub1 -> 10.0.1.0/24
pub sub2 -> 10.0.2.0/24, pvt sub2 -> 10.0.3.0/24
Pvt route table 1 has two routes: local and a route for 10.0.1.0/24 to a NAT GW.
I am stuck with the below error when associating private route table 1 to private subnet 1. Wondering what’s actually happening under the hood and why this is an issue? Any inputs will be of great help.
API error message Route table contains unsupported route destination. The unsupported route destination is more specific or equal specific than VPC local CIDR.
Hi @vvsp, it seems you route the private subnet itself to the NAT GW; that won’t work. You need to route publicly routable nets to the NAT GW. Common is to route 0.0.0.0/0 to the NAT GW.
@maarten Thanks and that was the thing; appreciate your response;
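For anyone hitting the same error later, the private route table only needs the implicit local route plus a default route to the NAT gateway. A minimal sketch (resource names are hypothetical):

```hcl
resource "aws_route_table" "private_1" {
  vpc_id = aws_vpc.main.id
  # the VPC "local" route is implicit - do NOT add a route for the
  # subnet's own CIDR (10.0.1.0/24); that is what triggers the API error
}

resource "aws_route" "private_1_nat" {
  route_table_id         = aws_route_table.private_1.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}

resource "aws_route_table_association" "private_1" {
  subnet_id      = aws_subnet.private_1.id
  route_table_id = aws_route_table.private_1.id
}
```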
2020-01-02
Hi, does anyone know why terraform sees too many changes in the task definition updates?
# module.ecs_app_service.module.ecs_task_definition.aws_ecs_task_definition.app must be replaced
+/- resource "aws_ecs_task_definition" "app" {
~ arn = "arn:aws:ecs:eu-west-1:xxxx:task-definition/test:216" -> (known after apply)
~ container_definitions = jsonencode(
~ [ # forces replacement
~ {
cpu = 256
+ entrypoint = null
~ environment = [
- {
- name = "AWS_REGION"
- value = "eu-west-1"
},
{
name = "APP_ENV"
value = "prod"
},
~ {
~ name = "AWS_USER_POOL_ID" -> "APP_DEBUG"
~ value = "us-east-jgjjh" -> "0"
},
+ {
+ name = "AWS_REGION"
+ value = "eu-west-1"
},
]
is it because I use jsonencode and it changes the ordering of the elements?
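Likely yes - Terraform compares the rendered container_definitions JSON as a string, so element order matters. One workaround sketch (assuming you build the environment list in HCL; names are illustrative) is to render it in a deterministic order:

```hcl
locals {
  env = {
    APP_ENV    = "prod"
    AWS_REGION = "eu-west-1"
  }

  # sort the keys so the jsonencode() output is stable between plans
  container_env = [
    for k in sort(keys(local.env)) : {
      name  = k
      value = local.env[k]
    }
  ]
}
```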
Anyone know of a way to have terraform init check a central location for providers and download them to that location if they are missing? I know there is -plugin-dir for the first part of that question, but it explicitly disables the second part. I don’t really understand the need to have the same version of the same provider in .terraform in the config working directory for every config… it’s a lot of space and a lot of downloads
Or a different tool that can retrieve providers the way terraform init does and place them in a specified directory…
Have you tried TF_PLUGIN_CACHE_DIR?
I could have sworn this does not disable automatic fetching
We set this to a shared location in #geodesic to speed up init process
But I am not quite sure if that meets your requirements
i haven’t, i was under the impression that was just the env-equivalent of -plugin-dir, which the docs at least claim disables the auto-download functionality…
-plugin-dir Directory containing plugin binaries. This overrides all
default search paths for plugins, and prevents the
automatic installation of plugins. This flag can be used
multiple times.
Fwiw, we have this set and we don’t download any plugins manually
du -sh ~/.terraform.d/plugins/
1.2G /home/erik.osterman/.terraform.d/plugins/
I think it’s working…
interesting, ok, i’ll give it a try, thanks!
works! brilliant!
i mean, it still copies the binary into .terraform/plugins in the config working directory, which seems unnecessary, but it gets them from the plugin cache dir, so at least i can save on some data when tethering
Interesting - didn’t realize it double-copied them
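To summarize for anyone searching later, the setup is just this (the cache path is an example):

```shell
# Point Terraform at a shared plugin cache; `terraform init` will then
# copy/link providers from here instead of re-downloading them per directory
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
```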
is anyone able to provide more insight into the following descriptions?
https://github.com/cloudposse/terraform-null-label/blob/master/outputs.tf#L3 (Could you elaborate on what this is disambiguated from? Is it regarding the AWS Name tag?)
https://github.com/cloudposse/terraform-null-label/blob/master/outputs.tf#L8 (how do all of these fields get ‘normalized’? It would be nice if this was a bit clearer)
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
disambiguated is prob not a good name for it
it is constructed from the inputs you provide
and is prob disambiguated
because it’s supposed to be consistent (you use the same pattern for everything) and globally unique across all AWS accounts and environments even for the global AWS resources like S3 buckets
I see, what about the normalized references
in almost all cases you don’t see/need the ‘normalization’ when you provide simple strings for namespace, stage, name etc.
the module converts the inputs to lower-case
and applies https://github.com/cloudposse/terraform-null-label/blob/master/variables.tf#L91 if provided
please look at the description here https://github.com/cloudposse/terraform-null-label#terraform-null-label---
and terratest for the example https://github.com/cloudposse/terraform-null-label/blob/master/test/src/examples_complete_test.go
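To make the normalization concrete, here is a minimal usage sketch (values are illustrative; the output shown assumes the module defaults):

```hcl
module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace  = "eg"
  stage      = "prod"
  name       = "app"
  attributes = ["cluster"]
}

# module.label.id should come out as "eg-prod-app-cluster":
# inputs lower-cased and joined with the default "-" delimiter
```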
Hi There… Quick query pls:
Does EKS cluster creation using terraform create kubeconfig at ~/.kube by default? Or do we have to configure it manually every time we create the cluster, since some of the fields are cluster specific?
Have you done a POC yet with creating an EKS cluster using Terraform ?
Not yet … in the process of doing it; hence arrived at this step.
Have you guys got any info on POC for EKS with terraform ?
@vvsp here are the terraform modules that we have for EKS:
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
kubeconfig is already in the cluster after you create it, you just need to read it from there
aws eks update-kubeconfig reads it from the cluster and saves it on the file system
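For the record, the one-liner looks like this (cluster name and region are placeholders):

```shell
# Fetches the cluster endpoint/CA and merges a context into ~/.kube/config
aws eks update-kubeconfig --name my-cluster --region us-east-1
```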
also, each module has a complete working example, e.g. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf
and Terratest that deploys the example on AWS and checks for correct outputs https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/test/src/examples_complete_test.go
also, take a look at this comment from @Erik Osterman (Cloud Posse) https://github.com/terraform-aws-modules/terraform-aws-eks/issues/635#issuecomment-567691445 (regarding another EKS module from #terraform-aws-modules) describing why we have 4 different EKS modules instead of just one
A general question for users and contributors of this module My feeling is that complexity getting too high and quality is suffering somewhat. We are squeezing a lot of features in a single modul…
2020-01-03
Hello, I am trying to solve terraform drift and I ran into an error
Error: module “my_rds_resource”: “performance_insights_enabled” is not a valid argument
For this particular resource, it’s in 0.11.14 using the terraform-aws-rds-aurora 1.21.0 release. I see that performance_insights_enabled has been supported since 1.0.0 release, any idea why my module kicks back this error?
@NVMeÐÐi for this module https://github.com/terraform-aws-modules/terraform-aws-rds-aurora, ask in #terraform-aws-modules channel
Terraform module which creates RDS Aurora resources on AWS - terraform-aws-modules/terraform-aws-rds-aurora
2020-01-06
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jan 15, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Hey all, Ive got a warning i’m not sure how to get past. Was hoping some of the experts here can point me in the right direction
Warning: Interpolation-only expressions are deprecated
on ../main.tf line 95, in resource "aws_lambda_function" "publisher":
95: source_code_hash = "${filebase64sha256("${path.module}/publisher.zip")}"
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 5 more similar warnings elsewhere)
I’m not really sure how to stop the warning, I checked the online documentation and everything seems correct
To silence this warning, remove the "${ sequence from the start and the }" sequence from the end of this expression, leaving just the inner expression.
So I believe you just need:
source_code_hash = filebase64sha256("${path.module}/publisher.zip")
Anyone have good links to TF directory structures?
so far I’ve compiled up https://www.2ndwatch.com/blog/how-we-organize-terraform-code-at-2nd-watch/ , https://www.oreilly.com/library/view/terraform-up-and/9781491977071/ch04.html and http://saurabh-hirani.github.io/writing/2017/08/02/terraform-makefile
Going through a major refactor of TF configs currently and would like to get it right the first time around…
Boston DevOps had a long conversation on the topic, but it’s sadly now lost in Slack.
https://www.reddit.com/r/Terraform/comments/bskqbg/advice_for_folder_structure/ may have some useful insights though.
Thanks @Adam Blackwell
Also, check this out https://archive.sweetops.com/
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
if you search for “folder structure” there will be some past discussions
Great, thanks!
https://www.terraform.io/docs/cloud/workspaces/repo-structure.html seems like a good read as well
Ah yes! forgot they have those excellent docs now….
Hey, in https://github.com/cloudposse/terraform-aws-rds you do not have a solution to update the CA cert (the ca_cert_identifier parameter was added to aws_db_instance recently). This needs to be added, or do you have some other solution?
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
@Mateusz Kamiński if it was added to aws_db_instance recently, it needs to be implemented. PRs are always welcome, thanks
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
2020-01-07
2020-01-08
I’m using terraform-rds at the moment to create a Postgres RDS instance, using engine version 9.6.15. Things work OK with the DB parameter group, but when it runs to create the DB option group, it returns the error below:
InvalidParameterCombination: Cannot find major version 9 for postgres
I looked at the AWS documentation and it seems there isn’t any DB option group for Postgres yet, so how can I bypass this resource in the module?
Hi team, I am using the terraform-aws-rds-cluster module to create an Aurora MySQL read replica of an RDS MySQL instance (to transition over to Aurora), with replication_source_identifier set to the RDS MySQL instance.
However the creation hangs in Terraform but succeeds in the console. It’s most likely an issue with the AWS provider, but I am curious whether anyone else has come across this issue or has a workaround that was successful?
Hey @Erik Osterman (Cloud Posse) or @Andriy Knysh (Cloud Posse), can one of you tell me how you’re doing terraform md automation? (If you are)
Hi Callum. What’s terraform md automation?
Sorry @Andriy Knysh (Cloud Posse) if that was confusing, wondering if you had any way of automatically generating terraform inputs/outputs in an md format
My team uses https://github.com/segmentio/terraform-docs
Generate documentation from Terraform modules in various output formats - segmentio/terraform-docs
Yes we use it as well. In build-harness we have a Make target, make readme, that generates md files from terraform
Greetings all, We have a number of existing terraform modules that we are looking to expand into a multi-region/multi-env deployment process. Just curious if anyone has any recommendations or instructions on repo configuration/setup for modules that would be deployed concurrently? Thanks in Advance.
for now, we are mostly concerned with autoscaling groups, albs, rt53 records
good evening
hi guys, I’m using terraform-aws-elastic-beanstalk-environment
which I link to an already created VPC, and I keep getting into the problem that it tries to create a security group twice (see the error below).
Error creating Security Group: InvalidGroup.Duplicate: The security group 'xxxx' already exists for VPC 'vpc-xxxxxxxxxxx'
status code: 400
@George Platon did you check in the AWS console if a SG with that name already exists?
also take a look at this example https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Yes, it does not exist. I do delete everything, and then it’s gone
I am also using a rds instance from cloudposse, which is in the same VPC
the example above gets deployed automatically by terratest
I’ll try to run the complete example, although my parameters are pretty similar
was working last time we updated the module
I’m using something of this kind.
module "vpc" {
source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.7.0"
// General
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
// Network
cidr_block = var.vpc_cidr_block
}
module "subnets" {
source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
// General
availability_zones = var.availability_zones
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
// Network
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = var.vpc_gateway_enabled
nat_instance_enabled = false
}
// Database
module "rds_instance" {
// possibly put this into a separate vpc without access to outside
// make sure we use a version - e.g ?ref=tags/0.9.3
source = "git::https://github.com/cloudposse/terraform-aws-rds.git"
// General
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
multi_az = var.multi_az
// Network
vpc_id = module.vpc.vpc_id
associate_security_group_ids = [module.vpc.vpc_default_security_group_id]
security_group_ids = [module.vpc.vpc_default_security_group_id]
subnet_ids = module.subnets.private_subnet_ids
// Rds specific
database_name = var.rds_db_name
database_user = var.rds_db_user
database_password = var.rds_db_password
database_port = var.rds_db_port
storage_type = var.rds_storage_type
storage_encrypted = var.rds_storage_encrypted
allocated_storage = var.rds_allocated_storage
engine = var.rds_engine
engine_version = var.rds_engine_version
major_engine_version = var.rds_major_engine_version
instance_class = var.rds_instance_class
db_parameter_group = var.rds_db_parameter_group
publicly_accessible = var.rds_publicly_accessible
apply_immediately = var.rds_apply_immediately
deletion_protection = var.rds_deletion_protection
db_parameter = [
{
name = "myisam_sort_buffer_size"
value = "1048576"
apply_method = "immediate"
},
{
name = "sort_buffer_size"
value = "2097152"
apply_method = "immediate"
}
]
}
// ElasticBeanStalk
module "elastic_beanstalk_application" {
source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.4.0"
// General
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
description = "Elastic_beanstalk_application"
}
module "elastic_beanstalk_environment" {
source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.17.0"
// General
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
region = var.region
description = "Elastic_beanstalk_environment"
availability_zone_selector = "Any 2"
// Configuration
// see https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment for more details
dns_zone_id = var.eb_dns_zone_id
elastic_beanstalk_application_name = module.elastic_beanstalk_application.elastic_beanstalk_application_name
instance_type = var.eb_instance_type
autoscale_min = var.eb_autoscale_min
autoscale_max = var.eb_autoscale_max
updating_min_in_service = var.eb_updating_min_in_service
updating_max_batch = var.eb_updating_max_batch
environment_type = var.eb_environment_type
loadbalancer_type = var.eb_loadbalancer_type
vpc_id = module.vpc.vpc_id
loadbalancer_subnets = module.subnets.public_subnet_ids
loadbalancer_security_groups = [module.vpc.vpc_default_security_group_id]
// loadbalancer_managed_security_group = [module.vpc.vpc_default_security_group_id]
application_subnets = module.subnets.private_subnet_ids
allowed_security_groups = [module.vpc.vpc_default_security_group_id]
enable_stream_logs = var.eb_enable_stream_logs
keypair = var.eb_sshkeypair
solution_stack_name = var.eb_solution_stack_name
env_vars = {
db_arg = module.rds_instance.instance_endpoint
db_host = module.rds_instance.instance_endpoint
}
}
I’ll try to run the complete example
oh ok, you are creating RDS together with EB
and provide the same namespace, stage and name
yes
both prob create the same SG (don’t remember)
can be the case, yes
try to add some attributes to any of those
a quick fix would be to give different names
ok, will add now
e.g. attributes = [“rds”] to RDS
or whatever name for the attribute you like
sure, doing it now
is it a bad practice to have similar names for multiple resources?
we name all resources namespace-stage-name-attributes, which is perfectly fine
the issue you encountered is what we noticed after the modules were created. Some modules create resources like IAM Roles and Security Groups using the same pattern
and those collide with same resources created by other modules
so we started to add attributes to Roles and SGs inside modules, but not all of them have been updated yet
ok then I will try to destroy them and run them again with the new attributes, and then check manually if the RDS SG has the correct naming, and also the ElasticBeanstalk one
it worked all fine
thanks a lot @Andriy Knysh (Cloud Posse)
2020-01-09
Hi all, someone yesterday asked a question here: https://sweetops.slack.com/archives/CB6GHNLG0/p1578496997120600 and I have a very similar question. My team is currently trying to figure out the best way to deploy resources to different AWS accounts in multiple regions using modules in terraform. For example, we’d want to deploy an EC2 instance into account A in us-east-1 and us-west-1, and deploy that same instance into account B in the same regions. Is anyone doing anything like this, and if so, how are you structuring your terraform to do so?
Hello! Looking for some pointers here. This deploys the project to codebuild as expected, but doesn’t want to properly link to the private repo. I have to go into the UI and change it from Public repository
to Repository in my GitHub account
and find it in the dropdown every time. As you can see at the bottom, I’m mirroring what TF is reporting the config as when it’s setup properly in AWS, but that doesn’t seem to fix it. Any help sincerely appreciated, this is a manageable nuisance, but a nuisance nonetheless.
This is copied/modified from cloudposse/terraform-aws-codebuild, if that helps in any way
Hi the example of https://github.com/cloudposse/terraform-aws-ecs-web-app/tree/master/examples/without_authentication doesn’t run.
I keep getting this response:
Error: Error in function call
on .terraform/modules/web_app.alb_target_group_cloudwatch_sns_alarms/main.tf line 49, in locals:
49: alarm_actions = coalescelist(var.alarm_actions, var.notify_arns)
|----------------
| var.alarm_actions is empty list of string
| var.notify_arns is empty list of string
Call to function "coalescelist" failed: no non-null arguments.
Error: Error in function call
on .terraform/modules/web_app.alb_target_group_cloudwatch_sns_alarms/main.tf line 50, in locals:
50: ok_actions = coalescelist(var.ok_actions, var.notify_arns)
|----------------
| var.notify_arns is empty list of string
| var.ok_actions is empty list of string
Call to function "coalescelist" failed: no non-null arguments.
Error: Error in function call
on .terraform/modules/web_app.alb_target_group_cloudwatch_sns_alarms/main.tf line 51, in locals:
51: insufficient_data_actions = coalescelist(var.insufficient_data_actions, var.notify_arns)
|----------------
| var.insufficient_data_actions is empty list of string
| var.notify_arns is empty list of string
When trying to replicate locally without using the example I get the same errors
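Not sure of the upstream fix, but the error says every list feeding coalescelist is empty. One workaround sketch (assuming notify_arns is exposed on the module, as the error output suggests; the topic name is hypothetical) is to pass a real SNS topic ARN:

```hcl
resource "aws_sns_topic" "alarms" {
  name = "web-app-alarms" # hypothetical topic name
}

module "web_app" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=master"

  # ...the example's other inputs...

  # gives coalescelist() a non-empty fallback for alarm/ok/insufficient-data actions
  notify_arns = [aws_sns_topic.alarms.arn]
}
```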
When using terragrunt’s plan-all command, I’ve got the following directory structure
.
├── global
│ ├── main.tf
│ └── terragrunt.hcl
├── terragrunt.hcl
└── us-east-1
├── main.tf
└── terragrunt.hcl
main.tf inside the us-east-1 folder has a variable which refers to module.route53_zone.zone_id, which is an output of the module declared in the global folder’s main.tf. However I get the following error:
Error: Reference to undeclared module
on main.tf line 15, in module "acm":
15: zone_id = module.route53_zone.zone_id
No module call named "route53_zone" is declared in the root module.
[terragrunt] 2020/01/09 10:50:41 Encountered the following errors:
Hit multiple errors:
exit status 1
Is this even possible with terragrunt? Or am I doing something wrong?
You need to add a dependency
block in the .hcl file that’s referencing the other. TG syntax is a bit different.
inputs = {
vpc_id = dependency.vpc.outputs.vpc_id
}
dependency "vpc" {
config_path = "../../network/vpc"
}
now it says
/global/terragrunt.hcl is a dependency of /us-east-1/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.
based on your example, I have
inputs = {
zone_id = dependency.route53_zone.outputs.zone_id
}
dependency "route53_zone" {
config_path = "../global"
}
do I need mock outputs?
oh I think I do
So the actual terraform (not terragrunt.hcl) for whatever you’re using for your global should have an output called zone_id - outputs here refers literally to what’s in the outputs for that module.
You can’t (I don’t think) access resources directly in TG. Add an output "route_53_zone_id" with the proper value to your outputs.tf in your global module, and then it’s dependency.route53_zone.outputs.route_53_zone_id
You will also need to re-apply your global module for those outputs to be detected
@slaughtr do you know how I can pass the input to my module? the .tf file in my us-east-1 folder has the following
provider "aws" {
region = "us-east-1"
}
terraform {
backend "s3" {}
}
module "acm" {
source = "terraform-aws-modules/acm/aws"
version = "2.3.0"
domain_name = "tools.domain.com"
zone_id = ??
}
and the .hcl file has
include {
path = find_in_parent_folders()
}
dependency "global" {
config_path = "../global"
mock_outputs = {
zone_id = "Z3P5QSUBK4POTI"
}
}
inputs = {
zone_id = dependency.global.outputs.zone_id
}
how do i pass zone_id input as a variable to the module?
To the acm module? You’d have a variable "zone_id" and then zone_id = var.zone_id
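i.e. something along these lines in the module config (the variable name is assumed to match the terragrunt input):

```hcl
variable "zone_id" {
  type = string
}

module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "2.3.0"

  domain_name = "tools.domain.com"
  zone_id     = var.zone_id # populated from terragrunt inputs
}
```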
I’m getting the below error. Is this the right forum to ask for help?
terraform init
Initializing modules...
Downloading cloudposse/ecs-container-definition/aws 0.21.0 for ecs-container-definition...
Error: Failed to download module
Could not download module "ecs-container-definition" (ecs.tf:106) source code
from
"https://api.github.com/repos/cloudposse/terraform-aws-ecs-container-definition/tarball/0.21.0//*?archive=tar.gz":
Error opening a gzip reader for
My terraform file contains
module "ecs-container-definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.21.0"
}
The URL does not seem to resolve correctly. Does anybody here know how to get this to work? I believe this worked for me 2 days ago
What do you have for your source value?
module "ecs-container-definition" { source = "cloudposse/ecs-container-definition/aws" version = "0.21.0"
ohhhhhhh
I think this was a bug in terraform
are you running the latest terraform?
yeah latest version 0.12.18
on windows
Maybe try using a git:: source? IE source = "git::https://github.com/cloudposse/terraform-aws-dynamodb.git?ref=tags/0.11.0"
wait, they now have .19. Let me try that
Hello all, In the last 24 hours, all of our terraform-null-label modules started failing, with the following error, anyone have any ideas?
Error: Failed to download module
Could not download module "s3_bizrewards_dev_label" (s3_bizrewards.tf:31)
source code from
"https://api.github.com/repos/cloudposse/terraform-null-label/tarball/0.16.0//*?archive=tar.gz":
Error opening a gzip reader for
/var/folders/1d/gpvdrwrd0y1_d0jv64w76j645nyvdq/T/getter001152442/archive: EOF.
this is what I was thinking about
see the thread/discussion below that
Thanks for the quick help
0.12.18 -> 0.12.19 fixed it
Annoyingly re-sharing this since it got buried. I’m fixing a lot of stuff in the coming days that touches codebuild so it would be great to figure this out before I do that. Seriously, thanks for any help!
Hello! Looking for some pointers here. This deploys the project to codebuild as expected, but doesn’t want to properly link to the private repo. I have to go into the UI and change it from Public repository
to Repository in my GitHub account
and find it in the dropdown every time. As you can see at the bottom, I’m mirroring what TF is reporting the config as when it’s setup properly in AWS, but that doesn’t seem to fix it. Any help sincerely appreciated, this is a manageable nuisance, but a nuisance nonetheless.
i don’t know specifically what’s causing this and haven’t/can’t look right now, but sounds like it could be related to something like ignore_changes somewhere.
maybe grep through .terraform/modules and see if you see something related to that
Hmm didn’t even consider that. I’ll look around and see what I can find. And maybe - even if it isn’t really a “fix” - I can use ignore_changes or something to prevent it from reverting what I do in the console and save myself some headache
ya, or something like that…
I’ve got a terraform module that creates resources in two different aws accounts. I handle this by doing the following:
provider "aws" {
region = "us-west-2"
profile = "profile1"
}
provider "aws" {
region = "us-west-2"
profile = "profile2"
alias = "digi"
}
I’m trying to utilize terragrunt to deploy many modules. This becomes difficult since the above method no longer works. Has anyone encountered this? If so, how have you gotten around it? I don’t think Terragrunt supports multiple providers like this
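One possible pattern (assuming a Terragrunt version that supports generate blocks; paths and profiles here are illustrative) is to have Terragrunt write the aliased providers into each module:

```hcl
# terragrunt.hcl - sketch only
generate "providers" {
  path      = "providers.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region  = "us-west-2"
  profile = "profile1"
}

provider "aws" {
  region  = "us-west-2"
  profile = "profile2"
  alias   = "digi"
}
EOF
}
```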
Hey Brij, I’ve used multiple providers before in terragrunt modules without issue.
What error are you seeing?
no errors yet, I’m trying to understand how terragrunt will manage to use different aws profiles. how have you used multiple providers?
(also we have #terragrunt - might get more feedback there)
oh woops - thanks
2020-01-10
so this seems like a global issue?
strange, yesterday it was working fine with 0.12.18
but upgrading to 0.12.19 fixed the issue
i bet it was related to the checkpoint api being broken yesterday, https://github.com/hashicorp/terraform/issues/23816
https://checkpoint-api.hashicorp.com/v1/check/terraform currently returns the following: { "product": "terraform", "current_version": "0.11.19", "curren…
Hey I was hoping someone could let me know if I’m on the right track. I’m currently setting up a Jenkins pipeline to provision resources for a startup I’m freelancing for. I was planning on using a multibranch repository with each branch for one environment. Is this an alright way to do it? Or should I do something else?
Hi @Rob Rose this would fit into the #release-engineering channel.
Branch/env strategy depends on the team’s exact workflow but normally develop
to an integration environment and master
to Production is the minimum.
Thanks @Joe Niland currently the client has a couple customers and they want one production environment per customer plus one develop environment per developer as well as staging. Trying to figure out the best way to orchestrate that all using Jenkins. Do you know of any examples?
It sounds pretty standard. Create a pipeline and define variables that will change per environment. Define build and deploy stages.
This looks related: https://jenkins.io/doc/tutorials/build-a-multibranch-pipeline-project/
Jenkins – an open source automation server which enables developers around the world to reliably build, test, and deploy their software
Do they definitely need separate branches per customer? That seems like a potentially more difficult way to manage changes across multiple customers.
@Joe Niland Don’t definitely need separate branches per customer but I’m not sure how else to persist variables. Not too familiar with Jenkins so I’ll have to keep digging.
@Rob Rose in my experience, it’s normally easier to use Env vars for system-wide variables, and then use a database or a config file (if secure) for customer-specific variables
Hey, can someone help me terraform an EKS cluster? I’m trying to use the aws_eks_node_group resource but I can’t figure out how to pass it the workers’ security group, so when I deploy I get workers that can’t connect to the cluster because they don’t have the right security group. Is that resource supposed to generate the correct security group or something? Also, what security group do I use as the source_security_group_id in the cluster security group rules?
@Philip L Bankier have you seen our working example here? https://github.com/cloudposse/terraform-aws-eks-node-group
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
see examples/complete for usage of our module that implements the EKS managed node groups
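A minimal invocation might look like the following (a sketch from memory; the input names may differ, so check examples/complete for the authoritative interface):

```hcl
module "eks_node_group" {
  source = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=master"

  # cluster_name wiring assumes an eks_cluster module alongside; names are illustrative
  cluster_name   = module.eks_cluster.eks_cluster_id
  subnet_ids     = var.subnet_ids
  instance_types = ["t3.medium"]
  desired_size   = 2
  min_size       = 2
  max_size       = 4
}
```
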
Hi @Andriy Knysh (Cloud Posse). The previously mentioned NLB module is available at https://github.com/jhosteny/terraform-aws-nlb/. I didn’t realize NLBs could not have security groups assigned when I started, so it has a smaller surface area now. Also, I could not figure out how to get access logs to work with NLBs due to encryption issues (not sure it is possible on NLBs yet), so I left that commented out. Also, tests have not been run, though I modified your ALB tests and expect it should work, or be close to working.
Terraform module to provision a standard NLB for TCP/UDP/TLS traffic https://cloudposse.com/accelerate - jhosteny/terraform-aws-nlb
I am using this for a Concourse installation in ECS, and it seems to be working so far (concourse web launches in Fargate and passes ALB and NLB health checks). I haven’t actually run traffic through the NLB yet, so it may need another tweak or two.
nice work @Joe Hosteny thanks
PRs for Cloud Posse modules are welcome
Hmm, is there a way to issue a PR to transfer a repo? I was not aware of that. I will check it out.
ah no
you want us to use your repo and put it in cloudposse/terraform-aws-nlb
? that would be nice, we’ll look into that
Also, I have some changes that propagate this into terraform-aws-ecs-web-app, as well as some changes that allow that module to run init containers (also with volumes available to the main container after they are done). I’ll issue those as several PRs so you can decide if they are worthwhile.
Yes, feel free to just copy it. I ran the tooling to build a proper README for cloudposse, so you should be able to just clone it and upload to GH
very cool, variable validation coming to tf 0.12.20… https://github.com/hashicorp/terraform/issues/2847#issuecomment-573252616
It would be nice to assert conditions on values, extending the schema validation idea to the actual config language. This could probably be limited to variables, but even standalone assertion state…
2020-01-11
Do any folks here have advice to share on how they manage environments that require a multi-step terraform apply?
For instance, in our environment, we have dependencies between two different terraform config dirs (config dir A references resource ARNs that are created and exist in the output of config dir B).
If config dir A executes apply before resources exist in config dir B, we rely on terraform_remote_state with lookup() and an empty ("") default value.
I’m interested to hear about methods folks have created that track and/or automate the cases where A is waiting for resources in B, and help the system determine and/or trigger a subsequent apply in dir A.
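The pattern described above looks roughly like this (bucket and output names are hypothetical):

```hcl
# Config dir A: read dir B's outputs, tolerating their absence on first apply
data "terraform_remote_state" "b" {
  backend = "s3"
  config = {
    bucket = "example-tfstate"          # hypothetical bucket
    key    = "dir-b/terraform.tfstate"
    region = "us-west-2"
  }
}

locals {
  # Empty string until dir B has been applied; re-run apply in A afterwards
  queue_arn = lookup(data.terraform_remote_state.b.outputs, "queue_arn", "")
}
```
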
We’ve ended up with multiple state files, one per directory, and a bring-up dependency order. For example, VPC early, then things like RDS later. So we have B relying on A but not vice versa. We use remote state for RDS to get the VPC ID and subnets etc. We’ve built some mechanisms for changing subnets etc., but major changes would require a new VPC (or whatever the lower level is) and then a migration. Manageable and typical for infra primitives. Haven’t done a CI job for running terraform yet. Current thinking is we’re mostly going to run isolated terraform in the changed dir on merge to master.
Could you have CI run A on changes to A or B directories? Might be a noop and slight job duration increase most of the time… Catch if any real changes in the plan output which you’d be checking anyway for B?
@tamsky use Terragrunt.. https://terragrunt.gruntwork.io/docs/getting-started/configuration/
dependencies {
paths = ["../vpc"]
}
Learn how to configure Terragrunt.
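Newer Terragrunt versions also support a dependency block that wires outputs between stacks directly, with mock values for the not-yet-applied case. A sketch (output names are hypothetical):

```hcl
dependency "vpc" {
  config_path = "../vpc"

  # Used during plan/validate before ../vpc has been applied
  mock_outputs = {
    vpc_id = "vpc-00000000"
  }
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}
```
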
2020-01-12
2020-01-13
guys could this https://github.com/cloudposse/terraform-aws-codebuild/pull/50 be merged? Seems that it will work and it will help big time!
In order to speed up docker build process in aws codebuild, we can enable local cache for caching docker layer. This PR add option to enable LOCAL_CACHE in aws codebuild
we’ll review, thanks for the PR
well, it’s not mine but thx
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jan 22, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Hi, I’m using https://github.com/cloudposse/terraform-aws-alb-ingress.git but I need to specify 15 ingress rules, some by path and some by host-header. From what I understand, with this module I can’t define both, because of:
count = length(var.unauthenticated_paths) > 0 && length(var.unauthenticated_hosts) == 0 ? var.unauthenticated_listener_arns_count : 0
@jose.amengual yea, I think it could use some refactoring for that use-case. We were a bit constrained with HCLv1 syntax, but I think with HCLv2, it can be improved. When we upgraded it to HCL2, didn’t change the interface or leverage the new features of HCL2.
ok, yes we could use some dynamics for that
ok for now I will just do it in plain tf without the module
I will see if I have some time and send a PR over
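For reference, doing it in plain TF without the module can be one aws_lb_listener_rule per path. A hedged sketch using the condition syntax of the AWS provider 2.x era; variable names are hypothetical:

```hcl
resource "aws_lb_listener_rule" "unauthenticated_paths" {
  count        = length(var.unauthenticated_paths)
  listener_arn = var.listener_arn
  priority     = 100 + count.index   # keep priorities unique per rule

  action {
    type             = "forward"
    target_group_arn = var.target_group_arn
  }

  condition {
    field  = "path-pattern"
    values = [var.unauthenticated_paths[count.index]]
  }
}
```

An analogous resource with `field = "host-header"` covers the host-based rules.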
2020-01-14
If I have an object of the following type:
type = list(object({
sqs_arn = string
bucket_name = string
}))
is there any way for me to get a list of all the sqs_arns?
I want to say there is but for the life of me I can’t figure it out
Got it:
[for i in var.additional_forwarding_configs : i.sqs_arn]
works, also:
variable "additional_forwarding_configs" {
default = [
{
sqs_arn = "1"
bucket_name = "b1"
},
{
sqs_arn = "2"
bucket_name = "b2"
}
]
type = list(object({
sqs_arn = string
bucket_name = string
}))
}
output "test" {
value = var.additional_forwarding_configs.*.sqs_arn
}
2020-01-15
Could anyone help with a small thing, please? It’s about https://github.com/cloudposse/terraform-aws-codebuild
Every time I execute terraform plan I can see this diff (without any changes):
- source {
- buildspec = "cicd/swaggerspec.yml" -> null
- git_clone_depth = 0 -> null
- insecure_ssl = false -> null
- report_build_status = false -> null
- type = "CODEPIPELINE" -> null
}
+ source {
+ buildspec = "cicd/swaggerspec.yml"
+ report_build_status = true
+ type = "CODEPIPELINE"
}
Any idea how to get rid of it?
Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild
I dunno about the module specifically… But seems there is a diff between the state and desired state at least …
report_build_status = false
report_build_status = true
Did you apply the current diff above at least once?
it was report_build_status
thanks a lot
Seems like using an aws_launch_config and setting spot_price = "" no longer launches spot instances?
Today, we’re excited to announce the beginnings of a new direction for the Registry. We’re renaming it as the Terraform Registry and expanding it to include Terraform providers as …
Hi team, I have a question about launch templates and AWS ASG:
resource "aws_launch_template" "example" {
name_prefix = "example"
image_id = "${data.aws_ami.example.id}"
instance_type = "c5.large"
}
resource "aws_autoscaling_group" "example" {
availability_zones = ["us-east-1a"]
desired_capacity = 1
max_size = 1
min_size = 1
mixed_instances_policy {
launch_template {
launch_template_specification {
launch_template_id = "${aws_launch_template.example.id}"
}
override {
instance_type = "c4.large"
weighted_capacity = "3"
}
override {
instance_type = "c3.large"
weighted_capacity = "2"
}
}
}
}
Can I have an ASG with 2 overrides and not weighted_capacity?
I think I found my answer by reading this:
override - (Optional) List of nested arguments provides the ability to specify multiple instance types. This will override the same parameter in the launch template. For on-demand instances, Auto Scaling considers the order of preference of instance types to launch based on the order specified in the overrides list. Defined below.
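So, per the docs, weighted_capacity can simply be omitted and the list order expresses the on-demand launch preference, e.g.:

```hcl
resource "aws_autoscaling_group" "example" {
  availability_zones = ["us-east-1a"]
  desired_capacity   = 1
  max_size           = 1
  min_size           = 1

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.example.id
        version            = "$Latest"
      }

      # No weighted_capacity: on-demand launches prefer types in list order
      override {
        instance_type = "c4.large"
      }
      override {
        instance_type = "c3.large"
      }
    }
  }
}
```
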
2020-01-16
Niche question but any GCP users have tried declaring their build steps of Cloud Build triggers in Terraform?
yeh - it was a bit awkward but ok for simple builds
interesting, I was looking into making a module to standardize build steps across the org. cloudbuild has no way to sort of share steps across multiple cloudbuild files (e.g. if all builds have a common download encrypted key step and the location of that key happens to change)
but good to know in advance that it’s not as great as it seems
It was more that it got difficult for different teams to own their own builds rather than functionally broken things
probably wouldn’t try doing it again
Guys, I am using WAF regional web ACLs with Fortinet managed rules from the marketplace. That Fortinet rule set ID changes from region to region, maybe even from account to account, so hardcoding the ID can’t work. I can’t find a way to dynamically look up the rule ID (via a data source). I was only able to find the rule ID by creating a web ACL by hand through the console, attaching the Fortinet rule to it, and describing the web ACL through the AWS CLI, whose output contains the rule ID. Does anyone have a similar issue? Any ideas are welcome. Thanks
2020-01-17
Nicki Watt, OpenCredo’s CTO, explains how her company uses HashiCorp’s stack—and particularly Terraform—to support its customers in moving to the world of CI/CD and DevOps.
“The Terralith” very apropos
nice explanation for newcomers to terraform (And why to avoid them)
What happens when you have an explicit dependency on a resource that has a count of 0?
So something like:
resource "some_resource" "thing" {
  count = 0
}

resource "other_resource" "thang" {
  # ...
  depends_on = [some_resource.thing]
}
i’d say your config is broken
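For what it’s worth, this is easy to check with null_resource. In 0.12, depends_on addresses the resource as a whole rather than an instance, so a zero-count target should just yield an empty dependency rather than an error — but verify against your own version; this is a sketch, not a guarantee:

```hcl
resource "null_resource" "thing" {
  count = 0
}

resource "null_resource" "thang" {
  # depends_on references the resource, not an indexed instance;
  # with count = 0 there is simply nothing to wait on.
  depends_on = [null_resource.thing]
}
```
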
2020-01-18
modules are getting an upgrade… https://github.com/hashicorp/terraform/issues/10462#issuecomment-575738220
Hi there, Terraform Version 0.8.0 rc1+ Affected Resource(s) module Terraform Configuration Files module "legacy_site" { source = "../../../../../modules/site" name = "foo-s…
CC @maarten
2020-01-20
Hi, do you recommend using multiple tfstate files, per environment and per tool, as explained in the post below? The post is from 2016 and I wonder if it’s still the best way to go (I am currently struggling with a single tfstate file). Or should I go with workspaces? Which path did you choose?
https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa
A guide to file layout, isolation, and locking for Terraform projects
Hey, this article is from 2016 but was also updated in 2019. I would also recommend moving to smaller .tfstate files
We are not using workspaces atm. To my understanding, workspaces work with different directories in the backend, so I don’t see any benefit there compared to just using different directories
ok thanks, I will move to multiple terraform state.
Yep, multiple tfstate ftw.
We’re doing it without workspaces
In parallel with using multiple remote statefiles, did you set up a deployment pipeline for each env / component, or did you stick to a single one?
right now we’re not using a ci/cd pipeline for the infrastructure stuff as it is relatively static for now
Fortunately my code is organized by sub-module (kubernetes, vm_linux, vm_windows, sql server, network, mgmt). Each will have its own independent tfstate.
how should I migrate from one tfstate to several one ?
with a new terraform init in a sub folder (terraform init; terraform plan) to list the resources already deployed.
should I manually amend the new tfstate files?
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jan 29, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Has anybody ever run into an error that just says Invalid Parameter with no other info? The DEBUG output shows a 400 Bad Request from AWS
I think it would help if you share a little bit more context like:
• what is the terraform provider
• how are you authenticating with AWS (e.g. through SSO or access keys)
• if this was working and recently stopped
I was able to find the issue. It had to do with target group not being connected to the ALB, due to neither unauthenticated_hosts/_paths parameter being passed in to terraform-aws-alb-ingress.
Not sure why TF was swallowing the error message
Saw a neat demo today by @marcinw of his new SaaS (spacelift.io). They’ve built something similar to Terraform Cloud, but some nice differentiators:
• Integration with Open Policy Agent so you can set policies that operate on the output of the terraform plan, but also other things like time-of-day.
• Bring-your-own-docker-container model so it’s easier to run custom providers and depend on other tools
• No hardcoded AWS credentials. Just grant access to their principal, the way datadog works.
If it sounds interesting, you can ping him for a demo.
Thanks for the shout-out @Erik Osterman (Cloud Posse) . If anyone wants a demo or just wants to try it out (it’s in private beta) please give me a shout, either here or through the contact form on https://spacelift.io
Hi Marcin! I’d like to see the demo.
Yeah, I got your email alright. I’ll whitelist you and let you play around. If you want a live demo afterwards, give me a shout.
2020-01-21
Policy-based control for cloud native environments
There’s an example of using OPA with terraform. Pretty neat.
https://github.com/fugue/regula <- similar thing from fugue
Regula checks Terraform for AWS security and compliance using Open Policy Agent/Rego - fugue/regula
In GitHub actions, are secrets shared between actions? So if I put my AWS credentials there, does it mean that pretty much anyone with push access to the repo can use it for pretty much any purpose? Or is there a way to enforce some sort of policy there, too?
If one really wanted to, it’s possible for a user with push access to steal those, but by default GitHub Actions tries to make this difficult.
This also makes it a real pain for testing PRs from forks on open source repos.
ah cool, so OPA under the hood
2020-01-22
Write tests against structured configuration data using the Open Policy Agent Rego query language - instrumenta/conftest
This is one is interesting because it can operate on HCL, too.
in a brand new aws account with nothing in it initially, how do you all handle creating some sort of iam role/user which carries out tf applies
Manually
kind of a chicken n egg situation
Yup. Can’t think of a clever solution to that. One of the reasons I find Google’s IAM more elegant because you can both create a project and add a service account to it in Terraform.
Create the account with aws organizations, assume the role it creates in the account
we have an org management project that is run against the org root - it handles creating sub accounts and then assuming roles into those to create the baseline IAM setup
2020-01-23
I have a route53 module which creates a route53 zone (among some other operations) and an acm module. When running terraform apply, I get the following output:
module.route53_zone.aws_route53_record.digital_ns: Creating...
module.acm.aws_acm_certificate_validation.this[0]: Creating...
module.acm.aws_acm_certificate_validation.this[0]: Still creating... [10s elapsed]
The acm validation won’t pass until that route53 record is created. Can I force the order here, or place a dependency on modules?
terraform does not support dependencies on modules yet as far as I know
what about using https://www.terraform.io/docs/commands/apply.html#target-resource
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
and do two applies: first with --target for the zone, second a plain apply for everything else
Hey how do you guys work around https://github.com/hashicorp/terraform/issues/4775 when using MySQL instances in a private subnet?
I'd like to use Terraform's PostgreSQL provider to provision some databases on an AWS RDS instance in a private subnet (with Terraform running on a host outside of my VPC). It doesn't s…
- practice gitops
- run something like
atlantis
on ECS Fargate
(@marcinw might have some other ideas )
Re: 2 -> I’d personally recommend against running atlantis on Fargate for anything non-trivial, because with Fargate you have no guarantee that your task will stay up, and if they reap your task while running terraform apply, then best of luck cleaning up the mess. Just get a single EC2 machine for your ECS task and put it there.
The thing @Erik Osterman (Cloud Posse) probably meant when mentioning myself was that I guess you could put a little Lambda in your VPC and bounce your VPC-internal requests off of it - I’m currently investigating this approach for the Terraform SaaS I’m working on - https://spacelift.io
Also re: running Terraform in your VPC, it’s a bit of a chicken-and-egg problem because something has to set up the VPC itself
Hey folks, I have the eks_node_group working, but am hitting a problem with EFS allowing connections. I need to update the EFS security group to allow the node group’s SG, but the group gets a random SG from the template.
Anyone address this yet?
hmm interesting. What about allowing CIDR blocks in SG?
or writing the SG from the template into SSM and then reading it from there and adding to EFS SG
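The SSM idea could be sketched like this (attribute paths and parameter names are assumptions worth double-checking; in particular, the node group exposes a remote-access SG only when remote_access is configured):

```hcl
# Publish the node group's auto-generated SG ID to SSM...
resource "aws_ssm_parameter" "node_sg" {
  name  = "/eks/example/node-sg-id"   # hypothetical path
  type  = "String"
  value = aws_eks_node_group.default.resources[0].remote_access_security_group_id
}

# ...then, in the EFS stack, read it back and open NFS from the nodes
data "aws_ssm_parameter" "node_sg" {
  name = "/eks/example/node-sg-id"
}

resource "aws_security_group_rule" "efs_from_nodes" {
  type                     = "ingress"
  from_port                = 2049   # NFS
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = aws_security_group.efs.id
  source_security_group_id = data.aws_ssm_parameter.node_sg.value
}
```
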
I think I am going to try the CIDR approach. I left off last night looking at that and looking up the template’s group
looks like that might’ve worked just fine
2020-01-24
Would it be unwise to use terraform to install fluxcd at the end of an EKS provision?
module "alb_target_group_alarms" {
source = "git::<https://github.com/cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms.git?ref=tags/0.7.0>"
...
insufficient_data_actions = []
...
}
what solution would fix this when it is set to [] or null?
Error: Error in function call
on .terraform/modules/core.alb_target_group_alarms/main.tf line 51, in locals:
51: insufficient_data_actions = coalescelist(var.insufficient_data_actions, var.notify_arns)
|----------------
| var.insufficient_data_actions is null
| var.notify_arns is list of string with 1 element
Call to function "coalescelist" failed: panic in function implementation:
value is null
goroutine 3185 [running]:
runtime/debug.Stack(0xc000cb2230, 0x25dc320, 0x2d91510)
/opt/goenv/versions/1.12.4/src/runtime/debug/stack.go:24 +0x9d
github.com/zclconf/go-cty/cty/function.errorForPanic(...)
/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/zclconf/[email protected]/cty/function/error.go:44
github.com/zclconf/go-cty/cty/function.Function.Call.func1(0xc000cb2568,
0xc000cb2588)
/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/zclconf/[email protected]/cty/function/function.go:239
+0x8f
panic(0x25dc320, 0x2d91510)
/opt/goenv/versions/1.12.4/src/runtime/panic.go:522 +0x1b5
github.com/zclconf/go-cty/cty.Value.Lengt
Terraform module to create CloudWatch Alarms on ALB Target level metrics. - cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms
It seems compact(coalescelist([], [""])) is needed on that line so [] is the result
This may be a dumb question, but how do I get a single NACL ID from the data "aws_network_acls" source to add a route in aws_route for the route_table_id attribute? I’ve tried using the element function but that doesn’t work.
Can you please share the actual code snippet?
It’s something like this:
data "aws_network_acls" "example" {
vpc_id = var.vpc_id
filter {
name = "tag:Name"
values = ["ACL-Name"]
}
}
resource "aws_route" "route" {
  route_table_id = data.aws_network_acls.example.ids
  ........
}
resource "aws_route" "route" {
  route_table_id = element(data.aws_network_acls.example.ids, 0)
}
What happens when you do that? ^
Also, are you sure the route_table_id expects a network ACL ID? It seems to expect the output of this resource:
Provides details about a specific Route Table
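If the goal really is a route table ID, the matching data source would be something like the following (filter values and the NAT gateway target are hypothetical):

```hcl
data "aws_route_table" "selected" {
  vpc_id = var.vpc_id

  filter {
    name   = "tag:Name"
    values = ["RT-Name"]   # hypothetical tag
  }
}

resource "aws_route" "route" {
  route_table_id         = data.aws_route_table.selected.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = var.nat_gateway_id   # hypothetical target
}
```
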
I think you’re right; it’s a case of staring at the problem for too long.
2020-01-25
2020-01-27
do we have a terraform provider for an Ingress controller on kubernetes?
Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Feb 05, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2020-01-28
Is the terraform-aws-dynamic-subnets module preferred over terraform-aws-multi-az-subnets or is there a reason to use one over the other?
@getSurreal there’s no one best way to do it because it depends on the customer requirements on what you want to achieve
that’s why we decoupled subnets from VPCs
ok. thanks. I guess I need to study them better. On the surface it appears you can get the same results.
because subnetting is a very opinionated topic, especially in established organizations
Do you have plans to update https://github.com/cloudposse/terraform-aws-cloudfront-cdn/releases with support for TF 0.12?
Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - cloudposse/terraform-aws-cloudfront-cdn
yes. we have a few modules not converted to 0.12 yet, this is one of them
will do it as soon as we have time
I used this one recently with good results. https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/README.md
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
2020-01-29
Hello,
I am using remote tfstate and changes are made directly to it with no backup. Do you do tfstate backups and store them remotely?
I see that terraform refresh has a backup option, but as I am using it in CI, local storage is not an option
S3 bucket versioning isn’t enough?
Hi Adrian, I am using Azure, where there is no versioning on Blob storage; also, Azure Snapshot is only for Files, not for Blobs
I just found that snapshots are available per file, not per storage account
When using elasticbeanstalk why does every apply result in setting changes on the elasticbeanstalk app even though it looks like nothing changed?
- setting {
- name = "MinSize" -> null
- namespace = "aws:autoscaling:asg" -> null
- value = "2" -> null
}
+ setting {
+ name = "MinSize"
+ namespace = "aws:autoscaling:asg"
+ value = "2"
}
i don’t understand what’s going on there
and what that -> null is all about
those are known bugs/issues in the provider
terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply setting.1039973377.name: "InstancePort" => "InstancePort" setting.1039973377.n…
we were not able to solve it at that time
did not look into it for the last 3-4 months though, so maybe things could be better now
the issue I suppose is that we provide a set of settings which terraform sends to the AWS API to apply. But the API does not apply all of them since some are not relevant to the particular environment you are building
then terraform reads the settings back and compares with what it has, and always see differences
another one: even if the settings are just for the environment and nothing more, they are returned in a different order and terraform still sees differences
so in short, it could be either one of 1) settings not specific to the environment which AWS just drops; 2) incorrect order. Or a combination of the above
thank you for the explanation
hey there sweetops ninjas. I’m working with the terraform-aws-cloudtrail-s3-bucket module and wondering if there’s a trick to adding a custom policy attribute. Docs simply say “string”, but I can’t get anything to stick. It always overwrites the policy with the default.
module "cloudtrail_s3_bucket" {
source = "git::<https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket.git?ref=master>"
name = "cloudtrail-sandbox-boo"
policy = file("policies/cloudtrail-bucket.json.tpl")
}
any examples would be greatly appreciated. I’ve tried the <<EOF pattern also, both of which work with the aws_s3_bucket_policy resource. But these two battle it out, so no idempotency, which makes me a sad panda.
trying to force policy to a null value also doesn’t work. I’m going to try rolling my own implementation of cloudtrail_s3_bucket using aws_s3_bucket_policy instead, since it works.
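A rolled-your-own version might look like the following sketch (the module output name is a guess; check the module’s outputs before relying on it):

```hcl
module "cloudtrail_s3_bucket" {
  source = "git::https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket.git?ref=master"
  name   = "cloudtrail-sandbox-boo"
}

# Manage the policy as a standalone resource so it isn't overwritten
# by the module's default on each apply
resource "aws_s3_bucket_policy" "cloudtrail" {
  bucket = module.cloudtrail_s3_bucket.bucket_id   # output name assumed
  policy = file("policies/cloudtrail-bucket.json.tpl")
}
```
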
2020-01-30
Hi, has anyone tried Pulumi? I heard a lot of good things about it but I’m not sure if it’s good idea to migrate
there have been a few discussions about it, https://archive.sweetops.com/search?query=pulumi
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
I know python well enough that I really should dig into Pulumi more. But did you know that there is a Pulumi-Terraform ‘bridge’? https://github.com/pulumi/pulumi-terraform
A bridge between Pulumi and Terraform. Contribute to pulumi/pulumi-terraform development by creating an account on GitHub.
I have a hard time mentally reconciling shifting to an imperative model for what I believe should be declarative based work.
Plus, I’ve worked with some brilliant infrastructure people who were just awful at coding (even in python)….
And Pulumi’s first class citizen is Typescript. Not my bag of tea….
In general terms, how are any of you protecting secrets inside tfstates?
• We are currently using the S3 backend with encryption, so the general TF recommendation referenced here https://www.terraform.io/docs/state/sensitive-data.html is only part of the solution. The PGP approach is great but only available for iam_user, IAM access key, IAM login profile & Lightsail.
So what about, for example, an RDS instance, where I want the admin password to be a secret in the tfstate? Other examples are the Directory Services domain admin password, SSM values, etc.
Sensitive data in Terraform state.
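One common compromise is to generate the secret in Terraform and hand it off to SSM. Hedged: this does not keep the value out of state; it only avoids plaintext in VCS and gives apps a single place to read it. Parameter names here are hypothetical:

```hcl
# Generate the RDS admin password in Terraform...
resource "random_password" "rds_admin" {
  length  = 32
  special = false
}

# ...and park it in SSM as a SecureString for apps/operators.
# NOTE: the value still lands in the tfstate, so the state bucket
# must stay encrypted and tightly access-controlled.
resource "aws_ssm_parameter" "rds_admin" {
  name  = "/rds/example/admin-password"   # hypothetical path
  type  = "SecureString"
  value = random_password.rds_admin.result
}
```
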
I haven’t used it, but I read about a tool called terrahelp that might be useful for this situation. If you end up looking into it, I’d be curious what you think https://github.com/opencredo/terrahelp
Terraform helper. Terrahelp is as a command line utility written in Go and is aimed at providing supplementary functionality which can sometimes prove useful when working with Terraform. - opencred…
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.
when you want to reference a release (within GitHub) for a tf module, do you reference it within the link, for example:
[email protected]/example.git?ref=v.1.0
or can you do the following:
source = "[email protected]/example.git"
version = "1.0"
How to import Azure Function App in Azure API Management using Terraform?
Hello Sumit, I don’t have experience with it but as I see azurerm_api_management_api has an import block: https://www.terraform.io/docs/providers/azurerm/r/api_management_api.html
Manages an API within an API Management Service.
2020-01-31
tflint error
what am I doing wrong?
Can you show your code for setting up mysql_replica_instance_type?
I set a default value for var.hardware and everything is ok
Right. But is there a corresponding entry for var.hardware in mysql_replica_instance_type?
yes
variable "mysql_master_instance_type" {
description = "DB Instance Type"
type = map
default = {
small = "db.t2.small"
medium = "db.t2.medium"
large = "db.t2.large"
xlarge = "db.t2.xlarge"
}
}
Well, you gave me ‘mysql_master_instance_type’ instead of ‘mysql_replica_instance_type’. But since they’re likely the same structure, I’ll go with it.
The error: ‘The given key does not identify an element in this collection value.’ occurs when the key does not have an associated value.
e.g. If ‘var.hardware = “small”’ then everything should work. If ‘var.hardware = “smallish”’ then you will get that error.
Exactly
terraform validate says OK
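A lookup() with a default sidesteps the error entirely, e.g.:

```hcl
variable "hardware" {
  default = "small"
}

variable "mysql_master_instance_type" {
  description = "DB Instance Type"
  type        = map
  default = {
    small  = "db.t2.small"
    medium = "db.t2.medium"
    large  = "db.t2.large"
    xlarge = "db.t2.xlarge"
  }
}

locals {
  # Falls back to db.t2.small if var.hardware isn't a key in the map
  instance_type = lookup(var.mysql_master_instance_type, var.hardware, "db.t2.small")
}
```
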
Anyone have recommendations/reading suggestions on how you test infra built with terraform?
I watched a Hashicorp video the other day that talked about terratest
curious what other folks are doing also though
I’m even just curious on HOW testing for infra is done - beyond the tools - the concept of testing infra is new to me
this was written by the guy who wrote “Terraform Up & Running”.
Tools to test Terraform, Packer, Docker, AWS, and much more
Just joined a greenfield project where we will be doing this. Just haven’t gotten that far yet. Would love to hear how this goes for you
we use bats and terratest for all our modules
each module has a complete example
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
and terratest to deploy the example on real AWS account
@Andriy Knysh (Cloud Posse) - are there any books, required reading, Udemy courses or something on proper Terraform testing?
don’t know about books/reading, we just implemented our solution (it’s open-sourced), and it works well for us
interesting stuff - thanks guys, infrastructure testing seems more like end-to-end on your real provisioned infra.
Since terratest is Golang, you can create tests of any complexity, including end-to-end. For example, on EKS you could not only deploy the infra, but deploy Kubernetes apps and test them
Presentation by the gruntwork folks behind terratest, I found it really helpful in understanding the concepts and approaches for infra testing, https://www.infoq.com/presentations/automated-testing-terraform-docker-packer
Yevgeniy Brikman talks about how to write automated tests for infrastructure code, including the code written for use with tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include: unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing and CI / CD for infrastructure code.
tflint . and terraform validate
Does ECS provide an SNS topic to subscribe to events like updating service, tasks starting/stopping, autoscaling events?
And if so, is there a terraform example of this that someone can share?
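ECS doesn’t expose an SNS topic directly; service and task state changes are emitted as CloudWatch Events (EventBridge), which you can route to an SNS topic yourself. A sketch of that pattern (resource names are hypothetical, and the topic additionally needs a policy allowing events.amazonaws.com to publish to it):

```hcl
resource "aws_sns_topic" "ecs_events" {
  name = "ecs-events" # hypothetical name
}

# Match ECS-emitted events; narrow the pattern further with a "detail"
# filter (e.g. clusterArn) if needed
resource "aws_cloudwatch_event_rule" "ecs_events" {
  name = "ecs-events"
  event_pattern = jsonencode({
    source        = ["aws.ecs"]
    "detail-type" = ["ECS Task State Change", "ECS Service Action"]
  })
}

resource "aws_cloudwatch_event_target" "ecs_to_sns" {
  rule = aws_cloudwatch_event_rule.ecs_events.name
  arn  = aws_sns_topic.ecs_events.arn
}
```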
Using the CloudPosse tooling, and root modules using tf 0.12, is there any way at all to run terraform output currently, when using remote state?
I got around this temporarily by removing the -from-module from the TF_CLI_INIT envvar, after doing the terraform init in the /conf/<module> directory. Then I cd’ed into .module and ran terraform output and some various state management commands
@Joe Hosteny this is an unfortunate downside resulting from terraform 0.12 not allowing init -from-module=.. in the local directory even when the files are just dot files (the way it worked in 0.11)
Unfortunately, I don’t see a clean way around this without a bunch of extra scripting, make targets, or using terragrunt.
Thanks @Erik Osterman (Cloud Posse), I read the thread and it is unfortunate.
this is now the officially recommended layout
using this pattern, the need for invoking root modules multiple times nearly entirely goes away
it takes a bit of mind warping to think this way, but in the end, i think it’s going to lead to easier projects to maintain with fewer inconsistencies
I haven’t read that yet, but it seems like it would have significant impact on the tooling? Or is that assumption wrong?
I’ll look that over though
So that may mitigate for now
yes, I think using SSM is the way to go
outputs are really just for human validation and convenience
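A minimal sketch of that SSM approach (the parameter name is hypothetical): the producing root module writes the value to Parameter Store, and consumers read it back with a data source instead of terraform_remote_state:

```hcl
# In the root module that produces the value
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/myapp/prod/vpc_id"
  type  = "String"
  value = module.vpc.vpc_id
}

# In a consuming root module
data "aws_ssm_parameter" "vpc_id" {
  name = "/myapp/prod/vpc_id"
}
# referenced as: data.aws_ssm_parameter.vpc_id.value
```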
@here can anyone help with terraform connection to a private cloud. Do I have to write a custom provider?
@pianoriko2, do you mean a way to manage a private cloud platform/API with Terraform?
…if so, first check the wealth of providers for private clouds
alternatively, if the scope of what you want to manage is small/simple and the private cloud provides a standard REST API, you can use this “escape hatch”: https://github.com/Mastercard/terraform-provider-restapi
A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi
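Roughly what using that provider looks like, going from its README; the endpoint, auth header, and payload here are all hypothetical:

```hcl
provider "restapi" {
  uri                  = "https://private-cloud.example.com/api"
  write_returns_object = true
  headers = {
    Authorization = "Bearer ${var.api_token}" # hypothetical credential
  }
}

# Each restapi_object maps CRUD HTTP calls against `path` onto the
# Terraform resource lifecycle
resource "restapi_object" "server" {
  path = "/servers"
  data = jsonencode({
    name   = "web-01"
    flavor = "small"
  })
}
```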
i’m so unsure about how i feel about this provider - i can totally see the need, but i’ve had to fix up so many terraform environments that were full of null_resource local-exec provisioners that i’m a little terrified of what’s going to come out of it