#terraform (2019-06)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-06-03
is there a way to source env vars in https://github.com/cloudposse/terraform-aws-codebuild (v.0.1.6.0) from SSM Parameter Store? cc: @Andriy Knysh (Cloud Posse) @Igor Rodionov @jamie. Apparently it’s supported in the native aws_codebuild_project:
```
environment_variable {
  "name"  = "SOME_KEY2"
  "value" = "SOME_VALUE2"
  "type"  = "PARAMETER_STORE"
}
```
Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild
which I couldn’t find in main.tf at ln183
The answer is yes, you can add `type = "PARAMETER_STORE"`
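For reference, a minimal sketch of what that looks like on the native resource (the parameter name /app/some_key2 is hypothetical):
```
environment {
  compute_type = "BUILD_GENERAL1_SMALL"
  image        = "aws/codebuild/standard:2.0"
  type         = "LINUX_CONTAINER"

  environment_variable {
    name  = "SOME_KEY2"
    value = "/app/some_key2" # the SSM Parameter Store key, not a literal value
    type  = "PARAMETER_STORE"
  }
}
```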
I have just added a pull request to update the example. https://github.com/cloudposse/terraform-aws-codebuild/pull/43
What: Updates to allow module.label.context to be passed in to the module. Example to show how to address Parameter Store. Why: Because someone asked about it in the slack channel
thanks for the update Jamie! On a side-note do you know if anything will come out of https://github.com/blinkist/terraform-aws-airship-ecs-cluster/pull/9 btw? Or would I be better off forking it? Maarten hasn’t been so responsive for the last few weeks
launch_template resource is used instead of a launch_configuration, this allows for a more granular tagging. i.e. The instances and the instance ebs volumes get tags. The ASG uses a mixed_instances…
@Bogdan As far as I am aware pull requests into Blinkist are basically not happening, or are happening very slowly. The repos are in production and they aren’t taking in changes very regularly. @maarten himself is not working at Blinkist. He has his own personal brand called DoingCloudRight, which is where the airship.tf site is generated from, and where some of his more current terraform modules exist.
I suggest you take a fork of mine for now if you want those features..
Feel free to fork. If you have other or better ideas let me know. Baby expected plus many other things atm.. can’t commit my time at the moment unfortunately. @jamie what if I make you repo admin, assuming @jonboulle agrees, would that be helpful?
please do, i actually have a ton of time now.
Thanks for clarifying Maarten, and as the father of an 11-week-old myself I wish you good luck and congrats :slightly_smiling_face:. I’ll follow yours and Jamie’s suggestion and fork Jamie’s latest then. Regarding ideas, one that has been on my mind was aws_autoscaling_schedule, which, if set, would let me spin the cluster up early in the morning and kill it after working hours
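A hedged sketch of that idea, assuming the cluster sits behind an ASG (the resource name aws_autoscaling_group.ecs and the cron windows are invented for illustration):
```
# Scale the cluster up on weekday mornings...
resource "aws_autoscaling_schedule" "scale_up" {
  scheduled_action_name  = "scale-up-morning"
  min_size               = 1
  max_size               = 4
  desired_capacity       = 2
  recurrence             = "0 7 * * 1-5" # UTC
  autoscaling_group_name = "${aws_autoscaling_group.ecs.name}"
}

# ...and down to zero after working hours
resource "aws_autoscaling_schedule" "scale_down" {
  scheduled_action_name  = "scale-down-evening"
  min_size               = 0
  max_size               = 0
  desired_capacity       = 0
  recurrence             = "0 19 * * 1-5" # UTC
  autoscaling_group_name = "${aws_autoscaling_group.ecs.name}"
}
```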
fine for the repo admin!
Great!
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
(ignore the incorrect date there - need to fix that!)
office hour this wednesday
@jamie thanks! good to see you around
@Erik Osterman (Cloud Posse) after all the damn waiting, I’m finally in the States
was just gonna ask!
and now I’m a bit stabilised
welcome!!
East coast?
so I am catching up on my backlog
yeah, Connecticut
cool!
I’m doing a pull request for this one in 30 mins https://github.com/cloudposse/terraform-aws-dynamic-subnets/issues/36
The existence of length(data.aws_availability_zones.available.names) implies that something already knows what "all AZs" or "n AZs" should look like; but I still have to specify…
we’re about to kick off 0.12 updates. we started with terraform-null-label
(almost ready for merge)
I can see that activity!
Are you doing it in named tags or named branches?
so that 0.11 can be maintained too?
going to keep master for bleeding edge / whatever is “latest”, so 0.12
creating 0.11/master for bug fixes
cool
going to keep 0.x module tags since interfaces are still subject to change
the thinking is no new feature work in 0.11
only bug fixes
so all updates should be patch releases to the last 0.11 release
My clients haven’t been switching to 12
since they have huge libraries of existing modules
to 12 as the release version?
0.12
yeah
er.. i mean, how are they tagging their modules post-0.12
i see some bumping the major version
Ah, they haven’t yet.
(don’t really want to do that since not even terraform is 1.x!)
Just haven’t updated to use 0.12
ok
I’ll brb, i’m gonna finish this module
then I can look at what it takes to convert a module to 0.12
the main blocker I have right now is that terraform-docs and json2hcl do not support 0.12, which is what we’ve been using to (a) generate README.md documentation, (b) validate that input and output descriptions have been set
ah… annoying
Prerequisites Put an x into the box that applies: This issue describes a bug. This issue describes a feature request. For more information, see the Contributing Guidelines. Description With the upc…
I’m getting inventive in this module to maintain backwards compatibility
as far as I’m concerned that’s not a requirement. if we have to settle for LCD, then we lose the benefits of 0.12 that make it so compelling.
Did you see how simple this became? https://github.com/cloudposse/terraform-null-label/blob/0.12/main.tf
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Visual Studio Code still has not updated the Terraform plugin to support 0.12; I’m using Mikael Olenfalk’s plugin
Okay @Erik Osterman (Cloud Posse) finished the pull https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/50
what Added a working example. Added the ability to specify the number of public or private subnets. Added the ability to specify the format of the public/private subnet tag. Updated the label modul…
2019-06-04
did anyone use this
Provides a GitHub user’s SSH key resource.
to assign ssh_key to users?
not just to the user related to the app token
I’m not sure what I’m doing wrong.. I did tag the subnet with shared:
```
tags = {
  "kubernetes.io/cluster/eks-beemo" = "shared"
}
```
@Ayo Bami which module are you using?
Also, is the subnet public or private?
Does anyone have suggestions for the best way to use the terraform-aws-ecs-codepipeline repo (or anything similar) with more complex pipelines? It is hard-coded to a single build and single deploy stage. However, most real pipelines have multiple environments to deploy to, with additional actions like automated tests and approval gateways. Ideally, we’d like something configurable so the same codepipeline module can be used for different project requirements. Any thoughts?
I’m using that repo
it is very opinionated
so you should basically create your own terraform calling those modules individually and pass the appropriate variables to each
like the listener rules for example, you might need to call it twice if you have two target_groups
The problem is the build and deploy stages and actions of the pipeline are hardcoded so I don’t see a way to alter them when calling.
hi @Rob the module is opinionated and is used for simple pipelines (source/build/deploy). For something more generic, you can open a PR (we’ll review fast), or maybe even create a new module (if it’s too complicated to add new things to the existing module)
you will have to fork and make your own and maybe send a PR over to make it more flexible
Yep @jose.amengual and @Andriy Knysh (Cloud Posse) are spot on
it is pretty hard to create a module that’s “one size fits all”
Yep… also, we (cloudposse) use mostly #codefresh for more serious release engineering. AWS CodeBuild/CodePipeline are quite primitive by comparison.
more primitive and more difficult to create complex pipelines. Codefresh has better DSL and UI
so no jenkins in your world ?
Thanks so much for the information! I didn’t see an easy way to modify the module for complex pipelines now, but saw that in TF 12 it can be done with looping logic.
We’ll absolutely take a look at codefresh. Do you have TF modules for defining more complex pipelines in codefresh?
Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app
the example repo defines all Codefresh pipelines that we usually use
(it’s not terraform, pipelines defined in yaml)
@jose.amengual we have jenkins modules https://github.com/cloudposse/terraform-aws-jenkins
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
but we stopped using it
for many reasons
jenkins is ok, but requires separate deployment and maintenance (as the module above shows)
it’s also old and convoluted and has a lot of security holes
Researcher discovers vulnerabilities, mostly CSRF flaws and passwords stored in plain text, in more than 100 Jenkins plugins, and patches are not available in many cases.
yes, jenkins is like the standard CI/do-everything tool, but nowadays services like codefresh are better
Two vulnerabilities discovered and patched over the summer expose Jenkins servers to mass exploitation.
we don’t want to deal with all that stuff
ohhh yes I know, I work at Sonatype (creators of Nexus IQ/Firewall and the Maven tools); we have scanned jenkins, node and others
Codefresh also added parallel execution, with which you can build many images (e.g. for deploy and for test) in parallel:
```
builds_parallel:
  type: parallel
  stage: Build
  steps:
    build:
      title: "Build image"
      type: build
      description: "Build image"
      dockerfile: Dockerfile
      image_name: ${{CF_REPO_NAME}}
      no_cache: false
      no_cf_cache: false
      tag: ${{CF_SHORT_REVISION}}
    build_test:
      title: "Build test image"
      type: build
      description: "Build test image"
      dockerfile: Dockerfile
      target: test
      image_name: ${{CF_REPO_NAME}}
      tag: ${{CF_SHORT_REVISION}}-test
      when:
        condition:
          all:
            executeForDeploy: "'${{INTEGRATION_TESTS_ENABLED}}' == 'true'"
```
after building the test image, you can run tests on it
```
test:
  title: "Run tests"
  stage: Test
  type: composition
  fail_fast: false
  composition: codefresh/test/docker-compose.yml
  composition_candidates:
    app:
      image: ${{build_test}}
      command: codefresh/test/test.sh
      env_file:
        - codefresh/test/test.env
      links:
        - db
  when:
    condition:
      all:
        executeForDeploy: "'${{INTEGRATION_TESTS_ENABLED}}' == 'true'"
```
pretty cool
hello, how can I use `*` in a route53 recordset using terraform, for example *.example.com?
@Vidhi Virmani I believe you can have *.example.com in the `name` field. eg.
```
resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name    = "*.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.lb.public_ip}"]
}
```
https://www.terraform.io/docs/providers/aws/r/route53_record.html
Provides a Route53 record resource.
`*` is not allowed.
@Vidhi Virmani Interesting. what version of the aws provider are you using? I’ve tried with 1.60.0 and it didn’t complain
This is where the route53 resource is defined. Not seeing anything forbidding wildcard record sets. https://github.com/terraform-providers/terraform-provider-aws/blob/3baf33f202f644bd4d861d4b44846127774e7e30/aws/resource_aws_route53_record.go#L897-L934
Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.
Hey, It worked for me.. There was an extra space and I didn’t realise that. Sorry for such a dumb question
2019-06-05
need a little help, I’m trying to create a bucket:
```
resource "aws_s3_bucket" "blabla" {
  bucket = "blabla-terrafom.dev"
  acl    = "private"
}
```
i want to make this as smart as possible: if I’m in env prod i need it to be blabla-terrafom, but in any other env i need it to be blabla-terrafom.${var.Environment_short}. my problem is how to make the “.” only appear when Environment_short is not prd
i tried playing with `${var.Environment_short != "prd": ".${var.Environment_short}"}` but with no luck
```
bucket = "blabla-terrafom${var.Environment_short != "prd" ? ".${var.Environment_short}" : ""}"
```
or clean it up with a local:
```
locals {
  env = "${var.Environment_short != "prd" ? ".${var.Environment_short}" : ""}"
}

bucket = "blabla-terrafom${local.env}"
```
It’s a fine way to handle it. If you know all your env short names in advance you could also use a locals map to remove the ternary
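A minimal sketch of the map approach (the env names and suffixes are hypothetical):
```
locals {
  # every known env short name maps to its bucket suffix; prd maps to none
  env_suffix = {
    prd = ""
    dev = ".dev"
    stg = ".stg"
  }
}

resource "aws_s3_bucket" "blabla" {
  bucket = "blabla-terrafom${local.env_suffix[var.Environment_short]}"
  acl    = "private"
}
```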
Morning all! Was wondering if anyone has run into this before: Creating an ECS cluster and service with terraform, have the ALB, cluster, target group, and target group attachment set, task definition and all that gets created as expected - but when TFE goes to create the service it fails because it never associated an ALB with the target group and therefore can’t add a target to the group.
Yes I’m very familiar with that issue. :) which modules are you using?
Hey, sorry didn’t see your reply. Not using any module, building it by hand because I had trouble getting a module that created the whole service and ALB, and exported the ALB dns name so I could make r53 entries with it
@Mike Nock would you be happy to share what you have? Also all this is handled in the https://airship.tf terraform modules that @maarten created. But there’s a few tricks you may want to use to force dependence between resources you’re creating.
Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon
Sure I’d be open to that. I just got the service working correctly, so removing some of what I hard coded to do that atm.
If it’s working… then great
Error code:
```
Error: Error modifying Target Group Attributes: InvalidConfigurationRequest: The provided target group attribute is not supported
	status code: 400, request id: ff5c4dd6-870b-11e9-832e-3dd05e9547d1
```
Hey, any quick example of how to provide a list var with the terraform cli? I am using this:
```
terraform plan -var env_vars={"test" : "value", "test1": "value2"}
```
this won’t work; missing '…'
did you define env_var as a variable?
what’s the precise error you are getting from terraform
Yup, I have got env_var as a variable. error:
```
as valid HCL: At 1:28: illegal char
Usage: terraform plan [options] [DIR-OR-PLAN]
```
can you share precisely the command you are running?
`terraform plan -var env_vars={"test" : "value", "test1": "value2"}`
will 100% not work
the `"` is evaluated by bash (or your shell), so that it is equivalent to running `terraform plan -var env_vars={test : value, test1: value2}`. so `: value, test1: value2}` are actually passed as extra arguments to `terraform plan`, and what gets assigned is `env_vars={test`, which is not a valid terraform expression
it’s failing because of an HCL syntax error
wrap it in quotes? env_vars='{"test" : "value", "test1": "value2"}'
or maybe -var 'env_vars={"test" : "value", "test1": "value2"}'
or even -var=env_vars='{"test" : "value", "test1": "value2"}'
?
let me try those @loren
that’s a map though, question was about a list…
examples from the docs: https://www.terraform.io/docs/configuration/variables.html#variables-on-the-command-line
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
```
terraform apply -var="image_id=ami-abc123"
terraform apply -var='image_id_list=["ami-abc123","ami-def456"]'
terraform apply -var='image_id_map={"us-east-1":"ami-abc123","us-east-2":"ami-def456"}'
```
tried the same, none of them worked
office hours starting now: https://zoom.us/j/684901853
anyone know if AWS Aurora is always a cluster or can it be a single instance? Seems like using db_instance is an option, but i’m getting errors with storage type io1
it’s always a cluster, but could be with a single instance. Cluster itself does not cost anything, it’s just metadata
ok, gotcha. Makes sense. Thx
2019-06-06
hello, can someone help me understand what I’m doing wrong in the elastic-beanstalk-environment module? here’s the code / output https://gist.github.com/lupupaulsv/d705e695be36e0ec6c21f1b9e9d70a3b
@Paul Lupu
```
security_groups = ["sg-07f9582d82c4058e8, sg-0bf6bf06395fdf2c4"]
```
should be
```
security_groups = ["sg-07f9582d82c4058e8", "sg-0bf6bf06395fdf2c4"]
```
Hey guys, using this module: https://github.com/cloudposse/terraform-aws-s3-log-storage?ref=master and running into an issue where the ALBs I’m creating don’t have access to the bucket. Is that an attribute input I need to set?
This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail - cloudposse/terraform-aws-s3-log-storage
@Mike Nock take a look at this https://github.com/cloudposse/terraform-root-modules/blob/master/aws/ecs/alb.tf
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
it’s a working example of ALB and log storage
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
uses this module https://github.com/cloudposse/terraform-aws-lb-s3-bucket
Terraform module to provision an S3 bucket with built in IAM policy to allow AWS Load Balancers to ship access logs - cloudposse/terraform-aws-lb-s3-bucket
you might be missing this policy https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/master/main.tf#L13
Ahhh I think that last one is it, I don’t think I applied a bucket policy
Seems like the log bucket is pretty similar to the log_storage one that I’m using. Same variables and what not
it’s the same + the IAM policy: `terraform-aws-lb-s3-bucket` uses `terraform-aws-s3-log-storage` and adds the policy
Hey all! I’m using a couple of your guys’ modules to create and peer a vpc. I’m trying to do it all in one tf run so that the state file won’t be crazy. But I’m getting these errors on the plan:
```
* module.peering.module.vpn_peering.data.aws_route_table.requester: data.aws_route_table.requester: value of 'count' cannot be computed
* module.peering.module.monitoring_peering.data.aws_route_table.requester: data.aws_route_table.requester: value of 'count' cannot be computed
```
I suspect it’s because the route tables will be created by the vpc module, but it doesn’t know that and expects them to already exist maybe?
unfortunately, any time a `data` provider is used, you cannot do an all-in-one run if that data provider depends on something created in the same run
that’s because terraform expects the plan to be entirely deterministic before execution. if you instead inlined all the modules into the same context and got rid of the `data` provider, you could possibly mitigate it
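For illustration, a hypothetical sketch of that idea (module and variable names invented): pass the values the vpc module creates straight into the peering module as inputs, so no data lookup is needed and the plan stays deterministic:
```
# instead of a data "aws_route_table" lookup inside the peering module,
# wire the route table IDs through as an input variable
module "peering" {
  source                    = "./peering"
  requester_route_table_ids = ["${module.vpc.private_route_table_ids}"]
}
```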
damn, I thought that might be true. I was hoping I could do it all in one because we’re trying to automate it and if the pipeline runs again the statefile that created the vpc is like “wtf is this stuff, I’ll just wipe that out for you”
also, some new options might be available in 0.12, but we haven’t explored that yet
we have a write up here: https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/
oh ok cool, I will have a look, thank you!
we’ve just kind of “sucked it up” for now, hoping that eventually this will all get addressed. then we use a Makefile with an `apply` target or `coldstart` target to help us in those coldstart scenarios
but it’s not ideal… sorry!!
All good, thank you for the explanation! We will also have to suck it up I guess
There are some workarounds. Like using Makefile like Erik says to run targeted applies in a certain order. The workaround I use the most is splitting terraform into parts with individual state files, and using terraform_remote_state to look up the values
@jamie doesn’t that still require multi-phased apply?
(maybe I’m missing something!)
It does, you’re right. The difference being that each apply is standalone and complete. I use it more often than having extra Args that would create certain resources first. If data sources had a defaults section I would be really happy. :)
Aha, so you make the delineation very cut & dry
e.g. by using different project folders.
I suppose that’s a good rule to go by
It was the easier-to-understand option for the client I was working with. Having `terraform apply -target module.x.submodule.y` was harder to wrap their heads around than `cd vpc; terraform apply`
This way it stays native terraform
Instead of requiring make
2019-06-07
What are your best practices for importing data in TF from another stack (tfstate)? Examples showing that.
You want to import existing resources on a new terraform project, or get outputs from another state?
No, remote states mainly. How do you do it? To see if there are better patterns in “importing” remote states using https://www.terraform.io/docs/providers/terraform/d/remote_state.html
Accesses state meta data from a remote backend.
the only way i use is via remote_state
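For anyone following along, a minimal sketch of that pattern in 0.11 syntax (the bucket and key names are hypothetical):
```
# read the outputs of another stack's state file
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "central-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# then reference its outputs elsewhere, e.g.
# vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
```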
I also use the terragrunt wrapper for terraform, so that I can do interpolation on the backend resources. And that way I can use standard naming conventions across all terraform state keys, and access their remote state values in a standard way. @Meb
Example to follow
```
terragrunt = {
  remote_state {
    backend = "s3"

    config {
      key            = "Namespace=${get_env("TF_VAR_namespace", "not-set")}/Name=${get_env("TF_VAR_name", "not-set")}/Environment=${get_env("TF_VAR_environment", "not-set")}/${path_relative_to_include()}/${get_env("TF_VAR_state_key", "not-set")}"
      bucket         = "${get_env("TF_VAR_state_bucket", "not-set")}"
      region         = "${get_env("TF_VAR_state_region", "not-set")}"
      encrypt        = true
      dynamodb_table = "${get_env("TF_VAR_state_dynamodb_table", "not-set")}"

      # role_arn = "arn:aws:iam::${get_env("TF_VAR_state_assume_role_AWS_account_id", "not-set")}:role/${get_env("TF_VAR_state_assume_role_prefix", "not-set")}-${get_env("TF_VAR_name", "not-set")}-${get_env("TF_VAR_environment", "not-set")}"

      s3_bucket_tags {
        name = "Terraform state storage"
      }

      dynamodb_table_tags {
        name = "Terraform lock table"
      }
    }
  }
}
```
That is what my backend file looks like for terragrunt
my terraform.tfvars includes it:
```
terragrunt = {
  include = {
    path = "./backend.tfvars"
  }
  ...
```
And the environment file that passes in all of the values:
```
##################################################################################################################
## NOTE: Do not put spaces after the equals sign, or quotes around any of the strings to the right of the equals sign.
## The values are line delimited.
##################################################################################################################
## The namespace, name, and environment must be unique per deployment.
## These values are kept short to accommodate the name length considerations at AWS's end.
## The namespace should be kept to 6 characters or fewer,
## and can be used to indicate a region (i.e. uk), or a company (i.e. gtrack).
TF_VAR_namespace=ccoe
## The name should be 12 characters or fewer; this is the name of the client, or project.
## This should match the project.name in the build.properties of the Junifer project.
TF_VAR_name=spinnaker
## The environment should be kept to 8 or fewer characters, and can be something like PROD, UAT or DEV.
## You can use other names, just use all CAPS.
TF_VAR_environment=SANDBOX
# Region to generate all resources
TF_VAR_region=ap-southeast-2
## The bucket must be accessible by the account running terraform; these can be shared across deployments.
## These settings should not be changed, as the terraform state is centralised in this bucket for all deployments (in this region).
TF_VAR_state_bucket=central-terraform-state
TF_VAR_state_dynamodb_table=central_terraform_state_lock
TF_VAR_state_region=ap-southeast-2
TF_VAR_state_key=terraform.tfstate
# TF_VAR_state_assume_role_AWS_account_id=xxxxxxxx
## The prefix for the role that terraform will assume for the terraform state.
## The full role is of the form ${prefix}-${TF_VAR_name}-${TF_VAR_environment}; see below for these other values.
TF_VAR_state_assume_role_prefix=terraform-state-access
```
The environment variables can then be passed in at run time either via this file for testing, or via the deployment pipeline (i.e. runatlantis)
Thanks
also asking here since it’s relevant
anyone knows a faster way to setup Vault than described in https://github.com/hashicorp/terraform-aws-vault ?
Could you say more about how you mean “faster”? Fewer steps? Fewer decisions? Fewer components? …There are images in DockerHub that might have the combination of settings that will work for you.
Thanks for asking @Blaise Pabon ! I was referring to both fewer steps and fewer decisions while avoiding the need for an orchestration and use of Docker
@Bogdan, ok, that makes sense, well in that case, have you looked around for a good Ansible role? I haven’t tried it yet, but I might use https://galaxy.ansible.com/andrewrothstein/vault for my home lab.
Jump start your automation project with great content from the Ansible community
Does anyone know what format this module outputs the subnet_ids in? https://github.com/terraform-community-modules/tf_aws_public_subnet
Trying to import the list of subnets as a string but it keeps saying I’m incorrectly attempting to import a tuple with one element (it should have 2 subnets)
A Terraform module to manage public subnets in VPC in AWS. - terraform-community-modules/tf_aws_public_subnet
Run, run, run away from this 4-year-old Terraform module which has not gotten any love for the last 2 years…
PS: it is a list
Better use https://github.com/terraform-aws-modules/terraform-aws-vpc by the same author
Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc
I was using terraform-aws-alb-ingress and I had to fork it, but now I’m taking another look and I think I misunderstood the intention. please correct me if I’m wrong, but if I was going to use it, should I declare a module per target group I have?
basically this module was made for one target group
Yes, more or less
@jose.amengual it was designed to “kind’a” work like an Ingress rule in kubernetes
(since we approached ECS coming from a k8s background)
That’s also why we have a “default backend”
I see, ok. my problem is that I tried to use it thinking it could ingest a list
but in K8s you declare every ingress
so it makes sense
I will just call it again for my bluegreen target
and at some point I will send a PR for the ALB module to support arbitrary targets
2019-06-08
Today, we’re excited to introduce repository templates to make boilerplate code management and distribution a first-class citizen on GitHub.
Good for terraform modules
Is it added as a resource?
The GitHub provider is used to interact with GitHub organization resources.
I’ve worked on a couple of awesome modules, but I’m not yet allowed to publicly add them (company policy)
modules for users and repos
versions 0.11.14 and 0.12
2019-06-09
Anyone know of a good GUI for terraform, other than Atlas?
2019-06-10
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
Hi guys, I’m going nuts over this error. I am using the tutorial to create a beanstalk app; it seems that the output required to create a related env is not working. Any idea why? I’m running on Windows 10 (yeah, yeah…!):
```
Terraform v0.11.14
+ provider.aws v2.12.0
+ provider.null v2.1.2
```
@[Gamifly] Vincent did you look at the example of using EB application and environment https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
the `name` attribute was working before https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application/blob/master/outputs.tf
Terraform Module to define an ElasticBeanstalk Application - cloudposse/terraform-aws-elastic-beanstalk-application
I’ve been using https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete a bit tweaked to have variables reused
what does your outputs.tf file look like?
the outputs.tf from elastic_beanstalk_application looks like that
And the resource is…
maybe some concurrency issue, did you try to apply more than once?
yup, a lot. As the application is created, I need to destroy it manually from the aws console between each try
why manually?
why not ?!
actually, the app exists within the console; just the state is not up-to-date?
usually happens when one tries to add/delete something manually
but seriously, try to remove everything and apply again
what state backend are you using?
what you see also happens when you apply something using local state, then exit the shell (or whatever you are using to access AWS), and the state gets lost; or you just delete the state file on purpose or accidentally
use S3 for TF state backend
I think I just use the local backend ; I’ll try the destroy and apply…see you in some minutes !
@wbrown43 hit a problem with two CP modules conflicting with internal SGs.
https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L12 …and… https://github.com/cloudposse/terraform-aws-alb/blob/master/main.tf#L11
Because we use the same values, the name becomes the same: app-name-dev. So the second one to create fails because the first w/ that same name exists.
What is the best approach to handle this conflict? We have our own db SG so don’t really want the rds cluster one, tbh, but it is required by the module.
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
you can add diff attributes: `attributes = "${distinct(compact(concat(var.attributes, list("something", "else"))))}"`
but it would add those to the cluster too
yes it will add
need a PR to fix that conflict if you don’t want to add attributes to all the names
Do you know if there is any nice way to recover from that ?
look into why `default` is nil and if you have it
once that is fixed, run plan/apply again (should be nothing to recover here)
It actually happens during a destroy. I went through tfstate to check where it was missing, then refresh/plan/apply to ‘refresh’ aws_vpc.default, then destroy again. Same error… I think there is something I’m missing here; as I reapplied, shouldn’t the module be ok?
you need to destroy everything first. If terraform can’t destroy something, you need to go to the AWS console and destroy it manually. Since some things were destroyed (and probably applied) manually before and were missing from the state file, it’s difficult to say anything what is missing or wrong
Ok, thanks! I thought there maybe was a way to not do it manually again
if TF can’t destroy something b/c it’s in the state file, but missing in AWS, you need to `taint` the resource
usually, there are two rules on how to clean up a TF mess:
- If something is in AWS, but not in the state file (b/c of lost or broken or deleted state file) -> manually delete the resources from the AWS console
- If something is in the state file, but not in AWS (was manually deleted/updated/changed) -> `terraform taint` or `terraform state rm` the resources
there is a third way to clean up the mess, but not sure if you want to do it
- Delete the state file and then nuke the account https://github.com/gruntwork-io/cloud-nuke or https://github.com/rebuy-de/aws-nuke
A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke
Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke
That sounds like a whiter-than-white cleaning
Thanks for that advice
What about… things that are not in AWS (because manually deleted) nor in the state? (I just deleted the vpc but the error strikes again.) I may sleep on that by now and get back on it on the morrow with a clearer mind. Thanks again!
sounds like not all things were manually deleted
isn’t there a way to ‘force’, i.e. continue even if the error strikes, to destroy as much as possible, then finish manually ?
depends on the error(s), but usually not
Hi @Andriy Knysh (Cloud Posse), still stuck on the same error…! I have no idea what resource is blocking the destroy: no sign of it in aws, I exported a full list of all resources… If you have any insight on what may cause the `Error: Error applying plan: 1 error occurred: * module.vpc.output.vpc_cidr_block: variable "default" is nil, but no error was reported` I’d be in debt!
- go to the AWS console VPC and check all the subnets, route tables, elastic IPs. Delete them manually if any are present
- then do `terraform taint -module=vpc aws_vpc.output`
BTW, did you name your aws_vpc resource `output`? (seems strange)
- Check if you have the resource with the name `output` and the variable with the name `default` (in the code). Share your code with me b/c it’s difficult to say anything without seeing it
Thanks for your messages;
- Already done, but some of the resources here are not named; as there is no way to get a creation_timestamp, I can’t be 100% sure that they had been created via terraform (the alternative is that this is my dev/prod…)
The module root.vpc has no resources. There is nothing to taint.
- The only file I modified was main.tf
```
# --------- Complete example to create Beanstalk ----------------
# https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment

provider "aws" {
  region  = "eu-west-1"
  profile = "terraform"
}

variable "max_availability_zones" {
  default = "2"
}

variable "namespace" {
  default = "eg"
}

variable "stage" {
  default = "dev"
}

variable "name" {
  default = "test"
}

variable "zone_id" {
  type        = "string"
  description = "Route53 Zone ID"
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.4.1"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  name       = "${var.name}"
  cidr_block = "10.0.0.0/16"
}

module "subnets" {
  source              = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.10.0"
  availability_zones  = ["${slice(data.aws_availability_zones.available.names, 0, var.max_availability_zones)}"]
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  name                = "${var.name}"
  region              = "us-east-1"
  vpc_id              = "${module.vpc.vpc_id}"
  igw_id              = "${module.vpc.igw_id}"
  cidr_block          = "${module.vpc.vpc_cidr_block}"
  nat_gateway_enabled = "true"
}

module "elastic_beanstalk_application" {
  source      = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.1.6"
  namespace   = "${var.namespace}"
  stage       = "${var.stage}"
  name        = "${var.name}"
  description = "Test elastic_beanstalk_application"
}

module "elastic_beanstalk_environment" {
  source    = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.13.0"
  namespace = "${var.namespace}"
  stage     = "${var.stage}"
  name      = "${var.name}"
  zone_id   = "${var.zone_id}"
  app       = "${module.elastic_beanstalk_application.app_name}"

  instance_type           = "t2.small"
  autoscale_min           = 1
  autoscale_max           = 2
  updating_min_in_service = 0
  updating_max_batch      = 1
  loadbalancer_type       = "application"
  vpc_id                  = "${module.vpc.vpc_id}"
  public_subnets          = "${module.subnets.public_subnet_ids}"
  private_subnets         = "${module.subnets.private_subnet_ids}"
  security_groups         = ["${module.vpc.vpc_default_security_group_id}"]
  solution_stack_name     = "64bit Amazon Linux 2018.03 v2.12.12 running Docker 18.06.1-ce"
  keypair                 = ""

  env_vars = "${map("ENV1", "Test1", "ENV2", "Test2", "ENV3", "Test3")}"
}
```
Terraform Module to define an ElasticBeanstalk Application - cloudposse/terraform-aws-elastic-beanstalk-application
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
as per the rest of the code, it’s genuine cloudposse
what’s the output of `terraform output`?
what’s the output of `terraform plan`?
```
$ terraform output
The state file either has no outputs defined, or all the defined
outputs are empty. Please define an output in your configuration
with the `output` keyword and run `terraform refresh` for it to
become available. If you are using interpolation, please verify
the interpolated value is not empty. You can use the
`terraform console` command to assist.
```
…and the plan
the console output (not the JSON file) when you run `terraform plan`?
```
t plan -out plan
t plan > plan
```
what’s the output from `terraform destroy`? (without saying `yes`, just the destroy plan)
a few more things:
- Update these vars:
```
variable "namespace" {
  default = "eg"
}

variable "stage" {
  default = "dev"
}

variable "name" {
  default = "test"
}
```
those are just examples, you need to specify your own namespace, stage and name - otherwise there could be name conflict with other resources created by using the same example
- In your code, you use diff regions:
```
provider "aws" {
  region  = "eu-west-1"
  profile = "terraform"
}
```
and for the subnets:
```
region = "us-east-1"
```
everything should be in one region
make a var `region` and use it everywhere for all modules and providers
@[Gamifly] Vincent once you fix those two issues, run terraform plan/apply again
[Sorry, was on phone]:
- as per the example vars, I added those to main.tf; I override them via terraform.tfvars
- I did not see that about the subnet region, thanks; I doubt it is taken into account, as I always get prompted for a region selection whenever I run a command
-
```
$ t destroy
provider.aws.region
  The region where AWS operations will take place. Examples are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: eu-west-1

data.aws_iam_policy_document.service: Refreshing state...
data.aws_elb_service_account.main: Refreshing state...
data.aws_region.default: Refreshing state...
data.aws_availability_zones.available: Refreshing state...
data.aws_iam_policy_document.ec2: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
data.aws_iam_policy_document.elb_logs: Refreshing state...
data.aws_availability_zones.available: Refreshing state...
```
(I just edited 1. , first answer was wrong)
fix region (the same region for all resources and providers), update namespace/stage/name, and run terraform plan/apply again; let me know the result
Back to the beginning and why I first came here !
```
Error: Error applying plan:

1 error occurred:
* module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
@Andriy Knysh (Cloud Posse) I re-ran apply:
```
Error: Error applying plan:

1 error occurred:
* module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: 1 error occurred:
* aws_elastic_beanstalk_application.default: InvalidParameterValue: Application gamelive-dev-gamelive-test-dev already exists.
	status code: 400, request id: 834e0b6a-7c01-4bab-bc02-3ab73607f5aa
```
```
$ t destroy -target=aws_elastic_beanstalk_application.default
null_resource.default: Refreshing state... (ID: 4362035586681674298)

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

Destroy complete! Resources: 0 destroyed.
```
@[Gamifly] Vincent send me your complete code (the one you ran), I’ll take a look when have time
Ok; I actually used that as a main: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete The only difference is that I added variables to override in tfvars
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Thanks for your time
did you change the region to be the same for all modules and providers?
I think so
re: `aws_elastic_beanstalk_application.default: InvalidParameterValue: Application gamelive-dev-gamelive-test-dev already exists` - it says the application with that name already exists
what’s `var.region`? and why don’t you use it in
```
provider "aws" {
  region  = "eu-west-1"
  profile = "terraform"
}
```
as well
I didn’t do that update after you pointed out that the subnets were using another region, my bad
ok, all modules and providers have to use just one region
make a var `region`, default it to the region you want, use it for the subnets module, and use it for the provider as well:
provider "aws" {
region = "${var.region}"
profile = "terraform"
}
make sure if you have region anywhere else, use the same var
yes, I’ve just done it, thx
Did you run plan and apply after you fixed the region?
yes
I got back to the very first error I had:
```
Error: Error applying plan:

1 error occurred:
* module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
What’s the error? That the application already exists?
When I rerun, the error is that the application already exists
@[Gamifly] Vincent i just ran the example here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf without any modification except I changed the `namespace` to something unique
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
`terraform apply` ran without the errors you saw
(BTW, if you use the same namespace, stage and name, it will fail because the bucket eg-dev-test already exists in AWS, and all bucket names are global)
Ok, sounds like I was out of luck! Have you tried to destroy and apply again? If the error was an already existing bucket, wouldn’t the error be related to S3 buckets? Thanks again for your help
Ok, I tried:
- add an S3 backend,
- change namespace / name
I still get the same error:
```
module.subnets.aws_network_acl.private: Creation complete after 1s (ID: acl-0889fc9cefc38c3d8)
module.subnets.aws_network_acl.public: Creation complete after 2s (ID: acl-000584f9e2f8afb4c)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Still creating... (10s elapsed)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Still creating... (20s elapsed)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Still creating... (30s elapsed)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Creation complete after 30s

Error: Error applying plan:

1 error occurred:
* module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'
```
I’m getting to the point of thinking that the error cannot only be on me
Yes I applied and destroyed a few times using the example (but changed the namespace)
Never got the error you see
Something is wrong with your state file
You can show the exact, meaning 100%, code you use so I’ll take a look
Also suggest to throw away the state file and start from scratch and see what happens
By using an S3 backend, I started with a scratch state file; I moved my main.tf and tfvars into another folder, to scratch the whole project but the conf, maybe this will make a difference.
Here is the full project link:
<https://www.dropbox.com/s/tkhq2f10fvtc3rt/terraform.zip?dl=0>
Last thing I did:
- destroy everything
- change all the variables
- change the region
- apply -> same error
```
Error: Error applying plan:

1 error occurred:
* module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'
```
Could this be related to using tf with windows 10 ?
What terraform version and AWS provider version are you using?
```
$ tf -v
Terraform v0.11.14
+ provider.aws v2.12.0
+ provider.null v2.1.2
```
Hey aknysh, any new idea on the subject ?!
I have given admin access to the user… and it works.
There is a very misleading error here; I had created the user step by step, granting each required permission one by one
@Andriy Knysh (Cloud Posse) just to be sure you’ve seen the last messages
@[Gamifly] Vincent re: I have given admin access to the user…and it works.
what other errors do you see?
the IAM user under which you provision the resources needs to have all the required permissions
we usually give such user an admin access
At first I just gave the required permissions, by running apply until it broke asking for another permission.
There is no other error; the misleading one I was speaking about was the one we were trying to solve
if you give the user the admin permissions, and everything else regarding users/permissions is correct, it should not ask for any other permissions and apply should complete
That is my point: something was incorrect with the permissions before I granted admin rights, but the error was `module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'` instead of the usual permission error
I’m not sure about my wish to give admin rights to a user that should only timely modify the architecture ^^
well, then you have to find out all the required permissions the user will need for all services you deploy (which could be not an easy task)
and once you add new AWS resources to deploy, you’ll have to find and specify new permissions again
that is not scalable
Does anyone have any advice on the elasticsearch module?
we use this one https://github.com/cloudposse/terraform-aws-elasticsearch
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
invocation example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/elasticsearch/main.tf
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
Yes, that’s the one I’m using. It builds the cluster just fine but I’m unable to access the consoles it creates.
you can’t, at least from the internet
it creates the cluster in a VPC
so the cluster is not publicly accessible
ok
a few ways of accessing it:
- Add a bastion server to the VPC
- We usually deploy an identity-aware proxy (IAP) in a kubernetes cluster, which allows us to access the Kibana (which is also deployed with the cluster)
ok, thanks for the advice.
2019-06-11
How do I get the SG id (`sg-...`) with a data resource, to later use in another SG’s ingress rule? :sweat_smile:
I’ve tried both `aws_security_group` (`.arn`) and `aws_security_groups` (`.ids[0]`) but no luck:
```
Invalid id: "16" (expecting "sg-...")
```
Uhh. Scratch that. Something else going on there
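For reference, a working sketch of the data-source approach (the group name and the `aws_security_group.app` resource are hypothetical); the data source’s `.id` attribute is the `sg-...` id the rule expects:
```
data "aws_security_group" "selected" {
  filter {
    name   = "group-name"
    values = ["my-existing-sg"] # hypothetical existing group
  }
}

resource "aws_security_group_rule" "ingress" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.app.id}"           # hypothetical SG being modified
  source_security_group_id = "${data.aws_security_group.selected.id}" # the sg-... id
}
```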
If I have this correct, *-dev-exec
from ecs-web-app is supposed to be the role we change to enable SSM read/write, correct?
If so, it seems we cannot access it without it being an output. I’ll PR changes if needed, but I wanted to verify this before doing so. Thoughts?
@johncblandii here’s how we did it: https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L354-L362
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
for Atlantis
yeah, was looking at it, but i don’t see reference to those params
hrmmm
You don’t see reference to `module.web_app.task_role_name`?
and i did use the task_role_name
(which params…)
that didn’t work, though, since that’s not the exec one
we’re using terraform-aws-ecs-alb-service-task
so this part https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf#L124-L150
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
are you sure you want that phase?
is this something your container needs to do?
e.g. read from SSM
with chamber
using task_secrets
and it throws access denied
ok, haven’t tried that yet
ah, forgot you have chamber in there
```
Fetching secret data from SSM Parameter Store in us-west-2: AccessDeniedException: User: arn:aws:sts::496386341798:assumed-role/event-horizon-replicator-dev-exec/654fb1ebeef7484c867a885e23ae82a0 is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:..........
```
you might need to get back to us on that one!
ok…i’ll tie things in then get back to you
@Erik Osterman (Cloud Posse) here’s the working version. Basically we need these permissions added to the exec role. I’ll get around to PR’ing the exposure of those so we don’t have to manually recreate it.
data "aws_iam_policy_document" "replicator_ssm_exec" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"ssm:GetParameters",
"secretsmanager:GetSecretValue",
"kms:Decrypt",
]
}
}
resource "aws_iam_role_policy" "replicator_ecs_exec_ssm" {
name = "${local.application_name_full}-ssm-policy"
policy = "${data.aws_iam_policy_document.replicator_ssm_exec.json}"
role = "${local.application_name}-replicator-dev-exec"
}
I’m using github.com/cloudposse/terraform-aws-codebuild and get the following error when setting `cache_enabled = false` and/or `cache_bucket_suffix_enabled = false`:
```
* module.name.local.cache: local.cache: key "0" does not exist in map local.cache_def in:

${local.cache_def[var.cache_enabled]}
```
cc: @Erik Osterman (Cloud Posse) @Igor Rodionov @Andriy Knysh (Cloud Posse) @jamie
I wrote that bit, I’ll take a look
and try again
it’s a terraform pain. true and false are booleans, but “true” and “false” are strings that terraform converts to booleans
and in the `cache_def` map, it uses the string version
I can probably make that more stable in the future, especially with 0.12
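To make the gotcha concrete, a minimal 0.11-style sketch (the values are illustrative, not the module’s actual map):
```
variable "cache_enabled" {
  default = "true" # a string; a bare boolean `false` from the caller gets coerced to "0"
}

locals {
  cache_def = {
    "true"  = "S3"
    "false" = "NO_CACHE"
  }

  # works only when the key is the string "true"/"false"; a boolean input
  # becomes "0"/"1", hence the `key "0" does not exist` error above
  cache_type = "${local.cache_def[var.cache_enabled]}"
}
```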
@Bogdan
@jamie it worked! I can also help with a PR if you’re too busy
thanks!
(we’ll be slowly fixing the `"true"`/`"false"` issue as we upgrade modules to 0.12)
the latest `terraform-null-label` now supports booleans
130 modules to go
let alone having to have both versions of terraform installed next to each other depending on the modules you’re using
yea…
speaking of which, have you seen how we handle that in geodesic? you can write `use terraform 0.12` in the `.envrc` for a given project
oh that’s nice
@Bogdan theres an open PR with the fix for it in there https://github.com/cloudposse/terraform-aws-codebuild/pull/43
What: Updates to allow module.label.context to be passed in to the module. Example to show how to address parameter store Why Because someone asked about it in the slack channel
Cool! I see Erik and Andriy had a look
Which I totally don’t remember doing
anyone have an AWS AppMesh terraform module they’re working on?
no, but LMK if you find something
I’ve been getting super confused trying to translate https://github.com/aws/aws-app-mesh-examples/blob/master/examples/apps/colorapp/ecs into Terraform
AWS App Mesh is a service mesh that you can use with your microservices to manage service to service communication. - aws/aws-app-mesh-examples
I can translate that to terraform for ya. Have you started yet?
@cabrinha @Erik Osterman (Cloud Posse) I’ve started on the appmesh module https://github.com/bitflight-public/terraform-aws-app-mesh
Terraform module for creating the app mesh resources - bitflight-public/terraform-aws-app-mesh
@jamie wow dude thanks a lot!
I’m going to test this out.
its not finished
But you can look through it
its missing the resources for the app mesh virtual services
There is a lot I still don’t understand about AppMesh
namespace, stage, I hope we can make these values configurable or have the ability to leave them blank
and, if you keep waiting I’ll have the example folder create an ecs cluster, create the services in it with the envoy sidecars and the xray sidecars and the cloudwatch logging, and the rest of that colorteller app
Similar to what we did here: https://github.com/cloudposse/terraform-aws-efs/pull/25
The creation of the EFS volume's CNAME is currently out of user's control. This change proposes the option to allow users to set their own DNS name for the volume. If no name is set, fallba…
@jamie you’re a saint
Yeah theres already an override for the naming
variable "mesh_name_override" {
description = "To provide a custom name to the aws_appmesh_mesh resource, by default it is named by the label module."
default = ""
}
So yeah. You can’t leave the label name blank, but you can override it
Anyway… I have to stop for now on making this. I have a Dockerfile I need to update for a client
@Andriy Knysh (Cloud Posse) I am planning to add support for 0.12 to tfmask, if that’s ok with you guys
to be honest, I wish there were a way to auto-mark the vault data source as sensitive
thanks @Julio Tain Sueiras, your contribution is welcome
2019-06-12
https://github.com/cloudposse/terraform-aws-ecs-container-definition - I believe this is a blocker for many who want to use Terraform 0.12, or is it working with 0.12 already?
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
how do I get tfmask installed / setup as part of a pipeline?
https://github.com/cloudposse/terraform-aws-ecs-container-definition/pull/36 - @Igor Rodionov you are one of the contributors there who is online. Does it look good to you to get this merged? I also ran all the examples with Terraform 0.12 and they work as expected.
This is required to get this module to work with Terraform 0.12
yes, but I’m not involved in the migration to terraform 0.12 right now
This is the only change which is needed to get that module to work with both 0.11 and 0.12. The rest is cosmetic changes which we can do later.
done
release created 0.15.0
Thanks! I really don’t like to maintain my own forks
what: removed provider block. why: While using this module, a terraform plan always requested to specify the region, although the variable region is set in the module. provider "aws" { assum…
Is it true to this day you can still not pin providers in modules without supporting all parameters of the provider?!?!?
give this a try? https://github.com/cloudposse/terraform-aws-dynamic-subnets/issues/56#issuecomment-501342606
While using this module a terrform plan always requested to specify the region, although the variable region is set in the parent module. Example provider "aws" { assume_role { role_arn =…
or try this: terraform plan -var <region>
?
Thanks! Will see what he says.
@Andriy Knysh (Cloud Posse) you didn’t run into this?
no
not with dynamic subnets
I had this issue when running the examples on my computer, prompting for region every time
I guess since we always set `AWS_REGION` we don’t notice it
yes
deployed atlantis many times using the subnet module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/ecs/vpc.tf
@jamie @loren what are your thoughts on provider pinning at the module level?
We’ve been plagued by constant regressions with every aws provider release
it’s a good idea
I have fixed the issue with it in my pull request
sec
I always pin the explicit version for the aws provider in my root module, and test updates there. I don’t worry about it in the module, I’m fine if a known min version is documented in the readme
@Erik Osterman (Cloud Posse) just tested, if you define a provider w/o region in a low-level module, like this
provider "aws" {
version = "~> 2.12.0"
}
and then define a provider in top-level module that uses the low-level module
provider "aws" {
region = "us-east-1"
}
it will ask for region all the time
(the top-level region does not apply, they don’t merge)
Can you try adding region, defaulting it to "", then see if it uses the AWS_REGION env?
if that’s the case, maybe we use that pattern
that way it works both ways
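a minimal sketch of that pattern (TF 0.11 syntax); whether the provider actually falls back to the AWS_REGION env var when the value is empty is exactly the assumption being tested here:
variable "region" {
  # assumption to verify: with an empty value, the provider resolves
  # the region from the AWS_REGION / AWS_DEFAULT_REGION env vars
  default = ""
}

provider "aws" {
  region = "${var.region}"
}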
will try now
@Andriy Knysh (Cloud Posse) did this work?
@loren - the fix is to pass providers explicitly, which is a new feature in 0.11.14
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure. For more information on modules, see the dedicated modules section.
module "vpc" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.3.4>"
providers = {
aws = "aws"
}
namespace = "${local.namespace}"
stage = "${local.stage}"
name = "${local.name}"
cidr_block = "172.16.0.0/16"
}
@jamie heads up
ah, @loren just saw your comment here: https://github.com/cloudposse/terraform-aws-dynamic-subnets/issues/56#issuecomment-501342606
While using this module a terraform plan always requested to specify the region, although the variable region is set in the parent module. Example provider "aws" { assume_role { role_arn =…
Nice!
How do you want the module changed? Remove the provider, or keep the provider?
I think I ran the right make commands for docs, but let me know of any changes required: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/27
The task exec outputs are needed to add to the policy for SSM secrets usage and likely other things.
the example was broken too. it runs now, though
Lgtm
pushed lint fixes
bump @Andriy Knysh (Cloud Posse)
cut 0.12.0
Public #office-hours starting now! Join us on Zoom if you have any questions. https://zoom.us/j/684901853
Thanks for the office hours, that was very helpful!
Hi, I was using https://github.com/cloudposse/terraform-aws-ecs-container-definition and I did not see a way to use docker labels. Is that possible?
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
Currently that is true.
ok, I do not see a reason not to have it
I mean, adding it will not break anything
Can someone explain to me how this module works? I wanted to send a PR to add the Docker labels, but I do not understand how the json is being rendered with replace, thanks
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) Did one of you guys write this module?
I was reading last night and I seem to understand now
I could just add :
json_with_stop_timeout = "${replace(local.json_with_memory_reservation, "/\"stop_timeout_sentinel_value\"/", local.encoded_stop_timeout)}"
json_with_docker_labels = "${replace(local.json_with_stop_timeout, "/\"docker_labels_sentinel_value\"/", local.encoded_docker_labels)}"
plus the local value
I guess
like:
encoded_docker_labels = "${jsonencode(local.docker_labels)}"
I have a project where someone ran terraform using 0.12 by mistake; since it was a super simple repo it just worked. Does anyone know if there is a way to make the state file compatible with 0.11.14 again?
Delete the state file, and import the resources using tf 0.11?
mmm, that might work
I don’t think so, but figured I’d ask
If you’re using a versioned bucket, you might be able to just pull the previous .tfstate
file
and restore it
(our terraform-aws-tfstate-backend does versioning)
2019-06-13
And then add the terraform version restrictions into your project config to prevent another oops before you’re ready.
terraform {
  required_version = "0.11.14"

  backend "s3" {}
}
I’ve been using required_version = "<0.12.0, >= 0.11.11"
(or pick some preferred minimum for you)
I pin exact versions for terraform in my root config because state is not backwards compatible
ah, interesting technique.
Same. We were pinning to 0.11.x, then we ended up with three different x in our various Terraform roles. I just pinned them all to 0.11.14 this week
so if someone runs apply with tf 0.11.14, that’s it, everyone has to use that version
^^ Is that true? How do you upgrade the state files then when you migrate to newer versions, specifically 0.12?
run apply with terraform 0.12, if it succeeds, your state now requires tf 0.12. if it fails, your state is not updated
https://github.com/tfutils/tfenv seems to work quite well
Terraform version manager. Contribute to tfutils/tfenv development by creating an account on GitHub.
yes, have been using it for almost a year
damn, please share these goodies more often
I had to use this because fmt is broken in 0.12 and the CI was complaining about my commits (yes, I've been spoiled by fmt), so I had to revert to 0.11 for work projects
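for reference, typical tfenv usage looks like this (standard tfenv subcommands; the version is just an example):
# install and select a specific terraform version
tfenv install 0.11.14
tfenv use 0.11.14
terraform version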
https://www.slideshare.net/AntonBabenko/terraform-aws-modules-and-some-bestpractices-may-2019 - slide 108
Slides from my meetup talks during meetups in Germany. Follow me: https://twitter.com/antonbabenko https://github.com/antonbabenko
hello all! I’m new here but have been using the sweetops modules in terraform for a little while but have run into an issue. I am trying to use the jenkins module that creates an beanstalk app etc. The issue I am running into is beanstalk has this error Creating Load Balancer listener failed Reason: An SSL policy must be specified for HTTPS listeners
. I made a key pair for the jenkins module so im not sure what needs to happen?
2019-06-14
Any idea for the error
Initializing modules...
- module.eg_prod_bastion_label
Error downloading modules: Error loading modules: module eg_prod_bastion_label: Error parsing .terraform/modules/3b7de6adc81422f0cdde31b2ae8597c0/main.tf: At 14:25: Unknown token: 14:25 IDENT var.enabled
using https://github.com/cloudposse/terraform-null-label. The example is copy & pasted..
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Got a doubt & found it. Master is 0.12 terraform, so I had to force it to use 0.11.1
Yes, @Erik Osterman (Cloud Posse) went with the convention that seems to be most popular: using master for the current latest, and branch 0.11/master for the latest of the 0.11 terraform
But pinning it to a release tag is always the best option.
this is gonna be fun
does anyone know how to get an ip address from an address_prefix in terraform and then pass that to ansible ?
Did you solve this issue?
Do you still need help with this?
TIL terraform output takes an output name as an argument, for when you don't want all the outputs
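e.g. (the output name here is just an illustration):
terraform output           # prints all outputs
terraform output vpc_id    # prints only the vpc_id output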
I created a simple codedeploy module for my Fargate task and everything went well, but I noticed that the target groups were switched, so now terraform wants to correct the change:
Terraform will perform the following actions:
~ module.alb.aws_lb_listener.https
default_action.0.target_group_arn: "arn:aws:elasticloadbalancing:us-east-1:99999999999:targetgroup/app-green/6cf4c676cb238179" => "arn:aws:elasticloadbalancing:us-east-1:99999999999:targetgroup/app-default/fd2cdc38bdf07078"
where they use a local-exec provisioner to do the code deploy part, which I found weird, but that is me…
what do you guys recommend to "solve" this? maybe this does not need solving and I should just deal with it, since at some point the arn will go back to the original
maybe I could run another TF that accesses the same state, searches using data resources, and switches the target group arns…
any ideas are welcome
2019-06-15
Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.
2019-06-16
I want to create nodejs elastic beanstalk resources using terraform template
take a look at this example https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
all you need to do is to change the image from Docker to NodeJS
2019-06-17
Does anyone have a nice way of transforming a list output of aws_instances.public_ips into something suitable for CIDR_BLOCK in an SG?
You just need to add /32 to each one right?
This might do https://www.terraform.io/docs/configuration-0-11/interpolation.html#formatlist-format-args-
Embedded within strings in Terraform, whether you're using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.
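a minimal sketch (TF 0.11), assuming a hypothetical data.aws_instances data source and security group rule:
resource "aws_security_group_rule" "from_instances" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  security_group_id = "${var.security_group_id}"

  # append /32 to each public IP so the list is valid for cidr_blocks
  cidr_blocks = "${formatlist("%s/32", data.aws_instances.public.public_ips)}"
}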
While using this module a terraform plan always requested to specify the region, although the variable region is set in the parent module. Example provider "aws" { assume_role { role_arn =…
I need some feedback here! Difficult choice and it feels like we are going against what most other modules do, but not sure if that’s reason enough to continue that practice.
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
I should have posted this today instead of late Friday… anyhow, any thoughts on this?
I created a simple codedeploy module for my Fargate task and everything went well, but I noticed that the target groups were switched, so now terraform wants to correct the change:
Hi #terraform
Let’s say I have a list keys and I want to transform it into a map, using something like:
locals {
  account = "287985351234"
  names   = ["alpha", "beta", "gamma"]
  region  = "eu-west-1"
}

data "null_data_source" "kms" {
  count = "${length(local.names)}"

  inputs = {
    key   = "${upper(local.names[count.index])}"
    value = "${format("arn:aws:kms:%s:%s:key/%s", local.region, local.account, local.names[count.index])}"
  }
}
output "debug" {
value = "${data.null_data_source.kms.*.outputs}"
}
The output is a list of maps:
data.null_data_source.kms[2]: Refreshing state...
data.null_data_source.kms[0]: Refreshing state...
data.null_data_source.kms[1]: Refreshing state...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
debug = [
    {
        key = ALPHA,
        value = arn:aws:kms:eu-west-1:287985351234:key/alpha
    },
    {
        key = BETA,
        value = arn:aws:kms:eu-west-1:287985351234:key/beta
    },
    {
        key = GAMMA,
        value = arn:aws:kms:eu-west-1:287985351234:key/gamma
    }
]
Is there any way to make it one map with all the keys like this?:
{
  ALPHA = "arn:aws:kms:eu-west-1:287985351234:key/alpha",
  BETA  = "arn:aws:kms:eu-west-1:287985351234:key/beta",
  GAMMA = "arn:aws:kms:eu-west-1:287985351234:key/gamma"
}
you can create two lists and then use zipmap to create a map https://www.terraform.io/docs/configuration-0-11/interpolation.html#zipmap-list-list-
Embedded within strings in Terraform, whether you're using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.
to create the two lists, use formatlist https://www.terraform.io/docs/configuration-0-11/interpolation.html#formatlist-format-args-
the problem is that I want to apply functions to values in the first list (the keys). You can’t do that with formatlist.
(I think )
just create two lists for names, one in lower-case, the other in upper-case
umm, no.
look, I can easily write out the last map. I want to reduce the amount of error-prone cut and paste in my template.
once there are twenty names in the list it gets very verbose and tedious to scroll through
what happens if you do:
data "null_data_source" "kms" {
count = "${length(local.names)}"
inputs = {
"${upper(local.names[count.index])}" = "${format("arn:aws:kms:%s:%s:key/%s",local.region, local.account, local.names[count.index])}"
}
}
oh right, again a list of maps
shoot, I think I did something like this somewhere, using zipmap
goes hunting through old code
ahh yes, you can key into your null data source outputs…
locals {
  its_a_map = "${zipmap(data.null_data_source.kms.*.outputs.key, data.null_data_source.kms.*.outputs.value)}"
}
Perfect! Thanks a million
2019-06-18
resource "null_resource" "get-ssm-params" {
provisioner "local-exec" {
command = "aws ssm get-parameters-by-path --path ${local.ssm_vars_path}--region ${var.region} | jq '.[][].Name' | jq -s . > ${local.ssm_vars}"
}
}
resource "null_resource" "convert-ssm-vars" {
count = "${length(local.ssm_vars)}"
triggers = {
"name" = "${element(local.ssm_vars, count.index)}"
"valueFrom" = "${local.ssm_vars_path}${element(local.ssm_vars, count.index)}"
}
}
guys, do you know a way to get the output of a command that ran in a null_resource local-exec provisioner into an HCL list? I tried the above but it didn't work
do you need to read the ssm params that way? you could probably load the value with a data element, jsondecode it, and get the Name key out with normal interpolation syntax
@Mads Hvelplund thanks. I'll use the external provider and data source to execute my script
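a hedged sketch of that approach; the path/region variables are hypothetical, and since the external data source's contract is a single JSON object of string keys/values on stdout, the names are joined into one string and split back afterwards:
data "external" "ssm_params" {
  # jq collects the parameter names and joins them into one comma-separated string
  program = ["bash", "-c", "aws ssm get-parameters-by-path --path ${var.ssm_path} --region ${var.region} | jq '{names: ([.Parameters[].Name] | join(\",\"))}'"]
}

locals {
  # back to an HCL list
  ssm_param_names = "${split(",", data.external.ssm_params.result["names"])}"
}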
sure, there’s a data source for a single ssm param
but not if you have tens
if there’s tens, one would like a different way to read them without also generating/creating tens of data sources
Hey bogdan, You may want to instead generate a lambda function and call it, to return the list.
In the same way #airship uses the lambda function to lookup docker labels.
I hear you @jamie, but when would the lambda update the task definition's container definition to inject any new/updated params? I found that having more control over when the TD/CD gets a new revision (at apply-time) is safer
@Bogdan oh, I meant that you can extend the tf functionality by creating a lambda and immediately calling it, and using the results to change the task…
If you haven’t done this before have a look at the process the https://airship.tf ecs service module creates and calls the lambda function to get extra ecs task details
Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon
I need to create a beanstalk environment and I was going to use the cloudposse modules, but I need an NLB with static ips and SSL termination
but I do not know whether Beanstalk multicontainer can use such a setup?
2019-06-19
@Callum Robertson this channel will help you.
Hi everyone, with https://github.com/hashicorp/terraform/issues/17179 still open (a full reshuffle/recreation of resources happens when the top item of the list gets removed), I'm wondering how other Terraformers are doing mass creation of AWS resources, like users.
Hi, We are missing a better support for loops which would be based on keys, not on indexes. Below is an example of the problem we currently have and would like Terraform to address: We have a list …
We are adding users one by one by creating a separate module for each user, e.g. https://github.com/cloudposse/root.cloudposse.co/tree/master/conf/users/overrides
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
when we run it, the user is added to the required groups
that way we can add/remove users from groups without touching other users and without changing group lists
we did it only for users
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
@cytopia
@maarten thanks for starting the conversation.
The only downside I currently see with this approach is when it comes to deleting users.
If all users were managed by a single module defined in terraform.tfvars as a list of items (dynamically generated Terraform code with independent resource blocks, so you don't hit the issue of resource re-creation by changing a list in between), I could simply delete an entry, regenerate the code and create a pull request to be reviewed. Upon merge and deploy, that user would be deleted.
With the multiple-module approach I will probably have multiple directories, and when deleting a directory and pushing the changes to git, nobody can actually delete that user, because you will have to terraform destroy in that directory before deleting it.
How do you handle that situation with the multi-module approach?
@cytopia https://github.com/cloudposse/root.cloudposse.co/blob/master/conf/users/README.md is an actual single root module approach with multiple ‘user-module’ definitions. Users will be hardcoded in tf directly as there won’t be any re-usability possible.
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
Re: With the multiple-module approach I will probably have multiple directories, and when deleting a directory and pushing the changes to git, nobody can actually delete that user, because you will have to terraform destroy in that directory before deleting it.
This will always be the case with every root module you create, hence first destroy then delete.
I am just thinking about how well that would work if you had something like Atlantis set up for auto-provisioning on pull requests.
In that case you would always have the need for manual provisioning before a review has actually happened.
when using a single root module approach for users, or iam in general you don’t have this problem
How would that eliminate the need for manual provisioning before review/merge for deleting users/roles?
As I've stated above, you would probably need to terraform destroy manually and locally, then remove the directory, git commit and push. Or am I mistaken here?
I think you’re not capturing what I meant. With the single root module approache with multiple users, one justs add a user and deletes a user and only terraform apply
is used. The state of the iam root module itself does not need to be destroyed to delete a user
@Andriy Knysh (Cloud Posse) 0.12.2 added yaml decoding and encoding, pretty nice (and allows a pretty nice but weird conversion from json to yaml)
(terraform v0.12.2)
I’m using the null-label in a reusable module. I want to pass the user-supplied tags as well as add a new one
tags = "${var.tags}"
is what I have now.. the user may or may not have supplied any tags
what’s the proper way to add a tag there?
in your top-level module, you can merge the tags supplied by the user with some hardcoded tags, then send the resulting map to the label module
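a minimal sketch of that merge (TF 0.11); the hardcoded tag here is just an example:
tags = "${merge(var.tags, map("ManagedBy", "Terraform"))}"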
for example https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/nat-gateway.tf#L14
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
then provide it to the label https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/main.tf#L13
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Variables not allowed
on <value for var.security_groups> line 1: (source code not available)
@Andriy Knysh (Cloud Posse) question for ya, do you guys use AzureDevops?
we (Cloud Posse) are mostly AWS shop with a bit of GCP
i’m sure there are many people here who use Azure
I mean AzureDevOps as in the CI/CD service
so me (and my company) are going to release a terraform provider for azuredevops
just need to word the license correctly
(to avoid any issue)
(my company as in the company I work in)
I would follow the lead of what terraform-providers (the official hashicorp org) uses
it’s all MPL-2.0
true
P.S. I already implemented the following
Resources:
azuredevops_project
azuredevops_build_definition
azuredevops_release_definition
azuredevops_service_endpoint
azuredevops_service_hook
azuredevops_variable_group
azuredevops_task_group
Data Sources:
azuredevops_project
azuredevops_service_endpoint
azuredevops_source_repository
azuredevops_workflow_task
azuredevops_group
azuredevops_user
azuredevops_build_definition
azuredevops_agent_queue
azuredevops_task_group
azuredevops_variable_group
azuredevops_variable_groups
btw, with the new yamlencode function, that means for the helm provider you can do something like this:
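(the snippet itself didn't survive the archive; a minimal sketch of the idea, assuming the helm provider's helm_release resource and a hypothetical chart/values)
resource "helm_release" "app" {
  name  = "app"
  chart = "stable/app"

  # yamlencode (new in 0.12.2) renders a native HCL object as the YAML values
  values = [yamlencode({
    replicaCount = 2
    image = {
      tag = "v1.2.3"
    }
  })]
}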
#office-hours starting now! https://zoom.us/j/684901853
Have a demo of using Codefresh for ETL
Thanks for the invite @jamie
Hey all, this slack group looks fantastic, DevOps & Cloud engineer here from New Zealand. Getting big into the HashiCorp stack and from looking around, this place is a treasure chest of good ideas
also have a look at our archives
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
Thanks @Erik Osterman (Cloud Posse), appreciate that mate!
@endofcake is also in NZ and serious terraformer
Awesome, thanks Erik, @endofcake you going to be at DevOps days?
Hi @Callum Robertson , not sure yet. I went to the first conference and it was really good. Will have to think whether I can afford the second one though.
Welcome @Callum Robertson!
I’m getting Error https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment: Variables not allowed
on <value for var.security_groups> line 1: (source code not available)
Variables may not be used here.
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
show the exact error. what TF version are you using?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Terraform v0.12.2
- provider.aws v2.15.0
We are working on same project
Error: Variables not allowed
on <value for var.private_subnets> line 1: (source code not available)
Variables may not be used here.
https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment has not been converted to TF 0.12 yet
Getting the below error:
var.private_subnets
  List of private subnets to place EC2 instances
  Enter a value: 1

Error: variable private_subnets should be type list, got number
you using TF 0.12?
YES
the modules are not converted to TF 0.12 yet
we are working on it now
okay
is it resolved yet ?
No, we copied the terraform 0.11.3 binary.
2019-06-20
Any experts who have implemented the cloudposse Terraform module for jenkins in aws?
Need help with it!
Any possibility of getting someone's patch PR'd for the eks-workers module? https://github.com/cloudposse/terraform-aws-eks-workers/compare/cloudposse:master...fliphess:patch-1?expand=1
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Any Terraform aws experts?
Will pay you guys for a cloudposse Jenkins setup
simple pull request to allow for custom default target group port : https://github.com/cloudposse/terraform-aws-alb/pull/19
This is to enable the option to specify the default target group port in the cases that the service does not listen on port 80.
@Ramesh are you seeing issues with jenkins? Have you seen the examples here https://github.com/cloudposse/terraform-aws-jenkins/tree/master/examples? (they were 100% tested, but it was some time ago)
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
as of now, the examples will not work verbatim and will need small changes. For example, terraform-aws-dynamic-subnets has already been converted to TF 0.12, so pinning to master will not work; for TF 0.11 it needs to be pinned to 0.12.0, as in ref=tags/0.12.0
@Ramesh let us know when you get stuck; we could help you get going or review the plan/apply errors if any
Great, thanks Aknysh. Let me retry.
@jose.amengual will check the PR, thanks
thanks
@jose.amengual please rebuild README
ohhh sorry I will
done
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
amazing
thanks
what are your thoughts on creating a target group module?
we have some custom target groups, sometimes more than https
so I was thinking of doing something like the alb-listeners rules module, where you could create target groups
but I’m not sure
imagine doing bluegreen: where can I create the target group for bluegreen?
that is the question I'm trying to answer
sounds good
what about this one https://github.com/cloudposse/terraform-aws-alb-ingress
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
I’m using that one too
so for bluegreen I will use a different target group that will be using a different listener rule in a custom port other than 443 like 8080 or something
yes
in order to do that I need to call that module twice so I can pass the proper variables
that is fine
you can use var.port
ah yes, twice
but the alb module will create the alb default target group, not the bluegreen TG
so I have two options: either I create a module for custom target groups and give the alb module something like a no_target_groups flag
it could be improved yes
or in my CodeDeploy module add the bluegreen target group
we don’t usually use the default TG after it gets created
same here
lol
if you see how to improve it, PRs always welcome
so a target_group module could be useful then
yes
cool, I will work on that
thanks for your help
not sure if it deserves to be a module since it will have just one resource (unless you add more functionality to it)
hahahah true… and plus it's a key part of an ALB
they are married
ALB + listener rule + TG
so maybe inside of the alb module will be better
there are too many possible combinations here
maybe a separate TG module will be useful to not repeat many settings all the time
and it could be used from other modules, e.g. alb
or alb-ingress
(which supports external TG w/o creating a new one)
yes that was one of the problems I was thinking
then I will have to do count on the resource creation to support custom TGs in the alb module and it could get messy
2019-06-21
anyone using localstack to test TF here? I mean, at least for simple projects, as it doesn't have 100% coverage
I use it as a docker machine on my mac
to test simple iam roles, lambdas and dynamodb structures
I was looking into that
and I got to here : https://opencredo.com/blogs/terraform-infrastructure-design-patterns/
In this post we’ll explore Terraform’s ability to grow with the size of your infrastructure through its slightly hidden metaprogramming capabilities.
I just don’t like multiple files per layer
layer/main.tf+outputs.tf+variables.tf
I just don’t like multiple files per layer
I :heart: multiple files per layer.
Layers facilitate these types of overrides, which are very powerful, imho:
1) replace an earlier layer file with an empty file
2) use the terraform *_override.tf filename feature to merge/replace final values into the config.
How can you empty a whole file of SGs if they are being used?
How do you know which SG of that file belongs to what?
By the name of the resource?
You can try to delete the in-use SG manually in the AWS VPC web console – it should then complain and tell you all resources that are using the SG.
2019-06-22
@antonbabenko got your issue on tfstate backend. I am afk, but will address it when back
2019-06-23
Hi, can someone help me with using connection? Below is the error:
Error: Missing required argument
on main.tf line 26, in resource "aws_instance" "example": 26: connection {
The argument “host” is required, but no definition was found.
and here is my code
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}"
}

resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.mykey.key_name}"

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "sudo /tmp/script.sh",
    ]
  }

  connection {
    user        = "${var.INSTANCE_USERNAME}"
    private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
  }
}
@vishnu.shukla can you test by moving the connection block into the provisioner block?
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
connection {
user = "${var.INSTANCE_USERNAME}"
private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
}
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"sudo /tmp/script.sh"
]
connection {
user = "${var.INSTANCE_USERNAME}"
private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
}
}
}
It is meant to be fine to use it in either, but it seems like yours isn't inheriting the host address correctly.
An alternative is to move the provisioners into a null_resource, and pass in the host address like,
host = "${aws_instance.example.public_ip}"
I think this was a tf 0.12 change… it no longer has logic to automatically set the host attribute, you have to pass it in the provisioner now
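a minimal sketch of that null_resource alternative, reusing the variables from the snippet above (0.11-style interpolation to match):
resource "null_resource" "provision" {
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "sudo /tmp/script.sh",
    ]

    connection {
      # the host must now be passed explicitly
      host        = "${aws_instance.example.public_ip}"
      user        = "${var.INSTANCE_USERNAME}"
      private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
    }
  }
}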
2019-06-24
Why do you need to specify providers block in the module? I have env vars set to AWS_REGION=eu-west-1 and AWS_DEFAULT_REGION=eu-west-1 which makes it impossible for me to use this module when worki…
Our plan (currently in action) is to continue rolling out provider pinning. Please speak up if you have any insights.
Hey everyone, I'm banging my head against the wall on a certain implementation logic with terraform: network modules to create subnets that consume maps or lists
let's assume I write a module to create a subnet and all routing and firewall resources that are required to be deployed on each and every subnet in the company
when I call the module and pass a map (or list) with three cidrs, everything is great at the start
but what do you do when later on you want to delete subnet #2?
terraform looks at the indexes when iterating: element #0 is subnet #1, element #1 is subnet #2, etc.
when I remove subnet #2, terraform wants to delete elements #1 and #2 (i.e. subnets #2 and #3) and then redeploy the previous subnet #3 at the position of element #2
that's an issue because subnet #3 didn't change at all, just its index in the list. any ideas how to tackle this?
hey @tobiaswi, not sure if this is oversimplifying what you're asking
but have you tried just targeting the resource that you’re wanting to destroy?
The terraform destroy
command is used to destroy the Terraform-managed infrastructure.
you can also specify the index used for each subnet if you didn’t want it to be dynamic
can you share the code snippet of your config?
yeah, terraform doesn’t do that well (yet), https://github.com/hashicorp/terraform/issues/14275
We have a lot of AWS Route53 zones which are setup in exactly the same way. As such, we are using count and a list variable to manage these. The code basically looks like this: variable "zone_…
here’s the issue to track for the eventual solution, https://github.com/hashicorp/terraform/issues/17179
Hi, We are missing a better support for loops which would be based on keys, not on indexes. Below is an example of the problem we currently have and would like Terraform to address: We have a list …
Hi, is anyone familiar with this module https://github.com/terraform-aws-modules/terraform-aws-vpc
Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc
I am trying to get a VPN created to connect to our environment
I am currently using that module for the VPC setup
Hey Ayo
Terraform module which creates VPN gateway resources on AWS - terraform-aws-modules/terraform-aws-vpn-gateway
Gives a great example of using this module in conjunction with what you’ve got
@Ayo Bami ^
oh nice, I had a quick look at it earlier, I just didn't understand… how does my laptop connect to it?
@Callum Robertson pretty much sure that's all I need, I just don't understand the concept of how to connect to it and how that IP is generated
are you creating a S2S VPN or a client VPN?
client VPN ideally
Provides an AWS Client VPN endpoint for OpenVPN clients.
Might help you to conceptualise what you’re doing
The following tasks help you become familiar with Client VPN. In this tutorial, you will create a Client VPN endpoint that does the following:
@Callum Robertson Thanks for your help, I was able to configure the VPN. I think it's missing authorize-client-vpn-ingress; I can't seem to find that in the terraform documentation. Any idea how I can get authorize-client-vpn-ingress without creating it in the console?
I personally haven’t played with client VPN”s in terraform. if that automation isn’t currently possible with Terraform, try using a null resource with a local-exec provisioner
A resource that does nothing.
Sorry, I’m suggesting you do this as it might be available in the AWS CLI
Let me know how you get on mate
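a hedged sketch of that workaround via the AWS CLI's authorize-client-vpn-ingress subcommand; the endpoint resource name and CIDR here are hypothetical:
resource "null_resource" "authorize_vpn_ingress" {
  provisioner "local-exec" {
    # authorize all groups to reach the target network through the Client VPN endpoint
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn.id} --target-network-cidr 10.0.0.0/16 --authorize-all-groups"
  }
}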
@Callum Robertson terraform is run in ci/cd, so destroy isn't an option; a change has to occur in the variable that defines the input. What do you mean by a specific index? got an example for that with count and a module? Sorry, can't share, it's internal code, but it's very generic, nothing especially fancy
@loren yes, that's exactly my issue. thank you for the issues, I'll be watching these
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 03, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
2019-06-25
Reads from existing Cloud Providers (reverse Terraform) and generates your infrastructure as code on Terraform configuration - cycloidio/terracognita
@antonbabenko did you know about https://github.com/dtan4/terraforming ?
What’s different in terracognita
?
Reads from existing Cloud Providers (reverse Terraform) and generates your infrastructure as code on Terraform configuration - cycloidio/terracognita
Export existing AWS resources to Terraform style (tf, tfstate) - dtan4/terraforming
It looks like dtan4 is not working on the project anymore
so the future of terraforming is uncertain
there is a bunch of PRs waiting
Howdy. I really appreciate the work done here, but there are now 20 PRs awaiting review, including some that appear to have bug fixes, expanded resources descriptions, and new AWS services support….
terraforming is a lovely tool – I hope it continues to stay useful/maintained
terracognita scans the whole account, while terraforming creates resources one-by-one. Also, terracognita is very new. Time will show.
I’d argue that makes terracognita an antipattern. You almost never want a single monolithic terraform configuration for everything.
after 4-5 failed monolith terraform setups, 100% agree… although this could be an interesting starting point: generate tf code for all my resources, and then I can regroup them into smaller pieces
but how did you split the TF state file afterwards?
terraform state mv
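e.g., to carve a resource out of a monolithic state into a new one (file and resource names are illustrative):
terraform state mv -state=monolith.tfstate -state-out=network/terraform.tfstate aws_vpc.main aws_vpc.main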
I need help : so I was using
module "terraform_state_backend" {
source = "git::<https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.7.0>"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
attributes = "${var.attributes}"
region = "${var.region}"
}
created the state and all that, added the provider config to my main.tf, and then I created another s3 bucket resource BUT with the same name, so now my state is in the same bucket as my ALB logs
how can I move this out ?
if I change the bucket name for the ALB logs, it tries to delete the other bucket, but since it's not empty it can't
how many resources did you already create? can you just manually delete everything in the AWS console and start with new state?
like 5 resources
I can’t delete it
can’t delete it manually? or not allowed to delete it?
you mean the resources?
that were created by terraform?
yes
I’m asking if we already gave the static ips to the clients
can I just create a new "state bucket" with the module and copy the content over?
you can, but you will have to give it a different name, which, since you are using the module, will require giving it a different var.name
or adding some attributes to create a unique name
yes I was thinking to give it another attribute so the name is different
I thought by leaving this
module "terraform_state_backend" {
source = "git::<https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.7.0>"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
attributes = "${var.attributes}"
region = "${var.region}"
}
inside of my main.tf it will not try to delete the bucket
but then I have this :
module "s3_bucket" {
source = "git::<https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.4.0>"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
delimiter = "${var.delimiter}"
attributes = "${var.attributes}"
tags = "${var.tags}"
region = "${var.region}"
policy = "${data.aws_iam_policy_document.default.json}"
versioning_enabled = "true"
lifecycle_rule_enabled = "false"
sse_algorithm = "aws:kms"
}
module "terraform_state_backend" {
source = "git::<https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.7.0>"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
attributes = ["${compact(concat(var.attributes, list("2")))}"]
region = "${var.region}"
}
so let me see if I understand this correctly:
1. I create a new state bucket with the module
2. then I copy the state file to the new bucket
3. then I manually edit the state file to use the new bucket
4. cross fingers
this is not valid ? https://www.quora.com/How-do-I-move-terraform-state-from-one-bucket-to-another ?
in reality the only bucket I need to change is the state bucket not the logs bucket
but I guess when the state was copied from the local state file to the s3 bucket, it kept the name of the bucket there
somehow I managed to do it
what did you do? @jose.amengual
I made sure my plan was clean
deleted everything and started with clean state?
Then I did a terraform state pull > terraform.tfstate
Then I disabled the remote state in main.tf
Then I did an apply targeted at the tfstate module
Then I changed the provider config to the new bucket
Then I did an init, and said yes to copying the tfstate from cache
And from then on I basically made sure to have the state in the new bucket, and changed the S3 log bucket name, which then had to be force-removed
Deleted it manually, etc.
It was not pretty
But it worked
There was no need to do it that way; if I had created the state bucket beforehand it could have been 4 commands
But I wanted to have the state bucket as part of the project tfstate
And due to that I had to delete the dynamo table manually, etc.
these are basically the steps
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jun 26, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-06-26
How do you guys pass variable values to the modules ?
- On the cli like -var foo=bar
- using terraform.tfvars
- ENV variables
for not-secrets, using .tfvars
for secrets, ENV vars, which we usually read from SSM using chamber
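e.g., chamber exports each SSM param under the service as an env var for the wrapped command (the service name here is illustrative); params named with the TF_VAR_ prefix are then picked up by terraform as input variables:
chamber exec my-service -- terraform plan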
chamber…
is that an external tool ?
found it
Yes
I really wish we had consul and Vault
chamber is just a tool to access SSM param store
Not a full blown secret management system
I understand
Vault is a little bit difficult to set up compared to just using SSM
Tell me about it, it took me 2 months to set up a full production cluster
did you try the “official” hashicorp module?
this was in November 2017, and I was working at EA where it's all chef-based
aha… yea, makes sense it would be that much effort
to be honest, setting up Vault for prod with 2 instances is like 15 lines in the config file; the pain was Consul to some extent, solving the chicken-and-egg problem for the master key, learning about consul-template
but the hardest part by far is understanding the identity provider, how the IAM auth and PKCS7 authorization work, and tying all that up to the policies
if you have a very good understanding of IAM
it is easier to setup if you run it with AWS
but it is a pretty good product
#office-hours starting now https://zoom.us/j/508587304
Hey all! Is anyone familiar with creating a workspace and setting multiple tfe variables through an API call? I've been trying to make the POST for the tfe vars, but I can't seem to get multiple attributes into one JSON payload. Tried declaring multiple "attributes" sections, tried putting them in a list [{}], tried declaring multiple data payloads. When declaring multiple attributes, only the last attribute listed takes effect. Multiple data payloads fail, and setting the attributes as a list fails, due to an invalid JSON body.
Unfortunately, not that many here running TFE (terraform enterprise)
@johncblandii is running it though
Fair enough
hey @jose.amengual if you’re on Mac. @jamie told me about aws-vault, it’s been fantastic for me, puts all of your access keys, secrets and session tokens into your ENV vars. There’s also a container variation of it if you’re looking to use outside your local machine
I use it every day
but the secrets I was referring to are db passwords etc
ah my mistake mate!
np
A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault
@Andriy Knysh (Cloud Posse) is hard at work upgrading our modules to 0.12
so far, ~10% are updated (but we’re focused on the most popular/most referenced modules)
to follow along, see here: https://github.com/search?q=topic%3Ahcl2+org%3Acloudposse&type=Repositories
we are using the hcl2 label on all 0.12 compatible modules
also, in the process of upgrading the modules, we're adding tests to each one. Check out the test/ folder.
Rackspace Infrastructure Automation has 30 repositories available. Follow their code on GitHub.
Rackspace has started publishing terraform modules
what’s wierd is they are not following terraform module naming conventions.
That is a very weird naming scheme
@Andriy Knysh (Cloud Posse) How painful (or hopefully, smooth) has the migration been so far? Our entire infrastructure is still on 0.11, but I’d really like to migrate.
it’s actually pretty smooth
mostly just syntactic sugar
with some exceptions like objects
https://github.com/cloudposse/terraform-null-label/blob/master/variables.tf#L55
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs - cloudposse/terraform-aws-cloudtrail
but we are not only converting to TF 0.12, we are also adding:
- Real examples (if missing)
- Many tests to test the module and the example, and to actually provision the example in AWS test account
- Codefresh test pipeline to run those tests
after that’s done, an open PR will trigger all those tests to lint the code, check terraform and providers version pinning, validate TF code, validate README, provision the example in AWS, and check the results (then destroy it)
@Lee Skillen ^
That’s a fantastic piece of work, well done. I am definitely going to have to send some tweets your way as thanks for inspiration. Looking forward to the migration. Mostly to remove a heap of hcl workarounds. I’ll check on the #terragrunt folks too since that’s our setup.
I’ll need to evaluate whether we can (and should) step away from the terragrunt-driven pipeline that we have at the moment too. Sensing future pain.
@loren and @antonbabenko can probably speak more to what’s involved upgrading terragrunt projects to 0.12. there was recently some chatter in #terragrunt
Thanks Erik - I didn't know there was a #terragrunt
2019-06-27
Good day everyone !
Just curious, do we have any CloudFront module that supports multiple types of origin? https://sourcegraph.com/github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/-/blob/main.tf
for example, right now everything on my side is not always intended to use s3_origin_config
origin {
  domain_name = "${aws_s3_bucket.mysite.bucket_regional_domain_name}"
  origin_id   = "${module.mysite_label.id}" # A unique identifier for the origin (used in routes / caching)

  # s3 should not be configured as website endpoint, else must use custom_origin_config
  s3_origin_config {
    origin_access_identity = "${aws_cloudfront_origin_access_identity.mysite.cloudfront_access_identity_path}"
  }
}
it might be intended to use a custom origin also. I plan to rewrite this module to add a few more conditions so the end user can choose a custom origin or an s3 origin
Sourcegraph is a web-based code search and navigation tool for dev teams. Search, navigate, and review code. Find answers.
With 0.12 this should be achievable. With 0.11 it would have been too cumbersome.
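a hedged sketch of what that could look like with 0.12 dynamic blocks, assuming a hypothetical use_custom_origin flag and origin variables:
# inside an aws_cloudfront_distribution resource
origin {
  domain_name = var.origin_domain_name
  origin_id   = var.origin_id

  # exactly one of the two blocks is rendered, driven by the flag
  dynamic "s3_origin_config" {
    for_each = var.use_custom_origin ? [] : [1]
    content {
      origin_access_identity = var.origin_access_identity
    }
  }

  dynamic "custom_origin_config" {
    for_each = var.use_custom_origin ? [1] : []
    content {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }
}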
We are updating all modules to 0.12
thanks for your hard work, Erik
2019-06-28
Hi everyone, first of all I would like to thank everyone behind the Cloudposse terraform modules; they literally made my life easier. I have used the EKS and EKS-Worker modules and I have an issue that I can't figure out: basically both the eks and the eks-workers work fine and the nodes can join the cluster, but they never become Ready, and they never get assigned secondary IPs, which they did the first time I ran terraform. I'm not sure if it's related to the AMI or the initialization of the worker nodes, even though I haven't changed anything. How should I proceed, and what might be the cause? The tags!? I'm using terraform-aws-modules/vpc/aws for the VPC and subnets, with the complete eks example for the eks cluster and worker nodes
@suleiman ali is your VPC tagged with kubernetes.io/cluster/yourclustername = shared?
Are these worker nodes in a private or public subnet? The configuration of the EKS nodes is different. However, if your nodes are connecting to the cluster, it sounds like it may even be an SG issue with the EKS cluster talking to your nodes; have you triple-checked the SGs and the tags on those SGs?
Additionally, it might be worth checking your user_data
I run the following
/etc/eks/bootstrap.sh --kubelet-extra-args '--cloud-provider=aws' --apiserver-endpoint '${var.eks_cluster_endpoint}' --b64-cluster-ca '${var.eks_certificate_authority}' '${var.eks_cluster_name}'
echo "/var/lib/kubelet/kubeconfig"
cat /var/lib/kubelet/kubeconfig
echo "/etc/systemd/system/kubelet.service"
cat /etc/systemd/system/kubelet.service
What would you like to be added: Support for t3a, m5ad and r5ad instance types. Why is this needed: AWS had added new instance types, and the AMI does not currently support them.
@cytopia thanks btw for upstreaming all your fixes to terraform-docs.awk to our cloudposse/build-harness. You saved @Andriy Knysh (Cloud Posse) a bunch of time today when he was updating one of our modules and ran into some of the issues you fixed.
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) there is another fix on the way. I tried to make the PR as descriptive and detailed as possible. I left an example with which you can try out the current faulty behaviour.
PR here: https://github.com/cloudposse/build-harness/pull/157
Do not double quote TF >= 0.12 legacy quoted types This PR addresses another issue with double-double quoting legacy types. This happens if you still double quote string, list and map in Terrafo…
yes thanks @cytopia, all your changes worked perfectly
2019-06-29
k, lately didn’t have a chance to work on terraform-lsp due to a broken computer, just receive my order for a USB monitor, and it work pretty well, so I should able to continue the terraform lsp for rest of the objectives
2019-06-30
Hi guys, has anyone been able to configure authorize-client-vpn-ingress using terraform? I can't find it in the documentation. Thanks in advance
replied to the original thread, Ayo