#terraform (2019-07)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-07-01
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 10, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-07-02
Does anyone know if default block reuse through “locals” still works in 0.12? I’m getting consistent errors in 0.12 that I didn’t get in 0.11 like “An argument named “health_check” is not expected here. Did you mean to define a block of type “health_check”?”
Specifically I was using similar syntax to this but for health_checks instead of tags https://www.terraform.io/docs/configuration/locals.html
Local values assign a name to an expression that can then be used multiple times within a module.
that sounds like the error where tf 0.12 differentiates strongly between attrs (assigned using =
) and blocks (no assignment), i.e.
attr = { foo = "bar" }
block {
  foo = "bar"
}
tf 0.11 let you do either in many/most places, tf 0.12 forces you to use the syntax that matches the implementation
So this works
but if you use the same syntax for health_checks, since you want to keep it DRY, it fails with the error above
Did TF 0.12 make some special exception for “tags” blocks?
yes, tags are no longer blocks, they are attrs
to make blocks DRY, you probably want to investigate dynamic blocks, https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
Mmm I’ll take a look at those, thanks for the pointers @loren
The syntax is now expecting a for-loop as a base for it, instead of simple substitution using a “local” block. Even if I faked the for-loop it seems like I’d still need to repeat myself for all keys in the “content” section in all resources vs just referencing the local. Still trying to figure out if there’s any good way of doing this or if I just have to go back to repeating it in all resource blocks
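For reference, a minimal sketch of the dynamic-block approach to keeping a shared block DRY (the target resource here is illustrative, not from the thread):
locals {
  health_check = {
    path    = "/healthz"
    matcher = "200"
  }
}

resource "aws_lb_target_group" "example" {
  name     = "example"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  dynamic "health_check" {
    # Wrapping the single local in a list makes the for_each a one-iteration
    # loop, so the shared config renders as exactly one health_check block.
    for_each = [local.health_check]

    content {
      path    = health_check.value.path
      matcher = health_check.value.matcher
    }
  }
}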
@Vasco Pinho you will also get the same (or similar) error message when using a dynamic block with a for loop on a list variable without updating the type of the variable to list(object) and specifying the types of the items in the list. For example:
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
var.custom_error_response was type = "list" and a similar error was shown
needed to specify the concrete type https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/variables.tf#L284
Yeah the point here was to have a short concise file, which we did in 0.11 because the healthcheck block (which was the same for most resources) got interpolated. I just undid all of that for 0.12 and now it works. Oh well.
is there a module for an ASG with a mixed instances policy?
Anyone know of the best way to check for the existence of a variable and, if not found, fall back on defaults?
"${var.ami-id == "" ? data.aws_ami.default.image_id : var.ami-id}"
hi @cabrinha, I don't know the best way to do this, but I used the same approach in my project
Yup, exactly as above
2019-07-03
Hi guys, what is the best way to have an S3 bucket in website mode backed by a CDN ?
It seems like https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn does not support website mode
and https://github.com/cloudposse/terraform-aws-cloudfront-cdn does not handle S3 origins
here is a real example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
we use it to deploy https://docs.cloudposse.com/
you’re so fast… I tried the origin_domain_name without success, I’m trying again
it seems like cloudfront is not finding the bucket; when I look at the cdn conf (in the CLI), the origin type is “custom”
but when I copy-paste the name of the origin in the browser, it's ok
destroy then apply takes some time on the CDN, but it worked like a charm. Thanks!
Glad it worked for you
Public Office Hours starting now! Join me here: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
Have any questions? This is your chance to ask us anything.
2019-07-04
What is a better way to set up a condition to run (or not run) a provisioner?
better than?
than
provisioner "remote-exec" {
  inline = ["chmod +x deploy.sh", "${var.autodeploy == "true" ? local.autodeploy : local.info}"]
}

locals {
  autodeploy = "sh ./deploy.sh"
  info       = "echo 'to deploy run \n sh deploy.sh' >> ~/README"
}
what is deploy.sh doing?
You don’t want to tie app deployments to Terraform runs, that would be very bad.
it's just a task from a devops course; in real life there's at least ansible and jenkins for deploying and configuring :)
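For what it's worth, the usual trick for a conditional provisioner (a sketch; the instance reference is hypothetical) is to hang it off a null_resource gated by count:
resource "null_resource" "deploy" {
  # 0 or 1 copies, so the provisioner only runs when autodeploy is enabled
  count = "${var.autodeploy == "true" ? 1 : 0}"

  connection {
    type = "ssh"
    host = "${aws_instance.app.public_ip}" # hypothetical instance
    user = "ec2-user"
  }

  provisioner "remote-exec" {
    inline = ["chmod +x deploy.sh", "sh ./deploy.sh"]
  }
}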
Hi All, I am setting up a pipeline using Terraform and getting the error below. I tried changing the name of the S3 bucket but still get the same error, please help me get it resolved
Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again. status code: 409, request id: 415EB74A26A388CF, host id: /YawK/bKVKRTufDZHJi35WZuzkDlF0Fg0qY+rEN2rVZgH0oFwQPRC4YwXOvUFOYb2lCpHADtlQ4=
on code_pipeline.tf line 1, in resource "aws_s3_bucket" "codepipeline": 1: resource "aws_s3_bucket" "codepipeline" {
Hi @vishnu.shukla that means that someone out there has already taken the name of the bucket and you need to change it.
Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
I tried changing the name many times but no luck
Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again. status code: 409, request id: 331EC24BF2D990D5, host id: 43fnC5EwsbQ2TC9QF5OWNLomL6qHRe2YB6WEi0bVXUhyLG8ZflD7yROP+wPAjzY9G1PafUWh0fY=
on code_pipeline.tf line 1, in resource "aws_s3_bucket" "codepipelinesrttwegdye5w26wfdbg452": 1: resource "aws_s3_bucket" "codepipelinesrttwegdye5w26wfdbg452" {
see this even weird name
hmm try that with aws cli
and see if it will work
that is really weird
anyone have an aws codedeploy terraform script?
you mean terraform main.tf or the script to route traffic to the new target group ?
are you doing bluegreen ?
@Nikola Velkovski sounds like you’re changing the resource name, and not the actual bucket name.
Look one line below, and there should be a bucket = "something"
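In other words (a sketch):
resource "aws_s3_bucket" "codepipeline" {       # resource label: only needs to be unique within this configuration
  bucket = "myorg-codepipeline-artifacts-12345" # bucket name: must be globally unique across all AWS accounts
}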
ah yeah I somehow failed to see that
thanks @marc!
Too many context switches
no problem!
having some issue with this: https://github.com/cloudposse/terraform-aws-key-pair
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
am i supposed to create my own key first?
I’d like the module to generate the key and push it to AWS
@cabrinha if you set this var to true, a new key will be generated and pushed to AWS https://github.com/cloudposse/terraform-aws-key-pair/blob/master/variables.tf#L41
otherwise, an existing key gets imported
also look at the example https://github.com/cloudposse/terraform-aws-key-pair/tree/master/examples/complete
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
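A minimal usage sketch (the variable names generate_ssh_key and ssh_public_key_path are assumed from the module's variables.tf; double-check against the ref you pin):
module "ssh_key_pair" {
  source              = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=master"
  namespace           = "eg"
  stage               = "dev"
  name                = "app"
  ssh_public_key_path = "./secrets"
  generate_ssh_key    = "true" # generate and push a new key instead of importing an existing one
}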
k thanks
2019-07-05
Anyone had issues with TF12/Geodesic and provider aliasing in the aws provider?
genericish example https://gist.github.com/ChrisMcKee/675d4b954fffc08046a7712efe9497db
in geodesic I get
Error: fork/exec /conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4: no such file or directory
running out of geodesic it works
Does that file exist?
yeah, tried it on one that I've been using a while (setup guard duty in all the regions)
2019/07/05 16:53:27 [DEBUG] [aws-sdk-go] {}
2019/07/05 16:53:27 [DEBUG] plugin: waiting for all plugin processes to complete...
2019-07-05T16:53:27.934Z [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/05 16:53:27 [ERR] plugin: plugin server: accept unix /tmp/plugin915308162: use of closed network connection
2019-07-05T16:53:27.945Z [DEBUG] plugin: plugin process exited: path=/conf/guardduty/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.18.0_x4
same error in debug
2019/07/05 17:08:44 [TRACE] BuiltinEvalContext: Initialized "aws" provider for provider.aws.replicaregion
2019/07/05 17:08:44 [TRACE] <root>: eval: terraform.EvalNoop
2019/07/05 17:08:44 [TRACE] <root>: eval: *terraform.EvalOpFilter
2019/07/05 17:08:44 [TRACE] <root>: eval: *terraform.EvalSequence
2019/07/05 17:08:44 [TRACE] <root>: eval: *terraform.EvalGetProvider
2019/07/05 17:08:44 [TRACE] <root>: eval: *terraform.EvalValidateProvider
2019/07/05 17:08:44 [TRACE] buildProviderConfig for provider.aws.replicaregion: using explicit config only
2019/07/05 17:08:44 [TRACE] GRPCProvider: GetSchema
2019-07-05T17:08:44.916Z [INFO] plugin: configuring client automatic mTLS
2019-07-05T17:08:44.954Z [DEBUG] plugin: starting plugin: path=/conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4 args=[/conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4]
2019/07/05 17:08:44 [ERROR] <root>: eval: *terraform.EvalInitProvider, err: fork/exec /conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4: no such file or directory
2019/07/05 17:08:44 [ERROR] <root>: eval: *terraform.EvalSequence, err: fork/exec /conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4: no such file or directory
2019-07-05T17:08:45.081Z [DEBUG] plugin: plugin process exited: path=/conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4 pid=1730
2019-07-05T17:08:45.081Z [DEBUG] plugin: plugin exited
2019/07/05 17:08:45 [TRACE] [walkValidate] Exiting eval tree: provider.aws.replicaregion (close)
Error: fork/exec /conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4: no such file or directory
I tried shifting the aws provider back from 2.18 to 2.17 ^ as 2.18 came out today and I hate coincidences
race condition with multiple providers? init stepping on itself? try pre-staging the binary in the same directory as your terraform binary…?
we’ve also seen such errors due to the bash PATH cache (though in that case the file in the msg really does not exist)… https://medium.com/faun/no-such-file-or-directory-seriously-ee14e51a1cf2
terraform-provider-aws_v2.17.0_x4: no such file or directory
I’ve had these sorts of things happen on alpine
fixed by doing apk add libc6-compat
check if the binary exists. if it does, then run ldd /conf/temp/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.17.0_x4
it stores them all in /localhost/, not sure if that's the gripe. Just weird as hell it's only moaning when using provider aliasing.
/localhost/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.18.0_x4
ldd: terraform-provider-aws_v1.60.0_x4: Not a valid dynamic program
Copying the plugin in to the folder fixed the issue (in that I can at least do what I need to) but it fails doing multiprovider without it copied. More WSL weirdness maybe.
There is a setting to change the plugin cache dir
Both env and config option
Try using that to change the location inside of geodesic
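Both forms of that setting, for reference:
# via environment variable
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"

# or in the CLI config file (~/.terraformrc)
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"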
hopefully wsl2 will be less annoying
that will show the dynamic linking
then ensure everything it dynamically links to exists.
Question: can I have a data resource that might not exist on the first run but will exist in subsequent runs ?
I remember having issues with data lookups that did not exist
no, data source must exist, this is a big limitation
bummer
@jose.amengual you can use a self made lambda datasource as workaround
how ?
so in your terraform run you apply the lambda and subsequently use aws_lambda_invocation for a lookup of a value.
so you are using the lambda as some sort of key:value store
no the lambda is used as a datasource. It’s doing the same kind of lookup a regular datasource would normally do but it won’t fail the moment the actual resource is not existing yet.
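A sketch of that pattern, assuming a lookup function already deployed (names and payload are illustrative):
data "aws_lambda_invocation" "lookup" {
  function_name = "my-terraform-lookup"            # hypothetical lookup Lambda
  input         = jsonencode({ resource = "foo" }) # whatever payload the function expects
}

# data.aws_lambda_invocation.lookup.result is the function's JSON response as a
# string; the function can return an empty/sentinel value instead of failing
# when the looked-up resource doesn't exist yet.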
@Andriy Knysh (Cloud Posse) added local variables completion
super! nice job
what vim plugin is this?
I mainly use spacemacs and vscode, hope they add completion for 0.12 soon enough
2019-07-06
It's terraform-lsp, so it works in all editors
Language Server Protocol for Terraform. Contribute to juliosueiras/terraform-lsp development by creating an account on GitHub.
2019-07-08
^ that looks awesome
Is anyone familiar with queueing applies on all workspaces in TFE through an API call? Trying to figure out how to dynamically integrate the CI/CD pipeline that's generating ECR images and lambda packages, into ECS task definitions and lambda functions. The trouble I've run into though is you can only queue up an apply call on an individual workspace, which means you need a list of them, which defeats the dynamism and self-service nature of terraform
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 17, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
hi. anyone here know how to get an old version of the terraform docs? over a year ago, we built our infrastructure using terraform v11 and the docs for IAM resources look completely different today.
Have a look at this Dockerfile that shows you how to build every version down to the very first yourself: https://github.com/cytopia/docker-terraform-docs/blob/master/Dockerfile
Alpine-based multistage-build version of terraform-docs and terraform-docs-replace in multiple versions to be used for CI and other reproducible automations - cytopia/docker-terraform-docs
Thanks. i’ll look into this.
According to this https://github.com/hashicorp/terraform/issues/15058#issuecomment-306099829 the only way is to view the versioned history of the website in git.
Hi there, Terraform Version v0.8.X The office environment has locked terraform version which is v0.8.8. I'd like to go through the terraform documents only on that version because I am not sure…
@drexler difference is not so big from what I experienced, what issues are you facing ?
is this what you need? https://www.terraform.io/docs/configuration-0-11/index.html
Terraform uses text files to describe infrastructure and to set variables. These text files are called Terraform configurations and end in .tf
. This section talks about the format of these files as well as how they’re loaded.
Does anyone know if there’s a syntax to convert a tuple to a list(string)? (Using 0.12)
The docs say “if a module argument requires a value of type list(string) and a user provides the tuple [“a”, 15, true], Terraform will internally transform the value to [“a”, “15”, “true”] by converting the elements to the required string element type.” However, I’m getting this error:
The given value is not suitable for child module variable “ingress_cidr_blocks” defined at .terraform/modules/postgres_security_group/terraform-aws-modules-terraform-aws-security-group-a332a3b/modules/postgresql/variables.tf element 0: string required.
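One workaround that may help (a sketch): coerce each element yourself, since 0.12 ships tostring() and for expressions:
locals {
  mixed      = ["a", 15, true]                      # a tuple of mixed types
  as_strings = [for v in local.mixed : tostring(v)] # ["a", "15", "true"], usable as list(string)
}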
Good summary of the changes, https://blog.gruntwork.io/terraform-up-running-2nd-edition-early-release-is-now-available-b104fc29783f
Learn the top 10 problems that have been fixed in Terraform since the 1st edition
TIL count can reference data sources in tf 0.12!
2019-07-09
without any “count of” errors?
Seemingly so, yes
Though I’m sure there are still limits
That’s great
Hi guys, I need a bit of help. I'm setting up a bunch of aws codepipelines using terraform. Today I wanted to add a step that integrates a Jenkins job (adding it from the console works great) and I hit this issue: https://github.com/terraform-providers/terraform-provider-aws/issues/6931 Does anyone have a workaround for it?
This issue was originally opened by @bsarbhukan as hashicorp/terraform#19696. It was migrated here as a result of the provider split. The original body of the issue is below. Current Terraform Vers…
2019-07-10
Hey there,
Has anybody had issues with terraboard after upgrading its db (psql 10.6 to 11.4)? It doesn't show anything on the dashboard for me after the upgrade :/ And in the logs, just an “automigrate” message.
just recreated the db in the end…
2019-07-11
Question: is it possible to have a filter on a data resource to find something with a tag like name = "private: 1", using regex or something?
the subsequent subnets are private: 2, 3, 4, etc.
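EC2 filters don't support full regex, but they do support * wildcards, so something like this sketch may work:
data "aws_subnet_ids" "private" {
  vpc_id = var.vpc_id

  filter {
    name   = "tag:Name"
    values = ["private*"] # EC2 filter values allow * and ? wildcards, not regex
  }
}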
TIL terraform state mv doesn't support nested modules. They need to be moved one by one.
what version are you using? Worked fine for me with 0.11.14
Terraform v0.11.14
Will prepare PoC.
Workspace.
Apply with patch -p1 < ~/Downloads/0001-poc.patch
Commands with outputs
As you can see, I've ended up with module.moved.local_file.foo, instead of module.moved.module.second.local_file.foo.
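The one-by-one workaround looks roughly like this, using the addresses from the PoC above:
terraform state mv 'module.moved.local_file.foo' 'module.moved.module.second.local_file.foo'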
2019-07-12
I have a situation with Cognito where my user_pool_client is recreated, but an authenticate rule on the ALB is not being updated with the new client id for some reason. Any ideas why? (v 0.11 of terraform)
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) PR for https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/28
ECS Service supports a deployment_controller to enable support for CodeDeploy integration. This further enables the ability to use Blue/Green deployments via CodeDeploy.
2019-07-13
2019-07-14
good news
2019-07-15
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 24, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Are y'all able to use the https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/backing-services/.envrc#L2 pattern for private repos?
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
yes!
there are 2 ways
option A: use ssh agent https://github.com/cloudposse/geodesic/blob/master/rootfs/etc/init.d/atlantis.sh#L70-L76
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
option B: use git credential helpers to rewrite ssh:// and git@github.com repos to https:// and use GITHUB_TOKEN
Here’s the github-credential-helper
“plugin” https://github.com/cloudposse/geodesic/blob/master/rootfs/usr/local/bin/git-credential-github
hmm, thought so but I’m getting asked for username/password when terraform init
using .envrc
even though I have SSH agent loaded
oh, then you are probably not using a git ssh url
e.g. don't use source = "https://github.com/myorg/repo.git", but use source = "git@github.com:myorg/repo.git"
Ya tried that too, will have a poke, ta
if it’s asking for a password, it must be https
oh, and the .envrc would look like:
export TF_CLI_INIT_FROM_MODULE="git::git@github.com:cloudposse/terraform-root-modules.git//aws/backing-services?ref=tags/0.40.0"
Yup got it, agent was failing to load my key
Ta
that's if you use ssh-agent mode
but if you use the git credential helper, then you can simply use an ENV
ya
Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.
Looks like https://github.com/cloudposse/terraform-root-modules/blob/master/aws/account-settings/outputs.tf#L5-L7 is broken
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
Terraform module to provision general IAM account settings - cloudposse/terraform-aws-iam-account-settings
doesn’t output minimum_password_length
Will open a PR shortly
@chrism IIRC you were using geodesic on Windows? Any gotchas?
Someone in my team is a Windows bod, there’s always one
lol everyone's either windows or mac here; WSL can be a colossal D at times.
I've had issues with the terraform plugin cache folder in recent weeks; but that's relocatable: ENV TF_PLUGIN_CACHE_DIR=/tmp
I have to set ENV ASSUME_ROLE_INTERACTIVE=false
as assume-role fails to work in the new format.
aws-vault works fine though and the rest seems ok. Obviously if you want to utilise the /localhost thing it isn't mapping to c:\users\user, it's mapping to c:\users\user\appdata.… etc… the WSL path (it kindly pumps that out when it boots so you can tell where it's mapped)
btw, the interactive assume role stuff is what @joshmyers helped us implement earlier this year
I know, it was around then I first mentioned it in the other channel when it broke
Ya, I broke it, apologies!
It’s minimal stuff tbh; I’m finding that most stuff wont get used unless its automated though… people are lazy af
Cool, cheers for that
Does anyone have an idea how I can make this snippet of Terraform 0.11 code (which is part of a module) compatible with Terraform 0.12, so the module can already be used in a Terraform 0.12 context?
data "aws_ami" "instance" {
most_recent = true
filter = "${var.runner_ami_filter}"
owners = "${var.runner_ami_owners}"
}
variable "ami_filter" {
type = "list"
default = [{
name = "name"
values = ["amzn-ami-hvm-2018.03*-x86_64-ebs"]
}]
}
The problem is that the way I pass the block as a list of a map is not supported by TF 0.12
IF it is still a block:

variable "ami_filter" {
  type = "list"

  default {
    name   = "name"
    values = ["amzn-ami-hvm-2018.03*-x86_64-ebs"]
  }
}

OR if they changed default to be a map:

variable "ami_filter" {
  type = "list"

  default = {
    name   = "name"
    values = ["amzn-ami-hvm-2018.03*-x86_64-ebs"]
  }
}
oh silly me, i see what you’re doing, that’s a variable def
Yepz, and a hack
I would give the user the option to define the filter in a flexible way
variable "ami_filter" {
type = list(map())
default = [{
name = "name"
values = ["amzn-ami-hvm-2018.03*-x86_64-ebs"]
}]
}
change the type
that is not solving my issue the part above is the module code, I would like to keep the module for the moment .011
but be able use as consumer of the module 0.12
you can’t keep this stuff backwards compatible
but the code snippet you provided is tf 0.12
the block/attr thing is a major blocker for us
slowing down our whole 0.12 adoption
Yepz, my plan was to make the module 0.12 compatible first
but it seems impossible
basically have to uplift everything to 0.12 somehow
either maintain multiple branches/versions, or just make a hard stop on 0.11 support
yepz,
Thanks for the feedback. Last week I had a chat with a few guys from HashiCorp; they mentioned that it should be possible to use a tf 0.11 module in a tf 0.12 context, so I was just giving it a try
They don’t particularly seem to care that they broke backwards compatibility for this syntax in the transition… https://github.com/hashicorp/terraform/issues/20505#issuecomment-496601736
Terraform Version Running a Google provider acceptance test with github.com/hashicorp/terraform v0.12.0-alpha4.0.20190226230829-c2f653cf1a35 vendored. master is in this state. Terraform Configurati…
thanks for the reference
PR for allowing ssl policy changes: https://github.com/cloudposse/terraform-aws-alb/pull/23
(backwards compatible) @Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse)
The SSL policy is outdated and consumers may choose to use different values. @aknysh @osterman
~@johncblandii - thanks for the enhancement. @Andriy Knysh (Cloud Posse) is on vacation so it may take a day or two before we get to it.~
tried to make sure it was backwards compat
thx for the quick merge
I am using the terraform-aws-modules/rds/aws module; when I tried to restore a database from a snapshot it timed out
now when I try to terraform plan and terraform apply, I get the following error
aws_db_instance.this: Error modifying DB Instance aartdb-eakf: InvalidDBInstanceState: Database instance is not in available state
Is anyone aware of this issue?
2019-07-16
@Erik Osterman (Cloud Posse) quick patch to the deployment controller work: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/29.
It doesn't work unless you have ignore_changes on, since the resource is duplicated. I completely missed that.
#28 added the controller to only one of the services. The service is duplicated so it isn’t applying in all scenarios. @aknysh @osterman
is it possible to add an arbitrary service task on that module? like for a blue/green?
I’m doing blue/green right now
i can show you my approach after this
if blue_green = “enabled” and blue_green_port = “something” do x
I also know that @LeoGmad is doing it
yeah
we wrap this module so we just attached our stuff in our own module
I forked the module because of that
mmm I see
show me when you can , I’m very interested
gimme a sec. i’ll add bits here
my problem is that I can’t use the standard port
same problem with the alb module
yeah. i just added a new lb listener with port 8443 (vs 443)
since it only creates http and https target groups
yeah, i just added new ones
what I was thinking to do is to create a Target group module
that is added to the alb module and where you can define arbitrary target groups, or defaults if no custom ones are defined
(leaving parts out for brevity)
module "alb_service_task" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=tags/0.13.0>"
}
module "alb_ingress" {
source = "git::<https://github.com/cloudposse/terraform-aws-alb-ingress.git?ref=tags/0.7.0>"
}
module "alb_ingress_green" {
source = "git::<https://github.com/cloudposse/terraform-aws-alb-ingress.git?ref=tags/0.7.0>"
}
resource "aws_lb_listener" "green" {
load_balancer_arn = "${var.alb_arn}"
port = "${var.alb_ingress_port_green}"
protocol = "${coalesce(var.alb_ingress_protocol_green, var.alb_ingress_protocol)}"
ssl_policy = "${var.ssl_policy}"
certificate_arn = "${var.certificate_arn}"
default_action {
target_group_arn = "${module.alb_ingress_green.target_group_arn}"
type = "forward"
}
}
and the same thing with the task
default_action {
  target_group_arn = "${module.alb_ingress_green.target_group_arn}"
  type             = "forward"
}
is that TF 0.11 compatible ?
yup
this is all 0.11; we’re migrating to .12 now, but none of that is active
do you have your code deploy bits worked out?
0.13.1 works, @Erik Osterman (Cloud Posse):
deployment_controller.0.type: "ECS" => "CODE_DEPLOY" (forces new resource)
yes code deploy is all good
I used it for Fargate but is almost the same
what do you need ?
same
I can send you a gist if you want
nah, i have it worked out. was going to share if you were still piecing it together
just finalizing the TF at this point
how are you going to do the route traffic thing in CodeDeploy?
manual
did you do a script to do the API call ?
ohh I see
but it is configurable w/ timeouts
we have not decided on that
we technically haven’t either so using manual for now.
it demo’s better.
I would like it automatic
run some tests and then do it
nice
how'd you do your aws_codedeploy_deployment_group, @jose.amengual? specifically the lb section:
load_balancer_info {
  target_group_pair_info {
    prod_traffic_route {
      listener_arns = ["${module.alb.https_listener_arn}"]
    }

    target_group {
      name = "${module.main_container.alb_target_group_name}"
    }

    test_traffic_route {
      listener_arns = ["${module.main_container.alb_green_listener_arn}"]
    }
  }
}
one sec
* aws_codedeploy_deployment_group.this: InvalidTargetGroupPairException: Target group pair must have exactly two target groups
resource "aws_codedeploy_deployment_group" "ecs_deployment_group" {
app_name = "${aws_codedeploy_app.bluegreen_ecs.name}"
deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"
deployment_group_name = "${module.codedeploy_label.id}-DeploymentGroup"
service_role_arn = "${aws_iam_role.code_deploy_ecs_role.arn}"
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "STOP_DEPLOYMENT"
wait_time_in_minutes = "${var.wait_time}"
}
terminate_blue_instances_on_deployment_success {
action = "TERMINATE"
termination_wait_time_in_minutes = "${var.wait_time}"
}
}
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
ecs_service {
cluster_name = "${var.cluster_name}"
service_name = "${var.service_name}"
}
load_balancer_info {
target_group_pair_info {
prod_traffic_route {
listener_arns = ["${var.alb_listener_arn}"]
}
target_group {
name = "${var.alb_target_group_default_name}"
}
target_group {
name = "${var.alb_target_group_green_name}"
}
test_traffic_route {
listener_arns = ["${var.alb_green_listener_arn}"]
}
}
}
}
what i thought. cool. i’ll add the second target_group
I'm planning to use terraform-aws-route53-cluster-zone to create Route53 hosted zones, and I have a question regarding the creation of the parent zone. Can I use this module to create a parent zone resource in Route53?
I don’t think this module handles that use-case.
Actually, we don’t have a module for the TLD; I guess since we mostly register the zone via Route53 which creates them for us.
Ah, thanks.
I will write the modules for that then.
2019-07-17
#office-hours starting now! https://zoom.us/s/508587304
=( the azuredevops terraform provider is postponed to next week to finalize the license on the repo
Has anyone here used https://github.com/nozaq/terraform-aws-secure-baseline
Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations. - nozaq/terraform-aws-secure-baseline
Interested to know myself. I've taken elements from his repo
I’ve pulled bits as well and created my own baseline, but never implemented his as is.
2019-07-18
Copying this from general channel!
Hi guys, I am using your ssh-key-gen module for generating an ssh key pair via terraform (https://github.com/cloudposse/terraform-aws-key-pair). I was following the example in the readme, however I keep getting a permission denied error for creating the secrets directory.
Error: mkdir /secrets: permission denied
on .terraform/modules/test.ssh_key_pair/main.tf line 45, in resource "local_file" "public_key_openssh":
45: resource "local_file" "public_key_openssh" {
Error: mkdir /secrets: permission denied
on .terraform/modules/test.ssh_key_pair/main.tf line 52, in resource "local_file" "private_key_pem":
52: resource "local_file" "private_key_pem" {
Can someone help me out and suggest me what needs to be updated?
the user you are running under does not have permissions to folder /secrets
try to change the folder to ./secrets
or change the user
or give it the required permissions
cool, I will try it out. Another dumb question: the user in this case is my machine's root user, which is running the terraform job and seems to have all the permissions. This folder is created locally in the project repo, is that correct?
/secrets is not under the project, it's in the root folder
./secrets is a local folder
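The distinction, in terms of the module's path variable (ssh_public_key_path is assumed from its variables.tf):
ssh_public_key_path = "./secrets" # relative: created inside the project directory
ssh_public_key_path = "/secrets"  # absolute: tries to mkdir at the filesystem root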
great, that worked
Thanks @Andriy Knysh (Cloud Posse). One more question though: how do you get the PEM key when running a terraform job from CI, say something like CircleCI? Push it via artifact to some location where you can get it?
yes, you can write it to S3 or SSM param store or Secret Manager, for example, using terraform
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair
2019-07-19
how do people normally deal with security groups here? the ec2 modules only want a list of them… do you create flat definitions for each purpose, or do you make modules for it?
we usually create a security group per module with all the required configurations (especially for backing services like RDS, Elasticsearch, Redis, etc.)
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
then we specify a list of other security groups and CIDR blocks as ingresses to allow to access the service
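A condensed sketch of that pattern (variable names are illustrative):
variable "allowed_security_groups" {
  type    = list(string)
  default = []
}

resource "aws_security_group" "default" {
  name   = "service"
  vpc_id = var.vpc_id
}

resource "aws_security_group_rule" "ingress_security_groups" {
  # One ingress rule per allowed source security group
  count                    = length(var.allowed_security_groups)
  type                     = "ingress"
  from_port                = var.service_port
  to_port                  = var.service_port
  protocol                 = "tcp"
  source_security_group_id = var.allowed_security_groups[count.index]
  security_group_id        = aws_security_group.default.id
}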
I was looking at this: https://github.com/cloudposse/terraform-aws-ssm-parameter-store/blob/master/example/main.tf and the doc mentions this works great with Chamber, but if a parameter is managed by terraform and then I use chamber to create a new one, the state file will be out of sync. So what's the recommended way to do it? just use chamber from the beginning and not declare anything in terraform?
Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store
we use that in a few ways. For example, when creating an RDS cluster, we can generate a database username and password and write them to SSM using the module
then from a CI/CD pipeline, when we build and start the app, we use chamber
to read the username and password from SSM and provide it as ENVs to the app
we can manually write some secrets to SSM using chamber, but it's not related to TF state
I c ok
I have some configs for apps that are going to be running now on ECS so we were thinking to use chamber in a script to write the config keys to parameter store
but I was confused because in some examples segments of config parameters were created using terraform
I don’t want dev to have to run terraform for config changes
and just to make this more confusing, please correct me if I'm wrong: SSM Parameter Store uses KMS to encrypt the secrets and stores them in AWS Secrets Manager?
or is a Parameter Store SecureString the same as an AWS Secrets Manager encrypted string?
yes, you can specify SecureString and kms_key_id to encrypt
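A minimal sketch with the plain resource:
resource "aws_ssm_parameter" "db_password" {
  name   = "/app/prod/db_password"
  type   = "SecureString"  # encrypted at rest with KMS
  value  = var.db_password
  key_id = var.kms_key_id  # optional; the account's default aws/ssm key is used when omitted
}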
it is soooo confusing
see this example https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair/blob/master/examples/complete/main.tf
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair
I seeee ok…
and have you guys played around with credential rotation for RDS ?
do you mean KMS key rotation?
no the db password rotation
Recently, we launched AWS Secrets Manager, a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and […]
i don’t remember we played with that. maybe @Erik Osterman (Cloud Posse) has more inputs
we are not using secrets manager right now. for RDS I would instead use IAM authentication with automatic rotation.
ohhhh interesting ok I will look into that
and is there an advantage to using Secrets Manager over SSM Parameter Store SecureString + KMS encryption?
they seem to me very similar offerings
As I remember, secret manager is more expensive and does rate limiting
And we started using SSM even before secret manager was introduced
For some applications it’s better if cost is not an issue
secrets manager provides a formal way to use lambdas to rotate secrets according to custom business logic
a lot of work needs to go into defining those rotation strategies.
plus applications need to be updated to use it.
exactly
RDS IAM authentication also requires application code changes and we’ve noticed a lack of examples (e.g. can’t find a single one for ruby)
I miss Vault
haha
yea…
lol
I was wondering how that will play out with terraform
example on how to use type=SecureString and KMS Key ID https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L245
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
I seee ok, make sense
How do you guys deploy ECS tasks/images? using terraform to push a new task def?
I'm looking at: https://github.com/fabfuel/ecs-deploy
but I'm worried about the TF state getting out of sync
and then an apply breaking things
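One common way to keep a deploy tool like ecs-deploy and Terraform from fighting (a sketch, not necessarily the right call here) is to let Terraform create the service but ignore task definition drift:
resource "aws_ecs_service" "default" {
  name            = "app"
  cluster         = var.ecs_cluster_arn
  task_definition = aws_ecs_task_definition.default.arn # only authoritative on first create

  lifecycle {
    # The pipeline registers new task definition revisions; Terraform won't revert them.
    ignore_changes = ["task_definition"]
  }
}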
I was looking for an elastic-beanstalk module for terraform version >~ 0.12
2019-07-21
HashiCorp Terraform 0.12.2 added official support for a Puppet provisioner. One caveat is that the provisioner is only available in 0.12.x of Terraform. The provisioner provides a number of feature…
2019-07-22
planning to develop a GCP DeploymentManager to Terraform tool
Hi @Julio Tain Sueiras! Are you also in the Google Cloud Developers slack workspace?
I don’t think so @Blaise Pabon
There is a lot of good activity there, including Google developer relations staff.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 31, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-07-23
Hi everyone, I’m looking for a good example to set defaults for complex variable types.
variable "dynamic_ordered_cache_behavior" {
  description = "Ordered Cache Behaviors to be used in dynamic block"

  type = list(object({
    path_pattern            = string
    allowed_methods         = list(string)
    cached_methods          = list(string)
    target_origin_id        = string
    compress                = bool
    query_string            = bool
    cookies_forward         = string
    headers                 = list(string)
    query_string_cache_keys = list(string)
    whitelisted_names       = list(string)
    viewer_protocol_policy  = string
    min_ttl                 = number
    default_ttl             = number
    max_ttl                 = number

    lambda_function_associations = list(object({
      event_type   = string
      include_body = bool
      lambda_arn   = string
    }))
  }))

  default = []
}
For example I’d like to set headers to [] by default, has anyone done this before?
Seems this is not possible, at least not as I imagined it with partial maps.
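One 0.12-era workaround (a sketch) is to loosen the type and merge per-item defaults yourself, since typed object attributes can't be optional:
variable "dynamic_ordered_cache_behavior" {
  type    = any # loosened so callers may omit keys like headers
  default = []
}

locals {
  ordered_cache_behaviors = [
    for b in var.dynamic_ordered_cache_behavior :
    merge({ headers = [], query_string_cache_keys = [] }, b) # fill in omitted keys
  ]
}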
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
variable "apply_config_map_aws_auth" {
type = "string"
default = "false"
description = "Whether to generate local files from `kubeconfig` and `config_map_aws_auth` and perform `kubectl apply` to apply the ConfigMap to allow the worker nodes to join the EKS cluster"
}
When wouldn't you want to do this (as in, what's the use case for it being a setting)?
2019-07-24
hi, can I extract data into a locals block?
actually having issues with this:
* module.security_group.aws_security_group_rule.egress_with_cidr_blocks: lookup: lookup() may only be used with flat maps, this map contains elements of type list in:
${split(",", lookup(var.egress_with_cidr_blocks[count.index], "cidr_blocks", join(",", var.egress_cidr_blocks)))}
fixed it
variable "groups" {
description = "Map of groups with members"
type ┆= "map"
default = {
"00_disabled" = ["user1", "user2"]
"01_group1" = ["user2", "user3", "user4", "user5"]
"02_group2" = ["user3", "user4"]
"03_group3 = ["user3", "user6"]
"04_group4" = ["user5", "user6", "user7"]
"05_group5" = ["user5", "user2"]
}
}
resource "aws_iam_group" "groups" {
count = "${length((keys(var.groups))}"
name = "${element(keys(var.groups), count.index)}"
}
resource "aws_iam_user" "users" {
count = "${length(distinct(flatten(values(var.groups))))}"
name = "${element(distinct(flatten(values(var.groups))), count.index)}"
depends_on = [aws_iam_group.groups]
}
any workaround for the index changes when adding or removing users? How do you manage users with terraform? Separate modules per user and group?
https://github.com/hashicorp/terraform/pull/21922 I’d wait for this one to merge, and then work with maps, until then nothing fancy
Allow instances to be created according to a map or a set of strings. locals { little_map = { a = 1 b = 2 } } resource "random_pet" "server" { for_each = little_ma…
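Once that PR lands (it shipped as resource for_each in 0.12.6), keying by name instead of index avoids the reshuffle; a sketch:
resource "aws_iam_user" "users" {
  for_each = toset(distinct(flatten(values(var.groups))))
  name     = each.value # state addresses are keyed by user name, so adds/removes don't reindex
}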
thnx
Does anyone know how to reference a security group correctly for an aws_lb using terraform 0.12? I am building an sg
resource "aws_lb" "gitlab_alb" {
load_balancer_type = "application"
security_groups = aws_security_group.gitlab_alb.id
ip_address_type = "ipv4"
subnets = var.public_subnet_id
and I get following error:
on ../../alb.tf line 3, in resource "aws_lb" "gitlab_alb":
3: security_groups = aws_security_group.gitlab_alb.id
Inappropriate value for attribute "security_groups": set of string required.
use a list: [aws_security_group.gitlab_alb.id]
same for subnets: subnets = [var.public_subnet_id]
Gotcha, I thought they updated this so that it should work without using []
Upgrading to Terraform v0.12
if the input vars are lists, then you don’t need to wrap them in []
aws_security_group.gitlab_alb.id is not a list, it's a string, so you need []
if var.public_subnet_id is a list (though by its name it looks like just a string), then you don't need []
Public #office-hours with cloud posse starting now! https://zoom.us/s/508587304 join if you have any questions or want to listen in.
Does anyone know how to avoid triggering replacement of a launch template when making a change to tags?
What tag do you change? Name?
@s2504s no, not name tag
custom tags used for various purposes
i removed some redundant tags from default map and it now forces a new launch template creation
don’t the tags pass thru to the instances, not really the template? and you can’t modify a template just replace it, right? so i don’t think the ask is possible
but it is also possible that it can create a new version of the launch template
have you tried to set the option
lifecycle {
  ignore_changes = ["tag_specifications"]
}
i did not try that
2019-07-25
Does anyone know a good repository/tutorial to create a php lambda layer with terraform?
hello, I'm using https://github.com/cloudposse/terraform-aws-ecs-alb-service-task and I was trying to do this:
volumes = [
  {
    name      = "pepe"
    host_path = "/mnt/pepe.ramdisk"
  },
  {
    name = "pepe-incrementals"

    docker_volume_configuration {
      scope  = "task"
      driver = "local"
    }
  }
]
but I get
module.ecs_alb_service_task.aws_ecs_task_definition.default: volume.1.docker_volume_configuration: should be a list
I thought that was going to work….
in TF 0.11 (with its loose/weak typing), almost all blocks are lists, although it's not clear from the docs
try
docker_volume_configuration = [
  {
    scope  = "task"
    driver = "local"
  }
]
ohhhhhh
thanks
2019-07-26
Hi guys
I used your terraform-aws-tfstate-backend repo to create a remote backend for my tf project. I went ahead and created two separate repos which deploy using this remote backend (the same s3 bucket and the same dynamodb table, but a different file name). I am getting the following error:
Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
status code: 400, request id: {some-lock-value}
followed by tf basic state lock error:
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Can I not use same s3 bucket and dynamodb table for two different projects? I thought this should be fine because I am using different file names so a different lock will be created for each tfstate file.
yes, you can definitely use the same bucket for multiple projects.
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/project"
    region = "us-east-1"
  }
}
just make sure you vary the key
Looks like that is just the s3 object path
Been a while since I’ve used this
Hi, I'm using https://github.com/cloudposse/terraform-aws-ec2-autoscale-group but I need my instances to have a 500GB root volume and I'm guessing that can't be specified? but in this module, https://github.com/cloudposse/terraform-aws-ec2-instance, it can. can I somehow use both?
@jose.amengual you can attach extra drives with block device mapping, however I don’t think the launch template gives you the option for root disk size: https://www.terraform.io/docs/providers/aws/r/launch_template.html
Unsure off top of my head if that is a TF limitation or AWS one. So your only option if you want autoscaling may be to modify the AMI.
Provides an EC2 launch template resource. Can be used to create instances or auto scaling groups.
the block_device_mappings var: “Specify volumes to attach to the instance besides the volumes specified by the AMI”
but that will attach anything but the root
this is an ECS instance
I will have to change the ECS agent config so docker images are stored in a different volume
root is specified by the AMI
I was hoping I'd have the option, like when you do it in the console, where it lets you pick a bigger root volume
@jose.amengual do you have that option when creating an EC2 instance or launch template?
when creating an Instance
Yea, this is why it's available in the aws-ec2-instance module and not in the launch template/asg one. The only option if using the latter is to modify your AMI to the 500GB volume
I was using a ECS optimized instance
I thought I could just define the size of the root volume
I guess when you use the wizard and change the root volume size it creates a new AMI ID
2019-07-29
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Aug 07, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Hey Folks, trying to find some Terraform modules related to the AWS AppStream service (for creating fleets and stacks), any help appreciated
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
But I don't see any modules or templates here
I mean it's just a discussion going on
Because it's feature requests
meaning it cannot be done
no resources for AppStream in terraform aws provider, it can be done with terraform-provider-appstream https://github.com/ops-guru/terraform-provider-appstream
AWS Appstream2.0 terraform provider. Contribute to ops-guru/terraform-provider-appstream development by creating an account on GitHub.
TYSM
2019-07-30
2019-07-31
Has anyone here referenced remote state output in 0.12 using Terraform Cloud remote state storage? I'm following their reference example at https://www.terraform.io/docs/backends/types/remote.html but it's bailing with Expected an equals sign ("=") to mark the beginning of the attribute value. on the workspaces {} block.
Terraform can store the state and run operations remotely, making it easier to version and work with in a team.
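For what it's worth, in 0.12 the data source's config is an attribute (an object), so workspaces inside it needs an equals sign; a sketch:
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "my-org"

    workspaces = {
      name = "network-prod" # note the "=": inside the config object this is an attribute, not a block
    }
  }
}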
@sweetops how are you liking terraform cloud for state storage? easy?
well, the initial configuration was super easy. But referencing remote state data/output using the docs, appears to be broken.
Is there a separate channel for tf 0.11 -> 0.12 upgrade woes?
haha, just the #terraform-0_12
We have many interlinked modules that use lots of shared state. We aren’t even sure how to get there from here.
s/shared state/remote state lookups/
i would move the shared state lookups to values stored in SSM instead
icky
that would better decouple them from current/future compatibility issues
$ git grep '{data.terraform_remote' |wc -l
1102
yikes
That’s a spicy meatball!
and that’s not the result of code generation?
you’ve been very busy
Yeah.
I feel like Hashicorp let us down by not having remote state compatibility
To provide flexibility when upgrading decomposed environments that use terraform_remote_state, Terraform v0.11.14 introduced support for reading outputs from the Terraform v0.12 state format, so if you upgrade all of your configurations to Terraform v0.11.14 first you can then perform v0.12 upgrades of individual configurations in any order, without breaking terraform_remote_state usage.
Oh, hmm
Maybe we had issues because we didn’t yet have 0.11.14 remote state
maybe a terraform refresh
everywhere will save us
Oh wow, hope that works!
report back…
It works just fine! We apparently were not on 0.11.14 last we tested reading 0.12 remote state.
Hi everyone! I am facing issues while trying to provision each ec2 instance with the below TF connection block:
connection {
  type     = "winrm"
  host     = "${aws_instance.ec2instance.*.private_ip[count.index]}"
  user     = "${var.username}"
  password = "${var.admin_password}"
  timeout  = "${var.timeout_tf}"
}
The issue is with the host = "${aws_instance.ec2instance.*.private_ip[count.index]}" line. I have tried all the modifications below, however I still get the same error message Error: Cycle: aws_instance.ec2instance[1], aws_instance.ec2instance[0]
host = "${aws_instance.ec2instance.*.private_ip}"
host = "${ element(aws_instance.ec2instance.*.private_ip, count.index) }"
host = "${aws_instance.ec2instance.*.private_ip[count.index]}"
Any pointers will be greatly appreciated.
hi andy, in your case, i think this code can be useful, please try it:
variable "count" {
default = 2
}
connection {
count = "${var.count}"
type = "winrm"
host = "${element(aws_instance.ec2instance.*.private_ip, count.index)}"
user = "${var.username}"
password = "${var.admin_password}"
timeout = "${var.timeout_tf}"
}
It turns out you cannot use count = "${var.count}" under a connection block, and it seems I had tried this in #2 of my original post.
Thanks for assisting. I believe there's something wrong with the splat syntax I am using…
Any other pointers ?
i am thinking… for now i don't know what's going on…
if you set ${element(aws_instance.ec2instance.*.private_ip, count.index + 1)}
i don't know whether it will work
NP, let me quickly try that.
could you show all of your resource code?
Yep, just a sec.
tks
resource "aws_instance" "ec2instance" {
count = "${var.instance_count}"
ami = "${var.ami_id}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
subnet_id = "${var.subnet_id}"
vpc_security_group_ids = ["${aws_security_group.ec2instance-sg.id}"]
# iam_instance_profile = "${aws_iam_instance_profile.test_profile.name}"
root_block_device {
delete_on_termination = true
}
ebs_block_device {
device_name = "xvdb"
delete_on_termination = true
}
tags = {
Name = "${var.instance_name}"
Environment = "${var.environment}"
Application = "${var.application_name}"
Role = "${var.instance_role}"
}
connection {
type = "winrm"
# host = "${element(aws_instance.ec2instance.*.private_ip, count.index)}"
# host = "${element(aws_instance.ec2instance.*.private_ip, count.index + 1)}"
# host = "${aws_instance.ec2instance.1.private_ip}"
user = "${var.username}"
password = "${var.admin_password}"
timeout = "${var.timeout_tf}"
}
### Changing the hostname of the instance to prepare for domain join process
provisioner "remote-exec" {
inline = [
"powershell.exe Rename-computer –newname ${var.instance_name}-count.index -Force -Restart"
]
}
}
For testing, it’s one resource and one provisioner
i think i found the issue, wait a sec
if you do not specify the host in the connection block, does it work? because you are inside the loop when you call the remote-exec provisioner
Didn't actually try that.. let me try it quickly.
Error: Missing required argument
on infra.tf line 41, in resource "aws_instance" "ec2instance":
41: connection {
The argument "host" is required, but no definition was found.
sorry man, checking your code, it seems everything is ok, i don't know more…
NP Ruan! Thanks for stepping in to help!
I will keep looking..
okayy, if you find the solution, please let me know, now I’m curious!
For sure!
please, try to do this… last alternative
resource "aws_instance" "ec2instance" {
count = "${var.instance_count}"
ami = "${var.ami_id}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
subnet_id = "${var.subnet_id}"
vpc_security_group_ids = ["${aws_security_group.ec2instance-sg.id}"]
# iam_instance_profile = "${aws_iam_instance_profile.test_profile.name}"
root_block_device {
delete_on_termination = true
}
ebs_block_device {
device_name = "xvdb"
delete_on_termination = true
}
tags = {
Name = "${var.instance_name}"
Environment = "${var.environment}"
Application = "${var.application_name}"
Role = "${var.instance_role}"
}
}
resource "null_resource" "ec2cmd" {
count = "${var.instance_count}"
connection {
type = "winrm"
host = "${element(aws_instance.ec2instance.*.private_ip, count.index)}"
user = "${var.username}"
password = "${var.admin_password}"
timeout = "${var.timeout_tf}"
}
### Changing the hostname of the instance to prepare for domain join process
provisioner "remote-exec" {
inline = [
"powershell.exe Rename-computer –newname ${var.instance_name}-count.index -Force -Restart"
]
}
}
ping @Andy
Sure..
Unfortunately, no luck Error: Cycle: aws_instance.ec2instance[1], aws_instance.ec2instance[0]
Attributes of other resources
The syntax is TYPE.NAME.ATTRIBUTE. For example, ${aws_instance.web.id} will interpolate the ID attribute from the aws_instance resource named web. If the resource has a count attribute set, you can access individual attributes with a zero-based index, such as ${aws_instance.web.0.id}. You can also use the splat syntax to get a list of all the attributes: ${aws_instance.web.*.id}.
This is from the official documentation and that’s what we are trying.
Embedded within strings in Terraform, whether you're using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.
I mean the splat syntax mentioned in the last line.
i got it… but this looks ok… check it: https://www.terraform.io/docs/provisioners/null_resource.html
The null_resource is a resource that allows you to configure provisioners that are not directly associated with a single existing resource.
Yep, will try in a few and update.
Strangely enough, the same connection block seems to work with the null resource but not with the ec2 resource.
yep, stranger things happen here
@Andy did u find some way to resolve the issue?
Hi Ruan, yep, I was able to resolve this issue a few minutes back. The trick was to just use the self attribute and it would automatically loop over the count. Here's the snippet which works:
connection {
  type     = "winrm"
  host     = "${self.private_ip}"
  user     = "${var.username}"
  password = "${var.admin_password}"
  timeout  = "${var.timeout_tf}"
}
owoww!!! nice!!! thanks for sharing!
Additional information:
I have the count set to 2.
From the above point #1, if I put in host = "${aws_instance.ec2instance.1.private_ip}", it does not give me the error message, however it doesn't provision both instances; it just provisions the instance with count.index = 1.