#terraform (2019-01)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-01-02

Got a question regarding the output vars of https://github.com/cloudposse/terraform-aws-elasticache-redis, it seems I am not getting the host out of it.

@Wessel Were DNS records created for the elasticache cluster?

Nope, I didn’t use that.

Didn’t use what exactly?

Well, I didn’t supply a route53 zone_id

I figured that is a prerequisite for the creation of dns records.

Yup


The output comes from that module invocation https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/output.tf#L17

So, I guess it’s considered best practice to supply zone_id for internal lookups?

Or am I understanding this incorrectly?

Am not sure what you mean by your last comment.

If you don’t supply a zone_id, the invocation of https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/main.tf#L123 will be enabled = false, which means that the record is not created and nothing is outputted

Yes, I gather that much

Then what I am looking for is the primary endpoint

But that’s clearly not being outputted.

Right, so in that case you will need to change that module to output https://www.terraform.io/docs/providers/aws/r/elasticache_replication_group.html#primary_endpoint_address

but I’d strongly advise using a DNS CNAME to the cluster by supplying the zone_id

Yes, that’s what I was referring to with my previous comment regarding best practice with a CNAME.

OK, yup in that case

I can see that all modules you guys create allow for a zone_id.

I definitely see the advantage of that.

pointing your apps to use the endpoint directly is less than ideal, as you lose the ability to repoint DNS to flip between clusters

Changes to your cluster will then require some app config changes and a redeploy

Yes, but my knowledge regarding Route 53 might be somewhat lacking in this area: can you restrict who can perform lookups?

Depends what you mean by who, but generally you would restrict access to the thing via IAM or security groups etc, rather than restricting the DNS lookup

Route53 can be an internal zone (only available within a VPC)
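For reference, a minimal sketch of that wiring (zone and variable names here are illustrative, not from the thread):

resource "aws_route53_zone" "internal" {
  name   = "internal.example.com"
  vpc_id = "${var.vpc_id}" # attaching a VPC makes this a private zone, resolvable only inside it
}

module "redis" {
  source          = "git::https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=master"
  namespace       = "eg"
  stage           = "dev"
  name            = "redis"
  zone_id         = "${aws_route53_zone.internal.zone_id}" # enables the DNS record and the host output
  vpc_id          = "${var.vpc_id}"
  subnets         = ["${var.subnet_ids}"]
  security_groups = ["${var.security_group_ids}"]
}

With the zone supplied, apps resolve the cluster through the CNAME, so flipping clusters is just a DNS change.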

Alright, awesome. That’s all I need then, on to utilising Route53 then!

Thanks!

No problem!

Sorry, still have one question regarding clustering with Redis.

Can’t seem to get it to create a redis cluster in Clustered Redis mode

What is the error you are getting?

Well, I’m not getting any errors. I can’t figure out how to indicate I want a Clustered Redis engine.

I can specify multiple nodes, just no multiple shards.

https://www.terraform.io/docs/providers/aws/r/elasticache_replication_group.html

Looks like you may want cluster_mode

yes exactly

seems like creation of a replication group isn’t supported.

PRs welcome

Will take a look at it tomorrow, shouldn’t be too hard.

The only method for optional options is to mirror the defaults?

(thus allowing override for edge cases, such as native redis cluster mode.)
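For anyone landing here later, a hedged sketch of what native cluster mode looks like on the underlying resource (values are illustrative; the module would need to expose these):

resource "aws_elasticache_replication_group" "default" {
  replication_group_id          = "example-redis"
  replication_group_description = "clustered redis example"
  node_type                     = "cache.t2.micro"
  port                          = 6379
  parameter_group_name          = "default.redis4.0.cluster.on" # a cluster-enabled parameter group
  automatic_failover_enabled    = true                          # required when cluster_mode is set

  cluster_mode {
    num_node_groups         = 2 # shards
    replicas_per_node_group = 1 # replicas per shard
  }
}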

Note that those values in the docs are not defaults.


When using terraform-aws-elasticache-redis, when I do not specify a value for "replication_group_id", I get the following Terraform (0.11.11) error: Error: module.example_redis.aws_elasticache_replication_group.default: "replication_group_id" must contain from 1 to 20 alphanumeric characters or hyphens

Note that the example at https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/examples/simple/main.tf does not specify “replication_group_id”

checking

thx

hrmmmmm

ok, so looks like if you don’t specify one, we generate one


what values do you have for namespace, stage and name?

any chance they contain invalid characters?

namespace = "general"
name      = "iam2-redis"
stage     = "${var.environment}" // where the value is 'dev'

hrm… looks good

I commented out the “zone” attribute

maybe this is the issue?

hrmm don’t think it could be related…

any words of wisdom would be appreciated

where did you specify stage = "${var.environment}"

in the module

module "example_redis" {
  source                       = "git::https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=master"
  namespace                    = "general"
  name                         = "iam2-redis"
  stage                        = "${var.environment}"
  //zone_id                    = "${var.route53_zone_id}"
  security_groups              = ["${aws_security_group.rds_security_group.id}"]
  auth_token                   = "${random_string.auth_token.result}"
  vpc_id                       = "${aws_vpc.iam2-persistence-vpc.id}"
  subnets                      = ["${aws_subnet.rds_subnet_1.id}", "${aws_subnet.rds_subnet_2.id}"]
  maintenance_window           = "wed00-wed00"
  cluster_size                 = "2"
  instance_type                = "cache.t2.micro"
  engine_version               = "4.0.10"
  alarm_cpu_threshold_percent  = "75"
  alarm_memory_threshold_bytes = "10000000"
  apply_immediately            = "true"
  availability_zones           = ["${data.aws_availability_zones.available.names[0]}", "${data.aws_availability_zones.available.names[1]}"]
  automatic_failover           = "false"
  //replication_group_id       = "a123"
}

hrm lgtm

any chance you have a space in var.environment?

e.g.

export TF_VAR_environment="dev "

no.

This is from the config file: environment = "dev"

can you try this…. for debugging

you said that if you specify a replication_group_id it works (e.g. a123)

what if you specify: replication_group_id = "general-dev-iam2-redis"

do you get the same error?

yes, when I uncomment the a123 it does work (I did not type ‘yes’ with the apply)

it should work too

a123 is just a random string

replication_group_id = "${var.replication_group_id == "" ? module.label.id : var.replication_group_id}"

we’re setting it to module.label.id

which should just be the concatenation of namespace, stage, name and attributes

(with - delimiter)
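Roughly, a sketch of how that id gets built (the ref is illustrative):

module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace = "general"
  stage     = "dev"
  name      = "iam2-redis"
}

# module.label.id => "general-dev-iam2-redis"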

…maybe something it doesn’t like with this "general-dev-iam2-redis"

the iam2-

I changed it to remove the hyphen. same error

module "example_redis" {
  source                       = "git::https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=master"
  namespace                    = "general"
  name                         = "iam2Redis"
  stage                        = "dev"
  //zone_id                    = "${var.route53_zone_id}"
  security_groups              = ["${aws_security_group.rds_security_group.id}"]
  auth_token                   = "${random_string.auth_token.result}"
  vpc_id                       = "${aws_vpc.iam2-persistence-vpc.id}"
  subnets                      = ["${aws_subnet.rds_subnet_1.id}", "${aws_subnet.rds_subnet_2.id}"]
  maintenance_window           = "wed00-wed00"
  cluster_size                 = "2"
  instance_type                = "cache.t2.micro"
  engine_version               = "4.0.10"
  alarm_cpu_threshold_percent  = "75"
  alarm_memory_threshold_bytes = "10000000"
  apply_immediately            = "true"
  availability_zones           = ["${data.aws_availability_zones.available.names[0]}", "${data.aws_availability_zones.available.names[1]}"]
  automatic_failover           = "false"
  //replication_group_id       = "a123"
}

ohhhg

we ran into this before


did you try this?

TF_LOG=DEBUG

no. I will try

echo -n general-dev-iam2-redis | wc
0 1 22

22 characters long

so that’s the rub

then you’ll get more info about the underlying API response

I should have tried it. I read about TF_LOG=DEBUG
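(For later readers: one possible workaround, untested here, is to truncate the generated id to AWS’s 20-character limit, e.g.:

replication_group_id = "${substr(module.label.id, 0, 20)}"

though a truncated id can end awkwardly, so passing a short explicit replication_group_id is cleaner.)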

@stephen recently contributed the replication_group_id field

maybe he’s seen this error too and can shed some light

as far as you know, is it OK to just use any ID, or does it need to correspond to a pre-provisioned real resource?

as far as I know, it does not need to correspond to anything

cool. thanks
2019-01-03

So I am going to start the year with building a kops terraform module to spin up a kops cluster in an existing vpc from a cluster.yaml template

wonderful - would love to be part of that

hell yes you will!

Novice question. How do I manage dependencies between different state files? Example: I have a state file (A) that has Security Groups, and another state file (B) that has an instance and uses (A) as a data source to pull the security group information from. If I now update (A) with a different Security Group name, it won’t be able to destroy the existing security group (create_before_destroy only works within the same state file) because it’s tied to a resource in (B), and I’ll need to re-run the apply on (B) as well. Is this a typical limitation of breaking things up into different state files, or is there a better way to structure this?

@Igor yes, if you change the interface between the modules (and a SG name is an interface), then you need to re-apply everything. You’ll need to do it in any case, even if you use just one state file. That should rarely be needed, though

@Igor there is no nice way to do it AFAIK - and it is a pita

use naming conventions to uniquely name all the resources (e.g. the terraform-null-label module)

then all the names will stay the same and unique

If you can seed things from the same variables and avoid passing state around, that may help a little

you can just update/change the internals of each module (e.g. add rules to the SG), which should not affect any other module

Terragrunt has the idea of dependencies between state https://github.com/gruntwork-io/terragrunt#dependencies-between-modules

or, you can tag the resources, and then look them up by the tags (e.g. https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks-backing-services-peering/main.tf)
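A minimal sketch of that tag-lookup approach (the tag value is a placeholder):

data "aws_security_group" "shared" {
  tags {
    Name = "general-dev-sg" # whatever naming convention you tag the SG with
  }
}

# then, without sharing any state:
# security_group_id = "${data.aws_security_group.shared.id}"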

but it is less than ideal due to the plan/apply phases of Terraform. Say you have X, Y, Z that are dependent on state from A. Plan-all doesn’t do what you think it will, because A hasn’t actually been applied yet, so the changes to A are not shown in the plan for X, Y, Z

Passing around state is super helpful, but also problematic and should be avoided in favor of things like @Andriy Knysh (Cloud Posse)’s suggestions

It can also make a bootstrap process harder - ordering matters.

Thanks @Andriy Knysh (Cloud Posse) @joshmyers. I guess there is no silver bullet. I am okay keeping track of order of applies, but not sure specifically about the case where something cannot be destroyed because it’s in use by a resource in a different state file.

Unless I can run the apply on (B) while (A) is stuck on its destroy step.. does the state of (A) get updated resource-by-resource?

yea, that’s not fun. try to use diff states as little as possible as @joshmyers pointed out

for example, you can prob put the SG into the same module because they are related

Yeah, makes sense. I need to think about this while planning how I structure things. Thanks for the feedback.

is this just a one-time task you want to do (destroy the SG and create a new one), or is this in your workflow?

I am in testing mode at the moment, thinking of the different scenarios/workflows, and just stumbled on this

if one time task, then just destroy and recreate everything. If it’s a workflow, don’t do it

select unique names for all the resources, and the issue will be reduced

anyone here ever tried to get a local-exec script output to a terraform variable?

https://www.terraform.io/docs/providers/external/data_source.html

data "external" "example" {
program = ["bash", "${path.module}/example.sh"]
query {
cluster_name = "my-cluster"
}
}

token = "${data.external.example.result.token}"

in example.sh:

# Extract cluster name from the query JSON passed on stdin
eval "$(jq -r '@sh "CLUSTER_NAME=\(.cluster_name)"')"
# ... fetch/compute $TOKEN for $CLUSTER_NAME here (elided in the original) ...
# Output token as JSON
jq -n --arg token "$TOKEN" '{"token": $token}'

thanks

https://github.com/cloudposse/terraform-external-module-artifact

have you guys run into this error?

* data.template_file.galaxy_kube_config: Resource 'data.external.token' does not have attribute 'result.token' for variable 'data.external.token.result.token'

did you write the token from the external program to the output? (either as shown here https://github.com/cloudposse/terraform-external-module-artifact/blob/master/main.tf or here https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161#issuecomment-395302615)

doing this fixed it
"${data.external.token.result["token"]}"

as opposed to
"${data.external.galaxy_token.result.token}"

anyone here use the kubernetes provider?

for some reason my kubernetes_cluster_role_binding hangs on my initial terraform apply

but if i ctrl+c and run terraform apply again it works automatically

kubernetes_cluster_role_binding.kube_system_default_role_binding: Still creating... (5m10s elapsed)
kubernetes_cluster_role_binding.kube_system_default_role_binding: Still creating... (5m20s elapsed)
kubernetes_cluster_role_binding.kube_system_default_role_binding: Still creating... (5m30s elapsed)
kubernetes_cluster_role_binding.kube_system_default_role_binding: Still creating... (5m40s elapsed)

second attempt:
kubernetes_cluster_role_binding.kube_system_default_role_binding: Creation complete after 1s (ID: kube-system-default-role-binding)

@btai maybe you can add logging with TF_LOG to get a bit more output. @alex.somesan

yeah ill do that right now

kubernetes_cluster_role_binding.kube_system_default_role_binding: Still creating... (1m50s elapsed)
2019/01/03 15:20:24 [TRACE] dag/walk: vertex "root", waiting for: "provider.external (close)"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "data.external.galaxy_token", waiting for: "helm_release.soxhub_cluster_chart"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "data.template_file.galaxy_kube_config", waiting for: "data.external.galaxy_token"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "output.galaxy_kube_config", waiting for: "data.template_file.galaxy_kube_config"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "provider.helm (close)", waiting for: "helm_release.soxhub_cluster_chart"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "provider.external (close)", waiting for: "data.external.galaxy_token"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "kubernetes_service.load_balancer"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "provider.kubernetes (close)", waiting for: "kubernetes_service.load_balancer"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "kubernetes_service.load_balancer", waiting for: "helm_release.soxhub_cluster_chart"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "provider.template (close)", waiting for: "data.template_file.galaxy_kube_config"
2019/01/03 15:20:28 [TRACE] dag/walk: vertex "helm_release.soxhub_cluster_chart", waiting for: "kubernetes_cluster_role_binding.kube_system_default_role_binding"
2019/01/03 15:20:29 [TRACE] dag/walk: vertex "root", waiting for: "provider.external (close)"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "data.template_file.galaxy_kube_config", waiting for: "data.external.galaxy_token"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "data.external.galaxy_token", waiting for: "helm_release.soxhub_cluster_chart"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "kubernetes_service.load_balancer"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "provider.kubernetes (close)", waiting for: "kubernetes_service.load_balancer"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "provider.external (close)", waiting for: "data.external.galaxy_token"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "output.galaxy_kube_config", waiting for: "data.template_file.galaxy_kube_config"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "provider.helm (close)", waiting for: "helm_release.soxhub_cluster_chart"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "provider.template (close)", waiting for: "data.template_file.galaxy_kube_config"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "kubernetes_service.load_balancer", waiting for: "helm_release.soxhub_cluster_chart"
2019/01/03 15:20:33 [TRACE] dag/walk: vertex "helm_release.soxhub_cluster_chart", waiting for: "kubernetes_cluster_role_binding.kube_system_default_role_binding"
kubernetes_cluster_role_binding.kube_system_default_role_binding: Still creating... (2m0s elapsed)

^ @maarten

does that mean anything to you?

No but maybe to @alex.somesan

@maarten it ended up being an issue I was having with the azure provider, not the kubernetes provider. all i needed to do was update to the newest version of the azure provider which fixed this issue for me
2019-01-04

@Andriy Knysh (Cloud Posse) is using this kubernetes provider too - he’ll be online tomorrow

Good morning team

I have a question

Any idea when the route53 resolver resource will be available?

I saw this pull request https://github.com/terraform-providers/terraform-provider-aws/pull/6574

but I’m not sure

@rbadillo can you provide some additional context?

we use route53 in dozens of modules

I’m talking about the new route53 resolver feature

they just released back in Nov 2018

(or do you mean when the PR will be merged? )

the PR with the new feature

ohhhhhh gotcha! no, unfortunately don’t know anyone in connection to the PR
2019-01-07

Hello! I was just looking at the updates: https://github.com/cloudposse/terraform-root-modules/pull/83/files and I believe https://github.com/cloudposse/terraform-root-modules/pull/83/files#diff-25b6b1e862ed7056e75cc43421b112c0R123 should be SecureString instead of String, but I don’t know if there is some other intention behind not making it a SecureString

BTW, I thought you used chamber to manage the secrets, do you just bootstrap them via terraform?

nice catch @pecigonzalo, that should be SecureString

in the last releases, we added code to write secrets (and other settings) to SSM from all modules via Terraform so they could be used from other modules as needed - simplifies the bootstrap process

Yeah, I noticed that. I think it’s a great idea, and in most cases it’s not sensitive information, but I was wondering about the cases where it is: you also leave that info in the state, which I guess is fine since it’s encrypted in S3 (in most cases). Just wondering if that is your workflow, as we use a similar pattern and also use chamber

but we write secrets only with chamber, to keep them out of TF

yea, it’s not ideal. but as you said, it’s encrypted in S3

and it simplifies the cold-start/bootstrap process a lot

once all the secrets are written to SSM from TF, we also use chamber to read them when we deploy k8s services

Hi

I have a doubt on terraform configuration on azure platform

can we use a condition like count to make the extension argument optional in vmss on the azure platform using a terraform configuration?

#terraform like the example given below

extension {
  count                = "${var.enabled == "true" ?}"
  name                 = "setexecutionpolicy-${local.vm_name_prefix}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell.exe Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"
}
SETTINGS
}

#terraform can you use count to enable or disable the extension depending upon our requirement?

@praveen i think not, count can be used only in resources: count = "${var.enabled == "true" ? 1 : 0}"

for what you want to achieve, you can use the slice pattern (as we call it) to add different values to a setting depending on diff conditions

look here for some explanation https://www.reddit.com/r/Terraform/comments/99zri8/gke_regional_cluster_deployment_defining_zones/

@Andriy Knysh (Cloud Posse) got it. I am trying to create a base module for windows vmss, for which I will have to include an option for enabling IIS. Is there a way I can make enabling IIS optional when required?

if you share some code, we can point you in the right direction

I have azurerm_virtual_machine_extension, which I will have to make optional for the vmss base module. As I understand, I cannot use azurerm_virtual_machine_extension for azure vmss, for which I will have to use the extension argument to configure any vm extensions on vmss. Just wanted to know how I can make this configuration optional (enable/disable)

resource "azurerm_virtual_machine_extension" "iis" {
  count                = "${var.iis_enabled == "true" ? var.av_set_size : 0}"
  name                 = "iis_${format("${var.vm_name}%02d", count.index + 1)}"
  location             = "${var.az_location}"
  resource_group_name  = "${data.azurerm_resource_group.rg.name}"
  virtual_machine_name = "${format("${var.vm_name}%02d", count.index + 1)}"
  depends_on           = ["azurerm_virtual_machine.vm"]
  publisher            = "Microsoft.Compute"
  tags                 = "${local.tags}"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "PowerShell -Command \" Install-WindowsFeature -name web-server,Web-Default-Doc,Web-Http-Errors,Web-Http-Tracing,Web-Static-Content,Web-Http-Redirect,Web-Http-Logging,Web-Stat-Compression,Web-Dyn-Compression,Web-Filtering,Web-Net-Ext45,Web-Asp-Net45,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Mgmt-Console,Web-Scripting-Tools\";PowerShell -Command \" Remove-WindowsFeature -name Web-Dir-Browsing\"; PowerShell -Command \" Start-Sleep -Seconds 60; Get-Website | Remove-Website; Start-Sleep -Seconds 30; Get-IISAppPool | Remove-WebAppPool; Set-WebConfigurationProperty -Filter '/system.applicationHost/sites/siteDefaults/limits' -PSPath IIS: -Name connectionTimeout -Value (New-TimeSpan -sec 30); Set-WebConfigurationProperty -Filter '/system.applicationHost/applicationPools/applicationPoolDefaults' -PSPath IIS: -Name startMode -Value 'AlwaysRunning'; Set-WebConfigurationProperty -Filter '/system.applicationHost/applicationPools/applicationPoolDefaults/processModel' -PSPath IIS: -Name idleTimeout -Value '0000'\"; exit 0"
}
SETTINGS
}

here is my extension, which I use in the windows vm configuration, made optional using count

How about having an extension that has an empty commandToExecute?

and how would I inject the command to execute while sourcing the module

do you have any example/reference for it

define a local commandToExecute as ${var.enabled ? 'PowerShell -Command...' : 'echo DoNothing'}. Then use settings = "${local.commandToExecute}"
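A sketch of that idea, assuming a var.enabled flag and placeholder commands:

locals {
  command_to_execute = "${var.enabled == "true" ? "powershell.exe Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force" : "echo DoNothing"}"
}

# then inside the extension block:
settings = <<SETTINGS
{
  "commandToExecute": "${local.command_to_execute}"
}
SETTINGS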

May I know if you have an example or reference of this scenario

I’m just thinking of a possible workaround. I don’t have any examples.

Looks like the slice pattern example that @Andriy Knysh (Cloud Posse) provided is a similar idea

@praveen do you want to enable/disable the whole extension, or just change the command inside settings?

whole extension

Except as a block within https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_set.html, not as azurerm_virtual_machine_extension, if I understood correctly

in what code do you want to do it? (the second snippet above has it already: resource "azurerm_virtual_machine_extension" "iis")

Here’s something similar as an example: https://github.com/devops-workflow/terraform-aws-autoscaling/blob/master/main.tf Look at launch_configuration. It is defined inside the module, but allow it to be passed in instead.

yes this is a common pattern - if it’s not provided in the variables, then create a new one

so this is not a problem. I thought @praveen wanted to add different config to extension { in the first code snippet depending on some condition

@Andriy Knysh (Cloud Posse) “azurerm_virtual_machine_extension” is a resource which I used for creating a standalone virtual machine. I understand that we cannot use “azurerm_virtual_machine_extension” resource for vmss . for which we may have to use argument as extension to configure any vm extensions on vmss

as I will have to create a generic module for windows vmss, I am not finding a way to make enabling IIS optional for the windows vmss module

share the code where you use this block

extension {
  count                = "${var.enabled == "true" ?}"
  name                 = "setexecutionpolicy-${local.vm_name_prefix}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell.exe Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"
}
SETTINGS
}

can you provide me your email address so that I can share the code with you

@praveen DM me

@Andriy Knysh (Cloud Posse) DMed you the code

if not via extension, is there any other approach to optionally enable IIS while sourcing the windows vmss module?

i’ll take a look @praveen

sure


can you give me example or snippet of slice pattern

It’s in the Reddit post above @praveen

Give me a few minutes, I’ll adapt it to your example

sure @Andriy Knysh (Cloud Posse)

checking Reddit post @Igor

@praveen here is a simplified version of using the slice pattern adjusted for your code

locals {
  extensions = [
    {
      name = "StaplesBGInfo"
    },
    {
      name = "setexecutionpolicy-${local.vm_name_prefix}"
    },
    {
      name = "IIS-${local.vm_name_prefix}"
    }
  ]

  # https://www.terraform.io/docs/configuration/interpolation.html#slice-list-from-to-
  to_index         = "${var.iss_extention_enabled == "true" ? 3 : 2}"
  extensions_final = "${slice(local.extensions, 0, local.to_index)}"
}

resource "azurerm_virtual_machine_scale_set" "vmss" {
  extension = ["${local.extensions_final}"]
}

if var.iss_extention_enabled == "true"
, then extensions_final
will contain all three extensions

if var.iss_extention_enabled == "false"
, then extensions_final
will contain the first two extensions

I didn’t understand this line:

# https://www.terraform.io/docs/configuration/interpolation.html#slice-list-from-to-
to_index         = "${var.iss_extention_enabled == "true" ? 3 : 2}"
extensions_final = "${slice(local.extensions, 0, local.to_index)}"

it’s a comment

Is it possible to update the code which I sent you so that I can better understand

by making only IIS optional

should my locals look like this?

locals {
  count          = "${var.create == "0" ? 0 : 1}"
  asgcount       = "${length(var.asg_ids) > 0 ? 1 : 0}"
  vm_name_prefix = "${replace(var.resource_prefix, "/[_]/", "-")}"

  extensions = [
    {
      name = "StaplesBGInfo"
    },
    {
      name = "setexecutionpolicy-${local.vm_name_prefix}"
    },
    {
      name = "IIS-${local.vm_name_prefix}"
    }
  ]

  to_index         = "${var.iss_extention_enabled == "true" ? 3 : 2}"
  extensions_final = "${slice(local.extensions, 0, local.to_index)}"
}

Hi folks. I have a question about cloudposse/terraform-aws-cloudfront-s3-cdn. Does anyone have experience with that?

what’s the question @Rob?

Hi @Andriy Knysh (Cloud Posse), I was successful in creating an S3 bucket and a CloudFront distro using this module. However, I need to set some attributes, such as making the bucket private and restricting the CF distro to only accept signed URLs. I cannot find anywhere how to do that.

And my Google searches are not helping

@Andriy Knysh (Cloud Posse) terraform blocks are just list variables? that’s so useful… and I can’t believe I didn’t pick up on that earlier.


For resources that can be either created within parent blocks or as separate resources, is there one method that’s better than the other? For example ingress egress rules on aws_security_group

Maybe the aws_security_group example isn’t a good one, as it looks like the two methods cannot be used together. Not sure if that holds true for other similar examples.

we did both, but ended up creating separate resources (not inline) especially for SGs

for a few reasons

- They are managed separately and could be changed separately

- They could be provided from outside of the module, e.g. add an additional rule to the SG - TF will not try to delete/update the SG

Thanks, that makes sense

- They could be managed separately with a count (e.g. enable/disable) - useful in many cases, and not possible with inline blocks - this is probably the most important reason to use separate resources
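A minimal sketch of that count toggle on a separate rule resource (names are placeholders):

resource "aws_security_group_rule" "https_ingress" {
  count             = "${var.enabled == "true" ? 1 : 0}" # turn the rule on/off without touching the SG itself
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.default.id}"
}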

on the other hand, a reason to use inline rules is that they are enforced exclusively, meaning if someone goes and adds a new rule to the security group outside of terraform, the inline rule config will detect that and create a diff to force it back to exactly what is in the tf config…


@praveen sent you example for your code

(you’d think tags would have given it away, ha)

can’t help you with Azure questions since I’m not familiar with it, so you have to take a look at the code and figure out the remaining issues (if any) by yourself

sorry @Rob, too much traffic here

no worries

@Andriy Knysh (Cloud Posse) Thank you very much. Am going to test it now and let you know

@Andriy Knysh (Cloud Posse), I have to get lunch so I’ll ping you later when I get back. Thanks for the offer of help.

@Rob the bucket is already private https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L56 (if that was your question)

signed URLs are currently not supported by the module. I guess this needs to be added https://www.terraform.io/docs/providers/aws/r/cloudfront_distribution.html#trusted_signers
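(A hedged sketch of what that addition might look like in the module - the variable name mirrors the resource argument, and the wiring is an assumption:

variable "trusted_signers" {
  type        = "list"
  default     = []
  description = "AWS account IDs allowed to create signed URLs; 'self' is acceptable"
}

# and inside the distribution's default_cache_behavior block:
#   trusted_signers = ["${var.trusted_signers}"]
)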

@Rob if you test it and open a PR, we’ll review promptly

thanks @Andriy Knysh (Cloud Posse), I’ll give it try shortly.

@Andriy Knysh (Cloud Posse) I just checked the bucket that was created and the 4 permissions, including ACLs, are marked as False. Going to delete the CF distro and bucket and try again. Will let you know. Note this is in the Permissions tab and the Public access settings button.

or are you suggesting that I modify the code and submit a PR? Maybe I didn’t understand.

@Rob yes, for trusted_signers, it would be great if you could test it and open a PR

will do. thx

@Andriy Knysh (Cloud Posse), waiting for the terraform destroy to complete. The CF distros take a while to complete. I’ll get it done asap and if it works, submit a PR. Thanks for your help.

Yea, CF is painfully slow.

especially when having used things like CloudFlare or Fastly

yeah! it finished!

do I need to create a fork in order to create a PR?

yes, fork it, then create a new branch, add your code, open a PR

The code change worked and the distro was created with a trusted_signer of “self”

thx

Hi there! I am new to the community, is this the correct place to ask questions about the terraform modules?

yep, mostly about the cloudposse/terraform-* modules

ok!

we also have a #terraform-aws-modules channel for https://github.com/terraform-aws-modules

Maybe you have a quick insight for me.

I am trying to implement https://github.com/cloudposse/terraform-aws-elasticache-redis

And I keep getting:
Error: module.redis.aws_elasticache_replication_group.default: "replication_group_id" must contain from 1 to 20 alphanumeric characters or hyphens

I have tried manually applying a replication_group_id but it still throws

hrm….

can you share the group id you manually set?

replication_group_id = "grizzly-redis-staging"

too long

OO man

echo -n grizzly-redis-staging | wc
0 1 21

i guess reading the error would help

wow sorry

(i know, it doesn’t look that long!!!)

honestly, this just came up for someone else last week

so that’s why it was fresh in my memory

Thank you so much!!

echo -n general-dev-iam2-redis | wc
0 1 22

Awesome thank you so much!

np! that’s what we’re here for…

and thanks so much for all of the great modules!!

thanks! means a lot to hear it…

The amount of time saved from these modules is unreal, it is very much appreciated!!!

@Andriy Knysh (Cloud Posse) I created a PR.

thanks again for your help

btw, if you haven’t already, please give us a ★ on any of our github projects that you’re using It helps us a lot!

starred everyone I have used

thanks so much!!

will do. great stuff guys. Keep up the good work. Saves us a lot of time.

thanks @Rob!

@Rob can you add 'self' is acceptable. to the variable description and then rebuild the README by executing these commands:

make init
make readme/deps
make readme

(we don’t directly modify README.md - we modify README.yaml, variables.tf and outputs.tf - then the README generator builds README.md)

sure thing

though I didn’t see anything in the yaml file to update. Am i missing something?

done and pushed

thanks @Rob

if the list is empty, does it still work?

I guess I should test that. oops. shame on me Will test and notify you when complete

thanks

@Andriy Knysh (Cloud Posse), yes, it works if you don’t specify trusted_signers and it uses the default, which is []


ah, I didn’t mean that as a slight against CP.

oh haha emoji fail

I just mean that terraform breaks my heart sometimes

for me, the biggest gripe is “value of count cannot be computed”

hahaha. that is annoying

nothing is perfect

yea…

has anyone here taken a serious stab at using pulumi?

The way I get it, Pulumi is to Terraform what SASS is to CSS.

interesting concept. First I’ve heard of it. I like the purple website lol

https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn

thanks @Andriy Knysh (Cloud Posse)
2019-01-08

Is anyone using the https://github.com/cloudposse/terraform-aws-iam-user module in combination with the https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder?

what is the currently supported method for creating IAM users?

@Jan Don’t think I follow, supported method?

how do I add IAM users

that have keybase setup already

been a long two days and im tired so im probably missing something

@Jan here is our latest doc on this https://github.com/cloudposse/reference-architectures/tree/master/templates/conf/users

brilliant, thank you!

(Note that directory contains templates)

There is also https://github.com/cloudposse/terraform-null-smtp-mail

https://github.com/cloudposse/root.cloudposse.co

anyone aware of or use any tools to allow someone to require any version of a module as long as it’s not a major breaking change for terraform? for example in node i can require a version “^1.0.0” which will give me any 1.x.x version of a package but not 2.0.0

oh that would be nice

i think it might be hard to do because source strings can’t be variables and i’ve seen on their issues this will most likely never change

one thing we were thinking of was using tags similar to 1.x.x and 1.4.x in github and move them with each release

ohhhhhhhhh

that’s a clever hack.

I don’t think I personally want to maintain it, but I like the thinking.
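(A sketch of how consuming such a moving tag might look - the 1.x.x tag itself is hypothetical:

module "vpc" {
  # maintainers would re-point "tags/1.x.x" at each compatible release
  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/1.x.x"
}
)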

i’ve been trying to motivate myself to write an app that solves the terraform module versioning issue - unfortunately, i’m not working with terraform much at the moment at the day job so it’s making it hard – tl;dr i’ve done a ton of searching and don’t know of any existing tool or workflow for this

also want terraform landscape rewritten in go =P so I get a single binary

I haven’t seen any tools for that.

The other tool I want is one for vendoring modules.

and it would rewrite all source definitions to something like: ./vendor/github.com/cloudposse/terraform-aws-vpc

(like go)
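(i.e., a hypothetical vendored layout would look something like:

module "vpc" {
  source = "./vendor/github.com/cloudposse/terraform-aws-vpc" # committed alongside the root module, GOPATH-style
}
)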

doesn’t it do something similar now with just very ugly names? by downloading all modules in .terraform/modules/sha

yea, more or less.

But I would never commit .terraform/modules to git

(because it’s so ugly) haha

ugh. that’s a good point though. terraform get -backend=false is really a lot like go get

with terraform being written in go
and their familiarity with go
, I wonder why they chose an entirely different pattern from go, java, node, etc

Hi guys! I’ve just published an automated process to publish searchable PDFs for terraform core and all official providers, so we can work while flying and offline. Here we go - https://github.com/antonbabenko/terraform-docs-as-pdf . Tooling may contain bugs, so please let me know by opening an issue, or submit PRs.


I’d like to do this eventually for cloudposse/docs

it’s a nice readable format

I can now update my linkedin profile and add ghostscript and wkhtmltopdf. Both are terribly slow and demand a lot of resources to run.




2019-01-09

Hi guys, I’m new here. First of all I want to say ‘thanks’ for such a great collection of templates that you share. I’m building a stack for couple of small backend apps written in Django. My plan is to use this: https://github.com/cloudposse/terraform-aws-ecs-web-app but I want to reduce costs by not using public/private subnets and NAT gateways. Here are the questions:
- Is it possible to use this template with only 1 public subnet per app (I know that it’s not recommended, but in this case costs are much more important than security)?
- What other resources should I define? Can you provide an example on how to use this template?

hey @Lukasz German welcome

you deploy ecs-web-app into your own VPC (see https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L69)

and into your own subnets https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L71

it says private_subnet_ids, but it’s just a name, you can definitely provide just one public subnet ID, for example:

private_subnet_ids = ["subnet-XXXXXX"]

to create a VPC, you can use https://github.com/cloudposse/terraform-aws-vpc

to create subnets, you can use any of our modules https://github.com/cloudposse?utf8=%E2%9C%93&q=-subnets&type=&language=

here is an example on how to use vpc and subnets: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L36

(it’s for EKS, but will be similar for ECS)

you can set this to false to disable NAT gateways and save on cost: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L58

to create just one subnet, specify just one AZ here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L48
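Pulling those pieces together, a hedged sketch of a single-subnet, no-NAT setup (module inputs shown are from memory and worth double-checking against the repo):

module "subnets" {
  source              = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  namespace           = "eg"
  stage               = "dev"
  name                = "app"
  vpc_id              = "${module.vpc.vpc_id}"
  igw_id              = "${module.vpc.igw_id}"
  cidr_block          = "${module.vpc.vpc_cidr_block}"
  availability_zones  = ["us-east-1a"] # one AZ => fewer subnets
  nat_gateway_enabled = "false"        # the cost saver
}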

Thanks @Andriy Knysh (Cloud Posse), that was very helpful. I was able to prepare some config and most of the things work, but I cannot enable CodePipeline. I think it might be related to the fact that the ECS service is not created by default-backend-web-app. Moreover, when I try to execute terraform apply, this error occurs: aws_security_group.ecs_service: Error creating Security Group: InvalidGroup.Duplicate: The security group 'test-prod-template' already exists for VPC 'vpc-0e35f5dde1e0df651' status code: 400, request id: 02565b24-ffd0-4f31-91d9-cdac82ba072e

Here is the configuration that I’m trying to apply.


Subnets are free so you can just create them in each AZ and use as needed

for ECS in terraform-root-modules

(will be ready by EOW)

NAT gateways are not free

Looking at your .travis.yml files, how does travis know what to do for these steps? I am unable to find where these steps’ processes are defined: make terraform/install, make terraform/get-plugins, etc.

@midacts it’s in the Makefile: https://github.com/cloudposse/terraform-aws-vpc/blob/master/Makefile

which uses build-harness, which has all of that stuff

https://github.com/cloudposse/build-harness

I see. that makes more sense. Thanks for the information.

something for you guys to consider for the null label: two tags that i think are useful, which i added to our implementation, are last_updated_by and repo, so we can see who the last person to apply terraform was and which repo the code lives in that maintains a resource

that’s nice!

I’ve also seen where people add a commit sha

(or could be a module version)
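A sketch of threading such tags through (values here are hypothetical):

module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace = "eg"
  stage     = "dev"
  name      = "app"

  tags = {
    last_updated_by = "jane.doe" # e.g. injected from an env var at apply time
    repo            = "github.com/example/infrastructure"
  }
}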

The only downside with this is that you will always have terraform changes this way, which can get quite noisy
2019-01-10


I have a tfstate in an existing AWS S3 bucket created manually, and now I want to implement a module to create an S3 bucket and a DynamoDB table to lock state…

the question is: how can I migrate the tfstate from the original bucket to the new bucket? Or how can I apply changes to the original manually-created bucket (to enable versioning and encryption, for example) without losing the tfstate (because I have a production environment running)?

I’m trying this using terraform import, but I can’t get good results…

@javier if you remove the remote state directive in your terraform file, and run terraform init, your state should be available locally again.

then you can add the directive with your new bucket configured and run init again and it should be on there. I’m not sure if it’s possible in one step, but I’m sure 2 steps will work.

and there is a module for that https://github.com/cloudposse/terraform-aws-tfstate-backend

@maarten If I remove the backend directive, won’t I lose the actual state? Isn’t it necessary to copy the state from the bucket to my local machine?

you should always make a backup of your state just in case, but if you remove the remote state directive the state should be available again as local terraform.tfstate

perfect! Thanks @maarten @Andriy Knysh (Cloud Posse) I will try the module and removing the backend directive

i think you do terraform init on the S3 state first, then comment out the remote state, then do terraform init again - TF should offer you to import the state

exactly
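A sketch of the end state, assuming an S3 backend (all names are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-new-tfstate-bucket"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock" # lock table
    encrypt        = "true"
  }
}

# step 1: comment this block out, run `terraform init` -> state comes back local
# step 2: restore it pointing at the new bucket, run `terraform init` again -> TF offers to copy the state up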

I’m trying to import the actual S3 bucket into my tfstate using ‘terraform import’ and apply the changes if the state for the bucket was different from the config

but I can’t get good results… so I will try the module and removing the backend directive

thanks again

take a look here, maybe will help https://docs.cloudposse.com/reference-architectures/cold-start/#provision-tfstate-backend-project-for-root (it’s tricky to provision the remote state and store the state in the remote state when you don’t have the remote state yet )

https://github.com/cloudposse/terraform-root-modules

Good! Thanks for your help

You can use a local-exec provisioner to execute sed, which will uncomment the desired remote state section in the tf file after a null_resource is executed for the first time, which does nothing…
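(A hedged sketch of that trick - the file name and sed expression are hypothetical:

resource "null_resource" "enable_backend" {
  provisioner "local-exec" {
    # strip the leading comment markers from a fully commented-out backend.tf
    command = "sed -i.bak -e 's|^#||' backend.tf"
  }
}
)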

That’s an interesting idea

self mutilating code

Yes, I did it a couple of times, mostly for fun. I also tried to solve limitations of 0.11 when trying to emulate dynamic/null-able arguments in resources.

I think this could be where something like pulumi could shine

Maybe, but I am still not convinced to give it another try. I don’t have problems which can’t be solved in Terraform. Some of the solutions are not very nice, but still…

we’re all betting on the terraform 0.12 “hail mary”

ohh yeah. Maybe finally I will be able to make AWS S3 module which is as flexible as my ec2-security-group one.

hehe, @Andriy Knysh (Cloud Posse) I see you are doing similar with just sed in the docs.

yea, we used sed, but now have scripts that do all the steps automatically https://github.com/cloudposse/terraform-root-modules/tree/master/aws/tfstate-backend/scripts

i’m having trouble using terraform to upgrade the engine version of an RDS instance that has a read replica

a straight-up terraform apply fails because the read replica needs to be updated before the primary can be updated:

[staging.gladly.qa] backing-services> terraform apply
...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ module.database.aws_db_instance.default
engine_version: "9.6.8" => "9.6.9"
~ module.database.aws_db_instance.read_replica
engine_version: "9.6.8" => "9.6.9"
Plan: 0 to add, 2 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.database.aws_db_instance.default: Modifying... (ID: staging)
engine_version: "9.6.8" => "9.6.9"
Error: Error applying plan:
1 error(s) occurred:
* module.database.aws_db_instance.default: 1 error(s) occurred:
* aws_db_instance.default: Error modifying DB Instance staging: DBUpgradeDependencyFailure: One or more of the DB Instance's read replicas need to be upgraded: staging-read-replica-1
status code: 400, request id: f18110e4-1b27-4d1e-bb07-3e9591d1ddbf

some googling indicated that i should target the read replica and upgrade it first before updating the primary, but when i try targeting just the read replica, it picks up the primary too:

[staging.gladly.qa] backing-services> terraform apply -target=module.database.aws_db_instance.read_replica
...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ module.database.aws_db_instance.default
engine_version: "9.6.8" => "9.6.9"
~ module.database.aws_db_instance.read_replica
engine_version: "9.6.8" => "9.6.9"
Plan: 0 to add, 2 to change, 0 to destroy.

anyone know how to get around this?

can use terraform plan/apply -target=...

the only thing that I can think of is doing it manually via click ops. And then applying. I haven’t had problems like this before so maybe it is something in the modules that is causing this ?

thanks, @Andriy Knysh (Cloud Posse), but when i try to -target the replica, it picks up the primary too

thanks, @Nikola Velkovski i’m probably going to end up doing click ops for this

also -target sometimes decides which other resources are chained together

I mean it’s not -target per se but amazon’s api

also, i like the term “click ops”… have never encountered it before, TIL!

ClickOps is a good addition to DevOps

maybe try this https://www.terraform.io/docs/commands/taint.html

well, my life would have been better without knowing it

it looks like taint will result in the replica db being destroyed and recreated which frightens me

yea

but it’s just a replica, isn’t it?

chaosops FTW

it’s a read replica that our application actually uses for some read-only operations unfortunately

why not update both as the last TF plan shows?

updating both fails with “Error modifying DB Instance staging: DBUpgradeDependencyFailure: One or more of the DB Instance’s read replicas need to be upgraded:…”

ah i see sorry


Hot off the press: https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair/
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair

(this is useful for bot accounts)

chamber read -q atlantis atlantis_ssh_key > /dev/shm/id_rsa

easily rotate SSH secrets by just tainting the resource

what are the options for dealing with “default” resources, like a default route table and default network ACL?

my understanding is that the aws_default_route_table resource makes it explicit that the resource can’t be destroyed, but otherwise using the regular aws_route_table would be fine

is there some trap I’m setting up for myself if I take the latter option?
2019-01-11

our best practice is never to use any of the default resources in an account

are you piggybacking on an existing infrastructure or starting fresh?

Adopting an existing infrastructure

aha ..

well, one option is to define the resources and use terraform import

just make sure the settings match

run terraform plan until they sync up perfectly. then you have a baseline to affect change.

except terraform import is lossy

eg - you can import EC2 instances, but it might not notice or complain if you don’t set all the parameters (like termination_protection).. So terraform import is good, but not perfect!

this is true - i treat terraform import in the current state as “get me started” and then follow up with more changes it missed (what i think @Erik Osterman (Cloud Posse) was also eluding to above)

maybe he was alluding @sarkis? haha!

that too

that’s what we’re doing, just running into this ambiguity with aws_default_route_table and its ilk. I can import it with aws_route_table and get the plans to match up, just wondering if that will come back to bite me one day

I guess it’s probably not a big deal, nothing is permanent after all
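For what it’s worth, a sketch of the explicit-adoption alternative (resource names are placeholders):

resource "aws_default_route_table" "default" {
  default_route_table_id = "${aws_vpc.main.default_route_table_id}" # adopts the VPC's existing default table

  tags {
    Name = "default"
  }
}

# Unlike a plain aws_route_table, destroying this resource only removes it from state;
# the underlying default table survives, which is the "can't be destroyed" semantics above.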

https://github.com/chanzuckerberg/terraform-provider-bless

This looks amazing

Keep secrets out of terraform state store

Oh I missed the gist of this but still cool

This is related to Netflix bless.
2019-01-12

was added to this conversation

I’ve added Foqal as a test to see how it goes over the next few weeks.

It’s only in this channel right now.



If you have any feedback (Good/bad), do let me know.

or ping @vlad

@vlad has joined the channel

yup im here

hi @vlad

hey

@Andriy Knysh (Cloud Posse)
2019-01-13

@Erik Osterman (Cloud Posse) I have a question, it may seem obvious, but why Terraform over CloudFormation if you’re planning to use AWS to host your infrastructure and services? Let me give some context: our CTO has mandated that the DevOps team design, deploy and manage the SA using CloudFormation; the DevOps team is in opposition of this in favor of Terraform. We have to justify that decision.

I have A LOT of thoughts on this

but I can’t discuss right now. can you hit me up again tomorrow.

@dalekurt I’ve had 2 main reasons: 1) Agnostic. You’ll say that you’re only going to use AWS so that doesn’t matter. But that is not reality. In addition to AWS you’ll use a notification system, version control, monitoring, etc.. By having a single tool that can manage many of these (can be extended for the ones it doesn’t), you greatly reduce the amount of tooling and learning required. I’ve used Terraform for Pagerduty, Datadog, InfluxDB, Grafana, Github, and have plans to use it for other services. 2) Neither Terraform or CloudFormation supports all of AWS or new features as they are released. But I’ve found that generally Terraform supports features sooner. Anyone can add support for a feature or request one. And if CloudFormation supports something that Terraform doesn’t, Terraform can run CloudFormation

TL;DR: cloudformation is one piece of the puzzle. You still need to automate everything else.


I’m on the right track.

Terraform is extensible


Even if you only plan to use a single cloud provider, there are loads of options for other services

And modular

Cf has also only just added config drift management

I have converted every Cf org I worked with to terraform

Always with lengthy deep dives into the pros and cons

@Jan Did you do that manually, or with the import feature?

I have used import often but almost always write the tf from scratch

Whilst building the provisioning layer you have the best perspective to find flaws

With production systems it’s often a case of import and then run plan

If you see a planned change, you have either missed something, or what you are meant to have has diverged from the CF code base

Nice!

@dalekurt https://www.reddit.com/r/Terraform/comments/af4lsb/why_terraform_and_not_just_shell_scripts/edvflu9 This was written for Terraform opposed to shell, but also fully captures why Terraform is such a good tool.

Ever wish you could initialize terraform state backends using environment variables? (and without #terragrunt or other wrappers)

Now you can! https://github.com/cloudposse/tfenv

Additional context available here: https://github.com/hashicorp/terraform/issues/19300#issuecomment-453859797

export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=my-bucket
source <(tfenv)
terraform init

this maps to TF_CLI_ARGS_init=-backend-config=my-bucket. The TF_CLI_ARGS_* variables are natively supported by terraform.

TF_CLI_PLAN_REFRESH=true maps to TF_CLI_ARGS_plan=-refresh=true


Combining tfenv with direnv and a task runner (e.g. make) gives ultimate flexibility without lock-in to one particular tool.
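<br></br>
As a rough sketch of that combination (bucket name is made up, and this assumes tfenv maps the variable exactly as shown above), a per-project .envrc picked up by direnv could export the backend settings:

# .envrc - loaded automatically by direnv when you cd into the project
export TF_CLI_INIT_BACKEND_CONFIG_BUCKET=my-bucket

# then, in your shell or a make target:
source <(tfenv)
terraform init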

2019-01-14

Hi Everyone, I am trying to fix a "value of 'count' cannot be computed" but I cannot pinpoint the problem. When the module is applied together with a new ALB, I get a "'count' cannot be computed" which doesn't make sense to me, as the count condition itself is not dependent on the alb listener resource.
Nothing within the count condition is the result of another resource. It is however a parameter coming from a merged map, but that should actually just work. @Andriy Knysh (Cloud Posse)
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

I think I know the culprit. I populate a map with values dependent on other resources; in the same map are boolean values which are used in create conditionals. This map is merged with default values. I think the merge doesn't happen before all values are populated, and voila.
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

Yes probably the dependency on the other resource and the merge are the culprit

click

we have it constantly

weird usually this happens with locals

and all I see is vars

@maarten counts with maps are not handled correctly by TF https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/

in your module, you provide a lookup map of all possible record types and then provide the desired record type in var.route53_record_type

a quick fix (not ideal, but should work) would be

count = "${var.create && var.load_balancing_type == "application" && ! var.redirect_http_to_https && var.route53_record_type != "NONE" ? 1 : 0 }"

it will skip the test for allowed types, but you do it in many other places anyway

What module should I be using to deploy an atlantis setup

I am using the geodesic ref arch if that makes a difference

@Jan That is being worked on at the moment

Geodesic module for managing Atlantis with ECS Fargate - cloudposse/geodesic-aws-atlantis

yea, this will be archived
Geodesic module for managing Atlantis with ECS Fargate - cloudposse/geodesic-aws-atlantis

we’re redoing it. @joshmyers is putting the finishing touches.

awesomeness

Will hold off for now

ha, thanks @foqal

Helpful question stored to <@Foqal> by @Andriy Knysh (Cloud Posse):
Hi Everyone, I am trying to fix a "value of 'count' cannot be computed" but I cannot pintpoint the problem...


https://github.com/cloudposse/geodesic-aws-atlantis is going to be archived to be replaced with https://github.com/cloudposse/terraform-root-modules/pull/91
what Add ECS cluster with Atlantis Task why Terraform CI/CD

but still ongoing work

If you have bigger fish to fry for now, I’d leave the Atlantis stuff until it properly lands (hopefully a few days)

I wanted to get the k8s stuff out

but waiting on our parent zone to get delegated

Hi, just checking if we have a module for creating an app service environment using terraform on the Azure platform

We (cloudposse) have not built out any modules for Azure

Hi, Is there any opportunity to create app service environment in Azure platform using terraform?

Manages an App Service (within an App Service Plan).

@Andriy Knysh (Cloud Posse) are app service plan & app service environment the same thing?

anyone know what is required to get provisioner "local-exec" to output to stdout while doing the action?

I saw some resources do it, but I have a python script that is not showing anything

@praveen i’m not familiar with Azure, but looks like they are diff resources https://www.terraform.io/docs/providers/azurerm/r/app_service.html https://www.terraform.io/docs/providers/azurerm/r/app_service_plan.html
Manages an App Service (within an App Service Plan).
Manages an App Service Plan component.

@pecigonzalo maybe this will help https://sweetops.slack.com/archives/CB6GHNLG0/p1546543043108000
anyone here ever tried to get a local-exec script output to a terraform variable?

Hey thanks! but not exactly what im looking for

I'm trying to make it so my script in a null resource outputs its output to stdout during the run

so we can see the progress

We dont need to capture it

we are toying with the idea of an asg-roll module which basically has a script to roll the instances of an ASG group

but we want to see the progress

does | tee /dev/stdout work?

instead of getting it all at the end

the "when" of terraform showing output seems pretty random, as sometimes it does so only after it finishes

sometimes it does it after a minute or so ¯\_(ツ)_/¯

or | tee /dev/stderr

I'll try that, seems hacky, but hacky that works is always awesome
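<br></br>
A minimal sketch of the tee suggestion above (the roll_asg.py script is hypothetical), duplicating the output to stderr:

resource "null_resource" "asg_roll" {
  provisioner "local-exec" {
    # python -u disables output buffering so progress lines are emitted as they happen;
    # tee mirrors them to stderr for more immediate display
    command = "python -u roll_asg.py 2>&1 | tee /dev/stderr"
  }
}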

1 vote and 0 comments so far on Reddit

the terraform gods have heard our prayers and answered.

A Terraform plan output prettifier. Contribute to dmlittle/scenery development by creating an account on GitHub.

we've added scenery to our cloudposse/packages alpine apk distribution
A Terraform plan output prettifier. Contribute to dmlittle/scenery development by creating an account on GitHub.

“A Terraform plan output prettifier” in go


ya’ll ever run into issues where cloning modules is slow from github?

yea

especially if you have A LOT of modules

i mean, it can take minutes

yeah it’s really painful in some of our directories… was contemplating uploading modules to S3 to see if that gets better performance

are you using EFS/NFS by any chance?

I've noticed that's exponentially slower due to all the stat operations.

be glad if you aren't using codecommit; github is soooo much faster, just about always


I don't think it would be that hard to do, but I haven't had time to look at the codebase

it would make dev a lot faster at least

Work on terraform provider for DocumentDB is progressing nicely: https://github.com/terraform-providers/terraform-provider-aws/issues/7077
Amazon DocumentDB announced.
2019-01-15

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

what are all these services used for?

bootstrapping the root accounts

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

what I am looking for is what are the services used for

rds psql + mysql + elasticache + replicas

AFAIK for KOPS

nope

kops does not use rds nor redis / memcache

not by itself any how

but some operators or etc might

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

I think its probably more for atlantis or something

has references to KOPS

and AFAIK it does not use elasticsearch either

yea

hence my asking

vpc_cidr_block = "${backing_services_cidr}"
zone_name = "${domain_name}"
region = "${aws_region}"
postgres_cluster_enabled = "false"
kops_metadata_enabled = "false"
rds_cluster_replica_enabled = "false"
rds_cluster_replica_cluster_identifier = "${namespace}-${stage}-postgres"

in templates/conf/backing-services/terraform.tfvars

mmmmm

@Jan those backing services are for applications (running on k8s for example), they're not for the infra

those are just examples

you can provision all or some of them as needed

that sounds odd, as you already have CP modules for those

#terraform In Azure I am sourcing a virtual machine module to create multiple windows VMs. As part of the virtual machine module I also have virtual machine extensions, which do not get attached to the multiple instances created while sourcing the base module

here is the virtual machine base module code

resource "azurerm_virtual_machine" "vm" {
  count                            = "${length(var.system)}"
  name                             = "${var.app_code}${var.app_name}${local.az_env}_${count.index}"
  location                         = "${var.location}"
  resource_group_name              = "${var.resource_group_name}"
  network_interface_ids            = ["${element(azurerm_network_interface.nic.*.id, count.index)}"]
  vm_size                          = "${var.vm_type}"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true
  # (rest of the resource elided)

and the extension as follows

resource "azurerm_virtual_machine_extension" "set_execution_policy" {
  name                 = "setexecutionpolicy-${lookup(var.system[count.index], "hostname")}"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.vm.*.name, count.index)}"
  depends_on           = ["azurerm_virtual_machine.vm"]

  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell.exe Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"
}
SETTINGS
}

I know I am doing something wrong with virtual_machine_name = "${element(azurerm_virtual_machine.vm.*.name, count.index)}" in the extension

is there a way we can attach extensions to multiple instances created using the base virtual machine module?
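<br></br>
One way this is commonly handled (a sketch, not tested against this exact module) is to give the extension its own count matching the VM count, so one extension instance is created per VM:

resource "azurerm_virtual_machine_extension" "set_execution_policy" {
  # mirror the VM count so each VM gets its own extension
  count                = "${length(var.system)}"
  name                 = "setexecutionpolicy-${lookup(var.system[count.index], "hostname")}"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.vm.*.name, count.index)}"
  # ... remaining arguments as in the original resource ...
}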

@praveen we’ll take a look at ^

@Jan @pecigonzalo to give you more details on having backing-services in the root modules:

@Andriy Knysh (Cloud Posse) should I provide you the complete code, or will the snippet suffice?

root-modules is the resource catalog - everything you deploy in your environment(s) goes in there. CloudPosse root modules is just an example. Every company gets its own copy with only the resources they need

On my side currently im building modules for private dns zones

already done as such

then a terraform rendered k8s cluster spec

changing a module for kops backing state

as I already have a domain

var.domain_enabled

that actually helps

- Those invocations of RDS, Redis, Elasticsearch in root-modules are more than just instantiating those modules. They are connected to the other resources and networks. For example, RDS/Redis/Elasticsearch security groups are connected to the kops/EKS/ECS security groups. The kops VPC is used to peer to the backing services VPC

and all of that (and more) is codified in the root-modules, so those backing services are not just CP module invocations; it's more about connecting them together (network, DNS, SGs, etc) and provisioning the entire infra

@praveen you can send the code, we’ll take a look

@Andriy Knysh (Cloud Posse) I just sent you the code


Hah, I’ve always struggled to explain this in slack without it interpreting my markdown. Using a raw snippet is a good idea!

haha I have been on that ride before

hello world


or hit the + to the left of the text input and create a code snippet

hi


which files, and pushed by what?

you create a log bucket, assign it to the EB environment, and then beanstalk will push all logs from all instances (including your app) to the bucket

doesnt work tho.

also https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L609 this is not honoured
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

terraform created all the s3 buckets for each environment, but they are empty, and ELB logs are pushed to the AWS default of elb-logs-<your aws account number>

i’m not sure, but maybe ELB logs get pushed just to the default bucket (don’t remember)

in practice that is what is happening. I'm not really concerned about that tho, lol, I'm just curious about that one setting.

if you find any issues, please open a PR

thats assuming that I know enough to make changes lol

was just digging around in cli.. eb config is showing these settings here

i wonder if thats related

(cloudposse is not actively deploying any beanstalk clusters… we wrote these in 2016 for a few customers)

Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.

#terraform am trying to make the following resource optional as part of azure virtual machine base module

resource "azurerm_network_interface_backend_address_pool_association" "nic" {
  count                   = "${var.lb_bepool_IDs == "0" ? 0 : 1}"
  network_interface_id    = "${element(azurerm_network_interface.nic.*.id, count.index)}"
  ip_configuration_name   = "${var.app_code}${var.app_name}IP${count.index}"
  backend_address_pool_id = "${var.lb_bepool_IDs}"
}

with lb_bepool_IDs defined in the calling code as lb_bepool_IDs = "${module.lb.backend_address_pool_id}"

it throws the following error: value of 'count' cannot be computed

ah yea, the famous error. take a look here https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/

should I change the count decision to true or false?

@praveen there is no easy answer to that error. You need to play with module.lb because it's a dependency for "azurerm_network_interface_backend_address_pool_association" "nic"

you can share the code for the module here and we'll take a look to see if anything can be resolved fast

oh, looks like terraform-aws-rds-cluster module outputs don't work in envvars of a beanstalk module.. i'm trying this:


and in plan or apply, it's not wanting to create rds_db_name because ^ module.rds_cluster_…. is empty

when i do terraform state show… i can see these things under “module.rds_cluster_aurora_postgres.aws_rds_cluster.default”

i have no idea how to troubleshoot this

is the enabled flag set to "true" for the module.rds_cluster_aurora_postgres?

@i5okie are you using this module https://github.com/cloudposse/terraform-aws-rds-cluster?
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

yes

and when you apply it, it should show all these outputs https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/outputs.tf
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

you see them?

under module.rds_cluster_aurora_postgres.aws_rds_cluster.default
yes

you also using this https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

setting env vars here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/variables.tf#L349?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

indeed

Did you check the beanstalk UI under environment settings if you have those values there?

they didn't get added, and the deploy failed, that's how i found out.

i’ve been using the normal postgres rds previously. trying aurora first time here.

Ok so the problem is the failed deployment?

no the problem is that terraform doesn’t understand what “RDS_DB_NAME”, “${module.rds_cluster_aurora_postgres.name}“, is

It should be a map

Variable

and @Erik Osterman (Cloud Posse) i didn’t notice your question earlier. yes. it created the environment.

and it is

Let me show an example of that

i just didn’t copy the []

i’ve got a bunch of envvars in there already. i just copied the one line as example here.

terraform created all the other envvars except ones referring to ${module.rds_cluster_aurora_postgres. …}

this was working before https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L49
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

I’ll try it again tomorrow.

Helpful question stored to <@Foqal> by @Andriy Knysh (Cloud Posse):
oh looks like terraform-aws-rds-cluster module outputs don’t work. in envvars of a beanstalk module.. i’m trying this:...

@Andriy Knysh (Cloud Posse) i don’t think map is the problem here.

ok, was the RDS cluster created?


yes i created the cluster first, then tried to apply targeting this one beanstalk module.

i've created the environment by manually entering the envvar values. i'll try changing them to the module outputs now and see if terraform will want to change the envvars or not, brb

hmm

maybe its fine

i’ll have to try again tomorrow.

i just remembered that the first time i tried, when i ran into this issue.. it finished creating the rds cluster, but never finished adding the aurora instance.

yea, maybe there were some issues with creating the RDS instances

yep, I inherited this terraform thing, and was writing documentation on deploying a new stack.. missed some important steps, assuming terraform will create the resource if it's referenced. #hamberdwrong

fixed

2019-01-16

#terraform , am working on creating an azure virtual machine module, for which I am not able to make "azurerm_network_interface_backend_address_pool_association" optional. The intention in making it optional is to source the vm module for provisioning standalone VMs, as well as for VMs behind a load balancer

resource "azurerm_network_interface_backend_address_pool_association" "nic" {
  count                   = "${local.enabled}"
  network_interface_id    = "${element(azurerm_network_interface.nic.*.id, count.index)}"
  ip_configuration_name   = "${var.app_code}${var.app_name}IP${count.index}"
  backend_address_pool_id = "${var.lb_bepool_IDs}"
}

with locals as

locals {
  enabled = "${var.lb_enabled == "false" ? false : true}"
}

am i doing something wrong here

what's the error you are getting?

if I am sourcing the module for deploying a standalone instance, the error is "module.vm.azurerm_network_interface_backend_address_pool_association.nic: Can not parse "backend_address_pool_id" as a resource id: Cannot parse Azure ID: parse : empty url"

for the instance behind the lb, the error is "module.vm.var.lb_bepool_IDs: variable lb_bepool_IDs in module vm should be type string, got list"

can you show me the definition for the var lb_bepool_IDs?

variable “” {} etc..

I have DM'd you the complete code

variable "lb_bepool_IDs" {
  type = "string"
}

Hello, I just found CloudPosse and joined the workspace. I am looking for a Terraform module for a VPC that lets me start with a Direct Connect based VPC and Internet Gateway. Our IT department created these two assets and so I need to start with them, not a VPC of my own making and not the Default VPC. Does such a module exist?

We don't have a module ready for this yet. @Andriy Knysh (Cloud Posse) has started one, but has not been able to finish it.

As @Jan mentions, he has something in the works but it’s not yet contributed.

we are working on VPN + Virtual gateway + Customer gateway module, not Direct Connect

hahaha

I just created that

Its a touch rough and I need to contribute it back

but can happily talk you through it

Nice! I’d like that very much

Are you on a tight deadline?

I have a few deadlines of my own to get out before I can clean that up and push it upstream

I have until the end of the month, so don’t let me come between you and your deadlines.

oki cool

lemme talk you through it

I have a set cidr that was agreed upon with corp ITS

Sounds like me; I have a VPC with a CIDR block in four separate accounts, for Dev, Test, Stage and Prod

yea

so my tf module is like this

vpc-accountname.division.cloud.company.com
├── init.tf
├── outputs.tf
├── peering.tf
├── variables.tf
└── vpc.tf

this is 100% the stock vpc module with the addition of a few things

@praveen so count = "${local.enabled}" is not working because

locals {
  enabled = "${var.lb_enabled == "false" ? false : true}"
}

is wrong
should be enabled = "${var.lb_enabled == "false" ? 0 : 1 }"

let me change it

is it ok if I have the variable as follows

variable "lb_enabled" {
  description = "Set to false to prevent the module from creating any resources"
  default     = "false"
}

@Jan - When you say the “stock vpc module” do you mean the CloudPosse module here: https://github.com/cloudposse/terraform-aws-vpc?
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

https://github.com/cloudposse/terraform-aws-vpc + https://github.com/cloudposse/terraform-aws-dynamic-subnets + https://github.com/cloudposse/terraform-terraform-label
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

Ah, OK


welcome @Taco

then in the peering, which I still need to clean up and make redundant gw/vpn’s

Thanks @Andriy Knysh (Cloud Posse)

OK, great, thanks! I look forward to the finished bits

we did not use Direct Connect before, so @Jan would be of much help here


I didn’t even know about it until I came to my current position, so I’m still getting used to the idea of not creating a new VPC for each application/project

I need to extend it a bit

and have it do redundant cgw's

and vpn’s

but that's the basics of all of it

I will make some time to commit it this week

Sounds like you’ve got your hands full

haha yea


Thanks for your time, I’ll keep checking in and I will explore your repositories; there’s a lot of stuff…

just remind me

I will contribute it to cloudposse

Will do

@Andriy Knysh (Cloud Posse) as per your recommendations I made the changes, with no luck. Now I made the following changes but it is still not working

resource "azurerm_network_interface_backend_address_pool_association" "nic" {
  count                   = "${var.lb_enabled == "false" ? "${var.lb_enabled}" : "${length(azurerm_network_interface.nic.*.id)}"}"
  network_interface_id    = "${element(azurerm_network_interface.nic.*.id, count.index)}"
  ip_configuration_name   = "${var.app_code}${var.app_name}IP${count.index}"
  backend_address_pool_id = "${var.lb_bepool_IDs}"
}

variable "lb_bepool_IDs" {
  type    = "string"
  default = ""
}

variable "lb_enabled" {
  type        = "string"
  description = "Set to false to prevent the module from creating any resources"
  default     = "false"
}

for the instance without an LB it throws the error "Can not parse "backend_address_pool_id" as a resource id: Cannot parse Azure ID: parse : empty url"

for the instance with an LB it throws the error "module.vm.azurerm_network_interface_backend_address_pool_association.nic: azurerm_network_interface_backend_address_pool_association.nic: value of 'count' cannot be computed"

count = "${var.lb_enabled == "false" ? 0 : length(azurerm_network_interface.nic.*.id)}"

@Andriy Knysh (Cloud Posse) no luck. Same errors for both with LB & without LB as posted above

DM'd you the complete code for your ref

will be back soon and take a look @praveen

sure @Andriy Knysh (Cloud Posse)

@Andriy Knysh (Cloud Posse) were you able to check my code

@Andriy Knysh (Cloud Posse), just to make you aware: I see there is a bug with using count to make the resource azurerm_network_interface_backend_address_pool_association optional

I got it fixed using the deprecated argument load_balancer_backend_address_pool_ids

As a final solution to #17/#47, add a new resource random_password which is identical to random_string except that it treats the result as Sensitive data and avoids writing it to the console. The …

excited for this

(posted 1 day ago)

crazy how often that happens to me. when I need something, someone has already opened the pr.

last week, I was about to go add build_timeout to our terraform-aws-codebuild module, but noticed Strech96 contributed it 2 weeks ago.
2019-01-17

https://github.com/terraformdns - should give some ideas how 0.12 will work in reality.
Re-usable Terraform modules that aim to abstract over different DNS hosting services - Terraform DNS Modules

Nice

@antonbabenko the big question is going to be how much we all have to undo our Terraform hacks to date

Not sure about everyone, but my hacks are usually around counts which are inside reusable modules. I don't use complex types, or recursions in chains of modules. Users of modules will not see too many changes, because they will continue to operate with key/values in most cases, while infrastructure teams work on modules and internals. It will be pretty nice to get rid of jsonnet and finally make a 100% feature-full s3 bucket resource module

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

It works and is super useful, but I'm pretty sure if something is gonna break, it'll be that.


Sir @jamie will you come to our rescue once 0.12 is GA?

Oui!

I tried to make it using all built-in interpolation, instead of hacks that use blocks/maps etc


But I also think I can finesse it once it is released

Serious question, do you ever consider not using terraform modules sometimes because they may overcomplicate the task at hand?

For prototyping yes….

but I kind of like modules b/c they provide a way to document a capability/requirement

I mean, a module of (1) resource doesn’t make that much sense

however, if you use that (1) resource in a lot of places and there are a lot of default settings, that you want to set, then I think it still makes sense.

I think “complicate” is a loaded term. I use it all the time. We as an “engineering community” use it all the time to justify one decision or another. Again, I do this. My point is more, “complicate” needs a qualifier. What about it is complicated?

I was just in another slack community, where I suggested bundling the tool chain in a docker container. They said they didn’t want to complicate things. IMO docker is the easiest thing in the world to use to bundle tools, easier than native

Sure

Simple stuff, sometimes is done without a module for us (the company I work at)

@sarkis one of the things I’ve started doing more of is “localized modules”

that is, sometimes it doesn’t make sense to have one-module-per-repo

so localized modules are basically submodules and only ever used by the module it is contained in?

i do like this, i think what i am struggling with is the whole 1:1 module to repo approach is turning into a maintenance nightmare

I think the 1:1 module/repo strategy is best suited for companies like cloudposse that need to share modules across organizations

I think the monorepo strategy is better suited for inside organizations


which contains modules and submodules therein

also, if you haven't looked into terraform init -from-module=... do so!

I ♥ this pattern


We are using cloudposse/tfenv to accomplish it in an alternative way using strictly environment variables

especially for highly esoteric business logic

stuff that will never leave your org

Here is an example: https://github.com/cloudposse/terraform-root-modules/tree/master/aws/root-dns
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

We do that as well, to ensure the code is “composable” anyway or supports intrepolations for environments

there's a localized ns module.

Similarly we have a repo of default-dns which sets up SPF CNAMEs and other default stuff for all our registered domains

some that are empty, but just so people dont use them to spam email

it is a module so we can just easily spawn new ones (hopefully even easier on terraform 0.12), but it would not make sense to make that a "public" module outside of that repo, as it has no use

I guess it really depends if you have a use for DRY or ensuring it meets some standard (interpolation of environments, safe defaults, etc)

we use data blocks all the time outside of modules; we try a bit harder to avoid resource blocks

We use this and have started to use more SSM


Yeah, agreed

The only thing that makes me a bit hesitant about using SSM for terraform with data providers is that you then always need SSM to deploy. We use SSM, but with chamber and either chamber exec -- with TF_VAR variables, or chamber export -f dotenv -o chamber.auto.tfvars. This way if I want to run without SSM, we just fill the vars or set the env vars
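<br></br>
For reference, a sketch of those two chamber flows (the service name "app" is made up):

# run terraform with SSM-backed values injected as env vars
chamber exec app -- terraform plan

# or materialize them as an auto-loaded tfvars file, so SSM isn't needed at plan time
chamber export -f dotenv -o chamber.auto.tfvars app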

I did a PR for this reason to chamber not long ago, to add support for multi depth paths (/this/that/foo/bar)

hey dudes, hope you are all well. running into a weird issue that i havent seen.
resource "aws_vpc_peering_conection" "staging_peer" {
peer_vpc_id = "vpc-7476xxx"
vpc_id = "${module.vpc.vpc_id}"
auto_accept = "true"
}
gives me Error: aws_vpc_peering_conection.staging_peer: Provider doesn't support resource: aws_vpc_peering_conection


In connection

ahhhhh hahaha

what a putz


im not too proud…


that was bad

anyone around for some fargate chat? I can’t seem to get it to start using the cp modules

hey!

so oooooooo we finally have an e2e example

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules


this is deploying atlantis

on fargate

using our modules

ok, great. checking

key thing…. make sure your repo has a buildspec.yaml

we provide one in the README.md of that example

btw, a couple PRs on https://github.com/cloudposse/terraform-aws-ecs-alb-service-task are pending
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

ah…so using terraform-aws-ecs-web-app

yes, but terraform-aws-ecs-web-app is not required

however, it shows you how to piece it all together

so if you don't want to use the ecs-web-app module, you can start with your own module

i have the service integrated w/ the config generating

using terraform-aws-ecs-alb-service-task

what Terraform module to deploy Atlantis to an ECS fargate cluster. why So we can run Atlantis outside of Kops

here’s how we are refactoring atlantis into a standalone ecs-task-as-a-module

so a module is like a helm chart


make sure you’re on the latest versions

tied up some loose ends

added support for github webhooks

(also, i’m actively deploying this as we speak on a customer site! so recently “tested” )

lol

deploying atlantis?

yep

but our flavor

yeah

when a container definition changes, does it trigger a new deploy?

no

or an update?

ok

we need to ignore changes on the container definition

i changed ports and it didn’t show TF changes

interesting

so terraform is great for deploying static infrastructure

it is a p.o.s. for managing the SDLC of apps on top of that infra

our strategy is this…

use CI/CD for all changes thereafter, using the patch technique

that's what the imagedefinitions.json is doing in the buildspec.yaml


that said, before you go TOO far down this path, checkout #airship

they take a different approach to ours which might suit certain use-cases better

that’s run by @maarten and I know @jamie is a big contributor

coolio

taint/atlantis:
terraform taint -module atlantis_web_app.ecs_alb_service_task aws_ecs_task_definition.default

so if I need to redeploy the definition, I have this

ahh

the constant challenge/strife with terraform and ECS is the task definition

we can use the data provider to look up the current definition, but then we break the cold start

or we can ignore changes, but then we cannot easily update the task definition
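<br></br>
For reference, the ignore-changes variant looks roughly like this (0.11 syntax; the family and variable names are made up). After the cold start, terraform stops diffing the container definition, which is exactly why updating it through terraform then becomes hard:

resource "aws_ecs_task_definition" "default" {
  family                = "atlantis"
  container_definitions = "${var.container_definitions}"

  lifecycle {
    # terraform will no longer plan changes here, so CI/CD
    # (e.g. the imagedefinitions.json patch) owns image updates
    ignore_changes = ["container_definitions"]
  }
}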

There are a few ways though, and one I haven’t implemented yet

if we taint and redeploy, we might revert the image that's currently deployed.

for our immediate use-case, this taint approach is working well enough

if you figure out some clever way around this, would love contributions

e.g. using SSM parameter store somehow

well fargate is the new direction so I’ll definitely hit this wall and kick some tires on solutions
2019-01-18

But what is the challenge with the task? we have the service and task defined in terraform and it works without issues


fargate is also expensive


They did that 50% price reduction not long ago

I guess it has its use case, I just dislike awsvpc in it, as it introduces some "magic" that does not exist in the docker universe and complicates things

everyone was asking for overlay network support from them, and they added awsvpc…. thanks AWS

Yea they did the same initially with aks

Was my first big issue with it

Because it puts a very practical limit on the number of nodes and pods you can run

I managed to exhaust a test cluster during the aws summit when they announced it

did they remove that now?

they support other cni’s now as I understand

Im keen to see what the k8s control plane as a service looks like in a few months

alicloud will be adding kops native support mid this year (so says a birdie I know)


at which point there are 5 large k8s control plane as a service providers that if they get to the same version / feature set it would be awesome

I would love more love for docker-swarm

hot off the press! https://registry.terraform.io/modules/cloudposse/mq-broker/aws/0.1.0
2019-01-21

Terraform utility to mask select output from terraform plan
- cloudposse/tfmask

very nice addition!

dirty little hack for now

still

Feel like the terraform core needs something to combat this


yep

tfmask output

Anyone hit https://github.com/hashicorp/terraform/issues/17048 and have a working workaround ?
Related issue & conversation terraform-providers/terraform-provider-google#912 We opened this issue in the google provider, but @danawillow recommended we open it here as this is more of a core…

how do we fix this?

remove enabled?

(maybe share the module you’re working on for activemq)

@Erik Osterman (Cloud Posse) Likely, as it essentially fails when disabled (on first run anyway). If it was enabled and state written, and later enabled = "false", I suspect that may work

For context - https://github.com/cloudposse/terraform-aws-mq-broker/pull/4/commits/3b3614976b8a7c18a22e3c9d939ffc23e2603fe1
When initially building this module, it was done piece meal adding the outputs once the initial apply had been done. apply/plan etc all looked good after the initial apply. However, there is a bug …

Hi Terraformers, @Erik Osterman (Cloud Posse) pointed me here because maybe someone is doing SES click tracking with custom email link tags? Long story short, I'm trying to get graphed CW metrics for different links from an email using the AWS-provided cloudwatch destination. Has anyone had any success with more fine grained configuration besides the general open/click metrics? I know that I could get the stats I need by running either firehose -> s3 -> athena or sns -> lambda -> cloudwatch, but maybe there's an easier way going with the provided cloudwatch integration; the SES monitoring docs suck big time and it's trial and error for me currently to figure that out.
2019-01-22

Hi everyone,
I took a look at your terraform modules regarding the S3 bucket for log storage, cloudtrail and cloudtrail-s3-bucket (to attach a policy to a bucket).
I wonder how I can use those modules in my case.
I would like to create a global S3 bucket to store all events coming from other AWS accounts.
The problem I need to solve is about the Policy. To allow multiple AWS accounts to write to a bucket located in one account, we need to add resources in the aws_iam_policy_document
(https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-set-bucket-policy-for-multiple-accounts.html)
To do so, I would like to get a list of AWS accounts as input (variable) and add them to the policy. Do you know how I can handle this? Maybe there is a better way to do that…
Modify the policy on an Amazon S3 bucket, so that CloudTrail log files can be written to the bucket from multiple accounts.

Is anyone using RDS database migration service to create a read replica of the main DB in another region?

Doesn’t RDS just support cross region read replica built in? https://aws.amazon.com/blogs/aws/cross-region-read-replicas-for-amazon-rds-for-mysql/

You can now create cross-region read replicas for Amazon RDS database instances! This feature builds upon our existing support for read replicas that reside within the same region as the source database instance. You can now create up to five in-region and cross-region replicas per source with a single API call or a couple of […]

Yes it does!

Sorry I was catching up on the channel and realized after I messaged you someone posted this later on lol

@bentrankille maybe this will help, here we have a CloudTrail bucket module with policies https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/blob/master/main.tf
S3 bucket with built in IAM policy to allow CloudTrail logs - cloudposse/terraform-aws-cloudtrail-s3-bucket

Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs - cloudposse/terraform-aws-cloudtrail

here we create the bucket and CloudTrail in the audit account (which collects all logs from all other accounts like prod, staging, etc.) https://github.com/cloudposse/terraform-root-modules/blob/master/aws/audit-cloudtrail/main.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

and here we create CloudTrails for all other accounts pointing to the bucket in the audit account https://github.com/cloudposse/terraform-root-modules/blob/master/aws/cloudtrail/main.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

@pericdaniel for Aurora, you can create a cross-region replica https://aws.amazon.com/about-aws/whats-new/2016/06/amazon-aurora-now-supports-cross-region-replication/ https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html
You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into an AWS Region that is closer to your users, and make it easier to migrate from one AWS Region to another.

@Andriy Knysh (Cloud Posse) Thank you, I didn't see the terraform-root-modules repo. I wonder how it works when you add a new trail to your audit account? How is the policy modified? In the terraform-aws-cloudtrail-s3-bucket module, it takes only one resource:
resources = [
"arn:aws:s3:::${module.label.id}/*",
]

you mean a new bucket?

it takes the bucket

If I know the name of your bucket, I can write my cloudtrail events to your bucket with this policy, right?

of course I wouldn’t have access to read it

users who have access to the audit account would be able to read the logs and access the bucket

that's the main purpose of having a separate audit account, to restrict access

I get it about the read access; however, about the write access: everyone can write to this bucket if the bucket name is known?
Do you have only one resource in your policy, like this?
"Resource": "arn:aws:s3:::bucket_name/AWSLogs/*"
In my mind, I would prefer to have several resources, to be sure who can write to my global bucket. Like this for example:
"Resource":
- "arn:aws:s3:::bucket_name/AWSLogs/111111111111/*",
- "arn:aws:s3:::bucket_name/AWSLogs/22222222222/*"
And one day, if I have a third account to add, I would have to add the following:
- "arn:aws:s3:::bucket_name/AWSLogs/33333333333/*"

then you’ll need to add all of that to the list
resources = [
"arn::....../.......",
]

Yes, I understand that. I just want to be sure I understand the policy in your module well. If you give me your bucket name, I can write my trail events to your bucket without modifying the policy. The folders will be: your_bucket_name/AWSLogs/My_account_ID/Cloudtrail/
Am I right?

yes should work

since we have * at the end

arn:aws:s3:::${module.label.id}/*

(you need to test it. If any issues, open a PR or issue, we'll review)

Thank you, I tested it in my infrastructure and it works. I think this is a little weird for security reasons: every AWS account knowing the bucket name can write its events to this bucket.
However I don't know how to handle it easily with terraform modules (modifying the policy each time a new aws account is created)

yes, you know the bucket, you add the bucket ARN to
resources = [
  "arn:aws:s3:::${module.label.id}/*",
]
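<br></br>
One way to keep that list manageable (a sketch; var.bucket_name and var.account_ids are assumed inputs) is to build the per-account ARNs with formatlist instead of hand-writing them:

variable "bucket_name" {}

variable "account_ids" {
  type    = "list"
  default = ["111111111111", "222222222222"]
}

data "aws_iam_policy_document" "cloudtrail_write" {
  statement {
    sid     = "AWSCloudTrailWrite"
    actions = ["s3:PutObject"]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    # one ARN per trusted account, so accounts you haven't listed can't write
    resources = ["${formatlist("arn:aws:s3:::%s/AWSLogs/%s/*", var.bucket_name, var.account_ids)}"]
  }
}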

Helpful question stored to <@Foqal> by @Andriy Knysh (Cloud Posse):
Hi everyone,...

Does Terraform have an issue with pulling vpc IDs from 2 different regions?

When I try to pull from us-west-2

There's no problem

But eu-central-1 gives me the vpc does not exist

@pericdaniel You need to setup 2 providers. Since AWS providers are region specific

@Steven so do I then pull that data within that Terraform file?

@pericdaniel You setup 2 aws providers (default one and an alias) for the 2 regions, then you can do 2 VPC data lookups (1 with each provider). Then use the results however you want

Yeah I got the first part. Not sure how to do the data lookup for the second vpc

@pericdaniel This is assuming you need info from both regions in a single terraform run. Many times you don’t and can just have separate terraform runs with different provider definitions

Yeah I’m trying to set up database migration service

And what to do it all in one

And just pull the subnets and vpc id

Even though I'm passing it as vars

Example VPC lookup specifying the provider:

data "aws_vpc" "vpc" {
  provider = "aws.member"

  filter {
    name   = "tag:Name"
    values = ["${var.account_name}-mgmt"]
  }
}

In this case there is an AWS provider with the alias ‘member’

That's what I was missing! Now I get "provider alias must be defined by module"

I got it

Provider = AWS.alias

If this is inside a module, then you define a provider construct in the module and have resources reference that. When the module is called, you pass a fully defined provider to the alias defined in the module
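<br></br>
A quick sketch of that wiring (regions, alias, and module path are made up):

provider "aws" {
  region = "us-west-2"
}

provider "aws" {
  alias  = "frankfurt"
  region = "eu-central-1"
}

module "dms" {
  source = "./dms"

  # map the module's expected "aws.member" alias to the aliased provider here
  providers = {
    "aws.member" = "aws.frankfurt"
  }
}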

Time to head to work. There is a doc page on this, if you need more details

If you are using Organizations, you can now have a master cloudtrail for the whole org

I’m not sure I can use it in my case. I am a part of a bigger organization.. So unless we can use “sub-organization” I can’t use this feature.



@Steven thank you so much for your help!

If I'm doing multiple s3 bucket object resources writing to the same bucket… Do they overwrite?

I want 3 different files stored in s3 in the same bucket

same “path” to the file? yes, it’ll overwrite

Same bucket
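<br></br>
For what it's worth, a sketch of three files to one bucket without overwrites (bucket and file names made up); as long as each key is distinct, the objects coexist:

variable "files" {
  type    = "list"
  default = ["a.txt", "b.txt", "c.txt"]
}

resource "aws_s3_bucket_object" "file" {
  count  = "${length(var.files)}"
  bucket = "my-bucket"
  key    = "${element(var.files, count.index)}"  # distinct keys avoid overwrites
  source = "${element(var.files, count.index)}"
}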
2019-01-23

@Erik Osterman (Cloud Posse) could you merge https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/17 if its ok and you have the time.
Allows to add volume definitions to task like this module "alb_service_task" { source = "../terraform-aws-ecs-alb-service-task" … volumes = [ { …

@Andriy Knysh (Cloud Posse) can take a look
Allows to add volume definitions to task like this module "alb_service_task" { source = "../terraform-aws-ecs-alb-service-task" … volumes = [ { …

@Samuli thanks! Please see the comments

done..

@Samuli That could be squashed into a single commit IMO

?

Sorry wrong person!


#terraform , am trying to add a user group using terraform config and it does not accept a backslash

variable "accessgroup" {
  default = "domain\user1"
}

did you try \\ ?

the \u says to escape the u

yes, I tried 4 backslashes, 2, 3, and also 1 forward slash

with no luck

just tried it once again with no luck

what’s the error message?

- azurerm_virtual_machine_scale_set.vmss: Code=”VMExtensionProvisioningError” Message=”VM has reported a failure when processing extension ‘dsc-wqwdwebfe’. Error message: "The DSC Extension received an incorrect input: Compilation errors occurred while processing configuration ‘WindowsStaplesDefault’. Please review the errors reported in error stream and modify your configuration code appropriately. A Using variable cannot be retrieved. A Using variable can be used only with Invoke-Command, Start-Job, or InlineScript in the script workflow. When it is used with Invoke-Command, the Using variable is valid only if the script block is invoked on a remote computer. Exception calling "InvokeWithContext" with "2" argument(s): "A Using variable cannot be retrieved. A Using variable can be used only with Invoke-Command, Start-Job, or InlineScript in the script workflow. When it is used with Invoke-Command, the Using variable is valid only if the script block is invoked on a remote computer." A Using variable cannot be retrieved. A Using variable can be used only with Invoke-Command, Start-Job, or InlineScript in the script workflow. When it is used with Invoke-Command, the Using variable is valid only if the script block is invoked on a remote computer.\n\nAnother common error is to specify parameters of type PSCredential without an explicit type. Please be sure to use a typed parameter in DSC Configuration, for example:\n\n configuration Example {\n param([PSCredential] $UserAccount)\n …\n }.\nPlease correct the input and retry executing the extension.".”

Please use code blocks when posting things like this into Slack, helpful for organisation

sure, can you give me a link to code blocks


do we have an example for allowing backslash in terraform

Does this help, @praveen? https://www.terraform.io/docs/configuration/interpolation.html#built-in-functions
Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}
, such as ${var.foo}
.

See the yellow box
2019-01-24

Could someone enlighten me on what’s the difference between terraform-terraform-label and terraform-null-label?

@Samuli for simple use-cases, there is no real difference, they will do the same things… except for:

terraform-null-label uses null_data_source to do additional things like additional tags and context (if you need those)

terraform-terraform-label is a much simpler implementation (terraform-terraform-label was forked from terraform-null-label before a lot of additional functionality was added to -null-label)

so if you use basic label to uniquely name resources, both will do the same

if you want context, additional params like environment, and additional tags (needed for tagging some AWS resources like Autoscaling Groups, which have a diff tag format), then use null-label
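<br></br>
For example, a minimal invocation (values made up) looks like:

module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace  = "eg"
  stage      = "prod"
  name       = "app"
  attributes = ["cluster"]
}

# module.label.id yields "eg-prod-app-cluster"; module.label.tags carries the tag map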

@Erik Osterman (Cloud Posse) can we get your blessing on @wbrown43’s PR? https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/73
We need it released for some other work
what Added output “elb_load_balancers” why To attach a WAF to loadbalancer

@wbrown43 has joined the channel

one sec


released 0.9.0




@Erik Osterman (Cloud Posse) there is another PR https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/72
Have tested with load balanced environment, but not SingleInstance. Terraform wants to update the settings on the environment every time, even with no changes (and even using the module from master…

I’ve increased the vpc limit and the interface endpoints. Any ideas what else I need to increase to get things rolling?
Error: Error applying plan:
2 error(s) occurred:
* module.stack.module.vpc.module.vpc.aws_vpc_endpoint.dynamodb: 1 error(s) occurred:
* aws_vpc_endpoint.dynamodb: Error creating VPC Endpoint: VpcEndpointLimitExceeded: The maximum number of VPC endpoints has been reached.
* module.stack.module.vpc.module.vpc.aws_vpc_endpoint.s3: 1 error(s) occurred:
* aws_vpc_endpoint.s3: Error creating VPC Endpoint: VpcEndpointLimitExceeded: The maximum number of VPC endpoints has been reached.

is it working after the increase?

maybe it’s cuz dynamodb and s3 vpc endpoints are not interface endpoints, they are gateway endpoints?

no, not working after

different limits: https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-endpoints
Request increases to the following default Amazon VPC component limits as needed.

am I just missing it?

heh. you and me both! might have to just open a more generic support request

going w/ a general request to see what I can find out. I think their service limits are missing an option for sure

The follow-up response from Support. It is as suspected, but I have more questions since their suggestion is not a valid option in the dropdown.
Hi
Thank you for contacting AWS premium Support. My name is Tendai and I will be happy to assist you on this case.
I understand you are having the VpcEndpointLimitExceeded error. Upon checking your recent changes I can see that the increase you requested was for VPC interface endpoints, but the actual increase which caters for your case is gateway endpoints. Ref [1].
Please note that DynamoDB and s3 use Gateway VPC Endpoints instead of VPC interface endpoints.


Hello all. question : does terraform allow enabling aws cloudtrail in another sub account while being in the main billing account?

Hey @Neha

yes it’s possible, here is an example

provision cloudtrail and the bucket in the audit account https://github.com/cloudposse/terraform-root-modules/blob/master/aws/audit-cloudtrail/main.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

@Andriy Knysh (Cloud Posse)’s question was answered by <@Foqal>

thanks @foqal

then in other accounts (prod, staging, etc.) provision cloudtrails and point them to the bucket in the audit account https://github.com/cloudposse/terraform-root-modules/blob/master/aws/cloudtrail/main.tf#L33
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

we separate audit from the main billing account (to allow only a few people to access it), but they could be the same



for security and compliance reasons

that reminds me- can we do the same with GuardDuty? enabling it in another account while being in the billing account?

we did not work with GuardDuty. What resources does it require?

I guess what I’m asking is, is that process only for cloudtrail? or it could be applied to other aws resources?

so, we usually separate ALL resources into different AWS account per environment -> prod, staging, dev, testing are in separate acccounts

look here https://docs.cloudposse.com/reference-architectures/notes-on-multiple-aws-accounts/, useful videos at the end

GuardDuty just requires enabling it. but we have cloudwatch set up that is targeted at an sns rule, and we're forwarded any/all alerts that come into GuardDuty

sounds like you want just one GuardDuty and all alerts coming to it from different accounts

not sure how it (GuardDuty) works, but probably will require setting some cross-account permissions

No permissions, it’s more like vpc peering really… You send an invite from the guardduty “master” to the “member” account, and from the member account you accept the invite.

Currently in terraform, you can only send the invite… Acceptance is still an open PR

Work In Progress The provider acceptance test framework needs additional updates to support a second provider configuration with additional credentials. The initial implementation in this pull requ…

looks like the EFS module is borked.
* module.efs.output.mount_target_ids: At column 3, line 1: conditional operator cannot be used with list values in:
${local.enabled ? aws_efs_mount_target.default.*.id : list("") }
* module.efs.output.mount_target_dns_names: At column 3, line 1: conditional operator cannot be used with list values in:
${local.enabled ? aws_efs_mount_target.default.*.dns_name : list("") }
* module.efs.output.network_interface_ids: At column 3, line 1: conditional operator cannot be used with list values in:
${local.enabled ? aws_efs_mount_target.default.*.network_interface_id : list("") }
* module.efs.output.mount_target_ips: At column 3, line 1: conditional operator cannot be used with list values in:
${local.enabled ? aws_efs_mount_target.default.*.ip_address : list("") }

crap, use the previous release

@joshmyers was just adding some things

yup

we’ll get that fixed tonight

0.7.1 works

there are some comments about it https://github.com/cloudposse/terraform-aws-efs/pull/21#issuecomment-456954156
what Add enabled variable and update docs why So we can boolean creation of these resources from the caller module

(easy fix)

looking at this module: https://github.com/cloudposse/terraform-aws-multi-az-subnets/blob/master/public.tf
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

is there a benefit to having a route table per az subnet?

I cannot recall if there was a technical reason or just trying to keep it symmetrical

looking at it right now, it seems like we could get by with a single route table for public and a single route table for private

the key thing is we need to have one NGW per AZ

why is that @Erik Osterman (Cloud Posse)

for HA

how does that work?

let’s say we deploy one NGW

theres a possibility a nat gateway can become unavailable?

we have instances in us-west-2a, us-west-2b and us-west-2c

and we have one ngw in us-west-2a

all azs can use that ngw

but if 2a goes offline, then 2b and 2c are all affected

so from a networking perspective, we want to maintain HA.

i see

that makes sense

since the nat gateway lives inside the subnet?

yea, sorta. So a subnet cannot span AZs. and then the NGW is attached to a subnet.
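<br></br>
That per-AZ layout is roughly this (a sketch with made-up names; the EIP and public subnet resources are assumed to exist with one element per AZ):

resource "aws_nat_gateway" "default" {
  # one NGW per AZ, each in that AZ's public subnet, so an AZ outage
  # only takes down NAT for instances in the same AZ
  count         = "${length(var.availability_zones)}"
  allocation_id = "${element(aws_eip.nat.*.id, count.index)}"
  subnet_id     = "${element(aws_subnet.public.*.id, count.index)}"
}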

do you need a nat gateway for public subnets?

no

only private ones need to do NAT

why does your tf have it for public subnets?

I think we have an IGW for public ones

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

fyi

aha, so you provision the NGW on a public subnet

then private subnets route through it

ah

if the NGW was on a private network, it would make no difference

I was going to say, you could use a NGW on a private subnet if you wanted traffic from that subnet to egress to a particular network on a specific IP

but I am not sure if that's technically possible since NGWs require an EIP.

i see

yeah

thanks, that helps a lot

There are a lot of subnet strategies. Especially as the organization gets larger, there will be stricter subnet allocations

Our modules are great in a “cold start” scenario where there aren’t a lot of other conflicting subnets.

I know @Jan is working on a different subnet module strategy.

Also, we have a few subnet modules for different kinds of allocations.

im actually going single k8s cluster in a vpc

What we’re missing is a preallocated/manual subnet strategy, where users specify their own subnet CIDRs.

so the vpc i just have split out to half public/half private subnets

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

pretty simple, and dependent services will just vpc peer

Have you seen the example by @Andriy Knysh (Cloud Posse) ?

pretty nice actually

have you guys not had any problems

with module dependencies?

I think this works well from a cold-start

no “count of” issues

but I could be mistaken.

meaning

module "eks_cluster" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.1.1"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = "${var.attributes}"
tags = "${var.tags}"
vpc_id = "${module.vpc.vpc_id}"
subnet_ids = ["${module.subnets.public_subnet_ids}"]
allowed_security_groups = ["${distinct(compact(concat(var.allowed_security_groups_cluster, list(module.eks_workers.security_group_id))))}"]
allowed_cidr_blocks = ["${var.allowed_cidr_blocks_cluster}"]
enabled = "${var.enabled}"
}

if we take a look at this module

it uses module.vpc.vpc_id

so on initial terraform apply

have you guys not run into the problem where the eks_cluster module will spin up concurrently with the vpc module?

I haven’t used this one personally

ah

We mostly using kops

some gke

because i know you can’t use depends_on
within a module

yea… we know all too well

haha


ill give this a shot though

@Andriy Knysh (Cloud Posse) knows it very well. ping him if you get stuck.

man

that depends_on is on my last nerve

so wish .12 would drop!

soon i think

so they keep saying

i know

I’ve resigned to “eventual”

right

yeah so like i was saying, this is as clean as it gets when writing terraform: https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/eks.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules


but my first thought is if it will provision everything in the correct order without depends_on working for modules

literally using that one right now

so long as the value isn’t computed, references to other modules seem to work decently

sooooo terraform seems to infer order pretty well

yea

cluster_name = "${module.eks_cluster.eks_cluster_id}"

^ that

toss something in a count
and

haha

because it evaluates count before generating resources

yeah
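
(For example, something like this fails to plan from a cold start, because the count depends on a value that isn’t known until the subnets module is applied; the wiring here is hypothetical:)

resource "aws_instance" "example" {
  # count derived from another module's (not-yet-computed) output
  count         = "${length(module.subnets.private_subnet_ids)}"
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
  subnet_id     = "${element(module.subnets.private_subnet_ids, count.index)}"
}

# terraform plan errors with "value of 'count' cannot be computed"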

@johncblandii did you end up using our EKS modules?

sure did

editing the cluster right now


tied efs into it too

cool (we’ll get that EFS bug fixed soon)

vpc -> subnets -> eks -> workers
^ cp stack ftw

np

nested modules havent been an issue?

for you guys?

what I love about our EKS module is how easily you can create multiple types of node pools with full customization

you just output the outputs

when you say multiple node pools

yeah

gets verbose until .12 where you can output the module itself

you mean private subnet instances, public subnet instances?

no, so you could create a GPU node pool, a high compute node pool, high memory node pool, spot instance node pool, etc

ah

all part of the same cluster

can i do public/private too?

yea, i don’t see why not (but haven’t tested that)

…and generally wouldn’t recommend public anything except for ALBs

AWS supports NLBs now

and k8s as well

@btai the EKS example works well

No explicit depends on

Terraform does it automatically

@Andriy Knysh (Cloud Posse) i figured

how would you do multiple node pools?

just multiple module eks workers?

Yes

And add its security groups to the cluster security group

good example https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/examples/complete/main.tf
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

just imagine that configured differently for different types
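
(Roughly, a hedged sketch of two pools; the real module takes more inputs, which are omitted here for brevity:)

module "eks_workers_general" {
  source        = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master"
  namespace     = "${var.namespace}"
  stage         = "${var.stage}"
  name          = "general"
  instance_type = "m5.large"
}

module "eks_workers_gpu" {
  source        = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master"
  namespace     = "${var.namespace}"
  stage         = "${var.stage}"
  name          = "gpu"
  instance_type = "p2.xlarge"
  # same vpc/subnet/cluster wiring as the other pool, omitted here
}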

how do you deploy your pods to the correct node pools?


say a pod needs to go on a GPU instance?


I’m a k8s noob; struggling this minute to be exact, lol

i am as well, just a few months

weeks for me

definitely dazed; (maybe a month)

trying to rebuild jenkins in here now (for efs vs ebs)

Probably by tagging the nodes, then k8s has labels and annotations to select the required nodes for pods

ahhhh…yes. I forgot about tagging

thats right

@Andriy Knysh (Cloud Posse) is there a list of resources that need to have the kubernetes.io/cluster/cluster_name tag?


this is freaking amazing. let’s give it up for @Andriy Knysh (Cloud Posse)



with an average 1 hour turnaround!!

ridiculous!

and that’s well over 500 CRs in the past 90 days

we’re a small company. this astounds me.

indeed

thanks guys

@btai as far as I know, these two tags are required for EKS:

thanks!

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

for cluster https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L33
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

as was described in the docs
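
(In 0.11 syntax the cluster tag has an interpolated key, so the map() function is needed rather than a map literal; a sketch, with var.cluster_name as a stand-in for however the name is built:)

tags = "${merge(var.tags, map("kubernetes.io/cluster/${var.cluster_name}", "owned"))}"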

Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs


Latest module: https://github.com/cloudposse/terraform-aws-vpn-connection
Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network - cloudposse/terraform-aws-vpn-connection

Create site-to-site VPN connections

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

what is the purpose of the join in role = "${join("", aws_iam_role.default.*.name)}"

that’s when dealing with counts for enablement/disablement of a resource

if count = 0, aws_iam_role.default.0.name would error

the splat operator * is required

we call this the “join-splat” pattern
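
(A minimal standalone illustration; the assume-role policy is passed in as a variable just to keep the sketch short:)

resource "aws_iam_role" "default" {
  count              = "${var.enabled == "true" ? 1 : 0}"
  name               = "example"
  assume_role_policy = "${var.assume_role_policy}"
}

# with count = 0, aws_iam_role.default.0.name would error,
# but the splat returns an empty list and join collapses it to ""
output "role_name" {
  value = "${join("", aws_iam_role.default.*.name)}"
}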


yeah i realized you guys are putting count enables everywhere

makes sense

yea, makes it easier to enable/disable functionality

when composing modules in other modules
2019-01-25

Check this up. @antonbabenko created a list with terraform best practice. https://www.terraform-best-practices.com/

sidetracking but nice find gitbooks

Any tips for a tool to generate docs out of markdown? We want to have a website for our terraform modules repo

We use a pre-commit hook generated by: https://github.com/segmentio/terraform-docs
Generate documentation from Terraform modules in various output formats - segmentio/terraform-docs

Yeah we use that for the markdowns in the repo, but wanted to make a website out of it

Gotcha

looking for something simple

@pecigonzalo There are programs to convert markdown to other formats. But are you looking for something that would collect the doc for all your modules?

Yeah, more like live collect from a branch

I guess, similar to what gitbooks does

or github pages

i’ve used both mkdocs and sphinx, both are pretty simple

well, calling sphinx “simple” is probably not accurate i guess. but once past the initial learning curve, to where you have CI generating the html for the static site, actually maintaining the docs is quite simple

there are all sorts of static site generators, many have plugins for IDEs, so you can see right in your editor how it will render

mkdocs, sphinx, hugo, jekyll, etc etc

pelican

we use hugo for docs.cloudposse.com

Nice. I’m going to push soon for a dev portal internally as well so will give Hugo a look

Forecastle is a control panel which dynamically discovers and provides a launchpad to access applications deployed on Kubernetes – [✩Star] if you’re using it! - stakater/Forecastle

i want to try this out

for the dev portal

Ill give them a look, thanks for the suggestions

you can even take a gitlab/github project, disable the source code feature, and it’ll default the home page to the wiki

I did not know that, nice

mkdocs is great for general kinda runbook/index sites

@maarten vuepress for static sites… check out airship.tf

@Andriy Knysh (Cloud Posse) have you seen this before?
* module.eks_workers.data.aws_ami.eks_worker: data.aws_ami.eks_worker: UnauthorizedOperation: You are not authorized to perform this operation.

Trying to get the latest EKS ami from the Amazon account

no, i did not

are you using our example?

data "aws_ami" "eks_worker" {
filter {
name = "name"
values = ["${var.eks_worker_ami_name_filter}"]
}
most_recent = true
owners = ["602401143452"] # Amazon
}

under what user are you provisioning the cluster?

an admin user

hmm, maybe something changed on AWS side

if you don’t resolve it, I’ll take a look later today

so I’m assuming a role (to switch AWS accounts) and im wondering if thats the reason why?

What permissions does the role have?

so the master account has no permissions

but im assuming a role into another account (where the provisioning actually takes place)

and that role has admin

I set the profile for the backend

hmm

i wonder if i need to set the profile for the provider too

@Andriy Knysh (Cloud Posse) that was it

how hard would it be to override the label format with cp modules?

namespace-stage-product means:
blah-dev-product1 blah-dev-product2 blah-test-product1 blah-test-product2

you do not see the products together. we generally use:
product-stage

They would never be in the same account, right?


prod is solo

dev, qa, uat are together

prod+support are together

So, there’s the environment field, but it’s not been integrated into all of our modules

If you want to add it as needed, I think we’d be okay with that. @Andriy Knysh (Cloud Posse)?

how’s it format things?

The alternative is to take this into the stage name.

or the namespace

namespace cp-blog

true. that’s a fair work-around

Note to self: do not delete your state when testing TF before destroying resources.

@Erik Osterman (Cloud Posse) we need a route table per nat gateway correct?

So looking at the code yesterday, I couldn’t see where that requirement was implied or required. The underlying requirement is one subnet per AZ and therefore one NGW per AZ.

anyone hit this when starting up a Fargate container? (using cp modules)
CannotStartContainerError: API error (500): failed to initialize logging driver: failed to create Cloudwatch log stream: ResourceNotFoundException: The specified log group does not exist. status code:

hrm

and you create the log group?

nah. thought the module would, but guess i should add that into the TF?

it depends which modules you use

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

makes sense

our “web app” module is more opinionated and creates it

many people choose not to use the web app module and go around it

yeah…going around it and seemed to have left that part out.

but it tries to connect to one which i didn’t provide so that was odd

thought it’d just ignore it

ahh….i see.
variable "log_options" {
type = "map"
description = "The configuration options to send to the `log_driver`"
default = {
"awslogs-region" = "us-west-2"
"awslogs-group" = "default"
"awslogs-stream-prefix" = "default"
}
}
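
(So when going around the web-app module, a matching log group has to exist before the task starts; a minimal sketch matching those defaults:)

resource "aws_cloudwatch_log_group" "default" {
  name              = "default"
  retention_in_days = 30
}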

@Andriy Knysh (Cloud Posse) how are you installing aws-iam-authenticator on your EKS masters?

using the terraform kubernetes provider ?

hey guys, i’m running into an issue where my backend seems like it’s being executed after another role has been assumed. i have a setup where an admin account can assume roles into multiple environment accounts. while setting up dynamodb locking for the remote state file, i came across this:
terraform {
  required_version = ">= 0.11.3"

  backend "s3" {
    bucket                  = "<redacted>"
    dynamodb_table          = "terraform_locks"
    encrypt                 = true
    key                     = "infra/dev/vpc/terraform.tfstate"
    region                  = "us-east-1"
    shared_credentials_file = "~/.aws/admin-credentials"
  }
}

provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/admin-credentials"

  assume_role {
    role_arn     = "<redacted>"
    session_name = "Terraform"
  }
}
i have a dynamodb table existing in the admin account, but NOT the dev account (this is the environment in my example). however i get an error about required resources not being found, but once i create the same dynamodb table in the dev account, everything works fine and i see the lock being created. anyone have an idea what’s going on here?

the terraform docs state: Due to the assume_role setting in the AWS provider configuration, any management operations for AWS resources will be performed via the configured role in the appropriate environment AWS account. The backend operations, such as reading and writing the state from S3, will be performed directly as the administrator's own user within the administrative account.

so fwiw, there’s a role_arn supported by the backend configuration

that’s what we use

we explicitly specify the role_arn for both the provider and for the backend
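
(A hedged sketch of that arrangement, ARNs redacted as in the original:)

terraform {
  backend "s3" {
    bucket         = "<redacted>"
    key            = "infra/dev/vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform_locks"
    encrypt        = true

    # role in the account that owns the state bucket and lock table
    role_arn = "<redacted>"
  }
}

provider "aws" {
  region = "us-east-1"

  assume_role {
    # role in the target (e.g. dev) account where resources are managed
    role_arn = "<redacted>"
  }
}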

So if you plan on executing the backend in the admin account where you’re running terraform from, you should specify that account as the role?

Appreciate the quick response!

in our case, we have one bucket per account (“share nothing”)

Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

we sometimes do cross-account to lookup outputs from other modules

@btai re: how are you installing aws-iam-authenticator on your EKS masters?

you don’t

Oh?

EKS comes with it already installed

you need to do two things:

I installed it on my local machine

- Install it locally to be able to use kubectl with it
- Update kubecfg to use it

Ohh I think I need to do #2

Thanks

for #1, if you use geodesic, you are good to go (https://github.com/cloudposse/geodesic/blob/master/packages.txt#L3, https://github.com/cloudposse/packages/tree/master/vendor/aws-iam-authenticator)
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! https://slack.cloudposse.com/ - clou…
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages


for #2, kubeconfig gets generated here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/kubeconfig.tpl#L24
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Thanks!
2019-01-26

Hello Everyone! I am currently using terraform to launch infrastructure into an already existing vpc.

I am having trouble figuring out how to use an existing vpc id to deploy a resource. Any suggestions?

You can do a data lookup for the VPC id

@integratorz if you deploy into an existing VPC, you should know its ID, or at least some tags to do data lookup as @Steven mentioned

then use the ID in all other modules, e.g. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L68
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

lookup VPC by tags https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks-backing-services-peering/main.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
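
(For instance, a hedged sketch of the lookup; the Name tag here is hypothetical:)

data "aws_vpc" "existing" {
  tags {
    Name = "my-existing-vpc"
  }
}

# then pass the resolved ID into other modules/resources:
# vpc_id = "${data.aws_vpc.existing.id}"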

If I know the vpc id how would I go about specifying it in the config?

for instance I can’t just put vpc_id = "vpc-xxxxxx"

@integratorz What’s the context? What are you trying to do exactly?

@kritonas.prod I am working on automating deployments of new servers into an existing VPC. I was actually able to get that working but am now having trouble getting the host to join to the AWS Hosted AD Domain.

@integratorz Good stuff. Are you getting any errors?

Well, when I terraform apply, everything gets provisioned, then I get a timeout error at the end after about 5 minutes

Wow wouldn’t you know it

I had a file provisioner that was causing terraform to hang

so it wasn’t actually “not” joining the domain

was just never getting to that point


2019-01-27

I’m looking into your iam-user module and I was wondering if there’s a way to loop this and create a batch of users from a list. As far as I can understand I can’t use count with a TF module, and for/for-each loops are not available until TF 0.12. Any advice is welcome!

You can technically use count, but we don’t recommend it.

If the count changes, terraform will destroy/create all resources which is ugly.

We didn’t add support for a list of users for this reason.

Our strategy is to invoke the module once for each user. We create a file per user.

This strategy works really well with #atlantis. Since each user add is a PR.

Then to remove a user from the environment, we just revert the PR.

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Got it, this is probably what I’m doing as well, as a list would indeed be risky in case of an error. Great, and you’re sending out an email with the output.

Yea, that sends a transactional email. Though I think there’s a bug with that whereby it runs during plan as well because it uses the external data provider.

So count can be used with modules as well? I stumbled upon a years old TF ticket requesting this very thing, and it’s still open

No, count cannot be used with modules

but you can pass a list of users to a module and have the iam resource generate N of them

we didn’t implement that in the module for the aforementioned reasons
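
(For completeness, a sketch of what that resource-level pattern looks like in 0.11, and why it’s risky:)

variable "users" {
  type    = "list"
  default = ["alice", "bob", "carol"]
}

resource "aws_iam_user" "default" {
  count = "${length(var.users)}"
  name  = "${element(var.users, count.index)}"
}

# removing "alice" shifts every index, so terraform wants to
# destroy/recreate the remaining users, which is the caveat above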

Got it, I misinterpreted what you initially said. Thanks for the help!

No prob! We’re here to unblock you.
2019-01-28

hey my eks pods (aws-node-*) are constantly in a crashloopbackoff state and when I try to get the logs, I get
Error from server: Get https://<private_ip>:10250/containerLogs/kube-system/aws-node-tp9jn/aws-node?follow=true: dial tcp <private_ip>:10250: i/o timeout

any ideas?

i suppose i can add a bastion node

@btai without more context to me that looks like a port not accepting traffic - not too comfortable with EKS yet but i’d assume some security group issue

i’ve gotten into the EKS worker node w/ a bastion server

this is the error log im getting for aws-node pods
{"log":"ERROR: logging before flag.Parse: W0128 21:05:18.233802 13 client_config.go:533] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.\n","stream":"stderr","time":"2019-01-28T21:05:18.23463547Z"}
{"log":"Failed to communicate with K8S Server. Please check instance security groups or http proxy setting","stream":"stdout","time":"2019-01-28T21:05:48.268072298Z"}

@Andriy Knysh (Cloud Posse) i noticed in your eks example you’re spinning up your eks workers in public subnets, I’ve spun them up in private subnets im wondering if thats causing the issue


any suggestions here on how we should handle this? https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/31
This PR prevents module to wipe peering connection out when applying it to existing infrastructure. It just ignores changes for whole route part. Sadly, it looks like we can't state ignore_chan…

Use aws_route instead of the inline route schema, make the route optional, be sure to output the route table ids, let the user add routes in a separate module?
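
(A hedged sketch of that shape; the enable flag is hypothetical:)

resource "aws_route_table" "public" {
  vpc_id = "${var.vpc_id}"
  # no inline route blocks, so routes added elsewhere aren't wiped
}

resource "aws_route" "default" {
  count                  = "${var.default_route_enabled == "true" ? 1 : 0}"
  route_table_id         = "${aws_route_table.public.id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${var.igw_id}"
}

output "public_route_table_id" {
  value = "${aws_route_table.public.id}"
}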


good suggestion. we’ll try that.

The inline route is in public.tf, but not present in private

yea, we try to never do this, but this one slipped by.

The issue with deleting the route might be related to this, https://github.com/terraform-providers/terraform-provider-aws/issues/5631
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

We’ve submitted the linked pr to fix that one, but it’s blocked pending the 2.0 release, due to a change in behavior (for backwards compatibility in 1.x)
2019-01-29

@squidfunk have you met @maarten? He’s the author behind #airship , another well-executed “terraform module as a product”

@maarten is also in Berlin

@squidfunk is maintaining this module: https://github.com/squidfunk/terraform-aws-cognito-auth
Serverless Authentication as a Service (AaaS) provider built on top of AWS Cognito - squidfunk/terraform-aws-cognito-auth

@squidfunk Your module looks like very solid piece of code! Maybe I will need it in the future.

Thanks, glad to hear!

Haven’t met @maarten yet though I’ve just started abstracting services into Terraform modules. @antonbabenko feel free to drop me a note if you run into any problems or have feedback!


@squidfunk props for the additional headers in web app tightening security

@Maciek Strömich thanks - absolutely necessary in my opinion, otherwise the door to XSS/CSRF is wide open.

Hi @squidfunk I’m in the tropics and not touching code by rule of law here. Happy to take a look at your stuff when I’m back, I currently have a simple Cognito implementation with the ALB here: https://airship.tf/guide/ecs_service/load_balancing.html#application-lb-cognito-authentication
Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon

where are you vacationing?
Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon

Ko Pha-ngan


@maarten I’ll also take a look at your stuff Maybe I can draw some inspiration. Enjoy your vacation!

hey @squidfunk I note that you aren’t using an aws_cognito_identity_provider in your module. You pass a name around for one. Is it assumed you have already set one up?

I assume you mean the variable cognito_identity_pool_provider? It’s used as the developer_provider_name of the identity pool, see: https://github.com/squidfunk/terraform-aws-cognito-auth/blob/b5cf938b41b1bcb338f55639c2d44b8c76f299cc/modules/identity/main.tf#L140-L157
Serverless Authentication as a Service (AaaS) provider built on top of AWS Cognito - squidfunk/terraform-aws-cognito-auth

Maybe it’s not the best name for this variable. If you find any room for improvement, I’m very happy to discuss it on the issue tracker!

Ah, grand. did you look at implementing MFA at all? Seems several open issues with cognito resources to do with this

Nope, not yet. Currently the sole use case is SPA. However, I assume we could totally integrate that.

Thinking about it - we probably only would have to extend the API to handle auth challenges and integrate that into the frontend.

anyone using EKS with the kubernetes provider by providing the necessary config from your EKS resource?

i believe the EKS resource is missing client_cert and client_key as outputs?
provider "kubernetes" {
host = "<https://104.196.242.174>"
client_certificate = "${file("~/.kube/client-cert.pem")}"
client_key = "${file("~/.kube/client-key.pem")}"
cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}

we did not use the kubernetes provider for the EKS module, used these resources https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

all those attributes for the kubernetes provider are optional (client_certificate, client_key, etc.)

instead, you can provide kubeconfig, e.g. https://github.com/cloudposse/terraform-aws-kops-iam-authenticator-config/blob/master/main.tf#L17
Terraform module to create and apply a Kubernetes ConfigMap for aws-iam-authenticator
to be used with Kops to map IAM principals to Kubernetes users - cloudposse/terraform-aws-kops-iam-authentica…

which for the EKS cluster gets generated here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/outputs.tf#L1
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

@Andriy Knysh (Cloud Posse) i wanted to avoid using config_path because it would require exporting the kubeconfig to my local path for the new cluster

I guess i want it to work independently of what’s running in my local (someone who doesn’t have kubectl can still run terraform apply)

if you use aws-iam-authenticator, the kubeconfig still needs to include this command https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/kubeconfig.tpl#L24
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

also, the resource does not export anything else for auth except certificate_authority https://www.terraform.io/docs/providers/aws/r/eks_cluster.html#attributes-reference
Manages an EKS Cluster

i see, so ill need to use a null_resource to export the cluster config

yes

once you export kubeconfig, then you can use either the kubernetes provider or https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf#L30 to apply the config map
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
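
(One hedged way to do the export, with illustrative names; note the provider still reads the file path, so a two-phase apply may be needed on a cold start:)

resource "null_resource" "export_kubeconfig" {
  triggers = {
    cluster = "${module.eks_cluster.eks_cluster_id}"
  }

  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --name ${module.eks_cluster.eks_cluster_id} --kubeconfig ./kubeconfig.yaml"
  }
}

provider "kubernetes" {
  config_path = "./kubeconfig.yaml"
}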

annoying thing is when I run aws eks update-kubeconfig --name {cluster_name} --profile {aws_profile}, that kubeconfig user needs

env:
  - name: AWS_PROFILE
    value: {aws_profile}

appended to it

ah, one of the newest aws cli versions has the fix

[#3619] eks update-kubeconfig set env from –profile/AWS_PROFILE env

anyone use the ecs cli to deploy via docker-compose files? I’m just curious if it is worth exploring
2019-01-30

Hi, am working on adding windows domain administrator group to a server using DSC as an extension for terraform config. As part of terraform DSC config I am passing the argument as following

"access_group": "${local.accessgroup}"

with local as accessgroup = "${replace(var.accessgroup, "_", "\\")}"

and passing the value as accessgroup = "domain_groupname"

but it fails with the following error

"DSC Configuration ‘WindowsStaplesDefault’ completed with error(s). Following are the first few: PowerShell DSC resource MSFT_GroupResource failed to execute Test-TargetResource functionality with error message: Could not find a principal with the provided name [quilldevecomweb\sg-quillecmweb-quilldotcom-support] The SendConfigurationApply function did not succeed. LCM failed to start desired state configuration manually.".”

@praveen is it for Azure?

yes @Andriy Knysh (Cloud Posse)

it is for Azure platform

Hello, I am currently working on spinning up ec2 instances in AWS. I am wondering if there is a good way to attach an encrypted EBS volume to windows server’s C drive?

(cloudposse has no windows experience =)

Hey @Erik Osterman (Cloud Posse) I ended up figuring it out if you ever need it!

Not that you guys deal with windows probably ever lol

glad you got it running

Good evening. My tf uses the terraform-aws-modules/security-group/aws module to set up a default security group. Right after that I use cloudposse/elasticsearch/aws:0.2.0 to define an Elasticsearch cluster and make use of the security group. Roughly it looks like this:
module "securitygroup" {
source = "terraform-aws-modules/security-group/aws//modules/elasticsearch"
name = "${var.aws_env}_elasticsearch_default"
vpc_id = "${local.opsworks_vpc_id}"
ingress_cidr_blocks = ["${local.opsworks_vpc_cidr_block}"]
}
module "elasticsearch" {
source = "cloudposse/elasticsearch/aws"
version = "0.2.0"
security_groups = ["${module.securitygroup.this_security_group_id}"]

During plan/apply I get the error: Error: Error running plan: 1 error(s) occurred: * module.elasticsearch.aws_security_group_rule.ingress_security_groups: aws_security_group_rule.ingress_security_groups: value of 'count' cannot be computed

Somehow the ES module does not detect the dependency and does not know it has to wait for the outcome of the SG creation. I used the same sequence in other places without issues. Do you know a way around?

I can use the terraform ... -target module.securitygroup option to create the SG in a first run and then call apply a 2nd time, which creates Elasticsearch without issues.
And alternatively I can put the SG creation in a separate module, but I like the idea of keeping these together.
Thanks for any ideas.

Yes, 2-phased apply is basically your only option


@Tobias Hoellrich TF has trouble calculating counts between modules. Try to just create a SG using https://www.terraform.io/docs/providers/aws/r/security_group.html and provide it to module "elasticsearch" - it could work
Provides a security group resource.

another approach, if you use a consistent naming (e.g. by using https://github.com/cloudposse/terraform-null-label), you know the names and IDs of all resources in advance. So instead of providing the ID of the SG to the elasticsearch
module, you can provide the same ID from the label
module (e.g. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L97)
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
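
(A hedged sketch of the naming idea: deterministic names let other code find the SG without depending on its computed ID:)

module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace = "${var.namespace}"
  stage     = "${var.stage}"
  name      = "es"
}

resource "aws_security_group" "default" {
  name   = "${module.label.id}"
  vpc_id = "${var.vpc_id}"
  tags   = "${module.label.tags}"
}

# elsewhere, the SG can be located by the predictable name module.label.id
# (e.g. via a data "aws_security_group" lookup) instead of a computed output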
2019-01-31

@Tobias Hoellrich Look into TF’s overrides, I believe this might give you the most elegant solution. Just put the ES module part into a separate file and name it override.tf or postfix it with _override.tf so that it’s merged last, after everything else is created.
https://www.terraform.io/docs/configuration/override.html
Terraform loads all configuration files within a directory and appends them together. Terraform also has a concept of overrides, a way to create files that are loaded last and merged into your configuration, rather than appended.


And for the record:
- using a plain resource "aws_security_group" did not work and caused the same 'count' cannot be computed error
- override.tf also did not work and caused the same error

This is during the planning phase, right?

I ended up moving a bunch of security groups into a separate module which was applied before the ES module. Tks again.

Morning! I was trying to use terraform-aws-alb-target-group-cloudwatch-sns-alarms and got a bunch of errors that were due to me copy/pasting the example straight from the page, which defaulted to an older release with issues, do you normally update examples with versions?

@Alec We strive to keep the examples up to date but alas, there are too many modules to keep on top of

PRs welcome, or open an issue and we can have a look

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

Here’s a working example

Thanks @Erik Osterman (Cloud Posse) – I’m actually going to use that for another project, and I did successfully get it working once I noticed the 0.1.0 tag in the git ref – I’ll submit a PR shortly. I’m still learning how this is all structured, but it’s becoming much more familiar. Appreciate all your guys work.

thanks @Alec

Also, here’s a complete example from our “Service Catalog” that we actively maintain and deploy: https://github.com/cloudposse/terraform-root-modules/tree/master/aws/ecs
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

how do you guys handle the ordered_placement_strategy in an ecs service module, from an input perspective (passing a list of maps or map) when passing several strategies? I couldn’t find an example in cloudposse ecs service modules

team, any idea if this is coming soon ? https://github.com/terraform-providers/terraform-provider-aws/labels/service%2Froute53resolver
Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

pr’s been open for a while, they haven’t been merging much in the way of new features lately, https://github.com/terraform-providers/terraform-provider-aws/pull/6574
Fixes #6563. Includes: #6549 #6554 Acceptance tests (so far): $ make testacc TEST=./aws/ TESTARGS='-run=TestAccAwsRoute53ResolverEndpoint_' ==> Checking that code complies with gofmt r…

just docs, bugs, cleanup, and 0.12 prep mostly

makes sense

thanks @loren

I’m seeing the ecs-web-app flag with a basic image (nginx:latest
) on Fargate. Any thoughts here?
I don’t see any logs for it, but it does show:
Stopped reason Task failed ELB health checks in (target-group arn:aws:elasticloadbalancing:us-west-2:496386341798:targetgroup/sparkle-view-default-test/8bc8ccfff5c546df)
module "default_backend_web_app" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.10.0>"
namespace = "${local.application_name}"
stage = "default"
name = "${var.stage}"
vpc_id = "${module.stack.vpc_id}"
container_image = "nginx:latest"
container_cpu = "256"
container_memory = "512"
container_port = "80"
#launch_type = "FARGATE"
listener_arns = "${module.alb.listener_arns}"
listener_arns_count = "1"
aws_logs_region = "${data.aws_region.current.name}"
ecs_cluster_arn = "${aws_ecs_cluster.this.arn}"
ecs_cluster_name = "${aws_ecs_cluster.this.name}"
ecs_security_group_ids = ["${module.stack.vpc_default_security_group_id}"]
ecs_private_subnet_ids = ["${module.stack.vpc_private_subnets}"]
alb_ingress_healthcheck_path = "/"
alb_ingress_paths = ["/*"]
alb_ingress_listener_priority = 100
codepipeline_enabled = "false"
ecs_alarms_enabled = "true"
autoscaling_enabled = "false"
alb_name = "${module.alb.alb_name}"
alb_arn_suffix = "${module.alb.alb_arn_suffix}"
alb_target_group_alarms_enabled = "true"
alb_target_group_alarms_3xx_threshold = "25"
alb_target_group_alarms_4xx_threshold = "25"
alb_target_group_alarms_5xx_threshold = "25"
alb_target_group_alarms_response_time_threshold = "0.5"
alb_target_group_alarms_period = "300"
alb_target_group_alarms_evaluation_periods = "1"
environment = [
{
name = "PORT"
value = "80"
},
]
}

@johncblandii what do you mean by flag?

flap?

does / return 200?

flap…yes

got some progress. let me check back on the default

yeah, it is INACTIVE now

ok, check ports

is nginx on 80 or 8080?

yeah, just nginx:latest in use

checking ports

missing

port_mappings = [{
  "containerPort" = "${var.atlantis_port}"
  "hostPort"      = "${var.atlantis_port}"
  "protocol"      = "tcp"
}]

target group is port 80

ahh…on the default. 1 sec

port_mappings = [
  {
    "containerPort" = "80"
    "hostPort"      = "80"
    "protocol"      = "tcp"
  },
  {
    "containerPort" = "80"
    "hostPort"      = "443"
    "protocol"      = "tcp"
  }
]

without ssl on the image, 443 to 80 would still pass, right?

so on fargate, I think the hostPort must equal the container port

ah, right. i think i saw that in the docs
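
(So the corrected mapping for Fargate would be just the one pair; a sketch:)

port_mappings = [
  {
    "containerPort" = "80"
    "hostPort"      = "80" # on Fargate, hostPort must equal containerPort
    "protocol"      = "tcp"
  },
]

# TLS would instead terminate at the ALB via an HTTPS listener that
# forwards to this same port-80 target group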

def’ understand these ALBs more now

pretty quality

yea, they are

i love that they added the support for auth

just cognito right, or did they add basic auth finally?

no, cognito


what you added

that might be useful for another project we’re using

@maarten go back under your palmtree and enjoy your vacay


yeah, enough slack for today


just one with hosts for our domains all mapped through TF. (awyeah)

@johncblandii what’s also nice is the http->https redirect

yeah, played w/ that as well (not in the tf setup yet)
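
(For reference, a hedged sketch of that ALB redirect in 0.11-era provider syntax; the ARN variable is illustrative:)

resource "aws_lb_listener" "http_redirect" {
  load_balancer_arn = "${var.alb_arn}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}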