#terraform (2018-12)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2018-12-01

Deploying cloudposse/terraform-aws-eks-cluster
https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete but running into the following errors:
module.eks_cluster.aws_security_group_rule.ingress_security_groups: aws_security_group_rule.ingress_security_groups: value of 'count' cannot be computed
module.eks_workers.module.autoscale_group.data.null_data_source.tags_as_list_of_maps: data.null_data_source.tags_as_list_of_maps: value of 'count' cannot be computed
I see that autoscale_group
was updated https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/5 to use terraform-terraform-label
which from searching the Slack history seemed to be an issue.
and eks_cluster
is also failing at ingress_security_groups
https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L69
I’m not passing in any tags or security group so the terraform-aws-eks-cluster
module is just using the defaults.


Yep. I’ve been going through that.

Ok :-)

When targeting the eks_cluster
module replacing:
allowed_security_groups = ["${distinct(compact(concat(var.allowed_security_groups_cluster, list(module.eks_workers.security_group_id))))}"]
with
allowed_security_groups = ["sg-09264f5790c8e28e1"]
resolves the issue.
So right now I’m trying to target the eks_workers
module.

How do you correctly format the tag
variable?

@Andriy Knysh (Cloud Posse) we should have a Makefile in that example

tags="map('environment','operations')"

No you can pass tags just like normal

A map

Some places show it written using interpolation because it used to be that way in older versions of terraform

And that got copied and pasted everywhere

I’ll have to check it again, when we tested a few months ago, we didn’t see those errors

Passing in the tags while targeting the eks_workers
module didn’t work.

Aside from the hard coding you did as a workaround, no other changes from the example?

Commenting out the null_data_source
all together when targeting the eks_workers
module resolves the issue.
# data "null_data_source" "tags_as_list_of_maps" {
# count = "${var.enabled == "true" ? length(keys(var.tags)) : 0}"
#
# inputs = "${map(
# "key", "${element(keys(var.tags), count.index)}",
# "value", "${element(values(var.tags), count.index)}",
# "propagate_at_launch", true
# )}"
# }
resource "aws_autoscaling_group" "default" {
count = "${var.enabled == "true" ? 1 : 0}"
name_prefix = "${format("%s%s", module.label.id, var.delimiter)}"
...
service_linked_role_arn = "${var.service_linked_role_arn}"
launch_template = {
id = "${join("", aws_launch_template.default.*.id)}"
version = "${aws_launch_template.default.latest_version}"
}
# tags = ["${data.null_data_source.tags_as_list_of_maps.*.outputs}"]
lifecycle {
create_before_destroy = true
}
}

@Erik Osterman (Cloud Posse) no I tried to pass in minimal variables and use the eks module defaults.


With the null_data_source
in eks_workers
commented out I had hoped the full eks module would work (I reverted the eks_cluster
allowed_security_groups
back to the default) but eks_cluster
still errors.
module.eks_cluster.aws_security_group_rule.ingress_security_groups: aws_security_group_rule.ingress_security_groups: value of 'count' cannot be computed

Unfortunately don’t know what is causing this. Seems like something has changed in terraform recently that breaks it.

On a serious note, on AWS use kops for Kubernetes

We wrote these modules to evaluate the possibility of using EKS, but we consider terraform not the right tool for the job

No automated rolling upgrades

No easy way to do drain and cordon

So from a lifecycle management perspective we only use kops with our customers

Ah. Ok. Easy enough. Thanks @Erik Osterman (Cloud Posse)


But it doesn’t help with upgrades yet either

I’ve heard lots of great things about kops so I’ll just roll with that. The goal is to deploy the statup app and test namespacing per PR.

Thanks for your help @Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse)

np, sorry the EKS module didn’t work for you. We tested it many times, but it was a few months ago and we didn’t touch it since then. Those count errors are really annoying. Maybe HCL 2.0 will solve those issues
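For reference, a minimal illustration of the limitation (not the module’s actual code; names are made up): in Terraform 0.11, count has to be resolvable at plan time, so computing it from variables works, while computing it from another resource’s attributes fails.

# OK: count comes from a variable, known at plan time
resource "aws_security_group_rule" "ingress" {
  count                    = "${length(var.allowed_security_groups)}"
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.default.id}"
  source_security_group_id = "${element(var.allowed_security_groups, count.index)}"
}

# Fails with "value of 'count' cannot be computed" when the list mixes in
# attributes of resources that don't exist yet, e.g.
# count = "${length(concat(var.allowed_security_groups, list(module.eks_workers.security_group_id)))}"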


Hi. I am facing some sort of issue with dns cert validation using cloudflare
provider

it works perfectly fine in a different tf module

but fails in my compute module

i am not able to figure out what’s going on


@rohit what error are you receiving?

* aws_alb_listener.my_app-443: Error creating LB Listener: CertificateNotFound: Certificate 'arn:aws:acm:us-east-1:011123456789:certificate/a7f83c2e-57c4-4046-b81a-f9a6fe153ad0' not found
status code: 400, request id: 6c1234e8-f111-12e3-af9f-19f5bf9b2312
2018/12/01 11:41:44 [ERROR] root.compute.my_app: eval: *terraform.EvalSequence, err: 1 error(s) occurred:
* aws_alb_listener.my_app-443: Error creating LB Listener: CertificateNotFound: Certificate 'arn:aws:acm:us-east-1:011123456789:certificate/a7f83c2e-57c4-4046-b81a-f9a6fe153ad0' not found
status code: 400, request id: 6c1234e8-f111-12e3-af9f-19f5bf9b2312
Error: Error applying plan:
3 error(s) occurred:
* module.compute.module.my_app.cloudflare_record.my_app_cloudflare_public: 1 error(s) occurred:
* cloudflare_record.my_app_cloudflare_public: Error finding zone "crs.org": ListZones command failed: error from makeRequest: HTTP status 400: content "{\"success\":false,\"errors\":[{\"code\":6003,\"message\":\"Invalid request headers\",\"error_chain\":[{\"code\":6103,\"message\":\"Invalid format for X-Auth-Key header\"}]}],\"messages\":[],\"result\":null}"
* module.compute.module.my_app.cloudflare_record.dns_cert_validation: 1 error(s) occurred:
* cloudflare_record.dns_cert_validation: Error finding zone "crs.org": ListZones command failed: error from makeRequest: HTTP status 400: content "{\"success\":false,\"errors\":[{\"code\":6003,\"message\":\"Invalid request headers\",\"error_chain\":[{\"code\":6103,\"message\":\"Invalid format for X-Auth-Key header\"}]}],\"messages\":[],\"result\":null}"
* module.compute.module.my_app.aws_alb_listener.my_app-443: 1 error(s) occurred:
* aws_alb_listener.my_app-443: Error creating LB Listener: CertificateNotFound: Certificate 'arn:aws:acm:us-east-1:011123456789:certificate/a7f83c2e-57c4-4046-b81a-f9a6fe153ad0' not found
status code: 400, request id: 6c1234e8-f111-12e3-af9f-19f5bf9b2312

i replaced original values with dummy values but that’s the error

The certificate not found error for me has always been when I accidentally created it in one region but try to use it in a different region

i will double check

and also if DNS is not configured correctly

then AWS can’t read the record and validation fails

Seems that authentication doesn’t work to cloudflare.

i use similar code for dns validation in different module and it does work

This issue was originally opened by @JorritSalverda as hashicorp/terraform#2551. It was migrated here as part of the provider split. The original body of the issue is below. In a CloudFlare multi-u…

@rohit you sure that Cloudflare owns the zone [crs.org](http://crs.org)
? Name servers point to
IN ns13.dnsmadeeasy.com. 86400
IN ns15.dnsmadeeasy.com. 86400
IN ns11.dnsmadeeasy.com. 86400
IN ns12.dnsmadeeasy.com. 86400
IN ns10.dnsmadeeasy.com. 86400
IN ns14.dnsmadeeasy.com. 86400

i replaced the original zone with a different value

kk

check if DNS works for the zone (name servers, etc.)

using nslookup ?

any tool to read the records from the zone

or it’s just what it says Invalid format for X-Auth-Key header

Do you want to request a feature or report a bug? Reporting a bug What did you do? Ran traefik in a windows container and set cloudlfare to be the dnsProvider. What did you expect to see? I expecte…

somehow it worked now

cert validation does take forever

it usually takes up to 5 minutes

yeah it does

how do you perform dns cert validation ?

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate

same DNS validation

basically use route53 instead of cloudflare
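For anyone following along, a rough sketch of that Route53 DNS-validation flow (domain and zone id are placeholders):

resource "aws_acm_certificate" "default" {
  domain_name       = "example.com"
  validation_method = "DNS"
}

# CNAME record that proves control of the domain to ACM
resource "aws_route53_record" "validation" {
  zone_id = "${var.zone_id}"
  name    = "${aws_acm_certificate.default.domain_validation_options.0.resource_record_name}"
  type    = "${aws_acm_certificate.default.domain_validation_options.0.resource_record_type}"
  records = ["${aws_acm_certificate.default.domain_validation_options.0.resource_record_value}"]
  ttl     = 300
}

# waits until ACM has seen the record and issued the cert
resource "aws_acm_certificate_validation" "default" {
  certificate_arn         = "${aws_acm_certificate.default.arn}"
  validation_record_fqdns = ["${aws_route53_record.validation.fqdn}"]
}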

Yes

We recommend 2 types of domains

One is for infrastructure and service discovery

The other is for branded (aka vanity) domains

Host vanity domains on CloudFlare

And CNAME those to your infra domains

Ensure that Cloudflare is configured for end to end encryption

@Erik Osterman (Cloud Posse) how can i ensure that it is configured for end to end encryption ?

we do use https for all our connections and TLSv1.2

It’s a flag in their UI presumably exposed via API

Is it called Always Use HTTPS
?

Yup

I am sure it is checked but i will double check

I am on my phone so can’t screenshot it

no worries

but i didn’t understand how it is important in this context

It’s relative to our suggestion of creating 2 levels of domains

Use ACM with route53 for service discovery domain

Use Cloudflare with vanity domains

Use end to end encryption to ensure privacy

we do use route53 for internal services and use cloudflare for publicly exposed endpoints

Just red flags go up for me if using Cloudflare to validate acm certificates

because the name on the cert matches with the endpoint we have to use cloudflare to do the validation

Ok

Just curious, if we create an internal ALB, then the only way is to do email validation, correct ?

So I think that with Cloudflare things can work differently

We are using route53 with acm on internal domain for docs.cloudposse.com

Internal domain is .org

We use DNS validation

Then we use Cloudflare to cname that to the origin

I am pretty certain we didn’t need the cert name to match because I don’t think CloudFlare is passthru

Cloudflare requests the content from the origins and performs transformations on the content

Which is only possible if it first decrypts the content MITM style

E.g. compression and minification

It is not a simple reverse proxy

after what you said, when i think about it i am not able to remember why we had to do it this way

i will have to check with someone from OPS team


did you see my above question ? because you are on phone i will paste it again
if we create an internal ALB, then the only way is to do email validation, correct ?

Ohhhhhhhh I see

Internal alb, so non routable

Good question. I have not tried acm dns validation with internal domains, but that would make the most sense since there is no way for AWS to reach it due to the network isolation.

we currently use email validation for internal alb but was not sure if that can be automated in terraform

Perhaps there are some workarounds where you allow some kind of AWS managed vpc endpoint to reach it, kind of like you can do with Private S3 buckets

For example with classic ELBs we could allow some AWS managed security group access to the origin.

Now with this being a VPC not sure if they’ve done anything to accommodate it.

Please let me know if you find a conclusive answer yes/no

will do. I am trying to find more about this or any other alternative in terraform for internal alb


Let’s start with a quick quiz: Take a look at haveibeenpwned.com (HIBP) and tell me where the traffic is encrypted between: You see HTTPS which is good so you know it’s doing crypto things in your browser, but where’s the other end of the encryption? I mean at what


@Erik Osterman (Cloud Posse) my google terraform search for anything now returns one of your modules

Lol yes we tend to do that…

that’s pretty awesome

As far as I know, there is no way to do email validation using TF. Maybe some third party tools exist

@Andriy Knysh (Cloud Posse) that’s what i thought. so what happens when i pass validation_method = "EMAIL"
to the aws_acm_certificate
resource

FAQs for AWS Certificate Manger

Aha you need to establish a private CA

Q: What are the benefits of using AWS Certificate Manager (ACM) and ACM Private Certificate Authority (CA)?

ACM Private CA is a managed private CA service that helps you easily and securely manage the lifecycle of your private certificates. ACM Private CA provides you a highly-available private CA service without the upfront investment and ongoing maintenance costs of operating your own private CA.

i think that would work

i am assuming there is a terraform resource to create private cert

Yea let me know!

Provides a resource to manage AWS Certificate Manager Private Certificate Authorities
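A rough sketch of that resource, going by the provider docs (subject is a placeholder):

resource "aws_acmpca_certificate_authority" "default" {
  type = "SUBORDINATE"

  certificate_authority_configuration {
    key_algorithm     = "RSA_4096"
    signing_algorithm = "SHA512WITHRSA"

    subject {
      common_name = "example.internal"
    }
  }
}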

That’s a lot of effort for an internal network with SSL offloading by the ALB, meaning it ends up being HTTP in the same network anyway.

” curious, if we create an internal ALB, then the only way is to do email validation, correct ?”
That has nothing to do with it, as long as ACM can check if the CNAME resolves to the .aws-domain it will work out with creating the certificate.

Cool - are you doing this with #airship?

@maarten so if i create a record for the internal alb in route53 private hosted zone then i don’t have to do any cert validation ?

Cert validation is independent of any load balancer.

For AWS it’s just a mechanism to know that you control that domain, either through E-mail or DNS.

but i still need a cert for internal service for https, correct ?

yes

let me look it up for you

AWS Certificate Manager (ACM) does not “currently” support SSL certificates for private hosted zones.

Good to know… so your options are in fact then to
- create your own ssl authority or
- use a public domain instead

Certificate creation is outside the scope of the module, as it’s more likely to have one wildcard cert for many services

Let’s continue here Rohit, sub-threads confuse me

i noticed that when i do terraform destroy
it destroys all the resources but the certificate manager still has the certs that were created by terraform. Is there a flag that needs to be passed for it to destroy those certs ?

That’s odd, are you 100% sure ?

yes

I saw the cert get destroyed

maybe you can try and see if you can repeat that behaviour

trying now

Looks like it was just that one time

when i tried again everything was destroyed gracefully

i am trying to pass params to my userdata template,
data "template_file" "user-data" {
template = "${file("${path.cwd}/modules/compute/user-data.sh.tpl")}"
vars = {
app_env = "${terraform.workspace}"
region_name = "${local.aws_region}"
chef_environment_name = "${local.get_chef_environment_name}"
}
}

and in my template, i have the following
export NODE_NAME="$$(app_env)app-$$(curl --silent --show-error --retry 3 <http://169.254.169.254/latest/meta-data/instance-id>)" # this uses the EC2 instance ID as the node name

the app_env
is not being evaluated

it just stays as is

is there anything wrong in my template ?

@rohit i think you need one $
, not two, and curly braces

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers


the format is ${var}

i tried that

export NODE_NAME="$(app_env)app-$$(curl --silent --show-error --retry 3 <http://169.254.169.254/latest/meta-data/instance-id>)" # this uses the EC2 instance ID as the node name

curly braces

yeah that was a silly mistake by me

when i updated the template and ran terraform plan
followed by terraform apply
the userdata on the launch template did not get updated

i am assuming it is because of the difference between template vs resource

@Andriy Knysh (Cloud Posse) is this true ?

Renders a template from a file.

template_file
uses the format ${var}

so i would have to wrap curl command in ${}

yes

ah wait

not sure

you use ${var}
format in the template if you set the var
in template_file

this $$(curl --silent --show-error --retry 3 <http://169.254.169.254/latest/meta-data/instance-id>)
is not evaluated by TF I suppose

is it called when the file gets executed on the instance?

yes, the userdata gets executed on the instance

yea those formats are different and often confusing, have to always look them up
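To recap the escaping rules, the corrected line in user-data.sh.tpl would look something like this: ${app_env} is substituted by template_file at render time, while $$(...) escapes to a literal $(...) that the shell evaluates at boot.

export NODE_NAME="${app_env}app-$$(curl --silent --show-error --retry 3 http://169.254.169.254/latest/meta-data/instance-id)"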

yeah. similarly, i always have to look up symlink
syntax for source and destination

I am using terraform-terraform-label
and terraform-aws-modules/vpc/aws
module, i noticed that even though i have name = "VPC"
in terraform-terraform-label
it is converting it to lowercase

and setting the Name as myapp-vpc
instead of myapp-VPC

it does convert to lower case https://github.com/cloudposse/terraform-terraform-label/blob/master/main.tf#L4
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

yeah i saw that, can i know why is it being converted to lowercase?

i believe for consistency. is it a problem for you? (it does not convert tags)

yeah i was wanting to keep whatever was defined in the name if possible

Is there anything i could do to achieve this ?

i mean i could do the same thing in my module but without converting the tags to lowercase

locals {
  enabled    = "${var.enabled == "true" ? true : false }"
  id         = "${local.enabled == true ? join(var.delimiter, compact(concat(list(var.namespace, var.stage, var.name), var.attributes))) : ""}"
  name       = "${local.enabled == true ? format("%v", var.name) : ""}"
  namespace  = "${local.enabled == true ? format("%v", var.namespace) : ""}"
  stage      = "${local.enabled == true ? format("%v", var.stage) : ""}"
  attributes = "${local.enabled == true ? format("%v", join(var.delimiter, compact(var.attributes))) : ""}"

  # Merge input tags with our tags.
  # Note: `Name` has a special meaning in AWS and we need to disambiguate it by using the computed `id`
  tags = "${
    merge(
      map(
        "Name", "${local.id}",
        "Namespace", "${local.namespace}",
        "Stage", "${local.stage}"
      ), var.tags
    )
  }"
}

Do you have plans to do the same in your module ?

we use it in many other modules and we convert to lower case. actually, we always specify lower-case names. So for backwards compatibility we would not change it to not convert. But you might want to open a PR against our repo with an added flag to specify whether to convert or not (e.g. var.convert_case) with default true

i can definitely do that

i don’t think that would take a lot of effort

i might be wrong

i guess the tricky part would be to figure out nested conditional statements. I believe nested conditional statements will become easier in terraform v0.12

i am not sure if this is a valid syntax in terraform
id = "${local.enabled == true ? (local.convert_case == true ? lower(join(var.delimiter, compact(concat(list(var.namespace, var.stage, var.name), var.attributes)))) : join(var.delimiter, compact(concat(list(var.namespace, var.stage, var.name), var.attributes))))) : ""}"

can anyone please confirm ?

more or less, but i would store the non-transformed value somewhere as a local

rather than duplicate it twice

yeah that would make more sense
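Something like this sketch (local names are made up), computing the joined id once and applying the case conversion on top:

locals {
  id_full = "${join(var.delimiter, compact(concat(list(var.namespace, var.stage, var.name), var.attributes)))}"
  id      = "${local.enabled ? (local.convert_case ? lower(local.id_full) : local.id_full) : ""}"
}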

how do i get access to push my git branch
to your repository ?

oh, start by creating a github fork of the repo

then make your changes to the fork

from there, you can open up a PR against our upstream

sorry, i thought that i can push my feature branch directly

i opened a PR

can you paste the link

PR link ?

reviewed

Also, can you add your business use-case to the “why” of the description


i wanted to use convert_case
variable but to make it match your variable naming convention i used convertcase
but i took your suggestion and used convert_case

i think we use underscores everywhere

if we don’t it’s because the upstream resource did not (e.g. terraform)

@Erik Osterman (Cloud Posse) thanks. Even though this was a small change it was fun working on it

yea, this will make it easier for you to contribute other changes

yea

thanks! we’ll get this merged tomorrow probably

that would be great. when do you generally push to registry ?

oh, registry picks up changes in real-time

so as soon as we merge and tag a release it will be there.

that is awesome

since it’s late, it’s best to have andriy review too

i miss things

no worries

what tool do you use to automatically push to registry?

the registry actually is just a proxy for github

all we need to do is register a module with the registry

it’s not like docker registry where you have to push

ohh i thought it was similar to docker registry

but this is better i guess, this may have its pros and cons

i like that there’s zero overhead as a maintainer

nothing we need to remember to do

everytime i need to push a package to rubyforge i gotta dig up my notes

i’m curious - is anyone else peeved by the TF_VAR_
prefix? I find us having to map a lot of envs. Plus, when using chamber they all get uppercased.

I want a simple cli that normalizes environment variables for terraform. it would work like the existing env
cli, but support adding the TF_VAR_
prefix and auto-lower everything after it.

e.g.

tfenv terraform plan

tfenv command

this could be coupled with chamber.

chamber exec foobar -- tfenv terraform plan

(not to be confused with the rbenv
cli logic)
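For context, the mapping terraform expects is exact and case sensitive; a tiny sketch (variable name made up):

variable "image_tag" {}

# terraform only reads TF_VAR_image_tag (exact case);
# an upper-cased IMAGE_TAG or TF_VAR_IMAGE_TAG from chamber won't match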
2018-12-02

If case is the main complaint, I wonder if there’s been any discussion in making the env case insensitive in terraform core?

I’m not turning up any discussion around it, but I’m on my phone and not searching all that extensively

i know i’ve tested it for case insensitivity and it does not work

so i have a prototype of the above that i’m about to push

it supports whitelists and blacklists (regex) for security (e.g. exclude AWS_*
credentials)

this is going to make interoperability between all of our components much cleaner and our documentation easier to write.

Oh, I know it’s case sensitive for sure, I mean something like a feature request to make it case insensitive so upper case works (which is more inline with env convention), without breaking backwards compatibility

Ah gotcha, yea can’t find any feature request for that

I don’t see that getting approved though…

Might be worth having the conversation at least, since they really are going against env convention as it is…

Not related to the main discussion, but still :) consider using the direnv tool to automatically set env per directory. I use it with terragrunt to set the correct IAM role to assume per directory.

Do you use –terragrunt-working-dir with that? Or just change directory first?

I just change directory

never had a need to play with –terragrunt-working-dir

Ahhh ok, I only use the flag myself

yea, direnv is nice for that
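A sketch of what such a per-directory .envrc might look like (values are placeholders):

# .envrc
export AWS_PROFILE=example-prod-admin
export TF_VAR_region=us-west-2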

my problem is that we use kops, gomplate, chamber, etc

and the TF_VAR_
stuff makes it cumbersome

chamber upper cases everything

kops should not depend on TF_VAR_
envs

so we end up duplicating a lot of envs (or reassigning them)

so i want a canonical list of normal envs that we can map to terraform

@antonbabenko though i’ve not gone deep on direnv

do you see a way to achieve the same thing in a DRY manner?

if so, i’d prefer to use direnv

basically, i’d like to normalize envs (lowercase) and add TF_VAR_
prefix without defining by hand for each env

Do kops and chamber not namespace the envs they read?

If not, I suppose where I’m going is, why not use the TF_VAR_ env in those configs? All the prefixing would be annoying, but at least the values would only be defined once

I don’t like making terraform the canonical source of truth

Oh chamber upper cases things. Blergh

Yea plus that

I guess I don’t see how to inform the different utilities which envs to use or which env namespaces/prefixes to apply per utility… I think it would require going down the road you started with a new wrapper utility with its own config file

I once used a shell script which converted env vars for similar reason, but it was before I discovered direnv.

I think I am going to combine what I have with direnv

Per your tip :-)

Yeah, sometimes a little copy-paste is still the simplest abstraction…

@tamsky are you also using direnv?


we added support for this yesterday

0.41.0
2018-12-03

The problem I found with using direnv previously on a large project was that users of the project started to expect a .envrc file anytime anything could or needed to be set in env vars. Maybe worth thinking about if there is a clear cut separation between what IS in direnv and what is NOT

@Jan true - and to your point that you raised earlier with me about having a clear cut delineation of what goes where

i don’t have a “best practice” in mind yet. i need to let this simmer for a bit.

consistency is better than correctness

indeed.

@Jan going to move to #geodesic

@Andriy Knysh (Cloud Posse) Did you get a chance to review my PR ?

Is it possible to attach ASG to existing cloudwatch event using terraform ?

yes, that’s possible

are you using our autoscale group module?

here’s an example: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/autoscaling.tf
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

@Jake Lundberg (HashiCorp) it was just brought to our attention that the registry is mangling html entities: https://registry.terraform.io/modules/cloudposse/ansible/null/0.4.1


while on github we see


OK, I’d open an issue in Github.

I can’t find the appropriate repo to open it against.

@Erik Osterman (Cloud Posse) i am using aws_autoscaling_group
resource, so once an ASG is created i want to go and add the ASG name in cloudwatch event so that it will disassociate nodes from chef server

makes sense ?

@rohit I’m not sure on the question there

I’ve seen those disassociation scripts put in as shutdown scripts

Although that does need a clean shutdown event

Let me explain this in a better way. When an autoscaling event is triggered, the userdata script gets executed and it associates the node with the chef server. When a node in this ASG is terminated, a cloudwatch event is triggered which in turn calls a lambda function to disassociate the node from the chef server

i was planning to do this in terraform

so i am using aws_autoscaling_group
resource to create ASG

OK, still not sure what the question is :D

@rohit as @Erik Osterman (Cloud Posse) pointed out, you can connect the ASG to CloudWatch alarms using Terraform https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/autoscaling.tf#L27
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

i want to connect the ASG to cloudwatch event rule

{
  "source": [
    "aws.autoscaling"
  ],
  "detail-type": [
    "EC2 Instance Terminate Successful"
  ],
  "detail": {
    "AutoScalingGroupName": [
      "MY-ASG"
    ]
  }
}

Looks like you are on the right track for that. I’m still not seeing a question or explanation of what problem you are having. Have you looked at what @Andriy Knysh (Cloud Posse) posted?

I think what i need is aws_cloudwatch_event_rule
and not aws_cloudwatch_metric_alarm

i am struggling to figure out how to add my asg name to cloudwatch event rule

Events in Amazon CloudWatch Events are represented as JSON objects. For more information about JSON objects, see RFC 7159 . The following is an example event:

resource "aws_cloudwatch_event_rule" "console" {
name = "name"
description = "....."
event_pattern = <<PATTERN
{
"detail-type": [
"..."
]
,
"resources": [
"${aws_autoscaling_group.default.name}"
]
}
PATTERN
}

try this ^

Provides a CloudWatch Event Rule resource.

what i am trying to do is exactly mentioned here https://aws.amazon.com/blogs/mt/using-aws-opsworks-for-chef-automate-to-manage-ec2-instances-with-auto-scaling/

Step 7: Add a CloudWatch rule to trigger the Lambda function on instance termination

@rohit

take a look at this as well https://blog.codeship.com/cloudwatch-event-notifications-using-aws-lambda/

In this article, we will see how we can set up an AWS Lambda function to consume events from CloudWatch. By the end of this article, we will have an AWS Lambda function that will post a notification to a Slack channel. However, since the mechanism will be generic, you should be able to customize it to your use cases.

will do. Thanks

i am really struggling with this today

Maybe https://github.com/brantburnett/terraform-aws-autoscaling-route53-srv/blob/master/main.tf helps?
Manages a Route 53 DNS SRV record to refer to all servers in a set of auto scaling groups - brantburnett/terraform-aws-autoscaling-route53-srv

this is helpful

You don’t want "${join("\",\"", var.autoscaling_group_names)}"
though, right

I think i will still have to add my ASG name to the cloudwatch event

i am not thinking straight today

so pardon me

Yup, you should do that
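To round out the picture from that article, a sketch of wiring the rule to the Lambda (function name is hypothetical):

resource "aws_cloudwatch_event_target" "chef_cleanup" {
  rule = "${aws_cloudwatch_event_rule.console.name}"
  arn  = "${aws_lambda_function.chef_cleanup.arn}"
}

# allow the events service to invoke the function
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.chef_cleanup.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.console.arn}"
}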

has anyone used the resource aws_mq_broker
https://www.terraform.io/docs/providers/aws/r/mq_broker.html in conjunction Hashicorp vault to store the users as a list?
Provides an MQ Broker Resource

I have tried to use them together to display a list a users from Vault but I think the Terraform resource checks for its dependancies before anything is ran against the API

I have tried the following:
Serving Individual users with attributes
user = [
  {
    username       = "${data.vault_generic_secret.amq.data["username"]}"
    password       = "${data.vault_generic_secret.amq.data["password"]}"
    console_access = true
  }
]

&
Sending a dictionary of users
user = [${data.vault_generic_secret.amq.data["users"]}]

what’s the problem you’re having?

the issue is that in order to use this resources you would have to keep user login information in version control

what I want to do is keep the user information in Vault separate of the source code

is this possible

I am currently having to manually replace the user credentials with REDACTED
with every change to Terraform

I hope I am explaining my issue accurately

i am a bit confused because from the example, it looks to me like they are coming from vault?

username = "${data.vault_generic_secret.amq.data["username"]}"
password = "${data.vault_generic_secret.amq.data["password"]}"

and not from source control

those are my examples of what I have tried (or close to that)

but they do not work

aha, i see

it fails before it makes an API call

ok, so what you’re trying to do looks good more or less. now we need to figure out why it’s failing.

can you share the invocation of vault_generic_secret
?

which leads me to believe that it does some sort of dependency check for required attributes

well I have gone as far as checking to make sure Terraform is getting the secret which it is

but Terraform fails even before it queries Vault

I have verified this by outputting the invocation of the secret, which works when I do not use the AMQ resource but does not when I include the AMQ resource in the Terraform config

can you share the precise error?

lemme see if I can create a quick mock

I thought I saved my work in a branch but didn’t

ok here is the error with some trace


ok, please share the invocation of module.amq

and the invocation of data.vault_generic_secret

sorry but what do you mean by invocation
? do you want to see how I am executing it or do you want the entire stack trace?

oh, yes, that’s ambiguous

i mean the HCL code


I don’t see data.vault_generic_secret

and I don’t see module.amq

wasn’t done, I keep getting distracted


this one is the root config and the former was the module I created

ok

I would explore the data structures.

Error: module.amq.aws_mq_broker.amq_broker: "user.0.password": required field is not set

this gives me a hint that perhaps the input data structure is wrong

perhaps ${data.vault_generic_secret.amq.data["users"]}
is not in the right structure.

is users a list of maps?

[{ username: "foo", password: "bar"}, {username: "a", password: "b"}]

also, I have seen problem with this

amq_users = ["${data.vault_generic_secret.amq.data["users"]}"]

where in ["${...}"]
the content of the ${...}
interpolation returns a list.

so we get a list of lists.

terraform is supposed to flatten this; however, I have found this is not always the case when working with various providers.

my guess is that the authors don’t properly handle the inputs.

perhaps try this too

amq_users = "${data.vault_generic_secret.amq.data["users"]}"

thanks I’ll give that a shot
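For reference, a minimal sketch of the single-user variant with the secret read from Vault (path, broker settings and security group are placeholders):

data "vault_generic_secret" "amq" {
  path = "secret/amq"
}

resource "aws_mq_broker" "default" {
  broker_name        = "example"
  engine_type        = "ActiveMQ"
  engine_version     = "5.15.0"
  host_instance_type = "mq.t2.micro"
  security_groups    = ["${var.security_group_id}"]

  # credentials come from Vault, not version control
  user {
    username       = "${data.vault_generic_secret.amq.data["username"]}"
    password       = "${data.vault_generic_secret.amq.data["password"]}"
    console_access = true
  }
}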

do you know if Terraform first tries to validate the resource dependencies before actually executing the resource? During the plan or pre-apply stages?

validate the resource dependencies

not from what i’ve seen. there’s very little “schema” style validation

I noticed that my change for convert_case
to terraform-terraform-label
defaults the value "true"
but in [main.tf](http://main.tf)
i am using it as boolean

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label


so i am going to submit another PR to fix this issue

sorry about that

@rohit wait

everything is ok

you convert it to boolean
in the locals

i feel something is wrong

and you need to add it

sorry, we missed it

instead of using var.convert_case
i used local.convert_case

locals {
  convert_case = "${var.convert_case == "true" ? true : false }"
}

add this ^

yeah that’s what i am doing now

i found this when i was trying to use it in my project

and use one locals
, no need to make multiple

will do

@Andriy Knysh (Cloud Posse) i am sorry about not noticing that earlier

yea, we did not notice it either

submitted another PR

tested this time?

what is the recommended way to test it ?

run terraform plan/apply
on the example https://github.com/cloudposse/terraform-terraform-label/blob/master/examples/complete/main.tf
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

Looks good to me

please run terraform fmt
, lint
is failing https://travis-ci.org/cloudposse/terraform-terraform-label/builds/463143725?utm_source=github_status&utm_medium=notification

didn’t realize that formatting would cause the Travis CI build
to fail

pushed the change

thanks, merged

awesome

which tool do you use for secrets management ?

my 2¢, I love credstash
works really well with Terraform.

if you can use KMS in AWS

is there an API for credstash
?

we use chamber
https://github.com/segmentio/chamber
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.



any thoughts about using hashicorp vault
?

Vault is really cool but has a higher barrier to start using (setting up servers) vs credstash
is serverless.

that is great

what about chamber
?

chamber
uses AWS Systems Manager Parameter Store also serverless.

I’m reading https://segment.com/blog/the-right-way-to-manage-secrets/ and probably would have picked chamber
if it was available when I started using credstash

good to know. will read about it

I am looking for something that will play nicely with Chef

I’m looking at implementing vault at my work but that’s to cover multiple requirements not just terraform secrets.

Yea, it’s pretty sweet!

i am looking at something that is not limited to terraform secrets as well

but something that works with AWS + Chef

I use credstash
for .env in our rails app, terraform secrets, and secrets in Salt. Having said that after reading more about chamber
I probably would have gone with that instead.
2018-12-04

anyone here use terraform to spin up EKS/AKS/GKE and use the kubernetes/helm providers

Yes I spin up EKS and Helm as well as Concourse using Terraform @btai

@Matthew do you have RBAC enabled? if so, when you spin up your EKS cluster are you automatically running a helm install in the same terraform apply?

Yes to both of those

scenario being, running a helm install to provision the cluster with roles, role bindings, cluster role bindings, possibly cluster wide services, etc.

so im not using EKS, im using AKS so im not sure if you run into this issue

but im attempting to create some very basic clusterrolebindings

provider "kubernetes" {
host = "${module.aks_cluster.host}"
client_certificate = "${base64decode(module.aks_cluster.client_cert)}"
client_key = "${base64decode(module.aks_cluster.client_key)}"
cluster_ca_certificate = "${base64decode(module.aks_cluster.ca_cert)}"
}

Give me a moment and we can discuss

kk

anyone here have experience with terraform enterprise? is it worth the cost in your opinion?

i didnt do too much digging, but i wasnt a fan of having to have a git repo per module. they do this because they use git tags for versioning modules, but i didnt want to have that many repos

i have no clue why this wont work


-> this should fix it: owners = ["137112412989"]
also you can simplify your filters:
filter {
  name   = "name"
  values = ["amzn-ami-hvm-*-x86_64-gp2"]
}

# Grab most recent linux ami
data "aws_ami" "linux_ami" {
  most_recent      = true
  owners           = ["137112412989"]
  executable_users = ["self"]

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }
}
also I’m not sure if values = ["amzn-ami-hvm-*"]
is enough so I’ve updated it to values = ["amzn-ami-hvm-*-x86_64-gp2"]

cc @pericdaniel

Thank you so much!!

Seems like owners is the key component

@pericdaniel Always use owner for Amazon AMIs. Otherwise you could get anyone’s AMI with a similar name. And there are many of those. Same thing for other vendor AMIs. Here is a terraform module that handles a number of vendors https://github.com/devops-workflow/terraform-aws-ami-ids
Terraform module to lookup AMI IDs. Contribute to devops-workflow/terraform-aws-ami-ids development by creating an account on GitHub.

Thank you!!!
2018-12-05

hello! i have a problem with terraform and cloudfront, wondering if someone has seen this issue

I have a website hosted statically on S3 with Private ACL. I am using Cloudfront to serve the content

I am using an S3 origin identity, which requires an access identity resource

and I see that it creates the access identity successfully, but the problem is that it does not attach it to the cloudfront distro

when I go back to the AWS console, the access id is listed in the dropdown but not selected

i suspect the problem has to do with this section of the code:

any tips?


@inactive here is what we have and it’s working, take a look https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

mmm… you have the same code as i do. the s3_origin_config snippet references the access_id, which I expected would set it

not sure why its not doing in my case

you can share your complete code, we’ll take a look (maybe something else is missing)

ok


@inactive try deleting this
website {
  index_document = "${var.webui_root_object}"
}

it prob creates the bucket as website (need to verify if that statement is enough for that), but anyway, if it creates a website, it does not use origin access identity - cloudfront distribution just points to the public website URL

we also have this module to create CloudFront distribution for a website (which could be an S3 bucket or a custom origin) https://github.com/cloudposse/terraform-aws-cloudfront-cdn
Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - cloudposse/terraform-aws-cloudfront-cdn

and here is how to create a website from S3 bucket https://github.com/cloudposse/terraform-aws-s3-website
Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

in short, CloudFront could be pointed to an S3 bucket as origin (in which case you need origin access identity) AND also to an S3 bucket as a website - they have completely diff URLs in these two cases

ok let me try that

i get this error Resource ‘aws_security_group.dbsg’ not found for variable ‘aws_security_group.dbsg.id’ even tho i created the sg… hmmm

@pericdaniel need help or found the solution?

@inactive how it goes?

i found a new solution! thank you tho!

i think i did a local and an output

to make it work

Learn about the inner workings of Terraform and examine all the elements of a provider, from the documentation to the test suite. You’ll also see how to create and contribute to them.

my favorite is the first word in the first sentence

Terrafom is an amazing tool that lets you define your infrastructure as code. Under the hood it’s an incredibly powerful state machine that makes API requests and marshals resources.

ill have to watch this

Quick question to @here does the Cloudtrail Module support setting the SNS settings? and is that really needed, just have not used it….

Our terraform-aws-cloudtrail
does not currently have any code for that

this module created by Jamie does something similar

Terraform module for creating alarms for tracking important changes and occurances from cloudtrail. - cloudposse/terraform-aws-cloudtrail-cloudwatch-alarms

with SNS alarms

I have not personally deployed it.

@davidvasandani someone else reported the exact same EKS problem you ran into

- module.eks_workers.module.autoscale_group.data.null_data_source.tags_as_list_of_maps: data.null_data_source.tags_as_list_of_maps: value of ‘count’ cannot be computed

@patrickleet is trying it right now

maybe you can compare notes

@patrickleet has joined the channel

Good to know I’m not alone.

or let him know what you tried


kops is pretty easy/peasy right?

So easy.

I was up and running in no time and have started learning to deploy statup.


haha ok so you moved to kops

kops is much easier to manage the full SDLC of a kubernetes cluster

rolling updates, drain+cordon

yea I have a couple of kops clusters

and a gke one

ok, yea, so you know the lay of the land

I was specifically trying to play with eks using terraform and came across your modules

we rolled out EKS ~2 months ago for the same reason

these modules were the byproduct of that. @Andriy Knysh (Cloud Posse) is not sure what changed. we always leave things in a working state.

yea - I’m able to plan the vpc

which has tags calculated

which is where it’s choking

it’s not able to get the count

Same here.

haven’t actually tried applying the vpc plan and moving further though

@patrickleet I think we ran into the same issue.

With the null_data_source
in eks_workers
commented out I had hoped the full eks module would work (I reverted the eks_cluster
allowed_security_groups
back to the default) but eks_cluster
still errors.
module.eks_cluster.aws_security_group_rule.ingress_security_groups: aws_security_group_rule.ingress_security_groups: value of 'count' cannot be computed

Wait.

I was able to plan the VPC just fine.

make your enabled
var false

it will plan the VPC with the tags count just fine.

but then errors out when I set it to true, see the Slack link above.

I’ll take a look at the EKS module tomorrow. Not good to have it in that state :(

Is there a comprehensive gitignore
file for any terraform project ?

this should be good enough https://github.com/cloudposse/terraform-null-label/blob/master/.gitignore
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

that’s what i was using

do you generally store the plan files somewhere if not in your repo ?

you mean to add the plan files to .gitignore
?

A collection of useful .gitignore templates. Contribute to github/gitignore development by creating an account on GitHub.

i mean do you store the generated plan somewhere

if you do it on your local computer, the state is stored locally. We use S3 state backend with DynamoDB table for state locking, which is a recommended way of doing things https://github.com/cloudposse/terraform-aws-tfstate-backend
Provision an S3 bucket to store terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption - cloudposse/terraform-aws-tfstate-backend
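A minimal sketch of that backend configuration (bucket and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "example-terraform-lock"
  }
}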

are you asking about the TF state or about the files that terraform plan
can generate and then use for terraform apply
?

i was asking about the later. when you run terraform plan --var-file="config/app.tfvars" --out=app.storage.plan
and then terraform apply app.storage.plan

do you store the generated plan file somewhere or just ignore it ?

since we store TF state in S3, we don’t use the plan files, you can say we ignore them

cool

if i have tfvars inside a config dir, ex: config/app1.tfvars, config/app2.tfvars
, i think including *.tfvars
in gitignore will prevent both these files from being checked in to my code repo, correct ?

maybe **/*.tfvars
?

i thought *.tfvars
would be enough, maybe i was wrong

should be enough

will try that
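A typical Terraform .gitignore along those lines (a sketch, including the tfvars pattern from above):

.terraform/
*.tfstate
*.tfstate.backup
**/*.tfvars
*.plan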

Also, what is the importance of lifecycle
block here https://github.com/cloudposse/terraform-aws-tfstate-backend/blob/d7da47b6ee33511bfe99c0fdc49e9e9ac4ab88ec/main.tf#L55
Provision an S3 bucket to store terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption - cloudposse/terraform-aws-tfstate-backend

in this case it’s not actually needed. but if you use https://github.com/cloudposse/terraform-aws-dynamodb-autoscaler, then it will help to prevent TF from recreating everything when the auto-scaler changes the read/write capacity
Terraform module to provision DynamoDB autoscaler. Contribute to cloudposse/terraform-aws-dynamodb-autoscaler development by creating an account on GitHub.

Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
2018-12-06

Life cycle hooks can also be super useful for doing X when Y happens. A common case I have is when destroying a Cassandra cluster I need to unmount the ebs volumes from the instances so I can do a fast snapshot and then terminate

Nifty idea!

Need to see if I can publish that

actually this is self explanatory
provisioner "remote-exec" {
inline = [
"sudo systemctl stop cassandra",
"sudo umount /cassandra/data",
"sudo umount /cassandra/commitlog",
]
when = "destroy"
}
}

On Event terminate remote-exec umount

Also great for zero downtime changes, create before destroy

Etc
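For example, a sketch of create_before_destroy on a launch configuration (names made up), so the replacement comes up before the old one is torn down:

resource "aws_launch_configuration" "default" {
  name_prefix   = "app-"
  image_id      = "${var.image_id}"
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true
  }
}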

In the case you posted I would expect it’s to let terraform know that it doesn’t need to care about any changes of the values

lifecycle { ignore_changes = ["read_capacity", "write_capacity"] }

makes sense

never used lifecycle hooks so it is new to me

In general, where are the lifecycle hooks used ?

Can be used under different conditions. A common one could be where Terraform isn’t the only thing managing a resource, or element of resources.

or if a resource could change outside of TF state like in the example with DynamoDB auto-scaler, which can update the provisioned capacity and TF would see different values and try to recreate it

Seems like this could be considered a bug…. If the table is enabled for autoscaling, then TF should automatically ignore changes to provisioned capacity values…

yea, but auto-scaling is a separate AWS resource and TF would have to know how to reconcile two resources, which is not easy

Oh it’s two resources? I see… Hmmm… Maybe it could be handled in one, kind of like rules in a security group have two modes, inline or attached…

@Andriy Knysh (Cloud Posse) I am happy to report that your suggestion fixed my problem, re: CloudFront

I still don’t quite understand how it’s still able to pull the default html page, since I thought that I had to explicitly define it under the index_document parameter

but it somehow works

tyvm for your help

glad it worked for you @inactive

did you deploy the S3 bucket as a website?

i assume no, since I removed the whole website {…} section

but CF doesn’t seem to care, as it serves the content without that designation

we just released new version of https://github.com/cloudposse/terraform-aws-eks-cluster
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

which fixes value of 'count' cannot be computed errors

the example here should work now in one-phase apply https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

@davidvasandani @patrickleet


Thanks @Andriy Knysh (Cloud Posse) and @Erik Osterman (Cloud Posse)

While this module works y’all still prefer kops
though right?

Yea…

This is more a proof of concept or looking for folks to ask/add additional features?


Since we don’t have a story for that, we aren’t pushing it. It’s mostly there on standby pending customer request to use EKS over Kops

Then we will invest in the upgrade story.

I really like the architecture of the modules

And the way we decomposed it makes it easy to have different kinds of node pools

The upgrade story would probably use that story to spin up multiple node pools and move workloads around and then then scale down the old node pool

Almost like replicating the Kubernetes strategy for deployments and replica sets

But for auto scale groups instead
2018-12-07

is this section working for anyone ? https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/main.tf#L159
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

it sets a route53 record for me but it’s not working

dig production-kibana.infra.mytonic.com
;; QUESTION SECTION:
;production-kibana.infra.mytonic.com. IN A
;; ANSWER SECTION:
production-kibana.infra.mytost.com. 59 IN CNAME vpc-fluentd-production-elastic-43nbkjmxhatoajegdhyxekul3a.ap-southeast-1.es.amazonaws.com/_plugin/kibana/.
;; AUTHORITY SECTION:
. 86398 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2018120700 1800 900 604800 86400
;; Query time: 333 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)

but if I hit <http://production-kibana.infra.mytost.com>
it’s not working.

I have to add <http://production-kibana.infra.mytost.com/_plugin/kibana/>
manually

@Andriy Knysh (Cloud Posse)

I have a generalish terraform / geodesic question so not sure if I should ask in here or #geodesic

I’d like to spin up 4 accounts with 4 identical vpc’s (with the exception of name and cidr) in which 3 will have kops managed k8s clusters

In 2 of those 3, the k8s cluster would be launched only after the vpc has been around for some time

What I have not quite yet under stood is how the two different flows will look.
1.a) create vpc b) create resources in that vpc c) create k8s cluster with kops in existing vpc 2.a) create vpc using kops b) create resources in the vpc created by kops

for flow 1 in a geodesic way

im not sure I understand how or if it is supported to launch into an EXISTING vpc

Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering

terraform-aws-kops-vpc-peering - Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops

So is it expected that any geodesic kops created k8s cluster will run in its own vpc?

@Jan i’ll help you

so yes, you can deploy kops
cluster into an existing VPC

but we don’t do that for a few reasons

you don’t want to manage CIDR overlaps for example

I know I can and how to, its the pattern I have followed

so we deploy kops
into a separate VPC

mmm

Not sure I understand the rationale there

and then deploy a backing-services
VPC for all other stuff like Aurora, Elasticsearch, Redis etc.

then we do VPC peering

we don’t want to manage CIDR overlaps

there is no overlap

the k8s cluster uses the vpc cidr

but you can deploy kops
into an existing VPC

there is no diff, but just more separation

IP’s are still managed by the dhcp optionset in the vpc

mmm

and we have TF modules that do it already

interesting

i can show you the complete flow for that (just deployed it for a client)

Alright I think I will write a TF module to do k8s via kops into an existing vpc and make a pr

Yea I have seen the flow for creating a backing VPC + kops (k8s)vpc with peering

when we tried to deploy kops into existing VPC, we ran into a few issues, but i don’t remember exactly which ones

I recall there being weirdness and cidr overlaps if you dont use the subnet =ids and stuff

deploying into a separate VPC, did not see any issues

so for example I have some thing like this

corporate network === AWS DirectConnect ==> [shared service-vpc(k8s)] /22 cidr
—> peering –> [{prod,pre-prod}-vpc(k8s)] /16 cidr

where we run shared monitoring and logging and ci/cd services in the shared-services vpc (mostly inside k8s)

k8s also has ingress rules that expose services in the peered prod and pre-prod accounts to the corp network

overhead on the corp network is just the routing of the /22

so you already using vpc peering?

I am busy setting this all up

corp network will have direct connect to a /22 vpc

the /22 will have peering to many vpcs within a /16

that /16 we will split as many times as we need, probably into /24’s

or 23’s

@Jan your question was how to deploy kops
into an existing VPC?

My question was more if there is geodesic support for that flow or if i should add a module to do so

we deploy kops
(from geodesic
, but it’s not related) from a template, which is here https://github.com/cloudposse/geodesic/blob/master/rootfs/templates/kops/default.yaml
Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. https://slack.cloudposse.com/ - cloudposse/geodesic

the template deploys it into a separate vpc

Yep, this I used to do by having terraform render a Go template

ok I will play with a module and see where the overlap/handover is

if we want to deploy into an existing vpc using the template, it should be modified

there is no TF module for that

Yep, I will create one

TF module comes into play when we need to do vpc peering

just figured Id ask before I made one

you thinking about a TF module to deploy kops
?

a new tf module to fetch vpc metadata (maybe), and deploy k8s via kops into the existing vpc

vpc metadata would not be needed if it vpc was created in tf

for kops metadata, take a look https://github.com/cloudposse/terraform-aws-kops-metadata
Terraform module to lookup resources within a Kops cluster for easier integration with Terraform - cloudposse/terraform-aws-kops-metadata

yea so this is that flow in reverse

example here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/backing-services/kops-metadata.tf
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

or here we lookup both vpcs, kops and backing-services https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks-backing-services-peering/main.tf
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
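If it helps, a sketch of looking up an existing VPC and its subnets for kops to consume (tag value is a placeholder):

data "aws_vpc" "existing" {
  tags = {
    Name = "example-vpc"
  }
}

data "aws_subnet_ids" "private" {
  vpc_id = "${data.aws_vpc.existing.id}"
}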

@Jan not sure if it answers your questions :disappointed: but what we have is 1) kops
template to deploy kops into a separate VPC; 2) TF modules for VPC peering and kops metadata lookup

we played with deploying kops into an existing vpc, but abandoned it for a few reasons

@sohel2020 you have to manually add _plugin/kibana/
to <http://production-kibana.infra.mytost.com>
because URL paths are not supported for CNAMEs (and [production-kibana.infra.mytost.com](http://production-kibana.infra.mytost.com)
is a CNAME to the Elasticsearch URL generated by AWS)

https://stackoverflow.com/questions/9444055/using-dns-to-redirect-to-another-url-with-a-path https://webmasters.stackexchange.com/questions/102331/is-it-possible-to-have-a-cname-dns-record-point-to-a-url-with-a-path
I’m trying to redirect a domain to another via DNS. I know that using IN CNAME it’s posible. www.proof.com IN CNAME www.proof-two.com. What i need is a redirection with a path. When someone type…
I’ve registered several domains for my nieces and nephews, the idea being to create small static webpages for them, so they can say ‘look at my website!’. In terms of hosting it, I’m using expres…

even if you do it in the Route53 console, you get the error
The record set could not be saved because:
- The Value field contains invalid characters or is in an invalid format.


Yea you want to inject kops into a vpc. We don’t have that but I do like it and we have a customer that did that but without modularizing it.

I have done it in several ways in tf

I will look to make a module and contribute it

We would like to have a module for it, so that would be awesome

thanks @Jan

just a quick sanity check. when was the last time someone used the terraform-aws-vpc module… i copy pasta’d the example
module "vpc" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=master>"
namespace = "cp"
stage = "prod"
name = "app"
}
and it gave me Error: module "vpc": missing required argument "vpc_id"

We use it all the time

sorta funny when thats what im trying to create

perhaps the sample is messed

Last week most recently. @Andriy Knysh (Cloud Posse) any ideas?

Most current examples are in our root modules folder

Sec

@solairerove will be adding and testing all examples starting next week probably

@solairerove has joined the channel

just used it yesterday on the EKS modules

used the example that pulled the master branch

maybe im an idiot

let me go back and check

nm nothing to see here

tested this yesterday https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L36
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

i musta gotten my pasta messed up


at least it didnt take me all day


the error must have been from another module?

yeah

heres another question… https://github.com/cloudposse/terraform-aws-cloudwatch-flow-logs
Terraform module for enabling flow logs for vpc and subnets. - cloudposse/terraform-aws-cloudwatch-flow-logs

what’s kinesis in there for?

shipping somewhere I suppose

Could have lambda slurp off the Kinesis stream, for example

This module was done more for checking a compliance checkbox

All logs stored in s3, but nothing immediately actionable

ahh cool. they have shipping to cwl now, was just wondering if there was a specific purpose

Yea, unfortunately not.
2018-12-08

how to know the best value for max_item_size
for memcache aws_elasticache_parameter_group
?

does it depend on the instance type ?
2018-12-09

@rohit Are you hitting a problem caused by the default max item size?

@joshmyers no. i just want to know how the max_item_size works

Max item size is the length of the longest value stored. If you are serializing data structures or session data, this could get quite large. If you are just storing simple key value pairs a smaller number is probably fine. This is a hint to memcache for how to organize the data and the size of slabs for storing objects.
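
For a concrete feel, a minimal sketch of tuning it via a parameter group (the resource name and the 2 MB value are illustrative, not a recommendation):
resource "aws_elasticache_parameter_group" "memcached" {
  name   = "session-cache-params"
  family = "memcached1.4"

  # max_item_size is in bytes; memcached defaults to 1 MB (1048576)
  parameter {
    name  = "max_item_size"
    value = "2097152" # 2 MB
  }
}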

What kinda things are you storing in there?

we are using memcache for tomcat session storage

so basically session information

Is 1mb object storage enough for you?

I think so but i will have to check

Looks like availability_zones
option is deprecated in favor of preferred_availability_zones
https://github.com/cloudposse/terraform-aws-elasticache-memcached/blob/65a0655e8bde7fb177516bbcdd394eddc8cfcc88/main.tf#L76
Terraform Module for ElastiCache Memcached Cluster - cloudposse/terraform-aws-elasticache-memcached

Could you add an issue for this?

sure can

Thanks!

I think i also know how to fix it

so will probably submit a PR sometime tomorrow
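
(For reference, the likely fix is a one-line rename in the cluster resource, roughly:)
resource "aws_elasticache_cluster" "default" {
  # ...
  # replaces the deprecated availability_zones argument
  preferred_availability_zones = ["${var.availability_zones}"]
}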

Is there a way to view what is stored in the dynamodb table? I was not able to find anything but an empty table

The DynamoDB table can be viewed and modified via the DynamoDB UI. There is nothing hiding it. Once a Terraform config is set up correctly to write to S3 with locking and applied successfully, it will write to that table. Until then it will be empty
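
A minimal sketch of the backend config that wires up both (bucket and table names here are hypothetical):
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock" # lock items are written here during plan/apply
  }
}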

ohh ok

I am using the terraform-aws-vpc
module https://github.com/terraform-aws-modules/terraform-aws-vpc
Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

is it possible to update the openvpn configuration, after a vpc is created using this module?

openvpn? <– not related to vpc

Is there a way to provide subnet information instead of cidr_blocks
in resource aws_security_group
ingress and egress ?

Can you share an example? Pseudo code

I am on my phone

# create efs resource for storing media
resource "aws_efs_file_system" "media" {
  tags             = "${merge(var.global_tags, map("Owner","${var.app_owner}"))}"
  encrypted        = "${var.encrypted}"
  performance_mode = "${var.performance_mode}"
  throughput_mode  = "${var.throughput_mode}"
  kms_key_id       = "${var.kms_key_id}"
}

resource "aws_efs_mount_target" "media" {
  count           = "${length(var.aws_azs)}"
  file_system_id  = "${aws_efs_file_system.media.id}"
  subnet_id       = "${element(var.vpc_private_subnets, count.index)}"
  security_groups = ["${aws_security_group.media.id}"]
  depends_on      = ["aws_efs_file_system.media", "aws_security_group.media"]
}

# security group for media
resource "aws_security_group" "media" {
  name        = "${terraform.workspace}-media"
  description = "EFS"
  vpc_id      = "${var.vpc_id}"

  lifecycle {
    create_before_destroy = true
  }

  ingress {
    from_port   = "2049" # NFS
    to_port     = "2049"
    protocol    = "tcp"
    cidr_blocks = ["${element(var.vpc_private_subnets,0)}","${element(var.vpc_private_subnets,1)}","${element(var.vpc_private_subnets,2)}"]
    description = "vpc private subnet"
  }

  ingress {
    from_port   = "2049" # NFS
    to_port     = "2049"
    protocol    = "tcp"
    cidr_blocks = ["${element(var.vpc_public_subnets,0)}","${element(var.vpc_public_subnets,1)}","${element(var.vpc_public_subnets,2)}"]
    description = "vpc public subnet"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = "${merge(var.global_tags, map("Owner","${var.app_owner}"))}"
}

i am getting the following error

any reason you’re not using our module? …just b/c our goal is to make the modules rock solid, rather than manage multiple bespoke setups

It’s just that some of the parameters were not being passed in your module

for example, kms_key_id,throughput_mode

Provides a security group resource.

yes, i can but i want to know if it is possible to directly pass subnet info

or vpc id

You can pass cidr, security group, or prefix. VPC doesn’t make sense in this context. But I gave you an example of how to get subnets from the vpc in the other thread

I don’t think we can pass subnet groups in cidr_blocks

You can pass a list of cidrs. How is a subnet group different?

But it is not hard to use data to look up the cidr blocks

@Steven could you please share an example ?

I am using terraform-aws-modules/vpc/aws
module

For the VPC, you are creating the subnets. So, there is no other option than providing the cidrs you want. But once the subnets have been created, you can query for them instead of hard coding them into other code.

data "aws_vpc" "vpc" {
  tags {
    Environment = "${var.environment}"
  }
}

data "aws_subnet_ids" "private_subnet_ids" {
  vpc_id = "${data.aws_vpc.vpc.id}"
  tags {
    Network = "private"
  }
}

data "aws_subnet" "private_subnets" {
  count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
  id    = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}

Example of using data from above def:

vpc_id       = "${data.aws_vpc.vpc.id}"
subnets      = "${data.aws_subnet_ids.private_subnet_ids.ids}"
ingress_cidr = "${data.aws_subnet.private_subnets.*.cidr_block}"
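
Applied to the security group above, that would look something like this (a sketch reusing the data sources just shown):
ingress {
  from_port   = 2049 # NFS
  to_port     = 2049
  protocol    = "tcp"
  # subnet CIDRs looked up via data sources instead of hard-coded subnet IDs
  cidr_blocks = ["${data.aws_subnet.private_subnets.*.cidr_block}"]
}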

@Erik Osterman (Cloud Posse) i am using cloudposse/terraform-aws-efs
module https://github.com/cloudposse/terraform-aws-efs/blob/1ad219e482eba444eb31b6091ecb6827a0395644/main.tf#L38
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs

and i have to pass security_groups

when i execute the following code


i get the following error


any ideas why ?
2018-12-10

mmm

@rohit does it work without the aws_efs_mount_target
resource? Does it create the security group?

It looks legit on first inspection (under caffeinated at the mo)

yea I would agree, nothing jumps out as being wrong

Could be something like https://github.com/hashicorp/terraform/issues/18129
Seemingly, when a validation error occurs in a resource (due to a failing ValidateFunc), terraform plan returns missing resource errors over returning the original validation error that caused the …

Also, you shouldn’t need that explicit depends_on
, unless that was you testing the graph

i thought it was dependency error so added depends_on

without aws_efs_mount_target
, i get the following error
* module.storage.module.kitemedia.aws_security_group.media: "ingress.0.cidr_blocks.0" must contain a valid CIDR, got error parsing: invalid CIDR address: subnet-012345d8ce0d89dbc

So that is the problem. The error from creating the SG is not bubbling up through the code, and the SG isn’t actually created because of the above error parsing the CIDR address, which is actually a subnet ID

and the error you are receiving instead is just saying I can’t find that SG id

fix the ^^ error with CIDR and try re-running without the depends_on and with aws_efs_mount_target uncommented

i think that fixed the issue

@joshmyers thanks

also, i am not able to view output variables

Are you outputting the variables?

yes

for efs, i am now facing the following issue

* module.storage.module.kitemedia.aws_efs_mount_target.kitemedia[0]: 1 error(s) occurred:
* aws_efs_mount_target.kitemedia.0: MountTargetConflict: mount target already exists in this AZ
status code: 409, request id: 74cdca17-fc8a-11e8-bdb0-d7feddd82bcc
* module.storage.module.kitemedia.aws_efs_mount_target.kitemedia[11]: 1 error(s) occurred:
* aws_efs_mount_target.kitemedia.11: MountTargetConflict: mount target already exists in this AZ
status code: 409, request id: 7eca3ec0-fc8a-11e8-b0cb-e76fd688df15
* module.storage.module.kitemedia.aws_efs_mount_target.kitemedia[4]: 1 error(s) occurred:
* aws_efs_mount_target.kitemedia.4: MountTargetConflict: mount target already exists in this AZ
status code: 409, request id: 755597d2-fc8a-11e8-a0c5-25395ed55c14
* module.storage.module.kitemedia.aws_efs_mount_target.kitemedia[24]: 1 error(s) occurred:

What is var.aws_azs
set to?

It looks like that count is way higher than I’d imagine the number of AZs available…

@joshmyers you were correct

i updated the count to use the length

count = "${length(split(",", var.aws_azs))}"

but i still get 2 errors

* module.storage.module.kitemedia.aws_efs_mount_target.kitemedia[2]: 1 error(s) occurred:
* aws_efs_mount_target.kitemedia.2: MountTargetConflict: mount target already exists in this AZ
status code: 409, request id: c11d8b5d-fc8c-11e8-a6ac-03687caf52eb
* module.storage.module.kitemedia.aws_efs_mount_target.kitemedia[0]: 1 error(s) occurred:
* aws_efs_mount_target.kitemedia.0: MountTargetConflict: mount target already exists in this AZ
status code: 409, request id: c11db25f-fc8c-11e8-adc4-b7e10e019ae2

Any reason not to declare aws_azs
as a list, rather than csv?
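
i.e. something like this (a sketch; the AZ values are illustrative):
variable "aws_azs" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

# then the count needs no split()
count = "${length(var.aws_azs)}"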

and have they actually been created already but not written to the Terraform state?

If so, manually delete the EFS mount targets and re run TF

that worked


i am still not able to view the outputs

i have them in outputs.tf

output "kitemedia_dns_name" {
value = "${aws_efs_file_system.kitemedia.id}.efs.${var.aws_region}.amazonaws.com"
}

when i run terraform output kitemedia_dns_name

The state file either has no outputs defined, or all the defined
outputs are empty. Please define an output in your configuration
with the `output` keyword and run `terraform refresh` for it to
become available. If you are using interpolation, please verify
the interpolated value is not empty. You can use the
`terraform console` command to assist.

this is what i am seeing

Inspect the statefile - is ^^ correct and there are no outputs defined in there?

and is that output in the module that you are calling?

do i have to use fully qualified name ?


@rohit try just terraform output
from the correct folder and see what it outputs

when i navigate to the correct folder, i get
The module root could not be found. There is nothing to output.

i am able to see them in the state file

but not on the command line using terraform output

@rohit again, are you outputting the value in the module, and then also in the thing which calls the module as my example above?

yeah tried that

still same issue

Without being able to see any code

¯\_(ツ)_/¯

this is what i have inside outputs.tf
under the kitemedia module
output "kitemedia_dns_name" {
  value = "${aws_efs_file_system.kitemedia.id}.efs.${var.aws_region}.amazonaws.com"
}

and then this is what i have inside outputs.tf
of the config that calls the kitemedia
module
output "kitemedia_dns_name" {
  value = "${module.storage.kitemedia.kitemedia_dns_name}"
}

Where has “storage” come from in module.storage.kitemedia.kitemedia_dns_name

Please post full configs including all vars and outputs for both the module and the code calling the module in a gist

@rohit what IDE/Editor are you using?

suggest to try https://www.jetbrains.com/idea/, it has a VERY nice Terraform plugin, shows and highlights all errors, warning, and other inconsistencies like wrong names, missing vars, etc.

Capable and Ergonomic Java IDE for Enterprise Java, Scala, Kotlin and much more…

vscode is actually quite nice too - never thought I’d say that about an MS product but times are a changin’

yea, i tried it, it’s nice

Also has good TF support

i am using vscode

im using vscode too, works well enough

yep

just tried to understand where storage
comes from in module.storage.kitemedia.kitemedia_dns_name
as @joshmyers pointed out

It’s actually module.kitemedia.kitemedia_dns_name
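
In other words, assuming the module instance is labeled kitemedia, the wrapper output would be just:
output "kitemedia_dns_name" {
  value = "${module.kitemedia.kitemedia_dns_name}"
}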

Working now?

nope

i am not sure what’s wrong

can you post the full code?

i will try

i have another question

so i have the following structure,
modules/compute/app1/main.tf, modules/compute/app2/main.tf, modules/compute/app3/main.tf

i want to use an output variable from modules/compute/app1/main.tf
in modules/compute/app2/main.tf

so i am writing my variable to outputs.tf

now, how do i access the variable in modules/compute/app2/main.tf
?

does this make sense?

it’s easy if all of those folders are modules, then you use a relative path to access the module and use its outputs, e.g. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L62
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
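
For the layout above, that would look roughly like this (a sketch assuming both apps are instantiated from one root config; the security_groups variable on app2 is hypothetical):
module "app1" {
  source = "./modules/compute/app1"
}

module "app2" {
  source = "./modules/compute/app2"

  # wire app1's output straight into app2's input
  security_groups = ["${module.app1.ec2_instance_security_group}"]
}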

I was not aware that something like this can be done

i am sure there are many more features like this

@Andriy Knysh (Cloud Posse) i tried what you suggested but i am still facing problems

@rohit post here your code and the problems you are having

It is hard to paste the entire code but i will try my best

In my modules/compute/app1/outputs.tf
i have
output "ec2_instance_security_group" {
value = "${aws_security_group.instance.id}"
}

In my modules/compute/app2/main.tf
i am trying to do something like this

data "terraform_remote_state" "instance" {
backend = "s3"
config {
bucket = "${var.namespace}-${var.stage}-terraform-state"
key = "account-settings/terraform.tfstate"
}
}
security_groups = ["${data.terraform_remote_state.instance.app1.ec2_instance_security_group}"]

i feel that i am missing something here

you did not configure your remote state, you’re just using ours

I looked at the state file and it is listed under
"path": [
"root",
"compute",
"app1"
],
"outputs": {
"ec2_instance_security_group": {
"sensitive": false,
"type": "string",
"value": "REDACTRED"
}
}

config {
  bucket = "${var.namespace}-${var.stage}-terraform-state"
  key    = "account-settings/terraform.tfstate"
}

^ needs to be updated to reflect your bucket and your folder path

yes i did that

i did not want to paste the actual values

hmm

there are no secrets here
config {
  bucket = "${var.namespace}-${var.stage}-terraform-state"
  key    = "account-settings/terraform.tfstate"
}

Is the output from my state file helpful ?

update and paste the code

data "terraform_remote_state" "compute" {
backend = "s3"
workspace = "${terraform.workspace}"
config {
bucket = "bucketname"
workspace_key_prefix = "tf-state"
key = "terraform.tfstate"
region = "us-east-1"
encrypt = true
}
}

is that helpful ?

if you look at the state bucket, do you see the file under terraform.tfstate
?

it’s probably under one of the app
subfolders

e.g. app1
or app2

the state file is under, bucketname/tf-state/eakk/terraform.tfstate

key = "tf-state/eakk/terraform.tfstate"

ohh

security_groups = ["${data.terraform_remote_state.instance.app1.ec2_instance_security_group}"]

is this correct then ?

what is the significance of account_settings
here ?

should it match with any of my resource names ?

security_groups = ["${data.terraform_remote_state.compute.ec2_instance_security_group}"] something like this

Resource 'data.terraform_remote_state.compute' does not have attribute 'ec2_instance_security_group' for variable 'data.terraform_remote_state.compute.ec2_instance_security_group'

@rohit here is a working example on how we define the outputs in one module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/account-settings/outputs.tf
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

and then reference the output from another module (in different folder) using remote state https://github.com/cloudposse/terraform-root-modules/blob/master/aws/users/main.tf#L32
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

is there any significance of root_iam
in your data block ? or can it be anything ?

it’s a module to provision IAM on the root account

the folder name could be different (if that’s what you are asking )

Still did not work

so i decided to take a different approach

did you first provision aws_security_group.instance
before accessing its state?

yes i did

in what folder?

please cd
to that folder and run terraform output
(and paste the result here)

The module root.app1 could not be found. There is nothing to output.

Looks like you didn’t provision anything in that folder

Direct message me with all the files, I’ll take a look. Remove all the secrets if any

if those apps are just resources, then you can use https://www.terraform.io/docs/providers/terraform/d/remote_state.html
Accesses state meta data from a remote backend.

anyone installing gitlab omnibus with terraform?

Terraform creation of Gitlab-CE installed with Omnibus in AWS Is anyone else doing this? How are you doing your backup and restore when you need to replace the gitlab instance for some reason? I recently needed to up the disk size of my instance so when I made the change in TF I needed to terminate my currently running instance and then apply the new TF plan which included the new disk size. Once the new instance comes up I have a template file in TF which is parsed into the user data for the i…

I’m running gitlab from within k8s

Documentation for GitLab Community Edition, GitLab Enterprise Edition, Omnibus GitLab, and GitLab Runner.

oh that’s the ee

sec

yea i saw the forum on

one

i didnt know if someone had the full stack

the vpc and subnets and everything is easy

is that piece he has all i need then?

Im not sure sorry, just found that

as much as possible I try to do things in k8s

yea i need to do that too!

do you have it for k8? @Jan

In launch template, is it possible to set latest version to default ?

@pericdaniel what are you looking for? https://sweetops.slack.com/archives/CB6GHNLG0/p1544481990492100
do you have it for k8? @Jan

@davidvasandani I’m looking for terraform that autocreates gitlab omnibus… if its not out there i just thought about writing it

@pericdaniel and you’re looking for AWS? I just checked Gitlab’s website and found they have a Terraform bootstrap for GCE but it hasn’t been updated in a year.

yea for aws

i was thinking about converting the gce one

@pericdaniel this may help you in your quest. https://medium.com/@aloisbarreras_18569/a-comprehensive-guide-to-running-gitlab-on-aws-bd05cb80e363

A series of posts that will examine why we chose GitLab at Alchemy and teach you how to automate your own installation on AWS using…



In launch template, is it possible to set latest version to default ?

Provides a Launch Template data source.

probably using that

latest_version
- The latest version of the launch template.

(haven’t personally dealt with it lately)

i was able to use latest_version in autoscalinggroup but i was not able to set newly created version on the launch template as default

hrm… @maarten might have some ideas
2018-12-11

@rohit Passing the baton to @jamie who worked with launch templates earlier, maybe he knows a bit more. What do you mean by “not able to set newly created version on the launch template as default”? Did you get some kind of error, or did you just not notice a change? When updating the launch config of an autoscaling group, the launch config is changed but you won’t see an immediate effect. AWS does not do the replacement of the instances itself. The best way to tackle that is either a separate blue/green deployment process or something like step functions together with a lambda which takes care of it.

you can actually configure the autoscaling group to do a graceful replacement of the instances without using lambda

this thread isn’t readable, is this the thread with the cloudformation resource ?

Haha. No it’s one of the lead terraform devs talking about how they do it for their own servers

resource "aws_launch_configuration" "someapp" {
lifecycle { create_before_destroy = true }
image_id = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
security_group = ["${var.security_group}"]
}
resource "aws_autoscaling_group" "someapp" {
lifecycle { create_before_destroy = true }
name = "someapp - ${aws_launch_configuration.someapp.name}"
launch_configuration = "${aws_launch_configuration.someapp.name}"
desired_capacity = "${var.nodes}"
min_size = "${var.nodes}"
max_size = "${var.nodes}"
min_elb_capacity = "${var.nodes}"
availability_zones = ["${split(",", var.azs)}"]
vpc_zone_identifier = ["${split(",", var.subnet_ids)}"]
load_balancers = ["${aws_elb.someapp.id}"]
}
The important bits are:
- Both LC and ASG have create_before_destroy set
- The LC omits the “name” attribute to allow Terraform to auto-generate a random one, which prevents collisions
- The ASG interpolates the launch configuration name into its name, so LC changes always force replacement of the ASG (and not just an ASG update).
- The ASG sets “min_elb_capacity” which means Terraform will wait for instances in the new ASG to show up as InService in the ELB before considering the ASG successfully created.
The behavior when “var.ami” changes is:
(1) New “someapp” LC is created with the fresh AMI
(2) New “someapp” ASG is created with the fresh LC
(3) Terraform waits for the new ASG’s instances to spin up and attach to the “someapp” ELB
(4) Once all new instances are InService, Terraform begins destroy of old ASG
(5) Once old ASG is destroyed, Terraform destroys old LC
If Terraform hits its 10m timeout during (3), the new ASG will be marked as “tainted” and the apply will halt, leaving the old ASG in service.

I know this isn’t answering rohits question.

Doesn’t this also imply that all dependencies of the asg also need to have create_before_destroy then ?

I expect so.

that is how create_before_destroy works, isn’t it

for a 3az, 2 tier vpc

should I use https://github.com/cloudposse/terraform-aws-multi-az-subnets or https://github.com/cloudposse/terraform-aws-dynamic-subnets
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

ideally I will not have the same size public / private subnets

but have say 2/3 of the cidr for the vpc in private

The above method of blue/green TF deploys of new AMIs works well, fails safe

Only if you use ami baking and no docker though.

ASG events can trigger deploys of app code so the new AMIs pass ELB health checks, so as new instances come in healthy, old ones are rolled out. you don’t need to bake your app into the AMI, I generally wouldn’t suggest that

So the max_subnets as a way to calculate subnets used in https://github.com/cloudposse/terraform-aws-multi-az-subnets has an interesting side effect
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

a /22 sent to a 3az vpc would by default create subnets with /26 rather than /25

this is because the max_subnets default of 6 means that the /22 needs to fit 6*(number of tiers) subnets

so it’s in fact max_subnets per tier

not max per vpc

so max_subnets = 3 nets 8 subnets, so 6x /25’s

much less waste
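
The arithmetic, roughly: the module has to add enough bits to the /22 to fit max_subnets * tiers subnets, so 6 * 2 = 12 slots needs 4 extra bits (/26), while 3 * 2 = 6 slots needs only 3 (/25). A sketch of the equivalent call:
# 3 extra bits on a /22 yields /25 subnets: 8 slots, 6 used for 3 AZs x 2 tiers
cidr_block = "${cidrsubnet("10.0.0.0/22", 3, count.index)}"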

@Jan the two subnet modules were created to divide a VPC into private and public subnets according to some set of rules (different for the two modules). It was already brought to attention that they ‘waste’ IP space and in general don’t provide all possible solutions. We mostly use https://github.com/cloudposse/terraform-aws-dynamic-subnets for all our deployments and it works well
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

if you have some specific requirements on how to divide a VPC not covered by any of the modules, then a new module needs to be created

there are a million ways to divide a vpc

Im already all sorted thanks


cheers for following up though

which module did you end up using?

we have another one https://github.com/cloudposse/terraform-aws-named-subnets
Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

Have you got an example of using https://github.com/cloudposse/terraform-aws-dynamic-subnets to do a two tier 3az vpc?
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

two tier means private and public subnets?

yes

spanning 3 availability zones

but that’s what we use everywhere

latest example https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L46
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

maybe I missed something then

right

I see

cool thanks, will switch over to that one

and test

@maarten I am able to use the newly created launch template version to create the autoscaling group, but on the launch template there is an option to set the default version, which i am not able to set to the latest version using terraform

and no, i don’t get an error

I just don’t see an option to do it in terraform

@rohit here how we did it https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/main.tf#L97
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

not sure if that’s the correct way of doing it

(and that’s not suited for green/blue deployments as @jamie and @maarten discussed)

@Andriy Knysh (Cloud Posse) i saw this but this will only set the autoscaling group to use the latest version

it says here that you can use latest
or default
https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#launch_template-1
Provides an AutoScaling Group resource.

and the syntax shown here https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#with-latest-version-of-launch-template
Provides an AutoScaling Group resource.
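
i.e. on the ASG side, roughly (the $$ escapes a literal $ in terraform 0.11 strings):
resource "aws_autoscaling_group" "default" {
  # ...
  launch_template = {
    id      = "${aws_launch_template.default.id}"
    version = "$$Latest" # or "$$Default" to track the template's default version
  }
}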

yes, that is on the autoscaling group

so i guess it should be version = "$$Default"

but i am talking about setting the default version on launch_template

which has nothing to do with autoscaling group

where in https://www.terraform.io/docs/providers/aws/r/launch_template.html do you see that attribute?
Provides an EC2 launch template resource. Can be used to create instances or auto scaling groups.

Maybe it’s not configurable, but something the WEBUI team came up with after smoking something. Just deep dive here https://github.com/aws/aws-sdk-go to see if it’s in their api.
AWS SDK for the Go programming language. Contribute to aws/aws-sdk-go development by creating an account on GitHub.

aws ec2 modify-launch-template --launch-template-id "lt-0444afefe36b9f2c0" --default-version "1" --region eu-central-1

ok, so there is a command line for it, that helps

ok, so you can set the default version with it, meaning that if you launch a template without giving the version it will take the default one. I’m not so sure why you would want that with Terraform as you control the whole chain with Terraform anyway and thus can pick the launch configuration you made with Terraform.

yea thanks @maarten

the default version will be used if you don’t specify any version

makes sense

thanks

anyway it’s better to recreate the template and ASG if any changes

@Andriy Knysh (Cloud Posse) is this subnet naming difference intentional?
https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/private.tf#L15
https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/public.tf#L15
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

ending up with x-xx-subnet-private-eu-central-1b
& x-xx-xxx-public-eu-central-1b

x, xx, xxx being vars i’m setting: namespace, stage, name

no, it’s a bug (regression after the last updates to the module)

thanks for finding it


just looked out of place

Far nicer subnet module btw

still passing max_subnet_count = 3

also decided on availability_zones = ["${slice(data.aws_availability_zones.available.names, 0, 3)}"]

for now

yea

but im getting expected results

it was designed to take all AZs into account

yea I saw

that’s why it divides into smaller subnets

which I guess makes sense

so I mean thats 100% valid in that context

though I don’t use any region with less than 3az and build my k8s clusters around that choice

will explore that later

an easy change

Just got a vpn gw and cgw working



Is it possible to create a target group without creating elb ?
2018-12-12

Hi there, I’m new to terraform (used it to make an ec2 instance). I’m trying to use this module: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn …well, to create cloudfront with an s3 source; we don’t have any domain, and with this code:
module "cdn" {
source = "cloudposse/cloudfront-s3-cdn/aws"
version = "0.3.5"
stage = "dev"
name = "app"
namespace = "pg"
}
I get this error:
...
module.cdn.data.aws_iam_policy_document.origin: Refreshing state...
module.cdn.module.logs.aws_s3_bucket.default: Creation complete after 4s (ID: pg-dev-pg-app-logs)
Error: Error applying plan:
1 error(s) occurred:
* module.cdn.aws_s3_bucket.origin: 1 error(s) occurred:
* aws_s3_bucket.origin: Error putting S3 CORS: MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
status code: 400, request id: 8E821E105B6852CA, host id: F+zK01RI/I3BcuzlnK+nRLEdvLz4G4bRkJgGutEYI8fS4iBNTGw7UGLWik+GtLcCyvqXQxMcecU=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
this sounds quite cryptic to me, any idea where to start digging?
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

@ikar have you run this with debug logging turned on?

nope, didn’t know about the possibility, thanks! But it seems the module requires at least cors_allowed_origins = ["*"]
. Though I hit another problem which probably can’t be solved using that module, will write s3+cdn by myself from scratch.
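
(For reference, the workaround above amounts to adding one argument to the module call:)
module "cdn" {
  source    = "cloudposse/cloudfront-s3-cdn/aws"
  version   = "0.3.5"
  stage     = "dev"
  name      = "app"
  namespace = "pg"

  # works around the MalformedXML CORS error above
  cors_allowed_origins = ["*"]
}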

Have you looked at the example? https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/example/main.tf#L6-L19
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

It will be coming from https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L61-L67
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

yes, that is way more complicated than what I need (no route 53). also don’t need logging

you probably do want logging…

https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#cors_rule cors_rule is optional
Provides a S3 bucket resource.

but I’m thinking https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L64 maybe breaking if nothing at all is set
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

not at this point. yes, it says it is optional, but ends with the error above

Turn on debug logging and see if there is a broken looking cors config

how to do that?

Terraform has detailed logs which can be enabled by setting the TF_LOG environment variable to any value. This will cause detailed logs to appear on stderr
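
e.g. (TF_LOG_PATH sends the output to a file instead of stderr):
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform plan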

holy cow! that’s a lot of debug info

hahah yea debug is brutal

@ikar if you still having the issue, paste your complete code here and we’ll take a look

Welcome to the community btw :)

thanks @Andriy Knysh (Cloud Posse)! creating custom module, but thanks

Are you creating Cloudfront with plain S3 bucket as origin, or as a website?

Cloudfront with plain S3 bucket as origin

@ikar it’ll be easier for the community to help you modify one of the existing modules (and more useful to the community as a whole) vs writing a brand new module from scratch. Good luck either way though!!

I wanted to try custom solution to learn about terraform. The thing is I wasn’t able to setup CDN behaviour correctly using this: <https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn>

by “correctly” I mean as requested by a project manager


@ikar if you can go into more detail of what the module is failing to configure maybe we can help you. Possibly by turning a current static configuration into something more dynamic for your and others needs.

thanks @davidvasandani, already achieved what I needed with using pure resources. Next time, I’ll try to be more patient. I’m in EU timezone so I’m usually exhausted when the community is waking up


this combined with atlantis would be a great way to allow community to contribute new modules

literally open up a PR for the terraform module repo

get it approved, merged

then contribute your changes

nifty idea!



Terraform AWS Permissions Generator. Contribute to Stretch96/terraform-aws-permissions-generator development by creating an account on GitHub.

Interesting idea


Along with Netflix Aardvark and Repokid could prove to be a sweet trio

access profiling and automated and ongoing right-sizing. We allow developers to deploy their applications with a basic set of permissions and then use profiling data to remove permissions that are demonstrably not used.

I’ve wanted to do exactly this!

I was thinking it would be nice to run terraform
through an AWS API proxy and then parse the requests to figure out the canned permissions

terraform-aws-permissions-generator
seems kind of like that

https://github.com/duo-labs/cloudmapper has a super interesting feature with audit usage
CloudMapper helps you analyze your Amazon Web Services (AWS) environments. - duo-labs/cloudmapper

Being someone that has always understood infrastructure in a visual and conceptual way, it’s pretty cool

I have written a TF module for cloudmapper to run in Fargate for a client.

So I can provide that if you want something to start on?

Would absolutely love to take a look!

Still exploring it

Interactive visualizations of Terraform dependency graphs using d3.js - 28mm/blast-radius

love the authors bio: “Tool-using primate, proficient typist, and Seattle-resident Systems Administrator.”
Interactive visualizations of Terraform dependency graphs using d3.js - 28mm/blast-radius

Hahahaha, my kids dude

Is also pretty neat

@Jan oh, that’s interesting we just had someone last week run… terraform destroy --force
on our folder that contains all of our dev infrastructure

Anyone have any recommendations for making that as least destructive as possible? It stopped them when the IAM role they were using was deleted.

Took us 30 mins to recover so it was a good exercise

not really… maybe more sandbox accounts

terraform destroy --force
on our folder that contains all of our dev infrastructure

ya, one option is to separate the dev folders by teams


though wait, you have the entire dev
infra defined in one project?

yea, exactly, more projects ~ states

it’s a single terraform project with about 2 dozen modules

terraform state list | wc
571

we’ve moved away from large projects mostly because they (a) take FOREVER to plan (b) have a huge blast radius

we have basically been waiting for it to get slow to plan/apply, but it has been fine so we had no reason to split it out.

but after the deletion that might give us a good reason

and it will also improve atlantis locks

yea, exactly

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

here’s roughly how we organize everything

this will vary from company to company, team to team

each one of those folders in aws
corresponds to a project

coldstart is pretty painful. we go into each folder and plan/apply

looking into https://github.com/uber/astro
Astro is a tool for managing multiple Terraform executions as a single command - uber/astro

~Have you played with terragrunt
?~ I’d be curious to hear how it compares with astro
. I’m a big terragrunt fan

i like what terragrunt achieves

i’m not particularly fond with how it achieves it.

i don’t like the overloading of tfvar


….

i do like task runners. i use those all the time

make, rake, etc.

i don’t like that make isn’t very “pipeline” oriented. it’s too close to a language, so people abuse it (us included!)

what i like about astro is it presents a pipeline like a build pipeline. do this, then do that. and it does it in YAML, which all my other pipelines are in.

for the record, haven’t used astro yet. just been pondering the approach almost everyday for the past few weeks.

re: astro, ditto.

ya, we essentially group our “projects/services” as modules unto themselves in their own git repo.

So the main state just references modules with their variables.

yea, that’s nice

we use that setup to have CI tests that test out the modules on changes.

are you using plain terraform for everything?

no #terragrunt

Correct, no terragrunt


The terraform init
command is used to initialize a Terraform configuration. This is the first command that should be run for any new or existing Terraform configuration. It is safe to run this command multiple times.

I have never been overly fond of terragrunt.

same

but this terraform init -from-module=....
is top of mind

(incidentally, something terragrunt does use under the hood)

ya, that’s interesting functionality

so i’d like to be able to define an architecture more or less without writing a line of terraform code

assuming all the modules already exist

anyways, that’s what I think I want. not sure yet.

i have a call with Dans @ Uber next tuesday to learn more about how they use Astro at uber.

nice

Got ya, so define the modules and make them more resuable with a simple abstraction on top.

yea… and since we’re both helmfilers, basically something like #helmfile

Take your time reviewing this monster PR: https://github.com/cloudposse/terraform-aws-eks-cluster/pull/7
There was a small tweak needed in the README for it to work. // CC @osterman @aknysh

Hi guys, do we have a terraform module for cognito user pool? If not, do we have any plans for it in the future?

ah, bummer! no not yet.

Get list of cognito user pools.

sorry…that’s data.

Provides a Cognito User Pool resource.

that’s a lot of args. LOL. I can see where a module would help.



anyone k8s with EKS? trying to get my bearings and the auth is winning right now

I saw your PR

have you looked at the examples/complete

that was tested as of a couple of days ago

i didn’t. checking

just pushed a fix, btw

thx!

i have it running. it worked flawlessly

@Andriy Knysh (Cloud Posse) is most familiar with it

Unauthorized
is my life right now. i will go back through the setup steps

got it. def’ was a config thing from the yaml output. i needed to use the AWS_PROFILE
env var or update the config yaml to pass it into aws-iam-authenticator


awesome!

is this a skunkworks project to get off of beanstalk?

2018-12-13

somewhat, yeah. it is to deploy our new jenkins setup and a way to open eyes to using k8s

hi everyone, I am developing a terraform provider for Ansible AWX (a.k.a. Tower).

if anyone is interested or wants to contribute to the project, it is warmly welcomed

Terraform provider plugin for AWX/Tower - API v2. Contribute to mauromedda/terraform-provider-awx development by creating an account on GitHub.

@deftunix looks interesting, thanks for sharing. I’ll take a look in more details

what are people using instead of beanstalk? are you just creating web instances on a docker cluster and then using api gateway or something

@pericdaniel so depending on your requirements and many other considerations, and speaking only about AWS, it could be Beanstalk (but it’s like creating a cluster for each app), ECS, EKS, or kops
. You can deploy any web app on a Kubernetes cluster, no need to use API gateway (although it could be used, but serves diff purpose). You deploy the app containers on k8s, use nginx-ingress
for ingress, and ELB/ALB and Route53 for routing/DNS

for ECS, take a look at the very nice implementation here https://airship.tf (thanks to @maarten)
Home of Terraform Airship

This is work in progress people don’t refer to it yet
Home of Terraform Airship


interesting

thank you!

also, Fargate
could be a nice solution in some cases

so it’s one of Beanstalk, ECS, Fargate (currently just for ECS), EKS, kops - each has pros and cons

yea that makes sense

@Andriy Knysh (Cloud Posse) do you know why workers would not have access to the cluster when running terraform-aws-eks-cluster/workers?

“No resources found.” has none found

so yes, take a look here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

yeah

EKS currently does not support for the master to join the cluster automatically

you have to apply the config

i did

the configmap, right?

it could be done manually, or automatically

If you want to automatically apply the Kubernetes configuration, set var.apply_config_map_aws_auth
to “true”
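
A sketch of that flag on the cluster module call (note it’s a quoted string, which bites people later in this thread):
module "eks_cluster" {
  source = "git::<https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master>"
  # ...
  apply_config_map_aws_auth = "true"
}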

yes configmap

configmap/aws-auth unchanged

still no nodes found

do i need to restart the workers after?

i did not restart them

Does anyone here use cloudflare for dns cert validation ?

Everytime i run a plan i see that it wants to update in-place the entry in cloudflare
cloudflare_record.dns_cert_validation

so it doesn’t make sense to me why you do the DNS validation for ACM with cloudflare

cloudflare has its own TLS certificates

and cloudflare works like MITM not passthru

when we generate a cert in aws we want to do dns validation

for which we are using cloudflare

@Erik Osterman (Cloud Posse) The issue i am facing is mentioned here https://github.com/terraform-providers/terraform-provider-cloudflare/issues/154
Terraform Version 0.11.10 Affected Resource(s) Please list the resources as a list, for example: cloudflare_record aws_acm_certificate Terraform Configuration Files variable "domain" {} v…

@johncblandii when you run kubectl get nodes
using the generated kubeconfig
file, what do you see?

no nodes

No resources found.

and when i tried to add nginx i get:
Warning FailedScheduling 25m (x4 over 25m) default-scheduler no nodes available to schedule pods

ok, i need to find my records how i did it, will ping you

ok

I’m a relative noob on k8s so I def’ appreciate the help

recreating my workers with a key so i can ssh and muck around

@johncblandii from my notes:

first, run aws-iam-authenticator token -i cpco-testing-eks-cluster
to check if aws-iam-authenticator token
works

then I think your configmap was not applied for some reason. You can manually do it by executing kubectl apply -f config-map-aws-auth-cpco-testing-eks-cluster.yaml --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml

got json back on the token

looks good

i did the apply

configmap/aws-auth unchanged

finally, kubectl get nodes --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml
to see the nodes

still a no go

can you try to delete the config map from the cluster, and then exec kubectl apply -f config-map-aws-auth-cpco-testing-eks-cluster.yaml --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml

kubectl get configmaps --all-namespaces --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml

run the command and show the output

NAMESPACE NAME DATA AGE
ingress-nginx nginx-configuration 1 2h
ingress-nginx tcp-services 0 2h
ingress-nginx udp-services 0 2h
kube-system aws-auth 1 18s
kube-system coredns 1 10h
kube-system extension-apiserver-authentication 5 10h
kube-system kube-proxy 1 10h

what can i run on a worker to verify it can communicate w/ the controlplane?

can you go to the AWS console and check if the worker instances are OK?

they are

all 3 running

also, when you run kubectl apply -f config-map-aws-auth-cpco-testing-eks-cluster.yaml --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml
, what’s the output?

configmap/aws-auth created

(you need to use your file name, but you know that )

yup

hmm, out of ideas for now, will have to spawn a cluster and check it. @Erik Osterman (Cloud Posse) any ideas here?

ever work from the worker to test config?

# kubectl cluster-info dump
The connection to the server localhost:8080 was refused - did you specify the right host or port?

when testing, i applied the map, nodes joined, and i deployed some app

you need to use --kubeconfig
for all commands

dang. that’s weird

here’s the TF…1 sec

module "eks_cluster" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master>"
namespace = "${local.namespace}"
name = "${var.application_name}"
stage = "${var.stage}"
tags = "${var.tags}"
subnet_ids = ["${module.vpc.public_subnets}"]
vpc_id = "${module.vpc.vpc_id}"
# `workers_security_group_count` is needed to prevent `count can't be computed` errors
workers_security_group_ids = ["${module.eks_workers.security_group_id}"]
workers_security_group_count = 1
}
module "eks_workers" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master>"
namespace = "${local.namespace}"
stage = "${var.stage}"
name = "${var.application_name}"
tags = "${var.tags}"
key_name = "jcbii-ops-prod"
instance_type = "m5.xlarge"
associate_public_ip_address = true
vpc_id = "${module.vpc.vpc_id}"
subnet_ids = ["${module.vpc.public_subnets}"]
health_check_type = "EC2"
min_size = 3
max_size = 5
wait_for_capacity_timeout = "10m"
cluster_name = "${local.full_name}"
cluster_endpoint = "${module.eks_cluster.eks_cluster_endpoint}"
cluster_certificate_authority_data = "${module.eks_cluster.eks_cluster_certificate_authority_data}"
cluster_security_group_id = "${module.eks_cluster.security_group_id}"
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = "true"
cpu_utilization_high_threshold_percent = "80"
cpu_utilization_low_threshold_percent = "20"
}

do i need a different config here?

no, that worked


/usr/bin/aws-iam-authenticator token -i /var/lib/kubelet/kubeconfig
this shows the json output w/ a token as well

# kubectl get pods --kubeconfig /var/lib/kubelet/kubeconfig
Error from server (Forbidden): pods is forbidden: User "system:node:" cannot list pods in the namespace "default": unknown node for user "system:node:"

is that the same kubeconfig that was generated?

no, i’m on the worker

can you check this https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
This chapter covers some common errors that you may see while using Amazon EKS and how to work around them.

checking

arn is valid

…for the IAM role in the configmap

cluster name matches on worker

this is going to be something really stupid

take a look here https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#worker-node-access-to-eks-master-cluster and please go to the AWS console and confirm that you have two SGs, one for the cluster and another for the workers, and that they both allow the other group to access
Using Terraform to configure AWS EKS.

that sg is in your module already, right?

yes

but i want you to confirm that the rules are ok (and there was no regression when we updated the modules)

checking

it is set to all->all

both directions?

from cluster to workers and from workers to cluster?

yup

outbound is all->all 0.0.0.0

maybe a vpc issue. 1 sec

they are in the same VPC?

cluster and workers?

yeah

i used a custom module built off the public one (not the one in your example)

going to use the setup from your complete

ok, please test it, if still issues, let me know and I’ll spawn a cluster and go through everything again

ok…it is recreated with essentially the same TF as your complete example

module "vpc" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=master>"
namespace = "${local.namespace}"
stage = "${var.stage}"
name = "${var.application_name}"
tags = "${local.tags}"
cidr_block = "${var.cidr_block}"
}
module "subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master>"
availability_zones = "${data.aws_availability_zones.available.names}"
namespace = "${local.namespace}"
stage = "${var.stage}"
name = "${var.application_name}"
tags = "${local.tags}"
region = "${data.aws_region.current.name}"
vpc_id = "${module.vpc.vpc_id}"
igw_id = "${module.vpc.igw_id}"
cidr_block = "${module.vpc.vpc_cidr_block}"
nat_gateway_enabled = "true"
}
module "eks_cluster" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master>"
namespace = "${local.namespace}"
name = "${var.application_name}"
stage = "${var.stage}"
tags = "${var.tags}"
subnet_ids = ["${module.subnets.public_subnet_ids}"]
vpc_id = "${module.vpc.vpc_id}"
# `workers_security_group_count` is needed to prevent `count can't be computed` errors
workers_security_group_ids = ["${module.eks_workers.security_group_id}"]
workers_security_group_count = 1
}
module "eks_workers" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master>"
namespace = "${local.namespace}"
stage = "${var.stage}"
name = "${var.application_name}"
tags = "${var.tags}"
key_name = "jcbii-ops-prod"
instance_type = "m5.xlarge"
associate_public_ip_address = true
vpc_id = "${module.vpc.vpc_id}"
subnet_ids = ["${module.subnets.public_subnet_ids}"]
health_check_type = "EC2"
min_size = 3
max_size = 5
wait_for_capacity_timeout = "10m"
cluster_name = "${local.full_name}"
cluster_endpoint = "${module.eks_cluster.eks_cluster_endpoint}"
cluster_certificate_authority_data = "${module.eks_cluster.eks_cluster_certificate_authority_data}"
cluster_security_group_id = "${module.eks_cluster.security_group_id}"
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = "true"
cpu_utilization_high_threshold_percent = "80"
cpu_utilization_low_threshold_percent = "20"
}

converting to a closer usage of complete by using the labels

weird. using complete
directly resulted in:
Apply complete! Resources: 35 added, 0 changed, 0 destroyed.
Outputs:
config_map_aws_auth =
eks_cluster_arn =
eks_cluster_certificate_authority_data =
eks_cluster_endpoint =
eks_cluster_id =
eks_cluster_security_group_arn =
eks_cluster_security_group_id =
eks_cluster_security_group_name =
eks_cluster_version =
kubeconfig =
workers_autoscaling_group_arn =
workers_autoscaling_group_default_cooldown =
workers_autoscaling_group_desired_capacity =
workers_autoscaling_group_health_check_grace_period =
workers_autoscaling_group_health_check_type =
workers_autoscaling_group_id =
workers_autoscaling_group_max_size =
workers_autoscaling_group_min_size =
workers_autoscaling_group_name =
workers_launch_template_arn =
workers_launch_template_id =
workers_security_group_arn =
workers_security_group_id =
workers_security_group_name =

@Andriy Knysh (Cloud Posse) any idea why it doesn’t even attempt to run the eks cluster?

i copied example/complete, created a tfvars file with customizations, and ran it

hmmm…. when you run the example, it should create everything, tested it many times

what do you mean by “it doesn’t even attempt to run the eks cluster”?

i downloaded files, created tfvars, ran apply, and it only did the vpc parts

the only change was the path to the eks_cluster module

plan is empty (no changes) and output has no value

this is weird

ok, can you clone the repo to your computer, go to the example/complete
folder, update variables.tf
with your values, and run terraform plan
from the example folder

pretty much exactly what i did, but i’ll do it from a clone

(the example uses the cluster module itself, which is in the repo)

yup

Plan: 35 to add, 1 to change, 0 to destroy.

no eks cluster in there

too big to post. 1 sec

+ module.subnets.aws_eip.default[0]
+ module.subnets.aws_eip.default[1]
+ module.subnets.aws_eip.default[2]
+ module.subnets.aws_nat_gateway.default[0]
+ module.subnets.aws_nat_gateway.default[1]
+ module.subnets.aws_nat_gateway.default[2]
+ module.subnets.aws_network_acl.private
+ module.subnets.aws_network_acl.public
+ module.subnets.aws_route.default[0]
+ module.subnets.aws_route.default[1]
+ module.subnets.aws_route.default[2]
+ module.subnets.aws_route_table.private[0]
+ module.subnets.aws_route_table.private[1]
+ module.subnets.aws_route_table.private[2]
+ module.subnets.aws_route_table.public
+ module.subnets.aws_route_table_association.private[0]
+ module.subnets.aws_route_table_association.private[1]
+ module.subnets.aws_route_table_association.private[2]
+ module.subnets.aws_route_table_association.public[0]
+ module.subnets.aws_route_table_association.public[1]
+ module.subnets.aws_route_table_association.public[2]
+ module.subnets.aws_subnet.private[0]
+ module.subnets.aws_subnet.private[1]
+ module.subnets.aws_subnet.private[2]
+ module.subnets.aws_subnet.public[0]
+ module.subnets.aws_subnet.public[1]
+ module.subnets.aws_subnet.public[2]
+ module.vpc.aws_internet_gateway.default
+ module.vpc.aws_vpc.default
+ module.subnets.module.nat_label.null_resource.default
+ module.subnets.module.private_label.null_resource.default
+ module.subnets.module.private_subnet_label.null_resource.default
+ module.subnets.module.public_label.null_resource.default
+ module.subnets.module.public_subnet_label.null_resource.default
+ module.vpc.module.label.null_resource.default

and ~ module.subnets.data.aws_vpc.default

that’s from the clone

providers:
├── provider.aws
├── provider.local
├── provider.null
├── module.cluster_label
├── module.eks_cluster
│   ├── provider.aws (inherited)
│   ├── provider.template
│   └── module.label
├── module.eks_workers
│   ├── provider.aws (inherited)
│   ├── provider.template
│   ├── module.autoscale_group
│   │   ├── provider.aws (inherited)
│   │   ├── provider.null
│   │   └── module.label
│   └── module.label
├── module.label
├── module.subnets
│   ├── provider.aws (inherited)
│   ├── module.nat_label
│   │   └── provider.null
│   ├── module.private_label
│   │   └── provider.null
│   ├── module.private_subnet_label
│   │   └── provider.null
│   ├── module.public_label
│   │   └── provider.null
│   └── module.public_subnet_label
│       └── provider.null
└── module.vpc
    ├── provider.aws (inherited)
    └── module.label
        └── provider.null

true vs “true”
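
(For context, this is the 0.11 string-boolean gotcha: a bare true passed where a module compares against the string "true" makes every count evaluate to 0, so those resources silently drop out of the plan. A minimal sketch of the pattern, using null_resource as a stand-in:)

variable "enabled" {
  # keep this a string; a bare boolean does not compare consistently in 0.11
  default = "true"
}

resource "null_resource" "default" {
  # 0 when enabled is anything other than the string "true"
  count = "${var.enabled == "true" ? 1 : 0}"
}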

applying again

this has been a rough 24 hours. eks is not my friend right now

i’ll direct message you my test repo, which is exact copy of the cloudposse example

just ran terraform plan
on it: Plan: 65 to add, 1 to change, 0 to destroy.

hope it will work for you

it worked. map didn’t run because of “true” vs true again, but doing that now

so will verify the cluster in a sec

NODES!!!!


Has anyone here experienced an issue with permissions after generating an aws key pair through terraform? I have an elastic beanstalk environment and application that needs an ssh key attached. everything runs smoothly if i generate the key pair through the console and specify the hard-coded name to EC2KeyName. however, if I generate the aws key pair using https://github.com/cloudposse/terraform-aws-key-pair, i get an error about the role policy not having the proper permissions
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

here’s the error message:
Environment health has transitioned from Pending to Severe. ELB processes are not healthy on all instances. Initialization in progress (running for 11 minutes). None of the instances are sending data. Access denied while accessing Auto Scaling and Elastic Load Balancing using role "arn:aws:iam::<>:role/aws-elasticbeanstalk-service-role". Verify the role policy. ELB health is failing or not available for all instances.

Hrm… that’s really odd

my gut tells me the relationship is a red herring

could the key be getting generated in a region other than the beanstalk environment?

@johncblandii are you using our terraform-aws-key-pair
module with your beanstalk environments?

No. We lock down our instances.

appreciate the quick response guys. the region is definitely the same, and I can see it in the console after it’s created

is this a depends_on issue? the key is generated after the beanstalk env?

…basically a race issue

i reference the name of the key from the module, so that doesn’t seem to be the case. i also generated the graph using terraform graph
, it looks correct that way

let me try depends_on to be 100% sure though

if you gen the key first then the beanstalk, does it work

depends_on won’t work on modules until .12

< and by “gen key” i mean with tf apply -target

let me give that a shot

@johncblandii ah yes, explicitly generating the key with tf apply -target
beforehand, then running tf apply
succeeds. any recommendations on creating module dependencies? i thought implicit dependencies came from any variable references

@Erik Osterman (Cloud Posse) has a link he has to share with us all who hit this problem.

.12 can’t come fast enough

yea, seriously

Haha, here’s my goto link: https://docs.cloudposse.com/faq/terraform-value-of-count-cannot-be-computed/

that’s it.

It’s more about “value of 'count' cannot be computed”, but the two-phased approach is mentioned there
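
(A minimal sketch of that two-phased approach; the module name ssh_key_pair here is hypothetical:)

# phase 1: create just the key pair so its attributes are known values
terraform apply -target=module.ssh_key_pair
# phase 2: the full apply can now compute every count
terraform apply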

We often have a Makefile
in our projects

awesome, thanks for the link

why do the Makefile’s download a tf version? why not use my local one?

/Library/Developer/CommandLineTools/usr/bin/make terraform/install terraform/get-modules terraform/get-plugins terraform/lint terraform/validate
Installing Terraform 0.10.7 (darwin) from <https://releases.hashicorp.com/terraform/0.10.7/terraform_0.10.7_darwin_amd64.zip>
######################################################################## 100.0%
/Users/john.bland/Work/terraform-aws-efs/build-harness/vendor/terraform version
Terraform v0.10.7
Your version of Terraform is out of date! The latest version
is 0.11.10. You can update by downloading from www.terraform.io/downloads.html

so we sandbox everything in the build-harness

only way to have a chance of escaping the “works on my machine” syndrome

coolio

what command were you trying to run?

any reason it isn’t using the latest version?

what: Upgrade to latest version for newer terraform release. why: https://sweetops.slack.com/archives/CB6GHNLG0/p1544739118674300

@solairerove is helping us keep packages up to date until we can automate it

coolio

ready to approve

Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more - cloudposse/build-harness

thanks

make init

hrm

surprised we install terraform on make init

sorry…lint

aha, ok that make more sense.

we don’t have an automated process yet for opening PRs to bump versions

I want to add it to our cloudposse/packages

for now, i’ll open an issue and we’ll get that version bumped

gotcha

what’s the secret sauce to run before pushing?

oh, for the docs?

make readme

(assumes at some point you’ve run make readme/deps
)

yup. got that one

thought there was another too, but seems that did ok

fmt?

terraform fmt

clear there

check the .travis.yml
for all the steps

cool. just don’t want to waste your time when I put the PR up


ah, good to know

haha, appreciated it!

make init
make readme/deps
make readme

cool

Added a basic example as well

thoughts?

pushed a quick update on the readme

btw, example might need some work

let me actually make those changes

Thanks

np

pushed

ran it within my eks work and it worked



All facts

@Dombo thanks!

Okay I’m going crazy

Why isn’t this working

variable "allowed_ports" {
  type        = "list"
  description = "List of allowed ingress ports"
  default     = ["22", "88"]
}

from_port = "${element(var.allowed_ports, count.index)}"
to_port   = "${element(var.allowed_ports, count.index)}"

It only adds port 22 to sg

use count

show the whole module where you have
from_port = "${element(var.allowed_ports, count.index)}"
to_port = "${element(var.allowed_ports, count.index)}"

Ah let me add that

so it should be something like here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L75
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

you use count
in the resource, and then you can use count.index
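
(A minimal sketch of that fix; the variable names and the wide-open CIDR are illustrative only:)

variable "allowed_ports" {
  type    = "list"
  default = ["22", "88"]
}

variable "security_group_id" {}

resource "aws_security_group_rule" "ingress" {
  # one rule per port; without count, only a single element would ever be used
  count             = "${length(var.allowed_ports)}"
  type              = "ingress"
  from_port         = "${element(var.allowed_ports, count.index)}"
  to_port           = "${element(var.allowed_ports, count.index)}"
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${var.security_group_id}"
}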

I think I got it!

Having some other route table shenanigans now

I know what’s up

Do you have an example of running a bunch of Linux commands on a box through a tpl file?

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Thank you!

Contribute to cloudposse/terraform-aws-user-data-cloud development by creating an account on GitHub.

Contribute to cloudposse/terraform-aws-user-data-assets development by creating an account on GitHub.

How do you know when to base64encode?

@Andriy Knysh (Cloud Posse) I want to user-data this

### Gitlab bootstrap
#!/bin/bash
# Install GitLab
sudo apt-get update
sudo apt-get install -y curl openssh-server ca-certificates
apt-get update
echo 'postfix postfix/mailname string ${1}' | debconf-set-selections
echo 'postfix postfix/main_mailer_type string "Internet Site"' | debconf-set-selections
# Get Repository
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash
sudo EXTERNAL_URL="http://gitlabtest.com" apt-get install gitlab-ee

anyone here send EC2 instance status events (cloudwatch + sns +lambda) to slack?

maybe checkout this aws-to-slack lambda… it supports the AWS Health Dashboard, https://github.com/arabold/aws-to-slack#what-is-it
Forward AWS CloudWatch Alarms and other notifications from Amazon SNS to Slack. - arabold/aws-to-slack

wrote a tf module based around that lambda function… https://github.com/plus3it/terraform-aws-slack-notifier
Terraform module that builds and deploys a lamdbda function for the aws-to-slack package. - plus3it/terraform-aws-slack-notifier

like those hardware failure notices, scheduled maintenance, etc

we have the module for sns<->slack

we’re missing the cloudwatch side

@Erik Osterman (Cloud Posse) have you setup an EC2 health check feed into slack before?

No, nothing handy.
@Erik Osterman (Cloud Posse) have you setup an EC2 health check feed into slack before?

@jamie or @maarten might have something

hot off the press! https://github.com/cloudposse/tfenv
Transform environment variables for use with Terraform (e.g. TF_VAR_lower_case
) - cloudposse/tfenv

nice!

Is there a way to define the order in which output variables display ?

not really

i think that if you put them all in the same file, they will output in the order they appear (but just a hunch)

they don’t appear in the same order

hrm…

then I don’t think there is a way “out of the box”

not a big deal, thought there would be something already available to do this

you can emit them in json
and then do what ever you want (E.g. with jq
)

terraform output -json | jq ...

(or something like that)

we do something like this, but with terraform-docs
; https://github.com/cloudposse/test-harness/blob/master/tests/terraform/00.output-descriptions.bats
Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness

nice

that works for me

thanks


yes it is
2018-12-14


Question: when moving state (due to a refactor of my files/approach), what’s the proper format? I’m trying to automate it with a simple ruby script.
Inside is basically the efs, eks_cluster, and eks_workers module in the modules/cluster and the route53 module in modules/route53. Previously I ran the whole thing from the modules folder as it wasn’t expected to grow in the way it did, but it continues to grow so I moved them to modules with a root level project to kick it all off.
Any thoughts would be appreciated.
New structure:
├── modules
│   ├── cluster
│   │   ├── efs.tf
│   │   ├── kubectl.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── route53
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
└── main.tf
Old structure:
├── cluster
│   ├── efs.tf
│   ├── kubectl.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── route53
    ├── main.tf
    ├── outputs.tf
    └── variables.tf

argh, moving state en masse is a pain if you have lots of resources

What do you mean by proper format?

the mv format

Still not sure I follow. The format is as defined when you do a terraform state list


terraform state mv module.certificate.aws_route53_zone.default module.MY_RENAMED_MODULE.aws_route53_zone.default

just moving the files doesn’t necessarily mean they will move in the state…

Indeed

The statefile has no idea how your files are organised. If you have changed resource names and don’t want Terraform to throw the old ones away and create your new ones, use state mv
for example

yeah, renaming resources or modules inside your terraform config is when state mv
comes into play. avoid that, and you should still be golden, regardless of how you organize the files

so I’m moving the state as well

state was in cluster/. I’m moving it to the root

local state, just a file in the repo? (i.e. no backend?)

correct. just working locally right now

then you ought to be able to just move the file and it should still work

I did and it wanted to create 60+ things

then you renamed modules or nested them or something

right

that’s why I’m looking for the proper syntax to move the state

specifically when it is nested

yep, i hit this issue before too, struggled for a long time to figure out the nested address… hmm….

what i think i did before is build it one time using the new layout in a new account, then used state list
to get the new state addresses, then i was able to map those to the old state addresses and use state mv
to move everything

but the general format for the nested module is something like this:
module.<module_name>.module.<nested_module_name>.<resource_type>.<resource_name>

you might be able to move whole modules, which mostly worked for me but still left a couple cleanup items…
terraform state mv module.<module_name> module.<module_name>.module.<nested_module_name>

ok cool. i’ll give it a whirl after this meeting and report back. thx

Have you renamed any of your resources? e.g. aws_instance.foo
-> aws_instance.bar
?

nah, resources are the same except the root now has a module wrapping the module/cluster

so module cluster -> modules/cluster -> eks, efs, etc

module route53 -> modules/route53 -> route53

@jamie I noticed that “test kitchen” is by New Context. Have you used it much over there?

Sadly no. The companies I’m currently working with are not coming from a test-driven design background, so it’s not come up. But I read through the docs for this. And found a few others that were recommended through reddit threads.

Yes - precisely

We want to have a standard test-suite (plan, apply, destroy), in addition to eventually custom functional tests

Have you touched on it yet? Do you want to wrap it into the modules we are creating?

I think the common tests will be external

but be seamlessly integrated using the build-harness

Here’s an example of what tests might look like

Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness

Team, I have a question, do you guys know if it is possible to create a file in a terraform run and in the same run, execute it ?

I’m using null_resource and local-exec

but looks like it is not working

resource "null_resource" "test_file" {
provisioner "local-exec" {
command = <<EOT
cat > test.sh << EOF
echo "Success"
EOF
EOT
}
}
resource "null_resource" "run_file" {
provisioner "local-exec" {
command = "bash test.sh"
}
depends_on = [
"null_resource.test_file",
]
}

@rbadillo do you mean that when run_file
gets executed, it could not find test.sh
(depends_on = ["null_resource.test_file",] not working)? or some other errors?

it can’t find it

the file?

correct

can you try it on your side, please?

to create local files, better to use https://www.terraform.io/docs/providers/local/r/file.html
Generates a local file from content.

example https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf#L18
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
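
(A minimal sketch of that approach for the case above; interpolating the local_file attribute gives the second resource an implicit dependency, so no depends_on is needed:)

resource "local_file" "test_file" {
  content  = "echo \"Success\"\n"
  filename = "${path.module}/test.sh"
}

resource "null_resource" "run_file" {
  provisioner "local-exec" {
    # referencing local_file.test_file.filename orders this after the file exists
    command = "bash ${local_file.test_file.filename}"
  }
}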

amazing

exactly what I’m doing

EKS stuff

thanks a lot

I can vouch for the EKS stuff

2018-12-15

i thought there was a magic variable (like path.module
) which referred to the module name itself (e.g. github repo)

anyone have a tip to get it?

e.g. inside the terraform-aws-eks-cluster
module, is it possible to know I’m inside terraform-aws-eks-cluster
?

Would path.cwd
work?

Do you actually want to know the directory name, which could change if someone cloned into a custom directory, or do you want it to always be a specific value for a given module? If the latter, probably a local would be needed?

in this case, wanting to publish artifacts (zips of lambdas) to artifacts.cloudposse.com/<module-name>/<git hash>/lambda.zip

Maybe split path.module
on /
and take the last element?

Oh, or just basename(path.module)
, better for cross-platform support too

but modules are checked out to hashed folders

(probably git hash)

turns out it is not the git hash

..checked out by terraform init

Then back to a local you define in each module :p

yea, that may be the best bet

I really like this one - https://github.com/cloudposse/terraform-aws-ecs-container-definition , because I was missing some properties like secrets in this one - https://github.com/blinkist/terraform-aws-airship-ecs-service/tree/master/modules/ecs_container_definition . Good job with all these ECS pieces.
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

yea, someone just contributed the secrets piece last week (i think)

thanks

At the very least, you can get unblocked using a local, and then if someone figures out the magic sauce you can just adjust the local

yep, easy peasy to update local

@antonbabenko you can also just use my module in your atlantis one

I am really-really trying to use parts of your ECS modules in my current project. Some pieces fit well, but some I have to copy-paste.

That’s fine

Ask in the #airship channel in case you have questions

I will, but my case is rather straightforward. I really recommend adding documentation and working examples. Working examples with just a single terraform apply
is critical to get people on board.


Finishing the Per Module Guides now, after that I’ll work out some use-cases.

Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.

“Thank you for your testimonial!“

thanks anton!!!

Retrieves the content at an HTTP or HTTPS URL.

ugh - devil is in the details

The given URL may be either an http or https URL. At present this resource can only retrieve data from URLs that respond with text/* or application/json content types, and expects the result to be UTF-8 encoded regardless of the returned content type header.

so we cannot use it to download zips

unless we do something retarded and base64 encode them

Yep, I made an external module to create a local filecache to work around that, but it’s less than ideal for sure

Terraform module to retrieve and cache files. Contribute to plus3it/terraform-external-file-cache development by creating an account on GitHub.

aha, so you ended up writing a script in Python that you call from terraform

Yup

though the dependency on external pip modules makes it more difficult to use as a drop-in module

The requirements aren’t gonna change, you can pip install six and boto3 any way you like

I use a terragrunt hook myself, but any task runner could do it

thanks for this! you gave me an idea for how I’m going to attempt it



data "external" "curl" {
program = ["curl", "-sSL", "--write-out", "{\"success\": \"true\"}", "-o", "${local.output_file}", "${local.url}"]
}

while this still requires an external app (curl
), it’s installed by default on many systems.

Haha nice! Yeah, I had a hard req on s3 support and cross-platform

s3://
urls?

Yep

ah, yea, that makes it harder.

And I had already written the python module for another project, soooo

I should probably just publish that as its own package, I’ve reused it so many times now

i kind of wish terraform would support more escape hatches

it will reduce some of the pressure on them

Agreed

i wish there was a simple interpolation of exec

the expectation that data.external
needs JSON output should be loosened to text
or json

2018-12-16

published https://github.com/cloudposse/terraform-external-module-artifact to download binary artifacts from http(s) endpoints (e.g. useful for public lambda zips)
Terraform module to fetch any kind of artifacts using curl (binary and text okay) - cloudposse/terraform-external-module-artifact
2018-12-17

This will add a --case-sensitive switch to the following commands: delete, history, import, read. Notably changing the import behaviour to fail an import payload with mixed case keys. This foll…

Hi everyone. I’m trying to use https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment but I need to use a network load balancer.
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

The module has protocols set to HTTP and HTTPS set here: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/904dc86139a924538d7804b3a4e79db60936253e/main.tf#L548
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Is there any way to override these values from my main.tf given I’m importing the module with the source pointing at GitHub?

Hi @samant.maharaj welcome

looks like those are hardcoded

so if you want to add support for other values, they need to be converted to variables

you can clone/fork the repo and test with your values, then open a PR against the cloudposse repo

OK thanks. Still a little new to Terraform so I wasn’t sure if it was possible to override them after importing the module.

so yea, if you clone the repo to your computer, you can set the values to whatever you want for testing - the fastest way to test it

there is an example here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

if you cd
to examples/complete
, you could run terraform init/plan/apply
to test from the example


here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf#L46 change the source to
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

source = "../../"

to use the module’s code from your computer

@samant.maharaj does it not work when you pass in loadbalancer_type
?

Sadly no. Network load balancers must have listener protocol set to TCP among other issues. It seems it’s not enough to disable the listener. AWS will still complain about the listener protocol being unsupported.

At this point it seems it might be easier to copy the module and modify it to suit.

They accept pull requests. If you make it support the NLB, commit it back to the community.

that should accept these values: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elasticbeanstalkenvironment
Configure globally available options for your Elastic Beanstalk environment.

the only problem might be with these lines: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L645-L649
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
2018-12-18

Hi guys, I’m trying to setting cloudwatch-sns alarm via terraform. I come across with https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
However, once apply to a single RDS, the terraform record the “snapshot” in the tfstate file. When I try to apply to another RDS, it warn that it will delete the first setup and create new resources for second RDS.
Anyone has ideas on how to make the alarm reusable for multiple RDS instance?
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms

ohhhhhhh

@Trex you are correct - this is an oversight

the resource names are not using labels (@jamie )

Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms

we haven’t had a chance to get back to this

terraform doesn’t complain about the conflict, but the alarms will constantly have their dimension overwritten with every deploy if the names aren’t unique.

@Trex would you be open to contributing back a PR?

yes, i will see what i can help

grrr, hate this error so much…
value of 'count' cannot be computed


ah I’ve seen that one, when you use count in locals terraform doesn’t compute it immediately


yes it is

hey guys, i’m currently using this module (https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf) or a slight variation of it, given that I need to start multiple elastic beanstalk applications & environments with the same IAM role and service roles. therefore i pulled the IAM-related resources out and use implicit dependencies to pass them into the creation of the beanstalk apps and environments.
however, sometimes i’m getting an error about ELB health failing which is caused by the proper role and policy attachments not being attached in time. it seems like there’s a race condition going on because it succeeds about 25% of the time. have any of you encountered this before?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

@jeffrey please share the complete code, we’ll take a look (difficult to say anything w/o seeing the code)

it succeeds about 25% of the time

welcome to beanstalk!

if you’re starting from scratch, highly recommend you move to ECS or other container based approach

@Erik Osterman (Cloud Posse) 100% agree - however, we’re making our initial move into infrastructure-as-code and would like to first mimic our existing infrastructure before making any architectural changes. we have full intentions of moving off of beanstalk shortly after

fair enough….

@jeffrey let us know if you need any help

@Andriy Knysh (Cloud Posse) absolutely - i appreciate you reaching out. do you mind if I draw up a diagram tomorrow morning to explain my complication? perhaps i’m just not understanding terraform completely

Yes let’s check it tomorrow

@Andriy Knysh (Cloud Posse) here’s a quick diagram i drew up of iam-related dependencies. this part should be identical to how they’re defined in the cloudposse repo, besides the fact that it’s been pulled out to a separate module so that it can be provided as a parameter so multiple beanstalk environments and apps can share the same service roles and instance profiles.
starting from the left, the Service AWS IAM role is created, which both the Enhanced Health and Beanstalk Service role policy attachments have a dependency on. on the right side, the EC2 AWS IAM role is created, which the Default role policy, Web tier Role Policy Attachment, Worker tier Role Policy Attachment, and EC2 instance profile all have a dependency on.
As for elastic beanstalk, the environment is dependent on the Service AWS IAM role, and the application is dependent on the Service AWS IAM role and EC2 instance profile. however, because there isn’t a strict dependency that the Enhanced Health role policy is attached to the Service AWS IAM role before the EB environment is created, oftentimes I’m noticing that the environment is being created immediately after the IAM role is created but before the enhanced health policy is attached. as a result of this, I get the “ELB health is failing” because the health can’t be communicated back yet. I can’t use depends_on
in this case because I’m not using these as resources in the same file, but rather, the EB env and application are modules

is that during terraform apply
?

@jeffrey would it be alright if we helped you modify the existing module to accept an IAM role as a variable?

yep, and for backwards compatibility, we need to check if a role was provided and use it, otherwise fall back to creating a new one
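
(A minimal sketch of that fallback pattern in 0.11; the names and the trust policy here are illustrative, not the module’s actual interface:)

variable "service_role_arn" {
  # empty unless the caller supplies an existing role
  default = ""
}

# only create a role when none was provided
resource "aws_iam_role" "default" {
  count = "${length(var.service_role_arn) == 0 ? 1 : 0}"
  name  = "beanstalk-service"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "elasticbeanstalk.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

locals {
  # use the supplied role if present, otherwise the one created above
  service_role_arn = "${length(var.service_role_arn) > 0 ? var.service_role_arn : join("", aws_iam_role.default.*.arn)}"
}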

apologize for the delayed response!
here’s the module that calls elastic beanstalk application
module "api_server_app" {
source = "../modules/app"
beanstalk_app_description = "api server application"
beanstalk_app_name = "${var.stack_name}-api-server"
iam_role_arn = "${aws_iam_role.beanstalk_service.arn}"
}
where “../modules/app” contains the following:
resource "aws_elastic_beanstalk_application" "default" {
description = "${var.beanstalk_app_description}"
name = "${var.beanstalk_app_name}"
appversion_lifecycle {
service_role = "${var.iam_role_arn}"
max_count = 128
delete_source_from_s3 = true
}
}

here’s the module that calls the elastic beanstalk enviroment:
module "poms_api_server" {
source = "../modules/beanstalk"
app = "${module.poms_api_server_app.app_name}"
keypair = "${module.key.key_name}"
stack_name = "${var.stack_name}"
subnets = "${join(",","${module.vpc.beanstalk_subnets}")}"
vpc_id = "${module.vpc.vpc_id}"
# IAM
instance_profile_name = "${aws_iam_instance_profile.beanstalk_ec2.name}"
# LB
service_role_arn = "${aws_iam_role.beanstalk_service.name}"
}
where the “../modules/beanstalk” only contains the elastic beanstalk environment code from https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf (minus iam roles, security groups, s3 buckets, and the environment variables), and slight modifications in the variable naming
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

@davidvasandani thanks - the existing module currently accepts the IAM role as a variable already. however, this is what I believe - i’m creating the iam role, then creating additional role policy attachments onto it for enhanced health and beanstalk service. however, the application is dependent on just the iam role and begins creation immediately when it’s ready, but before the role policies have been fully created/attached

@jeffrey you can try depends_on
https://www.terraform.io/docs/configuration/resources.html#explicit-dependencies
The most important thing you’ll configure with Terraform are resources. Resources are a component of your infrastructure. It might be some low level component such as a physical server, virtual machine, or container. Or it can be a higher level component such as an email provider, DNS record, or database provider.

thanks - i’ll see what i can do here. i was hoping there was another solution given that none of the elastic beanstalk examples i’ve seen actually used depends_on
, and only resources can have depends_on
, not modules
The most important thing you’ll configure with Terraform are resources. Resources are a component of your infrastructure. It might be some low level component such as a physical server, virtual machine, or container. Or it can be a higher level component such as an email provider, DNS record, or database provider.
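
(A common 0.11 workaround, since modules cannot take depends_on: thread an attribute of the prerequisite resource into a module variable, which creates an implicit dependency. The dependency_id variable and the enhanced_health resource name here are hypothetical; the module only needs to declare the variable and interpolate it somewhere, e.g. into a tag:)

module "poms_api_server" {
  source = "../modules/beanstalk"

  # forces module creation to wait for the policy attachment
  dependency_id = "${aws_iam_role_policy_attachment.enhanced_health.id}"
}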

another thing you can try is this:

here
instance_profile_name = "${aws_iam_instance_profile.beanstalk_ec2.name}"
# LB
service_role_arn = "${aws_iam_role.beanstalk_service.name}"

you use the names, but TF will calculate the names before the resources get created

if you use the IDs, then TF will wait for the resources to be created by AWS

(and you prob need to use arn
here, not name
): service_role_arn = “${aws_iam_role.beanstalk_service.arn}”

oh awesome, i’ll try those. so you suggest using ids
in both these instances?

well, in one

the other needs arn
, no?

within the service role i’ve tried the arn and the name before, both ran successfully so i’m assuming it does a lookup

hmmm

but those are diff things

i’ll run through those again and make sure the resources being created are as expected. at the time, i was making sure the script was compiling correctly. appreciate the tips!

anyone pretty familiar with cross-account terraform with assumed roles?

think so yes

provider "aws" {
alias = "testing"
assume_role {
role_arn = "arn:aws:iam::126450723953:role/OrganizationAccountAccessRole"
}
}
data "terraform_remote_state" "testing" {
backend = "s3"
provider = "aws.testing"
config {
bucket = "${var.namespace}-testing-terraform-state"
key = "account-settings/terraform.tfstate"
}
}

should I be able to do something like this?

…some context

I am in our “root” aws account and want to read the tfstate from a child account

I don’t see why not

I don’t see why this wouldn’t work if you have the access

Trick question:
Error: data.terraform_remote_state.testing: Provider doesn't support data source: terraform_remote_state

sorry

so i think there’s something subtle i don’t understand….

ah I do it differently yes, but you won’t like what I do

might help for perspective though

One administrative account, where the default role (before switching to assume role) has access to the state bucket and kms

inside the same state bucket you then have different environments and stacks ( yep .. )

then in say ‘provider.tf’ the provider does an assumerole into the testing/staging/production account

where it does its operations

is that more or less clear ?

but can’t you just fix it on s3 access level ?

@Erik Osterman (Cloud Posse) doesn’t it accept role_arn inside the config block?

ohhhhhhh right

yea, that makes things easier for sure

yes that’s clear - i forget you mentioned that

….still holding out from going down that path but maybe we need to consider it some day

i think the error message is trying to say that the data source terraform_remote_state
does not support the provider
argument?

that data source does some odd stuff internally with the aws credential/provider to access the state file

so does it work with role_arn in the terraform_remote_state config {} ?

got it working!!

data "terraform_remote_state" "testing" {
backend = "s3"
config {
role_arn = "arn:aws:iam::126450723953:role/OrganizationAccountAccessRole"
bucket = "${var.namespace}-testing-terraform-state"
key = "account-dns/terraform.tfstate"
}
}
output "testing_name_servers" {
value = "${data.terraform_remote_state.testing.name_servers}"
}

so I didn’t need to specify any aliases

beauty

yeah, that’s what i saw in the code, that state does the credential setup itself

the key was to add the role_arn
to the config
section.

makes sense now that you have it working

yea, even easier than I thought it would be

can’t believe i’ve put it off for this long

kinda related, i saw this comment earlier today, which i can’t believe works… https://github.com/terraform-providers/terraform-provider-aws/issues/571#issuecomment-448372889
This issue was originally opened by @bootswithdefer as hashicorp/terraform#12337. It was migrated here as part of the provider split. The original body of the issue is below. AWS Organizations has …

provider "aws" {
...
}
resource "aws_organizations_account" "subaccount" {
...
// More about this below
provisioner "local-exec" {
command = "sleep 120"
}
}
locals {
role_arn = "arn:aws:iam::${ aws_organizations_account.subaccount.id }:role/OrganizationsAccountAccessRole"
}
provider "aws" {
alias = "subaccount"
assume_role {
role_arn = "${local.role_arn}"
}
}
resource "aws_dynamodb_table" "lock_table" {
provider = "aws.subaccount"
name = "terraform-lock-table"
...
}

haha, check this out

data "terraform_remote_state" "root" {
backend = "s3"
config {
bucket = "${var.namespace}-root-terraform-state"
key = "accounts/terraform.tfstate"
}
}
data "terraform_remote_state" "testing" {
backend = "s3"
config {
role_arn = "${data.terraform_remote_state.root.testing_organization_account_access_role}"
bucket = "${var.namespace}-testing-terraform-state"
key = "account-dns/terraform.tfstate"
}
}
locals {
testing_name_servers = "${data.terraform_remote_state.testing.name_servers}"
}
resource "aws_route53_record" "testing_dns_zone_ns" {
# count = "${signum(length(local.testing_name_servers))}"
zone_id = "${aws_route53_zone.parent_dns_zone.zone_id}"
name = "testing"
type = "NS"
ttl = "30"
records = ["${local.testing_name_servers}"]
}
output "testing_name_servers" {
value = "${local.testing_name_servers}"
}

getting the organization account access role for a subaccount from the state of one module in our root account

then using that to assume role into the subaccount and read the state from a module in that account

oh, and it even works with that count enabled


oh nice one doing the same, but not so automated

this or brute forcing the creation of a new domain until the ns-servers match

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) any reason why this uses the AZ as the count
as opposed to the subnets?
resource "aws_efs_mount_target" "default" {
count = "${length(var.availability_zones)}"
file_system_id = "${aws_efs_file_system.default.id}"
subnet_id = "${element(var.subnets, count.index)}"
security_groups = ["${aws_security_group.default.id}"]
}
https://github.com/cloudposse/terraform-aws-efs/blob/master/main.tf#L18-L23
Terraform Module to define an EFS Filesystem (aka NFS) - johncblandii/terraform-aws-efs

I think the rule is you only need one per az and not one per subnet

but if it is per az and there are public/private subnets, will the ones left out not get access?

when I used the private subnets, my eks cluster couldn’t talk to efs

well…it still can’t use it. lol. but using public got beyond the “can’t connect” phase


I don’t recall the details off the top of my head

Just remembered this requirement for some reason…

ha…just found it.

You can create one mount target in each Availability Zone.

Creating and deleting Amazon EFS mount targets in a VPC

interesting.

Hrmmm haven’t tried to use it yet with EKS

It’s been a LONG time since we touched EFS for client engagements. Not that there is anything wrong just memory of it fading.

All good. I’ve never used it so it is new to me.

trying to get my nfs setup on the cluster

Dec has been a month of “here’s something new…get it in production”

@johncblandii the assumption was one subnet per AZ. Actually it’s two subnets per AZ (private and public), but you use either private or public to place the EFS

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

it does not cover all possible use cases I agree

and you have to provide the same number of subnets as you have AZs
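
(In other words, the count/element pairing only lines up if the two lists match index for index, one subnet per AZ. A sketch with hypothetical values:)

variable "availability_zones" {
  default = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

variable "subnets" {
  # index 0 lands in us-west-2a, index 1 in us-west-2b, and so on
  default = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]
}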
2018-12-19

Did anyone use blue-green deployments with ECS & CodeDeploy? I am trying, but it is not so straightforward, it is more of a greenfield area than I wanted it to be… In case you want to make it - https://gist.github.com/antonbabenko/632b54e8e488b9f48d016238792a9193

always used https://github.com/silinternational/ecs-deploy without issues
Simple shell script for initiating blue-green deployments on Amazon EC2 Container Service (ECS) - silinternational/ecs-deploy

I’ve opened an issue there - https://github.com/silinternational/ecs-deploy/issues/168
Great project! Unfortunately, it does not work for my use case where I want to use it for blue-green deployments via CodeDeploy with ECS services (as described here). The error is like this: /usr/l…

I am facing the same issue that is mentioned here https://github.com/hashicorp/terraform/issues/13012
Terraform Version Terraform v0.9.1 Affected Resource(s) aws_network_acl_rule Terraform Configuration Files resource "aws_network_acl" "network_acl" { vpc_id = "${aws_vpc.CI…

any ideas why ?

Do a create, then import it into another blank state (easy to do with a local test setup and remove state) and view the difference. I have seen these issues with how aws stores values vs how they are declared

Do a create ?

@rohit, @Jan is correct that the issue happens when TF has some settings, but after apply
AWS stores diff settings. Then TF reads them from AWS and sees they are different and tries to apply it again

terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply setting.1039973377.name: "InstancePort" => "InstancePort" setting.1039973377.n…
2018-12-20

I am facing weird issues with the aws_launch_template
resource when we work with the same code on mac vs windows

every time my teammate or i run terraform plan it shows that there is a change in the launch template

and i believe it is because of
user_data = "${base64encode(data.template_file.user-data.rendered)}"

if you are using file
for template
(not inline), the path will be different on diff systems

is there a way to fix/avoid this issue?

do you use it like here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L121
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

but we did not test it on Windows, so don’t know if it will behave differently

yes

that’s why it’s better to use containers for that (geodesic
is a good example)

Sorry i don’t understand how this issue is related to containers ?

reproducibility

the same result on mac, windows and linux

ohh i see

everybody uses the same container

that makes sense

data "template_file" "user-data" {
template = "${file("${path.cwd}/modules/compute/frontend/user-data.sh.tpl")}"
vars = {
app_env = "${terraform.workspace}"
}
}

@Andriy Knysh (Cloud Posse) above is how i am currently using template_file

maybe try path.module
and see what happens

@Andriy Knysh (Cloud Posse) i tried this and received the following error
no such file or directory in:
${file("${path.module}/modules/compute/tde/user-data.sh.tpl")}

any other ideas ?

what’s your folder structure?

path.module
will give you the root path, where you have main.tf

i do have the full path

template = "${file("${path.module}/modules/compute/frontend/user-data.sh.tpl")}"

@rohit I don’t know your folder structure so it’s difficult to say anything, but you can test it by reading the file in the data template (as you showed above), and then output it as shown here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/outputs.tf#L3
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

play with the full path in template = "${file("${path.module}/modules/compute/frontend/user-data.sh.tpl")}"
until TF could find the file

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

also, you sure that the template files on mac and windows are 100% identical? (e.g. line breaks on windows differ from mac)

yeah line breaks differ but we are running the same code

i am not sure about the settings

Different question, is it currently possible to launch an ec2 instance using launch template resource but not in an autoscaling group ?

yes

Launch an instance from a set of parameters in a launch template.

Launch an instance from a set of parameters in a launch template.

launch template is just metadata with diff params (stored somewhere in some AWS database). Autoscaling is a service that maintains the required number of running instances

i mean using terraform

i did not find an option to provide launch template id to aws_instance
resource

yea, don’t see it here https://www.terraform.io/docs/providers/aws/r/instance.html
Provides an EC2 instance resource. This allows instances to be created, updated, and deleted. Instances also support provisioning.

many attributes are the same as in launch template

yes lot of the attributes are same

i did not find any resources online to do this

I am thinking of creating a feature request on github

Looks like there is already one

got a pr already too, a bit long in the tooth, maybe add a thumbs up in case they sort on reactions… https://github.com/terraform-providers/terraform-provider-aws/pull/4543
Fixes #4264 Changes proposed in this pull request: Add support for launch template in instance resource Output from acceptance testing: make testacc TEST=./aws TESTARGS='-run=TestAccAWSInstan…


Team, does anybody know if it is possible to escape $
in template_file ?

My template file has this:
$(kubectl config current-context)

by not escaping the $
it results in an empty file because somehow it is being evaluated

$$

When using data "template_file" I have an Apache config which also unfortunately uses the ${} syntax which I don't want Terraform to do substitution. I tried escaping the $ with a bac…

it doesn’t work

it escapes the $ if it is follow by {

but not $()

the double $$
is to escape interpolation, not to escape the dollar sign

some info here https://github.com/hashicorp/terraform/issues/19462
Terraform Version Terraform v0.11.10 + provider.azurerm v1.19.0 Terraform Configuration Files 4 variable "foo" { 5 default = "foo$${bar" 6 } Expected Behavior foo contains foo${…

we did something like this before: template = "${replace(var.storage_account_container_readonly_template, "$$$$", "$")}"

yep, looks like that’s the workaround for now
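
(A minimal sketch of that sentinel trick for the $( case; DOLLAR is an arbitrary placeholder, not part of any API:)

data "template_file" "script" {
  template = "DOLLAR(kubectl config current-context)"
}

locals {
  # swap the placeholder back to a literal $ after rendering
  rendered = "${replace(data.template_file.script.rendered, "DOLLAR", "$")}"
}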

Terraform module to easily define consistent cluster domains on Route53 (e.g. prod.ourcompany.com) - cloudposse/terraform-aws-route53-cluster-zone

what: it's not clear how we currently do versioning. old strategy: Bump patch always unless there was a “known breaking change”; Bump minor anytime there was a breaking change; We never bumped major…

I’ve documented our versioning strategy so it’s more clear for others.

https://www.terraform.io/docs/providers/aws/r/route53_zone_association.html
NOTE: Unless explicit association ordering is required (e.g. a separate cross-account association authorization), usage of this resource is not recommended. Use the vpc configuration blocks available within the aws_route53_zone resource instead.
Yes, but when I create the zone, all the other VPCs might not yet exist

so I shall not take your advice @hashicorp

@Andriy Knysh (Cloud Posse) so after playing a bit i ended up with
template = "${file("${path.module}/user-data.sh.tpl")}"

that worked but when i ran terraform plan it says there is an update to the launch template

@rohit does the TF output show what part it wants to update?

for that i will have to run terraform apply
, correct ?

or plan

plan doesn’t show anything

it says
~ module.compute.module.frontend.aws_launch_template.frontend
    latest_version: "5" => "0"
    user_data:      "encodeddata"

i see it after refreshing

it shows the entire userdata

it looks like it wants to update the latest version

yeah because it thinks that there is an update to launch template

has anyone come across cloudwatch alarms created through terraform not being displayed in the elastic beanstalk console, but being correctly displayed in the cloudwatch console? i’ve checked all configurations to make sure they match up, including the dimensions; however any cloudwatch alarm created through the GUI still displays in both the EB console and cloudwatch console.
the only difference between the 2 is specifying the actions to be taken when the threshold is met

Have we considered using terraform-landscape yet? https://github.com/coinbase/terraform-landscape
Improve Terraform’s plan output to be easier to read and understand - coinbase/terraform-landscape

I love landscape
. It makes the output MUCH easier to read.
Improve Terraform’s plan output to be easier to read and understand - coinbase/terraform-landscape

Work with terragrunt
too.

Awesomeness, will be pinging you next year!


trying not to depend on ruby

it installs the world

node takes the cake there

@Erik Osterman (Cloud Posse) why not use a docker image?
terraform plan ... | docker run -i --rm landscape

so we’re running in docker


this will probably bloat the image by 25-40% just by adding ruby =P

Yeah def don’t want to bloat the geodesic image.


@sarkis this can be your claim to fame

then submit it as a PR to hashicorp

Fair point
2018-12-21

Someone want to go rewrite landscape in golang

yes, please!

diff renderer changes are coming with 0.12 https://github.com/hashicorp/terraform/issues/15180#issuecomment-435241641
but 0.12 still leaves something to be desired:
# aws_iam_role_policy.policy will be created
+ resource "aws_iam_role_policy" "policy" {
    + id     = (known after apply)
    + name   = "tf-example-policy"
    + policy = "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Action\": [\n \"kinesis:PutRecord\",\n \"kinesis:PutRecords\"\n ],\n \"Resource\": [\n \"*\"\n ],\n \"Effect\": \"Allow\"\n }\n ]\n}\n"
    + role   = (known after apply)
  }

vs 0.11 w/ landscape

+ aws_iam_role_policy.policy
    id:     "<computed>"
    name:   "tf-example-policy"
    policy: {
        "Statement": [
            {
                "Action": [
                    "kinesis:PutRecord",
                    "kinesis:PutRecords"
                ],
                "Effect": "Allow",
                "Resource": [
                    "*"
                ]
            }
        ],
        "Version": "2012-10-17"
    }
    role: "${aws_iam_role.role.id}"


Is it possible to apply selected changes from terraform plan
?

No, you must do a targeted terraform plan

At least I’m about 90% certain

how does that work ?

so for example, based on @davidvasandani’s text grab above, we could run

terraform apply -target aws_iam_role_policy.policy

to only apply that change

the same is true for modules

or similarly

terraform plan -out planfile
terraform apply planfile

this will only apply the changes that are in the planfile

and plan
supports -target
as well
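
(The two compose as well; the module address here is hypothetical:)

terraform plan -target=module.vpc -out planfile
terraform apply planfile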

that’s awesome

i am not aware of all the capabilities that terraform provides

so whenever i need something, i go and check if terraform has that specific feature

hey; I got bit by https://github.com/cloudposse/terraform-aws-tfstate-backend/commit/86b17f16e0c95244e87c859c18e28afa4deb1783
in particular because terraform-aws-tfstate-backend doesn’t have an “environment”, we synthesized it:
module "terraform_state_backend" {
source = "git::<https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master>"
# The terraform_state_backend module, unlike other CloudPosse modules, does
# not have an "environment" argument, so we synthesize it.
namespace = "${var.namespace}-${var.client_codename}"
# environment = ""
stage = "${var.stage}"
name = "${var.name}"
region = "${var.aws_region}"
}
- Support latest version of terraform-null-label and its variables. Bumps version of terraform-null-label to 0.5.3 (latest at time of writing). Copies variable descriptions from new version. …

the new version deletes the dashes

there doesn’t seem to be an obvious workaround for that

unless the new thing takes an environment? let’s see

(I have other code that imports state so it cares p deeply about what the exact bucket name is – and because it’s importing state to bootstrap, it can’t just ask the current state what the bucket name is)

sorry @lvh, we messed it up

we’ll fix it

can you pin to the previous release for now?

(don’t use master
)

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

sure no problem

I was able to fix the naming issue by setting the environment var though

still can’t reach the module outputs? but not sure that has anything to do with the change

{
  "path": [
    "root",
    "terraform_state_backend"
  ],
  "outputs": {},
  "resources": {},
  "depends_on": []
},
… weirdly

nuking the env and starting over fixed it, oh well

don’t use master

okiedokie this doesn’t really strike me as a bug, but I’ll pin a tag anyway
2018-12-22


PSA don’t use boolean types in terraform.
What: Document why in terraform booleans should be expressed as type string rather than type boolean. Why: The boolean default value true does not work consistently as expected and will in some cases…
2018-12-23

Question about https://github.com/cloudposse/terraform-aws-jenkins. After setup, I navigated to the URL and saw the Elastic Beanstalk splash page - “Congratulations! Your Docker Container is now running in Elastic Beanstalk on your own dedicated environment in the AWS Cloud.” Am I doing something wrong here? Or is this a current bug? Any ideas?
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Love the terraform modules you guys put out, by the way

Maybe in the aws console go look at the created elb and see if it’s target is set and shows as healthy

I have not run that module myself though
2018-12-24

It shows as healthy. I did see that the Jenkins container exposes port 8080 but the ALB listener points to port 80. Do you think that’s the issue?

Not sure if that’s just an elastic beanstalk thing. Haven’t worked with it before

It is normal for the container port and alb port to be different

figured out the issue.

i set “null” to the github_oauth_token parameter instead of actually leaving it blank

Thanks for reporting back! If you wouldn’t mind - maybe a good idea to open the issue against the repo in case others make the same mistake.

sure thing. let me figure out the solution first and then i’ll happily open up an issue
2018-12-26

@mrwacky ^^

fancy

Question regarding https://github.com/cloudposse/terraform-aws-datadog-integration .. Once I run the code should I go to datadog console and fill the AWS integration form again to complete the integration install ? ExternalID is changing every time I open the AWS integration form under my datadog account
Terraform Module for integration DataDog with AWS. Contribute to cloudposse/terraform-aws-datadog-integration development by creating an account on GitHub.

@Sanjay - Its been a while since we set that up, so I cannot recall seeing that

@Sanjay the Datadog page is confusing. From what I remember, first open the page to get the external ID. Then run Terraform, then go back to the Datadog page and find a way to Save. I often made the mistake of getting a new id instead of just saving it.

Thanks @maarten and @Erik Osterman (Cloud Posse).. Yes it is a 3 step process 1) Get external ID 2) Run TF code for integration which creates IAM role 3) Go back to Datadog console and input IAM role and AccountID to complete the integration. Was wondering if there is a way to do everything using TF code rather than using console ?

I opened this issue to track the process

what Missing documentation describing setup process it is a 3 step process 1) Get external ID 2) Run TF code for integration which creates IAM role 3) Go back to Datadog console and input IAM role …

Also one other question I had is about https://github.com/cloudposse/terraform-datadog-monitor . Do we need to run integration first or can we just run the monitors code independently without installing AWS integration.. Here it is using API KEY and APP KEY
Terraform module to provision Standard System Monitors (cpu, memory, swap, io, etc) in Datadog - cloudposse/terraform-datadog-monitor

@Sanjay from what I recall they are disjoint

so the terraform-datadog-monitors
module sets up the alerts, but the metrics themselves come from the datadog agents, which are not covered by any of our modules

the terraform-aws-datadog-integration
enables datadog to scrape metrics from cloudwatch/rds, but I don’t think the terraform-datadog-monitor
module is setup to work with it.
2018-12-28

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures



Awesome stuff! I have one question: you have identity
and root
account descriptions both stating that is the place to add users and delegate, or where users login.
I guess “identity” is an optional account in case you don’t want your users on root?
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Yes, exactly!

We haven’t built out our examples to support identity
yet (we’re using root
), but this is the eventual goal.

makes sense

We’ve released the first version of our “reference architectures” cold start automation process.

This will get you setup with #geodesic starting with a fresh AWS account.

(just used this to provision the account infra for one of our customers)


@Jan @mcrowe @Dombo @daveyu @rohit.verma @tamsky

@mcrowe has joined the channel

Brilliant work mate!