#terraform (2018-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2018-09-03

Andrew Jeffree avatar
Andrew Jeffree

So I’ve got a weird problem that I’ve never encountered before I have a module that creates a vpc does the usual stuff. public/private subnets, nat gateways etc I then use the data returned from this module with an ec2 module which puts x instances in those subnets/availability zones and evenly spreads them

In this situation I’m creating 4 instances. the first 2 get created fine the 3rd it attempts to create in the correct AZ but the wrong subnet specifically a subnet that terraform hasn’t created or management it’s actually one of the subnets in the default vpc and then the 4th node also fails because it has the correct AZ but is instead using the subnet that the 3rd should which is in a different AZ

Andrew Jeffree avatar
Andrew Jeffree

I’ve checked the state and the subnet from the default vpc doesn’t show up in there at all

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andrew Jeffree this is one of the big reasons we have our purpose built modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you using our subnet modules and our vpc module?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(basically, it’s easy to mess things up)

Andrew Jeffree avatar
Andrew Jeffree

No I’m using mine. I wasn’t after you to support these, but I was hoping someone may have encountered this weirdness in the past on the Terraform side.

Andrew Jeffree avatar
Andrew Jeffree

and I’ve used these modules for quite a while without this problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you using any data providers?

Andrew Jeffree avatar
Andrew Jeffree

nope

Andrew Jeffree avatar
Andrew Jeffree

just passing the output from one module to another

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you extracting values from maps?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(may lead to different orders)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe the order of the subnets and the order of the azs is different

Andrew Jeffree avatar
Andrew Jeffree

yeah so I considered the ordering issue

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok

Andrew Jeffree avatar
Andrew Jeffree

but that doesn’t explain where the default vpc subnet is coming into things

Andrew Jeffree avatar
Andrew Jeffree

that’s the part that’s doing my head in.

Andrew Jeffree avatar
Andrew Jeffree

I’m aware you can manage the default vpc somewhat in terraform, but I don’t do that. I just leave it be and ignore it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, we never use it

Andrew Jeffree avatar
Andrew Jeffree

ah I found my bug. It’s a conditional that someone else added to the module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

phew! that’s a relief

Andrew Jeffree avatar
Andrew Jeffree

by default in our module we put instances in private subnets, but someone wanted to put them in public subnets, so they create a variable and they were merging an empty list with the list of private subnet ids

Andrew Jeffree avatar
Andrew Jeffree

which means the 3rd item in the list when computed was blank which means AWS tries to put it in the default vpc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

good sleuthing! makes sense

Andrew Jeffree avatar
Andrew Jeffree
06:46:52 AM

goes to have words with the author of that commit…

Andrew Jeffree avatar
Andrew Jeffree

ah also thank you for acting as a sounding board @Erik Osterman (Cloud Posse) much appreciated.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andrew Jeffree you wouldn’t believe it. I ran into the same problem today while training one of my guys. I wouldn’t have figured it out nearly as quickly if you hadn’t shared this. Thanks!!

Andrew Jeffree avatar
Andrew Jeffree

Haha you’re welcome.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

np, sometimes that’s all it takes

2018-09-04

Toby avatar

Hi Gang,

I’ve just started using your vpc_peering terraform module and have run into an issue during the plan stage. I’m getting

* module.vpc_peering.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed

I checked out the FAQ at https://github.com/cloudposse/docs/blob/master/content/faq/terraform-value-of-count-cannot-be-computed.md and it doesn’t seem to be the same issue.

I am getting it during the plan stage when the requestor_vpc_id is coming from the output of a vpc module, however that vpc hasn’t yet been created and the id is going to be computed at this stage.

Something you’ve seen before and if so is this a supported scenario?

cloudposse/docs

Cloud Posse Developer Hub. Complete documentation for the Cloud Posse solution. https://docs.cloudposse.com - cloudposse/docs

Toby avatar
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.40.0"

  name = "hub-vpc"
  cidr = "${var.bit_mask_16}.0.0/16"
  azs             = ["${var.az_a}"]
  private_subnets = ["${var.bit_mask_16}.1.0/24"]
  public_subnets  = ["${var.bit_mask_16}.101.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = false

  tags = {
    Terraform = "true"
    Environment = "hub"
  }
}

module "vpc_peering" {
  source           = "git::<https://github.com/cloudposse/terraform-aws-vpc-peering.git?ref=master>"
  namespace        = "hub"
  stage            = "dev"
  name             = "hub-to-mc"
  requestor_vpc_id = "${module.vpc.vpc_id}"
  acceptor_vpc_id  = "${var.mc_vpc_id}"

  tags = {
    Terraform = "true"
    Environment = "hub"
  }
}
Toby avatar

"${var.mc_vpc_id}" is a hard coded id of an existing vpc in the same account.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Toby The way terraform-aws-vpc-peering currently works is to lookup both requestor and acceptor VPCs and subnets (was implemented that way to use with kops for Kubernetes, e.g. https://github.com/cloudposse/terraform-aws-kops-vpc-peering).

So when you create a VPC in the same project as terraform-aws-vpc-peering, Terraform does not wait for the VPC to be created and tries to look it up from the data sources - that’s why it can’t calculate the count of aws_route_table because it depends on the number of subnets, which in turn depens on the VPC.

This currently can be solved (w/o redesigning the module or creating a new one, which could be done) in two diff ways:

  1. Place VPC in a separate folder from ``terraform-aws-vpc-peering` and provision it first

  2. Use multi-stage provisioning with -target (that’s how we did it for the module)

terraform plan -target=module.vpc
terraform apply -target=module.vpc

terraform plan
terraform apply

The first plan/apply provisions just the VPC. The second plan/apply provisions evrything else (since the VPC is already provisioned, TF is able to look it up).

cloudposse/terraform-aws-kops-vpc-peering

Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering

Toby avatar

Thanks for your prompt reply @Andriy Knysh (Cloud Posse), I was using the -target as a workaround so I’ll carry on doing that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, it’s the solution for now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we might add another module to accept the existing VPC IDs (or modify the current one)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for testing btw

Matthew avatar
Matthew

Has anyone ever scripted out RDS Cross-region replication using Terraform?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matthew that’s what it says about cross-region replication

with Aurora MySQL you can setup a cross-region Aurora Replica from the RDS console. The cross-region replication is based on single threaded MySQL binlog replication and the replication lag will be influenced by the change/apply rate and delays in network communication between the specific regions selected. Aurora PostgreSQL does not currently support cross-region replicas
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not test it with TF

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
AWS: aws_rds_cluster_instance - Terraform by HashiCorp

Provides an RDS Cluster Resource Instance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) there is a way to do cross region replication

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Daren does this at gladly with terraform

Daren avatar

Ill send you the repo

Matthew avatar
Matthew

@Erik Osterman (Cloud Posse) Thank you sir

Matthew avatar
Matthew

@Erik Osterman (Cloud Posse) Did you ever find this repo?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice

Matthew avatar
Matthew

Thank you @Andriy Knysh (Cloud Posse) that is what i’m trying to do currently and here it has https://www.terraform.io/docs/providers/aws/r/rds_cluster.html#replication_source_identifier

Matthew avatar
Matthew
08:45:48 PM
Matthew avatar
Matthew

but when you actually specify the DB Arn and try to run the script terraform spits out “ You cannot set up replication between a DB instance and a DB cluster across regions.”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you use MySQL?

Matthew avatar
Matthew

Yes Aurora-MySQL

2018-09-05

rohit.verma avatar
rohit.verma

@Erik Osterman (Cloud Posse) or @Andriy Knysh (Cloud Posse) do we have any terraform module for setting up required vpc endpoints, this is useful in escaping some data transfer charges, as explained here https://aws.amazon.com/vpc/pricing/

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
VPC Endpoints - Amazon Virtual Private Cloud

Use a VPC endpoint to privately connect your VPC to other AWS services and endpoint services.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t have modules for that, but it looks like simple enough https://www.terraform.io/docs/providers/aws/r/vpc_endpoint.html

AWS: aws_vpc_endpoint - Terraform by HashiCorp

Provides a VPC Endpoint resource.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we can create module(s) together

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matthew we talked with @Daren, we’ll show you how to do cross region replication for MySQL (@Daren has a working example)

Matthew avatar
Matthew

@Andriy Knysh (Cloud Posse) Appreciate your energy and time looking into that for me

Matthew avatar
Matthew

I am open for discussion whenever i’m still currently trying it this moment

Matthew avatar
Matthew

Ahh so now when i specify the cluster ARN in replication_source_identifier, it creates the cluster as a replica, but then my instance gets deployed as a WRITER rather than a Reader. Holler when you’re free

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matthew when @Daren is online, we’ll give you the example

1
rohit.verma avatar
rohit.verma

@Andriy Knysh (Cloud Posse) / @Erik Osterman (Cloud Posse) Thanks for response, I will try to create one module which is in sync with with geodesic. I believe mostly Gateway type endpoints are worth using, If I am right there is no cost for them. Also Bit confused about com.amazonaws.ap-south-1.logs endpoint. Since we are using fluentd-cloudwatch log forwarding, will it provide us some cost benefit ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you test the latest version? @h20melonman

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

was working ok

h20melonman avatar
h20melonman

@Andriy Knysh (Cloud Posse) i did, however it errored. I build some db’s in may and i didn’t lock down to a version and the new version breaks with this error “* module.rds_cluster_aurora_mysql.output.arn: Resource ‘aws_rds_cluster.default’ does not have attribute ‘arn’ for variable ‘aws_rds_cluster.default.*.arn’ “

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@rohit.verma I think it will provide a cost benefit since the traffic never exits the AWS internal networks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@h20melonman can you delete and re-create the cluster?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and pin down to a release

h20melonman avatar
h20melonman

so then i thought , well i’ll go back to what i used before and encountered another issue. where if i build something or run plan against something already built it seems to add / remove availability_zones on it’s own. even though i’ve defined only the two i want

h20melonman avatar
h20melonman

what is the suggested way to pin a release ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yea, a lot was changed in the module since May

h20melonman avatar
h20melonman
08:18:11 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we added the required number of AZs

h20melonman avatar
h20melonman

ah , thats prob why its thrashing around on me : )

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to set the number to the same number of AZs you currently have deployed

h20melonman avatar
h20melonman

ok

h20melonman avatar
h20melonman

what about version pinning , is that ^^ ok

h20melonman avatar
h20melonman

thanks btw !

h20melonman avatar
h20melonman

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah, sorry, forget about the number of AZs, it’s from a diff module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

version pinning, not OK

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here’s how to do it:

h20melonman avatar
h20melonman

kk

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  source             = "git::<https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.4.3>"
h20melonman avatar
h20melonman

great.

h20melonman avatar
h20melonman

i’ll give that a shot. thanks for being avail !

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

remove version = "0.3.5"

h20melonman avatar
h20melonman

got it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

add ` cluster_family = “aurora-mysql5.7”`

h20melonman avatar
h20melonman

ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

h20melonman avatar
h20melonman

@Andriy Knysh (Cloud Posse) I’ve updated to exactly whats in ‘with_cluster_parameters/main.tf’ & outputs.tf , but am getting this error “* module.rds_cluster_aurora_mysql.output.arn: Resource ‘aws_rds_cluster.default’ does not have attribute ‘arn’ for variable ‘aws_rds_cluster.default.*.arn’ “

h20melonman avatar
h20melonman

any ideas? and apoligies if these are basic questions. i truly appreciate your help !

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sounds like you are still using some old version of the module (which did not have the arn output)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you use source = "git::<https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.4.3>"?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and also terraform init

h20melonman avatar
h20melonman

did both. and i had deleted the db i was working on and removed the old state file from s3.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you do terraform destroy first?

h20melonman avatar
h20melonman

if is switch to 0.4.0 and comment out of the outputs.tf ( used from examples w/c/p ) arn,endpoint,reader_endpoint everything seems fine. and yes i did a terrafrom destroy before starting any of this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, you can direct message me your code; I can’t answer the question because we did provision the cluster many times in the last few days using the examples from the repo

h20melonman avatar
h20melonman

sure thing.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matthew thanks to @Daren, here is a working example for RDS cross-region replication

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
resource "aws_db_subnet_group" "replica" {
  name       = "replica"
  subnet_ids = ["xxxxxxx", "xxxxxxx", "xxxxxx"]
}

resource "aws_kms_key" "repica" {
  deletion_window_in_days = 10
  enable_key_rotation     = true
}

resource "aws_db_instance" "replica" {
  identifier                  = "replica"
  replicate_source_db         = "${var.source_db_identifier}"
  instance_class              = "${var.instance_class}"
  db_subnet_group_name        = "${aws_db_subnet_group.replica.name}"
  storage_type                = "io1"
  iops                        = 1000
  monitoring_interval         = "0"
  port                        = 5432
  kms_key_id                  = "${aws_kms_key.repica.arn}"
  storage_encrypted           = true
  publicly_accessible         = false
  auto_minor_version_upgrade  = true
  allow_major_version_upgrade = true
  skip_final_snapshot         = true
}
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

typo - repica

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

did you add this to the documentation or examples?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this could be added to the RDS module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @Daren

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and how to call it

provider "aws" {
  alias  = "remote"
  region = "us-west-2"
}

module "database_remote_replica" {
  source = "..."

  providers = {
    aws = "aws.remote"
  }

  namespace            = "eg"
  stage                = "prod"
  instance_class       = "..."
  source_db_identifier = "<ARN>"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not for Aurora though, just for RDS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


but when you actually specify the DB Arn and try to run the script terraform spits out ” You cannot set up replication between a DB instance and a DB cluster across regions.”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think it says that you need to create a remote cluster (not just an instance)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you need to create two clusters in two diff regions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and use the provider pattern above to provision the remote cluster for replication

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add extra filters to comply with CIS AWS benchmark by MichaelKuhinica · Pull Request #7 · cloudposse/terraform-aws-cloudtrail-cloudwatch-alarms

Description This PR adds some extra filters to comply with CIS AWS benchmark Course of action 3.2 Ensure a log metric filter and alarm exist for Management Console sign-in without MFA 3.3 Ensure…

2018-09-06

maarten avatar
maarten
Mastercard/terraform-provider-restapi

A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi

antonbabenko avatar
antonbabenko

or this one - https://github.com/dikhan/terraform-provider-openapi . They are rather similar in terms of making things for general APIs.

dikhan/terraform-provider-openapi

OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file) - dikhan/terraform-provider-openapi

maarten avatar
maarten

The most photogenic developer vs the company with the priceless slogan, tough one

2
antonbabenko avatar
antonbabenko

https://github.com/GoogleCloudPlatform/magic-modules - pretty cool project and reasoning.

GoogleCloudPlatform/magic-modules

Magic Modules: Automagically generate Google Cloud Platform support for OSS - GoogleCloudPlatform/magic-modules

antonbabenko avatar
antonbabenko
GoogleCloudPlatform/magic-modules

Magic Modules: Automagically generate Google Cloud Platform support for OSS - GoogleCloudPlatform/magic-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, that’s how all code should look like

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then we introduce patterns on top of it

antonbabenko avatar
antonbabenko

I think there will be some standard for auto-generated code platforms, so that once AWS adopts it we suddenly will not have 1700+ open issues in AWS provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

1700!? Wow.

antonbabenko avatar
antonbabenko

no, 1672 actually

pericdaniel avatar
pericdaniel

Is there a way to use packer

pericdaniel avatar
pericdaniel

in terraform

pericdaniel avatar
pericdaniel

i want to create a windows box with some windows roles installed on it

antonbabenko avatar
antonbabenko
Null Resource - Terraform by HashiCorp

A resource that does nothing.

External Data Source - Terraform by HashiCorp

Executes an external program that implements a data source.

tamsky avatar

Just like the tf documentation recommends, I too recommend avoiding external data_source providers that are anything but read-only. @pericdaniel – if you go this route for firing up packer, you’ll wind up creating resources (triggering builds) when performing innocuous actions, such as terraform plan, which isn’t normal.

Null Resource - Terraform by HashiCorp

A resource that does nothing.

External Data Source - Terraform by HashiCorp

Executes an external program that implements a data source.

pericdaniel avatar
pericdaniel

hm.. still trying to understand. I guess i need the best way to deploy windows boxes into aws and install a few features onto the boxes. is there another route you recommend going?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pericdaniel why do you need to deploy Windows boxes to AWS? (not just a question out of interest, describe the problem you need to solve)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Because all those solutions with packer etc. are not simple

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Maybe you could just use this https://aws.amazon.com/ec2/vm-import (again, depends on what you want to achieve)

pericdaniel avatar
pericdaniel

Deploying AWS AD service

pericdaniel avatar
pericdaniel

To be able to make changes and start setting up active directory

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO).

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess you can deploy AWS Directory Service for Microsoft Active Directory and then connect it to the on-premisses AD

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


AWS Microsoft AD makes it easy to migrate Active Directory–dependent, on-premises applications and workloads to the AWS Cloud. With AWS Microsoft AD, you can seamlessly run infrastructure across your own data center and AWS without synchronizing or replicating data from your existing Active Directory to the AWS Cloud.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(I hope what you are trying to do is not http://xyproblem.info )

The XY Problem

Asking about your attempted solution rather than your actual problem

antonbabenko avatar
antonbabenko

@Andriy Knysh (Cloud Posse) thanks for xyproblem.info link

1
tamsky avatar

@pericdaniel – there’s the non-windows version of AD: https://www.turnkeylinux.org/domain-controller

Domain Controller - free Active Directory server | TurnKey GNU/Linux

A Samba4-based Active Directory-compatible domain controller that supports printing services and centralized Netlogon authentication for Windows systems, without requiring Windows Server. Since 1992, Samba has provided a secure and stable free software re-implementation of standard Windows services and protocols (SMB/CIFS).

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

AWS has Samba as well

pericdaniel avatar
pericdaniel

thank you! ill take a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jamie has a module that runs packer with lambda

antonbabenko avatar
antonbabenko

What if it takes more than 5 minutes to run? How does module handles that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jamie

tamsky avatar

AFAIK Jamie has two modules with two distinct approaches, only one uses packer. And while they may use lambda kick things off – neither directly run packer in the lambda…

  • one uses CodeBuild pipelines in the manner described in [1]
  • another uses AWS SSM automation “steps” [2]

[1] https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/ [2] https://github.com/bitflight-public/terraform-aws-ssm-ami-bakery/blob/master/automationdocument.tf

bitflight-public/terraform-aws-ssm-ami-bakery

An AWS native ‘serverless’ module for building AMI’s and publishing them - bitflight-public/terraform-aws-ssm-ami-bakery

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @tamsky

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

did you end up using this in your project?

tamsky avatar

AMI automation is currently on a back burner – a POC got setup, but stalled waiting for GH credentials

tamsky avatar

“I think I can… I think I can…”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

jamie avatar

Thanks for that @tamsky

1

2018-09-07

sylvain.rouquette avatar
sylvain.rouquette

hi there, I’m trying to use cloudposse/terraform-aws-ecr, but I’m getting a ton of error during the plan:

sylvain.rouquette avatar
sylvain.rouquette
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.root: Refreshing state...
data.aws_iam_policy_document.login: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...

------------------------------------------------------------------------

Error: Error running plan: 11 error(s) occurred:

* module.ecr.output.policy_write_name: Resource 'aws_iam_policy.write' not found for variable 'aws_iam_policy.write.name'
* module.ecr.output.role_arn: Resource 'aws_iam_role.default' does not have attribute 'arn' for variable 'aws_iam_role.default.*.arn'
* module.ecr.output.role_name: Resource 'aws_iam_role.default' does not have attribute 'name' for variable 'aws_iam_role.default.*.name'
* module.ecr.aws_iam_instance_profile.default: 1 error(s) occurred:

* module.ecr.aws_iam_instance_profile.default: Resource 'aws_iam_role.default' not found for variable 'aws_iam_role.default.name'
* module.ecr.output.policy_write_arn: Resource 'aws_iam_policy.write' not found for variable 'aws_iam_policy.write.arn'
* module.ecr.aws_iam_role_policy_attachment.default_ecr: 1 error(s) occurred:

* module.ecr.aws_iam_role_policy_attachment.default_ecr: Resource 'aws_iam_role.default' not found for variable 'aws_iam_role.default.name'
* module.ecr.output.policy_login_name: Resource 'aws_iam_policy.login' not found for variable 'aws_iam_policy.login.name'
* module.ecr.output.policy_read_arn: Resource 'aws_iam_policy.read' not found for variable 'aws_iam_policy.read.arn'
* module.ecr.output.policy_read_name: Resource 'aws_iam_policy.read' not found for variable 'aws_iam_policy.read.name'
* module.ecr.output.policy_login_arn: Resource 'aws_iam_policy.login' not found for variable 'aws_iam_policy.login.arn'
* module.ecr.data.aws_iam_policy_document.default_ecr: 1 error(s) occurred:

* module.ecr.data.aws_iam_policy_document.default_ecr: Resource 'aws_iam_role.default' not found for variable 'aws_iam_role.default.arn'
sylvain.rouquette avatar
sylvain.rouquette

with this TF script

sylvain.rouquette avatar
sylvain.rouquette
provider "aws" {
    alias = "assume_repo_admin"
    assume_role {
        role_arn     = "arn:aws:iam::${data.terraform_remote_state.root.account_repo_id}:role/OrganizationAccountAccessRole"
        session_name = "setup_account_repo"
    }
}

# create ECR repositories

module "ecr" {
    providers = {
        aws = "aws.assume_repo_admin"
    }
    source      = "git::<https://github.com/cloudposse/terraform-aws-ecr.git?ref=master>"
    name        = "cloud/databank_fe"
    namespace   = "repo"
    stage       = "travis"
}
cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

sylvain.rouquette avatar
sylvain.rouquette

the provider is because I’m trying to set up the ECR in a different account, but the problem also occurs in the same account

sylvain.rouquette avatar
sylvain.rouquette

I’m using terraform 0.11.8, I noticed that your CI is using TF 0.10.7, could it be the problem?

sylvain.rouquette avatar
sylvain.rouquette

it seems like aws_iam_role.default fails to be generated automatically if I don’t specify roles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@sylvain.rouquette can you open a GitHub issue for this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are actively using this module with 0.11.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

2018-09-10

sylvain.rouquette avatar
sylvain.rouquette

@Erik Osterman (Cloud Posse) ok I’ll open a bug. Your usage is a bit different though. You use it through terraform-aws-kops-ecr, roles are already created (but even in my case with created roles, it wouldn’t work)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@sylvain.rouquette thanks, we’ll take a look

antonbabenko avatar
antonbabenko
4
loren avatar

nice, i’ve been using landscape a lot recently, https://github.com/coinbase/terraform-landscape

coinbase/terraform-landscape

Improve Terraform’s plan output to be easier to read and understand - coinbase/terraform-landscape

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1
loren avatar

on our team i call it “plan for humans”

1
endofcake avatar
endofcake

for landscape, as well as https://www.npmjs.com/package/terraform-ecs-plan-checker for ECS tasks (JSON blobs are horrible)

terraform-ecs-plan-checker

Simple command-line tool to check forced resource Terraform container definitions

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Gabe avatar

I like landscape because it diffs the json as well

Gabe avatar

instead of having to decipher how two big json blobs are different

loren avatar

yep, exactly

pericdaniel avatar
pericdaniel

anyone know how to pass a terraform variable into packer

jamie avatar

Output to a file for that kind of thing. They don’t link together like that. You can however do things like write terraform outputs to Parameter Store and read that from packer.

pericdaniel avatar
pericdaniel

interesting! maybe ill try that!

pericdaniel avatar
pericdaniel

such as instance type

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
10:52:18 PM
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
10:52:48 PM

Pretty sweet

3
camurphy avatar
camurphy

hey there, i’m having an issue with the Cloud Posse Elastic Beanstalk template. i’m working off this example https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf

the generated plan has an empty Environment tag:

  solution_stack_name:          "" => "64bit Amazon Linux 2018.03 v2.8.1 running PHP 7.2"
  tags.%:                       "" => "4"
  tags.Environment:             "" => ""

this is resulting in an error on apply:

* module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:

* aws_elastic_beanstalk_environment.default: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateEnvironmentInput.Tags[0].Value.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s also this complex example: https://github.com/cloudposse/terraform-aws-jenkins (doesn’t direclty address your question - but this shows how to use it with CI/CD)

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @camurphy, can you show your code how you invoke the module - the example above was tested 13 days ago and was working as it is

camurphy avatar
camurphy
03:36:29 AM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you have keypair = "123" provisioned in AWS?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will fail if you provide a wrong key name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t want it, leave it as empty string

camurphy avatar
camurphy

oh correct keypair is provisioned, just changed the value for the purpose of sharing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sounds like you might be passing a tag with an empty value

camurphy avatar
camurphy

i hadn’t removed anything from your example, perhaps i need to add something?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you try setting TF_LOG=DEBUG

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then rerun

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Debugging - Terraform by HashiCorp

Terraform has detailed logs which can be enabled by setting the TF_LOG environment variable to any value. This will cause detailed logs to appear on stderr

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe some more useful output

camurphy avatar
camurphy
03:58:14 AM
camurphy avatar
camurphy

i reverted to exactly your example with the exception of changing the region to ap-southeast-2 and adding a profile under the aws provider

camurphy avatar
camurphy
03:59:14 AM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I’ll check

camurphy avatar
camurphy

thanks guys

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:05:23 AM
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) any clue?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Environment is empty

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no clue where that’s getting injected

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Out of curiousity, what happens if you pass

tags = {
  "Environment" = "example"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, that’s from terraform-null-label

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

latest changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah crap

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll fix that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no good deed goes unpunished

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(got the same error)

camurphy avatar
camurphy

yep, adding that tags block fixes it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks for reporting the issue

camurphy avatar
camurphy

no worries, thanks for the templates

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ll get that fixed. glad there’s an easy workaround for now.

camurphy avatar
camurphy

i had one more question about this module @Erik Osterman (Cloud Posse), is there a way to use it without creating a route 53 hosted zone? happy to use the elasticbeanstalk.com domain for staging. the readme says zone_id is not required but when omitted i get module.elastic_beanstalk_environment.module.tld.aws_route53_record.default: zone_id must not be empty

2018-09-11

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We’ll fix that too :)

pericdaniel avatar
pericdaniel

anyone know of a way to join a windows AWS instance to the domain using terraform?

pecigonzalo avatar
pecigonzalo

I think that is more on the scope of Ansible or some other provisioner

pericdaniel avatar
pericdaniel

i found this… hoping to try to understand it and have it work

pericdaniel avatar
pericdaniel
How to Configure Your EC2 Instances to Automatically Join a Microsoft Active Directory Domain | Amazon Web Servicesattachment image

Seamlessly joining Windows EC2 instances in AWS to a Microsoft Active Directory domain is a common scenario, especially for enterprises building a hybrid cloud architecture. With AWS Directory Service, you can target an Active Directory domain managed on-premises or within AWS. How to Connect Your On-Premises Active Directory to AWS Using AD Connector takes you […]

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pericdaniel that should work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
AWS – Microsoft AD setup with terraform — Sanjeev Nithyanandam's Blog

Goal – To setup Microsoft Active Directory in AWS Assumptions: You are familiar with terraform Familiar with basics of Active Directory AWS VPC is setup with 2 private subnets. Create Microsoft AD using terraform Shell # Microsoft AD resource

pericdaniel avatar
pericdaniel

i think i got it

pericdaniel avatar
pericdaniel

testing it rightn ow

pericdaniel avatar
pericdaniel

oooooooooooooooo

pericdaniel avatar
pericdaniel

let me look

pecigonzalo avatar
pecigonzalo

pericdaniel avatar
pericdaniel

thank you!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@camurphy https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/48 fixes both issues (empty tags and default DNS zone)

Bump `terraform-null-label` version. Make `zone_id` optional by aknysh · Pull Request #48 · cloudposse/terraform-aws-elastic-beanstalk-environment

what Bump terraform-null-label version Make zone_id optional why New terraform-null-label version fixes the issue with empty tag values (which breaks Elastic Beanstalk environment) Don&#39;t cre…

camurphy avatar
camurphy

thanks @Andriy Knysh (Cloud Posse)!

2018-09-12

eric_garza avatar
eric_garza

Hi

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@eric_garza i wanted to ask you to open an issue for that, but I see we already have it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Add support for custom_error_responses · Issue #23 · cloudposse/terraform-aws-cloudfront-s3-cdn

It would be nice to have support for adding custom error responses to the cloudfront distribution. I can put together a PR if this sounds good.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll add it

eric_garza avatar
eric_garza

ah, thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or you can open a PR for that

eric_garza avatar
eric_garza

Not sure what that is yet

eric_garza avatar
eric_garza

How long until you guys add that, I could just clone this and add for my local use.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so if you know how to add it for your local use, then you know how to implement it

eric_garza avatar
eric_garza

I do!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i mean after you implement it and test, open a PR against our repo and we’ll review and merge it

eric_garza avatar
eric_garza

Sorry, I have not ever contributed to github projects before.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ahh ok

eric_garza avatar
eric_garza

got it

eric_garza avatar
eric_garza

seemed the other person in issue added the PR, would that not cover this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can fork the repo, create a new branch, make the changes (and test), then open a PR against our repo

eric_garza avatar
eric_garza

gotcha

eric_garza avatar
eric_garza

how about the outstanding PR?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

he did not do it

eric_garza avatar
eric_garza

ahh, well , I will, bear with me

eric_garza avatar
eric_garza

i like your modules btw, good work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need help with the PR

eric_garza avatar
eric_garza

I submitted the new input param change in PR, think I did it right, but not under the original. Tested locally, works.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks @eric_garza

2018-09-13

loren avatar

anyone happen to know if terraform 0.12 will interpolate variables and variable files? i would find it very handy to be able to use data sources and pass data source values through a variable…

loren avatar

looks like sortof but not really for the use case i’d want, https://github.com/gruntwork-io/terragrunt/issues/466#issuecomment-385034334

Separate configuration file for Terragrunt? · Issue #466 · gruntwork-io/terragrunt

Hi! I&#39;m one of the engineers at HashiCorp who works on Terraform Core. As you might be aware, we&#39;ve been working for some time now on various improvements to the Terraform configuration lan…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure about interpolation in vars, but it says you’d be able to pass resources and modules as inputs and outputs https://www.hashicorp.com/blog/terraform-0-12-rich-value-types

HashiCorp Terraform 0.12 Preview: Rich Value Types

As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The pos…

loren avatar

tks, had forgotten about that one. need to noodle it some to understand whether it really gets me there

camurphy avatar
camurphy

perhaps a bit of a n00b question but how does one ensure that the terraform_state_backend module is able to run before the terraform block where the bucket and dynamo db tables are referenced? https://github.com/cloudposse/terraform-aws-tfstate-backend

most terraform directory structures i see have a [variables.tf](http://variables.tf), [main.tf](http://main.tf) and [outputs.tf](http://outputs.tf) but perhaps i need a different structure to execute the plan for the creation of the backend … then a separate plan for the rest of the stack that consumes that backend? thanks in advance

cloudposse/terraform-aws-tfstate-backend

Provision an S3 bucket to store terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption - cloudposse/terraform-aws-tfstate-backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@camurphy I believe you are asking a slightly different question: how do we provision terraform-aws-tfstate-backend to store state before we have an S3 backend to store state for terraform-aws-tfstate-backend?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is how …

camurphy avatar
camurphy

i guess i’m asking if terraform-aws-tfstate-backend has to be part of a plan in a separate directory/project to this block:

terraform {
  required_version = ">= 0.11.3"
  
  backend "s3" {
    region         = "us-east-1"
    bucket         = "< the name of the S3 bucket >"
    key            = "terraform.tfstate"
    dynamodb_table = "< the name of the DynamoDB table >"
    encrypt        = true
  }
}
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but to answer your question - yes we do it in separate project folder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

camurphy avatar
camurphy

ah a bootstrap process, makes sense

camurphy avatar
camurphy

thanks guys, sorry should have RFTM

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s coldstart problem….

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Just released our ec2 auto scale module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module provision an EC2 autoscale group - cloudposse/terraform-aws-ec2-autoscale-group

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This will be used for our upcoming EKS modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(Thanks @Andriy Knysh (Cloud Posse) !)

2018-09-14

maarten avatar
maarten

@Andriy Knysh (Cloud Posse) What do you think of using aws_cloudformation_stack for the AutoScalingGroup creation to have AutoScalingRollingUpdate ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@maarten yea TF does not support it because AWS API does not support it. Using CloudFormation for that sounds like a good idea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Using Terraform for zero downtime updates of an Auto Scaling group in AWSattachment image

A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we used aws_cloudformation_stack in a few modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-efs-backup

Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup

loren avatar

i’ve had a lot of trouble with the cfn AutoScalingRollingUpdate feature, and failure modes that cause rollbacks resulting in unexpected states… these days i tend to prefer AutoScalingReplacingUpdate, which is more of a blue/green option. just seems that any failure/rollback is much more reliable this way.

maarten avatar
maarten

@loren Ah good to know! Have you used cloudformation with create conditions? And if so, do you know how I can conditionally output, something from it. How to use Splat syntax with it is unclear to me so far.

loren avatar

yeah, conditional outputs are pretty straightforward in cfn…

"Outputs": {
  "OutputName": {
    "Condition": //ConditionName,
    // Normal Output Arguments
  }
}
maarten avatar
maarten

no i mean, something else

loren avatar

oh

maarten avatar
maarten

Given you have

  resource "aws_cloudformation_stack" "autoscaling_group" {
   count = ".... create t/f
.......
    "Outputs": {
      "AsgName": {
        "Description": "The name of the auto scaling group",
         "Value": {"Ref": "${local.name}"}
      }
    }
  }
  EOF
  }
maarten avatar
maarten

and you conditionally create this stack, how to conditionally reference the output of it

loren avatar

oh oh oh, conditional outputs on the tf side lolol

loren avatar

one sec

loren avatar

ok, so i don’t have an example where we’re conditionally creating the stack, which is the complicating factor

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s try this

maarten avatar
maarten

hehe

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you have this output and no count in the resource:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
output "datapipeline_ids" {
  value       = "${aws_cloudformation_stack.datapipeline.outputs["DataPipelineId"]}"
  description = "Datapipeline ids"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then with count might look like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

output “datapipeline_ids2” { value = “${lookup(element(concat(aws_cloudformation_stack.datapipeline.*.outputs, list(map(“DataPipelineId”, “”))), 0), “DataPipelineId”, “”)}” description = “Datapipeline ids” }

1
loren avatar

getting a value where the output is conditional is easy, otherwise… value = "${lookup(aws_cloudformation_stack.<resource_name>.outputs, "OutputName", "")}"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(not tested )

maarten avatar
maarten

wow, joining with a map with the key but empty value

maarten avatar
maarten

hm, doesn’t work

maarten avatar
maarten

lookup gets a string it says.. unsure why

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok ok I said not tested

maarten avatar
maarten

i want my money back

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me check

maarten avatar
maarten

Hi @loren I have a question about AutoScalingReplacingUpdate vs AutoScalingRollingUpdate, at what times did a rollback take place ? And would AutoScalingReplacingUpdate be something that would work for infrastructure that runs many containers ? AutoScalingRollingUpdate would allow a one by one draining and moving of the containers to the new infrastructure.

loren avatar

Honestly, I just got fed up with failures when using AutoScalingRollingUpdate resulting in an failed rollback, or some other condition where the resulting instances were not actually exactly the way they were before the update

loren avatar

Switched to AutoScalingReplacingUpdate and everything started working exactly the way I wanted

loren avatar

rolling updates are subject to the min/max values of your ASG. if you’re already at the max, then an instance will be terminated before launching a new one based on the new LaunchConfig. if the new instance fails, the rollback action is triggered. That’s where it gets tricky, because the instance was terminated, so now new instances must be launched. Depending on what all’s changed in the template and its dependencies, that new launch of the old launch config may also fail

loren avatar

With AutoScalingReplacingUpdate, a whole new ASG is created, with its own min/max values. No instances are terminated. If any resource signal fails, the new ASG is simply deleted

loren avatar

Only if all resource signals are received/successful is the original ASG deleted, at which point it is subject to draining policies and whatnot

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@loren nice points

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@maarten this worked (for me )

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
locals {
  list = "${coalescelist(aws_cloudformation_stack.datapipeline.*.outputs, list(map("DataPipelineId", "")))}"
  map  = "${local.list[0]}"
  item = "${lookup(local.map, "DataPipelineId", "")}"
}

output "datapipeline_ids" {
  value       = "${local.item}"
  description = "Datapipeline ids"
}
1
loren avatar

you might be better off with containers and rolling updates, since they ought to be more immutable and less subject to the issues i kept running into

rms1000watt avatar
rms1000watt

Have you guys worked with clients that actually want API Gateway + Lambda (aka. serverless.com)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@rms1000watt we use Lambda for some small tasks (e.g. IAM backup, S3 events, etc.), but I think not for deploying a microservice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I guess the issue with that is that you have to start developing with lambda in completely different mindset, but they usually use the standard stuff like Python, Ruby, Java, etc., which is better suited for EC2/ECS/Kubernetes deployments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not easy to break it into pieces for Lambda, and Lambda also has a lot of limitations (starting with a simple one: how to you connect and test all of that locally)

rms1000watt avatar
rms1000watt

@Andriy Knysh (Cloud Posse) From what I’ve seen with my friends at Lantern (@justin.dynamicd) I think they just have dev accounts for integration tests. Then they rely on unit tests before things get deployed.

Honestly, it’s because I devoted a lot of time to this project and just kind of curious what adoption could look like: https://github.com/rms1000watt/serverless-tf And if I should rewrite it when 0.12 is released

rms1000watt/serverless-tf

Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.

loren avatar

i may play with that, i like serverless for being so easy, but not so much a fan of it being backed by cloudformation

rms1000watt/serverless-tf

Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.

rms1000watt avatar
rms1000watt

Yeah, it’s been quite smooth with Veritone & Terragrunt

loren avatar

i’ve been using this module a lot just to manage lambda deployments, but it seems to work best with python and now i’m using a node module that doesn’t vendor node_modules, and this tf module doesn’t currently run npm install --production, https://github.com/claranet/terraform-aws-lambda

claranet/terraform-aws-lambda

Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.

rms1000watt avatar
rms1000watt

oh, interesting

rms1000watt avatar
rms1000watt

yeah, you can pass in arbitrary commands in vendor_cmd and test_cmd

rms1000watt avatar
rms1000watt

I guess it would make sense to add build_cmd also or similar

loren avatar

i’ll try to play with this next week and see if i can make it work for my use case… thanks!

rms1000watt avatar
rms1000watt

For sure! Feel free to reach out if you need any tips/tricks

justin.dynamicd avatar
justin.dynamicd
06:20:42 PM

@justin.dynamicd has joined the channel

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@rms1000watt https://github.com/rms1000watt/serverless-tf looks very interesting, thanks for sharing

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and yea, please rewrite in 0.12 so we all see a real example (somebody needs to be first )

rms1000watt avatar
rms1000watt

Haha, yeah, the rewrite would be so it could mirror serverless.com serverless.yml as much as possible

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

interesting idea with generating TF files

rms1000watt avatar
rms1000watt

Had to do it that way since TF was so strict.. I have to allot all potential lambda slots

rms1000watt avatar
rms1000watt

like, currently, it has a limit of 10 lambda functions

rms1000watt avatar
rms1000watt

but it can be regenerated to support X

loren avatar

is there yet a library that reliably writes hcl? or would you just write json (and rely on 0.12’s improved json roundtrip support)?

rms1000watt avatar
rms1000watt

hmm, not sure off the top of my head. With 0.12 I don’t really want to rely on any libraries or json stuff. I’d prefer to just maintain it as HCL but with nested objects in arrays in objects in arrays, lolol–however serverless.yml does it already

rms1000watt avatar
rms1000watt

then maaaaaaybe write a tool to convert serverless.yml to this module definition. But that would be an entirely different repo/thing.. just separation of concerns

rms1000watt avatar
rms1000watt

shrugs

loren avatar

i think converting serverless.yml could be pretty tough, they are very linked to cloudformation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as @rms1000watt mentioned, write everything in Go, problem solved

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Write everything in Go and use FROM scratch Docker containers in Multi-Stage builds. Problem solved

rms1000watt avatar
rms1000watt

@loren just depends how close the HCL spec matches the serverless.yml

functions:
  index:
    handler: handler.hello
    events:
      - http: GET hello
rms1000watt avatar
rms1000watt

No clue yet until I start jumping in.. or if anyone even wants it.. haha

loren avatar

yeah, the most basic serverless.yml stuff would be easy, but it gets more complicated pretty quick. such as native CFN in the resources section

loren avatar

and references specific to cfn resource names in the serverless generated template

loren avatar

plus the whole plugin system

rms1000watt avatar
rms1000watt

hehe yeah, serverless has done a good job with all its features and functionality

rms1000watt avatar
rms1000watt

I mean, some stuff would just be “do it proper in terraform, then reference the role arn in the module input”

1
loren avatar

not knocking the work so far, i’m pretty excited by a terraform-native project for making serverless-like stuff easier

1
rms1000watt avatar
rms1000watt

No worries! Yeah, it’s been fun so far

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, excited to see someone making serverless easier in terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we haven’t done too much serverless at cloudposse. we have a few little helpers here and there.

loren avatar

does the file argument also support directories? or is it assumed that a function is a single file module? or is file just pointing to the file with the handler, and the whole directory with that file is packaged?

stobiewankenobi avatar
stobiewankenobi

file supports modules, and should zip up the entire directory.

loren avatar

gotcha tks

stobiewankenobi avatar
stobiewankenobi

I didn’t write this, but I think I know how Ryan did it

stobiewankenobi avatar
stobiewankenobi

@rms1000watt correct me if wrong

stobiewankenobi avatar
stobiewankenobi
data "archive_file" "lambda_py_0" {
  type        = "zip"
  source_file = "${local.lambda_py_0_source_file}"
  output_path = "${local.lambda_py_0_zip}"

  depends_on = ["null_resource.py_0_build"]
  count      = "${local.lambda_py_0_count}"
}
stobiewankenobi avatar
stobiewankenobi

Since this is the source file, if you give it a dir, it’ll zip up the entire dir, and pass the zip file back to the lambda func to create it from the zip.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ses-lambda-forwarder

This is a terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the aws-lambda-ses-forwarder NPM module. - cloudposse/terraform-aws-ses-lambda-forwarder

stobiewankenobi avatar
stobiewankenobi

Yeah that’s how I do it as well

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

stobiewankenobi avatar
stobiewankenobi

If Ryan didn’t do it that way FOR SHAMMEEE and should fix.

loren avatar

yeah, was poking around to trace it out, but i do see archive_file takes both source_file and source_dir… https://www.terraform.io/docs/providers/archive/d/archive_file.html#argument-reference

Archive: archive_file - Terraform by HashiCorp

Generates an archive from content, a file, or directory of files.

loren avatar

source_file works on directories though?

stobiewankenobi avatar
stobiewankenobi

Ah yea, but if you do the archive file bit

stobiewankenobi avatar
stobiewankenobi

You want the archive bit to ensure it catches hashed changes.

stobiewankenobi avatar
stobiewankenobi

so it takes a dir, zips it up, and gets a hash of it that persists to state for later comparison

stobiewankenobi avatar
stobiewankenobi

This picks up code changes and updates lambda
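
A minimal sketch of that hashing pattern (resource names and the role variable are illustrative, not the module's actual code):

data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "default" {
  function_name    = "example"
  filename         = "${data.archive_file.lambda.output_path}"
  # redeploys whenever the zipped source changes
  source_code_hash = "${data.archive_file.lambda.output_base64sha256}"
  handler          = "index.handler"
  runtime          = "python3.6"
  role             = "${var.lambda_role_arn}" # execution role ARN, defined elsewhere
}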

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So as it relates to our modules, I don’t like that the zip creation is done by the user. More advanced lambdas will require deps. For example, the SES module uses npm. Having npm installed shouldn’t be a requirement. Thus, here’s how I’m proposing we change it for our ses module: https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder/issues/2

Refactor Zip Creation as Part of CI/CD · Issue #2 · cloudposse/terraform-aws-ses-lambda-forwarder

what Suggested improvements: (can be implemented in separate PR) Build/package zip as part of CI to create an artifact Attach ZIP artifact to release Derive module version from git rename module to…

stobiewankenobi avatar
stobiewankenobi

What do you mean by this? Zip creation above is done automatically by pointing to a dir. Does that mirror your goals as well, or am I missing something?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem is that the zip needs to include all dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Fetching all dependencies via NPM is outside the scope of terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thus, the CI/CD pipeline should fetch and package all dependencies as an artifact.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And then terraform provision the lambdas

stobiewankenobi avatar
stobiewankenobi

Ah got it, I see what you’re referring to

loren avatar

heh, and i’m ok with requiring that our users have npm… i like the vendor_cmd approach that Ryan took… the user specifies the command, they kinda ought to know that they’ll actually need that command

loren avatar

i’d wrap the execution in a script or make target anyway, and test/install whatever command was actually needed

maarten avatar
maarten

Random thought, would it be possible to use codebuild inside the module, and use its artifact in the same cycle?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@loren the problem is also ensuring that different users have the same versions of dependencies. also, for example, sometimes building deps locally (e.g. on a mac) will lead to binary artifacts which are CPU architecture specific. safer to build in a uniform environment.

loren avatar

good points! artifacts should be created from spec files with reproducible dependencies

rms1000watt avatar
rms1000watt

lololololol sorry, was on a call

rms1000watt avatar
rms1000watt
loren avatar

sorry, didn’t mean relative paths… i mean, if i have a directory like this:

project/
  src/
    index.js
    modules/
      foo/
        index.js
rms1000watt avatar
rms1000watt

yeah, it will handle nested also

loren avatar

i need at least the src directory zipped up (if not project), not just src/index.js

rms1000watt avatar
rms1000watt

Ohhhh

rms1000watt avatar
rms1000watt

crap

rms1000watt avatar
rms1000watt

that might break to be honest

rms1000watt avatar
rms1000watt

I have my js and py too simple as just 1 file

rms1000watt avatar
rms1000watt

feel free to fix and make a PR

rms1000watt avatar
rms1000watt

hahaha jk–no worries

loren avatar

i can do a pr, maybe next week

rms1000watt avatar
rms1000watt
rms1000watt/serverless-tf

Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.

rms1000watt avatar
rms1000watt

it would need a whole directory zipped instead of just the file

loren avatar

that’s what i was thinking from skimming the code, also

rms1000watt avatar
rms1000watt

hmm.. optional zip_dir flag or something.. interesting

loren avatar

could take mutually exclusive args, file and dir maybe?

loren avatar

one uses archive_file with source_file, the other uses archive_file with source_dir?
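
Roughly, the two mutually exclusive variants would look like this (a sketch of the idea, not the module's current code):

# package a single-file handler
data "archive_file" "lambda_file" {
  count       = "${var.file != "" ? 1 : 0}"
  type        = "zip"
  source_file = "${var.file}"
  output_path = "${path.module}/lambda.zip"
}

# package a whole directory (nested modules, vendored deps, etc.)
data "archive_file" "lambda_dir" {
  count       = "${var.dir != "" ? 1 : 0}"
  type        = "zip"
  source_dir  = "${var.dir}"
  output_path = "${path.module}/lambda.zip"
}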

rms1000watt avatar
rms1000watt

even better

rms1000watt avatar
rms1000watt

way better

loren avatar

could also try some interpolation magic on the input, but i don’t see a function in terraform to test if a path is a file or dir, so would need to make some assumption based on the string, which i think would be pretty fragile

rms1000watt avatar
rms1000watt

Totally. But the only catch is the variable name is file

rms1000watt avatar
rms1000watt

so putting a dir value in a variable named file would be awkward

loren avatar

yeah, that would be a backwards-incompatible change

rms1000watt avatar
rms1000watt

the dir and file idea is golden

loren avatar

file would get renamed to path or somesuch

rms1000watt avatar
rms1000watt

ah

rms1000watt avatar
rms1000watt

ya

rms1000watt avatar
rms1000watt

some examples use relatives paths

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@maarten your question was about whether we can build an artifact with CodeBuild and then use it in other TF resources?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it says you can store the build artifact in an S3 bucket https://www.terraform.io/docs/providers/aws/r/codebuild_project.html#location

AWS: aws_codebuild_project - Terraform by HashiCorp

Provides a CodeBuild Project resource.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
AWS: aws_s3_bucket_object - Terraform by HashiCorp

Provides metadata and optionally content of an S3 object

maarten avatar
maarten

@Andriy Knysh (Cloud Posse) yeah but related to lambda deployments. So instead of npm locally, using codebuild to create the actual artifact, which after is used again by the module to deploy to aws lambda.

maarten avatar
maarten

was more a hypothetical question..

rms1000watt avatar
rms1000watt

https://github.com/rms1000watt/notejam/blob/master/buildspec.yml https://github.com/rms1000watt/notejam/blob/master/testspec.yml

You can define whatever steps you need in the *.yml that’s used in the CodeBuild + CodePipeline

rms1000watt/notejam

Unified sample web app. The easy way to learn web frameworks. - rms1000watt/notejam

rms1000watt avatar
rms1000watt

i have a codepipeline thats Source > Codebuild (build) > Deploy (ecs) > Codebuild (Integration test)

rms1000watt avatar
rms1000watt

so it could be like Source > Codebuild (compile?) > Codebuild (deploy to lambda?) > Codebuild (integration test)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea you can run any commands in buildspec.yml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/jenkins

Contribute to cloudposse/jenkins development by creating an account on GitHub.

2018-09-15

Ryan Ryke avatar
Ryan Ryke

hello everyone wave

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hey @Ryan Ryke

Ryan Ryke avatar
Ryan Ryke

actually if you guys have a second… ive been playing with the ecs-web-app module… looks like it needs listener ARNs. i used the aws-alb module to build the alb, then took the listener_arns output from there and fed them into the web-app module, but it's erroring while trying to create the target group, says that it's missing an elb

Ryan Ryke avatar
Ryan Ryke
* aws_ecs_service.default: InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-west-2:629113624323:targetgroup/bv-staging-hw/c0cd39cd95d47b66 does not have an associated load balancer.
	status code: 400, request id: bb3f68be-b905-11e8-a342-17f3363522bb "bv-staging-hw"
Ryan Ryke avatar
Ryan Ryke

is the web-app known to be working?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can probably find a reference architecture

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i can

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1 sec

Ryan Ryke avatar
Ryan Ryke

it looks like the alb module creates a target group with default at the end of it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how we used it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "alb" {
source             = "git::https://github.com/cloudposse/terraform-aws-alb.git?ref=tags/0.2.5"
name               = "cluster"
namespace          = "${var.namespace}"
stage              = "${var.stage}"
attributes         = "${var.attributes}"
vpc_id             = "${module.vpc.vpc_id}"
ip_address_type    = "ipv4"
subnet_ids         = ["${module.subnets.public_subnet_ids}"]
security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
access_logs_region = "${var.region}"

https_enabled = "true"
http_ingress_cidr_blocks = "${var.ingress_cidr_blocks_http}"
https_ingress_cidr_blocks = "${var.ingress_cidr_blocks_https}"
certificate_arn    = "${var.default_cert_arn}"
}


module "ecs_cluster_label" {
source    = "git::https://github.com/cloudposse/terraform-terraform-label.git?ref=tags/0.1.6"
name      = "cluster"
namespace = "${var.namespace}"
stage     = "${var.stage}"
}

# ECS Cluster (needed even if using FARGATE launch type)
resource "aws_ecs_cluster" "default" {
name = "${module.ecs_cluster_label.id}"
}

# default backend app
module "default_backend_web_app" {
source    = "git::https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.8.0"
name      = "backend"
namespace = "${var.namespace}"
stage     = "${var.stage}"
vpc_id    = "${module.vpc.vpc_id}"

container_image  = "${var.default_container_image}"
container_cpu    = "256"
container_memory = "512"
container_port   = "80"

#launch_type                 = "FARGATE"
listener_arns                = "${module.alb.listener_arns}"
listener_arns_count          = "1"
aws_logs_region              = "${var.region}"
ecs_cluster_arn              = "${aws_ecs_cluster.default.arn}"
ecs_cluster_name             = "${aws_ecs_cluster.default.name}"
ecs_security_group_ids       = ["${module.vpc.vpc_default_security_group_id}"]
ecs_private_subnet_ids       = ["${module.subnets.private_subnet_ids}"]
alb_ingress_healthcheck_path = "/healthz"
alb_ingress_paths            = ["/*"]

codepipeline_enabled = "false"
ecs_alarms_enabled   = "true"
autoscaling_enabled  = "false"

alb_name                                        = "${module.alb.alb_name}"
alb_arn_suffix                                  = "${module.alb.alb_arn_suffix}"
alb_target_group_alarms_enabled                 = "true"
alb_target_group_alarms_3xx_threshold           = "25"
alb_target_group_alarms_4xx_threshold           = "25"
alb_target_group_alarms_5xx_threshold           = "25"
alb_target_group_alarms_response_time_threshold = "0.5"
alb_target_group_alarms_period                  = "300"
alb_target_group_alarms_evaluation_periods      = "1"
}

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# web app
module "web_app" {
source    = "git::https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.8.0"
name      = "app"
namespace = "${var.namespace}"
stage     = "${var.stage}"
vpc_id    = "${module.vpc.vpc_id}"

container_image  = "${var.default_container_image}"
container_cpu    = "4096"
container_memory = "8192"

#container_memory_reservation = ""
container_port = "80"
desired_count  = "${var.desired}"

autoscaling_enabled               = "true"
autoscaling_dimension             = "cpu"
autoscaling_min_capacity          = "${var.min}"
autoscaling_max_capacity          = "${var.max}"
autoscaling_scale_up_adjustment   = "1"
autoscaling_scale_up_cooldown     = "60"
autoscaling_scale_down_adjustment = "-1"
autoscaling_scale_down_cooldown   = "300"

#launch_type           = "FARGATE"
listener_arns          = "${module.alb.listener_arns}"
listener_arns_count    = "1"
aws_logs_region        = "${var.region}"
ecs_cluster_arn        = "${aws_ecs_cluster.default.arn}"
ecs_cluster_name       = "${aws_ecs_cluster.default.name}"
ecs_security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
ecs_private_subnet_ids = ["${module.subnets.private_subnet_ids}"]

alb_ingress_healthcheck_path  = "/"
alb_ingress_paths             = ["/*"]
alb_ingress_listener_priority = "100"

codepipeline_enabled = "true"
github_oauth_token   = "${var.GITHUB_OAUTH_TOKEN}"
repo_owner           = "XXXXX"
repo_name            = "XXXXX"
branch               = "${var.WEB_APP_BRANCH}"
ecs_alarms_enabled   = "true"

alb_target_group_alarms_enabled                 = "true"
alb_target_group_alarms_3xx_threshold           = "25"
alb_target_group_alarms_4xx_threshold           = "25"
alb_target_group_alarms_5xx_threshold           = "25"
alb_target_group_alarms_response_time_threshold = "0.5"
alb_target_group_alarms_period                  = "300"
alb_target_group_alarms_evaluation_periods      = "1"
alb_name                                        = "${module.alb.alb_name}"
alb_arn_suffix                                  = "${module.alb.alb_arn_suffix}"
}
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i need to go now, will be back in a few hours and will be able to answer qs if you have any

Ryan Ryke avatar
Ryan Ryke

hmm i'm set up almost the exact same, will make it exact

Ryan Ryke avatar
Ryan Ryke

worked, must be something in there that is required for the apply to work. ill have to dig in a little bit more, ty for the full sample

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Andriy Knysh (Cloud Posse)

Ryan Ryke avatar
Ryan Ryke

yes thanks so much @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no problem, glad it worked for you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

2018-09-17

Ryan Ryke avatar
Ryan Ryke

hi guys, it looks like the host_port variable is unused in main.tf. https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/variables.tf#L76 am i missing something?

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

maarten avatar
maarten

@Ryan Ryke I’m working on another ECS module. the empty string host_port will be rendered as null. When using ECS together with an ALB the host port will be dynamically allocated. It’s explained here :

https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html

If using containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.

PortMapping - Amazon EC2 Container Service

Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.

Ryan Ryke avatar
Ryan Ryke

im familiar with the dynamic port mapping, but i guess im a little confused by your comment: yes the module should work, or no the module doesn't work and that's why you are working on a new ecs one?

maarten avatar
maarten

I was just trying to say “I’m just another guy who also works on ECS a lot so I can answer this question for you”

maarten avatar
maarten
blinkist/terraform-aws-airship-ecs-service

Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

maarten avatar
maarten

It’s not as clean code-wise as Cloudposse’s but in production with 3 different customers and it works nicely.

maarten avatar
maarten

nevermind, I now see your point, it’s redundant

maarten avatar
maarten

just make an issue or a pr

Ryan Ryke avatar
Ryan Ryke

ahh ok, for now im going to break out the separate modules to understand how they all play nice together, then ill put in a pr into the web-app module

maarten avatar
maarten

Do you want to use host_port ? What is the use case ?

Ryan Ryke avatar
Ryan Ryke

i was just looking to change container_port

rms1000watt avatar
rms1000watt

@Ryan Ryke ^^^ meant for you

maarten avatar
maarten

Fixed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@maarten @jamie @antonbabenko I might be coming around to using Atlantis as I’ve resolved in my mind how we would do it within the “geodesic” model of operations. The current blocker for us is running it in different accounts. I’d prefer to run one instance per account to limit blast radius.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atlantis nodes in different accounts with one repository · Issue #249 · runatlantis/atlantis

We have a repository that contains our live terraform definitions for multiple accounts. We currently have 4 accounts and plan to have an Atlantis node in each account. We&#39;ve tossed around the …

1

2018-09-18

Pierre avatar

Hi guys. First, thanks for your awesome modules! I am using this one https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment . I have a question: this module creates a load balancer which has a default security group, am I correct? This ELB security group allows ingress 443/tcp or 80/tcp from 0.0.0.0. I would like to change 0.0.0.0 to a custom CIDR. Is it possible?

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Pierre good question

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

looks like we don’t support it at this time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The fix would be to add a section like this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
08:12:13 AM
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and pass the SecurityGroups parameter
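
Concretely, that would mean a setting block along these lines inside the module's aws_elastic_beanstalk_environment resource (the namespace shown is the classic-ELB one, and the variable name is illustrative):

setting {
  namespace = "aws:elb:loadbalancer"
  name      = "SecurityGroups"
  value     = "${join(",", var.loadbalancer_security_groups)}"
}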

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you want to open a PR, we’ll promptly review it

Pierre avatar

thanks @Erik Osterman (Cloud Posse). I will look at it

Gabe avatar

do you guys know of or have any tools to test terraform modules? I was thinking something along the lines of it imports the module, you specify the input variables you want to test, and you make sure the plan is as you expect

loren avatar

is this closer to what you’re looking for? https://github.com/elmundio87/terraform_validate

elmundio87/terraform_validate

Assists in the enforcement of user-defined standards in Terraform - elmundio87/terraform_validate

Gabe avatar

uuu i like this and will probably use it … but not quite

Gabe avatar

take the null-label module that cloudposse made for example… it has an enable flag where if you set it to false no resources get made… i am looking for something where i can run a terraform plan with enable set to true and see it plans to make resources, and set to false to see that it doesn't

Gabe avatar

this is a very simple case… but for more complicated modules with logic involved this would become very useful to me

Gabe avatar

i saw this https://github.com/gruntwork-io/terratest but it actually creates resources and you have to then write api calls to validate and destroy resources which takes time

gruntwork-io/terratest

Terratest is a Go library that makes it easier to write automated tests for your infrastructure code. - gruntwork-io/terratest

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabe we’ve been thinking about testing and CI/CD for terraform modules and even have some POCs, but all of that is in initial stage. @Erik Osterman (Cloud Posse) can give you more info on that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, we’re taking a different, perhaps “easier” approach. while terratest is awesome for writing tests at that level of complexity, we’re currently content with testing that modules create/destroy and are idempotent.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/test-harness

Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
bats-core/bats-core

Bash Automated Testing System. Contribute to bats-core/bats-core development by creating an account on GitHub.

Gabe avatar

oh cool i’ll check it out

Gabe avatar

i guess as pretty simple example of something i would like to be able to test is if the module has a flag to turn on/off certain resources ensuring it works as expected

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one consideration for that is to create multiple terraform.tfvars files with all the different permutations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so in addition to testing successful create/destroy/idempotency for each permutation, you would also test that a specific state is achieved
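
As a rough illustration of the kind of flag being tested here (an enabled toggle gating resource creation, exercised by planning with different variable values):

variable "enabled" {
  default = "true"
}

resource "aws_sns_topic" "default" {
  # nothing is planned when enabled is "false"
  count = "${var.enabled == "true" ? 1 : 0}"
  name  = "example"
}

Planning with -var enabled=true should show one resource to add, and -var enabled=false should show none.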

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i know some are using test kitchen for terraform. we’re holding off on introducing the ruby dependency as long as possible

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s amazing what can be accomplished just using jq

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/test-harness

Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so since the entire terraform state is json, jq will address a lot

1
Gabe avatar

i found this https://github.com/palantir/tfjson so you can output the plan in json… so theoretically you wouldn’t even have to create the resources

palantir/tfjson

Terraform plan file to JSON. Contribute to palantir/tfjson development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is that necessary?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe it’s a more concise representation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there is an -out parameter which will emit the plan in json

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, i guess the .tfplan is not json

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i thought it was

Gabe avatar
Add JSON Output Format to terraform plan · Issue #11883 · hashicorp/terraform

Terraform Version Terraform v0.8.6 Affected Resource N/A Terraform Configuration Files N/A Debug Output N/A Panic Output N/A Expected Behavior terraform plan -out plan.json -format json should crea…

Gabe avatar

doesn’t look like it’s coming out in terraform 0.12 either

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

bummed there’s no binary release

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’d add it to our packages distro

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/packages

Cloud Posse installer and distribution of native apps - cloudposse/packages

ivan.pinatti avatar
ivan.pinatti

Hi everyone!

I have a Jenkins deployment that I did a few months ago using https://github.com/cloudposse/terraform-aws-jenkins , and since last week my automated backup (AWS DataPipeline) started breaking and it is not completing anymore.

It is not throwing any errors, it looks like the backup job simply hangs. Could anyone shed some light on it?

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @ivan.pinatti

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anyway, since EFS backup is done by a CloudFormation template https://github.com/cloudposse/terraform-aws-efs-backup/blob/master/templates/datapipeline.yml

cloudposse/terraform-aws-efs-backup

Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please make sure the CF stack is green

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
01:10:51 AM
ivan.pinatti avatar
ivan.pinatti

All green on CF

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, not sure what’s the limits are on S3 bucket versioning. It might has been reached, please check (https://github.com/cloudposse/terraform-aws-efs-backup/blob/master/s3.tf)

cloudposse/terraform-aws-efs-backup

Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup

ivan.pinatti avatar
ivan.pinatti

Versioning is enabled but there is no set limit anywhere.

ivan.pinatti avatar
ivan.pinatti

On my AWS console it shows like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you run terraform plan/apply to check what it says?

ivan.pinatti avatar
ivan.pinatti

I actually can’t because I had to change the CodeBuild and CodePipeline as my Jenkins Docker image repo is in Bitbucket.

ivan.pinatti avatar
ivan.pinatti

If I do that it is going to change these.

ivan.pinatti avatar
ivan.pinatti

Let me grab a few snapshots of the console

ivan.pinatti avatar
ivan.pinatti
04:56:52 PM
ivan.pinatti avatar
ivan.pinatti
04:57:34 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no other errors, just Cancelled?

ivan.pinatti avatar
ivan.pinatti

On the stdout and stderr it looks like it just hangs

ivan.pinatti avatar
ivan.pinatti

No other errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe you can taint the pipeline and then terraform apply -target=.... to re-create it?

ivan.pinatti avatar
ivan.pinatti

I will jump in a meeting now, I will try later and let you know.

ivan.pinatti avatar
ivan.pinatti
02:20:03 AM
Ryan Ryke avatar
Ryan Ryke

re:testing, we’ve been using kitchen terraform for a customer recently and it seems to fit the bill on a lot of different items

Ryan Ryke avatar
Ryan Ryke

i can chat in greater detail should someone want to hear about it

Ryan Ryke avatar
Ryan Ryke

also still working on the ecs-web-app module… i’ve gotten to the point now where the codepipeline is erroring on s3 permissions :

Insufficient permissions
Unable to access the artifact with Amazon S3 object key 'bv-staging-hw-xxxxx/task/kN9HAdK' located in the Amazon S3 artifact bucket 'bv-staging-hw-xxxxx'. The provided role does not have sufficient permissions.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

really odd

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no tweaks to the modules?

Ryan Ryke avatar
Ryan Ryke

figured i would thread it

Ryan Ryke avatar
Ryan Ryke

do you guys use this module often?

Ryan Ryke avatar
Ryan Ryke

also i noticed that this alb_ingress_paths = ["/"]

Ryan Ryke avatar
Ryan Ryke

needs to be specified otherwise the target group doesnt get attached to the alb (and it errors)

Ryan Ryke avatar
Ryan Ryke

that was the first issue i had

Ryan Ryke avatar
Ryan Ryke

any ideas?

Ryan Ryke avatar
Ryan Ryke

looks like the assume role has the permissions to access the s3 bucket

Ryan Ryke avatar
Ryan Ryke

yeah this is a mystery to me.

Ryan Ryke avatar
Ryan Ryke

so it looks like the build phase is building the image and pushing it ok, but its not uploading any data via the artifacts (nothing is specified in the buildspec that im using)

Ryan Ryke avatar
Ryan Ryke

so added artifacts in the buildspec and it seemed to get past the issue.. its looking for imagedefinitions.json

Ryan Ryke avatar
Ryan Ryke

would you happen to have a sample as to what its expecting? @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s actively in use for a couple of apps at a client site at the current version. But there must be some settings missing that are not well documented

Ryan Ryke avatar
Ryan Ryke

i think i just need the imagedefinitions.json

Ryan Ryke avatar
Ryan Ryke

my assumption was that it was handled by the container def module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes need that

Ryan Ryke avatar
Ryan Ryke

do you happen to have a sanitized sample?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sec

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please add any other issues. :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We will get them fixed

Ryan Ryke avatar
Ryan Ryke

that looks good, what does that imagedefinitions.json look like

Ryan Ryke avatar
Ryan Ryke

also thank you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think that file is created as part of the process

Ryan Ryke avatar
Ryan Ryke
right here: `printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPO_URI:$IMAGE_TAG" | tee imagedefinitions.json`
Ryan Ryke avatar
Ryan Ryke

ok cool

Ryan Ryke avatar
Ryan Ryke

thank you very much

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry that’s a big missing piece. We’ll update the docs

Ryan Ryke avatar
Ryan Ryke

past my bed time, thanks so much dude

Ryan Ryke avatar
Ryan Ryke

nope

Ryan Ryke avatar
Ryan Ryke

i had the issue with the module this morning, so this evening i ripped them all into separate modules, still the same issue

2018-09-19

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ryan Ryke are you up and running?

Ryan Ryke avatar
Ryan Ryke

checking now

Ryan Ryke avatar
Ryan Ryke

looks good

Ryan Ryke avatar
Ryan Ryke

pericdaniel avatar
pericdaniel

ssm_document isn’t applying even though the instance has the correct role? Any ideas why?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pericdaniel did you implement everything from http://www.sanjeevnandam.com/blog/aws-microsoft-ad-setup-with-terraform ?

AWS – Microsoft AD setup with terraform — Sanjeev Nithyanandam's Blog

Goal – To setup Microsoft Active Directory in AWS Assumptions: You are familiar with terraform Familiar with basics of Active Directory AWS VPC is setup with 2 private subnets. Create Microsoft AD using terraform Shell # Microsoft AD resource

pericdaniel avatar
pericdaniel

yea i followedd that

pericdaniel avatar
pericdaniel

right now

pericdaniel avatar
pericdaniel

i notcied that when i go to run command

pericdaniel avatar
pericdaniel

that I dont see the instances I want in there

pericdaniel avatar
pericdaniel

i think it has somthing to do with the ami

pericdaniel avatar
pericdaniel

but not sure

endofcake avatar
endofcake

If you can’t see the instances, this may mean that either the ssm agent is not running or is having trouble connecting to the mothership. Easiest to take a look at its logs to see what’s what.

pericdaniel avatar
pericdaniel

Thank you Sir

pericdaniel avatar
pericdaniel

I have built AMIs with packer

pericdaniel avatar
pericdaniel

So I’m wondering if I’m missing something

endofcake avatar
endofcake

When I had problems with instances not showing up in SSM panel, normally I’d go into an instance and check the agent. So, at the very least, it needs to be 1) installed 2) running 3) able to connect to AWS API

2018-09-20

pericdaniel avatar
pericdaniel

it can't hit the aws api

pericdaniel avatar
pericdaniel

got it working by setting up NAT GW

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ohh, so you deployed it into private subnets w/o a NAT gateway and the instances could not connect to AWS

pericdaniel avatar
pericdaniel

yes sir

pericdaniel avatar
pericdaniel

for some reason i figured that it would be able to anyways and that it was all internal

pericdaniel avatar
pericdaniel

cause it seems like for most services it's able to reach it without hitting the internet

pericdaniel avatar
pericdaniel

but not this =[

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, what “most services”?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it’s in a private subnet w/o a NAT gateway, the traffic can’t leave the VPC at all (except for VPC Private Links)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

glad you solved the issue @pericdaniel

pericdaniel avatar
pericdaniel

sorry

pericdaniel avatar
pericdaniel

anyone set the computername of a windows instance based on instance tags

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Renaming an EC2 Windows instance and customizing its wallpaper with tags using PowerShell

I decided to give blogging a shot because I think I have some potentially interesting patterns in AWS automation using Ansible to share and get feedback on. If you know me, you know how much of a Linux and Mac OS X guy I am, and thus how ironic it

pericdaniel avatar
pericdaniel

thank oyu!

pericdaniel avatar
pericdaniel

do you need to create template files

pericdaniel avatar
pericdaniel

or can you just add user data to the resource you want to use it for

pericdaniel avatar
pericdaniel
04:47:47 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use both HEREDOCs and template files, but we prefer to use https://www.terraform.io/docs/providers/template/d/file.html

pericdaniel avatar
pericdaniel

in that example am I creating a .tpl file that contains what I have above and then referencing that under the resource i'm creating, or can I just leave it there as a data type

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me show an example

pericdaniel avatar
pericdaniel

yay i love examples

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
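
Back to the user-data question: a minimal sketch of the template-file approach for the Windows rename (file and variable names here are illustrative; var.tag_box is the tag value being passed in):

# userdata.tpl contains the PowerShell to run at boot, e.g.:
#   <powershell>
#   Rename-Computer -NewName "${computer_name}" -Force -Restart
#   </powershell>

data "template_file" "userdata" {
  template = "${file("${path.module}/userdata.tpl")}"

  vars {
    computer_name = "${var.tag_box}"
  }
}

resource "aws_instance" "windows" {
  ami           = "${var.ami_id}"
  instance_type = "t3.medium"
  user_data     = "${data.template_file.userdata.rendered}"

  tags {
    Name = "${var.tag_box}"
  }
}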

pericdaniel avatar
pericdaniel

cool! so for vars.. the only var I would have is the ${var.tag_box}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Use as many as you need :)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Update them in the template and in the terraform module

pericdaniel avatar
pericdaniel

is there a way to check if it ran into an error?

pericdaniel avatar
pericdaniel

i have it looking at the tpl file

pericdaniel avatar
pericdaniel

but doesnt do a rename or anything

pericdaniel avatar
pericdaniel

hmm

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i suggest you test the renaming PowerShell script locally before deploying to make sure it does actually work

johncblandii avatar
johncblandii

I’m running the vpc peering module and hitting this error continuously:

data.aws_vpc_endpoint_service.dynamodb: Refreshing state...
data.aws_vpc.acceptor: Refreshing state...
data.aws_vpc_endpoint_service.s3: Refreshing state...
data.aws_subnet_ids.acceptor: Refreshing state...
data.aws_route_table.acceptor[2]: Refreshing state...
data.aws_route_table.acceptor[4]: Refreshing state...
data.aws_route_table.acceptor[8]: Refreshing state...
data.aws_route_table.acceptor[5]: Refreshing state...
data.aws_route_table.acceptor[6]: Refreshing state...
data.aws_route_table.acceptor[0]: Refreshing state...
data.aws_route_table.acceptor[7]: Refreshing state...
data.aws_route_table.acceptor[3]: Refreshing state...
data.aws_route_table.acceptor[1]: Refreshing state...

Error: Error refreshing state: 1 error(s) occurred:

* module.vpc_peering.module.vpc_peering.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Ahh the count issue again :)

johncblandii avatar
johncblandii

hehe

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We’ll take a look

johncblandii avatar
johncblandii

my guess is the subnet count differs or the referenced vpc is different

johncblandii avatar
johncblandii

ahh…checking

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Apply in stages using -target

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

First provision VPC and subnets

johncblandii avatar
johncblandii
inter module dependency · Issue #14432 · hashicorp/terraform

Terraform version: 0.9.5 I have two terraform modules ct-vpc and ct-vpc-peering and I am trying to establish connection between the two modules. here below, I document my approach: vpc module varia…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We saw that issue before

johncblandii avatar
johncblandii

it wasn’t being too kind

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In that module

johncblandii avatar
johncblandii

i can try again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Do you know how to use -target?

johncblandii avatar
johncblandii

could’ve been PEBKAC

johncblandii avatar
johncblandii

yes

johncblandii avatar
johncblandii

so how far down the “tree” do I have to plan individually?

johncblandii avatar
johncblandii

to the resource level?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs - cloudposse/terraform-aws-vpc-peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the VPC needs to be provisioned first using terraform apply -target=

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after it’s provisioned, run apply on terraform-aws-vpc-peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you have the VPC code in a separate folder, no need to use -target, just apply it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-aws-vpc-peering is created in such a way that it does the data source lookup

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

mostly for kops deployments

johncblandii avatar
johncblandii

ok…let me give that a whirl

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

could be done differently

johncblandii avatar
johncblandii

i did the plan and not the apply itself

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-kops-vpc-peering

Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering

johncblandii avatar
johncblandii

started a custom one for our internal module repo but figure no need to recreate it

johncblandii avatar
johncblandii

saw that one too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to apply it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for the VPC to be created first

johncblandii avatar
johncblandii

got it. that’s what i missed

johncblandii avatar
johncblandii

so guess i partially knew how to use -target

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, need to use it if the VPC code and peering code are in the same folder

johncblandii avatar
johncblandii

they’re not. two modules used in one file

johncblandii avatar
johncblandii
module "vpc" {
  source = "../../modules/application-vpc"

  application_name = "vpc-peering-test"
  cidr             = "172.100.0.0/16"
  namespace        = "tf"
  private_subnets  = ["172.100.1.0/24", "172.100.2.0/24", "172.100.3.0/24"]
  public_subnets   = ["172.100.5.0/24", "172.100.6.0/24", "172.100.7.0/24"]
  stage            = "DEV"
}

module "vpc_peering" {
  source = "../../modules/application-vpc-peering"

  application_name = "vpc-peering-test"
  namespace        = "tf"
  stage            = "DEV"

  acceptor_vpc_id  = "vpc-xxx"
  requestor_vpc_id = "${module.vpc.vpc_id}"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one file is the same as in one folder

johncblandii avatar
johncblandii

i meant the core code was in separate folders, but i gotcha

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform apply -target=module.vpc.aws_vpc.application-vpc

johncblandii avatar
johncblandii

thx a ton for the help

1
johncblandii avatar
johncblandii

and for the modules. they’re quite useful.

Ryan Ryke avatar
Ryan Ryke

hey dudes, exposed some variables in the terraform-aws-web-app

Ryan Ryke avatar
Ryan Ryke

not sure its up to cp standards, but it works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Ryan Ryke !

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

LGTM

Ryan Ryke avatar
Ryan Ryke

cool

Ryan Ryke avatar
Ryan Ryke

glad i could contribute back at least a little bit

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

GitHub doesn’t let me tag releases on iOS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Even if I request the desktop site. So we can merge as soon as I get to a desk or @Andriy Knysh (Cloud Posse) is online

2018-09-21

pericdaniel avatar
pericdaniel

@Andriy Knysh (Cloud Posse) my issue was that the tag I had set.. had a forward slash in it. And you cant name servers with a / in it lol

1
Ryan Ryke avatar
Ryan Ryke

another quick, potentially stupid question. the multi-az-subnets. i’m trying to build a 3 tier vpc. public - app - db. the non-public subnets are all building as expected with one exception. only one of the private subnets is mapping to a nat gateway. the other two subnets dont have a nat in their routes in both tiers…

Ryan Ryke avatar
Ryan Ryke

tried updating the “az_ngw_count” but didnt have any luck

Ryan Ryke avatar
Ryan Ryke
module "vpc" {
source     = "s3::https://s3-us-west-2.amazonaws.com/clc-terraform-modules/aws/vpc/terraform-aws-vpc-0.3.4.zip//terraform-aws-vpc-0.3.4"

  name       = "${var.name}"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  cidr_block = "${var.cidr_block}"
}

locals {
  public_cidr_block  = "${cidrsubnet(module.vpc.vpc_cidr_block, 5, 0)}"
  app_cidr_block = "${cidrsubnet(module.vpc.vpc_cidr_block, 5, 4)}"
  db_cidr_block = "${cidrsubnet(module.vpc.vpc_cidr_block, 5, 8)}"
}

module "public_subnets" {
source     = "git::https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=master"
  name                = "public"
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  vpc_id              = "${module.vpc.vpc_id}"
  availability_zones  = "${var.availability_zones}"
  type                = "public"
  igw_id              = "${module.vpc.igw_id}"
  nat_gateway_enabled = "true"
  cidr_block          = "${local.public_cidr_block}"
}

module "app_subnets" {
source     = "git::https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=master"
  name                = "app"
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  vpc_id              = "${module.vpc.vpc_id}"
  availability_zones  = "${var.availability_zones}"
  type                = "private"
  cidr_block          = "${local.app_cidr_block}"
  az_ngw_ids          = "${module.public_subnets.az_ngw_ids}"
  az_ngw_count        = 3
}

module "db_subnets" {
source     = "git::https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=master"
  name                = "db"
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  vpc_id              = "${module.vpc.vpc_id}"
  availability_zones  = "${var.availability_zones}"
  type                = "private"
  cidr_block          = "${local.db_cidr_block}"
  az_ngw_ids          = "${module.public_subnets.az_ngw_ids}"
  az_ngw_count        = 3
}
Ryan Ryke avatar
Ryan Ryke

@Andriy Knysh (Cloud Posse)

Ryan Ryke avatar
Ryan Ryke

ty

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the problem here?

Ryan Ryke avatar
Ryan Ryke

only one of each of the private subnets has the route to the nat gateway

Ryan Ryke avatar
Ryan Ryke

so like just the first one routes publically

Ryan Ryke avatar
Ryan Ryke

changing the ngw count doesnt seem to have any effect either

Ryan Ryke avatar
Ryan Ryke

@Andriy Knysh (Cloud Posse) here

Ryan Ryke avatar
Ryan Ryke

any idea?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’ll need to deploy your example to see what it does (it’s been long time since I tested the module and the examples in it)

Ryan Ryke avatar
Ryan Ryke

the issue is here

Ryan Ryke avatar
Ryan Ryke
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Ryan Ryke avatar
Ryan Ryke

length() counts the number of items, so applying it to a single-digit number will only ever create 1 route, based on the count here: https://github.com/cloudposse/terraform-aws-multi-az-subnets/blob/master/private.tf#L69

cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
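
In other words, length() was being applied to the count value itself rather than to a list, so (illustrative only, not the module's literal code):

output "length_of_a_number" {
  # length("3") counts characters, so it is 1 for any single-digit az_ngw_count
  value = "${length("3")}"
}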

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah yea thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

will fix it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

already deployed it and saw the same issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(stupid mistake )

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Ryan Ryke

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan Ryke can you show the code?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we usually use this module to create public and private subnets https://github.com/cloudposse/terraform-aws-dynamic-subnets (more usage, fewer bugs )

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also have https://github.com/cloudposse/terraform-aws-named-subnets which employs a slightly different strategy depending on what you want to achieve.

cloudposse/terraform-aws-named-subnets

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

Ryan Ryke avatar
Ryan Ryke
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Ryan Ryke avatar
Ryan Ryke

also i posted my sample in the other thread

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i also commented on your PR, thanks, LGTM just a few nitpicks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve released our EKS terraform modules for Kubernetes this week.

Welcome feedback

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I created #chatops for those interested

Ryan Ryke avatar
Ryan Ryke

updated that pr @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Ryan Ryke, merged the PR

Ryan Ryke avatar
Ryan Ryke

cool thanks dude

Ryan Ryke avatar
Ryan Ryke

you have a chance to take a peek at those subnets ?

Ryan Ryke avatar
Ryan Ryke

or do you have samples for the dynamic subnets that i could peek at

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did not look at your code yet

Ryan Ryke avatar
Ryan Ryke

right, i saw those samples, it was tough to tell exactly what that is creating

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh, it creates one public subnet and one private subnet in each AZ you provide

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you can control NAT gateway creation for private subnet (enable or disable them)

Ryan Ryke avatar
Ryan Ryke

right so im guessing that is your guys standard vpc config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have three diff subnets modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but yes we mostly use dynamic-subnets for its simplicity

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan Ryke I lost track, in what channel did you post your code? can you repost it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Ryan Ryke, your PR with the fix was merged

2018-09-22

Ryan Ryke avatar
Ryan Ryke

cool thanks

Ryan Ryke avatar
Ryan Ryke

oy, so looking at the ecs-web-app some more, with the new buildspec.yml that @Erik Osterman (Cloud Posse) graciously showed me. im having a slight issue. essentially the command

printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPO_URI:latest" | tee imagedefinitions.json

overwrites the environment variables that are set by the container_definition module. So at this point, we have to set the app environment variables in the codebuild job so that we can use them.

Ryan Ryke avatar
Ryan Ryke

im wondering if there is a way to not use the imagedef file from codebuild and just stick with the env variables from the container_definition module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can, but that’s incompatible with CI/CD

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically the app repo is authoritative on what software runs on the ECS task

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And terraform is authoritative on what the infrastructure looks like

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The definition is what tells ecs what version of the container to run

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If we used strictly envs and wanted to use cicd, we’d need to run terraform from the cicd pipeline which is more complicated

Ryan Ryke avatar
Ryan Ryke

yeah i think i have it in a usable state

Ryan Ryke avatar
Ryan Ryke

still some testing / use to work on.

Ryan Ryke avatar
Ryan Ryke

right, so i think i figured this issue out: essentially, if the task def built by tf isn't the current running one when codepipeline runs, it removes the env vars

Ryan Ryke avatar
Ryan Ryke

on another note, i created another pr. i need the security group id out of the ecs module https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pulls

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Ryan Ryke avatar
Ryan Ryke

there will be another one for the web-app once the service-task is released

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for the PR! Commented

2018-09-23

Ryan Ryke avatar
Ryan Ryke

updated @Erik Osterman (Cloud Posse)

2018-09-24

Ryan Ryke avatar
Ryan Ryke

morning

Ryan Ryke avatar
Ryan Ryke

so poking around a little bit more, im using the alb module to pass the listener arns into the web-app module. but i believe the alb module creates that default target group with hard coded port 80 and attaches it to the port 80 listener. which precludes me from attaching to that listener in the web app module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@rms1000watt here’s how we use it with the listener ARN:

rms1000watt avatar
rms1000watt

@Ryan Ryke ^^^^

Ryan Ryke avatar
Ryan Ryke

sorry @rms1000watt

rms1000watt avatar
rms1000watt

No worries–i always end up finding out cool stuff this way

Ryan Ryke avatar
Ryan Ryke

@Erik Osterman (Cloud Posse) yeah i have all of that, the problem that im running into is that the container port is on 4000

Ryan Ryke avatar
Ryan Ryke

and for some reason the target group is created, it registers the fargate containers. but it never attaches to the alb listener

Ryan Ryke avatar
Ryan Ryke

trying to get it to go from 80->4000

Ryan Ryke avatar
Ryan Ryke

but it wont connect to the alb

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lol, I keep doing that

Ryan Ryke avatar
Ryan Ryke

and, im not sure it could because the default target group is attached to the port 80 listener

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, i think i follow

Ryan Ryke avatar
Ryan Ryke

when i create the target group outside the webapp module and feed it in, it gives me a “cannot calculate count”

Ryan Ryke avatar
Ryan Ryke

from the ingress module

Ryan Ryke avatar
Ryan Ryke

and that is all created in the terraform-aws-alb

Ryan Ryke avatar
Ryan Ryke

so i guess im wondering how you are creating this outside of this sample https://gist.github.com/osterman/15c639a970252c9adee8da09538659e8#file-ecs-tasks-tf-L15

Ryan Ryke avatar
Ryan Ryke

do you use the aws-alb module for that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sec

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

added example

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(added comment to gist)

Ryan Ryke avatar
Ryan Ryke

so the output to that regarding target groups would be a targetgroup-default, then a target group for the app?

Ryan Ryke avatar
Ryan Ryke

and does this have any capability to run on a different container port than 80?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, I think for your use-case we should make that parameter overridable

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

probably default_target_group_port

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in general, i don't see the benefit of running on non-standard ports on ECS. Your container definition can map 80:4000.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but nothing against adding the port parameter if you want it

Ryan Ryke avatar
Ryan Ryke

right so i had the problem doing the container mapping with fargate

Ryan Ryke avatar
Ryan Ryke

i get this

Error: Error applying plan:

1 error(s) occurred:

* module.hw_pipeline.module.ecs_alb_service_task.aws_ecs_task_definition.default: 1 error(s) occurred:

* aws_ecs_task_definition.default: ClientException: When networkMode=awsvpc, the host ports and container ports in port mappings must match.
	status code: 400, request id: 05402366-c031-11e8-92ce-c3e5c8d9c0bf
Ryan Ryke avatar
Ryan Ryke
port_mappings = [
      {
        containerPort = "4000"
        hostPort = "80"
        protocol = "tcp"
      }
    ]
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sigh…. yes, you’re correct. i forgot about that limitation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


When networkMode=awsvpc, the host ports and container ports in port mappings must match

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so……… with that said, we’re back to square one.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

let’s just parameterize the port and then you should be good to go.

Ryan Ryke avatar
Ryan Ryke

right so in playing around with the aws-ecs-web-app, it looks like i have to add “port” to the alb-ingress module, and “container_port” to the ecs_alb_service_task

Ryan Ryke avatar
Ryan Ryke

and it seems fine so i can put that in the web-app module as a new variable if you guys want to see that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

container_port is already available, right?

Ryan Ryke avatar
Ryan Ryke

so in the web-module both of those port options were not exposed from the underlying modules

Ryan Ryke avatar
Ryan Ryke

i took container_port out and added the “port_mappings” from the updated 0.3.0 container_definition

Ryan Ryke avatar
Ryan Ryke

but im thinking i might pull that back out

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for completeness, i think the container definition can preserve the port_mappings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but we can keep the webapp opinionated and only support one port

Ryan Ryke avatar
Ryan Ryke

ok so i will do my best and put something together to pr for you guys

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thanks man - sorry for the grief

Ryan Ryke avatar
Ryan Ryke

together we all get better… i wont lie, a little bit frustrating, but there are a lot of use cases

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it never ceases to amaze me how changing one thing (e.g. a port) can explode the scope

Ryan Ryke avatar
Ryan Ryke

and i still have one issue i havent resolved yet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what issue is that?

Ryan Ryke avatar
Ryan Ryke

let me take a screen shot

Ryan Ryke avatar
Ryan Ryke
Ryan Ryke avatar
Ryan Ryke

the -default is created by the alb module

Ryan Ryke avatar
Ryan Ryke

and its attached to the alb with a listener of port 80

Ryan Ryke avatar
Ryan Ryke

the one without default is the correct one

Ryan Ryke avatar
Ryan Ryke

its created by the web-app (and underlying sub modules)

Ryan Ryke avatar
Ryan Ryke

problem is, it doesnt attach to the alb

Ryan Ryke avatar
Ryan Ryke

once i update the listener (which is also created in the alb module) to point to the correct target group

Ryan Ryke avatar
Ryan Ryke

all is well

Ryan Ryke avatar
Ryan Ryke

its driving me nuts… i feed in the alb arns and all that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

any place you can share some snippets?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sorry - a bit dense and actually @sarkis was the one who worked on all this stuff, so i am not up to speed on the details

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

high level, the design model looked like this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ALB creates a listener with a default target group. then we attach a default backend (something that always returns a pretty 404) which handles all requests that don't have an explicit route. every task we add using the ingress module should typically use the default listener arn of the ALB and creates a new target group. we never tested mixing ports.
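
A rough sketch of that wiring (the module arguments here are approximations, not the exact interfaces):

# the ALB module creates the load balancer plus an HTTP listener whose default
# action forwards to a catch-all "default" target group (the pretty-404 backend)
module "alb" {
  source             = "git::https://github.com/cloudposse/terraform-aws-alb.git?ref=master"
  namespace          = "${var.namespace}"
  stage              = "${var.stage}"
  name               = "app"
  vpc_id             = "${module.vpc.vpc_id}"
  subnet_ids         = ["${module.subnets.public_subnet_ids}"]
  security_group_ids = ["${aws_security_group.alb.id}"]
}

# each service then uses the ingress module against the shared listener:
# it creates its own target group plus a listener rule with a higher priority
module "alb_ingress" {
  source              = "git::https://github.com/cloudposse/terraform-aws-alb-ingress.git?ref=master"
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  name                = "app"
  vpc_id              = "${module.vpc.vpc_id}"
  listener_arns       = "${module.alb.listener_arns}"
  listener_arns_count = "1"
  paths               = ["/api/*"]
  priority            = "100"
}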

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this pattern is the default with kubernetes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but TBH never saw anyone implement it with ECS (except for us)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/default-backend

Default Backend for ECS that serves a pretty 404 page - cloudposse/default-backend

Ryan Ryke avatar
Ryan Ryke
module "hw_pipeline" {
    source                 = "../../../terraform-aws-ecs-web-app"
    namespace              = "${var.namespace}"
    stage                  = "${var.stage}"
    name                   = "hw"
    listener_arns          = ["${module.hw_alb.listener_arns}"]
    listener_arns_count    = "1"
    aws_logs_region        = "us-west-2"

    vpc_id                 = "${module.vpc.vpc_id}"
    codepipeline_enabled   = "true"

    ecs_cluster_arn        = "${aws_ecs_cluster.ecs_cluster.arn}"
    ecs_cluster_name       = "${aws_ecs_cluster.ecs_cluster.name}"
    ecs_private_subnet_ids = ["${module.app_subnets.az_subnet_ids["us-west-2a"]}", "${module.app_subnets.az_subnet_ids["us-west-2b"]}", "${module.app_subnets.az_subnet_ids["us-west-2c"]}"]

    ecs_security_group_ids = ["${aws_security_group.app_traffic.id}"]
    container_cpu          = "512"
    container_memory       = "1024"
    container_image        = "xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/bv-staging-hw-ecr:latest"

    port_mappings = [
      {
        containerPort = "4000"
        protocol = "tcp"
      }
    ]


    desired_count = "1"

    #alb_target_group_arn = "${aws_lb_target_group.temp.arn}"
    alb_name = "${module.hw_alb.alb_name}"
    alb_arn_suffix = "${module.hw_alb.alb_arn_suffix}"
    alb_ingress_healthcheck_path = "/"
    alb_ingress_paths             = ["/"]
    alb_ingress_listener_priority = "100"

    github_oauth_token = "xxxxxxxxxxxxx"
    repo_owner = "xxxxxxxx"
    repo_name = "hello_world"
    branch = "master"

    ecs_alarms_enabled   = "true"

    alb_target_group_alarms_enabled                 = "true"
    alb_target_group_alarms_3xx_threshold          = "25"
    alb_target_group_alarms_4xx_threshold           = "25"
    alb_target_group_alarms_5xx_threshold           = "25"
    alb_target_group_alarms_response_time_threshold = "0.5"
    alb_target_group_alarms_period                  = "300"
    alb_target_group_alarms_evaluation_periods      = "1"

    environment = [
      {
        name = "COOKIE"
        value = "cJzXwLAT8dwD9SSgBITcRI1ib4ejNts4bgatcfhv"
      },
      {
        name = "PORT"
        value = "80"
      },
      {
        name = "DATABASE_URL"
        value ="postgres://${var.db_user}:${data.aws_ssm_parameter.db_password.value}@${module.rds_instance.instance_endpoint}/apidb"
      }
    ]
}
Ryan Ryke avatar
Ryan Ryke

ok so i think im just an idiot

Ryan Ryke avatar
Ryan Ryke

i didn't notice the "/" rule on the load balancer

Ryan Ryke avatar
Ryan Ryke

ie the aws-alb module creates this

  + module.hw_alb.aws_lb_listener.http
      id:                                        <computed>
      arn:                                       <computed>
      default_action.#:                          "1"
      default_action.0.target_group_arn:         "${aws_lb_target_group.default.arn}"
      default_action.0.type:                     "forward"
      load_balancer_arn:                         "arn:aws:elasticloadbalancing:us-west-2:6xxxxxxx:loadbalancer/app/bv-staging-hw-alb/00bf512a3bba527e"
      port:                                      "80"
      protocol:                                  "HTTP"
      ssl_policy:                                <computed>

  + module.hw_alb.aws_lb_target_group.default
      id:                                        <computed>
      arn:                                       <computed>
      arn_suffix:                                <computed>
      deregistration_delay:                      "15"
      health_check.#:                            "1"
      health_check.0.healthy_threshold:          "2"
      health_check.0.interval:                   "15"
      health_check.0.matcher:                    "200-399"
      health_check.0.path:                       "/"
      health_check.0.port:                       "traffic-port"
      health_check.0.protocol:                   "HTTP"
      health_check.0.timeout:                    "10"
      health_check.0.unhealthy_threshold:        "2"
      name:                                      "bv-staging-hw-alb-default"
      port:                                      "80"
      protocol:                                  "HTTP"
      proxy_protocol_v2:                         "false"
      slow_start:                                "0"
      stickiness.#:                              <computed>
      target_type:                               "ip"
      vpc_id:                                    "vpc-xxxxxxxx"
Ryan Ryke avatar
Ryan Ryke

am i missing something here?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am afk. Not quite sure I follow. We have a default target group that acts like a 404 handler. It matches only when nothing else matches with a higher priority.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is similar to the kubernetes ingress module where there is a default backend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can dig up somewhere an example, but won’t be able to get to it for a few hours

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you rephrase what you are trying to accomplish even higher level?

pericdaniel avatar
pericdaniel

Whats wrong with this?

pericdaniel avatar
pericdaniel
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what’s the error?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

pericdaniel avatar
pericdaniel
pericdaniel avatar
pericdaniel

Basically cant find it

pericdaniel avatar
pericdaniel

=’[

loren avatar

your filter name is off, probably

pericdaniel avatar
pericdaniel

yea im trying to figure out what the value for name should be

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is it your AMI?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


executable_users - (Optional) Limit search to users with explicit launch permission on the image. Valid items are the numeric account ID or self.

pericdaniel avatar
pericdaniel

its a public windows AMI

loren avatar

what i usually do for amazon amis is grab the ami id from the launch wizard, then look it up in the amis console, then work out the pattern from there

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it should not be self

pericdaniel avatar
pericdaniel

should be amazon

loren avatar

the current ami id from the launch wizard in us-east-1 is ami-01945499792201081

pericdaniel avatar
pericdaniel

AMI Name Windows_Server-2016-English-Full-Base-2018.09.15

loren avatar

yep, so use Windows_Server-2016-English-Full-Base-* as the filter value pattern…

pericdaniel avatar
pericdaniel

when I changed self to Amazon I got

pericdaniel avatar
pericdaniel
  • data.aws_ami.ami: 1 error(s) occurred:

  • data.aws_ami.ami: data.aws_ami.ami: InvalidUserID.Malformed: Invalid user id: “amazon” status code: 400, request id: 4decb292-f8a6-4c54-bcc3-f70942a20516

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

amazon is not a valid value

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just remove that field

pericdaniel avatar
pericdaniel

deleting it saved the day

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
AWS: aws_ami - Terraform by HashiCorp

Get information on a Amazon Machine Image (AMI).

pericdaniel avatar
pericdaniel

lol

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you meant owners

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, here’s how @Andriy Knysh (Cloud Posse) does it in our EKS workers module https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/main.tf#L150-L160

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(just wanted to post the same)

pericdaniel avatar
pericdaniel

this is what i have

pericdaniel avatar
pericdaniel
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so in general, if the filter name is specific (as in your case), you might skip adding owners b/c it will find the AMI in any case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you think it could collide with another AMI with similar name pattern, set owners to amazon
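
for example, a minimal lookup using the name pattern from above:

data "aws_ami" "windows" {
  most_recent = true

  # scope the search to Amazon-published images to avoid look-alike AMI names
  owners = ["amazon"]

  filter {
    name   = "name"
    values = ["Windows_Server-2016-English-Full-Base-*"]
  }
}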

loren avatar

do others consider it a “best practice” to specify owners regardless? i’m always concerned with getting hijacked

loren avatar

thought i saw somewhere that packer and terraform were moving to make it a required field

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i think if you know you want an AMI from Amazon, why not specify it?

johncblandii avatar
johncblandii

Anyone using TF Enterprise w/ Sentinel have any thoughts?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(hrm… it hasn’t come up before in this channel - but would be curious if anyone is using it)

johncblandii avatar
johncblandii
HashiCorp Sentinel framework

Policy as code framework for HashiCorp Enterprise Products.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you using TF enterprise for CD?

Ryan Ryke avatar
Ryan Ryke
Ryan Ryke avatar
Ryan Ryke

@Erik Osterman (Cloud Posse)

johncblandii avatar
johncblandii

evaluating it @Erik Osterman (Cloud Posse)

Ryan Ryke avatar
Ryan Ryke

has anyone seen the web app module re-creating the container definition on every apply?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

btw, @maarten is probably one of the most senior guys here on ECS, though he’s managing his own distribution of ECS modules.


2018-09-25

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sounds vaguely familiar

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

checking

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so here’s the problem from recollection (a bit fuzzy)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  • because codepipeline is constantly pushing a new image definition for every release, the container definition diverges quickly
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  • we could use a data source to query the current definition, but then we introduce a cold start problem since the datasources always fail on a new task
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i think we decided the best fix was to ignore_changes on the container definition, but it doesn't look like we carried that out.
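
i.e. something along these lines inside the service task module (just a sketch, not the actual module code):

resource "aws_ecs_task_definition" "default" {
  family                = "${module.default_label.id}"
  container_definitions = "${module.container_definition.json}"

  # let the codepipeline-managed task definition revisions win;
  # terraform then stops trying to revert the container definition on every plan
  lifecycle {
    ignore_changes = ["container_definitions"]
  }
}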

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@sarkis sound familiar?

maarten avatar
maarten

I’ve always used ignore_changes, but I have a new branch which needs testing which works by looking up the datasource after bootstrapping when the container_image == “”

https://github.com/blinkist/terraform-aws-airship-ecs-service/tree/cicd_agnostic_ecs_service

The “ecs_task_definition_selector” compares the created task definition with the current live one; if no changes are found, the live task definition is set for the ecs_service, hence no update to the ecs_service is made.

blinkist/terraform-aws-airship-ecs-service

Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

Ryan Ryke avatar
Ryan Ryke

awesome thanks guys, so heres the weird thing, the module has ignore in it but the high level one does not

Ryan Ryke avatar
Ryan Ryke

further, it was working before. and something i changed is now causing it to not work and change on every run

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Container definition change?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I mean the port mapping change

Ryan Ryke avatar
Ryan Ryke

yeah i think it might be the port mapping

Ryan Ryke avatar
Ryan Ryke

thats really the only thing ive changed from the original web-app module with the exception of adding port to ecs task

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Any luck?

Ryan Ryke avatar
Ryan Ryke

was onsite with a customer all day today

Ryan Ryke avatar
Ryan Ryke

havent taken a peek at it yet this evening

Ryan Ryke avatar
Ryan Ryke

@Erik Osterman (Cloud Posse) it looks like the newer module is doing this "memoryReservation": null,

Ryan Ryke avatar
Ryan Ryke

not sure thats what the deal is

Ryan Ryke avatar
Ryan Ryke

going to switch it back to the older module and check it out

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Adjust output regexp to preserve string type of environment values by fernandosilvacornejo · Pull Request #9 · cloudposse/terraform-aws-ecs-container-definition

what Adjust the regexp used to overcome Terraform's type conversions for integer and boolean parameters in the JSON container definition. The new regexp preserves the string type for environme…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could be related to this recent change

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The Regex was modified

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@fernando can maybe shed some light

Ryan Ryke avatar
Ryan Ryke

yep its the newer version

Ryan Ryke avatar
Ryan Ryke

the old version doesnt have that

Ryan Ryke avatar
Ryan Ryke

that regex… looks like a buncha gobbledygook

Ryan Ryke avatar
Ryan Ryke

hmm

Ryan Ryke avatar
Ryan Ryke

i dont know off hand how to fix this

Ryan Ryke avatar
Ryan Ryke

the old version doesnt handle my container port of 4000

Ryan Ryke avatar
Ryan Ryke

Ryan Ryke avatar
Ryan Ryke

sorta stuck here

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, I think we can get this fixed tomorrow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you open an issue against that repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

post an example of the container definition causing problems for you

Ryan Ryke avatar
Ryan Ryke

lets hold off for now

Ryan Ryke avatar
Ryan Ryke

something weird is happening and im not sure why

Ryan Ryke avatar
Ryan Ryke

i changed it to the old one, ran it twice

Ryan Ryke avatar
Ryan Ryke

the container wouldnt run because of the port issue, then i changed it back to the new one, and now it seems to be running ok

Ryan Ryke avatar
Ryan Ryke

so i moved on

Ryan Ryke avatar
Ryan Ryke

i swear its like my first time with tf

Ryan Ryke avatar
Ryan Ryke

now my problem is with the alb ingress … trying to attach the https listener and im getting this

* module.hw_pipeline.module.alb_ingress.aws_lb_listener_rule.paths[1]: index 1 out of range for list var.listener_arns (max 1) in:

${var.listener_arns[count.index]}
Ryan Ryke avatar
Ryan Ryke

lol

Ryan Ryke avatar
Ryan Ryke

the alb module is attaching the listener

Ryan Ryke avatar
Ryan Ryke

yep it's back to messed up

Ryan Ryke avatar
Ryan Ryke

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can maybe zoom tomorrow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ping me in the afternoon

loren avatar

anyone have tips for dealing with data that is a list of lists in terraform? i need to create multiple iam users, and attach one or more iam policies to each user…

loren avatar

aws_iam_user_policy_attachment only works for a single policy arn, so i need to count over both the number of users, and for each user count over the number of policies to attach to that user

maarten avatar
maarten

@loren I moved away from creating users from a list. There was a terraform issue regarding that; especially when removing a user at the beginning of the list, Terraform couldn't deal with it. Let me look that up, not sure if that is still the case.

I'm actually creating groups with policies and attaching users to those.

loren avatar

yeah, i’m familiar with what happens with the resources when you modify the list…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Agree with @maarten

loren avatar

more interested in how to deal with the data structure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What about a module that represents one user

loren avatar

i would still end up with a list of groups, and a list of policies to attach to each group

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then code generation that generates one tf file per user that invokes the module

loren avatar

i’ve done this before, too, was hoping to keep it simpler by leaving it all in terraform since i’m not worried about the consequences to list modification for this use case

maarten avatar
maarten
module "user_matheus" {
  source    = "../../modules/terraform-aws-user"
  username  = "matheus"
  namespace = "${var.namespace}"
  belongs_to_groups = ["${local.default_staging_qa_user_groups}"]
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea like that

loren avatar

per user resources then. blech.

loren avatar

alright

maarten avatar
maarten

still much better than not knowing what happens when you modify that list

loren avatar

i’m ok with the consequence for this use case is all

loren avatar

just can’t figure out how to deal with the data structure

maarten avatar
maarten

i’ll pm you

loren avatar

“simpler” is probably the wrong word, considering terraform’s limited support for this kind of logic. maybe, fewer steps?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
sethvargo/terraform-provider-filesystem

A @HashiCorp Terraform provider for interacting with the filesystem - sethvargo/terraform-provider-filesystem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Use terraform to generate the terraform code :P

antonbabenko avatar
antonbabenko

And then use resource "null_resource" { provisioner "local-exec" { command = "terraform apply ..." } }. This way you can call Terraform from Terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then use terraform to apply it

loren avatar

i take it back, please make it stop

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@loren what about something like this (just an idea)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
locals {
  users           = []
  users_length    = "${length(local.users)}"
  policies        = []
  policies_length = "${length(local.policies)}"
}

# element(list, index) - Returns a single element from a list at the given index
# If the index is greater than the number of elements, this function will wrap using a standard mod algorithm
# to cover every (user, policy) pair: integer division picks the user, element() wraps over the policies
resource "aws_iam_user_policy_attachment" "test-attach" {
  count      = "${local.users_length * local.policies_length}"
  user       = "${element(local.users, count.index / local.policies_length)}"
  policy_arn = "${element(local.policies, count.index)}"
}
loren avatar

the policies per user may all be different… i know i am pushing my luck with the data structure and different types for different keys, but this is kind of the idea…

  users = [
    {
      name = "..."
      policies = [
        "abc",
        "def",
      ]
    },
    {
      name = "..."
      policies = [
        "123",
        "456",
        "abc",
      ]
    },
  ]
loren avatar

maarten gave me an idea using the group membership resources, since they support list-based arguments

maarten avatar
maarten

Also using groups to give users policies is best practice following the AWS Well-Architected Framework

loren avatar

quite right, i am appropriately ashamed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maarten avatar
maarten
Define resources by iterating over a list · Issue #4410 · hashicorp/terraform

Feature idea: a resources declaration to declare several resources, possibly by iterating over variables (list or map) Say you want to manage IAM users with Terraform, but DRY-up their groups (or o…

maarten avatar
maarten

So with groups, roles and policies this isn’t a problem, but with users and their unmanaged login-profiles this ends up as bolognese.

maarten avatar
maarten

What I really miss in IAM btw is an attachable assume_role_policy.

loren avatar

ok, thanks to @maarten, this module gets me to a working config…

variable "users" {
  type = "list"
  description = "List of maps of IAM user names and a comma-separated-string of their IAM group memberships by group name"
}

variable "groups" {
  type = "map"
  description = "Map of IAM group names to a policy arn"
}

resource "aws_iam_user" "this" {
  count = "${length(var.users)}"

  name = "${lookup(var.users[count.index], "name")}"
}

resource "aws_iam_group" "this" {
  count = "${length(var.groups)}"

  name = "${element(keys(var.groups), count.index)}"
}

resource "aws_iam_group_policy_attachment" "this" {
  count = "${length(aws_iam_group.this.*.name)}"

  group      = "${aws_iam_group.this.*.name[count.index]}"
  policy_arn = "${lookup(var.groups, aws_iam_group.this.*.name[count.index])}"
}

resource "aws_iam_user_group_membership" "this" {
  count = "${length(var.users)}"

  user   = "${lookup(var.users[count.index], "name")}"
  groups = "${split(",", lookup(var.users[count.index], "groups"))}"

  depends_on = [
    "aws_iam_group_policy_attachment.this",
    "aws_iam_user.this"
  ]
}
loren avatar

this takes a list of users and each user's groups, and a map of group names to policy arns. creates a group per policy, attaches the policy to the group, and makes each user a member of their groups

loren avatar

pretty much the trick ends up being that aws_iam_user_group_membership takes a list of groups, so no need to iterate over a nested list

loren avatar

takes inputs of the form:

  users = [
    {
      name = "..."
      groups = "abc,def",
    },
    {
      name = "..."
      groups = "123,456",
    },
  ]

  groups = {
    "abc" = "arn:..."
    "def" = "arn:..."
    "123" = "arn:..."
    "456" = "arn:..."
  }

2018-09-26

jonboulle avatar
jonboulle

@jonboulle has joined the channel

maarten avatar
maarten

Anyone seeing this a lot lately ?

Error: Error loading state: RequestError: send request failed
caused by: Get https://xx-tf-state.s3.eu-central-1.amazonaws.com/?prefix=env%3A%2F: EOF
Ryan Ryke avatar
Ryan Ryke

got that yesterday, retry fixed it

maarten avatar
maarten

Last few days this is going on and it still is.

Ryan Ryke avatar
Ryan Ryke

havent had it today yet

stobiewankenobi avatar
stobiewankenobi

Yes

stobiewankenobi avatar
stobiewankenobi

Lots and lots. An open issue exists for this as well.

stobiewankenobi avatar
stobiewankenobi
Intermittent error using s3 state · Issue #4709 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

mrwacky avatar
mrwacky

Not a lot, but we did get that the other day, yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ryan Ryke if you get a chance, can you open an issue for that problem? https://github.com/cloudposse/terraform-aws-ecs-container-definition/issues

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so I can try to reproduce and fix it

Ryan Ryke avatar
Ryan Ryke

hey yeah, been heads down today

Ryan Ryke avatar
Ryan Ryke

still having the issue

Ryan Ryke avatar
Ryan Ryke

havent looked at it

Ryan Ryke avatar
Ryan Ryke
Container definition rebuild on every run · Issue #13 · cloudposse/terraform-aws-ecs-alb-service-task

using this module as part of https://github.com/cloudposse/terraform-aws-ecs-web-app Working on the web app module, and increased the version to 0.6.0 module "ecs_alb_service_task" { sour…

2018-09-27

Ryan Ryke avatar
Ryan Ryke

anyone able to recreate this?

Ryan Ryke avatar
Ryan Ryke

anyone around ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan Ryke are you saying that container_definition_json = "${module.container_definition.json}" gets changed on each terraform plan and TF wants to update it?

Ryan Ryke avatar
Ryan Ryke

to add some context im using the web-app module

Ryan Ryke avatar
Ryan Ryke

i forked it

Ryan Ryke avatar
Ryan Ryke

and updated the container def source to the newer module with port mappings

Ryan Ryke avatar
Ryan Ryke

this is updating everytime

-/+ module.hw_pipeline.module.ecs_alb_service_task.aws_ecs_task_definition.default (new resource required)
      id:                                        "bv-staging-hw" => <computed> (forces new resource)
Ryan Ryke avatar
Ryan Ryke

so its the ecs_service_task that is updating everytime

Ryan Ryke avatar
Ryan Ryke

when i roll back the container def module back to the older one

Ryan Ryke avatar
Ryan Ryke

it doesnt want to update every time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Adjust output regexp to preserve string type of environment values by fernandosilvacornejo · Pull Request #9 · cloudposse/terraform-aws-ecs-container-definition

what Adjust the regexp used to overcome Terraform's type conversions for integer and boolean parameters in the JSON container definition. The new regexp preserves the string type for environme…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the regex changed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i haven’t had a chance to take a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@fernando is in the slack, but didn’t get a response

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok so you are saying that before the update everything was ok?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(I did not test it so can’t give an answer right now before I test)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan Ryke ^

Ryan Ryke avatar
Ryan Ryke

right

Ryan Ryke avatar
Ryan Ryke

and while i have you guys

Ryan Ryke avatar
Ryan Ryke

if you have the time i have one other question on the intended usage of the alb_ingress module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok

Ryan Ryke avatar
Ryan Ryke

so in my app im working on listening on both 80 and 443

Ryan Ryke avatar
Ryan Ryke

i used the alb module to set up both listener_arns

Ryan Ryke avatar
Ryan Ryke

feed them into the web_app module

Ryan Ryke avatar
Ryan Ryke
Error: module.hw_pipeline.module.alb_ingress.aws_lb_listener_rule.paths: 1 error(s) occurred:

* module.hw_pipeline.module.alb_ingress.aws_lb_listener_rule.paths[1]: index 1 out of range for list var.listener_arns (max 1) in:

${var.listener_arns[count.index]}
Ryan Ryke avatar
Ryan Ryke

should it only be for one listener at a time, and should i just call the alb_ingress module again outside the web_app module?

Ryan Ryke avatar
Ryan Ryke

i guess im confused with this logic here

  count        = "${length(var.hosts) > 0 && length(var.paths) == 0 ? var.listener_arns_count : 0}"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you link me to the line

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

higher level… this is providing one kind of routing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

host-based routing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s host-based and path-based

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

right now, it doesn’t support both.

Ryan Ryke avatar
Ryan Ryke

yeah i just have paths

Ryan Ryke avatar
Ryan Ryke
cloudposse/terraform-aws-alb-ingress

Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress

Ryan Ryke avatar
Ryan Ryke

on 2 listener arns

Ryan Ryke avatar
Ryan Ryke

breaks on the second one

Ryan Ryke avatar
Ryan Ryke

im also wanting to forward 80 to 443

Ryan Ryke avatar
Ryan Ryke

so im thinking i might be better off having the high level module call these submodules a couple times

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sec

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what are you passing for listener_arns_count?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

1?

Ryan Ryke avatar
Ryan Ryke

2

Ryan Ryke avatar
Ryan Ryke

and listener arns has 2 arns in it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you zoom

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

?

Ryan Ryke avatar
Ryan Ryke

sure let me shut my door

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Splat doesn't work properly · Issue #13397 · hashicorp/terraform

Hi there, Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https:/…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i remember running into this when we implemented our memcache module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe try this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
0.9.5 regression: index out of range when passing a list of values to a module · Issue #14521 · hashicorp/terraform

Terraform Version This is a 0.9.5 regression, still occurring in master (as of f5056b7) I am guessing that this has something to do with #14135 but I could be lying. Affected Resource(s) Not resour…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
0.9.5 regression: index out of range when passing a list of values to a module · Issue #14521 · hashicorp/terraform

Terraform Version This is a 0.9.5 regression, still occurring in master (as of f5056b7) I am guessing that this has something to do with #14135 but I could be lying. Affected Resource(s) Not resour…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also looks interesting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i wonder if we can rework how we return the output

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
value       = "${compact(concat(aws_lb_listener.http.*.arn, aws_lb_listener.https.*.arn))}"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe we should write it that way now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(without the [])

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


The surrounding brackets are optional for resource attributes, and no longer recommended.

Ryan Ryke avatar
Ryan Ryke

cool will try in a couple of minutes

Ryan Ryke avatar
Ryan Ryke

as the output of the alb module ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Output and input to ingress

Ryan Ryke avatar
Ryan Ryke

right yeah ill pass it along

Ryan Ryke avatar
Ryan Ryke

alb is outside the webapp

Ryan Ryke avatar
Ryan Ryke

testing now

Ryan Ryke avatar
Ryan Ryke

@Erik Osterman (Cloud Posse) genius

Ryan Ryke avatar
Ryan Ryke

i pulled [] out of everywhere
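
for anyone following along, the change was basically this (illustrative, based on the config pasted earlier):

# before: wrapping the module's list output in brackets yielded a single-element
# nested list, so anything past index 0 failed with "index 1 out of range (max 1)"
listener_arns = ["${module.hw_alb.listener_arns}"]

# after: pass the list attribute through directly; per the docs quoted above,
# the surrounding brackets are optional and no longer recommended
listener_arns = "${module.hw_alb.listener_arns}"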

Ryan Ryke avatar
Ryan Ryke

ty ty ty

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Omfg

Ryan Ryke avatar
Ryan Ryke

lol what a trip

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That was a Hail Mary

Ryan Ryke avatar
Ryan Ryke

Ryan Ryke avatar
Ryan Ryke

i can put some prs in if you want

Ryan Ryke avatar
Ryan Ryke

assuming alb first

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would very much appreciate it

Ryan Ryke avatar
Ryan Ryke

ok let me see if i can do this

Ryan Ryke avatar
Ryan Ryke

would like to get this code off my box lol

Ryan Ryke avatar
Ryan Ryke

oh hang on i have to revert a couple of changes that we made on the call

Ryan Ryke avatar
Ryan Ryke

interestingly enough it takes two applies to get it to work

Ryan Ryke avatar
Ryan Ryke
removing the brackets for a correct count with listener_arns by rryke · Pull Request #13 · cloudposse/terraform-aws-alb

was failing on anything larger than 1 with index 1 out of range for list var.listener_arns (max 1) in: ${var.listener_arns[1]}

Ryan Ryke avatar
Ryan Ryke

guessing something in here

resource "aws_lb_listener_rule" "paths" {
  count        = "${length(var.paths) > 0 && length(var.hosts) == 0 ? var.listener_arns_count : 0}"
  listener_arn = "${var.listener_arns[count.index]}"
  priority     = "${var.priority + count.index}"

  action {
    type             = "forward"
    target_group_arn = "${local.target_group_arn}"
  }

  condition {
    field  = "path-pattern"
    values = ["${var.paths}"]
  }
}
Ryan Ryke avatar
Ryan Ryke

trying to understand how to either fix this, or whether this is intended

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the aws_lb_listener_rule.paths resource is created if you want to use path based routing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. send /api to service A

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the aws_lb_listener_rule.hosts resource is created if you want to use host based routing.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. send api.example.com to service A

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Adjust output regexp to preserve string type of environment values by fernandosilvacornejo · Pull Request #9 · cloudposse/terraform-aws-ecs-container-definition

what Adjust the regexp used to overcome Terraform's type conversions for integer and boolean parameters in the JSON container definition. The new regexp preserves the string type for environme…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t see here https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_environment where it’s required to have integers and booleans as strings for ENV vars

Task Definition Parameters - Amazon Elastic Container Service

Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family is the name of the task, and each family can have multiple revisions. The IAM task role specifies the permissions that containers in the task should have. The network mode determines how the networking is configured for your containers. Container definitions specify which image to use, how much CPU and memory the container are allocated, and many more options. Volumes allow you to share data between containers and even persist the data on the container instance when the containers are no longer running. The task placement constraints customize how your tasks are placed within the infrastructure. The launch type determines which infrastructure your tasks use.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so what’s probably happening is that we send strings to AWS, but they get converted to the original types

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then TF reads them and compares to what it has and sees that they are different

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Module recreates all `settings` on each `terraform plan/apply` · Issue #43 · cloudposse/terraform-aws-elastic-beanstalk-environment

terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply setting.1039973377.name: "InstancePort" => "InstancePort" setting.1039973377.n…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or actually what’s going on is that the PR keeps the strings for all values, but it was supposed to do it only for ENV vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that needs to be fixed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) do you think you can take a stab at the container definition fix tomorrow?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok

Ryan Ryke avatar
Ryan Ryke
removing the brackets for a correct count with listener_arns by rryke · Pull Request #13 · cloudposse/terraform-aws-alb

was failing on anything larger than 1 with index 1 out of range for list var.listener_arns (max 1) in: ${var.listener_arns[1]}

Ryan Ryke avatar
Ryan Ryke

ill put a pr in for an updated ecs-web-app with these related changes, along with the port mapping, and any updates you might be able to muster with the container_definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

tagged a release

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

0.2.6


2018-09-28

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Container definition rebuild on every run · Issue #13 · cloudposse/terraform-aws-ecs-alb-service-task

using this module as part of https://github.com/cloudposse/terraform-aws-ecs-web-app Working on the web app module, and increased the version to 0.6.0 module "ecs_alb_service_task" { sour…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the latest release of https://github.com/cloudposse/terraform-aws-ecs-container-definition correctly handles environment when converting to JSON (preserves strings)

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(if, as you mentioned in the PR, you are using 0.6.0, it’s a very old version. Can you try the latest?)

Ryan Ryke avatar
Ryan Ryke

yep i am

Ryan Ryke avatar
Ryan Ryke

when i revert the port mapping and use container port it works fine

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you show the difference (when it works and when it does not)

Ryan Ryke avatar
Ryan Ryke

also if i revert to 0.5.0 it works fine

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the code difference

Ryan Ryke avatar
Ryan Ryke

its pretty basic, swap out portmapping with container port and adjust the version

Ryan Ryke avatar
Ryan Ryke

the part thats recreating it is the ecs_task_def

Ryan Ryke avatar
Ryan Ryke

but when i revert container def it stops doing it

Ryan Ryke avatar
Ryan Ryke

current container def is 0.3.0

Ryan Ryke avatar
Ryan Ryke

current ecs_alb_service_task is 0.6.0

Ryan Ryke avatar
Ryan Ryke
module "container_definition" {
  source                       = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=tags/0.3.0"
  container_name               = "${module.default_label.id}"
  container_image              = "${var.container_image}"
  container_memory             = "${var.container_memory}"
  container_memory_reservation = "${var.container_memory_reservation}"
  container_cpu                = "${var.container_cpu}"
  healthcheck                  = "${var.healthcheck}"
  environment                  = "${var.environment}"
  port_mappings                = "${var.port_mappings}"
  log_options = {
    "awslogs-region"        = "${var.aws_logs_region}"
    "awslogs-group"         = "${aws_cloudwatch_log_group.app.name}"
    "awslogs-stream-prefix" = "${var.name}"
  }
}

module "ecs_alb_service_task" {
  source                    = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=tags/0.6.0"
  name                      = "${var.name}"
  namespace                 = "${var.namespace}"
  stage                     = "${var.stage}"
  alb_target_group_arn      = "${module.alb_ingress.target_group_arn}"
  container_definition_json = "${module.container_definition.json}"
  container_name            = "${module.default_label.id}"
  desired_count             = "${var.desired_count}"
  task_cpu                  = "${var.container_cpu}"
  task_memory               = "${var.container_memory}"
  ecr_repository_name       = "${module.ecr.repository_name}"
  ecs_cluster_arn           = "${var.ecs_cluster_arn}"
  launch_type               = "${var.launch_type}"
  vpc_id                    = "${var.vpc_id}"
  security_group_ids        = ["${var.ecs_security_group_ids}"]
  private_subnet_ids        = ["${var.ecs_private_subnet_ids}"]
  container_port            = "${var.container_port}"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the code above is not working (updates every time)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s your var.port_mappings?

Ryan Ryke avatar
Ryan Ryke
port_mappings                                   = [
      {
        containerPort                               = "4000"
        protocol                                    = "tcp"
      }
    ]
Ryan Ryke avatar
Ryan Ryke

@Andriy Knysh (Cloud Posse)

Ryan Ryke avatar
Ryan Ryke

oops sorry

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok so i know the issue

Ryan Ryke avatar
Ryan Ryke

nice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

add hostPort the same as containerPort

Ryan Ryke avatar
Ryan Ryke

tried that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform keeps forcing new resource on unchanged container_definitions · Issue #16769 · hashicorp/terraform

Terraform Version Terraform v0.11.0 + provider.aws v1.4.0 Terraform Configuration Files resource "aws_ecs_task_definition" "httpd" { family = "foo-httpd-${var.environment}…

Ryan Ryke avatar
Ryan Ryke

let me try again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
aws_ecs_task_definition needs updating even if nothing changes (look at "portMappings") · Issue #3401 · terraform-providers/terraform-provider-aws

Terraform Version Terraform v0.11.3 provider.aws v1.8.0 provider.template v1.0.0 Affected Resource(s) Please list the resources as a list, for example: aws_ecs_task_definition Terraform Configurati…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, what version of TF aws provider are you using?

Ryan Ryke avatar
Ryan Ryke

latest

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok so please check the hostPort and then the healthcheck as described here https://github.com/terraform-providers/terraform-provider-aws/issues/3401#issuecomment-420830007

aws_ecs_task_definition needs updating even if nothing changes (look at "portMappings") · Issue #3401 · terraform-providers/terraform-provider-aws

Terraform Version Terraform v0.11.3 provider.aws v1.8.0 provider.template v1.0.0 Affected Resource(s) Please list the resources as a list, for example: aws_ecs_task_definition Terraform Configurati…

Ryan Ryke avatar
Ryan Ryke

will check

Ryan Ryke avatar
Ryan Ryke

that did it

Ryan Ryke avatar
Ryan Ryke

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which one?

Ryan Ryke avatar
Ryan Ryke

host port

Ryan Ryke avatar
Ryan Ryke

i'm almost sure i tried that before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so well, they said they fixed the port issue in aws provider 1.36.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which one do you have?

Ryan Ryke avatar
Ryan Ryke

1.32

Ryan Ryke avatar
Ryan Ryke

let me re init

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s not the latest

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok anyway, thank you for testing and finding the issues. We’ll add a description to README that if you have TF provider < 1.36.0 AND use network_mode = "awsvpc" then you have to add hostPort to the port mappings
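
i.e. something like this in the port mappings (a sketch of the workaround described above):

port_mappings = [
  {
    containerPort = "4000"
    hostPort      = "4000" # must equal containerPort when network_mode = "awsvpc"
    protocol      = "tcp"
  }
]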

2018-09-29

Ryan Ryke avatar
Ryan Ryke

feel free to comment so we can get it merged plz

Ryan Ryke avatar
Ryan Ryke

hi @Erik Osterman (Cloud Posse) im not sure what that error means

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ryan Ryke it’s complaining that it needs to be properly formatted

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just run terraform fmt .

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Ryan Ryke avatar
Ryan Ryke
| => terraform fmt
outputs.tf
variables.tf
________________________________________________________________________________________________________________
| ~/chef/terraform-aws-ecs-web-app @ RR-PRO (ryanryke)
| => terraform fmt .
__________________________________________
Ryan Ryke avatar
Ryan Ryke

¯_(ツ)_/¯

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

perfect

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

now git diff

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you should see the changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform fmt == terraform fmt .

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the first time you ran it, it formatted the code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

second time no more formatting necessary

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ryan Ryke bump

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

want to commit & push that?

Ryan Ryke avatar
Ryan Ryke

sure thing

Ryan Ryke avatar
Ryan Ryke

there you go

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

merged & released

praveen avatar
praveen

Hi, I'm trying to create Azure services using Terraform. I have base modules with an Azure availability set, load balancers, and a couple of azurerm_virtual_machine_extension's in the base DRY module. I want a conditional option to create only some of the extensions when deploying Azure VMs that refer to the base DRY modules. I'm trying to use count = "${var.enabled == "true" ? 1 : 0}" in the base azurerm_virtual_machine_extension tf file to make the extension optional, but it's not working when I use the count option. Can you please let me know if using count in the base .tf file (resource) as an option is the right approach?

praveen avatar
praveen

hi

2018-09-30

praveen avatar
praveen

Hi, I'm trying to create Azure services using Terraform. I have base modules with an Azure availability set, load balancers, and a couple of azurerm_virtual_machine_extension's in the base DRY module

praveen avatar
praveen

i want an option to conditionally create only some of the extensions when deploying Azure VMs that refer to the base DRY modules

praveen avatar
praveen

am trying to use count = "${var.enabled == "true" ? 1 : 0}" in the base azurerm_virtual_machine_extension tf file to see if I can make the extension optional

praveen avatar
praveen

but it's not working when I use the count option

praveen avatar
praveen

can you please let me know if using count in the base .tf file (resource) as an option is the right approach

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@praveen count = "${var.enabled == "true" ? 1 : 0}" is a correct way to enable/disable a resource

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use what we call splat+join pattern for resources with counts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you put count into a resource, then it becomes a list (not a single resource)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so anywhere you use any of the resource's attributes, you have to get the item from the list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

splat+join pattern does this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if enabled = "true", it gets the first item

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if enabled = "false", it gets an empty string (and TF does not complain)
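
for example, a minimal sketch of the pattern (the extension arguments are placeholders, substitute your real values):

resource "azurerm_virtual_machine_extension" "default" {
  count = "${var.enabled == "true" ? 1 : 0}"

  name                 = "custom-script"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${var.virtual_machine_name}"
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"
}

# splat+join: when enabled this returns the single extension id,
# when disabled the splat list is empty and join() returns ""
output "extension_id" {
  value = "${join("", azurerm_virtual_machine_extension.default.*.id)}"
}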

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hope that helps. let us know if you need more help
