#terraform (2018-09)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2018-09-03
So I’ve got a weird problem that I’ve never encountered before. I have a module that creates a vpc and does the usual stuff: public/private subnets, nat gateways etc. I then use the data returned from this module with an ec2 module which puts x instances in those subnets/availability zones and evenly spreads them
In this situation I’m creating 4 instances. The first 2 get created fine. The 3rd it attempts to create in the correct AZ but the wrong subnet, specifically a subnet that terraform hasn’t created or manages — it’s actually one of the subnets in the default vpc. And then the 4th node also fails because it has the correct AZ but is instead using the subnet that the 3rd should, which is in a different AZ
I’ve checked the state and the subnet from the default vpc doesn’t show up in there at all
@Andrew Jeffree this is one of the big reasons we have our purpose built modules
are you using our subnet modules and our vpc module?
(basically, it’s easy to mess things up)
No I’m using mine. I wasn’t after you to support these, but I was hoping someone may have encountered this weirdness in the past on the Terraform side.
and I’ve used these modules for quite a while without this problem
are you using any data providers?
nope
just passing the output from one module to another
are you extracting values from maps?
(may lead to different orders)
maybe the order of the subnets and the order of the azs is different
yeah so I considered the ordering issue
ok
but that doesn’t explain where the default vpc subnet is coming into things
that’s the part that’s doing my head in.
I’m aware you can manage the default vpc somewhat in terraform, but I don’t do that. I just leave it be and ignore it.
yea, we never use it
ah I found my bug. It’s a conditional that someone else added to the module.
phew! that’s a relief
by default in our module we put instances in private subnets, but someone wanted to put them in public subnets, so they created a variable and were merging an empty list with the list of private subnet ids
which means the 3rd item in the list, when computed, was blank, which means AWS tries to put it in the default vpc.
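for anyone who hits the same thing, here’s a minimal sketch of that failure mode (subnet ids and AMI are made up) — an empty element from element() leaves subnet_id effectively unset, so EC2 falls back to a default-VPC subnet in that AZ:
variable "subnet_ids" {
  # hypothetical result of the bad merge: the 3rd element ended up blank
  default = ["subnet-aaa111", "subnet-bbb222", "", "subnet-ccc333"]
}

resource "aws_instance" "example" {
  count         = 4
  ami           = "ami-12345678"                            # placeholder
  instance_type = "t2.micro"
  subnet_id     = "${element(var.subnet_ids, count.index)}" # "" for the 3rd instance
}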
good sleuthing! makes sense
goes to have words with the author of that commit…
ah also thank you for acting as a sounding board @Erik Osterman (Cloud Posse) much appreciated.
@Andrew Jeffree you wouldn’t believe it. I ran into the same problem today while training one of my guys. I wouldn’t have figured it out nearly as quickly if you hadn’t shared this. Thanks!!
Haha you’re welcome.
np, sometimes that’s all it takes
2018-09-04
Hi Gang,
I’ve just started using your vpc_peering
terraform module and have run into an issue during the plan stage.
I’m getting
* module.vpc_peering.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed
I checked out the FAQ at https://github.com/cloudposse/docs/blob/master/content/faq/terraform-value-of-count-cannot-be-computed.md and it doesn’t seem to be the same issue.
I am getting it during the plan stage when the requestor_vpc_id
is coming from the output of a vpc
module, however that vpc hasn’t yet been created and the id is going to be computed at this stage.
Is this something you’ve seen before, and if so is this a supported scenario?
Cloud Posse Developer Hub. Complete documentation for the Cloud Posse solution. https://docs.cloudposse.com - cloudposse/docs
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "1.40.0"
name = "hub-vpc"
cidr = "${var.bit_mask_16}.0.0/16"
azs = ["${var.az_a}"]
private_subnets = ["${var.bit_mask_16}.1.0/24"]
public_subnets = ["${var.bit_mask_16}.101.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = false
tags = {
Terraform = "true"
Environment = "hub"
}
}
module "vpc_peering" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc-peering.git?ref=master>"
namespace = "hub"
stage = "dev"
name = "hub-to-mc"
requestor_vpc_id = "${module.vpc.vpc_id}"
acceptor_vpc_id = "${var.mc_vpc_id}"
tags = {
Terraform = "true"
Environment = "hub"
}
}
"${var.mc_vpc_id}"
is a hard coded id of an existing vpc in the same account.
@Toby The way terraform-aws-vpc-peering
currently works is to lookup both requestor
and acceptor
VPCs and subnets (was implemented that way to use with kops
for Kubernetes, e.g. https://github.com/cloudposse/terraform-aws-kops-vpc-peering).
So when you create a VPC in the same project as terraform-aws-vpc-peering
, Terraform does not wait for the VPC to be created and tries to look it up from the data sources -
that’s why it can’t calculate the count of aws_route_table
because it depends on the number of subnets, which in turn depends on the VPC.
This currently can be solved (w/o redesigning the module or creating a new one, which could be done) in two different ways:
- Place the VPC in a separate folder from `terraform-aws-vpc-peering` and provision it first
- Use multi-stage provisioning with -target (that’s how we did it for the module)
terraform plan -target=module.vpc
terraform apply -target=module.vpc
terraform plan
terraform apply
The first plan/apply
provisions just the VPC.
The second plan/apply
provisions everything else (since the VPC is already provisioned, TF is able to look it up).
Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering
Thanks for your prompt reply @Andriy Knysh (Cloud Posse), I was using the -target
as a workaround so I’ll carry on doing that
yea, it’s the solution for now
we might add another module to accept the existing VPC IDs (or modify the current one)
thanks for testing btw
Has anyone ever scripted out RDS Cross-region replication using Terraform?
@Matthew that’s what it says about cross-region replication
with Aurora MySQL you can setup a cross-region Aurora Replica from the RDS console. The cross-region replication is based on single threaded MySQL binlog replication and the replication lag will be influenced by the change/apply rate and delays in network communication between the specific regions selected. Aurora PostgreSQL does not currently support cross-region replicas
we did not test it with TF
I don’t even see how it could be done here https://www.terraform.io/docs/providers/aws/r/rds_cluster_instance.html
Provides an RDS Cluster Resource Instance
@Andriy Knysh (Cloud Posse) there is a way to do cross region replication
@Daren does this at gladly with terraform
Ill send you the repo
@Erik Osterman (Cloud Posse) Thank you sir
@Erik Osterman (Cloud Posse) Did you ever find this repo?
nice
Thank you @Andriy Knysh (Cloud Posse) that is what i’m trying to do currently and here it has https://www.terraform.io/docs/providers/aws/r/rds_cluster.html#replication_source_identifier
Manages a RDS Aurora Cluster
but when you actually specify the DB Arn and try to run the script terraform spits out “ You cannot set up replication between a DB instance and a DB cluster across regions.”
do you use MySQL?
Yes Aurora-MySQL
2018-09-05
@Erik Osterman (Cloud Posse) or @Andriy Knysh (Cloud Posse) do we have any terraform module for setting up required vpc endpoints? This is useful for avoiding some data transfer charges, as explained here https://aws.amazon.com/vpc/pricing/
@rohit.verma you mean this https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html?
Use a VPC endpoint to privately connect your VPC to other AWS services and endpoint services.
we don’t have modules for that, but it looks like simple enough https://www.terraform.io/docs/providers/aws/r/vpc_endpoint.html
Provides a VPC Endpoint resource.
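yea a minimal Gateway endpoint (no hourly charge for Gateway endpoints) would look roughly like this — vpc_id and route table ids here are placeholders:
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = "${var.vpc_id}"                    # placeholder
  service_name      = "com.amazonaws.ap-south-1.s3"      # region-specific service name
  vpc_endpoint_type = "Gateway"
  route_table_ids   = ["${var.private_route_table_ids}"] # placeholder: route tables of the private subnets
}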
(we can create module(s) together
@Matthew we talked with @Daren, we’ll show you how to do cross region replication for MySQL (@Daren has a working example)
@Andriy Knysh (Cloud Posse) Appreciate your energy and time looking into that for me
I am open for discussion whenever; I’m still trying it at this moment
Ahh so now when i specify the cluster ARN in replication_source_identifier, it creates the cluster as a replica, but then my instance gets deployed as a WRITER rather than a Reader. Holler when you’re free
@Andriy Knysh (Cloud Posse) / @Erik Osterman (Cloud Posse) Thanks for the response, I will try to create one module which is in sync with geodesic. I believe mostly Gateway type endpoints are worth using; if I am right there is no cost for them. Also a bit confused about the com.amazonaws.ap-south-1.logs endpoint. Since we are using fluentd-cloudwatch log forwarding, will it provide us some cost benefit ?
did you test the latest version? @h20melonman
this example for example https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/examples/basic/main.tf
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
was working ok
@Andriy Knysh (Cloud Posse) i did, however it errored. I built some DBs in May and didn’t lock down to a version, and the new version breaks with this error: "* module.rds_cluster_aurora_mysql.output.arn: Resource 'aws_rds_cluster.default' does not have attribute 'arn' for variable 'aws_rds_cluster.default.*.arn'"
@rohit.verma I think it will provide a cost benefit since the traffic never exits the AWS internal networks
@h20melonman can you delete and re-create the cluster?
and pin down to a release
so then i thought, well i’ll go back to what i used before, and encountered another issue: if i build something or run plan against something already built, it seems to add / remove availability_zones on its own, even though i’ve defined only the two i want
what is the suggested way to pin a release ?
so yea, a lot was changed in the module since May
we added the required number of AZs
ah , thats prob why its thrashing around on me : )
maybe yes
try to set the number to the same number of AZs you currently have deployed
ok
what about version pinning , is that ^^ ok
thanks btw !
@Andriy Knysh (Cloud Posse)
ah, sorry, forget about the number of AZs, it’s from a diff module
version pinning, not OK
here’s how to do it:
kk
source = "git::<https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.4.3>"
great.
i’ll give that a shot. thanks for being avail !
remove version = "0.3.5"
got it
add `cluster_family = "aurora-mysql5.7"`
Creates a new DB cluster parameter group.
ok
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
@Andriy Knysh (Cloud Posse) I’ve updated to exactly what’s in ‘with_cluster_parameters/main.tf’ & outputs.tf, but am getting this error: "* module.rds_cluster_aurora_mysql.output.arn: Resource 'aws_rds_cluster.default' does not have attribute 'arn' for variable 'aws_rds_cluster.default.*.arn'"
any ideas? and apologies if these are basic questions. i truly appreciate your help !
sounds like you are still using some old version of the module (which did not have the arn
output)
did you use source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.4.3"
?
and also terraform init
did both. and i had deleted the db i was working on and removed the old state file from s3.
did you do terraform destroy
first?
if I switch to 0.4.0 and comment out arn, endpoint, reader_endpoint in the outputs.tf (used from examples w/c/p), everything seems fine. and yes i did a terraform destroy before starting any of this
well, you can direct message me your code; I can’t answer the question because we did provision the cluster many times in the last few days using the examples from the repo
sure thing.
@Matthew thanks to @Daren, here is a working example for RDS cross-region replication
resource "aws_db_subnet_group" "replica" {
name = "replica"
subnet_ids = ["xxxxxxx", "xxxxxxx", "xxxxxx"]
}
resource "aws_kms_key" "repica" {
deletion_window_in_days = 10
enable_key_rotation = true
}
resource "aws_db_instance" "replica" {
identifier = "replica"
replicate_source_db = "${var.source_db_identifier}"
instance_class = "${var.instance_class}"
db_subnet_group_name = "${aws_db_subnet_group.replica.name}"
storage_type = "io1"
iops = 1000
monitoring_interval = "0"
port = 5432
kms_key_id = "${aws_kms_key.repica.arn}"
storage_encrypted = true
publicly_accessible = false
auto_minor_version_upgrade = true
allow_major_version_upgrade = true
skip_final_snapshot = true
}
typo - repica
did you add this to the documentation or examples?
not yet
this could be added to the RDS module
thanks @Daren
and how to call it
provider "aws" {
alias = "remote"
region = "us-west-2"
}
module "database_remote_replica" {
source = "..."
providers = {
aws = "aws.remote"
}
namespace = "eg"
stage = "prod"
instance_class = "..."
source_db_identifier = "<ARN>"
}
this is not for Aurora though, just for RDS
regarding this:
but when you actually specify the DB Arn and try to run the script terraform spits out ” You cannot set up replication between a DB instance and a DB cluster across regions.”
I think it says that you need to create a remote cluster (not just an instance)
so you need to create two clusters in two diff regions
and use the provider
pattern above to provision the remote cluster for replication
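untested sketch of the second-cluster approach for Aurora MySQL (identifiers, regions and instance class are placeholders):
provider "aws" {
  alias  = "replica"
  region = "us-west-2"
}

resource "aws_rds_cluster" "replica" {
  provider                      = "aws.replica"
  cluster_identifier            = "aurora-replica"
  engine                        = "aurora-mysql"
  replication_source_identifier = "${var.primary_cluster_arn}" # ARN of the primary cluster
  source_region                 = "us-east-1"                  # region of the primary
  db_subnet_group_name          = "${var.replica_subnet_group}"
  skip_final_snapshot           = true
}

resource "aws_rds_cluster_instance" "replica" {
  provider           = "aws.replica"
  identifier         = "aurora-replica-1"
  cluster_identifier = "${aws_rds_cluster.replica.id}"
  engine             = "aurora-mysql"
  instance_class     = "db.r4.large"
}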
Description This PR adds some extra filters to comply with CIS AWS benchmark Course of action 3.2 Ensure a log metric filter and alarm exist for Management Console sign-in without MFA 3.3 Ensure…
2018-09-06
A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi
or this one - https://github.com/dikhan/terraform-provider-openapi . They are rather similar in terms of making things for general APIs.
OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file) - dikhan/terraform-provider-openapi
The most photogenic developer vs the company with the priceless slogan, tough one
https://github.com/GoogleCloudPlatform/magic-modules - pretty cool project and reasoning.
Magic Modules: Automagically generate Google Cloud Platform support for OSS - GoogleCloudPlatform/magic-modules
https://github.com/GoogleCloudPlatform/magic-modules/blob/master/templates/terraform/resource.erb - so easy to read!
Magic Modules: Automagically generate Google Cloud Platform support for OSS - GoogleCloudPlatform/magic-modules
yea, that’s how all code should look like
and then we introduce patterns on top of it
I think there will be some standard for auto-generated code platforms, so that once AWS adopts it we suddenly will not have 1700+ open issues in AWS provider.
1700!? Wow.
no, 1672 actually
Is there a way to use packer
in terraform
i want to create a windows box with some windows roles installed on it
You can either use https://www.terraform.io/docs/providers/null/resource.html or https://www.terraform.io/docs/providers/external/data_source.html to call your packer script
A resource that does nothing.
Executes an external program that implements a data source.
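rough sketch of the null_resource route (assumes packer is on the PATH and windows.json is your template):
resource "null_resource" "packer_build" {
  # re-run only when the template changes; otherwise the build is recorded in state and won't repeat
  triggers = {
    template_hash = "${md5(file("windows.json"))}"
  }

  provisioner "local-exec" {
    command = "packer build -var 'instance_type=t2.large' windows.json"
  }
}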
Just like the tf documentation recommends, I too recommend avoiding external data_source providers that are anything but read-only
. @pericdaniel – if you go this route for firing up packer
, you’ll wind up creating resources (triggering builds) when performing innocuous actions, such as terraform plan
, which isn’t normal.
hm.. still trying to understand. I guess i need the best way to deploy windows boxes into aws and install a few features onto the boxes. is there another route you recommend going?
@pericdaniel why do you need to deploy Windows boxes to AWS? (not just a question out of interest, describe the problem you need to solve)
Because all those solutions with packer
etc. are not simple
Maybe you could just use this https://aws.amazon.com/ec2/vm-import (again, depends on what you want to achieve)
Deploying AWS AD service
To be able to make changes and start setting up active directory
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO).
i guess you can deploy AWS Directory Service for Microsoft Active Directory and then connect it to the on-premises AD
AWS Microsoft AD makes it easy to migrate Active Directory–dependent, on-premises applications and workloads to the AWS Cloud. With AWS Microsoft AD, you can seamlessly run infrastructure across your own data center and AWS without synchronizing or replicating data from your existing Active Directory to the AWS Cloud.
(I hope what you are trying to do is not http://xyproblem.info )
Asking about your attempted solution rather than your actual problem
@pericdaniel – there’s the non-windows version of AD: https://www.turnkeylinux.org/domain-controller
A Samba4-based Active Directory-compatible domain controller that supports printing services and centralized Netlogon authentication for Windows systems, without requiring Windows Server. Since 1992, Samba has provided a secure and stable free software re-implementation of standard Windows services and protocols (SMB/CIFS).
AWS has Samba as well
thank you! ill take a look
@jamie has a module that runs packer with lambda
What if it takes more than 5 minutes to run? How does module handles that?
@jamie
AFAIK Jamie has two modules with two distinct approaches, only one uses packer
. And while they may use lambda kick things off – neither directly run packer
in the lambda…
- one uses CodeBuild pipelines in the manner described in [1]
- another uses AWS SSM automation “steps” [2]
[1] https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/ [2] https://github.com/bitflight-public/terraform-aws-ssm-ami-bakery/blob/master/automationdocument.tf
An AWS native ‘serverless’ module for building AMI’s and publishing them - bitflight-public/terraform-aws-ssm-ami-bakery
thanks @tamsky
did you end up using this in your project?
AMI automation is currently on a back burner – a POC got setup, but stalled waiting for GH credentials
“I think I can… I think I can…”
2018-09-07
hi there, I’m trying to use cloudposse/terraform-aws-ecr
, but I’m getting a ton of errors during the plan:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.terraform_remote_state.root: Refreshing state...
data.aws_iam_policy_document.login: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...
------------------------------------------------------------------------
Error: Error running plan: 11 error(s) occurred:
* module.ecr.output.policy_write_name: Resource 'aws_iam_policy.write' not found for variable 'aws_iam_policy.write.name'
* module.ecr.output.role_arn: Resource 'aws_iam_role.default' does not have attribute 'arn' for variable 'aws_iam_role.default.*.arn'
* module.ecr.output.role_name: Resource 'aws_iam_role.default' does not have attribute 'name' for variable 'aws_iam_role.default.*.name'
* module.ecr.aws_iam_instance_profile.default: 1 error(s) occurred:
* module.ecr.aws_iam_instance_profile.default: Resource 'aws_iam_role.default' not found for variable 'aws_iam_role.default.name'
* module.ecr.output.policy_write_arn: Resource 'aws_iam_policy.write' not found for variable 'aws_iam_policy.write.arn'
* module.ecr.aws_iam_role_policy_attachment.default_ecr: 1 error(s) occurred:
* module.ecr.aws_iam_role_policy_attachment.default_ecr: Resource 'aws_iam_role.default' not found for variable 'aws_iam_role.default.name'
* module.ecr.output.policy_login_name: Resource 'aws_iam_policy.login' not found for variable 'aws_iam_policy.login.name'
* module.ecr.output.policy_read_arn: Resource 'aws_iam_policy.read' not found for variable 'aws_iam_policy.read.arn'
* module.ecr.output.policy_read_name: Resource 'aws_iam_policy.read' not found for variable 'aws_iam_policy.read.name'
* module.ecr.output.policy_login_arn: Resource 'aws_iam_policy.login' not found for variable 'aws_iam_policy.login.arn'
* module.ecr.data.aws_iam_policy_document.default_ecr: 1 error(s) occurred:
* module.ecr.data.aws_iam_policy_document.default_ecr: Resource 'aws_iam_role.default' not found for variable 'aws_iam_role.default.arn'
with this TF script
provider "aws" {
alias = "assume_repo_admin"
assume_role {
role_arn = "arn:aws:iam::${data.terraform_remote_state.root.account_repo_id}:role/OrganizationAccountAccessRole"
session_name = "setup_account_repo"
}
}
# create ECR repositories
module "ecr" {
providers = {
aws = "aws.assume_repo_admin"
}
source = "git::<https://github.com/cloudposse/terraform-aws-ecr.git?ref=master>"
name = "cloud/databank_fe"
namespace = "repo"
stage = "travis"
}
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
the provider is because I’m trying to set up the ECR in a different account, but the problem also occurs in the same account
I’m using terraform 0.11.8, I noticed that your CI is using TF 0.10.7, could it be the problem?
it seems like aws_iam_role.default fails to be generated automatically if I don’t specify roles
@sylvain.rouquette can you open a GitHub issue for this?
We are actively using this module with 0.11.
Here is our invocation https://github.com/cloudposse/terraform-root-modules/tree/master/aws/ecr
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
If you haven’t already, be sure to pin down your AWS provider! https://www.terraform.io/docs/providers/aws/guides/version-2-upgrade.html#provider-version-configuration
Terraform AWS Provider Version 2 Upgrade Guide
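e.g. something like this in each root module (the constraint value here is just an example):
provider "aws" {
  version = "~> 1.60" # pick whatever you've actually tested against
  region  = "us-east-1"
}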
2018-09-10
@Erik Osterman (Cloud Posse) ok I’ll open a bug. Your usage is a bit different though. You use it through terraform-aws-kops-ecr, roles are already created (but even in my case with created roles, it wouldn’t work)
@sylvain.rouquette thanks, we’ll take a look
nice, i’ve been using landscape a lot recently, https://github.com/coinbase/terraform-landscape
Improve Terraform’s plan output to be easier to read and understand - coinbase/terraform-landscape
for landscape, as well as https://www.npmjs.com/package/terraform-ecs-plan-checker for ECS tasks (JSON blobs are horrible)
Simple command-line tool to check forced resource Terraform container definitions
similar idea: https://github.com/coinbase/terraform-landscape
I like landscape because it diffs the json as well
instead of having to decipher how two big json blobs are different
yep, exactly
anyone know how to pass a terraform variable into packer
Output to a file for that kind of thing. They don’t link together like that. You can however do things like write terraform outputs to Parameter Store and read that from packer.
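rough sketch of the Parameter Store hand-off (parameter name is arbitrary; the packer side reads it back with the aws CLI):
resource "aws_ssm_parameter" "instance_type" {
  name  = "/packer/instance_type"
  type  = "String"
  value = "${var.instance_type}"
}

# then on the packer side, something like:
#   packer build -var "instance_type=$(aws ssm get-parameter --name /packer/instance_type --query Parameter.Value --output text)" template.json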
interesting! maybe ill try that!
such as instance type
Pretty sweet
hey there, i’m having an issue with the Cloud Posse Elastic Beanstalk template. i’m working off this example https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf
the generated plan has an empty Environment tag:
solution_stack_name: "" => "64bit Amazon Linux 2018.03 v2.8.1 running PHP 7.2"
tags.%: "" => "4"
tags.Environment: "" => ""
this is resulting in an error on apply
:
* module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.default: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateEnvironmentInput.Tags[0].Value.
There’s also this complex example: https://github.com/cloudposse/terraform-aws-jenkins (doesn’t directly address your question - but this shows how to use it with CI/CD)
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
hi @camurphy, can you show your code how you invoke the module - the example above was tested 13 days ago and was working as it is
do you have keypair = "123"
provisioned in AWS?
it will fail if you provide a wrong key name
if you don’t want it, leave it as empty string
oh correct keypair is provisioned, just changed the value for the purpose of sharing
sounds like you might be passing a tag with an empty value
i hadn’t removed anything from your example, perhaps i need to add something?
Can you try setting TF_LOG=DEBUG
then rerun
Terraform has detailed logs which can be enabled by setting the TF_LOG environment variable to any value. This will cause detailed logs to appear on stderr
maybe some more useful output
i reverted to exactly your example with the exception of changing the region to ap-southeast-2
and adding a profile under the aws provider
I’ll check
thanks guys
@Andriy Knysh (Cloud Posse) any clue?
Environment
is empty
no clue where that’s getting injected
Out of curiosity, what happens if you pass
tags = {
"Environment" = "example"
}
yea, that’s from terraform-null-label
latest changes
Ah crap
we’ll fix that
no good deed goes unpunished
(got the same error)
yep, adding that tags block fixes it
thanks for reporting the issue
no worries, thanks for the templates
we’ll get that fixed. glad there’s an easy workaround for now.
i had one more question about this module @Erik Osterman (Cloud Posse), is there a way to use it without creating a route 53 hosted zone? happy to use the elasticbeanstalk.com domain for staging. the readme says zone_id
is not required but when omitted i get module.elastic_beanstalk_environment.module.tld.aws_route53_record.default: zone_id must not be empty
2018-09-11
We’ll fix that too :)
anyone know of a way to join a windows AWS instance to the domain using terraform?
I think that is more on the scope of Ansible or some other provisioner
i found this… hoping to try to understand it and have it work
Seamlessly joining Windows EC2 instances in AWS to a Microsoft Active Directory domain is a common scenario, especially for enterprises building a hybrid cloud architecture. With AWS Directory Service, you can target an Active Directory domain managed on-premises or within AWS. How to Connect Your On-Premises Active Directory to AWS Using AD Connector takes you […]
@pericdaniel that should work
also, here is a complete solution in TF http://www.sanjeevnandam.com/blog/aws-microsoft-ad-setup-with-terraform
Goal – To setup Microsoft Active Directory in AWS Assumptions: You are familiar with terraform Familiar with basics of Active Directory AWS VPC is setup with 2 private subnets. Create Microsoft AD using terraform Shell # Microsoft AD resource
i think i got it
testing it right now
oooooooooooooooo
let me look
thank you!
@camurphy https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/48 fixes both issues (empty tags and default DNS zone)
what Bump terraform-null-label version Make zone_id optional why New terraform-null-label version fixes the issue with empty tag values (which breaks Elastic Beanstalk environment) Don't cre…
thanks @Andriy Knysh (Cloud Posse)!
2018-09-12
Hi
@eric_garza i wanted to ask you to open an issue for that, but I see we already have it
It would be nice to have support for adding custom error responses to the cloudfront distribution. I can put together a PR if this sounds good.
we’ll add it
ah, thanks
or you can open a PR for that
Not sure what that is yet
How long until you guys add that, I could just clone this and add for my local use.
so if you know how to add it for your local use, then you know how to implement it
I do!
i mean after you implement it and test, open a PR against our repo and we’ll review and merge it
Sorry, I have not ever contributed to github projects before.
ahh ok
got it
seemed the other person in issue added the PR, would that not cover this?
so you can fork the repo, create a new branch, make the changes (and test), then open a PR against our repo
gotcha
how about the outstanding PR?
he did not do it
ahh, well , I will, bear with me
i like your modules btw, good work
thanks
let us know if you need help with the PR
I submitted the new input param change in PR, think I did it right, but not under the original. Tested locally, works.
thanks @eric_garza
2018-09-13
anyone happen to know if terraform 0.12 will interpolate variables and variable files? i would find it very handy to be able to use data sources and pass data source values through a variable…
looks like sortof but not really for the use case i’d want, https://github.com/gruntwork-io/terragrunt/issues/466#issuecomment-385034334
Hi! I'm one of the engineers at HashiCorp who works on Terraform Core. As you might be aware, we've been working for some time now on various improvements to the Terraform configuration lan…
not sure about interpolation in vars, but it says you’d be able to pass resources and modules as inputs and outputs https://www.hashicorp.com/blog/terraform-0-12-rich-value-types
As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The pos…
tks, had forgotten about that one. need to noodle it some to understand whether it really gets me there
perhaps a bit of a n00b question but how does one ensure that the terraform_state_backend
module is able to run before the terraform
block where the bucket and dynamo db tables are referenced? https://github.com/cloudposse/terraform-aws-tfstate-backend
most terraform directory structures i see have a variables.tf, main.tf and outputs.tf
but perhaps i need a different structure to execute the plan for the creation of the backend … then a separate plan for the rest of the stack that consumes that backend? thanks in advance
Provision an S3 bucket to store terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption - cloudposse/terraform-aws-tfstate-backend
@camurphy I believe you are asking a slightly different question: how do we provision terraform-aws-tfstate-backend
to store state before we have an S3 backend to store state for terraform-aws-tfstate-backend
?
here is how …
i guess i’m asking if terraform-aws-tfstate-backend
has to be part of a plan in a separate directory/project to this block:
terraform {
required_version = ">= 0.11.3"
backend "s3" {
region = "us-east-1"
bucket = "< the name of the S3 bucket >"
key = "terraform.tfstate"
dynamodb_table = "< the name of the DynamoDB table >"
encrypt = true
}
}
this is how we use it https://github.com/cloudposse/terraform-root-modules/tree/master/aws/tfstate-backend
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
but to answer your question - yes we do it in separate project folder
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
ah a bootstrap process, makes sense
thanks guys, sorry should have RTFM
it’s the cold start problem…
take a look at the docs as well https://docs.cloudposse.com/reference-architectures/cold-start/ https://docs.cloudposse.com/reference-architectures/cold-start/#provision-tfstate-backend-project-for-root
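for reference, the cold start is basically two passes — something like this (bucket/table names are whatever the module ends up creating):
# pass 1: no backend block yet, so state is local
module "tfstate_backend" {
  source    = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"
  namespace = "eg"
  stage     = "dev"
  name      = "terraform"
}

# pass 2: once the bucket and table exist, add the backend and run `terraform init`
# to migrate the local state into S3
# terraform {
#   backend "s3" {
#     bucket         = "eg-dev-terraform-state"      # from the module outputs
#     key            = "terraform.tfstate"
#     region         = "us-east-1"
#     dynamodb_table = "eg-dev-terraform-state-lock" # from the module outputs
#     encrypt        = true
#   }
# }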
Just released our ec2 auto scale module
Terraform module provision an EC2 autoscale group - cloudposse/terraform-aws-ec2-autoscale-group
This will be used for our upcoming EKS modules
(Thanks @Andriy Knysh (Cloud Posse) !)
2018-09-14
@Andriy Knysh (Cloud Posse) What do you think of using aws_cloudformation_stack for the AutoScalingGroup creation to have AutoScalingRollingUpdate ?
@maarten yea TF does not support it because AWS API does not support it. Using CloudFormation for that sounds like a good idea
A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…
we used aws_cloudformation_stack
in a few modules
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
i’ve had a lot of trouble with the cfn AutoScalingRollingUpdate feature, and failure modes that cause rollbacks resulting in unexpected states… these days i tend to prefer AutoScalingReplacingUpdate, which is more of a blue/green option. just seems that any failure/rollback is much more reliable this way.
@loren Ah good to know! Have you used cloudformation with create conditions? And if so, do you know how I can conditionally output, something from it. How to use Splat syntax with it is unclear to me so far.
yeah, conditional outputs are pretty straightforward in cfn…
"Outputs": {
"OutputName": {
"Condition": //ConditionName,
// Normal Output Arguments
}
}
no i mean, something else
oh
Given you have
resource "aws_cloudformation_stack" "autoscaling_group" {
count = ".... create t/f
.......
"Outputs": {
"AsgName": {
"Description": "The name of the auto scaling group",
"Value": {"Ref": "${local.name}"}
}
}
}
EOF
}
and you conditionally create this stack, how to conditionally reference the output of it
oh oh oh, conditional outputs on the tf side lolol
one sec
ok, so i don’t have an example where we’re conditionally creating the stack, which is the complicating factor
let’s try this
hehe
if you have this output and no count
in the resource:
output "datapipeline_ids" {
value = "${aws_cloudformation_stack.datapipeline.outputs["DataPipelineId"]}"
description = "Datapipeline ids"
}
then with count
might look like this:
output "datapipeline_ids2" {
  value       = "${lookup(element(concat(aws_cloudformation_stack.datapipeline.*.outputs, list(map("DataPipelineId", ""))), 0), "DataPipelineId", "")}"
  description = "Datapipeline ids"
}
getting a value where the output is conditional is easy, otherwise… value = "${lookup(aws_cloudformation_stack.<resource_name>.outputs, "OutputName", "")}"
(not tested )
wow, joining with a map with the key but empty value
hm, doesn’t work
lookup gets a string it says.. unsure why
ok ok I said not tested
i want my money back
let me check
Hi @loren I have a question about AutoScalingReplacingUpdate vs AutoScalingRollingUpdate, at what times did a rollback take place ? And would AutoScalingReplacingUpdate be something that would work for infrastructure that runs many containers ? AutoScalingRollingUpdate would allow a one by one draining and moving of the containers to the new infrastructure.
Honestly, I just got fed up with failures when using AutoScalingRollingUpdate resulting in a failed rollback, or some other condition where the resulting instances were not actually exactly the way they were before the update
Switched to AutoScalingReplacingUpdate and everything started working exactly the way I wanted
rolling updates are subject to the min/max values of your ASG. if you’re already at the max, then an instance will be terminated before launching a new one based on the new LaunchConfig. if the new instance fails, the rollback action is triggered. That’s where it gets tricky, because the instance was terminated, so now new instances must be launched. Depending on what all’s changed in the template and its dependencies, that new launch of the old launch config may also fail
With AutoScalingReplacingUpdate, a whole new ASG is created, with its own min/max values. No instances are terminated. If any resource signal fails, the new ASG is simply deleted
Only if all resource signals are received/successful is the original ASG deleted, at which point it is subject to draining policies and whatnot
@loren nice points
@maarten this worked (for me )
locals {
list = "${coalescelist(aws_cloudformation_stack.datapipeline.*.outputs, list(map("DataPipelineId", "")))}"
map = "${local.list[0]}"
item = "${lookup(local.map, "DataPipelineId", "")}"
}
output "datapipeline_ids" {
value = "${local.item}"
description = "Datapipeline ids"
}
you might be better off with containers and rolling updates, since they ought to be more immutable and less subject to the issues i kept running into
Have you guys worked with clients that actually want API Gateway + Lambda (aka. serverless.com)?
@rms1000watt we use Lambda for some small tasks (e.g. IAM backup, S3 events, etc.), but I think not for deploying a microservice
I guess the issue with that is that you have to start developing with lambda in a completely different mindset, but they usually use the standard stuff like Python, Ruby, Java, etc., which is better suited for EC2/ECS/Kubernetes deployments
not easy to break it into pieces for Lambda, and Lambda also has a lot of limitations (starting with a simple one: how do you connect and test all of that locally)
@Andriy Knysh (Cloud Posse) From what I’ve seen with my friends at Lantern (@justin.dynamicd) I think they just have dev accounts for integration tests. Then they rely on unit tests before things get deployed.
Honestly, it’s because I devoted a lot of time to this project and just kind of curious what adoption could look like: https://github.com/rms1000watt/serverless-tf And if I should rewrite it when 0.12 is released
Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.
i may play with that, i like serverless for being so easy, but not so much a fan of it being backed by cloudformation
Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.
Yeah, it’s been quite smooth with Veritone & Terragrunt
i’ve been using this module a lot just to manage lambda deployments, but it seems to work best with python and now i’m using a node module that doesn’t vendor node_modules, and this tf module doesn’t currently run npm install --production
, https://github.com/claranet/terraform-aws-lambda
Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.
oh, interesting
yeah, you can pass in arbitrary commands in vendor_cmd
and test_cmd
I guess it would make sense to add build_cmd
also or similar
i’ll try to play with this next week and see if i can make it work for my use case… thanks!
For sure! Feel free to reach out if you need any tips/tricks
@justin.dynamicd has joined the channel
@rms1000watt https://github.com/rms1000watt/serverless-tf looks very interesting, thanks for sharing
and yea, please rewrite in 0.12 so we all see a real example (somebody needs to be first )
Haha, yeah, the rewrite would be so it could mirror serverless.com serverless.yml as much as possible
interesting idea with generating TF files
Had to do it that way since TF was so strict.. I have to allot all potential lambda slots
like, currently, it has a limit of 10 lambda functions
but it can be regenerated to support X
is there yet a library that reliably writes hcl? or would you just write json (and rely on 0.12’s improved json roundtrip support)?
hmm, not sure off the top of my head. With 0.12 I don’t really want to rely on any libraries or json stuff. I’d prefer to just maintain it as HCL but with nested objects in arrays in objects in arrays, lolol–however serverless.yml
does it already
then maaaaaaybe write a tool to convert serverless.yml
to this module definition. But that would be an entirely different repo/thing.. just separation of concerns
shrugs
i think converting serverless.yml
could be pretty tough, they are very linked to cloudformation
as @rms1000watt mentioned, write everything in Go, problem solved
Write everything in Go and use FROM scratch
Docker containers in Multi-Stage builds. Problem solved
@loren just depends how close the HCL spec matches the serverless.yml
functions:
  index:
    handler: handler.hello
    events:
      - http: GET hello
No clue yet until I start jumping in.. or if anyone even wants it.. haha
yeah, the most basic serverless.yml stuff would be easy, but it gets more complicated pretty quick. such as native CFN in the resources
section
and references specific to cfn resource names in the serverless generated template
plus the whole plugin system
hehe yeah, serverless has done a good job with all its features and functionality
I mean, some stuff would just be “do it proper in terraform, then reference the role arn in the module input”
not knocking the work so far, i’m pretty excited by a terraform-native project for making serverless-like stuff easier
No worries! Yeah, it’s been fun so far
yea, excited to see someone making serverless easier in terraform
we haven’t done too much serverless at cloudposse. we have a few little helpers here and there.
does the file
argument also support directories? or is it assumed that a function is a single file module? or is file just pointing to the file with the handler, and the whole directory with that file is packaged?
file supports modules, and should zip up the entire directory.
gotcha tks
I didn’t write this, but I think I know how Ryan did it
@rms1000watt correct me if wrong
data "archive_file" "lambda_py_0" {
type = "zip"
source_file = "${local.lambda_py_0_source_file}"
output_path = "${local.lambda_py_0_zip}"
depends_on = ["null_resource.py_0_build"]
count = "${local.lambda_py_0_count}"
}
Since this is the source file, if you give it a dir, it’ll zip up the entire dir, and pass the zip file back to the lambda func to create it from the zip.
yea, that’s how we do it here https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder/
This is a terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the aws-lambda-ses-forwarder NPM module. - cloudposse/terraform-aws-ses-lambda-forwarder
Yeah that’s how I do it as well
If Ryan didn’t do it that way FOR SHAMMEEE and should fix.
yeah, was poking around to trace it out, but i do see archive_file takes both source_file and source_dir… https://www.terraform.io/docs/providers/archive/d/archive_file.html#argument-reference
Generates an archive from content, a file, or directory of files.
source_file works on directories though?
Ah yea, but if you do the archive file bit
You want the archive bit to ensure it catches hashed changes.
so it takes a dir, zips it up, and gets a hash of it that it persists to state, for later comparison
This picks up code changes and updates lambda
So as it relates to our modules, I don’t like that the zip creation is done by the user. More advanced lambdas will require deps. For example, the SES module uses npm. Having npm installed shouldn’t be a requirement. Thus, here’s how I’m proposing we change it for our ses module: https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder/issues/2
what Suggested improvements: (can be implemented in separate PR) Build/package zip as part of CI to create an artifact Attach ZIP artifact to release Derive module version from git rename module to…
What do you mean by this? Zip creation above is done automatically by point to a dir. Does that mirror your guys goals as well or am I missing something.
what Suggested improvements: (can be implemented in separate PR) Build/package zip as part of CI to create an artifact Attach ZIP artifact to release Derive module version from git rename module to…
The problem is that the zip needs to include all dependencies
Fetching all dependencies via NPM is outside the scope of terraform
Thus, the CI/CD pipeline should fetch and package all dependencies as an artifact.
And then terraform provision the lambdas
Ah got it, I see what you’re referring to
heh, and i’m ok with requiring that our users have npm… i like the vendor_cmd
approach that Ryan took… the user specifies the command, they kinda ought to know that they’ll actually need that command
i’d wrap the execution in a script or make target anyway, and test/install whatever command was actually needed
Random thought, would it be possible to use codebuild inside the module, and use it’s artifact in the same cycle ?
@loren the problem is also ensuring that different users have the same versions of dependencies. also, for example, sometimes building deps locally (e.g. on a mac) will lead to binary artifacts which are CPU architecture specific. safer to build in a uniform environment.
good points! artifacts should be created from spec files with reproducible dependencies
lololololol sorry, was on a call
@loren it handles relative paths: https://github.com/rms1000watt/serverless-tf/blob/master/local.tf#L40
sorry, didn’t mean relative paths… i mean, if i have a directory like this:
project/
  src/
    index.js
  modules/
    foo/
      index.js
yeah, it will handle nested also
i need at least the src
directory zipped up (if not project
), not just src/index.js
Ohhhh
crap
that might break to be honest
I have my js
and py
too simple as just 1 file
feel free to fix and make a PR
hahaha jk–no worries
i can do a pr, maybe next week
Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.
it would need a whole directory zipped instead of just the file
that’s what i was thinking from skimming the code, also
hmm.. optional zip_dir
flag or something.. interesting
could take mutually exclusive args, file
and dir
maybe?
one uses archive_file with source_file, the other uses archive_file with source_dir?
even better
way better
could also try some interpolation magic on the input, but i don’t see a function in terraform to test if a path is a file or dir, so would need to make some assumption based on the string, which i think would be pretty fragile
Totally. But the only catch is the variable name is file
so putting a dir value in a variable named file
would be awkward
yeah, that would be a backwards-incompatible change
the dir
and file
idea is golden
file
would get renamed to path
or somesuch
ah
ya
some examples use relative paths
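something along these lines could work for the file/dir split (untested sketch, variable names just for illustration):
variable "file" {
  default = ""
}

variable "dir" {
  default = ""
}

data "archive_file" "from_file" {
  count       = "${var.file != "" ? 1 : 0}"
  type        = "zip"
  source_file = "${var.file}"
  output_path = "${path.module}/lambda.zip"
}

data "archive_file" "from_dir" {
  count       = "${var.dir != "" ? 1 : 0}"
  type        = "zip"
  source_dir  = "${var.dir}"
  output_path = "${path.module}/lambda.zip"
}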
@maarten your question was about whether we can build an artifact with CodeBuild and then use it in other TF resources?
it says you can store the build artifact in an S3 bucket https://www.terraform.io/docs/providers/aws/r/codebuild_project.html#location
Provides a CodeBuild Project resource.
and then use this to read it https://www.terraform.io/docs/providers/aws/d/s3_bucket_object.html
Provides metadata and optionally content of an S3 object
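roughly like this (bucket and key are placeholders) — CodeBuild drops the zip in the bucket and the function points at it:
data "aws_s3_bucket_object" "lambda_artifact" {
  bucket = "${var.artifact_bucket}" # where CodeBuild writes its output
  key    = "lambda-build/artifact.zip"
}

resource "aws_lambda_function" "default" {
  function_name = "example"
  role          = "${var.lambda_role_arn}" # placeholder
  handler       = "index.handler"
  runtime       = "nodejs8.10"
  s3_bucket     = "${data.aws_s3_bucket_object.lambda_artifact.bucket}"
  s3_key        = "${data.aws_s3_bucket_object.lambda_artifact.key}"
  # on a versioned bucket you could also pin s3_object_version to the artifact's version_id
}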
@Andriy Knysh (Cloud Posse) yeah but related to lambda deployments. So instead of npm locally, using codebuild to create the actual artifact, which after is used again by the module to deploy to aws lambda.
was more a hypothetical question..
https://github.com/rms1000watt/notejam/blob/master/buildspec.yml https://github.com/rms1000watt/notejam/blob/master/testspec.yml
You can define whatever steps you need in the *.yml
that’s used in the CodeBuild + CodePipeline
Unified sample web app. The easy way to learn web frameworks. - rms1000watt/notejam
Unified sample web app. The easy way to learn web frameworks. - rms1000watt/notejam
i have a codepipeline thats Source > Codebuild (build) > Deploy (ecs) > Codebuild (Integration test)
so it could be like Source > Codebuild (compile?) > Codebuild (deploy to lambda?) > Codebuild (integration test)
yea you can run any commands in buildspec.yml
Contribute to cloudposse/jenkins development by creating an account on GitHub.
2018-09-15
hello everyone
hey @Ryan Ryke
actually if you guys have a second… ive been playing with the ecs-web-app module… looks like it needs listener arns, i used the aws-alb module to build the alb, then took the outputs listener_arns from there and fed them into the web-app module. but its erroring while trying to create the target group, says that it’s missing an elb
* aws_ecs_service.default: InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-west-2:629113624323:targetgroup/bv-staging-hw/c0cd39cd95d47b66 does not have an associated load balancer.
status code: 400, request id: bb3f68be-b905-11e8-a342-17f3363522bb "bv-staging-hw"
is the web-app known to be working?
@Andriy Knysh (Cloud Posse) can probably find a reference architecture
i can
1 sec
it looks like the alb module creates a target group with default
at the end of it
this is how we used it
module "alb" {
source = "git::<https://github.com/cloudposse/terraform-aws-alb.git?ref=tags/0.2.5>"
name = "cluster"
namespace = "${var.namespace}"
stage = "${var.stage}"
attributes = "${var.attributes}"
vpc_id = "${module.vpc.vpc_id}"
ip_address_type = "ipv4"
subnet_ids = ["${module.subnets.public_subnet_ids}"]
security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
access_logs_region = "${var.region}"
https_enabled = "true"
http_ingress_cidr_blocks = "${var.ingress_cidr_blocks_http}"
https_ingress_cidr_blocks = "${var.ingress_cidr_blocks_https}"
certificate_arn = "${var.default_cert_arn}"
}
module "ecs_cluster_label" {
source = "git::<https://github.com/cloudposse/terraform-terraform-label.git?ref=tags/0.1.6>"
name = "cluster"
namespace = "${var.namespace}"
stage = "${var.stage}"
}
# ECS Cluster (needed even if using FARGATE launch type)
resource "aws_ecs_cluster" "default" {
name = "${module.ecs_cluster_label.id}"
}
# default backend app
module "default_backend_web_app" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.8.0>"
name = "backend"
namespace = "${var.namespace}"
stage = "${var.stage}"
vpc_id = "${module.vpc.vpc_id}"
container_image = "${var.default_container_image}"
container_cpu = "256"
container_memory = "512"
container_port = "80"
#launch_type = "FARGATE"
listener_arns = "${module.alb.listener_arns}"
listener_arns_count = "1"
aws_logs_region = "${var.region}"
ecs_cluster_arn = "${aws_ecs_cluster.default.arn}"
ecs_cluster_name = "${aws_ecs_cluster.default.name}"
ecs_security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
ecs_private_subnet_ids = ["${module.subnets.private_subnet_ids}"]
alb_ingress_healthcheck_path = "/healthz"
alb_ingress_paths = ["/*"]
codepipeline_enabled = "false"
ecs_alarms_enabled = "true"
autoscaling_enabled = "false"
alb_name = "${module.alb.alb_name}"
alb_arn_suffix = "${module.alb.alb_arn_suffix}"
alb_target_group_alarms_enabled = "true"
alb_target_group_alarms_3xx_threshold = "25"
alb_target_group_alarms_4xx_threshold = "25"
alb_target_group_alarms_5xx_threshold = "25"
alb_target_group_alarms_response_time_threshold = "0.5"
alb_target_group_alarms_period = "300"
alb_target_group_alarms_evaluation_periods = "1"
}
# web app
module "web_app" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.8.0>"
name = "app"
namespace = "${var.namespace}"
stage = "${var.stage}"
vpc_id = "${module.vpc.vpc_id}"
container_image = "${var.default_container_image}"
container_cpu = "4096"
container_memory = "8192"
#container_memory_reservation = ""
container_port = "80"
desired_count = "${var.desired}"
autoscaling_enabled = "true"
autoscaling_dimension = "cpu"
autoscaling_min_capacity = "${var.min}"
autoscaling_max_capacity = "${var.max}"
autoscaling_scale_up_adjustment = "1"
autoscaling_scale_up_cooldown = "60"
autoscaling_scale_down_adjustment = "-1"
autoscaling_scale_down_cooldown = "300"
#launch_type = "FARGATE"
listener_arns = "${module.alb.listener_arns}"
listener_arns_count = "1"
aws_logs_region = "${var.region}"
ecs_cluster_arn = "${aws_ecs_cluster.default.arn}"
ecs_cluster_name = "${aws_ecs_cluster.default.name}"
ecs_security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
ecs_private_subnet_ids = ["${module.subnets.private_subnet_ids}"]
alb_ingress_healthcheck_path = "/"
alb_ingress_paths = ["/*"]
alb_ingress_listener_priority = "100"
codepipeline_enabled = "true"
github_oauth_token = "${var.GITHUB_OAUTH_TOKEN}"
repo_owner = "XXXXX"
repo_name = "XXXXX"
branch = "${var.WEB_APP_BRANCH}"
ecs_alarms_enabled = "true"
alb_target_group_alarms_enabled = "true"
alb_target_group_alarms_3xx_threshold = "25"
alb_target_group_alarms_4xx_threshold = "25"
alb_target_group_alarms_5xx_threshold = "25"
alb_target_group_alarms_response_time_threshold = "0.5"
alb_target_group_alarms_period = "300"
alb_target_group_alarms_evaluation_periods = "1"
alb_name = "${module.alb.alb_name}"
alb_arn_suffix = "${module.alb.alb_arn_suffix}"
}
i need to go now, will be back in a few hours and will be able to answer qs if you have any
hmm im setup almost the exact same, will make it exact
worked, must be something in there that is required for the apply to work. ill have to dig in a little bit more, ty for the full sample
Thanks @Andriy Knysh (Cloud Posse)
yes thanks so much @Andriy Knysh (Cloud Posse)
no problem, glad it worked for you
we need to add the example to https://github.com/cloudposse/terraform-root-modules
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
2018-09-17
hi guys, it looks like the host_port
variable is unused in main.tf. https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/variables.tf#L76 am i missing something?
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
@Ryan Ryke I’m working on another ECS module. the empty string host_port will be rendered as null. When using ECS together with an ALB the host port will be dynamically allocated. It’s explained here :
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html
If using containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
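a minimal sketch of what that looks like in a task definition (bridge mode; hostPort 0 means an ephemeral port gets picked for you):
resource "aws_ecs_task_definition" "example" {
  family                = "example"
  container_definitions = <<EOF
[
  {
    "name": "app",
    "image": "nginx:latest",
    "memory": 512,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 0
      }
    ]
  }
]
EOF
}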
im familiar with the dynamic port mapping, but i guess im a little confused on your comment. yes the module should work, or no the module doesnt work and thats why you are working on a new ecs one.
I was just trying to say “I’m just another guy who also works on ECS a lot so I can answer this question for you”
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service
It’s not as clean code-wise as Cloudposse’s but in production with 3 different customers and it works nicely.
nevermind, I now see your point, it’s redundant
just make an issue or a pr
ahh ok, for now im going to break out the separate modules to understand how they all play nice together, then ill put in a pr into the web-app module
Do you want to use host_port ? What is the use case ?
i was just looking to change container_port
@Ryan Ryke ^^^ meant for you
Fixed
@maarten @jamie @antonbabenko I might be coming around to using Atlantis as I’ve resolved in my mind how we would do it within the “geodesic” model of operations. The current blocker for us is running it in different accounts. I’d prefer to run one instance per account to limit blast radius.
We have a repository that contains our live terraform definitions for multiple accounts. We currently have 4 accounts and plan to have an Atlantis node in each account. We've tossed around the …
2018-09-18
Hi guys. First, thanks for your awesome modules ! I am using this one https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment .
I have a question. This module creates a load balancer which has a default security group. Am I correct ? This ELB security group allows ingress 443/tcp or 80/tcp from 0.0.0.0
. I would like to change 0.0.0.0
to a custom cidr. Is it possible ?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
@Pierre good question
looks like we don’t support it at this time
The fix would be to add a section like this:
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Configure globally available options for your Elastic Beanstalk environment.
and pass the SecurityGroups
parameter
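i.e. inside the aws_elastic_beanstalk_environment resource, roughly something like this (classic ELB namespace shown; not tested):
setting {
  namespace = "aws:elb:loadbalancer"
  name      = "SecurityGroups"
  value     = "${var.loadbalancer_security_group}" # an SG you manage that only allows your CIDRs
}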
If you want to open a PR, we’ll promptly review it
thanks @Erik Osterman (Cloud Posse). I will look at it
do you guys know of or have any tools to test terraform modules? I was thinking something along the lines of it imports the module, you specify the input variables you want to test, and you make sure the plan is as you expect
is this closer to what you’re looking for? https://github.com/elmundio87/terraform_validate
Assists in the enforcement of user-defined standards in Terraform - elmundio87/terraform_validate
uuu i like this and will probably use it … but not quite
take the null-label module that cloudposse made for example… it has an enabled flag where if you set it to false no resources get made… i am looking for something where i can run a terraform plan with enabled set to true and see that it plans to make resources, and set to false to see that it doesn't
this is a very simple case… but for more complicated modules with logic involved this would become very useful to me
i saw this https://github.com/gruntwork-io/terratest but it actually creates resources and you have to then write api calls to validate and destroy resources which takes time
Terratest is a Go library that makes it easier to write automated tests for your infrastructure code. - gruntwork-io/terratest
@Gabe we’ve been thinking about testing and CI/CD for terraform modules and even have some POCs, but all of that is in initial stage. @Erik Osterman (Cloud Posse) can give you more info on that
yea, we’re taking a different, perhaps “easier” approach. while terratest is awesome for writing tests at that level of complexity, we’re currently content with testing that modules create/destroy and are idempotent.
Here is some very early work: https://github.com/cloudposse/test-harness/tree/master/tests/terraform
Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness
and here is a small example on using that https://github.com/cloudposse/terraform-null-label/blob/add-tests/codefresh.yml
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Bash Automated Testing System. Contribute to bats-core/bats-core development by creating an account on GitHub.
oh cool i’ll check it out
i guess as pretty simple example of something i would like to be able to test is if the module has a flag to turn on/off certain resources ensuring it works as expected
one consideration for that is to create multiple terraform.tfvars
files with all the different permutations
so in addition to testing successful create/destroy/idempotency for each permutation, you would also test that a specific state is achieved
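e.g. a couple of fixture files along these lines (filenames and values are hypothetical):
# fixtures/enabled.tfvars
namespace = "eg"
stage     = "test"
name      = "label"
enabled   = "true"

# fixtures/disabled.tfvars – same inputs, but the plan should create nothing
enabled = "false"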
i know some are using test kitchen for terraform. we’re holding off on introducing the ruby dependency as long as possible
it’s amazing what can be accomplished just using jq
https://github.com/cloudposse/test-harness/blob/master/tests/terraform/00.input-descriptions.bats#L5
Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness
so since the entire terraform state is json, jq
will address a lot
i found this https://github.com/palantir/tfjson so you can output the plan in json… so theoretically you wouldn’t even have to create the resources
Terraform plan file to JSON. Contribute to palantir/tfjson development by creating an account on GitHub.
is that necessary?
maybe it’s a more concise representation
there is an -out
parameter which will emit the plan in json
oh, i guess the .tfplan
is not json
i thought it was
Terraform Version Terraform v0.8.6 Affected Resource N/A Terraform Configuration Files N/A Debug Output N/A Panic Output N/A Expected Behavior terraform plan -out plan.json -format json should crea…
doesn’t look like it’s coming out in terraform 0.12 either
bummed there’s no binary release
i’d add it to our packages distro
Cloud Posse installer and distribution of native apps - cloudposse/packages
Hi everyone!
I have a Jenkins deployment that I did a few months ago using https://github.com/cloudposse/terraform-aws-jenkins , and since last week my automated backup (AWS DataPipeline) started breaking and it is not completing anymore.
It is not throwing any errors; it looks like the backup job simply hangs. Could anyone shed some light on it?
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
hi @ivan.pinatti
looks like https://github.com/cloudposse/terraform-aws-jenkins is a popular repo
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
anyway, since EFS backup is done by a CloudFormation template https://github.com/cloudposse/terraform-aws-efs-backup/blob/master/templates/datapipeline.yml
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
please make sure the CF stack is green
also, not sure what the limits are on S3 bucket versioning. It might have been reached, please check (https://github.com/cloudposse/terraform-aws-efs-backup/blob/master/s3.tf)
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
Versioning is enabled but there is no set limit anywhere.
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
On my AWS console it shows like this;
can you run terraform plan/apply
to check what it says?
I actually can’t because I had to change the CodeBuild and CodePipeline as my Jenkins Docker image repo is in Bitbucket.
If I do that it is going to change these.
Let me grab a few snapshots of the console
no other errors, just Cancelled
?
On the stdout and stderr it looks like it just hangs
No other errors
maybe you can taint the pipeline and then terraform apply -target=....
to re-create it?
I will jump in a meeting now, I will try later and let you know.
re:testing, we’ve been using kitchen terraform for a customer recently and it seems to fit the bill on a lot of different items
i can chat in greater detail should someone want to hear about it
also still working on the ecs-web-app module… i’ve gotten to the point now where the codepipeline is erroring on s3 permissions :
Insufficient permissions
Unable to access the artifact with Amazon S3 object key 'bv-staging-hw-xxxxx/task/kN9HAdK' located in the Amazon S3 artifact bucket 'bv-staging-hw-xxxxx'. The provided role does not have sufficient permissions.
really odd
no tweaks to the modules?
figured i would thread it
do you guys use this module often?
also i noticed that this alb_ingress_paths = ["/"]
needs to be specified otherwise the target group doesnt get attached to the alb (and it errors)
that was the first issue i had
any ideas?
looks like the assume
role has the permissions to access the s3 bucket
yeah this is a mystery to me.
so it looks like the build phase is building the image and pushing it ok, but its not uploading any data via the artifacts (nothing is specified in the buildspec that im using)
so added artifacts in the buildspec and it seemed to get past the issue.. its looking for imagedefinitions.json
would you happen to have a sample as to what its expecting? @Erik Osterman (Cloud Posse)
It’s actively in use for a couple of apps at a client site at the current version. But there must be some settings missing that are not well documented
i think i just need the imagedefinitions.json
my assumption was that it was handled by the container def module
Yes need that
do you happen to have a sanitized sample?
Sec
Please add any other issues. :)
We will get them fixed
that looks good, what does that imagedefinitions.json look like
also thank you
I think that file is created as part of the process
right here: printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPO_URI:$IMAGE_TAG" | tee imagedefinitions.json
ok cool
thank you very much
Yup
Sorry that’s a big missing piece. We’ll update the docs
past my bed time, thanks so much dude
nope
i had the issue with the module this morning, so this evening i ripped them all into separate modules, still the same issue
2018-09-19
@Ryan Ryke are you up and running?
checking now
looks good
ssm_document isn’t applying even though the instance has the correct role? Any ideas why?
@pericdaniel did you implement everything from http://www.sanjeevnandam.com/blog/aws-microsoft-ad-setup-with-terraform ?
Goal – To setup Microsoft Active Directory in AWS Assumptions: You are familiar with terraform Familiar with basics of Active Directory AWS VPC is setup with 2 private subnets. Create Microsoft AD using terraform Shell # Microsoft AD resource
yea i followed that
right now
i noticed that when i go to run command
that I don't see the instances I want in there
i think it has something to do with the ami
but not sure
If you can’t see the instances, this may mean that either the ssm agent is not running or is having trouble connecting to the mothership. Easiest to take a look at its logs to see what’s what.
Thank you Sir
I have built AMIs with packer
So I’m wondering if I’m missing something
When I had problems with instances not showing up in SSM panel, normally I’d go into an instance and check the agent. So, at the very least, it needs to be 1) installed 2) running 3) able to connect to AWS API
2018-09-20
it can't hit the aws api
got it working by setting up NAT GW
ohh, so you deployed it into private subnets w/o a NAT gateway and the instances could not connect to AWS
yes sir
for some reason i figured that it would be able to anyways and that it was all internal
cause it seems like for most services it's able to reach it without hitting the internet
but not this =[
hmm, what “most services”?
if it’s in a private subnet w/o a NAT gateway, the traffic can’t leave the VPC at all (except for VPC Private Links)
glad you solved the issue @pericdaniel
sorry
anyone set the computername of a windows instance based on instance tags
@pericdaniel take a look at this https://forums.aws.amazon.com/thread.jspa?threadID=181520
I decided to give blogging a shot because I think I have some potentially interesting patterns in AWS automation using Ansible to share and get feedback on. If you know me, you know how much of a Linux and Mac OS X guy I am, and thus how ironic it
thank you!
6 votes and 11 comments so far on Reddit
do you need to create template files
or can you just add user data to the resource you want to use it for
you can use both HEREDOCS
and template files, but we prefer to use https://www.terraform.io/docs/providers/template/d/file.html
Renders a template from a file.
in that example am I creating a .tpl file that contains what I have above and then referencing that under the resource I'm creating, or can I just leave it there as a data type
let me show an example
yay i love examples
Terraform module for provisioning EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Terraform module for provisioning EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
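roughly this pattern (the template filename and the instance resource are placeholders; only tag_box is from your case):
data "template_file" "userdata" {
  template = "${file("${path.module}/userdata.ps1.tpl")}"

  vars {
    tag_box = "${var.tag_box}"
  }
}

resource "aws_instance" "windows" {
  # ... (ami, instance_type, subnet_id, etc.)
  user_data = "${data.template_file.userdata.rendered}"
}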
cool! so for vars.. the only var I would have is the ${var.tag_box}
Use as many as you need :)
Update them in the template and in the terraform module
is there a way to check if it ran into an error?
i have it looking at the tpl file
but doesnt do a rename or anything
hmm
i suggest you test the renaming PowerShell script locally before deploying to make sure it does actually work
I’m running the vpc peering module and hitting this error continuously:
data.aws_vpc_endpoint_service.dynamodb: Refreshing state...
data.aws_vpc.acceptor: Refreshing state...
data.aws_vpc_endpoint_service.s3: Refreshing state...
data.aws_subnet_ids.acceptor: Refreshing state...
data.aws_route_table.acceptor[2]: Refreshing state...
data.aws_route_table.acceptor[4]: Refreshing state...
data.aws_route_table.acceptor[8]: Refreshing state...
data.aws_route_table.acceptor[5]: Refreshing state...
data.aws_route_table.acceptor[6]: Refreshing state...
data.aws_route_table.acceptor[0]: Refreshing state...
data.aws_route_table.acceptor[7]: Refreshing state...
data.aws_route_table.acceptor[3]: Refreshing state...
data.aws_route_table.acceptor[1]: Refreshing state...
Error: Error refreshing state: 1 error(s) occurred:
* module.vpc_peering.module.vpc_peering.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed
Ahh the count issue again :)
hehe
We’ll take a look
my guess is the subnet count differs or the referenced vpc is different
ahh…checking
First provision VPC and subnets
yeah, i saw that on https://github.com/hashicorp/terraform/issues/14432#issuecomment-301128718
Terraform version: 0.9.5 I have two terraform modules ct-vpc and ct-vpc-peering and I am trying to establish connection between the two modules. here below, I document my approach: vpc module varia…
We saw that issue before
it wasn’t being too kind
In that module
i can try again
Do you know how to use -target?
could’ve been PEBKAC
yes
so how far down the “tree” do I have to plan individually?
to the resource level?
the module does lookup of the requestor
VPC https://github.com/cloudposse/terraform-aws-vpc-peering/blob/master/main.tf#L31
Terraform module to create a peering connection between two VPCs - cloudposse/terraform-aws-vpc-peering
so the VPC needs to be provisioned first using terraform apply -target=
after it’s provisioned, run apply
on terraform-aws-vpc-peering
if you have the VPC code in a separate folder, no need to use -target
, just apply it
terraform-aws-vpc-peering
is created in such a way so it does the data source lookup
mostly for kops
deployments
ok…let me give that a whirl
could be done differently
i did the plan and not the apply itself
Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering
started a custom one for our internal module repo but figure no need to recreate it
saw that one too
you need to apply it
for the VPC to be created first
got it. that’s what i missed
so guess i partially knew how to use -target
yea, need to use it if the VPC code and peering
code are in the same folder
they’re not. two modules used in one file
module "vpc" {
source = "../../modules/application-vpc"
application_name = "vpc-peering-test"
cidr = "172.100.0.0/16"
namespace = "tf"
private_subnets = ["172.100.1.0/24", "172.100.2.0/24", "172.100.3.0/24"]
public_subnets = ["172.100.5.0/24", "172.100.6.0/24", "172.100.7.0/24"]
stage = "DEV"
}
module "vpc_peering" {
source = "../../modules/application-vpc-peering"
application_name = "vpc-peering-test"
namespace = "tf"
stage = "DEV"
acceptor_vpc_id = "vpc-xxx"
requestor_vpc_id = "${module.vpc.vpc_id}"
}
one file is the same as in one folder
i meant the core code was in separate folders, but i gotcha
terraform apply -target=module.vpc.aws_vpc.application-vpc
and for the modules. they’re quite useful.
hey dudes exposed some variables in the terraform-aws-web-app
needed to expose environment variables in this module. I pulled the variable descriptions from here https://github.com/cloudposse/terraform-aws-ecs-container-definition
not sure its up to cp standards, but it works
Thanks @Ryan Ryke !
LGTM
cool
GitHub doesn’t let me tag releases on iOS
Even if I request desktop site so we can merge as soon as I get to a desk or @Andriy Knysh (Cloud Posse) online
2018-09-21
@Andriy Knysh (Cloud Posse) my issue was that the tag I had set.. had a forward slash in it. And you cant name servers with a / in it lol
another quick, potentially stupid question. the multi-az-subnets. i’m trying to build a 3 tier vpc. public - app - db. the non-public subnets are all building as expected with one exception. only one of the private subnets is mapping to a nat gateway. the other two subnets dont have a nat in their routes in both tiers…
tried updating the “az_ngw_count” but didnt have any luck
module "vpc" {
source = "s3::<https://s3-us-west-2.amazonaws.com/clc-terraform-modules/aws/vpc/terraform-aws-vpc-0.3.4.zip//terraform-aws-vpc-0.3.4>"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
cidr_block = "${var.cidr_block}"
}
locals {
public_cidr_block = "${cidrsubnet(module.vpc.vpc_cidr_block, 5, 0)}"
app_cidr_block = "${cidrsubnet(module.vpc.vpc_cidr_block, 5, 4)}"
db_cidr_block = "${cidrsubnet(module.vpc.vpc_cidr_block, 5, 8)}"
}
module "public_subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=master>"
name = "public"
namespace = "${var.namespace}"
stage = "${var.stage}"
vpc_id = "${module.vpc.vpc_id}"
availability_zones = "${var.availability_zones}"
type = "public"
igw_id = "${module.vpc.igw_id}"
nat_gateway_enabled = "true"
cidr_block = "${local.public_cidr_block}"
}
module "app_subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=master>"
name = "app"
namespace = "${var.namespace}"
stage = "${var.stage}"
vpc_id = "${module.vpc.vpc_id}"
availability_zones = "${var.availability_zones}"
type = "private"
cidr_block = "${local.app_cidr_block}"
az_ngw_ids = "${module.public_subnets.az_ngw_ids}"
az_ngw_count = 3
}
module "db_subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=master>"
name = "db"
namespace = "${var.namespace}"
stage = "${var.stage}"
vpc_id = "${module.vpc.vpc_id}"
availability_zones = "${var.availability_zones}"
type = "private"
cidr_block = "${local.db_cidr_block}"
az_ngw_ids = "${module.public_subnets.az_ngw_ids}"
az_ngw_count = 3
}
@Andriy Knysh (Cloud Posse)
ty
what’s the problem here?
only one of each of the private subnets has the route to the nat gateway
so like just the first one routes publically
changing the ngw count doesnt seem to have any effect either
@Andriy Knysh (Cloud Posse) here
any idea?
i’ll need to deploy your example to see what it does (it’s been long time since I tested the module and the examples in it)
the issue is here
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
length
counts the number of items in a list (or characters in a string), so passing a single-digit number means only 1 route gets created. based on the count https://github.com/cloudposse/terraform-aws-multi-az-subnets/blob/master/private.tf#L69
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
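my read of that, as an illustration only (not the module's actual code):
# with az_ngw_count = 3:
count = "${length(var.az_ngw_count)}"   # length("3") = 1, so only one route gets created
count = "${var.az_ngw_count}"           # using the value directly creates one route per NAT gateway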
ah yea thanks
will fix it
already deployed it and saw the same issue
(stupid mistake )
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
thanks @Ryan Ryke
@Ryan Ryke can you show the code?
we usually use this module to create public and private subnets https://github.com/cloudposse/terraform-aws-dynamic-subnets (more usage, fewer bugs )
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
We also have https://github.com/cloudposse/terraform-aws-named-subnets which employs a slightly different strategy depending on what you want to achieve.
Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.
do you guys have any samples for https://github.com/cloudposse/terraform-aws-dynamic-subnets
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
also i posted my sample in the other thread
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/new_vpc_new_subnets/main.tf
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
i also commented on your PR, thanks, LGTM just a few nitpicks
We’ve released our EKS terraform modules for Kubernetes this week.
- https://github.com/cloudposse/terraform-aws-eks-cluster
- https://github.com/cloudposse/terraform-aws-eks-workers
Welcome feedback
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
I created #chatops for those interested
updated that pr @Andriy Knysh (Cloud Posse)
thanks @Ryan Ryke, merged the PR
cool thanks dude
you have a chance to take a peek at those subnets ?
or do you have samples for the dynamic subnets that i could peek at
for dynamic subnets I posted a few examples https://sweetops.slack.com/archives/CB6GHNLG0/p1537558581000100?thread_ts=1537542352.000100&cid=CB6GHNLG0
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
did not look at your code yet
right i saw those samples was tough to tell exactly what that is creating
oh, it creates one public subnet and one private subnet in each AZ you provide
and you can control NAT gateway creation for private subnet (enable or disable them)
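a minimal usage sketch (input names may differ slightly by version, so double-check the README):
module "subnets" {
  source              = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  name                = "app"
  vpc_id              = "${module.vpc.vpc_id}"
  igw_id              = "${module.vpc.igw_id}"
  cidr_block          = "${module.vpc.vpc_cidr_block}"
  availability_zones  = "${var.availability_zones}"
  nat_gateway_enabled = "true"
}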
right so im guessing that is your guys standard vpc config
we have three diff subnets
modules
but yes we mostly use dynamic-subnets
for its simplicity
@Ryan Ryke I lost track in what channel did you post your code can you repost it?
thanks @Ryan Ryke, your PR with the fix was merged
2018-09-22
cool thanks
oy, so looking at the ecs-web-app some more, with the new buildspec.yml that @Erik Osterman (Cloud Posse) graciously showed me. im having a slight issue. essentially the command
printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPO_URI:latest" | tee imagedefinitions.json
essentially overwrites the environment variables that are set by the container_definition module. So at this point, we have to set the app environment variables in the codebuild job so that we can use them.
im wondering if there is a way to not use the imagedef file from codebuild and just stick with the env variables from the container_definition module
You can, but that’s incompatible with CI/CD
Basically the app repo is authoritative on what software runs on the ECS task
And terraform is authoritative on what the infrastructure looks like
The definition is what tells ecs what version of the container to run
If we used strictly envs and wanted to use cicd, we’d need to run terraform from the cicd pipeline which is more complicated
yeah i think i have it in a usable state
still some testing / use to work on.
right so i think i figured this issue out essentially if the task def built by tf isnt the current running one when codepipeline runs, it removes the env vars
on another note, i created another pr. i need the security group id out of the ecs module https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pulls
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
there will be another one for the web-app once the service-task is released
Thanks for the PR! Commented
2018-09-23
updated @Erik Osterman (Cloud Posse)
2018-09-24
morning
so poking around a little bit more, im using the alb module to pass the listener arns into the web-app module. but i believe the alb module creates that default target group with hard coded port 80 and attaches it to the port 80 listener. which precludes me from attaching to that listener in the web app module
@rms1000watt here’s how we use it with the listener ARN:
@Ryan Ryke ^^^^
sorry @rms1000watt
No worries–i always end up finding out cool stuff this way
@Erik Osterman (Cloud Posse) yeah i have all of that, the problem that im running into is that the container port is on 4000
and for some reason the target group is created, it registers the fargate containers. but it never attaches to the alb listener
trying to get it to go from 80->4000
but it wont connect to the alb
lol, I keep doing that
and, im not sure it could because the default target group is attached to the port 80 listener
ok, i think i follow
when i create the target group outside the webapp module and feed it in, it gives me a “cannot calculate count”
from the ingress module
and that is all created in the terraform-aws-alb
so i guess im wondering how you are creating this outside of this smaple https://gist.github.com/osterman/15c639a970252c9adee8da09538659e8#file-ecs-tasks-tf-L15
do you use the aws-alb module for that?
yep
sec
added example
(added comment to gist)
so the output to that regarding target groups would be a targetgroup-default, then a target group for the app?
and does this have any capability to run on a different container port than 80?
Ok, I think for your use-case we should make that parameter overridable
probably default_target_group_port
in general, don’t see the benefit to run on non-standard ports on ECS. Your container definition can map 80:4000.
but nothing against adding the port parameter if you want it
right so i had the problem doing the container mapping with fargate
i get this
Error: Error applying plan:
1 error(s) occurred:
* module.hw_pipeline.module.ecs_alb_service_task.aws_ecs_task_definition.default: 1 error(s) occurred:
* aws_ecs_task_definition.default: ClientException: When networkMode=awsvpc, the host ports and container ports in port mappings must match.
status code: 400, request id: 05402366-c031-11e8-92ce-c3e5c8d9c0bf
port_mappings = [
{
containerPort = "4000"
hostPort = "80"
protocol = "tcp"
}
]
sigh…. yes, you’re correct. i forgot about that limitation.
When networkMode=awsvpc, the host ports and container ports in port mappings must match
so……… with that said, we’re back to square one.
let’s just parameterize the port and then you should be good to go.
right so in playing around with the aws-ecs-web-app, it looks like i have to add “port” to the alb-ingress module, and “container_port” to the ecs_alb_service_task
and it seems fine so i can put that in the web-app module as a new variable if you guys want to see that?
container_port
is already available, right?
so in the web-module both of those port options were not exposed from the underlying modules
i took container_port out and added the “port_mappings” from the updated 0.3.0 container_definition
but im thinking i might pull that back out
for completeness, i think the container definition can preserve the port_mappings
but we can keep the webapp opinionated and only support one port
ok so i will do my best and put something together to pr for you guys
thanks man - sorry for the grief
together we all get better… i wont lie, a little bit frustrating, but there are a lot of use cases
it never ceases to amaze me how changing one thing (e.g. a port) can explode the scope
and i still have one issue i havent resolved yet
what issue is that?
let me take a screen shot
the -default is created by the alb module
and its attached to the alb with a listener of port 80
the one without default is the correct one
its created by the web-app (and underlying sub modules)
problem is, it doesnt attach to the alb
once i update the listener (which is also created in the alb module) to point to the correct target group
all is well
its driving me nuts… i feed in the alb arns and all that
any place you can share some snippets?
sorry - a bit dense and actually @sarkis was the one who worked on all this stuff, so i am not up to speed on the details
high level, the design model looked like this
ALB creates a listener with a default target group. then we attach a default backend (something that always returns a pretty 404) which handles all requests that don't have an explicit route. every task we add using the ingress module should typically use the default listener arn of the ALB and creates a new target group. we never tested mixing ports.
this pattern is the default with kubernetes
but TBH never saw anyone implement it with ECS (except for us)
here’s our default backend https://github.com/cloudposse/default-backend
Default Backend for ECS that serves a pretty 404 page - cloudposse/default-backend
module "hw_pipeline" {
source = "../../../terraform-aws-ecs-web-app"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "hw"
listener_arns = ["${module.hw_alb.listener_arns}"]
listener_arns_count = "1"
aws_logs_region = "us-west-2"
vpc_id = "${module.vpc.vpc_id}"
codepipeline_enabled = "true"
ecs_cluster_arn = "${aws_ecs_cluster.ecs_cluster.arn}"
ecs_cluster_name = "${aws_ecs_cluster.ecs_cluster.name}"
ecs_private_subnet_ids = ["${module.app_subnets.az_subnet_ids["us-west-2a"]}", "${module.app_subnets.az_subnet_ids["us-west-2b"]}", "${module.app_subnets.az_subnet_ids["us-west-2c"]}"]
ecs_security_group_ids = ["${aws_security_group.app_traffic.id}"]
container_cpu = "512"
container_memory = "1024"
container_image = "xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/bv-staging-hw-ecr:latest"
port_mappings = [
{
containerPort = "4000"
protocol = "tcp"
}
]
desired_count = "1"
#alb_target_group_arn = "${aws_lb_target_group.temp.arn}"
alb_name = "${module.hw_alb.alb_name}"
alb_arn_suffix = "${module.hw_alb.alb_arn_suffix}"
alb_ingress_healthcheck_path = "/"
alb_ingress_paths = ["/"]
alb_ingress_listener_priority = "100"
github_oauth_token = "xxxxxxxxxxxxx"
repo_owner = "xxxxxxxx"
repo_name = "hello_world"
branch = "master"
ecs_alarms_enabled = "true"
alb_target_group_alarms_enabled = "true"
alb_target_group_alarms_3xx_threshold = "25"
alb_target_group_alarms_4xx_threshold = "25"
alb_target_group_alarms_5xx_threshold = "25"
alb_target_group_alarms_response_time_threshold = "0.5"
alb_target_group_alarms_period = "300"
alb_target_group_alarms_evaluation_periods = "1"
environment = [
{
name = "COOKIE"
value = "cJzXwLAT8dwD9SSgBITcRI1ib4ejNts4bgatcfhv"
},
{
name = "PORT"
value = "80"
},
{
name = "DATABASE_URL"
value ="postgres://${var.db_user}:${data.aws_ssm_parameter.db_password.value}@${module.rds_instance.instance_endpoint}/apidb"
}
]
}
ok so i think im just an idiot
i did not notice the "/" rule on the load balancer
ie the aws-alb module creates this
+ module.hw_alb.aws_lb_listener.http
id: <computed>
arn: <computed>
default_action.#: "1"
default_action.0.target_group_arn: "${aws_lb_target_group.default.arn}"
default_action.0.type: "forward"
load_balancer_arn: "arn:aws:elasticloadbalancing:us-west-2:6xxxxxxx:loadbalancer/app/bv-staging-hw-alb/00bf512a3bba527e"
port: "80"
protocol: "HTTP"
ssl_policy: <computed>
+ module.hw_alb.aws_lb_target_group.default
id: <computed>
arn: <computed>
arn_suffix: <computed>
deregistration_delay: "15"
health_check.#: "1"
health_check.0.healthy_threshold: "2"
health_check.0.interval: "15"
health_check.0.matcher: "200-399"
health_check.0.path: "/"
health_check.0.port: "traffic-port"
health_check.0.protocol: "HTTP"
health_check.0.timeout: "10"
health_check.0.unhealthy_threshold: "2"
name: "bv-staging-hw-alb-default"
port: "80"
protocol: "HTTP"
proxy_protocol_v2: "false"
slow_start: "0"
stickiness.#: <computed>
target_type: "ip"
vpc_id: "vpc-xxxxxxxx"
am i missing something here?
I am afk. Not quite sure I follow. We have a default target group that acts like a 404 handler. It matches only when nothing else matches with a higher priority.
This is similar to the kubernetes ingress module where there is a default backend
I can dig up somewhere an example, but won’t be able to get to it for a few hours
Can you rephrase what you are trying to accomplish even higher level?
Whats wrong with this?
what’s the error?
Basically cant find it
=’[
your filter name is off, probably
yea im trying to figure out what the value for name should be
is it your AMI?
executable_users - (Optional) Limit search to users with explicit launch permission on the image. Valid items are the numeric account ID or self.
its a public windows AMI
what i usually do for amazon amis is grab the ami id from the launch wizard, then look it up in the amis console, then work out the pattern from there
so it should not be self
should be amazon
the current ami id from the launch wizard in us-east-1 is ami-01945499792201081
here's a console link to that ami: https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;search=ami-01945499792201081;sort=desc:creationDate
AMI Name Windows_Server-2016-English-Full-Base-2018.09.15
yep, so use Windows_Server-2016-English-Full-Base-*
as the filter value pattern…
when I changed self to Amazon I got
* data.aws_ami.ami: 1 error(s) occurred:
* data.aws_ami.ami: data.aws_ami.ami: InvalidUserID.Malformed: Invalid user id: "amazon" status code: 400, request id: 4decb292-f8a6-4c54-bcc3-f70942a20516
amazon
is not a valid value
Get information on a Amazon Machine Image (AMI).
just remove that field
deleting it saved the day
Get information on a Amazon Machine Image (AMI).
lol
you meant owners
fwiw, here’s how @Andriy Knysh (Cloud Posse) does it in our EKS workers module https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/main.tf#L150-L160
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
(just wanted to post the same)
this is what i have
so in general, if the filter name is specific (as in your case), you might skip adding owners
b/c it will find the AMI in any case
if you think it could collide with another AMI with similar name pattern, set owners
to amazon
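so for the Windows AMI above, something like this (a sketch using the name pattern found earlier):
data "aws_ami" "windows" {
  most_recent = true
  owners      = ["amazon"]   # optional when the name filter is specific, but guards against a similarly-named AMI from another account

  filter {
    name   = "name"
    values = ["Windows_Server-2016-English-Full-Base-*"]
  }
}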
do others consider it a “best practice” to specify owners
regardless? i’m always concerned with getting hijacked
thought i saw somewhere that packer and terraform were moving to make it a required field
i think if you know you want an AMI from Amazon, why not specify it?
Anyone using TF Enterprise w/ Sentinel have any thoughts?
(hrm… it hasn’t come up before in this channel - but would be curious if anyone is using it)
for anyone interested https://www.hashicorp.com/sentinel
Policy as code framework for HashiCorp Enterprise Products.
are you using TF enterprise for CD?
@Erik Osterman (Cloud Posse)
evaluating it @Erik Osterman (Cloud Posse)
has anyone seen the web app module re-creating the container definition on every apply?
btw, @maarten is probably one of the most senior guys here on ECS, though he’s managing his own distribution of ECS modules.
2018-09-25
hrm
sounds vaguely familiar
checking
so here’s the problem from recollection (a bit fuzzy)
- because codepipeline is constantly pushing a new image definition for every release, the container definition diverges quickly
- we could use a
data
source to query the current definition, but then we introduce a cold start problem since the datasources always fail on a new task
so i think we decided the best fix was to ignore_changes
on the container definition, but doesn’t look like we carried that out.
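roughly this shape on the task definition resource (a sketch, not the module's current code):
resource "aws_ecs_task_definition" "default" {
  # ... (other task definition arguments)
  container_definitions = "${module.container_definition.json}"

  lifecycle {
    # let the CI/CD pipeline own the running revision so terraform stops fighting it
    ignore_changes = ["container_definitions"]
  }
}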
@sarkis sound familiar?
I’ve always used ignore_changes
, but I have a new branch which needs testing which works by looking up the datasource after bootstrapping when the container_image == “”
https://github.com/blinkist/terraform-aws-airship-ecs-service/tree/cicd_agnostic_ecs_service
The "ecs_task_definition_selector" compares the created task definition with the current live one; if no changes are found, the live task definition is set for the ecs_service, hence no update to the ecs_service will be made.
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service
awesome thanks guys, so heres the weird thing, the module has ignore in it but the high level one does not
further, it was working before. and something i changed is now causing it to not work and change on every run
Container definition change?
I mean the port mapping change
yeah i think it might be the port mapping
thats really the only thing ive changed from the original web-app module with the exception of adding port to ecs task
Any luck?
was onsite with a customer all day today
havent taken a peek at it yet this evening
@Erik Osterman (Cloud Posse) it looks like the newer module is doing this "memoryReservation": null,
not sure thats what the deal is
going to switch it back to the older module and check it out
what Adjust the regexp used to overcome Terraform's type conversions for integer and boolean parameters in the JSON container definition. The new regexp preservers the string type for environme…
Could be related to this recent change
The Regex was modified
@fernando can maybe shed some light
yep its the newer version
the old version doesnt have that
that regex… looks like a buncha gobley gook
hmm
i dont know off hand how to fix this
the old version doesnt handle my container port of 4000
sorta stuck here
ok, I think we can get this fixed tomorrow
can you open an issue against that repo
post an example of the container definition causing problems for you
lets hold off for now
something weird is happening and im not sure why
i changed it to the old one, ran it twice
the container wouldn't run because of the port issue, then i changed it back to the new one, and now it seems to be running ok
so i moved on
i swear its like my first time with tf
now my problem is with the alb ingress … trying to attach the https listener and im getting this
* module.hw_pipeline.module.alb_ingress.aws_lb_listener_rule.paths[1]: index 1 out of range for list var.listener_arns (max 1) in:
${var.listener_arns[count.index]}
lol
the alb module is attaching the listener
yep it's back to messed up
Can maybe zoom tomorrow
Ping me in the afternoon
anyone have tips for dealing with data that is a list of lists in terraform? i need to create multiple iam users, and attach one or more iam policies to each user…
aws_iam_user_policy_attachment
only works for a single policy arn, so i need to count over both the number of users, and for each user count over the number of policies to attach to that user
@loren I moved away from creating users from a list. There was a terraform issue regarding that; especially when removing a user at the beginning of the list, Terraform couldn't deal with it. Let me look that up, not sure if that is still the case.
I'm actually creating groups with policies and attaching users to those.
yeah, i’m familiar with what happens with the resources when you modify the list…
Agree with @maarten
more interested in how to deal with the data structure
What about a module that represents one user
i would still end up with a list of groups, and a list of policies to attach to each group
Then code generation that generates one tf file per user that invokes the module
i’ve done this before, too, was hoping to keep it simpler by leaving it all in terraform since i’m not worried about the consequences to list modification for this use case
module "user_matheus" {
source = "../../modules/terraform-aws-user"
username = "matheus"
namespace = "${var.namespace}"
belongs_to_groups = ["${local.default_staging_qa_user_groups}"]
}
Yea like that
per user resources then. blech.
alright
still much better than not knowing what happens when you modify that list
i’m ok with the consequence for this use case is all
just can’t figure out how to deal with the data structure
i’ll pm you
“simpler” is probably the wrong word, considering terraform’s limited support for this kind of logic. maybe, fewer steps?
A @HashiCorp Terraform provider for interacting with the filesystem - sethvargo/terraform-provider-filesystem
Use terraform to generate the terraform code :P
And then use resource "null_resource" { provisioner "local-exec" { command = "terraform apply ..." } }
. This way you can call Terraform from Terraform
Then use terraform to apply it
@loren what about something like this (just an idea)
locals {
users = []
users_length = "${length(local.users)}"
policies = []
policies_length = "${length(local.policies)}"
}
# element(list, index) - Returns a single element from a list at the given index
# If the index is greater than the number of elements, this function will wrap using a standard mod algorithm
resource "aws_iam_user_policy_attachment" "test-attach" {
count = "${local.users_length * local.policies_length}"
user = "${element(local.users, count.index)}"
policy_arn = "${element(local.policies, count.index)}"
}
the policies per user may all be different… i know i am pushing my luck with the data structure and different types for different keys, but this is kind of the idea…
users = [
{
name = "..."
policies = [
"abc",
"def",
]
},
{
name = "..."
policies = [
"123",
"456",
"abc",
]
},
]
maarten gave me an idea using the group membership resources, since they support list-based arguments
Also using groups to give users policies is best practice following the AWS Well-Architected Framework
quite right, i am appropriately ashamed
Feature idea: a resources declaration to declare several resources, possibly by iterating over variables (list or map) Say you want to manage IAM users with Terraform, but DRY-up their groups (or o…
So with groups, roles and policies this isn’t a problem, but with users and their unmanaged login-profiles this ends up as bolognese.
What I really miss in IAM btw is an attachable assume_role_policy.
ok, thanks to @maarten, this module gets me to a working config…
variable "users" {
type = "list"
description = "List of maps of IAM user names and a comma-separated-string of their IAM group memberships by group name"
}
variable "groups" {
type = "map"
description = "Map of IAM group names to a policy arn"
}
resource "aws_iam_user" "this" {
count = "${length(var.users)}"
name = "${lookup(var.users[count.index], "name")}"
}
resource "aws_iam_group" "this" {
count = "${length(var.groups)}"
name = "${element(keys(var.groups), count.index)}"
}
resource "aws_iam_group_policy_attachment" "this" {
count = "${length(aws_iam_group.this.*.name)}"
group = "${aws_iam_group.this.*.name[count.index]}"
policy_arn = "${lookup(var.groups, aws_iam_group.this.*.name[count.index])}"
}
resource "aws_iam_user_group_membership" "this" {
count = "${length(var.users)}"
user = "${lookup(var.users[count.index], "name")}"
groups = "${split(",", lookup(var.users[count.index], "groups"))}"
depends_on = [
"aws_iam_group_policy_attachment.this",
"aws_iam_user.this"
]
}
this takes a list of users and each user's groups, and a map of group names to policy arns. creates a group per policy, attaches the policy to the group, and makes each user a member of their groups
pretty much the trick ends up being that aws_iam_user_group_membership
takes a list of groups, so no need to iterate over a nested list
takes inputs of the form:
users = [
{
name = "..."
groups = "abc,def",
},
{
name = "..."
groups = "123,456",
},
]
groups = {
"abc" = "arn:..."
"def" = "arn:..."
"123" = "arn:..."
"456" = "arn:..."
}
2018-09-26
@jonboulle has joined the channel
Anyone seeing this a lot lately ?
Error: Error loading state: RequestError: send request failed
caused by: Get <https://xx-tf-state.s3.eu-central-1.amazonaws.com/?prefix=env%3A%2F>: EOF
got that yesterday, retry fixed it
Last few days this is going on and it still is.
havent had it today yet
Yes
Lots and lots. An open issue exists for this as well.
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
Not a lot, but we did get that the other day, yes
@Ryan Ryke if you get a chance, can you open an issue for that problem? https://github.com/cloudposse/terraform-aws-ecs-container-definition/issues
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
so I can try to reproduce and fix it
hey yeah, been heads down today
still having the issue
havent looked at it
@Erik Osterman (Cloud Posse) https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/13
using this module as part of https://github.com/cloudposse/terraform-aws-ecs-web-app Working on the web app module, and increased the version to 0.6.0 module "ecs_alb_service_task" { sour…
2018-09-27
anyone able to recreate this?
anyone around ?
@Ryan Ryke are you saying that container_definition_json = "${module.container_definition.json}"
gets changed on each terraform plan
and TF wants to update it?
to add some context im using the web-app module
i forked it
and updated the container def source to the newer module with port mappings
this is updating everytime
-/+ module.hw_pipeline.module.ecs_alb_service_task.aws_ecs_task_definition.default (new resource required)
id: "bv-staging-hw" => <computed> (forces new resource)
so its the ecs_service_task that is updating everytime
when i roll back the container def module back to the older one
it doesnt want to update every time
@Andriy Knysh (Cloud Posse) i think it could be related to https://sweetops.slack.com/archives/CB6GHNLG0/p1537932417000200?thread_ts=1537889592.000100&cid=CB6GHNLG0
what Adjust the regexp used to overcome Terraform's type conversions for integer and boolean parameters in the JSON container definition. The new regexp preservers the string type for environme…
the regex changed
i haven’t had a chance to take a look
@fernando is in the slack, but didn’t get a response
ok so you are saying that before the update everything was ok?
(I did not test it so can’t give an answer right now before I test)
@Ryan Ryke ^
right
and while i have you guys
if you have the time i have one other question on the intended usage of the alb_ingress module
ok
so in my app im working on listening on both 80 and 443
i used the alb module to set up both listener_arns
feed them into the web_app module
Error: module.hw_pipeline.module.alb_ingress.aws_lb_listener_rule.paths: 1 error(s) occurred:
* module.hw_pipeline.module.alb_ingress.aws_lb_listener_rule.paths[1]: index 1 out of range for list var.listener_arns (max 1) in:
${var.listener_arns[count.index]}
should only be for one listener at a time and should i just call the alb_ingress module again outside the web_app_module
i guess im confused with this logic here
count = "${length(var.hosts) > 0 && length(var.paths) == 0 ? var.listener_arns_count : 0}"
can you link me to the line
higher level… this is providing oen kind of routing
host-based routing
there’s host-based and path-based
right now, it doesn’t support both.
yeah i just have paths
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
on 2 listener arns
breaks on the second one
im also wanting to forward 80 to 443
so i'm thinking i might be better off having the high level module call these submodules a couple of times
sec
what are you passing for listener_arns_count
?
1?
2
and listener arns has 2 arns in it
can you zoom
?
sure let me shut my door
Hi there, Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https:/…
i remember running into this when we implemented our memcache module
maybe try this
Terraform Version This is a 0.9.5 regression, still occurring in master (as of f5056b7) I am guessing that this has something to do with #14135 but I could be lying. Affected Resource(s) Not resour…
Terraform Version This is a 0.9.5 regression, still occurring in master (as of f5056b7) I am guessing that this has something to do with #14135 but I could be lying. Affected Resource(s) Not resour…
also looks interesting
so i wonder if we can rework how we return the output
value = "${compact(concat(aws_lb_listener.http.*.arn, aws_lb_listener.https.*.arn))}"
maybe we should write it that way now
(without the [
… ]
)
The surrounding brackets are optional for resource attributes, and no longer recommended.
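i.e. shaped roughly like this on both ends (a sketch):
# output in the alb module – already a list, no surrounding brackets
output "listener_arns" {
  value = "${compact(concat(aws_lb_listener.http.*.arn, aws_lb_listener.https.*.arn))}"
}

# and when feeding the ingress / web-app module, pass it through as-is
listener_arns       = "${module.hw_alb.listener_arns}"
listener_arns_count = "2"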
cool will try in a couple of minutes
as the output of the alb module ?
Yea
Both
Output and input to ingress
right yeah ill pass it along
alb is outside the webapp
testing now
@Erik Osterman (Cloud Posse) genius
i pulled []
out of everywhere
ty ty ty
Omfg
lol what a trip
That was a Hail Mary
i can put some prs in if you want
assuming alb first
Would very much appreciate it
ok let me see if i can do this
would like to get this code off my box lol
oh hang on i have to revert a couple of changes that we made on the call
interestingly enough it takes two applys to get it to work
was failing on anything larger than 1 with index 1 out of range for list var.listener_arns (max 1) in: ${var.listener_arns[1]}
guessing something in here
resource "aws_lb_listener_rule" "paths" {
count = "${length(var.paths) > 0 && length(var.hosts) == 0 ? var.listener_arns_count : 0}"
listener_arn = "${var.listener_arns[count.index]}"
priority = "${var.priority + count.index}"
action {
type = "forward"
target_group_arn = "${local.target_group_arn}"
}
condition {
field = "path-pattern"
values = ["${var.paths}"]
}
}
trying to understand how to fix this, or whether this is intended
so the aws_lb_listener_rule.paths
resource is created if you want to use path
based routing
e.g. send /api
to service A
so the aws_lb_listener_rule.hosts
resource is created if you want to use host
based routing.
e.g. send api.example.com
to service A
what Adjust the regexp used to overcome Terraform's type conversions for integer and boolean parameters in the JSON container definition. The new regexp preservers the string type for environme…
i don’t see here https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_environment where it’s required to have integers and booleans as strings for ENV vars
Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family is the name of the task, and each family can have multiple revisions. The IAM task role specifies the permissions that containers in the task should have. The network mode determines how the networking is configured for your containers. Container definitions specify which image to use, how much CPU and memory the container are allocated, and many more options. Volumes allow you to share data between containers and even persist the data on the container instance when the containers are no longer running. The task placement constraints customize how your tasks are placed within the infrastructure. The launch type determines which infrastructure your tasks use.
so what’s probably happening is that we send strings to AWS, but they get converted to the original types
then TF reads them and compares to what it has and sees that they are different
similar to what was going on with Elastic Beanstalk https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues/43
terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply setting.1039973377.name: "InstancePort" => "InstancePort" setting.1039973377.n…
or actually what’s going on is that the PR keeps the strings for all values, but it was supposed to do it only for ENV vars
that needs to be fixed
@Andriy Knysh (Cloud Posse) do you think you can take a stab at the container definition fix tomorrow?
ok
was failing on anything larger than 1 with index 1 out of range for list var.listener_arns (max 1) in: ${var.listener_arns[1]}
ill put a pr in for an update ecs-web-app with these related changes, along with port mapping, and any updates you might be able to muster with the container_definition
tagged a release
2018-09-28
@Ryan Ryke regarding this PR https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/13
using this module as part of https://github.com/cloudposse/terraform-aws-ecs-web-app Working on the web app module, and increased the version to 0.6.0 module "ecs_alb_service_task" { sour…
the latest release of https://github.com/cloudposse/terraform-aws-ecs-container-definition correctly handles environment
when converting to JSON (preserves strings)
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
what version of https://github.com/cloudposse/terraform-aws-ecs-web-app are you using?
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
(if, as you mentioned in the PR, you are using 0.6.0
, it’s a very old version. Can you try the latest?)
yep i am
when i revert the port mapping and use container port it works fine
can you show the difference (when it works and when it does not)
also if i revert to 0.5.0 it works fine
the code difference
its pretty basic, swap out portmapping with container port and adjust the version
the part thats recreating it is the ecs_task_def
but when i revert container def it stops doing it
current container def is 0.3.0
current ecs_alb_service_task is 0.6.0
module "container_definition" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=tags/0.3.0>"
container_name = "${module.default_label.id}"
container_image = "${var.container_image}"
container_memory = "${var.container_memory}"
container_memory_reservation = "${var.container_memory_reservation}"
container_cpu = "${var.container_cpu}"
healthcheck = "${var.healthcheck}"
environment = "${var.environment}"
port_mappings = "${var.port_mappings}"
log_options = {
"awslogs-region" = "${var.aws_logs_region}"
"awslogs-group" = "${aws_cloudwatch_log_group.app.name}"
"awslogs-stream-prefix" = "${var.name}"
}
}
module "ecs_alb_service_task" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=tags/0.6.0>"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
alb_target_group_arn = "${module.alb_ingress.target_group_arn}"
container_definition_json = "${module.container_definition.json}"
container_name = "${module.default_label.id}"
desired_count = "${var.desired_count}"
task_cpu = "${var.container_cpu}"
task_memory = "${var.container_memory}"
ecr_repository_name = "${module.ecr.repository_name}"
ecs_cluster_arn = "${var.ecs_cluster_arn}"
launch_type = "${var.launch_type}"
vpc_id = "${var.vpc_id}"
security_group_ids = ["${var.ecs_security_group_ids}"]
private_subnet_ids = ["${var.ecs_private_subnet_ids}"]
container_port = "${var.container_port}"
}
the code above is not working (updates every time)?
what’s your var.port_mappings
?
port_mappings = [
{
containerPort = "4000"
protocol = "tcp"
}
]
@Andriy Knysh (Cloud Posse)
oops sorry
ok so i know the issue
nice
add hostPort
the same as containerPort
tried that
and read this https://github.com/hashicorp/terraform/issues/16769
Terraform Version Terraform v0.11.0 + provider.aws v1.4.0 Terraform Configuration Files resource "aws_ecs_task_definition" "httpd" { family = "foo-httpd-${var.environment}&…
let me try again
Terraform Version Terraform v0.11.3 provider.aws v1.8.0 provider.template v1.0.0 Affected Resource(s) Please list the resources as a list, for example: aws_ecs_task_definition Terraform Configurati…
also, what version of TF aws provider are you using?
latest
ok so please check the hostPort
and then the healthcheck
as described here https://github.com/terraform-providers/terraform-provider-aws/issues/3401#issuecomment-420830007
Terraform Version Terraform v0.11.3 provider.aws v1.8.0 provider.template v1.0.0 Affected Resource(s) Please list the resources as a list, for example: aws_ecs_task_definition Terraform Configurati…
will check
that did it
which one?
host port
almost positive that i tried that before
so well, they said they fixed the port issue in aws provider 1.36.0
which one do you have?
1.32
let me re init
that’s not the latest
ok anyway, thank you for testing and finding the issues. We’ll add a description to README that if you have TF provider < 1.36.0 AND use network_mode = "awsvpc"
then you have to add hostPort
to the port mappings
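so for awsvpc (e.g. Fargate), the mapping that worked in this thread looks like:
port_mappings = [
  {
    containerPort = "4000"
    hostPort      = "4000"   # must match containerPort when network_mode = "awsvpc" (on aws provider < 1.36.0 per the thread above)
    protocol      = "tcp"
  }
]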
2018-09-29
feel free to comment so we can get it merged plz
hi @Erik Osterman (Cloud Posse) im not sure what that error means
@Ryan Ryke it’s complaining that it needs to be properly formatted
just run terraform fmt .
| => terraform fmt
outputs.tf
variables.tf
________________________________________________________________________________________________________________
| ~/chef/terraform-aws-ecs-web-app @ RR-PRO (ryanryke)
| => terraform fmt .
__________________________________________
¯_(ツ)_/¯
perfect
now git diff
you should see the changes
terraform fmt
== terraform fmt .
so the first time you ran it, it formatted the code
second time no more formatting necessary
@Ryan Ryke bump
want to commit & push that?
sure thing
there you go
merged & released
Hi, I am trying to create azure services using terraform. I have base modules with an Azure availability set, load balancers, and a couple of azurerm_virtual_machine_extension's in the base DRY module. I want the option to create only a few extensions while deploying Azure VMs while referring to the base DRY modules. I am trying to use count = "${var.enabled == "true" ? 1 : 0}" in the base azurerm_virtual_machine_extension tf file to make the extension optional, but it's not working when I use the count option. Can you please let me know if using count in the base .tf file (resource) as an option is the right approach?
hi
2018-09-30
Hi, I am trying to create azure services using terraform. I have base modules with an Azure availability set, load balancers, and a couple of azurerm_virtual_machine_extension's in the base DRY module
I want the option to create only a few extensions while deploying Azure VMs while referring to the base DRY modules
I am trying to use count = "${var.enabled == "true" ? 1 : 0}" in the base azurerm_virtual_machine_extension tf file to make the extension optional
but it's not working when I use the count option
can you please let me know if I am using the right approach by using count in the base .tf file (resource) as an option
@praveen count = "${var.enabled == "true" ? 1 : 0}"
is a correct way to enable/disable a resource
we use what we call splat+join
pattern for resources with counts
for example:
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
if you put count
into a resource, then it becomes a list (not a single resource)
so anywhere you use any of the resource’s attribute, you have to get the item from the list
splat+join
pattern does this:
if enabled = "true"
, it gets the first item
if enabled = "false"
, it gets an empty string (and TF does not complain)
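a minimal sketch of the pattern with the Azure extension from the question (resource arguments omitted; names are placeholders):
resource "azurerm_virtual_machine_extension" "default" {
  count = "${var.enabled == "true" ? 1 : 0}"
  # ... (name, location, resource_group_name, virtual_machine_name, publisher, type, etc.)
}

output "extension_id" {
  # splat+join: returns the extension id when enabled, or an empty string when disabled
  value = "${join("", azurerm_virtual_machine_extension.default.*.id)}"
}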
hope that helps. let us know if you need more help