#terraform-aws-modules (2020-04)
Terraform Modules
Discussions related to https://github.com/terraform-aws-modules
Archive: https://archive.sweetops.com/terraform-aws-modules/
2020-04-02
Hello, requesting some thoughts/opinions. So I am looking at building reusable tf modules for my teams. We are divided in our opinion on whether we should use a mono-repo with sub-directories for each aws resource, or one github repo per resource. On my previous projects, I have done the latter. Each component has its own lifecycle (tagging etc.) that way, and also only the required modules get downloaded, not the entire mono-repo, during terraform get. What do you guys think?
One repo per module, yes definitely! On the strict aws resource split, I think this should never be a strict rule and should be evaluated on a per-case basis. If you are extremely strict with the one-resource-per-module approach then in many cases you just create a simple “abstraction” around a resource which doesn’t need abstraction because it’s super simple to begin with. I think it’s better to define in the team what great modules look like by pointing to certain community modules as examples. Whenever there is a debate on style or structure, a team member should be able to argue that it makes sense to do so because of x and y. Cheers.
Thank you so much for your response .. That’s definitely the way to go.
In my case we’re doing modules not for each resource but for each “infrastructure component”. For example, we have a module for CodePipeline which includes CodeBuild, CodeCommit and all the IAM around it.
I agree with separate repos from an access perspective, especially with different teams. You might not want some teams accessing/building VPC or IAM resources, while letting them build out EC2 or S3, etc…
2020-04-03
2020-04-04
2020-04-06
Hello - I am using the terraform-aws-alb module and am trying to figure out how to attach targets to the created load balancer. I have instances that are running due to the autoscale_group module, but I’m uncertain how to attach them. I’ve looked at the regular Terraform aws_lb_target_group_attachment resource, but haven’t worked out how to deal with the fact that I have two instances but target_id on aws_lb_target_group_attachment appears to only take one id. Any guidance would be much appreciated.
Are you using the https://github.com/terraform-aws-modules/terraform-aws-autoscaling module? I don’t see any module named autoscale_group on the main module registry.
If so, you just put the alb target group arns from the load balancer module into the autoscaling module, with something like target_group_arns = module.alb.target_group_arns
Regardless of module, target_group_arns is a field on the aws_autoscaling_group terraform resource.
Terraform module which creates Auto Scaling resources on AWS - terraform-aws-modules/terraform-aws-autoscaling
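A minimal sketch of that wiring, assuming the two modules are labelled alb and asg (the labels and the other inputs are illustrative, not from this thread):

module "asg" {
  source = "terraform-aws-modules/autoscaling/aws"

  # ... launch configuration, min/max size, subnets, etc. ...

  # register every instance the ASG launches with the ALB's target group(s)
  target_group_arns = module.alb.target_group_arns
}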
Thanks, and sorry for the typo. Your info helped.
2020-04-07
A follow-up to yesterday’s question. I am using the CloudPosse ALB module (https://github.com/cloudposse/terraform-aws-alb) in conjunction with the CloudPosse ASG module (https://github.com/cloudposse/terraform-aws-ec2-autoscale-group). I linked them via target_group_arns as suggested yesterday. I instructed the ASG to use a standard Linux AMI as its image, and I also tell the ASG to install httpd, etc. via userdata. However, I keep getting a 504 Gateway Time-out error. During troubleshooting, I noticed that the registered targets in my target group are failing their health check with 504 errors. When I look at the actual EC2 instances, they are using the default VPC security group which has no ingress or egress rules. So I found my 504 problem, but I’m not certain why my targets don’t have the proper security groups. The module is generating the expected security group to let in [0.0.0.0/0] over port 80, but that security group is not assigned to the targets in my target group. I see that the security groups are assigned to the ENIs, but that’s it. Any help/advice is most appreciated.
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
May have figured this out. I needed to get the security group from the ALB module and feed it into the ASG module.
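A minimal sketch of that fix, assuming module labels alb and asg and the output/input names of the 2020-era Cloud Posse modules (double-check them against the module versions you actually use):

module "asg" {
  source = "cloudposse/ec2-autoscale-group/aws"

  # ... image_id, instance_type, subnets, user_data, etc. ...

  # register instances with the ALB's default target group
  target_group_arns = [module.alb.default_target_group_arn]

  # and give the instances the ALB-created security group so health checks can reach port 80
  security_group_ids = [module.alb.security_group_id]
}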
2020-04-09
Hi guys, I’ve added some additional parameters into terraform-aws-codebuild: https://github.com/cloudposse/terraform-aws-codebuild/pull/53 Can someone review please? Thanks.
what: Added support for private repository auth, git_submodules_config, vpc_config, logs_config, git_clone_depth. why: They were missing, and I needed them
Just a quick one to add a missing output: https://github.com/cloudposse/terraform-aws-rds/pull/59
what: Adds the ARN of the RDS cluster as an output. why: Due to some weirdness in the API, you can't make read replicas in different subnet groups without using the ARN. See referenced issue. …
2020-04-14
Hey @Maxim Mironenko (Cloud Posse) - any movement on https://github.com/cloudposse/terraform-aws-tfstate-backend/pull/43 ?
Whilst the current option policy ensures server-side encryption, encryption of the transport mechanism isn't enforced. This change extends the S3 bucket policy to enforce encryption in transit,…
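The pattern that PR adds is, roughly, a bucket policy statement that denies any request not made over TLS. A generic sketch, not the exact diff from the PR (aws_s3_bucket.default is a stand-in for the module's state bucket):

data "aws_iam_policy_document" "require_tls" {
  statement {
    sid     = "DenyInsecureTransport"
    effect  = "Deny"
    actions = ["s3:*"]

    resources = [
      aws_s3_bucket.default.arn,
      "${aws_s3_bucket.default.arn}/*"
    ]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    # reject anything that did not arrive over HTTPS
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
  }
}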
Looks like it is failing because of an unrelated README change?
(I’m the author)
This does seem a bit of an odd failure. @Maxim Mironenko (Cloud Posse) if there’s anything I can do, let me know
@bazbremner having some issues with GitHub Actions. They recently made some changes related to tokens. I am on it
2020-04-15
Hi all, can someone suggest a module for ElastiCache (Redis)?
Hi, I’m trying to create multiple subnets with terraform-aws-multi-az-subnets. However, since count is not allowed on modules, is there a way to use a single module and have some kind of iteration over the CIDR lists to generate the subnets?
Maybe the terraform-aws-vpc module fits your need.
I had used that module but I needed more fine-grained control over the subnets created. Essentially I needed 4 subnets per AZ with a greater IP range in the private ones. I ended up rolling it with the existing TF resources.
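For anyone hitting the same limitation, a minimal sketch of the roll-your-own approach under Terraform 0.12 (count works on resources even though it doesn't on modules); the VPC reference and the CIDR math are illustrative:

variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

# eight /20 subnets (4 per AZ) carved out of a /16 VPC CIDR; adjust newbits for other sizes
resource "aws_subnet" "private" {
  count = length(var.azs) * 4

  vpc_id            = aws_vpc.main.id
  availability_zone = var.azs[count.index % length(var.azs)]
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 4, count.index)

  tags = {
    Name = "private-${count.index}"
  }
}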
2020-04-16
2020-04-20
Hello there, I’d like to disable the creation of the s3 endpoint when using the EMR module: https://github.com/cloudposse/terraform-aws-emr-cluster/pull/14 – I’ve already got an S3 endpoint managed somewhere else.
what: Add the variable create_vpc_endpoint_s3 to control VPC S3 Endpoint creation. why: Users may already have their own S3 Endpoint in the selected VPC. If they do, this module fails because there…
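The pattern in that PR is, roughly, to gate the endpoint resource on a boolean variable. A generic sketch (the variable name follows the PR description; the resource label, var.vpc_id and var.region are placeholders):

variable "create_vpc_endpoint_s3" {
  type        = bool
  description = "Whether this module should create the S3 VPC endpoint, or assume one already exists"
  default     = true
}

resource "aws_vpc_endpoint" "vpc_endpoint_s3" {
  # skip creation entirely when the caller already manages an S3 endpoint
  count = var.create_vpc_endpoint_s3 ? 1 : 0

  vpc_id       = var.vpc_id
  service_name = "com.amazonaws.${var.region}.s3"
}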
@cabrinha there is a minor change request for your PR. Also, after it is addressed, I will run the rebuild README.md routine, so please make sure your repo allows write access for our bot
Thanks for looking at this so quickly @Maxim Mironenko (Cloud Posse) – I’ve updated the PR with your suggestion.
How do I allow write access for the bot?
no need, we are fine, bot works well
How can I duplicate the README and FMT commands you guys run on your PRs in my own org?
it is as easy as running:
make init
make readme/deps
make readme
and for FMT:
make terraform/install TERRAFORM_VERSION=0.12.19
terraform fmt -recursive
if you don’t want to do so on your host machine, you can use a Docker image, for example this one: https://github.com/cloudposse/geodesic
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools - cloudposse/geodesic
Would I be able to copy this file into my own repos and use it the same way?
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
@Andriy Knysh (Cloud Posse) is the master of how that works. @Maxim Mironenko (Cloud Posse) though is getting caught up
It’d also be great if we could get read-only access to these firebase workflows that are running against these modules too. Not a necessity, but interesting.
https://github.com/cloudposse/terraform-aws-emr-cluster/actions and https://github.com/cloudposse/actions/actions
@cabrinha you need a few steps to be able to use GitHub Actions like we do:
- Copy slash-command-dispatch.yml to your repo
- Add the repo access token as a secret
- The action above calls these workflows https://github.com/cloudposse/actions/tree/master/.github/workflows which you need to have as well
we separated the dispatcher from the executor since we use one executor for all our repos (we just add the dispatcher to them) - so it’s easy to update it in one place
but you can use the dispatcher and the executor from just one repo
@Maxim Mironenko (Cloud Posse)
The bot and these commands are really nice!
2020-04-22
Anyone have a module I can plug in to get RDS event logging, with CloudWatch Events pushed to CloudWatch Logs + PagerDuty or a similar destination? I saw an older Cloud Posse one with some promise. Anything else?
2020-04-24
@Andriy Knysh (Cloud Posse) I have been using cloudposse modules for a long time. Hey, I just need a direction on how to include a provision for reading another account’s bucket. I have been using private subnets for EMR clusters.
for cross-account access, you need to add permissions on both sides
on the one side, add an S3 bucket policy with permissions for the other account’s entities (users, groups or roles) to access the bucket
yeah this I have added
on the EMR side, on the EC2 roles?
on the other side, add permissions to users/groups/roles to access the bucket
(I don’t know about your architecture so can’t advise on where to add those roles, emr or ec2)
hmm,
it also depends on how you use it, just EC2 or Kubernetes
this is pure emr on aws
but the description above applies to any case
yes
@navdeep EMR is a complicated topic. If you show me the code where you think you should do it, I would be able to help you
give me about 30 mins, I’ll find some code for EMR
@navdeep on the bucket side, you add this:
variable "s3_bucket_allow_access_principal_arns" {
type = list(string)
description = "ARNs of the principals that should be allowed to access the datalake S3 bucket, e.g. ARNs of other AWS accounts for cross-account access"
default = []
}
data "aws_iam_policy_document" "datalake_bucket_access" {
statement {
effect = "Allow"
actions = [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject",
"s3:PutObjectAcl"
]
resources = [
aws_s3_bucket.datalake.arn,
"${aws_s3_bucket.datalake.arn}/*"
]
principals {
type = "AWS"
identifiers = var.s3_bucket_allow_access_principal_arns
}
}
}
resource "aws_s3_bucket_policy" "datalake_bucket_access" {
bucket = aws_s3_bucket.datalake.id
policy = data.aws_iam_policy_document.datalake_bucket_access.json
}
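A minimal usage sketch for the variable above - the account ID is hypothetical; passing the other account's root ARN lets that account delegate access onward to its own roles:

s3_bucket_allow_access_principal_arns = ["arn:aws:iam::111122223333:root"]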
@navdeep actually, what problem are you trying to solve? Why is your bucket in a different account from the EMR cluster?
what we did for a client: we created the EMR cluster and S3 bucket in one account (let’s call it data). Then we created Firehoses in other accounts (e.g. prod, staging). Then we added a bucket policy to allow access from those Firehoses (cross-account). Then we allowed the Firehoses to write to the bucket (cross-account)
so the company has multiple accounts because of different business verticals. I tried to put the above policy in before too, thanks for mentioning it.
The applications deployed in the other accounts (prod, staging, dev) have permissions to write data to the corresponding Firehoses (in the same account). Then, the Firehoses send data to the bucket in the data account. The EMR cluster in the data account (specifically, Hive and Presto) can access the S3 bucket in that account
Note that you can’t have a Firehose in the data account and push data from apps in other accounts - AWS SDKs don’t have the ability to push to a Firehose in another account. That’s why we created Firehoses in all the other accounts and allowed them to write to the datalake bucket in the data account
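A minimal sketch of that layout from the prod/staging side, assuming a hypothetical datalake bucket name and an existing Firehose IAM role in that account (the bucket policy shown earlier is the other half that lets the cross-account write succeed):

resource "aws_kinesis_firehose_delivery_stream" "to_datalake" {
  name        = "prod-to-datalake"
  destination = "extended_s3"

  extended_s3_configuration {
    # the role lives in this (prod) account; the datalake bucket lives in the data account
    role_arn   = aws_iam_role.firehose.arn
    bucket_arn = "arn:aws:s3:::example-datalake"
  }
}

The Firehose role in prod still needs its own IAM policy allowing s3:PutObject and friends on that bucket ARN - the bucket policy alone only covers the bucket side.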
hmm correct, this seems to be a good design. This is more of a legacy we are carrying
But to do what you mentioned (EMR in one account, the bucket in another), I think you need to add a resource "aws_iam_role_policy_attachment" (with permissions to access the bucket cross-account) to these roles:
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
(not sure if it's both or just one of those, did not test it cross-account)
hmm ok, I will check, and if it can be configured I will push a PR
thanks
you can add additional variables to add additional policies to those two roles
(would be a good addition to the module)
hey yup, though what worked for us is adding a read policy to give access to the EC2 role we are creating in this module. Shall I push a PR to add this to the documentation? If you need to read data from a different account, give a policy like the following to the EC2 role getting created
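A minimal sketch of what such a read-only policy attached to the module's EC2 role could look like - the bucket name is hypothetical and the role reference is a placeholder, not an actual output of the module:

data "aws_iam_policy_document" "cross_account_read" {
  statement {
    effect = "Allow"

    actions = [
      "s3:GetObject",
      "s3:ListBucket",
      "s3:GetBucketLocation"
    ]

    resources = [
      "arn:aws:s3:::other-account-bucket",
      "arn:aws:s3:::other-account-bucket/*"
    ]
  }
}

resource "aws_iam_role_policy" "emr_ec2_cross_account_read" {
  name   = "cross-account-read"
  role   = "emr-ec2-role-name" # placeholder: the name of the EC2 role created by the EMR module
  policy = data.aws_iam_policy_document.cross_account_read.json
}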
and hey thanks again !!
thanks @navdeep
PRs always welcome
2020-04-26
2020-04-27
Hey folks - Opened a smol PR - https://github.com/cloudposse/terraform-aws-route53-alias/pull/21 was hoping for maybe a quick turnaround? cc @Erik Osterman (Cloud Posse) / @Maxim Mironenko (Cloud Posse)
what: Allow for allow_overwrite functionality. why: I want to manage some existing records with Terraform, so need this functionality which switches the action to an UPSERT, from CREATE. See https://w…
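For context, the underlying Terraform behaviour the PR exposes: allow_overwrite on aws_route53_record turns the change into an UPSERT so Terraform can take over an existing record. A generic sketch (the variables are placeholders, not the module's internals):

resource "aws_route53_record" "vanity" {
  zone_id = var.zone_id
  name    = "brand.com"
  type    = "A"

  # take ownership of a record that already exists in the hosted zone (UPSERT instead of CREATE)
  allow_overwrite = true

  alias {
    name                   = var.alias_dns_name
    zone_id                = var.alias_zone_id
    evaluate_target_health = false
  }
}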
@Andriy Knysh (Cloud Posse) can you review this
yes
set the channel topic: Terraform Modules
Thank you!
Terraform Module to Define Vanity Host/Domain (e.g. brand.com) as an ALIAS record - cloudposse/terraform-aws-route53-alias
Thanks @Andriy Knysh (Cloud Posse)!! Hope you’re good!