#terraform (2018-10)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2018-10-01


so with your beanstalk module… How can I add user data?

.ebextensions is not enough?

I need to add swap to instances as they are created. Someone before me had created ebextensions with a script to add swap, but when instances are refreshed or whatever it doesn’t work. Also he didn’t add it to fstab, so on restart there’s no swap.

we did it that way for swap

also turns out ebextensions are only executed once on environment creation (i think), and any changes later are ignored.

it’s been a couple years since i looked at it

but i think there were sections that run on every build

and some that only ran on creation

hmm i’ll have to research into it more. couldn’t find any details about that yet

i found our old code for that

commands:
  00_add_swap_space:
    command: "/tmp/add-swap-space.sh"
files:
  "/tmp/add-swap-space.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      set -o xtrace
      set -e
      SWAP_SIZE=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r ".SWAP_SIZE")
      if [ "$SWAP_SIZE" == "0" ];
      then
        echo "Swap is not enabled"
        exit 0
      fi
      if grep -E 'SwapTotal:\s+0+\s+kB' /proc/meminfo; then
        echo "Enabling swap space (${SWAP_SIZE} mb)"
        dd if=/dev/zero of=/var/swapfile bs=1M count=$SWAP_SIZE
        /sbin/mkswap /var/swapfile
        chmod 000 /var/swapfile
        /sbin/swapon /var/swapfile
      else
        echo "Not creating additional swap space"
      fi

yep thats essentially what i have

minus the command part

maybe use folder hooks? can’t speak from experience, but this guy suggests it: https://github.com/equivalent/scrapbook2/blob/master/archive/blogs/2016-08-22-aws-elasticbeanstalk-hooks.md#direct-ebextension-command-enxecution
web-development notes and archive of Old articles - equivalent/scrapbook2


In aws elasticbeanstalk. When we setup extensions in .ebextensions I wonder what is the difference between commands and container_commands. My command is like this container_commands: 04_insert_a…

hmm so in my case the command does run, but it does nothing because swapfile already exists. it just doesn’t actually re-enable it

if instance is rebooted

Just call “swapon /the/swapfile”

So if the instance is rebooted, it checks for the file. If it’s found it calls swapon, else it creates it and calls swapon

yeah i’ll add that. thank you

i do like your ENV for swapsize

i might steal that idea

lol
2018-10-02

I just had a second former co-worker independently discover this comment and thank me for it https://github.com/hashicorp/terraform/issues/9368#issuecomment-253950367 </brag>

haha, very nice - like the fix

also, props for using template_file over HEREDOCs - hate those

though in the particular issue, I’d argue the correct fix is to use an iam_policy_document data source rather than templated JSON.

Generates an IAM policy document in JSON format
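For reference, a minimal sketch of the data-source approach (bucket name, sid, and policy name are illustrative):

data "aws_iam_policy_document" "example" {
  statement {
    sid       = "AllowObjectRead"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"] # illustrative ARN
  }
}

resource "aws_iam_policy" "example" {
  name   = "example-read"
  policy = "${data.aws_iam_policy_document.example.json}"
}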

Well, in my defense, I made that comment 2 years ago, when I had 2 years less TF experience, and might predate iam_policy_document


haha very likely!


it’s awesome though.. i have been coming across a lot of members contributions/comments

Where’s TF 0.12, we’re dying here ?

@maarten, @Daren, @jamie just in the past week


end of oct

<—– excited but not excited

we have 100+ modules to update


@Ryan Ryke what do you think about this: https://github.com/cloudposse/terraform-aws-ecs-container-definition/pull/12
The usage for the module appears to be in a single container context resource "aws_ecs_task_definition" "this" { container_definitions = "${module.container_definition.js…

looks pretty cool, i like the idea.

im not totally sure about which context this is used in, unless you needed a sidecar or some sort of container to link to

otherwise you could just call the module twice… i am sure im missing a use case here

yea - sidecar use-case i think

by @stephen

hey all. I’m quite confused by the ecr_repository_name variable to the terraform-aws-ecs-alb-service-task module. There are no references to it in the configuration. How does one link an ECR repo to an ECS task?

This is in the imagedefinitions.json


which is created as part of the build process

seems like that’s related to codepipeline being enabled? Is it the case that terraform-aws-ecs-web-app can’t be configured with an image in ECR unless it’s used with codepipeline?
Happy to open a PR if that’s the case. Just want to be sure I’m understanding the current setup correctly.

I’ve managed to fix this for myself locally by prepending the ECR registry name to the image name in container_definition:

container_image = "${module.ecr.registry_url}:${var.container_image}"

Unfortunately, since the image repository is defined within the aws-ecs-web-app module, I don’t see a way to apply this fix without modifying aws-ecs-web-app itself. Currently working with a local clone for this reason…
Does anyone know a way to do this just by passing the right var.container_image to aws-ecs-web-app?

if it requires a change to container_definition, it seems like it should be conditioned on codepipeline_enabled. Sound correct?

if using CI/CD, which is the use-case we optimized for, the repo is set with the buildspec.yaml which defines how to update the imagedefinitions.json file

did you take a look at the gist I sent you above?

hey @dan

look here for an example https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L66
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

thanks @Andriy Knysh (Cloud Posse). If you search for ecr_repository_name in that repo, you’ll see that it’s only referenced in the example. Do you know where it gets used?


@dan i recommend starting with the terraform-aws-ecs-web-app module for a more “turnkey” app

thanks @Erik Osterman (Cloud Posse). I am still curious to find where this variable is used. Neither the web app nor the service task wrappers reference it outside of the docs or examples…

of course, if you need to do something which doesn’t fit the mold, you can always use the terraform-aws-ecs-alb-service-task as a building block.

the web app module will be a good reference implementation for you

man! we have too many dans for me to keep track of…
2018-10-03

did you take a look at the gist for the buildspec.yaml?

yes. can you confirm it’s only relevant if i’m using codepipeline?
I see that I could prepend the equivalent of $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/$IMAGE_REPO_NAME to the image name, though the repository name seems to be only accessible from within aws-ecs-web-app.

The docs for codepipeline_enabled say:
“A boolean to enable/disable AWS Codepipeline and ECR”
Which makes me think the current setup doesn’t permit pulling images from ECR without codepipeline_enabled.

i hard coded the ecr uri into the ecs-web-app module

@Ryan Ryke ok, so it sounds like we’re missing something?

It’s a little chicken and egg

It needs to know the repo and tag before it can build the container definition

And it can’t get the tag until it runs a container definition from the build spec

In the car atm

I have a quick question about https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#waiting-for-elb-capacity
So I typically use min_elb_capacity and deploy a new LC+ASG together when config/code changes within my ASG instances.
A colleague hit me up today with this observation –
"""
Assuming the old ASG has scaled-out (increased instance-count) due to scaling-rules – and because the terraform value for min_elb_capacity is not dynamic, this will cause an unwanted reduction in the instance count when the new ASG is deployed.
"""
Has anyone seen a method of propagating the current desired_capacity from the currently active ASG to the newly proposed ASG during the terraform plan phase?
Provides an AutoScaling Group resource.

I think the best thing would be to ignore_changes to desired_capacity
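e.g., a minimal sketch (the rest of the ASG arguments are elided):

resource "aws_autoscaling_group" "default" {
  # ... existing LC/ASG arguments ...

  # let scaling policies own the instance count after creation
  lifecycle {
    ignore_changes = ["desired_capacity"]
  }
}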

OK, I’ll try that.

has anyone touched this module in a while https://github.com/cloudposse/terraform-aws-cloudwatch-flow-logs
Terraform module for enabling flow logs for vpc and subnets. - cloudposse/terraform-aws-cloudwatch-flow-logs

looks like kinesis is in there, not 100% sure what it’s needed for

@Ryan Ryke looks like the module was created almost a year ago and never updated. We can take a look at it together if you want to use it

@tamsky regarding desired_capacity:

You may want to omit desired_capacity attribute from attached aws_autoscaling_group when using autoscaling policies. It’s good practice to pick either manual or dynamic (policy-based) scaling

Provides an AutoScaling Scaling Group resource.

that’s what we use here https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/autoscaling.tf
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
2018-10-04

@dan we took a look into your specific question with specifying the ECR repo with the terraform-aws-ecs-alb-service-task module

What you want to do is use the terraform-aws-ecs-container-definition module to create a JSON task definition.

set the container_image to the canonical “docker” url to the image

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition


then once you have all that, pass the container definition JSON as the value of container_definition_json of the terraform-aws-ecs-alb-service-task
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
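Roughly, the wiring looks like this (a sketch; the image URL is a placeholder, and the container definition output name should be checked against the module’s README):

module "container_definition" {
  source          = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=master" # pin to a release in practice
  container_name  = "app"
  container_image = "123456789012.dkr.ecr.us-west-2.amazonaws.com/app:latest" # placeholder
}

module "alb_service_task" {
  source                    = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=master" # pin to a release in practice
  container_definition_json = "${module.container_definition.json}"
  # ... plus the vpc/subnet/alb/naming inputs the module requires ...
}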

@Andriy Knysh (Cloud Posse) is going to remove that confusing ecr_repository_name which is not used

@dan example added here https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/15/files#r222824398
what Remove unused vars Update README with examples and descriptions Add usage example why The vars were declared but never used and they are unnecessary Add example for specifying container def…
2018-10-05

new version released with usage examples and explanation how to setup the container image https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task


@Gabe thanks for the PR https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/34
what Support use of tags by label modules why Interoperability with all modules

going to have @Andriy Knysh (Cloud Posse) review - then we’re good to merge


thank you


awesome thank you!
2018-10-07

for those interested and want to follow along, we’re working on some enhancements for atlantis
2018-10-10

@Andriy Knysh (Cloud Posse) I’ve been out of the loop on your EKS plugin, but is it production ready?

yep

all modules were tested on AWS

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Thank you I am going to use it today

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

and here is a complete working example (the one we tested)

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Thank you good sir
2018-10-11

@bober2000 https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Under resource "aws_elastic_beanstalk_environment" "default" { you’ll see similar settings

@Andy so just insert my settings to resource “aws_elastic_beanstalk_environment” “default” {

ok, thanks

will try this

yup or set a variable for the value = "8.6.4" part so you can easily switch between versions for other apps

sorta depends on your use cases and if you’ll have multiple apps or environments

just used this module from git directly: source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=master"
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

so I need to clone repo to localhost and change it there

got it

yup

hey @bober2000, welcome

@Andy thanks for answering the questions

@bober2000 if you want to open a PR to add the settings to the module, we’ll review it

@Andriy Knysh (Cloud Posse) sure! Will do this as fast as understood how

ok

here are the steps that should help you:

- Fork the repo into your GitHub account
- Create a new branch
- Modify the code in your branch. Add this:

setting {
  namespace = "aws:elasticbeanstalk:container:nodejs"
  name      = "NodeVersion"
  value     = "8.6.4"
}

- Test on AWS (terraform plan/apply)
- Open a Pull Request against our repo
- We review and merge your changes into our repo

8.6.4 should be a new variable, e.g. nodejs_version

setting {
  namespace = "aws:elasticbeanstalk:container:nodejs"
  name      = "NodeVersion"
  value     = "${var.nodejs_version}"
}

Oh, thanks - I know how to contribute using GitHub - I mean I’m still only a newbie in terraform syntax

Will add this - thanks for the guide

Don’t forget to add to variables.tf as well
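e.g., a minimal sketch of the variable (description wording is illustrative):

variable "nodejs_version" {
  description = "Version of Node.js to run on the Elastic Beanstalk environment"
  default     = "8.6.4"
}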

yep

and then rebuild README by executing the following commands in the module folder:

make init
make readme/deps
make readme

Anyone know if there is a terraform module(+lambda) out there for AWS service limit monitoring ?

That would be a great one!

Haven’t seen though…

I want a module for billing alerts too

https://s3.amazonaws.com/solutions-reference/limit-monitor/latest/limit-monitor.pdf there is a CF template https://s3.amazonaws.com/solutions-reference/limit-monitor/latest/limit-monitor.template Easiest to just TF around it

My requirement is to not delete (and not even try to delete) the backend S3 bucket, since S3 is the backend. Currently it’s trying to delete and failing since versioning is enabled. Was trying to set a deletion policy to “retain” (S3) running terraform. (moot) Was wondering if there is an option to tell terraform not to delete the backend S3?

@GFox)(AWSDevSecOps can you add some additional context? sounds like you might be talking about the terraform-aws-tfstate-backend module?

Thank you @Erik Osterman (Cloud Posse), working on it, I’m not an aws guy yet, more azure and openstack, but helping a friend right now, while I’m reading up looking into it, and, when I get a better picture, will def ping you back

Aha then yes there are a few options

Sorry, I see now more clear what you are asking

So when you create the bucket resource you will want to add a lifecycle block that says prevent destroy
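i.e., a minimal sketch of that lifecycle block on the state bucket (bucket name illustrative):

resource "aws_s3_bucket" "terraform_state" {
  bucket = "example-terraform-state" # illustrative name

  versioning {
    enabled = true
  }

  # refuse to delete the state bucket, even on terraform destroy
  lifecycle {
    prevent_destroy = true
  }
}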

@GFox)(AWSDevSecOps you need to put tfstate-backend into a separate folder, provision it using TF local state, then import the state into the tfstate-backend bucket

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

all other modules are in different folders, so when you run terraform destroy on them, TF will not attempt to destroy the state bucket

wow, quick responses, great help and great stuff

also take a look at our docs where we describe the process https://docs.cloudposse.com/reference-architectures/cold-start/#provision-tfstate-backend-project-for-root

@Andriy Knysh (Cloud Posse) PR for adding NodeJS version select https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/50

Thanks @bober2000!

Left 1 comment

@bober2000 LGTM thanks. Just rebuild README

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) README ready, linting too

thanks! waiting on @Andriy Knysh (Cloud Posse) for final approval

@bober2000 merged to master. Thanks for the PR!

Thanks for the help here on the terraform-aws-ecs-web-app ECR issue a few days ago!
I’m now wondering if I’ve misunderstood the use-case for the module. Is it ok to have the instances it defines live on a public subnet? The only way to specify the subnets is via ecs_private_subnet_ids, which leads me to believe it should only be used on private subnets. In my use-case, the containers need internet access. When I look under the hood, I don’t see any reason for the “private” qualifier. Is it just a poorly-named variable?

it was designed with the expectation that the tasks are on a private subnet

which means your VPC needs a NAT gateway to reach the internet

have you seen our VPC and subnet modules? Those will take care of it

Ah, thanks. Will look now.

(though I suppose it will just work if you give it public subnet IDs)

even though it’s called private_... (just a hunch)

that’s what I was hoping for. Will report back.


if not, and you want to submit a PR, will review it promptly
2018-10-12

Hi again. When trying to terraform destroy I’m getting several errors:

module.elastic_beanstalk_environment.aws_s3_bucket.elb_logs (destroy): 1 error(s) occurred:
aws_s3_bucket.elb_logs: error deleting S3 Bucket (develop-dev-vitaliimorvaniukdev-logs): BucketNotEmpty: The bucket you tried to delete is not empty
status code: 409
module.dev_front_end.module.logs.aws_s3_bucket.default (destroy): 1 error(s) occurred:
aws_s3_bucket.default: error deleting S3 Bucket (develop-dev-front-dev-vitalii.morvaniuk-dev-logs): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409

force_destroy = true is set

what should I check ?

was force_destroy = true set from the very beginning?

oh, this is a tricky thing i discovered a few months ago.

if you change force_destroy = true but do not terraform apply, it doesn’t register

@bober2000 and if the rest has been deleted already, best is to do a terraform apply -target on module.elastic_beanstalk_environment.aws_s3_bucket.elb_logs with the force_destroy = true option. And then the destroy again.

Yes force_destroy = true was set from the beginning


[empty bucket]-button

It’s what I’m doing now - but I’d like it to be deleted automatically

i’m not quite sure what to look into

in older versions of terraform, it was common that versions weren’t force deleted

terraform --version
Terraform v0.11.7
- provider.aws v1.40.0
- provider.null v1.0.0

This issue was originally opened by @osterman as hashicorp/terraform#7854. It was migrated here as part of the provider split. The original body of the issue is below. Terraform Version Terraform v…

but that’s been working for me as of relatively recently - using our terraform-aws-tfstate-backend module which has versioning

Erik, you have a long history of tf issues

hah, that’s ironic

since v0.6.16

it’s more tragic hehe, it’s super old

Ok, one more question - I need to add an RDS instance to my beanstalk environment - as far as I can see there is no option for this - should I do it separately and after that add something like this:

setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "RDS_USERNAME"
  value     = "${aws_db_instance.rds-app-prod.username}"
}
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "RDS_PASSWORD"
  value     = "${aws_db_instance.rds-app-prod.password}"
}
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "RDS_DATABASE"
  value     = "${aws_db_instance.rds-app-prod.name}"
}
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "RDS_HOSTNAME"
  value     = "${aws_db_instance.rds-app-prod.endpoint}"
}

For prod it’s recommended to do it separately http://www.michaelgallego.fr/blog/2013/10/26/do-not-associate-rds-instance-with-beanstalk-environment/
Discuss about some pros and cons of associating a RDS instance with an Elastic Beanstalk environment

And then just pass RDS Hostname username/password etc as variables to the environment

Ok, thanks

https://github.com/cloudposse/terraform-aws-rds so something like that and then you can use the outputs
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
2018-10-15

First let me say thanks for contributing great terraform modules. I have a question though. I need a simple Redis ElastiCache cluster. I have looked at https://github.com/cloudposse/terraform-aws-elasticache-redis but this seems to be geared towards clusters with replication such as https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.Redis.Groups.html#Replication.Redis.Groups.Cluster. Is this correct, or are my assumptions wrong? Thanks in advance for any feedback.
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
Each shard in a replication group has a single read/write primary node and up to 5 read-only replica nodes.

hey @Miguel Mendez

yes the module does not support https://www.terraform.io/docs/providers/aws/r/elasticache_cluster.html
Provides an ElastiCache Cluster resource.

Provides an ElastiCache Replication Group resource.
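For reference, a minimal single-node cluster with the plain resource might look like this (id, node type, and parameter group are illustrative):

resource "aws_elasticache_cluster" "default" {
  cluster_id           = "example-redis" # illustrative
  engine               = "redis"
  node_type            = "cache.t2.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis3.2"
  port                 = 6379
}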

OK, any plans supporting it?

Most of our modules are driven either by client engagements or community contributions. As of right now, we don’t have any clients asking for it :-)

We are really good about promptly reviewing PRs and accept nearly all contributions. If you’re interested, please take a stab at it!

OK great. I will create then a module and submit a PR. Thanks once again for your contributions.

Awesome @Miguel Mendez ! Thanks for taking a look at it

Hey all, I’m trying to get https://github.com/cloudposse/terraform-aws-eks-cluster to work. No nodes are showing up when i do kubectl get nodes. The userdata log on the instance looks fine.
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

@nicgrayson did @Andriy Knysh (Cloud Posse) share the reference implementation?

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

I’m using tf from the readme

^ what we deployed and tested

you need to apply the ConfigMap for the worker nodes to join the cluster

you can apply it manually, or using https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/kubectl.tf
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

(EKS does not do it automatically)

ah okay thanks

Worked like a charm

nice

let us know if any issues
2018-10-16

Hi all. I’m trying to create RDS instance using https://github.com/cloudposse/terraform-aws-rds
module "elastic_beanstalk_rds" {
source = "git::<https://github.com/cloudposse/terraform-aws-rds.git?ref=master>"
namespace = "${var.namespace}"
stage = "${var.environment}"
name = "${var.user_account_name}"
dns_zone_id = "${var.parent_zone_id}"
host_name = "db"
security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
database_name = "app"
database_user = "admin"
database_password = "password"
database_port = 5432
multi_az = "false"
storage_type = "gp2"
allocated_storage = "5"
engine = "postgresql"
engine_version = "9.6.6"
instance_class = "db.t2.micro"
db_parameter_group = "default.postgres9.6"
#parameter_group_name = "default.postgres9.6"
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

But getting this error:
module.elastic_beanstalk_rds.aws_db_instance.default: Resource 'aws_db_parameter_group.default' does not have attribute 'name' for variable 'aws_db_parameter_group.default.*.name'

what am I doing wrong?

try passing a list instead

see if that works, unfamiliar with the module but might point you in the right direction if so

er hm, likely from the commented out parameter_group_name

looks like it’s checking for the length of that here: https://github.com/cloudposse/terraform-aws-rds/blob/master/main.tf#L54
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

hey @bober2000 and @jarv

if parameter_group_name is not provided, the default is "" (empty string)

it looks like a race condition

oh, hm didn’t see that.. ~so guess just a depends_on should prevent that as well~ ah, ignore this suggestion probably, think I need more sleep

looks like it’s the ‘famous’ issue with TF counts
after the latest release (count expressions have been changed)

@bober2000 can you try the previous release: source = "git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.4.0"

that one was working for a long time

and we’ll look into the issue with the latest release 0.4.1

also, don’t use master in your code (git::https://github.com/cloudposse/terraform-aws-rds.git?ref=master), always pin to a release (for all modules). Modules get changed and there is a possibility of regression
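i.e., something like this (substitute whatever release you’ve tested against):

module "elastic_beanstalk_rds" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.4.0"
  # ... same inputs as above ...
}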

thanks for tips - will try

does anyone have an example for using the event selector in the cloudtrail module? https://registry.terraform.io/modules/cloudposse/cloudtrail/aws/0.5.0?tab=inputs trying to capture all S3 events https://www.terraform.io/docs/providers/aws/r/cloudtrail.html#event_selector
Provides a CloudTrail resource.

@shaiss That doc references the Terraform “cloudtrail” resource here:
event_selector
Description: Specifies an event selector for enabling data event logging, It needs to be a list of map values. See: <https://www.terraform.io/docs/providers/aws/r/cloudtrail.html> for details on this map variable
Default: []
I happen to be using that Terraform resource and am using the one from this example: https://www.terraform.io/docs/providers/aws/r/cloudtrail.html#logging-all-s3-bucket-object-events
Provides a CloudTrail resource.

@shaiss (It’s possible you already know this and that I’ve been no help at all. Hopefully not the case)

thanks @markmutti

@shaiss let us know if it’s working for you

@markmutti I’ll chk your link. thx @Andriy Knysh (Cloud Posse) I stepped away for lunch. Belly full, I’m now ready to get back to coding

@markmutti so I get the example of using the map that’s listed in your link IF I was using the default/built in resource “aws_cloudtrail”. However, I’m trying to use the cloudposse cloudtrail module which wants a list for event_selector, not a map. This is where I’m banging my head

ie. event_selector = {[data_resource = “AWS::Object”]}
my syntax is probably wrong

@shaiss try this
event_selector = [{
  read_write_type           = "All"
  include_management_events = true

  data_resource {
    type   = "AWS::S3::Object"
    values = ["arn:aws:s3:::"]
  }
}]

i guess it was changed to be a list in the last commit so many event selectors could be specified

@Andriy Knysh (Cloud Posse) nope:

Error: module.cloudtrail.aws_cloudtrail.default: event_selector.0.data_resource: should be a list

what Change the event_selector var from a map to a list type. why It is currently a type map, that then gets put inside a list. Even though it is a null map by default, because it is embedded int…

if not working, try the previous release (which was tested)

ok, let me try that now

0.4.2 of the cloudtrail module still gives the same “should be a list” error

same w/ 0.4.1

module "cloudtrail" {
source = "cloudposse/cloudtrail/aws"
version = "0.4.2"
name = "${var.cloudtrailbucket["name"]}"
stage = "${var.cloudtrailbucket["stage"]}"
namespace = "${var.cloudtrailbucket["namespace"]}"
s3_bucket_name = "${module.cloudtrail-s3-bucket.bucket_id}"
event_selector = [{
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["arn:aws:s3:::"]
}
}]
}

the code above generates “should be a list” error?

yep

i’ll try to reproduce and fix the issue when I get some spare time

has anyone here used the terraform-aws-alb and terraform-aws-alb-ingress modules to configure a load balancer to redirect http to https? It’s an obvious option when adding an ingress rule via the AWS UI, but I’m lost finding the equivalent option in the cloudposse modules. I feel like I’m missing something simple…

@dan we don’t have this implemented right now. @Ryan Ryke wanted the same thing. Not sure what he ended up doing.

@Erik Osterman (Cloud Posse) good to know - thanks

Looks like support for this was released in August (after our first look at it)

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Support for fixed-response and redirect actions has been merged into master via #5430 and will release with version 1.33.0 of the AWS provider, later this week. (August 20)

@dan if you get a chance to implement it, we would love a PR

resource "aws_lb_listener" "front_end" {
load_balancer_arn = "${aws_lb.front_end.arn}"
port = "80"
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}

Provides a Load Balancer Listener resource.

@dan

@LeoGmad I think you guys might dig this too….

@LeoGmad has joined the channel

yeah, i would be interested in that. i can try and put a pr in this weekend. for now they are handling it inside the container

not totally sure how you would implement it though

@Ryan Ryke I think we would add this to the terraform-aws-alb module:

resource "aws_lb_listener" "https_redirect" {
  count = "${var.https_redirect_enabled == "true" ? 1 : 0}"
  # ... code from above
}

Somewhere here: https://github.com/cloudposse/terraform-aws-alb/blob/master/main.tf#L27
And then a ternary here to select the appropriate ARNs: https://github.com/cloudposse/terraform-aws-alb/blob/master/outputs.tf#L38
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb

ahh right something like that might make sense

(or maybe return 0 ARNs for HTTP if redirect enabled)

when is .12 coming out?

maybe hashiconf?

yea, that’s my expectation
2018-10-17

hey

quick one

let’s say I want to only deploy your beautiful jenkins terraform solution

what is a good practice to start?

copy the main.tf from the examples folder to the root of the project and run terraform init .. plan .. apply?

I wasn’t able to find that skimming the readme

thx

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

for now I created a deploy folder and run terraform init deploy/ etc.

hey @ff

hey

so I think you already have your project structure similar to this https://github.com/cloudposse/terraform-root-modules/tree/master/aws
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

each app/module should be in a separate folder

got it

ah I see

you add for example a jenkins folder

and copy our example in there

then cd to the folder and run terraform plan/apply

thanks that was the hint I was missing

back to work

1 sec

and pin all modules to a release

don’t use master

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

master can be changed/updated and there is a possibility to introduce regression

got it

this is awesome stuff … how did we do infrastructure back in the 2000s

or even further back

@ff That is terrifying to think about. We did it very badly.

lots of metal and cables

and sometimes even documentation

Oh man, and SSHing into a million machines to apply patches, config drift, you name it

or DRBD
2018-10-18

hey again… making some progress with the jenkins-terraform

but

ending up here


I already learned about the other issue with the empty value for the github auth token and added a - in the value as described in https://github.com/cloudposse/terraform-aws-jenkins/issues/11
Hi, I've just cloned the repo to test it and I'm following the doc, however, it is asking for more variables than it is described. My steps were: git clone terraform init terraform plan It …

but here is a dead end. I don’t see a name tag either. Please advise

It now worked - it was obvious after asking the question here - using another tag name and value (before: Terraform = “true”, after: Department = “abc”)

whatever

It only works once. Once the tag has been set, the next run fails.

this is my config - please help, I am at a dead end

hey @ff what exactly are you seeing? terraform apply first time works, but second time fails?

yes

if I change the tags, it works once again

by the way, just filed a pull request - it only worked with a personal access token instead of “-”

It just worked with a personal access token for me.

Aaand the next PR https://github.com/cloudposse/jenkins/pull/14
Updated to latest Jenkins version since there were a lot of notifications regarding security issues etc.

anyone have a good read replica module

rds read replica

@Ryan Ryke hrmmm I believe we recently made some fixes to support this use-case

Trying to remember who here was working on that

@Andriy Knysh (Cloud Posse) do you remember who?

Gladly?

Daren gave us the example code that they used, but it was to help someone else

Let me check

resource "aws_db_subnet_group" "replica" {
name = "replica"
subnet_ids = ["xxxxxxx", "xxxxxxx", "xxxxxx"]
}
resource "aws_kms_key" "repica" {
deletion_window_in_days = 10
enable_key_rotation = true
}
resource "aws_db_instance" "replica" {
identifier = "replica"
replicate_source_db = "${var.source_db_identifier}"
instance_class = "${var.instance_class}"
db_subnet_group_name = "${aws_db_subnet_group.replica.name}"
storage_type = "io1"
iops = 1000
monitoring_interval = "0"
port = 5432
kms_key_id = "${aws_kms_key.repica.arn}"
storage_encrypted = true
publicly_accessible = false
auto_minor_version_upgrade = true
allow_major_version_upgrade = true
skip_final_snapshot = true
}

Here’s the link to the discussion

Maybe some more juice there

right, so no module just a raw resource

@Erik Osterman (Cloud Posse) we helped Gladly with this module terraform-aws-rds-replica-vpc

they have it, but it’s not public

Yea, so IMO not sure it makes sense to have a module for it

This customer created a private module that does this by creating a vpc, subnet and RDS instance configured as a replica

But then that means that vpc should be used basically for nothing else

yes, that was mostly for cross-region replica

so more complex than prob needed

This is good for checking the box on PCI compliance

@Ryan Ryke can you provide the use-case you are solving?

@Ryan Ryke if you use Aurora, you don’t need all of that

they just want a read replica in prod

also true

i dont need a module for it

there really isn’t a whole lot of tf there

i modified @Andriy Knysh (Cloud Posse)’s sample and im done

10 minutes

replica for plain RDS is mostly useful if you need cross-region replication

agreed, they want to hit a separate endpoint for reporting

ok, then you need it

why not Aurora? (historical reasons?)

just comfort level for them

yea, but Aurora is faster

Terraform and encrypted cross-region read replicas used to be a pain IIRC - not sure if still the case

yea, this is much easier now. no jumping through hoops.


probably not anymore https://sweetops.slack.com/archives/CB6GHNLG0/p1536204881000100

Looks like resolved - https://github.com/terraform-providers/terraform-provider-aws/issues/518
This issue was originally opened by @gdowmont as hashicorp/terraform#11784. It was migrated here as part of the provider split. The original body of the issue is below. Hi, It would be great if ter…

thanks

is there a cloudposse e2e terraform module that just gets me an ecs cluster I can run some one-off (and scheduled, but not persistent) tasks on? looks like the most plausible public one is https://github.com/arminc/terraform-ecs
AWS ECS terraform module. Contribute to arminc/terraform-ecs development by creating an account on GitHub.

(or, ya know, start writing your own terraform :))

yes/no

we’ve taken a more decomposed approach to ECS fargate

do you require CI/CD?

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

this module does most of the heavy lifting

you give it a vpc and subnets, and it deploys a traditional “webapp”

it takes care of setting up the service task, task definition, codebuild, codepipeline, alb rules, autoscaling,etc

however, if you have something very simple - where you basically just want “heroku” but on AWS - there’s something else to consider

Yeah; I saw that one – it seems to do almost everything besides the cluster, and it’s the cluster I’m really after

the cluster is a 1-liner
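i.e., roughly (cluster name illustrative):

resource "aws_ecs_cluster" "default" {
  name = "example" # illustrative
}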

OTOH what I eventually want is a cron job running a container with some persistent storage, so maybe I should just run k8s instead

the cluster resource itself is a oneliner but the instances in it aren’t, right?

you still need an autoscaling group and a launch configuration etc

oh, we only deal with Fargate


heh that’ll do it – what do you do for persistence in fargate? iirc it doesn’t support docker volumes

just bind mounts

and IIRC you can’t make the bind mount just point at an EBS volume you control

yea, that’s still a drawback. we’re mostly a kubernetes shop.

in an “ideal world” you don’t need the durability in your containers and can offload that to an external backing service ala S3 with object storage

but i realize that’s not right for everything

have you seen this? https://github.com/jpignata/fargate

i mean, I can mostly use goofys I think, but it sounds easier to just use k8s + ebs storage in the pod

I’m using this for simple one-off stuff.

none of the apps that I’m currently using reallllly want a POSIX fs

it’s more of a key value store that maybe uses fopen

yea, goofys is a hack

plus it requires elevated capabilities for fuse

not sure if it will work on ECS Fargate

as a security person I don’t want fuse anywhere near my containers tbh

OK: you convinced me, time to deploy some k8s

we’ve only used goofys as a last resort

we have EKS modules too

@jsanchez

@jsanchez has joined the channel
2018-10-20

Hey guys, thanks for all the work you’ve put into the modules on github, it’s an awesome collection. I’m trying to build out a pipeline to deploy an ecs cluster, and the target group that is created seems to have no targets - I’m having trouble digging through and finding a reason that might cause it. Has anyone run into similar before? I’m using terraform-aws-alb, terraform-aws-ecs-web-app, and terraform-aws-ecs-alb-service-task

^ (semi) solved it myself - tasks were never working in the ecs cluster (unable to access my ECR to pull the image, unsure why), so there never were any targets to register. Womp.
2018-10-21

@George thanks so much!

Sounds like you figured it out

also, did you see our example buildspec.yaml?


I did, wasn’t quite sure how I would integrate it into what I was using

(Or if it replaced some or all of the components, aside from the vpc and subnets)

Hrm… I thought we had a full example somewhere

looking

here’s a more complete example that helped @Ryan Ryke get up and running. We really need to add this to our terraform-root-modules


I’ll check it when I get back to my machine, thanks for the heads up!

cool - just ping me if you’re stuck

Outputs define values that will be highlighted to the user when Terraform applies, and can be queried easily using the output command. Output usage is covered in more detail in the getting started guide. This page covers configuration syntax for outputs.

TIL:
output "sensitive" {
sensitive = true
value = VALUE
}

Note that this is mostly useful in the CI scenario as anyone with access to the state can always terraform output or read it directly.

yeah i can help if needed
2018-10-22

Found some weird behaviour and managed to file a new issue even with the current Github outage. https://github.com/cloudposse/terraform-aws-rds-cluster/issues/37

thanks for looking into it

@ff try without the availability_zones

It does not accept that - it’s a mandatory variable

been there

and with availability_zones = []

Hi, availability_zones is EC2 classic, I believe that the module and the examples will get better if EC2 classic support is dropped. The current examples are mixing EC2 Classic params with VPC para…

stupid me

tryin’

nope

availability_zones.#: "3" => "0" (forces new resource)
availability_zones.1126047633: "eu-central-1a" => "" (forces new resource)
availability_zones.2903539389: "eu-central-1c" => "" (forces new resource)
availability_zones.3658960427: "eu-central-1b" => "" (forces new resource)

and in turn the cluster nodes are also forced new resources

just remove everything, and create again without az’s

then it should work and keep working

Got ya. Testing.


first w/o terraform destroy

let’s see

thanks for the hint

sure np, wanted to do a quick pr, but GH is still suffering it seems

good old single points of failure

will report back in about 15mins when the “apply” has finished

haha

does not work

availability_zones.#: "3" => "0" (forces new resource)
availability_zones.1126047633: "eu-central-1a" => "" (forces new resource)
availability_zones.2903539389: "eu-central-1c" => "" (forces new resource)
availability_zones.3658960427: "eu-central-1b" => "" (forces new resource)

availability_zones = []

shall I use a terraform destroy and rebuild the environment?

but should not make a difference I think

so we should be able to actually drop the variable

because I assume that an empty bracket means something other than a non-existent variable

ah sorry man, i think you’re right

@Andriy Knysh (Cloud Posse) do you have thoughts on this

Let me check

for the time being I worked around it by using native terraform aws resources

nevertheless I thought it might be helpful to fix this for the community

Yea, I think we should just drop that variable from the module altogether

@ff if we do that, does it fix your problems?

I think so

We currently use 0.6 in prod (from before my time) and I’ve been tasked with upgrading to current. State files stored in s3. Anyone got any general guides or info sources on how to go about upgrading? I see projects like terraforming etc, and/or regeneration of state files.

terraform 0.6.x -> 0.11.x?

(“I’ve been tasked with” can be read as “hey I wanna do some terraform” “ok here” “no wait wha-”)

@Erik Osterman (Cloud Posse) yes

*0.7 actually, my bad

hrm… haven’t had to do that big of a jump

most important thing is to backup the state files so you have recourse

typically, terraform cli is great about upgrading (never downgrading) tf state

there maybe some syntax change, but you’ll be warned by terraform of those

objective should be to run a terraform plan and see no changes; however, sometimes terraform changes default values for things in which case you will see changes.

i think in the CHANGELOG for every release they publish an upgrade path (if necessary)

i would put together some notes after combing through those release notes that can be your run book

Hmm, so back up the state files, upgrade terraform binary, and terraform plan until it shows no errors?

yea, that’s all I can think of off the top of my head

@maarten @jamie any tips?

yo

@George is upgrading some legacy infra from 0.7.x -> 0.11.x; any words of wisdom?

Wait for 12?

HA

Lol

It comes with an upgrade script

good point

Oh, seriously?

ya

Lemme go investigate

the jump to 11 breaks a lot of “sloppy” code practices of older versions

Terraform is a tool for building, changing, and combining infrastructure safely and efficiently. - hashicorp/terraform

pre-release available!

11 stopped allowing things like output references that are lists when the lists are empty

and we spend half of our time working around that with ugly interpolations

11 also broke ‘count’, so that calculated count values aren’t allowed in modules

These things will likely still break in 12, but the fixes are much more elegant

in terraform 10 you can do

output "the_alb_name" {
  value = "${aws_ecs_service.app_with_lb_awsvpc.*.name}"
}

and if aws_ecs_service.app_with_lb_awsvpc wasn’t created… it calmly shows an empty value

without crying about it

in 11, if you want your output to work with an empty list from a resource, then it’s all:

output "the_alb_name" {
  value = "${join("",compact(concat(list(""),aws_ecs_service.app_with_lb_awsvpc.*.name)))}"
}

to get the same output without an error

Hmm, I’ll have to review the erm…state of the current configs we have. Unsure if my predecessor wrote anything that takes advantage of those features. Thanks for the explanation and heads up about 0.12!

Your predesessor will have taken advantage of them

since pre 11, that was just how it was done.. in all the terraform examples too

thanks @jamie! great summary

11:26 Pacific Daylight Time: We have resumed webhook delivery and will continue to monitor as we process the backlog of events. <- github

Ha, thanks. Sorry @George, I would give you more guidance on upgrading to 0.11, but 0.12 is a breaking change anyway and you might as well just rewrite your code once instead of twice.

You may also find there is very little to rewrite after the upgrade tool is out as well.


I added a #atlantis channel since we’re doing a lot more with it these days (related to runatlantis.io).

Terraform module to send transactional emails via an SMTP server (e.g. mailgun) - cloudposse/terraform-null-smtp-mail

hot off the press!

Any plans to include that emailcli into packages?

Nvm @Erik Osterman (Cloud Posse) just seen https://github.com/cloudposse/packages/pull/95
what Add 12-factor style email cli why Easily send emails from command line or geodesic shell

@vadzim can you review?
what Add 12-factor style email cli why Easily send emails from command line or geodesic shell

thanks @joshmyers!

actually, we have a PR for it already - checking status

Yea, we’ll get that merged tomorrow probably
2018-10-23

Given you can’t invoke modules with counts and TF isn’t an ideal language as yet, how do you see using/invoking that module? The example mentions creation of users, which I’d imagine is done as a list of vars. A module per user? I see other use cases but was interested in that as I have run into similar use case before.

i think we’re going to move away from the list-of-users model

and instead do one tf file per user

Agreed that would be more flexible at the moment although I’d hope 0.12 with hcl2 would improve that with rich data structures…

what Implement a terraform module that provisions an IAM user for humans with a login profile and encrypted password why Support operations by pull request to add users to AWS

i’m getting this setup for our demo on thursday

IAM user account management with #atlantis

Ahh, nice.

so, in this case, would invoke with each user.

I see how that all fits together now

Nice way of doing it

what i don’t like is all the vars I have to pass related to smtp

Modules all the way down

You could infer some of them if not otherwise provided, then again…HCL :(

Also, I’m on mobile right now.

heads up, v0.11.10 was just released and the download urls are different… they no longer have the v in the version token…

works:
• https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip

doesn’t work:
• https://releases.hashicorp.com/terraform/v0.11.10/terraform_v0.11.10_linux_amd64.zip

@loren ugh, thanks for the heads up

sucks for our packaging system

wonder if that was deliberate on their behalf

Seems like they’re saying that the prior working URLs were an accident? https://github.com/hashicorp/terraform/issues/19163#issuecomment-432310297
Terraform Version Terraform v0.11.9 + provider.archive v1.1.0 + provider.aws v1.41.0 + provider.null v1.0.0 + provider.random v2.0.0 + provider.template v1.0.0 + provider.tls v1.2.0 Expected Behavi…

Looks like we dodged the bullet on this one

Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

Nice. I don’t know how I managed to get the other URL into my configs. Blergh.

luke - maintainer of #atlantis is joining HashiCorp to work fulltime on the project (announced at HashiConf)

btw! if any of you are at hashiconf, reach out here! I know @antonbabenko is there

Yes, I am inside the keynote room now

We will be live streaming Mitchell Hashimoto and Armon Dadgar’s opening morning keynote on Tuesday, October 23rd. The live stream will start at 9:30am PST and end at 11:00am PST.

Hey Anton, how many people are there compared to Amsterdam ?


Terraform collaboration for everyone.

State Storage, Locking, and History. No more state files! Automatically store and access state remotely whenever and wherever you run Terraform. State access is automatically locked during Terraform operations. In the UI, view a history of changes to the state, who made them, and when.

Today at HashiConf 2018 in San Francisco, we are announcing major updates across our entire suite of open source and enterprise products. Our mission…

The Helm provider is used to deploy software packages in Kubernetes. The provider needs to be configured with the proper credentials before it can be used.

I just discovered tfenv - any good/bad experiences with this here? Seems like it is going to be useful with the new 0.12.x version coming up and working with “legacy” terraform.

isn’t that in ruby?

hate to install that runtime just to switch envs in terraform

bash

Terraform version manager. Contribute to Zordrak/tfenv development by creating an account on GitHub.

I think this is less of an issue when doing this all The Right Way™ (using container via geodesic)

haha

i think tfenv might be helpful initially

though i think there’s some way to specify the version compatibility

in fact things like this just reminds me I need to use geodesic everywhere

to support hcl 1 and 2

Upgrading to Terraform v0.12

i guess only one or the other will be supported

they recommend adding a version constraint on terraform

just released https://github.com/cloudposse/terraform-aws-iam-user to manage IAM user account for humans
Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

it supports automatically associating the user with a list of groups as well as password generation using pgp+keybase
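Roughly how invoking it might look (a sketch; input names are approximate, check the module’s README):

module "user" {
  source  = "git::https://github.com/cloudposse/terraform-aws-iam-user.git?ref=master" # pin to a release in practice
  name    = "jane"         # illustrative user
  pgp_key = "keybase:jane" # generated password is encrypted to this key
  groups  = ["admins"]     # illustrative group list
}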

yea so the problem that led me to tfenv - was working with multiple repos/modules with different terraform version constraints

also - i was used to the rbenv style of .terraform-version in the root

yea

yea, given that - i think it’s probably the best alternative

i can see where, in long-lived environments, managing the terraform versions will be essential, and keeping everything up to date a risk, perhaps

any project open on the new hcl migration? can probably help out with that if there’s a burn down list. (haven’t been following tf updates just know that was likely going to be released soonish)

or maybe just beta.. might try a couple just to dig into the new stuff. haven’t been deep in tf dev in a bit

@jarv that would be HUGE

heh yeah you guys have a ton of modules.. was managing 50+ (over time) private repos at previous employer, didn’t have a lot of breaking tf changes during that time but can’t imagine it’s easy

was very close to going with cloudformation instead after hearing so many 0.6/0.7 horror stories

yea, understand the temptation…

in the end, would have just been trading pros/cons

yeah still do a fair bit of cloudformation when it makes sense, don’t mind it. service catalog support is pretty interesting, also if I can reuse any of the aws supported projects without tweaking much that’s a good tradeoff in a tf shop imo

i think it’s still too early, but when the time comes, I’ll create a “Terraform Module Upgrade” project and add everything there.

will announce that as soon as it’s there.

I can also help out, but have a few modules of my own to do first.. but happy to join forces for problem solving ofc.

Thanks guys! we’re going to need the help

whoaaaa … @Erik Osterman (Cloud Posse) you see Atlantis team is joining Hashicorp?

yea! that’s both scary and exciting



ah thanks

but yea, relevant here too

i want to use atlantis for more than terraform

but this doesn’t bode well for that roadmap

well now there’s github actions

yea

waiting on invite

anyone already have access to github actions?

not yet - waiting as well

GitHub Actions vs. HashiCorp/Atlantis

~(possibly) Somewhat related:~ noticed systems manager added a wait for user input action. unfamiliar with ssm but was curious if something like terraform ci was one of the use cases for adding that. eh, not sure that makes sense.. unsure why I was thinking about that now

guess I was probably thinking ssm because it manages (just?) os state.

Shall we create a 0.12 channel so problems & solutions don’t get lost in other talks ?

I think it’s a good suggestion


Does anyone know of a way to provide multiple SSLCertificateArns for a beanstalk environment that is using an ALB? The name is SSLCertificateArns, which implies that you can specify multiple arns, but I have tried:

setting {
  namespace = "aws:elbv2:listener:443"
  name      = "SSLCertificateArns"
  value     = "<someArnForFirstCert>"
}
setting {
  namespace = "aws:elbv2:listener:443"
  name      = "SSLCertificateArns"
  value     = "<someArnForSecondCert>"
}

And also tried:

setting {
  namespace = "aws:elbv2:listener:443"
  name      = "SSLCertificateArns"
  value     = "<someArnForFirstCert>,<someArnForSecondCert>"
}

And:

setting {
  namespace = "aws:elbv2:listener:443"
  name      = "SSLCertificateArns"
  value     = ["<someArnForFirstCert>", "<someArnForSecondCert>"]
}

Neither of these approaches seems to work correctly, and I cannot find any other documentation on it other than: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbv2-listener
Configure globally available options for your Elastic Beanstalk environment.

Sorry - not sure how to do it, but if you figure it out and need to make some changes - we accept nearly all PRs

@Jeremy Looking at the Elastic Beanstalk Documentation it seems that Arns actually refer to just one ARN, maybe they wanted to be prepared future wise. Could you actually add multiple in the AWS Console ? As alternative you could create a new Certificate in ACM with support for multiple domain names, and have just one ARN.
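e.g., a minimal sketch of such a multi-domain certificate (domains illustrative); its single ARN could then be the SSLCertificateArns value:

resource "aws_acm_certificate" "default" {
  domain_name               = "example.com" # illustrative
  subject_alternative_names = ["www.example.com", "api.example.com"]
  validation_method         = "DNS"
}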

Terraform provider for checking out git repositories and making changes - Yelp/terraform-provider-gitfile

cool idea - too bad not maintained
2018-10-24

Hi, I am looking for information on how to source passwords from azure keyvault using remote-exec (terraform). Basically I will have to copy a properties file to the server, which I will source from git. I do not want the properties file to have sensitive information like secrets/passwords, so I would want to append passwords/secrets from keyvault in the azure platform to the file

or, using cloud-init - can I use a variable which will query keyvault?

@praveen Not able to use the native resource (https://www.terraform.io/docs/providers/azurerm/r/key_vault.html)?
Manages a Key Vault.

@Mark, yes I use key vault. But my question is if I can query keyvault by passing a keyvault query variable in remote-exec or cloud-init

@praveen what about giving the server access to that keyvault, and retrieve those values at boot ?

how can I do that? can I have an example if already done - I mean, is there any ref to git code?

I’m not an Azure expert, but maybe this can help you. https://github.com/sozercan/terraform-vm-keyvault/tree/ea67b8ca5eac82fd92bfe27f40bcf4ada565d93e
Microsoft Azure Linux VM created with Terraform that uses Azure Key Vault - sozercan/terraform-vm-keyvault

I will try this and let you know the result

Could I get some reviews on this PR? https://github.com/EasterSealsBayArea/terraform-aws-elastic-beanstalk-environment/pull/1
It is a fork not going to our master yet (using gitflow; will test/validate it internally from the develop branch with our projects then commit back). I didn’t want to go straight to the official yet without giving it some solid testing.
Problem Add health log streaming to https://github.com/EasterSealsBayArea/terraform-aws-elastic-beanstalk-environment. Ensure log rotate exists. AWS docs: https://docs.aws.amazon.com/elasticbeansta…

@johncblandii don’t see anything controversial with the PR

to rebuild the readme run

make init
make readme

i did the readme part but not init. will do that now. thx

oh, that’s the easier way to install everything.

oh…maybe this is needed to be resolved for a full rebuild?
/bin/bash: gomplate: command not found
make: *** [readme/build] Error 127

1 sec

oh, looks like we’re missing a dep

you can run:

make packages/install/gomplate

ahh

pushed

make readme/deps

i mistook docs/terraform.md for the readme without even checking the filename

good to know @Andriy Knysh (Cloud Posse)

PR looks good to me

cool

will get it tested on our stuff then PR it to your upstream

a few other things will come soon too

sidebar: this 0.12 stuff in TF will clean up the env vars (for/for_each loops) and settings (null values) tremendously
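
e.g. the repeated settings blocks should collapse to roughly this shape (a sketch of the announced 0.12 syntax, not valid in 0.11):
dynamic "setting" {
  for_each = var.env_vars

  content {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = setting.key
    value     = setting.value
  }
}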



@johncblandii if you want to help us update the beanstalk module (when the time comes), we’d be grateful

absolutely, @Erik Osterman (Cloud Posse). going to do more updates to help out


@johncblandii any reason you’re not using ECS or Kubernetes?

(we found beanstalk deployments to be more flakey in the long run, which is why we have moved to the other systems)

moving there. devops is a small team here so it comes down to time and we have the beanstalk stuff down


another quick and easy one https://github.com/EasterSealsBayArea/terraform-aws-elastic-beanstalk-environment/pull/2
Feature Some Beanstalk outputs were missing. Solution Added all available Beanstalk outputs.

2018-10-25

a bit confused about what to expect of Terraform when using terraform-aws-ecs-web-app. If a new task definition is created, should Terraform automatically redeploy the service with the new definition? I’m currently seeing the new definition show up, but just the old tasks remain running. I guess the question is, should aws ecs update-service be necessary if my terraform-aws-ecs-web-app is configured correctly?

the terraform-aws-ecs-web-app module is designed to work with CodeBuild/CodePipeline to automatically deploy changes merged to master.

this only works if you add a buildspec.yaml to your projects


so, to answer your question, calling aws ecs update-service is not needed if configured correctly

awesome, thanks for the quick help!

hope it helps!

How do you guys manage your multi-env settings? Currently we use terragrunt, but that’s going away.
I know we can do tfvar files, but is that the best way? I don’t want to end up doing tf plan -var-file=prod.tfvars and tf plan -var-file=prod.tfvars -var-file=uswest2.tfvars and so on and so forth to separate the vars for reuse across a lot of configs (4 app stages, multi-region, multi-account).
Thoughts?

hey @johncblandii, maybe you already saw that, but here is what we do:

Although there are many possible ways of doing that, we use the containers + ENV vars pattern. As you mentioned, template rendering is another pattern (as implemented in terragrunt).
We store the ENV vars in either AWS SSM (secrets) or in Dockerfiles (not secrets).
Here are more details:
- We have a collection of reusable TF modules https://github.com/cloudposse/terraform-root-modules. The modules have no identity; everything is configurable via ENV vars. (In other words, they don’t care where they will be deployed and how.)
- We deploy each stage (root, prod, staging, dev, testing) in a separate AWS account for security and better management.
- For each AWS account/stage (root, prod, staging, dev, testing), we have a GitHub repo which is a container (for which we use geodesic https://github.com/cloudposse/geodesic): https://github.com/cloudposse/root.cloudposse.co https://github.com/cloudposse/prod.cloudposse.co https://github.com/cloudposse/staging.cloudposse.co https://github.com/cloudposse/dev.cloudposse.co https://github.com/cloudposse/testing.cloudposse.co Non-secret ENV vars are defined in the Dockerfiles, e.g. https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L17 In other words, the account containers have identity defined via the ENV vars.
- https://github.com/cloudposse/terraform-root-modules is added to the containers https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L36
- Inside the containers, users assume IAM roles to access the corresponding AWS account and then provision TF modules.
- Inside the containers we use chamber (https://github.com/segmentio/chamber) to read secrets from SSM (per AWS account).
So when we run a container (e.g. prod), we already have all ENV vars set up, and we read all the secrets from the account SSM store.
An account/stage can be in any region (also specified via ENV var, e.g. https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L14)
Take a look at our docs for more details: https://docs.cloudposse.com/reference-architectures/ https://docs.cloudposse.com/reference-architectures/cold-start/ https://docs.cloudposse.com/reference-architectures/notes-on-multiple-aws-accounts/ https://docs.cloudposse.com/geodesic/

going to digest that a bit more but those *.cp repos are containers you run on prod or containers that deploy prod?

thx for the docs. i’ll digest those as well

containers that you run on your local computer or on CI/CD

inside the containers you run diff commands to deploy TF, Helm, Helmfiles etc.

the containers have all ENV vars setup for a particular env (account/stage)

ok, i thought that’s what I was reading. interesting approach

so to add new account/stage/env, you create a new GitHub repo with new container specific to that env

and define specific ENV vars, and in the Dockerfile copy specific TF modules and helmfiles etc.

good deal. hadn’t thought of that approach


(other than docker)

plus you have an immutable artifact that contains all the tools you need for that version of the infrastructure.

so we can use the same tools, processes we use to manage regular apps (e.g. nodejs apps, go apis, etc) with managing the infrastructure as code (terraform).

if you don’t mind me asking, why is terragrunt going away?

it is quite verbose when attempting to duplicate a project to another account or region

it is decent for the simple setup and useful for cascading tfvar values, but our directory structure is getting long in the tooth for 4 accounts and 1 region. if we go to 2 regions it’ll be unwieldy. I’m trying to get ahead of that curve

Okay - would be happy to jump on a zoom screen share and show you how we do it.

you use tg?

i’d love to do that, btw.

or you mean show the above stuff?

Both actually :-)


good deal. how much time should I select?

went w/ 60. we may not need it but just in case


2018-10-26

Hi all. I’m still trying to create/destroy beanstalk using
source = "git::<https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tag/0.5.0>"
Option
force_destroy = true
But getting this
1 error(s) occurred:
* module.dev_front_end.module.logs.aws_s3_bucket.default (destroy): 1 error(s) occurred:
* aws_s3_bucket.default: error deleting S3 Bucket (develop-dev-front-dev-vitalii-logs): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: 01B995B9AA71BAC0, host id: KwxTu/DhRRig0CtYmdq0qokvgEgCGDeiUAUB2b4yQna9hmnzWieVdtcSi8aGzg6oF4mk5JRff2s=

What is wrong ?

Was the bucket created with force destroy set to true?

(I’m reading https://github.com/hashicorp/terraform/issues/7854 )
Terraform Version Terraform v0.6.16 Affected Resource(s) aws_s3_bucket Terraform Configuration Files resource "aws_s3_bucket" "storage" { bucket = "storage.${var.dns_zone}&…

(Also https://stackoverflow.com/questions/49611774/aws-s3-bucket-delete-issue ) last comment mentions lifecycle policy, maybe check that too
I am deleting bucket from AWS S3 and versioning is enabled, but it’s showing this error: aws_s3_bucket.bucket: Error deleting S3 Bucket: BucketNotEmpty: The bucket you tried to delete is not emp…

yea if the bucket was created without force_destroy = true and it was added later, it will not be force destroyed

try to apply again and then destroy

or destroy manually and then apply with force destroy

but looks like the issue is still not solved or just does not work in some cases https://github.com/terraform-providers/terraform-provider-aws/issues/208
This issue was originally opened by @osterman as hashicorp/terraform#7854. It was migrated here as part of the provider split. The original body of the issue is below. Terraform Version Terraform v…

Yes bucket was created with force_destroy

Can I prevent creating bucket for logs ?

not in the current version. If you open a PR, we’ll review it

need to add a var elb_logs_enabled and then add count here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L990
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

resource "aws_s3_bucket" "elb_logs" {
count = "${var.elb_logs_ebnabled == "true" ? 1: 0}"
bucket = "${module.label.id}-logs"
acl = "private"
force_destroy = "${var.force_destroy}"
policy = "${data.aws_iam_policy_document.elb_logs.json}"
}

then update here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L557
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

and here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L561
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Configure globally available options for your Elastic Beanstalk environment.
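
for those call sites, the usual 0.11 trick is a splat + join so the reference degrades to an empty string when count = 0; the option namespace/name below are my guess at the access-log settings:
setting {
  namespace = "aws:elbv2:loadbalancer"
  name      = "AccessLogsS3Bucket"
  value     = "${join("", aws_s3_bucket.elb_logs.*.id)}"
}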

I’m fighting the beanstalk env with a silly error about the label. I passed in tags, didn’t pass tags, etc and it keeps giving me fits.
module.sc-api-env-active.module.elastic_beanstalk_environment.module.label.data.null_data_source.tags_as_list_of_maps: data.null_data_source.tags_as_list_of_maps: value of 'count' cannot be computed
Version:
source = "[email protected]:cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=0.5.0"

any thoughts?

ok, this is the problem https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L3
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

yeah

the latest versions of null-label added a lot of stuff, but it breaks in complex configurations

i saw the new stuff in there

it got complex pretty quickly

so we need to change it to ref=tags/0.3.3, which does not have all that stuff and was working

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

want to open a PR?

sure. which do you prefer? moving the version back or changing the source?

let’s use the latest, git::<https://github.com/cloudposse/terraform-terraform-label.git?ref=tags/0.1.6>
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

we’ll fix the null-label

(simplify it)

cool. PR incoming in a sec

testing

thanks

Problem The terraform-null-label grew a bit complex and is throwing errors when used: * module.sc-api-env-active.module.elastic_beanstalk_environment.module.label.data.null_data_source.tags_as_list…

tested?

about to test on my module

i tested in the example

Switched to source = "[email protected]:eastersealsbayarea/terraform-aws-elastic-beanstalk-environment.git?ref=53c5aa8"
Plan: 48 to add, 0 to change, 0 to destroy.

ok thanks, will merge

coolio

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

sweet. thx

new release https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/releases/tag/0.6.1
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

have you guys seen an attribute listed in the docs but erroring as an unavailable attribute?

docs clearly say description is an attribute, but it isn’t working when output: https://www.terraform.io/docs/providers/aws/r/elastic_beanstalk_environment.html#description-1
Provides an Elastic Beanstalk Environment Resource

hmm, we saw something like that for other modules, but not in EB

testing stuff locally

could be some cache issue. it works when i reference the local module

removed it

PR for merge to master: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/57
Features EasterSealsBayArea#3 EasterSealsBayArea#2 EasterSealsBayArea#1 Testing Update examples/complete/main.tf’s source to ../../ Plan it Verify the plan completes successfully

pushed a documentation fix

test results added


@johncblandii the PR looks good, thanks, just one comment

cool. checking

pushed

please run

terraform fmt

ugh…meant to do that

i turned off auto-format in VS

it made me lazy to formatting.

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

thanks
2018-10-27

If any gophers or terraform provider authors are around and have some free time - could use a review: https://github.com/terraform-providers/terraform-provider-pagerduty/pull/99
This fixes #97, instantiating scheduled actions if use_support_hours incident urgency type is set. It ensures that the pagerduty API calls will include an empty scheduled_actions in certain cases, …

whoa! terraform fmt github action: https://www.terraform.io/docs/github-actions/actions/fmt.html
Terraform by HashiCorp
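
for reference, the Actions beta configures workflows in HCL; a sketch (the uses ref and env var name are assumptions):
workflow "fmt on push" {
  on       = "push"
  resolves = ["terraform fmt"]
}

action "terraform fmt" {
  uses    = "hashicorp/terraform-github-actions/fmt@master"
  secrets = ["GITHUB_TOKEN"]
  env = {
    TF_ACTION_WORKING_DIR = "."
  }
}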

/cc @Andriy Knysh (Cloud Posse)
Terraform by HashiCorp

Haha, I knew that :). Thanks

I keep forgetting this is still in closed beta. just checked, I don’t have access yet - did you manage to get an invite?

No not yet

@sarkis the PR looks OK to me (as far as I can tell w/o testing anything)

Thanks for looking @Andriy Knysh (Cloud Posse)

this does exactly what atlantis does https://www.terraform.io/docs/github-actions/actions/plan.html
Terraform by HashiCorp
2018-10-29

Hi again. I’m still fighting with getting beanstalk and rds to work together. Need some help here please:
module "elastic_beanstalk_environment" {
  source    = "git::<https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tag/0.6.2>"
  namespace = "${var.namespace}"
  stage     = "${var.environment}"
  name      = "${var.user_account_name}"
  zone_id   = "${var.parent_zone_id}"
  app       = "${module.elastic_beanstalk_application.app_name}"

  # associate_public_ip_address = false
  instance_type           = "${var.instance_type}"
  autoscale_min           = 1
  autoscale_max           = 4
  autoscale_lower_bound   = 20
  autoscale_upper_bound   = 80
  updating_max_batch      = 1
  updating_min_in_service = 1
  wait_for_ready_timeout  = "20m"
  loadbalancer_type       = "application"
  vpc_id                  = "${module.vpc.vpc_id}"
  public_subnets          = "${module.subnets.public_subnet_ids}"
  private_subnets         = "${module.subnets.private_subnet_ids}"
  security_groups         = ["${module.vpc.vpc_default_security_group_id}"]
  solution_stack_name     = "64bit Amazon Linux 2018.03 v4.5.3 running Node.js"
  tier                    = "WebServer"
  force_destroy           = true
  keypair                 = "${aws_key_pair.dev_ssh_key.key_name}"
  ssh_listener_enabled    = true
  ssh_listener_port       = "22"
  ssh_source_restriction  = "0.0.0.0/0"
  http_listener_enabled   = true ## Enable port 80 (http)

  # instance_refresh_enabled = true ## Enable weekly instance replacement.
  update_level         = "minor" ## The highest level of update to apply with managed platform updates
  preferred_start_time = "Sun:10:00"
  rolling_update_type  = "Health"
  root_volume_size     = "10"
  root_volume_type     = "gp2"

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_USERNAME"
    value     = "${rds_instance.default.database_user}"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_PASSWORD"
    value     = "${rds_instance.default.database_password}"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_DATABASE"
    value     = "${rds_instance.default.name}"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_HOSTNAME"
    value     = "${rds_instance.default.instance_endpoint}"
  }
}

and RDS
module "rds_instance" {
source = "git::<https://github.com/cloudposse/terraform-aws-rds.git?ref=tag/0.4.1>"
namespace = "${var.namespace}"
stage = "${var.environment}"
name = "${var.user_account_name}-db"
dns_zone_id = "${var.parent_zone_id}"
host_name = "db"
dns_zone_id = "${var.parent_zone_id}"
security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
database_name = "app_db"
database_user = "dbuser"
database_password = "password"
database_port = 5432
multi_az = "false"
storage_type = "gp2"
allocated_storage = "5"
storage_encrypted = "false"
engine = "postgres"
engine_version = "9.6.6"
instance_class = "db.t2.micro"
db_parameter_group = "postgres9.6"
#parameter_group_name = "mysql-5-7"
publicly_accessible = "false"
subnet_ids = ["${module.subnets.public_subnet_ids}"]
vpc_id = "${module.vpc.vpc_id}"
auto_minor_version_upgrade = "true"
allow_major_version_upgrade = "false"
apply_immediately = "false"
maintenance_window = "Mon:03:00-Mon:04:00"
skip_final_snapshot = "true"
copy_tags_to_snapshot = "true"
backup_retention_period = 7
backup_window = "22:00-03:00"

Getting
Error: module 'elastic_beanstalk_environment': unknown resource 'rds_instance.default' referenced in variable rds_instance.default.database_user
Error: module 'elastic_beanstalk_environment': unknown resource 'rds_instance.default' referenced in variable rds_instance.default.database_password
Error: module 'elastic_beanstalk_environment': unknown resource 'rds_instance.default' referenced in variable rds_instance.default.instance_endpoint
Error: module 'elastic_beanstalk_environment': unknown resource 'rds_instance.default' referenced in variable rds_instance.default.name
Error: module "elastic_beanstalk_environment": "setting" is not a valid argument

what am I doing wrong?

hey @bober2000

so let’s see here

I’ve found that I missed value = "${module.rds_instance.database_user}"

first, fix the errors in referencing the module value = "${module.rds_instance.default.instance_endpoint}"

but now I’m getting only Error: module “elastic_beanstalk_environment”: “setting” is not a valid argument

(add module... in front)

ok, to provide ENV vars to the elastic beanstalk module, use it like this https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L49
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Let me try this

(setting is not a valid argument b/c it’s not exposed as a var in the current release of the module. We have some PRs and issues opened to do it; we’ll look into that since it would fix some other issues)

added
env_vars = "${
merge(
map("RDS_HOSTNAME","${module.rds_instance.instance_endpoint}",
"RDS_USERNAME","${module.rds_instance.database_user}",
"RDS_PASSWORD","${module.rds_instance.database_password}",
"RDS_DATABASE","${module.rds_instance.name}"
), env_vars
)
}"

Now getting Error: module ‘elastic_beanstalk_environment’: “database_user” is not a valid output for module “rds_instance” Error: module ‘elastic_beanstalk_environment’: “name” is not a valid output for module “rds_instance” Error: module ‘elastic_beanstalk_environment’: “database_password” is not a valid output for module “rds_instance”

Have a separate env_vars defined previously to set some app stuff

env_vars = "${
map(
"environment", "${var.environment}",
"namespace", "${var.namespace}",
"user", "${var.user_account_name}",
"API_HOST", "${var.api_host}",
...
)
}"

here are the outputs from the RDS module https://github.com/cloudposse/terraform-aws-rds/blob/master/outputs.tf
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Hm, then how could I get all the stuff needed to connect my app to the RDS DB?

database_user and database_password you already know when providing them here https://github.com/cloudposse/terraform-aws-rds/blob/master/variables.tf#L44
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

so you need user and password (which you know and you provide them to the RDS module and to the EB module as ENV vars)

then you need endpoint https://github.com/cloudposse/terraform-aws-rds/blob/master/outputs.tf#L11
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

My fault

Thanks for help - all clear now

nice

then in your app (e.g. NodeJS), you use those ENV vars, like this:

function getDbSettings() {
  return {
    host: process.env.DB_HOST,
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD
  };
}

so you need 4 ENV vars

env_vars = "${
map(
"DB_HOST", ""${module.rds_instance.instance_endpoint}",
"DB_NAME", "xxxxxxxx",
"DB_USER", "xxxxxxxxxx",
"DB_PASSWORD", "xxxxxxxxx",
)
}"

Quick question

could password be autogenerated ?

it could be, it’s outside the module anyway

and you can use any script to autogenerate it, even TF like this https://www.terraform.io/docs/providers/random/index.html
The Random provider is used to generate randomness.

How could I insert it in DB_PASSWORD?

Produces a random string of a length using alphanumeric characters and optionally special characters.

Thanks

resource "random_string" "password" {
length = 16
special = true
override_special = "/@\" "
}
module "rds_instance" {
source = "git::<https://github.com/cloudposse/terraform-aws-rds.git?ref=tag/0.4.1>"
namespace = "${var.namespace}"
stage = "${var.environment}"
name = "${var.user_account_name}-db"
dns_zone_id = "${var.parent_zone_id}"
host_name = "db"
dns_zone_id = "${var.parent_zone_id}"
security_group_ids = ["${module.vpc.vpc_default_security_group_id}"]
database_name = "app_db"
database_user = "dbuser"
database_password = "${random_string.password.result}"
....

And then use the same
"DB_PASSWORD", "${random_string.password.result}"
when setting the env vars?

Yes

Thanks for help. Will try to contribute to code next week

hi, I am a newbie at terraform with a good amount of AWS experience.. I want to know how to start scripting in terraform

Welcome @ALI

Probably take a look at some modules

thanks @Andriy Knysh (Cloud Posse)

yea I am looking at some of the modules .

What AWS resources do you want to script?

I want to script on VPC, EC2 ,Dynamo DB

for now

We have that :)

Give me a few minutes

sure!

Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

Terraform Module for providing a general EC2 instance provisioned by Ansible - cloudposse/terraform-aws-ec2-instance

Terraform Module for providing a EC2 instance capable of running admin tasks and provisioned by Ansible - cloudposse/terraform-aws-ec2-admin-server

Terraform Module for provisioning multiple general purpose EC2 hosts for stateful applications. - cloudposse/terraform-aws-ec2-instance-group

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

@ALI take a look at these modules

usage example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/backing-services/vpc.tf
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

At first it might be daunting but itll get easier

let us know if any questions

thanks @Andriy Knysh (Cloud Posse)

haha I will take that @pericdaniel


@ALI you’ve signed up for a master class in terraform

I bet @Erik Osterman (Cloud Posse), @Andriy Knysh (Cloud Posse) I will if I am stuck anywhere ..

How do we all handle modelling the infrastructure within accounts in multi-account architectures? Do you have a declarative file that lists all of it? Some of it? Or do you bundle it all per project and just rely on the knowledge of the team to know what’s live?

@Dombo are you referring to our reference architectures or asking generally?

Generally - but also reference arch if you want to talk in terms of that

I’m interested in how others handle this - I don’t see many mature IaC code bases

There are some things that are persistent and not tied to apps like management servers, bastions, VPCs, sec groups, IAM roles/policies/users

These are genreally per account

Agreed - this is how we deploy them

Then there are application-specific deployment dependencies

We have been deploying this stuff alongside the other platform-related services in the account repos. However, we’re in the early stages of using #atlantis to enable applications to have their own terraform/ folder, which defines their dependencies.

e.g. if a microservice needs an RDS database, it should be able to define it near the app itself, without defining it in the account repos.

I wonder if you treat them the same or differently?

Common modules folder + declarative master stack/.tf file per account + .tf file per project? Where do you track state in both cases? Other ways of tackling this?

So we publish all of our reference architectures here: https://cpco.io/reference-architectures

high-level, we have one repo (terraform-root-modules) which contains the “root level” terraform module invocations

then we have one repo per AWS account. This allows us to easily keep stages separate, but also reuse code between stages in the form of modules.

(@Andriy Knysh (Cloud Posse) re-share that thread)

Andriy explained it pretty well in this thread: https://sweetops.slack.com/archives/CB6GHNLG0/p1540514525000100
Although there are many possible ways of doing that, we use the containers + ENV vars pattern. As you mentioned, template rendering is another pattern (as implemented in terragrunt).
We store the ENV vars in either AWS SSM (secrets) or in Dockerfiles (not secrets).
Here are more details:
- We have a collection of reusable TF modules https://github.com/cloudposse/terraform-root-modules. The modules have no identity; everything is configurable via ENV vars. (In other words, they don’t care where they will be deployed and how.)
- We deploy each stage (root, prod, staging, dev, testing) in a separate AWS account for security and better management.
- For each AWS account/stage (root, prod, staging, dev, testing), we have a GitHub repo which is a container (for which we use geodesic https://github.com/cloudposse/geodesic): https://github.com/cloudposse/root.cloudposse.co https://github.com/cloudposse/prod.cloudposse.co https://github.com/cloudposse/staging.cloudposse.co https://github.com/cloudposse/dev.cloudposse.co https://github.com/cloudposse/testing.cloudposse.co Non-secret ENV vars are defined in the Dockerfiles, e.g. https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L17 In other words, the account containers have identity defined via the ENV vars.
- https://github.com/cloudposse/terraform-root-modules is added to the containers https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L36
- Inside the containers, users assume IAM roles to access the corresponding AWS account and then provision TF modules.
- Inside the containers we use chamber (https://github.com/segmentio/chamber) to read secrets from SSM (per AWS account).
So when we run a container (e.g. prod), we already have all ENV vars set up, and we read all the secrets from the account SSM store.
An account/stage can be in any region (also specified via ENV var, e.g. https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L14)
Take a look at our docs for more details: https://docs.cloudposse.com/reference-architectures/ https://docs.cloudposse.com/reference-architectures/cold-start/ https://docs.cloudposse.com/reference-architectures/notes-on-multiple-aws-accounts/ https://docs.cloudposse.com/geodesic/

thanks @Erik Osterman (Cloud Posse)

so yea, the main idea is when a user logs into an account from a geodesic module (let’s say staging), they can’t see and do any damage (even accidentally) to other accounts (root, prod, etc.) - completely separated

Of course - I also practice account separation, with about 15-20 accounts implemented in my org.
Some interesting patterns described in that thread. What do you do about provisioning IAM users/tfstate backing resources?

Bootstrap 0 if you will

If this is documented somewhere feel free to point me there

I know you guys aren’t just here to answer q’s

we’re not optimizing for the coldstart right now - as in one command to spin up 20 accounts

namely the problem is an order of operations that needs to be performed that terraform is not well suited for, especially if there are any cycles in the graph

some of it could be optimized using terragrunt, however, customers seldom if ever start over from scratch again from bootstrap 0

we also provision one state bucket per account, rather than a centralized state bucket

this is just sticking with our “share nothing” approach, which also convolutes the process of bootstrapping

Interesting choice regarding one bucket/dynamo table per account - is there a reason why?

share nothing

so for example, if you share the state bucket, there’s no way to stage changes to the state bucket without it affecting all stages at once

Hmmm good point

but by allowing each stage/account to have their own bucket, they are entirely standalone
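
concretely, each account repo then pins its own backend; something like this per account (all names are placeholders):
terraform {
  backend "s3" {
    bucket         = "example-prod-terraform-state"
    key            = "vpc/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "example-prod-terraform-state-lock"
    encrypt        = true
  }
}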

yea, and we do provision accounts and add users to roles

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Powerful way of doing it

Impressed - kudos

problem with it is terraform seems to be nudging people to a single shared state bucket

and using workspaces

we don’t use terraform workspaces

Yeah neither do I - not a fan for reasons

terraform is also coming out with “managed state as a service” free for all

curious to see how that works and fits in

Yeah that’s off the back of the atlantis acquisition I presume



i don’t think it’s related per se, but to your point atlantis is big on workspaces

and aligned therefore with their trajectory.

Doesn’t have to be though - pretty sure you can define the lifecycle as you wish with custom workflows and such

yea

we have it working without it

At which point it could complement your guys’ system quite nicely

are you using terragrunt?

Collaborative plan & release to certain stages

terragrunt at work

cool

atlantis + similar system to yours when consulting in Aus

have you seen our fork of atlantis?

Yeah I did

Just so you could add your own name?

hahaha

haha

actually, have zero interest in maintaining it

I wish I could hard fork alexa & google assistant for the same reason

just it’s very hard to get features accepted into atlantis right now

but luke hears our requests loud and clear and is working to incorporate them

Yeah I guess that’s them signalling the future of the project?

yea

My bet is EOL and roll internal to Hashi

heh

are you excited about GitHub actions?

Reasonably - lots of my customers are on Bitbucket/Gitlab/Self hosted stuff

I’m curious to see how it plays out in the community

Actions are a direct shot at the best monetised section of githubs partner ecosystem

yes, to a degree

or it could also be seen as a way of allowing them to have tighter integration with GitHub

i see actions evolving into something like Salesforce for GitHub

Yeah I’d be interested to see how that integration goes down

Good to meet some other people pushing the limits of the modern IAC toolchain

Even if you are on the other side of the world

thanks!! you’re among friends

Anyway it’s the middle of the day over here

Gotta get back to work

ttyl

@Dombo I forgot where you are located?

2018-10-30

Hi, if using terraform-aws-dynamic-subnets what Actions should I allow in policy to make it work? Creating separate user for terraform now and trying to limit access

Now getting
module.subnets.aws_eip.default[0]: aws_eip.default.0: UnauthorizedOperation: You are not authorized to perform this operation.
on apply

@bober2000 The same logic that applies to Terraform applies to general AWS usage. Limiting the Terraform user generally does not make things easier.

@maarten are there any recommendations to read? The idea is to give developers terraform files so they can spin up envs for their own use - I don’t want them to create or destroy something with admin access

sure, one moment

Account-level considerations, best practices, and high-level strategic guidance to help structure and manage multiple AWS accounts for security purposes

Most companies do this by adding AWS accounts for different purposes, like testing.

Thanks a lot.

Sure man, good luck

After reading those articles I see that I really need it…

this isn’t sweetops specific, but hoping someone here can give me some insight. I’m using resource “aws_iam_user_policy” and getting a limit error “Error putting IAM user policy CloudCheckr_RO: LimitExceeded: Maximum policy size of 2048” when running terraform apply. But I can create this policy just fine in the console

hey @shaiss

Hi @Andriy Knysh (Cloud Posse) looks like it’s an AWS limit https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html. odd that it works in the console though
Describes the maximum number, size, and name limits of entities such as users, groups, roles, policies, keys, certificates, and MFA devices in AWS Identity and Access Management (IAM).

you create exactly the same policy in the console?

it’s better not to attach policies to individual users

use groups instead

yes, I’m aware, this is a requirement of a customer

the limit for a group is higher

Or create a customer managed policy and attach that one.

yes, that will be my suggestion to them again, but they claim they don’t need to b/c they can create that policy in the console

yea, we can create everything in the console

Maybe Terraform has pre-flight checks which aren’t valid, or calculates differently from AWS.

Like AWS does not count spaces of the document and Terraform does.

IAM does not count white space when calculating the size of a policy against these limitations

seems like the policy is 6850 bytes, or 4230 if ignoring whitespace

so either way, it should technically fail since it’s over the 2048 limit

Where did you read that ?


so maybe you could break it into a few aws_iam_user_policy(s) @shaiss

@Andriy Knysh (Cloud Posse) yeah, that’s a good option!

or

take this as input for your iam_user_policy: "${replace(data.template_file.init.rendered, "/\s/", "")}"

@maarten sorry, not sure I’m following. atm, I’m going to try creating the policy as an IAM policy vs the iam_user_policy, then attaching it to the user, we’ll see if that works

sure, what ever works for you.

ha, that worked

I agree not the best way to do it, but it worked, and it’ll have to do for now

what did you try now ?

@maarten I created the user, created the IAM policy, then attached the IAM policy to the user. b/c it’s a generic IAM policy and not a user policy, that limit doesn’t apply

*apply
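
a sketch of that working shape, since managed policies have a larger size limit than inline user policies (the data source and user names are placeholders):
resource "aws_iam_policy" "readonly" {
  name   = "CloudCheckr_RO"
  policy = "${data.aws_iam_policy_document.readonly.json}"
}

resource "aws_iam_user_policy_attachment" "readonly" {
  user       = "${aws_iam_user.default.name}"
  policy_arn = "${aws_iam_policy.readonly.arn}"
}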

Anyone here using terraform w/ a marketplace ami? Seems like you have to use the web console to subscribe to the marketplace item first b4 you can call it from TF

Yes that’s correct. We use terraform with a pfSense firewall AMI from the marketplace

I believe we first had to activate the subscription

We didn’t attempt to automate market place subscriptions. This was more than a year ago. Not sure if it is possible.
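
once the subscription exists, the lookup side is plain Terraform; a sketch (the name filter is a placeholder):
data "aws_ami" "marketplace" {
  most_recent = true
  owners      = ["aws-marketplace"]

  filter {
    name   = "name"
    values = ["pfSense*"]
  }
}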

hi, any plans to add support for ssl negotiating policy to the terraform-aws-elastic-beanstalk-environment module? (https://www.terraform.io/docs/providers/aws/r/lb_ssl_negotiation_policy.html)
Provides a load balancer SSL negotiation policy, which allows an ELB to control which ciphers and protocols are supported during SSL negotiations between a client and a load balancer.

or to specify an existing aws policy?

@i5okie if you open a PR, we’ll promptly review

thanks

alrighty. was just wondering. thanks

thank you for pointing that out, nice addition to the module
2018-10-31

I have a question related to https://github.com/cloudposse/terraform-aws-route53-cluster-zone. How do you deal with the “production” stage + zones? Typically, I’d have a zone for “dev” (e.g. dev.example.com, containing e.g. api.dev.example.com), and have the prod zone on the apex (example.com containing e.g. api.example.com). I’m wondering whether you guys do something like creating a prod.example.com zone, with Alias in the parent / apex zone? Or how else do you deal with the prod stage to apex mapping?
Terraform module to easily define consistent cluster domains on Route53 (e.g. [prod.ourcompany.com](http://prod.ourcompany.com)
) - cloudposse/terraform-aws-route53-cluster-zone

(this is particularly relevant for public, customer-facing URLs in the prod zone, where you don’t want them to see prod. in every URL - like websites or public API endpoints)

p.s. I also typically have a “global” stage or something similar that will have my MX, SPF and DKIM records.

Hi all

Having problems on terraform init
Error downloading modules: Error loading modules: error downloading '<https://github.com/cloudposse/terraform-aws-s3-website.git?ref=tag/0.5.3>': /usr/bin/git exited with 1: error: pathspec 'tag/0.5.3' did not match any file(s) known to git.

How do I correctly set the revision?

source = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tag/0.3.7>"
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

or
source = "github.com/cloudposse/terraform-aws-dynamic-subnets/releases/tag/0.3.7"

@Kenny Inggs it depends on the use case and the customer. We usually have two cases here:

- Use subdomains for stages (prod.example.net, staging.example.net), and use an alias or CNAME from the public domain (e.g. example.com) to the prod stage (CNAME prod.example.net for example.com)

- The same as #1, but using diff TLDs for stages, e.g. [example.net](http://example.net) for prod and [example.qa](http://example.qa) for staging. Then a CNAME for [example.com](http://example.com) pointing to [example.net](http://example.net)

all MX, SPF and DKIM records are in the global/public domain
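
for the apex itself you’d need an ALIAS record, but for a www or api name a plain CNAME into the stage zone works; a sketch of pattern #1 (zone and names are placeholders):
resource "aws_route53_record" "www" {
  zone_id = "${var.public_zone_id}"
  name    = "www.example.com"
  type    = "CNAME"
  ttl     = "300"
  records = ["prod.example.net"]
}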

@bober2000 we always use tags like this https://github.com/cloudposse/terraform-aws-s3-website/blob/master/main.tf#L13
Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

Awesome. Thanks @Andriy Knysh (Cloud Posse)

np

@bober2000 are you missing git:: in front of <https://github.com/cloudposse/terraform-aws-s3-website?ref=tag/0.5.3> ?

if not, maybe some DNS or caching issues on your computer

@Andriy Knysh (Cloud Posse) according to https://www.terraform.io/docs/modules/sources.html#github, for GitHub the git:: prefix can be omitted
The source argument within a module block specifies the location of the source code of a child module.

about DNS or caching - we tried this on two PCs in Ukraine and in Germany

can you try adding git:: and test again?

It was there from the beginning



removed .terraform and tried again

get this

try tags/0.5.3 with an S not just tag?

<https://github.com/cloudposse/terraform-aws-s3-website?ref=tags/0.5.3>
Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

yup @bober2000 I just tried without the s and received the same error

Nice catch! @Andy you saved me!

ah yea :slightly_smiling_face: no s
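
for the archive, the form that works:
source = "git::<https://github.com/cloudposse/terraform-aws-s3-website.git?ref=tags/0.5.3>"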

thanks @Andy

just toying with you guys terraform for creating VPCs.. any suggestions on where to start?

hey @nukepuppy

Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

example here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/backing-services/vpc.tf
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

i been messing with dynamic subnets.. its neat.. though it insists on cutting up a VPC in an odd way

like a /24 vpc into 4 /28s instead of 4 /26s when giving 2 azs priv/pub was odd.. was trying to see how to force that if even possible

there are many ways of cutting up a VPC

im getting the gist of it.. really cool collection of stuff you guys got

we have a few diff modules for that

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

i love the uber high re-use of these

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

so i just want my /24 into 2 AZs priv/pub as /26s. what would be your recommendation on which one to try out?

actually.. terraform-aws-multi-az-subnets is one im trying now

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

you can specify any number of subnets you need

ooh so i assume if the value isnt set.. it uses AZ count as the subnet count maybe?

i didnt look that deep into it yet ..

yes, if not set, it uses all AZs

ah i had done this `availability_zones = ["us-east-1a", "us-east-1c"]`

and it did make 4 subnets.. but still cut up into /28s

making me only have 10ips in each hehe

try to set max_subnet_count

it should divide correctly
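
i.e. something like this (CIDR and AZs from this thread; treating max_subnet_count as the knob that controls subnet sizing is my reading of the module):
module "subnets" {
  source             = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.3.7>"
  availability_zones = ["us-east-1a", "us-east-1c"]
  cidr_block         = "10.147.223.0/24"
  max_subnet_count   = 2

  # plus the usual namespace/stage/name, vpc_id, and igw_id inputs
}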

or maybe this will be better for your use-case https://github.com/cloudposse/terraform-aws-multi-az-subnets#usage
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

hm still did /28s

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

since you provide CIDR blocks

is what im using as an example.. and modifying appropriately

im using the usage.. modified the AZ lists per pub/priv to just 2 AZs set.. az nat gw to 2

setup variables to be some of the usual stuff /namespace/app etc

cidr block set to a /24 but still get a /28 cut up

ok try to set this https://github.com/cloudposse/terraform-aws-multi-az-subnets/blob/master/variables.tf#L40
Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

private_az_subnet_ids = {
  "us-east-1a" = "subnet-xxxxx"
  "us-east-1c" = "subnet-xxxxx"
}
public_az_subnet_ids = {
  "us-east-1a" = "subnet-xxxxx"
  "us-east-1c" = "subnet-xxxxx"
}

let me have a look

oh hmm different format of max_subnets o.O

if you need any changes to the subnet modules to accommodate your requirements, please open a PR

oh hmm.. still cut into /28s despite having that var set

yeah ill have a look..

i mean terraform vpc there is bazillion templates

i just liked the idea of minimal go for re-use

its probably very useful for most normal larger VPCs but smaller ones may just be a bit too much?

yea it’s not easy to come up with a universal module to create subnets, too many possible ways of doing it

for our usage, the three modules were enough

yeah its doing 99% of everything id want it to do

except.. cut up the /24 into 4 /26s instead of /28s

the calculations are here

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

legit just trying to make this cut: http://jodies.de/ipcalc?host=10.147.223.0&mask1=24&mask2=26

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

(been long time since we reviewed that)

thanks again @Andriy Knysh (Cloud Posse)… ill re-evaluate some other time.. i have this working in terraform i built by hand.. just wanted to give something shinier a whirl

@nukepuppy outsider question, why do you want to have something smaller than a /24 when you have a /8 at your disposal?

yea that’s why we did not pay much attention to how /8 was divided

@maarten because when you own IP space that must be routable between things and use IPAM at enterprise level

smaller VPCs are created for specific purposes..

and multi- account (aws) strategies become a bigger thing

no one should or would cut up a /8 into 1 VPC in aws i hope and run a company out of it

use about 20-30 aws accounts for stag and 25ish for prod

and all have different use cases / purposes

and different peering requirements

so.. its always viable

but a /24 for an individual person.. to use in multiple subnets in a VPC

seems pretty much a normal use case to me

Wow, those are a lot of accounts per stack. Using /16 per vpc/account ourselves, I guess you have a lot of different teams/apps then

yes a lot… quite a lot

but in general wanted to just “can” as in soup.. the process of getting smaller stuff without everyone re-writing stuff ya know

all good .. very cool to see

i was just building a small VPC for a specific reason and had a /24 available to toy with

Thanks for explaining the use-case. I know someone who works at a company which makes route planning sw for cars .. the sheer size of different aws/azure accounts is just mindblowing.

oh right thats other thing the /8 has multi cloud uses etc etc.. so getting a small cut etc is usually something like a /20 for a team until its used up

and from that /20 you cut up what you can

but even then a class C VPC can host a bunch of things ya know

True, but on the other hand, if you divide a /8 into /16s you have 256 VPCs. If the company can’t fit inside 256 VPCs it’s maybe time to do things differently

well there is data centers using up a ton of the space too

all good though.. everyone’s got different use cases..

yep

@nukepuppy we can create another subnets module specifically for that use-case. want to try together?

@Andriy Knysh (Cloud Posse) for sure ! in a few days.. i still gotta get something finished up here.. gonna roll my manual made one for now and wrap that up

then id love to re-visit, as i feel every time someone needs to make a vpc here.. they basically re-invent the wheel

last two cents @nukepuppy: a /21 per VPC allows 6 /24 public+private subnets in 3 AZs, which is also quite economical and allows for enough growth within a VPC. And 8K VPCs out of the /8

i cut up a /20 into 7 /23s and 2 /24s for smaller things

the /23s usually used for rebuilding kops/eks clusters to test out

and the small /24s for things like permanent smaller infra / smaller management vpcs etc

Sounds like you have a huge platform, how much is terraformed ?

depends on diff teams and use cases

most is hybrid stuff.. some are heavily orchestrated etc… so it isnt all one stop shop for things

Beanstalk PR for logging abilities: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/59
Feature Support log settings. Solution Added support for the aws:elasticbeanstalk:hostmanager, aws:elasticbeanstalk:cloudwatch:logs, and aws:elasticbeanstalk:cloudwatch:logs:health options https://do…