#terraform (2019-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2019-09-02

sahil avatar

Hi, I have been using the resource aws_ami_from_instance to create AMI. The problem with this approach is that I cannot delete instance after creating the AMI. The instance is useless after this. So basically my workflow is as follows:

  1. Create an Instance
  2. Run some script inside the instance
  3. Create an AMI from the instance
  4. Terminate the instance

I have been recommended to use Packer for this, but the problem with Packer is that it doesn’t integrate well with Terraform (also, I’m passing a lot of variables to the scripts in step 2). Any suggestions please?
sahil avatar

@jaykm FYI

Nikola Velkovski avatar
Nikola Velkovski

@sahil I haven’t used Packer, but I am not sure why you would want to control Packer with Terraform.

Nikola Velkovski avatar
Nikola Velkovski

Packer should be run inside a pipeline-like process, CI/CD, etc…

1
sahil avatar

@Nikola Velkovski So the script I’m talking about in step 2 takes a lot of variables computed while running Terraform. In the case of aws_ami_from_instance, I can easily pass those variables into the bash script, but the same is not true for Packer + Terraform

Nikola Velkovski avatar
Nikola Velkovski

hmmm you might want to drop terraform for that and maybe stick to the aws cli, since blocking/maintaining a terraform state for baking AMIs doesn’t sound quite right.

Nikola Velkovski avatar
Nikola Velkovski

what kind of data are you computing with terraform? I am guessing IDs/ARNs of resources?

sahil avatar

@Nikola Velkovski Yes, ids and arns.

Nikola Velkovski avatar
Nikola Velkovski

that is easily doable with aws cli

Nikola Velkovski avatar
Nikola Velkovski

you are most probably using it for baking the AMI; the dependency on terraform just makes it more complex.

sahil avatar

I guess I’ll have to use aws cli instead. Thanks for your help.

Nikola Velkovski avatar
Nikola Velkovski

You are welcome. Usually I do not use terraform for things that are constantly changing, deploys, etc.

1
Nikola Velkovski avatar
Nikola Velkovski

It gets cumbersome pretty quickly.

oscar avatar

How come you aren’t using Packer to bake AMIs?

Maciek Strömich avatar
Maciek Strömich

—deleted because i’ve realized im stupid and can’t read —

maarten avatar
maarten

@sahil you can use terraform to setup codebuild with packer to setup your ami building pipeline.

After that you can use the aws_ami datasource to get the last built ami.
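A rough sketch of that aws_ami lookup (the name filter is a hypothetical placeholder; adjust it to match whatever naming your Packer pipeline produces):

```hcl
# Look up the most recent AMI produced by the CodeBuild + Packer pipeline.
# "my-app-*" is a made-up name pattern for illustration.
data "aws_ami" "latest_baked" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["my-app-*"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.latest_baked.id
  instance_type = "t3.micro"
}
```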

4
davidvasandani avatar
davidvasandani

@sahil what about building a base AMI and passing in the Terraform computed variables as part of the user_data script?
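A minimal sketch of that approach, assuming TF 0.12's templatefile() (the template path, variable names, and referenced resources are all made up for illustration):

```hcl
# Render a user_data script from Terraform-computed values at apply time,
# instead of baking them into the AMI.
resource "aws_instance" "app" {
  ami           = var.base_ami_id # pre-baked base AMI (hypothetical variable)
  instance_type = "t3.micro"

  # bucket_arn and queue_id would reference resources computed in this run
  user_data = templatefile("${path.module}/templates/bootstrap.sh.tpl", {
    bucket_arn = aws_s3_bucket.app.arn
    queue_id   = aws_sqs_queue.jobs.id
  })
}
```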

1
sahil avatar

@davidvasandani Actually that might work. Thanks!

sahil avatar

Thanks for your valuable input guys! Much appreciated!

1
davidvasandani avatar
davidvasandani

@sahil No problem! Building a base AMI in Packer that can be used in both staging and prod environments, with different vars loaded at boot via Terraform, makes testing much easier! Keep us updated with your progress, if you run into any issues, or with a success!

1
Cloud Posse avatar
Cloud Posse
04:04:04 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Sep 11, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-09-03

Brij S avatar

I’m not sure if this is a TF12 problem or not, but I made another module just recently and this seemed to work; however, providers and their aliases are not found by the module anymore:

Error: Provider configuration not present

To work with
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment its
original provider configuration at module.cicd-web.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment,
after which you can remove the provider configuration again.

I found this link: https://github.com/hashicorp/terraform/issues/21472 which states that providers need to be explicitly passed down to the module, which I tried, but it still doesn’t work

Module cannot find alias AWS provider in 0.12.0 · Issue #21472 · hashicorp/terraform

Hi, I'm having problems upgrading to 0.12.0. We're running in eu-west-1 but one of my modules requires a cloudfront certificate that is only available in us-east-1. The main terraform file …
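For reference, explicitly passing an aliased provider into a module looks roughly like this in 0.12 (the module name and alias come from the error above; the region, profile, and source path are guesses):

```hcl
provider "aws" {
  alias   = "nonprod"
  region  = "us-west-2" # hypothetical region
  profile = "nonprod"   # hypothetical profile
}

module "cicd-web" {
  source = "./modules/cicd-web" # hypothetical path

  # map the parent's aliased provider into the module
  providers = {
    aws.nonprod = aws.nonprod
  }
}
```

In 0.12 the module itself must also declare an empty proxy block, `provider "aws" { alias = "nonprod" }`, for the alias to be accepted.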

2019-09-04

Matt avatar

Cross posting this from r/terraform: https://old.reddit.com/r/Terraform/comments/czjnvq/analysis_paralysis_bootstrapping_a_new_terraform/ Anyone here have good examples for bootstrapping a clean parameterized Terraform deployment on AWS?

Analysis paralysis - bootstrapping a new Terraform environment.

I’m working on a personal project and hitting a bit of a wall. I’ve been using Terraform for a while but other than a few tiny environments, I’ve…

oscar avatar

@Matt is this your thread? If so #geodesic is a great tool that many of us will talk to you about

oscar avatar

It avoids Workspaces & wrappers like Terragrunt

Matt avatar

yes, that’s my thread @oscar

Matt avatar

I will take a look at Geodesic

oscar avatar

It is the SweetOps weekly hour session

oscar avatar

Great chance to get a demo and ask Qs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

#office-hours starting in 15 minutes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

#office-hours starting now! ask questions, get answers. free for everyone. https://zoom.us/j/508587304

Matt avatar

not sure I can make it @oscar, not this week

Matt avatar

but this is one of my major gripes about Terraform, which I generally like a lot

Matt avatar

otherwise

2019-09-05

Tum avatar

Sorry, about your modules: are they compatible with Terraform version 0.12+?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not all of them are converted to 0.12 yet (we are working on it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Tum avatar

May I help you convert them to HCL2?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can, thanks. We are also adding Codefresh instead of Travis, plus tests which are deployed to AWS using Codefresh pipelines (this complicates the task for you)

Jonathan Le avatar
Jonathan Le

https://github.com/terraform-providers/terraform-provider-aws/issues/9995

Add your thumbs up if that would be useful to you.

Amazon EKS Cluster OIDC Issuer URL · Issue #9995 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…


Julio Tain Sueiras avatar
Julio Tain Sueiras

I’ve been quite busy the past few weeks (going back to fixing terraform-lsp on the weekend); I was working on a nice research and production project

Julio Tain Sueiras avatar
Julio Tain Sueiras

the project being openstack on nomad

2
Alex Siegman avatar
Alex Siegman

Now I’m just curious what CloudKitty is

Julio Tain Sueiras avatar
Julio Tain Sueiras

Billing and Chargeback service of OpenStack

2019-09-06

Bruce avatar

Hey guys, I am looking for a way to have my AWS autoscaling group perform a shutdown script before scaling down. The only way I can find to do this is using lifecycle hooks > CloudWatch Events > Lambda > SSM. But this seems like quite a chain to string together. Any suggestions?

Jonathan Le avatar
Jonathan Le

the life cycle hook is probably the way to go, but you could try https://opensource.com/life/16/11/running-commands-shutdown-linux as well

How to run commands at shutdown on Linux

Linux and Unix systems have long made it pretty easy to run a command on boot. But as it turns out, running a command on shutdown is a little more complicated.

Jonathan Le avatar
Jonathan Le

i’m not sure what your use case is, how important it is to get the shutdown script to run appropriately, or if your OS is even Linux.

unless the OS goes bad, scale-in in an ASG will try to let the instance shut down gracefully; this would let units in /usr/lib/systemd/system-shutdown/ run. i’m not sure what the timeouts would be before a forceful termination by the ASG.

davidvasandani avatar
davidvasandani

K99runmycommandatshutdown from the link above works really well in both ASG’s and SpotFleet instances.

Bruce avatar

Thanks!

Bruce avatar

I will give it a crack and see if it is fit for purpose, as it has a lot fewer moving parts. Thanks for the assistance @Jonathan Le @davidvasandani.

Nikola Velkovski avatar
Nikola Velkovski

imho that’s the only way

Nikola Velkovski avatar
Nikola Velkovski

it is normally used for ECS/EC2 connection draining on scale-in (down)

Nikola Velkovski avatar
Nikola Velkovski

but in your case it might be even more complex since the script has to report success

Bruce avatar

Thanks @Nikola Velkovski

Nikola Velkovski avatar
Nikola Velkovski

you are welcome

Brij S avatar

was hoping someone with regex expertise can help me out here. I’m trying the following: ${replace(var.project, "/\\s$/", "")} where var.project is a string that will end in the letter s. I’m trying to strip the s at the end but I’m not having any luck. When I run this the s remains. Any ideas?

antonbabenko avatar
antonbabenko
Igor avatar

@Brij S Your regex is replacing \s, not s

Igor avatar

Try ${replace(var.project, "/s$/", "")}

Igor avatar

The other approach, to @antonbabenko point, is to match the whole string: ${replace(var.project, "/^(.*)s$/", "$1")}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or just use substr("str", 0, length("str") - 1) without messing up with regex (https://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/)

2
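Both approaches from this thread, side by side (the project name is a made-up example ending in "s"):

```hcl
locals {
  project = "widgets" # hypothetical value ending in "s"

  # regex approach: strip a literal trailing "s"
  via_regex = replace(local.project, "/s$/", "")

  # substr approach: drop the last character, no regex involved
  via_substr = substr(local.project, 0, length(local.project) - 1)
}

# both yield "widget"
```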

2019-09-09

Michał Czeraszkiewicz avatar
Michał Czeraszkiewicz

Hi, how can I reference resources created with for_each? Below is an example of what I’m trying to accomplish:

locals {
  users = ["user1", "user2"]
}

resource "aws_iam_user" "this" {
  for_each = toset(local.users)
  name     = "${each.value}"
}

resource "aws_iam_access_key" "this" {
  for_each = toset(local.users)

  user = # Reference above created users
}
maarten avatar
maarten

Hi @Michał Czeraszkiewicz

    user = aws_iam_user.this[each.key].name
Michał Czeraszkiewicz avatar
Michał Czeraszkiewicz

@maarten thx thumbsup_all
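Putting maarten’s answer back into the original snippet, the second resource becomes:

```hcl
resource "aws_iam_access_key" "this" {
  for_each = toset(local.users)

  # index the map of users created by for_each with the same key
  user = aws_iam_user.this[each.key].name
}
```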

Marcio Rodrigues avatar
Marcio Rodrigues

Hello

Marcio Rodrigues avatar
Marcio Rodrigues

is terragrunt considered a “best practice” tool to be using?

sarkis avatar

I don’t know if I consider it a “best practice” tool… most of the must-have features of terragrunt (e.g. state locking during apply) have made it into terraform. My answer would be different if this had been asked a year or so ago

Marcio Rodrigues avatar
Marcio Rodrigues

just asking because I previously worked in a place where we had one terraform repo per environment (dev, prod, staging)

Marcio Rodrigues avatar
Marcio Rodrigues

and we had to do a lot of repetitive work in each env

sarkis avatar

It could be used for a very specific style of writing TF to keep things DRY, though you can do this now with workspaces, as well… I’m a fan of this workflow: https://github.com/cochransj/tf_dynamic_environment_regions

cochransj/tf_dynamic_environment_regions

This repository is an example of how to use terraform workspaces to implement the same resource declarations across multiple aws accounts across multiple regions. It also shows how to have a data d…

sarkis avatar

Note: a lot of that assumes you are using TF 0.12+

Marcio Rodrigues avatar
Marcio Rodrigues

thanks

Marcio Rodrigues avatar
Marcio Rodrigues

another question

Marcio Rodrigues avatar
Marcio Rodrigues

imagine that i have an RDS instance in prod env, but i do not have it in dev env

Marcio Rodrigues avatar
Marcio Rodrigues

would I be able to accomplish this with terragrunt or workspaces?

sarkis avatar

sure

sarkis avatar

a conditional statement on count, which I believe is possible in 0.12; pseudo-code: IF terraform.workspace == “prod” THEN count = 1 ELSE count = 0 on the RDS resource
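In 0.12 syntax that pseudo-code would look something like this (the RDS arguments are trimmed-down placeholders):

```hcl
resource "aws_db_instance" "prod_only" {
  # create this instance only in the prod workspace
  count = terraform.workspace == "prod" ? 1 : 0

  identifier        = "app-prod" # hypothetical values below
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 20
}
```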

Fernando Torresan avatar
Fernando Torresan

When we’re talking about multiple accounts, teams, environments… for me, terragrunt has been totally necessary to keep my terraform code organized and without so much boilerplate code

Marcio Rodrigues avatar
Marcio Rodrigues

thx

Marcio Rodrigues avatar
Marcio Rodrigues

one last question

Marcio Rodrigues avatar
Marcio Rodrigues

i used to work with 1 big tfstate per environment

Marcio Rodrigues avatar
Marcio Rodrigues

is it a better approach to use multiple tfstates, one per resource group?

Marcio Rodrigues avatar
Marcio Rodrigues

so we can manage VPCs individually, EC2s individually, etc….

loren avatar

we like to split tfstate on multiple dimensions… such as team and stage and app and stateful/stateless…

1
loren avatar

though, “individually” is a bit relative… for many actions, you can use -target to restrict the scope of an action… splitting tfstate helps reduce the blast radius of accidents better IMO though

Marcio Rodrigues avatar
Marcio Rodrigues

nice

loren avatar

also see the recent thread/posts by @Erik Osterman (Cloud Posse) in #geodesic for another approach… https://sweetops.slack.com/archives/CB84E9V54/p1567187759027700

but this also is freggin scary. i think it’s optimizing for the wrong use-case where you start from scratch. i think it’s better to optimize for day to day operations and stability.

Marcio Rodrigues avatar
Marcio Rodrigues

each team has autonomy to apply without interfering with other teams’ infrastructure

Marcio Rodrigues avatar
Marcio Rodrigues

basically, I just started at a new company, nothing in IaC yet

Marcio Rodrigues avatar
Marcio Rodrigues

and i’m researching good strategies/architectures to start our environments

Marcio Rodrigues avatar
Marcio Rodrigues

and I’m thinking about splitting into multiple tfstates, because we have a project to make a disaster recovery plan. We should be able to recover only some portions of our infrastructure into another region

loren avatar

we use terragrunt because it is visual (hierarchy/tfstate by directory structure) and easy to comprehend. tf workspaces are less visible in that sense and IMO harder to “know” where you are working. geodesic is solving similar problems in another way entirely

Cloud Posse avatar
Cloud Posse
04:05:22 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Sep 18, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Brij S avatar

does anyone know how to get the function name of a created lambda? resource "aws_lambda_function" "s3_metadata" — can this resource be accessed via aws_lambda_function.s3_metadata.id?

Brij S avatar

the docs don’t make it apparent…

aaratn avatar

aws_lambda_function.s3_metadata.function_name

Julio Tain Sueiras avatar
Julio Tain Sueiras

since tomorrow is HashiConf

maarten avatar
maarten

is that a prediction or a question

Julio Tain Sueiras avatar
Julio Tain Sueiras

Prediction

2
Julio Tain Sueiras avatar
Julio Tain Sueiras

I am giving my prediction that maybe they will officially announce packer 2.0 with HCL2?

rohit avatar

how can I pass db_subnet_group_name to the aws_rds_cluster resource using a data object?

rohit avatar

currently i am trying to use

  db_subnet_group_name = "${element(data.my_state.networking.database_subnets,1)}"
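If the goal is to read values out of another state file, a terraform_remote_state data source is the usual route. A sketch, assuming an S3 backend (the bucket, key, and output names below are hypothetical, and in 0.12 outputs live under .outputs):

```hcl
data "terraform_remote_state" "networking" {
  backend = "s3"

  config = {
    bucket = "my-tf-state"                  # hypothetical bucket
    key    = "networking/terraform.tfstate" # hypothetical key
    region = "us-east-1"
  }
}

resource "aws_rds_cluster" "this" {
  cluster_identifier = "app"
  engine             = "aurora-postgresql"
  master_username    = "admin"
  master_password    = var.db_password # hypothetical variable

  # assumes the networking state exports this output
  db_subnet_group_name = data.terraform_remote_state.networking.outputs.database_subnet_group
}
```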

2019-09-10

rohit avatar

does anyone know how to read a subnet name from the state file?

sarkis avatar

@rohit doesn’t look like there is a data source for this yet: https://github.com/terraform-providers/terraform-provider-aws/pull/9525

data-source/aws_db_subnet_group: create aws_db_subnet_group data-source by maxenglander · Pull Request #9525 · terraform-providers/terraform-provider-aws

Adds a data source for aws_db_subnet_group. Used aws_db_instance as a model for this work. Currently only allows looking up exactly one database subnet group using name as the argument, although th…

rohit avatar

@sarkis thanks. I will try a different alternative then

sarkis avatar
HashiConf | Watch the Opening Keynote Live (attachment image)

Join us live as HashiCorp Founders Armon Dadgar and Mitchell Hashimoto deliver the opening keynote at HashiConf in Seattle, WA.

1
sarkis avatar

terraform plan is getting a cost estimation feature on TF Cloud. Interesting…

Igor avatar

I didn’t find any references to ECS Service Discovery in CP modules. Is it because everyone is running an alternative solution?

Igor avatar

For someone getting started with containers, and not having more than 3-4 services at the most, should I even bother with orchestration and/or sophisticated methods of service discovery?

Igor avatar

Or will ALB/ECS combo get the job done?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for that we usually deploy https://istio.io/docs/concepts/what-is-istio/ in the k8s cluster

What is Istio?

Introduces Istio, the problems it solves, its high-level architecture and design goals.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

don’t have anything in TF


Igor avatar

That’s what AWS AppMesh does, right? I wonder if that’s an overkill for my use case though.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, AppMesh should do similar things. We haven’t used it yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for 3 static services it might be overkill, but at the same time you gain experience and will be able to use it with tens of services

1
Callum Robertson avatar
Callum Robertson

Has anyone played with the new Terraform SaaS offering?

Callum Robertson avatar
Callum Robertson

Looks like TF cloud has hit GA

2019-09-11

oscar avatar

Not yet but took a read. Keen to hear someone’s experience & comparison to local Geodesic workflow / CI tools using Geodesic workflow / Atlantis

1
Callum Robertson avatar
Callum Robertson

definitely +1 on this. This is the workflow that we’ve just committed to, so keen on hearing people’s experiences!

Haydar Ciftci avatar
Haydar Ciftci

I have a hard time getting the cloudposse modules to work with the recent terraform version (v0.12.8). I feel like I’m missing something, any ideas?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make sure all modules you are using are converted to TF 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

For example, this one is now cloudposse/terraform-aws-alb

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Don’t know about all modules in terraform-aws-modules/....... (they are not CloudPosse’s)

Haydar Ciftci avatar
Haydar Ciftci

Yeah, so it is indeed an issue with the module implementation itself?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not implementation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the modules that are still in TF 0.11 syntax will not work in TF 0.12 (with a few small exceptions)

oscar avatar

Try:

  on .terraform/modules/alb_magento2/main.tf line 33, in resource "aws_security_group_rule" "http_ingress":
  33:   cidr_blocks       = [var.http_ingress_cidr_blocks]
oscar avatar

Change: remove the "${ and }" wrapping

oscar avatar

If that doesn’t work, try cidr_blocks = var.http_ingress_cidr_blocks, since it is already a list

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

#office-hours starting now! ask questions, get answers. free for everyone. https://zoom.us/j/508587304

2019-09-12

joshmyers avatar
joshmyers

How are folks doing multi region as far as Terraform goes…?

loren avatar

provider per region, pass the provider explicitly to each module/resource

loren avatar

these guys have the best reference i’ve seen for it, https://github.com/nozaq/terraform-aws-secure-baseline/blob/master/providers.tf

nozaq/terraform-aws-secure-baseline

Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations. - nozaq/terraform-aws-secure-baseline
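A minimal sketch of the provider-per-region pattern loren describes (the regions and module path are assumptions for illustration):

```hcl
provider "aws" {
  region = "eu-west-1" # primary region
}

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1" # e.g. for CloudFront certificates
}

# pass the aliased provider explicitly into a module
module "cert" {
  source = "./modules/acm-cert" # hypothetical path

  providers = {
    aws = aws.us_east_1
  }
}
```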

Nikola Velkovski avatar
Nikola Velkovski

Workspaces ?

joshmyers avatar
joshmyers

I’m more interested in things like what you do with the state file

loren avatar

is it the age-old question of one giant state, or many smaller states? i think either way it would be controlled by the backend config…

you can have a backend config with a credential that keeps it all in one region if it is one state, should work fine, even if the resources are in multiple regions

or a backend config per state where you apply some rationale/logic to where you want that state stored…

joshmyers avatar
joshmyers

I don’t think this is so simple. You can’t have state for multi regions all in a bucket in one of the regions

joshmyers avatar
joshmyers

if the region goes down, which may be the reason you went multi-region in the first place, now you can’t get to your TF state

loren avatar

why not?

loren avatar

that’s a different issue, not a technical limitation of tf

joshmyers avatar
joshmyers

I wasn’t talking specifically about restrictions by TF, I’m wondering how people are doing it in a sane way

joshmyers avatar
joshmyers

re a conversation I’ve just had with @Nikola Velkovski

loren avatar

would cross-region bucket replication be sufficient?

joshmyers avatar
joshmyers
loren avatar

set that up on your backend, then repoint your backend config in tf if you need to use another region

joshmyers avatar
joshmyers

Yeah that could get you out of a bit of a hole, but I don’t want to have to repoint backends etc

loren avatar

what is your backend? can you do consul or something in a cross-region way?

joshmyers avatar
joshmyers

S3

Nikola Velkovski avatar
Nikola Velkovski

This is what I found regarding remote state and workspaces

Nikola Velkovski avatar
Nikola Velkovski
Backend Type: remote - Terraform by HashiCorp

Terraform can store the state and run operations remotely, making it easier to version and work with in a team.

Nikola Velkovski avatar
Nikola Velkovski

sorry here it is for s3

Nikola Velkovski avatar
Nikola Velkovski
Backend Type: s3 - Terraform by HashiCorp

Terraform can store state remotely in S3 and lock that state with DynamoDB.

Nikola Velkovski avatar
Nikola Velkovski

hmm no mention of changing the bucket with workspaces

joshmyers avatar
joshmyers

IIRC you can’t use interpolation in the backend block

loren avatar

with s3, to avoid manually re-jiggering your backend, you would need to be managing the s3 endpoint rather explicitly, doing some kind of health check on the real endpoints and re-pointing things as necessary

loren avatar

and you may still hit problems when running tf, since you’d have to also be quite careful about targeting resources to avoid running against the downed region

1
joshmyers avatar
joshmyers

I haven’t yet seen a setup that actually addresses these problems. setting up multiple providers in the same state feels like half a solution, and one that will likely bite you when you need to reach for it

joshmyers avatar
joshmyers

It isn’t easy

loren avatar

yeah, if this is that big a concern, you may be best off confining a state to a single region as much as possible, and setting up your app accordingly (deploy independently to multiple regions)

loren avatar

still may need some coordination layer perhaps that your app states depend on, but now your cross-region blast radius is confined to just that resource

asmito avatar

i would recommend having a state for each region

1
joshmyers avatar
joshmyers

which goes into a state bucket for each said region

joshmyers avatar
joshmyers

replicate between each other maybe

asmito avatar

using one bucket with a different state path per region; doing that manually means having the tree below:

providers/aws
├── eu-east-1
│   ├── dev
│   ├── pre
│   ├── pro
│   └── qa
└── eu-west-1
    ├── dev
    ├── pre
    ├── pro
    └── qa

or you can use terraform workspaces

joshmyers avatar
joshmyers

terraform workspaces can’t be interpolated into backend config AFAICR

joshmyers avatar
joshmyers

I think one bucket isn’t ideal…

asmito avatar

for me one bucket seems ideal, and you can just play with paths inside it.

asmito avatar
joshmyers avatar
joshmyers

and if eu-west-1 goes down? you can’t provision in eu-west-1 OR any other region?

asmito avatar

s3 is a global service

joshmyers avatar
joshmyers

buckets are regional; I have definitely seen S3 in a region go down before (not often, but it has happened, and it was one of the drivers for going multi-region for me)

asmito avatar

Ah, my bad: S3 bucket names are unique globally, I was confused. Totally agree with you on that; spinning up a bucket for each region is ideal

davidvasandani avatar
davidvasandani

@joshmyers would Aurora Serverless Postgres as a TF backend solve this problem?

davidvasandani avatar
davidvasandani

I believe that if a region went down the DNS would just failover to the new promoted master in a new region.

1
davidvasandani avatar
davidvasandani

or leveraging Minio distributed across multiple regions (or even cloud providers!) https://dickingwithdocker.com/2019/02/terraform-s3-remote-state-with-minio-and-docker/

joshmyers avatar
joshmyers

Thanks @davidvasandani, will have a look!

davidvasandani avatar
davidvasandani

Let us know what you end up going with? I know at some point I’ll need to address a more robust TF backend.

Nikola Velkovski avatar
Nikola Velkovski

continuing the thread: in order to go multi-region/environment we can do something like this

Nikola Velkovski avatar
Nikola Velkovski
locals {
  environment = element(split("_", terraform.workspace), 1)
  region      = element(split("_", terraform.workspace), 0)

}

output "region" {
  value = local.region
}

output "environment" {
  value = local.environment
}
Nikola Velkovski avatar
Nikola Velkovski

and then the workspace should be set like

Nikola Velkovski avatar
Nikola Velkovski

eu-west-1_staging

Nikola Velkovski avatar
Nikola Velkovski

it’s a bit hacky but does the trick

joshmyers avatar
joshmyers

backends don’t allow interpolation, so you are gonna need some kind of wrapper to get different buckets per region without inputting vars etc

Nikola Velkovski avatar
Nikola Velkovski

yes it also doesn’t tackle the state problem

Nikola Velkovski avatar
Nikola Velkovski

but it sounds like you don’t want to put your state in an s3 bucket

Nikola Velkovski avatar
Nikola Velkovski

maybe other backends might work for you ?

joshmyers avatar
joshmyers

No, I think S3 is fine, but it needs to be region-specific and therefore named buckets, so I need some way to easily toggle the backend bucket too

joshmyers avatar
joshmyers

The Aurora Postgres idea is interesting, but a few things: it requires much more setup (automating that is possible, but a pain) and it requires credentials. It also doesn’t solve one of the problems we spoke about. State would be all good in the case of a regional failure, as DNS should flip over to the other region, but if you have a multi-region provider in a single run, you are going to have a hard time if one of those regions is down

joshmyers avatar
joshmyers

Half of your apply or so is gonna fail, potentially leaving you in an interesting state

1
davidvasandani avatar
davidvasandani

Really good point.

davidvasandani avatar
davidvasandani

Have you thought of ways to simulate this?

joshmyers avatar
joshmyers

Nope, and my guess is that when AWS breaks in such a way, all bets are off anyway, but moving onto a client where this is a major concern and wanted to know others feelings

1
joshmyers avatar
joshmyers

@Erik Osterman (Cloud Posse) any thoughts?

davidvasandani avatar
davidvasandani

I believe the plan is usually to decouple the infrastructure and application so that the application self-heals until the provider resolves the outage (i.e. don’t try to terraform while S3 is offline)

1
davidvasandani avatar
davidvasandani

but looking to hear Erik’s thoughts on this.

loren avatar

terragrunt is a wrapper that lets you use some interpolation in the backend config; it resolves it and constructs the init command for you :troll:

joshmyers avatar
joshmyers

heh, I know, that is about all I want out of it at this point! lol

davidvasandani avatar
davidvasandani
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Mike Whiting can you share how you are invoking the module?

Mike Whiting avatar
Mike Whiting
02:46:23 PM

@Mike Whiting has joined the channel

Mike Whiting avatar
Mike Whiting

@Nikola Velkovski yep that’s the one

Nikola Velkovski avatar
Nikola Velkovski

oh that would be me

Nikola Velkovski avatar
Nikola Velkovski

what a coincidence

Nikola Velkovski avatar
Nikola Velkovski

are you using terraform 0.12 by any chance ?

Mike Whiting avatar
Mike Whiting

yeah

Nikola Velkovski avatar
Nikola Velkovski

unfortunately it has not been ported yet

Nikola Velkovski avatar
Nikola Velkovski

Nikola Velkovski avatar
Nikola Velkovski

but I can dedicate some time and do it

Mike Whiting avatar
Mike Whiting

that would be awesome

Nikola Velkovski avatar
Nikola Velkovski

cool

Mike Whiting avatar
Mike Whiting

just to clarify…

Mike Whiting avatar
Mike Whiting

this will enable ec2 instances to log to cloudwatch events

Nikola Velkovski avatar
Nikola Velkovski

what do you mean by cloudwatch events

Nikola Velkovski avatar
Nikola Velkovski

CloudWatch Events are cron-like jobs

Mike Whiting avatar
Mike Whiting

ah ok

Nikola Velkovski avatar
Nikola Velkovski

it will add additional metrics to cloudwatch

Mike Whiting avatar
Mike Whiting

I just want to see logs from docker

Nikola Velkovski avatar
Nikola Velkovski

ah that’s not it.

Nikola Velkovski avatar
Nikola Velkovski

in order to see the logs

Nikola Velkovski avatar
Nikola Velkovski

from docker you’ll need to have:

  • the dockerized app to write to stdout
  • iam role for the ec2 machines to write to cloudwatch logs
  • a log group in cloudwatch logs
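Sketched in Terraform, the log group plus the awslogs log driver wiring might look like this (the names, region, and image are all hypothetical placeholders):

```hcl
resource "aws_cloudwatch_log_group" "app" {
  name              = "/ecs/my-app" # hypothetical name
  retention_in_days = 14
}

resource "aws_ecs_task_definition" "app" {
  family = "my-app"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "registry.gitlab.com/acme/my-app:latest" # hypothetical image
      memory    = 256
      essential = true
      # send container stdout/stderr to the CloudWatch log group above
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.app.name
          "awslogs-region"        = "eu-west-1" # hypothetical region
          "awslogs-stream-prefix" = "app"
        }
      }
    }
  ])
}
```

The EC2 instance profile still needs permission to write to CloudWatch Logs, as the list above says.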
Nikola Velkovski avatar
Nikola Velkovski

I think that should do it

Mike Whiting avatar
Mike Whiting

sounds good…

Mike Whiting avatar
Mike Whiting

however let me explain how I got here

Nikola Velkovski avatar
Nikola Velkovski

oh and this

Nikola Velkovski avatar
Nikola Velkovski
Using the awslogs Log Driver - Amazon ECS

You can configure the containers in your tasks to send log information to CloudWatch Logs. This allows you to view the logs from the containers in your Fargate tasks. This topic helps you get started using the awslogs log driver in your task definitions.

Mike Whiting avatar
Mike Whiting

I’m creating an aws_ecs_task_definition and I suspect the service is failing to start because the docker image resides in a GitLab image registry, and I imagine it’s not possible to use a docker image from somewhere where authentication isn’t through AWS

Mike Whiting avatar
Mike Whiting

but I was hoping to see evidence of that through some kind of logging

Nikola Velkovski avatar
Nikola Velkovski

if it’s ECS/EC2 then you can ssh into the machine and check the agent logs

Nikola Velkovski avatar
Nikola Velkovski

otherwise you’ll need to set up logging

Nikola Velkovski avatar
Nikola Velkovski

from experience the most usual problem is that the instances do not have internet

Nikola Velkovski avatar
Nikola Velkovski

you can try with a simple docker image

Nikola Velkovski avatar
Nikola Velkovski

e.g. nginx

Nikola Velkovski avatar
Nikola Velkovski

and see if it works

Mike Whiting avatar
Mike Whiting

if I use a vanilla docker image e.g. jenkins:lts which is available publicly then everything works

Nikola Velkovski avatar
Nikola Velkovski

so it’s not an internet issue

Nikola Velkovski avatar
Nikola Velkovski

you should see how to authenticate through docker with gitlab

Mike Whiting avatar
Mike Whiting

makes sense.. I suppose actually I just need to perform the authentication through the user_data field of aws_launch_configuration
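On ECS/EC2 it’s the agent, not the instance user, that pulls the image, so the usual approach is to feed registry credentials to the agent via /etc/ecs/ecs.config in user_data. A sketch; the cluster name, registry, and token are placeholders:

```shell
#!/bin/bash
# user_data sketch: register with the cluster and give the ECS agent
# credentials for a private registry (all values below are placeholders).
cat >> /etc/ecs/ecs.config <<'EOF'
ECS_CLUSTER=my-cluster
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"registry.gitlab.com":{"username":"deploy-token-user","password":"REDACTED","email":"none"}}
EOF
```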

Nikola Velkovski avatar
Nikola Velkovski

pretty much

Mike Whiting avatar
Mike Whiting

thanks.. that’s given me some direction

Nikola Velkovski avatar
Nikola Velkovski

@Erik Osterman (Cloud Posse) what’s the workflow in this case, should Mike create an issue ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya if he needs it now, the best bet is to fork and run terraform upgrade

Nikola Velkovski avatar
Nikola Velkovski

cool thanks

Hemanth avatar
Hemanth

hey guys, i am trying to create an ec2 (after taint’ing the existing ec2), attaching the ebs volume (using aws_volume_attachment), and using a user-data script in my tf to mount the volume (which is /home). i’m also trying to import some data from /home into the newly created instance. the problem is that many times /home is not mounted, and /var/log/cloud-init-output.log shows “No such file or directory” for the files i am trying to import. any thoughts on this ?

Hemanth avatar
Hemanth

^ hope that question is not confusing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


problem is many times the /home is not mounted

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is it never mounted, or just sometimes fails?

Hemanth avatar
Hemanth

well most of the time it is never mounted (9/10 times). i manually ssh into the instance and do a sudo mount -a and it mounts. i tried adding sudo mount -a to the user-data script itself – doesn’t help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

might be some race conditions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. something (EBS) is not ready yet, but the code tries to mount it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to add a delay for testing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or maybe there are some settings to wait

Hemanth avatar
Hemanth

i tried adding sleep 60 in the user-data script which didn’t work, OR should i add something to the terraform itself for the wait thingee ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, tf does not wait for random things, just for resources to be created (and not even for all of those in all cases)

Hemanth avatar
Hemanth

oh got it, will try some combinations of wait in my user-data script itself

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
How to make a bash script wait till a pendrive is mounted?

I have a bash script which has a line cd /run/media/Username/121C-E137/ this script is triggered as soon as the pen-drive is recognized by the CPU but this line should be executed only after the mo…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Check if directory mounted with bash

I am using mount -o bind /some/directory/here /foo/bar I want to check /foo/bar though with a bash script, and see if its been mounted? If not, then call the above mount command, else do somethin…

Hemanth avatar
Hemanth

still trying ways mentioned from ^ stackoverflows, came up with

if sudo blkid | grep /dev/xvdb > /dev/null; then
    sudo mount -a
else
    sleep 10
fi

any elegant approaches to make that in a loop ?
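One way to make that a loop is a small retry helper, so the script polls until the device appears instead of guessing a fixed delay (a sketch; the blkid/mount usage mirrors the snippet above):

```shell
# wait_for CMD TRIES: re-run CMD once per second until it succeeds or
# TRIES attempts are used up; returns 0 on success, 1 on timeout.
wait_for() {
  local cmd="$1" tries="$2"
  until eval "$cmd"; do
    tries=$((tries - 1))
    if [ "$tries" -le 0 ]; then
      return 1
    fi
    sleep 1
  done
}

# Usage against the snippet above (device name /dev/xvdb comes from it):
#   wait_for "sudo blkid | grep -q /dev/xvdb" 30 && sudo mount -a
```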

Hemanth avatar
Hemanth

update: right now adding a direct sleep 10 (without any loop) seems to have solved the problem

1
Brij S avatar

hey all, all of a sudden i’m getting this error when running terraform apply

Error: error validating provider credentials: error calling sts:GetCallerIdentity: NoCredentialProviders: no valid providers in chain. Deprecated.

I have no idea why this is happening. The only thing I did was add some credentials to my .aws/credentials file.

My providers look like this

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}
Brij S avatar

does anyone know what might be causing this?

sarkis avatar

@Brij S is it possible you mucked up your .aws/credentials INI format so it’s not being parsed correctly by the TF provider?
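For reference, the file is INI-style; a minimal well-formed ~/.aws/credentials with the two profiles from the snippet above (keys are placeholders):

```ini
[storesnonprod]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[storesprod]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```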

Brij S avatar

@sarkis, it seems when I add some old profiles back to the credentials file it works. But when I remove them i get the error

sarkis avatar

do your profiles depend on each other? i think there was something like source_profile i can’t remember the exact param

Brij S avatar

no

Brij S avatar

two separate profiles as you see in the snippet above

sarkis avatar

can you share your ~/.aws/credentials file in DM and redact sensitive data

Brij S avatar

sure

mpmsimo avatar
mpmsimo

Anyone using a tool like drifter or terraform-monitor-lambda to detect state drift?

mpmsimo avatar
mpmsimo

Any success or best practices for identifying and correcting Terraform changes over time?

mpmsimo avatar
mpmsimo
digirati-labs/drifter

Check for drift between Terraform definitions and deployed state. - digirati-labs/drifter

mpmsimo avatar
mpmsimo
futurice/terraform-monitor-lambda

Monitors a Terraform repository and reports on configuration drift: changes that are in the repo, but not in the deployed infra, or vice versa. Hooks up to dashboards and alerts via CloudWatch or I…

Steven avatar

While these can be useful in a small environment, they are supporting a problematic process and are not going to scale well

Steven avatar

If using microservices, there will be a state file per microservice. Let’s say you have a small environment with only 10 services and you have 3 environments. That’s 30 state files, plus a few more for environment infrastructure and a few for account infrastructure. This can grow fast

Steven avatar

Then there is the bad process that it is supporting: people making production changes from their local systems with possibly no testing or any audit tracking. A much better process would be to commit the change to a git repo and have that trigger the terraform run. This gets rid of the drift issue due to uncommitted changes. It also allows you to add testing and ensure it is run, as well as giving you an audit trail

3
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

master branch reflects what is deployed to prod, with all the history from the PRs. There should be no drift since what’s currently in master is deployed to prod with terraform/helm/helmfile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what we usually do to make and deploy a change to apps in k8s and serverless: create a new branch, make changes, open a PR, automatically (CI/CD) deploy the PR to unlimited staging so people could test it, approve the PR, merge the PR to master, cut a release which gets automatically deployed to staging or prod depending on release tag

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for infrastructure (using terraform): create new branch, open a PR, make changes, run terraform plan automatically, review the plan, approve the PR, run terraform apply, if everything is OK, merge the PR to master

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use atlantis for that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

2019-09-13

Nikola Velkovski avatar
Nikola Velkovski

@Brij S you can verify if the profiles are properly set with aws cli

Nikola Velkovski avatar
Nikola Velkovski

e.g. aws s3 ls --profile storesnonprod

Nikola Velkovski avatar
Nikola Velkovski

because terraform uses that

Maciek Strömich avatar
Maciek Strömich

or by AWS_PROFILE=profilename aws s3 ls

ciastek avatar
ciastek

I need something like the “random_string” resource, but with a custom command. So, execute a command only if the resource isn’t in the state yet (or was tainted), and use the command’s output as a value to put in the state. Any idea what kind of magic to use to achieve such a result?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you provide the param in var, use it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if not, use random string to generate it

ciastek avatar
ciastek

Thank you. Unfortunately it’s not the thing I look for. I need something like:

resource "somemagicresource" "pass" {
  command = "openssl rand -base64 12"
}
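For the specific case of generating a secret like `openssl rand`, the built-in random provider already has the desired lifecycle: the value is generated once, kept in state, and only regenerated when the resource is tainted. A sketch:

```hcl
resource "random_string" "pass" {
  length  = 16
  special = false
}

# random_string.pass.result holds the generated value;
# taint the resource to force a new one.
```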
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Provisioner: local-exec - Terraform by HashiCorp

The local-exec provisioner invokes a local executable after a resource is created. This invokes a process on the machine running Terraform, not on the resource. See the remote-exec provisioner to run commands on the resource.

ciastek avatar
ciastek

Unfortunately, provisioners don’t store any kind of result in the state.

ciastek avatar
ciastek

I’ll try to go with https://github.com/matti/terraform-shell-resource , but thanks for all the links provided

matti/terraform-shell-resource

Contribute to matti/terraform-shell-resource development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

oscar avatar

Have you guys seen this before?

Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
	status code: 409, request id: aaaa, host id: aaaa//bbbb+cxxx=
oscar avatar

It doesn’t exist on ANY of our accounts

oscar avatar

it’s a very, very specific and niche bucket name

oscar avatar

the chances someone else owns it are extremely slim

oscar avatar

I’m sure I saw this about 4-5 months ago, but it actually was created on one of our accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ve seen that happen when you create a resource (e.g. a bucket) without using the TF remote state. Then the state was lost on the local machine. Then TF tries to create it again, but it already exists in AWS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

check that you’re using remote state and not losing it

oscar avatar

Yeh that adds up with what potentially happened

oscar avatar

That local state file is long gone

oscar avatar

How can I recover the S3 bucket? :S

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to find it in AWS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and either destroy it manually in the console, or import it

oscar avatar

It isn’t there

oscar avatar

I’ve searched the account (it has no buckets) - new account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make sure you have permissions to see it (maybe it was created under diff permissions)

oscar avatar

Yeh I’ve checked sadly with Admin permissions in console

oscar avatar

It genuinely isn’t there, even got one of the IT guys to look

oscar avatar

I’ve opened a ticket with AWS but slow

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

S3 is global, so you need to check all your accounts, even those you don’t know about

oscar avatar

Wait so, it could be on a different aws account to that on which I ran terraform?!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it could be. I don’t remember if AWS shows the error Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it in this case
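One way to probe this from the CLI (bucket name is a placeholder): `head-bucket` distinguishes a name owned by another account from a genuinely free one.

```shell
aws s3api head-bucket --bucket xxx-terraform-state
# succeeds        -> the bucket exists and you can access it
# 403 (Forbidden) -> the name exists, but under a different account
# 404 (Not Found) -> the name is genuinely free
```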

oscar avatar

AWS console just shows name already in use

oscar avatar

when attempting to replicate but in the console

oscar avatar

aghhh

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, you created it before and lost the state (if you are saying that the chance is very slim that some other people used the same name)

oscar avatar

But surely even if I lost the state file

oscar avatar

the s3 bucket would be on the aws account

oscar avatar

btw the context is the backend module

oscar avatar

Weird it has happened again on another account

oscar avatar

it is in my local state file

oscar avatar

a resource

oscar avatar

but it isn’t on the console

oscar avatar

what is going on

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have to follow exact steps when provisioning the TF state backend because you don’t have the remote backend yet to store the state in

oscar avatar

Yeh no I know

oscar avatar

it was an accident

oscar avatar

I’m familiar with it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have to provision it first with local state, then update TF code to use remote S3 backend, then import the state
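As a sketch of that two-step (the bucket, key, and table names are placeholders):

```shell
# Step 1: backend block commented out, state is local
terraform init && terraform apply

# Step 2: add the S3 backend block...
#   terraform {
#     backend "s3" {
#       bucket         = "xxx-terraform-state"
#       key            = "state_storage/terraform.tfstate"
#       region         = "us-east-1"
#       dynamodb_table = "terraform-state-lock"
#     }
#   }
# ...then migrate the local state into it:
terraform init -force-copy
```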

oscar avatar

Probably the 18th account I’ve used your module on.. just something weird happened this time

oscar avatar

Ya, I do this:

run:
	# Apply Backend module using local State file
	direnv allow
	bash toggle-s3.sh local
	terraform init && terraform apply
	# Switch to S3 State storage
	bash toggle-s3.sh s3
	terraform init && terraform apply
oscar avatar

and my toggle-s3.sh script basically comments out the backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok

oscar avatar

It’s worked plenty time

oscar avatar

not sure what happened this time though

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess the bucket with that name exists for any reason (you created it, other people created it on diff account, or other people from diff orgs created it). Try to use a diff name

oscar avatar

No I think something weird is happening

oscar avatar

on a second account I’m getting this

oscar avatar
Error: Error in function call

  on .terraform/modules/terraform_state_backend/main.tf line 193, in data "template_file" "terraform_backend_config":
 193:       coalescelist(
 194:
 195:
 196:
    |----------------
    | aws_dynamodb_table.with_server_side_encryption is empty tuple
    | aws_dynamodb_table.without_server_side_encryption is empty tuple

Call to function "coalescelist" failed: no non-null arguments.
oscar avatar

I’ve followed the same pattern and commands as many accounts previously

oscar avatar

all version locked etc

oscar avatar

No clue why it isn’t having it today

oscar avatar

annnnnd its working again

oscar avatar

what the ?

oscar avatar

what theee

oscar avatar

that magic bucket thats there but not there?

oscar avatar

can see it on aws cli

oscar avatar

but not console

oscar avatar

whaat

oscar avatar

same permissions (iam role)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscar I think you mixed up TF versions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you use 0.11, use 0.11 state backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

same for 0.12

oscar avatar

Its aws

oscar avatar

look at htis

oscar avatar
 ✗ . (none) state_storage ⨠ aws s3 rm s3://xxx-terraform-state
-> Run 'init-terraform' to use this project
 ⧉  xxx
 ✗ . (none) state_storage ⨠ aws s3 ls
2019-09-13 14:20:17 xxx-terraform-state
oscar avatar

even after removing it, it is still there hahaha jeez

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or, another possibility: the aws provider was not pinned, got updated, and the new one has issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we had a few 0.11 modules busted after the aws provider was updated

oscar avatar

I think you nailed it Andriy

oscar avatar

managed to recover the state file via the cli

oscar avatar

Error: Failed to load state: Terraform 0.12.6 does not support state version 4, please update.

oscar avatar

that was released just this morning

oscar avatar

I wonder if I had the .terraform/modules directory already there

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the conclusion is, always pin everything, TF modules, TF version, providers, etc.

oscar avatar

Damn I have it pinned to major

oscar avatar

aws = “~> 2.24”
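For contrast, a fully pinned setup looks like this; a `~>` constraint still lets newer releases slip in on a fresh init:

```hcl
terraform {
  required_version = "= 0.12.6" # exact TF version
}

provider "aws" {
  version = "= 2.24.1" # exact pin; "~> 2.24" would allow 2.25, 2.26, ...
  region  = var.aws_region
}
```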

Brij S avatar

so I found out why my module was using different credentials. It was because I had a main.tf in my module with the following content:

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "ambassadorsnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "ambassadorsprod"
  alias   = "prod"
}

However, when I remove this main.tf file from the module and run tf plan with configuration that references this module, I get the following error:

To work with
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment its
original provider configuration at module.cicd-web.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment,
after which you can remove the provider configuration again.

I have a main.tf that is set up, so I’m not sure why I’m getting this error

oscar avatar

ominously similar to my situ

Todd Linnertz avatar
Todd Linnertz

I am looking at using the SweetOps s3_bucket module but I am not sure how to enable server access logging using the module. Does the module support enabling server access logging?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depending on the s3 module you want to use

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

1
jose.amengual avatar
jose.amengual

I’m having some issues with Terraform and the terraform-aws-rds-cluster module. I’m creating a Global cluster (I forked the cloudposse module and added one line), but this is not just related to global aurora clusters. The problem is that the cluster finishes creating but terraform for some reason keeps polling for status until it times out after 1 hour. This is what I see:

module.datamart_secondary_cluster.aws_rds_cluster.default[0]: Creation complete after 9m10s [id=example-stage-pepe1secondary]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Creating...
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Creating...
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Still creating... [10s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Still creating... [10s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Still creating... [20s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Still creating... [20s elapsed]
jose.amengual avatar
jose.amengual

that will continue for 1 hour…..

jose.amengual avatar
jose.amengual

and the console will show it as available

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you see that every time you provision, or did you just see it once?

jose.amengual avatar
jose.amengual

have you seen this before ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it’s only once, I’d say your session had expired

jose.amengual avatar
jose.amengual

it is pretty consistent

jose.amengual avatar
jose.amengual

the workaround was to create the secondary cluster with 0 instances

jose.amengual avatar
jose.amengual

then change it to two instances

jose.amengual avatar
jose.amengual

pretty much every time

jose.amengual avatar
jose.amengual

I mean I have not been able to successfully complete the creation of the cluster

Brij S avatar

has anyone used multiple providers for a module?

Brij S avatar

I’ve done this multiple times with success, but now I’m facing an issue where all resources are created in one account (provider) and not the other, and I’m not sure why

davidvasandani avatar
davidvasandani

@Brij S you’ll need to post some code or errors for us to help you diagnose.

jose.amengual avatar
jose.amengual

I had an issue like that yesterday, the name of the resource needs to be different and you need to pass the provider alias to every resource

Brij S avatar

in my /terraform/modules/cicd folder I’ve got a main.tf file with the following:

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}

in my /terraform/cicd/stores folder I’ve got a main.tf with the following:

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesnonprod"
  alias      = "nonprod"
}

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesprod"
  alias      = "prod"
}

and ive got a /terraform/cicd/stores/web.tf file Ive got

module "cicd-web" {
  source = "../../modules/cicd-web"

  providers = {
    aws.nonprod = "aws.nonprod"
    aws.prod    = "aws.prod"
  }
........

in all of my resources ive got either a provider = "aws.nonprod" or provider = "aws.prod" but they all get created in aws.nonprod

Brij S avatar

@davidvasandani ^

Brij S avatar

However, I realized that if I put profiles in /terraform/modules/cicd/main.tf then it works! However, that defeats the purpose of the module, since I’d want to use different profiles for different accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no difference between these providers

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are the same

Brij S avatar

thats a good point.. didnt notice that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they need to have some diff, e.g. region

Brij S avatar

but the region is the same as well

Brij S avatar

if i remove that main.tf from the module I get an error saying it needs it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they have to be diff otherwise why do you need them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "/Users/tf_user/.aws/creds"
  profile                 = "customprofile"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

diff region, or diff profile, or diff shared_credentials_file

Brij S avatar

right, I can add profile but if that lives in the module I cant reuse it

Brij S avatar

for another set of accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which tells the provider to use diff credentials from diff profile to access diff account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to add that

Brij S avatar

if I leave a profile in the main.tf in the module, then I can’t reuse the module

Brij S avatar

because another account will have a different profile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whatever you are saying you can’t reuse, does not make any diff for terraform

Brij S avatar

so in my module, `/terraform/modules/somemodule`, i have a main.tf which includes a profile which is used for account A

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you create a set of providers (they should differ by region or profile)

Brij S avatar

differ by profile, yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then for each module, you send a set of required providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and in each resource use the provider aliases

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no other way of doing it

Brij S avatar

wait, in the module, the main.tf if I put a profile in

Brij S avatar

how does the module become reusable if the profile is hardcoded for a certain account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module is reusable because you send it a list of providers (which can contain only one)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the module uses that provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

w/o knowing the provider details

Brij S avatar

yes I understand that

Brij S avatar

so that means I remove main.tf from my module?

Brij S avatar

(which causes errors)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure I understand that

Brij S avatar

ok let me explain

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you fix the error in main.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not remove it

Brij S avatar

in /terraform/modules/somemodule/main.tf I have:

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}

In /terraform/folder/main.tf I have:

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesnonprod"
  alias      = "nonprod"
}

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesprod"
  alias      = "prod"
}

In /terraform/folder/web.tf I have:

module "cicd-web" {
  source = "../../modules/somemodule"

  providers = {
    aws.nonprod = "aws.nonprod"
    aws.prod    = "aws.prod"
  }
Brij S avatar

that is how im using the providers

jose.amengual avatar
jose.amengual

can you have multiple providers in a module ?

loren avatar

you can. kinda need to when you want to implement a cross-account workflow, for things like vpc peering, resource shares, etc…

jose.amengual avatar
jose.amengual

I think you can but should you do it ?

Brij S avatar

if I remove /terraform/somemodule/main.tf I get this error:

Error: Provider configuration not present

To work with
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment its
original provider configuration at module.somemodule.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment,
after which you can remove the provider configuration again.
jose.amengual avatar
jose.amengual

in my case I instantiate the module twice one with one provider and one with the other

jose.amengual avatar
jose.amengual

look at this example

jose.amengual avatar
jose.amengual

they don’t create one resource within two providers

Brij S avatar

in my module I have multiple resources that have either provider = "aws.nonprod" or provider = "aws.prod"

jose.amengual avatar
jose.amengual

mmm, maybe move to a module that can take any provider, and then pass one provider to the module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, we are mixing up at least 4-5 diff concepts here

jose.amengual avatar
jose.amengual

sorry

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. @Brij S if you created resources using a provider, you can’t just remove it. Delete the resources, then remove the providers from main.tf, then re-apply again
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. @Brij S your providers must be different (that’s after you do #1). Otherwise TF uses just the first one since they are the same (that’s why everything gets created in just one account)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. @jose.amengual you create a module, but don’t hardcode any provider in it. You can send the provider(s) to it IF necessary
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  4. But in (almost) all cases, it’s not necessary. The only use-case where you need to send provider(s) to a module is when the module is designed in such a way that it creates resources in diff regions or in diff accounts (bad idea)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  5. Creating a module that creates resources in diff regions is OK (in this case you can send it a list of providers that differ by region)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  6. Creating a module that creates resources in diff accounts is a bad idea
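The pattern described in these points, sketched for TF 0.12: the module declares empty “proxy” provider blocks (alias only, no profile), and the root configuration supplies the real, differing ones (the profiles here are placeholders):

```hcl
# modules/cicd-web/main.tf: proxy configurations only, no credentials
provider "aws" {
  alias = "nonprod"
}

provider "aws" {
  alias = "prod"
}

# root main.tf: the real providers, differing by profile
provider "aws" {
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}

module "cicd-web" {
  source = "../../modules/cicd-web"

  providers = {
    aws.nonprod = aws.nonprod # 0.12: unquoted references
    aws.prod    = aws.prod
  }
}
```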
Brij S avatar

@Andriy Knysh (Cloud Posse) could I show you the problem Im having? I dont have any resources created but i’m still getting the error

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sounds like you have resources created

To work with
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment its
original provider configuration at module.somemodule.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
Brij S avatar

i just ran terraform destroy, no resources found

Brij S avatar

could we zoom possibly?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding #6 above: instead of thinking of creating modules that use providers for diff accounts, it’s better to create yourself an environment which allows you to log into diff accounts (by using diff profiles in ~/.aws, and even better by assuming roles)
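That last suggestion as a sketch: one set of base credentials, with each provider assuming a role in the target account (the account id and role name are placeholders):

```hcl
provider "aws" {
  region = var.aws_region
  alias  = "prod"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform" # placeholder
  }
}
```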

2019-09-15

SweetOps avatar
SweetOps
06:00:43 PM

Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.

James D. Bohrman avatar
James D. Bohrman

Hey guys! I’m looking for some advice on how to approach an issue. I’m trying to figure out a way to use Terraform to provision a Windows Server 2016 instance that will run this cloud prep tool once it’s provisioned. I want to do something with Packer down the line but right now I’m just trying to make an easy way to spin up cloud gaming rigs on AWS for myself.

Prep tool: https://github.com/jamesstringerparsec/Parsec-Cloud-Preparation-Tool

Andrew Jeffree avatar
Andrew Jeffree

Is what you’re after. There are plenty of examples out there on how to pass user-data to an ec2 instance in terraform.

James D. Bohrman avatar
James D. Bohrman
davidvasandani avatar
davidvasandani

@James D. Bohrman this link didn’t work for me.

2019-09-16

Bruce avatar

Hi @James D. Bohrman this might help “Deploying a Windows 2016 server AMI on AWS with Packer and Terraform. Part 1” by Bruce Dominguez https://link.medium.com/8hIu8JaK1Z

Deploying a Windows 2016 server AMI on AWS with Packer and Terraform. Part 1attachment image

Automating a deployment of a Windows 2016 Server on AWS should be easy right, after all deploying an ubuntu server with Packer and…

Bruce avatar

Does anyone have a good suggestion on creating a snapshot from a Rds database (that’s encrypted) and restoring it to a Dev/testing Env and doing some data scrubbing?

joshmyers avatar
joshmyers

Suggestions yes, any of them any good? Not so sure

joshmyers avatar
joshmyers

Have seen this done in several ways, none of which were particularly nice

joshmyers avatar
joshmyers

@Bruce https://github.com/hellofresh/klepto looked interesting in this space last time I checked

hellofresh/klepto

Klepto is a tool for copying and anonymising data. Contribute to hellofresh/klepto development by creating an account on GitHub.

joshmyers avatar
joshmyers

(probably not a discussion for this particular channel)

Bruce avatar

Thanks @joshmyers I will check it out.

Mike Whiting avatar
Mike Whiting

is anyone able to advise on aws_ecs_task_definition. If I specify multiple containers in the task definition file then neither of the containers come up.

Mike Whiting avatar
Mike Whiting

but if I have just one it works

joshmyers avatar
joshmyers

@Mike Whiting you are really going to need to post your instantiation of the Terraform resource or whatever. What you expected. What the actual error message is etc

Mike Whiting avatar
Mike Whiting

did you mean to @ me?

Mike Whiting avatar
Mike Whiting

these are the resources:

resource "aws_ecs_task_definition" "jenkins_simple_service" {

//  volume {
//    name      = "docker-socket"
//    host_path = "/var/run/docker.sock"
//  }

  volume {
    name      = "jenkins-data"
    host_path = "/home/ec2-user/data"
  }
  family                = "jenkins-simple-service"
  container_definitions = file("task-definitions/jenkins-gig.json")
}

resource "aws_ecs_service" "jenkins_simple_service" {
  name            = "jenkins-gig"
  cluster         = data.terraform_remote_state.ecs.outputs.staging_id
  task_definition = aws_ecs_task_definition.jenkins_simple_service.arn
  desired_count   = 1
  iam_role        = data.terraform_remote_state.ecs.outputs.service_role_id

  load_balancer {
    elb_name       = data.terraform_remote_state.ecs.outputs.simple_service_elb_id
    container_name = "jenkins-gig"
    container_port = 8080
  }
}
Mike Whiting avatar
Mike Whiting
[
  {
    "name": "jenkins-gig",
    "image": "my-image",
    "cpu": 0,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8000
      }
    ],
    "environment" : [
      {
        "name" : "VIRTUAL_HOST",
        "value" : "<host>"
      },
      {
        "name": "VIRTUAL_PORT",
        "value": "8080"
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "jenkins-data",
        "containerPath": "/var/jenkins_home",
        "readOnly": false
      }
    ]
  },
  {
    "name": "nginx-proxy",
    "image": "jwilder/nginx-proxy",
    "cpu": 0,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
Mike Whiting avatar
Mike Whiting

if I remove the nginx-proxy container from the definition then ecs-agent successfully pulls and launches the jenkins container but with it included nothing happens

Mike Whiting avatar
Mike Whiting

nb: ‘my-image’ is from a private registry and nginx-proxy is public

joshmyers avatar
joshmyers

Do you have any error events being logged?

joshmyers avatar
joshmyers

Are there creds for the private repo?

Mike Whiting avatar
Mike Whiting

I’m just observing the ecs-agent logs currently (within the instance)

Mike Whiting avatar
Mike Whiting

as I say, the container from the private image launches fine when I don’t specify the proxy container in the definition file.. i.e. one container object

joshmyers avatar
joshmyers

You hadn’t specified which one you can bring up on its own, or that one is in a private registry, at that point

joshmyers avatar
joshmyers

ECS agent logs should give you an idea

Mike Whiting avatar
Mike Whiting

I can bring up the jenkins container (private image) on its own

Mike Whiting avatar
Mike Whiting

when the nginx-proxy definition is present ecs-agent just sits idle

Mike Whiting avatar
Mike Whiting

does that make sense?

joshmyers avatar
joshmyers

yes

Mike Whiting avatar
Mike Whiting

what do you think I should try?

Mike Whiting avatar
Mike Whiting

starting to wonder if terraform is really for me if I can’t get help

Mike Whiting avatar
Mike Whiting

(from anywhere)

joshmyers avatar
joshmyers

Terraform is just making API calls for you

Mike Whiting avatar
Mike Whiting

yep

oscar avatar

The tags for this module are so confusing: https://github.com/cloudposse/terraform-aws-rds/releases

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

oscar avatar

I’ve been using 0.11 by mistake as I took the ‘latest’ tag

oscar avatar

but that’s actually just a hotfix

oscar avatar

the latest 0.12 tag is 0.10

oscar avatar

true I could have read the list and lesson learned, but had me stumped for a while as to why it wasn’t working!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@oscar don’t pin to master/latest, always pin to a release. In the module, TF 0.12 started with tag 0.10.0, but when we needed to add some features to TF 0.11 code, we created the tag 0.9.1 which is the latest tag, but not for TF 0.12 code

oscar avatar

Yes that’s what I mean

oscar avatar

a 0.11 tag is at the top of the tags list

oscar avatar

bamboozled me, logically I would have thought only 0.12 tags would be at the top of the ‘releases’ tab

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s how GitHub works

oscar avatar

so I had it pinned to a 0.11 until I realised what was going on

loren avatar

i don’t even see a 0.11 tag in there. there is a 0.11 branch

loren avatar

exactly

oscar avatar

0.9.1 is a TF 0.11 tag

loren avatar

oh you mean the 0.9.1 tag only supports tf 0.11

loren avatar

not that there is a 0.11 tag

oscar avatar

Aye

loren avatar

confusing

oscar avatar

bamboozles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not find a better way to support both code bases and tag them

oscar avatar

Haha its fine, I was just pointing out it is a bamboozle

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we started the TF 0.12 code with some tag and continue increasing it for 0.12

oscar avatar

It makes sense

loren avatar

what you are doing makes sense to me, releasing patch fixes on the 0.9 minor stream

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for 0.11, usually increase the last tag for 0.11 branch

oscar avatar

The lesson learned was ‘don’t just grab the top most tag’

loren avatar

would be cool if tf/go-getter supported more logic in the ref than an exact committish… a semver comparator (like ~>0.9) would be awesome

oscar avatar

tf/go-getter — what does this do?

loren avatar

terraform uses go-getter under the covers to retrieve modules specified by source

oscar avatar

I see, yeh that would be smart

loren avatar
hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter

oscar avatar

checks the versions.tf file and verifies compatibility

oscar avatar

@Andriy Knysh (Cloud Posse) I think I was doing the PR as you commented! https://github.com/cloudposse/terraform-aws-rds/pull/38

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

running automatic tests now, if ok will merge

oscar avatar

whereabouts are your tests?

oscar avatar

I couldn’t see them

oscar avatar

I noted Codefresh wasn’t in the PR either

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

oscar avatar

Oh I see. When I navigated the test/ directory it looked like an example

oscar avatar

but I realise now that examples_complete_test.go is related to the examples/ dir

oscar avatar

and that examples/ isn’t just documentation. Nice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
oscar avatar

Yah that’s some nice gitops

oscar avatar

I was expecting a trigger

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
oscar avatar

but that’s cooler

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it is a trigger, but we have to trigger it (for security when dealing with PRs from forks)

oscar avatar

Oh that makes sense actually

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

otherwise you could DDOS it

oscar avatar

Yeh

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

merged and released 0.11.0 (now you have that tag ) thanks

oscar avatar

woop, thanks

oscar avatar

Debate/Conversation:

“We should enable deletion_protection for production RDS”

https://www.terraform.io/docs/providers/aws/r/db_instance.html#deletion_protection

1
oscar avatar

For: anyone in console / terraform cannot accidentally delete (assuming IAM permissions are not super granular & TF is being operated manually)

oscar avatar

Against: presumably this would mean the resource cannot be updated? I’m not too familiar with RDS so unsure on how many settings actually cause a re-create

asmito avatar

better to enable it, but usually when you want to delete an RDS instance AWS takes a snapshot of it as a backup.
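
For reference, a hedged sketch of the relevant arguments (resource and snapshot names are placeholders; note that deletion_protection only blocks delete calls, it does not prevent in-place modifications):

```hcl
resource "aws_db_instance" "prod" {
  # ... engine, instance_class, storage, credentials ...

  # Refuses DeleteDBInstance API calls until flipped back to false
  deletion_protection = true

  # Take a final snapshot if the instance is ever deleted
  skip_final_snapshot       = false
  final_snapshot_identifier = "prod-final" # placeholder name
}
```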

asmito avatar

guys do you know when we will have count enabled for modules

oscar avatar

Not seen an ETA yet, just that it is reserved alongside for_each

asmito avatar

?

Cloud Posse avatar
Cloud Posse
04:03:44 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Sep 25, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Mike Whiting avatar
Mike Whiting

I’ve had a brainwave that perhaps I need to add another dedicated aws_ecs_service resource for the nginx-proxy - see my example code above. is this a possibility?

oscar avatar

Is there a MKS/Kafka module anywhere?

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

Has anyone solved a solution for dynamically determining which subnets are free in a given VPC to then use for deploying some infrastructure into? Or know of some examples?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what do you mean by are free?

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

available ip address space

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s not easy

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

yup; plus we have multiple cidr blocks (secondaries) being added to the VPC, so in some cases the secondary blocks are barely usable because subnets created off of them don’t yield much IP address space (e.g. /28)

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

so yeah - in those cases basically need a way to filter away “unusable” subnets

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

the closest thing i’ve found is running a local cmd and finding a way to stuff it into a data template to somehow use downstream - kind of like the solution here: https://medium.com/faun/invoking-the-aws-cli-with-terraform-4ae5fd9de277

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

but all very ugly

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use https://www.terraform.io/docs/providers/aws/d/subnet_ids.html to get all subnets for a VPC

AWS: aws_subnet_ids - Terraform by HashiCorp

Provides a list of subnet Ids for a VPC
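
A sketch of filtering out the barely-usable subnets by CIDR mask in TF 0.12 (the /28 cutoff and variable names are assumptions; for_each on data sources needs TF >= 0.12.6):

```hcl
data "aws_subnet_ids" "all" {
  vpc_id = var.vpc_id
}

# Look up each subnet so its CIDR block can be inspected
data "aws_subnet" "each" {
  for_each = data.aws_subnet_ids.all.ids
  id       = each.value
}

locals {
  # Keep only subnets larger than /28
  usable_subnet_ids = [
    for s in data.aws_subnet.each : s.id
    if tonumber(split("/", s.cidr_block)[1]) < 28
  ]
}
```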

Brij S avatar

does TF support inline code for lambda functions like cloudformation?

2019-09-17

Hemanth avatar
Hemanth

Inside terraform (.tf) I can assign dynamic values using variables like key_name = "${var.box_key_name}" for different environments. How can I do the same inside the user-data scripts attached to tf? I am trying to have unique values for sudo hostnamectl set-hostname jb-*environmenthere* in the user-data script
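
In TF 0.12 the templatefile function covers this (template file name, resource name, and variable names below are assumptions):

```hcl
resource "aws_instance" "jumpbox" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  key_name      = var.box_key_name

  # Render per-environment values into the user-data script
  user_data = templatefile("${path.module}/user-data.sh.tpl", {
    environment = var.environment
  })
}
```

with user-data.sh.tpl containing something like:

```
#!/bin/bash
sudo hostnamectl set-hostname "jb-${environment}"
```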

PiotrP avatar

hi gents, has any one of you successfully created an s3 bucket module with dynamic cors configuration?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure what you mean by ‘dynamic configuration’, but take a look here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf#L79

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

PiotrP avatar

by dynamic configuration, I thought about utilizing terraform’s ‘dynamic’ feature

PiotrP avatar

the same approach you linked I use right now but it forces to have any kind of CORS configuration applied to the bucket, even when you do not need CORS at all

PiotrP avatar

with dynamic configuration I thought I will be able to create s3 buckets with or without cors configuration

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s easy to implement

PiotrP avatar

I ended up with something like this:

dynamic "cors_rule" {
    for_each = var.cors_rules
    content {
      allowed_headers = [lookup(cors_rule.value, "allowed_headers", "")]
      allowed_methods = [lookup(cors_rule.value, "allowed_methods")]
      allowed_origins = [lookup(cors_rule.value, "allowed_origins")]
      expose_headers = [lookup(cors_rule.value, "expose_headers", "")]
      max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
    }
  }
PiotrP avatar

when variable cors_rules is a list of maps like this:

cors_rules = [{
    allowed_origins = "*"
    allowed_methods = "GET"
  }]
PiotrP avatar

however, this approach is still not perfect, because values not mentioned in the cors_rules variable will be applied anyway with default values
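
(A hedged variant of the block above: defaulting the optional arguments to null should make Terraform omit them instead of sending empty values — a sketch, assuming the values in cors_rules are already lists:)

```hcl
dynamic "cors_rule" {
  for_each = var.cors_rules
  content {
    # null defaults mean "omit this argument entirely"
    allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
    allowed_methods = lookup(cors_rule.value, "allowed_methods")
    allowed_origins = lookup(cors_rule.value, "allowed_origins")
    expose_headers  = lookup(cors_rule.value, "expose_headers", null)
    max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
  }
}
```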

PiotrP avatar

am I missing something ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t think it’s possible to do it, unless you want to use many permutations of dynamic blocks with for_each with different conditions

PiotrP avatar

I see

PiotrP avatar

thanks for answering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s how we deploy https://docs.cloudposse.com/

James D. Bohrman avatar
James D. Bohrman

Here’s a little tool I’ve been working on that the gamers here might like. I used a lot of Cloud Posse modules also

https://github.com/jdbohrman/parsec-up

jdbohrman/parsec-up

Terraform module for deploying a Parsec Cloud Gaming server. - jdbohrman/parsec-up

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for discoverability, have you considered renaming it to terraform-aws-parsec-instance? this is the format hashicorp suggests for the registry

James D. Bohrman avatar
James D. Bohrman

I haven’t but I will probably do that!

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) finally so apparently A) Terraform state loading is private to itself in the UI and command code, so I will need to talk to either paul or the terraform team about it, B) and good news, I finally found out that loading in terraform is implicitly cascading

Julio Tain Sueiras avatar
Julio Tain Sueiras

so if you have two files, call them main.tf and data.tf, and you set ForceFileSource on main.tf to be “” then do LoadConfigDir

Julio Tain Sueiras avatar
Julio Tain Sueiras

terraform will declare main.tf to be empty

Julio Tain Sueiras avatar
Julio Tain Sueiras

and skip reading

Julio Tain Sueiras avatar
Julio Tain Sueiras

is useful, since I need it to do resource & data types gathering for error checking

1

2019-09-18

oscar avatar

Anyone seen the issue where you curl from an EKS worker node to the cluster and get SSL issues?

oscar avatar

Using CP worker / cluster / asg modules.

oscar avatar

curl: (60) SSL certificate problem: unable to get local issuer certificate

oscar avatar

… this is curling the API endpoint as per EKS

oscar avatar

@Addison Higham I’m using your branches from here https://sweetops.slack.com/archives/CB6GHNLG0/p1566415698381800

Error: Invalid count argument

  on .terraform/modules/eks_workers.autoscale_group/ec2-autoscale-group/main.tf line 120, in data "null_data_source" "tags_as_list_of_maps":
 120:   count = var.enabled ? length(keys(var.tags)) : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20

I forgot to update the version for CI to 0.12, will try and push that out

oscar avatar

but getting the following error. Could you provide any guidance on what you think that might be?

Addison Higham avatar
Addison Higham

yeah, that was an oopsie, a fix got merged… but maybe it didn’t make it onto the branch I was trying to upstream

Addison Higham avatar
Addison Higham

lemme find it

oscar avatar

Thanks. If possible could you push it to your fork’s master? :slightly_smiling_face: I did try your inst-* branch but that didn’t seem to quite fix it

Addison Higham avatar
Addison Higham

oh that is a different issue @oscar, what are you passing to tags? as the error message says, it can’t have anything dynamic being passed in

oscar avatar

tags is actually empty

oscar avatar

I’m passing var.tags which is an empty {} in my terraform project that calls your eks_worker module

oscar avatar

so am I correct in using your worker & cluster branches @master branch?

oscar avatar

because I’m aware you also have the ASG one updated, but do the master branches of worker and cluster point to that?

Addison Higham avatar
Addison Higham

oh yeah, so that is why we use the inst-version, which does this: https://github.com/instructure/terraform-aws-eks-cluster/pulls?utf8=%E2%9C%93&q=is%3Apr

instructure/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to instructure/terraform-aws-eks-cluster development by creating an account on GitHub.

Addison Higham avatar
Addison Higham

to be safe, whenever I change refs, I also just delete .terraform directory and re-init

Addison Higham avatar
Addison Higham

it is sorta weird, we didn’t want to open a PR to our updated module, but they do need to merge them in order for these to work

oscar avatar

Ya I understand the need for the branch. I’ll give another go later on.

oscar avatar

So worker inst Cluster master

oscar avatar

And that should fix my previous issue with count?

Addison Higham avatar
Addison Higham

I think so? at least that is what we have and don’t have any issues

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

oscar avatar

@Addison Higham - darn still got the same issue

oscar avatar
module "eks_cluster" {

  source     = "git::https://github.com/instructure/terraform-aws-eks-cluster.git?ref=master"
...
}

module "eks_workers" {
  source = "git::https://github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version"
...
Addison Higham avatar
Addison Higham

same error?

oscar avatar

Yeh

oscar avatar
Error: Invalid count argument

  on .terraform/modules/eks_workers.autoscale_group/main.tf line 120, in data "null_data_source" "tags_as_list_of_maps":
 120:   count = var.enabled ? length(keys(var.tags)) : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
module "eks_workers" {
  source = "git::https://github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version"

  namespace     = var.namespace
  stage         = var.stage
  name          = var.name
  tags          = var.tags
...
}
oscar avatar

var.tags is empty (defaulting to {})

Addison Higham avatar
Addison Higham

is your cluster_name dynamic? see https://github.com/instructure/terraform-aws-eks-workers/blob/master/main.tf#L2, the workers module computes some tags, so your cluster_name needs to be known at plan time

instructure/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - instructure/terraform-aws-eks-workers
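
A sketch of the workaround: compute the cluster name from the same static inputs with the null label module, so it is known at plan time (the label module source/tag and the naming convention are assumptions — check how the cluster module actually builds its name):

```hcl
module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name
}

module "eks_workers" {
  # ...
  # A pure function of input variables, not an attribute of the
  # not-yet-created cluster, so count can be resolved at plan time
  cluster_name = module.label.id
}
```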

oscar avatar

Omg

oscar avatar

that must be it

Addison Higham avatar
Addison Higham

that is why in the example you see them use the label module to compute the name of the cluster in multiple distinct places

oscar avatar
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

oscar avatar
# mine
  cluster_name                       = "${module.eks_cluster.eks_cluster_id}"
oscar avatar

Will hardcode to a string for now

oscar avatar

Super thanks. Cluster and workers up now

oscar avatar

But back to workers not connecting to cluster.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

oscar avatar

Ah - no thank you. I saw this before but didn’t honestly understand it! Should this be run at cluster creation or can be applied afterwards?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so at the time we did it, in some cases there were some race conditions, that’s why we did not enable it by default

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after the cluster applied, we set the var to true and applied that

oscar avatar

Many thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but now you can test it with the var enabled from start

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did that almost a year ago so a lot prob has changed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and we will convert the EKS modules to 0.12 and add auto-tests this/next week (finally )

oscar avatar

Would love to get a hold of those updated modules

oscar avatar

Andriy you are my hero

oscar avatar

My workers are now connected

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oscar avatar

TF weirdly got an unauthorized response when applying the command:

 kubectl apply -f config-map-aws-auth-xxx-development-eks-cluster.yaml --kubeconfig kubeconfig-xxx-development-eks-cluster.yaml
oscar avatar

but my kubectl already had the context activated

oscar avatar

so I just ran the apply configmap without the --kubeconfig

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) XD XD XD XD XD, so I found the biggest issue that was hitting vs code users of the terraform lsp plugin: I forgot to omit the hover provider from the first release I was trying out (so it was very error-prone). Since I only use vim, there is no hover that gets activated

1
Julio Tain Sueiras avatar
Julio Tain Sueiras

so now it is a lot more stable for any GUI-based editor that is going to use terraform-lsp

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice @Julio Tain Sueiras

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@justingrote btw, didn’t realize you were on sweetops. We discussed your comment today #office-hours today https://github.com/hashicorp/terraform/issues/15966#issuecomment-520102463 (@sarkis had originally directed my attention to it)

Feature: Conditionally load tfvars/tf file based on Workspace · Issue #15966 · hashicorp/terraform

Feature Request Terraform to conditionally load a .tfvars or .tf file, based on the current workspace. Use Case When working with infrastructure that has multiple environments (e.g. &quot;staging&q…

justingrote avatar
justingrote
08:41:25 PM

@justingrote has joined the channel

rohit avatar

i am facing issues with pre-commit when using in my terraform project

rohit avatar
repos:
- repo: git://github.com/antonbabenko/pre-commit-terraform
  rev: v1.15.0
  hooks:
    - id: terraform_fmt
    - id: terraform_docs_replace
rohit avatar

i receive the following error

rohit avatar
 pkg_resources.DistributionNotFound: The 'pre-commit-terraform' distribution was not found and is required by the application
rohit avatar

any ideas on what could be the problem ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@antonbabenko

roth.andy avatar
roth.andy

@rohit Not sure if it will fix anything, but you can try changing the git:// to https://. Here’s mine for reference:

- repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.19.0
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
rohit avatar

i think the problem is with terraform_docs_replace

rohit avatar

and maybe it has terraform version 0.11.13

rohit avatar

i want to replace the README file automatically as part of commit

rohit avatar

do you know if the same can be achieved using terraform_docs ?

roth.andy avatar
roth.andy

it’s possible. I contributed terraform_docs_replace several months ago; it probably hasn’t been touched since then.

rohit avatar

i think terraform_docs_replace is only supported in terraform v12

roth.andy avatar
roth.andy

terraform_docs just makes changes to an existing README between the comment needles

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
stuff gets changed here
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
roth.andy avatar
roth.andy

terraform_docs_replace was made quite a while ago, before 12 came out

rohit avatar

when I update variables and their descriptions in variables.tf, my README.md file does not get updated using terraform_docs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

good read

Brij S avatar

if ive got a module such as:

module "vpc_staging" {
  source = "./vpc_staging"
}

can I access a variable/output created in that module in another module like so?

module "security-group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "1.25.0"

  name = "sg"
  description = "Security group for n/w with needed ports open within VPC"
  vpc_id     = "${module.vpc_staging.vpc_id}"

}

Would I use the variable name, or output id? What do I reference basically?

roth.andy avatar
roth.andy

The second module can use the outputs of the first module. So, the vpc_staging module would need an output called vpc_id for that example you gave.
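
For example, inside ./vpc_staging (the resource name aws_vpc.this is an assumption):

```hcl
# vpc_staging/outputs.tf — exposes the VPC id to callers
output "vpc_id" {
  value = aws_vpc.this.id
}
```

The caller then reads it as module.vpc_staging.vpc_id.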

Brij S avatar

right! I thought so, just wanted to confirm

Brij S avatar

thanks

Claudio Palmeira avatar
Claudio Palmeira

Hey guys, I have a problem with the examples on the eks_cluster, more specifically on the subnets module. It has an unsupported argument there:

Claudio Palmeira avatar
Claudio Palmeira

An argument named “region” is not expected here.

Claudio Palmeira avatar
Claudio Palmeira

unsupported

Claudio Palmeira avatar
Claudio Palmeira

module subnets on main.tf: this line -> region = “${var.region}” Terraform complains about it not being an expected argument

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the example is not actually correct since the EKS modules are TF 0.11, but the subnet module is pinned to master which is already 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on converting EKS modules to 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for now, pin the subnet module to a TF 0.11 release

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "subnets" {
  source              = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.12.0"
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

pin to 0.4.1 which is TF 0.11

Claudio Palmeira avatar
Claudio Palmeira

Thank you mate

2019-09-19

oscar avatar

How come only the creator of the EKS cluster can connect using the CP modules?

roth.andy avatar
roth.andy

By default, only the creator of the cluster has access to it using IAM. The aws-auth ConfigMap in the kube-system namespace controls it. You can add an IAM role mapped to a K8s group that will give anyone who is able to assume that role the ability to log in. Looks like CloudPosse’s implementation of the terraform-aws-eks-workers module doesn’t make this configurable yet.

Looks like the template for the ConfigMap is here: https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/config_map_aws_auth.tpl

The EKS cluster example shows it being applied here: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf

Here’s an example of what it would look like with an IAM role bound to a K8s group that would give anyone that is able to assume the role my-eks-cluster-admin the ability to log into the cluster with cluster-admin privileges:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::REDACTED:role/REDACTED
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

    - rolearn: arn:aws:iam::REDACTED:role/my-eks-cluster-admin
      username: my-eks-cluster-admin
      groups:
        - system:masters

  mapUsers: |

  mapAccounts: |

Then, you would change the command being run in your kubeconfig to use the role by using the -r flag in the aws-iam-authenticator token command.
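
The kubeconfig user entry would look roughly like this (cluster name and role ARN are placeholders):

```yaml
users:
- name: my-eks-cluster-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-eks-cluster                  # EKS cluster name
        - -r                              # role to assume for the token
        - arn:aws:iam::123456789012:role/my-eks-cluster-admin
```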

roth.andy avatar
roth.andy

Be advised that this will defeat some auditability because Kubernetes will see everyone as the user my-eks-cluster-admin. You can do a very similar thing with the mapUsers section in order to map each user you want to give access to with a username in Kubernetes.

roth.andy avatar
roth.andy

The syntax for mapUsers is

mapUsers: |
  - userarn: <theUser'sArn>
    username: <TheUsernameYouWantK8sToSee>
    groups:
      - <TheK8sGroupsYouWantTheUserToBeIn>
oscar avatar

Thank you, we found the answer to this earlier on! Really appreciate your detail!

oscar avatar

We’re planning to fork it when 0.12 of the module goes live to support this customizability

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks guys, we will add additional roles and users mapping (working on 0.12 of the modules now)

oscar avatar

Ah that’s cool. My new firm is really keen to use CP’s own version of 0.12 (not the fork/PR branch). We have our own customizability reqs so once 0.12 is done and pushed we can start extending

oscar avatar

https://github.com/hashicorp/terraform/issues/22649 anyone experiencing this out of nowhere? (All devs using the state file are on 0.12.6)

Error loading state: state snapshot was created by Terraform v0.12.7, which is newer than current v0.12.6 · Issue #22649 · hashicorp/terraform

Terraform Version v0.12.7 Debug Output Error: Error loading state: state snapshot was created by Terraform v0.12.7, which is newer than current v0.12.6; upgrade to Terraform v0.12.7 or greater to w…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they have been busy adding new features

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

usually that happened when using 0.12 then trying to read the state with 0.11

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but now looks like any version bump causes that

oscar avatar

But everyone (2 people - we’re next to each other) using that project is using the same geodesic shell and has the same version 0.12.6… yet the statefile in S3 says 0.12.7 O.O

oscar avatar

neither of us have 0.12.7 which is super weird

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

geodesic has 0.12.6 as well?

oscar avatar

yep!

oscar avatar

or rather

oscar avatar

we are both in geodesic

oscar avatar

and terraform version is 0.12.6 on both our PCs

oscar avatar

No one else feasibly ran this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

inside geodesic, terraform version is 0.12.6 as well?

oscar avatar

Yes

oscar avatar

on our locals: 0.12.0

oscar avatar

on our geodesics: 0.12.6

oscar avatar

Whilst we’d like to know why, we’re happy to use 0.12.9 etc

oscar avatar

.. but we’re using cloudposses terraform_0.12

oscar avatar

@Andriy Knysh (Cloud Posse) I see that 0.12.7 is in your packages https://github.com/cloudposse/packages/blob/master/vendor/terraform-0.12/VERSION

however apk add --update --no-cache terraform_0.12 does not work as expected

cloudposse/packages

Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

oscar avatar

Ok updated my geodesic FROM to 0.122.4 and that cleared the cache

oscar avatar

now on 0.12.7

Alex Siegman avatar
Alex Siegman

i thought you needed apk update && apk add --update terraform_0.12@cloudposse is the @cloudposse not required?

oscar avatar

doh that must be it

oscar avatar

merci beaucoup

Alex Siegman avatar
Alex Siegman

granted, using the newest geodesic is also nice~ features and bugfixes, oh my

oscar avatar

I was only coming from 0.119 - not that far behind!

Alex Siegman avatar
Alex Siegman

I also usually customize that in my own dockerfile that wraps geodesic:

RUN apk add terraform_0.12@cloudposse terraform@cloudposse==0.12.7-r0

Is what’s in ours, but we only have one or two 0.12 projects, everything is mostly on 0.11 still

jose.amengual avatar
jose.amengual

stupid question I’m using

locals {
  availability_zones = slice(data.aws_availability_zones.available.names, 0, 2)
}
jose.amengual avatar
jose.amengual

but sometimes my resources end up in the same AZ

jose.amengual avatar
jose.amengual

better to just hardcode them ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual what do you mean by sometimes? When in diff regions?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the code above is ok and should work

jose.amengual avatar
jose.amengual

same region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no need to hardcode anything

jose.amengual avatar
jose.amengual
jose.amengual avatar
jose.amengual

I’m using the terraform-aws-rds-cluster module

jose.amengual avatar
jose.amengual

which I’m going to send a PR to support global clusters

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that example worked many times

jose.amengual avatar
jose.amengual

I know it’s weird, because if I recreate the cluster then it will work

jose.amengual avatar
jose.amengual

I wonder now….maybe I just have a problem in one region

jose.amengual avatar
jose.amengual
jose.amengual avatar
jose.amengual

we use TF to create the accounts, so in every region we have subnets for every AZ

jose.amengual avatar
jose.amengual

I was wondering if for some reason we made a mistake or something

jose.amengual avatar
jose.amengual

but I’m using a data lookup to find them based on tags

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea check the data lookup if it returns the correct result

jose.amengual avatar
jose.amengual

exactly what I’m doing

jose.amengual avatar
jose.amengual

I’m getting 3 subnet ids in us-east-1 and 4 in us-west-2

jose.amengual avatar
jose.amengual

so the data lookups are good

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
jose.amengual avatar
jose.amengual

cluster_size = 2 and I pass 4 subnets, then it should be ok

sweetops avatar
sweetops

Has anyone else run into the issue where you can’t pass variables via the command line when using the remote backend since last week when they released terraform cloud?

sweetops avatar
sweetops
Error: Run variables are currently not supported

The "remote" backend does not support setting run variables at this time.
Currently the only to way to pass variables to the remote backend is by
creating a '*.auto.tfvars' variables file. This file will automatically be
loaded by the "remote" backend when the workspace is configured to use
Terraform v0.10.0 or later.

Additionally you can also set variables on the workspace in the web UI:
<https://app.terraform.io/app/Boulevard/sched-dev-feature-branch-environments/variables>
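A minimal sketch of the *.auto.tfvars workaround the error message points at; the variable names here are hypothetical:

```hcl
# example.auto.tfvars -- files matching *.auto.tfvars are loaded
# automatically, so the "remote" backend picks these up without -var flags.
environment = "staging"
image_tag   = "v1.2.3"
```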
jose.amengual avatar
jose.amengual

Global cluster support PR @Andriy Knysh (Cloud Posse) https://github.com/cloudposse/terraform-aws-rds-cluster/pull/56

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @jose.amengual

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

commented

jose.amengual avatar
jose.amengual

fixed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you run

make init
make readme/deps
make readme
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like README was not updated

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and docs/terraform.md was deleted

jose.amengual avatar
jose.amengual

weird

jose.amengual avatar
jose.amengual

mmm

❰jamengual❙~/github/terraform-aws-rds-cluster(git:globalclusters)❱✔≻ make readme                                                                                                                                                                  5.2s  Thu 19 Sep 18:51:17 2019
curl --retry 3 --retry-delay 5 --fail -sSL -o /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs <https://github.com/segmentio/terraform-docs/releases/download/v0.4.5/terraform-docs-v0.4.5-darwin-amd64> && chmod +x /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs
2019/09/19 18:51:24 At 3:16: Unknown token: 3:16 IDENT var.namespace
make: *** [docs/terraform.md] Error 1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmmm

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like something is broken (will have to look)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks like an old build harness

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The ident error tells me that it’s using an old version of terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform-docs does not support it natively, so we have a wrapper around terraform docs

jose.amengual avatar
jose.amengual

ohhhh

jose.amengual avatar
jose.amengual

one sec

jose.amengual avatar
jose.amengual

I have two binaries

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also might get fixed if you blow away build harness and rerun make init. Just a hunch.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(On my phone so cant provide more detail)

jose.amengual avatar
jose.amengual

done

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Unknown token: 3:16 IDENT happened to me when TF versions mismatched

jose.amengual avatar
jose.amengual

thanks guys

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

tested on AWS and merged

cytopia avatar
cytopia

I am currently working on a new fix for the terraform-docs.awk wrapper here: https://github.com/antonbabenko/pre-commit-terraform/issues/65

If there are any other issues coming up, let me know

terraform_docs failing on complex types which contains "description" · Issue #65 · antonbabenko/pre-commit-terraform

How reproduce Working code: staged README.md <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK --> <!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK --> staged vars.tf variable "ingress_ci…

2019-09-20

Szymon avatar

Azure Hi everyone, I’m about to move my big terraform configuration into separate modules, but I have a question about best practice regarding resource groups. If I create a resource-group resource in every one of my modules it will be fine, because it will be created once, but when for some reason I remove the entire module or try to redeploy it, won’t Terraform want to delete my resource group (and all other resources/modules)? Should I rather use a data resource to reference a resource group created in another module, or what are your ideas? Thanks

gyoza avatar

hey guys… not sure what’s going on but it looks like the 0.9.0 terraform-aws-cloudfront-s3-cdn module is creating ARN IDs like

“arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXXXX”

for S3 policies to allow Cloudfront access

Nikola Velkovski avatar
Nikola Velkovski

ah that’s a new issue

Nikola Velkovski avatar
Nikola Velkovski

I just encountered it today

gyoza avatar

oh thank god.

Nikola Velkovski avatar
Nikola Velkovski

AWS changed how the API behaves

Nikola Velkovski avatar
Nikola Velkovski

in the background

Nikola Velkovski avatar
Nikola Velkovski

if you need a quick fix

gyoza avatar

I literally thought i was going crazy

Nikola Velkovski avatar
Nikola Velkovski

hahah it happened to me as well

gyoza avatar

i do, please

Nikola Velkovski avatar
Nikola Velkovski

sec

Nikola Velkovski avatar
Nikola Velkovski
S3 bucket policy invalid principal for cloudfront · Issue #10158 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Nikola Velkovski avatar
Nikola Velkovski

the glorious fix is

gyoza avatar

aaaaah

gyoza avatar

can i just downgrade my provider version?

gyoza avatar

let’s take a peek

Nikola Velkovski avatar
Nikola Velkovski
    principals {
      type        = "AWS"
      identifiers = [replace("${aws_cloudfront_origin_access_identity.this.iam_arn}", " ", "_")]
    }
gyoza avatar

Thank you!

Nikola Velkovski avatar
Nikola Velkovski

for now you should be able to patch it until @Andriy Knysh (Cloud Posse) or @Erik Osterman (Cloud Posse) wake up and officially fix it

gyoza avatar

haha Erik is a long time friend of mine, i can hold something over him i think to get it fixed

Nikola Velkovski avatar
Nikola Velkovski

gyoza avatar

although, I was the one who was usually embarrassing themselves…

gyoza avatar

I think using the replacements only works for current state files, if you’re doing new policies you have to use type CanonicalUser and identifier s3_canonical_user_id
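A sketch of the CanonicalUser variant described above (the resource name `this` is hypothetical); per the thread, be aware it may still produce a change on every apply:

```hcl
principals {
  type = "CanonicalUser"
  # s3_canonical_user_id sidesteps the IAM ARN (and its space characters) entirely
  identifiers = [aws_cloudfront_origin_access_identity.this.s3_canonical_user_id]
}
```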

gyoza avatar

aaaah

Nikola Velkovski avatar
Nikola Velkovski

nope that’s not going to work

gyoza avatar

It just applied for me.

Nikola Velkovski avatar
Nikola Velkovski

even though CanonicalUser and identifier s3_canonical_user_id will pass tf apply

Nikola Velkovski avatar
Nikola Velkovski

try it again

Nikola Velkovski avatar
Nikola Velkovski

aws is changing it in the background

Nikola Velkovski avatar
Nikola Velkovski

you’ll get a change on every apply

gyoza avatar

really

gyoza avatar

ugh

Nikola Velkovski avatar
Nikola Velkovski

at least that’s what happened to me

gyoza avatar

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

gyoza avatar

damn

gyoza avatar

It wont take the replace suggestion, keeps telling me bad

gyoza avatar
Error: Error putting S3 policy: MalformedPolicy: Invalid principal in policy
gyoza avatar

gonna try something

gyoza avatar

it was too early, i was using dashes lol….

gyoza avatar

underscores work

gyoza avatar

thanks for the help Nikola!

gyoza avatar

gonna lurk here now….

Nikola Velkovski avatar
Nikola Velkovski

you are welcome

2019-09-22

guigo2k avatar
guigo2k

guys, any update on this https://sweetops.slack.com/archives/CB6GHNLG0/p1566415698381800 ? Really looking forward to using these modules with TF 0.12

for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20

I forgot to update the version for CI to 0.12, will try and push that out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, we are working on that now, will be done in the next 2-3 days

2
1
guigo2k avatar
guigo2k

thanks for the update @Andriy Knysh (Cloud Posse)

2019-09-23

pericdaniel avatar
pericdaniel

If I create NATs in one module, is there a way to get a list of NAT GW and pass it to a new sg with TF?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Your module can output the list of NAT GWs and you can do whatever you desire with that list
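As a sketch (0.12 syntax; the resource and module names here are hypothetical), the module exposes the list via outputs and the caller consumes them:

```hcl
# Inside the module:
output "nat_gateway_ids" {
  value = aws_nat_gateway.nat[*].id
}

output "nat_public_ips" {
  value = aws_nat_gateway.nat[*].public_ip
}

# In the caller -- e.g. allow traffic from the NAT public IPs in an SG rule:
resource "aws_security_group_rule" "from_nat" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  security_group_id = var.sg_id
  cidr_blocks       = [for ip in module.network.nat_public_ips : "${ip}/32"]
}
```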

pericdaniel avatar
pericdaniel

is that only if I am creating that sg within the same module?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Nope.

So there are levels. Think of them as boxes. Terraform resources have attributes (variables you set, say ami_name for an EC2 instance) and outputs (say instance_name). You can take that output and play around with it in the same module. Or you can get that output and push it out of your module: your module now outputs that value too.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)
Output Values - Configuration Language - Terraform by HashiCorp

Output values are the return values of a Terraform module.

pericdaniel avatar
pericdaniel

Thank you! Is there another way to do it using just data? Like data aws_nat_gateway and then scrape for a list with tags

russell.t.sherman avatar
russell.t.sherman

there are examples in terraform-root-modules of reading the output of other modules using their remote state.. https://github.com/cloudposse/terraform-root-modules/blob/9301b150c89a5543bdd2785ecdacf000ee6c5561/aws/iam/audit.tf#L15

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

guigo2k avatar
guigo2k

@pericdaniel I believe this post will answer your questions https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa#7077

1
pericdaniel avatar
pericdaniel

Thank you!

oscar avatar
Ignore changes to database password by osulli · Pull Request #41 · cloudposse/terraform-aws-rds

why To use this module and not cause a re-creation, you would have to hardcode the password somewhere in your config / terraform code. This is not a secure method. Naturally if you use a secrets sy…

Cloud Posse avatar
Cloud Posse
04:02:28 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 02, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Szymon avatar

Hi guys, do any of you have experience with maintenance of SaaS environments? What I mean is separate dev, test, prod environments for every Customer. In my case those environments are very similar, at least the core part, which includes vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is creating one more module on top of it, called e.g. myplatform-core. The reason I want to do that is that instead of copying and pasting puzzles of modules between environments, I could simply create an env just by creating/importing my myplatform-core module and passing some vars like name, location, some scaling properties. Any thoughts about it, is it a good or bad idea in your opinion?

I appreciate your input.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the idea is good. That’s how we create terraform environments (prod, staging, dev, etc.). We have a catalog of terraform modules (just code without settings/configs). Then for each env, we have a separate GitHub repo where we import the modules we need (using semantic versioning so we know exactly which version we are using in which env) and provide all the required config/settings for that environment, e.g. AWS region, stage (prod, staging, etc.), and security keys (from ENV vars or AWS SSM)

Szymon avatar

As I understand, you’re actually not creating a Terraform Module of your core/base infra, but instead you have catalogs/repos per environment with versioned “module puzzles”?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example, we have a catalog of TF modules - reusable code which we can use in any env (prod, staging, dev, testing) https://github.com/cloudposse/terraform-root-modules/tree/master/aws

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the code does not have any identity, it could be deployed anywhere after providing the required config/settings

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then for example, in testing env, we create projects for the modules we need (e.g. eks), https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and load the module code from the catalog https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks/.envrc (using semantic versioning)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but all the config/settings are provided from a few places:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Dockerfile (in which we have settings common for all modules in the project) https://github.com/cloudposse/testing.cloudposse.co/blob/master/Dockerfile
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Secrets are from ENV vars (which get populated from diff sources, e.g. AWS SSM, Secrets Manager, Vault, etc.) when the CI/CD deployment pipeline runs, or on a dev machine by executing some commands
Szymon avatar

I see, thank you very much. I started with a different approach: I keep all my environments in one Terraform repository with projects, and I include modules from external git repositories (each module in a separate git repository)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s what we do too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://github.com/cloudposse/terraform-root-modules is a (super)-catalog of top level modules which are aggregations of low-level modules

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each project in there connects low-level modules together into a reusable top-level module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/eks.tf#L31

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Szymon avatar

ah, right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those aggregations are opinionated since there are many diff ways to connect low-level modules to create the top-level module you need

Szymon avatar

Interesting approach. I was reading quite a lot recently, best practices with Terraform, TF Up & Running etc., and in most cases people don’t recommend using nested modules, but it looks really reasonable in your case.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are not nested (in that sense)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are module instantiations, connected together into a bigger module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we have modules in TF - to reuse them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in other modules

Szymon avatar

Actually that was my understanding of nested word, sorry. English is not my first language

joshmyers avatar
joshmyers

By nested modules, they mean modules of modules. Cloudposse stuff does use modules of modules e.g. module A may use module B, and module B may use module C

joshmyers avatar
joshmyers

It works fine, but can be interesting to debug several layers down

joshmyers avatar
joshmyers

If you want composable modules, there isn’t much of a way around that

joshmyers avatar
joshmyers

And by they, I mean folks behind tf up and running etc

2
Vlady Veselinov avatar
Vlady Veselinov

hi y’all bananadance

wave1
Hemanth avatar
Hemanth

Any samples/examples for implementing CloudWatch Events > Create new rule > EC2 Instance State-change Notification > Target > SNS > email? Currently going through the official docs

Igor avatar

@Hemanth you cannot create an email subscription to an SNS topic with terraform, because it requires a confirmation

Callum Robertson avatar
Callum Robertson

Hey All, has anyone had issues creating azure resources with an s3 backend?

Callum Robertson avatar
Callum Robertson

@Andriy Knysh (Cloud Posse) have you ever used an s3 backend with other providers for resources? I’m getting an issue where my declared resources are being picked up in the state file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did not use azure, but you can give more details about the issue, maybe somebody here will have some ideas

Igor avatar

Otherwise, you just want to create the following resources: aws_cloudwatch_metric_alarm, aws_sns_topic, and aws_sns_topic_subscription
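For the event-driven flavor of Hemanth’s question, a minimal sketch of the EC2 state-change → SNS wiring; all names here are placeholders, and an email subscription would still need manual confirmation as noted above. The SNS topic also needs a topic policy allowing events.amazonaws.com to publish, which is omitted for brevity:

```hcl
resource "aws_sns_topic" "ec2_state" {
  name = "ec2-state-changes" # hypothetical name
}

# Fire on any EC2 instance state-change event
resource "aws_cloudwatch_event_rule" "ec2_state" {
  name = "ec2-state-change"
  event_pattern = jsonencode({
    source      = ["aws.ec2"]
    detail-type = ["EC2 Instance State-change Notification"]
  })
}

# Route matching events to the SNS topic
resource "aws_cloudwatch_event_target" "to_sns" {
  rule = aws_cloudwatch_event_rule.ec2_state.name
  arn  = aws_sns_topic.ec2_state.arn
}
```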

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Hemanth ^

Hemanth avatar
Hemanth

@Andriy Knysh (Cloud Posse) the https://github.com/cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms is empty, but thanks, those samples are helpful

cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms

Terraform module that configures CloudWatch SNS alerts for EC2 instances - cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that one was not implemented

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

PRs are welcome

1

2019-09-24

gyoza avatar

you just need to replace " " with "_" in the old value

cabrinha avatar
cabrinha

You guys using terraform cloud at all yet?

pete avatar

What are the overall benefits?

cabrinha avatar
cabrinha

Visibility into runs via the web UI. You can see what’s been applied recently and how that run went.

cabrinha avatar
cabrinha

You can lock down certain users, you can also plan/apply automatically based on changes to git.

pete avatar

Interesting. I’ll have to check it out. Used to getting the auto features baked into my CI workflow, so if tf-cloud can potentially simplify that, it could be a win.

pete avatar

Does the visualization piece look at anything outside the tf-state?

Igor avatar

Using #atlantis for now, as it is more flexible

Igor avatar

Though terraform cloud does look appealing

leonawood avatar
leonawood

Can you use terraform_remote_state data source as an input attribute for subnet in the cloudposse aws ec2 module?

leonawood avatar
leonawood

I am using the terraform approved aws vpc module to create my VPC, and have correctly setup all my outputs, one specific being a public_subnet ID and I am trying to reference said subnet ID as a terraform_remote_state data source as the subnet attribute but am not sure of the proper syntax
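A sketch of that wiring; the bucket, key, and the output name public_subnet_id are placeholders that must match the VPC project’s backend config and declared outputs, and the module’s input variable name should be checked against its docs:

```hcl
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"            # placeholder
    key    = "vpc/terraform.tfstate"  # placeholder
    region = "us-east-1"
  }
}

# TF 0.12 requires the .outputs. prefix; in 0.11 this would be
# data.terraform_remote_state.vpc.public_subnet_id
module "instance" {
  source = "cloudposse/ec2-instance/aws" # pin a version in practice
  subnet = data.terraform_remote_state.vpc.outputs.public_subnet_id
  # ...other required variables elided
}
```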

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If I create NATs in one module, is there a way to get a list of NAT GW and pass it to a new sg with TF?

leonawood avatar
leonawood

hey @Andriy Knysh (Cloud Posse) that helped me

1
leonawood avatar
leonawood

thank you!

Brij S avatar

I have a terraform module which we use to set up new AWS accounts with certain resources. So this module is generic enough to use on the ‘dev’ aws account, ‘qa’ account and ‘prod’ account, say. However, I need to only create some resources based on the environment. How can I achieve this with a module? I saw this online: https://github.com/hashicorp/terraform/issues/2831

Ignore resource if variable set. · Issue #2831 · hashicorp/terraform

We have a couple of extra terraform resources that need creating under certain conditions. For example we use environmental overrides to create a "dev" and a "qa" environment fr…

Brij S avatar

is this still the best way?

Brij S avatar

was about to try that out but read that if the count is set to 0, it would destroy the resource ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for all resources in the module, you could use count = var.environment == "prod" ? 1 : 0 or count = var.environment == "qa" ? 1 : 0 etc.
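As a minimal sketch of that pattern (the resource and its arguments are placeholders):

```hcl
# Only materializes when var.environment is "prod". Referencing it
# elsewhere becomes aws_sns_topic.prod_only[0] in 0.12
# (aws_sns_topic.prod_only.0 in 0.11).
resource "aws_sns_topic" "prod_only" {
  count = var.environment == "prod" ? 1 : 0
  name  = "prod-alerts"
}
```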

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or any combination of the conditions

Brij S avatar

so adding count = var.environment == "prod" ? 1 : 0 would ensure the resource is only created in prod?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will ensure that if var.environment == "prod" then the resource will be created. If you run it in prod, it will be in prod.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

at the same time, you could make a mistake and set var.environment == "prod" and run it in dev, then it will be created as well in dev

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Brij S you need some kind of container (or separate repo) where you set all configs for let’s say prod (e.g. region and AWS prod account ID) and where you set var.environment == "prod"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when you run it, it will be used only in the prod account and since var.environment == "prod", the resource will be created

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so a better strategy would be not to create a super-giant module with many conditions to create resources or not

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

divide the big module into small modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use some tools to combine only the required modules into each account repo

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the tool could be terragrunt or what we do using geodesic and remote module loading https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/

Terraform Without Wrappers is AWESOME!

One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…

leonawood avatar
leonawood

anyone here split up state files? we use tf workspaces and it works quite nicely. I am interested if theres a way to combine all the outputs into one file tho for reference?

leonawood avatar
leonawood

so I can just send to our sys admin and it contain all the relevant details

Brij S avatar

@Andriy Knysh (Cloud Posse) I will look into terragrunt; as for now I’d like to use the above suggestion with TF11, but I’m having some issues with syntax: ${var.aws_env} == "prod" ? "1" : "0" doesn’t work - what am i missing?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

"${var.aws_env == "prod" ? 1 : 0}"

Brij S avatar

what about the closing }

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

need it too

Brij S avatar

cool let me try that

johncblandii avatar
johncblandii
#21: Add support for more config options by johncblandii · Pull Request #22 · cloudposse/terraform-aws-alb-ingress

Feature Added the following with sensible defaults to not break the current consumers: health check variables to enable/disable and control the port + protocol slow_start stickiness // CC @aknysh…

johncblandii avatar
johncblandii

I did not check the provider versions so unsure if it’ll break consumers or not

johncblandii avatar
johncblandii

added a simple example too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @johncblandii

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-alb-ingress

Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress

johncblandii avatar
johncblandii

no prob

Brij S avatar

is there a way to create an IAM user, generate access keys and plug them into paramstore with terraform?
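Yes — the aws_iam_user, aws_iam_access_key, and aws_ssm_parameter resources chain together. A hedged sketch (user and parameter names are illustrative; note the secret key ends up in the Terraform state file, so the state backend must be protected):

```hcl
resource "aws_iam_user" "ci" {
  name = "ci-user"
}

resource "aws_iam_access_key" "ci" {
  user = "${aws_iam_user.ci.name}"
}

resource "aws_ssm_parameter" "ci_access_key_id" {
  name  = "/ci/access_key_id"
  type  = "String"
  value = "${aws_iam_access_key.ci.id}"
}

# SecureString encrypts the value with the default SSM KMS key.
resource "aws_ssm_parameter" "ci_secret_access_key" {
  name  = "/ci/secret_access_key"
  type  = "SecureString"
  value = "${aws_iam_access_key.ci.secret}"
}
```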

2019-09-25

Sharanya avatar
Sharanya

Components for secure UI hosting in S3

• S3 — for storing the static site

• CloudFront — for serving the static site over SSL

• AWS Certificate Manager — for generating the SSL certificates Route53 — for routing the domain name to the correct location Did anyone come across any modules for this in terraform ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s how we provision static site from S3 https://docs.cloudposse.com/

2
Sharanya avatar
Sharanya

thankq @Andriy Knysh (Cloud Posse)

oscar avatar
amancevice/terraform-aws-serverless-pypi

S3-backed serverless PyPI. Contribute to amancevice/terraform-aws-serverless-pypi development by creating an account on GitHub.

kj22594 avatar
kj22594

Hi all,

I’ve been using terraform (0.10 & 0.11) for close to three years now and as terraform 0.12 gets more support/becomes more of the industry standard, my team is looking to adopt it in a way where we can rearchitect our terraform structure, and reduce the general number of pain points across the team.

Currently we are a multi-region AWS shop that has single terraform repos for every service we deploy, with modules at the root of the repo, and directories representing each of our environments (qa-us-east-1, qa-eu-west-1). We run terraform from within those environment specific directories and push remote state to S3 to maintain completely separate state.

We’re thinking about how we can merge all of this into a single repo where:

  • There are modules that can be reused across all of our different services (they’d either live at the root of the base terraform repo or in a separate terraform modules repo that we can reference from within our base repo)
  • We duplicate as little code as possible (probably obvious but still worth mentioning)
  • We continue to keep all state separate on a per environment basis
  • Follow terraform best practices to make sure that upgrade paths continue to be easy/straightforward

We also want to keep in mind that we are shifting to a multi account AWS organization where our terraform will be deploying into different AWS accounts as well.

The team so far has demoed both Terragrunt and Terraform Workspaces. We are also considering not using workspaces or Terragrunt but still migrating to the single repo structure. There have been mixed opinions about all options considered. I’d love to get feedback from the community if anyone has opinions based on current or previous experiences with either.

kj22594 avatar
kj22594

Please note that we are currently not using Terraform Enterprise but that has been an option that could be considered as well

Tom de Vries avatar
Tom de Vries

Regarding the multiple AWS accounts, we have a similar setup where, depending on the env directory you’re in, we hop into the correct AWS account. Would that work for you, or are you planning on deploying the same environment within multiple accounts?

kj22594 avatar
kj22594

it would be different environments within multiple accounts. The rough plan is to have each of our teams have a production & development/test account. So one thought was that the specific account would be another extracted layer of directories, either a level above or below the env directory

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@kj22594 take a look here, similar conversation https://sweetops.slack.com/archives/CB6GHNLG0/p1569261528160800

Hi guys, Any of you has experience with maintenance of SaaS environments? What I mean is some dev, test, prod environments separate for every Customer? In my case, those environments are very similar, at least the core part, which includes, vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is to create one more module on top of it, called e.g. myplatform-core. The reason why I want to do that is instead of copying and pasting puzzles of modules between environments, I could simply create env just by creating/importing my myplatform-core module and passing some vars like name, location, some scaling properties. Any thoughts about it, is it good or bad idea in your opinion?

I appreciate your input.

kj22594 avatar
kj22594

thanks. I’ll take a look


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in short, we use the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Terraform modules to provision resources on AWS https://github.com/cloudposse?utf8=%E2%9C%93&q=terraform&type=&language=
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. A catalog of top-level modules where we assemble the low-level modules together and connect them. They are completely identity-less and could be deployed in any AWS account in any region https://github.com/cloudposse/terraform-root-modules/tree/master/aws
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. A container (geodesic, https://github.com/cloudposse/geodesic) with all the tools required to provision cloud infrastructure
cloudposse/geodesic

Geodesic is a cloud automation shell. It’s the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  4. Then, for a specific AWS account and specific region, we create a repo and Docker container, e.g. https://github.com/cloudposse/testing.cloudposse.co
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it provides:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1) all the tools to provision infrastructure

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

2) Settings and configs for a specific environment (account, region, stage/env, etc.). NOTE that secrets are read from ENV vars or SSM using chamber

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

3) The required TF code for each module that needs to be provisioned in that account/region gets loaded dynamically https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks/.envrc

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

4) to log in to AWS, an IAM role gets assumed in the container (we use aws-vault)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so once inside that particular container (testing.cloudposse.co), you have all the tools, all required TF code, and all the settings/configs (that specify where and how the modules get provisioned)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the code (logic) is separated from data (configs) and the tools (geodesic), but they get combined in a container for a particular environment

kj22594 avatar
kj22594

Wow, thanks. That makes a ton of sense and seems to be a very sound way of approaching this problem. I do really like the idea of having root level modules repo where you can interconnect different modules for use cases that happen numerous times but also having the modules split out so that they can be reused separately too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, while terragrunt helps you to organize your code and settings, this approach gives you much more: code/settings/tools in one container related to a particular environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(terragrunt still can be used to organize the code if needed) https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/

Terraform Without Wrappers is AWESOME!

One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the nice part about all of that is that the same container could be used from three different places: developer machine, CI/CD pipelines (those that understand containers like Codefresh or GitHub Actions), and even from GitHub itself using atlantis (which is running inside geodesic container) - that’s how we do deployment and testing of our modules on real AWS infrastructure

kj22594 avatar
kj22594

That is really cool. Atlantis is something that I’ve had conversations with a friend about but we’ve never actually implemented it or even tested it

kj22594 avatar
kj22594

I really appreciate this, this is all great knowledge and insight

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

party_parrot3

2019-09-26

Robert avatar

Hey does anyone have a terraform party slackmoji?

Robert avatar

I will trade you one terraform-unicorn-dab slackmoji.

Robert avatar
fast_parrot1
roth.andy avatar
roth.andy

omg I love this

2
maarten avatar
maarten

lol

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I’d love a terraform-parrot

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

hahaha

Robert avatar

I was hoping for something like my kubernetes party:

Robert avatar

I stole that from kubernetes.slack.com

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Nice!

Robert avatar

I just made it!

Robert avatar
2
terraform1
Robert avatar

Probably not my best work, but not bad for a first gif

Robert avatar
Robert
02:37:27 PM

¯\_(ツ)_/¯

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve added and

6
5
6
Robert avatar
Robert
06:32:12 PM
Joan Hermida avatar
Joan Hermida

Niiice!

Joan Hermida avatar
Joan Hermida

Where do I get that unicorn XD

Joan Hermida avatar
Joan Hermida

I really need it in my workspace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

1) download the icons above 2) go here https://$team.slack.com/customize/emoji where $team is your slack team

Joan Hermida avatar
Joan Hermida

Ohhh it is above

1
Joan Hermida avatar
Joan Hermida

XD

2019-09-27

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

@here I am trying to upgrade from v.11.14 to v.12 and after going through the upgrade steps and fixing some code changes, I am now seeing the following issue

Error: Missing resource instance key

  on .terraform/modules/public_subnets.public_label/outputs.tf line 29, in output "tags":
  29:         "Stage", "${null_resource.default.triggers.stage}"

Because null_resource.default has "count" set, its attributes must be accessed
on specific instances.

For example, to correlate with indices of a referring resource, use:
    null_resource.default[count.index]

did anyone face a similar issue and was able to fix it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try "${join("", null_resource.default.*.triggers.stage)}"
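Applied to the output in question, the pattern works because the splat returns a list (empty when count is 0), so join collapses it to a plain string. A sketch against a hypothetical outputs.tf:

```hcl
output "tags" {
  value = {
    Stage = "${join("", null_resource.default.*.triggers.stage)}"
  }
}
```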

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Hi

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I would apreciate some help with the terraform-aws-elasticsearch module

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

when trying to use it from the complete example

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

i get in a plan the following

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

that’s one example but i get that for all the variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have to provide values for all variables

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

it seems as if it were not reading the set variables

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

yeah, but in the variables.tf file?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or to use the .tfvar files, use :

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform plan -var-file="fixtures.us-west-1.tfvars"
terraform apply -var-file="fixtures.us-west-1.tfvars"
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

hoooo i see

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but don’t use our values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

change them

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

so the module blocks like module "elasticsearch" { blah blah should be empty of values if i use a tfvars file, right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can provide values for the vars from many diff places https://www.terraform.io/docs/configuration/variables.html

Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

rohit avatar

how do you provide credentials to private terraform github repository module ?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

like this in your providers.tf

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

then just set in variables.tf files the values

rohit avatar

thanks

rohit avatar

and how do i provide the path to github module if it is not at the root level

rohit avatar

for example, source = "git@github.com:hashicorp/example.git"

rohit avatar

but my main.tf is under modules directory

rohit avatar

@AgustínGonzalezNicolini how would i access it ?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

git@github.com:hashicorp/example.git//myfolder?ref=tags/x.y.z
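Put together in a module block, that source string might look like this (repo, folder, and tag are placeholders — the double slash marks the subdirectory, and ref pins a git tag):

```hcl
module "example" {
  source = "git@github.com:hashicorp/example.git//myfolder?ref=tags/x.y.z"
}
```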

rohit avatar

thanks

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Thanks @Andriy Knysh (Cloud Posse)!!!

2019-09-29

Bruce avatar

Hey guys, I am looking for the best way to roll back a change to an ASG to the previous known working ami as part of CICD pipeline with Terraform. Thinking of using a script to tag the previous AMI and using that to identify last known config. Has anyone else solved this problem?

roth.andy avatar
roth.andy

I’ve been asked to provision 3 EKS clusters: Dev, Staging, and Prod. What is the way that you guys do this? Currently, I’m thinking of

  • Having 3 branches in my git repo called “dev”, “staging”, and “prod”
  • Having 3 .tfvars files called dev.tfvars, staging.tfvars, prod.tfvars
  • If I commit to dev, My CICD runs terraform apply using a workspace called dev, using dev.tfvars
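The CI step described above might look something like this in a pipeline script (a sketch; workspace and var-file names follow the scheme in the bullets):

```shell
terraform workspace select dev || terraform workspace new dev
terraform plan -var-file=dev.tfvars -out=dev.plan
terraform apply dev.plan
```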
Nikola Velkovski avatar
Nikola Velkovski

Hi @roth.andy, personally I am a fan of workspaces. We used to have this setup but without the fixed branches: CI/CD automatically deployed a branch to staging, and for prod it was an interactive apply (if tests passed)

2019-09-30

Cloud Posse avatar
Cloud Posse
04:03:51 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 09, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Milos Backonja avatar
Milos Backonja

Guys, i’ve been using https://github.com/cloudposse/terraform-aws-vpc-peering to peer two VPCs and it works awesome. On the current project I need to peer N VPCs all with each other. As the number of VPCs grows it becomes pretty hard to manage everything even with terraform. Is there any way to dynamically create a peering mesh? CIDRs are carefully chosen so there will be no overlapping, and I can fetch all VPCs with a single data source. This shot from AWS describes my setup perfectly

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I’d suggest using transit gateway

jose.amengual avatar
jose.amengual
Working with Shared VPCs - Amazon Virtual Private Cloud

VPC sharing allows multiple AWS accounts to create their application resources, such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, Amazon Redshift clusters, and AWS Lambda functions, into shared, centrally-managed Amazon Virtual Private Clouds (VPCs). In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.

Stephane Minisini avatar
Stephane Minisini

@Milos Backonja I would look into Transit Gateway. This allows you to have a hub and spoke type of network and manage the routing tables centrally.

loren avatar

second this

Milos Backonja avatar
Milos Backonja

Awesome, thanks a lot, This simplifies my setup enormously. I will need to check/estimate costs.

loren avatar

you can do a bunch of other cool things with transit gateways, like centralize nat gateways, or hook in a central direct connect
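A rough sketch of the hub-and-spoke wiring in Terraform (TF 0.12 syntax; the variable shape is illustrative, and route table association/propagation plus the per-VPC routes back to the gateway are elided):

```hcl
variable "vpc_attachments" {
  type = map(object({
    vpc_id     = string
    subnet_ids = list(string)
  }))
}

resource "aws_ec2_transit_gateway" "hub" {
  description = "central hub"
}

# One attachment per VPC replaces the N*(N-1)/2 peering connections of a full mesh.
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
  for_each           = var.vpc_attachments
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = each.value.vpc_id
  subnet_ids         = each.value.subnet_ids
}
```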

rbadillo avatar
rbadillo

Hi Guys, does anyone here using Terraform Enterprise ?

Joan Hermida avatar
Joan Hermida

Hub n’ Spoke with VPC Transit Gateway

Brij S avatar

does anyone know how to add private subnets to the default vpc using terraform?

jose.amengual avatar
jose.amengual

Don’t use the default VPC, it is bad practice…

Brij S avatar

is there a module that creates a vpc with private subnet?

jose.amengual avatar
jose.amengual

yes, just go to the cloudposse github and search for vpc and subnets

jose.amengual avatar
jose.amengual

we use their modules and they work great

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @jose.amengual

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster

Brij S avatar

specifically lines 5-24, right?

jose.amengual avatar
jose.amengual

yes

rohit avatar

does the aws_alb_listener resource support multiple certificate_arns?

rohit avatar

i think it does using aws_lb_listener_certificate
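For reference, additional certificates attach to an existing listener via aws_lb_listener_certificate, one resource per extra cert, served via SNI (a sketch; the listener and certificate references are placeholders):

```hcl
resource "aws_lb_listener_certificate" "extra" {
  listener_arn    = aws_lb_listener.https.arn
  certificate_arn = aws_acm_certificate.extra.arn
}
```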

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

For those interested in the EKS modules, we’ve converted them to TF 0.12

    keyboard_arrow_up