#terraform (2020-05)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-05-01

Zachary Loeber avatar
Zachary Loeber

Anyone take the HashiCorp Certified Terraform Associate exam yet?

Eric Malenfant avatar
Eric Malenfant

It may have been asked before, but I’m new to checking out these awesome modules. I have an environment/account already set up (out of my control) - i.e. vpc, ig, subnets, etc. already created. Is it possible to use a module like terraform-aws-elastic-beanstalk-environment and not create all the extras, or would I have to go through and import everything?

jose.amengual avatar
jose.amengual

no need to create those resources again

jose.amengual avatar
jose.amengual

you can do Data lookups

jose.amengual avatar
jose.amengual

based on tags

Eric Malenfant avatar
Eric Malenfant

that sounds like it could be easier.

jose.amengual avatar
jose.amengual

like that
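
The snippet referenced here didn’t survive the archive; a minimal sketch of that kind of tag-based lookup (resource names and tag values are illustrative) would be:

data "aws_vpc" "existing" {
  tags = {
    Name = "my-existing-vpc"
  }
}

data "aws_subnet_ids" "existing" {
  vpc_id = data.aws_vpc.existing.id
}

The looked-up IDs can then be passed into the module inputs instead of creating new resources.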

Eric Malenfant avatar
Eric Malenfant

I’m going to have to lookup a lot of tags..

Eric Malenfant avatar
Eric Malenfant

thanks

jose.amengual avatar
jose.amengual

you need to define enough to find it

jose.amengual avatar
jose.amengual

not all

jose.amengual avatar
jose.amengual

if it has a tag that is unique, then that is it

msharma24 avatar
msharma24

Hi everyone. A team at work has been developing their entire infrastructure in one single CloudFormation template file, which is now over 4,000 lines of spaghetti. I helped them fix a bunch of cyclic dependencies, and now I want to rewrite the CFT as TF modules. It mainly has a lot of inline Lambda, Glue jobs, catalogs, tables, CW alarms and events, and many S3 buckets

This infrastructure is live in production in five environments

I’m seeking advice on the best approach to migrate the infrastructure I have developed into Terraform.

What about the existing S3 buckets, which have TBs of data? Should I do a TF import on the S3 buckets? Or deploy a parallel env with TF and delete CFT elements manually, since I can’t delete the CFT stack?

jose.amengual avatar
jose.amengual

I would use Terraformer

jose.amengual avatar
jose.amengual
GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

jose.amengual avatar
jose.amengual

we recently used it to import some stuff

jose.amengual avatar
jose.amengual

worked very well

jose.amengual avatar
jose.amengual

we just changed the resource names a bit

msharma24 avatar
msharma24

Noice

msharma24 avatar
msharma24

I will take a look at this.

jose.amengual avatar
jose.amengual

I will say, creating each resource bit by bit will be better; using modules reduces the time for implementation

jose.amengual avatar
jose.amengual

but it is possible to do it with this

msharma24 avatar
msharma24

@jose.amengual thanks for your advice

jose.amengual avatar
jose.amengual

np

vFondevilla avatar
vFondevilla

+1 for Terraformer, we’re using it for moving parts of the current infrastructure that was manually deployed by the previous lead to Terraform modules.

Alex Lam avatar
Alex Lam

what about cdk? seems neat, gives you full benefit of a real programming language

jose.amengual avatar
jose.amengual

it is going to take you a REALLY long time to do

msharma24 avatar
msharma24

It’s a mountain of a task to climb :)

jose.amengual avatar
jose.amengual

has anyone seen this?????

security_groups  = [
                  + "sg-015133333333d473b",
                  + "sg-05294444444432970",
                  + "sg-0a35555555553ea35",
                  + "sg-0c8a33333dbca389b",
                  + "sg-022222229ccf66e24",
                  + "terraform-20200502011517105000000003",
                ]
jose.amengual avatar
jose.amengual

Data lookup where that comes from is this :

data "aws_security_groups" "atlantis" {
  filter {
    name   = "group-name"
    values = ["hds-${var.environment}-atlantis-service"]
  }
  provider = aws.primary
}
msharma24 avatar
msharma24

Weird, looks like a timestamp to me. What happens if you apply it?

msharma24 avatar
msharma24

Don’t know why it is under = []

jose.amengual avatar
jose.amengual

it complains that it is expecting sg- as a value

jose.amengual avatar
jose.amengual

never seen such a thing before

msharma24 avatar
msharma24

Taint the resource?

msharma24 avatar
msharma24

If you can’t cure it, kill it lol

jose.amengual avatar
jose.amengual

lol

jose.amengual avatar
jose.amengual

it was created by another TF

jose.amengual avatar
jose.amengual

it’s there, it works

jose.amengual avatar
jose.amengual

I can add it manually in the UI

jose.amengual avatar
jose.amengual

I used .id instead of .ids.

jose.amengual avatar
jose.amengual
data.aws_security_groups.atlantis.id
jose.amengual avatar
jose.amengual

returns the terraform-2222

jose.amengual avatar
jose.amengual
data.aws_security_groups.atlantis.ids.* returns the id
jose.amengual avatar
jose.amengual

stupid

jose.amengual avatar
jose.amengual

it should not return anything

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


it’s there, it works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you mean you could see the id in the AWS UI??

jose.amengual avatar
jose.amengual

I mean it should give you some error, since the data source is aws_security_groups and not the resource “aws_security_group”

jose.amengual avatar
jose.amengual

very subtle difference

jose.amengual avatar
jose.amengual
# atlantis
data "aws_security_groups" "atlantis" {
  filter {
    name   = "group-name"
    values = ["hds-${var.environment}-atlantis-service"]
  }
  provider = aws.primary
}

resource "aws_security_group" "ods-purge-lambda-us-east-2" {
  name        = "hds-${var.environment}-ods-purge-us-east-2"
  description = "Used in lambda script"
  vpc_id      = local.vpc_id
  tags        = local.complete_tags
  provider    = aws.primary

}

locals {
  sg-atlantis             = join("", data.aws_security_groups.atlantis.ids.*)
  sg-ods-purge-us-east-2  = join("", aws_security_group.ods-purge-lambda-us-east-2.id.*)
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha! now i get it

jose.amengual avatar
jose.amengual

you see how easy it is to make a mistake?

jose.amengual avatar
jose.amengual

WTH is that terraform-XXXX???
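
A likely explanation, assuming the AWS provider behavior of the time: the id attribute of the aws_security_groups data source is a Terraform-generated unique identifier for the data source itself (hence the terraform-20200502… string), while the matched security group IDs live in the ids attribute:

locals {
  # .ids holds the matched security group IDs; .id is only the data source's own ID
  sg_atlantis = join("", data.aws_security_groups.atlantis.ids)
}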

2020-05-02

2020-05-03

Carlos R. avatar
Carlos R.

Hello, I have a newbie-type question. Is there a way to force rebuilding a specific resource? Basically, using “terraform taint” on resources other than EC2 instances, such as an AWS Kinesis stream or DynamoDB table, etc. (My current workaround is changing the resource name, which usually forces the rebuild; however, it’s not very practical.)

jose.amengual avatar
jose.amengual

well, if you remove the resource and add it again, it does that too

jose.amengual avatar
jose.amengual

you can add a count argument and enable and disable it
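
A minimal sketch of that toggle (variable and resource names illustrative): apply once with the flag off to destroy the resource, then again with it on to recreate it.

variable "stream_enabled" {
  type    = bool
  default = true
}

resource "aws_kinesis_stream" "example" {
  count       = var.stream_enabled ? 1 : 0
  name        = "example-stream"
  shard_count = 1
}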

Carlos R. avatar
Carlos R.

yeah, that’s a better option than renaming, on one hand. On the other hand it requires more steps. In the setup I have in mind (automated testing), it actually might be better.

Carlos R. avatar
Carlos R.

nvm, got it working with terraform taint actually

jose.amengual avatar
jose.amengual

I’m not so familiar with taint

2020-05-04

Zach avatar

taint marks the state as ‘bad’ and terraform will destroy and recreate it

Zach avatar

You can also use ‘untaint’ for times when terraform gives up on a resource or gets confused and wants to destroy it, even though it turned out ok
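
For reference, the invocations look like this (resource address illustrative):

terraform taint aws_kinesis_stream.example    # destroy and recreate on next apply
terraform untaint aws_kinesis_stream.example  # clear the taint if the resource is actually fine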

RB avatar
RB
01:21:46 PM

If a terraform module (module UP) is updated, is there a way for atlantis or something else to rerun a terraform plan for a module (module DOWN) that depends on UP ?

loren avatar

personally, we version all our modules, and use dependabot to update the version refs with a pr…

If a terraform module (module UP) is updated, is there a way for atlantis or something else to rerun a terraform plan for a module (module DOWN) that depends on UP ?
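
A sketch of what such a pinned ref looks like (source URL and tag illustrative); dependabot watches the ref value and opens a PR when a newer tag is published:

module "up" {
  source = "git::https://github.com/example-org/terraform-up-module.git?ref=v1.2.3"
}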

RB avatar

interesting, so dependabot will notice that module UP has changed, somehow find all modules that depend on UP, and submit a PR to them updating the git reference tag in the source, which will trigger atlantis’ terraform plan comment?

RB avatar

if i understand that correctly… how do you configure dependabot to do this ?

loren avatar

if it’s a public project, and tf 0.11, then the free dependabot service works great

loren avatar

if you need tf 0.12, we’ve built a fork and a github-action

RB avatar

could you link this ?

RB avatar

i’m surprised you folks haven’t blogged about this.

loren avatar
plus3it/terraform-aws-codecommit-pr-reminders

Terraform module that deploys a lambda function which will publish open pull requests to Slack - plus3it/terraform-aws-codecommit-pr-reminders

loren avatar

(note that’s a read-only github token, please don’t flay me)

RB avatar

haha i was just wondering about that. might be good to comment that inline

loren avatar

here’s the repo for the gh-action, https://github.com/plus3it/dependabot-terraform-action/

plus3it/dependabot-terraform-action

Github action for running dependabot on terraform repositories with HCL 2.0 - plus3it/dependabot-terraform-action

loren avatar

and our fork of dependabot-core, which the gh-action uses, https://github.com/plus3it/dependabot-core

plus3it/dependabot-core

The core logic behind Dependabot’s update PR creation - plus3it/dependabot-core

loren avatar

hopefully dependabot will eventually get around to merging tf012 support upstream… https://github.com/dependabot/dependabot-core/pull/1388

Adds terraform 0.12 support by userhas404d · Pull Request #1388 · dependabot/dependabot-core

Fixes #1176 I opted for both hcl2json and terraform-config-inspect. hcl2json for terragrunt and terraform-config-inspect for tf 0.12 I wanted to go with terraform-config-inspect for both, but it di…

Cloud Posse avatar
Cloud Posse
04:00:17 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is May 13, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Mr.Devops avatar
Mr.Devops

Hoping someone can help - I’m curious how you would use the import command to import an existing resource within TFC, if I’m using TFC to trigger plan/apply and not the CLI?

Matt Gowie avatar
Matt Gowie

@Mr.Devops You can still use import to update your TFC state from your local environment AFAIK. Once you update it from your local environment, it will be in your state and TFC should just know it’s there.
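
A sketch of that flow, assuming the remote backend and an S3 bucket as the resource being imported (org, workspace, and bucket names illustrative): with the backend pointing at the TFC workspace, a local terraform import writes directly to that workspace’s state.

terraform {
  backend "remote" {
    organization = "my-org"

    workspaces {
      name = "my-workspace"
    }
  }
}

Then locally: terraform init && terraform import aws_s3_bucket.example my-existing-bucket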

Mr.Devops avatar
Mr.Devops

ah ok i will look into this thx @Matt Gowie

2020-05-05

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
04:25:30 PM

HashiCorp Consul Service for Azure (HCS) Affected May 5, 16:23 UTC Investigating - We are currently experiencing a potential disruption of service to the HashiCorp Consul Service for Azure (HCS). HCS Cluster creations are currently failing. Teams are currently working to identify a solution and will update as soon as information is available.

HashiCorp Consul Service for Azure (HCS) Affected

HashiCorp Services’s Status Page - HashiCorp Consul Service for Azure (HCS) Affected.

OliverS avatar
OliverS

Hi, I’m using CloudPosse’s terraform-aws-eks-cluster module. How do I decide whether to instantiate the worker nodes with their terraform-aws-eks-node-group module vs. their terraform-aws-eks-workers module? The node-group approach seems to use what is intended for EKS, whereas the workers module uses an autoscaling group. Am I correct that the node-group module is the way to go?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there are pros and cons with managed node groups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, the workers module was created long, long before managed node groups were supported

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would start with managed node groups until you find a use-case that doesn’t work for you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for example, if you need to run docker-in-docker for Jenkins, then the managed node groups won’t work well for that and you should use the workers module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep in mind, that a kubernetes cluster can have any number of node pools

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and you can mix and match node pools from different modules concurrently in the same cluster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can have a Fargate node pool alongside a managed node pool, alongside an ASG workers node pool

OliverS avatar
OliverS

thanks much appreciated

s_slack avatar
s_slack

@Erik Osterman (Cloud Posse) Would you mind elaborating on why dind won’t work well with managed node groups? I use dind and was thinking of moving my nodes to managed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@s_slack EKS disables the Docker bridge network by default, and Managed Node Groups do not allow you to turn it back on. This is a problem if your Dockerfile includes a RUN command that installs a package, for example, as it will not have internet connectivity. Another issue for us is that you cannot configure Managed Node Groups to launch nodes with a Kubernetes taint on startup, which we wanted in order to keep our spot instances from having non-whitelisted applications running on them.

Docker in Docker no longer works without docker0 bridge · Issue #183 · awslabs/amazon-eks-ami

What happened: We utilize docker-in-docker configuration to build our images from CI within our dev cluster. Because docker0 has been disabled, internal routing from the inner container can no long…

s_slack avatar
s_slack

Perfect. Much appreciated. @Jeremy G (Cloud Posse) @Erik Osterman (Cloud Posse)

breanngielissen avatar
breanngielissen

Hi. I am hoping someone can help me with an interesting problem. I am trying to use the Data Lifecycle Manager to create snapshots of the root volume of my instance, but the root EBS volume needs to be tagged. Terraform doesn’t have a way to add tags to the root_block_device of the aws_instance. I tried to use data.aws_ebs_volume to find the EBS volume that is created, but I can’t figure out how to use that to tag it. The resource aws_ebs_volume doesn’t seem to have a way to reference the id from data.aws_ebs_volume, which means that I can’t import the volume either. Hope that makes sense.

Matt Gowie avatar
Matt Gowie

@breanngielissen Can you tag via the AWS CLI? If you can then you can use the local-exec provisioner to tag it.

resource "null_resource" "tag_aws_ebs_volume" {
  provisioner "local-exec" {
    command = <<EOF
      $YOUR_AWS_CLI_CODE_TO_TAG
EOF
  }
}
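
A sketch of what that could look like, assuming the instance is aws_instance.example and using its exported root_block_device volume ID (tag key and value illustrative):

resource "null_resource" "tag_root_volume" {
  provisioner "local-exec" {
    command = <<EOF
aws ec2 create-tags \
  --resources ${aws_instance.example.root_block_device[0].volume_id} \
  --tags Key=Snapshot,Value=true
EOF
  }
}
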
breanngielissen avatar
breanngielissen

@Matt Gowie That worked perfectly. Thank you!

Matt Gowie avatar
Matt Gowie

@breanngielissen Glad to hear it!

2020-05-06

RB avatar

anyone know of a terraform module for opengrok or a similar code search app that can be run in aws ?

RB avatar

their docker container description says to use the standalone one …

https://hub.docker.com/r/opengrok/docker

RB avatar

i was looking into ECS using its docker image, but it might be better to use an EC2 instance with userdata to set up opengrok, and keep code on an EFS

RB avatar

or just use an EBS

RB avatar

so then the setup would be…

• EC2

• userdata to retrieve ssh key, configure opengrok, clone all github org’s repos

• EBS / EFS for storage of github org repos

• ALB with a target group

• route53 record

• acm cert

curious to hear other people’s thoughts

Joe Presley avatar
Joe Presley

Can Sentinel be used to enforce policies on IAM roles? For example, don’t grant IAM roles related to networking to a user? I searched through the documentation but couldn’t find any examples related to IAM.

cabrinha avatar
cabrinha

It’d be nice if this module allowed additional IAM permissions: https://github.com/cloudposse/terraform-aws-emr-cluster

cloudposse/terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster

RB avatar

ooo let me take a look at this


RB avatar

you can use a data source to grab the IAM role that it creates and then do an aws_iam_role_policy_attachment to attach a new policy to it, outside of its module reference.
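
A minimal sketch of that pattern (the role name and policy ARN are illustrative; the real name follows the module’s naming convention):

data "aws_iam_role" "emr_ec2" {
  name = "my-emr-cluster-ec2-role"
}

resource "aws_iam_role_policy_attachment" "extra" {
  role       = data.aws_iam_role.emr_ec2.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}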

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, this is our typical pattern - provision the role, expose it, then let the caller attach policies

RB avatar

it would be nice if those modules that are already creating the IAM role(s) would also output their respective ARNs. then you wouldn’t need to use a data source to get the ARN

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB ya I’d consider it a “bug” if they don’t

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using a data source shouldn’t be needed.

cabrinha avatar
cabrinha

because in order to use bootstrap actions, the instance needs permissions to download the file

raghu avatar

Hi guys, is there any example that creates a Global Accelerator with an ALB as the endpoint?

raghu avatar

I think I found it

2020-05-07

Haroon Rasheed avatar
Haroon Rasheed

Hi All - I would like to set up a simple AWS EKS cluster with 1 master and 2 worker nodes. Which set of terraform files do I need to use? Please guide. I have been trying to figure it out and ended up bringing up something I could not access from my local machine

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you seen our EKS cluster module? It has a good example to get you started

Haroon Rasheed avatar
Haroon Rasheed

yes, but it has a lot of things: in the root folder there are some .tf files, and in the examples I can see the ‘complete’ folder has a lot of files. Not sure from which folder I need to trigger “terraform apply”, or which variables.tf file I need to feed my details into

Martin Tooming avatar
Martin Tooming

Hey, I’m thinking about how to properly manage resources created by AWS with Terraform. One example is the S3 bucket which is created for Elastic Beanstalk (elasticbeanstalk-<region>-<account_id>). I would like to add cross-region replication and encryption for this bucket for compliance reasons. Any ideas?

jose.amengual avatar
jose.amengual

here better

Martin Tooming avatar
Martin Tooming

this is for the Security Hub PCI standard, which requires S3 buckets to have cross-region replicas and encryption

jose.amengual avatar
jose.amengual

cross account or cross region ?

Martin Tooming avatar
Martin Tooming

cross region

Martin Tooming avatar
Martin Tooming

on one account

jose.amengual avatar
jose.amengual

so then you do not need the account id in the name in that case ?

jose.amengual avatar
jose.amengual

we did team-nameofthing-region

jose.amengual avatar
jose.amengual

we added the regions to everything

jose.amengual avatar
jose.amengual

we even duplicated similar IAM roles for that

jose.amengual avatar
jose.amengual

because IAM is a global service

jose.amengual avatar
jose.amengual

but resources are not

jose.amengual avatar
jose.amengual

which makes things very difficult to troubleshoot in case of access issues and such

jose.amengual avatar
jose.amengual

and you need to keep in mind that some resources like ALBs can’t have names longer than 32 characters

jose.amengual avatar
jose.amengual

so it’s better to keep it short

Martin Tooming avatar
Martin Tooming

the bucket is created by AWS automatically like that

Martin Tooming avatar
Martin Tooming

and I haven’t found a place where I could specify the bucket for Beanstalk myself

jose.amengual avatar
jose.amengual

mmm I do not know much about EB

jose.amengual avatar
jose.amengual

is this for a multi region setup ?

drexler avatar
drexler

Hi. I’m using TF modules and wondering if anyone has a hack to conditionally enable/disable one based on a variable.

loren avatar

currently, the module has to support it, by accepting a variable that is used in the count/for_each expression on every resource in the module

loren avatar
terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

loren avatar

follow the create_vpc value through the module .tf files…
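
The shape of that pattern inside a module is roughly this (a sketch; create_vpc is the variable that module actually uses, the resource body is illustrative):

variable "create_vpc" {
  type    = bool
  default = true
}

resource "aws_vpc" "this" {
  count      = var.create_vpc ? 1 : 0
  cidr_block = var.cidr_block
}

# every reference must then index the resource, e.g. aws_vpc.this[0].id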

drexler avatar
drexler

thanks. i’ll take a look.

loren avatar

terraform core is working on module-level count/for_each support. it is in-scope for tf 0.13… https://github.com/hashicorp/terraform/issues/17519#issuecomment-605003408

count and for_each for modules · Issue #17519 · hashicorp/terraform

Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf locals { map1 = { name1 = &quot;foo&quot; name2 = &quot;bar&quot; } } main.tf module &quot;x1&quot; { sour…
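
Once module-level count landed in 0.13, the same toggle could move to the call site, roughly like this (a sketch; module source and inputs illustrative):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  count  = var.create_vpc ? 1 : 0

  name = "example"
  cidr = "10.0.0.0/16"
}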

Haroon Rasheed avatar
Haroon Rasheed

Managed to deploy an EKS cluster using Terraform, with the basics running. However, when I try to connect from the local machine from where I ran terraform, I get the below error message: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arnxxxxxxx:user/xxxx is not authorized to perform: sts:AssumeRole on resource: arnxxxxxxx:user/xxxx

Haroon Rasheed avatar
Haroon Rasheed

I am using the same AWS access key and secret key to deploy the EKS cluster using terraform, but when I try to connect to it from the same machine with the same AWS config, I get this error message. Any help would be really appreciated.

David Scott avatar
David Scott

Is that error from running kubectl? What does your kubeconfig look like? It’s probably in ~/.kube/config. (Don’t paste the whole config in here; we don’t need to see your cluster address or certificate) If the access keys on your local system already have permissions to connect to EKS, make sure your kubeconfig doesn’t have a redundant role assumption in the args:

args:
  ...
  - "--role"
  - "<role-arn>"
Haroon Rasheed avatar
Haroon Rasheed

Actually yes, I have added my role again using a command - the same role which I am using to connect to EKS. I did this using the command aws eks --region us-east-1 update-kubeconfig --name test-xxx --role-arn arn:awsiam:xxxx:user/xxx

Haroon Rasheed avatar
Haroon Rasheed

This is because when I checked the kubeconfig file, it had the new IAM role created by Terraform but not mine, which I use to connect from my local machine. Only because of this I assumed I couldn’t connect, so I added it. So you mean if we don’t add it, I should be able to connect?

Haroon Rasheed avatar
Haroon Rasheed

It works if I don’t add the redundant role. Thanks @David Scott for your help. You saved me a lot of time

David Scott avatar
David Scott

Glad I could help!

x80486 avatar

One silly question: I use terraform_state_backend. I was on version 0.16.0, and when I try to upgrade and use version 0.17.0 it’s telling me something like Provider configuration not present because of null_data_source. I don’t know how to solve that conflict and I can’t run destroy :rolling_on_the_floor_laughing: …so is it safe to just remove all the modules with type "null_data_source" from terraform.tfstate? That’s the only way I’ve found to make it work, or better said: to not show that error message. I don’t know if it works; I’ve only run up to terraform plan

x80486 avatar

All right! I found that going back to 0.16.0 and running terraform destroy -target=the_troublemaker, then terraform plan, will do it… after that, upgrading to 0.17.0 didn’t give me any problems

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, we’ve seen that before as well. The problem is the explicitly defined providers in modules, e.g. https://github.com/cloudposse/terraform-null-label/blob/0.13.0/versions.tf#L5

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you define providers in modules, then you can’t just remove the module from the code or rename it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF will not be able to destroy it w/o the original provider present

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

label module 0.16.0 does not have it defined anymore

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform - refactoring modules: Error: Provider configuration not present

I’m refactoring some Terraform modules and am getting: Error: Provider configuration not present To work with module.my_module.some_resource.resource_name its original provider configuration at m…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(the moral of the story: try not to define providers in sub-modules; provide the providers from top-level modules. In most cases, they are inherited from top-level modules automatically)
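
A minimal sketch of that arrangement (region illustrative): the provider lives only in the root module, and child modules simply inherit it.

# root module
provider "aws" {
  region = "us-east-1"
}

module "example" {
  source = "./modules/example"
  # no provider block inside the module; it inherits the root's aws provider
}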

jose.amengual avatar
jose.amengual

this is very true….I just hit this issue with the github webhooks module

maarten avatar
maarten

I’m dealing with the following: different customers, different terraform projects across different github repositories, some on the customer’s github org, some on mine. The different projects can have information in common, like my ssh keys, e-mail addresses, whitelisted IPs. One idea that was floated was: why not put those semi-static vars in a private git terraform module and distribute them like that. I’m personally afraid this module will end up like the Windows registry, but I have no valid alternative either. I’m curious to know what everyone’s take is on this.

Joe Presley avatar
Joe Presley

I heard of that solution from a colleague who worked on a project with a customer where they did that. I didn’t go down that route, because the things that were shared were things that didn’t change much, like project ids (this is for GCP). I simply hardcoded the outputs from various runs into the vars as I ran the environment from start to finish.

Joe Presley avatar
Joe Presley

But I can see the central terraform module as either a God Class or as a glorified parameter store.

Matt Gowie avatar
Matt Gowie

@Andriy Knysh (Cloud Posse) Does that mean you folks at CP will eventually be removing your usage of providers (like GH provider for GH webhooks) in your modules?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if they are still in some modules, yes, they need to be removed. We tried to clean it up in all modules converted to TF 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you see providers in modules, let us know

Matt Gowie avatar
Matt Gowie

I think this is the only one that I know of. I hit issues with it with the codepipeline module on a project. And I know @jose.amengual had troubles with it a week or two back.

Matt Gowie avatar
Matt Gowie
cloudposse/terraform-github-repository-webhooks

Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, that one will trigger changes across many modules

jose.amengual avatar
jose.amengual

yes that is the one, I had that problem

Matt Gowie avatar
Matt Gowie

@Andriy Knysh (Cloud Posse) Yeah… I could see that being a PITA. Y’all doing a good job with versioning won’t make it too bad, I would hope?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes versioning will help not to break everything (if people don’t pin to master )

jose.amengual avatar
jose.amengual

I bet there are a few million doing that lol

Matt Gowie avatar
Matt Gowie

Ah yes, hopefully people listen to you folks and don’t pin to master. But if they do then they’re outta luck.

Matt Gowie avatar
Matt Gowie
08:07:42 PM

¯\_(ツ)_/¯

jose.amengual avatar
jose.amengual

I think the examples in the README should be pinned to the latest version

jose.amengual avatar
jose.amengual

not to master

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes they should (the latest version always changes though)

Matt Gowie avatar
Matt Gowie

Sounds like a good automation task for release:

  1. Build README with new tag as input, README example is updated to use new tag
  2. Tag repo
  3. Push repo
jose.amengual avatar
jose.amengual

but codefresh will have to inject that, right ?

Tony avatar

Does anyone know if it’s possible to create an AWS Client VPN completely with Terraform? It seems like there are some resources missing from the provider, such as route tables and authorizations

jose.amengual avatar
jose.amengual

correct

jose.amengual avatar
jose.amengual

I just went through this a few months back

jose.amengual avatar
jose.amengual

Route Tables and authorizations need to be done manually

jose.amengual avatar
jose.amengual

or via the CLI

Tony avatar

ok thank you, I’ve been going crazy wondering why I stopped where I did on writing the Terraform to create this a few weeks back

Tony avatar

now it makes sense

jose.amengual avatar
jose.amengual

I was surprised too, since without those, nothing works!!!!!

Tony avatar

lol right? let’s just give them half of what they need! One more quick question: do you have to associate the security groups with the networks you associate with the VPN manually as well?

Tony avatar

i don’t see a way to do that with that resource either

loren avatar
Supporting the HashiCorp Terraform Extension for Visual Studio Codeattachment image

We are working internally to update the community VS Code extension to fully support Terraform 0.12 syntax and use our Language Server by default. A new version will be shipping later this year with the updates.

Matt Gowie avatar
Matt Gowie

Question for VSCode folks — Is it that much better than Atom?

I’ve switched text editors a few times over the years from emacs => Sublime => Atom. Now I’m finally considering another switch to VSCode cause people love it so much, but I’m not sure if I get the hype.


Tony avatar

it’s pretty good. I made the switch and love it. I’ve even slowly been forgetting about PyCharm when working on Python and just defaulting to vscode

Tony avatar

I’d recommend just installing it and checking it out. I can give you a list of good starting extensions to grab if you want, too

Tony avatar

and my settings.json file with a bunch of customization, just so you can see how it all works

Matt Gowie avatar
Matt Gowie

Those are some nice offers — makes this more attractive… Yeah, I’ll happily take you up on that.

Tony avatar

sure, give me about 20 mins - on a call until about 5pm CST

Matt Gowie avatar
Matt Gowie

Thanks man!

loren avatar

i find vscode significantly faster than atom. or did when i switched away from atom several years ago…

Tony avatar

@Matt Gowie sorry about that, completely forgot about you. I forgot I went through and commented my settings file as well the other day, so that may help you understand some of it too. https://gist.github.com/afmsavage/1bf0241472f74fa21112d4d3698bcb80

Matt Gowie avatar
Matt Gowie

Awesome — Thanks @Tony

RB avatar

terraform registry now exposes the # of downloads per module. 1 more metric for vetting open source modules.

https://registry.terraform.io/search?q=shell

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nice! thanks for sharing

Julio Tain Sueiras avatar
Julio Tain Sueiras

@loren working with the guys on concurrent development with a unified goal. right now the hashicorp one is more stable, with a lot fewer features, and mine is less stable but with more experimental features (it’s explained in the repo)

loren avatar

Thank you for all your work! Been using your language server for quite a while. Brilliant work!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store

Joe Niland avatar
Joe Niland

Thanks @Matt Gowie and @Andriy Knysh (Cloud Posse) - definitely appreciate the 0.12 upgrade!


btai avatar

any eks + spotinst + terraform integration around here? Is it possible to do everything within terraform? I’m POC-ing it right now through the portal (where it basically drains all your nodes to their nodes), but I’m curious how that would work w/ my existing terraform state

jedineeper avatar
jedineeper

I have it all within terraform, but not as a single run. Can’t figure out how best to reinstate the same number of nodes, so I swap out to an on-demand scaling group first, drain to it, then swap back in the spot ASGs

btai avatar

ugh. not a fan of that.

jedineeper avatar
jedineeper

Yeah :(

2020-05-08

conzymaher avatar
conzymaher

Hi Folks, I heard about this Slack in one of Anton’s talks on YouTube

conzymaher avatar
conzymaher

I have a pretty broad and general question about modules and module composition. In a previous role I built out a multi-account/multi-region AWS architecture very much following the terragrunt methodology, i.e. a single repo that defined which modules were live in which accounts/regions, based on directory structure and git-tagged modules

conzymaher avatar
conzymaher

In a new role I have a clean slate. Introducing Terraform to the organisation and using Terraform Cloud. Terraform Cloud supports private module registry where a repo of the form terraform-<provider>-<name> can be automatically published as a module when a git tag is pushed

conzymaher avatar
conzymaher

Historically I am used to working with a big monorepo where all modules reside. A huge advantage of this is easier module composition. i.e a module called service_iam could include dozens of other IAM helper modules from the same repo

conzymaher avatar
conzymaher

I suppose I am having a bit of trouble figuring out in my head what my new approach should be. I want to avoid code duplication, and also a spaghetti of modules referring to other modules at specific versions

conzymaher avatar
conzymaher

There’s a question in there somewhere…

conzymaher avatar
conzymaher

Has anyone else done this “transition”? I don’t think I want to end up with a repo per module, as that has management/operational costs

conzymaher avatar
conzymaher

Should I just create registry modules that have many many sub modules nested inside?

Matt Gowie avatar
Matt Gowie

@conzymaher If you’re wanting to lean away from doing a module per repo and you’re looking to use the registry, then I’d check out the modules from the terraform-aws-modules GH org. The terraform-aws-iam one is great. They follow the multiple-modules-in-one-repo pattern, and it works well in my experience.

https://github.com/terraform-aws-modules/terraform-aws-iam

terraform-aws-modules/terraform-aws-iam

Terraform module which creates IAM resources on AWS - terraform-aws-modules/terraform-aws-iam

conzymaher avatar
conzymaher

I’m a big fan of those modules. None of them have a dependency on each other, though. But there’s nothing to stop that


conzymaher avatar
conzymaher

Thanks for the pointer. I hadn’t considered looking at some of the other modules in that org. The atlantis module is basically the model I want: a “top level” module that uses other modules that are outside of that module, but can also use optional sub-modules

Matt Gowie avatar
Matt Gowie

Np. Glad you found one that is what you’re looking for!

x80486 avatar

Cloud Posse folks, do you have any plans to publish a Terraform module for deploying a typical Lambda function? I can’t find anything on your GitHub account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have a few very specific modules that deploy lambdas for very specific tasks https://github.com/cloudposse?q=lambda-&type=&language=

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe you can extract what you need from one of them

x80486 avatar

Yeah, I get it… that’s what I’m doing. I was just thinking that it would be nice to have one module for a generic Lambda; I know it’s (almost) straightforward, but there are a few corner cases that modules usually take care of

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

your contribution to the open-source community would be very welcome @x80486

loren avatar

@randomy has several awesome modules for managing lambdas

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we have no plans right now to publish a module for generic lambda workflows)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
claranet/terraform-aws-lambda

Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but haven’t used it

randomy avatar
randomy

Thanks. https://github.com/raymondbutcher/terraform-aws-lambda-builder is better than the Claranet one IMO (I made both)

raymondbutcher/terraform-aws-lambda-builder

Terraform module to build Lambda functions in Lambda or CodeBuild - raymondbutcher/terraform-aws-lambda-builder

Matt Gowie avatar
Matt Gowie

Has anyone used https://github.com/liamg/tfsec and had it actually find legit security vulnerabilities? I’m skeptical.

liamg/tfsec

Static analysis powered security scanner for your terraform code - liamg/tfsec

RB avatar

ya, i use that currently in atlantis


Matt Gowie avatar
Matt Gowie

What’re your thoughts? Has it caught any serious gotchas for you?

RB avatar

it’s nice and i believe it has some overlap with checkov and tflint

RB avatar

but it does find certain things the others don’t.

we really need a nice comparison betw linters and their rules

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

So I’m noticing that we’re having issues w/ our child modules when folks remove them from their root module due to having a provider block in the child module. We currently use the provider block to setup the following for aws:

assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }

Is there a way to set up assume_role for the child module so we can test it without the provider block, so as not to get missing-provider error messages like the following?

To work with module.kms.aws_kms_key.this its original provider configuration
at module.kms.provider.aws is required, but it has been removed. This occurs
when a provider configuration is removed while objects created by that
provider still exist in the state. Re-add the provider configuration to
destroy module.kms.aws_kms_key.this, after which you can remove the provider
configuration again.

Or a cleaner pattern to follow?

TIA

loren avatar

we always setup the provider credentials in the root module, using multiple providers with aliases if there are multiple credentials (e.g. different roles). we then pass the provider alias to the module that needs it

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

We compose our modules as well, and run tests against them for the larger piece of infrastructure they build. I’m just wondering how I could continue testing the child module w/o having a way to set up some necessary config. We use kitchen-terraform currently. Maybe it entails working w/ that to test the child module w/o baking in a provider block, which causes headaches down the road.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, this issue is coming up more and more often. we’ve (@Andriy Knysh (Cloud Posse)) fought a few issues in the past week

loren avatar

yeah, i think you’d have to setup the credential through kitchen-terraform

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

OK - so I’m not crazy then?

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

At least we’re in agreement - cause after re-reading the docs - I see the issue w/ nested providers - but now need to figure out how we can still test independently our child modules w/o those provider blocks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform - refactoring modules: Error: Provider configuration not present

I’m refactoring some Terraform modules and am getting: Error: Provider configuration not present To work with module.my_module.some_resource.resource_name its original provider configuration at m…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

see the thread below that message

loren avatar

we also have multi-provider modules, for cross-account actions (e.g. vpc peering). in that case you must have a provider block in the module. but we only define the alias. the caller then passes the aliased providers they define to each aliased provider in the module
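
A sketch of that cross-account shape (alias names illustrative): the child module declares empty aliased provider blocks, and the caller maps its own configured providers onto them.

# child module: declare the aliases only
provider "aws" {
  alias = "requester"
}

provider "aws" {
  alias = "accepter"
}

# caller: map real configurations onto the module's aliases
module "pcx" {
  source = "./modules/pcx"

  providers = {
    aws.requester = aws.this_account
    aws.accepter  = aws.peer_account
  }
}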

loren avatar

and the setup in the test config (though in this case it is vpc peering in the same account)… https://github.com/plus3it/terraform-aws-tardigrade-pcx/blob/master/tests/create_pcx/main.tf#L38-L41

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

Cool. I’ll take a look at all these. Hopefully there’s a way forward that reduces the issues we see when our dev teams remove child modules w/ their own provider blocks, but still allows us to test the child modules and the composed repos independently

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

Thanks @loren @Erik Osterman (Cloud Posse) !!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You don’t specify providers in child modules. This is logically correct since your submodules don’t need to know or deal with how they are being provisioned.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You specify providers in root modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or in the test fixtures

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Those could have different providers with different roles or credentials

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or for different regions or accounts

Tehmasp Chaudhri avatar
Tehmasp Chaudhri

Right. It’s mainly now how to remove the provider blocks (which I see from the other thread that Erik posted - you guys have been removing from all your modules) and continue to have testing ability for these child modules as well.

Zachary Loeber avatar
Zachary Loeber

I’m only going to add that I personally avoid mapping providers into modules as much as humanly possible

Zachary Loeber avatar
Zachary Loeber

Every time you have to do this, I’d ask yourself if you are doing the right thing or not.

Zachary Loeber avatar
Zachary Loeber

sometimes it is simply unavoidable

Zachary Loeber avatar
Zachary Loeber

sorry, I’m late to the party and likely missed the point, but all this self-isolation made me feel like proclaiming some stuff out loud…

loren avatar

we hear you and see you @Zachary Loeber ;)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

right, in almost all cases you don’t need to specify providers in child modules, nor map providers from the top-level modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just define them in root modules or test fixtures, they are inherited automatically

sheldonh avatar
sheldonh

Anyone have a CLI or quick way to trigger a Terraform Cloud run? I can cobble together a REST call, but just checking. I have an Azure DevOps pipeline running packer and want to trigger a terraform plan run to sync the SSM parameters for AMI images after I’m done.
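
A sketch against the Terraform Cloud Runs API (token, workspace ID, and message are placeholders; check the TFC API docs for the exact payload shape):

curl \
  --header "Authorization: Bearer $TFC_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{"data":{"type":"runs","attributes":{"message":"Triggered from pipeline"},"relationships":{"workspace":{"data":{"type":"workspaces","id":"ws-XXXXXXXX"}}}}}' \
  https://app.terraform.io/api/v2/runs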

2020-05-09

Mr.Devops avatar
Mr.Devops

I’m sure someone out there may have thought about this, but it would be nice if terraform had the ability to output its graph to Lucidchart (3rd-party integration) - feature request for HashiCorp?

Zachary Loeber avatar
Zachary Loeber

I suppose if you can figure out how to transform graphviz language into a csv you can simply import that into lucidchart (https://lucidchart.zendesk.com/hc/en-us/articles/115003866723-Process-Diagram-Import-from-CSV)

Process Diagram Import from CSV

Use Lucidchart’s CSV Import for Process Diagrams to create flowcharts, swim lanes, and process diagrams quickly from your data. This tutorial will walk you through the steps of formatting your data…

Zachary Loeber avatar
Zachary Loeber

though it is an interesting challenge, I’ll leave that task up to you to figure out

Mr.Devops avatar
Mr.Devops

thx @Zachary Loeber

2020-05-11

Cloud Posse avatar
Cloud Posse
04:00:08 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is May 20, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

curious deviant avatar
curious deviant

Hello,

Do folks feel workspaces serve a purpose while using terraform open source? I have personally found the use of any other variable, such as environment, sufficient to distinguish between different environments. It may also be that I haven’t fully understood the purpose of workspaces in terraform. Any advice/insights appreciated.

RB avatar

i don’t find workspaces to be that useful. i’d much rather have module references with an environment argument, and an environment directory with those module references

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve (cloudposse) switched over to using workspaces. previously we would provision a lot of s3 state backends, but that process is tedious and difficult to automate using pure terraform

jose.amengual avatar
jose.amengual

I’m not too familiar with workspaces, but how does it work with atlantis? do you guys still have a repo per environment, or due to using workspaces do you now have one repo and multiple workspaces? @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we now use a flat project structure. we define a root module exactly once; we don’t import it anywhere. we define a terraform varfile for each environment. we don’t use the hierarchical folder structure (e.g. aws/us-east-2/prod/eks/) and instead just have projects/eks, with files like projects/eks/conf/prod.tfvars
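
Under that layout, a run for one environment looks roughly like this (paths and workspace names illustrative):

cd projects/eks
terraform workspace select prod
terraform plan -var-file=conf/prod.tfvars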

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this enforces that we have the identical architecture in every environment and do not need to copy and paste changes across folders. what I don’t like about the multiple folders is that architectures drift between the folders without human oversight.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


you guys have a repo per environment still or due to using workspaces you are now have one repo and multiple workspaces ?

@jose.amengual: yes, we used to, but now have abandoned that approach.

jose.amengual avatar
jose.amengual

cool, so we are using the same approach

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we adopted that approach before gitops was a thing. it proved to not be very gitops friendly and changes were left behind leading to drift. we moved to the flat folder architecture this year in all new engagements.

jose.amengual avatar
jose.amengual

almost - we do not have subfolders for the conf/; we have it all in the root

jose.amengual avatar
jose.amengual

for us it has worked well. the main reason we ended up with one repo was due to multi-region deployments; if we decided to split by env we could end up with an exponential number of repos

jose.amengual avatar
jose.amengual

Terraform state is supposed to help you with infrastructure drift, but with so many repos or folders, who solves the problem of configuration drift when using multi-folder or env-repos?

jose.amengual avatar
jose.amengual

pretty much impossible

mfridh avatar

I’d like what you talk about, @Erik Osterman (Cloud Posse), to be a decent norm, but the diminishing returns come fast when you try to do all this rigorously with only one or two envs, and requirements pop up all the time where environments end up having to differ for various reasons.

For the things which I do repeat in an identical fashion I could probably use workspaces.

I have (pretty much) a structure such as:

  • infra/logical_grouping/myprefix-env-stack1
  • infra/logical_grouping/myprefix-env-stack2
  • infra/logical_grouping/myprefix-env-stack2-substack

where a -substack in many cases utilizes remote data from the “parent” stacks, sometimes several of them. Sometimes, where it makes more sense not to tie things so closely, the information sharing is done via regular data sources instead.

mfridh avatar

I’ve made the mistake of putting too many things in the same “stack” way too many times. I’ve learned to separate early now.

mfridh avatar

(While also learning to become really proficient in terraform state mv)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@mfridh just to be clear, what I’m describing still breaks terraform state apart so it’s not a terralith. we also have taken this a step further, so we have a single YAML config that describes the entire environment and all dependencies. so as @joshmyers was pointing out in the #terragrunt channel, we don’t have this problem anymore with hundreds of config fragments all over the place.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the key here is the terraform state is stored in workspaces, the desired state is stored in the YAML configuration, and the declarative configuration of infrastructure is in pure terraform HCL2 + modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s a hint as to what that looks like: https://archive.sweetops.com/aws/2020/04/#ebb3d28d-3753-417c-a88a-7775f8df633c

but we’ve progressed beyond this point.

SweetOps #aws for April, 2020

SweetOps Slack archive of #aws for April, 2020. aws Discussion related to Amazon Web Services (AWS)

joshmyers avatar
joshmyers

All config fragments are in a single repo, but there are still a lot of them… multi-region definitely doesn’t help that

joshmyers avatar
joshmyers

@Erik Osterman (Cloud Posse) not using the https://github.com/cloudposse/testing.cloudposse.co approach still?

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

randomy avatar
randomy

Does the use of workspaces mean your states are all in the same s3 bucket? Last time I tried it, I found this to be problematic. (Can’t remember the exact issue/limitation)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@joshmyers using geodesic (and still using that repo), but the 1-repo-per-account proved not very friendly for pipeline style automation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@randomy yes, all in one bucket, but with IAM policies restricting access based on roles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note, workspaces get their own path within the bucket so it’s easy to restrict.

joshmyers avatar
joshmyers

are they any different to the usual environment variable? Could do all states in same s3 bucket without workspaces, right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@joshmyers, yes, you’re correct. Could implement “workspace” like functionality without actually using the workspaces feature just by manipulating the backend settings for S3 during initialization.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t know what the advantage would be to doing so

mfridh avatar

That’s what my Makefile does. No advantage over workspaces I guess but to be fair, workspaces didn’t exist when I started to do this ;).

mfridh avatar

I like that yaml-defined ordering you linked to, Erik. Looks legit. Although I’m not sure how you actually implement it… variant reads it and runs various targets in order?

mfridh avatar

That’s one thing to consider…. I’m also considering Atlantis. But not sure if that would actually be a good idea for practical reasons or just “for show”.

Tony avatar

has anyone ever created a Client VPN configuration using Terraform to call CloudFormation templates? Or even if anyone has created a Client VPN config in CloudFormation, you might be able to help. I am getting this error when trying to create routes via CloudFormation.

Error: ROLLBACK_COMPLETE: ["The following resource(s) failed to create: [alphaRoute]. . Rollback requested by user." "Property validation failure: [Encountered unsupported properties in {/}: [TargetVPCSubnetId]]"]

Code:

---
Resources:
  alphaRoute:
    Properties:
      ClientVpnEndpointId: "${aws_ec2_client_vpn_endpoint.alpha-clientvpn.id}"
      Description: alpha-Route-01
      DestinationCidrBlock: 172.31.32.0/20
      TargetVPCSubnetId: subnet-5c4a7916
    Type: AWS::EC2::ClientVpnRoute
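
For what it’s worth, the AWS::EC2::ClientVpnRoute property is spelled TargetVpcSubnetId (lowercase “pc”), which is the likely cause of the “unsupported properties” error above. The corrected resource, keeping the same values:

---
Resources:
  alphaRoute:
    Properties:
      ClientVpnEndpointId: "${aws_ec2_client_vpn_endpoint.alpha-clientvpn.id}"
      Description: alpha-Route-01
      DestinationCidrBlock: 172.31.32.0/20
      TargetVpcSubnetId: subnet-5c4a7916
    Type: AWS::EC2::ClientVpnRoute
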
Adrian avatar

I’m using the terraform aws_ec2_client_vpn_endpoint resource to create the Client VPN and aws_cloudformation_stack to add routes

Tony avatar

I can make the exact resource in the console without issue

Joe Presley avatar
Joe Presley

I have a bash command that outputs a list in YAML format. I use yq to put that list into a file (each line is a value). There are about 2,700 items in the file. How can I get that list from a file into a terraform variable? The only other approach I see is to do some magic to get a plain list into a Terraform list variable file. Basically a *.txt -> *.tf transformation.

mfridh avatar
frimik/tf_module_plugin_example

Terraform module plugin concept for external data integration - frimik/tf_module_plugin_example

mfridh avatar

That’s from the days of yore, before Terraform had proper provider plugin support.

mfridh avatar

Another alternative in your case, if you don’t want to pass through files on disk is to run that bash command as an external data source script: https://www.terraform.io/docs/providers/external/data_source.html

External Data Source - Terraform by HashiCorp

Executes an external program that implements a data source.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you considered using yamldecode(file(...)) with a local?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
yamldecode - Functions - Configuration Language - Terraform by HashiCorp

The yamldecode function decodes a YAML string into a representation of its value.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use this pattern and it works well.
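
A minimal sketch of the pattern (file name illustrative):

locals {
  items = yamldecode(file("${path.module}/items.yaml"))
}

output "item_count" {
  value = length(local.items)
}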

Joe Presley avatar
Joe Presley

I’ll look into yamldecode. I’m also wondering if I could have just used a data source for the initial load of values.

loren avatar

if you can output to json instead, you can write <foo>.auto.tfvars.json to your tf directory, and terraform will read it automatically and assign the tfvars

loren avatar
Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

loren avatar


Terraform also automatically loads a number of variable definitions files if they are present:

  • Files named exactly terraform.tfvars or terraform.tfvars.json.

  • Any files with names ending in .auto.tfvars or .auto.tfvars.json.
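
In this case, the bash command’s output could be converted once into a file like my_list.auto.tfvars.json (file and variable names illustrative):

{
  "items": [
    "value-1",
    "value-2"
  ]
}

Terraform then picks up var.items on the next run with no extra flags, provided a matching variable "items" block is declared.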
loren avatar

i’ve also used terragrunt to do this kind of pre-processing

loren avatar
raymondbutcher/pretf

Generate Terraform code with Python. Contribute to raymondbutcher/pretf development by creating an account on GitHub.

loren avatar

here’s an example of using terragrunt and hcl string templating to massage some input into a auto.tfvars file… i think .auto.tfvars.json is better, but you can pick your own poison… https://github.com/gruntwork-io/terragrunt/issues/1121#issuecomment-610529140

generate: Not all file types support comments · Issue #1121 · gruntwork-io/terragrunt

Is there any way to disable the addition of a comment when using generate blocks? I was attempting to write an .auto.tfvars.json file, and it is written fine, but terragrunt injects a comment into …

Joe Presley avatar
Joe Presley

Thanks @loren Those are awesome options.

Joe Presley avatar
Joe Presley

I confirmed I could just use a data source for my use case.

3
sheldonh avatar
sheldonh

Merge Issue With Terraform Maps

sheldonh avatar
sheldonh
locals {
  default_settings = {
    topics                          = ["infrastructure-as-code", "terraform"]
    gitignore_template              = "Terraform"
    has_issues                      = false
    has_projects                    = false
    auto_init                       = true
    default_branch                  = "master"
    allow_merge_commit              = false
    allow_squash_merge              = true
    allow_rebase_merge              = false
    archived                        = false
    template                        = false
    enforce_admins                  = false
    dismiss_stale_reviews           = true
    require_code_owner_reviews      = true
    required_approving_review_count = 1
    is_template                     = false
    delete_branch_on_merge          = true
  }
}
sheldonh avatar
sheldonh

Ok… this is what I’m trying to do

Default settings above… new item below

repos = {
    terraform-aws-foobar = {
      description  = ""
      repo_creator = "willy.wonka"
      settings     = merge(local.default_settings, {})
    }
}
sheldonh avatar
sheldonh

But when I set some map key values that I want to override, they don’t seem to get picked up, despite merge’s behavior being that the last argument should win for simple (not deep) merges.

repos = {
    terraform-aws-foobar = {
      description  = ""
      repo_creator = "willy.wonka"
        settings = merge(local.default_settings, {
                topics = ["terraform", "productivity", "github-templates"]
            },
            {
                is_template = true
            }
        )
    }
}
sheldonh avatar
sheldonh

Any ideas before I go to devops.stackexchange.com or terraform community?

loren avatar

what is posted should work, i think. do you have an error? or an output displaying exactly what is not working?

sheldonh avatar
sheldonh

It just doesn’t seem to pick up the “override” values, e.g. is_template = true when the default at the top was false. No changes detected. My new topics also don’t get picked up when applying. This doesn’t align with my understanding of merge in hcl2

loren avatar

i do exactly this quite a bit, and it does indeed work. hence, we need to see what you are seeing

loren avatar

try creating a reproduction case with your locals and just an output so we can see what the merge produces

sheldonh avatar
sheldonh

Well a good example would be running terraform console on this

sheldonh avatar
sheldonh

I just created a new repo. I added topics to it, like you see above. The new topics I’m overriding don’t even show

sheldonh avatar
sheldonh
    terraform-devops-datadog = {
      description  = "Datadog configuration and setup managed centrally through this"
      repo_creator = "me"
      settings = merge(local.default_settings, {
        topics = ["observability"]
      })
    }
sheldonh avatar
sheldonh

and when i run it, it doesn’t show topics at all in the properties list

sheldonh avatar
sheldonh

Yeah so I just ran console against one specific repo that has the topics override: it shows empty on topics, not taking it at all

sheldonh avatar
sheldonh

this item

 devops-stream-analytics = {
      description  = "foobar"
      repo_creator = "sheldon.hull"
      settings = merge(local.default_settings, {
        topics = [
          "analytics",
          "telemetry",
          "azure"
        ]
      })
    }
sheldonh avatar
sheldonh

Here

> github_repository.repo["devops-stream-analytics"]
{
  "allow_merge_commit" = false
  "allow_rebase_merge" = false
  "allow_squash_merge" = true
  "archived" = false
  "auto_init" = true
  "default_branch" = "master"
  "delete_branch_on_merge" = true
  "description" = "foobar"  
  "etag" = "foobar"
  "full_name" = "foobar"
  "git_clone_url" = "foobar"
  "gitignore_template" = "Terraform"
  "has_downloads" = false
  "has_issues" = false
  "has_projects" = false
  "has_wiki" = false
  "homepage_url" = ""
    "html_url" = "<https://github.com/foobar>"
    "http_clone_url" = "<https://github.com/foobar>"
  "id" = "devops-stream-analytics"
  "is_template" = false
  "name" = "devops-stream-analytics"
  "node_id" = "MDEwOlJlcG9zaXRvcnkyNTQ3NTEwMzc="
  "private" = true
    "ssh_clone_url" = "[email protected]:foobar.git"
    "svn_url" = "<https://github.com/foobar>"
  "template" = []
  "topics" = [] <------ this should be overriden by my logic?
}
sheldonh avatar
sheldonh

see the last item. It’s empty. Not sure why my merge syntax would fail as you see above.

loren avatar

looks fine to me?

$ cat main.tf
locals {
  default_settings = {
    topics                          = ["infrastructure-as-code", "terraform"]
    gitignore_template              = "Terraform"
    has_issues                      = false
    has_projects                    = false
    auto_init                       = true
    default_branch                  = "master"
    allow_merge_commit              = false
    allow_squash_merge              = true
    allow_rebase_merge              = false
    archived                        = false
    template                        = false
    enforce_admins                  = false
    dismiss_stale_reviews           = true
    require_code_owner_reviews      = true
    required_approving_review_count = 1
    is_template                     = false
    delete_branch_on_merge          = true
  }
}

locals {
  repos = {
    devops-stream-analytics = {
      description  = "foobar"
      repo_creator = "sheldon.hull"
      settings = merge(local.default_settings, {
        topics = [
          "analytics",
          "telemetry",
          "azure"
        ]
      })
    }
  }
}

output "repos" {
  value = local.repos
}
loren avatar
$ terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

repos = {
  "devops-stream-analytics" = {
    "description" = "foobar"
    "repo_creator" = "sheldon.hull"
    "settings" = {
      "allow_merge_commit" = false
      "allow_rebase_merge" = false
      "allow_squash_merge" = true
      "archived" = false
      "auto_init" = true
      "default_branch" = "master"
      "delete_branch_on_merge" = true
      "dismiss_stale_reviews" = true
      "enforce_admins" = false
      "gitignore_template" = "Terraform"
      "has_issues" = false
      "has_projects" = false
      "is_template" = false
      "require_code_owner_reviews" = true
      "required_approving_review_count" = 1
      "template" = false
      "topics" = [
        "analytics",
        "telemetry",
        "azure",
      ]
    }
  }
}
sheldonh avatar
sheldonh

hmm. What terraform version are you running

loren avatar
$ terraform -version
Terraform v0.12.24

2020-05-12

conzymaher avatar
conzymaher

More terraform thinking out loud. I’ve been reading some of the cloudposse repos just getting a feel for how other organisations do terraform. The scale at my organization is small (a handful of engineers, and we will have at most 5 aws accounts across one, maybe 2, regions). In the past I have handled all IAM in a single “root” module

conzymaher avatar
conzymaher

This has its pros and cons

conzymaher avatar
conzymaher

I’ve noticed in some cloudposse examples IAM resources are created alongside other resources, e.g. an ECS service and the role it uses may be defined together. This is massively convenient

conzymaher avatar
conzymaher

But it’s easier to end up with issues like: Team A creates a role called “pipeline_foo” and Team B (in another IAM state) creates a role called “pipeline_foo”

conzymaher avatar
conzymaher

They have no indication until the apply phase fails that this is an issue

conzymaher avatar
conzymaher

As I move towards Terraform Cloud I also see the advantage of having a single IAM state / workspace

conzymaher avatar
conzymaher

As just that state can be delegated to a team that manages IAM

conzymaher avatar
conzymaher

Anyone have strong feelings on either approach?

vFondevilla avatar
vFondevilla

Personally I prefer to have unique IAM resources for each module, as it’s an easily repeatable and self-contained solution. IMO the label module helps avoid naming collisions

loren avatar

encourage them to use name_prefix instead of name ?
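
a sketch of the name_prefix approach (role name and policy document are illustrative); AWS appends a unique suffix, so two teams can’t collide:

resource "aws_iam_role" "pipeline" {
  # becomes e.g. pipeline-foo-<random suffix>
  name_prefix        = "pipeline-foo-"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}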

loren avatar

ask them what they do when they deploy the same tf module more than once?

conzymaher avatar
conzymaher

This is not an issue with modules having hardcoded names. The “submodules” are very configurable, but the root module in this case defines a very specific configuration and should only be deployed once per account

conzymaher avatar
conzymaher

My worry is that if IAM resources are created in dozens of states it could become very hard to reason about the state of IAM in an account

conzymaher avatar
conzymaher

I hope that makes sense

conzymaher avatar
conzymaher

@vFondevilla That is an interesting solution. Although for example if that module defines a simple barebones ecs_task_execution_role and you deploy 20 copies of that module

loren avatar

well alright, but that was the scenario presented

conzymaher avatar
conzymaher

You end up with 20 ecs_task_execution_role_some_id with identical policies attached? That always felt a bit wrong to me. If they have different policies great but just having many copies of identical IAM resources seems redundant

loren avatar

iam policies have no costs. multiple instances of a policy with the same permissions is worth it to us, for the flexibility of a self-contained module. i suppose we tend to reason about the service we are creating. not the state of an aws service as a whole

conzymaher avatar
conzymaher

That is interesting. Its great to talk about this stuff. Historically I have had a very “service centric” split of state

vFondevilla avatar
vFondevilla

Yes, that’s the situation in my accounts. Each ECS service has its own IAM execution role managed by the terraform module. This enables us to repeatedly deploy the module in one of the multiple aws accounts we’re using

conzymaher avatar
conzymaher

While it can be easier to reason about / delegate responsibility, it does also lead to a lot of remote state hell and dependency hell

conzymaher avatar
conzymaher

e.g I need to update the vpc state to add a security group and then I need to use remote state or data sources in my ecs state to now refer to that security group

conzymaher avatar
conzymaher

Maybe I should relax my service per state thinking

loren avatar

we do separate IAM for humans into its own module and control it centrally. but IAM for services goes into the service module

1
conzymaher avatar
conzymaher

and move towards an “application” or “logical grouping of resources” per state

conzymaher avatar
conzymaher

Yes for IAM for humans I tend to just create a module that uses lots of the modules in https://github.com/terraform-aws-modules/terraform-aws-iam

terraform-aws-modules/terraform-aws-iam

Terraform module which creates IAM resources on AWS - terraform-aws-modules/terraform-aws-iam

conzymaher avatar
conzymaher

And that is deployed in a single account

conzymaher avatar
conzymaher

Thanks for rubber-ducking this. Do you ever end up with the opposite problem to me? i.e. because IAM roles are defined alongside the service, can it be difficult to, for example, allow access to another principal? Because it’s created in another module and you can have chicken-and-egg problems?

conzymaher avatar
conzymaher

Or would a scenario like that be moved out to another state

conzymaher avatar
conzymaher

I’m probably overthinking this a bit. Its a greenfields project so trying not to make any decisions that will shoot me in the foot in 12 months

vFondevilla avatar
vFondevilla

I’m trying to avoid state dependencies just to simplify my life and reduce the steep learning curve for the rest of the team, so sometimes instead of referencing the remote state managed via another module, I’m using data resources as variables for the modules. In our case it’s working great with a team which, before me, didn’t know anything about terraform.

loren avatar

the thing that shoots us in the foot is when we do use iam resources (or most anything else, really) managed in a different state. when we keep all the resources contained in the services modules, it stays clean

conzymaher avatar
conzymaher

Do you mean you use data sources for most module inputs @vFondevilla something like this?

data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "all" {
  vpc_id = data.aws_vpc.default.id
}

module "db" {
  vpc_id = data.aws_vpc.default.id
  subnets = data.aws_subnet_ids.all.ids
  #.......
}
vFondevilla avatar
vFondevilla

yep

conzymaher avatar
conzymaher

Sounds good @loren I will definitely try out this approach. I think it will save me some pain going forward

vFondevilla avatar
vFondevilla

Much easier than explaining to someone with barely any experience working with Terraform modules how to use remote state and the possible ramifications. It’s pretty manual for deploying new modules (as you have to change the data and/or variables of the module), but in our case it’s working well.

conzymaher avatar
conzymaher

Yeah I find data sources a cleaner solution than remote state where possible. Nothing worse than the “output plumbing” required when you need a new output from a nested module for example

sahil kamboj avatar
sahil kamboj

Hey Guys, having an issue after adding a vpc to my terraform (0.12); before, i was using the default vpc to extract info like subnet ids etc. Error: Invalid for_each argument

on resource.tf line 63, in resource “aws_efs_mount_target” “efs”: 63: for_each = toset(module.vpc-1.private_subnets)

The “for_each” value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on. How can i solve this?

aaratn avatar

You can try this for_each          = toset(compact(module.vpc-1.private_subnets))

sahil kamboj avatar
sahil kamboj

it’s the same error with that: Error: Invalid for_each argument

on resource.tf line 63, in resource “aws_efs_mount_target” “efs”: 63: for_each = toset(compact(module.vpc-1.private_subnets))

The “for_each” value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
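
One common workaround, sketched here, is to key for_each on values that are known at plan time, e.g. the AZ list already passed into the VPC module (var.azs and the EFS file system name are illustrative). Only the keys must be known at plan time; the subnet IDs themselves can still be apply-time values:

resource "aws_efs_mount_target" "efs" {
  # keys come from a variable, so they are known during plan
  for_each = { for idx, az in var.azs : az => idx }

  file_system_id = aws_efs_file_system.this.id
  subnet_id      = module.vpc-1.private_subnets[each.value]
}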

maharjanaabhusan avatar
maharjanaabhusan

Hello everyone

maharjanaabhusan avatar
maharjanaabhusan

I am using the cloudposse module for vpc peering. I am having an issue, can anyone help me with it? Asap. Thank you

aaratn avatar

@maharjanaabhusan please post your actual question so that someone can answer

maharjanaabhusan avatar
maharjanaabhusan

Error: Invalid count argument

on .terraform/modules/vpc_peering-1/main.tf line 62, in resource “aws_route” “requestor”: 62: count = var.enabled ? length(distinct(sort(data.aws_route_tables.requestor.0.ids))) * length(data.aws_vpc.acceptor.0.cidr_block_associations) : 0

The “count” value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.

maharjanaabhusan avatar
maharjanaabhusan

module "vpc_peering-1" {
  source           = "git://github.com/cloudposse/terraform-aws-vpc-peering.git?ref=master"
  namespace        = "eg"
  stage            = "test-1"
  name             = "peering-1"
  requestor_vpc_id = module.vpc1.vpc_id
  acceptor_vpc_id  = module.vpc3.vpc_id
}

cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering

Adrian avatar

Have you already created routes in your vpc? Count depends on the output of:

  • data.aws_route_tables.requestor.0.ids
  • data.aws_vpc.acceptor.0.cidr_block_associations
cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering

maharjanaabhusan avatar
maharjanaabhusan

I am using the aws vpc module of cloudposse.

Adrian avatar
cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering

Adrian avatar

they create

• vpc

• subnets

• peering

Adrian avatar

probably you are missing subnets creation

Adrian avatar

RTFM

aaratn avatar
requestor_vpc_id = module.vpc1.vpc_id
  acceptor_vpc_id  = module.vpc3.vpc_id
aaratn avatar

are these vpcs created ?

maharjanaabhusan avatar
maharjanaabhusan

Yes

sahil kamboj avatar
sahil kamboj

i think me and @maharjanaabhusan have same problem

Matt avatar

Does anyone here have a Terraform example for provisioning a forecast monitor in DataDog? Not sure it’s possible with the current provider, couldn’t find any examples for this.

jose.amengual avatar
jose.amengual

forecast metric and a monitor on that metric ?

jose.amengual avatar
jose.amengual

usually that is how it works

Matt avatar

yeah, I think I got it

Matt avatar

an example would be nice though. . . for that provider

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Matt do you still need help with that one?

Matt avatar

@Julio Tain Sueiras think I have it now

Matt avatar

need to test tomorrow

Matt avatar

it’s basically just a variant of a query alert

2020-05-13

Release notes from terraform avatar
Release notes from terraform
05:04:33 PM

v0.12.25 NOTES: backend/s3: Region validation now automatically supports the new af-south-1 (Africa (Cape Town)) region. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the AWS Documentation. When the region is not enabled, the Terraform S3 Backend will return errors during credential validation (e.g. error validating provider credentials:…

Managing AWS Regions - AWS General Reference

Learn how to enable and disable AWS Regions.

Matt Gowie avatar
Matt Gowie

Does anyone know of a terraform plan review tool? Something like GitHub Pull Requests but for Terraform Plan? I know Atlantis will comment on a PR with the plan and allow review and what not, but I would love a tool that I can push a plan to it and then discuss that plan with my team.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Push plan to Jira?

Matt Gowie avatar
Matt Gowie

Hahah that strikes a chord — I do PM / Architecture consulting for one client and I use Jira too much as it is. I think that’d be my nightmare.

Matt Gowie avatar
Matt Gowie

I’m going to guess after my quick google search that there is no such tool. Which is interesting to me… Too many ideas, not enough time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You get that with terraform cloud

jedineeper avatar
jedineeper

Use your ci platform to report the plan as a test result on the pr in Github?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the ability to comment on the plan

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Matt Gowie avatar
Matt Gowie

Huh I thought and looked at my TF cloud project for one client. It doesn’t go into the discussion level that I wanted, but maybe that solves enough of the problem. I just haven’t used it enough.

Matt Gowie avatar
Matt Gowie

I was hoping to see something that has the ability to comment on a line by line change in a plan so I can explain / discuss the specific results of resource change to non-ops team members before we move it forward.

David Scott avatar
David Scott

That’s an interesting idea. Atlantis had to work around Terraform plans hitting the max character length in GitHub comments by splitting up the plan into multiple comments. Sticking the plan output into a file in the pull request would get around the comment length limitation, and allow for inline commenting.

I have a to-do item to implement terraform in GitHub Actions in my org. When I get around to that task I’ll see if I can add the plan to the pull request as a file when the PR is opened.

Allowing the CI/CD process to change the contents of a commit or add a commit poses its own challenges, but we already have to deal with it using @lerna/version

1
sheldonh avatar
sheldonh

I guess you could have a dedicated repo for plan artifacts, commit to that via automation, open a pull request and all. That sounds like what you need, but dang, the house of cards starts getting higher. How about just doing the review in-page and then failing fast and fixing if there’s an issue :-)

jose.amengual avatar
jose.amengual

you could fork atlantis and add what you need

jose.amengual avatar
jose.amengual

time to learn golang

sheldonh avatar
sheldonh

Took my first swing at some Go terraform-tfe sdk stuff today to create runs from azure devops/cli. Learned a bunch, including finding a much more mature project with great examples. Might fork and modify a bit. Looks like with this go-tfe project you can easily run terraform cloud actions from github actions now. Super cool!

I’m going to modify this probably to accept command line args for my first PR on a go project https://github.com/kvrhdn/tfe-run

I’m going to modify this probably to accept command line args for my first PR on a go project https://github.com/kvrhdn/tfe-run

kvrhdn/tfe-run

GitHub Action to create a new run on Terraform Cloud - kvrhdn/tfe-run

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

slick

kvrhdn/tfe-run

GitHub Action to create a new run on Terraform Cloud - kvrhdn/tfe-run

2020-05-14

Mo_Nazib avatar
Mo_Nazib

Hi All, I’m looking for a solution in managing/creating multiple AWS route53 zones/records. Any suggestions

randomy avatar
randomy

What’s the problem? Terraform’s AWS R53 resources work as advertised.

Mo_Nazib avatar
Mo_Nazib

resource "aws_route53_zone" "main" { count = length(var.domains) > 0 ? length(var.domains) : 0 name = element(var.domains, count.index) }

Mo_Nazib avatar
Mo_Nazib

for CNAME record, how can i create for specific domain?

Mo_Nazib avatar
Mo_Nazib

resource "aws_route53_record" "www" {
  count   = length(aws_route53_zone.main)
  zone_id = element(aws_route53_zone.main.*.zone_id, count.index)
  name    = "www"
  type    = "CNAME"
  ttl     = "300"
  records = ["test.example.com"]
}

Stratos avatar
Stratos

don’t know if I understand specifically but: count and index are blunt instruments and often will fail you when you try to implement more complex use cases

Stratos avatar
Stratos

I’ve had success with maps and for_each statements

1
Mo_Nazib avatar
Mo_Nazib

my use case is: i’m working on creating a poc with route53 using terraform.

I had to create multiple domains in route53 and take care of operation tasks like adding new/updating/deleting records

Mo_Nazib avatar
Mo_Nazib

i haven’t tried with map and for_each. i will give a try

randomy avatar
randomy

yeah, for_each is better. count would be a disaster there, it would likely want to delete and recreate your zones when you change var.domains
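
a sketch of the for_each version (record values are illustrative); zones are keyed by domain name, so adding or removing one domain touches only that zone:

resource "aws_route53_zone" "main" {
  for_each = toset(var.domains)
  name     = each.value
}

resource "aws_route53_record" "www" {
  for_each = toset(var.domains)
  zone_id  = aws_route53_zone.main[each.value].zone_id
  name     = "www"
  type     = "CNAME"
  ttl      = "300"
  records  = ["test.example.com"]
}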

2
Mo_Nazib avatar
Mo_Nazib

Thank you, i will give a try and come back

amelia.graycen avatar
amelia.graycen

Hey y’all, I did something really dumb and I’m still a bit too green at Terraform to understand how to resolve it. I’m keeping multiple state workspaces in S3 with a dynamo lock DB. I wanted to purge one of the workspaces and rebuild things from scratch. I didn’t have any resources outstanding, so I blindly deleted the file directly from S3. Now I can’t rebuild it, I suspect, because the lock db expects it to exist. Is there any way to get back to a blank slate from here so that I can start over for this particular workspace?

loren avatar

go into dynamodb and clear the lock table…

amelia.graycen avatar
amelia.graycen

Okay, thank you. I haven’t worked with dynamo yet and was worried I may break something else by clearing it. I’m especially cautious because I’ve already dug myself into a deep enough hole.

loren avatar

nah, much more likely to break something on the s3 side. the dynamodb table is very lightweight

loren avatar

if you haven’t already, highly recommend turning on versioning for any s3 bucket you use for tfstate
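
a sketch for the AWS provider 2.x syntax current at the time (bucket name illustrative):

resource "aws_s3_bucket" "tfstate" {
  bucket = "example-tfstate"

  # keeps old state versions recoverable after accidental deletes/overwrites
  versioning {
    enabled = true
  }
}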

amelia.graycen avatar
amelia.graycen

yeah, as soon as I realized what was going on, I wished that I had.

loren avatar

you might have a copy of the tfstate locally, from your last apply

loren avatar

check the .terraform directory wherever you ran terraform from

amelia.graycen avatar
amelia.graycen

I wish. It was a new terraform script that I had only applied in Jenkins.

loren avatar

ah, well, jenkins. there’s the problem

amelia.graycen avatar
amelia.graycen

amelia.graycen avatar
amelia.graycen

I think I actually have my script working fine at this point, but I ran taint locally (which was on .25) and it bumped the version of the state file. I can only automatically install up to .24 on Jenkins at the moment. :x

loren avatar

Oof. Yeah, might also want to add a terraform version constraint…

loren avatar

we do this in all our root modules:

terraform {
  required_version = "0.12.24"
}
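
(If you want to pick up patch releases without surprise upgrades, a pessimistic constraint is an alternative; ~> 0.12.24 allows >= 0.12.24 but < 0.13:)

terraform {
  required_version = "~> 0.12.24"
}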
loren avatar

pin the version, then update it intentionally as a routine task

Joe Presley avatar
Joe Presley

Does anyone have thoughts on how to scale Terraform when working with thousands of internal customers? The concurrent requests would be about 10 at a time. There’s a talk Uber did where they said DevOps tools don’t scale when you’re dealing with 100k servers, so there’s some upper limit. What would you say the limit is for Terraform? Is there a way to wrap Terraform in an API so an application platform could enter a few parameters for a Terraform module to render the Terraform?

jose.amengual avatar
jose.amengual

I don’t know if you could do this but maybe you could fork runatlantis.io and instead of receiving a web hook payload change it to receive a regular Rest request

Joe Presley avatar
Joe Presley

That’s an idea. I remember there is a company that created a service catalog for Terraform templates. I couldn’t find the company, but I did run into this when I did a Google search for it. https://www.hashicorp.com/products/terraform/self-service-infrastructure/

HashiCorp Terraform: Self-Service Infrastructureattachment image

Enable users to easily provision infrastructure on-demand with a library of approved infrastructure. Increase IT ops productivity and developer speed with self-service provisioning

Joe Presley avatar
Joe Presley

It doesn’t go into details on how the self-service part works.

jose.amengual avatar
jose.amengual

Self-service = pay for nomad and terraform enterprise

Joe Presley avatar
Joe Presley

If Terraform Enterprise can provide self-service, that would be a plus if you can afford it. For larger companies, it would pay for itself in not having engineers reinvent the wheel.

Chris Fowles avatar
Chris Fowles

my old team did some work on addressing scaling issues on one of the largest terraform enterprise installations deployed - i’d say if you’re getting to that size you want to divide the scope of what an individual deployment of TFE is addressing - maybe broken down by business units or some other logical boundary

Chris Fowles avatar
Chris Fowles

there is an upper limit and it gets ugly when things go wrong at that limit

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i’m a bit confused here… where is the concern about scaling?
when working with thousands of internal customers
or
when you’re dealing with 100k servers, so there’s some upper limit.
And I think they are different problems to optimize for.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you have thousands of internal customers (e.g. developers), I still would imagine it scales well, if:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

• massively bifurcated terraform state across projects and AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

• teams have their own projects, aws accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

• teams are < 20 developers on average

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then i really don’t see how terraform would have technical scaling challenges e.g. by hitting AWS rate limits.

Chris Fowles avatar
Chris Fowles

terraform itself yes - terraform enterprise (as in the server) can hit scaling issues at high usage

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, gotcha - i could see that with something like terraform enterprise.

Chris Fowles avatar
Chris Fowles

and (at least when we were working on it a while back - older versions) did not scale out well

Chris Fowles avatar
Chris Fowles

i imagine that they’ve implemented a lot of lessons learned in deploying the new terraform cloud platform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but with standard CI/CD systems that can fan out to workers, I see it less of a problem.

Chris Fowles avatar
Chris Fowles

yeh - as long as you can scale your worker fleet then it should be easier

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the practical problem I see is enforcing policies at that scale.

Chris Fowles avatar
Chris Fowles

i’m in a bit of a quandary there between opa and sentinel - i really really like opa

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, same. OPA is where I’d want to put my investment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but then going against the TFE ecosystem

Chris Fowles avatar
Chris Fowles

i mean if you’re not going to use TFE as your runner it makes sense

Chris Fowles avatar
Chris Fowles

the other issue i see is getting support from hashicorp

1
Joe Presley avatar
Joe Presley

I was also thinking of scaling in terms of management and implementation. For example, a core team of say 10 developers writing code for thousands of internal teams, each with their own accounts/projects, doesn’t scale well. So the urge is to create some system where the end user has a gui they can go to to request their infrastructure while still using Terraform. What I don’t get is how to use Terraform when you scale beyond what manually writing code in a GitOps fashion supports.

Chris Fowles avatar
Chris Fowles

generally i’d say you want to scale out to teams via code rather than a gui

Chris Fowles avatar
Chris Fowles

but i realise that that’s not always a possible reality for some enterprises

Chris Fowles avatar
Chris Fowles

somewhat embarrassed to admit i built a demo PoC for basically this before it existed: https://www.terraform.io/docs/cloud/integrations/service-now/index.html

Setup Instructions - ServiceNow Service Catalog Integration - Terraform Cloud - Terraform by HashiCorp

ServiceNow integration to enable your users to order Terraform-built infrastructure from ServiceNow

Joe Presley avatar
Joe Presley

I was reading about that. It looks pretty cool. I think what the company I’m working with now wants to do is integrate Terraform with their own infrastructure request toolchain.

Chris Fowles avatar
Chris Fowles

that’s something that i’ve seen asked for a lot - i’ve worked on projects to build it a couple of times. i understand why it’s a reality - i just don’t think i’d ever want to have to request infrastructure in that way

Chris Fowles avatar
Chris Fowles

as an engineer on the other side of the gui i’d probably hate it

Chris Fowles avatar
Chris Fowles

but that’s possibly because i’ve always had the privilege of not being blocked by systems of approval like that

Chris Fowles avatar
Chris Fowles

enterprise realities kind of suck

Chris Fowles avatar
Chris Fowles

which is why i don’t work in enterprise

Chris Fowles avatar
Chris Fowles

i know realistically that a system to request from a service catalogue like that is 1000x better than what a lot of enterprises are dealing with today

jose.amengual avatar
jose.amengual

but then going against the TFE ecosystem

jose.amengual avatar
jose.amengual

I have read about OPA, and every time I go to the website it looks too good to be true

jose.amengual avatar
jose.amengual

I keep thinking about how I’m somehow going to get charged, or screwed for lack of support or something

2020-05-15

Karoline Pauls avatar
Karoline Pauls

find . -name '*.tf' | xargs -n1 sed 's/\(= \+\)"${\([^{}]\+\)}"/\1\2/' -i
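
That one-liner strips the 0.11-style interpolation-only quoting as part of a 0.12 upgrade; roughly this transformation:

# before
vpc_id = "${module.vpc.vpc_id}"

# after
vpc_id = module.vpc.vpc_id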

Andrew avatar

has anyone used this and have any opinions about it? https://github.com/liatrio/aws-accounts-terraform

liatrio/aws-accounts-terraform

Contribute to liatrio/aws-accounts-terraform development by creating an account on GitHub.

Andrew avatar

or is control tower to be preferred?

Haroon Rasheed avatar
Haroon Rasheed

Hi All, I have my kubeconfig as a terraform local value after eks deployment. Now when I try to run a command “echo ${local.kubeconfig} > ~/.kube/config” using the null_resource command option, I get a “command not found” error, I guess because multiple lines get substituted as part of local.kubeconfig. Need help on how to run this command. Right now I am coming out of terraform and doing “terraform output > ~/.kube/config”. Any help to achieve it as part of null_resource or any other terraform resource?

David Scott avatar
David Scott

I ended up outputting my kubeconfig(s) to local_file and storing them back in source control. I reference them in null_resource with --kubeconfig=${local_file.kubeconfig.filename}. It’s not the most elegant solution, but neither is using null_resource for kubectl apply.
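
a minimal sketch of that setup (resource names and manifest path are illustrative; local.kubeconfig is assumed to hold the rendered kubeconfig):

resource "local_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "${path.module}/kubeconfig"
}

resource "null_resource" "apply" {
  provisioner "local-exec" {
    command = "kubectl apply -f manifest.yaml --kubeconfig=${local_file.kubeconfig.filename}"
  }
}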

Haroon Rasheed avatar
Haroon Rasheed

Thanks for the reply. Can this be achieved in a single terraform run, i.e. generating the kubeconfig and copying it to the config path? Also can we use --kubeconfig=${local_file.kubeconfig.filename} like this inside null resources?

Haroon Rasheed avatar
Haroon Rasheed

Please ignore my previous msg.. I understood your answer thanks for the help. Will try it n get back if any issues

1
Kevin Chan avatar
Kevin Chan

Question how am I supposed to be running the tests? in a container or does calling the make file work? I’m trying to upgrade the iam module, but first I want to know how you guys are writing tests for the other modules first.

1
sheldonh avatar
sheldonh

Keeping my eyes on this. You can see some stuff on this in some repos but I haven’t yet had capacity to dive into tests yet.

2020-05-16

Haroon Rasheed avatar
Haroon Rasheed

I am trying to get the Kubernetes Pod IP. I have launched a kubernetes deployment, and in the terraform.tfstate file I don’t see the IP address of the POD. I need to use that POD IP address to bring up another POD. How do I get the IP of a kubernetes POD launched by Terraform? Any help would be great!

Haroon Rasheed avatar
Haroon Rasheed

Really appreciate if any help on this?

Tim Birkett avatar
Tim Birkett

I’m not sure you would use the pod IP for that.

What do you mean by:
I need to use that POD IP address to bring another POD

Tim Birkett avatar
Tim Birkett

In Kubernetes, pods get assigned ephemeral IP addresses that change when the pod is moved, restarted, or scaled. Using the IP of the pod to communicate with it would not be a good experience. Instead, Kubernetes depends on DNS names and Kubernetes Service resources.

Tim Birkett avatar
Tim Birkett

The whole talk is great but the relevant part is at 10 minutes in.

Haroon Rasheed avatar
Haroon Rasheed

Actually I need that POD to get some information from that POD..anyways let me check on DNS config..that would be appropriate as you say..

Haroon Rasheed avatar
Haroon Rasheed

I will go over this video..

Haroon Rasheed avatar
Haroon Rasheed

one basic question.. i never tried dns config.. is it a default dns config we can use, or do we need to have a dns server configured in the K8s cluster and then set a dns name for a particular pod..

Haroon Rasheed avatar
Haroon Rasheed

BTW Thank you for the response..really appreciate

Tim Birkett avatar
Tim Birkett

A good place to start to understand DNS and services in Kubernetes is: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service

Tim Birkett avatar
Tim Birkett

If your deployments (and the pods they create) are in the same namespace, once you’ve created the service, you’ll be able to access the pods with the host part of the DNS name.

Haroon Rasheed avatar
Haroon Rasheed

hmm I get it..it really makes sense to go with service

Tim Birkett avatar
Tim Birkett

Say I have a Ghost blog in namespace my-blog and a Percona DB server in my-blog - I’d create a Service of type ClusterIP for the database server named db and my Ghost blog could access it with the host name db
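
In Terraform terms that Service could look roughly like this (selector labels are illustrative):

resource "kubernetes_service" "db" {
  metadata {
    name      = "db"
    namespace = "my-blog"
  }

  spec {
    type = "ClusterIP"

    # must match the labels on the Percona pods
    selector = {
      app = "percona"
    }

    port {
      port = 3306
    }
  }
}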

Haroon Rasheed avatar
Haroon Rasheed

i was trying out raw thing out here..this looks complete solution..

Tim Birkett avatar
Tim Birkett

If I were to deploy the DB server into another namespace I could reference it with its FQDN: db.other-namespace.svc.cluster.local

Tim Birkett avatar
Tim Birkett

Another fantastic talk to watch to understand how all these Kubernetes resources interact is this video: https://www.youtube.com/watch?v=90kZRyPcRZw

Haroon Rasheed avatar
Haroon Rasheed

Ok that sounds clear.. for now I can go with same namespace that should be fine for my case..so that it would be simple for me to refer that..

1
Haroon Rasheed avatar
Haroon Rasheed

Thanks Tim..guess these details would really help me kick start on these areas..which is new to me as a k8s beginner..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyone have a lambda for provisioning the databases inside of an RDS postgres instance? problem we’re trying to solve is creating the databases (e.g. grafana, keycloak, and vault) on the RDS cluster without direct access since it’s in a private VPC. In this case we do not have #atlantis or VPN connectivity to the VPC. Looking for a solution we can use in pure terraform.

jose.amengual avatar
jose.amengual

mm we do have lambdas that interact with the db but not to create them

jose.amengual avatar
jose.amengual

usually we create from a snapshot

jose.amengual avatar
jose.amengual

can you just create a snapshot with the schema and whatever data you need and put it on a s3 bucket ?

1
jose.amengual avatar
jose.amengual

or share the snapshot

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ohh that is an interesting idea. The s3 import option.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Only downside is that it needs to be in the percona xtradb format, but haven’t looked at it yet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
RestoreDBInstanceFromS3 - Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) supports importing MySQL databases by using backup files. You can create a backup of your on-premises database, store it on Amazon Simple Storage Service (Amazon S3), and then restore the backup file onto a new Amazon RDS DB instance running MySQL. For more information, see

jose.amengual avatar
jose.amengual

is just the percona tools, you can use the tool to dump in that format and you are set

jose.amengual avatar
jose.amengual

it is fairly simple

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

We need to provision users, too.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)


Limitations and Recommendations for Importing Backup Files from Amazon S3 to Amazon RDS
The following are some limitations and recommendations for importing backup files from Amazon S3:
You can only import your data to a new DB instance, not an existing DB instance.  (emphasis added)
That is a deal-killer

jose.amengual avatar
jose.amengual

can’t you add the users before you do the dump?

jose.amengual avatar
jose.amengual

we do that too, we never had to create the users

jose.amengual avatar
jose.amengual

plus you can use IAM auth and have “shared” users OR use SM autorotation secrets for apps ( requires code changes in app)

jose.amengual avatar
jose.amengual

IAM auth is not recommended for apps with more than 200 connections per second

jose.amengual avatar
jose.amengual

you can use the lambda examples for autorotate secrets to add users too since you basically need to setup the lambda access to the DB

jose.amengual avatar
jose.amengual
aws-samples/aws-secrets-manager-rotation-lambdas

Contains Lambda functions to be used for automatic rotation of secrets stored in AWS Secrets Manager - aws-samples/aws-secrets-manager-rotation-lambdas

jose.amengual avatar
jose.amengual
Rotating Secrets for Supported Amazon RDS Databases - AWS Secrets Manager

Automatically rotate your Amazon RDS database secrets by using Lambda functions provided by Secrets Manager invoked automatically on a defined schedule.

jose.amengual avatar
jose.amengual

I’m working on making all this work in TF

jose.amengual avatar
jose.amengual
giuseppeborgese/terraform-aws-secret-manager-with-rotation

This module will create all the resources to store and rotate a MySQL or Aurora password using the AWS Secrets Manager service. - giuseppeborgese/terraform-aws-secret-manager-with-rotation

github140 avatar
github140

Once the private database is provisioned how do you apply schema changes, serverless?

github140 avatar
github140
RDS database migration with Lambda - codecentric AG Blog

Lambdas are handy for RDS MySQL database migrations. The referenced Github repo offers forward migration using semantic versioning.

jose.amengual avatar
jose.amengual

for the original use case I think once it’s provisioned it’s done, someone else takes over

jose.amengual avatar
jose.amengual

we are going to start using liquibase

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


You can only import your data to a new DB instance, not an existing DB instance.
@Jeremy G (Cloud Posse) this is a cold-start problem for us and it’s a new instance. As a compromise we can provision all 3 databases this way at start. Agree, it would be nicer to decouple each database creation from the RDS instance creation, but it’s less a deal-breaker than it is an inconvenience.

maarten avatar
maarten

A lambda like that would be very cool, especially when combined with a new postgres / mysql provider

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse)
User accounts are not imported automatically. Save your user accounts from your source database and add them to your new DB instance later.
This is also a deal killer.

Remember, all we need to do is run 3 MySQL commands per database: CREATE DATABASE, CREATE USER, and GRANT. This does not need to be overly complicated. (For example, see the sketch below.)
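
(Illustrative database, user, and grants:)

CREATE DATABASE grafana;
CREATE USER 'grafana'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'%';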

What we need are

• a secure way to get master credentials to the script runner

• a way to get other dynamic parameters to the script runner

• a way for the script runner to have network connectivity to the database cluster

• a way for the script runner to communicate status

• a way to integrate and synchronize running the script with the rest of our Terraform operations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, given these requirements, the lambda will be the most generalizable solution that I can think about. Or a lambda that listens to an SNS topic to run permitted psql commands.

jose.amengual avatar
jose.amengual

I do not agree with that paragraph about user accounts not being imported, we use the snapshots and we never had to recreate the users

jose.amengual avatar
jose.amengual

but that is from a dump that already had the users in it

Stratos avatar
Stratos

Have you thought of launching one-off task containers for these types of DB administration use cases? Lambda always feels like overkill to me when what you really need is an ECS container with a bash script and a bunch of SQL.

jose.amengual avatar
jose.amengual

ECS is far more convoluted to setup than lambda

jose.amengual avatar
jose.amengual

in my opinion

jose.amengual avatar
jose.amengual

you could run an EC2 instance with session manager and run stuff manually through the web console, which does not require VPC connectivity

jose.amengual avatar
jose.amengual

actually you can use a SSM document to run a script that will do the mysql stuff for you in an instance controlled by SSM

Aleksandr Fofanov avatar
Aleksandr Fofanov

@Erik Osterman (Cloud Posse) I recently developed such a module. It is a lambda function that can provision a database and optionally a user in mysql and postgres in RDS instances without public access in a VPC. Not very polished, but it does its job perfectly. I can open source it on github within the next 12 hours or so if that helps.

3
Aleksandr Fofanov avatar
Aleksandr Fofanov
aleks-fofanov/terraform-aws-rds-lambda-db-provisioner

Terraform module to provision database in AWS RDS instance in a VPC - aleks-fofanov/terraform-aws-rds-lambda-db-provisioner

2020-05-17

2020-05-18

OliverS avatar
OliverS

I submitted a bug for what appears to be a broken aws-tfstate-backend upgrade from 0.16 to 0.17 of the module, https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/47; meanwhile, if anyone can explain what the error means, I am unable to make sense of it.

Cloud Posse avatar
Cloud Posse
04:00:39 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is May 27, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

maarten avatar
maarten

Has anyone worked with the golang hcl2 lib before? I’m having the issue that the unquoted fields, for example the terraform type field for variables, are not parsing well. Just as if the lib were still hcl1, which it’s not.

2020-05-19

sahil kamboj avatar
sahil kamboj

Hey guys, i formatted my laptop, did a git pull of my terraform script, and am facing errors after terraform init: Error: Failed to instantiate provider “aws” to obtain schema: fork/exec /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.62.0_x4: permission denied

Error: Failed to instantiate provider “kubernetes” to obtain schema: fork/exec /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-kubernetes_v1.11.2_x4: permission denied

Error: Failed to instantiate provider “local” to obtain schema: fork/exec /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-local_v1.4.0_x4: permission denied

Error: Failed to instantiate provider “null” to obtain schema: fork/exec /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4: permission denied

Error: Failed to instantiate provider “random” to obtain schema: fork/exec /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-random_v2.2.1_x4: permission denied

Joe Niland avatar
Joe Niland
Error: Failed to instantiate provider "aws" to obtain schema: fork/exec · Issue #24010 · hashicorp/terraform

Terraform Version 0.12.6. + provider.aws v2.47.0 (s3 backend) Terraform Configuration Files … Expected Behavior terraform init succeedss. Actual Behavior after a validate: Error: Failed to instan…

sahil kamboj avatar
sahil kamboj

done everything in that but no solution

Adrian avatar

when you run direclty /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-local_v1.4.0_x4 can you provide output?

Adrian avatar

if the same permission denied, then chmod +x /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-local_v1.4.0_x4 and try again

sahil kamboj avatar
sahil kamboj

same output

sahil kamboj avatar
sahil kamboj

given 777 still same issue

Adrian avatar

use strace

sahil kamboj avatar
sahil kamboj

can you help me regarding strace idk how to use it and what m looking for

Adrian avatar

use google

sahil kamboj avatar
sahil kamboj

i mounted another drive to this folder, could that be the cause?

Karoline Pauls avatar
Karoline Pauls

strace terraform init

Adrian avatar

directly on /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-local_v1.4.0_x4

Adrian avatar

strace /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/terraform-provider-local_v1.4.0_x4

Karoline Pauls avatar
Karoline Pauls


i mount another drive to this folder , it may be the case
possibly, try some other writable location

sahil kamboj avatar
sahil kamboj

strace permission denied

sahil kamboj avatar
sahil kamboj

on that folder

Adrian avatar

:joy: and maybe ls /home/kratos/Documents/Projects/terra/.terraform/plugins/linux_amd64/

Adrian avatar

fix your permissions on documents folder

sahil kamboj avatar
sahil kamboj

i mount on Document folder

sahil kamboj avatar
sahil kamboj

no bad words

Adrian avatar

use mount properly

sahil kamboj avatar
sahil kamboj

THNX MAN

sahil kamboj avatar
sahil kamboj

deleted .terraform folder and terraform init but same problem

jose.amengual avatar
jose.amengual

@Erik Osterman (Cloud Posse) do you guys have a delay when publishing modules to the terraform registry?

jose.amengual avatar
jose.amengual

I was looking at the Terraform modules tutorial and I saw this :

source  = "terraform-aws-modules/ec2-instance/aws"
  version = "2.12.0"
jose.amengual avatar
jose.amengual

and I was wondering about the benefit of using the registry over git

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nope, no such concept. The registery is just a proxy for GitHub

1
jose.amengual avatar
jose.amengual

ahhhhhh I did not know that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It doesn’t work like other registries like Docker hub or package registries that you upload to

jose.amengual avatar
jose.amengual

so how do they know what to pull ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The registry requires a strict repository naming convention

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Modules still need to be manually added

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But from that point on all “updates” are in real-time

jose.amengual avatar
jose.amengual

but there is a registration step somewhere I guess ?

jose.amengual avatar
jose.amengual

I will read the registry docs

Steven avatar

Using the terraform registry is not the same as git. It is a proxy, but it only gets releases with the right name format. It cannot get branches, tags, or other git references. Because of this, it is faster than using git. Git does a clone (which is larger) and if used multiple times will do multiple clones, whereas the registry will only download once per version
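
the same module pulled both ways, as a sketch:

# registry source: downloads a release archive, once per version
module "ec2" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "2.12.0"
}

# git source: clones the repo; the ref goes in the URL, no version argument
module "ec2_from_git" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v2.12.0"
}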

jose.amengual avatar
jose.amengual

Interesting thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, good distinctions! Wasn’t thinking about that. @Steven

btai avatar

is there a way to use the terraform 12 for loops to create n number of terraform resources? or do we still do the count thing?

RB avatar

look up for_each

Joe Presley avatar
Joe Presley

for_each is if you’re iterating over a map. If you just want to create n number of resources, you would still use count.

loren avatar

for_each also works with a unique list of things (cast to a set with toset()), as well as maps… I’d only recommend using count if it is literally just creating multiples of the same thing, i.e. its original, literal purpose. Otherwise when the list/map changes, count will want to destroy/create way more than expected since the index changes for all items in the list
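
a quick sketch of the difference (resource type and variable are illustrative):

# count is index-based: removing "b" from ["a", "b", "c"] shifts "c" to
# index 1, so Terraform wants to destroy and recreate it
resource "aws_sns_topic" "by_index" {
  count = length(var.names)
  name  = var.names[count.index]
}

# for_each is key-based: removing "b" touches only "b"
resource "aws_sns_topic" "by_key" {
  for_each = toset(var.names)
  name     = each.value
}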

Joe Presley avatar
Joe Presley

I’m trying to think of a workflow where a ci/cd server would pass in parameters to a module. Is that a possibility? Are there any gotchas? I’m thinking the parameters would be passed in as TF_VAR variables. The downside that I see is that it would make it a mess to manage state.

Joe Presley avatar
Joe Presley

What I’m trying to explore is the boundaries of whether Terraform is ultimately a tool that is dependent on GitOps.

jose.amengual avatar
jose.amengual

I have no idea what do you mean with this ?

jose.amengual avatar
jose.amengual

what does it mean to be dependant in gitops ?

Steven avatar

Yes, ci/cd can pass parameters to Terraform. This is not uncommon. Terraform state needs to be remote or you risk losing it. Terraform manages state. What you pass to it should be things that change the state. If that is not what you are trying to do, then the job either needs to switch to different state files or use workspaces.

Steven avatar

Terraform doesn’t care about GitOps. You can use with or without

Joe Presley avatar
Joe Presley

That’s useful information. I’m so used to thinking of GitOps as the goal that I’m trying to wrap my mind around other ways of interacting with Terraform besides through git commits.

jose.amengual avatar
jose.amengual

gitops is a methodology not a tool

Chris Fowles avatar
Chris Fowles

Terraform Cloud is fundamentally “GitOps” driven.

2020-05-20

Harshal Vaidya avatar
Harshal Vaidya

Hello, I’m facing issues while spinning up an eks_cluster using the cloudposse module

Harshal Vaidya avatar
Harshal Vaidya

Need some help with that ..

Harshal Vaidya avatar
Harshal Vaidya
module.eks_cluster.null_resource.wait_for_cluster[0]: Still creating... [2m10s elapsed]
module.eks_cluster.null_resource.wait_for_cluster[0]: Still creating... [2m20s elapsed]
module.eks_cluster.null_resource.wait_for_cluster[0]: Still creating... [2m30s elapsed]
module.eks_cluster.null_resource.wait_for_cluster[0]: Creation complete after 2m32s [id=2129570020838525894]
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Creating...

Error: configmaps "aws-auth" already exists

  on .terraform/modules/eks_cluster/auth.tf line 84, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
  84: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
Harshal Vaidya avatar
Harshal Vaidya

My tf ends with this message ..

Harshal Vaidya avatar
Harshal Vaidya

I’ve searched around on the web and there are some threads that discuss this ..but none of those solutions have worked

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

The config map has probably been created before (e.g. by an older version of the module). You need to manually delete it using kubectl. The module uses the kubernetes provider to create the config map, and the provider just can’t update it, only create a new one
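
e.g. (the aws-auth config map lives in the kube-system namespace):

kubectl -n kube-system delete configmap aws-auth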

Harshal Vaidya avatar
Harshal Vaidya

@Andriy Knysh (Cloud Posse) I’ve done a terraform run from scratch into a completely clean environment, still I consistently get this error. I’ve cleaned up the state files, .terraform folder .. pretty much everything. And still keep getting this error.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the config map is already provisioned (can’t say how). To fix this, there are only two ways: 1) destroy the EKS cluster and provision again; 2) from the command line, find the config map and delete it using kubectl

Harshal Vaidya avatar
Harshal Vaidya

Andriy, I’m spinning an EKS cluster from scratch. It wasn’t pre-existing. Still I continue to hit this. There is no other piece of code that is creating aws_auth configmap.

Harshal Vaidya avatar
Harshal Vaidya

If I set the following flag to false will it hurt? apply_config_map_aws_auth

Harshal Vaidya avatar
Harshal Vaidya

looking at the code it seems that will stop the code from creating the aws_auth map altogether.

Harshal Vaidya avatar
Harshal Vaidya

@Andriy Knysh (Cloud Posse), What do you think?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when you create the cluster first time, there should be no config map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you delete all the resources and start again?

Harshal Vaidya avatar
Harshal Vaidya

That is correct .. I deleted the entire cluster and then started again ..

Harshal Vaidya avatar
Harshal Vaidya

I’ll look at the example tomorrow

aaratn avatar

Check out the terraform version manager written by me; it supports pip, docker and homebrew!!

It’s already becoming popular with 40 stars.

https://github.com/aaratn/terraenv

aaratn/terraenv

Terraform & Terragrunt Version Manager. Contribute to aaratn/terraenv development by creating an account on GitHub.

1
Andrew avatar

anyone using terraform cloud? I am running a module that requires running awscli locally and their remote server does not have it installed. anyone have any advice?

Andrew avatar

I tried the obvious thing of apt-get installing it but that didn’t work

Zachary Loeber avatar
Zachary Loeber

A good question for today’s office hours, but I’ll say that anything you can shift to pure terraform provider code will make your life much easier in the long run

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you have basically 2 options.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One is to add any dependencies to the git repo itself. PIA if you need to copy that 100x over.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the other is to use the exec-provider to install what you need. we’ve done this in the past. @Andriy Knysh (Cloud Posse) probably has a snippet somewhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

either way, it’s a slippery slope.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What terraform cloud needs (and the competitors like Spacelift and Scalr support) is a model where you bring your own docker image

@Jake Lundberg (HashiCorp) @sarkis

1
Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Terraform Enterprise does have this ability. I’ll check if we’ll enable this for TFC.

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

We may only allow that for paid accounts (in the future).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Right - that’s fair. But supporting it in some capacity I think is needed. Though support for thirdparty providers is improving (i think?) which mitigates it to some degree.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We had a recent use case of using a third-party provider terraform-helmfile-provider, which depends on helmfile binary, which depends on the helm binary and helm-diff , helm-secrets, etc plugins. Getting all that to run on terraform cloud was a major PIA.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:smiley: oh and kubectl

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

@Erik Osterman (Cloud Posse) can you go through the process to get this to work? I’d like to make sure we’re capturing this from the field.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jake Lundberg (HashiCorp) Here’s what we had… (not proud of this!)

provider "shell" {}

data "shell_script" "install_external_packages" {
  lifecycle_commands {
    read = <<EOF
        set -e
        export PATH=$PATH:${local.external_packages_install_path}:${local.external_packages_install_path}/bin
        mkdir -p ${local.external_packages_install_path}
        cd ${local.external_packages_install_path}
        echo Installing AWS CLI ...
        curl -LO https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
        unzip ./awscli-bundle.zip
        ./awscli-bundle/install -i ${local.external_packages_install_path}
        aws --version
        echo Installed AWS CLI
        echo Installing kubectl ...
        curl -LO https://storage.googleapis.com/kubernetes-release/release/${local.kubectl_version}/bin/linux/amd64/kubectl
        chmod +x ./kubectl
        echo Installed kubectl
        echo Installing helm ...
        curl -LO https://get.helm.sh/helm-v${var.helm_version}-linux-amd64.tar.gz
        tar -zxvf ./helm-v${var.helm_version}-linux-amd64.tar.gz
        mv ./linux-amd64/helm ./helm
        chmod +x ./helm
        helm version --client
        echo Installed helm
        echo Installing helmfile ...
        curl -LO https://github.com/roboll/helmfile/releases/download/v${var.helmfile_version}/helmfile_linux_amd64
        mv ./helmfile_linux_amd64 ./helmfile
        chmod +x ./helmfile
        which helmfile
        helmfile --version
        echo Installed helmfile
        aws_cli_assume_role_arn="${var.aws_assume_role_arn}"
        aws_cli_assume_role_session_name="${module.label.id}"
        echo Assuming role "$aws_cli_assume_role_arn" ...
        curl -L https://github.com/stedolan/jq/releases/download/jq-${var.jq_version}/jq-linux64 -o jq
        chmod +x ./jq
        # source <(aws --output json sts assume-role --role-arn "$aws_cli_assume_role_arn" --role-session-name "$aws_cli_assume_role_session_name"  | jq -r  '.Credentials | @sh "export AWS_SESSION_TOKEN=\(.SessionToken)\nexport AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) "')
        echo Assumed role "$aws_cli_assume_role_arn"
        echo Getting kubeconfig from the cluster...
        aws eks update-kubeconfig --name=${var.cluster_name} --region=${var.region} --kubeconfig=${var.kubeconfig_path} ${var.aws_eks_update_kubeconfig_additional_arguments}
        export KUBECONFIG=${var.kubeconfig_path}
        kubectl version --kubeconfig ${var.kubeconfig_path}
        kubectl get nodes --kubeconfig ${var.kubeconfig_path}
        kubectl get pods --all-namespaces --kubeconfig ${var.kubeconfig_path}
        echo Got kubeconfig from the cluster
		echo '{"helm_version": "$(helm version --client)", "helmfile_version": "$(helmfile --version)"}' >&3
      EOF
  }
}

output "helm_version" {
  value = data.shell_script.install_external_packages.output["helm_version"]
}

output "helmfile_version" {
  value = data.shell_script.install_external_packages.output["helmfile_version"]
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so, if we just had a docker image with our toolchain inside of it, it would have been a lot easier.

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Thanks for this Erik. What’s strange is we have custom workers in TFE. I think there are some major security concerns, but you can install anything you want anyway; it just slows things down.

I’ll pass this on to the developers so they see what kind of gymnastics folks are going through.

1
Andrew avatar

nvm looks like “Your plans and applies occur on machines you control. Terraform Cloud is only used to store and synchronize state.” is the way to go

sheldonh avatar
sheldonh

Terraform Cloud is more than that. That’s just the state file stuff. I use it for running plan and apply in their hosted containers

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

hello, I have the following code to create a launch configuration which creates one secondary volume, but I need to create multiple EBS volumes as per the requirement … can anyone suggest the best approach to get that functionality?

resource "aws_launch_configuration" "launch_config_with_secondary_ebs" {
  count = var.secondary_ebs_volume_size != "" ? 1 : 0

  ebs_optimized     = var.enable_ebs_optimization
  enable_monitoring = var.detailed_monitoring
  image_id          = var.image_id != "" ? var.image_id : data.aws_ami.asg_ami.image_id
  instance_type     = var.instance_type
  key_name          = var.key_pair
  name_prefix       = join("-", compact(["LaunchConfigWith2ndEbs", var.name, format("%03d-", count.index + 1)]))
  placement_tenancy = var.tenancy
  security_groups   = var.security_groups
  user_data_base64  = base64encode(data.template_file.user_data.rendered)

  ebs_block_device {
    device_name = local.ebs_device_map[local.ec2_os]
    encrypted   = var.secondary_ebs_volume_existing_id == "" ? var.encrypt_secondary_ebs_volume : false
    iops        = var.secondary_ebs_volume_iops
    snapshot_id = var.secondary_ebs_volume_existing_id
    volume_size = var.secondary_ebs_volume_size
    volume_type = var.secondary_ebs_volume_type
  }

  iam_instance_profile = element(
    coalescelist(
      aws_iam_instance_profile.instance_role_instance_profile.*.name,
      [var.instance_profile_override_name],
    ),
    0,
  )

  root_block_device {
    iops        = var.primary_ebs_volume_type == "io1" ? var.primary_ebs_volume_size : 0
    volume_size = var.primary_ebs_volume_size
    volume_type = var.primary_ebs_volume_type
  }

  lifecycle {
    create_before_destroy = true
  }
}
Chris Fowles avatar
Chris Fowles

you can have multiple ebs_block_device blocks

Rajesh Babu Gangula avatar
Rajesh Babu Gangula
variable "secondary_ebs_volume_size" {
  description = "EBS Volume Size in GB"
  type        = list(string)
  default     = []
}

ebs_block_device {
    count = length(var.secondary_ebs_volume_size) > 0 ? 1 : 0
    device_name = local.ebs_device_map[local.ec2_os]
    encrypted   = var.secondary_ebs_volume_existing_id == "" ? var.encrypt_secondary_ebs_volume : false
    iops        = var.secondary_ebs_volume_iops
    snapshot_id = var.secondary_ebs_volume_existing_id
    volume_size = var.secondary_ebs_volume_size[count.index]
    volume_type = var.secondary_ebs_volume_type
  }
Rajesh Babu Gangula avatar
Rajesh Babu Gangula

does this work

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

I mean we need to have the functionality that can have any number of devices added based on the requirement

Chris Fowles avatar
Chris Fowles

no, count won’t work

Chris Fowles avatar
Chris Fowles

you’ll need to use dynamic blocks for that and terraform 0.12

Chris Fowles avatar
Chris Fowles
Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
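
A minimal 0.12 sketch of the dynamic-block approach Chris describes (the variable shape here is an assumption; adapt it to the real module inputs):

variable "image_id" {
  type = string
}

variable "secondary_ebs_volumes" {
  description = "One object per additional EBS volume"
  type = list(object({
    device_name = string
    volume_size = number
    volume_type = string
  }))
  default = []
}

resource "aws_launch_configuration" "this" {
  name_prefix   = "example-"
  image_id      = var.image_id
  instance_type = "t3.micro"

  # One ebs_block_device block is rendered per list element
  dynamic "ebs_block_device" {
    for_each = var.secondary_ebs_volumes
    content {
      device_name = ebs_block_device.value.device_name
      volume_size = ebs_block_device.value.volume_size
      volume_type = ebs_block_device.value.volume_type
    }
  }

  lifecycle {
    create_before_destroy = true
  }
}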

2020-05-21

Aziz avatar

Guys, there is a PR raised by one of my colleagues for fixing the auto-scaling issue in the RDS cluster - can you guys review it? https://github.com/cloudposse/terraform-aws-rds-cluster/pull/67

fix(#63): instance_count to be independent of autoscaling_min_capacity by sumeetshk · Pull Request #67 · cloudposse/terraform-aws-rds-cluster

Fix for #63 The cluster_size or the instance_count should be independent of autoscaling_min_capacity as autoscaling_min_capacity is automatically taken care of by AWS through the autoscaling policy…

Sumeet Shukla avatar
Sumeet Shukla

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) Could you please review the PR or have someone review it: https://github.com/cloudposse/terraform-aws-rds-cluster/pull/67

fix(#63): instance_count to be independent of autoscaling_min_capacity by sumeetshk · Pull Request #67 · cloudposse/terraform-aws-rds-cluster

Fix for #63 The cluster_size or the instance_count should be independent of autoscaling_min_capacity as autoscaling_min_capacity is automatically taken care of by AWS through the autoscaling policy…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, let’s move to #pr-reviews

fix(#63): instance_count to be independent of autoscaling_min_capacity by sumeetshk · Pull Request #67 · cloudposse/terraform-aws-rds-cluster

Fix for #63 The cluster_size or the instance_count should be independent of autoscaling_min_capacity as autoscaling_min_capacity is automatically taken care of by AWS through the autoscaling policy…

Sumeet Shukla avatar
Sumeet Shukla

sure

Andreas P avatar
Andreas P

Guys, do you have any reference articles/pointers on how to get to an ECS Fargate deployment for my microservices with CI/CD enabled?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

conzymaher avatar
conzymaher

Awesome example. Out of interest, in this case what updates/registers the new task definition after the latest image is built in CI? And how do you stop that from conflicting with the initial task definition created by terraform?

conzymaher avatar
conzymaher

Any time I have implemented something like this, I need to create the initial skeleton task definition with Terraform. But after I build/tag the image, I need some additional automation to generate the new task definition (that references the new image tag, typically a git SHA) and then register that task definition and restart the service

conzymaher avatar
conzymaher

I’d be interested to hear your approach to this problem. Or is it an exercise left to the reader

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This uses codebuild/codepipeline and the aws cli to bump the version of the task definition (if I recall correctly)

conzymaher avatar
conzymaher

The problem that I usually encounter though. Is that the next time you plan / apply that terraform you will have a bad time

conzymaher avatar
conzymaher

As it will want to revert those resources which will typically take down the service.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s why you have to ignore_changes on the task definition and explicitly taint it when you need to change it.

1
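
A sketch of that lifecycle arrangement (resource names hypothetical; the cluster and task definition are assumed to exist elsewhere): CI registers new task definition revisions out-of-band, and Terraform is told not to fight over them:

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2

  # CI/CD registers new revisions; don't revert them on apply
  lifecycle {
    ignore_changes = [task_definition]
  }
}

# When Terraform itself should roll out a change:
#   terraform taint aws_ecs_task_definition.app
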
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you using terraform cloud by any chance?

Andreas P avatar
Andreas P

@Erik Osterman (Cloud Posse) Awesome Thanks!!

conzymaher avatar
conzymaher

Yup just started with Terraform cloud as I used terragrunt on previous projects

conzymaher avatar
conzymaher
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So terraform cloud will reduce the complexity of running the terraform as part of your workflow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii can @conzymaher trigger a build in TFC with webhook(-esque) functionality and pass a parameter like an image name/tag?

conzymaher avatar
conzymaher

Ah. So traditionally I have separated infrastructure deployment and “code” deployment. I suppose I could run terraform triggered from VCS with “auto apply” to deploy the code by pushing an updated container definition?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so, for example, maybe you’re using CircleCI/GitHub Actions/etc and you use that for all your integration testing. Then you push an artifact to ECR. Then you trigger a deployment pipeline, and as part of that you trigger TFC.

conzymaher avatar
conzymaher

This is the one fuzzy bit of CI/CD for ECS with Terraform so was just interested to see how you guys have solved it. I have also used the AWS GitHub actions to render and update the task definition

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I agree about separating infrastructure from code deployments. This is where AWS has muddied the waters for us.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is also why we almost exclusively focus on Kubernetes.

johncblandii avatar
johncblandii

I did a quick tutorial on it

conzymaher avatar
conzymaher

Thanks John. I’ll take a look

johncblandii avatar
johncblandii
fabfuel/ecs-deploy

Powerful CLI tool to simplify Amazon ECS deployments, rollbacks & scaling - fabfuel/ecs-deploy

johncblandii avatar
johncblandii

It recreates a task definition allowing changes to docker image, env vars, etc so you can customize it for a specific run, if so desired

johncblandii avatar
johncblandii

All of that is based on the CP module Erik linked above

johncblandii avatar
johncblandii

…and others from CP

wannafly37 avatar
wannafly37

Is there any way to look up names of IAM roles with wildcards? Situation: AWS SSO federated roles are appended with a random string

Zach avatar

Doesn’t look like the iam_role data source supports filters … you could use an external data source and run an AWS API call to get all your roles, then filter it down and return the one you need. We do that for some similar limits when looking up RDS, because ‘describe database instances’ doesn’t take filters

RB avatar

just dump them all to a file

aws iam list-roles --query 'Roles[].RoleName' > all-my-roles.txt
RB avatar

then you can grep them

RB avatar

if you have to use terraform, you can create a null resource that runs the above command, saves to a file, reads the file, creates data sources for each iam role, and then you can use other iam attributes
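
A sketch of one way to do the client-side filtering with an external data source instead of the null resource (the permission set name is hypothetical); the JMESPath query returns a flat string map, which is the shape the external provider requires:

data "external" "sso_role" {
  program = [
    "aws", "iam", "list-roles", "--output", "json",
    "--query",
    "Roles[?starts_with(RoleName, 'AWSReservedSSO_AdminAccess')] | [0].{name: RoleName, arn: Arn}",
  ]
}

# data.external.sso_role.result["arn"] then holds the matched role's ARN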

wannafly37 avatar
wannafly37

Thanks for the input, that should work for me…

loren avatar

Is there an issue yet to support filters on the data source? That would be useful

RB avatar

that would be a nice addition. i dont see an issue for it

RB avatar

Closest one i could find regarding filtering on tags https://github.com/terraform-providers/terraform-provider-aws/issues/7419

aws_iam_role data source doesn't support filtering by tags · Issue #7419 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

RB avatar

The limitation is that GetRole api call doesn’t support anything other than the name supplied. I wonder if tf aws provider would list roles first before applying filtering. I also wonder if they already do that in other data sources

loren avatar

the GetRole API says RoleName accepts regex…. https://docs.aws.amazon.com/IAM/latest/APIReference/API_GetRole.html

GetRole - AWS Identity and Access Management

Retrieves information about the specified role, including the role’s path, GUID, ARN, and the role’s trust policy that grants permission to assume the role. For more information about roles, see Working with Roles .

loren avatar

oh, no, that’s the regex pattern it accepts. dang.

loren avatar

they’d have to use ListRoles and then client-side filtering

loren avatar

Looks interesting… Anyone use checkov?

https://www.checkov.io/1.Introduction/Getting%20Started.html

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I had a call with them today

RB avatar

using checkov, tflint, and tfsec in our atlantis for every tf module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

product looks nice. we’re going to be doing a poc with the cloud posse terraform modules

RB avatar

it catches some stuff the others dont catch which is nice.

RB avatar

all of the linters have a certain amount of overlap

loren avatar

Cool, was wondering how it would work with modules that are not super opinionated

RB avatar

i like checkov more than tflint because it doesn’t require a configuration file to work. it just works out of the box.

loren avatar

Is it pure static analysis, or does it need to run a plan or apply?

RB avatar

works after i run terraform init

1
1
loren avatar

tf 0.13.0 betas coming soon… Sure hope this upgrade is less painful than 0.12!

https://discuss.hashicorp.com/t/terraform-v0-13-0-beta-program/9066

Terraform v0.13.0 beta program

I’m very excited to announce that we will start shipping public Terraform 0.13.0 beta releases on June 3rd. The full release announcement is posted as a GitHub issue, and I’m opening up this discuss post as a place to host community discussions so that the GitHub issue can be used just for announcements.

5
maarten avatar
maarten

Nice, so 2020 is going to be a good year after all

Terraform v0.13.0 beta program

I’m very excited to announce that we will start shipping public Terraform 0.13.0 beta releases on June 3rd. The full release announcement is posted as a GitHub issue, and I’m opening up this discuss post as a place to host community discussions so that the GitHub issue can be used just for announcements.

Chris Fowles avatar
Chris Fowles
module expansion: modules will support count and for_each. We're still working on depends_on, but it's looking good and I think it'll make 0.13.0.

cool-doge2
loren avatar

I mean, what is a terraform module, if I no longer have to code an enable variable into every resource!
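
A sketch of what that unlocks in 0.13 (module source hypothetical): the per-resource "enable" flag moves out of the module body and onto the module block itself:

variable "enabled" {
  type    = bool
  default = true
}

module "bucket" {
  source   = "./modules/s3-bucket"
  for_each = toset(var.enabled ? ["logs", "artifacts"] : [])

  name = each.key
}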

Chris Fowles avatar
Chris Fowles

i’m going to delete so much code!

2
sheldonh avatar
sheldonh

Omg. That will solve so much. Providers normally live outside a module though, so I still have one challenge: I want to loop over the AWS accounts too. This is a major plus regardless! Less repeated code and more reason to use modules

sheldonh avatar
sheldonh

It’s in beta….

Chris Fowles avatar
Chris Fowles

would rather a long beta than a fast bug release

loren avatar

i started using it with beta1, didn’t hit any issues. beta2 fell over with module nesting more than 1 deep. fixed in master, waiting on the next beta to test again…

bananadance1
Chris Fowles avatar
Chris Fowles

oh happiness

cli: Add state replace-provider subcommand to allow changing the provider source for existing resources [GH-24523]
1
raghu avatar

Hi Folks, is there any command or something if I would like to use a specific aws provider? Currently I am using the latest (provider.aws v2.62.0) and would like to use v2.59.0 to get rid of the below error, which I am getting with the v2.62.0 provider.

1 error occurred:
	* aws_autoscaling_group.asg: Error creating AutoScaling Group: ValidationError: You must use a valid fully-formed launch template. You cannot use PartitionNumber with a Placement Group that does not exist. Specify a valid Placement Group and try again.
	status code: 400, request id: 7db8c6a4-8061-47bb-9440-ded120c14d03
Chris Fowles avatar
Chris Fowles
Providers - Configuration Language - Terraform by HashiCorp

Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.

raghu avatar

Awesome. Thankyou @Chris Fowles

Providers - Configuration Language - Terraform by HashiCorp

Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.

Chris Fowles avatar
Chris Fowles

use:

terraform {
  required_providers {
    aws = "= 2.59.0"
  }
}

2020-05-22

raghu avatar

Hi Guys, I am kind of new to terraform enterprise and we just starting using it.When I initiate queue plan from my workspace though configuring keys and all those, it just showing blank, doesn’t do anything. I just canceled it.Am I missing anything? Appreciate your suggestion.

raghu avatar

I have restarted replicatedctl and that fixed the issue

Andrew avatar

why does terraform init download the repo again from github and then terraform apply ignores the cwd and only executes the modules in .terraform? I have to edit my source files twice every time I do this and it is annoying.

Zachary Loeber avatar
Zachary Loeber

Is this a generic rant or do you really want an answer? :)

2020-05-23

2020-05-24

2020-05-25

Zach avatar

I keep hearing lately that ‘terraform should build your infra and not deploy your code’ - but how does that work if you are building immutable AMIs as the ‘code package’? If terraform isn’t the one ‘deploying’ the AMI, what else is being used, and how is the terraform state for the ASG/launch-template/etc being maintained?

jose.amengual avatar
jose.amengual

that is a good question. I usually say do not use TF to deploy code, and I mostly refer to people using SSH through TF to push a zip file and run a bash script, which is not your case

jose.amengual avatar
jose.amengual

and since you are changing the ASG, I do not see why you should not use TF for that. I think it comes down to trying it, and if it fits your needs then you are good

Matt Gowie avatar
Matt Gowie

I feel like this would be a good question for the Hashicorp Terraform message board. It’s a pretty interesting question — I’d like to know how others are handling that when AMIs are their method of deployment.

jose.amengual avatar
jose.amengual

When we started deploying AMIs at my old company we did not use TF; we used Packer to create the images and we had Spinnaker to switch the AMIs by using new ASGs

jose.amengual avatar
jose.amengual

that is an example where the deployment tool can handle some infrastructure parts of the deployment, but now that I’m using TF I can see how this could be implemented differently; in fact Spinnaker supports TF stages now

Chris Fowles avatar
Chris Fowles

terraform kind of sucks at rolling updates of asgs

1
Chris Fowles avatar
Chris Fowles

so i’d still recommend something else to handle that (not sure what the else is)

Zach avatar

It’s been mentioned here in another thread; there are several ways of doing ASG updates. We just tie the name of the ASG to the AMI ID, so new AMI = new ASG, and it has a create_before_destroy lifecycle. Some other folks pointed out a blog describing how you can use some AWS CF templates in your terraform, which can access some hidden ASG APIs that terraform can’t use.
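
A sketch of that pattern (variables hypothetical; the launch template is assumed to exist elsewhere): the AMI ID baked into the ASG name forces a replacement on every new AMI, and create_before_destroy brings the new ASG up before the old one is torn down:

resource "aws_autoscaling_group" "app" {
  # new AMI => new name => Terraform replaces the whole ASG
  name                = "app-${var.ami_id}"
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.app.id
    version = aws_launch_template.app.latest_version
  }

  lifecycle {
    create_before_destroy = true
  }
}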

Chris Fowles avatar
Chris Fowles

yeh - you can do all that @Zach, and I do, but it still kind of sucks

Chris Fowles avatar
Chris Fowles

it’s a hack - not a feature

Cloud Posse avatar
Cloud Posse
04:00:30 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Jun 03, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-05-26

wannafly37 avatar
wannafly37

Does anyone have recommendations on how to best secure a terraform CI/CD pipeline when using federated logins/SSO? My initial thoughts are to create a IAM user in each account (dev/staging/prod) that would have static credentials, but there has to be a better way…

roth.andy avatar
roth.andy

You could add an Instance Profile to the EC2 instance(s) that the pipeline is running on. That would give the pipeline access to the same role that federated users get without having iam user creds. You just have to make sure only trusted users and/or pipelines run on them since the instance profile is available to any running process. In a Kubernetes world that means using something like kiam to hand out access to selected pods

uswitch/kiam

Integrate AWS IAM with Kubernetes. Contribute to uswitch/kiam development by creating an account on GitHub.

roth.andy avatar
roth.andy

If you are on k8s, you want to add the instance profile to a separate node pool that only kiam runs on. All other workloads should run on a node pool that does not have an instance profile. kiam delegates access to other pods that need it based on rules you set up

wannafly37 avatar
wannafly37

Sorry, I forgot an important detail there - a 3rd party cloud based CI/CD pipeline

roth.andy avatar
roth.andy

Ah, then maybe a service account user in your SSO? We have done that as well. It can even work when MFA is required by saving off the MFA token using the text-based version of the QR code that is used to register an MFA app

wannafly37 avatar
wannafly37

Ok, yea I think that may be better. We’re using AWS SSO, and I don’t think it has a concept of a “service account user”, so it would just be a regular user with permissions to do what is needed in the child accounts? I’m still a little confused how that would work since the credentials change

wannafly37 avatar
wannafly37

since theres no IAM user, just assumed roles…

roth.andy avatar
roth.andy

It would be a regular user that isn’t associated with an actual person. Save the password off as a secret that will get used by the pipeline.

It would use the same IAM roles that your other users use

wannafly37 avatar
wannafly37

Ahh - saving the password instead of an access/secret key - I guess that’s possible too. I wonder which would be better, that or a terraformed IAM user in each account

roth.andy avatar
roth.andy

I prefer to avoid using one-off IAM users, since you have to manage rotation of that cred. If you have a service account user in SSO, the rotation of that password can be managed according to your corporate policy the same as all your other users

wannafly37 avatar
wannafly37

Good point. Thanks for the input, Andrew.

RB avatar

anyone running celery in ecs / fargate ? looking to run the worker and celerybeat (scheduler) in ecs if possible

• is it OK to run celery worker and beat in ECS ? is it OK to run in fargate?

• what tf modules, if any, would help ?

• any gotchas ?

RB avatar

we’re already using the terraform-aws-modules/terraform-aws-atlantis which doesn’t use a module but simply uses the ecs service and task definition resources using FARGATE as an example

Matt Gowie avatar
Matt Gowie

@RB I’m running these for a client project, just like you’re saying: Beat + Celery both in their own services on Fargate. My setup is pretty stock standard — Nothing fancy.

Matt Gowie avatar
Matt Gowie

I actually used https://github.com/masterpointio/terraform-aws-ecs-alb-service-task and then just didn’t front the resulting service with an ALB which works great.

masterpointio/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - masterpointio/terraform-aws-ecs-alb-service-task

1
Matt Gowie avatar
Matt Gowie

The healthcheck was a tiny bit weird. Looks like I wrote a start_celery and start_beat script which the CMD calls. These start the underlying process and create a PID file. Then I have a healthcheck script which checks the known PID file to ensure the task is up and running.

# Start celery worker
celery worker \
    -l INFO \
    --app=app_name \
    --time-limit=300 \
    --concurrency=2 \
    --workdir=/app/ \
    --pidfile=/app/celery_pid 

That was my start_celery script.

RB avatar

oh interesting!

Matt Gowie avatar
Matt Gowie

Or actually, looking back at it — I just passed a healthcheck_command to my fargate_background_job module (wrapper for the CP module) like this:

  healthcheck_command          = "kill -0 $(cat /app/beat_pid)"
  healthcheck_command          = "kill -0 $(cat /app/celery_pid)"
RB avatar

so you’re running both the tasks in the same fargate cluster too ?

Matt Gowie avatar
Matt Gowie

Yeah, all services running in the same cluster.

RB avatar

if you get a chance, would love an open sourced celery specific fargate example using your module

RB avatar

ill work through it and try to figure it out

Matt Gowie avatar
Matt Gowie


celery specific fargate example
It’s all super light — Not sure I have much of an example I could show. I just wrapped the CP terraform-aws-ecs-alb-service-task (seems I linked my fork above by accident since I had to fork and modify for my client) and terraform-aws-ecs-container-definition, but beyond that there is no secret sauce. I’m sure you’ll get it figured.

1
RB avatar

hmmm, having an issue with the port mappings. can’t seem to find the celery worker’s ports online. does this look right ?

module "container_definition" {
  source = "git::<https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=tags/0.21.0>"

  container_name  = local.application
  container_image = "${module.ecr.repository_url}/${local.application}:latest"

  port_mappings = [
    {
      containerPort = 8080
      hostPort      = 8080
      protocol      = "tcp"
    }
  ]

  command = [
    <<EOF
celery worker \
  -l INFO \
  --app=app.tasks \
  --time-limit=300 \
  --workdir=/opt/celery \
  --pidfile=/opt/celery/celery_pid
EOF
  ]

  healthcheck = {
    command     = ["kill -0 $(cat /opt/celery/celery_pid)"]
    interval    = 60
    retries     = 1
    startPeriod = 0
    timeout     = 60
  }
}
RB avatar

when i ssh to my container, i can run the same celery worker command without any issues

Matt Gowie avatar
Matt Gowie

I didn’t expose any ports for my celery worker AFAIR.

Matt Gowie avatar
Matt Gowie

Are you trying to get to the Celery UI @RB?

Matt Gowie avatar
Matt Gowie

(Is there a celery UI? I have no idea haha.)

RB avatar

lol there is a flower ui for celery on port 5555 or 8080 i believe

RB avatar

i think im making some noob mistakes here

RB avatar

yes i definitely am, because i’m using celery worker to read off the sqs queue so it doesn’t need any port access

RB avatar

it just needs access to the sqs queue and that should do it

Matt Gowie avatar
Matt Gowie

Ah looking at your command — that might be the issue:

  command = [
    <<EOF
celery worker \
  -l INFO \
  --app=app.tasks \
  --time-limit=300 \
  --workdir=/opt/celery \
  --pidfile=/opt/celery/celery_pid
EOF
  ]

Not 100% on this, but that might blow up because it’s not following the comma separated strings command syntax. AFAIK docker / ECS won’t like that.

Matt Gowie avatar
Matt Gowie

My similar example that I posted earlier — I had that command in a bash script that I bundled with the container and then my CMD was to invoke that bash script.

RB avatar

ahhh ok so every argument needs to be broken up into a separate value in the array ?

Matt Gowie avatar
Matt Gowie

Yes, typically. It’s a PITA, but that’s the way Docker wants it.
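
For reference, a sketch of the exec-form equivalent of that command, with each argument as its own array element (the same shape RB’s split() call further down produces):

command = [
  "celery", "worker",
  "-l", "INFO",
  "--app=app.tasks",
  "--time-limit=300",
  "--workdir=/opt/celery",
  "--pidfile=/opt/celery/celery_pid",
]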

RB avatar

oh i see, that makes this a lot easier

RB avatar

i have flask and celery worker tied together

RB avatar

so i wanted to use the same docker file

RB avatar

i suppose i can make a base image and have 2 child images, one for flask api, the other for celery worker

RB avatar

and each child can have its own CMD argument

Matt Gowie avatar
Matt Gowie

I did the same. One Dockerfile that deployed a Django, Celery, and Beat application. You should be able to make it happen.

cool-doge1
Matt Gowie avatar
Matt Gowie

Or keep the one dockerfile / image and then the command dictates what the container is actually going to do.

RB avatar

thx again for all the help. been whacking away at this all day. if i get stuck again tomorrow, i might ping you again in this thread if you don’t mind.

Matt Gowie avatar
Matt Gowie

@RB No problem — Happy to help!

RB avatar

lots of gotchas… one that i didn’t expect was that the container command needs to be an array like this

command = split(
    " ",
    "celery worker -l INFO --app=app.tasks --time-limit=300 --workdir=/opt/celery --pidfile=/opt/celery/celery_pid",
  )

whereas the healthcheck command cannot be split on space and instead should be like this

container_healthcheck = {
    command     = ["kill -0 $(cat /opt/celery/celery_pid)"]
RB avatar

now celery worker is in a steady state

Matt Gowie avatar
Matt Gowie

Huh interesting. Well, glad you got it figured!

RB avatar

nope nvm, had to do the same split. not sure why it wasn’t working before.

RB avatar

anyway thanks again!

RB avatar

ah shit, spoke too soon. getting this into a steady state is a pain. just like you said, it’s a pain to get it to pass healthchecks

RB avatar

ok so when i changed the healthcheck to command = split(" ", "cat /opt/celery/beat_pid") that’s when i could get it to work correctly

RB avatar

idk how your kill command was a healthcheck. it kept killing the process for me.

Matt Gowie avatar
Matt Gowie

Huh… weird.

Matt Gowie avatar
Matt Gowie

kill -0 checks that a particular process ID is up.

1
Matt Gowie avatar
Matt Gowie

cat $PID_FILE won’t actually do a health check. The underlying beat or celery process could die and that file will still be around.

1
Matt Gowie avatar
Matt Gowie
What does `kill -0` do?

I recently came across this in a shell script. if ! kill -0 $(cat /path/to/file.pid); then … do something … fi What does kill -0 … do?

Matt Gowie avatar
Matt Gowie

Oh are you using the right task def syntax?

  # https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_HealthCheck.html
  healthcheck = {
    command     = ["CMD-SHELL", "${var.healthcheck_command} || exit 1"]
    retries     = 3
    timeout     = var.healthcheck_timeout
    interval    = var.healthcheck_interval
    startPeriod = 30
  }
Matt Gowie avatar
Matt Gowie

You need that “CMD-SHELL” bit.

party_parrot1
Matt Gowie avatar
Matt Gowie

The linked AWS API Ref should give you an idea about that.

RB avatar

brilliant

Matt Gowie avatar
Matt Gowie

@RB Ah that ended up doing the trick? Glad to hear it!

RB avatar

Separate from this thread but related to celery: I attempted to use sqs for celery’s queue, which seems to work; however, the flower ui doesn’t support it

RB avatar

Do you use redis / elasticache or rabbitmq as the queue?

RB avatar

I’m considering elasticache redis or amazon mq since it will remove our reliance on our current self hosted rabbit

Matt Gowie avatar
Matt Gowie

We used Elasticache. That worked well. Though it is very likely to be more expensive than SQS.

RB avatar

did you get good results with flower ui ?

Matt Gowie avatar
Matt Gowie

Unless you need Rabbit for something else, I would steer clear of that route. More complicated than Redis.

RB avatar

we’re looking at the costs betw amazon mq and elasticache now

Matt Gowie avatar
Matt Gowie

We’re not using Flower UI. My client’s celery usage is very minimal. We just let it run.

RB avatar

ya we dont use rabbitmq for anything else

RB avatar

oh i see interesting so you have no use for the UI

Matt Gowie avatar
Matt Gowie

Yeah, we just schedule jobs through our Django app and it’s our simple background job runner. We have a few scheduled jobs through beat.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
06:55:35 PM

Service degradation for Terraform Cloud May 26, 18:50 UTC Investigating - We have identified service degradation for Terraform Cloud and are investigating.

Service degradation for Terraform Cloud

HashiCorp Services’s Status Page - Service degradation for Terraform Cloud.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
08:07:39 PM

Service degradation for Terraform Cloud May 26, 20:02 UTC Update - We are continuing to investigate this issue.May 26, 18:50 UTC Investigating - We have identified service degradation for Terraform Cloud and are investigating.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
09:45:28 PM

Service degradation for Terraform Cloud May 26, 21:24 UTC Resolved - We’ve mitigated the issue causing the service degradation and are now monitoring. All services are currently operating normally.May 26, 20:02 UTC Update - We are continuing to investigate this issue.May 26, 18:50 UTC Investigating - We have identified service degradation for Terraform Cloud and are investigating.

Zach avatar

https://github.com/hashicorp/terraform/issues/25016
module expansion: modules will support count and for_each. We’re still working on depends_on, but it’s looking good and I think it’ll make 0.13.0.

Terraform v0.13.0 beta program · Issue #25016 · hashicorp/terraform

I&#39;m very excited to announce that we will start shipping public Terraform 0.13.0 beta releases on June 3rd. You will be able to download them on releases.hashicorp.com, like the 0.12.0-beta ser…

Chris Fowles avatar
Chris Fowles
Deploy Any Resource With The New Kubernetes Provider for HashiCorp Terraform

We are pleased to announce the alpha release of a new version of the Kubernetes Provider for HashiCorp Terraform. The kubernetes-alpha provider lets you package, deploy, and manage…

roth.andy avatar
roth.andy

If I get this demo working I’ll be using the new Kubernetes provider for Terraform during my keynote at the Crossplane Community Day virtual event. https://www.eventbrite.com/e/crossplane-community-day-tickets-104465284478 https://twitter.com/mitchellh/status/1265414263281029120

attachment image

Yes! An alpha release of a new Kubernetes provider for Terraform that can represent ANY K8S resource (including any CRDs). You can also run this one-liner (image) to convert any YAML over. https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-kubernetes-provider-for-hashicorp-terraform/ https://pbs.twimg.com/media/EY-nj__U8AAzI4C.jpg

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Finally! This is great news.

Chris Fowles avatar
Chris Fowles

this is excellent - it actually answers a really big question with how we move forward

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Do terraform modules now begin to replace helm charts?

Haroon Rasheed avatar
Haroon Rasheed

What would be the approach if my infra deployed by Terraform is messed up (a few resources got deleted manually due to some external factor) and terraform destroy fails with resources missing? How do I get over this issue? I tried terraform state rm <resource>, but this is not viable if we have multiple resources, cos I’d need to identify them one by one and delete them. Are there any approaches or recommendations?

Chris Fowles avatar
Chris Fowles

terraform plan | grep '#'

1
4
1
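
A sketch of the workflow that one-liner enables (addresses illustrative): the grep surfaces every resource address the plan touches, so the ones deleted out-of-band can then be dropped from state individually:

terraform plan -destroy | grep '#'
#   # aws_instance.example will be destroyed
#   # aws_s3_bucket.logs will be destroyed

terraform state rm aws_instance.example
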
Chris Fowles avatar
Chris Fowles

i can’t believe it took me this long to figure that out

roth.andy avatar
roth.andy

oooh that’s a good one

Chris Fowles avatar
Chris Fowles

i honestly feel like the sun just rose

2020-05-27

Carlos R. avatar
Carlos R.

Terraform AWS data aws_iam_policy_document

Problem: every time I do a tf plan/apply using an aws_iam_policy_document, terraform recomputes the policy document. Thus, it always shows changes in its plan. (See image below)

Question: How do you guys deal with it? Do you simply ignore? Do you avoid using the aws_iam_policy_document?

maarten avatar
maarten

@Carlos R. do you have a depends_on defined in the datasource maybe ?

Carlos R. avatar
Carlos R.

nop

maarten avatar
maarten

would you mind copy/pasting the code-block ?

Carlos R. avatar
Carlos R.

not at all

Carlos R. avatar
Carlos R.
data "aws_iam_policy_document" "gitlab_backup_policy_document" {

  statement {
    sid    = "ListOnlyMyBucketInfo"
    effect = "Allow"
    actions = [
      "s3:List*",
      "s3:Get*"
    ]
    resources = [module.gitlab_backup_bucket.bucket_arn]
  }

  statement {
    sid    = "ListAllBuckets"
    effect = "Allow"
    actions = [
      "s3:ListAllMyBuckets"
    ]
    resources = [
      "*",
    ]
  }

  statement {
    sid    = "WriteObjectActions"
    effect = "Allow"
    actions = [
      "s3:Put*",
      # TODO: temporary actions below. To be used during testing only.
      "s3:List*",
      "s3:Get*"
    ]
    resources = ["${module.gitlab_backup_bucket.bucket_arn}/*"]
  }

  statement {
    sid    = "ListKMSAliases"
    effect = "Allow"
    actions = [
      "kms:ListAliases"
    ]
    resources = [
      "*",
    ]
  }
}

resource "aws_iam_policy" "gitlab_backup_policy" {
  name   = "${module.label.id}-gitlab"
  policy = data.aws_iam_policy_document.gitlab_backup_policy_document.json
}

resource "aws_iam_policy_attachment" "gitlab_backup_policy_attachment" {
  name       = module.label.id
  users      = [aws_iam_user.gitlabl_backup_application.name]
  policy_arn = aws_iam_policy.gitlab_backup_policy.arn
}
maarten avatar
maarten

I don’t see the bucket_policy in there ?

Carlos R. avatar
Carlos R.

it is a iam policy, not a bucket policy

Carlos R. avatar
Carlos R.

I am attaching it to an IAM User

Carlos R. avatar
Carlos R.

you can see it in the last resource block

maarten avatar
maarten

ah ok, because the screenshot you showed was about a data "aws_iam_policy_document" "bucket_policy" {

Carlos R. avatar
Carlos R.

it’s the same block, but yeah.. the name can be misleading

loren avatar

i think more what @maarten is getting at is that this is not typical tf behavior, so it is something in your config. without seeing the exact config, and probably more of the plan result, it is very difficult to just intuit a reason…

loren avatar

i like to encourage people to write a second, simpler, minimal config to try to reproduce the problem and pinpoint what change causes it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) I feel like this is something you dealt with recently. Do you recall what you did to prevent it?

maarten avatar
maarten

Ah, we talked in private; there was a misunderstanding regarding the terraform code snippet earlier. But most likely it is not related to a faulty terraform configuration, as it could not be reproduced.

Carlos R. avatar
Carlos R.

Should I delete the post? so that it does not generate confusion

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On the contrary! I think it would be more helpful to summarize the confusion

Carlos R. avatar
Carlos R.

Alright, sounds good. Will do.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @maarten for helping out so much!

1
Carlos R. avatar
Carlos R.

Indeed, thank you very @maarten

maarten avatar
maarten

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) The problem is that the moment a change happened to the S3 bucket, the bucket policy gets recalculated, because here the splat operator is not used: https://github.com/cloudposse/terraform-aws-s3-bucket/blob/master/main.tf#L133 , while on line 115 it is. If they both use the splat operator it will be fine.

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, we missed that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and I don’t like [0]

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

splat+join works in all cases
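
A sketch of the difference with a count-gated resource (names illustrative): the splat+join form degrades to an empty string when count is 0, while indexing with [0] errors on the empty list:

# safe whether count is 0 or 1
arn = join("", aws_iam_policy.default.*.arn)

# errors when count = 0
arn = aws_iam_policy.default[0].arn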

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @maarten

Release notes from terraform avatar
Release notes from terraform
05:04:31 PM

v0.12.26 v0.12.26

Release notes from terraform avatar
Release notes from terraform
05:24:37 PM

v0.12.26 ENHANCEMENTS: backend/remote: Can now accept -target options when creating a plan using remote operations, if supported by the target server. (Server-side support for this in Terraform Cloud and Terraform Enterprise will follow in forthcoming releases of each.) (https://github.com/hashicorp/terraform/issues/24834)

backend/remote: Support -target on plan and apply by apparentlymart · Pull Request #24834 · hashicorp/terraform

This is the local-CLI portion of allowing -target to be used with remote operations. -target addresses given on local CLI are copied into the API request to create a run, which Terraform Cloud will…

Mr.Devops avatar
Mr.Devops

Hi - I’m hoping the pros here can help me understand how to use a dynamic block expression within the aws_autoscaling_group resource. I’ve gone over the TF docs many times and I still can’t seem to wrap my head around how to use it.

Maybe an example would be helpful to shed some light.

Here’s what I’m attempting to do with the aws_autoscaling_group resource: I would like to use mixed_instances_policy but iterate through multiple instance types (the instance type varies) using the override block.

e.g

resource "aws_autoscaling_group" "example" {
  availability_zones = ["us-east-1a"]
  desired_capacity   = 1
  max_size           = 1
  min_size           = 1

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = "${aws_launch_template.example.id}"
      }

      override {
        instance_type     = "c4.large"
        weighted_capacity = "3"
      }

      override {
        instance_type     = "c3.large"
        weighted_capacity = "2"
      }
    }
  }
}
Matt Gowie avatar
Matt Gowie
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Matt Gowie avatar
Matt Gowie

If I understand you correctly, you’d want to do something like the following:

mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = "${aws_launch_template.example.id}"
      }
      
      dynamic "override" {
        for_each = var.instance_type_overrides
        iterator = i
        content {
          instance_type     = i.value.instance_type
          weighted_capacity = i.value.weighted_capacity
        }
      }
    }
  }
Matt Gowie avatar
Matt Gowie

Not sure about using dynamic when nested… but does that look like what you’re looking for @Mr.Devops?

Mr.Devops avatar
Mr.Devops

yes thx @Matt Gowie this looks perfect. Maybe I can also set var.instance_type_overrides to type list(string) to include all the diff instance types?

Matt Gowie avatar
Matt Gowie

Ah instance_type_overrides in the above example would be of type:

list(object({
    instance_type = string
    weighted_capacity     = string
}))
Mr.Devops avatar
Mr.Devops

Ah yes list(object) would do it

1
Mr.Devops avatar
Mr.Devops

I will try and let you know if all is good thx again

1
Mr.Devops avatar
Mr.Devops

i totally forgot to update you (got too excited that it’s working)

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:45:26 PM

Service impact to Terraform runs May 27, 22:24 UTC Investigating - We are currently investigating an issue affecting a subset of Terraform runs.

Service impact to Terraform runs

HashiCorp Services’s Status Page - Service impact to Terraform runs.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:55:32 PM

Service impact to Terraform runs May 27, 22:51 UTC Identified - The issue has been identified and a fix is being implemented.May 27, 22:51 UTC Update - We are continuing to investigate this issue.May 27, 22:24 UTC Investigating - We are currently investigating an issue affecting a subset of Terraform runs.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
11:55:26 PM

Service impact to Terraform runs May 27, 23:45 UTC Resolved - We’ve implemented a confirmed a fix to remedy this issue.May 27, 22:51 UTC Identified - The issue has been identified and a fix is being implemented.May 27, 22:51 UTC Update - We are continuing to investigate this issue.May 27, 22:24 UTC Investigating - We are currently investigating an issue affecting a subset of Terraform runs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
mumoshu/terraform-provider-eksctl

Manage AWS EKS clusters using Terraform and eksctl - mumoshu/terraform-provider-eksctl

1

2020-05-28

maarten avatar
maarten

Has anyone done this Terraform certification, how much Terraform Cloud is in there ?

aaratn avatar

I have done the certification; if you are already working with terraform I think it should be pretty straightforward to pass. They don’t ask much about cloud, just an overview. At least I didn’t see deep questions around cloud

maarten avatar
maarten

how long did it take you ?

aaratn avatar

I completed the exam in 30 minutes with minimal to no preparation

maarten avatar
maarten

ah perfect, thanks

Zachary Loeber avatar
Zachary Loeber

You should know the difference between a terraform cloud workspace vs. a standard terraform workspace for certain

Zachary Loeber avatar
Zachary Loeber

I spent less time preparing for the terraform exam than any exam I’ve ever taken. But I’d also been using tf for over a year prior

maarten avatar
maarten

For all the normal stuff I’m confident; terraform cloud never interested me much, so .. Thanks for the heads-up on workspaces. So can you have different workspaces in your workspace?

Zachary Loeber avatar
Zachary Loeber

more that it’s one workspace per repo in tf cloud

maarten avatar
maarten

so it does not work with multiple tfvars ?

maarten avatar
maarten

as in, one for dev, one for staging etc.

Zachary Loeber avatar
Zachary Loeber

I’d just setup tf cloud and do a few things in it to get the feel for it. Workspaces is a fundamentally changed term

Matt Gowie avatar
Matt Gowie

Interesting to hear others say they didn’t need to study much. I took a practice exam and passed with a 90% but figured that must’ve been easy…. Maybe I’m wrong and should just take that cert.

Zachary Loeber avatar
Zachary Loeber

it’s pretty cheap to take

Matt Gowie avatar
Matt Gowie

Yeah — good point

maarten avatar
maarten

95% correct, and it seems I do not understand terraform modules. I hope I get to know what I did wrong..

Matt Gowie avatar
Matt Gowie

@maarten Are those your final exam results? That’s awesome if so — quick turnaround.

Piotr Maksymiuk avatar
Piotr Maksymiuk

sooo, tfmask only supports masking variables when they’re changed and not created?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not by design - it might be a bug. Someone recently (and graciously) updated it to support 0.12 - maybe something got overlooked in the tests. Please have a look at the (very simple) code, if you have a chance. Maybe something stands out?

Haroon Rasheed avatar
Haroon Rasheed

I am trying to use this git repo for converting yaml to HCL code. But make install is not working..so I could not proceed further. Any idea how to make it work https://github.com/jrhouston/tfk8s

jrhouston/tfk8s

A tool for converting Kubernetes YAML manifests to Terraform HCL - jrhouston/tfk8s

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I read somewhere with the recent announcement of the new alpha kubernetes provider, that there was a tool for converting the yaml to HCL

jrhouston/tfk8s

A tool for converting Kubernetes YAML manifests to Terraform HCL - jrhouston/tfk8s

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is this the tool?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

looks like it.

Haroon Rasheed avatar
Haroon Rasheed

Yes, same tool. But not working for me after doing make install. Not sure what I am missing

Haroon Rasheed avatar
Haroon Rasheed

Below is how I tried it.

auto@auto:~/tfk8s$ ls
CODEOWNERS  go.mod  go.sum  LICENSE  Makefile  README.md  tfk8s.go  tfk8s_test.go
auto@auto:~/tfk8s$ make install
go install -ldflags "-X main.toolVersion=0.1.3"
auto@auto:~/tfk8s$ tfk8s
tfk8s: command not found
auto@auto:~/tfk8s$
Rajesh Babu Gangula avatar
Rajesh Babu Gangula

I need value a to be z if var.x is null and y if var.x has some value. Does the following statement work? a = var.x != "" ? y || var.x == "" ? z

Eric Berg avatar
Eric Berg

why don’t you just try it out, @Rajesh Babu Gangula? Set up a quick experiment with a local block and an output.

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

yep, I am a little skeptical about it … so just want to get quick input from the group before I fire it up ..
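
For reference, a sketch of the working form (names from the question); HCL’s conditional covers both branches in one expression, so no chaining is needed:

# y when var.x has a value, z when it is empty
a = var.x != "" ? var.y : var.z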

Tyrone Meijn avatar
Tyrone Meijn

Hey guys I have a question, what is the difference between pinning your providers in the terraform block

terraform {
  required_providers {
    aws = "2.6.0"
  }
}

vs doing it like in the provider:

provider "aws" {
  version = "2.6.0"
}
roth.andy avatar
roth.andy

The first approach is the recommended one for modules, the second one is what you would use in your own code that utilizes modules

Tyrone Meijn avatar
Tyrone Meijn

Thanks for the quick response, seems logical indeed, thanks!

Haroon Rasheed avatar
Haroon Rasheed

I would like to have an AWS EKS setup using Terraform in such a way that we have 2 VPCs: one VPC where I deploy the AWS control plane, and the other VPC where I have my worker nodes running. Do we have a terraform suite for this in cloudposse or any other repo?

Mikhail Naletov avatar
Mikhail Naletov

Are you sure it will work? As far as I remember your node group and EKS cluster must be in the same VPC.

Haroon Rasheed avatar
Haroon Rasheed

It worked for me. My setup is the cluster and Node1 running in VPC1 and Node2 running in VPC2. It worked well; I am able to launch pods on Node2 as well. One problem yet to solve is that I can’t access or log in to a pod using kubectl exec. I tried all security group changes between the cluster and Node2, still no luck. That is not a blocker for me though, cos pods on Node1 and Node2 are able to communicate, and the cluster is able to launch and configure things on Node2. I’m good

Mikhail Naletov avatar
Mikhail Naletov

I guess this can work using VPC Peering or something like this. What is the point? Can’t you just use multiple subnets and network ACLs?

Haroon Rasheed avatar
Haroon Rasheed

Yep using VPC peering. Our customer has something like this. So we need verify our pods able to work seamlessly on this type of setup.

2020-05-29

Milosb avatar

Could someone explain to me why output interpolation works differently if I reference locals in outputs.tf versus, for example, in main.tf? If I do something like

locals {
  s3_arn = "aws_iam_policy.s3_${var.environment}[0].arn"
}
output "s3_arn" {
  value = local.s3_arn
}
maarten avatar
maarten
"aws_iam_policy.s3_${var.environment}[0].arn"

is just a string, not a reference; you cannot interpolate resource names to build a reference. You can do that when you use for_each, and then you can refer to the instance by the name of its key.
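
A sketch of the for_each pattern maarten points to (policy content illustrative): instances are addressed by key, so no string interpolation of resource names is needed:

variable "environment" {
  default = "dev"
}

resource "aws_iam_policy" "s3" {
  for_each = toset(["dev", "prod"])

  name = "s3-${each.key}"
  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [{ Effect = "Allow", Action = "s3:ListBucket", Resource = "*" }]
  })
}

output "s3_arn" {
  value = aws_iam_policy.s3[var.environment].arn
}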

Milosb avatar

If I put the locals in main.tf it will output a literal string, but if I put the locals in outputs.tf it will interpolate correctly. Update: strangely enough, I can’t reproduce it now…

praveen avatar
praveen

hi, may I know if we have a latest/working Terraform module available for enabling Azure Diagnostics logging for all azure resources?

praveen avatar
praveen

#terraform, can you help me find a module for enabling diagnostics logging for all azure resources?

2020-05-30

2020-05-31

Yage Hu avatar
Yage Hu

I’m using terragrunt . Let’s say I want to deploy a VPC and an EKS cluster. Is it possible to put the VPC terragrunt code in one module and reference the vpc_id from the eks module?

Joe Niland avatar
Joe Niland
Configuration Blocks and Attributes

Learn about all the blocks and attributes supported in the terragrunt configuration file.
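
A sketch of the dependency block those docs describe (paths hypothetical), placed in the eks module’s terragrunt.hcl so it reads vpc_id from the vpc module’s outputs:

# eks/terragrunt.hcl
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}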

Yage Hu avatar
Yage Hu

Thanks!!

1
Brij S avatar

Hello, does anyone know of a terraform trick on how to turn a map like this

  tags = {
    "abc"  = "123"
    "bbb"  = "aaa"
  }

into

  [{Key=abc,Value=123},{Key=bbb,Value=aaa}]
Matt Gowie avatar
Matt Gowie

@Brij S Something like the following should do the trick:

[ for key, val in tags: { "Key" = key, "Value" = val } ]
2
Brij S avatar

oh man that did the trick! thank you!

1