#terraform (2020-10)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-10-31

2020-10-30

Igor Bronovskyi avatar
Igor Bronovskyi

Help me how to organize blue/green deployment on the terraform. I need to create a new service, do migration and then switch traffic to the new service.

Joe Niland avatar
Joe Niland

Do you mean for ECS, EKS or generic?

loren avatar
loren

Not in terraform yet, but aws just released this feature for ALB… https://aws.amazon.com/blogs/devops/blue-green-deployments-with-application-load-balancer/

Fine-tuning blue/green deployments on application load balancer | Amazon Web Services

In a traditional approach to application deployment, you typically fix a failed deployment by redeploying an older, stable version of the application. Redeployment in traditional data centers is typically done on the same set of resources due to the cost and effort of provisioning additional resources. Applying the principles of agility, scalability, and automation capabilities […]

Joe Niland avatar
Joe Niland

Thanks @loren this is great! Just as an aside, I wonder if there’s a way to set target group for a particular group of clients.

loren avatar
loren

I imagine you could do something creative with path-based routing…

Joe Niland avatar
Joe Niland

I shall try

Zach avatar


aws just released this feature for ALB…
what feature is new in that? seems more of a ‘how to’ blog to me

Psy Shaitanya avatar
Psy Shaitanya

Yeah i agree, it’s showing how to use Application Load Balancer’s weighted target group feature

loren avatar
loren

sorry perhaps the wording threw me off:
Application Load Balancers now support weighted target groups routing

loren avatar
loren

the use of “now” just made it sound like a new feature
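
For reference, weighted target groups can be expressed directly in Terraform; a sketch assuming a recent AWS provider (v2.60+ added the forward block), with all resource names illustrative:

```hcl
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = aws_acm_certificate.main.arn

  default_action {
    type = "forward"

    forward {
      # Send 90% of traffic to blue, shift 10% to green
      target_group {
        arn    = aws_lb_target_group.blue.arn
        weight = 90
      }
      target_group {
        arn    = aws_lb_target_group.green.arn
        weight = 10
      }
    }
  }
}
```

Adjusting the weights over successive applies gives a gradual blue/green cutover.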

simplepoll avatar
simplepoll
03:45:58 PM

If a TF-security tool asks you to send your TF plan and state to the cloud for analysis, what’s your reaction?

Yoni Leitersdorf avatar
Yoni Leitersdorf

Appreciate your guys’ help with the above poll.

loren avatar
loren

i think i’d want some kind of anonymization also… things like account ids aren’t strictly sensitive but can become so in aggregate

loren avatar
loren

terraform 0.14 is also doing cool things with inputs by marking them as sensitive and propagating that through the state and plan
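
For context, the 0.14 sensitive-variable feature looks roughly like this (a minimal sketch; resource and variable names illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true # 0.14+: redacted in plan/apply output
}

resource "aws_db_instance" "example" {
  # ...
  password = var.db_password # rendered as (sensitive) in the plan
}
```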

Yoni Leitersdorf avatar
Yoni Leitersdorf

That’s useful. What secrets would you care about most? Usernames and passwords, key files, anything else?

PePe avatar

Secrets Manager and Parameter Store ARNs

PePe avatar

And same in task defs

Zach avatar

if you set the RDS master password via terraform for example, that’s in the State

Zach avatar

and yah just stuff like names of params or buckets suddenly tells you “oh this company uses X product/service”

Yoni Leitersdorf avatar
Yoni Leitersdorf

Is it a problem to know if a specific company uses a specific service?

PePe avatar

yes, huge

PePe avatar

competitors could use that in a bad way

loren avatar
loren

i’d not be super concerned about that myself

loren avatar
loren

but i’d try to handle that with anonymization of data rather than needing to mark it out

Zach avatar

Tags on resources would be another thing to handle

Yoni Leitersdorf avatar
Yoni Leitersdorf

Great feedback everyone, thank you. I’ve put all of this into the ticket for this capability.

2020-10-29

Ikana avatar
Ikana

Hello, I was about to submit a bug for this module: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster but the bug template suggested to ask here first

cloudposse/terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.

Ikana avatar
Ikana

out of the box I see this error when trying to plan it

Ikana avatar
Ikana
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Reserved argument name in module block

  on main.tf line 129, in module "hostname":
 129:   count   = var.number_of_broker_nodes > 0 ? var.number_of_broker_nodes : 0

The name "count" is reserved for use in a future version of Terraform.

[terragrunt] 2020/10/29 14:47:58 Hit multiple errors:
exit status 1
pjaudiomv avatar
pjaudiomv

I don’t think it’s a bug, the module doesn’t support 0.12.x

pjaudiomv avatar
pjaudiomv
cloudposse/terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.

Ikana avatar
Ikana

oh

Ikana avatar
Ikana

I thought that was it

Ikana avatar
Ikana

Is this a bug or expected when using tf 0.12.26?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m writing a CD pipeline which will run a plan stage, then run an apply stage only if the plan has changes. How can I detect if a plan file contains changes? Running terraform show terraform.plan I can parse the output for Plan: 0 to add, 0 to change, 0 to destroy. but this feels very fragile (for example, I need a more complex check to detect if there are output-only changes).

Alex Jurkiewicz avatar
Alex Jurkiewicz

terraform plan -detailed-exitcode will exit 0 if there are no changes, 2 if there are changes

loren avatar
loren

yeah, i use -detailed-exitcode

Chris Fowles avatar
Chris Fowles

yeh ditto

2020-10-28

Matt Gowie avatar
Matt Gowie

Hey folks — Could use more :+1:’s on the below issue and corresponding PR. They’ve totally stalled out (open for 8+ months) and I have a couple projects that I would like to upgrade off of a custom terraform fork (made a mistake thinking those would be merged by now). Any help appreciated!

  1. https://github.com/terraform-providers/terraform-provider-aws/issues/6917
  2. https://github.com/terraform-providers/terraform-provider-aws/pull/11928
loren avatar
loren

looks like whoever opened the PR has abandoned it… has conflicts and has not been updated with the new requirements

loren avatar
loren

you can open a new PR based on the current one, and clean it up. might get more traction that way

Matt Gowie avatar
Matt Gowie

He’s been very responsive to questions / bugs that I’ve had in the issue — I feel like he would update to get it moving forward. Maybe I should ping him on that.

Matt Gowie avatar
Matt Gowie

But that is a good point… Maybe I ping him too to pull in the latest.

Matt Gowie avatar
Matt Gowie

Ah I’m just seeing his follow up after my comment. Damn!

Matt Gowie avatar
Matt Gowie

I know enough go to probably take it over and botch my way through it… but I have a talk coming up in December that I am focused on at the moment.

PePe avatar

voted

Alex Jurkiewicz avatar
Alex Jurkiewicz

I hope these merge some time. We use cloudformation to manage some amplify resources currently

aaratn avatar
aaratn

Done

aaratn avatar
aaratn

I created one issue as well recently! Hope it gets some traction

https://github.com/terraform-providers/terraform-provider-aws/issues/15855

updating API Gateway Stage failed: BadRequestException: Invalid method setting path: /{proxy+}/ANY//throttling/burstLimit · Issue #15855 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a :+1: reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…

Nicolás de la Torre avatar
Nicolás de la Torre

Hello, when using helmfile provider and helmfile_release resource, is there a way to manage helm repositories?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe someone in #helmfile will have more experience

Nicolás de la Torre avatar
Nicolás de la Torre

yes, already asked there

Nicolás de la Torre avatar
Nicolás de la Torre

but I finally decided to use helmfile_release_set and reuse helmfile.yaml

Ben avatar

In terms of debugging I am using the classic ‘export TF_LOG=DEBUG’ but I would like to have a way to see the values of variables

David Lozano avatar
David Lozano

For that I think you should use outputs. You can use them to surface the value of the variables, resource parameters, or locals you want to see.

Output Values - Configuration Language - Terraform by HashiCorp

Output values are the return values of a Terraform module.
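
One low-tech option along these lines is a throwaway debugging output; local.computed_value here is a hypothetical stand-in for whatever intermediate value you want to inspect:

```hcl
# Temporary debugging aid: surfaces the intermediate value on the next
# plan/apply; remove it once the value looks right.
output "debug_computed_value" {
  value = local.computed_value
}
```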

Ben avatar

Yes already using it but the problem is that when debugging something that is not working

Ben avatar

You do not get the output

Ben avatar

I want the output value right before terraform crash

David Lozano avatar
David Lozano

ahh, got it. Is tf crashing because the value you are passing to the resource is not in the right format and you want to see the value before it gets passed to the resource or it’s a terraform engine error?

Ben avatar

I want to see the value before it gets passed to the ressource

Ben avatar

I’m creating an array and passing it to an openstack resource block

Ben avatar
locals {
  extra_network_interface = flatten([
    for interface in var.extra_network_interface: [
      for instance in range(var.instance_count): {
        "name": format("%s0%s-v-%s.%s", var.cluster_name, instance + 1, interface.name, var.cluster_domain), "subnet": interface.ipam_cidr
      }
    ]
  ])
}
Ben avatar

Creation of the array

Ben avatar
resource "openstack_compute_instance_v2" "server" {
  count           = var.instance_count
  name            = format("%s0%s-v.%s", var.cluster_name, count.index + 1, var.cluster_domain)
  image_id        = var.os_image
  flavor_name     = var.flavor_name
  key_pair        = openstack_compute_keypair_v2.keypair.name
  security_groups = var.security_group

  network {
    name        = var.network_name_id
    fixed_ip_v4 = ipam_ip_allocation.ip_allocation[count.index].ip_addr
  }

  dynamic "network" {
    for_each = {
      for interface in ipam_ip_allocation.extra_ip_allocation : interface.name => interface if interface.vm_name == format("%s0%s-v-%s.%s", var.cluster_name, count.index + 1, interface.vm_name, var.cluster_domain)
    }
    content {
      name        = network.network_name_id
      fixed_ip_v4 = ipam_ip_allocation.extra_ip_allocation[count.index].ip_addr
    }
  }
}
Ben avatar

and then execution in the Openstack resource block

David Lozano avatar
David Lozano

have you tried using the terraform console to print local.extra_network_interface value and make sure the array has the format you want?

Command: console - Terraform by HashiCorp

The terraform console command provides an interactive console for evaluating expressions.

Ben avatar

Yes I’m trying right now

Ben avatar

but wanted to know if there is something else

Ben avatar

not sure what I’m doing on this console ^^

David Lozano avatar
David Lozano

if you open the console in the same path where the file with the locals is. just type local.extra_network_interface and it should output the list value.

Unfortunately, I don’t know a way of doing that dynamically with a breakpoint like debugging in python. I don’t think there is one

tim.j.birkett avatar
tim.j.birkett

What error are you getting on crash?

Ben avatar

Like a good old print() in Python

Ben avatar

Is there a way to do that in Terraform?

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:55:12 PM

Upgrade Maintainence Oct 28, 19:44 UTC Investigating - The URL Service is going to be undergoing upgrade maintenance. Users may experience some errors using the service.

Upgrade Maintainence

HashiCorp Services’s Status Page - Upgrade Maintainence.

Release notes from terraform avatar
Release notes from terraform
09:44:13 PM

v0.14.0-beta2 0.14.0-beta2 (This describes the changes since v0.13.4, rather than since v0.14.0-beta1.) NEW FEATURES:

Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.

terraform init: Terraform will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (<a…

loren avatar
loren

This experiment relieves a major pain point with complex typed objects! module_variable_optional_attrs

loren avatar
loren


module_variable_optional_attrs: When declaring an input variable for a module whose type constraint (type argument) contains an object type constraint, the type expressions for the attributes can be annotated with the experimental optional(…) modifier.

Marking an attribute as “optional” changes the type conversion behavior for that type constraint so that if the given value is a map or object that has no attribute of that name then Terraform will silently give that attribute the value null, rather than returning an error saying that it is required. The resulting value still conforms to the type constraint in that the attribute is considered to be present, but references to it in the receiving module will find a null value and can act on that accordingly.
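
A sketch of the experiment in use (0.14; variable and attribute names illustrative):

```hcl
terraform {
  # opt in to the 0.14 experiment
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name = string
    team = optional(string) # becomes null when the caller omits it
  })
}

locals {
  # act on the null that optional() supplies
  team = var.settings.team != null ? var.settings.team : "unassigned"
}
```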

Alex Jurkiewicz avatar
Alex Jurkiewicz

Wow. Excellent.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ @Jeremy (Cloud Posse) @Andriy Knysh (Cloud Posse)

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:15:12 PM

Upgrade Maintainence Oct 28, 22:03 UTC Resolved - This incident has been resolved.Oct 28, 22:02 UTC Update - Maintenance has finished. The system appears to be stable. Regular system monitoring will continue.Oct 28, 19:44 UTC Investigating - The URL Service is going to be undergoing upgrade maintenance. Users may experience some errors using the service.

Fernando Torresan avatar
Fernando Torresan

Hi guys,

What do you think when you need to create a cross-account SNS-SQS subscription in AWS with terraform? It’s painful, isn’t it?! You have to set up multiple providers, with access to both accounts, for the terraform apply command to work successfully.

A friend of mine looked inside terraform-provider-aws to find out why this is necessary, because when you do the same procedure in the AWS console you don’t need access to both accounts.

That said, he opened a pull request that solves this problem while allowing the current behavior to continue to work. So, if you are interested in this solution, leave a :+1: to help this improvement get into the next version as soon as possible.

Thanks!

https://github.com/terraform-providers/terraform-provider-aws/pull/15633

F/aws_sns_topic_subscription: Provide full support to HTTP/HTTPS/EMAIL/EMAIL-JSON protocols / SQS Subscription without Assume Role in both Accounts by smailli · Pull Request #15633 · terraform-providers/terraform-provider-aws

Community Note Please vote on this pull request by adding a :+1: reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anybody using renovatebot? I am really upset with the onboarding experience. I enabled it for all repos, like mergify, only renovatebot is now opening 350 PRs and adding a public deploy key to all of our repos. Had no idea this was going to happen.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Each PR says they will only ever open 2 per hour or something and 20 total

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have hundreds now

Zach avatar

Hadn’t seen that one, but I’ve had ok luck with Dependabot

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Except no HCL2 support :-)

Zach avatar

Ahhh I was wondering what the difference was

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And wanted to avoid the one off GitHub action to run the patched fork

Cody Moore avatar
Cody Moore

Yea, dependabot is very alpha for the gradle ecosystem too. Seems like the main diff I found between the two is that:

• Dependabot is very “batteries included”

• Renovatebot is more easily customizable

But I could be wrong based on my limited exposure to both

2020-10-27

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

On terragrunt plan I am getting the following errors. I have updated the modules to the latest tags as I wish to update my EKS version. Any help would be much appreciated. Error: Provider configuration not present

To work with module.eks.data.aws_region.current its original provider configuration at provider[“registry.terraform.io/-/aws”] is required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy module.eks.data.aws_region.current, after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you upgrading from the old module versions?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a known issue with the latest versions of TF

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform - refactoring modules: Error: Provider configuration not present

I’m refactoring some Terraform modules and am getting: Error: Provider configuration not present To work with module.my_module.some_resource.resource_name its original provider configuration at m…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
"Error: Provider configuration not present" when aliased provider is used · Issue #21416 · hashicorp/terraform

Terraform Version Terraform v0.12.0 With terraform version 0.11.10, the files below work as expected. Terraform Configuration Files initially the providers section in the config below was absent, b…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in some of the modules, a provider was specified in the module itself

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF does not allow that anymore

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

solution depends on your environments

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

ohh how can i resolve?

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

environment in sense?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. If you can just destroy the resources and redeploy the new version, it’s the easiest path (not good for prod though)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Otherwise, use the old code, remove just the resources with the old provider (using -target), then add new versions of the modules and provision
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

look at the links above ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Unable to remove module when provider contains a region. · Issue #22907 · hashicorp/terraform

Terraform Version Terraform v0.12.9 + provider.aws v2.28.1 Terraform Configuration Files sample module provider "aws" { alias = "us-east-1" region = "us-east-1" } reso…

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

@Andriy Knysh (Cloud Posse) I tried using 0.12.24. terragrunt plan is working fine, but on terragrunt apply it’s going to replace the cluster and I am getting the following error:
Error: error creating EKS Cluster (dev_cluster): ResourceInUseException: Cluster already exists with name: dev_cluster
{
RespMetadata: {
StatusCode: 409,
RequestID: "6a650024-bdab-4965-9940-d15506218621"
},
ClusterName: "dev_cluster",
Message_: "Cluster already exists with name: dev_cluster"
}

on .terraform/modules/eks/cluster.tf line 9, in resource "aws_eks_cluster" "this":
9: resource "aws_eks_cluster" "this" {

Eric Berg avatar
Eric Berg

My plugin_cache_dir is now at 7.7G. There doesn’t seem to be any clean command. Anybody have any guidance on best practices for cleaning out this cache? One thing I noticed in the docs is that the dirs (i.e., “${plugin_cache_dir}/darwin_amd64”) must exist before TF will cache files there.

Mikael Fridh avatar
Mikael Fridh

Delete whatever you want? It’s a cache, so it will be recreated. Bandwidth expensive? Then delete by date or version numbers as you see fit.

Yoni Leitersdorf avatar
Yoni Leitersdorf

Any ideas for how to use the hashicorp-provided providers to make an API POST call with specific parameters? Looked at the http provider, but it’s only GET. (I don’t want to rely on curl or wget locally)

roth.andy avatar
roth.andy

You could use the Shell Provider, but it will still require something like curl to be installed locally

Yoni Leitersdorf avatar
Yoni Leitersdorf

Which I’m trying to avoid

roth.andy avatar
roth.andy

@mumoshu is doing some interesting things with a project called Shoal in his Helmfile Provider. Shoal automatically downloads and uses missing dependencies when running Terraform

Yoni Leitersdorf avatar
Yoni Leitersdorf

The providers obviously do this all the time via their Go capabilities, but I am trying not to write a provider.

Matt Gowie avatar
Matt Gowie

I think the Shell provider would be your best bet then. It can install curl as part of the process if that helps any pain.

Yoni Leitersdorf avatar
Yoni Leitersdorf

Thanks Matt. I can’t rely on the shell unfortunately. Looks like Go code is in my future.

Matt Gowie avatar
Matt Gowie

Ah bummer. Worth asking around a bit more then possibly. Reddit or Terraform community forums might have a better option. I just don’t know of one unfortunately.

Matt Gowie avatar
Matt Gowie

Then again — Go is a fun language to dive into.

roth.andy avatar
roth.andy
08:14:12 PM

golang

github140 avatar
github140
Mastercard/terraform-provider-restapi

A terraform provider to manage objects in a RESTful API - Mastercard/terraform-provider-restapi

Yoni Leitersdorf avatar
Yoni Leitersdorf

Great find!

github140 avatar
github140

There’s an open issue for getting it into the Terraform registry. Hopefully they’ll get there soon since it’s been pending for a few months.

Matt Gowie avatar
Matt Gowie

Very cool and agreed: great find! Starring that one for later for sure.

roth.andy avatar
roth.andy

With Terraform 0.13 there shouldn’t need to be an “official” addition to terraform registry

Yoni Leitersdorf avatar
Yoni Leitersdorf

Needed to do this:

terraform {
  required_providers {
    restapi = {
      source  = "fmontezuma/restapi"
      version = "~> 1.14.0"
    }
  }
}
Yoni Leitersdorf avatar
Yoni Leitersdorf

Because “mastercard” haven’t published theirs as a provider in the registry.

Yoni Leitersdorf avatar
Yoni Leitersdorf

(I’m not fmontezuma)

roth.andy avatar
roth.andy

Yep, this is the new way of doing it. Even the “official” ones are done this way, like "hashicorp/aws".

Matt Gowie avatar
Matt Gowie

Not surprised that held them up with them being Mastercard and all.

tair avatar

Hey folks :wave: I was testing out https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment and bumped into this problem.


# main.tf
module "vpc" {...}
module "subnets" {...}
module "rds_instance" {...}
module "redis" {...}
module "elastic_beanstalk_application" {...}
module "elastic_beanstalk_environment" {...}
data "aws_iam_policy_document" "minimal_s3_permissions" {...}

When I run it, I am getting this error: Error: Error creating Security Group: InvalidGroup.Duplicate: The security group 'NAME' already exists for VPC 'vpc-ID', failing on elastic_beanstalk_environment.

Has anybody else had a similar issue or know what might cause this? Additionally, after each run, I am getting a bunch of Error: Duplicate variable declaration and removing .terraform and re-init helps.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you following the example implementation? We validate that the module works using terratest before every merge to master

tair avatar

Yep, precisely. Copy paste from the README for the latest tag, but combining that with RDS/Redis.

I’ve tried twice, destroying and reapplying but still getting stuck on the same things.

tair avatar

Btw, you’ve got a really nice collection of modules. Thanks for providing and maintaining those

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ thanks! sorry for the troubles. did you setup your remote state correctly before running? is there a chance that something was already provisioned?

tair avatar

I am not using remote state yet, is that required? Just getting started with Terraform actually.

tair avatar

And on a clean AWS account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, so chances are that you might have some orphaned resources

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

using remote state is definitely a requirement if anyone else will ever work on the project

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(or even if you lose your laptop, for instance)

tair avatar

yep for sure, I thought of moving it later, first tried to get up-n-running. I will then destroy, remove local state and reprovision to see if it makes any difference. Will report here, thank you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ you prob got into the issue of using a few top-level modules, each of which creates an SG. Since you provide the same namespace-stage-name to each module, a few SGs get created with the same name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to add attributes=["1"] to one of the modules (e.g. RDS)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the names will be unique, but all the names for RDS will end with -1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the modules can be updated to take care of that, we can get into it when we get time
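
The suggestion above amounts to something like this (a sketch; module sources and labels are illustrative, and each distinct attributes list yields a distinct generated name):

```hcl
module "rds_instance" {
  source     = "cloudposse/rds/aws"
  namespace  = "eg"
  stage      = "dev"
  name       = "app"
  attributes = ["1"] # generated names end in "-1"
  # ...
}

module "redis" {
  source     = "cloudposse/elasticache-redis/aws"
  namespace  = "eg"
  stage      = "dev"
  name       = "app"
  attributes = ["2"] # generated names end in "-2"
  # ...
}
```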

tair avatar

Ah this explains it, thanks @Andriy Knysh (Cloud Posse) I am wondering, shouldn’t they all be part of the same SG? If they are not, how would the Beanstalk instance communicate with RDS or ElastiCache?

Also, do you want me to add different attributes to each dependent module, like 1 for RDS, 2 for Cache, 3 for Beanstalk?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one module can be w/o the additional attributes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the other two, yes, try to add 1 and 2 respectively

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you then connect the SGs together by using the security_groups variables to add one SG as ingress of the other

tair avatar

Cool. I will give it a spin. Do you happen to have an example on how to use multiple modules together with respect to SGs?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
security_groups = [module.elastic_beanstalk.security_group_id]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all CloudPosse modules support this concept

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, all modules output the created SG, so you can add any additional ingress rules to them

tair avatar

thx! Probably a newbie question, but does that mean the elastic beanstalk module should come before RDS, or does the order not matter? would be really good to have an example of combined modules for popular stacks on GitHub

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the order does not matter, Terraform will handle the order of creation even within modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the order matters to you for any reason, you can use TF 0.13 depends_on on the modules, so module A will always be created before any resources in module B
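
The TF 0.13 module depends_on mentioned above looks like this (module names illustrative):

```hcl
module "network" {
  source = "./modules/network"
}

module "app" {
  source = "./modules/app"

  # TF 0.13+: every resource in module.app waits for all of module.network
  depends_on = [module.network]
}
```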

tair avatar

gotcha

tair avatar

Nice, looks like everything succeeded this time. I will still have to check if connections work. But I am still bumping into a bunch of Error: Duplicate output definition errors on subsequent runs, coming from .terraform modules. Basically following https://github.com/cloudposse/terraform-aws-tfstate-backend and on step 4

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

paultath81 avatar
paultath81

Anyone know of a terraform extension in vs code I can use for fmt? The ones I’ve found are not supported

roth.andy avatar
roth.andy

Potentially unpopular opinion: IMO, while VS Code is awesome for a number of languages, it seems pretty shit for Terraform. I’ve fully switched over to IntelliJ for my Terraform work

aaratn avatar
aaratn

I use sublime. It supports terraform syntax highlighting and terraform fmt when saving a file

Matt Gowie avatar
Matt Gowie

Yeah, the VS Code plugin is pretty awful. Even now being maintained by HashiCorp. I’m just waiting for the day when they finally get it right and being bull headed until then.

roth.andy avatar
roth.andy
02:06:59 PM
roth.andy avatar
roth.andy
02:07:26 PM
Matt Gowie avatar
Matt Gowie

Yeah yeah, we know it’s vastly superior

imiltchman avatar
imiltchman

+1 on the IntelliJ plugin. It’s great. I’m sure HashiCorp will get the VSCode plugin fixed up in the near future as well though.

kareem.shahin avatar
kareem.shahin

i’m oldschool. pretty decent: https://github.com/hashivim/vim-terraform

imiltchman avatar
imiltchman

@ HashiCorp Terraform

paultath81 avatar
paultath81

I have that extension installed as well but it doesn’t seem to auto perform the fmt function

paultath81 avatar
paultath81

Ah nvm next time I should rtfm lol

paultath81 avatar
paultath81

Thx @imiltchman

imiltchman avatar
imiltchman

You can enable format on save in TF

imiltchman avatar
imiltchman

It also won’t format if there are syntax errors

paultath81 avatar
paultath81

Got it thx

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sometimes I need to convert a resource from a single instance to count = local.create_instance ? 1 : 0, and Terraform wants to re-create the resource in this case. How can I avoid this re-creation? I can perform manual state manipulation (terraform state rm myresource ; terraform import ...), but it would be nice if there was a solution that didn’t require out-of-band state fiddling.

Chris Fowles avatar
Chris Fowles

terraform state mv

Chris Fowles avatar
Chris Fowles
Command: state mv - Terraform by HashiCorp

The terraform state mv command moves items in the Terraform state.

Alex Jurkiewicz avatar
Alex Jurkiewicz

While researching the above question, I found this 1 month old message from a Terraform developer saying “no major new features until 1.0 comes out next year”: https://github.com/hashicorp/terraform/issues/24476#issuecomment-700368878 Sad, there are some warts with TF

2020-10-26

Steve Wade avatar
Steve Wade

can anyone help me with the below please


# Ref: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain#warm_type
variable "ultra_warm_instance_type" {
  type        = string
  description = "The instance type for the ultra warm nodes."
  default     = "ultrawarm1.medium.elasticsearch"

  validation {
    condition     = can(var.ultra_warm_instance_type == "ultrawarm1.medium.elasticsearch") || can(var.ultra_warm_instance_type == "ultrawarm1.large.elasticsearch") || can(var.ultra_warm_instance_type == "ultrawarm1.xlarge.elasticsearch")
    error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch"
  }
}
  ultra_warm_enabled        = var.elasticsearch_configuration["ultra_warm_enabled"]
  ultra_warm_instance_count = var.elasticsearch_configuration["ultra_warm_instance_count"]
  ultra_warm_instance_type  = var.elasticsearch_configuration["ultra_warm_instance_type"]
elasticsearch_configuration = {
    ultra_warm_enabled        = true
    ultra_warm_instance_count = 2
    ultra_warm_instance_type  = "ultrawarm1.medium.elasticsearch"
  }
Error: Invalid validation error message

  on .terraform/modules/data_platform.es_application_logging/modules/elasticsearch/variables.tf line 79, in variable "ultra_warm_instance_type":
  79:     error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch"

Validation error message must be at least one full English sentence starting
with an uppercase letter and ending with a period or question mark.
Troy Taillefer avatar
Troy Taillefer
error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch."

Try ending the error message with a period, if that doesn’t work maybe the other periods in the string are causing a problem

Steve Wade avatar
Steve Wade

makes sense

Yoni Leitersdorf avatar
Yoni Leitersdorf

This is the code from Terraform’s actual repo about this:

// looksLikeSentences is a simple heuristic that encourages writing error
// messages that will be presentable when included as part of a larger
// Terraform error diagnostic whose other text is written in the Terraform
// UI writing style.
//
// This is intentionally not a very strong validation since we're assuming
// that module authors want to write good messages and might just need a nudge
// about Terraform's specific style, rather than that they are going to try
// to work around these rules to write a lower-quality message.
func looksLikeSentences(s string) bool {
        if len(s) < 1 {
                return false
        }
        runes := []rune(s) // HCL guarantees that all strings are valid UTF-8
        first := runes[0]
        last := runes[len(runes)-1]

        // If the first rune is a letter then it must be an uppercase letter.
        // (This will only see the first rune in a multi-rune combining sequence,
        // but the first rune is generally the letter if any are, and if not then
        // we'll just ignore it because we're primarily expecting English messages
        // right now anyway, for consistency with all of Terraform's other output.)
        if unicode.IsLetter(first) && !unicode.IsUpper(first) {
                return false
        }

        // The string must be at least one full sentence, which implies having
        // sentence-ending punctuation.
        // (This assumes that if a sentence ends with quotes then the period
        // will be outside the quotes, which is consistent with Terraform's UI
        // writing style.)
        return last == '.' || last == '?' || last == '!'
}
Steve Wade avatar
Steve Wade

thanks guys appreciated

Yoni Leitersdorf avatar
Yoni Leitersdorf

Basically, it must start with an uppercase letter (if it starts with a letter) and end with a period, question mark, or exclamation mark.

loren avatar
loren

also, you can dramatically simplify the validation condition using contains()

loren avatar
loren

and the can() is totally unnecessary here…

Steve Wade avatar
Steve Wade

contains() with just a list?

loren avatar
loren

yep… condition = contains([<list of valid values>], var.ultra_warm_instance_type)

Steve Wade avatar
Steve Wade

oh that is way better, thanks man appreciated

:--1:1
loren avatar
loren

and you can break lines using parens also, to help readability, if you like

:--1:1
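
Putting both of loren’s suggestions together (contains() instead of can(), parens for line breaks), the variable block above could be reduced to something like:

```hcl
variable "ultra_warm_instance_type" {
  type        = string
  description = "The instance type for the ultra warm nodes."
  default     = "ultrawarm1.medium.elasticsearch"

  validation {
    # contains() checks membership in a list, replacing the chain of
    # can(...) comparisons with a single readable condition.
    condition = contains(
      [
        "ultrawarm1.medium.elasticsearch",
        "ultrawarm1.large.elasticsearch",
        "ultrawarm1.xlarge.elasticsearch",
      ],
      var.ultra_warm_instance_type
    )
    error_message = "Valid values are ultrawarm1.medium.elasticsearch, ultrawarm1.large.elasticsearch and ultrawarm1.xlarge.elasticsearch."
  }
}
```
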
Steve Wade avatar
Steve Wade

i am trying to turn the below into a valid variable type can anyone help please?

Steve Wade avatar
Steve Wade
elasticsearch_configuration = {
    instance_node_count       = 3
    instance_node_type        = "i3.xlarge.elasticsearch"
    master_node_count         = 3
    master_node_type          = "c5.large.elasticsearch"
    ultra_warm_enabled        = true
    ultra_warm_instance_count = 2
    ultra_warm_instance_type  = "ultrawarm1.medium.elasticsearch"
  }
loren avatar
loren
type = object({
  instance_node_count = number
  instance_node_type = string
  ...
})
Steve Wade avatar
Steve Wade

legend thanks man appreciated

:--1:1
Lyubomir avatar
Lyubomir

Hi all, I’ve been working with the EKS terraform modules, and I ran into an issue with scaling node groups from this repo - https://github.com/cloudposse/terraform-aws-eks-node-group So the problem is that I try to increase desired_size by specifying a higher value, however the changes for desired_size are being ignored because of the following code in main.tf

  lifecycle {
    create_before_destroy = false
    ignore_changes        = [scaling_config[0].desired_size]
  }

Can anyone explain why desired_size has to be ignored in this situation ?

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Mikael Fridh avatar
Mikael Fridh

You should manage desired size outside of the stack. It is a variable with such high variability that it’s usually not desirable to manage it in the base Terraform stack.


Lyubomir avatar
Lyubomir

It makes sense, I see the reasoning now. When enable_cluster_autoscaler is enabled the desired count changes, and it could lead to breaking changes on many new apply/deployments.

I have enable_cluster_autoscaler disabled so it didn’t pop in my mind at first.

thanks for the reply.

Lyubomir avatar
Lyubomir

a followup question - when having the autoscaler disabled, how does one manage the desired count? I am not really keen on manually adjusting variables in aws cli/console

kskewes avatar
kskewes

Perhaps by setting the min_size and max_size fields to your desired number? We do this (for one pool that CAS doesn’t handle).

sheldonh avatar
sheldonh

I’m trying to set up a github_team_repository but have this be optional based on the input. Looks like I need team_id instead of team name so I’m thinking of using for_each = repos | WHERE type of approach and just filter down the for_each using an expression. Any examples of a simple “where” clause to filter the for_each inline so I can run on none/matched results in the collection?

GitHub: github_team_repository - Terraform by HashiCorp

Manages the associations between teams and repositories.

loren avatar
loren
for_each = { for key, val in object/map : key => value if <condition> }

or for a set/list:

for_each = toset([ for val in set/list : val if <condition> ])

loren avatar
loren

that if syntax is in the docs for for expressions… https://www.terraform.io/docs/configuration/expressions.html#for-expressions

Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

sheldonh avatar
sheldonh

thank you. I’m testing that now. good to see a practical example as I was looking at an older hashicorp blog post

sheldonh avatar
sheldonh

Always fun to try and find a post about “if, where or filter” expressions….

loren avatar
loren

it’s not for_each, but here’s an example of the “if” in a for expression…

https://github.com/plus3it/terraform-aws-tardigrade-transit-gateway/blob/master/modules/cross-account-vpc-attachment/main.tf#L13

https://github.com/plus3it/terraform-aws-tardigrade-transit-gateway/blob/master/modules/cross-account-vpc-attachment/main.tf#L31 the idea with those is we have two aws providers for the cross-account use case, and we have a set of routes where we need to distinguish which routes to pass to which provider

loren avatar
loren

usage as a for_each expression is not any different, since it’s really a modifier of the for expression instead of for_each…

sheldonh avatar
sheldonh

Perfect! The simple example solved my main issue.

[for repo in local.repos: repo if repo.settings.additional_teams == "dev-team-1"]
sheldonh avatar
sheldonh

just needed simple filtering and that solved it. I find the expression syntax pretty confusing in docs and all so still working on that. Thanks again!

loren avatar
loren

yeah, it helped me that it was a pretty familiar pattern in python

sheldonh avatar
sheldonh

Great point. The => is not a common expression in .NET. I’ve noticed that as I’ve slowly been learning Go, that many of the decisions in terraform for more advanced usage make total sense if you know Go, but from someone with a different background the syntax seems really strange.

I wrote up a bit on this if you are interested sometime. I’m assuming that structure is a common expression format in Python, but in PowerShell nothing like that exists. C# has Linq expressions, but I never use them in PowerShell.

Reflections on Being a New Gopher With A Dotnet Background

Disclaimer Newbie Gopher. Much of what I observe is likely to be half right. I’ll probably look back at the end of the year and shake my head, but gotta start the journey somewhere, right? My Background I’ve learned my development skills primarily in the dotnet world. Coming from SQL Server performance, schema, and development, I transitioned into learning PowerShell and some C#. I found for the most part the “DevOps” nature of what I was doing wasn’t a good fit for focusing on C# and so have done the majority of my development in PowerShell.

loren avatar
loren

heh, no the => is foreign, that’s a go construct. python has list comprehensions, which use a similar [ for ... ] construct but it’s not identical

loren avatar
loren

but the => isn’t too bad. the left side is an expression where the result is the key, the right side is another expression where the result is the value….

loren avatar
loren

good article. for sure, terraform and hcl started making a whole lot more sense once i dove into the source code, took a stab at a pr or two, and learned how to compile from source

sheldonh avatar
sheldonh

So with output no issue.

sheldonh avatar
sheldonh

With for_each having problems still

for_each   = [ for repo in local.repos :repo  => if repo.settings.additional_teams == "dev-team-1" ]
sheldonh avatar
sheldonh

i tried with {} as well. Extra characters after the end of the 'for' expression.

Any idea what silly thing I’m doing?

sheldonh avatar
sheldonh

for_each = [ for repo in local.repos : if repo.settings.additional_teams == "dev-team-1" ]

sheldonh avatar
sheldonh

This is using examples in for expressions

sheldonh avatar
sheldonh

for_each = { for repo in local.repos : repo => repo if repo.settings.additional_teams == "dev-team-1"} If I use [] then it’s a list otherwise with {} it says it creates an object which requires the => . Both having issues

loren avatar
loren

yes, {} generates a map, [] generates a list

loren avatar
loren

for_each only works with maps and sets

loren avatar
loren

so if you use [] you need to wrap it in toset()

loren avatar
loren

if you use {} then you need to use the => syntax

loren avatar
loren

it’s easiest when first starting to take the for_each out of the picture for a bit, and just output the expression so you can see the data structure

loren avatar
loren

taking your example…

for_each   = { for repo in local.repos : repo => repo if repo.settings.additional_teams == "dev-team-1"}

this won’t work because repo is a map. the left side of => becomes the key in the map. your expression is assigning the entire map as the key…

:--1:1
sheldonh avatar
sheldonh

i already did this with console. I’m returning an object collection.

I basically want to foreach on each object returned but filter

loren avatar
loren

try this:

for_each   = { for name, repo in local.repos : name => repo if repo.settings.additional_teams == "dev-team-1"}
loren avatar
loren

in particular, note the structure { for name, repo in ...

sheldonh avatar
sheldonh

Trying now! Go syntax again lol.

loren avatar
loren

heh, and again this part is familiar from python

sheldonh avatar
sheldonh

In powershell foreach($Object in $Objects) { $obj.Name} for example

loren avatar
loren

and that works too, when your object has a name attribute that happens to also be the key of the map

sheldonh avatar
sheldonh

I’m learning the other paradigm is this key/value which is very common in Go, but I rarely need to use in .NET in that manner.

Basically this looks similar to the for k, v := range struct/slice {} construct

loren avatar
loren

i’ll often construct a list of objects in terraform, instead of a map…

list(object({
  name = string
  attr1 = string
  attr2 = bool
  ...
}))

then convert to a map with:

for_each = { for item in var.thing : item.name => item }
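
A small end-to-end sketch of the pattern loren describes, applied to sheldonh’s original goal (the variable shape, var.team_id, and the team name are hypothetical):

```hcl
variable "repos" {
  type = list(object({
    name             = string
    additional_teams = string
  }))
}

resource "github_team_repository" "this" {
  # Key the map by the name attribute, filtering in the same expression.
  for_each = {
    for repo in var.repos : repo.name => repo
    if repo.additional_teams == "dev-team-1"
  }

  team_id    = var.team_id
  repository = each.value.name
}
```
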
sheldonh avatar
sheldonh

I work with json input objects quite often. I’ll have to play around more with the explicit object casting as it would make life easier

loren avatar
loren

you don’t really need to cast it explicitly… this would work too…

list(map(any))

but if an item in the list does not have a name key then it will be upset when you try to reference that attribute…

sheldonh avatar
sheldonh

So the “name” is actually important? Ie if I have properties called “key” but not “name” that’s the root of the issue? gotta be kidding me. That would simplify things if I just was missing “name” as an actual property that I needed to include

loren avatar
loren

“name” is important only because I referenced that attribute in my example, i.e. item.name

loren avatar
loren

Basically your code defines the attributes of the object that are important to your code

Doogie avatar
Doogie

Is there an easy way to upgrade a GKE cluster and the nodes to a new kubernetes version w/ terraform? I can’t find any documentation / tuts.

Sean Turner avatar
Sean Turner

Has anyone had any luck deploying an ssm document using yamlencode()? I’m able to cut and paste the terraform plan content output and manually make a document that way, however terraform throws an error Error: Error updating SSM document: InvalidDocument

resource "aws_ssm_document" "this" {
...
document_format = "YAML"
content = yamlencode(templatefile(
        "${path.module}/assets/documents/blah.yaml",
        {
          organisation_name = lower(var.organisation_name)
          parameter_name    = foo
        }
      ))
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

are you sure your templated yaml is valid?

Sean Turner avatar
Sean Turner

Yeah, I’m able to cut and paste it in the console

loren avatar
loren

If it’s already a yaml file, you wouldn’t need yamlencode, would you?

Sean Turner avatar
Sean Turner

Ah yep. Tried that as well. Noticed the plan was slightly different. Might be related to how terraform acts when document_format = YAML

loren avatar
loren

yamlencode should take an hcl object and serialize it as a yaml string…
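
That distinction is likely the root cause here. A sketch, assuming the same hypothetical blah.yaml template from above:

```hcl
# templatefile() already returns the rendered YAML string,
# so it can be passed to content directly:
content = templatefile("${path.module}/assets/documents/blah.yaml", {
  organisation_name = lower(var.organisation_name)
})

# yamlencode() is for serializing an HCL object into YAML, e.g.:
#   yamlencode({ schemaVersion = "2.2", description = "example" })
# Wrapping an already-rendered YAML string in yamlencode() re-encodes
# it as one big quoted string, which SSM rejects as an invalid document.
```
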

Alex Jurkiewicz avatar
Alex Jurkiewicz

have you tested this in terraform console? Might be worthwhile

Sean Turner avatar
Sean Turner

Oh. Doing it with only templatefile again worked this time. Cheers

Sean Turner avatar
Sean Turner

Just got off of a long weekend in nz, that must be it


2020-10-23

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
11:25:08 AM

Terraform Cloud runs delayed Oct 23, 10:53 UTC Resolved - We had a brief delay on processing Terraform Cloud runs due to a network change that has been identified and resolved.

Terraform Cloud runs delayed

HashiCorp Services’s Status Page - Terraform Cloud runs delayed.

2020-10-22

Pierre-Yves avatar
Pierre-Yves

Hello, which tool do you use to read a previously generated terraform plan with the -out option? I have found some tools, but would like your opinion.

https://github.com/lifeomic/terraform-plan-parser

https://github.com/palantir/tfjson

lifeomic/terraform-plan-parser

Command line utility and JavaScript API for parsing stdout from “terraform plan” and converting it to JSON. - lifeomic/terraform-plan-parser

palantir/tfjson

Terraform plan file to JSON. Contribute to palantir/tfjson development by creating an account on GitHub.

Tyrone Meijn avatar
Tyrone Meijn

From the tools linked I guess you want the output to be JSON? In that case terraform show -json [path-to-file] but I think that is only valid for the most recent plan…


antonbabenko avatar
antonbabenko
$ terraform plan -out=plan.tfplan > /dev/null && terraform show -json plan.tfplan > plan.json
:--1:2
Andy Hibbert avatar
Andy Hibbert
Fixes DNS which appears to be prepending var.name to local.cluster_dns_name by hibbert · Pull Request #88 · cloudposse/terraform-aws-rds-cluster

what: DNS was changing when it shouldn’t have been; it was the value of ${var.name}-${local.cluster_dns_name}. I think this may have changed in https://github.com/cloudposse/terraform-aws-rou


2020-10-21

Peter Huynh avatar
Peter Huynh

Hi, I am looking to setup a brand new collection of AWS accounts. I was looking for some guidance around this and stumbled across https://github.com/cloudposse/reference-architectures. I am curious if this is still the recommended approach? The docs seems to have marked this topic as archived (https://docs.cloudposse.com/reference-architectures/introduction/), which leads to the above question.

Thanks heaps.

cloudposse/reference-architectures

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Matt Gowie avatar
Matt Gowie

This repo is out of date AFAIK. The CP folks are looking to update it with their latest and greatest, but it hasn’t come to fruition yet. Erik has mentioned an EOY target.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, so we’ve gutted the old reference architecture to make sure no one follows that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a new one coming out this year, with bits and pieces trickling out.

Peter Huynh avatar
Peter Huynh

Thanks for the reply. I’ll wait for the new architecture. Cheers

Pierre-Yves avatar
Pierre-Yves

Hello, for kubernetes, do you use the terraform kubernetes provider or helm? Which one would you recommend?

Steve Wade avatar
Steve Wade

@Pierre-Yves i use a mixture of both to bootstrap EKS with Flux

Pierre-Yves avatar
Pierre-Yves

Thanks Steve, that is my point. I’ll probably use AKS & Terraform and will use helm at most (mainly because there are more examples and documentation).

Steve Wade avatar
Steve Wade

you will need both to create the namespace and secret for flux to use to communicate with your repo

Pierre-Yves avatar
Pierre-Yves

I didn’t know about flux (so I am currently reading the doc), what will be the benefit vs a standard helm pipeline ? my understanding is:

• a classic pipeline will push a release to kubernetes vs

• flux running on kubernetes will watch and fetch the version to deploy it on kubernetes ( should move the thread to #kubernetes but I don’t know how )

Aumkar Prajapati avatar
Aumkar Prajapati

Hey guys, I’m experimenting with the cloudposse eks / fargate module but am getting these errors on terraform 0.13.4, any ideas on why this isn’t up to date, or does it require a version closer to or equal to 0.12?

  on .terraform/modules/eks_cluster.label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.eks_cluster.module.label (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0>)
does not support Terraform version 0.13.4. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think the fargate module was not updated to TF 0.13 requirements. We’ll get to it ASAP

Aumkar Prajapati avatar
Aumkar Prajapati

No rush! Was just curious!

Aumkar Prajapati avatar
Aumkar Prajapati

Figured it was updated as the requirements reflected terraform 14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ this latest release fixes the Fargate module https://github.com/cloudposse/terraform-aws-eks-fargate-profile/releases/tag/0.6.0, should work with TF 0.13

Release v0.6.0 · cloudposse/terraform-aws-eks-fargate-profile
Update to context.tf. Correctly pin Terraform providers. Add GitHub Actions @aknysh (#9) what: Update to context.tf. Correctly pin Terraform providers to support TF 0.13. Add GitHub Actions. Use uni…
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-fargate-profile

Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile

Aumkar Prajapati avatar
Aumkar Prajapati

Sweet! Thank you!

Aumkar Prajapati avatar
Aumkar Prajapati

It seemed to fit within the documented requirements, but the error said otherwise

Release notes from terraform avatar
Release notes from terraform
07:54:16 PM

v0.13.5 0.13.5 (October 21, 2020) BUG FIXES: terraform: fix issue where the provider configuration was not properly attached to the configured provider source address by localname (#26567) core: fix a performance issue when a resource contains a very large and deeply nested schema (<a…

terraform: fix ProviderConfigTransformer [v0.13 backport] by mildwonkey · Pull Request #26567 · hashicorp/terraform

The ProviderConfigTransformer was using only the provider FQN to attach a provider configuration to the provider, but what it needs to do is find the local name for the given provider FQN (which ma…

2020-10-20

Steve Wade avatar
Steve Wade

I am trying to run Terratest in gitlab CI

I have added the necessary AWS environment variables but get the following error ..

Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.

Is there something specific I need to do to get this to use the environment variables?

Alex Jurkiewicz avatar
Alex Jurkiewicz

what environment variables have you added? Do they work with the aws CLI?

Steve Wade avatar
Steve Wade

AWS_ACCESS_KEY_ID AWS_DEFAULT_REGION AWS_SECRET_ACCESS_KEY

loren avatar
loren

i think i needed to also export AWS_REGION to get terratest to work (without setting the values in the provider config). i do not know why. i briefly inspected the source but couldn’t find a reference

loren avatar
loren

i export all these in my local profile:

AWS_SDK_LOAD_CONFIG=1
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
PePe avatar

@ can you put that on a thread? is a bit long

Jørgen Vik avatar
Jørgen Vik

@PePe Yeah, sorry. I think I just solved it actually

PePe avatar

np, what was it?

Jørgen Vik avatar
Jørgen Vik
for_each not working well when creating aws_route53_record · Issue #14447 · terraform-providers/terraform-provider-aws

The aws_acm_certificate.cert.domain_validation_options.0.resource_record_name works in 2.70.0, but this doesn&#39;t work in 3.0.0. Checking the latest 3.0.0 document, now terraform use the for_each…

Jørgen Vik avatar
Jørgen Vik

I solved it by stop using a for_each, since I only had one record per cert anyway

Jørgen Vik avatar
Jørgen Vik

However I do have another problem which is the reasony why I joined this slack. I’m trying to use the cloudposse codepipeline to ecs module. The codecommit from github and build step works fine, but the deploy step is hanging forever. It seems like a new task definition is successfully created, but the deploy is just hanging.

module "ecs_push_pipeline" {
  source                = "git::<https://github.com/cloudposse/terraform-aws-ecs-codepipeline.git?ref=0.17.0>"
  name                  = var.api_ecs_service_name
  namespace             = "eg"
  stage                 = "test"
  github_oauth_token    = var.pipeline_git_pat
  github_webhooks_token = var.pipeline_git_pat
  repo_owner            = var.pipeline_git_repo_owner
  repo_name             = var.api_pipeline_git_repo_name
  branch                = var.api_pipeline_git_pipeline_branch
  service_name          = var.api_ecs_service_name
  ecs_cluster_name      = var.ecs_cluster_name
  privileged_mode       = "true"
  region                = var.aws_region
  image_repo_name       = var.api_docker_repo
  build_image           = var.pipeline_build_image
  environment_variables = var.api_pipeline_env_variables
  s3_bucket_force_destroy = true

}
PePe avatar

did you check the ecs console ?

PePe avatar

you could have a problem with the container not starting and continuously deploying

Jørgen Vik avatar
Jørgen Vik
04:09:23 PM

The deployment is added to the list, but it has a 0 pending and 0 running count. Nothing is logged out in the console and no new task is started

vFondevilla avatar
vFondevilla

In the events tab, is there any message?

Jørgen Vik avatar
Jørgen Vik

Actually no. All the events seems to be from before the deployment

vFondevilla avatar
vFondevilla

weird.

Jørgen Vik avatar
Jørgen Vik

Yes indeed. Not really sure how to troubleshoot it

PePe avatar

so is the deployment % 0?

PePe avatar

what happens if you bump it up to 50%?

Jørgen Vik avatar
Jørgen Vik
04:25:34 PM

I’m not sure which % you are referring to. It looks like this in AWS

Jørgen Vik avatar
Jørgen Vik

Are you talking about the minimum and maximum healthy percentage?

Jørgen Vik avatar
Jørgen Vik

If so: yes, that was it

Jørgen Vik avatar
Jørgen Vik

Seems like it’s spinning up now after i set it to 100% and 200%

PePe avatar

yes that is what I was talking about

PePe avatar

deployment healthy min/max percentage

PePe avatar

if it is 0 it does not do anything

Jørgen Vik avatar
Jørgen Vik

Running a test deployment now to make sure that it is fixed

Jørgen Vik avatar
Jørgen Vik
04:40:47 PM

That was it! Thanks party_parrot

Jørgen Vik avatar
Jørgen Vik

2.5 hours of my life gone, but it works after all

PePe avatar

you are working on ECS, expect 50% of your life gone troubleshooting this AMAZING service

Tomek avatar
Tomek

am i crazy or was it possible in terraform 0.12 to run terraform init without an explicit backend definition located in a .tf file, as long as you passed in the required backend configs via command line args? It appears with terraform 0.13, you need at least the following to be in some kind of .tf file now

terraform { backend "s3" { } }
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This was still required in 0.11, 0.12 at a minimum. We have these annoying stubs as well
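
The stub plus init-time flags looks like the following (this is Terraform’s “partial configuration” for backends; bucket/key names here are hypothetical):

```hcl
# backend.tf -- empty stub; all settings supplied at init time
terraform {
  backend "s3" {}
}
```

```shell
terraform init \
  -backend-config="bucket=my-state-bucket" \
  -backend-config="key=prod/terraform.tfstate" \
  -backend-config="region=us-east-1"
```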

:--1:1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Is it possible to load the rules of an AWS ALB Listener as a data source in Terraform? There’s no data source equivalent of the aws_lb_listener_rule resource, and the aws_lb_listener data source doesn’t seem to include rules

Gysie avatar
Gysie

Hey everyone I’m new to this community and also relatively new to Terraform and all the goodness is brings. I’m in need of some guidance and was hoping someone could help with my problem.

I am trying to turn a map of subnets:

variable "storage_account_network_rule_set_subnets" {
  type = map(object({
    name                = string
    vnet_name           = string
    resource_group_name = string
  }))
  default     = {}
  description = "The Subnet ID(s) which should be able to access this Storage Account."
}

with:

data "azurerm_subnet" "module" {
  for_each = { for s in var.storage_account_network_rule_set_subnets : s.name => s }

  name                 = each.value.name
  virtual_network_name = each.value.vnet_name
  resource_group_name  = each.value.resource_group_name
}

into a list of subnet ids for the azurerm_storage_account provider:

resource "azurerm_storage_account" "example" {
  name                = "storageaccountname"
  resource_group_name = azurerm_resource_group.example.name

  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  network_rules {
    default_action             = "Deny"
    ip_rules                   = ["100.0.0.1"]
<< vv INSERT THAT LIST HERE vv >>
    virtual_network_subnet_ids = [azurerm_subnet.example.id]
<< ^^ INSERT THAT LIST HERE ^^ >>
  }

  tags = {
    environment = "staging"
  }
}

What magic piece of Terraform code will produce that list for me and insert it there? Link to the provider: https://www.terraform.io/docs/providers/azurerm/r/storage_account.html#network_rules

Alex Jurkiewicz avatar
Alex Jurkiewicz

something like data.azurerm_subnet.module[*].id I guess. See https://www.terraform.io/docs/configuration/expressions.html#splat-expressions

Gysie avatar
Gysie

This was the solution in the end: [for subnet in data.azurerm_subnet.module : subnet.id]

:--1:1
Gysie avatar
Gysie

Thanks for the suggestion!

2020-10-19

Amit Karpe avatar
Amit Karpe

Which terraform plugin is best for vscode? Any suggestions for vim?

Pierre-Yves avatar
Pierre-Yves

Hello, I use the plugin “terraform” from “Anton Kulikov”. I don’t know if it’s the best but my need was the support of 0.12 tf version

:--1:1
sheldonh avatar
sheldonh

Hashicorp has taken over. Update to their latest one as it’s an officially maintained plugin now.

:--1:1
Steve Wade avatar
Steve Wade

is this the right place to ask about Terratest best practices?

:--1:1
MrAtheist avatar
MrAtheist

Are there any recommended modules for running an ecs cluster w/ blue/green codedeploy (instrumented via code pipeline + code build)?

caretak3r avatar
caretak3r

Anyone have any experience using terraform with spinnaker/managing pipelines/etc? Any useful docs or projects would be much appreciated!

MattyB avatar
MattyB

I’m sure someone else has a much better setup than I do but essentially on each PR we run terraform fmt -recursive -check -diff and terraform validate

after merging to master we output the plan and have to approve before applying

manually running tfsec and other tools as needed. trying to figure out how to automate these - daily, weekly, etc..

links: https://github.com/antonbabenko/pre-commit-terraform https://github.com/tfsec/tfsec

antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

tfsec/tfsec

Static analysis powered security scanner for your terraform code - tfsec/tfsec

MattyB avatar
MattyB

We have a Makefile that’s used to call different targets: lint, validate, plan, apply, tfsec, etc…

We only have a handful of parent TF projects so far. It’s working well for all of our app deployments right now and seems like a scalable solution for IaC. No complaints so far.

kskewes avatar
kskewes

Do you have Spinnaker running terraform in pipelines MattyB?

Wrt OP.. we use jsonnet for pipeline code etc and spin CLI to upsert Spinnaker. There’s no state so we have to do manual removals. We haven’t wired this to CI yet as haven’t setup x509 or basic auth to Spinnaker. Manually updating with Makefile leverages our gcloud oauth.

Alex Jurkiewicz avatar
Alex Jurkiewicz

today’s great Terraform error:

The true and false result expressions must have consistent types. The given
expressions are tuple and tuple, respectively.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no good way of dealing with that. If you have complex types on both sides of the ternary operator, both sides need to be of exactly the same type

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can 1) specify the type in the variable definition AND provide the exact object types on both sides (and that means not only the types of the objects, but the types of all their fields)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

2) use jsonencode and jsondecode to work with strings
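
A minimal sketch of that workaround (the variable name and value shapes here are hypothetical): serialize both branches so the ternary compares two strings, then decode the result.

```hcl
variable "custom_rules" {
  default = []
}

locals {
  # Both branches are now strings, so the ternary type-checks even though the
  # underlying object shapes differ; decode the winning branch afterwards.
  rules_json = length(var.custom_rules) > 0 ? jsonencode(var.custom_rules) : jsonencode([{ port = 443, cidr = "0.0.0.0/0" }])

  rules = jsondecode(local.rules_json)
}
```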

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Inconsistent conditional result types fails for complex types · Issue #22405 · hashicorp/terraform

Terraform Version Terraform v0.12.6 Terraform Configuration Files variable &quot;default_rules&quot; { default = [] } output &quot;test&quot; { value = var.default_rules != [] ? var.default_rules :…

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yes, it was a mismatched sub-type. The error was real, but the error message is worse than old GCC

loren avatar
loren

on the plus side, they’ve been getting better and better with the error messages. this was way worse before!

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is it possible to configure a Terraform stack so that if a variable’s value changes, a certain resource is forced to be destroyed & rebuilt?

I am deploying an AWS Elastic Beanstalk environment with settings configured by input variables. Some of these settings cannot be changed after environment creation, and the AWS API will return an error if you try. The Terraform AWS provider doesn’t handle this case, so I could change a variable which plans fine but fails on apply. It would be great if I can configure terraform so that it shows “re-creation required” in the plan output.

Pierre-Yves avatar
Pierre-Yves

Hello, if you change a resource name it will be destroyed first and recreated

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t want the resource name to depend on these variables

Pierre-Yves avatar
Pierre-Yves

I didn’t experience it myself I would suggest to see if there is a trick with a null_resource to do this https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource

Marcin Brański avatar
Marcin Brański

Calculate some kind of hash of those variables and append it as a suffix to a field that will recreate the resource you want to depend on it. I’m not using Elastic Beanstalk much so I don’t know which field would be best, but name would be fine imo

loren avatar
loren

the hash is a good idea. i’m also unfamiliar with EB, but i know if you are using ec2 you can force recreation by modifying the userdata, even with just a comment in the script/cloud-init config…
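
For the EC2 case, that trick can be sketched as follows (the variable and resource names are hypothetical); any change to the hashed settings rewrites user_data, which forces instance replacement:

```hcl
variable "immutable_settings" {
  type    = map(string)
  default = {}
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  # Embedding a hash of the settings means any change to them changes
  # user_data, and user_data changes force the instance to be recreated.
  user_data = <<-EOF
    #!/bin/bash
    # settings-hash: ${sha256(jsonencode(var.immutable_settings))}
    echo "bootstrapping..."
  EOF
}
```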

Alex Jurkiewicz avatar
Alex Jurkiewicz

The only attribute that forces recreation is name, and a lot of our infra glue depends on that being predictable. I was hoping for some cool Terraform workaround

loren avatar
loren

Interesting. What is the exact resource?

loren avatar
loren

The cloudformation equivalent mentions a few properties that require replacement, so maybe you can work a replacing update implementation around one of those? https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html

AWS::ElasticBeanstalk::Environment - AWS CloudFormation

The AWS::ElasticBeanstalk::Environment resource is an AWS Elastic Beanstalk resource type that specifies an Elastic Beanstalk environment.

loren avatar
loren

If not, you can always taint the resource, though that’s a bit outside the gitops-style workflow I try to prefer

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah, good idea. I will try and use cname prefix – the DNS name can be weird

:--1:1
Alex Jurkiewicz avatar
Alex Jurkiewicz

yep. I can set cname_prefix like:

resource "random_id" "eb_cname_prefix" {
  keepers = {
    use_shared_alb = var.use_shared_alb
  }

  byte_length = 1 # Increase this as we add more keepers
}

resource "aws_elastic_beanstalk_environment" "foo" {
  cname_prefix = "myapp-${random_id.eb_cname_prefix.hex}"
  # ... settings including LoadBalancerIsShared ...
}

Thanks for the ideas all!

:--1:1

2020-10-18

Leonard Tan avatar
Leonard Tan

Hi, I have been trying to use the terraform AWS Elastic Beanstalk environment module, but I have an issue with S3 bucket creation. I have tried different names but it still does not work. Below is the error when running terraform apply. Please help

Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: A97AF26F4D7B367B, host id: 5VvycnT8xomwlLocjMwYlFK7cFAQ8JWFXgTVQ9Y/uz4e17aOnLY4In0dxiLg9enmSDiNQ1u9fek=

  on .terraform/modules/elastic_beanstalk_environment/main.tf line 935, in resource "aws_s3_bucket" "elb_logs":
 935: resource "aws_s3_bucket" "elb_logs" {
pjaudiomv avatar
pjaudiomv

What are the bucket names you’ve tried? Can you use a name_prefix instead?

pjaudiomv avatar
pjaudiomv

Bucket names are global and must be unique
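
For buckets you create directly (not ones created inside a module), one way to sidestep global name collisions is to let AWS generate a unique suffix; a sketch, with a hypothetical prefix:

```hcl
resource "aws_s3_bucket" "elb_logs" {
  # bucket_prefix is mutually exclusive with bucket: AWS appends a unique
  # suffix, so the final name cannot collide with another account's bucket.
  bucket_prefix = "myorg-elb-logs-"
}

output "elb_logs_bucket_name" {
  value = aws_s3_bucket.elb_logs.id # the generated, globally unique name
}
```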

Mikael Fridh avatar
Mikael Fridh

Using Java-style domain names is what I prefer.

com.mydomain.logs

1
:--1:1
Mikael Fridh avatar
Mikael Fridh

Unless someone is actively trying to sabotage you, they are usually free..

Leonard Tan avatar
Leonard Tan

I am using the v0.31.0 module from https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment and the error line is from the main.tf in that module.

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Mikael Fridh avatar
Mikael Fridh

I see. I guess your name, namespace, etc. combination clashes with what someone else is already using then. I think that’s probably something that should be made overridable in that module.

sheldonh avatar
sheldonh

Any recommended reading on VPC design best practices? I have one account where 90 of the various apps all use the same VPC. Noticing that a lot of ECS and EKS stuff expects its own VPC.

Zach avatar

I’ve avoided using some public open source modules because of that actually

Zach avatar

some of them just assume you get a new vpc every time

sheldonh avatar
sheldonh

That’s the main thing that slowed me down: trying to figure out all the right subnets to use. Terraform wasn’t used to manage the VPC deployment, so it wasn’t quite so straightforward to quickly deploy

Zach avatar

we have specific vpcs and just use data lookups on them by name in all our modules

sheldonh avatar
sheldonh

I tried that and it’s mostly ok, but one had duplicate names. Maybe I could get the subnets dynamically by filtering on the public/private attribute

Zach avatar

yup! we do that too. Add a ‘tier’ tag to the subnet and look it up that way
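
That lookup pattern might look like this (the VPC Name tag and tier values are hypothetical; aws_subnet_ids was the data source current in the AWS provider at the time):

```hcl
# Find the VPC by its Name tag instead of hardcoding an ID
data "aws_vpc" "main" {
  tags = {
    Name = "main-vpc"
  }
}

# All subnets in that VPC carrying the private tier tag
data "aws_subnet_ids" "private" {
  vpc_id = data.aws_vpc.main.id

  tags = {
    tier = "private"
  }
}
```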

Marcin Brański avatar
Marcin Brański

One guide that’s good to read: https://gruntwork.io/guides/networking/how-to-deploy-production-grade-vpc-aws/ Another good topic is the placement of EC2, ECS, k8s, Lambda, etc. and interconnecting them. Maybe the AWS Solutions Architect Professional guide will have some answers for you?

How to deploy a production-grade VPC on AWS

Learn how to configure subnets, route tables, Internet Gateways, NAT Gateways, NACLs, VPC Peering, and more.

:--1:3
x80486 avatar
x80486

Hello everyone! I’ve been using the terraform-aws-acm-request-certificate module these days along with terraform-aws-cloudfront-s3-cdn. Everything was fine, but with the latest update to terraform-aws-acm-request-certificate/0.8.0 I started getting a number of error messages like this one:

Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 30, in resource "aws_route53_record" "default":
  30:   name            = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
    |----------------
    | count.index is 1
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.
x80486 avatar
x80486

This is the entire output for the plan command:

x80486 avatar
x80486
[[email protected]:~/Workshop/Development/aws-static-website]$ make plan 
Initializing modules...
Downloading git::<https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=tags/0.8.0> for acm_request_certificate...
- acm_request_certificate in .terraform/modules/acm_request_certificate
Downloading git::<https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=tags/0.35.0> for cloudfront_s3_cdn...
- cloudfront_s3_cdn in .terraform/modules/cloudfront_s3_cdn
Downloading git::<https://github.com/cloudposse/terraform-aws-route53-alias.git?ref=tags/0.8.2> for cloudfront_s3_cdn.dns...
- cloudfront_s3_cdn.dns in .terraform/modules/cloudfront_s3_cdn.dns
Downloading git::<https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.14.0> for cloudfront_s3_cdn.logs...
- cloudfront_s3_cdn.logs in .terraform/modules/cloudfront_s3_cdn.logs
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for cloudfront_s3_cdn.logs.this...
- cloudfront_s3_cdn.logs.this in .terraform/modules/cloudfront_s3_cdn.logs.this
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for cloudfront_s3_cdn.origin_label...
- cloudfront_s3_cdn.origin_label in .terraform/modules/cloudfront_s3_cdn.origin_label
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for cloudfront_s3_cdn.this...
- cloudfront_s3_cdn.this in .terraform/modules/cloudfront_s3_cdn.this

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding hashicorp/template versions matching ">= 2.0.*"...
- Finding hashicorp/aws versions matching ">= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*"...
- Finding hashicorp/local versions matching ">= 1.2.*, >= 1.2.*, >= 1.2.*, >= 1.2.*"...
- Finding hashicorp/null versions matching ">= 2.0.*, >= 2.0.*, >= 2.0.*"...
- Installing hashicorp/null v3.0.0...
- Installed hashicorp/null v3.0.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v3.11.0...
- Installed hashicorp/aws v3.11.0 (signed by HashiCorp)
- Installing hashicorp/local v2.0.0...
- Installed hashicorp/local v2.0.0 (signed by HashiCorp)

Terraform has been successfully initialized!
Switched to workspace "sandbox".
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 30, in resource "aws_route53_record" "default":
  30:   name            = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
    |----------------
    | count.index is 1
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.


Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 30, in resource "aws_route53_record" "default":
  30:   name            = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
    |----------------
    | count.index is 0
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.


Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
  31:   type            = lookup(local.domain_validation_options_list[count.index], "resource_record_type")
    |----------------
    | count.index is 1
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.


Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
  31:   type            = lookup(local.domain_validation_options_list[count.index], "resource_record_type")
    |----------------
    | count.index is 0
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.


Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 32, in resource "aws_route53_record" "default":
  32:   records         = [lookup(local.domain_validation_options_list[count.index], "resource_record_value")]
    |----------------
    | count.index is 1
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.


Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 32, in resource "aws_route53_record" "default":
  32:   records         = [lookup(local.domain_validation_options_list[count.index], "resource_record_value")]
    |----------------
    | count.index is 0
    | local.domain_validation_options_list is set of object with 2 elements

This value does not have any indices.
x80486 avatar
x80486

I didn’t create an issue on GitHub because I’m not sure if this is because of something on my end or specific to the 0.8.0 update

Matt Gowie avatar
Matt Gowie

Hey @ — I was the one that merged and released 0.8.0. It passed our tests, but maybe something cropped up that our tests didn’t catch.

Can you check out the below PR / code and see if targeting that will fix this issue that you’ve run into? https://github.com/cloudposse/terraform-aws-acm-request-certificate/pull/27

x80486 avatar
x80486

I’m getting this one now:

Error: Unsupported attribute

  on .terraform/modules/acm_request_certificate/main.tf line 44, in resource "aws_acm_certificate_validation" "default":
  44:   validation_record_fqdns = aws_route53_record.default.*.fqdn

This object does not have an attribute named "fqdn".
x80486 avatar
x80486

What I did was copy over the files from that branch (that’s in a merge request now) and change the source value to point to the directory under .terraform/.. (the following file snippet has the correct URL and ref tag for 0.8.0, which I didn’t use for testing the changes you asked me to)

x80486 avatar
x80486

This is my super-tiny Terraform configuration:

locals {
  tags = {
    Environment = terraform.workspace
    Terraform   = "true"
  }
}

provider "aws" {
  profile = terraform.workspace
  region  = var.aws_region
}

resource aws_route53_zone "default" {
  name = var.domain_name

  tags = local.tags
}

module "acm_request_certificate" {
  source = "git::<https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=tags/0.8.0>"

  depends_on = [aws_route53_zone.default]

  domain_name                       = var.domain_name
  process_domain_validation_options = true
  subject_alternative_names         = ["*.${var.domain_name}"]
  wait_for_certificate_issued       = var.wait_for_certificate_issued
  zone_name                         = var.domain_name

  tags = local.tags
}

module "cloudfront_s3_cdn" {
  source = "git::<https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=tags/0.35.0>"

  depends_on = [module.acm_request_certificate]

  acm_certificate_arn      = module.acm_request_certificate.arn
  aliases                  = [var.domain_name, "www.${var.domain_name}"]
  allowed_methods          = ["GET", "HEAD", "OPTIONS"]
  compress                 = true
  dns_alias_enabled        = true
  error_document           = "not_found.html"
  namespace                = var.company_prefix
  name                     = var.name
  origin_force_destroy     = true
  parent_zone_id           = aws_route53_zone.default.zone_id
  stage                    = var.stage
  use_regional_s3_endpoint = true
  website_enabled          = true

  tags = local.tags
}
x80486 avatar
x80486

The variables.tf file:

variable "aws_region" {
  description = "The AWS region to deploy to"
  type        = string
  default     = "us-east-1"
}

variable "company_prefix" {
  description = "The company's name prefix"
  type        = string
  default     = "tld-domain"
}

variable "domain_name" {
  description = "The FQDN name of the Website (e.g.: example.com)"
  type        = string
  default     = null
}

variable "force_destroy" {
  description = "Delete all objects from the bucket so that the bucket can be destroyed without error"
  type        = bool
  default     = false
}

variable "name" {
  description = "The identifier/name of the application or solution"
  type        = string
  default     = null
}

variable "stage" {
  description = "Stage, e.g.: prod, staging, test, dev, sandbox, etc."
  type        = string
  default     = null
}

variable "wait_for_certificate_issued" {
  description = "Whether to wait for the certificate to be issued by ACM (status change from PENDING_VALIDATION to ISSUED)"
  type        = bool
  default     = false
}
x80486 avatar
x80486

The .tfvars file:

force_destroy = true

domain_name = "domain.tld"

name = "domain"

stage = "sandbox"

wait_for_certificate_issued = true
x80486 avatar
x80486

My plan target looks like this:

.PHONY: plan
plan: init workspace init ## Create a Terraform execution plan
	@terraform $@ -compact-warnings -input=false -lock=true -out=$(WORKSPACE)-plan.out -refresh=true -var-file=$(WORKSPACE).tfvars


## Private Zone

.PHONY: init
init:
	@terraform $@ -input=false -lock=true -verify-plugins=true

.PHONY: workspace
workspace:
	@terraform $@ select $(WORKSPACE) || terraform $@ new $(WORKSPACE)
x80486 avatar
x80486

@Matt Gowie, did you have a chance to take a look at this one?

2020-10-17

praveen avatar
praveen

#azure #kubernetes #terraform was anyone able to enable admin group object IDs using terraform for an Azure AKS cluster?

  dynamic "role_based_access_control" {
    for_each = list(coalesce(each.value.rbac_enabled, false))
    content {
      enabled = role_based_access_control.value
      dynamic "azure_active_directory" {
        for_each = var.ad : []
        content {
          managed                = true
          admin_group_object_ids = var.admin_group_object_ids
        }
      }
    }
  }

praveen avatar
praveen

it throws the following error

praveen avatar
praveen

Error: Missing required argument

  on ....\modules\Kubernetes\main.tf line 140, in resource "azurerm_kubernetes_cluster" "this":
 140:   content {

The argument "server_app_secret" is required, but no definition was found.

Error: Missing required argument

  on ....\modules\Kubernetes\main.tf line 140, in resource "azurerm_kubernetes_cluster" "this":
 140:   content {

The argument "client_app_id" is required, but no definition was found.

Error: Missing required argument

  on ....\modules\Kubernetes\main.tf line 140, in resource "azurerm_kubernetes_cluster" "this":
 140:   content {

The argument "server_app_id" is required, but no definition was found.

Error: Unsupported argument

  on ....\modules\Kubernetes\main.tf line 141, in resource "azurerm_kubernetes_cluster" "this":
 141:   managed = true

An argument named "managed" is not expected here.

Error: Unsupported argument

  on ....\modules\Kubernetes\main.tf line 142, in resource "azurerm_kubernetes_cluster" "this":
 142:   admin_group_object_ids = var.admin_group_object_ids

An argument named "admin_group_object_ids" is not expected here.

praveen avatar
praveen

got it fixed with the azurerm provider version 2.21.0

:--1:1

2020-10-16

Pierre-Yves avatar
Pierre-Yves

Hello, I have added the deployment timestamp as a terraform tag: current_date = formatdate("YYYYMMD hh:mm:ss", timestamp()) Is there a way to tell terraform plan not to display this change at each plan?

Adrian avatar
Adrian
Ignore changes for a single AWS Resource Tag · Issue #6632 · hashicorp/terraform

Is there a way to limit the @ignore_changes to just a single tag e.g. VERSION tag. As apposed to every single tag for a particular resource? I have a VERSION tag which is updated regularly by Jenki…
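
The pattern from that issue can be sketched like this (resource and tag names are hypothetical; ignoring an individual map key in ignore_changes needs a fairly recent 0.12+ Terraform). Note it stops Terraform from updating the tag entirely, not just from displaying the diff:

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "myorg-example-bucket"

  tags = {
    Name         = "example"
    current_date = formatdate("YYYYMMDD hh:mm:ss", timestamp())
  }

  lifecycle {
    # Ignore drift on just this one tag rather than on all tags
    ignore_changes = [tags["current_date"]]
  }
}
```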

Pierre-Yves avatar
Pierre-Yves

thanks! The information will be useful, but the point was to keep the change to the timestamp tag quiet, not to prevent it.

Luis Muniz avatar
Luis Muniz

Hi I have just found your terraform-aws-tfstate-backend module and I have a newbie question.

Luis Muniz avatar
Luis Muniz

I have followed the documentation to create a remotely managed state with s3. Works well on the environment that initially ran the terraform script.

Luis Muniz avatar
Luis Muniz

But I am trying to figure out how to bootstrap a fresh environment, on a different computer.

Luis Muniz avatar
Luis Muniz

When I try, and run terraform plan, terraform keeps on trying to recreate the existing bucket/dynamodb table, etc.

Luis Muniz avatar
Luis Muniz

is there another way than downloading the tfstate file and dropping it into the current directory?

Pierre-Yves avatar
Pierre-Yves

Hello @, if you switch from one tfstate to another you should use terraform init -reconfigure

https://www.terraform.io/docs/commands/init.html

1
Pierre-Yves avatar
Pierre-Yves

if the point is fetching the remote state again, terraform refresh will get the update. By default terraform plan will do a refresh

Luis Muniz avatar
Luis Muniz

Hi, thanks for replying Pierre-Yves. The issue is that the backend is not configured, because the module wants to add the s3 bucket

Luis Muniz avatar
Luis Muniz

i’m stuck in this pre-initialization limbo

Luis Muniz avatar
Luis Muniz

refresh does not do anything, because the backend has not yet been switched to s3

Luis Muniz avatar
Luis Muniz

it’s stuck in the local backend, when I run the plan, it tries to create an s3 bucket and dynamo tables that already exist

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


refresh does not do anything, because the backend has not yet been switched to s3
that’s something you do by editing the .tf file and updating the backend
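
A minimal s3 backend block for that edit might look like this (bucket, key, and table names are hypothetical and must match what the tfstate-backend module actually created):

```hcl
terraform {
  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "global/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock" # enables state locking
    encrypt        = true
  }
}
```

With this in place, terraform init -reconfigure points the working directory at the existing remote state instead of trying to recreate the bucket and table.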

Pierre-Yves avatar
Pierre-Yves

exactly. If I have resources in two terraform states, to switch from one to another I do: terraform init -reconfigure -backend-config=

Luis Muniz avatar
Luis Muniz

Thanks, I was misunderstanding how to use the module and trying to re-generate the terraform.tf file (it was not under source control)

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
06:25:15 PM

Latency and availability issues Oct 16, 18:14 UTC Monitoring - We’ve rolled out some changes to the URL service that we expect to stabilize it, but it is running in a partially degraded state so there may still be latency or other minor delays.Oct 16, 18:06 UTC Update - Following user reports on GitHub, we’ve identified an issue with the Waypoint URL service that occasionally results in high latency for connections to waypoint.run hosts, as well as scenarios where a new deployment will return the “Couldn’t find a Waypoint…

Latency and availability issues

HashiCorp Services’s Status Page - Latency and availability issues.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
06:55:15 PM

Waypoint URL Service latency and availability issues Oct 16, 18:14 UTC Monitoring - We’ve rolled out some changes to the URL service that we expect to stabilize it, but it is running in a partially degraded state so there may still be latency or other minor delays.Oct 16, 18:06 UTC Update - Following user reports on GitHub, we’ve identified an issue with the Waypoint URL service that occasionally results in high latency for connections to waypoint.run hosts, as well as scenarios where a new deployment will return the “Couldn’t find a Waypoint…

Waypoint URL Service latency and availability issues

HashiCorp Services’s Status Page - Waypoint URL Service latency and availability issues.

PePe avatar

early adoption signs, which is good lol

Waypoint URL Service latency and availability issues

HashiCorp Services’s Status Page - Waypoint URL Service latency and availability issues.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

antonbabenko avatar
antonbabenko

First-class problems definitely a good sign

nileshsharma.0311 avatar
nileshsharma.0311

Hi all, our customer onboarding process will in future require launching the infrastructure in a segregated VPC just for that customer. We’ll be using terraform to automate deploying the infra, but we don’t have a devops person just yet, so I need help architecting the best possible solution for maintaining state files per customer from an operations perspective. The terraform code won’t change often. So, from an operations perspective: would you use a master directory holding the tf code and a workspace per customer for segregating the customer state files, or more like a repo per customer in a git server and then trigger deployments from those repos? I’m just thinking out loud about the solution since I don’t know the standard practice for this use case

PePe avatar

if you are going to use TF Cloud, then use workspaces, but if not then you could use the same repo with multiple backend-configs or a repo per customer

PePe avatar

it depends on how similar your customers are

Yoni Leitersdorf avatar
Yoni Leitersdorf

It also depends on what will happen when you have TF code updates. Will you update all customers, or find yourself with different customers running different versions?

Chris Wahl avatar
Chris Wahl

I’d also point out that versioning the parent code, and calling those specific versions for your customer execution plans, is important. Gives you an easy mechanism to perform selective applies.

nileshsharma.0311 avatar
nileshsharma.0311

@ yes sir , I can think of use cases where customers might have different versions running @ thanks for pointing that out

nileshsharma.0311 avatar
nileshsharma.0311

@PePe thanks for always replying and helping me. Can you provide some more insight into how to approach rolling out updates for a customer if we only have one repo? Will the concept of having different backend configs work? And can you think of any scalability issues with this?

PePe avatar

there are many ways to answer this question

PePe avatar

I will tell you my preference

PePe avatar

for example lets say you have customer with wordpress+mysql

PePe avatar

you have a module to build wordpress and another one for mysql

PePe avatar

if you want to manage customers you could have one repo for all your customers that have that combination

PePe avatar

and you could have a tree structure like

PePe avatar
terraform-aws-wordpress-mysql
clients
- clientA
-- main.tf
-- variables.tf
-- backend.tf
PePe avatar

and then you instantiate the main.tf per customer using the modules you already have (using versions)

PePe avatar

and you init terraform by doing terraform init --backend-config=clientA/backend.tf
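
A file passed via -backend-config holds only key/value pairs that fill in an empty backend "s3" {} block declared in the code; a sketch with hypothetical values:

```hcl
# clients/clientA/backend.tf -- partial backend configuration for clientA
bucket         = "myorg-tfstate"
key            = "clientA/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "terraform-state-lock"
```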

PePe avatar

obviously you might have more parameters, like vars file etc

PePe avatar

that is one way

PePe avatar

you can have a backend.tf per customer, or one backend file for all customers that match the wordpress+mysql criteria

PePe avatar

but that will get really messy if you have many customers with slightly different configs

PePe avatar

I do not like that much; I like the CloudPosse way where there are modules and metamodules

PePe avatar

so I would prefer to have a mysql+wordpress module

PePe avatar

and have a meta module terraform-aws-wordpress-mysql that uses those other modules and is flexible enough to accept slightly different configs per customer. You can init the module the same way for all your customers and use a custom backend.tf file that can be templated by an automated job; the data could come from a key/value store or db or whatever

PePe avatar
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

PePe avatar

and if you need environments per customers you can have another module for the environment itself that will call the meta module etc and so on

1
PePe avatar

it’s like a chain of module dependencies

PePe avatar

but the shallower the dependency chain, the better

PePe avatar

that is my humble opinion

PePe avatar

there must be others that have done this for customers/many teams with different workflows

nileshsharma.0311 avatar
nileshsharma.0311

Thanks again

Pierre-Yves avatar
Pierre-Yves

Hi @ @PePe, I am using one project and one tfstate per tool, and one tfstate for each env. This makes for smaller tfstates and limits access to a project to only the allowed people

PePe avatar

yes, that is another thing too: the bigger the state, the longer deployments take and the bigger the blast radius

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
keilerkonzept/terraform-module-versions

CLI tool that checks Terraform (0.10.x - 0.12.x) code for module updates. Single binary, no dependencies. linux, osx, windows. #golang #cli #terraform - keilerkonzept/terraform-module-versions

:--1:5
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
11:00:49 PM
HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
11:35:10 PM

Waypoint URL Service latency and availability issues Oct 16, 22:26 UTC Resolved - This incident has been resolved. The Waypoint URL service should be functioning without issue.Oct 16, 18:14 UTC Monitoring - We’ve rolled out some changes to the URL service that we expect to stabilize it, but it is running in a partially degraded state so there may still be latency or other minor delays.Oct 16, 18:06 UTC Update - Following user reports on GitHub, we’ve identified an issue with the Waypoint URL service that occasionally results in high latency for…

Waypoint URL Service latency and availability issues

HashiCorp Services’s Status Page - Waypoint URL Service latency and availability issues.

Mohammed Yahya - محمد المصدّر avatar
Mohammed Yahya - محمد المصدّر

New #terraform releases this week:

  • terraform v0.14.0-beta1
  • terraform-aws-provider v3.11.0

Links:

  • https://lnkd.in/ev8r4Ka
  • https://lnkd.in/eAeP8fn

2020-10-15

Release notes from terraform avatar
Release notes from terraform
02:25:16 PM

v0.14.0-beta1 Version 0.14.0-beta1

loren avatar
loren

oh boy, we’re on to betas already?

1
Matt Gowie avatar
Matt Gowie

Huh they’re releasing beta versions earlier than I would have expected.

1
Release notes from terraform avatar
Release notes from terraform
02:35:22 PM

v0.14.0-beta1 0.14.0 (Unreleased) NEW FEATURES:

terraform init: Terraform will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (#26524) If you wish to retain the previous behavior of always taking the newest version allowed…

Initial integration of the provider dependency pinning work by apparentlymart · Pull Request #26524 · hashicorp/terraform

This follows on from some earlier work that introduced models for representing provider dependency &quot;locks&quot; and a file format for saving them to disk. This PR wires the new models and beha…

loren avatar
loren

well that’s an interesting feature… @Erik Osterman (Cloud Posse) @antonbabenko plays into the versioning discussion from office hours yesterday… https://github.com/hashicorp/terraform/pull/26524

Initial integration of the provider dependency pinning work by apparentlymart · Pull Request #26524 · hashicorp/terraform

This follows on from some earlier work that introduced models for representing provider dependency &quot;locks&quot; and a file format for saving them to disk. This PR wires the new models and beha…

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s neat! thanks for pointing it out.

wow, beta1 already??

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform 0.14: The Dependency Lock File

Ever since Terraform has had support for installing external dependencies (first external modules in early Terraform, and then separately-released providers in Terraform 0.10) it has used a hidden directory under .terraform as a sort of local, directory-specific “lock” of the selected versions of those dependencies. If you ran terraform init again in the same working directory then Terraform would select those same versions again. Unfortunately this strategy hasn’t been sufficient for today’s m…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


The first iteration of this for Terraform 0.14 covers only provider dependencies. Tracking version selections for external modules will hopefully follow in a later release, but requires some deeper design due to Terraform’s support for a large number of different installation methods for external modules.

kskewes avatar
kskewes

Going to need to git gud at updating… Sounds like a make target.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ heads up: our opsgenie module is now updated with support for services and teams

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform 0.14-beta1 available

Today we’ve released Terraform 0.14.0-beta1, which marks the start of the prerelease testing period for Terraform v0.14. We plan to publish at least one more beta release and one release candidate before the final 0.14.0. During this period, we’d be very grateful if folks could try out the new features in this release and let us know if you see any unusual behavior. We do not recommend using beta releases in production. While many of these features have seen some alpha testing prior to these b…

Alex Jurkiewicz avatar
Alex Jurkiewicz

quick release cycle compared to 0.13

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

they are just reminding us that anything that is painful, we should do more of until it’s not painful any more

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. it’s painful to update core version pinning on 100s of modules

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Chris Fowles avatar
Chris Fowles

0.13 was a beast of an overhaul of internals

Chris Fowles avatar
Chris Fowles

a lot of the work done in 0.13 was done to make future work easier - it was a bit of a big techdebt clean up from what i understand

loren avatar
loren

i recall a few people asking about creating acm dns-validated certs with tf, but can’t recall who… we worked on this a while back and encountered some limitations requiring janky/hacky workarounds in tf 0.12 and v2 of the aws provider. just updated today for tf 0.13 and v3 of the aws provider, and now it seems pretty solid. we can now handle multiple SANs, proper resource cycles, no occasional random diff on future plans, etc… here’s the updated module we’re using, very straightforward now… https://github.com/plus3it/terraform-aws-tardigrade-acm/blob/master/main.tf

:--1:3
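For anyone curious what the “pretty solid” tf 0.13 / aws provider v3 shape looks like, here’s a minimal sketch of the DNS-validation pattern (names and the zone variable are illustrative, not the module’s exact code):

```hcl
resource "aws_acm_certificate" "this" {
  domain_name               = "example.com"
  subject_alternative_names = ["www.example.com"]
  validation_method         = "DNS"
}

# One validation record per domain, keyed by domain name so adding
# or removing SANs doesn't churn unrelated records
resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options :
    dvo.domain_name => dvo
  }

  zone_id         = var.zone_id
  name            = each.value.resource_record_name
  type            = each.value.resource_record_type
  records         = [each.value.resource_record_value]
  ttl             = 60
  allow_overwrite = true
}

resource "aws_acm_certificate_validation" "this" {
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
}
```

Keying the validation records by domain name is what makes multiple SANs and future plan stability work cleanly.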
loren avatar
loren

if anyone has feedback or sees any patterns we can improve, please let me know! (or open a pr )

David avatar
David

It would be pretty neat to export the cert ARN as a thing of its own, instead of the entire certificate object. My IntelliJ can’t really see that the object will be of the cert type and thus can’t do autocompletion for it

:--1:1
loren avatar
loren

oh that’s interesting. i’ve never relied on autocompletion in modules, so never occurred to me. seems like a pain to keep up with and document all the attributes someone might want though

loren avatar
loren

i wonder if autocompletion of object attributes is something the language server might be able to offer

loren avatar
loren

found an issue pretty close, asked for clarification on this specific use case… https://github.com/hashicorp/terraform-ls/issues/93#issuecomment-710057056

Completion of module variables in module block · Issue #93 · hashicorp/terraform-ls

I could not find any issues or references to it, but does autocompletion work from modules? We use a lot of them, and it does not seem to work for us.

2020-10-14

PePe avatar

ANYONE watching the keynote????? HashiConf Digital thread here

4
PePe avatar

HCP Vault announcement

1
PePe avatar

HCP Consul

PePe avatar

Consul 1.9

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(this is great! keep ‘em coming)

PePe avatar

Zero trust focused products

Matt Gowie avatar
Matt Gowie

Watching, but not stoked on anything yet.

What Armon is talking about now is interesting though..

Matt Gowie avatar
Matt Gowie
Announcing HashiCorp Boundary attachment image

Simple and secure remote access — to any system anywhere based on trusted identity.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha #called-it

Matt Gowie avatar
Matt Gowie

Now I’m excited.

Matt Gowie avatar
Matt Gowie

Seems like AWS SSM Session Manager, but cloud agnostic. cool-doge

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Though it does more protocols. Looks like it can do anything TCP (redis, postgres, ssh, etc.) https://www.boundaryproject.io/

Boundary by HashiCorp attachment image

Boundary is an open source solution that automates a secure identity-based user access to hosts and services across environments.

Matt Gowie avatar
Matt Gowie

Yeah, which will be sweet! I can throw away my ugly make target that creates a SSM session to port forward database access.

PePe avatar

I love Zero Trust, but one of the principles of Zero Trust is the identification of the machine/computer itself, and without a way to catalog, for example, a user’s laptop, there is no way to reconcile the identity of the machine after the registry changes because new software is installed, which is key for this concept to work. The way we reconcile instances is by using packer and other tools to build AMIs, or by using containers, but what about a user’s phone or laptop?

RB avatar

oh man and we just signed with banyan’s zerotrust solution

RB avatar

oh well. it’s not like boundary is the first open source zt solution

Robert Horrox avatar
Robert Horrox

Ask @Erik Osterman (Cloud Posse) about open source IDPs

PePe avatar

it is 0.1 you need to wait at least for 0.13.0

:100:3
PePe avatar

lol

Robert Horrox avatar
Robert Horrox

I hope since it’s HashiCorp, it won’t just die

1
Vlad Ionescu avatar
Vlad Ionescu

Curious how it compares to https://gravitational.com/teleport/

Teleport: SSH Access and Kubernetes Access attachment image

Teleport allows you to implement industry-best practices for SSH and Kubernetes access, meet compliance requirements, and have complete visibility into access and behavior.

andrey.a.devyatkin avatar
andrey.a.devyatkin

Nomad goes 1.0 on 27th and namespaces will be open source in 1.0

andrey.a.devyatkin avatar
andrey.a.devyatkin

really nice keynote for nomad

andrey.a.devyatkin avatar
andrey.a.devyatkin

Got Boundary up and running in AWS and was able to connect from laptop

andrey.a.devyatkin avatar
andrey.a.devyatkin

so basic functionality seems to work fine

andrey.a.devyatkin avatar
andrey.a.devyatkin

though it’s missing 3rd-party IdP support

andrey.a.devyatkin avatar
andrey.a.devyatkin

for now only local user/password

PePe avatar

nomad is free? I never used it and I thought it was all paid

andrey.a.devyatkin avatar
andrey.a.devyatkin

it is free, and there is an enterprise version that has some additional features

Chris Fowles avatar
Chris Fowles

Namespaces in OSS is great news

Chris Fowles avatar
Chris Fowles

boundary looks great - it covers (on the label, at least) a bunch of use cases i’m looking at for access control atm

andrey.a.devyatkin avatar
andrey.a.devyatkin
Next Steps
For Boundary's upcoming releases, we have 3 key product themes we're focused on delivering:

Bring your own identity. We feel strongly that Boundary's identity-based controls should use the same identity that users have for their other applications. To do so, we'll progressively add support for new auth methods for Boundary. Our first step will be in delivering an OpenID Connect (OIDC) auth method.

Just-in-time access. A just-in-time access posture will be enforced at multiple levels within Boundary. Upcoming releases will offer integration with Vault or your preferred secret management solution of choice to generate ephemeral credentials for Boundary sessions.

Target discovery. To manage dynamic infrastructure users will need a way to discover and add newly provisioned hosts to targets while enforcing existing access policies on new instances. With Boundary 0.1, you can provision these targets and access policies dynamically with the Boundary Terraform provider. In the releases following launch we'll give administrators the ability to define dynamic host catalogs to discover new hosts based on predefined rules or tags for Consul, each of the major cloud platforms, and Kubernetes.
Matt Gowie avatar
Matt Gowie

New Keynote is going on now and the website for the new product is up!

https://www.waypointproject.io/

Matt Gowie avatar
Matt Gowie

HashiCorp Digital Day 2 — Continuing yesterday’s thread

Yoni Leitersdorf avatar
Yoni Leitersdorf

Yet another CI/CD platform??

1
andrey.a.devyatkin avatar
andrey.a.devyatkin

it is something else

andrey.a.devyatkin avatar
andrey.a.devyatkin

seems to be a replacement for the bash scripts we’re all making to glue things together

andrey.a.devyatkin avatar
andrey.a.devyatkin

but let’s see

Vlad Ionescu avatar
Vlad Ionescu

Yeah. It’s still push-based which is interesting considering that the bleeding edge now is GitOps where you have something running in the cluster, pulling the latest info about what to deploy

Vlad Ionescu avatar
Vlad Ionescu

I like that it’s using Buildpacks

RB avatar

remember otto ?

RB avatar

will waypoint go like otto ?

Matt Gowie avatar
Matt Gowie

Please keep mentioning otto (in the comments). @RB What is that?

RB avatar

it’s an old very dead hashicorp project

RB avatar

almost like waypoint is a rebranded otto

PePe avatar

I’m late , waypoint is like atlantis?

PePe avatar

ohhhh is a ci-cd…..

PePe avatar

another ci tool

Matt Gowie avatar
Matt Gowie

It seems to be only a CD tool. You do execute builds with it, but it isn’t for CI.

Matt Gowie avatar
Matt Gowie

So you might execute a waypoint up as part of your CI tool it seems.

Andrey Nazarov avatar
Andrey Nazarov

Otto looked cool at first when it appeared, but then people soon realised how utopian it was))

andrey.a.devyatkin avatar
andrey.a.devyatkin

I’m not sold yet. This is an area where people do a lot of customizations to fit their workflows

andrey.a.devyatkin avatar
andrey.a.devyatkin

so let’s see the deep dive session

andrey.a.devyatkin avatar
andrey.a.devyatkin

also, not sure what the game plan is there. As a company you’re supposed to make money. I can understand tools like packer that don’t generate any profit but are useful for the company itself. This one is something else…

andrey.a.devyatkin avatar
andrey.a.devyatkin

and not really their area

Zach avatar

hm yah they’ve already got a github action for it in fact

PePe avatar

waypoint entrypoint = ngrok but opensource

Matt Gowie avatar
Matt Gowie

One of the Terraform 0.14 major functionality targets is “Concise Diff” — I like the sound of that.

andrey.a.devyatkin avatar
andrey.a.devyatkin

+1

andrey.a.devyatkin avatar
andrey.a.devyatkin

waypoint up, links to test envs, test to terraform - it looks like all those releases are Pulumi inspired

andrey.a.devyatkin avatar
andrey.a.devyatkin

trying to take away reasons to move from HashiCorp ecosystem

Matt Gowie avatar
Matt Gowie

“test to terraform” — What’re you referring to there? What’d I miss? I tuned out after 2-3 sessions.

andrey.a.devyatkin avatar
andrey.a.devyatkin

terraform 0.14 will have a test provider that you can use to run HCL-defined tests for terraform modules
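No docs were published at this point; from the demo, the experimental syntax looked roughly like the following, with a builtin test provider and a test_assertions resource (everything here is illustrative and subject to change):

```hcl
terraform {
  required_providers {
    test = {
      source = "terraform.io/builtin/test"
    }
  }
}

# Instantiate the module under test (path illustrative)
module "main" {
  source = "../.."
}

# Assertions evaluated while applying the test configuration
resource "test_assertions" "outputs" {
  component = "outputs"

  equal "bucket_name" {
    description = "bucket name follows the naming convention"
    got         = module.main.bucket_name
    want        = "example-dev-assets"
  }
}
```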

Matt Gowie avatar
Matt Gowie

Damn, sounds awesome… how did I miss that.

Matt Gowie avatar
Matt Gowie

Did you see any information released about that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heh, sigh… exciting news about the test provider. but my level of interest in rewriting all of our tests is zero

1
andrey.a.devyatkin avatar
andrey.a.devyatkin

I couldn’t find any info in writing. They showed it as an experimental feature during presentation. Probably more info will come later

:--1:1
Chris Fowles avatar
Chris Fowles

i think you can think of waypoint like a super-powered hcl makefile

Chris Fowles avatar
Chris Fowles

personally i’m keen to start playing with it because i’ve typically found makefiles to be non-transparent to those that are unfamiliar with them

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the other superpowered HCL makefile alternative

:--1:1
andrey.a.devyatkin avatar
andrey.a.devyatkin

nice! Thanks for sharing @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@antonbabenko talks about the testing provider https://youtu.be/nphb0utdKEY?t=666

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

IMO looks like it is complementary to terratest

antonbabenko avatar
antonbabenko

In its current form - yes, I agree, but later the Terraform testing framework (to be developed) should be able to check everything that is in tfstate, and allow running a “plugin” which does the subset of assertions via HCL that can currently be done in terratest.

antonbabenko avatar
antonbabenko

Let’s come back to this in 2-3 months. ;)

Matt Gowie avatar
Matt Gowie

Watched the walkthrough — good stuff! Happy to get some more information on that subject. I could definitely see that pattern being valuable. If that was built first-class into Terraform instead of through a provider, that would be sweet. I’ll wait for 0.15.

:--1:1
OliverS avatar
OliverS

Hey does anyone know how to see the user_data in a plan with terraform 0.11:

-/+ aws_launch_configuration.master-us-east-1c-masters-... (new resource required)
      id:                                        "master-us-east-1c.masters....648338700000004" => <computed> (forces new resource)
      ...
      user_data:                                 "39e5e6f604706....43600e3513aaa2616" => "093492cc54eea....7c0a89df99fa72783" (forces new resource)

It is one of the reasons for the new resource, so just seeing the hash of the script is not very helpful.
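One way to confirm the suspicion (a sketch; it assumes the aws provider’s usual behavior of storing user_data as its SHA-1 sum, and the file path is illustrative):

```hcl
# 0.11-style interpolation; if this matches the value in the plan,
# the "number" is just the SHA-1 of the rendered user_data script
output "user_data_sha" {
  value = "${sha1(file("${path.module}/data/my_user_data.sh"))}"
}
```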

PePe avatar

userdata is base64 encoded

PePe avatar

so whatever value you get you will need to decode it

OliverS avatar
OliverS

I tried base64 -d and putting the value in, but the result is nonsense. I’m thinking the user_data number shown is a hash into some table of text data. Maybe that data is base64 encoded.

PePe avatar

maybe is truncating it

PePe avatar

ohhhh that is on a plan?

PePe avatar

mmm I think the output will be truncated, I guess you will have to compare at git level

OliverS avatar
OliverS

This user_data is generated by kops. I just noticed (first time!) that a folder, /data, gets created in the module, containing the scripts created by kops, and those are referenced by user_data in the launch configs. Indeed we do save those in git, so I was able to see what changed. It would have been nice to get such a diff in the terraform plan, but git diff is better than just the hash I was seeing before!

Thanks for your help @PePe

PePe avatar

np

2020-10-13

Milosb avatar
Milosb
locals {
  sns_list = toset(["first", "second", "third"])
}

# SNS
resource "aws_sns_topic" "this" {
  for_each = local.sns_list

  name = "${each.key}-${var.environment}"

  tags = merge(
    {
      Environment = var.environment
      Terraform   = "true"
    },
  var.tags)
}

data "aws_iam_policy_document" "this" {
  for_each = local.sns_list

  policy_id = "__default_policy_ID"

  statement {
    actions = [
      "SNS:Publish",
      "SNS:GetTopicAttributes"
    ]

    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    resources = [
      aws_sns_topic.this[each.value].arn,
    ]

    sid = "__default_statement_ID"
  }

  depends_on = []

}

resource "aws_sns_topic_policy" "this" {
  for_each = local.sns_list

  arn    = aws_sns_topic.this[each.value].arn
  policy = data.aws_iam_policy_document.this[each.value].json
}

Error: Invalid index

on main.tf line 150, in resource “aws_sns_topic_policy” “this”:
 150:   policy = data.aws_iam_policy_document.this[each.value].json
    |----------------
    | data.aws_iam_policy_document.this is object with 2 attributes
    | each.value is “third”

The given key does not identify an element in this collection value.

Hi all, does this look like a bug? It works ok if I create everything from scratch, but if I want to change a resource name or add a new one, it complains with this error.
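One workaround sometimes used for this class of for_each indexing error (a hedged sketch, not a confirmed fix for this exact case) is to drop the data source and inline the policy with jsonencode, so the policy resource no longer indexes into a second for_each collection:

```hcl
resource "aws_sns_topic_policy" "this" {
  for_each = local.sns_list

  arn = aws_sns_topic.this[each.key].arn

  # Same policy as the data source, built inline
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "__default_policy_ID"
    Statement = [{
      Sid       = "__default_statement_ID"
      Effect    = "Allow"
      Principal = { AWS = "*" }
      Action    = ["SNS:Publish", "SNS:GetTopicAttributes"]
      Resource  = aws_sns_topic.this[each.key].arn
    }]
  })
}
```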

joshmyers avatar
joshmyers

Anyone got a pro/con of Terraform Cloud vs e.g. Terraform + Atlantis? Other than pricing. What gotchas are there with TF Cloud? Easy to move infra into, hard to move out of/workspaces/runners etc…..

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

• Atlantis lacks triggers, you cannot have one workspace trigger another. TFC does.

• Atlantis lacks webhooks, so you cannot easily integrate it with other pipelines or continuous delivery frameworks. TFC does.

• Atlantis can only be self-hosted. TFC is also a SaaS.

• TFC cannot comment on PRs (Sometimes this is nice). Atlantis can.

• Moving in/out of TFC is equally difficult.

• TFC does not let you bring your own container without using on-prem runners.

Atlantis was a phenomenal first stab at gitops with terraform. It was pioneering software in a time when most terraform was run locally. However, as release engineering has evolved, there’s a clear need for coordinated rollouts spanning tool chains (standard app deployments, db migrations, serverless deployments, infrastructure deployments, etc). Coordinating this via multiple disjoint PRs is not scalable as teams grow. Thus the atlantis flavor of gitops (operations by github comments) isn’t as optimal in larger team environments with lots of services.

:100:4
joshmyers avatar
joshmyers

• Atlantis lacks triggers, you cannot have one workspace trigger another. TFC does.
How does this work? Workspace 1 triggers a plan/apply of workspace 2 after workspace 1 has been applied?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

won’t auto apply though

joshmyers avatar
joshmyers

Gotcha

joshmyers avatar
joshmyers

Cheers @Erik Osterman (Cloud Posse), hope you’re good!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes! had many fun projects this year…. the tfc being among them

1
PePe avatar

@Erik Osterman (Cloud Posse) if atlantis supported similar functionality to TF Cloud for workspaces, and also commented back on the PR, would that be what’s needed to make atlantis better?

PePe avatar

I wonder if TFC workspaces are lacking features that would be nice to have, but since I don’t use it I’m unaware

joshmyers avatar
joshmyers

Main thing I think Atlantis is missing is triggering projects via API and not just PR

joshmyers avatar
joshmyers

How do TFC workspaces work differently from normal workspaces? Guessing this means once you’re in TFC land, there is no easy getting back out…

joshmyers avatar
joshmyers

I should probably have a poke myself…

joshmyers avatar
joshmyers

Also with Atlantis ALL THE COMMENTS, 0.14 with concise diffs should help this though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
we also have 2 forthcoming PRs: one for TFE/TFC (https://github.com/cloudposse/terraform-tfe-cloud-infrastructure-automation/pull/1) and one for the TFC agents (https://github.com/cloudposse/terraform-kubernetes-tfc-cloud-agent/pull/1).
PePe avatar

there is an intention to add api calls

PePe avatar

but the lack of more active maintainers makes it pretty slow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thats cool! hadn’t seen that PR. it’s from back in august though… not optimistic.

PePe avatar

maybe is time to fork atlantis into another project with another name for good

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heh, call it atlanta

PePe avatar

peplantis

PePe avatar

lol

Sebastian Stadil avatar
Sebastian Stadil

That answer should be pinned imho.

Sebastian Stadil avatar
Sebastian Stadil

Good rundown, @Erik Osterman (Cloud Posse)

kskewes avatar
kskewes

Any idea how tfc enterprise pricing works? Experimenting with the tfc free plan, and secret management across multiple workspaces is toilsome considering we have 50-odd state files (directories -> tfc workspaces). An agent like your cloudposse module is ideal. No access/secret keys.

PePe avatar

no one knows about pricing, and that is a huge problem with Hashicorp

Sebastian Stadil avatar
Sebastian Stadil

@kskewes we found out a few things, and trying our best to publish at https://remotebackendpricing.com/ (pending addition of best community understanding of Hashi prices)

Sebastian Stadil avatar
Sebastian Stadil

First, Terraform Enterprise is priced on number of workspaces, and as a product is being phased out in favor of Terraform Cloud (Business Tier).

Sebastian Stadil avatar
Sebastian Stadil

Second, Terraform Cloud (Business Tier) is priced per user, plus a fee for a number of applies per month, plus a fee for the number of concurrent runs you want.

Sebastian Stadil avatar
Sebastian Stadil

Third, they discount / bundle things into an offering based on their guessed ability to pay, i.e. your funding level, or size of your company, which is why prices fluctuate so much from one company to the next.

Sebastian Stadil avatar
Sebastian Stadil

If you get a quote and want to help others know pricing, PRs are welcome at https://github.com/Scalr/remote-backend-pricing

kskewes avatar
kskewes

Cheers, sounds like we need to contact the sales team. FWIW, I see private scalr pricing is also hidden - is there a runner we can use with the published pricing?

PePe avatar

so basically they have a way to screw big customers because they have more money, instead of making the prices transparent

:100:1
Sebastian Stadil avatar
Sebastian Stadil

@kskewes price will be published before the end of next week

PePe avatar

When working at EA I remember we asked about Vault enterprise and when we told them we were EA the tone in the conversation changed and they sent us a quote for 2 Million a year

1
Sebastian Stadil avatar
Sebastian Stadil

(and yes it pains me to not have it out in the open yet)

kskewes avatar
kskewes

Cool cool, no biggie. I understand that at some point customers will engineer ways to reduce cost by circumventing published processes - shared users/etc.

1
kskewes avatar
kskewes

TFC almost looks like it could be a per minute pricing like Gitlab CI minutes

kskewes avatar
kskewes

maybe plus per user that need console access/etc and if need be some base. But I have no idea of their costs and constraints, just throwing something out there based on a similar “remote execution as a service”.

joshmyers avatar
joshmyers

“plus a fee for a number of applies per month” any idea what this is?

Sebastian Stadil avatar
Sebastian Stadil

You mean how much it is?

joshmyers avatar
joshmyers

Yes

Callum Robertson avatar
Callum Robertson

Atlantis is the hotness

Callum Robertson avatar
Callum Robertson

what provisioner is running in that null_resource?

Cody Moore avatar
Cody Moore

Not sure if this is the right place to post this, but I was curious if I can get some eyes on: https://github.com/cloudposse/terraform-aws-eks-node-group/pull/36

feat: allow ebs launch template encryption by dotCipher · Pull Request #36 · cloudposse/terraform-aws-eks-node-group

what Surface variable for boolean flag of launch_template_disk_encryption Use launch_template_disk_encryption to flip flag of generated launch_template.ebs.encryption why Allow EBS encryption r…

1
Matt Gowie avatar
Matt Gowie

Your best spot for these is #pr-reviews. But happy to check it out.

feat: allow ebs launch template encryption by dotCipher · Pull Request #36 · cloudposse/terraform-aws-eks-node-group

what Surface variable for boolean flag of launch_template_disk_encryption Use launch_template_disk_encryption to flip flag of generated launch_template.ebs.encryption why Allow EBS encryption r…

:--1:1
Cody Moore avatar
Cody Moore

Thanks

Matt Gowie avatar
Matt Gowie

@ unfortunately, it looks like there are issues with running our tests against this repo due to the module targeting 0.13 but we’re using 0.12 to run the tests. This is the first I’ve seen this, so I’ve brought it up with the rest of the contributor team. We’ll get this merged once I can get that sorted out and those tests pass.

Matt Gowie avatar
Matt Gowie

And I spoke too soon. Easier than I thought — Getting it sorted now.

:--1:1
Cody Moore avatar
Cody Moore

Awesome thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve released our first version of the terraform module for the terraform (tfc) cloud agent for kubernetes: https://registry.terraform.io/modules/cloudposse/tfc-cloud-agent/kubernetes/latest

:--1:3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This enables Terraform Cloud for Business users to run their plans using a custom docker image (or the official one), as well as using IRSA

2020-10-12

Laurynas avatar
Laurynas

Hi, what’s the best way to output everything from terraform module? for example if I have module "alb" {} I can use

output "alb" {
  value = module.alb
}
RB avatar

Yes you can output the entire module as an output

RB avatar

The module that uses the above module would then be able to reference the output using module.whatever.alb.whateveroutputname

1
Laurynas avatar
Laurynas

but that makes accessing outputs a bit strange:

listener_arn = data.terraform_remote_state.alb.outputs.alb.alb_https_listener_arn
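A middle ground is to re-export just the attributes consumers need, which keeps the remote-state path flat (the output and attribute names here are illustrative):

```hcl
output "alb_https_listener_arn" {
  value = module.alb.https_listener_arn
}
```

Consumers then read data.terraform_remote_state.alb.outputs.alb_https_listener_arn directly, and IDEs have a concrete output to complete against.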
Aleksey Tsalolikhin avatar
Aleksey Tsalolikhin

Hello. Can anyone suggest a workaround for managing a WAF v2 ACL with Terraform when the ACL is nested more than 3 levels, please? https://github.com/terraform-providers/terraform-provider-aws/issues/15580#issuecomment-706613897

wafv2 rate_based_rule with nested scopedown and/or not working · Issue #15580 · terraform-providers/terraform-provider-aws

This issue was originally opened by @jpatallah as hashicorp/terraform#26530. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version terra…

Alex Jurkiewicz avatar
Alex Jurkiewicz

We ran into this bug and the conclusion was “we can’t”. The WAFv2 resources are a bit hacky; as I’m guessing you found out, they can’t be infinitely nested the way the AWS API supports

wafv2 rate_based_rule with nested scopedown and/or not working · Issue #15580 · terraform-providers/terraform-provider-aws

This issue was originally opened by @jpatallah as hashicorp/terraform#26530. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version terra…

Alex Jurkiewicz avatar
Alex Jurkiewicz

There is a depressing discussion in one of the tickets about this limitation, where it was suggested to convert the Terraform resource representation from HCL blocks to inline JSON. The hashicorp response was “this would fix the problem but it’s ugly, so we won’t do it”

Aleksey Tsalolikhin avatar
Aleksey Tsalolikhin

Thanks, Alex!! That’s good to know.

Jon avatar

Hi everyone. I’m running into an issue trying to redeploy CloudTrail at the organizational level, but keep running into an issue getting the module to apply successfully. The credentials that I am using have administrator access but I keep running into a permissions issue. Anyone have any idea?

module "aws_cloudtrail" {
  source  = "cloudposse/cloudtrail/aws"
  version = "0.11.0"

module.aws_cloudtrail.aws_cloudtrail.default[0]: Creating…

Error: Error creating CloudTrail: InsufficientEncryptionPolicyException: Insufficient permissions to access S3 bucket cloudtrail-bucket or KMS key <<KMS_ARN>>

  on .terraform/modules/aws_cloudtrail/main.tf line 13, in resource “aws_cloudtrail” “default”:
  13: resource “aws_cloudtrail” “default” {

Releasing state lock. This may take a few moments…
ERROR: Job failed: exit code 1

Jon avatar

found out what that error message is.

*InsufficientEncryptionPolicyException* This exception is thrown when the policy on the S3 bucket or KMS key is not sufficient. HTTP Status Code: 400
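The usual fix is to grant CloudTrail use of the key in the KMS key policy itself; a sketch of the relevant statement (the account id in the trail ARN pattern is illustrative):

```hcl
data "aws_iam_policy_document" "cloudtrail_kms" {
  statement {
    sid    = "AllowCloudTrailEncryptLogs"
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    actions   = ["kms:GenerateDataKey*"]
    resources = ["*"]

    # Only for trails in this account
    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:cloudtrail:arn"
      values   = ["arn:aws:cloudtrail:*:123456789012:trail/*"]
    }
  }
}
```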

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please look at this module for the required permissions https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket

cloudposse/terraform-aws-cloudtrail-s3-bucket

S3 bucket with built in IAM policy to allow CloudTrail logs - cloudposse/terraform-aws-cloudtrail-s3-bucket

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and this is a working example of using both Cloudtrail and Cloudtrail bucket modules https://github.com/cloudposse/terraform-aws-cloudtrail/blob/master/examples/complete/main.tf

cloudposse/terraform-aws-cloudtrail

Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs - cloudposse/terraform-aws-cloudtrail

Jon avatar

thank you @Andriy Knysh (Cloud Posse)

sheldonh avatar
sheldonh

Anyone know how to add pagerduty subscribers to an escalation policy/service in pagerduty? I can’t figure out the provider for this. I tried adding someone and they weren’t able to be an observer, because an admin cannot take the observer team role, despite the fact they aren’t an admin in this service.

Alex Jurkiewicz avatar
Alex Jurkiewicz

not quite sure what you’re asking, but does this answer your question?

data pagerduty_user default {
  email = "[email protected]"
}

# team lookup assumed; the original snippet referenced it without defining it
data pagerduty_team default {
  name = "My Team"
}

resource pagerduty_escalation_policy default {
  name  = "My Escalation Policy"
  teams = [
    data.pagerduty_team.default.id
  ]

  # Notify Alex
  rule {
    escalation_delay_in_minutes = 5
    target {
      id = data.pagerduty_user.default.id
    }
  }
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

you can then create a pagerduty_service resource and specify the escalation policy’s id as the value for escalation_policy

sheldonh avatar
sheldonh

I want to add the subscribers only, not the actual responders. Basically, I see a lot of cc’d folks in chat when activity is on pagerduty already. I’d like to be able to define the subscriber/observer list so they get updated on new incidents but not as a responder.

Alex Jurkiewicz avatar
Alex Jurkiewicz

gotcha. We wanted this too. It’s difficult with how Pagerduty works. What you are after is to set a response play on your service

Alex Jurkiewicz avatar
Alex Jurkiewicz

sadly, this isn’t available via the Terraform provider yet

Alex Jurkiewicz avatar
Alex Jurkiewicz

I saw we found the same GitHub ticket @sheldonh

sheldonh avatar
sheldonh

sheldonh avatar
sheldonh

Great minds think alike

2020-10-10

John McGehee avatar
John McGehee

I have adopted your label Terraform modules and your namespace-environment-name-attribute naming style. I have added an additional naming rule: when the resource is global, such as S3 buckets, I specify namespace, using an abbreviation for my company name (sometimes adding my department). When the resource is not global, such as for EC2 instances, I omit namespace. What do you think of my additional rule?

:--1:1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it suits your needs, and the names are consistent and unique, should be fine

John McGehee avatar
John McGehee

Thank you for your reply. Yes, it does seem to work fine…so far. I’m trying to see if anyone out there can see a flaw in my plan that I cannot.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use namespace to identify the company. We add it to all the resources for consistency

kskewes avatar
kskewes

We use company-prd(or stg etc)-region(or global)-thing.

John McGehee avatar
John McGehee

At the risk of wearing out my welcome today, I’ll ask another question. I just can’t find any documentation, only lots of examples on GitHub. What is module.this?

:100:1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is our standard pattern to provide standard inputs to all modules, and simplify module instantiation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so now all standard inputs (those that we use in all modules) are in one place (in context.tf)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t have to provide namespace, environment, stage, name when calling modules https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L57

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
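As a sketch of the pattern (registry sources and label values are illustrative): the standard inputs are set once on a null-label instance named `this`, and its `context` output is passed to other modules.

```hcl
# Hypothetical sketch of the context.tf pattern
module "this" {
  source    = "cloudposse/label/null"
  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  context = module.this.context
  # module-specific inputs go here
}
```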

John McGehee avatar
John McGehee

Thank you for your answers. I like your label modules and this is a great application.

:--1:1

2020-10-09

Steve Wade avatar
Steve Wade

@loren

:--1:1
loren avatar
loren

welcome!

Jagan Rajagopal avatar
Jagan Rajagopal

Hi @Erik Osterman (Cloud Posse), what is the best way to learn Terraform and integrate it with a CI and CD pipeline?

Yoni Leitersdorf avatar
Yoni Leitersdorf

I know you referred to Erik, but I’ll add what helped me: Terraform course in Udemy (not expensive at all) and a free account with AWS and CircleCI. (or TF Cloud, if you prefer)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s a great question! I don’t think I can answer you as comprehensively as I’d like to… I think @antonbabenko would probably be able to direct you to the best learning materials.

Here’s my “high level” pitch:

Learn by doing, not just by reading. First identify what you want to achieve (because you need a goal), then read and research enough to get started and go from there.

Study our terraform modules. I’d like to think every single one of our modules is a reference example for how to design and implement composable, re-usable, testable modules.

Get started early writing tests. It’s a hard habit to introduce later. We use terratest and every one of our modules has a simple example of that.

• HashiCorp has invested heavily in their online curriculum and even offers certifications now. Their docs are free, check them out here: https://learn.hashicorp.com/terraform

• For Terraform CI, GitHub Actions are sufficient for testing. For a proper Terraform CD workflow, I think your best bet is to start with a SaaS solution and learn from that. Your options are Terraform Cloud, Scalr, Spacelift, and maybe Env0 (haven’t checked these guys out yet). Terraform CD is non-trivial to do well. You can easily stick it in any pipeline, but a well-built Terraform CD pipeline will have a terraform plan → planfile → approval → apply workflow. You’ll need to stash the planfile somewhere, and the planfile may contain secrets.

Check out our weekly #office-hours → cloudposse.com/office-hours (podcast.cloudposse.com and youtube.com/c/cloudposse). They are free and you can ask questions and get answers from our community of experts.

Hangout in watering holes like this one. You’ll learn a lot in a short amount of time.
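That plan → planfile → approval → apply flow can be sketched as follows (the artifact bucket and paths are placeholders, not a prescribed setup):

```sh
# Hedged sketch of a Terraform CD pipeline
terraform plan -out=plan.tfplan            # produce a planfile for review
aws s3 cp plan.tfplan s3://ci-artifacts/   # stash it (it may contain secrets!)
# ... human approval gate in your CI system ...
aws s3 cp s3://ci-artifacts/plan.tfplan .
terraform apply plan.tfplan                # apply exactly what was reviewed
```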

:--1:5
2
antonbabenko avatar
antonbabenko

That is a great and very detailed answer; I usually give people the same one myself, but in a shorter way. I personally see a lot of value in reading documentation from A to Z (or to K, Keywords) when I am learning something.

Also, reading open-source projects and trying to contribute there is a very good learning point… though it requires more than one PR to start enjoying the process and seeing value in it.

antonbabenko avatar
antonbabenko

You can also do workshop materials yourself for free - https://github.com/antonbabenko/terraform-best-practices-workshop

antonbabenko/terraform-best-practices-workshop

Terraform Best Practices - workshop materials. Contribute to antonbabenko/terraform-best-practices-workshop development by creating an account on GitHub.

Jagan Rajagopal avatar
Jagan Rajagopal

thanks a lot

Aumkar Prajapati avatar
Aumkar Prajapati

Anyone have a solution for making use of variables in backend.tf to define statefiles via tfvars?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes/no. Strictly speaking, it’s not possible.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You have a few options though. Terraform supports passing backend parameters via environment variables. We used to use this extensively.
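Concretely, with the S3 backend this means partial configuration at init time (bucket/key/table names below are placeholders); the `TF_CLI_ARGS_init` line shows the environment-variable form that tools like tfenv automate:

```sh
# Hypothetical sketch: partial backend configuration supplied at init time
terraform init \
  -backend-config="bucket=my-tf-state" \
  -backend-config="key=prod/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=my-tf-locks"

# Equivalent via an environment variable
export TF_CLI_ARGS_init='-backend-config=bucket=my-tf-state -backend-config=key=prod/terraform.tfstate'
terraform init
```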

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The other option is that you can easily generate your backend.tf as a JSON file (e.g. backend.tf.json). This is the route we’re taking today because it’s easier for developers to understand.
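For example, a generated backend.tf.json might look like this (all values are placeholders):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "my-tf-state",
        "key": "prod/terraform.tfstate",
        "region": "us-east-1",
        "dynamodb_table": "my-tf-locks"
      }
    }
  }
}
```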

Aumkar Prajapati avatar
Aumkar Prajapati

Got any examples of that first solution?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/tfenv

Transform environment variables for use with Terraform (e.g. HOSTNAME → TF_VAR_hostname) - cloudposse/tfenv

Aumkar Prajapati avatar
Aumkar Prajapati

I don’t have any issues with doing it as an env var

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We used this simple cli to make it easier

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, and lastly, #terragrunt can also do this for you. But if you’re not using it, it’s a heavy handed solution for simply managing the backend config.

Aumkar Prajapati avatar
Aumkar Prajapati

Yeah, my intention here is to keep things as simple and straightforward as possible as it’s fairly minor in usage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform Without Wrappers is AWESOME!

One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Environment Variables - Terraform by HashiCorp

Terraform uses environment variables to configure various aspects of its behavior.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
SweetOps #terraform for January, 2019

SweetOps Slack archive of #terraform for January, 2019. terraform Discussions related to Terraform or Terraform Modules

sheldonh avatar
sheldonh

Finally got some interest in a gitops workflow for security group whitelisting. I can plug this in with Terraform Cloud. Would like to know if there is anything someone has done to post the preview of changes from Terraform Cloud as a comment in GitHub pull requests. The person that will approve the pull requests doesn’t have access to Terraform Cloud, so I’d like to show the plan output directly in the pull request, similar to Atlantis.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haven’t seen anything for that - not sure if it’s possible natively. Of course, anything can be engineered using APIs, etc. I don’t think that’s your goal though?

sheldonh avatar
sheldonh

Right. I’m trying to ensure it’s as accessible as possible to reduce friction while still promoting a clean git history and workflow. I’ll see what I can swing, just was hoping for something. I might try out the Terraform command CLI library I used for Packer, as it calls the API without the Terraform CLI directly. It might be able to help.

PePe avatar

You could have Atlantis just comment the plan on a per-PR basis and have TF Cloud execute it

sheldonh avatar
sheldonh

Sounds like more infra to manage. :-) I found a github action and with terraform api and libraries I bet I could figure out how to extract this from terraform cloud. Wish me luck

PePe avatar

Good luck

1

2020-10-08

Miguel avatar
Miguel

hi guys! first of all, it’s my first comment so let me thank you for your repos, they are very helpful for me. I’m reading the one for autoscaling but I have a doubt about how to use it with custom metrics. I have created policies for predefined metrics, but I’m not sure how to use it when creating custom metrics. Any tip? Thank you and keep up the hard work!

cloudposse/terraform-aws-ecs-cloudwatch-autoscaling

Terraform module to autoscale ECS Service based on CloudWatch metrics - cloudposse/terraform-aws-ecs-cloudwatch-autoscaling

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a pretty simple module with a few canned policies.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you want to do anything more, probably better just to use the raw resources

Miguel avatar
Miguel

hey, just using it as a guide, but my question is more about functionality: how could I use aws_appautoscaling_policy with custom metrics?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyone know if Terraform Cloud Agents support a healthcheck endpoint or health check command? e.g. something like https://www.terraform.io/docs/enterprise/admin/monitoring.html

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  -name <name>
    An optional user-specified name for the agent. This name may be used in
    the Terraform Cloud user interface to help easily identify the agent.

    Default: The agent's ephemeral ID, assigned during boot.
    Environment variable: TFC_AGENT_NAME

  -log-level <level>
    The log verbosity expressed as a level string. Level options include
    "trace", "debug", "info", "warn", and "error".

    Default: info
    Environment variable: TFC_AGENT_LOG_LEVEL

  -data-dir <path>
    The path to a directory to store all agent-related data, including
    Terraform configurations, cached Terraform release archives, etc. It is
    important to ensure that the given directory is backed by plentiful
    storage.

    Default: ~/.tfc-agent
    Environment variable: TFC_AGENT_DATA_DIR

  -single
    Enable single mode. This causes the agent to handle at most one job and
    immediately exit thereafter. Useful for running agents as ephemeral
    containers, VMs, or other isolated contexts with a higher-level scheduler
    or process supervisor.

    Default: false
    Environment variable: TFC_AGENT_SINGLE

  -disable-update
    Disable automatic core updates.

    Default: false
    Environment variable: TFC_AGENT_DISABLE_UPDATE

  -address <addr>
    The HTTP or HTTPS address of the Terraform Cloud API.

    Default: https://app.terraform.io
    Environment variable: TFC_ADDRESS

  -token <token>
    The agent token to use when making requests to the Terraform Cloud API.
    This token must be obtained from the API or UI.  It is recommended to use
    the environment variable whenever possible for configuring this setting due
    to the sensitive nature of API tokens.

    Required, no default.
    Environment variable: TFC_AGENT_TOKEN

  -h
    Display this message and exit.

  -v
    Display the version and exit.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no subcommand available to test health

Sebastian Stadil avatar
Sebastian Stadil

For monitoring the health of the agents, right?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’d like to use for_each to create a set of resources defined by the product of two arrays:

resource "aws_iam_role_policy_attachment" "default" {
  for_each = setproduct(["role1", "role2"], ["policy1", "policy2", "policy3"])
  role = each.key[0]
  policy_arn = each.key[1]
}

(Would create six role policy attachments.) But this gives an error:

The given "for_each" argument value is unsuitable: the "for_each" argument
must be a map, or set of strings, and you have provided a value of type list
of tuple.

Suggestions for how to implement this? The list of policies I’m attaching is hardcoded, so I could create three resource blocks. But surely there’s a better way!!

Alex Jurkiewicz avatar
Alex Jurkiewicz

I came up with this, which is pretty ugly:

locals {
  combinations = setproduct(["role1", "role2"], ["policy1", "policy2", "policy3"])
  combination_map = {for item in local.combinations : "${item[0]}-${item[1]}" => item}
}
resource "aws_iam_role_policy_attachment" "default" {
  for_each = local.combination_map
  role = each.value[0]
  policy_arn = each.value[1]
}

You can remove the intermediate variables:

resource "aws_iam_role_policy_attachment" "default" {
  for_each = {for item in setproduct(["role1", "role2"], ["policy1", "policy2", "policy3"]) : "${item[0]}-${item[1]}" => item}
  role = each.value[0]
  policy_arn = each.value[1]
}

rei avatar

Hi, I cannot come with a better solution than your last code block…

loren avatar
loren

makes sense to me. note that you can multi-line it to make it more readable
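For instance, the inline version reads better spread across lines (a sketch of the same logic; note `each.value` carries the tuple once the map keys are strings):

```hcl
resource "aws_iam_role_policy_attachment" "default" {
  for_each = {
    for item in setproduct(
      ["role1", "role2"],
      ["policy1", "policy2", "policy3"]
    ) : "${item[0]}-${item[1]}" => item
  }

  role       = each.value[0]
  policy_arn = each.value[1]
}
```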

2020-10-07

Alex Jurkiewicz avatar
Alex Jurkiewicz

The inability to put extra newlines in Terraform expressions can really hurt readability. Especially since Terraform syntax highlighting / editor support is so poor

RB avatar

Yes. It would be nice if you could break up lines with a backslash like in shell code. Perhaps there is an open ticket with terraform? If not, it would be a good one to write up

Alex Jurkiewicz avatar
Alex Jurkiewicz

I already have enough “will never get worked on” open issues in that repo

RB avatar

don’t be discouraged!

loren avatar
loren

in many places, you can use parens to get multi-line support, works fine in the ternary ? : also… e.g.

foo = (
  local.test) ? true : (
  local.anothertest) ? false : (
  local.end
)
loren avatar
loren

just note the careful placement of opening and closing parens

Chris Fowles avatar
Chris Fowles

what version of terraform are you talking about?

this is perfectly valid:

service_dependency = {
  for l in chunklist(flatten([
    for k, v in local.technical_services :
    [
      for s in v.depends_on :
      [k, s]
    ] if can(v.depends_on)
  ]), 2) : join("_", l) => l
}
Mads Hvelplund avatar
Mads Hvelplund

Does anyone have expirence using the Terraform ACME provider with AWS Route 53?

My problem is that I have a module B that uses the ACME provider to make a certificate. Module B is included in module A, and has its provider injected, like this:

Module A:

provider "aws" {
  alias = "dns"
  ...
}

module "b" {
  providers = { aws = aws.dns }
  ...
}

Module B:

...

resource "acme_certificate" "certificate" {
  ...
  dns_challenge {
    provider = "route53"
  }
}

When the ACME provider in B performs the challenge, it doesn’t use the role and credentials from module A’s “dns” provider. Instead it seems to use whatever credentials I have in the shell where I run Terraform. I know that I can provide a “config” blob to the “dns_challenge” block, but I only have temporary credentials, so how would I extract those from module A’s provider?

Has anyone had this problem?

Mads Hvelplund avatar
Mads Hvelplund
ACME: acme_certificate - Terraform by HashiCorp

Provides a resource to manage certificates on an ACME CA.

Mads Hvelplund avatar
Mads Hvelplund

Does anyone have a good workaround to avoid a setup like:

  dns_challenge {
    provider = "route53"

    config = {
      AWS_ACCESS_KEY_ID     = "${var.aws_access_key}"
      AWS_SECRET_ACCESS_KEY = "${var.aws_secret_key}"
      AWS_DEFAULT_REGION    = "us-east-1"
    }
  }
loren avatar
loren

i don’t think it’s possible to avoid this… the acme provider resources can’t access the aws provider credentials. any such creds need to come from the config of the acme resource

Mads Hvelplund avatar
Mads Hvelplund

i found a workaround where i use a local-exec provisioner on a null resource to assume the role i want, with the aws cli. then i grab the output and parse it

Mads Hvelplund avatar
Mads Hvelplund

but it’s a hack

loren avatar
loren

yeah sure, you don’t actually have to use vars

Jaeson avatar
Jaeson

When trying to set up dual replication between two sets of buckets in different regions, TF informed me that I had a cycle error. I tried to control it through a variable by making the block that caused the error dynamic; I didn’t really mind the thought of running it twice, once for each replication direction. But TF seems a little pessimistic. Did I do something wrong in my configuration, or is TF just really not going to let me control this through a variable?

I found this post which describes one way to handle this, but I’m wondering if there is a better way?

Jaeson avatar
Jaeson

nvm. When I took a closer look at this, I decided to set up the replication from DR to prod as part of the step to switch over to DR.

Nitin Prabhu avatar
Nitin Prabhu

Hello guys. Does anyone have a recommendation for testing Terraform modules, like say an AWS EKS module? We are currently using https://github.com/cloudposse/terraform-aws-eks-cluster and testing it manually, but wanted to know what other people are doing. I am aware that we can use terratest, but the problem with that is it will provision the resources on AWS, which is time consuming plus it will cost money

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Zach avatar

Won’t cover all AWS services, and I personally have not used this (just found it recently) but you could check into LocalStack https://github.com/localstack/localstack

localstack/localstack

A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline! - localstack/localstack

Zach avatar

there’s a way to configure terraform provider for it

Nitin Prabhu avatar
Nitin Prabhu

thanks Zach had a look at localstack EKS module is not supported in community version

Zach avatar

hah. enterprise gonna enterprise

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’d just like to qualify expensive. For most businesses, the most expensive resource is the human resource. E.g. A business is paying $1M/year for AWS and $20M a year for humans/payroll. AWS is letting you pay for fractional usage to run some tests. This is a very scalable way to manage the cost of testing. Instead, reimplementing this with something like localstack trades this predictable cost of using AWS with a highly variable, unpredictable cost of using an AWS emulator. You’re going to need to manage the additional associated techdebt and solve for the inevitable inconsistencies and bugs.

:100:1
:--1:1
Nitin Prabhu avatar
Nitin Prabhu

thanks

Release notes from terraform avatar
Release notes from terraform
05:44:16 PM

v0.14.0-alpha20201007 0.14.0 (Unreleased) UPGRADE NOTES: configs: The version argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. (https://github.com/hashicorp/terraform/issues/26135)

configs: deprecate version argument inside provider configuration blocks by mildwonkey · Pull Request #26135 · hashicorp/terraform

The version argument is deprecated in Terraform v0.14 in favor of required_providers and will be removed in a future version of terraform (expected to be v0.15). The provider configuration document…

Laurynas avatar
Laurynas

Hi, I have the following terraform structure where I use diferent variable files for different environments.

├── main.tf
├── mock_api.tf
├── variables.tf
└── vars
    ├── dev.tfvars
    └── prod.tfvars

During the deployment I simply run terraform init -reconfigure -backend-config="bucket= and then terraform plan -out=api_tfplan -var-file=${tf_var_file}. However, recently I added mock_api.tf and it only needs to be applied to the dev environment. What is the best way to do that? I could use if env != prod in resource fields, but mock_api doesn’t have resources. I purposely didn’t use modules because dev/prod (and other) environments needed to be the same but with different variables.

Let me know if you have some ideas about how mock_api.tf can be applied only for the dev env.

loren avatar
loren

use count or for_each on all the resources in mock_api.tf, and in the expression test a variable to turn the resources on/off

loren avatar
loren

or with tf0.13, drop mock_api.tf into a subdirectory, modules/mock-api/main.tf, and call it with a module reference from ./main.tf, and use count on the module reference with a variable to turn it on/off. this way you don’t mess with every resource in the module, just the reference to the module
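A sketch of that approach (the variable name and module path are hypothetical):

```hcl
# Hypothetical sketch (Terraform 0.13+): gate the mock API behind a variable
variable "mock_api_enabled" {
  type    = bool
  default = false
}

module "mock_api" {
  source = "./modules/mock-api"
  count  = var.mock_api_enabled ? 1 : 0
}
```

then set mock_api_enabled = true only in vars/dev.tfvars.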

:--1:1

2020-10-06

Alex Jurkiewicz avatar
Alex Jurkiewicz
locals {
  endpoint_config = local.is_test ? {"writer": writer_endpoint} : {"writer": writer_endpoint, "reader": reader_endpoint }
}

Is there some way to refactor this so I don’t have to repeat the “writer” config? I can think of this:

locals {
  endpoint_config = merge({"writer": writer_endpoint}, local.is_test ? {} : {"reader": reader_endpoint })
}

But it’s a little ugly IMO

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I also struggle to see a better way.

IMO this is cleaner as it’s very descriptive. It’s very similar to what you wrote in your second solution, but more verbose.

locals {
  writer_config = { "writer": writer_endpoint }
  reader_config = { "reader": reader_endpoint }
  endpoint_config = local.is_test ? local.writer_config : merge( local.reader_config, local.writer_config )
}
:--1:1

2020-10-05

rei avatar

Hi, does someone know the difference between: https://github.com/cloudposse/terraform-aws-eks-workers/ https://github.com/cloudposse/terraform-aws-eks-node-group And if any, which one should I use? Deploying a new infra

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The terraform-aws-eks-workers module uses the original self-managed auto-scaling groups with EC2s

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the terraform-aws-eks-node-group module implements the fully managed node groups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note, you can mix and match. we’ve deployed clusters that use both.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we used the eks-workers for running jenkins and the eks-node-gruop for everything else (for example)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please don’t cross post unless you’re cross-posting the link:

https://sweetops.slack.com/archives/CDYGZCLDQ/p1601895829001800

Hi, does someone know the difference between: • https://github.com/cloudposse/terraform-aws-eks-workers/https://github.com/cloudposse/terraform-aws-eks-node-group And if any, which one should I use? Deploying a new infra

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(or we get multiple answers all over the place)

rei avatar

Thank you for the response!

Tomek avatar
Tomek

:wave: I’m trying to automate terraform, specifically running terraform plan when a PR is opened on github. I’m having trouble finding what a least-privileged IAM policy would look like to run terraform plan where the backend is on S3

pjaudiomv avatar
pjaudiomv
Backend Type: s3 - Terraform by HashiCorp

Terraform can store state remotely in S3 and lock that state with DynamoDB.

Tomek avatar
Tomek

ah nice, thanks! I’m guessing if you only need to run terraform plan, a least-privileged read-only policy would grant:

s3:ListBucket

s3:GetObject

dynamodb:GetItem

dynamodb:PutItem

dynamodb:DeleteItem
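A least-privileged backend policy along those lines might look like this (the bucket, state key, and table name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-tf-state"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-tf-state/prod/terraform.tfstate"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:*:*:table/my-tf-locks"
    }
  ]
}
```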

Tomek avatar
Tomek

assuming even terraform plan tries to grab a lock on the state before running so it needs ddb access too

Tomek avatar
Tomek

when it comes time for terraform apply, I’m guessing the policy will pretty much need to be full admin in order to create resources?

pjaudiomv avatar
pjaudiomv

the plan probably needs the same access as apply, but read-only

pjaudiomv avatar
pjaudiomv

and apply needs access to any resources that you are creating

pjaudiomv avatar
pjaudiomv

there are certain AWS services I don’t use or don’t want my pipelines to have access to, so I basically have an admin user with explicit denies

Jaeson avatar
Jaeson

I’m trying to track down all of the TF .12 preview blog posts. Does anyone know if there is a place where these are listed in a linear way? Hashi’s blog seems engineered for distraction.

Yoni Leitersdorf avatar
Yoni Leitersdorf

Their CHANGELOG is where I look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ what’s your objective?

Jaeson avatar
Jaeson

I often find myself trying to understand the for..each functionalities that were introduced in TF 12. The only real documentation that seems usable to me (from hashicorp) for this is embedded in one of their blog posts. … I’m usually pretty focused (or fighting to be focused) on solving a task, so it didn’t dawn on me until this morning that I’ve been kind of getting the information in pieces – for example, there seems to be no link from one post to the next in the series… you kind of just have to hunt them all down.

TLDR; I guess I was hoping for a better way to understand how to use the changes brought into TF12.

:--1:1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, yea, I follow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One thing that helps me is often to google for cheatsheets

:--1:1
Jaeson avatar
Jaeson

Has anyone had terraform crash … seemingly permanently? I can’t get it to run anymore.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

might help to set TF_DEBUG environment variable so you can get some more details

Jaeson avatar
Jaeson

TF_LOG ? I didn’t find anything when looking for TF_DEBUG . Not being a jerk, just confirming.

1
Jaeson avatar
Jaeson

Nvm .. it’s in the log:
Use TF_LOG=TRACE to see Terraform’s internal logs

Jaeson avatar
Jaeson

It doesn’t seem to give me much more than I already had.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, sorry - maybe I got it wrong.

Jaeson avatar
Jaeson

np, I appreciate the hint. I think it might be something I did to my variables file.

johandry avatar
johandry

Hi guys, I’d like to know if it’s a good practice to use/execute a module directly, or to have code that uses it, even if this code only has this module … and maybe 2-3 more resources? … (more details in the thread)

johandry avatar
johandry

I have a few modules; they are to install Kubernetes and some products in the cluster, one module per product. My code installs K8s and one or multiple of these products, using the modules. My first approach for this design is to have a directory for the modules and one directory for the code using the modules. The second design idea is to have one directory per product, and they can be used as modules or not; this way the only reused code (which may be the real module) is the code to provision the K8s cluster.

johandry avatar
johandry

The question is: What’s the best practice with Terraform using modules?:

  1. Have a directory or repo for the module, then the code using this module … and maybe other module(s)
  2. Have a directory for the module to be used by other code … and use the same module code to be executed directly, not as a module
johandry avatar
johandry

Using option (1) my repo would be like this:

terraform
├── all_products
├── p1
├── p2
├── p3
└── modules
    ├── k8s
    ├── p1
    ├── p2
    └── p3

So, the code in all_products will use all the modules, The code in p? uses the module k8s and the module for p?

johandry avatar
johandry

Using option (2) the repo would be like this:

terraform
├── all_products
├── p1
├── p2
├── p3
└── modules
    └── k8s

The code in all_products uses the code in p* as modules, and the code in p? uses the module k8s … if other external code wants to install p?, it uses the code in p? as a module

johandry avatar
johandry

What do you think is better: (1), (2), both are accepted or is there a 3rd option?

PePe avatar

you seem to be doing it like it was a monorepo

PePe avatar

I personally do not like the approach

PePe avatar

I prefer modules that are instantiated by a project from a main.tf file, and then this file pulls the modules needed and then runs TF somehow

johandry avatar
johandry

Thanks @PePe , it’s the same or similar answer I’ve received from other people (different Slack)

PePe avatar

np

Jaeson avatar
Jaeson

Quick question about dynamic content for TF 12 – all of the content above (and the grant block) should be skipped, right? I’m having difficulties because TF seems intent on processing that block of code.

Matt Gowie avatar
Matt Gowie

Close — you want to provide an empty array:

for_each = false ? [1] : []

I believe that will do the trick for you.
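In context, that pattern looks something like this (the bucket resource and the enable flag are hypothetical; `grant` is an aws_s3_bucket block in the v3-era AWS provider):

```hcl
resource "aws_s3_bucket" "this" {
  bucket = "example-bucket"

  # The grant block is rendered zero or one times depending on the flag
  dynamic "grant" {
    for_each = var.grant_enabled ? [1] : []
    content {
      id          = data.aws_canonical_user_id.current.id
      type        = "CanonicalUser"
      permissions = ["FULL_CONTROL"]
    }
  }
}
```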

Matt Gowie avatar
Matt Gowie

You might be mixing count / for_each.

Jaeson avatar
Jaeson

ah.

Jaeson avatar
Jaeson

That was it. Thanks so much. I guess I didn’t have the brain for this today – I’ve confused three things as I’ve been working at this: count, array bounds, and even flipped the logic. …

Matt Gowie avatar
Matt Gowie

Happens to the best of us!

loren avatar
loren

New to me, looks like could be useful, https://github.com/sysdogs/tfmodvercheck

sysdogs/tfmodvercheck

Contribute to sysdogs/tfmodvercheck development by creating an account on GitHub.

Sean Turner avatar
Sean Turner

So on the one hand, I feel like it’s not great to create a module that creates only one type of resource. On the other hand, I think there is a good amount of benefit behind only using cross-account providers with modules. Thoughts? Take aws_route53_record for example; I think it’s semi-worth creating a module around only this resource, as it provides nice isolation for things that have a large blast radius (especially when cross account), and also allows for templating to dynamically render alias blocks as needed.

loren avatar
loren

i’ve done it, makes a lot of sense for cross-account workflows (or cross-region, cross-provider)

loren avatar
loren

usually i have a larger module around it, and this is a nested module in that project, sometimes with a dedicated module for the cross-account workflow

loren avatar
loren

another benefit i’ve been finding recently, is with module-level for_each… putting the resource in a module lets me document the variables cleanly. then i can manage multiples with module-level for_each and complex objects
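
A hypothetical sketch of that: one well-documented single-resource module, multiplied with module-level for_each over a map of objects (requires TF >= 0.13; the module path and variable shape are illustrative):

```hcl
variable "records" {
  type = map(object({
    zone_id = string
    type    = string
    ttl     = number
    values  = list(string)
  }))
}

module "dns_record" {
  source   = "./modules/route53-record" # illustrative local module
  for_each = var.records

  name    = each.key
  zone_id = each.value.zone_id
  type    = each.value.type
  ttl     = each.value.ttl
  records = each.value.values
}
```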

loren avatar
loren

here’s an example for a ram-share… the top-level module handles all the “owner” config for a new ram-share, and there is a nested module for the cross-account principal association workflow, https://github.com/plus3it/terraform-aws-tardigrade-ram-share

plus3it/terraform-aws-tardigrade-ram-share

Terraform module to manage a resource share. Contribute to plus3it/terraform-aws-tardigrade-ram-share development by creating an account on GitHub.

Sean Turner avatar
Sean Turner

Totally agreed on module-level for_each documentation. It took me a second to wrap my head around the new way of doing it, but now it feels much cleaner, as everything is var.foo instead of each.value.foo

1

2020-10-04

Sean Turner avatar
Sean Turner

Hey all, has anyone had any success with dynamically passing a provider to a module? It doesn’t quite seem like it’s possible from what I’ve read so far, but figured I would check as this would be such a powerful feature

loren avatar
loren

pretty sure no expressions are allowed in the providers attribute of a module block

Jaeson avatar
Jaeson

I believe that was what I found when I tried to do that.

Sean Turner avatar
Sean Turner

Yeah, was just wondering if there’s some sort of hacky way to accomplish the result

loren avatar
loren

use terragrunt, with generate block(s) to create the provider configs

Jaeson avatar
Jaeson

ah, so that’s where terragrunt comes in.

Sean Turner avatar
Sean Turner

I would still have n number of module blocks though yeah?

loren avatar
loren

one place, anyway

loren avatar
loren

i could imagine being able to also generate a root config that contains the module blocks. that would be interesting. haven’t tried that

loren avatar
loren

cdktf might be another option

Jaeson avatar
Jaeson

cdktf might be a rabbit hole. The last time I tried working with it, it was pretty frustrating.

loren avatar
loren

well cdktf is really new. there’s going to be some raw edges, and a learning curve. but fundamentally, the issue here is that this part of the tf/hcl config must be static by the time terraform is actually executing. so to make it dynamic, you have to generate/template the tf files. terragrunt can do that, cdktf is built to do that, or you can write your own using any template language you like

Jaeson avatar
Jaeson

That’s useful info. @Sean Turner, I didn’t mean to hijack your thread. Just wanted to give you a heads up that cdktf might lead to some time-creep.

Sean Turner avatar
Sean Turner

No worries. I’m going to keep an eye out for a GitHub issue around this and follow it closely, as this would be a massive paradigm shift. Not too keen on trying cdktf at the moment either; definitely going to give it a little more time to grow :)

Jaeson avatar
Jaeson

@loren, do you have an example that shows how terragrunt can be used for generation? I’ve looked into it twice now, and didn’t get as far as seeing how it could be done. The example I found is too fancy. I’m looking for something simple so I can grok it quickly.

loren avatar
loren

i don’t think you can use terragrunt just for generation… you have to buy into the whole terragrunt approach

Jaeson avatar
Jaeson

That’s what I thought. And I haven’t had time to make that purchase yet .. which is why I’ve skipped over it twice now. But I think I have an idea how it works, I just want to see the end-game for an easy example to justify the jump.

Jaeson avatar
Jaeson
10:01:39 PM

We have at least temporary (feasible) work-arounds for all of the following things. What I really want are more control structures and templating options, that TF doesn’t include, sometimes by design.

loren avatar
loren

in the terragrunt generate block, the contents attribute accepts an expression that must evaluate to a string. you have access to all terraform functions in the terragrunt.hcl file. such as templatefile and the hcl template language. here’s an example where i accepted a complex object as input, and transformed it to another complex object written to a tfvars file (in order to avoid some issue i no longer even remember ) https://github.com/plus3it/wrangler-watchmaker/blob/master/release/copy-bucket/terragrunt.hcl#L13-L25

plus3it/wrangler-watchmaker

Manages buckets and files needed for the public/default watchmaker configuration - plus3it/wrangler-watchmaker

loren avatar
loren

if you’re not familiar with terraform string templates, https://www.terraform.io/docs/configuration/expressions.html#string-templates

Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
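
As a quick illustration (values are made up), a string template directive iterating over a list inside a heredoc:

```hcl
locals {
  names = ["alpha", "beta"]

  # %{ for } directives expand inside the heredoc; ~ strips surrounding whitespace
  rendered = <<-EOT
    %{~ for n in local.names ~}
    server ${n}
    %{~ endfor ~}
  EOT
}
```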

Jaeson avatar
Jaeson

I’m using the string templates. Thanks for the link to your example!

loren avatar
loren

that’s kind of the basis for the solution i’m envisioning here… using the for loop in a string template to dynamically construct provider blocks, referencing values from terragrunt locals… and the locals could be sourced from a yaml file, if that’s your jam, with yamldecode()
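
That idea, sketched as a terragrunt.hcl (the account list, role name, and file paths are all assumptions):

```hcl
locals {
  account_ids = yamldecode(file("accounts.yaml")) # e.g. a list of account ID strings
}

generate "providers" {
  path      = "providers_gen.tf"
  if_exists = "overwrite"
  contents  = <<-EOF
    %{~ for id in local.account_ids ~}
    provider "aws" {
      alias = "acct_${id}"
      assume_role {
        role_arn = "arn:aws:iam::${id}:role/terraform"
      }
    }
    %{~ endfor ~}
  EOF
}
```

Terragrunt evaluates the template when it generates the file, so the written providers_gen.tf contains one static provider block per account.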

2020-10-03

2020-10-02

Matt Gowie avatar
Matt Gowie

Just found this gem: https://github.com/flosell/iam-policy-json-to-terraform Hope it’s useful to some folks!

flosell/iam-policy-json-to-terraform

Small tool to convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document - flosell/iam-policy-json-to-terraform

1
:--1:3
Abhinav Khanna avatar
Abhinav Khanna

terraform-aws-eks-fargate-profile: is it compatible with the AWS 3.* provider? Any plans to change the version restrictions?

cloudposse/terraform-aws-eks-fargate-profile

Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile

loren avatar
loren

https://www.scalr.com/blog/announcing-public-beta/
Dearest Terraform Community,
It is with great pleasure that I stand before your virtual selves to publicly present the fruit of the last 18 months of our labor: Scalr, a remote backend for Terraform to compete with Terraform Cloud and Terraform Enterprise.
- Sebastian Stadil, CEO

2
Matt Gowie avatar
Matt Gowie

Would love to hear some early feedback / comparison if anybody tries it out.

loren avatar
loren

i signed up for an account for the promo, anyway. but similar, would love to hear others’ experiences

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Sebastian Stadil can probably help with that; maybe we can do a demo on office hours

:--1:1
:100:1
1
Matt Gowie avatar
Matt Gowie

Demo would be awesome. This seems to have struck a nerve — the reception on Reddit looks very positive: https://www.reddit.com/r/Terraform/comments/j3c225/scalr_public_beta_is_live/

Scalr Public Beta is Live

Today we have our most exciting news yet… After 18 months of hard work, growing a huge waitlist and getting tremendous feedback during private…

Sebastian Stadil avatar
Sebastian Stadil

The reception has been incredibly encouraging. Thank you, all you guys.

Sebastian Stadil avatar
Sebastian Stadil

We were thinking about doing an AMA too, if anyone thinks that could be valuable.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the most fair/transparent pricing in the history of SaaS pricing:

• no contact sales link (e.g. terraform cloud!)

• no SSO tax (https://sso.tax)

• no fee for idle users (similar to slack)

If only more companies would adopt this minimum level of transparency in pricing.

:--1:3
MattyB avatar
MattyB

https://www.reddit.com/r/devops/comments/j3sdj8/terraform_cloudenterprise_alternative/

check out the comment by danekan

• You e-mailed me that password in plain text.

Terraform Cloud/Enterprise Alternative

Here is a quick update on our journey to build a Terraform Cloud/Enterprise alternative with open standards, transparent pricing and no SSO…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(btw, Scalr is an alternative for terraform cloud)

Zach avatar

They also have an interesting feature with the ‘template registry’ as a self-serve infra launch

Yash avatar

Do you manually update the versions.tf in each module with the required version? Are there any tools to automate that?

MrAtheist avatar
MrAtheist

Does anyone know how to generate the GitHub OAuth token specifically for a private org repo? I’m trying to spin up a CodePipeline and it’s asking for a token for the source stage. While TF is trying to set up the GitHub hook, it’s complaining that the repo doesn’t exist since it’s targeting ${my_github_username}/${repo} and not the org. Anyone know a trick to this?

Error: POST <https://api.github.com/repos/$user/$repo/hooks>: 404 Not Found []

  on codepipeline.tf line 128, in resource "github_repository_webhook" "webhook":
 128: resource "github_repository_webhook" "webhook" {

terraform-github-repository-webhooks

cloudposse/terraform-github-repository-webhooks

Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks

PePe avatar

yes, that is because the token GitHub needs requires admin access for webhooks on repos

cloudposse/terraform-github-repository-webhooks

Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks

MrAtheist avatar
MrAtheist

hmm i am… lemme double check

PePe avatar

and if you remove the access then it will try to create the webhook again even though it’s still there

MrAtheist avatar
MrAtheist

mind me asking is this sort of thing documented anywhere…? this is really uh… convoluted

PePe avatar

this is not a module problem

PePe avatar

this is a GitHub provider thing

PePe avatar

which is incredibly annoying

PePe avatar

github changes their APIs all the time, version 3.0 made a huge amount of changes

MrAtheist avatar
MrAtheist

… just as annoying as terraform in general…

PePe avatar

and as you know the provider is always behind the API releases

PePe avatar

I would not blame TF in this case

PePe avatar

go check CloudFormation and you will feel better about TF

MrAtheist avatar
MrAtheist

i prefer cdk

PePe avatar

AWS apis are crap too

PePe avatar

cdk is cloudformation under the covers

MrAtheist avatar
MrAtheist
06:34:28 PM

I dunno, but my experience with TF has been nothing but headaches… TF syntax comes and goes with each version, and there’s a delay to support the latest from AWS. I haven’t found a “golden bible” for best practices, and all of these open source TF modules are all over the place. That’s just my 2c ¯\_(ツ)_/¯

1
MrAtheist avatar
MrAtheist

and yes, I’m full admin on the repo, but it’s still trying to target the personal namespace and not the org
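
If it is targeting the personal namespace, the usual fix is to point the GitHub provider at the org explicitly (the attribute name depends on provider version; the org name here is hypothetical):

```hcl
provider "github" {
  token = var.github_token

  # pre-3.x provider versions use `organization`; 3.x renamed it to `owner`
  organization = "my-org"
}
```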

2020-10-01

Yoni Leitersdorf avatar
Yoni Leitersdorf

As more and more people are switching to using infrastructure-as-code (like Terraform) to manage their cloud environments, we’re seeing an increase in the desire to do security reviews of the IaC code files. There’s a bunch of tools out there, and a couple of big challenges. Would appreciate your thoughts on the matter. Please see a blog post we’ve just published:

https://indeni.com/blog/identifying-security-violations-in-the-cloud-before-deployment/

Identifying Security Violations in the Cloud before Deployment | Indeni

Treating your cloud infrastructure as code (IaC) enables you to handle the growth in demand for your applications. Additionally, you are adopting IaC to scale

corcoran avatar
corcoran

Good to see you mentioning Checkov here - but this tool can definitely be used for both build and runtime; especially if you look at BridgeCrew’s SaaS offering which will hook back into your repos and remediate both operational issues, as well as your original code. Relationships between modules deffo still an issue.

Identifying Security Violations in the Cloud before Deployment | Indeni

Treating your cloud infrastructure as code (IaC) enables you to handle the growth in demand for your applications. Additionally, you are adopting IaC to scale

Yoni Leitersdorf avatar
Yoni Leitersdorf

I like the bridgecrew offering. Definitely a good option.

corcoran avatar
corcoran

For sure it’s not OSS, but it rounds off the LEFT <-> RIGHT

Mohammed Yahya - محمد المصدّر avatar
Mohammed Yahya - محمد المصدّر

I’m thinking DevSecOps should be followed here. For my TF projects I use unit tests (tf fmt, tf validate and tflint), then integration testing using Terratest. For security I’m using Checkov and terraform-compliance; all of this should fall into a pipeline

:--1:1
joshmyers avatar
joshmyers

Just me, or since Terraform moved the docs to the registry, google search for resource documentation is rubbish… keep getting random mirror sites (SEO f*cked?)

Yoni Leitersdorf avatar
Yoni Leitersdorf

I noticed the same thing actually

Matt Gowie avatar
Matt Gowie

Yeah, it’s degraded for sure.

loren avatar
loren

i mentioned it a couple days ago in the hangops terraform channel, there are a number of maintainers there. they’re aware, and trying to figure it out. it was fine when they first switched, but something happened on the google side and now they need to request a re-index or something

1
Matt Gowie avatar
Matt Gowie

One thing I was thinking about this: it seems they’re trying to reference resources without the prefix — like aws_instance would now just be instance under the aws provider. But folks keep searching for aws_instance, so it’s harder to find, it seems.

loren avatar
loren

for now, it’s actually faster to search for “terraform aws” and then use the left side scroll bar to find the resource. it’s much better than it used to be, now that they group by service. https://registry.terraform.io/providers/hashicorp/aws/latest/docs

pjaudiomv avatar
pjaudiomv

Yup dumpster fire

Matt Gowie avatar
Matt Gowie

Yeah, to loren’s point — That does the trick OR “terraform aws_YOUR_RESOURCE” does help narrow it down.

:--1:1
Yoni Leitersdorf avatar
Yoni Leitersdorf
01:46:10 PM

This is getting worse and worse:

:100:2
:-1:1
loren avatar
loren

someone noted in the hangops channel that their robots.txt is blocking everyone

RB avatar

is there a way to get analytics on terraform modules usage ?

Yoni Leitersdorf avatar
Yoni Leitersdorf

Usage in your environment or usage globally?

RB avatar

globally would be nice

RB avatar

could also answer this question @Matt Gowie

https://sweetops.slack.com/archives/CBW699XE0/p1601574548034000

Any other users of https://github.com/cloudposse/terraform-aws-eks-cluster having trouble with first spin up and the aws-auth configmap already being created? I’ve run into it twice now and have been forced to import. Wondering what I’m doing wrong there.

Yoni Leitersdorf avatar
Yoni Leitersdorf

So each module has a download count here: https://registry.terraform.io/browse/modules

RB avatar

it’s possible that we could add a new terraform module with a null resource for analytics. maybe default enable_analytics=true and people can turn it off if they like

Yoni Leitersdorf avatar
Yoni Leitersdorf

But we also pulled it into an Excel file recently that’s easy to sift through:

RB avatar

ah but that is only for public terraform modules

RB avatar

i was thinking of something like homebrew analytics

Matt Gowie avatar
Matt Gowie

@RB the git clone / download metrics from GH would be the way to do it for CP modules I believe.

1
Yoni Leitersdorf avatar
Yoni Leitersdorf
05:54:57 PM
RB avatar

ooo cool. how did you build this excel file?

Yoni Leitersdorf avatar
Yoni Leitersdorf

The registry has an API

Yoni Leitersdorf avatar
Yoni Leitersdorf

So I built a script to scrape it into a CSV

Yoni Leitersdorf avatar
Yoni Leitersdorf

Was very useful for our development efforts to know which modules are most common and which we should support

RB avatar

ah very cool

RB avatar

would your team consider building it into a google spreadsheet instead ?

RB avatar

that way it can be updated on a cron

PePe avatar

always support all the CloudPosse modules

2
Yoni Leitersdorf avatar
Yoni Leitersdorf

It’s actually in a google sheet, but we can’t share documents directly from the Drive (org policy). Can potentially find a place to put it and auto-update.

ricardo.velasquez avatar
ricardo.velasquez

Hi. I am new to Terraform and have been struggling with something for the last few days. I’m trying to deploy an AWS ECS Fargate cluster for my Django application. In my setup I have a task definition with two containers: one for the Django app and one for nginx. My problem is I haven’t been able to get the Django static files to work. I usually wire this up via Docker Compose using volumes in the definition like this. If someone can point me in the right direction on how to do this with Terraform, that would be awesome:

version: '3.0'

services:
  web:
    build: .
    command: >
      sh -c "echo yes | python manage.py collectstatic
      && gunicorn wormhole.wsgi:application --bind 0.0.0.0:8080"
    volumes:
      - ./:/usr/src/app/
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8080

  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile-nginx
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 8000:8000
    depends_on:
      - web

volumes:
  static_volume:
  media_volume:

Yoni Leitersdorf avatar
Yoni Leitersdorf

Where’s the source of the data?

Yoni Leitersdorf avatar
Yoni Leitersdorf

That you’re trying to load into those volumes

ricardo.velasquez avatar
ricardo.velasquez

in the project’s directory

ricardo.velasquez avatar
ricardo.velasquez

I copy it in the docker file

ricardo.velasquez avatar
ricardo.velasquez

I fixed it; the Docker configuration had a couple of typos. Thanks

RB avatar

are there any good terraform modules for ecs scheduled tasks ?
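
If a module doesn’t fit, the underlying wiring is small enough to write directly; a rough sketch (the cluster, task definition, IAM role, and subnets are assumed to exist elsewhere):

```hcl
resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "nightly-task"
  schedule_expression = "cron(0 2 * * ? *)" # 02:00 UTC daily
}

resource "aws_cloudwatch_event_target" "task" {
  rule     = aws_cloudwatch_event_rule.nightly.name
  arn      = aws_ecs_cluster.this.arn
  role_arn = aws_iam_role.events.arn # must allow events.amazonaws.com to RunTask

  ecs_target {
    task_definition_arn = aws_ecs_task_definition.this.arn
    launch_type         = "FARGATE"

    network_configuration {
      subnets = var.subnet_ids
    }
  }
}
```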

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyone know what terraform cloud for business is charging? (This is the one that supports on prem runners)

Chris Fowles avatar
Chris Fowles

nah - i’ve asked a couple of times and got the typical “let’s sit down and talk about your needs and we’ll work out a price”

Chris Fowles avatar
Chris Fowles

i complain about it every time i talk to anyone there

Chris Fowles avatar
Chris Fowles

pressuring a couple of partners to push back on it too

Chris Fowles avatar
Chris Fowles

it’s really frustrating - especially at a tier targeted at small/medium business

som.ban.mca avatar
som.ban.mca

It seems I’m not able to create a new aws_ecs_task_definition revision even if I force parts of the definition, like tags, to change. I have the below as part of the code; if someone can help that would be great. Really looking for a pure Terraform solution.

resource "aws_ecs_task_definition" "ecs-service-taskdef" {
  family                = "${local.name}-${var.task_definition_name}"
  container_definitions = data.template_file.startup.rendered

  dynamic "volume" {
    for_each = var.td_volumes
    content {
      name      = volume.value["name"]
      host_path = volume.value["host_path"]
    }
  }

  // For new builds the images will change and force the task to change. Earlier code was: tags = local.tags
  tags = merge(local.tags, { "app-image" = element(split(":", local.json_data_images), 1) })

  lifecycle {
    create_before_destroy = true
  }
}

### Create service data
data "template_file" "startup" {
  template = file("task-definitions/${var.task_file}")
  vars = {
    name      = var.domainname
    image     = local.json_data_images
    API       = local.json_data_ecs.appsn
    APP       = var.appname
    ENV_NAME  = var.environment
    ENV       = var.environment
    awsstnm   = local.awsstackname
    CommonEnv = regex("^[a-z]+", var.environment)
  }
}

