#terraform

Archive: https://archive.sweetops.com/terraform/

2019-10-20

Andrea

hi, I get “An argument named “tags” is not expected here” when trying to use tags in “aws_eks_cluster”

Andrea

according to the docs it is supported, not sure how to debug/further investigate what’s going on… any tip?

Andrea

I’m using the latest TF version, and the syntax is simply

tags = {
    Name = "k8s_test_masters"
  }

@Andrea Do you use aws provider version >= 2.31?

Andrea

Hi @ I was on 2.29 actually…

Andrea

I removed it and ran “terraform init”, and I got

provider.aws: version = "~> 2.30"
Andrea

but no 2.31, and I still get the “tags” error…

Andrea

hold on a sec, I’ve updated to 2.33 and the tags are now showing up in the TF plan command

Andrea

thanks so much!!

Andrea

can I just ask how you knew that at least version 2.31 was needed please?

Andrea

this might help me the next time I fall into a similar issue..

terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

Andrea

fair enough thanks @!

F3D3M2C0R

hey there, how can I iterate two levels? I want to get values from a list inside another list, like:

count = length(var.main_domains)
count_alias = length(var.main_domains[count.index].aliases)
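
A minimal TF 0.12-style sketch of two-level iteration using flatten, reusing the variable names from the question above (the .name attribute is an assumption about the variable's shape):

locals {
  # Flatten the nested lists into one flat list of domain/alias pairs.
  domain_aliases = flatten([
    for domain in var.main_domains : [
      for alias in domain.aliases : {
        domain = domain.name # assumed attribute
        alias  = alias
      }
    ]
  ])
}

# Then count = length(local.domain_aliases), indexing with count.index,
# or use for_each over a keyed map in TF 0.12.6+.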

2019-10-19

Alex Siegman

I must be blind - can’t find anywhere in the terraform docs how to add an IP to a listener target group with target_type set to ip for a network load balancer
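
A minimal sketch of attaching an IP target to an NLB target group with the aws provider (resource names and values here are assumptions, not from the thread):

resource "aws_lb_target_group" "this" {
  name        = "example"
  port        = 443
  protocol    = "TCP" # NLB listeners are TCP/TLS/UDP
  vpc_id      = var.vpc_id
  target_type = "ip"
}

# With target_type = "ip", target_id is an IP address rather than an instance ID.
resource "aws_lb_target_group_attachment" "this" {
  target_group_arn = aws_lb_target_group.this.arn
  target_id        = "10.0.1.15"
  port             = 443
}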

2019-10-18

Gowiem

Hey folks — I keep struggling with the idea of bootstrapping an RDS database with a user, schema, etc. after it is initially created. I can use local-exec to invoke psql with sql scripts, but that kind of stinks and there seems to be heavy lifting involved to use something like the OS Ansible Provider (https://github.com/radekg/terraform-provisioner-ansible). What I’m trying to get at: Is there a better way? I’d like a way to easily bootstrap an RDS instance or cluster on creation + run migrations appropriately.

radekg/terraform-provisioner-ansible

Marrying Ansible with Terraform 0.12.x. Contribute to radekg/terraform-provisioner-ansible development by creating an account on GitHub.

sarkis

Have you looked into the postgresql provider? https://www.terraform.io/docs/providers/postgresql/index.html. This will work better than local-exec.

Provider: PostgreSQL - Terraform by HashiCorp

A provider for PostgreSQL Server.
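
A minimal sketch of the provider approach sarkis suggests (resource, variable, and value names are assumptions):

provider "postgresql" {
  host     = aws_db_instance.default.address
  username = var.admin_user
  password = var.admin_password
}

# Create a login role and a database owned by it on the RDS instance.
resource "postgresql_role" "app" {
  name     = "app"
  login    = true
  password = var.app_password
}

resource "postgresql_database" "app" {
  name  = "app"
  owner = postgresql_role.app.name
}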

Gowiem

@sarkis It doesn’t seem to allow me to run SQL files against the remote DB or am I missing that functionality?

sarkis

i wouldn’t do this with terraform, the provider allows you to setup user, database and schema

sarkis

any sql past that like init or migration should not be in TF imo

Gowiem

Yeah, I’m going the ansible route.

Gowiem

Thanks for weighing in @sarkis — just confirming I shouldn’t go that route is great.

sarkis

i think that would work better in that ansible may help in making that stuff idempotent, there could be potentially bad things happening if you keep applying sql every time tf applies

sarkis

via local-exec - i’m not certain how you can easily achieve this .. i haven’t looked at the ansible provisioner, but i assume that is possible… basically you want to only run the sql once right?

Gowiem

Yeah, I need a way to query a database_version column and then run migration files off of that. It’s easy to do that in Ansible. Just being new to TF I didn’t know where to draw the line. Everyone says TF is not for the actual provisioning of resources, so this makes sense.

@Gowiem how do you plan to deploy the actual software ? Is it docker orchestrated ?

2019-10-17

oscar

How do you set the var-file for Terraform via an environment variable? For instance, export TF_VAR_name=oscar sets a variable; is there an equivalent like export TF_VAR_FILE=environments/oscar/terraform.tfvars that would behave as -var-file=environments/oscar/terraform.tfvars?

oscar

Thanks but.. not quite

oscar

I wasn’t quite clear enough

oscar

I meant as an ENV variable

oscar

like in the examples above

you can’t. Terraform would see TF_VAR_FILE as a variable called FILE

oscar

Yup. Just an example of what it might be called in case anyone has come across it.

Erik Osterman

@oscar maybe use a yaml config instead?

Erik Osterman
locals {
  env = yamldecode(file("${terraform.workspace}.yaml"))
}
Erik Osterman

or instead of terraform.workspace, use any TF_VAR_…

Erik Osterman

@oscar since you’re using geodesic + tfenv, you can do TF_CLI_PLAN_VAR_FILE=foobar.tfvars

Erik Osterman

or if you want to do stock terraform, you can do

Erik Osterman

TF_CLI_ARGS_plan=-var-file=foobar.tfvars

oscar

Can’t seem to find documentation on doing this

Laurynas

Hi, how can I read a list in terraform from a file? https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/examples/string_env_vars/main.tf I’d like to pass ENV variables from a file, e.g. env.tpl:

[{
  name  = "string_var"
  value = "123"
},
{
  name  = "another_string_var"
  value = "true"
},
{
  name  = "yet_another_string_var"
  value = "false"
},
]

data "template_file" "policy" {
  template = "${file("ci.env.json")}"
}

environment = data.template_file.policy.rendered

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

Hi folks, if you have a service exposing an API documented using OpenAPI and are thinking of creating a terraform provider for it, you might find the following plugin useful:

Link to the repo: https://github.com/dikhan/terraform-provider-openapi

The OpenAPI Terraform Provider dynamically configures itself at runtime with the resources exposed by the service provider (defined in an OpenAPI document, formerly known as swagger file).

dikhan/terraform-provider-openapi

OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file) - dikhan/terraform-provider-openapi

2019-10-16

julien M.

Hello, I have written this aws_autoscaling_policy:

resource "aws_autoscaling_policy" "web" {
  name                   = "banking-web"
  policy_type            = "TargetTrackingScaling"
  autoscaling_group_name = "${aws_autoscaling_group.web.name}"
  policy_type = "TargetTrackingScaling"

  target_tracking_configuration {
    customized_metric_specification {
      metric_dimension {
        name  = "LoadBalancer"
        value = "${aws_lb.banking.arn_suffix}"
      }

      metric_name = "TargetResponseTime"
      namespace   = "AWS/ApplicationELB"
      statistic   = "Average"
    }

    target_value = 0.400
  }
}

but I’m looking to modify this value with this type of ScalingPolicy:

julien M.
cytopia

Hi everybody,

I have the following list variable in Terraform 0.11.x in terraform.tfvars defined:

mymaps = [
  {
    name = "john"
    path = "/some/dir"
  },
  {
    name = "pete"
    path = "/some/other/dir"
  }
]

Now in my module’s <http://main.tf> I want to extend this variable and store it as a local.

The logic is something like this:

if mymaps[count.index]["name"] == "john"
    mymaps[count.index]["newkey"] = "somevalue"
endif

In other words, if any element’s name key inside the list has a value of john, add another key/val pair to the john dict.

The resulting local should look like this

mymaps = [
  {
    name   = "john"
    path   = "/some/dir"
    newkey = "somevalue"
  },
  {
    name = "pete"
    path = "/some/other/dir"
  }
]

Is this somehow possible with null_resource (as they have the count feature) and locals?
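
For comparison, a sketch of the same transformation in TF 0.12 syntax (in 0.11 this takes null_resource/locals gymnastics; in 0.12 a for expression with merge does it directly):

locals {
  extended_maps = [
    for m in var.mymaps :
    # Add the extra key only to the element whose name is "john".
    m["name"] == "john" ? merge(m, { newkey = "somevalue" }) : m
  ]
}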

Robert

@here could people thumbs up this on GitHub so we can get CloudWatch anomaly dectection metrics in terraform?

Robert
AWS CloudWatch Alarm - Anomaly detection · Issue #9293 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Robert
AWS CloudWatch Alarm - Anomaly detection by hakopako · Pull Request #9828 · terraform-providers/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…

Erik Osterman

@here public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

Brij S

I have a general terraform layout question. When I first started with terraform and got an ‘understanding’ of modules, I created a module called ‘bootstrap’. This module includes the creation of acm certs, route53 zones, iam users/roles, s3 buckets, firehose, and cloudfront oai. I now realize that modules should probably be smaller than this. What I want to know is - for people who bootstrap new aws accounts using TF, do you have a bunch of smaller modules (acm, route53, firehose, etc.) and then for one-off things like some iam roles/users just include those resources along with the modules in a terraform file?

Brij S

for example:

module "acm" {}

module "route53" {}

resource "aws_iam_user" "example" {}
aknysh

@Brij S take a look at these threads on similar topics, might be of some help to you to understand how we do that

aknysh

Hi guys, do any of you have experience with maintenance of SaaS environments? What I mean is separate dev, test, prod environments for every Customer. In my case, those environments are very similar, at least the core part, which includes vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is to create one more module on top of them, called e.g. myplatform-core. The reason why I want to do that is that instead of copying and pasting puzzles of modules between environments, I could simply create an env just by creating/importing my myplatform-core module and passing some vars like name, location, and some scaling properties. Any thoughts about it, is it a good or bad idea in your opinion?

I appreciate your input.

is there any reason why this module has not been upgraded to 0.12? https://github.com/cloudposse/terraform-aws-ecs-alb-service-task Can I have a stab at it?

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

sarkis

This module was a very specific use case, I don’t think anyone would have an issue with you taking a stab at it, would need to 0.12 all the things eventually!

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

we use it a lot; it is pretty opinionated. I would like to add a few more options to make it a bit more flexible

aknysh

we have 100+ modules to upgrade to TF 0.12

aknysh

this one is in line; it will be updated next week

is in line to be updated next week? really?

well, if that is the case then it will save me some work

but for real @aknysh, is it scheduled for next week?

aknysh

yes

is this xmas?

aknysh

haha

Erik Osterman

@PePe @aknysh has been tearing it up

Erik Osterman

he just converted all the beanstalk modules and dependencies

Erik Osterman

then converted jenkins

Erik Osterman

added terratest to all of it

Erik Osterman

jenkins by the way is a real beast!

Erik Osterman
Erik Osterman

our main hold up is we’re only releasing terraform 0.12 modules that have automated terratests

Erik Osterman

this is the only way we can continue to keep the scale we have of modules

I totally agree and yes I saw that jenkins module

is huge

Conrad Kurth

hey everyone, I have maybe a simple question, but whenever I am running a terraform plan with https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment?ref=master it is always updating, even after it is applied. Can someone point me in the right direction?

- setting {
          - name      = "EnvironmentType" -> null
          - namespace = "aws:elasticbeanstalk:environment" -> null
          - value     = "LoadBalanced" -> null
        }
      + setting {
          + name      = "EnvironmentType"
          + namespace = "aws:elasticbeanstalk:environment"
          + value     = "LoadBalanced"
        }
      - setting {
          - name      = "HealthCheckPath" -> null
          - namespace = "aws:elasticbeanstalk:environment:process:default" -> null
          - value     = "/healthz" -> null
        }
      + setting {
          + name      = "HealthCheckPath"
          + namespace = "aws:elasticbeanstalk:environment:process:default"
          + value     = "/healthz"
        }
      - setting {
          - name      = "HealthStreamingEnabled" -> null
          - namespace = "aws:elasticbeanstalk:cloudwatch:logs:health" -> null
          - value     = "false" -> null
        }
      + setting {
          + name      = "HealthStreamingEnabled"
          + namespace = "aws:elasticbeanstalk:cloudwatch:logs:health"
          + value     = "false"
        }
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

aknysh

unfortunately, there is no right direction here. There are many bugs in the aws provider related to how EB environment handles settings

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

aknysh

the new version tries to recreate all 100% of the settings regardless of how you arrange them

Conrad Kurth

nice

Conrad Kurth

so everyone has to live with this

aknysh

0.11 version at least tried to recreate only those settings not related to the type of environment you built

Conrad Kurth

gotcha

aknysh

they have to fix those bugs in the provider

aknysh

there are many others

aknysh
Error: Provider produced inconsistent final plan · Issue #10297 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

aknysh
Provider produced inconsistent final plan · Issue #7987 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

aknysh

that was going on for months and still same issues

Conrad Kurth

ahhh

Conrad Kurth

thank you for the detailed explanation!

aknysh

that’s why we have to do this in our tests for Jenkins which runs on EB https://github.com/cloudposse/terraform-aws-jenkins/blob/master/test/src/examples_complete_test.go#L29

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

aknysh

(the issue with dynamic blocks in settings)

aknysh

at least solved by applying twice

aknysh

but that does not solve the issue with TF trying to recreate 100% of the settings regardless of whether they are static or defined in dynamic blocks

aknysh

at least I did not find a solution

aknysh

ping me if you find anything

Conrad Kurth

hmmm interesting

Conrad Kurth

thanks for all the links!

Conrad Kurth

and will do

kskewes

Hey everyone, thanks so much for your work. New to EKS (moving from IBM) but am curious about subnetting with the CP eks-cluster module (and dependencies - vpc, subnets, workers, etc.) and understanding how it works. If we assign a maximum-size cidr_block of 10.0.0.0/16 to the vpc then we will get:

  1. 3x ‘public’ (private) subnets 10.0.[0|32|64].0/19 - one per AZ? This contains any public ALB/NLB’s internal IP? What if we provision a private LoadBalancer K8s Service? I will test.
  2. 3x ‘private’ subnets 10.0.[96|128|160].0/19 - one per AZ? This will be used by the k8s worker nodes’ ASGs.
  3. The cluster seems to use 172.x.y.z/? (RFC1918 somewhere) for the K8s Pod IPs and for the K8s Service IPs. This makes me think we are doing IPIP in the cluster.
  4. The remaining 10.0.192.0/18 is free for us to use, say with separate non-K8s ASGs or perhaps SaaS VPC endpoints (RDS/MQ/etc.) that we want in the same VPC?

Is this all correct? Whilst #1 above seems like a large range for a few services, it’s not like IP addresses are scarce. We haven’t added Calico yet.
aknysh

we have this working/tested example of EKS cluster with workers https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

as you can see, it uses the VPC module and subnets module to create VPC and subnets

aknysh

there are many ways of creating subnets in a VPC

aknysh

https://github.com/cloudposse/terraform-aws-dynamic-subnets is one opinionated approach which creates one public and one private subnet per AZ that you provide (we use that almost everywhere with kops/k8s)

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

the test shows what CIDRs we are getting for the public and private subnets https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/test/src/examples_complete_test.go#L42

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

you can divide your VPC into as many subnets as you want/need

aknysh

it’s not related to the EKS module, which just accepts subnet IDs regardless of how you create them

aknysh

we have a few modules for subnets, all of them divide a VPC differently https://github.com/cloudposse?utf8=%E2%9C%93&q=subnets&type=&language=

Erik Osterman

Yea, definitely no one-size-fits all strategy for subnets

Erik Osterman

It’s one of the more opinionated areas, especially in larger organizations

Erik Osterman

Pick what makes sense for your needs but have a look at our different subnet modules that implement different strategies to slice and dice subnets

kskewes

oh wow, awesome, I’ll dive through the different subnets and tests above. Thanks so much for the response. A+++

2019-10-15

mmarseglia

has anyone else had that same dependency issue with https://github.com/cloudposse/terraform-aws-acm-request-certificate

cloudposse/terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate

https://github.com/Flaconi/terraform-aws-alb-redirect - For everyone who deals with quite a lot of 302/301 redirects like apex -> www and wants to solve it outside of nginx or s3, here you go. The next release will have optional vpc creation.

Flaconi/terraform-aws-alb-redirect

HTTP 301 and 302 redirects made simple utilising an ALB and listener rules. - Flaconi/terraform-aws-alb-redirect

nukepuppy

is it still viable to contribute to the 0.11 branch?

nukepuppy

having an issue where i’d like to not have the module create its own security group and would be good to have that optional if possible

aknysh

branch 0.11/master is for TF 0.11

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

aknysh

fork from it and open a PR

aknysh

we’ll merge it into 0.11/master branch

nukepuppy

@aknysh cool sounds good

Gowiem

Hey folks, what is the community approach to running SQL scripts against a new RDS cluster? remote-exec seems not great since it requires that the SG opens up to the calling machine.

Gowiem

I see https://github.com/fvinas/tf_rds_bootstrap — But that seems old. Wondering if my google-fu is not finding the right tool for this job.

fvinas/tf_rds_bootstrap

A terraform module to provide a simple AWS RDS database bootstrap (typically running SQL code to create users, DB schema, …) - fvinas/tf_rds_bootstrap

@Gowiem There are MySQL and Postgres providers which can create roles inside RDS, something for you?

Gowiem

Yeah, possibly. Mind sending me a link?

Gowiem

Thanks Maarten — Will check those out.

yet another PR https://github.com/cloudposse/terraform-aws-rds-cluster/pull/57 Aurora RDS backtrack support

aknysh

thanks @PePe, commented on the PR

added the changes but I’m having an issue with make readme

curl --retry 3 --retry-delay 5 --fail -sSL -o /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs https://github.com/segmentio/terraform-docs/releases/download/v0.4.5/terraform-docs-v0.4.5-darwin-amd64 && chmod +x /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs
make: gomplate: No such file or directory
make: *** [readme/build] Error 1

I upgraded my mac to Catalina, it could be related to that

aknysh

did you run

make init
make readme/deps
make readme

yes

well, I installed it by hand: brew install gomplate

and it worked

2019-10-14

Cloud Posse
04:02:20 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 23, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

Sharanya

Hello All, quick question - did anyone come across setting up an option group for a MySQL DB in RDS in terraform?

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Gowiem

Hey @aknysh — saw your comment on https://github.com/terraform-providers/terraform-provider-aws/issues/3963 for the cp Elastic Beanstalk module. I’m using it and I see it not trying to apply the Name tag, but I’m still running into issues with receiving that Service:AmazonCloudFormation, Message:No updates are to be performed. + Environment tag update failed. error when I do an apply. It looks like the Namespace tag isn’t applied each time — have you seen that before?

aknysh

we deleted the Namespace tag from elasticbeanstalk-environment (latest release)

aknysh

it was a problem as well, similar to the Name tag

aknysh

in fact, AWS checks if tags contain anything with the string Name in it

aknysh

probably using regex

aknysh
Regular Expressions: Now You Have Two Problems

a blog by Jeff Atwood on programming and human factors

Gowiem

Ahaaa 4 days ago — I’m just the tiniest bit behind the curve. Awesome — Thank you! That module is great stuff. Allowing me to bootstrap this project quite easily!

Gowiem

@aknysh Similar to the EBS environment module’s issue: The application resource module has a similar problem, it just doesn’t cause an error. It causes a change to that resource to be made on each apply. Put up a PR to fix: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application/pull/17

Removes the "Namespace" tag due to EBS limitation by Gowiem · Pull Request #17 · cloudposse/terraform-aws-elastic-beanstalk-application

Including the "Namespace" tag to the beanstalk-application resource required a change for each apply as the tag is never applied. Removing that tag causes the beanstalk-application to avo…

aknysh

thanks @Gowiem, merged. Wanted to do it, but since it does not cause an error and updating an EBS app takes a few secs, never got to it

Gowiem

Glad to help out! Would like to do more where I can since these EBS modules were such a huge help.

2019-10-13

guimin

HI here, I have a question about the terraform registry: the NEXT button does not work on this page: https://registry.terraform.io/browse/modules?offset=109&provider=aws. And why does the registry only show at most 109 modules?

guimin

Even if I use curl https://registry.terraform.io/v1/modules?limit=100&offset=200&provider=aws the results are the same as before.

loren

perhaps that’s just the end of registered aws modules?

loren

oh hmm… the registry home page says there are 1198 aws modules…

guimin

HI @loren It seems like all of the providers have the same issue.

loren

yeah, maybe look for or open an issue on the terraform github

loren
Terraform Registry Pagination bug · Issue #22380 · hashicorp/terraform

I was implementing the pagination logic client side in a PowerShell module when I noticed there is a pagination error. The returned offset in the meta is not updated over 115. As you can see, the r…

guimin

Thanks

2019-10-12

Hi everyone, I’m helping a friend with Azure and it’s quite different from AWS. Much appreciated if someone could tell me what the scope of an azurerm_resource_group normally is or should be, best-practice-wise.

It’s unclear to me if I should fit all resources of one project in there, or group by type, like dns-related, network, etc. Thanks.

We use dedicated resource groups for different use cases.

Would you like to give me an example ?

or multiple, an example per different use-case ..

a) an HPC resource group for HPC workload incl. network, storage, vms, limits, admin permissions
b) an SAP resource group for SAP workload incl. network, storage, blob storage, vms, databases
c) an NLP resource group for the implementation of NLP services
d) a HUB resource group which includes the HUB VPN endpoint and shared services like domain controllers

gotcha, thanks

2019-10-11

mmarseglia

I’m trying to apply a generated certificate from ACM, using DNS validation, using the cloudposse module terraform-aws-acm-request-certificate. The certificate is applied to the load balancer in an elasticbeanstalk application. The cert gets created and applied to the load balancer before it’s verified, the eb config fails, and then the app is in a failed state, preventing further configuration. To fix this I added an aws_acm_certificate_validation resource to the module and changed the arn output to aws_acm_certificate_validation.cert.certificate_arn. This seems to have fixed my problem.

oscar

That looks right to me :) The other solution would have been to force a dependency between the validation resource and the ALB

Jeff Young

Using this

  source  = "cloudposse/vpc-peering-multi-account/aws"
  version = "0.5.0"

And getting this error.

Error: Missing resource instance key

  on .terraform/modules/vpc_peering_cross_account_devdata/cloudposse-terraform-aws-vpc-peering-multi-account-3cf3f60/accepter.tf line 96, in locals:
  96:   accepter_aws_route_table_ids           = "${distinct(sort(data.aws_route_tables.accepter.ids))}"

Because data.aws_route_tables.accepter has "count" set, its attributes must be
accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    data.aws_route_tables.accepter[count.index]
Jeff Young

any help is appreciated.

aknysh

the module has not been converted to TF 0.12 yet, but you are using 0.12, which has a stricter type system

Jeff Young

FWIW. I actually just commented this out and proceeded and seem to have my VPC peering working …

# Lookup accepter route tables
data "aws_route_tables" "accepter" {
  # count    = "${local.count}"
Jeff Young

So commenting out the count

Jeff Young

got me further down the road with the module and Terraform 0.12.

Jeff Young

Thanks.

2019-10-10

sarkis

i think it’s pretty recent, June

sarkis

http://learn.hashicorp.com is getting really good too - but most of the material is too introductory for now

Hemanth

Trying to reference an instance.id from a different directory - erroring out

./tf
./tf/dir1/ -> (trying to use aws_instance.name.id in ./tf/modules/cloud/myfile.tf)
./tf/envs/sandbox/ -> (running plan here)
./tf/modules/cloud -> myfile.tf

Tried adding “${aws_instance.name.id}” to outputs.tf in ./tf/dir1/ and using that value in ./tf/modules/cloud/myfile.tf. Throws a new error like:

-unknown variable accessed: var.profile in:${var.profile}

How do i go about this ?

aknysh

the module /tf/modules/cloud should have variables.tf and outputs.tf

Hemanth

@aknysh yes those two files do exist in /tf/modules/cloud

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

the main file that uses the module provides vars to the module

aknysh

and uses its outputs in its own outputs

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Hemanth

bit confusing, but going to look into it. Thanks for sharing those @aknysh appreciate your help

aknysh

consider the variables and outputs as chains

aknysh

the variables go from top level modules to low level modules

aknysh

the outputs go from low level modules to top level modules
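
A minimal sketch of that variable/output chain (the file layout and names here are hypothetical, not from the thread):

# modules/cloud/variables.tf - the low-level module declares its inputs
variable "profile" {
  type = string
}

# modules/cloud/outputs.tf - and exposes what callers need
output "instance_id" {
  value = aws_instance.name.id
}

# envs/sandbox/main.tf - the top-level module passes vars down...
module "cloud" {
  source  = "../../modules/cloud"
  profile = var.profile
}

# ...and re-exports the module's outputs as its own
output "instance_id" {
  value = module.cloud.instance_id
}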

johncblandii
aknysh

thanks @johncblandii we’ll review

AgustínGonzalezNicolini

Guys and Gals, is there a TF resource to control AWS SSO?

AgustínGonzalezNicolini

I can’t seem to find it

loren

i don’t think there is even a public API for AWS SSO, is there?

loren

my take so far on AWS SSO is basically, just use any other federation provider


Hi guys, is there anyone familiar with IAM roles and Instance Profiles? I have a case like this: I would like to create an Instance Profile with a suitable policy to allow access to an ECR repo (including downloading images from ECR as well). Then I attach that Instance Profile to a Launch Configuration to spin up an instance. The reason why I mentioned the policy for ECR is that I would like to set up aws-credential-helper on the instance to use with Docker (cred-helper) when it launches, so that when that instance wants to pull an image from ECR, it won’t need AWS credentials on the host itself at first. All of that I would like to put in Terraform format as well. Any help would be appreciated so much.

aknysh

@Phuc see this example on how to create instance profile https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/main.tf#L69

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

aknysh
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Hi aknysh

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

the main point I focus on

is whether an instance launched by a launch configuration with an instance profile

contains aws credentials to access ecr yet

it’s kind of complex for my case. I tried a lot, but the only previously working way for me is that ~/.aws/credentials must exist on the host.

aknysh

not sure what the issue is, but you create a role, assign the required permissions to the role (ECR etc.), add an assume_role document for the EC2 service to assume the role, and finally create an instance profile from the role

aknysh

then you use the instance profile in launch configuration or launch template
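
A minimal sketch of the chain described above (names and the managed-policy choice are assumptions):

# Trust policy letting the EC2 service assume the role
data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecr_pull" {
  name               = "ecr-pull"
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

# Grant read/pull access to ECR via the AWS managed policy
resource "aws_iam_role_policy_attachment" "ecr_read" {
  role       = aws_iam_role.ecr_pull.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_instance_profile" "ecr_pull" {
  name = "ecr-pull"
  role = aws_iam_role.ecr_pull.name
}

# Attach the instance profile to the launch configuration
resource "aws_launch_configuration" "example" {
  name_prefix          = "ecr-pull-"
  image_id             = var.ami_id
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.ecr_pull.name
}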

Do you know how aws-cred-helper and docker login work together?

aknysh
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

https://github.com/awslabs/amazon-ecr-credential-helper. I want to achieve the same result but with IAM only

awslabs/amazon-ecr-credential-helper

Automatically gets credentials for Amazon ECR on docker push/docker pull - awslabs/amazon-ecr-credential-helper

aknysh

https://github.com/awslabs/amazon-ecr-credential-helper#prerequisites says it works with all standard credential locations, including IAM roles (which will be used in case of EC2 instance profile)

awslabs/amazon-ecr-credential-helper

Automatically gets credentials for Amazon ECR on docker push/docker pull - awslabs/amazon-ecr-credential-helper

aknysh

so once you have an instance profile on EC2 where the amazon-ecr-credential-helper is running, the instance profile (role) will be used automatically

yeah, I was thinking that too

I’ll try to figure out if there is any missing policy for the iam role

2019-10-09

output "merged_tags" {  
   value = merge(local.list_of_maps...)
}
Mads Hvelplund

Oh, decomposition is supported, like in JS

Gocho

Hi @all Not sure if this is the right place to ask, but I’ll take my chance!

Say I have 3 terraform repositories: A, B and… tadam: C! Both B & C rely on A’s state (mutualization of some components):

   /——— B
A ——
   \——— C

(Please note my graphical skills )

All repositories are on terraform 0.11, but I would like to upgrade to 0.12. What would be the best way to achieve that (and avoid conflicts)? Can I simply update B, then A and C? Or should I take care about a particular order?

Erik Osterman

If I recall correctly, you’ll want to run terraform apply with the latest version of 0.11 which will ensure state compatibility with 0.12

Erik Osterman

then you ought to be able to upgrade the projects in any order after that.

Erik Osterman

perhaps someone in #terraform-0_12 has better suggestions

Erik Osterman

If you haven’t reviewed this, make sure you do that first.

aknysh
Pattern to handle optional dynamic blocks

What’s a good way to handle optional dynamic blocks, depending on existence of map keys? Example: Producing aws_route53_record resources, where they can have either a “records” list, or an “alias” block, but not both. I’m using Terraform 0.12.6. I’m supplying the entire Route53 zone YAML as a variable, through yamldecode(): zone_id: Z987654321 records: - name: http://route53test-plain.example.com type: A ttl: 60 records: - 127.0.0.1 - 127.0.0.2 - name: http://route53test-alias.example.com type…

Erik Osterman

When did HashiCorp launch https://discuss.hashicorp.com/?

HashiCorp Discuss

HashiCorp

Erik Osterman

It’s long overdue!

aknysh

yea, they have very nice posts in there

2019-10-08

oscar

Anyone able to find the Terraform Cloud CIDR?

Erik Osterman

Terraform Cloud desperately needs something like this https://chartio.com/docs/data-sources/tunnel-connection/ (what chartio does)

Tunnel connection

Chartio offers two ways of connecting to your database. A direct connection is easier; you can also use our SSH Tunnel Connection.

Erik Osterman

@sarkis

Erik Osterman

what’s nice about this is you don’t even expose SSH. using autossh you establish persistent tunnels from inside your environments that connect out to chartio

oscar

Tragically enough, it isn’t that that’s the issue

oscar

our VCS is IP whitelisted

oscar

so TF Cloud can’t access our code

Erik Osterman

aha

oscar

But yes that would be a useful solution if/when I hit the roadblock of terraform private infrastructure

oscar

Though in theory it is only running against AWS Accounts

oscar

so shouldn’t be an issue

Erik Osterman

honestly, if you’re that locked down, onprem terraform cloud is what you’re going to probably need

oscar

Aye

oscar

Got a call with them tomorrow

oscar

I’ll feed back in PM about what they say

oscar

Time to get ripped off by SaaS

oscar

the trouble is though, we don’t want on prem

oscar

we want managed services

oscar

less maintenance etc. We don’t want to host it!

Erik Osterman

but you self-host VCS it sounds like

sarkis

@Erik Osterman 100% agree on the connection bit - I’m proposing something like VPC Endpoints to the team after using TFC a bit we ran into the same limitation

oscar

I can’t seem to find it with google / terraform docs

joshmyers

Do they provide one?

dalekurt

Q: How can I target a specific .tf file with terraform? I have multiple tf files in my project and I would like to destroy and apply the resources specified within one specific tf file. I’m aware I can -target the resources within the tf file but I want an easier way if that exists.

Erik Osterman

no easy way that I can think of with pure terraform.

Szymon

Maybe you can put those files into separate folders?

Szymon

Or read about terragrunt

dalekurt

Thanks @Szymon I had been reading up on Terragrunt recently

oscar
04:15:26 PM

Doesn’t appear to be that way

Do they provide one?

MattyB

Hey everyone, I’ve been playing around with Terraform for a couple of weeks. After making a simple POC my modules ended up similar to CloudPosse modules on a much smaller scale. I found your repos just this past weekend and am pretty impressed with them compared to other community modules. One thing I haven’t figured out yet is how to build the infrastructure in one go. It seems to me like you can build the VPC, alb, etc. all at once and then separately build the RDS due to dependency issues. I’d assume there are similar issues with much more complex architectures? I thought it was how I configured my own POC, but using the CloudPosse modules there’s a similar issue. Example:

Error: Invalid count argument

  on .terraform/modules/rds_cluster_aurora_mysql.dns_replicas/main.tf line 2, in resource "aws_route53_record" "default":
   2: count = var.enabled ? 1 : 0

The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.

aknysh

@MattyB the count error happened very often in TF 0.11, it’s much better in 0.12. We have some info about the count error here https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/

aknysh

The error itself has nothing to do with how you organize your modules and projects - it’s separate tasks/issues. Take a look at these threads https://sweetops.slack.com/archives/CB6GHNLG0/p1569434890216300 and https://sweetops.slack.com/archives/CB6GHNLG0/p1569434945216700?thread_ts=1569434890.216300&cid=CB6GHNLG0

aknysh
cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Sharanya

Looking for a terraform module for RDS instance configuration: SQL Server Standard 2017, 2017 v14

aknysh

we have https://github.com/cloudposse/terraform-aws-rds, tested with MySQL and Postgres, was not tested with SQL Server (but could work, just specify the correct params - https://github.com/cloudposse/terraform-aws-rds/blob/master/variables.tf#L111)

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Erik Osterman

AgustínGonzalezNicolini

Now that’s how you build a module


would it be possible to pass this construct:

lifecycle_rules        = {
    id      = "test-expiration"
    enabled = true

    abort_incomplete_multipart_upload_days = 7

    expiration {
      days = 30
    }

    noncurrent_version_expiration {
      days = 30
    }
  }

to a variable with something like map(string)?

Chris Fowles

you could do it with structural types if you want to validate the input map(object({id = string, enabled = bool, etc})) or you could do it as map(any) if you don’t want to validate the input
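
A sketch of the structural-type option, using the keys from the lifecycle_rules example above (treat the exact shape as an assumption):

variable "lifecycle_rules" {
  type = map(object({
    id                                     = string
    enabled                                = bool
    abort_incomplete_multipart_upload_days = number
    expiration                             = object({ days = number })
    noncurrent_version_expiration          = object({ days = number })
  }))
}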

Chris Fowles

as a general practice though i tend to try and make variables somewhat more self documenting and have things like expiration_enable abort_incomplete_multipart_upload_days, etc as separate values as it makes it easier to use the module for others

Chris Fowles
Type Constraints - Configuration Language - Terraform by HashiCorp

Terraform module authors and provider developers can use detailed type constraints to validate the inputs of their modules and resources.

but how do you map:

expiration {
      days = 30
    }

to a map(any), for example?

since it is key = value

it would be so much easier if you could attach a lifecycle policy or include a file for variable blocks

Chris Fowles

yeh ok good point - blocks are a bit of a pain in the butterfly

Chris Fowles

technically they’re a list of maps

Error: Invalid value for module argument

  on s3_buckets.tf line 29, in module "s3_bucket_scans":
  29:   lifecycle_rule        = {
  30:     id      = "test-scans-expiration",
  31:     enabled = true,
  33:     abort_incomplete_multipart_upload_days = 7,
  35:     expiration = {
  36:       days = 30
  37:     }
  39:     noncurrent_version_expiration = {
  40:       days = 30
  41:     }
  42:   }

The given value is not suitable for child module variable "lifecycle_rule"
defined at ../terraform-aws-s3-bucket/variables.tf:102,1-26: all map elements
must have the same type.

it did not like that

I tried this too :

  lifecycle_rule { 
    id = var.lifecycle_rule
  }

and

lifecycle_rule        = <<EOT
  {
    #id      = "test-scans-expiration"
    enabled = true

    abort_incomplete_multipart_upload_days = 7

    expiration {
      days = 30
    }

    noncurrent_version_expiration {
      days = 30
    }
  }
  EOT

it almost worked

aknysh
cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

aknysh

but if you want to have the entire block as variable, why not declare it as object (or list of objects) with different item types including other objects

aknysh
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

aknysh
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Chris Fowles
  lifecycle_rule {
    id      = module.default_label.id
    enabled = var.lifecycle_rule_enabled
    prefix  = var.prefix
    tags    = module.default_label.tags

    noncurrent_version_transition {
      days          = var.noncurrent_version_transition_days
      storage_class = "GLACIER"
    }

    noncurrent_version_expiration {
      days = var.noncurrent_version_expiration_days
    }
  }

in my mind (and experience) this is a much better way to do it rather than try and pass a big object as a variable. you end up with a much easier to understand interface to your module

aknysh

yes agree. The only case when you’d want to provide it as a big object var is when the entire block is optional

loren

I prefer using a big object and leaving the usage up to the user

loren

With dynamic blocks it seems to work very well in tf 0.12
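
A sketch of loren's point: feeding an object variable into a dynamic block (the resource shape is assumed; the variable matches the typed sketch above):

resource "aws_s3_bucket" "example" {
  bucket = var.bucket_name

  # One lifecycle_rule block is generated per map entry.
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules

    content {
      id      = lifecycle_rule.value.id
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration.days
      }
    }
  }
}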

aknysh

yea, it all depends (on use case and your preferences)

and

since the other guy is taking too long

and

We are all on the same team

if you have time to look at the alb and kms pull requests I would appreciate it

aknysh

commented on kms

done on that; anything I can help with on terraform-aws-alb/pull/29?

ok, I like the s3-website module example, I will do something like that

Mads Hvelplund

does anyone know how to merge a list of maps into a single map with tf 0.12? i.e, i want:

[
  {
    "foo": {
      "bar": 1
      "bob": 2
    }
  },
  {
    "baz": {
      "lur": 3
    }
  }
]

to become:

{
  "foo": {
    "bar": 1
    "bob": 2
  }
  "baz": {
    "lur": 3
  }
}
Mads Hvelplund

the former is the output of a for loop that groups items with the ellipsis operator
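
For reference, the merge(local.list_of_maps...) answer shown under 2019-10-09 above, written out as a self-contained TF 0.12 sketch:

locals {
  list_of_maps = [
    { foo = { bar = 1, bob = 2 } },
    { baz = { lur = 3 } },
  ]

  # "..." expands the list into individual arguments to merge().
  merged = merge(local.list_of_maps...)
}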

2019-10-07

Hemanth

how does terraform treat ${path.module} - what value does it take? Example: in file("${path.module}/hello.txt")

Hemanth

file("${path.module}/hello.txt") is same as file("./hello.txt") ?

aknysh

if used from the same folder (project), then yes, the same

aknysh

${path.module} is useful when the module is used from another external module

aknysh

for example:

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

${path.module} still points to the module’s root, not the example’s root (even if used from the example)
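
A small sketch of the distinction (the layout is hypothetical): if examples/complete/main.tf instantiates a module living in modules/label, then inside modules/label:

locals {
  # Resolves relative to modules/label, even when called from the example
  from_module = file("${path.module}/hello.txt")

  # Resolves relative to the root configuration being run
  from_root = file("${path.root}/hello.txt")
}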

Cloud Posse
04:05:29 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 16, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

2019-10-05

Milos Backonja

HI, I am using a custom script to export swagger from a REST API and to create boilerplate for an API Gateway import. I created a deployment resource which will trigger a redeploy once the swagger export changes. It looks something like this:

Create Deployment

resource "aws_api_gateway_deployment" "restapi" {
  rest_api_id = aws_api_gateway_rest_api.restapi.id

  variables = {
    trigger = filemd5("./result.json")
  }

  lifecycle {
    create_before_destroy = true
  }
}

For now it works as expected. I am just trying to find a solution to not destroy previous deployments on new deploys. Any suggestions more than welcome.

2019-10-04

Fred Light

Hi guys. This morning I just copied a working geodesic into a new one and I got an error (403) when trying to upload the tfstate to the tfstate-backend S3 bucket. Digging into this I found a bucket policy that was not present in the other buckets. I don’t really understand where it can come from since all the versions and image tags are fixed ones. Has somebody already faced such a case?

oscar

403 is auth. You’re likely trying to upload to the bucket of the old account/geodesic shell

oscar

Go into your Terraform project’s directory and run env \| grep -i bucket and see what the S3 bucket is to confirm or disprove that theory.

Fred Light
old one:
TF_BUCKET_REGION=eu-west-3
TF_BUCKET=go-algo-dev-terraform-state
Fred Light
new one:
TF_BUCKET_REGION=eu-west-3
TF_BUCKET=go-algo-commons-terraform-state
oscar

Hm. Not what I thought

Fred Light

in fact I don’t understand how the old one didn’t have the Bucket Policy which is clearly defined in aws-tfstate-backend (still, it doesn’t explain why it was blocking me but …)

Fred Light

the usage was :

  • start geodesic
  • assume-role
  • go in tfstate-backend folder
  • comment s3 state backend
  • init-terraform
  • terraform plan/apply
  • uncomment s3 backend
  • re init-terraform and answer yes about uploading existing state -> 403
Fred Light

and the error is :

Initializing modules...
- module.tfstate_backend
- module.tfstate_backend.base_label
- module.tfstate_backend.s3_bucket_label
- module.tfstate_backend.dynamodb_table_label

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Releasing state lock. This may take a few moments...
Error copying state from the previous "local" backend to the newly configured "s3" backend:
    failed to upload state: AccessDenied: Access Denied
	status code: 403, request id: XXXXXXXXXXXXX, host id: xxxxxxxxxxxxxxxx

The state in the previous backend remains intact and unmodified. Please resolve
the error above and try again.
Fred Light

deleting bucket policy allowed upload with the same command

oscar

The state in the local tfstate file is your old bucket

oscar

I suspect it is using the same .terraform directory as the old geodesic / account?

oscar

I’ve had that a few times when I forget to reinit properly.

Fred Light

humm should not since this ignored in dockerignore no ?

Fred Light

will have to clone it again so i will check about it

Fred Light

and also the terraform-state folder is coming from FROM cloudposse/terraform-root-modules:0.106.1 as terraform-root-modules

Fred Light

so there should be no way of importing a pre-existing .terraform folder, unless I am missing something

oscar

Yeh wasn’t sure if you were running from /localhost or /conf

Fred Light

yes from /conf

Fred Light

ok i duplicated a second env from the original one and i have got the same symptoms (403 when uploading tf state)

Fred Light

the policy applied to the bucket is this one :

Fred Light
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::go-algo-prod-terraform-state/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::go-algo-prod-terraform-state/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}
Fred Light

tried to remove the 1st statement => 403, removed only the 2nd one => 403, so I guess both are causing the issue

Fred Light

running in eu-west-3, if that makes a difference about ServerSideEncryption
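
For context, that policy denies any PutObject that doesn’t request server-side encryption, so the state upload has to send the AES256 header. A sketch of an s3 backend block that satisfies it (bucket/key values assumed from the thread):

terraform {
  backend "s3" {
    bucket  = "go-algo-prod-terraform-state"
    key     = "terraform.tfstate"
    region  = "eu-west-3"
    encrypt = true # makes Terraform upload state with SSE (AES256)
  }
}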

Bogdan

anyone know what the syntax of the container_depends_on input in cloudposse/terraform-aws-ecs-container-definition should look like? CC: @Erik Osterman @sarkis @aknysh

Bogdan

in https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html it’s specified as containing two values: condition and containerName

ContainerDependency - Amazon Elastic Container Service

The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.

Bogdan

so should I pass a list of maps that contains those two keys?

aknysh

try to provide a list of maps with those keys

aknysh

let us know if any issues

Bogdan

will do thanks!

Bogdan

I’m using something like this in the terraform-aws-ecs-container-definition module with my tf version being 0.12.9:

  log_options = {
    awslogs-create-group  = true
    awslogs-region        = "eu-central-1"
    awslogs-groups        = "group-name"
    awslogs-stream-prefix = "ecs/service"
  }

but I get
Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal bool into Go struct field LogConfiguration.Options of type string

Bogdan

I’ve tried to use (double-)quotes for the “true” and still didn’t get rid of it…

Bogdan

@sarkis @aknysh ^^^

cabrinha
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

cabrinha

and I’m wondering, why is there an alb_ingress definition, but it seems that there is no ALB being created?

sarkis

The listener needs to be created in that module and then passed along here: https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L94

~https://github.com/cloudposse/terraform-aws-ecs-alb-service-task module creates the ALB itself~

it’s been a while since i used these - see @aknysh response below for a better explanation

aknysh

ALB is created separately

aknysh
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

cabrinha

So, when I try to spin down this stack, I get:

module.alb_ingress.aws_lb_target_group.default: aws_lb_target_group.default: value of 'count' cannot be computed
cabrinha

And, if I spin the stack up, then destroy it, I hit this error, and even running a new plan then gives the same error.

aknysh

this project was applied and destroyed many times w/o those errors https://github.com/cloudposse/terraform-root-modules/tree/master/aws/ecs

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

it’s for atlantis, but you can take the parts related only to ECS and test

cabrinha

I think the issue is that in the alb_ingress module, I’m passing in the default_target_group_arn from the ALB module into target_group_arn of the alb_ingress module…

cabrinha

So, I’m a little confused as to how to connect the alb_ingress module with the alb module, or if I even need to.

cabrinha

The ALB module has no examples and the alb_ingress module has an example of only itself being called. I think it’d be nice to have an example that shows putting these two together.

cabrinha

Somehow I keep getting this error:

aws_ecs_service.ignore_changes_task_definition: InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-west-2::targetgroup/qa-nginx-default/b4be3dbba5e084ab does not have an associated load balancer.
cabrinha

Looks like the service and the ALB are being created at the same time, but really the service needs to be created after the alb.

aknysh

yes

aknysh

ALB needs to be created first

aknysh

so either use -target, or call apply two times

aknysh

(we’ll be adding examples this week when we convert all the modules to TF 0.12)

cabrinha

I suppose in TF 0.12 they’ll have depends_on for modules soon.

aknysh

it’s not about depends_on, TF waits for the ALB to be created, BUT it does not wait for it to be in READY state

aknysh

and it takes time for ALB to become ready

cabrinha

right

aknysh

so depends_on will not help here

Bogdan
05:38:28 PM

It turns out that starting with capital “T” solves the issue; it should be “True” not true

I’m using something like this in the terraform-aws-ecs-container-definition module with my tf version being 0.12.9:

  log_options = {
    awslogs-create-group  = true
    awslogs-region        = "eu-central-1"
    awslogs-groups        = "group-name"
    awslogs-stream-prefix = "ecs/service"
  }

but I get
Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal bool into Go struct field LogConfiguration.Options of type string

Brij S

i’ve got a variable, "${aws_ssm_parameter.npm_token.arn}" which ends up being arn:aws:ssm:us-west-2:xxxxxxxxx:parameter/CodeBuild/npm-token-test Is it possible to remove the -test based on if a variable is true or not?

Brij S

tried this, with no luck

${var.is_test == "true" ? "${substr(aws_ssm_parameter.npm_token.arn, 0, length(aws_ssm_parameter.npm_token.arn) - 5)}" : "${aws_ssm_parameter.npm_token.arn}"}
aknysh

why no luck?

aknysh

you don’t need to do interpolation inside interpolation

aknysh

"${var.is_test == "true" ? substr(aws_ssm_parameter.npm_token.arn, 0, length(aws_ssm_parameter.npm_token.arn) - 5) : aws_ssm_parameter.npm_token.arn}"

aknysh

and what’s the provided value for var.is_test?

Brij S

true

Brij S

that worked, forgot I didn’t need to do the interpolation inside

Brij S

thanks @aknysh!

2019-10-03

oscar

@Erik Osterman Many moons ago we discussed that Dockerfile is for global variables (#geodesic), .envrc is good for slightly less global variables, but shared across applications, and that for terraform only variables it should live in an auto.tfvars file.

I’ve since followed that rule but I’ve noticed the following warning:


Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.

It must be time to move away from that approach and actually start placing Terraform variables in our .envrc with USE tfenv right?

Erik Osterman

I don’t think that’s what this means.

Erik Osterman

It means you have a variable in one of your .tfvars files that does not have a corresponding variable block:

variable "...." { 
}
Jakub Korzeniowski

Hi . I’m dealing with some really old terraform files that sadly don’t specify the version of the modules that have been used to apply the infrastructure. Is there a way to tell that from the tf state file?

Erik Osterman

Hrmmmm that could be tricky. I can’t recall if the tfstate file persists that. Have you had a look at it?

Daniel Minella

Does anyone have a terraform script that creates a stack with prometheus and thanos?

drexler

i built a VPC pre TF v0.12 using the terraform-aws-modules/vpc/aws. I forgot to pin down the version. Anyone know the last compatible version with TF v0.11.x? Getting the following error:

Error downloading modules: Error loading modules: module vpc: Error parsing .terraform/modules/21a99daec297cf2c47674e5f63337da8/terraform-aws-modules-terraform-aws-vpc-5358041/main.tf: At 2:23: Unknown token: 2:23 IDENT max
aknysh
Terraform AWS modules

Collection of Terraform AWS modules supported by the community - Terraform AWS modules

oscar
terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

any reason why this: https://github.com/cloudposse/terraform-aws-kms-key/blob/0.11/master/main.tf#L43 got deleted from the TF 0.12 version?

cloudposse/terraform-aws-kms-key

Terraform module to provision a KMS key with alias - cloudposse/terraform-aws-kms-key

looks like there is already a pull request for this :

cloudposse/terraform-aws-kms-key

Terraform module to provision a KMS key with alias - cloudposse/terraform-aws-kms-key

aknysh

it was added to TF 0.11 branch after it was converted to 0.12

I see, so that PR should be ok to merge?

aknysh

after a few minor issues fixed, yes (commented in the PR)

awesome

davidvasandani

Is it possible to disable the log bucket for the terraform-aws-s3-website module? It seems like it was going to be configureable but I can’t find it. https://github.com/cloudposse/terraform-aws-s3-website/issues/21#issuecomment-420829113

Error deleting S3 Bucket {logs-bucket} : BucketNotEmpty · Issue #21 · cloudposse/terraform-aws-s3-website

module Parameters module "dev_front_end" { source = "git://github.com/cloudposse/terraform-aws-s3-website.git?ref=master" namespace = "namespace" stage = "…

Erik Osterman

I’m not sure off the bat, however, @aknysh is working on terraform all next week (as this week)


Erik Osterman

he’s currently working on the beanstalk modules

Erik Osterman

we can maybe get to this after that (or if you want to open a PR)

aknysh

it’s easy to implement: a new var.logs_enabled and dynamic block for logging

aknysh

but when it’s enabled, it still can’t be destroyed automatically w/o adding force_destroy as is done for the main bucket https://github.com/cloudposse/terraform-aws-s3-website/blob/master/main.tf#L45

cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

aknysh

that’s probably another var.logs_force_destroy
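A sketch of the shape that change could take (variable and bucket names are hypothetical, not the module's actual code):

variable "logs_enabled" {
  type    = bool
  default = true
}

variable "logs_force_destroy" {
  type    = bool
  default = false
}

resource "aws_s3_bucket" "logs" {
  count         = var.logs_enabled ? 1 : 0
  bucket        = "example-site-logs"
  acl           = "log-delivery-write"
  force_destroy = var.logs_force_destroy
}

resource "aws_s3_bucket" "default" {
  bucket = "example-site"

  # render the logging block only when the log bucket exists
  dynamic "logging" {
    for_each = var.logs_enabled ? ["enabled"] : []
    content {
      target_bucket = aws_s3_bucket.logs[0].id
      target_prefix = "logs/"
    }
  }
}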

davidvasandani

Cool! Thanks @aknysh and @Erik Osterman I’ll work on a PR. Sorry for not just looking at the code and realizing that a PR would resolve it.

Hemanth

what’s happening here -> ` count = "${var.add_sns_policy != "true" && var.sns_topic_arn != "" ? 0 : 1}" ` can someone explain? referring from - https://github.com/cloudposse/terraform-aws-efs-cloudwatch-sns-alarms/blob/master/main.tf

cloudposse/terraform-aws-efs-cloudwatch-sns-alarms

Terraform module that configures CloudWatch SNS alerts for EFS - cloudposse/terraform-aws-efs-cloudwatch-sns-alarms

take a look here, https://bit.ly/1mz5q0g wikipedia article on the ternary operator
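Reading it out: create the resource (count = 1) unless both conditions hold, i.e. skip it only when no policy is wanted and an external topic ARN was supplied. Annotated:

# count = condition ? value_if_true : value_if_false
# 0 copies when an external topic ARN is passed in and no policy is requested, else 1
count = "${var.add_sns_policy != "true" && var.sns_topic_arn != "" ? 0 : 1}"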


2019-10-02

drexler

figured out my problem…the custom AMI i was using didn’t have user data script execution enabled. The little things we miss…

1
01:35:41 PM

Helpful question stored to @:

Why isn't my User Data script running?
Erik Osterman

@here public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

sohel2020

Does sweetops have any terraform module to create a Kubernetes cluster using kops?

Erik Osterman

no, we use kops directly, but we use terraform to provision backing services needed by kops

Erik Osterman

i have seen some modules out there do what you say though…

Erik Osterman

(not by us)

sohel2020

@Erik Osterman Could you please give me a link?

Erik Osterman
GitHub

GitHub is where people build software. More than 40 million people use GitHub to discover, fork, and contribute to over 100 million projects.

1
Bogdan

Can anyone recommend a 0.12-ready ECS service module?

Bogdan

I tried airship but can’t even get past terraform init unfortunately

2019-10-01

rbadillo

Good morning, I’m having issues with block_device_mappings in a launch template

rbadillo

Terraform 0.12

rbadillo
  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_type = "gp2"
      volume_size = 64
    }
  }

  block_device_mappings {
    device_name = "/dev/sdb"

    ebs {
      volume_type = "io1"
      iops        = var.iops
      volume_size = var.volume_size
    }
  }
rbadillo

That’s resulting on this

rbadillo

Any idea what am I doing wrong ?

aknysh

what’s the issue?

rbadillo

terraform is adding iops to the gp2 volume

rbadillo

don’t know why

aknysh

so maybe that’s a default

rbadillo

Actually everything is good

rbadillo

false alarm

rbadillo

thanks for your help

drexler

need some gurus with this. I have a powershell script to pull an artifact from S3 and install the application during the EC2 bootstrap process. Standalone, the Powershell script works. When applied via the User Data property, I get the following in the Ec2ConfigLog when i remote into the instance to see what happened:

2019-10-01T16:13:54.710Z: Ec2HandleUserData: Message: Start running user scripts
2019-10-01T16:13:54.726Z: Ec2HandleUserData: Message: Could not find <runAsLocalSystem> and </runAsLocalSystem>
2019-10-01T16:13:54.726Z: Ec2HandleUserData: Message: Could not find <powershellArguments> and </powershellArguments>
2019-10-01T16:13:54.726Z: Ec2HandleUserData: Message: Could not find <persist> and </persist>

I’ve ruled out the Powershell version since v4 works with it and even base64-decoded what Terraform does and uploads when creating the resource. The target instance is a Windows Server 2012 box. Ideas appreciated.
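For reference, Ec2Config scans Windows user data for XML-style tags like <powershell> plus the optional <persist>/<runAsLocalSystem> tags named in that log; a user_data block carrying them might look like this sketch (the AMI variable and S3 details are hypothetical):

resource "aws_instance" "windows" {
  ami           = var.windows_ami_id
  instance_type = "t3.medium"

  user_data = <<-EOT
    <powershell>
    Read-S3Object -BucketName my-artifacts -Key app.zip -File C:\app.zip
    # ... install the application ...
    </powershell>
    <persist>true</persist>
  EOT
}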

2019-09-30

Cloud Posse
04:03:51 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 09, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

Milos Backonja

Guys, i’ve been using https://github.com/cloudposse/terraform-aws-vpc-peering to peer two vpcs and it works awesome. On the current project I need to peer N VPCs all with each other. As the number of VPCs grows it becomes pretty hard to manage everything even with terraform. Is there any way to dynamically create a peering mesh? CIDRs are carefully chosen so there will be no overlapping, and I can fetch all vpcs with a single data source. This shot from AWS describes my setup perfectly

AgustínGonzalezNicolini

I’d suggest using transit gateway

Working with Shared VPCs - Amazon Virtual Private Cloud

VPC sharing allows multiple AWS accounts to create their application resources, such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, Amazon Redshift clusters, and AWS Lambda functions, into shared, centrally-managed Amazon Virtual Private Clouds (VPCs). In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.

Stephane Minisini

@Milos Backonja I would look into Transit Gateway. This allows you to have a hub and spoke type of network and manage the routing tables centrally.
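A rough sketch of the hub-and-spoke shape (needs TF 0.12.6+ for resource for_each; the spokes variable is hypothetical): each VPC attaches to the gateway once instead of peering with every other VPC.

variable "spokes" {
  # map of VPC ID => subnet IDs to place the attachment in
  type = map(list(string))
}

resource "aws_ec2_transit_gateway" "hub" {
  description = "hub for inter-VPC routing"
}

resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
  for_each = var.spokes

  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = each.key
  subnet_ids         = each.value
}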

loren

second this

Milos Backonja

Awesome, thanks a lot, This simplifies my setup enormously. I will need to check/estimate costs.

loren

you can do a bunch of other cool things with transit gateways, like centralize nat gateways, or hook in a central direct connect

rbadillo

Hi Guys, is anyone here using Terraform Enterprise?

Joan Hermida

Hub n’ Spoke with VPC Transit Gateway

Brij S

does anyone know how to add private subnets to the default vpc using terraform?

Don’t use the default vpc, it is bad practice…

Brij S

is there a module that creates a vpc with private subnet?

yes, just go to the cloudposse github and search for vpc and subnets

we use their modules and they work great

aknysh

thanks @PePe

aknysh
cloudposse/terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster

Brij S

specifically lines 5-24, right?

yes

rohit

does the aws_alb_listener resource support multiple certificate_arns?

rohit

i think it does using aws_lb_listener_certificate

1
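A minimal sketch of attaching an additional certificate with aws_lb_listener_certificate (the listener and certificate names are hypothetical):

resource "aws_lb_listener_certificate" "extra" {
  listener_arn    = aws_lb_listener.https.arn
  certificate_arn = aws_acm_certificate.extra.arn
}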
aknysh

For those interested in the EKS modules, we’ve converted them to TF 0.12

2019-09-29

Bruce

Hey guys, I am looking for the best way to roll back a change to an ASG to the previous known working ami as part of CICD pipeline with Terraform. Thinking of using a script to tag the previous AMI and using that to identify last known config. Has anyone else solved this problem?

I’ve been asked to provision 3 EKS clusters: Dev, Staging, and Prod. What is the way that you guys do this? Currently, I’m thinking of

  • Having 3 branches in my git repo called “dev”, “staging”, and “prod”
  • Having 3 .tfvars files called dev.tfvars, staging.tfvars, prod.tfvars
  • If I commit to dev, My CICD runs terraform apply using a workspace called dev, using dev.tfvars
Nikola Velkovski

Hi @, personally I am a fan of workspaces. We used to have this setup but without the fixed branches; CI/CD automatically deployed a branch to staging, and for prod it was an interactive apply (if tests passed)

2019-09-27

Rajesh Babu Gangula

@here I am trying to upgrade from v0.11.14 to v0.12 and after going through the upgrade steps and fixing some code changes… now I am seeing the following issue

Error: Missing resource instance key

  on .terraform/modules/public_subnets.public_label/outputs.tf line 29, in output "tags":
  29:         "Stage", "${null_resource.default.triggers.stage}"

Because null_resource.default has "count" set, its attributes must be accessed
on specific instances.

For example, to correlate with indices of a referring resource, use:
    null_resource.default[count.index]

did anyone face a similar issue and was able to fix it?

aknysh

try "${join("", null_resource.default..*.triggers.stage}"

AgustínGonzalezNicolini

Hi

AgustínGonzalezNicolini

I would apreciate some help with the terraform-aws-elasticsearch module

AgustínGonzalezNicolini

when trying to use it from the complete example

AgustínGonzalezNicolini

i get in a plan the following

AgustínGonzalezNicolini

that’s one example but i get that for all the variables

aknysh

you have to provide values for all variables

AgustínGonzalezNicolini

it seems as if it were not reading the set variables

AgustínGonzalezNicolini

yeah, but in the variables.tf file?

aknysh

or to use the .tfvars files, use:

aknysh
terraform plan -var-file="fixtures.us-west-1.tfvars"
terraform apply -var-file="fixtures.us-west-1.tfvars"
AgustínGonzalezNicolini

hoooo i see

aknysh

but don’t use our values

aknysh

change them

AgustínGonzalezNicolini

so the modules like module "elasticsearch" { blah blah should be empty of values if i use a tfvars file, right?

aknysh

you can provide values for the vars from many diff places https://www.terraform.io/docs/configuration/variables.html

Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

rohit

how do you provide credentials to private terraform github repository module ?

AgustínGonzalezNicolini

like this in your providers.tf

AgustínGonzalezNicolini

then just set the values in the variables.tf files

rohit

thanks

rohit

and how do i provide the path to github module if it is not at the root level

rohit

for example, source = "git@github.com:hashicorp/example.git"

rohit

but my main.tf is under the modules directory

rohit

@AgustínGonzalezNicolini how would i access it ?

AgustínGonzalezNicolini
aknysh

git@github.com:hashicorp/example.git//myfolder?ref=tags/x.y.z

rohit

thanks

AgustínGonzalezNicolini

Thanks @aknysh!!!

2019-09-26

Robert

Hey does anyone have a terraform party slackmoji?

Robert

I will trade you one terraform-unicorn-dab slackmoji.

Robert
1

omg I love this

1
AgustínGonzalezNicolini

I’d love a terraform-parrot

AgustínGonzalezNicolini

hahaha

Robert

I was hoping for something like my kubernetes party:

Robert

I stole that from kubernetes.slack.com

AgustínGonzalezNicolini

Nice!

Robert

I just made it!

Robert
Robert

Probably not my best work, but not bad for a first gif

Robert
02:37:27 PM

¯\_(ツ)_/¯

AgustínGonzalezNicolini

Thanks

Erik Osterman

I’ve added and

Joan Hermida

Niiice!

Joan Hermida

Where do I get that unicorn XD

Joan Hermida

I really need it in my workspace

Erik Osterman

1) download the icons above 2) go here https://$team.slack.com/customize/emoji where $team is your slack team

Joan Hermida

Ohhh it is above

1
Joan Hermida

XD

2019-09-25

Sharanya

Components for secure UI hosting in S3:
  • S3 — for storing the static site
  • CloudFront — for serving the static site over SSL
  • AWS Certificate Manager — for generating the SSL certificates
  • Route53 — for routing the domain name to the correct location
Did anyone come across any modules for this in terraform?

aknysh
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

that’s how we provision static site from S3 https://docs.cloudposse.com/

Sharanya

thankq @aknysh

oscar
amancevice/terraform-aws-serverless-pypi

S3-backed serverless PyPI. Contribute to amancevice/terraform-aws-serverless-pypi development by creating an account on GitHub.

kj22594

Hi all,

I’ve been using terraform (0.10 & 0.11) for close to three years now and as terraform 0.12 gets more support/becomes more of the industry standard, my team is looking to adopt it in a way where we can rearchitect our terraform structure, and reduce the general number of pain points across the team.

Currently we are a multi-region AWS shop that has single terraform repos for every service we deploy, with modules at the root of the repo, and directories representing each of our environments (qa-us-east-1, qa-eu-west-1). We run terraform from within those environment specific directories and push remote state to S3 to maintain completely separate state.

We’re thinking about how we can merge all of this into a single repo where:

  • There are modules that can be reused across all of our different services (they’d either live at the root of the base terraform repo or in a separate terraform modules repo that we can reference from within our base repo)
  • We duplicate as little code as possible (probably obvious but still worth mentioning)
  • We continue to keep all state separate on a per environment basis
  • Follow terraform best practices to make sure that upgrade paths continue to be easy/straightforward

We also want to keep in mind that we are shifting to a multi account AWS organization where our terraform will be deploying into different AWS accounts as well.

The team so far has demoed both Terragrunt and Terraform Workspaces. We are also considering not using workspaces or Terragrunt but still migrating to the single repo structure. There have been mixed opinions about all options considered. I’d love to get feedback from the community if anyone has opinions based on current or previous experiences with either.

kj22594

Please note that we are currently not using Terraform Enterprise but that has been an option that could be considered as well

Tom de Vries

Regarding the multiple AWS accounts, we have a similar setup where, depending on the env directory you’re in, we hop into the correct AWS Account. Would that work for you, or are you planning on deploying the same environment within multiple accounts?

kj22594

it would be different environments within multiple accounts. The rough plan is to have each of our teams have a production & development/test account. So one thought was that the specific account would be another extracted layer of directories, either a level above or below the env directory

aknysh

@kj22594 take a look here, similar conversation https://sweetops.slack.com/archives/CB6GHNLG0/p1569261528160800

Hi guys, Any of you has experience with maintenance of SaaS environments? What I mean is some dev, test, prod environments separate for every Customer? In my case, those environments are very similar, at least the core part, which includes, vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is to create one more module on top of it, called e.g. myplatform-core. The reason why I want to do that is instead of copying and pasting puzzles of modules between environments, I could simply create env just by creating/importing my myplatform-core module and passing some vars like name, location, some scaling properties. Any thoughts about it, is it good or bad idea in your opinion?

I appreciate your input.

kj22594

thanks. I’ll take a look


aknysh

in short, we use the following:

aknysh
  1. A catalog of top-level modules where we assemble the low-level modules together and connect them. They are completely identity-less and could be deployed in any AWS account in any region https://github.com/cloudposse/terraform-root-modules/tree/master/aws
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh
  2. A container (geodesic, https://github.com/cloudposse/geodesic) with all the tools required to provision cloud infrastructure
cloudposse/geodesic

Geodesic is a cloud automation shell. It&#39;s the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

aknysh
  3. Then, for a specific AWS account and specific region, we create a repo and Docker container, e.g. https://github.com/cloudposse/testing.cloudposse.co
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh

it provides:

aknysh

1) all the tools to provision infrastructure

aknysh

2) Settings and configs for a specific environment (account, region, stage/env, etc.) NOTE that secrets are read from ENV vars or SSM using chamber

aknysh

3) The required TF code for each module that needs to be provisioned in that account/region gets loaded dynamically https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks/.envrc

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh

4) to login to AWS, an IAM role gets assumed in the container (we use aws-vault)

aknysh

so once inside that particular container (testing.cloudposse.co), you have all the tools, all required TF code, and all the settings/configs (that specify where and how the modules get provisioned)

aknysh

so the code (logic) is separated from data (configs) and the tools (geodesic), but get combined in a container for a particular environment

kj22594

Wow, thanks. That makes a ton of sense and seems to be a very sound way of approaching this problem. I do really like the idea of having root level modules repo where you can interconnect different modules for use cases that happen numerous times but also having the modules split out so that they can be reused separately too

aknysh

yes

aknysh

also, while terragrunt helps you to organize your code and settings, this approach gives you much more: code/settings/tools in one container related to a particular environment

aknysh

(terragrunt still can be used to organize the code if needed) https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/

Terraform Without Wrappers is AWESOME!

One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…

aknysh

the nice part about all of that is that the same container could be used from three different places: developer machine, CI/CD pipelines (those that understand containers like Codefresh or GitHub Actions), and even from GitHub itself using atlantis (which is running inside geodesic container) - that’s how we do deployment and testing of our modules on real AWS infrastructure

kj22594

That is really cool. Atlantis is something that I’ve had conversations with a friend about but we’ve never actually implemented it or even tested it

kj22594

I really appreciate this, this is all great knowledge and insight

Erik Osterman

public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

3

2019-09-24

gyoza

you just need to replace " " with "_" in the old value

cabrinha

You guys using terraform cloud at all yet?

What are the overall benefits?

cabrinha

Visualization into runs via the web UI. You can see what’s been applied recently and how that run went.

cabrinha

You can lock down certain users, you can also plan/apply automatically based on changes to git.

Interesting. I’ll have to check it out. Used to getting the auto features baked into my CI workflow, so if tf-cloud can potentially simplify that, it could be a win.

Does the visualization piece look at anything outside the tf-state?

Using #atlantis for now, as it is more flexible

Though terraform cloud does look appealing

leonawood

Can you use terraform_remote_state data source as an input attribute for subnet in the cloudposse aws ec2 module?

leonawood

I am using the terraform approved aws vpc module to create my VPC, and have correctly setup all my outputs, one specific being a public_subnet ID and I am trying to reference said subnet ID as a terraform_remote_state data source as the subnet attribute but am not sure of the proper syntax
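A sketch of the syntax in question, assuming an S3 backend and a public_subnets output (all names hypothetical; on 0.12 outputs live under .outputs, on 0.11 they are accessed directly on the data source):

data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-west-2"
  }
}

locals {
  # pass this to the EC2 module's subnet input
  public_subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
}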

aknysh

If I create NATs in one module, is there a way to get a list of NAT GW and pass it to a new sg with TF?

leonawood

hey @aknysh that helped me

1
leonawood

thank you!

Brij S

I have a terraform module in which we use to setup new AWS accounts with certain resources. So this module is generic enough to use on ‘dev’ aws account, ‘qa’ account and ‘prod’ account for say. However, I need to only create some resources based on the environment. How can I achieve this with a module? I saw this online: https://github.com/hashicorp/terraform/issues/2831

Ignore resource if variable set. · Issue #2831 · hashicorp/terraform

We have a couple of extra terraform resources that need creating under certain conditions. For example we use environmental overrides to create a "dev" and a "qa" environment fr…

Brij S

is this still the best way?

Brij S

was about to try that out but read that if the count is set to 0, it would destroy the resource ?

aknysh

for all resources in the module, you could use count = var.environment == "prod" ? 1 : 0 or count = var.environment == "qa" ? 1 : 0 etc.

aknysh

or any combination of the conditions

Brij S

so adding count = var.environment == "prod" ? 1 : 0 would ensure the resource is only created in prod?

aknysh

it will ensure that if var.environment == "prod" then the resource will be created. If you run it in prod, it will be in prod.

aknysh

at the same time, you could make a mistake and set var.environment == "prod" and run it in dev, then it will be created as well in dev

2
aknysh

@Brij S you need some kind of container (or separate repo) where you set all configs for let’s say prod (e.g. region and AWS prod account ID) and where you set var.environment == "prod"

aknysh

when you run it, it will be used only in the prod account and since var.environment == "prod", the resource will be created

aknysh

so a better strategy would be not to create a super-giant module with many conditions to create resources or not

aknysh

divide the big module into small modules

aknysh

then use some tools to combine only the required modules into each account repo

aknysh

the tool could be terragrunt, or what we do using geodesic and remote module loading https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/

Terraform Without Wrappers is AWESOME!

One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…

leonawood

anyone here split up state files? we use tf workspaces and it works quite nicely. I am interested if there’s a way to combine all the outputs into one file though for reference?

leonawood

so I can just send to our sys admin and it contain all the relevant details

Brij S

@aknysh i will look into terragrunt, as for now I’d like to use the above suggestion with TF11, but having some issue with syntax: ${var.aws_env} == "prod" ? "1" : "0" doesn’t work - what am i missing?

aknysh

"${var.aws_env == "prod" ? 1 : 0}"

Brij S

what about the closing }

aknysh

need it too

Brij S

cool let me try that

johncblandii
#21: Add support for more config options by johncblandii · Pull Request #22 · cloudposse/terraform-aws-alb-ingress

Feature Added the following with sensible defaults to not break the current consumers: health check variables to enable/disable and control the port + protocol slow_start stickiness // CC @aknysh…

johncblandii

I did not check the provider versions so unsure if it’ll break consumers or not


johncblandii

added a simple example too

aknysh

thanks @johncblandii

aknysh
cloudposse/terraform-aws-alb-ingress

Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress

johncblandii

no prob

Brij S

is there a way to create an IAM user, generate access keys and plug them into paramstore with terraform?
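All three pieces exist as plain resources, so a sketch could look like the following (names and parameter paths hypothetical; note the secret also lands in the Terraform state, so the state backend must be treated as sensitive):

resource "aws_iam_user" "ci" {
  name = "ci-user"
}

resource "aws_iam_access_key" "ci" {
  user = aws_iam_user.ci.name
}

resource "aws_ssm_parameter" "access_key_id" {
  name  = "/ci/aws_access_key_id"
  type  = "SecureString"
  value = aws_iam_access_key.ci.id
}

resource "aws_ssm_parameter" "secret_access_key" {
  name  = "/ci/aws_secret_access_key"
  type  = "SecureString"
  value = aws_iam_access_key.ci.secret
}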

2019-09-23

If I create NATs in one module, is there a way to get a list of NAT GW and pass it to a new sg with TF?

Vlad Ionescu

Your module can output the list of NAT GWs and you can do whatever you desire with that list

is that only if I am creating that sg within the same module?

Vlad Ionescu

Nope.

So there are levels. Think of them as boxes. Terraform resources have attributes (variables you set, say ami_name for an EC2 instance) and outputs (say, instance_name). You can take that output and play around with it in the same module. Or you can get that output and push it out of your module — your module now outputs that value too.
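A minimal sketch of that flow (module path and names hypothetical):

# inside the module: expose the NAT gateway IDs
output "nat_gateway_ids" {
  value = aws_nat_gateway.default.*.id
}

# in the caller: consume the module's output, e.g. re-export or feed it elsewhere
module "network" {
  source = "./modules/network"
}

locals {
  nat_gateway_ids = module.network.nat_gateway_ids
}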

Thank you! Is there another way to do it using just data? like data aws_nat_gateway and then scrape for a list with tags

there are examples in terraform-root-modules of reading the output of other modules using their remote state.. https://github.com/cloudposse/terraform-root-modules/blob/9301b150c89a5543bdd2785ecdacf000ee6c5561/aws/iam/audit.tf#L15

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

guigo2k

@ I believe this post will answer your questions https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa#7077

Thank you!

oscar
Ignore changes to database password by osulli · Pull Request #41 · cloudposse/terraform-aws-rds

why To use this module and not cause a re-creation, you would have to hardcode the password somewhere in your config / terraform code. This is not a secure method. Naturally if you use a secrets sy…

Cloud Posse
04:02:28 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 02, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

Szymon

Hi guys, Any of you has experience with maintenance of SaaS environments? What I mean is some dev, test, prod environments separate for every Customer? In my case, those environments are very similar, at least the core part, which includes, vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is to create one more module on top of it, called e.g. myplatform-core. The reason why I want to do that is instead of copying and pasting puzzles of modules between environments, I could simply create env just by creating/importing my myplatform-core module and passing some vars like name, location, some scaling properties. Any thoughts about it, is it good or bad idea in your opinion?

I appreciate your input.

aknysh

the idea is good. That’s how we create terraform environments (prod, staging, dev, etc.). We have a catalog of terraform modules (just code without settings/configs). Then for each env, we have a separate GitHub repo where we import the modules we need (using semantic versioning so we know exactly which version we are using in which env) and provide all the required config/settings for that environment, e.g. AWS region, stage (prod, staging, etc.), and security keys (from ENV vars or AWS SSM)

Szymon

As I understand, you’re actually not creating a Terraform Module of your core/base infra, but instead you have catalogs/repos per environment with versioned “module puzzles”?

aknysh

for example, we have a catalog of TF modules - reusable code which we can use in any env (prod, staging, dev, testing) https://github.com/cloudposse/terraform-root-modules/tree/master/aws

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

the code does not have any identity, it could be deployed anywhere after providing the required config/settings

aknysh

then for example, in testing env, we create projects for the modules we need (e.g. eks), https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh

and load the module code from the catalog https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks/.envrc (uisng semantic versioning)

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh

but all the config/settings are provided from a few places:

aknysh
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh
  1. Dockerfile (in which we have settings common for all modules in the project) https://github.com/cloudposse/testing.cloudposse.co/blob/master/Dockerfile
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

aknysh
  2. Secrets are from ENV vars, which get populated from diff sources (e.g. AWS SSM, Secrets Manager, Vault, etc.) when the CI/CD deployment pipeline runs, or on a dev machine by executing some commands
Szymon

I see, thank you very much. I started with a different approach: I keep all my environments in one Terraform repository with projects, and I include modules from external git repositories (each module in a separate git repository)

aknysh

that’s what we do too

aknysh

https://github.com/cloudposse/terraform-root-modules is a (super)-catalog of top level modules which are aggregations of low-level modules

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

each project in there connects low-level modules together into a reusable top-level module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/eks.tf#L31

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Szymon

ah, right

aknysh

those aggregations are opinionated since you can have many diff ways to connect low-level modules to create the top-level module you need

Szymon

Interesting approach. I was reading quite a lot recently, best practices with Terraform,TF Up & Running etc. and in most cases people don’t recommend using nested modules, but it looks really reasonable in your case.

aknysh

they are not nested (in that sense)

aknysh

those are module instantiation and connecting them together into a bigger module

aknysh

that’s why we have modules in TF - to reuse them

aknysh

in other modules

Szymon

Actually that was my understanding of the word nested, sorry. English is not my first language

joshmyers

By nested modules, they mean modules of modules. Cloudposse stuff does use modules of modules e.g. module A may use module B, and module B may use module C

joshmyers

It works fine, but can be interesting to debug several layers down

joshmyers

If you want composable modules, there isn’t much of a way around that

joshmyers

And by they, I mean folks behind tf up and running etc

2
Vlady Veselinov

hi y’all

1
Hemanth

Any samples/examples for implementing Cloudwatch events>create new rule>Ec2 Instance State-Change Notification > Target > SNS > email, currently going through official docs

@Hemanth you cannot create an email subscription to an SNS topic with terraform, because they require a confirmation

Callum Robertson

Hey All, has anyone had issues creating azure resources with an s3 backend?

Callum Robertson

@aknysh have you ever used an s3 backend with other providers for resources? I’m getting an issue where my declared resources are being picked up in the state file

aknysh

did not use azure, but you can give more details about the issue, maybe somebody here will have some ideas

Otherwise, you just want to create the following resources: aws_cloudwatch_metric_alarm, aws_sns_topic, and aws_sns_topic_subscription

aknysh

@Hemanth ^
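For the state-change flow Hemanth described, a sketch might be (topic name hypothetical; the email subscription itself still has to be confirmed out-of-band, and the topic needs a policy allowing events.amazonaws.com to publish):

resource "aws_sns_topic" "instance_state" {
  name = "ec2-instance-state-changes"
}

resource "aws_cloudwatch_event_rule" "instance_state" {
  name = "ec2-instance-state-change"

  event_pattern = <<-PATTERN
    {
      "source": ["aws.ec2"],
      "detail-type": ["EC2 Instance State-change Notification"]
    }
  PATTERN
}

resource "aws_cloudwatch_event_target" "sns" {
  rule = aws_cloudwatch_event_rule.instance_state.name
  arn  = aws_sns_topic.instance_state.arn
}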

Hemanth

@aknysh the https://github.com/cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms repo is empty, but thanks, those samples are helpful

cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms

Terraform module that configures CloudWatch SNS alerts for EC2 instances - cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms

aknysh

that one was not implemented

aknysh

PRs are welcome

2019-09-22

guigo2k

guys, any update on this https://sweetops.slack.com/archives/CB6GHNLG0/p1566415698381800 ? Really looking forward to using these modules with TF 0.12

for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20

I forgot to update the version for CI to 0.12, will try and push that out

aknysh

yes, we are working on that now, will be done in the next 2-3 days


guigo2k

thanks for the update @aknysh

2019-09-20

Szymon

Azure Hi everyone, I’m about to move my big terraform configuration into separate modules, but I have a question about best practice regarding resource groups. If I create a resource-group resource in every one of my modules it will be fine, because it will be created once, but if for some reason I remove the entire module or try to redeploy it, won’t Terraform want to delete my resource group (and all other resources/modules)? Should I rather use a data resource to reference a resource group created in another module, or what are your ideas? Thanks

gyoza

hey guys… not sure whats going on but it looks like the 0.9.0 - terraform-aws-cloudfront-s3-cdn module is creating ARN IDs like

“arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXXXX”

for S3 policies to allow Cloudfront access

Nikola Velkovski

ah that’s a new issue

Nikola Velkovski

I just encountered it today

gyoza

oh thank god.

Nikola Velkovski

AWS changed how the API behaves

Nikola Velkovski

in the background

Nikola Velkovski

if you need a quick fix

gyoza

I literally thought i was going crazyh

Nikola Velkovski

hahah it happened to me as well

gyoza

i do ,. please

Nikola Velkovski

sec

Nikola Velkovski
S3 bucket policy invalid principal for cloudfront · Issue #10158 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Nikola Velkovski

the glorious fix is

gyoza

aaaaah

gyoza

can i just downgrade my provider version?

gyoza

lets take a peak

Nikola Velkovski
    principals {
      type        = "AWS"
      identifiers = [replace("${aws_cloudfront_origin_access_identity.this.iam_arn}", " ", "_")]
    }
gyoza

Thank you!

Nikola Velkovski

for now you should be able to patch it until @aknysh or @Erik Osterman wake up and officialy fix it

gyoza

haha Erik is a long time friend of mine, i can hold something over him i think to get it fixed

gyoza

Nikola Velkovski

gyoza

although, I was the one who was usually embarrassing themselves…

gyoza

I think using the replacements only works for current state files, if you’re doing new policies you have to use type CanonicalUser and identifier s3_canonical_user_id
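For reference, that alternative looks roughly like this (though, per the exchange below, AWS may keep rewriting the principal and produce a diff on every apply):

    principals {
      type        = "CanonicalUser"
      identifiers = [aws_cloudfront_origin_access_identity.this.s3_canonical_user_id]
    }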

gyoza

aaaah

Nikola Velkovski

nope that’s not going to work

gyoza

It just applied for me.

Nikola Velkovski

even though CanonicalUser and identifier s3_canonical_user_id will pass tf apply

Nikola Velkovski

try it again

Nikola Velkovski

aws is changing it in the background

Nikola Velkovski

you’ll get a change on every apply

gyoza

really

gyoza

ugh

Nikola Velkovski

at least that’s what happened to me

gyoza

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

gyoza

damn

gyoza

It won’t take the replace suggestion, keeps telling me bad

gyoza
Error: Error putting S3 policy: MalformedPolicy: Invalid principal in policy
gyoza

gonna try something

gyoza

it was too early, i was using dashes lol….

gyoza

underscores work

gyoza

thanks for the help Nikola!

gyoza

gonna lurk here now….

gyoza

Nikola Velkovski

you are welcome

2019-09-19

oscar

How come only the creator of the EKS cluster can connect using the CP moduels?

By default, only the creator of the cluster has access to it using IAM. The aws-auth ConfigMap in the kube-system namespace controls it. You can add an IAM role mapped to a K8s group that will give anyone who is able to assume that role the ability to log in. Looks like CloudPosse’s implementation of the terraform-aws-eks-workers module doesn’t make this configurable yet.

Looks like the template for the ConfigMap is here: https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/config_map_aws_auth.tpl

The EKS cluster example shows it being applied here: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf

Here’s an example of what it would look like with an IAM role bound to a K8s group that would give anyone that is able to assume the role my-eks-cluster-admin the ability to log into the cluster with cluster-admin privileges:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam:role/REDACTED
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

    - rolearn: arn:aws:iam:role/my-eks-cluster-admin
      username: my-eks-cluster-admin
      groups:
        - system:masters

  mapUsers: |

  mapAccounts: |

Then, you would change the command being run in your kubeconfig to use the role by using the -r flag in the aws-iam-authenticator token command.

Be advised that this will defeat some auditability because Kubernetes will see everyone as the user my-eks-cluster-admin. You can do a very similar thing with the mapUsers section in order to map each user you want to give access to with a username in Kubernetes.

The syntax for mapUsers is

mapUsers: |
  - userarn: <theUser'sArn>
    username: <TheUsernameYouWantK8sToSee>
    groups:
      - <TheK8sGroupsYouWantTheUserToBeIn>
oscar

Thank you, we found the answer to this earlier on! Really appreciate your detail!

oscar

We’re planning to fork it when 0.12 of the module goes live to support this customizability

aknysh

thanks guys, we will add additional roles and users mapping (working on 0.12 of the modules now)

oscar

Ah that’s cool. My new firm is really keen to use CP’s own version of 0.12 (not the fork/PR branch). We have our own customizability reqs so once 0.12 is done and pushed we can start extending

oscar

https://github.com/hashicorp/terraform/issues/22649 anyone experiencing this out of nowhere? (All devs using the state file are on 0.12.6)

Error loading state: state snapshot was created by Terraform v0.12.7, which is newer than current v0.12.6 · Issue #22649 · hashicorp/terraform

Terraform Version v0.12.7 Debug Output Error: Error loading state: state snapshot was created by Terraform v0.12.7, which is newer than current v0.12.6; upgrade to Terraform v0.12.7 or greater to w…

aknysh

they have been busy adding new features


aknysh

usually that happened when using 0.12 then trying to read the state with 0.11

aknysh

but now looks like any version bump causes that

oscar

But everyone (2 people, we’re next to each other) using that project is using the same geodesic shell and has the same version 0.12.6… yet the statefile in S3 says 0.12.7 O.O

oscar

neither of us have 0.12.7 which is super weird

aknysh

geodesic has 0.12.6 as well?

oscar

yep!

oscar

or rather

oscar

we are both in geodesic

oscar

and terraform version is 0.12.6 on both our PCs

oscar

No one else feasibly ran this

aknysh

inside geodesic, terraform version is 0.12.6 as well?

oscar

Yes

oscar

on our locals: 0.12.0

oscar

on our geodesics: 0.12.6

oscar

Whilst we’d like to know why, we’re happy to use 0.12.9 etc

oscar

.. but we’re using cloudposses terraform_0.12

oscar

@aknysh I see that 0.12.7 is in your packages https://github.com/cloudposse/packages/blob/master/vendor/terraform-0.12/VERSION

however apk add --update --no-cache terraform_0.12 does not work as expected

cloudposse/packages

Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages

oscar

Ok updated my geodesic FROM to 0.122.4 and that cleared the cache

oscar

now on 0.12.7

Alex Siegman

i thought you needed apk update && apk add --update terraform_0.12@cloudposse, is the @cloudposse not required?

oscar

doh that must be it

oscar

merci beaucoup

Alex Siegman

granted, using the newest geodesic is also nice~ features and bugfixes, oh my

oscar

I was only coming from 0.119 - not that far behind!

Alex Siegman

I also usually customize that in my own dockerfile that wraps geodesic:

RUN apk add terraform@cloudposse terraform_0.12@cloudposse==0.12.7-r0

Is what’s in ours, but we only have one or two 0.12 projects, everything is mostly on 0.11 still

stupid question I’m using

locals {
  availability_zones = slice(data.aws_availability_zones.available.names, 0, 2)
}

but sometimes my resources end up in the same AZ

better to just hardcode them ?

aknysh

@PePe what do you mean by sometimes? When in diff regions?

aknysh

the code above is ok and should work

same region

aknysh

no need to hardcode anything

I’m using the terraform-aws-rds-cluster module

which I’m going to send a PR to support global clusters

aknysh
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

aknysh

that example worked many times

I know it’s weird because if I recreate the cluster then it will work

I wonder now….maybe I just have a problem in one region

we use TF to create the accounts, so in every region we have subnets for every AZ

I was wondering if for some reason we made a mistake or something

but I’m using a data lookup to find them base on tags

aknysh

yea check the data lookup if it returns the correct result

exactly what I’m doing

I’m getting 3 subnet ids in us-east-1 and 4 in us-west-2

so the data lookups are good

cluster_size = 2 and I pass 4 subnets, then it should be ok

Has anyone else run into the issue where you can’t pass variables via the command line when using the remote backend since last week when they released terraform cloud?

Error: Run variables are currently not supported

The "remote" backend does not support setting run variables at this time.
Currently the only to way to pass variables to the remote backend is by
creating a '*.auto.tfvars' variables file. This file will automatically be
loaded by the "remote" backend when the workspace is configured to use
Terraform v0.10.0 or later.

Additionally you can also set variables on the workspace in the web UI:
https://app.terraform.io/app/Boulevard/sched-dev-feature-branch-environments/variables
aknysh

thanks @PePe

aknysh

commented

fixed

aknysh

did you run

make init
make readme/deps
make readme
aknysh

looks like README was not updated

aknysh

and docs/terraform.md was deleted

weird

mmm

❰jamengual❙~/github/terraform-aws-rds-cluster(git:globalclusters)❱✔≻ make readme    5.2s  Thu 19 Sep 18:51:17 2019
curl --retry 3 --retry-delay 5 --fail -sSL -o /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs https://github.com/segmentio/terraform-docs/releases/download/v0.4.5/terraform-docs-v0.4.5-darwin-amd64 && chmod +x /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs
2019/09/19 18:51:24 At 3:16: Unknown token: 3:16 IDENT var.namespace
make: *** [docs/terraform.md] Error 1
aknysh

hmmm

aknysh

looks like something is broken (will have to look)

Erik Osterman

Looks like an old build harness

Erik Osterman

The ident error tells me that it’s using an old version of terraform

Erik Osterman

Terraform-docs does not support it natively, so we have a wrapper around terraform docs

ohhhh

one sec

I have two binaries

Erik Osterman

Also might get fixed if you blow away build harness and rerun make init. Just a hunch.

Erik Osterman

(On my phone so cant provide more detail)

done

aknysh

Unknown token: 3:16 IDENT happened to me when TF versions mismatched

thanks guys

aknysh

tested on AWS and merged

cytopia

I am currently working on a new fix for the terraform-docs.awk wrapper here: https://github.com/antonbabenko/pre-commit-terraform/issues/65

If there are any other issues coming up, let me know

terraform_docs failing on complex types which contains "description" · Issue #65 · antonbabenko/pre-commit-terraform

How reproduce Working code: staged README.md <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK --> <!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK --> staged vars.tf variable "ingress_ci…

2019-09-18

oscar

Anyone seen the issue where you curl from an EKS worker node to the cluster and get SSL issues?

oscar

Using CP worker / cluster / asg modules.

oscar

curl: (60) SSL certificate problem: unable to get local issuer certificate

oscar

… this is curling the API endpoint as per EKS

oscar

@Addison Higham I’m using your branches from here https://sweetops.slack.com/archives/CB6GHNLG0/p1566415698381800

Error: Invalid count argument

  on .terraform/modules/eks_workers.autoscale_group/ec2-autoscale-group/main.tf line 120, in data "null_data_source" "tags_as_list_of_maps":
 120:   count = var.enabled ? length(keys(var.tags)) : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20

I forgot to update the version for CI to 0.12, will try and push that out

oscar

but getting the following error. Could you provide any guidance on what you think that might be?

Addison Higham

yeah, that was an oopsie, a fix got merged… but maybe it didn’t make it onto the branch I was trying to upstream

Addison Higham

lemme find it

oscar

Thanks. If possible could you push it to your fork’s master? I did try your inst-* branch but that didn’t seem to quite fix it

Addison Higham

oh that is a different issue @oscar, what are you passing to tags? as the error message says, it can’t have anything dynamic being passed in

oscar

tags is actually empty

oscar

I’m passing var.tags which is an empty {} in my terraform proejct that calls your eks_worker module

oscar

so am I correct in using your worker & cluster branches @master branch?

oscar

because I’m aware you also have the ASG one updated, but do the master branches of worker and cluster point to that?

Addison Higham

oh yeah, so that is why we use the inst-version, which does this: https://github.com/instructure/terraform-aws-eks-cluster/pulls?utf8=%E2%9C%93&q=is%3Apr

instructure/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to instructure/terraform-aws-eks-cluster development by creating an account on GitHub.

Addison Higham

to be safe, whenever I change refs, I also just delete .terraform directory and re-init

Addison Higham

it is sorta weird, we didn’t want to open a PR to our updated module, but they do need to merge them in order for these to work

oscar

Ya I understand the need for the branch. I’ll give another go later on.

oscar

So worker inst Cluster master

oscar

And that should fix my previous issue with count?

Addison Higham

I think so? at least that is what we have and don’t have any issues

Erik Osterman

public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

oscar

@Addison Higham - darn still got the same issue

oscar
module "eks_cluster" {

  source     = "git:<i class="em em-<https"></i>//github.com/instructure/terraform-aws-eks-cluster.git?ref=master>"
...
}

module "eks_workers" {
  source = "git:<i class="em em-<https"></i>//github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version>"
...
Addison Higham

same error?

oscar

Yeh

oscar
Error: Invalid count argument

  on .terraform/modules/eks_workers.autoscale_group/main.tf line 120, in data "null_data_source" "tags_as_list_of_maps":
 120:   count = var.enabled ? length(keys(var.tags)) : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
module "eks_workers" {
  source = "git:<i class="em em-<https"></i>//github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version>"

  namespace     = var.namespace
  stage         = var.stage
  name          = var.name
  tags          = var.tags
...
}
oscar

var.tags is empty (defaulting to {})

Addison Higham

is your cluster_name dynamic? see https://github.com/instructure/terraform-aws-eks-workers/blob/master/main.tf#L2, the workers module computes some tags, so your cluster_name needs to be known at plan time

instructure/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - instructure/terraform-aws-eks-workers

oscar

Omg

oscar

that must be it

Addison Higham

that is why in the example you see them use the label module to compute the name of the cluster in multiple distinct places
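A minimal sketch of that pattern, assuming the cloudposse terraform-null-label interface (the tag pin and the name wiring here are illustrative): compute the cluster name from static inputs so any count that depends on it is known at plan time.

module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name
}

module "eks_workers" {
  source = "git::https://github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version"

  # known at plan time, unlike module.eks_cluster.eks_cluster_id
  cluster_name = module.label.id
  # ...
}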

oscar
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

oscar
# mine
  cluster_name                       = "${module.eks_cluster.eks_cluster_id}"
oscar

Will hardcode to a string for now

oscar

Super thanks. Cluster and workers up now

oscar

But back to workers not connecting to cluster.

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

oscar

Ah - no thank you. I saw this before but didn’t honestly understand it! Should this be run at cluster creation or can be applied afterwards?

aknysh

so at the time we did it, in some cases there were some race conditions, that’s why we did not enable it by default

aknysh

after the cluster applied, we set the var to true and applied that

oscar

Many thanks

aknysh

but now you can test it with the var enabled from start

aknysh

we did that almost a year ago so a lot prob has changed

aknysh

and we will convert the EKS modules to 0.12 and add auto-tests this/next week (finally)

oscar

oscar

Would love to get a hold of those updated modules

oscar

Andriy you are my hero

oscar

My workers are now connected

aknysh

oscar

TF weirdly got an unauthorized response when applying the command:

 kubectl apply -f config-map-aws-auth-xxx-development-eks-cluster.yaml --kubeconfig kubeconfig-xxx-development-eks-cluster.yaml
oscar

but my kubectl already had the context activated

oscar

so I just ran the configmap apply without --kubeconfig

Julio Tain Sueiras

@aknysh XD XD XD XD XD, so I found the biggest issue affecting VS Code users of the terraform-lsp plugin: I forgot to omit the hover provider from the first release I was trying out (so it’s very error-prone). Since I only use vim, there is no hover that gets activated

1
Julio Tain Sueiras

so now it’s a lot more stable for any GUI-based editor that is going to use terraform-lsp

aknysh

nice @Julio Tain Sueiras

Erik Osterman

@ btw, didn’t realize you were on sweetops. We discussed your comment in #office-hours today https://github.com/hashicorp/terraform/issues/15966#issuecomment-520102463 (@sarkis had originally directed my attention to it)

Feature: Conditionally load tfvars/tf file based on Workspace · Issue #15966 · hashicorp/terraform

Feature Request Terraform to conditionally load a .tfvars or .tf file, based on the current workspace. Use Case When working with infrastructure that has multiple environments (e.g. &quot;staging&q…

08:41:25 PM

@ has joined the channel

rohit

i am facing issues with pre-commit when using in my terraform project

rohit
repos:
- repo: git://github.com/antonbabenko/pre-commit-terraform
  rev: v1.15.0
  hooks:
    - id: terraform_fmt
    - id: terraform_docs_replace
rohit

i receive the following error

rohit
 pkg_resources.DistributionNotFound: The 'pre-commit-terraform' distribution was not found and is required by the application
rohit
rohit

any ideas on what could be the problem ?

Erik Osterman

@antonbabenko

@rohit Not sure if it will fix anything, but you can try changing the git:// to https://. Here’s mine for reference:

- repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.19.0
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
rohit

i think the problem is with terraform_docs_replace

rohit

and maybe it has terraform version 0.11.13

rohit

i want to replace the README file automatically as part of commit

rohit

do you know if the same can be achieved using terraform_docs ?

It’s possible. I contributed terraform_docs_replace several months ago; it probably hasn’t been touched since then.

rohit

i think terraform_docs_replace is only supported in terraform v0.12

terraform_docs just makes changes to an existing README between the comment needles

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
stuff gets changed here
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

terraform_docs_replace was made quite a while ago, before 12 came out

rohit

when i update variables and their descriptions in variables.tf, my README.md file does not get updated using terraform_docs

Erik Osterman

Erik Osterman

good read

Brij S

if ive got a module such as:

module "vpc_staging" {
  source = "./vpc_staging"
}

can I access a variable/output created in that module in another module like so?

module "security-group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "1.25.0"

  name = "sg"
  description = "Security group for n/w with needed ports open within VPC"
  vpc_id     = "${module.vpc_staging.vpc_id}"

}

Would I use the variable name, or output id? What do I reference basically?

The second module can use the outputs of the first module. So, the vpc_staging module would need an output called vpc_id for that example you gave.

Brij S

right! I thought so, just wanted to confirm

Brij S

thanks

Claudio Palmeira

Hey guys, I do have a problem with the examples on the eks_cluster, more specifically on the subnets module. It has an unsupported argument there:

Claudio Palmeira

An argument named “region” is not expected here.

Claudio Palmeira

in the subnets module, on this line: region = “${var.region}”, Terraform complains about it not being an expected argument

aknysh

the example is not actually correct, since the EKS modules are TF 0.11 but the subnets module is pinned to master, which is already 0.12

aknysh

we are working on converting EKS modules to 0.12

aknysh

for now, pin the subnet module to a TF 0.11 release

aknysh
module "subnets" {
  source              = "git:<i class="em em-<https"></i>//github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.12.0>"
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

pin to 0.4.1 which is TF 0.11

Claudio Palmeira

Thank you mate

2019-09-17

Hemanth

Inside terraform (.tf) I can assign dynamic values using variables, like key_name = "${var.box_key_name}" for different environments. How can I do the same inside the user-data scripts attached to TF? I am trying to have unique values for sudo hostnamectl set-hostname jb-*environment-here* in the user-data script
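Nobody picked this up in the channel, but one common approach is to render the user-data as a template; a minimal sketch, assuming TF 0.12’s templatefile() and a hypothetical var.environment (on TF 0.11 the template_file data source plays the same role):

# user-data.sh.tpl -- ${environment} is filled in by templatefile()
#!/bin/bash
sudo hostnamectl set-hostname "jb-${environment}"

# instance.tf
resource "aws_instance" "jumpbox" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  key_name      = var.box_key_name

  # render the script with a per-environment value
  user_data = templatefile("${path.module}/user-data.sh.tpl", {
    environment = var.environment
  })
}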

PiotrP

hi gents, has any one of you successfully created an s3 bucket module with dynamic cors configuration?

aknysh

not sure what you mean by ‘dynamic configuration’, but take a look here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf#L79

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

PiotrP

by dynamic configuration, I thought about utilizing terraform’s ‘dynamic’ feature

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

PiotrP

the same approach you linked is what I use right now, but it forces some kind of CORS configuration to be applied to the bucket, even when you do not need CORS at all

PiotrP

with dynamic configuration I thought I will be able to create s3 buckets with or without cors configuration

aknysh

that’s easy to implement

PiotrP

I ended up with something like this:

dynamic "cors_rule" {
    for_each = var.cors_rules
    content {
      allowed_headers = [lookup(cors_rule.value, "allowed_headers", "")]
      allowed_methods = [lookup(cors_rule.value, "allowed_methods")]
      allowed_origins = [lookup(cors_rule.value, "allowed_origins")]
      expose_headers = [lookup(cors_rule.value, "expose_headers", "")]
      max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
    }
  }
PiotrP

when variable cors_rules is a list of maps like this:

cors_rules = [{
    allowed_origins = "*"
    allowed_methods = "GET"
  }]
PiotrP

however, this approach is still not perfect, because values not mentioned in the cors_rules variable will be applied anyway with default values

PiotrP

am I missing something ?

aknysh

i don’t think it’s possible to do it, unless you want to use many permutations of dynamic blocks with for_each with different conditions
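For the narrower goal of buckets with or without CORS at all, an empty list already gets you there, since zero elements means no cors_rule block is rendered; a sketch, assuming a hypothetical var.bucket_name (the per-attribute defaults PiotrP mentions remain a separate problem):

variable "cors_rules" {
  type    = list(map(string))
  default = []   # no elements = no cors_rule blocks at all
}

resource "aws_s3_bucket" "default" {
  bucket = var.bucket_name

  dynamic "cors_rule" {
    for_each = var.cors_rules
    content {
      allowed_methods = [cors_rule.value["allowed_methods"]]
      allowed_origins = [cors_rule.value["allowed_origins"]]
      max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
    }
  }
}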

PiotrP

I see

PiotrP

thanks for answering

aknysh

that’s how we deploy https://docs.cloudposse.com/

James D. Bohrman

Here’s a little tool I’ve been working on that the gamers here might like. I used a lot of Cloud Posse modules also

https://github.com/jdbohrman/parsec-up

jdbohrman/parsec-up

Terraform module for deploying a Parsec Cloud Gaming server. - jdbohrman/parsec-up

1
Erik Osterman

for discoverability, have you considered renaming it to terraform-aws-parsec-instance? this is the format hashicorp suggests for the registry

jdbohrman/parsec-up

Terraform module for deploying a Parsec Cloud Gaming server. - jdbohrman/parsec-up

1
James D. Bohrman

I haven’t but I will probably do that!

1
Julio Tain Sueiras

@aknysh finally! so apparently A) Terraform state loading is private to itself in the UI and command code, so I will need to talk to either paul or the terraform team about it, and B) good news: I finally found out that loading in terraform is implicitly cascading

Julio Tain Sueiras

so if you have two files, call them main.tf and data.tf, then you set ForceFileSource on main.tf to be “” and then do LoadConfigDir

Julio Tain Sueiras

terraform will declare main.tf to be empty

Julio Tain Sueiras

and skip reading

Julio Tain Sueiras

it’s useful, since I need it to do resource & data type gathering for error checking

1

2019-09-16

Bruce

Hi @James D. Bohrman this might help “Deploying a Windows 2016 server AMI on AWS with Packer and Terraform. Part 1” by Bruce Dominguez https://link.medium.com/8hIu8JaK1Z

Deploying a Windows 2016 server AMI on AWS with Packer and Terraform. Part 1

Automating a deployment of a Windows 2016 Server on AWS should be easy right, after all deploying an ubuntu server with Packer and…

Bruce

Does anyone have a good suggestion on creating a snapshot from an RDS database (that’s encrypted), restoring it to a dev/testing env, and doing some data scrubbing?

joshmyers

Suggestions yes, any of them any good? Not so sure

joshmyers

Have seen this done in several ways, none of which were particularly nice

joshmyers

@Bruce https://github.com/hellofresh/klepto looked interesting in this space last time I checked

hellofresh/klepto

Klepto is a tool for copying and anonymising data. Contribute to hellofresh/klepto development by creating an account on GitHub.

joshmyers

(probably not a discussion for this particular channel)

Bruce

Thanks @joshmyers I will check it out.

Mike Whiting

is anyone able to advise on aws_ecs_task_definition. If I specify multiple containers in the task definition file then neither of the containers come up.

Mike Whiting

but if I have just one it works

joshmyers

@Mike Whiting you are really going to need to post your instantiation of the Terraform resource or whatever. What you expected. What the actual error message is etc

Mike Whiting

did you mean to @ me?

Mike Whiting

Mike Whiting

these are the resources:

resource "aws_ecs_task_definition" "jenkins_simple_service" {

//  volume {
//    name      = "docker-socket"
//    host_path = "/var/run/docker.sock"
//  }

  volume {
    name      = "jenkins-data"
    host_path = "/home/ec2-user/data"
  }
  family                = "jenkins-simple-service"
  container_definitions = file("task-definitions/jenkins-gig.json")
}

resource "aws_ecs_service" "jenkins_simple_service" {
  name            = "jenkins-gig"
  cluster         = data.terraform_remote_state.ecs.outputs.staging_id
  task_definition = aws_ecs_task_definition.jenkins_simple_service.arn
  desired_count   = 1
  iam_role        = data.terraform_remote_state.ecs.outputs.service_role_id

  load_balancer {
    elb_name       = data.terraform_remote_state.ecs.outputs.simple_service_elb_id
    container_name = "jenkins-gig"
    container_port = 8080
  }
}
Mike Whiting
[
  {
    "name": "jenkins-gig",
    "image": "my-image",
    "cpu": 0,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8000
      }
    ],
    "environment" : [
      {
        "name" : "VIRTUAL_HOST",
        "value" : "<host>"
      },
      {
        "name": "VIRTUAL_PORT",
        "value": "8080"
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "jenkins-data",
        "containerPath": "/var/jenkins_home",
        "readOnly": false
      }
    ]
  },
  {
    "name": "nginx-proxy",
    "image": "jwilder/nginx-proxy",
    "cpu": 0,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
Mike Whiting

if I remove the nginx-proxy container from the definition then ecs-agent successfully pulls and launches the jenkins container but with it included nothing happens

Mike Whiting

nb: ‘my-image’ is from a private registry and nginx-proxy is public

joshmyers

Do you have any error events being logged?

joshmyers

Are there creds for the private repo?

Mike Whiting

I’m just observing the ecs-agent logs currently (within the instance)

Mike Whiting

as I say, the container from the private image launches fine when I don’t specify the proxy container in the definition file.. i.e. one container object

joshmyers

You hadn’t specified which one you can bring up on its own, or that one was in a private registry, at that point

joshmyers

ECS agent logs should give you an idea

Mike Whiting

I can bring up the jenkins container (private image) on its own

Mike Whiting

when the nginx-proxy definition is present ecs-agent just sits idle

Mike Whiting

does that make sense?

joshmyers

yes

Mike Whiting

what do you think I should try?

Mike Whiting

starting to wonder if terraform is really for me if I can’t get help

Mike Whiting

(from anywhere)

joshmyers

Terraform is just making API calls for you

Mike Whiting

yep

oscar

The tags for this module are so confusing: https://github.com/cloudposse/terraform-aws-rds/releases

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

oscar

I’ve been using 0.11 by mistake as I took the ‘latest’ tag

oscar

but that’s actually just a hotfix

oscar

the latest 0.12 tag is 0.10

oscar

true I could have read the list and lesson learned, but had me stumped for a while as to why it wasn’t working!

aknysh

@oscar don’t pin to master/latest, always pin to a release. In the module, TF 0.12 started with tag 0.10.0, but when we needed to add some features to TF 0.11 code, we created the tag 0.9.1 which is the latest tag, but not for TF 0.12 code

oscar

Yes that’s what I mean

oscar

a 0.11 tag is at the top of the tags list

oscar

bamboozled me, logically I would have thought only 0.12 tags would be at the top of the ‘releases’ tab

aknysh

that’s how GitHub works

oscar

so I had it pinned to a 0.11 until I realised what was going on

loren

i don’t even see a 0.11 tag in there. there is a 0.11 branch

oscar
loren

exactly

oscar

0.9.1 is a TF 0.11 tag

loren

oh you mean the 0.9.1 tag only supports tf 0.11

loren

not that there is a 0.11 tag

oscar

Aye

loren

confusing

oscar

bamboozles

aknysh

we did not find a better way to support both code bases and tag them

oscar

Haha its fine, I was just pointing out it is a bamboozle

aknysh

so we started the TF 0.12 code with some tag and continue increasing it for 0.12

oscar

It makes sense

loren

what you are doing makes sense to me, releasing patch fixes on the 0.9 minor stream

aknysh

for 0.11, we usually increase the last tag on the 0.11 branch

oscar

The lesson learned was ‘don’t just grab the top most tag’

loren

would be cool if tf/go-getter supported more logic in the ref than an exact committish… a semver comparator (like ~>0.9) would be awesome

oscar

tf/go-getter, what does this do?

loren

terraform uses go-getter under the covers to retrieve modules specified by source

oscar

I see, yeh that would be smart

loren
hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter

oscar

checks the versions.tf file and checks for compatibility

oscar

@aknysh I think I was doing the PR as you commented! https://github.com/cloudposse/terraform-aws-rds/pull/38

aknysh

running automatic tests now, if ok will merge

oscar

where abouts are your tests?

oscar

I couldn’t see them

oscar

I noted Codefresh wasn’t in the PR either

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

aknysh
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

oscar

Oh I see. When I navigated the test/ directory it looked like an example

oscar

but I realise now that examples_complete_test.go is related to the examples/ dir

oscar

and that examples/ isn’t just documentation. Nice

aknysh
oscar

Yah that’s some nice gitops

oscar

I was expecting a trigger

aknysh
oscar

but that’s cooler

aknysh

it is a trigger, but we have to trigger it (for security when dealing with PRs from forks)

oscar

Oh that makes sense actually

aknysh

otherwise you could DDOS it

oscar

Yeh

aknysh

merged and released 0.11.0 (now you have that tag) thanks

oscar

woop, thanks

oscar

Debate/Conversation:

“We should enable deletion_protection for production RDS”

https://www.terraform.io/docs/providers/aws/r/db_instance.html#deletion_protection

1
oscar

For: anyone in console / terraform cannot accidentally delete (assuming IAM permissions are not super granular & TF is being operated manually)

1
oscar

Against: presumably this would mean the resource cannot be updated? I’m not too familiar with RDS so unsure on how many settings actually cause a re-create

1
asmito

better to enable it; but usually when you want to delete an RDS instance, aws takes a snapshot of it as a backup anyway.
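For reference, a minimal sketch of the flag under discussion. Note that deletion_protection only blocks delete calls: in-place modifications still work, but a plan that requires replacing the instance will fail until the flag is turned off. The final-snapshot settings below are what make AWS take that backup on deletion:

resource "aws_db_instance" "production" {
  # ...
  deletion_protection       = true
  skip_final_snapshot       = false
  final_snapshot_identifier = "production-final-snapshot"   # required when skip_final_snapshot = false
}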

asmito

guys do you know when we will have count enabled for modules?

oscar

Not seen an ETA yet, just that it is reserved alongside for_each

asmito

?

Cloud Posse
04:03:44 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Sep 25, 2019 11:30AM.
https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8 (Register for Webinar)
#office-hours (our channel)

Mike Whiting

I’ve had a brainwave that perhaps I need to add another dedicated aws_ecs_service resource for the nginx-proxy - see my example code above. is this a possibility?

oscar

Is there an MSK/Kafka module anywhere?

Tehmasp Chaudhri

Has anyone solved a solution for dynamically determining which subnets are free in a given VPC to then use for deploying some infrastructure into? Or know of some examples?

aknysh

what do you mean by are free?

Tehmasp Chaudhri

available ip address space

aknysh

that’s not easy

Tehmasp Chaudhri

yup; plus we have multiple cidr blocks (secondaries) being added to the VPC, so in some cases the secondary blocks are barely usable, because subnets created off of them don’t yield much IP address space (e.g. /28)

Tehmasp Chaudhri

so yeah - in those cases basically need a way to filter away “unusable” subnets

Tehmasp Chaudhri

the closest thing i’ve found is running a local cmd and finding a way to stuff it into a data template to somehow use downstream - kind of like the solution here: https://medium.com/faun/invoking-the-aws-cli-with-terraform-4ae5fd9de277

Tehmasp Chaudhri

but all very ugly

aknysh

you can use https://www.terraform.io/docs/providers/aws/d/subnet_ids.html to get all subnets for a VPC

AWS: aws_subnet_ids - Terraform by HashiCorp

Provides a list of subnet Ids for a VPC
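Building on that data source, a sketch of filtering the subnets by prefix length (the /24 cutoff is an arbitrary example; for_each on a data source needs TF >= 0.12.6), since free-IP counts aren’t directly usable for filtering here:

data "aws_subnet_ids" "all" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "each" {
  for_each = data.aws_subnet_ids.all.ids
  id       = each.value
}

locals {
  # keep only subnets of /24 or larger, dropping the barely-usable /28s
  usable_subnet_ids = [
    for s in data.aws_subnet.each : s.id
    if tonumber(split("/", s.cidr_block)[1]) <= 24
  ]
}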

Brij S

does TF support inline code for lambda functions like cloudformation?
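This went unanswered in the channel: there’s no direct equivalent of CloudFormation’s inline ZipFile, but the archive provider can approximate it; a sketch, assuming an IAM role defined elsewhere:

data "archive_file" "inline" {
  type                    = "zip"
  source_content          = "def handler(event, context):\n    return \"ok\"\n"
  source_content_filename = "index.py"
  output_path             = "${path.module}/inline_lambda.zip"
}

resource "aws_lambda_function" "inline" {
  function_name    = "inline-example"
  role             = aws_iam_role.lambda.arn   # assumed to exist elsewhere
  handler          = "index.handler"
  runtime          = "python3.7"
  filename         = data.archive_file.inline.output_path
  source_code_hash = data.archive_file.inline.output_base64sha256
}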

2019-09-15

SweetOps
06:00:43 PM

Are you using some of our https://cpco.io/terraform-modules in your projects? Maybe you could leave us a testimonial (https://cpco.io/leave-testimonial)! It means a lot to us to hear from people like you.

James D. Bohrman

Hey guys! I’m looking for some advice on how to approach an issue. I’m trying to figure out a way to use Terraform to provision a Windows Server 2016 instance that will run this cloud prep tool once it’s provisioned. I want to do something with Packer down the line but right now I’m just trying to make an easy way to spin up cloud gaming rigs on AWS for myself.

Prep tool: https://github.com/jamesstringerparsec/Parsec-Cloud-Preparation-Tool

Andrew Jeffree

Is what you’re after. There are plenty of examples out there on how to pass user-data to an ec2 instance in terraform.

James D. Bohrman
davidvasandani

@James D. Bohrman this link didn’t work for me.

2019-09-13

Nikola Velkovski

@Brij S you can verify if the profiles are properly set with aws cli

Nikola Velkovski

e.g. aws s3 ls --profile storesnonprod

Nikola Velkovski

because terraform uses that

Maciek Strömich

or by AWS_PROFILE=profilename aws s3 ls

ciastek

I need something like the “random_string” resource, but with a custom command. So, execute a command only if the resource isn’t in the state yet (or was tainted), and use the command’s output as the value to put in the state. Any idea what kind of magic to use to achieve such a result?

aknysh
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh

if you provide the param in var, use it

aknysh

if not, use random string to generate it
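A sketch of that var-or-random pattern (names are illustrative); the join over the splat tolerates count being 0:

variable "password" {
  default = ""
}

resource "random_string" "password" {
  count   = length(var.password) == 0 ? 1 : 0
  length  = 16
  special = true
}

locals {
  # use the supplied value if given, otherwise the generated one (kept in state)
  password = length(var.password) > 0 ? var.password : join("", random_string.password.*.result)
}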

ciastek

Thank you. Unfortunately it’s not the thing I look for. I need something like:

resource "somemagicresource" "pass" {
  command = "openssl rand -base64 12"
}
aknysh
Provisioner: local-exec - Terraform by HashiCorp

The local-exec provisioner invokes a local executable after a resource is created. This invokes a process on the machine running Terraform, not on the resource. See the remote-exec provisioner to run commands on the resource.

ciastek

Unfortunately provisioners don’t store any kind of result in the state.

ciastek

I’ll try to go with https://github.com/matti/terraform-shell-resource , but thanks for all the links provided

matti/terraform-shell-resource

Contribute to matti/terraform-shell-resource development by creating an account on GitHub.

aknysh
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

oscar

Have you guys seen this before?

Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
	status code: 409, request id: aaaa, host id: aaaa//bbbb+cxxx=
oscar

It doesn’t exist on ANY of our accounts

oscar

it’s a very, very specific and niche bucket name

oscar

the chances someone else owns it are extremely slim

oscar

I’m sure I saw this about 4-5 months ago, but it actually was created on one of our accounts

aknysh

we’ve seen that happen when you create a resource (e.g. a bucket) not using the TF remote state. Then the state was lost on the local machine. Then TF tries to create it again, but it already exists in AWS

aknysh

check that you use remote state and aren’t losing it

oscar

Yeh that adds up with what potentially happened

oscar

That local state file is long gone

oscar

How can I recover the S3 bucket? :S

aknysh

you need to find it in AWS

aknysh

and either destroy it manually in the console, or import it

oscar

It isn’t there

oscar

I’ve searched the account (it has no buckets) - new account

aknysh

make sure you have permissions to see it (maybe it was created under diff permissions)

oscar

Yeh I’ve checked sadly with Admin permissions in console

oscar

It genuinely isn’t there, even got one of the IT guys to look

oscar

I’ve opened a ticket with AWS but slow

aknysh

S3 is global, so you need to check all your accounts, even those you don’t know about

oscar

Wait so, it could be on a different aws account to that on which I ran terraform?!

aknysh

it could be. I don’t remember if AWS shows the error Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it in this case

oscar

AWS console just shows name already in use

oscar

when attempting to replicate but in the console

oscar

aghhh

aknysh

so yes, you created it before and lost the state (if you are saying that the chance is very slim that some other people used the same name)

oscar

But surely even if I lost the state file

oscar

the s3 bucket would be on the aws account

oscar

btw the context is the backend module

oscar

Weird it has happened again on another account

oscar

it is in my local state file

oscar

a resource

oscar

but it isn’t on the console

oscar

what is going on

aknysh

you have to follow the exact steps when provisioning the TF state backend, because you don’t have the remote backend yet to store the state in

oscar

Yeh no I know

oscar

it was an accident

oscar

I’m familiar with it

aknysh

you have to provision it first with local state, then update the TF code to use the remote S3 backend, then migrate the state into it
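A sketch of that two-phase bootstrap (bucket, key, and table names are assumptions): apply once with no backend block, so the bucket and lock table get created with local state; then add the backend and re-init, letting terraform copy the local state up.

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "state_storage/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-state-lock"
    encrypt        = true
  }
}

# then: terraform init   (answer "yes" when asked to migrate the local state)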

oscar

Probably the 18th account I’ve used your module on.. just something weird happened this time

oscar

Ya, I do this:

run:
	# Apply Backend module using local State file
	direnv allow
	bash toggle-s3.sh local
	terraform init && terraform apply
	# Switch to S3 State storage
	bash toggle-s3.sh s3
	terraform init && terraform apply
oscar

and my toggle-s3.sh script basically comments out the backend

aknysh

ok

oscar

It’s worked plenty time

oscar

not sure what happened this time though

aknysh

i guess the bucket with that name exists for some reason (you created it, other people created it on a diff account, or other people from diff orgs created it). Try to use a diff name

oscar

No I think something weird is happening

oscar

on a second account I’m getting this

oscar
Error: Error in function call

  on .terraform/modules/terraform_state_backend/main.tf line 193, in data "template_file" "terraform_backend_config":
 193:       coalescelist(
 194:
 195:
 196:
    |----------------
    | aws_dynamodb_table.with_server_side_encryption is empty tuple
    | aws_dynamodb_table.without_server_side_encryption is empty tuple

Call to function "coalescelist" failed: no non-null arguments.
oscar

I’ve followed the same pattern and commands as many accounts previously

oscar

all version locked etc

oscar

No clue why it isn’t having it today

oscar

annnnnd its working again

oscar

what the ?

oscar

what theee

oscar

that magic bucket that’s there but not there?

oscar

can see it on aws cli

oscar

but not console

oscar

whaat

oscar

same permissions (iam role)

aknysh

@oscar I think you mixed up TF versions

aknysh

if you use 0.11, use 0.11 state backend

aknysh

same for 0.12

oscar

Its aws

oscar

look at htis

oscar
 ✗ . (none) state_storage ⨠ aws s3 rm s3://xxx-terraform-state
-> Run 'init-terraform' to use this project
 ⧉  xxx
 ✗ . (none) state_storage ⨠ aws s3 ls
2019-09-13 14:20:17 xxx-terraform-state
oscar

even after removing it, it is still there hahaha jeez

aknysh

or, another possibility: the aws provider was not pinned, got updated, and the new one has issues

aknysh

we had a few 0.11 modules busted after the aws provider was updated

oscar

I think you nailed it akynsh

oscar

andriy*

oscar

managed to recover the state file via the CLI

oscar

Error: Failed to load state: Terraform 0.12.6 does not support state version 4, please update.

oscar

oscar

that was released just this morning

oscar

I wonder if I had the .terraform/modules directory already there

aknysh

so the conclusion is, always pin everything, TF modules, TF version, providers, etc.

oscar

Damn I have it pinned to major

oscar

aws = "~> 2.24"

Brij S

so I found out why my module was using different credentials. It was because I had a main.tf in my module with the following content:

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "ambassadorsnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "ambassadorsprod"
  alias   = "prod"
}

However, when I remove this main.tf file from the module and run tf plan with configuration that references this module, I get the following error:

To work with
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment its
original provider configuration at module.cicd-web.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment,
after which you can remove the provider configuration again.

I have a main.tf that is set up, so I’m not sure why I’m getting this error

oscar

ominously similar to my situ

Todd Linnertz

I am looking at using the SweetOps s3_bucket module but I am not sure how to enable server access logging using the module. Does the module support enabling server access logging?

aknysh

depending on the s3 module you want to use

aknysh
cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

1

I’m having some issues with Terraform and the terraform-aws-rds-cluster module. I’m creating a Global cluster (I forked the cloudposse module and added one line), but this is not just related to global aurora clusters. The problem is that the cluster finishes creating, but terraform for some reason keeps polling for status until it times out after 1 hour. This is what I see:

module.datamart_secondary_cluster.aws_rds_cluster.default[0]: Creation complete after 9m10s [id=example-stage-pepe1secondary]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Creating...
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Creating...
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Still creating... [10s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Still creating... [10s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Still creating... [20s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Still creating... [20s elapsed]

that will continue for 1 hour…..

and the console will show it as available

aknysh

do you see that every time you provision, or did you just see it one time?

have you seen this before ?

aknysh

no

aknysh

if it’s only once, I’d say your session had expired

it is pretty consistent

the workaround was to create the secondary cluster with 0 instances

then change it to two instances

pretty much every time

I mean I have not been able to successfully complete the creation the cluster

Brij S

has anyone used multiple providers for a module?

Brij S

Ive done this multiple times with success, but now I’m facing an issue where all resources are created in one account(provider) and not the other and i’m not sure why

davidvasandani

@Brij S you’ll need to post some code or errors for us to help you diagnose.

I had an issue like that yesterday, the name of the resource needs to be different and you need to pass the provider alias to every resource

Brij S

in my /terraform/modules/cicd folder I’ve got a main.tf file with the following:

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}

in my /terraform/cicd/stores folder I’ve got a main.tf with the following:

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesnonprod"
  alias      = "nonprod"
}

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesprod"
  alias      = "prod"
}

and ive got a /terraform/cicd/stores/web.tf file Ive got

module "cicd-web" {
  source = "../../modules/cicd-web"

  providers = {
    aws.nonprod = "aws.nonprod"
    aws.prod    = "aws.prod"
  }
........

in all of my resources ive got either a provider = "aws.nonprod" or provider = "aws.prod" but they all get created in aws.nonprod

Brij S

@davidvasandani ^

Brij S

However, I realized that if I put profiles in /terraform/modules/cicd/main.tf then it works! However, that defeats the purpose of the module, since I’d want to use different profiles for different accounts

aknysh

there is no difference between these providers

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}
aknysh

they are the same

Brij S

thats a good point.. didnt notice that

aknysh

they need to have some diff, e.g. region

Brij S

but the region is the same as well

Brij S

if I remove that main.tf from the module, I get an error saying it needs it

aknysh

they have to be diff otherwise why do you need them

aknysh
provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "/Users/tf_user/.aws/creds"
  profile                 = "customprofile"
}
aknysh

diff region, or diff profile, or diff shared_credentials_file

Brij S

right, I can add profile but if that lives in the module I cant reuse it

Brij S

for another set of accounts

aknysh

which tells the provider to use diff credentials from diff profile to access diff account

aknysh

you need to add that

Brij S

if I leave the profile in the main.tf in the module, then I can’t reuse the module

Brij S

because another account will have a different profile

aknysh

whatever you are saying you can’t reuse, does not make any diff for terraform

Brij S

so in my module, /terraform/modules/somemodule, I have a main.tf which includes a profile which is used for account A

aknysh

you create a set of providers (they should differ by region or profile)

Brij S

differ by profile, yes

aknysh

then for each module, you send a set of required providers

aknysh

and in each resource use the provider aliases

aknysh

there is no other way of doing it

Brij S

wait, in the module’s main.tf, if I put a profile in

Brij S

how does the module become reusable if the profile is hardcoded for a certain account

aknysh

the module is reusable because you send it a list of providers (which can contain only one)

aknysh

and the module uses that provider

aknysh

w/o knowing the provider details

Brij S

yes I understand that

Brij S

so that means, I remove main.tf from my module?

Brij S

(which causes errors)

aknysh

not sure I understand that

Brij S

ok let me explain

aknysh

you fix the error in main.tf

aknysh

not remove it

Brij S

in /terraform/modules/somemodule/main.tf I have:

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}

In /terraform/folder/main.tf I have:

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesnonprod"
  alias      = "nonprod"
}

provider "aws" {
  version    = "~> 2.25.0"
  region     = var.aws_region
  profile    = "storesprod"
  alias      = "prod"
}

In /terraform/folder/web.tf I have:

module "cicd-web" {
  source = "../../modules/somemodule"

  providers = {
    aws.nonprod = "aws.nonprod"
    aws.prod    = "aws.prod"
  }
Brij S

that is how im using the providers

can you have multiple providers in a module ?

loren

you can. kinda need to when you want to implement a cross-account workflow, for things like vpc peering, resource shares, etc…

I think you can but should you do it ?

Brij S

if I remove /terraform/somemodule/main.tf I get this error:

Error: Provider configuration not present

To work with
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment its
original provider configuration at module.somemodule.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment,
after which you can remove the provider configuration again.

in my case I instantiate the module twice, once with one provider and once with the other

look at this example

they don’t create one resource within two providers

Brij S

in my module I have multiple resources that have either provider = "aws.nonprod" or provider = "aws.prod"

mmm, maybe moving to a module that can work with any provider, and then passing one provider to the module

aknysh

ok, we are mixing up at least 4-5 diff concepts here

sorry

aknysh
  1. @Brij S if you created resources using a provider, you can’t just remove it. Delete the resources, then remove the providers from main.tf, then re-apply again
aknysh
  2. @Brij S your providers must be different (that’s after you do #1). Otherwise TF uses just the first one, since they are the same (that’s why everything got created in just one account; a sketch of the correct wiring follows below)
aknysh
  3. @PePe you create a module, but don’t hardcode any provider in it. You can send the provider(s) to it IF necessary
aknysh
  4. But in (almost) all cases, it’s not necessary. The only use-case where you need to send provider(s) to a module is when your module is designed in such a way that it creates resources in diff regions or in diff accounts (bad idea)
aknysh
  5. Creating such a module that creates resources in diff regions is OK (in this case you can send it a list of providers that differ by region)
aknysh
  6. Creating such a module that creates resources in diff accounts is a bad idea
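A sketch of points 1-2 using the proxy-configuration pattern TF 0.12 documents for this: the child module declares alias-only provider blocks with no region or profile, and the real, differing configurations live only in the root (profile names here are Brij’s examples). Note the providers map uses bare references in 0.12, not quoted strings:

# modules/somemodule/providers.tf -- proxy configuration blocks, alias only
provider "aws" {
  alias = "nonprod"
}

provider "aws" {
  alias = "prod"
}

# root main.tf -- the real configurations, which actually differ
provider "aws" {
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}

# root web.tf
module "somemodule" {
  source = "../../modules/somemodule"

  providers = {
    aws.nonprod = aws.nonprod
    aws.prod    = aws.prod
  }
}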
Brij S

@aknysh could I show you the problem Im having? I dont have any resources created but i’m still getting the error

aknysh

sounds like you have resources created

To work with
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment its
original provider configuration at module.somemodule.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
Brij S

i just ran terraform destroy, no resoruces found

Brij S

could we zoom possibly?

aknysh

regarding #6 above: instead of thinking of creating modules that use providers for diff accounts, it’s better to create yourself an environment which allows you to log into diff accounts (by using diff profiles in ~/.aws, and even better by assuming roles)

2019-09-12

joshmyers

How are folks doing multi region as far as Terraform goes…?

loren

provider per region, pass the provider explicitly to each module/resource

loren

these guys have the best reference i’ve seen for it, https://github.com/nozaq/terraform-aws-secure-baseline/blob/master/providers.tf

nozaq/terraform-aws-secure-baseline

Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations. - nozaq/terraform-aws-secure-baseline
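A minimal sketch of that provider-per-region layout (the aliases and module path are illustrative):

provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "euw1"
  region = "eu-west-1"
}

module "app_us_east_1" {
  source = "./modules/app"
  providers = {
    aws = aws.use1
  }
}

module "app_eu_west_1" {
  source = "./modules/app"
  providers = {
    aws = aws.euw1
  }
}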

Nikola Velkovski

Workspaces ?

joshmyers

I’m more interested in things like what you do with the state file

loren

is it the age-old question of one giant state, or many smaller states? i think either way it would be controlled by the backend config…

you can have a backend config with a credential that keeps it all in one region if it is one state, should work fine, even if the resources are in multiple regions

or a backend config per state where you apply some rationale/logic to where you want that state stored…

joshmyers

I don’t think this is so simple. You can’t have state for multi regions all in a bucket in one of the regions

joshmyers

the region goes down, which may be the reason you have gone multi-region in the first place, and now you can’t get to your TF state

loren

why not?

loren

that’s a different issue, not a technical limitation of tf

joshmyers

I wasn’t talking specifically about restrictions by TF, I’m wondering how people are doing it in a sane way

joshmyers

re a conversation I’ve just had with @Nikola Velkovski

loren

would cross-region bucket replication be sufficient?

joshmyers
loren

set that up on your backend, then repoint your backend config in tf if you need to use another region

joshmyers

Yeah that could get you out of a bit of a hole, but I don’t want to have to repoint backends etc

loren

what is your backend? can you do consul or something in a cross-region way?

joshmyers

S3

Nikola Velkovski

This is what I found regarding remote state and workspaces

Nikola Velkovski
Backend Type: remote - Terraform by HashiCorp

Terraform can store the state and run operations remotely, making it easier to version and work with in a team.

Nikola Velkovski

sorry here it is for s3

Nikola Velkovski
Backend Type: s3 - Terraform by HashiCorp

Terraform can store state remotely in S3 and lock that state with DynamoDB.

Nikola Velkovski

hmm no mention of changing the bucket with workspaces

joshmyers

IIRC you can’t use interpolation in the backend block

loren

with s3, to avoid manually re-jiggering your backend, you would need to be managing the s3 endpoint rather explicitly, doing some kind of health check on the real endpoints and re-pointing things as necessary

loren

and you may still hit problems when running tf, since you’d have to also be quite careful about targeting resources to avoid running against the downed region

joshmyers

I haven’t yet seen a setup that actually addresses these problems. setting up multiple providers in the same state feels like half a solution, and one that will likely bite you when you need to reach for it

joshmyers

It isn’t easy

loren

yeah, if this is that big a concern, you may be best off confining a state to a single region as much as possible, and setting up your app accordingly (deploy independently to multiple regions)

loren

still may need some coordination layer perhaps that your app states depend on, but now your cross-region blast radius is confined to just that resource

asmito

i would recommend having a state for each region

joshmyers

which goes into a state bucket for each said region

joshmyers

replicate between each other maybe

asmito

using one bucket with different state paths per region; you can do that manually, which means having the tree below:

providers/aws
├── eu-east-1
│   ├── dev
│   ├── pre
│   ├── pro
│   └── qa
└── eu-west-1
    ├── dev
    ├── pre
    ├── pro
    └── qa

or you can use terraform workspaces

joshmyers

terraform workspaces can’t be interpolated into backend config AFAICR

joshmyers

I think one bucket isn’t ideal…

asmito

for me one bucket seems ideal, and you can just play with paths inside it.

asmito
joshmyers

and if eu-west-1 goes down? you can’t provision in eu-west-1 OR any other region?

asmito

s3 is a global service

joshmyers

buckets are regional, have def seen S3 in a region go down before (not often but has happened and one of the drivers for going multi region for me)

asmito

Ah my bad, S3 bucket names are unique globally; I was confused. Totally agree with you on that: spinning up a bucket for each region is ideal

davidvasandani

@joshmyers would Aurora Serverless Postgres as a TF backend solve this problem?

davidvasandani

I believe that if a region went down the DNS would just failover to the new promoted master in a new region.

davidvasandani

or leveraging Minio distributed across multiple regions (or even cloud providers!) https://dickingwithdocker.com/2019/02/terraform-s3-remote-state-with-minio-and-docker/

joshmyers

Thanks @davidvasandani, will have a look!

davidvasandani

Let us know what you end up going with? I know at some point I’ll need to address a more robust TF backend.

Nikola Velkovski

continuing the thread in order to go multi region/environment we can do something like this

Nikola Velkovski
locals {
  environment = element(split("_", terraform.workspace), 1)
  region      = element(split("_", terraform.workspace), 0)

}

output "region" {
  value = local.region
}

output "environment" {
  value = local.environment
}
Nikola Velkovski

and then the workspace should be set like

Nikola Velkovski

eu-west-1_staging

Nikola Velkovski

it’s a bit hacky but does the trick
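Downstream usage would look something like this (the bucket naming is illustrative), after e.g. terraform workspace new eu-west-1_staging:

provider "aws" {
  region = local.region   # "eu-west-1" from workspace "eu-west-1_staging"
}

resource "aws_s3_bucket" "logs" {
  bucket = "myorg-logs-${local.environment}-${local.region}"
}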

joshmyers

backends don’t allow interpolation, so you are gonna need some kind of wrapper to get different buckets per region without inputting vars etc

Nikola Velkovski

yes it also doesn’t tackle the state problem

Nikola Velkovski

but it sounds like you don’t want to put your state in an s3 bucket

Nikola Velkovski

maybe other backends might work for you ?

joshmyers

No, I think S3 is fine, but it needs to be region-specific and therefore regionally named buckets, so I need some way to easily toggle the backend bucket too

joshmyers

The Aurora Postgres idea is interesting, but a few things. Requires much more setup, automating that is possible, but a pain. Requires credentials. Doesn’t solve one of the problems we spoke about. State would be all good in the case of a regional failure as DNS should flip over to the other region and should be all good, but if you have multi region provider in a single run anyway, you are gonna have a hard time if one of those regions is down

joshmyers

Half of your apply or so is gonna fail, potentially leaving you in an interesting state

davidvasandani

Really good point.

davidvasandani

Have you thought of ways to simulate this?

joshmyers

Nope, and my guess is that when AWS breaks in such a way, all bets are off anyway, but moving onto a client where this is a major concern and wanted to know others feelings

joshmyers

@Erik Osterman any thoughts?

davidvasandani

I believe the plan is usually to decouple the infrastructure and application so that the application self-heals until the provider resolves the outage (i.e. don’t try to terraform while S3 is offline)

davidvasandani

but looking to hear Erik’s thoughts on this.

loren

terragrunt is a wrapper that lets you use some interpolation in the backend config, it resolves it and constructs the init command for you

joshmyers

heh, I know, that is about all I want out of it at this point! lol

davidvasandani

+1 for #terragrunt
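For reference, a sketch of that terragrunt interpolation (bucket and table names are assumptions; get_env and path_relative_to_include are terragrunt built-ins):

# terragrunt.hcl
remote_state {
  backend = "s3"
  config = {
    bucket         = "myorg-tfstate-${get_env("AWS_DEFAULT_REGION", "eu-west-1")}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = get_env("AWS_DEFAULT_REGION", "eu-west-1")
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}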

Erik Osterman

@Mike Whiting can you share how you are invoking the module?

Mike Whiting
02:46:23 PM

@Mike Whiting has joined the channel

Mike Whiting

@Nikola Velkovski yep that’s the one

Nikola Velkovski

oh that would be me

Nikola Velkovski

what a coincidence

Nikola Velkovski

are you using terraform 0.12 by any chance ?

Mike Whiting

yeah

Nikola Velkovski

unfortunately it has not been ported yet

Nikola Velkovski

Nikola Velkovski

but I can dedicate some time and do it

Mike Whiting

that would be awesome

Nikola Velkovski

cool

Mike Whiting

just to clarify…

Mike Whiting

this will enable ec2 instances to log to cloudwatch events

Nikola Velkovski

what do you mean by cloudwatch events

Nikola Velkovski

Cloudwatch events are cron like jobs

Mike Whiting

ah ok

Nikola Velkovski

it will add additional metrics to cloudwatch

Mike Whiting

I just want to see logs from docker

Nikola Velkovski

ah that’s not it.

Nikola Velkovski

in order to see the logs

Nikola Velkovski

from docker you’ll need to have:

  • the dockerized app to write to stdout
  • iam role for the ec2 machines to write to cloudwatch logs
  • a log group in cloudwatch logs
Nikola Velkovski

I think that should do it

Mike Whiting

sounds good…

Mike Whiting

however let me explain how I got here

Nikola Velkovski

oh and this

Nikola Velkovski
Using the awslogs Log Driver - Amazon ECS

You can configure the containers in your tasks to send log information to CloudWatch Logs. This allows you to view the logs from the containers in your Fargate tasks. This topic helps you get started using the awslogs log driver in your task definitions.
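Wiring that up for the task above would look roughly like this (region and retention are assumptions): a log group managed in TF, plus a logConfiguration fragment in each container object of task-definitions/jenkins-gig.json.

resource "aws_cloudwatch_log_group" "jenkins" {
  name              = "/ecs/jenkins-gig"
  retention_in_days = 30
}

# fragment to add to each container object in the task definition JSON:
#   "logConfiguration": {
#     "logDriver": "awslogs",
#     "options": {
#       "awslogs-group": "/ecs/jenkins-gig",
#       "awslogs-region": "us-east-1",
#       "awslogs-stream-prefix": "jenkins"
#     }
#   }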

Mike Whiting

I’m creating a aws_ecs_task_definition and I suspect the service is failing to start because the docker image resides on a gitlab image registry and I imagine it’s not possible to use a docker image somewhere where the authentication isn’t through AWS

Mike Whiting

but I was hoping to see evidence of that through some kind of logging

Nikola Velkovski

if it’s ECS/EC2 then you can ssh into the machine and check the agent logs

Nikola Velkovski

otherwise you’ll need to set up logging

Nikola Velkovski

from experience the most usual problem is that the instances do not have internet

Nikola Velkovski

you can try with a simple docker image

Nikola Velkovski

e.g. nginx

Nikola Velkovski

and see if it works

Mike Whiting

if I use a vanilla docker image, e.g. jenkins:lts, which is available publicly, then everything works

Nikola Velkovski

so it’s not an internet issue

Mike Whiting

Nikola Velkovski

you should look at how to authenticate to gitlab through docker

Mike Whiting

makes sense.. I suppose actually I just need to perform the authentication through the user_data field of aws_launch_configuration

Nikola Velkovski

pretty much

Mike Whiting

thanks.. that’s given me some direction
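One way to do that, sketched: the ECS agent reads private-registry credentials from /etc/ecs/ecs.config, which user-data can populate (the cluster name, registry, and base64 token below are placeholders):

resource "aws_launch_configuration" "ecs" {
  # ...
  user_data = <<-EOF
    #!/bin/bash
    cat <<'ECSCONFIG' >> /etc/ecs/ecs.config
    ECS_CLUSTER=staging
    ECS_ENGINE_AUTH_TYPE=dockercfg
    ECS_ENGINE_AUTH_DATA={"registry.gitlab.com":{"auth":"BASE64_OF_USER_AND_TOKEN","email":"you@example.com"}}
    ECSCONFIG
  EOF
}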

Nikola Velkovski

@Erik Osterman what’s the workflow in this case, should Mike create an issue ?

Erik Osterman

Ya if he needs it now, the best bet is to fork and run terraform 0.12upgrade

Nikola Velkovski

cool thanks

Hemanth

hey guys, I am trying to create an ec2 instance (after taint’ing the existing ec2), attaching the ebs volume (using aws_volume_attachment), and using a user-data script in my tf to mount the volume (which is /home), and also trying to import some data from /home to the newly created instance. The problem is that many times /home is not mounted, and /var/log/cloud-init-output.log shows “No such file or directory” for the files I am trying to import. Any thoughts on this?

Hemanth

^ hope that question is not confusing

aknysh


problem is many times the /home is not mounted

aknysh

is it never mounted, or just sometimes fails?

Hemanth

well, most of the time it is not mounted (9/10 times); I manually ssh into the instance and do a sudo mount -a and it mounts. I tried adding sudo mount -a to the user-data script itself, which doesn’t help

aknysh

might be some race conditions

aknysh

e.g. something (EBS) is not ready yet, but the code tries to mount it

aknysh

try to add a delay for testing

aknysh

or maybe there are some settings to wait

Hemanth

i tried adding sleep 60 in the user-data script, which didn’t work. OR should i add something to the terraform itself for the wait?

aknysh

no, tf does not wait for random things, just for resources to be created (and not for all, in all cases)

Hemanth

oh got it, will try some combinations of wait in my user-data script itself

aknysh
How to make a bash script wait till a pendrive is mounted?

I have a bash script which has a line cd /run/media/Username/121C-E137/ this script is triggered as soon as the pen-drive is recognized by the CPU but this line should be executed only after the mo…

aknysh
Check if directory mounted with bash

I am using mount -o bind /some/directory/here /foo/bar I want to check /foo/bar though with a bash script, and see if its been mounted? If not, then call the above mount command, else do somethin…

Hemanth

still trying ways mentioned from ^ stackoverflows, came up with

if sudo blkid \| grep /dev/xvdb > /dev/null; then
    sudo mount -a
else
    sleep 10
fi

any elegant approaches to make that in a loop ?
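One bounded-retry variant of the snippet above, rather than a single fixed sleep (still bash, for the same user-data script):

# wait up to two minutes for the EBS device to show up, then mount
for attempt in $(seq 1 12); do
    if sudo blkid | grep -q /dev/xvdb; then
        sudo mount -a
        break
    fi
    sleep 10
done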

Hemanth

update: right now adding a direct sleep 10 (without any loop) seems to have solved the problem

Brij S

hey all, all of a sudden i’m getting this error when running terraform apply

Error: error validating provider credentials: error calling sts:GetCallerIdentity: NoCredentialProviders: no valid providers in chain. Deprecated.

I have no idea why this is happening. The only thing I did was add some credentials to my .aws/credentials file.

My providers look like this

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}
Brij S

does anyone know what might be causing this?

sarkis

@Brij S is it possible you mucked up your .aws/credentials toml format so it’s not being parsed correctly by the TF provider?

Brij S

@sarkis, it seems when I add some old profiles back to the credentials file it works. But when I remove them i get the error

sarkis

do your profiles depend on each other? i think there was something like source_profile i can’t remember the exact param

Brij S

no

Brij S

two seperate profiles as you see in the snippet above

sarkis

can you share your ~/.aws/credentials file in DM and redact sensitive data

Brij S

sure

mpmsimo

Anyone using a tool like drifter or terraform-monitor-lambda to detect state drift?

mpmsimo

Any success or best practices for identifying and correcting Terraform changes over time?

mpmsimo
digirati-labs/drifter

Check for drift between Terraform definitions and deployed state. - digirati-labs/drifter

mpmsimo
futurice/terraform-monitor-lambda

Monitors a Terraform repository and reports on configuration drift: changes that are in the repo, but not in the deployed infra, or vice versa. Hooks up to dashboards and alerts via CloudWatch or I…

Steven

While these can be useful in a small environment, they are supporting a problematic process and are not going to scale well

Steven

If using microservices, there will be a state file per microservice. Let’s say you have a small environment with only 10 services and you have 3 environments. That’s 30 state files, plus a few more for environment infrastructure and a few for account infrastructure. This can grow fast

Steven

Then there is the bad process that it is supporting: people making production changes from their local systems, with possibly no testing or any audit tracking. A much better process would be to commit the change to a git repo and have that trigger the terraform run. This gets rid of the drift issue due to uncommitted changes. It also allows you to add testing and ensure it is run, as well as having an audit trail

aknysh

master branch reflects what is deployed to prod, with all the history from the PRs. There should be no drift, since what is currently in master is deployed to prod with terraform/helm/helmfile

aknysh

what we usually do to make and deploy a change to apps in k8s and serverless: create a new branch, make changes, open a PR, automatically (CI/CD) deploy the PR to unlimited staging so people could test it, approve the PR, merge the PR to master, cut a release which gets automatically deployed to staging or prod depending on release tag

aknysh

for infrastructure (using terraform): create new branch, open a PR, make changes, run terraform plan automatically, review the plan, approve the PR, run terraform apply, if everything is OK, merge the PR to master

aknysh

we use atlantis for that
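For context, Atlantis is configured with a repo-level atlantis.yaml; a minimal sketch for a two-environment layout (directory names hypothetical):

version: 3
projects:
- dir: environments/staging
  autoplan:
    when_modified: ["*.tf", "../modules/**/*.tf"]
- dir: environments/prod
  apply_requirements: [approved]  # require PR approval before atlantis apply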

aknysh
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

aknysh
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

2019-09-11

oscar

Not yet but took a read. Keen to hear someone’s experience & comparison to local Geodesic workflow / CI tools using Geodesic workflow / Atlantis

Callum Robertson

definitely +1 on this. This is the workflow that we’ve just committed to, so keen on hearing peoples experiences!

Haydar Ciftci

I have a hard time getting the cloudposse modules to work with the recent terraform version (v0.12.8). I feel like I’m missing something, any ideas?

aknysh

make sure all modules you are using are converted to TF 0.12

aknysh

For example, this one is now cloudposse/terraform-aws-alb

aknysh

Don’t know about all modules in terraform-aws-modules/....... (they are not CloudPosse’s)

Haydar Ciftci

Yeah, so it is indeed an issue with the module implementation itself?

aknysh

not implementation

aknysh

the modules that are still in TF 0.11 syntax will not work in TF 0.12 (with a few small exceptions)

oscar

Try:

  on .terraform/modules/alb_magento2/main.tf line 33, in resource "aws_security_group_rule" "http_ingress":
  33:   cidr_blocks       = [var.http_ingress_cidr_blocks]
oscar

Change: remove the "${" and "}"

oscar

If that doesn’t work, try: cidr_blocks = var.http_ingress_cidr_blocks, since it is already a list
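Putting oscar's two suggestions together, the TF 0.12 version of that rule would look like this (the security_group_id wiring is hypothetical):

resource "aws_security_group_rule" "http_ingress" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  security_group_id = var.security_group_id
  # the variable is already a list, so no "${...}" and no extra brackets
  cidr_blocks       = var.http_ingress_cidr_blocks
}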

Erik Osterman

#office-hours starting now! ask questions, get answers. free for everyone. https://zoom.us/j/508587304

2019-09-10

rohit

does anyone know how to read a subnet name from a state file?

sarkis

@rohit doesn’t look like there is a data source for this yet: https://github.com/terraform-providers/terraform-provider-aws/pull/9525

data-source/aws_db_subnet_group: create aws_db_subnet_group data-source by maxenglander · Pull Request #9525 · terraform-providers/terraform-provider-aws

Adds a data source for aws_db_subnet_group. Used aws_db_instance as a model for this work. Currently only allows looking up exactly one database subnet group using name as the argument, although th…

rohit

@sarkis thanks. I will try a different alternative then
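If the other root module exposes the subnet name as an output, the terraform_remote_state data source can read it without a dedicated AWS data source. A minimal sketch, assuming an S3 backend and a hypothetical subnet_name output:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "acme-terraform-state"       # hypothetical state bucket
    key    = "network/terraform.tfstate"  # hypothetical state key
    region = "us-east-1"
  }
}

# TF 0.12 puts remote outputs under .outputs
locals {
  subnet_name = data.terraform_remote_state.network.outputs.subnet_name
}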

sarkis
HashiConf | Watch the Opening Keynote Live

Join us live as HashiCorp Founders Armon Dadgar and Mitchell Hashimoto deliver the opening keynote at HashiConf in Seattle, WA.

sarkis

terraform plan is getting a cost estimation feature on TF Cloud, interesting…

I didn’t find any references to ECS Service Discovery in CP modules. Is it because everyone is running an alternative solution?

For someone getting started with containers, with no more than 3-4 services, should I even bother with orchestration and/or sophisticated methods of service discovery?

Or will ALB/ECS combo get the job done?

aknysh

for that we usually deploy https://istio.io/docs/concepts/what-is-istio/ in the k8s cluster

What is Istio?

Introduces Istio, the problems it solves, its high-level architecture and design goals.

aknysh

don’t have anything in TF


That’s what AWS AppMesh does, right? I wonder if that’s overkill for my use case though.

aknysh

yes, AppMesh should do similar things. we haven’t used it yet

aknysh

for 3 static services it might be overkill, but at the same time you gain experience and will be able to use it with tens of services
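For the plain ALB/ECS route, Cloud Map-based service discovery can also be declared directly in Terraform without bringing in a mesh. A minimal sketch with hypothetical names and variables:

resource "aws_service_discovery_private_dns_namespace" "this" {
  name = "services.local"  # hypothetical private namespace
  vpc  = var.vpc_id
}

resource "aws_service_discovery_service" "api" {
  name = "api"

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.this.id
    dns_records {
      ttl  = 10
      type = "A"  # A records require tasks in awsvpc networking mode
    }
  }
}

resource "aws_ecs_service" "api" {
  name            = "api"
  cluster         = var.cluster_arn          # hypothetical
  task_definition = var.task_definition_arn  # hypothetical
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets = var.private_subnet_ids  # awsvpc networking, as required above
  }

  # registers each task with Cloud Map so it resolves as api.services.local
  service_registries {
    registry_arn = aws_service_discovery_service.api.arn
  }
}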

Callum Robertson

Has anyone played with the new Terraform SaaS offering?

Callum Robertson

Looks like TF cloud has hit GA

2019-09-09

Michał Czeraszkiewicz

Hi, how can I reference resources