#terraform (2019-06)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2019-06-03

Bogdan avatar

is there a way to source env vars in https://github.com/cloudposse/terraform-aws-codebuild (v.0.1.6.0) from SSM Parameter store? cc: @Andriy Knysh (Cloud Posse) @Igor Rodionov @jamie? Apparently it’s supported in the native aws_codebuild_project:

    environment_variable {
      "name"  = "SOME_KEY2"
      "value" = "SOME_VALUE2"
      "type"  = "PARAMETER_STORE"
    }
cloudposse/terraform-aws-codebuild

Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild

Bogdan avatar

which I couldn’t find in main.tf at line 183

jamie avatar

The answer is yes, you can add type = "PARAMETER_STORE"

jamie avatar

I have just added a pull request to update the example. https://github.com/cloudposse/terraform-aws-codebuild/pull/43

Label module update + Readme example + descriptions for SSM parameter store usage by Jamie-BitFlight · Pull Request #43 · cloudposse/terraform-aws-codebuild

What: Updates to allow module.label.context to be passed in to the module. Example to show how to address parameter store Why Because someone asked about it in the slack channel

Bogdan avatar

thanks for the update Jamie! On a side-note do you know if anything will come out of https://github.com/blinkist/terraform-aws-airship-ecs-cluster/pull/9 btw? Or would I be better off forking it? Marten hasn’t been so responsive for the last weeks

This PR includes three features by Jamie-BitFlight · Pull Request #9 · blinkist/terraform-aws-airship-ecs-cluster

launch_template resource is used instead of a launch_configuration, this allows for a more granular tagging. i.e. The instances and the instance ebs volumes get tags. The ASG uses a mixed_instances…

jamie avatar

@Bogdan As far as I am aware pull requests into Blinkist are basically not happening, or are happening very slowly. The repo’s are in production and they aren’t taking in changes very regularly. @maarten himself is not working at Blinkist. He has his own personal brand called DoingCloudRight which is where the airship.tf site is generated from, and where some of his more current terraform modules exist.

jamie avatar

I suggest you take a fork of mine for now if you want those features..

maarten avatar
maarten

Feel free to fork. If you have better or other ideas let me know. Baby on the way plus many other things atm.. can’t commit my time at the moment unfortunately. @jamie, if I make you repo admin (and if @jonboulle agrees), would that be helpful?

jamie avatar

please do, i actually have a ton of time now.

Bogdan avatar

Thanks for clarifying Maarten, and as the father of an 11-week-old myself I wish you good luck and congrats :slightly_smiling_face:. I’ll follow your and Jamie’s suggestion and fork Jamie’s latest then. Regarding ideas, one that has been on my mind is aws_autoscaling_schedule, which, if set, would let me spin up the cluster early in the morning and kill it after working hours

jonboulle avatar
jonboulle

fine for the repo admin!

maarten avatar
maarten

Great!

Cloud Posse avatar
Cloud Posse
04:01:15 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
zoom https://zoom.us/j/684901853
slack #office-hours (our channel)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(ignore the incorrect date there - need to fix that!)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

office hour this wednesday

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jamie thanks! good to see you around

jamie avatar

@Erik Osterman (Cloud Posse) after all the damn waiting, I’m finally in the States

jamie avatar

as of 2 weeks ago

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

was just gonna ask!

jamie avatar

and now I’m a bit stabilised

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

welcome!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

East coast?

jamie avatar

so I am catching up on my backlog

jamie avatar

yeah, Connecticut

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cool!

jamie avatar

I’m doing a pull request for this one in 30 mins https://github.com/cloudposse/terraform-aws-dynamic-subnets/issues/36

Add a simple way to just use all AZs · Issue #36 · cloudposse/terraform-aws-dynamic-subnets

The existence of length(data.aws_availability_zones.available.names) implies that something already knows what "all AZs" or "n AZs" should look like; but I still have to specify…
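
For context, the workaround at the time was to wire the data source into the module call yourself; a minimal sketch (module ref pinned to master, most inputs omitted for brevity):

data "aws_availability_zones" "available" {}

module "subnets" {
  source             = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  availability_zones = ["${data.aws_availability_zones.available.names}"]

  # namespace, stage, name, vpc_id, igw_id, cidr_block, etc. omitted
}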

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re about to kick off 0.12 updates. we started with terraform-null-label (almost ready for merge)

jamie avatar

I can see that activity!

jamie avatar

Are you doing it in named tags or named branches?

jamie avatar

so that 0.11 can be maintained too?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

going to keep master for bleeding edge / whatever is “latest” so 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

creating 0.11/master for bug fixes

jamie avatar

cool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

going to keep 0.x module tags since interfaces are still subject to change

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the thinking is no new feature work in 0.11

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

only bug fixes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so all updates should be patch releases to the last 0.11 release

jamie avatar

My clients haven’t been switching to 12

jamie avatar

since they have huge libraries of existing modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to 12 as the release version?

jamie avatar

0.12

jamie avatar

yeah

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

er.. i mean, how are they tagging their modules post-0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i see some bumping the major version

jamie avatar

Ah, they haven’t yet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(don’t really want to do that since not even terraform is 1.x!)

jamie avatar

Just haven’t updated to use 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok

jamie avatar

I’ll brb, i’m gonna finish this module

jamie avatar

then I can look at what it takes to convert a module to 0.12
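
For a rough sense of what that conversion involves, here is a minimal before/after sketch (hypothetical resource and variable names): 0.11 needs interpolation strings and stringly-typed values, while 0.12 uses first-class expressions and types.

# Terraform 0.11
variable "enabled" {
  default = "true"
}

resource "aws_eip" "default" {
  count = "${var.enabled == "true" ? 1 : 0}"
  vpc   = true
}

# Terraform 0.12
variable "enabled" {
  type    = bool
  default = true
}

resource "aws_eip" "default" {
  count = var.enabled ? 1 : 0
  vpc   = true
}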

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the main blocker I have right now is that terraform-docs and json2hcl do not support 0.12, which is what we’ve been using to (a) generate README.md documentation, (b) validate that input and output descriptions have been set

jamie avatar

ah… annoying

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add support for Terraform 0.12's Rich Value Types · Issue #62 · segmentio/terraform-docs

Prerequisites Put an x into the box that applies: This issue describes a bug. This issue describes a feature request. For more information, see the Contributing Guidelines. Description With the upc…

jamie avatar

I’m getting inventive in this module to maintain backwards compatibility

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

as far as I’m concerned that’s not a requirement. if we have to settle for LCD, then we lose the benefits of 0.12 that make it so compelling.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label


AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Visual Studio Code still has not updated the terraform plugin to support 0.12; I’m using Mikael Olenfalk’s plugin

jamie avatar

Okay @Erik Osterman (Cloud Posse) finished the pull https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/50

Fixes 36 by Jamie-BitFlight · Pull Request #50 · cloudposse/terraform-aws-dynamic-subnets

what Added a working example. Added the ability to specify the number of public or private subnets. Added the ability to specify the format of the public/private subnet tag. Updated the label modul…

2019-06-04

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

did anyone use this

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

to assign ssh_key to users?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

not just to the user related to the app token

Ayo Bami avatar
Ayo Bami
02:45:24 PM
Ayo Bami avatar
Ayo Bami

I’m not sure what I’m doing wrong.. I did tag the subnet with shared

Ayo Bami avatar
Ayo Bami

tags = {
  "kubernetes.io/cluster/eks-beemo" = "shared"
}

jamie avatar

@Ayo Bami which module are you using?

jamie avatar

Also, is the subnet public or private?

Rob avatar

Does anyone have suggestions for the best way to use the terraform-aws-ecs-codepipeline repo (or anythign similar) with more complex pipelines? It is hard coded to a single build and single deploy stage. However most real pipelines have multiple environments to deploy to with additional actions like automated tests and approval gateways. Ideally, we’d like something that is configurable to use the same codepipeline module for different project requirements. Any thoughts?

jose.amengual avatar
jose.amengual

I’m using that repo

jose.amengual avatar
jose.amengual

it is very opinionated

jose.amengual avatar
jose.amengual

so you should basically create your own terraform calling those modules individually and pass the appropriate variables to each

jose.amengual avatar
jose.amengual

like the listener rules for example, you might need to call it twice if you have two target_groups

Rob avatar

The problem is the build and deploy stages and actions of the pipeline are hardcoded so I don’t see a way to alter them when calling.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Rob the module is opinionated and is used for simple pipelines (source/build/deploy). For something more generic, you can open a PR (we’ll review fast), or maybe even create a new module (if it’s too complicated to add new things to the existing module)

jose.amengual avatar
jose.amengual

you will have to fork and make your own and maybe send a PR over to make it more flexible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep @jose.amengual and @Andriy Knysh (Cloud Posse) are spot on

jose.amengual avatar
jose.amengual

it is pretty hard to create a module that is “one size fits all”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep… also, we (cloudposse) use mostly #codefresh for more serious release engineering. AWS CodeBuild/CodePipeline are quite primitive by comparison.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

more primitive and more difficult to create complex pipelines. Codefresh has better DSL and UI

jose.amengual avatar
jose.amengual

so no jenkins in your world ?

Rob avatar

Thanks so much for the information! I didn’t see an easy way to modify the module for complex pipelines now, but saw that in TF 12 it can be done with looping logic.

We’ll absolutely take a look at codefresh. Do you have TF modules for defining more complex pipelines in codefresh?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/example-app

Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the example repo defines all Codefresh pipelines that we usually use

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it’s not terraform, pipelines defined in yaml)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual we have jenkins modules https://github.com/cloudposse/terraform-aws-jenkins

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but we stopped using it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for many reasons

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

jenkins is ok, but requires separate deployment and maintenance (as the module above shows)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s also old and convoluted and has a lot of security holes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Vulnerabilities Found in Over 100 Jenkins Plugins | SecurityWeek.Com

Researcher discovers vulnerabilities, mostly CSRF flaws and passwords stored in plain text, in more than 100 Jenkins plugins, and patches are not available in many cases.

jose.amengual avatar
jose.amengual

yes jenkins is like the standard ci/do everything tool but nowadays services like codefresh are better

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Thousands of Jenkins servers will let anonymous users become admins | ZDNetattachment image

Two vulnerabilities discovered and patched over the summer expose Jenkins servers to mass exploitation.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t want to deal with all that stuff

jose.amengual avatar
jose.amengual

ohhh yes I know, I work at Sonatype (Nexus IQ/Firewall and Maven tool creators), we have scanned jenkins, node and others

Rob avatar

Thanks so much everyone, really appreciate the assistance!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Codefresh also added parallel execution, using which you can build many images (e.g. for deploy and for test) in parallel

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
builds_parallel:
    type: parallel
    stage: Build
    steps:
      build:
        title: "Build image"
        type: build
        description: "Build image"
        dockerfile: Dockerfile
        image_name: ${{CF_REPO_NAME}}
        no_cache: false
        no_cf_cache: false
        tag: ${{CF_SHORT_REVISION}}

      build_test:
        title: "Build test image"
        type: build
        description: "Build test image"
        dockerfile: Dockerfile
        target: test
        image_name: ${{CF_REPO_NAME}}
        tag: ${{CF_SHORT_REVISION}}-test
        when:
          condition:
            all:
              executeForDeploy: "'${{INTEGRATION_TESTS_ENABLED}}' == 'true'"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after building the test image, you can run tests on it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
test:
    title: "Run tests"
    stage: Test
    type: composition
    fail_fast: false
    composition: codefresh/test/docker-compose.yml
    composition_candidates:
      app:
        image: ${{build_test}}
        command: codefresh/test/test.sh
        env_file:
          - codefresh/test/test.env
        links:
          - db
    when:
      condition:
        all:
          executeForDeploy: "'${{INTEGRATION_TESTS_ENABLED}}' == 'true'"
jose.amengual avatar
jose.amengual

pretty cool

Vidhi Virmani avatar
Vidhi Virmani

hello,

How can I use * in a route53 recordset using terraform, for example *.example.com?

Vishnu Pradeep avatar
Vishnu Pradeep

@Vidhi Virmani I believe you can have *.example.com in the name field. eg.

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name    = "*.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.lb.public_ip}"]
}

https://www.terraform.io/docs/providers/aws/r/route53_record.html

AWS: aws_route53_record - Terraform by HashiCorp

Provides a Route53 record resource.

Vidhi Virmani avatar
Vidhi Virmani

* is not allowed.

Vishnu Pradeep avatar
Vishnu Pradeep

@Vidhi Virmani Interesting. what version of aws provider are you using? I’ve tried with 1.60.0 and it didn’t complain

Vishnu Pradeep avatar
Vishnu Pradeep
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

Vidhi Virmani avatar
Vidhi Virmani

Hey, It worked for me.. There was an extra space and I didn’t realise that. Sorry for such a dumb question

Vishnu Pradeep avatar
Vishnu Pradeep

2019-06-05

shtrull avatar
shtrull

need a little help, I’m trying to create a bucket:

resource "aws_s3_bucket" "blabla" {
  bucket = "blabla-terrafom.dev"
  acl    = "private"
}

I want to make this as smart as possible: if I’m in env prod I need it to be blabla-terrafom, but if I’m in any other env I need it to be blabla-terrafom.${var.Environment_short}. My problem is how to make the "." only appear when Environment_short is not prd

shtrull avatar
shtrull

i tried playing with ${var.Environment_short != "prd": ".${var.Environment_short}"} but with no luck

loren avatar

bucket = "blabla-terrafom${var.Environment_short != "prd" ? ".${var.Environment_short}" : ""}"

loren avatar

or clean it up with a local:

locals {
  env = "${var.Environment_short != "prd" ? ".${var.Environment_short}" : ""}"
}
loren avatar

bucket = "blabla-terrafom${local.env}"

jamie avatar

It’s a fine way to handle it. If you know all your env short names in advance you could also use a locals map to remove the ternary (sketch below)

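A sketch of the locals-map approach jamie describes, building on loren’s example above (the env names in the map are hypothetical):

locals {
  # suffix per environment; prod gets an empty suffix
  env_suffix = {
    prd = ""
    dev = ".dev"
    stg = ".stg"
  }
}

resource "aws_s3_bucket" "blabla" {
  bucket = "blabla-terrafom${lookup(local.env_suffix, var.Environment_short)}"
  acl    = "private"
}
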
Mike Nock avatar
Mike Nock

Morning all! Was wondering if anyone has run into this before: Creating an ECS cluster and service with terraform, have the ALB, cluster, target group, and target group attachment set, task definition and all that gets created as expected - but when TFE goes to create the service it fails because it never associated an ALB with the target group and therefore can’t add a target to the group.

jamie avatar

Yes I’m very familiar with that issue. :) which modules are you using?

Mike Nock avatar
Mike Nock

Hey, sorry didn’t see your reply. Not using any module, building it by hand because I had trouble getting a module that created the whole service and ALB, and exported the ALB dns name so I could make r53 entries with it

jamie avatar

@Mike Nock would you be happy to share what you have? Also all this is handled in the https://airship.tf terraform modules that @maarten created. But there’s a few tricks you may want to use to force dependence between resources you’re creating.

Airship Modules

Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon
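
One common trick for that ordering problem (the “forcing dependence” jamie mentions): make the ECS service depend on the listener, so the target group is attached to the ALB before ECS tries to register targets. A rough sketch with hypothetical resource names; the referenced ALB, target group, cluster and task definition are assumed to be defined elsewhere:

resource "aws_lb_listener" "http" {
  load_balancer_arn = "${aws_lb.default.arn}"
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.default.arn}"
  }
}

resource "aws_ecs_service" "default" {
  name            = "example"
  cluster         = "${aws_ecs_cluster.default.id}"
  task_definition = "${aws_ecs_task_definition.default.arn}"
  desired_count   = 1

  load_balancer {
    target_group_arn = "${aws_lb_target_group.default.arn}"
    container_name   = "app"
    container_port   = 80
  }

  # the listener is what ties the target group to the ALB; depending on it
  # avoids the "target group has no associated load balancer" failure
  depends_on = ["aws_lb_listener.http"]
}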

Mike Nock avatar
Mike Nock

Sure I’d be open to that. I just got the service working correctly, so removing some of what I hard coded to do that atm.

jamie avatar

If it’s working… then great

Mike Nock avatar
Mike Nock

Error code: Error: Error modifying Target Group Attributes: InvalidConfigurationRequest: The provided target group attribute is not supported status code: 400, request id: ff5c4dd6-870b-11e9-832e-3dd05e9547d1

Suresh avatar

Hey any quick example of how to provide list var with the terraform cli

Suresh avatar

I am using this terraform plan -var env_vars={"test" : "value", "test1": "value2"}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this won’t work; missing ''

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

did you define env_vars as a variable?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what’s the precise error you are getting from terraform

Suresh avatar

Yup, I have env_vars defined as a variable. Error:

 as valid HCL: At 1:28: illegal char
Usage: terraform plan [options] [DIR-OR-PLAN]
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you share precisely the command you are running?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform plan -var env_vars={"test" : "value", "test1": "value2"} will 100% not work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the " is evaluated by bash (or your shell), so that it is equivalent to running

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform plan -var env_vars={test : value, test1: value2}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so that `: value, test1: value2}` are actually passed as arguments to `terraform plan`

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and that leaves `env_vars={test`, which is not a valid terraform expression

Suresh avatar

it’s failing because of an HCL syntax error

loren avatar

wrap it in quotes? env_vars='{"test" : "value", "test1": "value2"}'

loren avatar

or maybe -var 'env_vars={"test" : "value", "test1": "value2"}'

loren avatar

or even -var=env_vars='{"test" : "value", "test1": "value2"}'?

Suresh avatar

let me try those @loren

loren avatar

that’s a map though, question was about a list…

Suresh avatar

Sorry, looking for map var

loren avatar
Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

loren avatar
terraform apply -var="image_id=ami-abc123"
terraform apply -var='image_id_list=["ami-abc123","ami-def456"]'
terraform apply -var='image_id_map={"us-east-1":"ami-abc123","us-east-2":"ami-def456"}'
Suresh avatar

tried the same, none of them worked

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

office hours starting now: https://zoom.us/j/684901853

johncblandii avatar
johncblandii

anyone know if AWS Aurora is always a cluster or can it be a single instance? Seems like using db_instance is an option, but i’m getting errors with storage type io1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s always a cluster, but it can have just a single instance. The cluster itself does not cost anything, it’s just metadata
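
In Terraform terms that means an aws_rds_cluster plus a single aws_rds_cluster_instance; a minimal sketch (identifiers, engine and credentials are placeholders, and Aurora manages its own storage, so a storage type like io1 does not apply):

resource "aws_rds_cluster" "default" {
  cluster_identifier = "example"
  engine             = "aurora-mysql"
  master_username    = "admin"
  master_password    = "change-me"
}

resource "aws_rds_cluster_instance" "single" {
  count              = 1
  identifier         = "example-0"
  cluster_identifier = "${aws_rds_cluster.default.id}"
  instance_class     = "db.t3.medium"
  engine             = "aurora-mysql"
}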

johncblandii avatar
johncblandii

ok, gotcha. Makes sense. Thx

2019-06-06

Paul Lupu avatar
Paul Lupu

hello, can someone help me understand what I’m doing wrong in the elastic-beanstalk-environment module? here’s the code / output https://gist.github.com/lupupaulsv/d705e695be36e0ec6c21f1b9e9d70a3b

Vishnu Pradeep avatar
Vishnu Pradeep

@Paul Lupu

security_groups = ["sg-07f9582d82c4058e8, sg-0bf6bf06395fdf2c4"]

should be

security_groups = ["sg-07f9582d82c4058e8", "sg-0bf6bf06395fdf2c4"]

Mike Nock avatar
Mike Nock

Hey guys, using this module: https://github.com/cloudposse/terraform-aws-s3-log-storage?ref=master and running into an issue where the ALBs I’m creating don’t have access to the bucket. Is that an attribute input I need to set?

cloudposse/terraform-aws-s3-log-storage

This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail - cloudposse/terraform-aws-s3-log-storage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a working example of ALB and log storage

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-alb

Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-lb-s3-bucket

Terraform module to provision an S3 bucket with built in IAM policy to allow AWS Load Balancers to ship access logs - cloudposse/terraform-aws-lb-s3-bucket

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-lb-s3-bucket

Terraform module to provision an S3 bucket with built in IAM policy to allow AWS Load Balancers to ship access logs - cloudposse/terraform-aws-lb-s3-bucket

Mike Nock avatar
Mike Nock

Ahhh I think that last one is it, I don’t think I applied a bucket policy

Mike Nock avatar
Mike Nock

Seems like the log bucket is pretty similar to the log_storage one that I’m using. Same variables and what not

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s the same + the IAM policy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-aws-lb-s3-bucket uses terraform-aws-s3-log-storage and adds the policy
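
Roughly, the extra piece terraform-aws-lb-s3-bucket adds is a bucket policy that lets the regional ELB service account write access logs; a simplified sketch (var.bucket_name is a placeholder here, not the module’s actual input):

data "aws_elb_service_account" "default" {}

data "aws_iam_policy_document" "lb_logs" {
  statement {
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::${var.bucket_name}/*"]

    principals {
      type        = "AWS"
      identifiers = ["${data.aws_elb_service_account.default.arn}"]
    }
  }
}

resource "aws_s3_bucket_policy" "lb_logs" {
  bucket = "${var.bucket_name}"
  policy = "${data.aws_iam_policy_document.lb_logs.json}"
}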

Maeghan Porter avatar
Maeghan Porter

Hey all! I’m using a couple of your guys’ modules to create and peer a vpc. I’m trying to do it all in one tf run so that the state file won’t be crazy. But I’m getting these errors on the plan:

* module.peering.module.vpn_peering.data.aws_route_table.requester: data.aws_route_table.requester: value of 'count' cannot be computed
* module.peering.module.monitoring_peering.data.aws_route_table.requester: data.aws_route_table.requester: value of 'count' cannot be computed
Maeghan Porter avatar
Maeghan Porter

I suspect it’s because the route tables will be created by the vpc module, but it doesn’t know that and expects them to already exist maybe?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

unfortunately, any time a data provider is used you cannot do an all-in-one run

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if that data provider depends on something created

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s because terraform expects the plan to be entirely deterministic before execution. if you instead inlined all the modules into the same context

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and got rid of the data provider, you could possibly mitigate it

Maeghan Porter avatar
Maeghan Porter

damn, I thought that might be true. I was hoping I could do it all in one because we’re trying to automate it and if the pipeline runs again the statefile that created the vpc is like “wtf is this stuff, I’ll just wipe that out for you”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, some new options might be available in 0.12, but we haven’t explored that yet

Maeghan Porter avatar
Maeghan Porter

oh ok cool, I will have a look, thank you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve just kind of “sucked it up” for now hoping that eventually this will all get addressed. then we use a Makefile with an apply target or coldstart target to help us in those coldstart scenarios

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but it’s not ideal… sorry!!

Maeghan Porter avatar
Maeghan Porter

All good, thank you for the explanation! We will also have to suck it up I guess

jamie avatar

There are some workarounds. Like using Makefile like Erik says to run targeted applies in a certain order. The workaround I use the most is splitting terraform into parts with individual state files, and using terraform_remote_state to look up the values
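
A minimal sketch of that terraform_remote_state lookup in 0.11 syntax (the bucket, key and output name are hypothetical):

data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# then reference outputs exported by the other state, e.g.
# vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"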

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jamie doesn’t that still require multi-phased apply?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(maybe I’m missing something!)

jamie avatar

It does, you’re right. The difference being that each apply is standalone and complete. I use it more often than having extra Args that would create certain resources first. If data sources had a defaults section I would be really happy. :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, so you make the delineation very cut & dry

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. by using different project folders.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suppose that’s a good rule to go by

jamie avatar

It was the easier-to-understand option for the client I was working with. Having terraform apply -target module.x.submodule.y was harder to wrap their heads around than cd vpc; terraform apply

jamie avatar

This way it stays native terraform

jamie avatar

Instead of requiring make

2019-06-07

Meb avatar

What are your best practices for importing data in TF from another stack (tfstate)? Any examples showing that?

nutellinoit avatar
nutellinoit

Do you want to import existing resources into a new terraform project, or get outputs from another state?

Meb avatar

No remote states

Meb avatar

mainly how you do it, to see if there are better patterns for “importing” remote states using https://www.terraform.io/docs/providers/terraform/d/remote_state.html

Terraform: terraform_remote_state - Terraform by HashiCorp

Accesses state meta data from a remote backend.

nutellinoit avatar
nutellinoit

the only way i use is via remote_state

jamie avatar

I also use the terragrunt wrapper for terraform, so that I can do interpolation on the backend resources. And that way I can use standard naming conventions across all terraform state keys, and access their remote state values in a standard way. @Meb

jamie avatar

Example to follow

jamie avatar
terragrunt = {
  remote_state {
    backend = "s3"

    config {
      key            = "Namespace=${get_env("TF_VAR_namespace", "not-set")}/Name=${get_env("TF_VAR_name", "not-set")}/Environment=${get_env("TF_VAR_environment", "not-set")}/${path_relative_to_include()}/${get_env("TF_VAR_state_key", "not-set")}"
      bucket         = "${get_env("TF_VAR_state_bucket", "not-set")}"
      region         = "${get_env("TF_VAR_state_region", "not-set")}"
      encrypt        = true
      dynamodb_table = "${get_env("TF_VAR_state_dynamodb_table", "not-set")}"
      # role_arn       = "arn:aws:iam::${get_env("TF_VAR_state_assume_role_AWS_account_id", "not-set")}:role/${get_env("TF_VAR_state_assume_role_prefix", "not-set")}-${get_env("TF_VAR_name", "not-set")}-${get_env("TF_VAR_environment", "not-set")}"

      s3_bucket_tags {
        name = "Terraform state storage"
      }

      dynamodb_table_tags {
        name = "Terraform lock table"
      }
    }
  }
}
jamie avatar

That is what my backend file looks like for terragrunt

jamie avatar

my terraform.tfvars includes it

terragrunt = {
  include = {
    path = "./backend.tfvars"
  }
...
jamie avatar

And the environment file that passes in all of the values:

jamie avatar
##################################################################################################################
## NOTE: Do not put spaces after the equals sign, or quotes around any of the strings to the right of the equals sign.
## the values are line delimited
##################################################################################################################

## The namespace, name, and environment must be unique per deployment

## These values are kept short to accommodate the name length considerations at AWS's end

## The namespace should be kept to 6 characters or fewer, 
## and can be used to indicate a region (i.e. uk), or a company (i.e. gtrack)
TF_VAR_namespace=ccoe

## The name should be 12 characters or fewer, this is the name of the client, or project.
## This should match the project.name in the build.properties of the Junifer project
TF_VAR_name=spinnaker

## The environment should be kept to 8 or fewer characters, and can be something like PROD, UAT or DEV
## You can use other names, just use all CAPS
TF_VAR_environment=SANDBOX

# Region to generate all resources
TF_VAR_region=ap-southeast-2

## The bucket must be accessible by the account running terraform, these can be shared across deployments
## These settings should not be changed as the terraform state is centralised in this bucket for all deployments (in this region)
TF_VAR_state_bucket=central-terraform-state
TF_VAR_state_dynamodb_table=central_terraform_state_lock
TF_VAR_state_region=ap-southeast-2
TF_VAR_state_key=terraform.tfstate
# TF_VAR_state_assume_role_AWS_account_id=xxxxxxxx

## the prefix for the role that terraform will assume for the terraform state
## the full role is of the form ${prefix}-${TF_VAR_name}-${TF_VAR_environment}. see below for these other values
TF_VAR_state_assume_role_prefix=terraform-state-access
jamie avatar

The environment variables can then be passed in at run time either via this file for testing, or via the deployment pipeline (i.e. runatlantis)

Meb avatar

Thanks

Bogdan avatar
Bogdan
03:32:57 PM

also asking here since it’s relevant

anyone knows a faster way to setup Vault than described in https://github.com/hashicorp/terraform-aws-vault ?

Blaise Pabon avatar
Blaise Pabon

Could you say more about how you mean “faster”? Fewer steps? Fewer decisions? Fewer components? …There are images in DockerHub that might have the combination of settings that will work for you.

anyone knows a faster way to setup Vault than described in https://github.com/hashicorp/terraform-aws-vault ?

Bogdan avatar

Thanks for asking @Blaise Pabon ! I was referring to both fewer steps and fewer decisions while avoiding the need for an orchestration and use of Docker

Blaise Pabon avatar
Blaise Pabon

@Bogdan, ok, that makes sense, well in that case, have you looked around for a good Ansible role? I haven’t tried it yet, but I might use https://galaxy.ansible.com/andrewrothstein/vault for my home lab.

Ansible Galaxy

Jump start your automation project with great content from the Ansible community

Mike Nock avatar
Mike Nock

Does anyone know what format this module outputs the subnet_ids in? https://github.com/terraform-community-modules/tf_aws_public_subnet

Trying to import the list of subnets as a string but it keeps saying I’m incorrectly attempting to import a tuple with one element (it should have 2 subnets)

terraform-community-modules/tf_aws_public_subnet

A Terraform module to manage public subnets in VPC in AWS. - terraform-community-modules/tf_aws_public_subnet

antonbabenko avatar
antonbabenko

Run, run, run away from this 4-year-old Terraform module which has not gotten any love for the last 2 years…

terraform-community-modules/tf_aws_public_subnet

A Terraform module to manage public subnets in VPC in AWS. - terraform-community-modules/tf_aws_public_subnet

antonbabenko avatar
antonbabenko

PS: it is a list

antonbabenko avatar
antonbabenko
terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

jose.amengual avatar
jose.amengual

I was using terraform-aws-alb-ingress and I had to fork it, but now I’m taking another look and I think I misunderstood the intention. Please correct me if I’m wrong, but if I was going to use it, should I declare a module per target group I have?

jose.amengual avatar
jose.amengual

basically this module was made for one target group

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, more or less

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual it was designed to “kinda” work like an Ingress rule in kubernetes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(since we approached ECS coming from a k8s background)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s also why we have a “default backend”

jose.amengual avatar
jose.amengual

I see, ok, my problem is that I tried to use it thinking it could ingest a list

jose.amengual avatar
jose.amengual

but in K8s you declare every ingress

jose.amengual avatar
jose.amengual

so it makes sense

jose.amengual avatar
jose.amengual

I will just call it again for my bluegreen target

jose.amengual avatar
jose.amengual

and at some point I will send a PR for the ALB module to support arbitrary targets

2019-06-08

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Generate new repositories with repository templatesattachment image

Today, we’re excited to introduce repository templates to make boilerplate code management and distribution a first-class citizen on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good for terraform modules

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Is it added as a resource?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini
Provider: GitHub - Terraform by HashiCorp

The GitHub provider is used to interact with GitHub organization resources.

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I’ve worked on a couple of awesome modules, but I’m not yet allowed to publicly publish them (company policy)

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

modules for users and repos

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

versions 0.11.14 and 0.12

2019-06-09

rj avatar

Anyone know of a good GUI for terraform, other than Atlas?

2019-06-10

Cloud Posse avatar
Cloud Posse
04:03:48 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
zoom https://zoom.us/j/684901853
slack #office-hours (our channel)

[Gamifly] Vincent avatar
[Gamifly] Vincent
05:33:03 PM

Hi guys, I’m going nuts over this error. I am using the tutorial to create a beanstalk app; it seems that the output required to create the related env is not working. Any idea why? I’m running on Windows 10 (yeah, yeah…!): terraform v0.11.14

  • provider.aws v2.12.0
  • provider.null v2.1.2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@[Gamifly] Vincent did you look at the example of using EB application and environment https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-application

Terraform Module to define an ElasticBeanstalk Application - cloudposse/terraform-aws-elastic-beanstalk-application

[Gamifly] Vincent avatar
[Gamifly] Vincent
05:46:54 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what does your outputs.tf file look like?

[Gamifly] Vincent avatar
[Gamifly] Vincent
05:50:30 PM

the outputs.tf from elastic_beanstalk_application looks like that

[Gamifly] Vincent avatar
[Gamifly] Vincent
05:52:02 PM

And the resource is…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe some concurrency issue, did you try to apply more than once?

[Gamifly] Vincent avatar
[Gamifly] Vincent

yup, a lot. As the application is created, I need to destroy it manually from the aws console between each try

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why manually?

[Gamifly] Vincent avatar
[Gamifly] Vincent

why not ?!

[Gamifly] Vincent avatar
[Gamifly] Vincent
06:45:03 PM
[Gamifly] Vincent avatar
[Gamifly] Vincent

actually, the app exists within the console; it’s just that the state is not up to date?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

usually happens when one tries to add/delete something manually

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but seriously, try to remove everything and apply again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what state backend are you using?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what you see also happens when you apply something using local state, then exit the shell (or whatever you are using to access AWS), and the state gets lost; or you just delete the state file on purpose or accidentally

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use S3 for TF state backend

[Gamifly] Vincent avatar
[Gamifly] Vincent

I think I just use the local backend ; I’ll try the destroy and apply…see you in some minutes !

johncblandii avatar
johncblandii

@wbrown43 hit a problem with two CP modules conflicting with internal SGs.

https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L12 …and… https://github.com/cloudposse/terraform-aws-alb/blob/master/main.tf#L11

Because we use the same values the name becomes the same: app-name-dev. So the second one to create fails because the first w/ that same name exists.

What is the best approach to handle this conflict? We have our own db SG so don’t really want the rds cluster one, tbh, but it is required by the module.

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

cloudposse/terraform-aws-alb

Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add diff attributes attributes = "${distinct(compact(concat(var.attributes,list("something", "else"))))}"

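A sketch of what that looks like in practice: give each module call its own attribute so the generated names (and therefore the security group names) diverge, e.g. app-name-dev-rds vs app-name-dev-alb (the attribute values here are just examples, and most inputs are omitted):

module "rds_cluster" {
  source     = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=master"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  name       = "${var.name}"
  attributes = ["rds"]
  # ...
}

module "alb" {
  source     = "git::https://github.com/cloudposse/terraform-aws-alb.git?ref=master"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  name       = "${var.name}"
  attributes = ["alb"]
  # ...
}
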
johncblandii avatar
johncblandii

but it would add those to the cluster too

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes it will add

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

need a PR to fix that conflict if you don’t want to add attributes to all the names

[Gamifly] Vincent avatar
[Gamifly] Vincent
08:21:44 PM

Do you know if there is any nice way to recover from that ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

look into why default is nil and if you have it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

once that is fixed, run plan/apply again (should be nothing to recover here)

[Gamifly] Vincent avatar
[Gamifly] Vincent

It actually happens during a destroy. I went through the tfstate to check what was missing, then refresh/plan/apply to ‘refresh’ aws_vpc.default, then destroy again. Same error… I think there is something I’m missing here; as I reapplied, shouldn’t the module be ok?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to destroy everything first. If terraform can’t destroy something, you need to go to the AWS console and destroy it manually. Since some things were destroyed (and probably applied) manually before and were missing from the state file, it’s difficult to say what is missing or wrong

[Gamifly] Vincent avatar
[Gamifly] Vincent

Ok, thanks! I thought there maybe was a way to avoid doing it manually again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if TF can’t destroy something b/c it’s in the state file, but missing in AWS, you need to taint the resource

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

usually, there are two rules on how to clean up a TF mess:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. If something is in AWS, but not in the state file (b/c of lost or broken or deleted state file) -> manually delete the resources from AWS console
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. If something is in the state file, but not in AWS (was manually deleted/updated/changed) -> terraform taint or state rm the resources (see the command sketch below)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is a third way to clean up the mess, but not sure if you want to do it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Delete the state file and then nuke the account https://github.com/gruntwork-io/cloud-nuke or https://github.com/rebuy-de/aws-nuke
gruntwork-io/cloud-nuke

A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke

rebuy-de/aws-nuke

Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke
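
For rule 2 above, the commands look roughly like this in 0.11 syntax (the resource addresses are hypothetical):

# mark the resource so it is recreated on the next apply
terraform taint -module=vpc aws_vpc.default

# or drop it from the state entirely
terraform state rm module.vpc.aws_vpc.default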

[Gamifly] Vincent avatar
[Gamifly] Vincent

That sounds like a whiter-than-white cleaning

[Gamifly] Vincent avatar
[Gamifly] Vincent

Thanks for that advice

[Gamifly] Vincent avatar
[Gamifly] Vincent

What about… things that are not in AWS because they were manually deleted, nor in the state? (I just deleted the vpc but the error strikes again.) I may sleep on that for now and get back to it on the morrow with a clearer mind. Thanks again!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sounds like not all things were manually deleted

[Gamifly] Vincent avatar
[Gamifly] Vincent

isn’t there a way to ‘force’, i.e. continue even if the error strikes, to destroy as much as possible, then finish manually ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depends on the error(s), but usually not

[Gamifly] Vincent avatar
[Gamifly] Vincent

Hi @Andriy Knysh (Cloud Posse), still stuck on the same error…! I have no idea what resource is blocking the destroy : no sign of it in aws, I exported a full list of all resources… If you have any insight on what may cause the “Error: Error applying plan: 1 error occurred: * module.vpc.output.vpc_cidr_block: variable “default” is nil, but no error was reported” i’d be in debt !

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. go to the AWS console VPC and check all the subnets, route tables, elastic IPs. Delete them manually if any present
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. then do terraform taint -module=vpc aws_vpc.output
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

BTW, did you name your aws_vpc resource output? (seems strange)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Check if you have the resource with the name output and the variable with the name default (in the code). Share your code with me b/c it’s difficult to say anything without seeing it
[Gamifly] Vincent avatar
[Gamifly] Vincent

Thanks for your messages;

  1. Already done, but some of the resources here are not named ; as there is no way to get a creation_timestamp, I can’t be 100% sure that it had been created via terraform (the alternative is that this is my dev/prod…)
  2. The module root.vpc has no resources. There is nothing to taint.
  3. The only file I modified was main.tf:

# --------- Complete example to create Beanstalk ----------------
# https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment

provider "aws" {
  region  = "eu-west-1"
  profile = "terraform"
}

variable "max_availability_zones" {
  default = "2"
}

variable "namespace" {
  default = "eg"
}

variable "stage" {
  default = "dev"
}

variable "name" {
  default = "test"
}

variable "zone_id" {
  type        = "string"
  description = "Route53 Zone ID"
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.4.1"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  name       = "${var.name}"
  cidr_block = "10.0.0.0/16"
}

module "subnets" {
  source              = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.10.0"
  availability_zones  = ["${slice(data.aws_availability_zones.available.names, 0, var.max_availability_zones)}"]
  namespace           = "${var.namespace}"
  stage               = "${var.stage}"
  name                = "${var.name}"
  region              = "us-east-1"
  vpc_id              = "${module.vpc.vpc_id}"
  igw_id              = "${module.vpc.igw_id}"
  cidr_block          = "${module.vpc.vpc_cidr_block}"
  nat_gateway_enabled = "true"
}

module "elastic_beanstalk_application" {
  source      = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.1.6"
  namespace   = "${var.namespace}"
  stage       = "${var.stage}"
  name        = "${var.name}"
  description = "Test elastic_beanstalk_application"
}

module "elastic_beanstalk_environment" {
  source    = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.13.0"
  namespace = "${var.namespace}"
  stage     = "${var.stage}"
  name      = "${var.name}"
  zone_id   = "${var.zone_id}"
  app       = "${module.elastic_beanstalk_application.app_name}"

  instance_type           = "t2.small"
  autoscale_min           = 1
  autoscale_max           = 2
  updating_min_in_service = 0
  updating_max_batch      = 1
  loadbalancer_type       = "application"
  vpc_id                  = "${module.vpc.vpc_id}"
  public_subnets          = "${module.subnets.public_subnet_ids}"
  private_subnets         = "${module.subnets.private_subnet_ids}"
  security_groups         = ["${module.vpc.vpc_default_security_group_id}"]
  solution_stack_name     = "64bit Amazon Linux 2018.03 v2.12.12 running Docker 18.06.1-ce"
  keypair                 = ""

  env_vars = "${
    map(
      "ENV1", "Test1",
      "ENV2", "Test2",
      "ENV3", "Test3"
    )
  }"
}

cloudposse/terraform-aws-elastic-beanstalk-application

Terraform Module to define an ElasticBeanstalk Application - cloudposse/terraform-aws-elastic-beanstalk-application

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

[Gamifly] Vincent avatar
[Gamifly] Vincent

as per the rest of the code, it’s genuine cloudposse

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the output of terraform output?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the output of terraform plan?

[Gamifly] Vincent avatar
[Gamifly] Vincent
$ terraform output
The state file either has no outputs defined, or all the defined
outputs are empty. Please define an output in your configuration
with the `output` keyword and run `terraform refresh` for it to
become available. If you are using interpolation, please verify
the interpolated value is not empty. You can use the 
`terraform console` command to assist.
[Gamifly] Vincent avatar
[Gamifly] Vincent

…and the plan

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the console output (not the JSON file) when you run terraform plan?

[Gamifly] Vincent avatar
[Gamifly] Vincent

t plan -out plan

[Gamifly] Vincent avatar
[Gamifly] Vincent

t plan > plan

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the output from terraform destroy? (without saying yes, just the destroy plan)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a few more things:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Update these vars:

variable "namespace" {
  default = "eg"
}

variable "stage" {
  default = "dev"
}

variable "name" {
  default = "test"
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are just examples, you need to specify your own namespace, stage and name - otherwise there could be name conflict with other resources created by using the same example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. In your code, you use diff regions
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region  = "eu-west-1"
  profile = "terraform"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and for the subnets

region              = "us-east-1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

everything should be in one region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make a var region and use it everywhere for all modules and providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@[Gamifly] Vincent once you fix those two issues, run terraform plan/apply again

[Gamifly] Vincent avatar
[Gamifly] Vincent

[Sorry, was on phone]:

  1. As per the example vars, I added those to main.tf; I override them via terraform.tfvars
  2. I did not see that about the subnet region, thanks; I have a doubt that it is taken into account, as I always get prompted for a region selection whenever I run a command
  3. Output:

$ t destroy
provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: eu-west-1

data.aws_iam_policy_document.service: Refreshing state…
data.aws_elb_service_account.main: Refreshing state…
data.aws_region.default: Refreshing state…
data.aws_availability_zones.available: Refreshing state…
data.aws_iam_policy_document.ec2: Refreshing state…
data.aws_iam_policy_document.default: Refreshing state…
data.aws_iam_policy_document.elb_logs: Refreshing state…
data.aws_availability_zones.available: Refreshing state…

[Gamifly] Vincent avatar
[Gamifly] Vincent

(I just edited 1. , first answer was wrong)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

fix region (the same region for all resources and providers), update namespace/stage/name, run terraform plan/apply again, and let me know the result

[Gamifly] Vincent avatar
[Gamifly] Vincent

Back to the beginning and why I first came here !

Error: Error applying plan:

1 error occurred:
        * module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'



Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
[Gamifly] Vincent avatar
[Gamifly] Vincent

@Andriy Knysh (Cloud Posse) I re-run apply

[Gamifly] Vincent avatar
[Gamifly] Vincent
Error: Error applying plan:

1 error occurred:
        * module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: 1 error occurred:
        * aws_elastic_beanstalk_application.default: InvalidParameterValue: Application gamelive-dev-gamelive-test-dev already exists.
        status code: 400, request id: 834e0b6a-7c01-4bab-bc02-3ab73607f5aa
[Gamifly] Vincent avatar
[Gamifly] Vincent
$ t destroy -target=aws_elastic_beanstalk_application.default
null_resource.default: Refreshing state... (ID: 4362035586681674298)
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes


Destroy complete! Resources: 0 destroyed.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@[Gamifly] Vincent send me your complete code (the one you ran), I’ll take a look when have time

[Gamifly] Vincent avatar
[Gamifly] Vincent

Ok ; I actually used that as a main: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete The only difference is that I added variables to override in tfvar

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

[Gamifly] Vincent avatar
[Gamifly] Vincent

Thanks for your time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you change the region to be the same for all modules and providers?

[Gamifly] Vincent avatar
[Gamifly] Vincent

I think so

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

re: aws_elastic_beanstalk_application.default: InvalidParameterValue: Application gamelive-dev-gamelive-test-dev already exists - it says the application with that name already exists

[Gamifly] Vincent avatar
[Gamifly] Vincent
[Gamifly] Vincent avatar
[Gamifly] Vincent
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s var.region?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and why don’t you use it in

provider "aws" {
  region  = "eu-west-1"
  profile = "terraform"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as well

[Gamifly] Vincent avatar
[Gamifly] Vincent

I didn’t do that update after you pointed out that the subnets were using another region, my bad

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, all modules and providers have to use just one region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make a var region and default it to the region you want

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use the var for subnets module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and use it for the provider as well

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region  = "${var.region}"
  profile = "terraform"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make sure if you have region anywhere else, use the same var
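
Putting that together, the region pieces of the example end up looking roughly like this (eu-west-1 assumed, per the provider block above):

variable "region" {
  default = "eu-west-1"
}

provider "aws" {
  region  = "${var.region}"
  profile = "terraform"
}

module "subnets" {
  # ...other inputs unchanged
  region = "${var.region}"
}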

[Gamifly] Vincent avatar
[Gamifly] Vincent

yes, I’ve just done it, thx

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Did you run plan and apply after you fixed the region?

[Gamifly] Vincent avatar
[Gamifly] Vincent

yes

[Gamifly] Vincent avatar
[Gamifly] Vincent

I got back to the very first error I had:

Error: Error applying plan:

1 error occurred:
        * module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'



Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

What’s the error? That the application already exists?

[Gamifly] Vincent avatar
[Gamifly] Vincent

When I rerun, the error is that the application already exists

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@[Gamifly] Vincent i just ran the example here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf without any modification except I changed the namespace to something unique

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform apply without the errors you saw

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(BTW, if you use the same namespace, stage and name, it will fail because the bucket eg-dev-test already exists in AWS and all bucket names are global)

[Gamifly] Vincent avatar
[Gamifly] Vincent

Ok, sounds like I was out of luck! Have you tried to destroy and apply again? If the error was an already existing bucket, wouldn't the error be related to S3 buckets? Thanks again for your help

[Gamifly] Vincent avatar
[Gamifly] Vincent

Ok, I tried:

  • add a S3 backend,
  • change namespace / name

I still get the same error:

module.subnets.aws_network_acl.private: Creation complete after 1s (ID: acl-0889fc9cefc38c3d8)
module.subnets.aws_network_acl.public: Creation complete after 2s (ID: acl-000584f9e2f8afb4c)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Still creating... (10s elapsed)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Still creating... (20s elapsed)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Still creating... (30s elapsed)
module.elastic_beanstalk_application.aws_elastic_beanstalk_application.default: Creation complete after 30s

Error: Error applying plan:

1 error occurred:
        * module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'

[Gamifly] Vincent avatar
[Gamifly] Vincent

I'm getting to the point of thinking that the error can't just be on my side

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yes I applied and destroyed a few times using the example (but changed the namespace)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Never got the error you see

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Something is wrong with your state file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Can you show the exact code you used (100% of it) so I can take a look

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I also suggest throwing away the state file, starting from scratch, and seeing what happens

[Gamifly] Vincent avatar
[Gamifly] Vincent

By using an S3 backend, I started with a fresh state file; I also moved my main.tf and tfvars into another folder to start the whole project from scratch except the config, maybe this will make a difference.

[Gamifly] Vincent avatar
[Gamifly] Vincent

Here is the full project link: https://www.dropbox.com/s/tkhq2f10fvtc3rt/terraform.zip?dl=0 Last thing I did:

  • destroy everything
  • change all the variables
  • change the region
  • apply -> same error
[Gamifly] Vincent avatar
[Gamifly] Vincent
Error: Error applying plan:

1 error occurred:
        * module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name'
[Gamifly] Vincent avatar
[Gamifly] Vincent

Could this be related to using tf on Windows 10?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

What terraform version and AWS provider version are you using?

[Gamifly] Vincent avatar
[Gamifly] Vincent
$ tf -v
Terraform v0.11.14
+ provider.aws v2.12.0
+ provider.null v2.1.2
[Gamifly] Vincent avatar
[Gamifly] Vincent

Hey aknysh, any new ideas on the subject?!

[Gamifly] Vincent avatar
[Gamifly] Vincent

I have given admin access to the user…and it works.

[Gamifly] Vincent avatar
[Gamifly] Vincent

The error here is very misleading; I had created the user step by step, adding each required permission one by one

[Gamifly] Vincent avatar
[Gamifly] Vincent

@Andriy Knysh (Cloud Posse) just to be sure you’ve seen the last messages

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@[Gamifly] Vincent re: I have given admin access to the user…and it works.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what other errors do you see?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the IAM user under which you provision the resources needs to have all the required permissions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we usually give such user an admin access

[Gamifly] Vincent avatar
[Gamifly] Vincent

At first I granted only the required permissions, by re-running apply until it broke and asked for the next missing permission.

[Gamifly] Vincent avatar
[Gamifly] Vincent

There is no other error; the misleading one I mentioned is the one we were trying to solve

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you give the user the admin permissions, and everything else regarding users/permissions is correct, it should not ask for any other permissions and apply should complete

[Gamifly] Vincent avatar
[Gamifly] Vincent

That is my point: something was wrong with the permissions before I granted admin rights, but the error was module.elastic_beanstalk_application.output.app_name: Resource 'aws_elastic_beanstalk_application.default' does not have attribute 'name' for variable 'aws_elastic_beanstalk_application.default.name' instead of the usual permission error

[Gamifly] Vincent avatar
[Gamifly] Vincent

I'm not sure I want to give admin rights to a user that should only occasionally modify the architecture ^^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, then you have to find out all the required permissions the user will need for all the services you deploy (which may not be an easy task)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and once you add new AWS resources to deploy, you’ll have to find and specify new permissions again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that is not scalable

Tony Thayer avatar
Tony Thayer

Does anyone have any advice on the elasticsearch module?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Tony Thayer avatar
Tony Thayer

Yes, that’s the one I’m using. It builds the cluster just fine but I’m unable to access the consoles it creates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can’t, at least from the internet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it creates the cluster in a VPC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the cluster is not publicly accessible

Tony Thayer avatar
Tony Thayer

ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a few ways of accessing it:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Add a bastion server to the VPC
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. We usually deploy an identity-aware proxy (IAP) in a kubernetes cluster, which allows us to access the Kibana (which is also deployed with the cluster)
Tony Thayer avatar
Tony Thayer

ok, thanks for the advice.

2019-06-11

oscarsullivan_old avatar
oscarsullivan_old

How do I get the SG id sg-... with a data resource to later use in another SG's ingress rule? :sweat_smile: I've tried both aws_security_group (.arn) and aws_security_groups (.ids[0]) but no luck.

Invalid id: "16" (expecting "sg-...")
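For reference, a minimal sketch of looking up an existing SG's id with the aws_security_group data source (the filter value is an illustrative group name, not from the thread):

data "aws_security_group" "selected" {
  filter {
    name   = "group-name"
    values = ["my-existing-sg"] # illustrative
  }
}

# data.aws_security_group.selected.id yields the "sg-..." value to use in the
# other security group's ingress rule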
oscarsullivan_old avatar
oscarsullivan_old

Uhh. Scratch that. Something else going on there

johncblandii avatar
johncblandii

If I have this correct, *-dev-exec from ecs-web-app is supposed to be the role we change to enable SSM read/write, correct?

If so, it seems we cannot access it without it being an output. I’ll PR changes if needed, but I wanted to verify this before doing so. Thoughts?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for Atlantis

johncblandii avatar
johncblandii

yeah, was looking at it, but i don’t see reference to those params

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrmmm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You don’t see reference to module.web_app.task_role_name?

johncblandii avatar
johncblandii

and i did use the task_role_name

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(which params…)

johncblandii avatar
johncblandii

that didn’t work, though, since that’s not the exec one

johncblandii avatar
johncblandii

we’re using terraform-aws-ecs-alb-service-task

johncblandii avatar
johncblandii
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you sure you want that phase?

johncblandii avatar
johncblandii

?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is this something your container needs to do?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. read from SSM

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

with chamber

johncblandii avatar
johncblandii

using task_secrets and it throws access denied

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, haven’t tried that yet

johncblandii avatar
johncblandii

ah, forgot you have chamber in there

johncblandii avatar
johncblandii

Fetching secret data from SSM Parameter Store in us-west-2: AccessDeniedException: User: arn:aws:sts::496386341798:assumed-role/event-horizon-replicator-dev-exec/654fb1ebeef7484c867a885e23ae82a0 is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:..........

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you might need to get back to us on that one!

johncblandii avatar
johncblandii

johncblandii avatar
johncblandii

ok…i’ll tie things in then get back to you

1
johncblandii avatar
johncblandii

@Erik Osterman (Cloud Posse) here’s the working version. Basically we need these permissions added to the exec role. I’ll get around to PR’ing the exposure of those so we don’t have to manually recreate it.

data "aws_iam_policy_document" "replicator_ssm_exec" {
  statement {
    effect    = "Allow"
    resources = ["*"]

    actions = [
      "ssm:GetParameters",
      "secretsmanager:GetSecretValue",
      "kms:Decrypt",
    ]
  }
}

resource "aws_iam_role_policy" "replicator_ecs_exec_ssm" {
  name   = "${local.application_name_full}-ssm-policy"
  policy = "${data.aws_iam_policy_document.replicator_ssm_exec.json}"

  role = "${local.application_name}-replicator-dev-exec"
}
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Awesome! Glad you figured it out. Lookforward to the PR

1
Bogdan avatar

I’m using the github.com/cloudposse/terraform-aws-codebuild and get the following error when setting cache_enabled = false and/or cache_bucket_suffix_enabled = false:

* module.name.local.cache: local.cache: key "0" does not exist in map local.cache_def in:

${local.cache_def[var.cache_enabled]}

cc: @Erik Osterman (Cloud Posse) @Igor Rodionov @Andriy Knysh (Cloud Posse) @jamie

jamie avatar

I wrote that bit, ill take a look

jamie avatar

Put quotes around the "true" or "false"

1
jamie avatar

and try again

jamie avatar

it's a terraform pain. true and false are binary, but "true" and "false" are strings that terraform converts to binary

jamie avatar

and in the cache_def map, it uses the string version
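A minimal sketch of the workaround, i.e. passing the flags as quoted strings (the source ref and the namespace/stage/name inputs are placeholders for whatever your module call already uses):

module "build" {
  source                      = "git::https://github.com/cloudposse/terraform-aws-codebuild.git?ref=master" # pin to the ref you already use
  namespace                   = "eg"
  stage                       = "dev"
  name                        = "app"
  cache_enabled               = "false"
  cache_bucket_suffix_enabled = "false"
}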

jamie avatar

I can probably make that more stable in the future, especially with 0.12

jamie avatar

@Bogdan

Bogdan avatar

@jamie it worked! I can also help with a PR if you’re too busy

Bogdan avatar

thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we’ll be slowly fixing the "true"/"false" issue as we upgrade modules to 0.12)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the latest terraform-null-label now supports boolean

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

130 modules to go

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Tony Thayer avatar
Tony Thayer

let alone having to have both versions of terraform installed next to each other depending on the modules you’re using

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

speaking of which, have you seen how we handle that in geodesic?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can write use terraform 0.12 in the .envrc for a given project

1
Tony Thayer avatar
Tony Thayer

oh thats nice

jamie avatar

@Bogdan theres an open PR with the fix for it in there https://github.com/cloudposse/terraform-aws-codebuild/pull/43

Label module update + Readme example + descriptions for SSM parameter store usage by Jamie-BitFlight · Pull Request #43 · cloudposse/terraform-aws-codebuild

What: Updates to allow module.label.context to be passed in to the module. Example to show how to address parameter store Why Because someone asked about it in the slack channel

Bogdan avatar

Cool! I see Erik and Andriy had a look

jamie avatar

Which I totally don’t remember doing

cabrinha avatar
cabrinha

anyone have an AWS AppMesh terraform module they’re working on?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no, but LMK if you find something

cabrinha avatar
cabrinha

I’ve been getting super confused trying to translate https://github.com/aws/aws-app-mesh-examples/blob/master/examples/apps/colorapp/ecs into Terraform

aws/aws-app-mesh-examples

AWS App Mesh is a service mesh that you can use with your microservices to manage service to service communication. - aws/aws-app-mesh-examples

jamie avatar

I can translate that to terraform for ya. Have you started yet?

jamie avatar

@cabrinha @Erik Osterman (Cloud Posse) I’ve started on the appmesh module https://github.com/bitflight-public/terraform-aws-app-mesh

bitflight-public/terraform-aws-app-mesh

Terraform module for creating the app mesh resources - bitflight-public/terraform-aws-app-mesh

cabrinha avatar
cabrinha

@jamie wow dude thanks a lot!

cabrinha avatar
cabrinha

I’m going to test this out.

jamie avatar

its not finished

jamie avatar

But you can look through it

jamie avatar

its missing the resources for the app mesh virtual services

cabrinha avatar
cabrinha

There is a lot I still don’t understand about AppMesh

cabrinha avatar
cabrinha

namespace, stage, I hope we can make these values configurable or have the ability to leave them blank

jamie avatar

and, if you keep waiting I’ll have the example folder create an ecs cluster, create the services in it with the envoy sidecars and the xray sidecars and the cloudwatch logging, and the rest of that colorteller app

cabrinha avatar
cabrinha
add dns_name variable to allow control of CNAME by cabrinha · Pull Request #25 · cloudposse/terraform-aws-efs

The creation of the EFS volume&#39;s CNAME is currently out of user&#39;s control. This change proposes the option to allow users to set their own DNS name for the volume. If no name is set, fallba…

cabrinha avatar
cabrinha

@jamie you’re a saint

jamie avatar

Yeah theres already an override for the naming

jamie avatar
variable "mesh_name_override" {
  description = "To provide a custom name to the aws_appmesh_mesh resource, by default it is named by the label module."
  default     = ""
}
jamie avatar

So yeah. You can’t leave the label name blank, but you can override it

jamie avatar

Anyway… I have to stop for now on making this. I have a Dockerfile I need to update for a client

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) I am planning to add support for 0.12 to tfmask if it's ok with you guys

Julio Tain Sueiras avatar
Julio Tain Sueiras

to be honest, I wish there was a way to auto-mark vault data sources as sensitive

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Julio Tain Sueiras, your contribution is welcome

2019-06-12

antonbabenko avatar
antonbabenko

https://github.com/cloudposse/terraform-aws-ecs-container-definition - I believe this is a blocker for many who want to use Terraform 0.12, or is it working with 0.12 already?

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

J avatar

how do I get tfmask installed / setup as part of a pipeline?

antonbabenko avatar
antonbabenko

https://github.com/cloudposse/terraform-aws-ecs-container-definition/pull/36 - @Igor Rodionov you are one of the contributors there who is online. Does it look good to you to get this merged? I also ran all the examples with Terraform 0.12 and they work as expected.

Igor Rodionov avatar
Igor Rodionov

yes, but I am not involved in the migration to terraform 0.12 right now

antonbabenko avatar
antonbabenko

This is the only change needed to get that module to work with both 0.11 and 0.12. The rest are cosmetic changes which we can do later.

1
Igor Rodionov avatar
Igor Rodionov

done

Igor Rodionov avatar
Igor Rodionov

release created 0.15.0

antonbabenko avatar
antonbabenko

Thanks! I really don’t like to maintain my own forks

1
3
Bogdan avatar

if only that would always be possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
removed provider configuration by svenwb · Pull Request #55 · cloudposse/terraform-aws-dynamic-subnets

What: removed provider block. Why: While using this module a terraform plan always requested to specify the region, although the variable region is set in the module. provider "aws" { assum…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is it true that to this day you still can't pin providers in modules without supplying all parameters of the provider?!?!?

loren avatar
Provider configuration not inherited from parent module · Issue #56 · cloudposse/terraform-aws-dynamic-subnets

While using this module a terraform plan always requested to specify the region, although the variable region is set in the parent module. Example: provider "aws" { assume_role { role_arn =…

loren avatar

or try this: terraform plan -var <region>?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks! Will see what he says.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) you didn’t run into this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not with dynamic subnets

jamie avatar

I had this issue when running the examples on my computer

jamie avatar

prompting for region every time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I guess since we always set AWS_REGION we don’t notice it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

deployed atlantis many times using the subnet module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/ecs/vpc.tf

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jamie @loren what are your thoughts on provider pinning at the module level?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve been plagued by constant regressions with every aws provider release

jamie avatar

its a good idea

jamie avatar

I have fixed the issue with it in my pull request

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sec

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

PR#54?

1
loren avatar

I always pin the explicit version for the aws provider in my root module, and test updates there. I don’t worry about it in the module, I’m fine if a known min version is documented in the readme

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Erik Osterman (Cloud Posse) just tested, if you define a provider w/o region in a low-level module, like this

provider "aws" {
  version = "~> 2.12.0"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then define a provider in top-level module that uses the low-level module

provider "aws" {
  region = "us-east-1"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will ask for region all the time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(the top-level region does not apply, they don’t merge)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you try adding region, default it to "", then see if it uses the AWS_REGION env

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if that’s the case, maybe we use that pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that way it works both ways
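A sketch of the pattern being proposed; whether an empty region really falls through to the AWS_REGION environment variable is exactly what is being tested here:

variable "region" {
  description = "AWS region; left empty so the provider falls back to AWS_REGION/AWS_DEFAULT_REGION (to be verified)"
  default     = ""
}

provider "aws" {
  region  = "${var.region}"
  version = "~> 2.12.0"
}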

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

will try now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) did this work?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@loren - the fix is to pass providers explicitly, which is a new feature in 0.11.14

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Modules - 0.11 Configuration Language - Terraform by HashiCorp

Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure. For more information on modules, see the dedicated modules section.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.3.4"
  providers  = {
    aws = "aws"
  }
  namespace  = "${local.namespace}"
  stage      = "${local.stage}"
  name       = "${local.name}"
  cidr_block = "172.16.0.0/16"
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jamie heads up

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Provider configuration not inherited from parent module · Issue #56 · cloudposse/terraform-aws-dynamic-subnets

While using this module a terraform plan always requested to specify the region, although the variable region is set in the parent module. Example: provider "aws" { assume_role { role_arn =…

loren avatar

Nice!

jamie avatar

How do you want the module changed? Remove the provider, or keep the provider?

johncblandii avatar
johncblandii

I think I ran the right make commands for docs, but let me know of any changes required: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/27

Add exec outputs by johncblandii · Pull Request #27 · cloudposse/terraform-aws-ecs-alb-service-task

The task exec outputs are needed to add to the policy for SSM secrets usage and likely other things.

johncblandii avatar
johncblandii

the example was broken too. it runs now, though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Lgtm

johncblandii avatar
johncblandii

pushed lint fixes

johncblandii avatar
johncblandii

bump @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cut 0.12.0

johncblandii avatar
johncblandii

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Public #office-hours starting now! Join us on Zoom if you have any questions. https://zoom.us/j/684901853

Blaise Pabon avatar
Blaise Pabon

Thanks for the office hours, that was very helpful!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great you stopped by @Blaise Pabon!

1
jose.amengual avatar
jose.amengual

Hi I was using : https://github.com/cloudposse/terraform-aws-ecs-container-definition and I did not see a way to use docker labels, Is that possible?

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

jamie avatar

Currently that is true.

jose.amengual avatar
jose.amengual

ok, I do not see a reason not to have it

jose.amengual avatar
jose.amengual

I mean, adding it will not break anything

jose.amengual avatar
jose.amengual

Can someone explain to me how this module works? I wanted to send a PR to add the Docker labels but I don't understand how the json is being rendered with replace, thanks

jamie avatar

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) Did one of you guys write this module?

jose.amengual avatar
jose.amengual

I was reading last night and I seem to understand now

jose.amengual avatar
jose.amengual

I could just add :

jose.amengual avatar
jose.amengual
json_with_stop_timeout       = "${replace(local.json_with_memory_reservation, "/\"stop_timeout_sentinel_value\"/", local.encoded_stop_timeout)}"
json_with_docker_labels      = "${replace(local.json_with_stop_timeout, "/\"docker_labels_sentinel_value\"/", local.encoded_docker_labels)}"
jose.amengual avatar
jose.amengual

plus the local value

jose.amengual avatar
jose.amengual

I guess

jose.amengual avatar
jose.amengual

like:

jose.amengual avatar
jose.amengual
encoded_docker_labels = "${jsonencode(local.docker_labels)}"
marc avatar

I have a project where someone ran terraform 0.12 by mistake; since it was a super simple repo it just worked. Does anyone know if there is a way to make the state file compatible with 0.11.14 again?

loren avatar

Delete the state file, and import the resources using tf 0.11?

marc avatar

mmm, that might work

marc avatar

I don’t think so, but figured I’d ask

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you’re using a versioned bucket, you might be able to just pull the previous .tfstate file

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and restore it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(our terraform-aws-tfstate-backend does versioning)
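A rough sketch of pulling an older state version back out of a versioned S3 backend bucket (bucket, key, and version id are placeholders):

# list the versions of the state object
aws s3api list-object-versions --bucket my-tfstate-bucket --prefix path/to/terraform.tfstate

# download the last-known-good 0.11 version
aws s3api get-object --bucket my-tfstate-bucket --key path/to/terraform.tfstate --version-id <VERSION_ID> terraform.tfstate

# put it back in place as the current state
aws s3 cp terraform.tfstate s3://my-tfstate-bucket/path/to/terraform.tfstate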

2019-06-13

David Nolan avatar
David Nolan

And then add the terraform version restrictions into your project config to prevent another oops before you’re ready.

1
loren avatar
terraform {
  required_version = "0.11.14"

  backend "s3" {}
}
David Nolan avatar
David Nolan

I’ve been using required_version = "<0.12.0, >= 0.11.11" (or pick some preferred minimum for you)

1
loren avatar

i pin exact versions for terraform in my root config because state is not backwards compatible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ah, interesting technique.

mrwacky avatar
mrwacky

Same. we were pinning to 0.11.x, then we ended up with three different x in our various Terraform roles. I just pinned them all to 0.11.14 this week

1
loren avatar

so if someone runs apply with tf 0.11.14, that’s it, everyone has to use that version

Igor avatar

^^ Is that true? How do you upgrade the state files then when you migrate to newer versions, specifically 0.12?

loren avatar

run apply with terraform 0.12, if it succeeds, your state now requires tf 0.12. if it fails, your state is not updated

Nikola Velkovski avatar
Nikola Velkovski

https://github.com/tfutils/tfenv seems to work quite well

tfutils/tfenv

Terraform version manager. Contribute to tfutils/tfenv development by creating an account on GitHub.

antonbabenko avatar
antonbabenko

yes, have been using it for almost a year

Nikola Velkovski avatar
Nikola Velkovski

damn, please share these goodies more often

Nikola Velkovski avatar
Nikola Velkovski

I had to use this because the fmt is broken in 0.12 and the ci was complaining about my commits ( yes I’ve been spoiled by the fmt ) so I had to revert to 0.11 for work projects

antonbabenko avatar
antonbabenko

You should just attend my meetups

1
Nikola Velkovski avatar
Nikola Velkovski

Hasen Ahmad avatar
Hasen Ahmad

hello all! I’m new here but have been using the sweetops modules in terraform for a little while but have run into an issue. I am trying to use the jenkins module that creates an beanstalk app etc. The issue I am running into is beanstalk has this error Creating Load Balancer listener failed Reason: An SSL policy must be specified for HTTPS listeners. I made a key pair for the jenkins module so im not sure what needs to happen?

2019-06-14

Meb avatar

Any idea for the error

Initializing modules...
- module.eg_prod_bastion_label
Error downloading modules: Error loading modules: module eg_prod_bastion_label: Error parsing .terraform/modules/3b7de6adc81422f0cdde31b2ae8597c0/main.tf: At 14:25: Unknown token: 14:25 IDENT var.enabled

using https://github.com/cloudposse/terraform-null-label The example in copy & paste..

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Meb avatar

Had a doubt & found it. Master is 0.12 terraform so I had to force it to use 0.11.1

1
jamie avatar

Yes, @Erik Osterman (Cloud Posse) went with the convention that seems to be most popular: using master for the current latest, and branch 0.11/master for the latest 0.11 terraform

1
jamie avatar

But pinning it to a release tag is always the best option.

Meb avatar

this is gonna be fun

J avatar

does anyone know how to get an ip address from an address_prefix in terraform and then pass that to ansible ?

jamie avatar

Did you solve this issue?

jamie avatar

Do you still need help with this?

Meb avatar

terraform output + jq > ansible?

1
loren avatar

TIL terraform output takes an output name as an argument, for when you don’t want all the outputs
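For example, assuming a hypothetical list output named public_ips, something like this gets it into a file ansible can consume:

terraform output -json | jq -r '.public_ips.value[]' > inventory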

jose.amengual avatar
jose.amengual

I created a simple codedeploy module for my Fargate task and everything went well, but I noticed that the target groups were switched, so now terraform wants to correct the change:

jose.amengual avatar
jose.amengual
Terraform will perform the following actions:

  ~ module.alb.aws_lb_listener.https
      default_action.0.target_group_arn: "arn:aws:elasticloadbalancing:us-east-1:99999999999:targetgroup/app-green/6cf4c676cb238179" => "arn:aws:elasticloadbalancing:us-east-1:99999999999:targetgroup/app-default/fd2cdc38bdf07078"

jose.amengual avatar
jose.amengual

where they use a local-exec provisioner to do the code deploy part, which I found weird, but that is me…

jose.amengual avatar
jose.amengual

what do you guys recommend to "solve" this? Maybe it does not need solving and I should just deal with it, since at some point the arn will go back to the original

jose.amengual avatar
jose.amengual

maybe I could run another TF that accesses the same state, searches using data resources, and switches the target group arns…

jose.amengual avatar
jose.amengual

any ideas are welcome

2019-06-15

SweetOps avatar
SweetOps
06:01:13 PM

Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.

2019-06-16

jaydip189 avatar
jaydip189

I want to create nodejs elastic beanstalk resources using terraform template

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all you need to do is to change the image from Docker to NodeJS
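Concretely, that means picking a Node.js solution stack for the environment; the exact version string below is a placeholder and should be looked up first (e.g. with aws elasticbeanstalk list-available-solution-stacks):

solution_stack_name = "64bit Amazon Linux 2018.03 vX.Y.Z running Node.js"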

2019-06-17

oscarsullivan_old avatar
oscarsullivan_old

Does anyone have a nice way of transforming a list output of aws_instances.public_ips into something suitable for CIDR_BLOCK in an SG?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You just need to add /32 to each one right?

oscarsullivan_old avatar
oscarsullivan_old
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.
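A sketch of the formatlist approach (the data source filter and the security group rule details are illustrative, not from the thread):

data "aws_instances" "public" {
  instance_tags = {
    "Role" = "public" # illustrative tag filter
  }
}

locals {
  public_cidrs = ["${formatlist("%s/32", data.aws_instances.public.public_ips)}"]
}

resource "aws_security_group_rule" "from_instances" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["${local.public_cidrs}"]
  security_group_id = "${var.security_group_id}" # assumed to exist elsewhere
}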

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Provider configuration not inherited from parent module · Issue #56 · cloudposse/terraform-aws-dynamic-subnets

While using this module a terraform plan always requested to specify the region, although the variable region is set in the parent module. Example: provider "aws" { assume_role { role_arn =…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I need some feedback here! Difficult choice and it feels like we are going against what most other modules do, but not sure if that’s reason enough to continue that practice.

Cloud Posse avatar
Cloud Posse
04:05:06 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
zoom https://zoom.us/j/684901853
slack #office-hours (our channel)

jose.amengual avatar
jose.amengual
05:47:14 PM

I should have posted this today instead of late Friday… anyhow, any thoughts on this?

I created a simple codedeploy module for my Fargate task and everything went well, but I noticed that the target groups were switched, so now terraform wants to correct the change:

Mads Hvelplund avatar
Mads Hvelplund

Hi #terraform

Let's say I have a list of keys and I want to transform it into a map, using something like:

locals {
  account = "287985351234"
  names   = ["alpha", "beta", "gamma"]
  region  = "eu-west-1"
}

data "null_data_source" "kms" {
  count = "${length(local.names)}"

  inputs = {
    key   = "${upper(local.names[count.index])}"
    value = "${format("arn:aws:kms:%s:%s:key/%s",local.region, local.account, local.names[count.index])}"
  }
}

output "debug" {
  value = "${data.null_data_source.kms.*.outputs}"
}

The output is a list of maps:

data.null_data_source.kms[2]: Refreshing state...
data.null_data_source.kms[0]: Refreshing state...
data.null_data_source.kms[1]: Refreshing state...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

debug = [
    {
        key = ALPHA,
        value = arn:aws:kms:eu-west-1:287985351234:key/alpha
    },
    {
        key = BETA,
        value = arn:aws:kms:eu-west-1:287985351234:key/beta
    },
    {
        key = GAMMA,
        value = arn:aws:kms:eu-west-1:287985351234:key/gamma
    }
]

Is there any way to make it one map with all the keys like this?:

    {
        ALPHA = "arn:aws:kms:eu-west-1:287985351234:key/alpha",
        BETA = "arn:aws:kms:eu-west-1:287985351234:key/beta",
        GAMMA = "arn:aws:kms:eu-west-1:287985351234:key/gamma"
    }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can create two lists and then use zipmap to create a map https://www.terraform.io/docs/configuration-0-11/interpolation.html#zipmap-list-list-

Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - 0.11 Configuration Language - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

Mads Hvelplund avatar
Mads Hvelplund

the problem is that I want to apply functions to values in the first list (the keys). You can’t do that with formatlist.

Mads Hvelplund avatar
Mads Hvelplund

(I think )

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just create two lists for names, one in lower-case, the other in upper-case

Mads Hvelplund avatar
Mads Hvelplund

umm, no.

Mads Hvelplund avatar
Mads Hvelplund

look, i can easily write out the last map. i want to reduce the amount of error prone cut and paste in my template.

Mads Hvelplund avatar
Mads Hvelplund

once there are twenty names in the list it gets very verbose and tedious to scroll through

loren avatar

what happens if you do:

data "null_data_source" "kms" {
  count = "${length(local.names)}"

  inputs = {
    "${upper(local.names[count.index])}" = "${format("arn:aws:kms:%s:%s:key/%s",local.region, local.account, local.names[count.index])}"
  }
}
loren avatar

oh right, again a list of maps

loren avatar

shoot i think i did something like this somewhere, using zipmap

loren avatar
loren
07:53:02 PM

goes hunting through old code

loren avatar

ahh yes, you can key into your null data source outputs…

locals {
  its_a_map = "${zipmap(data.null_data_source.kms.*.outputs.key, data.null_data_source.kms.*.outputs.value)}"
}
3
Mads Hvelplund avatar
Mads Hvelplund

Perfect! Thanks a million

2019-06-18

Bogdan avatar
resource "null_resource" "get-ssm-params" {
  provisioner "local-exec" {
    command = "aws ssm get-parameters-by-path --path ${local.ssm_vars_path}--region ${var.region} | jq '.[][].Name' | jq -s . > ${local.ssm_vars}"
  }
}

resource "null_resource" "convert-ssm-vars" {
  count = "${length(local.ssm_vars)}"

  triggers = {
    "name"      = "${element(local.ssm_vars, count.index)}"
    "valueFrom" = "${local.ssm_vars_path}${element(local.ssm_vars, count.index)}"
  }
}

guys, do you know a way to get the output of a command that ran in a null_resource local-exec provisioner into an HCL list? I tried the above but it didn't work

Mads Hvelplund avatar
Mads Hvelplund

do you need to read the ssm params that way? you could probably load the value with a data element, jsondecode it, and get the Name key out with normal interpolation syntax

Bogdan avatar

@Mads Hvelplund thanks. I’ll use the external provider and data source to execute my script
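A sketch of that approach with the external data source (local.ssm_vars_path and var.region mirror the snippet above; the comma join/split is needed because the external data source can only return a flat map of strings):

data "external" "ssm_param_names" {
  program = [
    "bash", "-c",
    "aws ssm get-parameters-by-path --path ${local.ssm_vars_path} --region ${var.region} | jq '{names: ([.Parameters[].Name] | join(\",\"))}'",
  ]
}

locals {
  ssm_param_name_list = "${split(",", data.external.ssm_param_names.result["names"])}"
}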

Bogdan avatar

sure, there’s a data source for a single ssm param

Bogdan avatar

but not if you have tens

Bogdan avatar

if there’s tens, one would like a different way to read them without also generating/creating tens of data sources

jamie avatar

Hey Bogdan, you may want to instead create a lambda function and call it, to return the list.

foqal avatar
foqal
02:33:21 PM

Helpful question stored to @Foqal by @loren:

Hi #terraform...
jamie avatar

In the same way #airship uses the lambda function to lookup docker labels.

Bogdan avatar

I hear you @jamie but when would the lambda update the task definition’s container definition to inject any new/updated params? i found that having more control on when the TD/CD gets a new revision (at apply-time) is safer

jamie avatar

@Bogdan oh, I meant you can extend the tf functionality by creating a lambda and immediately calling it, and using the results to change the task…

jamie avatar

If you haven't done this before, have a look at how the https://airship.tf ecs service module creates and calls a lambda function to get extra ecs task details

Airship Modules

Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon

jose.amengual avatar
jose.amengual

I need to create a beanstalk environment and I was going to use the cloudposse modules, but I need a NLB with static ips and SSL termination

jose.amengual avatar
jose.amengual

but I do not know whether Beanstalk multicontainer can use such a setup?

2019-06-19

jamie avatar

@Callum Robertson this channel will help you.

maarten avatar
maarten

Hi everyone, with https://github.com/hashicorp/terraform/issues/17179 still open (a full reshuffle / recreation of resources happens when the top item of the list gets removed), I'm wondering how other Terraformers are doing mass creation of AWS resources, like users.

for_each attribute for creating multiple resources based on a map · Issue #17179 · hashicorp/terraform

Hi, We are missing a better support for loops which would be based on keys, not on indexes. Below is an example of the problem we currently have and would like Terraform to address: We have a list …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We are adding users one by one by creating a separate module for each user, e.g. https://github.com/cloudposse/root.cloudposse.co/tree/master/conf/users/overrides

cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when we run it, the user is added to the required groups

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that way we can add/remove users from groups without touching other users and without changing group lists
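In other words, one module invocation per user, so adding or removing a user only touches that block; the module source and inputs here are illustrative, not the exact interface of the cloudposse user module:

module "user_alice" {
  source = "./user"
  name   = "alice"
  groups = ["developers"]
}

module "user_bob" {
  source = "./user"
  name   = "bob"
  groups = ["developers", "admins"]
}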

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did it only for users

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

maarten avatar
maarten

@cytopia

cytopia avatar
cytopia

@maarten thanks for starting the conversation.

The only downside I currently see with this approach is when it comes to deleting users.

If all users were managed by a single module defined in terraform.tfvars as a list of items (dynamically generated Terraform code with independent resource blocks, so you don't hit the issue of resource re-creation by changing a list in between), I could simply delete an entry, regenerate the code, and create a pull request to be reviewed. Upon merge and deploy, that user would be deleted.

With the multiple module approach I will probably have multiple directories and when deleting a directory and push the changes to git, nobody can actually delete that user, because you will have to terraform destroy in that directory before deleting it.

How do you handle that situation with the multi-module approach?

maarten avatar
maarten

@cytopia https://github.com/cloudposse/root.cloudposse.co/blob/master/conf/users/README.md is an actual single root module approach with multiple ‘user-module’ definitions. Users will be hardcoded in tf directly as there won’t be any re-usability possible.

cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

maarten avatar
maarten

Re: With the multiple module approach I will probably have multiple directories and when deleting a directory and push the changes to git, nobody can actually delete that user, because you will have to terraform destroy in that directory before deleting it.

This will always be the case with every root module you create, hence first destroy then delete.

cytopia avatar
cytopia

I am just thinking about how well that would work if you had something like Atlantis set up for auto-provisioning on pull requests.

In that case you would always need manual provisioning before a review has actually happened.

maarten avatar
maarten

when using a single root module approach for users, or IAM in general, you don't have this problem

cytopia avatar
cytopia

How would that eliminate the need for manual provisioning before review/merge for deleting users/roles?

As I’ve stated above, you would probably need to terraform destroy manually and locally, then remove the directory, git commit and push. Or am I mistaken here?

maarten avatar
maarten

I think you're not capturing what I meant. With the single root module approach with multiple users, one just adds or deletes a user and only terraform apply is used. The state of the IAM root module itself does not need to be destroyed to delete a user

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) 0.12.2 added yaml decoding and encoding, pretty nice (and allows a pretty nice but weird conversion from json to yaml)

2
Julio Tain Sueiras avatar
Julio Tain Sueiras

(terraform v0.12.2)

Abel Luck avatar
Abel Luck

I’m using the null-label in a reusable module. I want to pass the user-supplied tags as well as add a new one

Abel Luck avatar
Abel Luck

tags = "${var.tags}" is what I have now.. the user may or may not have supplied any tags

Abel Luck avatar
Abel Luck

what’s the proper way to add a tag there?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your top-level module, you can merge the tags supplied by the user with some hardcoded tags. Then send the resulting map to the label module
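A minimal 0.11-style sketch (the extra tag key/value is just an example):

module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master" # pin to the ref you already use
  namespace = "${var.namespace}"
  stage     = "${var.stage}"
  name      = "${var.name}"
  tags      = "${merge(var.tags, map("ManagedBy", "Terraform"))}"
}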

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

jaydip189 avatar
jaydip189

Variables not allowed

on <value for var.security_groups> line 1: (source code not available)

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) question for ya, do you guys use AzureDevops?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we (Cloud Posse) are mostly an AWS shop with a bit of GCP

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’m sure there are many people here who use Azure

Julio Tain Sueiras avatar
Julio Tain Sueiras

I mean AzureDevOps as in the CI/CD service

Julio Tain Sueiras avatar
Julio Tain Sueiras

so me (and my company) are going to release a terraform provider for AzureDevOps

Julio Tain Sueiras avatar
Julio Tain Sueiras

just need to word the license correctly

Julio Tain Sueiras avatar
Julio Tain Sueiras

(to avoid any issue)

Julio Tain Sueiras avatar
Julio Tain Sueiras

(my company as in the company I work in)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would follow the lead of what terraform-providers (official hashicorp org) uses

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s all MPL-2.0

Julio Tain Sueiras avatar
Julio Tain Sueiras

true

Julio Tain Sueiras avatar
Julio Tain Sueiras

P.S. I already implemented the following

Julio Tain Sueiras avatar
Julio Tain Sueiras
Resources: 
 azuredevops_project
 azuredevops_build_definition
 azuredevops_release_definition
 azuredevops_service_endpoint
 azuredevops_service_hook
 azuredevops_variable_group
 azuredevops_task_group

Data Sources:
 azuredevops_project
 azuredevops_service_endpoint
 azuredevops_source_repository
 azuredevops_workflow_task
 azuredevops_group
 azuredevops_user
 azuredevops_build_definition
 azuredevops_agent_queue
 azuredevops_task_group
 azuredevops_variable_group
 azuredevops_variable_groups
1
Julio Tain Sueiras avatar
Julio Tain Sueiras

btw with the new yamlencode function

Julio Tain Sueiras avatar
Julio Tain Sueiras

that means for the helm provider you can do something like this

Julio Tain Sueiras avatar
Julio Tain Sueiras
values = [
  yamlencode({
     test = {
        test2 = "hi"
     }
  })
]
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

#office-hours starting now! https://zoom.us/j/684901853

Have a demo of using Codefresh for ETL

Callum Robertson avatar
Callum Robertson

Thanks for the invite @jamie

Callum Robertson avatar
Callum Robertson

Hey all, this slack group looks fantastic, DevOps & Cloud engineer here from New Zealand. Getting big into the Hashicorp stack and from looking around, this place is a treasure chest of good ideas

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also have a look at our archives

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
SweetOps Slack Archive

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

1
Callum Robertson avatar
Callum Robertson

Thanks @Erik Osterman (Cloud Posse), appreciate that mate!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@endofcake is also in NZ and serious terraformer

Callum Robertson avatar
Callum Robertson

Awesome, thanks Erik. @endofcake you going to be at DevOps days?

endofcake avatar
endofcake

Hi @Callum Robertson , not sure yet. I went to the first conference and it was really good. Will have to think whether I can afford the second one though.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Welcome @Callum Robertson!

jaydip189 avatar
jaydip189

I'm getting an error with https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment: Variables not allowed

on <value for var.security_groups> line 1: (source code not available)

Variables may not be used here.

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

show the exact error. what TF version are you using?

Mithra avatar

Terraform v0.12.2

  • provider.aws v2.15.0
jaydip189 avatar
jaydip189

We are working on same project

Mithra avatar

Error: Variables not allowed

on <value for var.private_subnets> line 1: (source code not available)

Variables may not be used here.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Mithra avatar

Getting the below error:

var.private_subnets
  List of private subnets to place EC2 instances

  Enter a value: 1

Mithra avatar

Error: variable private_subnets should be type list, got number

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you using TF 0.12?

Mithra avatar

YES

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the modules are not converted to TF 0.12 yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on it now

Mithra avatar

okay

Mithra avatar

is it resolved yet ?

jaydip189 avatar
jaydip189

No, we have copied the terraform 0.11.3 binary.

2019-06-20

Ramesh avatar

Any experts here who have implemented the cloudposse Terraform module for Jenkins in AWS?

Ramesh avatar

Need help with it !

Stephen Lawrence avatar
Stephen Lawrence

Any possibility of getting someone's patch PR'd for the eks-workers module? https://github.com/cloudposse/terraform-aws-eks-workers/compare/cloudposse:master...fliphess:patch-1?expand=1

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Ramesh avatar

Any Terraform aws experts ?

Ramesh avatar

Will pay you guys for the cloudposse Jenkins setup

jose.amengual avatar
jose.amengual

simple pull request to allow for custom default target group port : https://github.com/cloudposse/terraform-aws-alb/pull/19

Adding default target group port to accomodate for a TG with a port other than 80 by jamengual · Pull Request #19 · cloudposse/terraform-aws-alb

This is to enable the option to specify the default target group port in the cases that the service does not listen on port 80.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ramesh are you seeing issues with jenkins? Have you seen the examples here https://github.com/cloudposse/terraform-aws-jenkins/tree/master/examples? (they were 100% tested, but it was some time ago)

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as of now, the examples will not work verbatim and will need small changes. For example, terraform-aws-dynamic-subnets has already been converted to TF 0.12, so pinning to master will not work; for TF 0.11 it needs to be pinned to 0.12.0, as in ref=tags/0.12.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ramesh let us know if you get stuck; we could help you get going or review the plan/apply errors if any

Ramesh avatar

Great, thanks Aknysh. Let me retry.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual will check the PR, thanks

jose.amengual avatar
jose.amengual

thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual please rebuild README

jose.amengual avatar
jose.amengual

ohhh sorry I will

jose.amengual avatar
jose.amengual

done

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-alb

Terraform module to provision a standard ALB for HTTP/HTTPS traffic - cloudposse/terraform-aws-alb

jose.amengual avatar
jose.amengual

amazing

jose.amengual avatar
jose.amengual

thanks

jose.amengual avatar
jose.amengual

what are your thoughts of creating a target group module ?

jose.amengual avatar
jose.amengual

we have some custom target groups, sometimes more than just HTTPS

jose.amengual avatar
jose.amengual

so I was thinking of doing something like the alb listener rules module, where you could create target groups

jose.amengual avatar
jose.amengual

but I’m not sure

jose.amengual avatar
jose.amengual

imagine doing blue/green, where can I create the target group for blue/green?

jose.amengual avatar
jose.amengual

that is the question I’m trying to answer

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sounds good

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-alb-ingress

Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress

jose.amengual avatar
jose.amengual

I’m using that one too

jose.amengual avatar
jose.amengual

so for blue/green I will use a different target group, attached to a different listener rule on a custom port other than 443, like 8080 or something

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

jose.amengual avatar
jose.amengual

in order to do that I need to call that module twice so I can pass the proper variables

jose.amengual avatar
jose.amengual

that is fine

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use var.port

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah yes, twice

jose.amengual avatar
jose.amengual

but the alb module will create the ALB default target group but not the blue/green TG

jose.amengual avatar
jose.amengual

so I have two options: either I create a module for custom target groups and add something like a no_target_groups flag to the alb module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it could be improved yes

jose.amengual avatar
jose.amengual

or I add the blue/green target group in my CodeDeploy module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t usually use the default TG after it gets created

jose.amengual avatar
jose.amengual

same here

jose.amengual avatar
jose.amengual

lol

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you see how to improve it, PRs always welcome

jose.amengual avatar
jose.amengual

so a target_group module could be useful then

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

jose.amengual avatar
jose.amengual

cool, I will work on that

jose.amengual avatar
jose.amengual

thanks for your help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure if it deserves to be a module since it will have just one resource (unless you add more functionality to it)

jose.amengual avatar
jose.amengual

hahahah true…. and plus it is a key part of an ALB

jose.amengual avatar
jose.amengual

they are married

jose.amengual avatar
jose.amengual

ALB + listener rule + TG

jose.amengual avatar
jose.amengual

so maybe inside of the alb module will be better

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are too many possible combinations here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe a separate TG module will be useful to not repeat many settings all the time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and it could be used from other modules, e.g. alb or alb-ingress (which supports external TG w/o creating a new one)
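For the blue/green case above, the raw pieces are small enough to sketch directly (hypothetical names; assumes an existing ALB and VPC, and that your alb module exposes the ALB ARN as an output):

resource "aws_lb_target_group" "green" {
  name     = "example-green"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"

  health_check {
    path = "/healthz"
  }
}

resource "aws_lb_listener" "green" {
  load_balancer_arn = "${module.alb.alb_arn}" # output name is an assumption
  port              = 8080
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.green.arn}"
  }
}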

jose.amengual avatar
jose.amengual

yes that was one of the problems I was thinking

jose.amengual avatar
jose.amengual

then I will have to use count on the resource creation to support custom TGs in the alb module, and it could get messy

2019-06-21

Meb avatar

anyone using LocalStack to test TF here? I mean at least for simple projects, as it doesn’t have 100% coverage

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I run it as a Docker container on my Mac

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

to test simple IAM roles, Lambdas, and DynamoDB structures
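For reference, pointing the AWS provider at LocalStack is mostly a matter of overriding endpoints; a sketch (the per-service ports shown match an older LocalStack default and may differ in your setup):

provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_force_path_style         = true

  # LocalStack endpoints (per-service ports in older releases)
  endpoints {
    s3       = "http://localhost:4572"
    dynamodb = "http://localhost:4569"
    lambda   = "http://localhost:4574"
    iam      = "http://localhost:4593"
  }
}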

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
jose.amengual avatar
jose.amengual

I was looking into that

jose.amengual avatar
jose.amengual
Terraform Infrastructure Design Patterns

In this post we’ll explore Terraform’s ability to grow with the size of your infrastructure through its slightly hidden metaprogramming capabilities.

jose.amengual avatar
jose.amengual

I just don’t like multiple files per layer

jose.amengual avatar
jose.amengual

layer/main.tf+outputs.tf+variables.tf

tamsky avatar


I just don’t like multiple files per layer

I :heart: multiple files per layer.

Layers facilitate these types of overrides, which are very powerful, imho: 1) replace an earlier layer file with an empty file 2) use the terraform *_override.tf filename feature to merge/replace final values into the config.
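For anyone who hasn’t used the override feature: Terraform merges any file named override.tf or *_override.tf over matching blocks in the same directory, so a later layer can replace a single attribute without copying the whole file. A minimal sketch with made-up names:

# main.tf (base layer)
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs"
  acl    = "private"
}

# logs_override.tf (dropped in by a later layer) - only the listed attributes are overridden
resource "aws_s3_bucket" "logs" {
  acl = "log-delivery-write"
}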

jose.amengual avatar
jose.amengual

How can you empty a whole file of SGs if they are being used?

jose.amengual avatar
jose.amengual

How do you know what an SG in that file belongs to?

jose.amengual avatar
jose.amengual

By the name of the resource?

tamsky avatar

You can try to delete the in-use SG manually in the AWS VPC web console – it should then complain and tell you all resources that are using the SG.

2019-06-22

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@antonbabenko got your issue on tfstate backend. I am afk, but will address it when back

1

2019-06-23

vishnu.shukla avatar
vishnu.shukla

Hi, can someone help me with using a connection block? Below is the error

vishnu.shukla avatar
vishnu.shukla

Error: Missing required argument

on main.tf line 26, in resource “aws_instance” “example”: 26: connection {

The argument “host” is required, but no definition was found.

vishnu.shukla avatar
vishnu.shukla

and here is my code

vishnu.shukla avatar
vishnu.shukla

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}"
}

resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.mykey.key_name}"

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "sudo /tmp/script.sh"
    ]
  }

  connection {
    user        = "${var.INSTANCE_USERNAME}"
    private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
  }
}

jamie avatar

@vishnu.shukla can you test by moving the connection block into the provisioner block?

jamie avatar
 provisioner "file" {
   source = "script.sh"
   destination = "/tmp/script.sh"

  connection {
   user = "${var.INSTANCE_USERNAME}"
   private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
   }
 }
 provisioner "remote-exec" {
   inline = [
   "chmod +x /tmp/script.sh",
   "sudo /tmp/script.sh"
   ]

  connection {
   user = "${var.INSTANCE_USERNAME}"
   private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
   }
 }
 }
jamie avatar

It is meant to be fine to use it in either, but it seems like yours isn’t inheriting the host address correctly.

jamie avatar

An alternative is to move the provisioners into a null_resource, and pass in the host address like, host = "${aws_instance.example.public_ip}"

loren avatar

I think this was a tf 0.12 change… It no longer has logic to automatically set the host attribute, you have to pass it in the provisioner now
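So on 0.12 the usual fix is simply to set host explicitly on the connection block; a sketch in 0.12 syntax:

resource "aws_instance" "example" {
  # ... ami, instance_type, key_name as before

  connection {
    type        = "ssh"
    host        = self.public_ip # no longer inferred automatically in 0.12
    user        = var.INSTANCE_USERNAME
    private_key = file(var.PATH_TO_PRIVATE_KEY)
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "sudo /tmp/script.sh",
    ]
  }
}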

2019-06-24

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Can't use this module with S3 bucket in different region · Issue #27 · cloudposse/terraform-aws-tfstate-backend

Why do you need to specify providers block in the module? I have env vars set to AWS_REGION=eu-west-1 and AWS_DEFAULT_REGION=eu-west-1 which makes it impossible for me to use this module when worki…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our plan (currently in action) is to continue rolling provider pinning. Please speak up if you have any insights.
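For context, version pinning in a root module is just a couple of lines (a minimal sketch; the exact constraints are up to you):

terraform {
  required_version = ">= 0.11.14"
}

provider "aws" {
  version = "~> 2.15"
  region  = "${var.region}"
}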

tobiaswi avatar
tobiaswi

Hey everyone, I’m banging my head against the wall on a certain piece of implementation logic with terraform: network modules to create subnets that consume maps or lists

tobiaswi avatar
tobiaswi

let’s assume I write a module to create a subnet and all routing and firewall resources that are required to be deployed on each and every subnet in the company

tobiaswi avatar
tobiaswi

when I call the module and pass a map (or list) with three CIDRs, everything is great at the start

tobiaswi avatar
tobiaswi

but what do you do when later on you want to delete subnet #2

tobiaswi avatar
tobiaswi

terraform looks at the indexes when iterating. Element #0 is subnet-#1, Element #1 is subnet-#2 etc.

tobiaswi avatar
tobiaswi

when I remove subnet-#2, terraform wants to delete elements #1 and #2 = deleting subnets #2 and #3, and then redeploy the previous subnet-#3 at the position of element #2

tobiaswi avatar
tobiaswi

that’s an issue because subnet-#3 didn’t change at all, just its index in the list. Any ideas how to tackle this?

Callum Robertson avatar
Callum Robertson

hey @tobiaswi, not sure if this is over simplifying what you’re asking

Callum Robertson avatar
Callum Robertson

but have you tried just targeting the resource that you’re wanting to destroy?

Callum Robertson avatar
Callum Robertson
Command: destroy - Terraform by HashiCorp

The terraform destroy command is used to destroy the Terraform-managed infrastructure.

Callum Robertson avatar
Callum Robertson

you can also specify the index used for each subnet if you didn’t want it to be dynamic

Callum Robertson avatar
Callum Robertson

can you share the code snippet of your config?

loren avatar

yeah, terraform doesn’t do that well (yet), https://github.com/hashicorp/terraform/issues/14275

Terraform changes a lot of resources when removing an element from the middle of a list · Issue #14275 · hashicorp/terraform

We have a lot of AWS Route53 zones which are setup in exactly the same way. As such, we are using count and a list variable to manage these. The code basically looks like this: variable "zone_…

foqal avatar
foqal
11:27:52 AM

Helpful question stored to <@Foqal> by @loren:

Hi can someone help me in using connection below is the error...
loren avatar

here’s the issue to track for the eventual solution, https://github.com/hashicorp/terraform/issues/17179

for_each attribute for creating multiple resources based on a map · Issue #17179 · hashicorp/terraform

Hi, We are missing a better support for loops which would be based on keys, not on indexes. Below is an example of the problem we currently have and would like Terraform to address: We have a list …
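Once resource-level for_each from that issue ships in a later 0.12 release, keying the subnets by name instead of list position avoids the reshuffle entirely; a hypothetical sketch:

variable "subnets" {
  type = map(string)
  default = {
    app  = "10.0.1.0/24"
    data = "10.0.2.0/24"
  }
}

resource "aws_subnet" "this" {
  for_each   = var.subnets
  vpc_id     = var.vpc_id
  cidr_block = each.value

  tags = {
    Name = each.key # removing a key destroys only that subnet
  }
}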

Ayo Bami avatar
Ayo Bami

HI, Is anyone familiar with this module https://github.com/terraform-aws-modules/terraform-aws-vpc

terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

Ayo Bami avatar
Ayo Bami

I am trying to get a VPN created to connect to our environment

Ayo Bami avatar
Ayo Bami

I am currently using that module for the VPC setup

Callum Robertson avatar
Callum Robertson

Hey Ayo

Callum Robertson avatar
Callum Robertson
terraform-aws-modules/terraform-aws-vpn-gateway

Terraform module which creates VPN gateway resources on AWS - terraform-aws-modules/terraform-aws-vpn-gateway

Callum Robertson avatar
Callum Robertson

Gives a great example of using this module in conjunction with what you’ve got

Callum Robertson avatar
Callum Robertson

@Ayo Bami ^

Ayo Bami avatar
Ayo Bami

oh nice, I had a quick look at it earlier, I just didn’t understand… How does my laptop connect to it?

Ayo Bami avatar
Ayo Bami
12:04:32 PM

@Callum Robertson Pretty sure that’s all I need, I just don’t understand the concept of how to connect to it and how that IP is generated

Callum Robertson avatar
Callum Robertson

are you creating a S2S VPN or a client VPN?

Ayo Bami avatar
Ayo Bami

client VPN ideally

Callum Robertson avatar
Callum Robertson

Might help you to conceptualise what you’re doing

Callum Robertson avatar
Callum Robertson
Getting Started with Client VPN - AWS Client VPN

The following tasks help you become familiar with Client VPN. In this tutorial, you will create a Client VPN endpoint that does the following:

Ayo Bami avatar
Ayo Bami

@Callum Robertson Thanks for your help, I was able to configure the VPN. I think it’s missing authorize-client-vpn-ingress, and I can’t seem to find that in the terraform documentation. Any idea how I can get authorize-client-vpn-ingress without creating it on the console?

1
Callum Robertson avatar
Callum Robertson

I personally haven’t played with client VPNs in terraform. If that automation isn’t currently possible with Terraform, try using a null resource with a local-exec provisioner

Callum Robertson avatar
Callum Robertson

Sorry, I’m suggesting you do this as it might be available in the AWS CLI
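A rough sketch of that workaround (the subcommand does exist in the AWS CLI; the endpoint reference and CIDR below are placeholders):

resource "null_resource" "client_vpn_ingress" {
  provisioner "local-exec" {
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.this.id} --target-network-cidr 10.0.0.0/16 --authorize-all-groups"
  }
}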

Callum Robertson avatar
Callum Robertson

Let me know how you get on mate

tobiaswi avatar
tobiaswi

@Callum Robertson terraform is run in CI/CD so destroy isn’t an option; a change has to occur in the variable that defines the input. What do you mean by a specific index? Got an example for that with count and a module? Sorry, I can’t share, it’s internal code, but it’s very generic, nothing especially fancy

tobiaswi avatar
tobiaswi

@loren yes, that’s exactly my issue. Thank you for the issues, I’ll be watching these

1
Cloud Posse avatar
Cloud Posse
04:02:56 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Jul 03, 2019 11:30AM.
Add it to your calendar
zoom https://zoom.us/j/684901853
slack #office-hours (our channel)

2019-06-25

antonbabenko avatar
antonbabenko
cycloidio/terracognita

Reads from existing Cloud Providers (reverse Terraform) and generates your infrastructure as code on Terraform configuration - cycloidio/terracognita

Bogdan avatar
Bogdan
09:52:56 AM

@antonbabenko did you know about https://github.com/dtan4/terraforming ? What’s different in terracognita?

cycloidio/terracognita

Reads from existing Cloud Providers (reverse Terraform) and generates your infrastructure as code on Terraform configuration - cycloidio/terracognita

dtan4/terraforming

Export existing AWS resources to Terraform style (tf, tfstate) - dtan4/terraforming

jose.amengual avatar
jose.amengual

It looks like dtan4 is not working on the project anymore

jose.amengual avatar
jose.amengual

so the future of terraforming is uncertain

jose.amengual avatar
jose.amengual

there is a bunch of PRs waiting

jose.amengual avatar
jose.amengual
20 Pull Requests not merged · Issue #423 · dtan4/terraforming

Howdy. I really appreciate the work done here, but there are now 20 PRs awaiting review, including some that appear to have bug fixes, expanded resources descriptions, and new AWS services support….

tamsky avatar

terraforming is a lovely tool – I hope it continues to stay useful/maintained

antonbabenko avatar
antonbabenko

terracognita scans the whole account, while terraforming creates resources one-by-one. Also, terracognita is very new. Time will tell.

David Nolan avatar
David Nolan

I’d argue that makes terracognita an antipattern. You almost never want a single monolithic terraform configuration for everything.

sarkis avatar

after 4-5 failed monolith terraform setups, 100% agree… although this could be an interesting starting point, generate tf code for all my resources and then i can regroup them into smaller pieces

2
Bogdan avatar

but how did you split the TF state file afterwards?

antonbabenko avatar
antonbabenko

terraform state mv
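In practice splitting a monolithic state looks roughly like this (paths and resource addresses are illustrative):

# pull the monolithic state to a local file
terraform state pull > monolith.tfstate

# move one module's resources into a fresh state file for the new root module
terraform state mv -state=monolith.tfstate -state-out=network/terraform.tfstate module.vpc module.vpc

# push the trimmed state back, and push network/terraform.tfstate from the new root module
terraform state push monolith.tfstate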

jose.amengual avatar
jose.amengual

I need help : so I was using

module "terraform_state_backend" {
  source     = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.7.0"
  name       = "${var.name}"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  attributes = "${var.attributes}"
  region     = "${var.region}"
}
jose.amengual avatar
jose.amengual

created the state and all that, added the provider config to my main.tf, and then I created another s3 bucket resource BUT with the same name, so now my state is in the same bucket as my ALB logs

jose.amengual avatar
jose.amengual

how can I move this out ?

jose.amengual avatar
jose.amengual

if I change the bucket name for the ALB logs it tries to delete the other bucket, but since it is not empty it can’t

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how many resources did you already create? can you just manually delete everything in the AWS console and start with new state?

jose.amengual avatar
jose.amengual

like 5 resources

jose.amengual avatar
jose.amengual

I can’t delete it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can’t delete it manually? or not allowed to delete it?

jose.amengual avatar
jose.amengual

you mean the resources ?

jose.amengual avatar
jose.amengual

that where created by terraform ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

jose.amengual avatar
jose.amengual

I’m asking if we already gave the static ips to the clients

jose.amengual avatar
jose.amengual

can I just create a new “state bucket” with the module and copy the content over?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can, but you will have to give it a different name, which, since you are using the module, will require giving it a different var.name or adding some attributes to create a unique name

jose.amengual avatar
jose.amengual

yes I was thinking to give it another attribute so the name is different

jose.amengual avatar
jose.amengual

I thought by leaving this

module "terraform_state_backend" {
  source     = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.7.0"
  name       = "${var.name}"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  attributes = "${var.attributes}"
  region     = "${var.region}"
}
jose.amengual avatar
jose.amengual

inside of my main.tf it will not try to delete the bucket

jose.amengual avatar
jose.amengual

but then I have this :

module "s3_bucket" {
  source                 = "git::https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.4.0"
  namespace              = "${var.namespace}"
  stage                  = "${var.stage}"
  name                   = "${var.name}"
  delimiter              = "${var.delimiter}"
  attributes             = "${var.attributes}"
  tags                   = "${var.tags}"
  region                 = "${var.region}"
  policy                 = "${data.aws_iam_policy_document.default.json}"
  versioning_enabled     = "true"
  lifecycle_rule_enabled = "false"
  sse_algorithm          = "aws:kms"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "terraform_state_backend" {
  source     = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.7.0"
  name       = "${var.name}"
  namespace  = "${var.namespace}"
  stage      = "${var.stage}"
  attributes = ["${compact(concat(var.attributes, list("2")))}"]
  region     = "${var.region}"
}
jose.amengual avatar
jose.amengual

so let me see if I understand this correctly:

  1. I create a new state bucket with the module
  2. then I copy the state file to the new bucket
  3. then I manually edit the state file to use the new bucket
  4. cross fingers
2
jose.amengual avatar
jose.amengual

in reality the only bucket I need to change is the state bucket not the logs bucket

jose.amengual avatar
jose.amengual

but I guess when the state was copied from the local state file to the s3 bucket, it kept the name of the bucket there

jose.amengual avatar
jose.amengual

somehow I managed to do it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what did you do? @jose.amengual

jose.amengual avatar
jose.amengual

I made sure my plan was clean

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

deleted everything and started with clean state?

jose.amengual avatar
jose.amengual

Then I did a terraform state pull > terraform.tfstate

jose.amengual avatar
jose.amengual

Then I disabled the remote state in the main.tf

jose.amengual avatar
jose.amengual

Then I did an apply targeted at the tfstate module

jose.amengual avatar
jose.amengual

Then I changed the provider config to the new bucket

jose.amengual avatar
jose.amengual

Then I did an init, and said yes to copying the tfstate from the local cache

jose.amengual avatar
jose.amengual

And from then on I basically made sure to have the state in the new bucket, and changed the S3 log bucket name, which then forced it to remove the old one

jose.amengual avatar
jose.amengual

Deleted it manually, etc.

jose.amengual avatar
jose.amengual

It was not pretty

jose.amengual avatar
jose.amengual

But it worked

jose.amengual avatar
jose.amengual

There was no need to do it that way; if I had created the state bucket beforehand it could have been 4 commands

jose.amengual avatar
jose.amengual

But I wanted to have the state bucket as part of the project tfstate

jose.amengual avatar
jose.amengual

And due to that I had to delete the dynamo table manually etc

jose.amengual avatar
jose.amengual

these are basically the steps

Cloud Posse avatar
Cloud Posse
01:16:08 AM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Jun 26, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-06-26

jose.amengual avatar
jose.amengual

How do you guys pass variable values to the modules ?

  • On the cli like -var foo=bar
  • using terraform.tfvars
  • ENV variables
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for non-secrets, using .tfvars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for secrets, ENV vars, which we usually read from SSM using chamber
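Concretely, the pattern is something like this (service and variable names are made up; note Terraform only reads env vars named exactly TF_VAR_<variable name>):

# non-secrets live in a tfvars file
terraform plan -var-file=prod.tfvars

# secrets are exported as TF_VAR_* at plan/apply time, e.g. read from SSM via chamber
export TF_VAR_db_password="$(chamber read -q myapp db_password)"
terraform apply -var-file=prod.tfvars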

jose.amengual avatar
jose.amengual

chamber…

jose.amengual avatar
jose.amengual

is that an external tool ?

jose.amengual avatar
jose.amengual

found it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yes

jose.amengual avatar
jose.amengual

I really wish we had consul and Vault

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

chamber is just a tool to access SSM param store

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Not a full blown secret management system

jose.amengual avatar
jose.amengual

I understand

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Vault is a little bit difficult to set up compared to just using SSM

jose.amengual avatar
jose.amengual

Tell me about it, it took me 2 months to setup a full production cluster

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

did you try the “official” hashicorp module?

jose.amengual avatar
jose.amengual

this was in November 2017, and I was working at EA where it is all Chef-based

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha… yea, makes sense it would be that much effort

jose.amengual avatar
jose.amengual

to be honest, setting up Vault for prod with 2 instances is like 15 lines in the config file; the pain was Consul to some extent, solving the chicken-and-egg problem for the master key, and learning about consul-template

jose.amengual avatar
jose.amengual

but the hardest part by far is understanding the identity provider, how the IAM auth and PKCS7 authorization work, and tying all that up to the policies

jose.amengual avatar
jose.amengual

if you have a very good understanding of IAM

jose.amengual avatar
jose.amengual

it is easier to setup if you run it with AWS

jose.amengual avatar
jose.amengual

but it is a pretty good product

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Mike Nock avatar
Mike Nock

Hey all! Is anyone familiar with creating a workspace and setting multiple tfe variables through an API call? I’ve been trying to make the POST for the tfe vars, but I can’t seem to get multiple attributes in one JSON payload. I tried declaring multiple "attributes" sections, tried putting them in a list [{}], and tried declaring multiple data payloads. When declaring multiple attributes, only the last attribute listed takes. Multiple data payloads fail, and setting the attributes as a list fails due to an invalid JSON body.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unfortunately, not that many here running TFE (terraform enterprise)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii is running it though

Mike Nock avatar
Mike Nock

Fair enough

Callum Robertson avatar
Callum Robertson

hey @jose.amengual, if you’re on Mac: @jamie told me about aws-vault, it’s been fantastic for me, it puts all of your access keys, secrets and session tokens into your ENV vars. There’s also a container variation of it if you’re looking to use it outside your local machine
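Typical usage, for anyone who hasn’t tried it (the profile name is made up):

# short-lived credentials are injected into the child process's environment
aws-vault exec my-profile -- terraform plan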

2
jose.amengual avatar
jose.amengual

I use it every day

jose.amengual avatar
jose.amengual

but the secrets I was referring to are DB passwords etc.

Callum Robertson avatar
Callum Robertson

ah my mistake mate!

jose.amengual avatar
jose.amengual

np

Callum Robertson avatar
Callum Robertson
99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And it also ships with #geodesic

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) is hard at work upgrading our modules to 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so far, ~10% are updated (but we’re focused on the most popular/most referenced modules)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we are using the hcl2 label on all 0.12 compatible modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, in the process of upgrading the modules, we’re adding tests to each one. checkout the test/ folder.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Rackspace Infrastructure Automation

Rackspace Infrastructure Automation has 30 repositories available. Follow their code on GitHub.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Rackspace has started publishing terraform modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what’s weird is they are not following terraform module naming conventions.

4
Andrew Jeffree avatar
Andrew Jeffree

That is a very weird naming scheme

Lee Skillen avatar
Lee Skillen

@Andriy Knysh (Cloud Posse) How painful (or hopefully, smooth) has the migration been so far? Our entire infrastructure is still on 0.11, but I’d really like to migrate.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s actually pretty smooth

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

mostly just syntactic sugar

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cloudtrail

Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs - cloudposse/terraform-aws-cloudtrail

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but we are not only converting to TF 0.12, we are also adding:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Real examples (if missing)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Many tests to test the module and the example, and to actually provision the example in AWS test account
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Codefresh test pipeline to run those tests
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after that’s done, an open PR will trigger all those tests to lint the code, check terraform and providers version pinning, validate TF code, validate README, provision the example in AWS, and check the results (then destroy it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Lee Skillen ^

Lee Skillen avatar
Lee Skillen

That’s a fantastic piece of work, well done. I am definitely going to have to send some tweets your way as thanks for inspiration. Looking forward to the migration. Mostly to remove a heap of hcl workarounds. I’ll check on the #terragrunt folks too since that’s our setup.

1
Lee Skillen avatar
Lee Skillen

I’ll need to evaluate whether we can (and should) step away from the terragrunt-driven pipeline that we have at the moment too. Sensing future pain.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@loren and @antonbabenko can probably speak more to what’s involved upgrading terragrunt projects to 0.12. there was recently some chatter in #terragrunt

Lee Skillen avatar
Lee Skillen

Thanks Eric - I didn’t know there was a #terragrunt channel

2019-06-27

h3in3k3n avatar
h3in3k3n

Good day everyone !

Just curious, do we have any CloudFront module that supports multiple types of origin? https://sourcegraph.com/github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/-/blob/main.tf

for example, not everything on my side is always intended to use s3_origin_config

  origin {
    domain_name = "${aws_s3_bucket.mysite.bucket_regional_domain_name}"
    origin_id   = "${module.mysite_label.id}" #  A unique identifier for the origin (used in routes / caching)

    # s3 should not be configured as website endpoint, else must use custom_origin_config
    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.mysite.cloudfront_access_identity_path}"
    }
  }

it might be intended to use a custom origin also. I plan to rewrite this module to add a few more conditions so the end user can choose a custom origin or an S3 origin

main.tf - cloudposse/terraform-aws-cloudfront-s3-cdn - Sourcegraph

Sourcegraph is a web-based code search and navigation tool for dev teams. Search, navigate, and review code. Find answers.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With 0.12 this should be achievable. With 0.11 it would have been too cumbersome.
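A rough 0.12 sketch of that, using dynamic blocks so exactly one of the two origin configs is rendered (only the origin block is shown; the variable names are illustrative, not the module’s current inputs):

  origin {
    domain_name = var.origin_domain_name
    origin_id   = var.origin_id

    dynamic "s3_origin_config" {
      for_each = var.use_custom_origin ? [] : [1]
      content {
        origin_access_identity = var.origin_access_identity
      }
    }

    dynamic "custom_origin_config" {
      for_each = var.use_custom_origin ? [1] : []
      content {
        http_port              = 80
        https_port             = 443
        origin_protocol_policy = "https-only"
        origin_ssl_protocols   = ["TLSv1.2"]
      }
    }
  }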

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are updating all modules to 0.12

h3in3k3n avatar
h3in3k3n

thanks for your hard work, Erik

2019-06-28

suleiman ali avatar
suleiman ali

Hi everyone. First of all I would like to thank everyone behind the Cloud Posse Terraform modules, they have literally made my life easier. I have used the EKS and EKS-workers modules and I have an issue that I can’t figure out: both the eks and the eks-workers modules work fine and the nodes can join the cluster, but they never become Ready and they never get assigned secondary IPs, which they did the first time I ran terraform. I’m not sure if it’s related to the AMI or to the initialization of the worker nodes, even though I haven’t changed anything. How should I proceed, and what might be the cause? The tags!? I’m using terraform-aws-modules/vpc/aws for the VPC and subnets, with the complete eks example for the eks cluster and worker nodes

1
1
suleiman ali avatar
suleiman ali

thank you

1
nutellinoit avatar
nutellinoit

@suleiman ali is your VPC tagged with kubernetes.io/cluster/yourclustername = shared?
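For reference, the tags EKS expects look like this (the cluster name is a placeholder; the role tags go on the subnets):

# on the VPC and on every subnet the workers and load balancers use
tags = {
  "kubernetes.io/cluster/yourclustername" = "shared"
}

# additionally: "kubernetes.io/role/elb" = "1" on public subnets,
# and "kubernetes.io/role/internal-elb" = "1" on private subnets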

Callum Robertson avatar
Callum Robertson

Are these worker nodes in a private or public subnet? The configuration of the EKS nodes is different. However, if your nodes are connecting to the cluster, it sounds like it may even be an SG issue with the EKS cluster talking to your nodes. Have you triple checked the SGs and the tags on those SGs?

Callum Robertson avatar
Callum Robertson

Additionally, it might be worth checking your user_data

Callum Robertson avatar
Callum Robertson

I run the following

Callum Robertson avatar
Callum Robertson

/etc/eks/bootstrap.sh --kubelet-extra-args '--cloud-provider=aws' \
  --apiserver-endpoint '${var.eks_cluster_endpoint}' \
  --b64-cluster-ca '${var.eks_certificate_authority}' \
  '${var.eks_cluster_name}'
echo "/var/lib/kubelet/kubeconfig"
cat /var/lib/kubelet/kubeconfig
echo "/etc/systemd/system/kubelet.service"
cat /etc/systemd/system/kubelet.service

nutellinoit avatar
nutellinoit

or perhaps you are trying t3a instances with an older eks worker node AMI

1
nutellinoit avatar
nutellinoit
Support for t3a, m5ad and r5ad instance types is missing · Issue #262 · awslabs/amazon-eks-ami

What would you like to be added: Support for t3a, m5ad and r5ad instance types. Why is this needed: AWS had added new instance types, and the AMI does not currently support them.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@cytopia thanks btw for upstreaming all your fixes to terraform-docs.awk to our cloudposse/build-harness. You saved @Andriy Knysh (Cloud Posse) a bunch of time today when he was updating one of our modules and ran into some of the issues you fixed.

1
cytopia avatar
cytopia

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) there is another fix on the way. I tried to make the PR as descriptive and detailed as possible. I left an example with which you can try out the current faulty behaviour.

PR here: https://github.com/cloudposse/build-harness/pull/157

Fix: Do not double quote TF >= 0.12 legacy quoted types by cytopia · Pull Request #157 · cloudposse/build-harness

Do not double quote TF >= 0.12 legacy quoted types This PR addresses another issue with double-double quoting legacy types. This happens if you still double quote string, list and map in Terrafo…

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes thanks @cytopia, all your changes worked perfectly

2019-06-29

Julio Tain Sueiras avatar
Julio Tain Sueiras

k, lately I didn’t have a chance to work on terraform-lsp due to a broken computer. I just received my order for a USB monitor and it works pretty well, so I should be able to continue the terraform-lsp work for the rest of the objectives

1

2019-06-30

Ayo Bami avatar
Ayo Bami

Hi guys, has anyone been able to configure authorize-client-vpn-ingress using terraform? I can’t find it in the documentation. Thanks in advance

Callum Robertson avatar
Callum Robertson

replied to the original thread, Ayo
