#terraform (2020-12)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-12-01

michaelssingh avatar
michaelssingh

Running into the following error when using the terraform-aws-ecs-container-definition module

michaelssingh avatar
michaelssingh
Error: Variables not allowed

  on <value for var.environment> line 1:
  (source code not available)

Variables may not be used here.
michaelssingh avatar
michaelssingh

With a configuration that looks like this

    {
      name = "SPRING_PROFILES_ACTIVE"
      value = "${var.spring_active_profile}"
    },
tim.j.birkett avatar
tim.j.birkett

Is that inside a template or something?

michaelssingh avatar
michaelssingh

it is being passed directly to the module as the value for environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(fwiw, value = "${var.spring_active_profile}" is HCLv1 syntax. In HCL2, it should be value = var.spring_active_profile)

Joe Niland avatar
Joe Niland

Is that extract taken from a tfvars file?

michaelssingh avatar
michaelssingh

It is not

michaelssingh avatar
michaelssingh

It is being passed directly to the module

Joe Niland avatar
Joe Niland

Can you share a minimal example?

michaelssingh avatar
michaelssingh

Here’s an example using the terragrunt style

Joe Niland avatar
Joe Niland

Ah terragrunt.

Yes, it’s not possible to do that, since terragrunt is a wrapper and is just setting these inputs as TF_VAR_… when calling Terraform.

The way I understand the purpose of terragrunt.hcl is that it is where you set the variable values.

Can you explain why you need to do it this way?
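
For context, a minimal sketch of what terragrunt does here; the input name is borrowed from the snippet above and the value is illustrative:

# terragrunt.hcl
inputs = {
  # terragrunt exports each input as a TF_VAR_* environment variable when it
  # invokes terraform, so values must be literals, not var.* references
  spring_active_profile = "production"
}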

michaelssingh avatar
michaelssingh

the values get passed in via a regular terraform module

michaelssingh avatar
michaelssingh

that was just an example

Joe Niland avatar
Joe Niland

I’d need to see the entire example with all files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Consider the #terragrunt channel instead

1
michaelssingh avatar
michaelssingh

Seems a bit odd to me that this would not be allowed?

Hao Wang avatar
Hao Wang

Do you have sample code I can try locally?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can anyone recommend an Elasticache module upstream?

Matt Gowie avatar
Matt Gowie
cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
variable "replicas_per_node_group" {
  type        = number
  default     = 0
  description = "Required when `cluster_mode_enabled` is set to true. Specify the number of replica nodes in each node group. Valid values are 0 to 5. Changing this number will force a new resource."
}
roth.andy avatar
roth.andy

You can validate that the value is a number from 0 to 5. I don’t think you can enforce a non-default number if and only if some other variable has some value

roth.andy avatar
roth.andy

You could use a conditional in your terraform code that changes the value if the other value is true

roth.andy avatar
roth.andy

You would in effect be setting a different default when the other value is true
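
A minimal sketch of that conditional-default approach, reusing the variable names from the snippet above (the fallback value of 2 is illustrative):

locals {
  # use the caller's value, but substitute a non-zero default when cluster mode is enabled
  replicas_per_node_group = var.cluster_mode_enabled && var.replicas_per_node_group == 0 ? 2 : var.replicas_per_node_group
}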

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can’t do it with validation, as that can only reference the variable in question.

There is another way to do this, but it’s a little weird. I personally use it heavily because I think it’s really useful to enforce conditions like this rather than let them generate hard-to-understand errors.

data "external" "validate_replicas" {
  count   = var.cluster_mode_enabled && var.replicas_per_node_group > 5 ? 1 : 0
  # the external data source has no "command" argument; fail via "program" so the message surfaces
  program = ["sh", "-c", "echo 'Error: if cluster mode is enabled, replicas must be 5 or less.' >&2; exit 1"]
}
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way to do this with validation?

Gareth avatar

If you have a map/object and a key name needs to contain the ":" character (for backwards compatibility with my environment), e.g.

"terraform:managed" = string
"terraform:root"    = string

can you do that? TF currently complains:

"Object constructor map keys must be attribute names."

I’ve tried a variety of escape characters but it looks like this is a non-starter. Any ideas please?

loren avatar

i believe the expression syntax using parens may work?

("terraform:managed") = string
("terraform:root")    = string
Gareth avatar

Thanks Loren, not working on my quick test. Did I misunderstand?

variable "configs" {
  description = "TESTING of configs."
type = object ({
      ("terraform:managed") = string
      ("terraform:root")    = string
    })
}

Also just trying changing the “(“ to “{“ {“terraform:managed”} = string {“terraform:root”} = string

Gareth avatar

but no joy

loren avatar

i wasn’t certain it would work in a variable definition. the syntax works elsewhere in tf where the expression confuses the standard parser. wrap the expression in parens and the parser then knows what to do. but the colon may confuse it further, since both colon and equal are valid separator tokens for tf maps

1
Gareth avatar

Fair enough, and thank you. Just wanted to double check I’d not simply applied your suggestion wrongly. I can work around it for now; it was only the name of an AWS tag, so I can inject it later. I was just trying to get as much into my data structure as possible.

loren avatar

for example, this works:

$ cat main.tf
locals {
  foo = {
    ("colon:test") = "bar"
  }
}

output foo {
  value = local.foo
}

$ terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

foo = {
  "colon:test" = "bar"
}
loren avatar

though, as a local like that, the parens are not needed around the key

Gareth avatar

Ah, that’s good to know. (You should share a "buy me a beer/coffee" link; I think I owe you many now.)

loren avatar

haha, all good

loren avatar

might want to open an issue with this use case. the locals test shows that the colon is valid in the key. i think this means that the object constructor is not quite intelligent enough to handle this correctly

Gareth avatar

Good point, I’ll get one written up when I get home.

loren avatar

only related issue i can find is this one, indicating the same problem with a period in the key… https://github.com/hashicorp/terraform/issues/22681

Can't use "." in object type key names · Issue #22681 · hashicorp/terraform

Terraform Version 0.12.7 Terraform Configuration Files variable "some_variable" { type = map(object({ variable.1.thing = object({ variable.list = list(string) }) })) } output "output…

Gareth avatar

Thanks for the reference, I’m happy to try and report it. I’ve a few hours travel until home so will do it once I get there.

btai avatar

those of you that use terraform cloud for your modules i.e.

module "consul" {
  source = "app.terraform.io/example-corp/k8s-cluster/azurerm"
  version = "1.1.0"
}

how do you test changes to your modules before cutting a new version?

is the best approach to just point your reference of the module at the git repo source during local testing and change it back once it’s ready?

module "consul" {
  source = "[email protected]:example-corp/terraform-azurerm-k8s-cluster.git?ref={new_changes}"
  # source = "app.terraform.io/example-corp/k8s-cluster/azurerm"
  # version = "1.1.0"
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

You can use a local directory as the source
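
For example, a sketch of the swap while testing locally (the relative path is illustrative):

module "consul" {
  # source  = "app.terraform.io/example-corp/k8s-cluster/azurerm"
  # version = "1.1.0"

  # version is not allowed with path sources, so it is commented out too
  source = "../terraform-azurerm-k8s-cluster"
}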

btai avatar

right, so that means you also comment out the terraform cloud source while doing local development?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes. That’s what I do when testing changes to any module locally

1
github140 avatar
github140

We use kitchen-terraform to test each version before releasing it.

2020-12-02

Babar Baig avatar
Babar Baig

Greetings everyone. I am using terraform-aws-ecs-container-definition and trying to add volumes using the following code:

  volumes_from = [
    {
      sourceContainer="applogs"
      readOnly=false
    }
  ]
  mount_points = [
    {
      containerPath = "/app/log"
      sourceVolume = "applogs"
    }
  ]

But I am getting the following error:

Error: ClientException: Invalid 'volumesFrom' setting. Unknown container: 'applogs'.

  on main.tf line 151, in resource "aws_ecs_task_definition" "this":
 151: resource "aws_ecs_task_definition" "this" { 

Can anyone help me figure out what I am doing wrong here? I was unable to find an example.

According to the link https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html I think the input is correct but I am unable to figure out the missing piece here. Any help is appreciated.

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

Docker volumes - Amazon Elastic Container Service

When using Docker volumes, the built-in local driver or a third-party volume driver can be used. Docker volumes are managed by Docker and a directory is created in /var/lib/docker/volumes on the container instance that contains the volume data.

Tom Dugan avatar
Tom Dugan

hmm are you trying to use a docker volume or mount a volume from another container?

Babar Baig avatar
Babar Baig

I want to use a docker volume which can act as a fresh volume for this newly created container. Actually my use case is that I want a volume which is shared between host and container: I want to access files placed on a specific path inside the container from the host.

Tom Dugan avatar
Tom Dugan

Ah yeah so you should define your docker volume parameters in the task definition then use the same volume name in the container definition

Tom Dugan avatar
Tom Dugan

if you look at the example under docker volume configurations you’ll see how the docker volume is referenced; in your container definition using the Cloud Posse module you would just use sourceVolume to reference the name defined under volume.

Tom Dugan avatar
Tom Dugan
module "container_def" {
  mount_points = [
    {
      containerPath = "/app/log"
      sourceVolume = "service-storage"
    }
  ]

resource "aws_ecs_task_definition" "service" {
  family                = "service"
  container_definitions = module.container_def.json_map_encoded_list

  volume {
    name = "service-storage"

    docker_volume_configuration {
      scope         = "shared"
      autoprovision = true
      driver        = "local"

      driver_opts = {
        "type"   = "nfs"
        "device" = "${aws_efs_file_system.fs.dns_name}:/"
        "o"      = "addr=${aws_efs_file_system.fs.dns_name},rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
      }
    }
  }
}
Babar Baig avatar
Babar Baig

So I’ll put

  volume {
    name      = "service-storage"
    host_path = "/ecs/service-storage"
  }

inside task definition and inside my container definition module I’ll use

  volumes_from = [
    {
      sourceContainer="service-storage"
      readOnly=false
    }
  ]
Tom Dugan avatar
Tom Dugan

~yeah something like that!~ sorry not volumes_from you’ll need mount_points

1
Babar Baig avatar
Babar Baig

Got it. I overlooked the task definition, my bad. Thanks @Tom Dugan

Babar Baig avatar
Babar Baig

Got it.

party_parrot1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is anyone using a postgres provider to create databases and users?

Matt Gowie avatar
Matt Gowie

Yep.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

any specific one you’d recommend?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i essentially need to do the following …

-- Create required databases
CREATE DATABASE notaryserver;
CREATE DATABASE notarysigner;
CREATE DATABASE registry ENCODING 'UTF8';
-- Create harbor user
-- The helm chart limits us to a single user for all databases
CREATE USER harbor;
ALTER USER harbor WITH ENCRYPTED PASSWORD 'change-this-password';
-- Grant the user access to the DBs
GRANT ALL PRIVILEGES ON DATABASE notaryserver TO harbor;
GRANT ALL PRIVILEGES ON DATABASE notarysigner TO harbor;
GRANT ALL PRIVILEGES ON DATABASE registry TO harbor;
Matt Gowie avatar
Matt Gowie

There is a terraform-providers/postgresql provider which is the standard AFAIK.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

from what i saw it doesn’t handle user creation

Matt Gowie avatar
Matt Gowie

Ah maybe you’re correct on that front. I’ve created roles through the psql provider but probably not users.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

makes sense

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can create a login role with the postgres provider which should be what you want, afaik

1
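
For reference, a minimal sketch with the postgresql provider (assuming the provider is already configured; a role with login = true is what Postgres calls a user, and var.harbor_db_password is a hypothetical input):

resource "postgresql_role" "harbor" {
  name     = "harbor"
  login    = true
  password = var.harbor_db_password
}

resource "postgresql_database" "registry" {
  name     = "registry"
  encoding = "UTF8"
  owner    = postgresql_role.harbor.name
}

The GRANT statements would map onto the provider’s postgresql_grant resource.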
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

this didn’t seem to work for me

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am getting the following error …

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused

  on .terraform/modules/data_platform_core/modules/data-platform-core/harbor-postgres-configuration.tf line 10, in provider "postgresql":
  10: provider "postgresql" {
Troy Taillefer avatar
Troy Taillefer

no but I am doing something similar with snowflake using https://tf-registry.herokuapp.com/providers/chanzuckerberg/snowflake/latest. Very positive experience.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

nice i am looking for something that’ll work with AWS RDS

1
jose.amengual avatar
jose.amengual

I use…….

mysql = {
      source = "terraform-providers/mysql"
    }
2
Alex Jurkiewicz avatar
Alex Jurkiewicz

Just don’t try and manage users from the same Terraform configuration you create the rds resources

jose.amengual avatar
jose.amengual

why?

jose.amengual avatar
jose.amengual

( I do……what did I do wrong?)

Alex Jurkiewicz avatar
Alex Jurkiewicz

The provider needs to be configured before resources are created. If you attempt to configure the db provider based on the dynamic hostname/credentials generated in the same Terraform stack this is impossible

Alex Jurkiewicz avatar
Alex Jurkiewicz

It works if you create the cluster first and later add the db provider resources. But it will fail if you ever rebuild the stack

Alex Jurkiewicz avatar
Alex Jurkiewicz

Even worse, the MySQL provider has silent defaults for credentials. If you try and load them from a non-static source, it will appear to work but really be using localhost for the hostname or whatever

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I was planning to have a module that wrapped my existing rds module and then took the outputs from that and passed them to the provider. Are you advising against this?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yes, exactly

Alex Jurkiewicz avatar
Alex Jurkiewicz

Personally I break it up into two configurations. First configuration creates rds, second manages users and other objects within.

But other people keep it in a single configuration and use hardcoded variables plus apply -target runs. Both work.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

That’s going to be difficult for us as we want to provision the DB when it’s created

jose.amengual avatar
jose.amengual

in TF you can look up the cluster (e.g. with a data source) and then run the user creation and grants
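
A sketch of that lookup approach for the second configuration (the identifier and credentials are illustrative):

# look up an RDS instance created elsewhere instead of referencing it in the same stack
data "aws_db_instance" "harbor" {
  db_instance_identifier = "harbor-postgres"
}

provider "postgresql" {
  host     = data.aws_db_instance.harbor.address
  port     = 5432
  username = "postgres"
  password = var.master_password # hypothetical, e.g. supplied via tfvars or SSM
}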

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s not that difficult. Your repo has two terraform configurations. create-rds/main.tf and everything-else/main.tf. Run terraform apply twice in a row

jose.amengual avatar
jose.amengual

ahhhhhhh ok , that is another way

Release notes from terraform avatar
Release notes from terraform
09:44:14 PM

v0.14.0 0.14.0 (December 02, 2020) NEW FEATURES:

Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.

terraform init will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (https://github.com/hashicorp/terraform/issues/26524)

Initial integration of the provider dependency pinning work by apparentlymart · Pull Request #26524 · hashicorp/terraform

This follows on from some earlier work that introduced models for representing provider dependency "locks" and a file format for saving them to disk. This PR wires the new models and beha…

4
RB avatar

finally no more worries about tfstate and changing minor versions

RB avatar

i’ll wait for some minor updates before moving to tf14. it looks impressive so far

Chris Wahl avatar
Chris Wahl

I’ve enjoyed using the new concise diff engine.

jose.amengual avatar
jose.amengual

same here, I will wait for a few updates

2020-12-03

Babar Baig avatar
Babar Baig

Can someone point me to relevant material on writing Terraform code that follows industry standards? I am struggling to structure my code in a way that avoids duplication and can be used to deploy to multiple accounts with multiple environments. I explored Terragrunt and it is one option I could use to remove duplication. So simply put, I am looking for:

  1. An industry-standard, large-enterprise code structuring method for Terraform
  2. A way to avoid duplication

For now I simply create a new folder for each new use case. For example:
test-org-one-ecs-solution-production
  - modules
  - test-org-vpc
     - main.tf
     - rest of the files
  - test-org-rds
     - main.tf
     - ...
  - test-org-ecs-app
     - main.tf (it has resources defined and also calls Terraform AWS modules to create a complete ECS app solution for test-org)
     - ...
test-org-one-ecs-solution-staging
  - copy of above

Whereas the TF states are maintained in S3.

Tom Dugan avatar
Tom Dugan

I’m interested in others’ responses as well. I have some of my own opinions, and I’ll share some resources on the topic that have helped our organization develop our TF code structure. I would assume you are not using Terraform Cloud?

Terraform Repository Best Practices

Terraform for Production

TF Vars Driven method

Digital Ocean’s Take on Directory Structure

On the topic of Terragrunt, I do not personally use it but colleagues of mine do use it successfully. The feedback is mostly positive. I will say that Terragrunt does abstract some vanilla Terraform features, which has resulted in some miscommunication of concepts between us.

Terraform Repository Best Practices, Parts 1 & 2attachment image

Learn how to standardize your Terraform code and eliminate duplicate Terraform code.

Structuring HashiCorp Terraform Configuration for Productionattachment image

How do you scale your Terraform configuration as your team grows? In this post, we discuss approaches to structuring your Terraform configuration for improved testing, reusability, and scalability.

How We Organize Terraform Code at 2nd Watch - 2nd Watch

In this blog post, we’ll go over how we structure our IaC repositories at 2nd Watch with a particular focus on Terraform, an open-source tool by Hashicorp for provisioning infrastructure across multiple cloud providers with a single interface.

How To Structure a Terraform Project | DigitalOceanattachment image

Structuring Terraform projects appropriately according to their use cases and perceived complexity is essential to ensure their maintainability and extensibility in day-to-day operations. In this tutorial, you’ll learn about structuring Terraform proj

1
Babar Baig avatar
Babar Baig

Correct. I am not using TF cloud.

Babar Baig avatar
Babar Baig

Thanks @Tom Dugan. I’ll look into the material. I personally want to avoid Terragrunt

Tom Dugan avatar
Tom Dugan

I am interested in what you end up with!

Babar Baig avatar
Babar Baig

Sure. I’ll share.

Jonathan Le avatar
Jonathan Le

I use modules (local and published) and a “base” folder that holds common configuration across environments. Then for each environment I just symlink to the common stuff in the “base” folder. In each specific environment I’ll use separate tfvars files to set the module settings for each env. Works fine for me. Just another idea for you.

Also, an “Environment” might be represented by 10s or hundreds of workspaces. It’s never just 1 giant workspace with everything in there.

Jonathan Le avatar
Jonathan Le
05:02:57 PM

Here’s a small example:

├── Makefile
├── README.md
├── modules
│   └── default_route_device
│       ├── main.tf
│       └── variables.tf
└── projects
    ├── account-base
    │   ├── account-base-nonprod.auto.tfvars
    │   ├── account-base-prod.auto.tfvars
    │   ├── account-base.auto.tfvars
    │   ├── datasources.tf
    │   ├── main.tf
    │   ├── providers.tf
    │   ├── template.sh
    │   └── variables.tf
    ├── blah-env-us-east-1
    │   ├── account-base-nonprod.auto.tfvars -> ../account-base/account-base-nonprod.auto.tfvars
    │   ├── account-base.auto.tfvars -> ../account-base/account-base.auto.tfvars
    │   ├── backend.tf
    │   ├── datasources.tf -> ../account-base/datasources.tf
    │   ├── main.tf -> ../account-base/main.tf
    │   ├── providers.tf -> ../account-base/providers.tf
    │   ├── blah-nonprod-us-east-1.auto.tfvars
    │   └── variables.tf -> ../account-base/variables.tf
    └── blah-env-us-west-2
        ├── account-base-nonprod.auto.tfvars -> ../account-base/account-base-nonprod.auto.tfvars
        ├── account-base.auto.tfvars -> ../account-base/account-base.auto.tfvars
        ├── backend.tf
        ├── datasources.tf -> ../account-base/datasources.tf
        ├── main.tf -> ../account-base/main.tf
        ├── providers.tf -> ../account-base/providers.tf
        ├── blah-nonprod-us-west-2.auto.tfvars
        └── variables.tf -> ../account-base/variables.tf
1
1
Babar Baig avatar
Babar Baig

Thanks @Jonathan Le I’ll be taking a look into the shared approach while working on finalizing the approach that suits my organization.

Jonathan Le avatar
Jonathan Le

NP. Your requirements might be different than mine, so just giving an example to think about. Good luck.

1
Jon avatar

Good morning, I was curious if anyone knew of a better way to consume this module than how I currently am doing, and wouldn’t mind sharing. I’m using https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/latest

Basically, my main.tf looks like this:

module "ssm_parameter_store" {
  source  = "cloudposse/ssm-parameter-store/aws"
  version = "0.4.1"

  parameter_write = var.parameter_write
  kms_arn         = data.aws_kms_key.ec2_ami_cmk.arn ## encrypts/decrypts secrets marked as "SecretString"
}

And I am passing in a tfvars for each environment (dev,test,prod).

parameter_write = [
  {
    name      = "/dev/app/us-east-1/path/to/secrets/foo"
    value     = "abc123"
    type      = "String"
    overwrite = "true"
  },
  {
    name      = "/dev/app/us-east-1/path/to/secrets/password"
    value     = "def456"
    type      = "SecureString"
    overwrite = "true"
  }
]
Jon avatar

This works just fine but as you can see, a portion of my path is something that can be programmatically filled in. I was thinking of using a local variable called prefix; that would cut down on having to type out the full path each time.

Jon avatar

I was hoping I could do something like for_each for the parameter_write = part?

Maybe something like this? I’m trying to find more information on how this is achieved.

dynamic "parameter" {
  for_each = [for param in properties: {
    name = "${local.prefix}-${param.name}"
    value = param.value
    type = param.type
    overwrite = param.overwrite
  }]
}
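
Since parameter_write is a plain list argument rather than a nested block, a for expression may be all that’s needed here, with no dynamic block involved. A sketch reusing the names above (the prefix value is illustrative):

locals {
  prefix = "/dev/app/us-east-1/path/to/secrets"
}

module "ssm_parameter_store" {
  source  = "cloudposse/ssm-parameter-store/aws"
  version = "0.4.1"

  # prepend the shared path to each short name supplied via tfvars
  parameter_write = [
    for param in var.parameter_write : merge(param, {
      name = "${local.prefix}/${param.name}"
    })
  ]
  kms_arn = data.aws_kms_key.ec2_ami_cmk.arn
}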
Jon avatar

I also noticed that the module is currently using count instead of for_each, so at this time I’m not sure if it would introduce any problems.

Tom Dugan avatar
Tom Dugan

I don’t personally use this module but I would echo the last concern about it using count vs for_each; in that case I would opt to for_each the module if I were to use it. That said, I just for_each the parameter store resource itself, and I define the path as a local.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, we’d accept PRs to update the module to use for_each - just not something we’ve gotten around to.

amelia.graycen avatar
amelia.graycen

I’m also wondering about the beanstalk buckets you’re using. They look like the prod beanstalk?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have an example of firing a lambda when an RDS database (not aurora) is created or modified?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to configure one to provision the instance with users and databases on creation

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to work out what the event_pattern would look like

Babar Baig avatar
Babar Baig
Using Amazon RDS event notification - Amazon Relational Database Service

Get a notification by email, text message, or a call to an HTTP endpoint when an Amazon RDS event occurs using Amazon SNS.

Babar Baig avatar
Babar Baig
{
  "source": [
    "aws.rds"
  ],
  "detail-type": [
    "RDS DB Instance Event"
  ],
 "detail": { "EventCategories": ["creation"] }
}
Babar Baig avatar
Babar Baig

@Steve Wade (swade1987) try above

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

thanks man appreciated

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am currently fighting python code at the moment

1
Joan Porta avatar
Joan Porta

Hi guys! Any recommendation of a good tool to import AWS resources into Terraform? Something better than this, because I have lots of resources.

terraform import example_thing.foo abc123
Matt Gowie avatar
Matt Gowie

Check out terraformer

Matt Gowie avatar
Matt Gowie
GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

Matt Gowie avatar
Matt Gowie

(this is fully a joke, don’t do that)

1
Joan Porta avatar
Joan Porta

Thx Matt!

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Does anyone have an opinion on the thread I posted here - https://twitter.com/swade1987/status/1334554787711492097?s=21

Question … How are people handling the Go code they require for lambdas that are deployed as part of their terraform modules? (1/n)

loren avatar

I’d keep it separate… You can use a source-only module to retrieve the go project, and reference paths in .terraform to pass to your tf lambda resource

loren avatar

The go project doesn’t need any tf code at all

loren avatar
module "foo" {
  source = "git::https://....git?ref=<ver>"
}

module "lambda" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-lambda.git?ref=v1.30.0"

  ...
  source_path = "${path.module}/.terraform/modules/foo/..."
}
loren avatar

note the link between the label of the source-only module, foo, and the path .terraform/modules/foo/...

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

You’ve lost me, I can reference a zip file from a source code url?

loren avatar

i don’t know why you would create the zip. let the module do it

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

What am I passing to the module then, just the binary itself?

loren avatar
https://github.com/terraform-aws-modules/terraform-aws-lambda
loren avatar

recommend reviewing the module readme

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Interesting. I’m using a different module at the moment, can easily switch though

loren avatar

i don’t see any examples in the repo for golang, so it will involve some experimentation to figure out how to get the module to build it. or you have your golang devs build it as part of their release/version cycle, and you can pull down that artifact and have the module create the zip of it

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I think the easiest option would be to release a zip file as part of the golang repo and reference it that way

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

It’s so trivial to do, I just need to work out how to get it inside tf

loren avatar

i just so despise committing a zip file or any binary to a repo. makes me sick inside

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Not committing it, having it as a release artifact

loren avatar

oh right

loren avatar

that makes sense, your golang pipeline can compile the code and create the zip at the same time. easy

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Exactly

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Then I’ll use a null resource to get it

loren avatar

i guess now you can publish a container with the package instead?

loren avatar

might be even easier

loren avatar
New for AWS Lambda – Container Image Support | Amazon Web Servicesattachment image

With AWS Lambda, you upload your code and run it without thinking about servers. Many customers enjoy the way this works, but if you’ve invested in container tooling for your development workflows, it’s not easy to use the same approach to build applications using Lambda. To help you with that, you can now package and […]

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

The issue with that is from the looks of things it has to be an ECR image

loren avatar

sure, either way, you’ll need to publish the release artifact to somewhere… i guess as part of your tf config you could somehow mirror from another registry to ecr. or just push from the golang pipeline to ecr

mfridh avatar

Neat trick. Never had a reason to think of using files from .terraform yet. For the very small lambda stuff so far, the .go file just sits next to the terraform files. Will keep this method in mind for the future

1
loren avatar

it’s a new favorite pattern of mine. helps separate concerns across different teams and projects. takes advantage of how terraform init pulls all module sources to the local .terraform cache before generating the plan. so the files are guaranteed to be at that path

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

@mfridh do you create the zip file as part of the terraform code or is that in the module directory already?

Shannon Dunn avatar
Shannon Dunn

Question regarding upgrading modules to 0.13: we usually don’t declare the providers in modules, and let them use the provider configuration from the calling Terraform. but it seems 0.13 requires something like this in all module repos

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 1.2"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 2.0"
    }
  }
}

1
Shannon Dunn avatar
Shannon Dunn

@Alex Jurkiewicz thanks!

1
jose.amengual avatar
jose.amengual

is this required in 0.14? for some reason I think this was going to be required at some point

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think it might be

Shannon Dunn avatar
Shannon Dunn

another quick question: doing the above to a sub-module appears to actually initialize a provider instead of using the root one. i know in tf 0.12 this could cause problems when removing resources, or the provider itself, from the module. but is it generally okay now

├── provider[registry.terraform.io/hashicorp/aws] 3.16.0
├── module.vpc
│   └── provider[registry.terraform.io/hashicorp/aws]

Shannon Dunn avatar
Shannon Dunn

each module would have its own aws provider

Shannon Dunn avatar
Shannon Dunn

it’s just that before, we were initializing providers in each module, and then trying to remove the module caused errors if it had a separate aws provider in the module. i just wanna make sure im not making the same mistake

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m not really sure what the behaviour here is in theory, but in practice you “shouldn’t” run into problems. You should create all providers at the root level and pass them to your modules, rather than relying on the modules creating them.

Shannon Dunn avatar
Shannon Dunn

kk thanks again man

Alex Jurkiewicz avatar
Alex Jurkiewicz
provider "aws" {}
module "rds-cluster" {}

will implicitly use the root aws provider

provider "aws" { alias = "aws1" }
provider "aws" { alias = "aws2" }
module "rds-cluster" {
  providers = {
    aws = aws.aws2
  }
}

will explicitly pass a specified provider (the syntax is off the top of my head, check the docs for correct details)

Shannon Dunn avatar
Shannon Dunn
provider "aws"
module "rds-cluster" {
...
}
Shannon Dunn avatar
Shannon Dunn

will still use the root provider, even tho i put the required_providers in the terraform block of rds-cluster right?

Shannon Dunn avatar
Shannon Dunn

i think i got it tho

Shannon Dunn avatar
Shannon Dunn

it makes sense

Shannon Dunn avatar
Shannon Dunn

thanks for all the help man

Alex Jurkiewicz avatar
Alex Jurkiewicz

modules will create providers if a suitable one doesn’t exist in the root configuration. IMO this is bad, but it’s the way Terraform works. So just make sure to create the providers yourself and you won’t get surprised

Shannon Dunn avatar
Shannon Dunn

otherwise it will try and use -/aws instead of hashicorp/aws

Shannon Dunn avatar
Shannon Dunn

is this instantiating a new aws provider in the module, or just requiring the root tf have that version?

Shannon Dunn avatar
Shannon Dunn

should i always have a block like this in ALL modules with relevant providers

Shannon Dunn avatar
Shannon Dunn

is that best practice now?

Shannon Dunn avatar
Shannon Dunn

docs are a little unclear if this is only if i want a module local provider

Alex Jurkiewicz avatar
Alex Jurkiewicz

The above block is not “required”, but generally you should include a minimal version where you specify the source of each required provider. The source is a mapping from friendly name (“aws”, “pagerduty”) to the Hashicorp registry name (“hashicorp/aws” or “pagerduty/pagerduty”). The recommendation from Hashicorp is that top-level configurations specify strict version strings (“~>” or “=”), while modules specify only minimum versions for providers (“>=”). And I recommend in this Slack you thread your messages and post more than one sentence per message :)
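
A sketch of that pinning convention (the versions are illustrative):

# in the root configuration, e.g. versions.tf: strict pinning
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.16.0"
    }
  }
}

# in a reusable module: minimum bound only
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0"
    }
  }
}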

Shannon Dunn avatar
Shannon Dunn

ahhhhh

Shannon Dunn avatar
Shannon Dunn

ah ok

1
1
1
loren avatar

The way it appears to require -/aws is an artifact of the plan for the upgrade to tf 0.13 tfstate. After the upgrade apply, you shouldn’t see that anymore

1
Shannon Dunn avatar
Shannon Dunn

this worked; doing a local init outside of tfe, i was able to correctly update these providers

thanks man

1
loren avatar

no prob!

loren avatar

You can use terraform state replace-provider to fix it before the apply if you want. Just be aware that it modifies your tfstate and may cause problems if you wanted to keep using the earlier version… https://www.terraform.io/docs/commands/state/replace-provider.html

Command: state replace-provider - Terraform by HashiCorp

The terraform state replace-provider command replaces the provider for resources in the Terraform state.

1
this1
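
For the legacy -/aws case discussed above, the documented invocation looks like:

terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws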
Jurgen avatar

hey team @dmdla I am going through a bunch of your modules that I use but I am on TF 0.14….

https://github.com/cloudposse/terraform-aws-iam-system-user/pull/38 https://github.com/cloudposse/terraform-aws-dynamodb-autoscaler/pull/27 https://github.com/cloudposse/terraform-aws-route53-cluster-hostname/pull/29

Once these ones are in, i’ll do the next round. Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s use #pr-reviews, but we’re standing by to help review.

Jurgen avatar

ah, didn’t know the channel.. i’ll move

Jurgen avatar

yeah, I just have a project that isn’t in prod yet and I am riding the forefront of all versions

Jurgen avatar

so not afraid to have shit break on tf betas, etc.

2020-12-04

jonjitsu avatar
jonjitsu

Has anyone ever used an aws_cloudformation_stack because cloudformation did something better, like resource updates?

loren avatar

I’ve never seen cloudformation do updates better. But I have used cloudformation when tf didn’t yet support the resource but cfn did

RB avatar

yes i use the aws_cloudformation_stack but only because our 3rd party CICD tool provides that instead of a terraform module

Freddie Fabregas avatar
Freddie Fabregas

I used it for autoscaling group rolling updates.

Primarily when EKS Managed group wasn’t introduced yet.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Anybody here using the new service_ipv4_cidr field in aws_eks_cluster? Or looking to use it?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Separate question: we use TFLint and it feels a bit light in the rules that it has. Is there a better tool out there? Are there specific things you normally run into that TFLint doesn’t cover?

Matt Gowie avatar
Matt Gowie

No info for ya, but I’m interested in following along on this one. I’d like to roll out TFLint to a large client Terraform monorepo at some point soon.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

What are you hoping for it to catch for you?

Matt Gowie avatar
Matt Gowie

I’d just like to enforce more consistency before folks commit. I’ve got other infra engineers who follow the patterns I’ve put in place, but there are a ton of more dev focused folks on the team who are writing TF code now so I’d love to catch naming, quoting, and other linting style hiccups before their code gets into PR and I have to reject it.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Ah, TFLint can definitely do that for you. It is also capable of handling lint issues on a per provider basis - like look for specific errors people tend to make with the AWS provider.

Matt Gowie avatar
Matt Gowie

Yeah, figured as much. Just need to implement it! Following along on this thread as I’m interested in hearing if anyone weighs in with good tips for you

github140 avatar
github140

Most probably you came across that already https://github.com/antonbabenko/pre-commit-terraform

antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Yep

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you looking for the tools or the policies?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s conftest which speaks rego, but haven’t found a catalog of policies yet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Blokje5/validating-terraform-with-conftest

Example code along with the blog post at https://blokje5.dev - Blokje5/validating-terraform-with-conftest

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

The policies. Then I can try and figure out what tools cover them, or not.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I’m trying to start from “what are the policies and rules worth having”, and then see what I can use to enforce them.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, it’s the hard part for sure. We’re creating catalogs for this kind of stuff (we have for SCP, AWS Config, Datadog, etc). Nothing yet for HCL policies.

kskewes avatar
kskewes

Checkov perhaps?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

That’s more of a security analysis than an operational analysis, no?

1
barak avatar

You can create “cost policies” using checkov, if that’s what you mean by operations.

barak avatar

like ec2 types etc

2020-12-05

2020-12-07

loren avatar

terraform is publishing a roadmap… hadn’t noticed that before… https://github.com/hashicorp/terraform-provider-aws/blob/master/ROADMAP.md

hashicorp/terraform-provider-aws

Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.

Zach avatar


Lifecycle: Retain [Add ‘retain’ attribute to the Terraform lifecycle meta-parameter]
Issue: https://github.com/hashicorp/terraform-provider-aws/issues/902
Some resources (e.g. log groups) are intended to be created but never destroyed. Terraform currently does not have a lifecycle attribute for retaining such resources. We are curious as to whether or not retaining resources is a workflow that meets the needs of our community

Huh, what would that do… you run a destroy but it leaves that particular resource (and dependencies) alone and ‘dangling’ within AWS?

DeletionPolicy Attribute - Retain · Issue #902 · hashicorp/terraform-provider-aws

Hi there, Terraform Version all Affected Resource(s) Please list the resources as a list, for example: aws_s3_bucket, s3 is a sample, this feature should be applied to most resources. meta-paramete…

loren avatar

that’s the idea yeah

Matt Gowie avatar
Matt Gowie

That could be pretty useful if you were trying to have your environments scale to zero.

Zach avatar

What does ‘scale to zero’ mean?

Zach avatar

Oh, kubernetes thing

Matt Gowie avatar
Matt Gowie

Scale to zero is the idea that you don’t have running infra costs when no one is using it. So lambdas are a good example. But let’s say you’re running a Fargate cluster, you could scale your instances and any other bill-per-usage infra down to zero overnight for example and not have to pay for them.

Zach avatar

Sure. How’s that play into this idea of resources left outside the state?

Matt Gowie avatar
Matt Gowie

The retain functionality would be less helpful with actually scaling your compute, because that already has good scaling functionality, but it could help with removing your Elasticsearch cluster while keeping your VPC, for example.

Gareth avatar

Good afternoon, is there a quick way to take a map and remove any duplicate values, while maintaining it as a map (or creating a new one), as I need the key later on? I’ve tried reversing the keys and values, e.g. making the value become the key, in the hope that the duplicate would then just replace what was there, but it looks like TF no longer allows that (it might never have allowed it, but I thought it did). Equally, I tried converting it to a list and then running it through distinct, which works in terms of removing the duplicate values but obviously loses the key.

roth.andy avatar
roth.andy

for the duplicates, are the key and value both the same?

roth.andy avatar
roth.andy

Is it

{
  "foo" = "bar"
  "foo" = "bar"
}

or

{
  "foo" = "bar"
  "foo" = "baz"
}
roth.andy avatar
roth.andy

Also, it may be possible to have a map with duplicate keys, but that is not the intention of a map. Maps are supposed to be lookup tables from unique keys to values

roth.andy avatar
roth.andy

oh hang on I see what you are saying. The keys are different but there are duplicate values

roth.andy avatar
roth.andy

In the case of having duplicate values, how are you deciding which key to keep? The first instance? Or some other logic

Gareth avatar

Afternoon Andrew, the data set I have would be the second option

{
  "foo" = "bar"
  "foo" = "baz"
}
roth.andy avatar
roth.andy

Is it? I’m understanding your question now as you having this kind of data:

{
  "foo" = "bar"
  "baz" = "bar"
}

Where the keys are different, but they contain the same value

1
Gareth avatar

In terms of which key to keep, it doesn’t actually matter in my case.

Gareth avatar

Although, in your above example I’d say the key was on the left. So, more your second option than the one directly above.

{
  "key1" = "url1"
  "key2" = "url1"
  "key3" = "url3"
}

So what I’d like to get to is

{
  "key1" = "url1"
  "key3" = "url3"
}
roth.andy avatar
roth.andy

right

roth.andy avatar
roth.andy

yep, we’re on the same page

roth.andy avatar
roth.andy

I’m thinking something with the merge function https://www.terraform.io/docs/configuration/functions/merge.html

merge - Functions - Configuration Language - Terraform by HashiCorp

The merge function takes an arbitrary number maps or objects, and returns a single map or object that contains a merged set of elements from all arguments.

roth.andy avatar
roth.andy

still pondering

Gareth avatar

I wondered if I could have done something via a for loop and contains, but that only looks to work if I make a copy of the map and then basically loop over it and add an entry to map3 if the value is not contained in map2, as TF complains if I try to refer back to myself while iterating through the loop. This might be the only way to do it; it just didn’t feel the most optimal or even correct approach. I’ve a bad habit of overcomplicating things when a simpler answer exists.
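
One approach that appears to work without the map referring back to itself is the for-expression grouping operator: invert the map so values become keys (grouping the original keys into a list), then keep one key per distinct value. A minimal sketch using the example map from above:

locals {
  stuff = {
    key1 = "url1"
    key2 = "url1"
    key3 = "url3"
  }

  # group original keys by value: { url1 = ["key1", "key2"], url3 = ["key3"] }
  keys_by_value = { for k, v in local.stuff : v => k... }

  # keep the first key for each distinct value: { key1 = "url1", key3 = "url3" }
  deduped = { for v, ks in local.keys_by_value : ks[0] => v }
}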

roth.andy avatar
roth.andy

you stumped me

Gareth avatar

I’ve stumped myself on every twist and turn with terraform. So, it’s not an unusual feeling for me. Thank you for taking the time to make some suggestions.

loren avatar

i would stop and ask, why is my data model like this, and can i rework the data model to function better in the context of the tooling?

loren avatar

but, you can probably do something with the functions for set math. that can tell you which values are in both sets, which values are missing from a set, etc…

loren avatar
setsubtract - Functions - Configuration Language - Terraform by HashiCorp

The setsubtract function returns a new set containing the elements from the first set that are not present in the second set

Gareth avatar

Hi Loren, as ever you make a very valid point regarding the data set. I think the short answer is… it’s 100% my fault. I have a data structure (map(object)) that I loop over to gather some information, basically just 3 configurations that represent multiple sites

stuff = { for config_key, config in var.site_configs2 : config_key => config.s3_config.s3_bucketname if config.s3_config.s3_create == true }

stuff = {
   authoring   = "assets.mydomainname.test"
   cms         = "assets.mydomainname.test"
   maintenance = "assets.maintenance.mydomainname.test"
}

Some websites are only host headers but have some things that access S3, so they have the same s3 bucket specified as another config. Setting the if config.s3_config.s3_create == true has worked around the problem, as I can set it to false on the sites that are just headers, but I’m not sure I’ll be able to do that forever. So I was looking to simply remove the duplicates.

The key is important only because the for_each creating the s3 bucket should be named based on the key, so I can easily reference it by the known name of the key when creating other resources.

Sorry, hope that makes some level of sense in terms of the explanation, though probably not my rationale for structuring it the way I have. I’ll have a look at setsubtract, thank you for the advice.

Gareth avatar

On a different note to the above questions: can anyone please tell me if it’s possible yet to do a for_each within a resource and have it dynamically change regions? Based on https://github.com/hashicorp/terraform/issues/19932 it looks like it’s still not possible, but I was wondering if anybody has seen a workaround with TF 0.14?

Instantiating Multiple Providers with a loop · Issue #19932 · hashicorp/terraform

Current Terraform Version Terraform v0.11.11 Use-cases In my current situation, I am using the AWS provider so I will scope this feature request to that specific provider, although this may extend …

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s not possible

1
Gareth avatar

Thanks for confirming Alex

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

any lambda cloudwatch experts able to tell me why my lambda does not fire when my DB gets created …

resource "aws_cloudwatch_event_rule" "harbor_rds_creation_or_modification_event" {
  name        = "${var.team_prefix}-${var.environment}-harbor-db-event"
  description = "Capture any event related to the ${var.team_prefix}-${var.environment} harbor database."

  event_pattern = <<PATTERN
{
  "source": [
    "aws.rds"
  ],
  "resources": [
    "${module.harbor_postgres.database_arn}"
  ],
  "detail-type": [
    "RDS DB Instance Event",
    "RDS DB Cluster Event"
  ]
}
PATTERN
}

resource "aws_cloudwatch_event_target" "harbor_rds_creation_or_modification_event" {
  rule = aws_cloudwatch_event_rule.harbor_rds_creation_or_modification_event.name
  arn  = module.harbor_lambda.arn
}

resource "aws_lambda_permission" "harbor_rds_creation_or_modification_event" {
  statement_id  = "Allow-Harbor-Database-Provisioner-Execution-From-Cloud-Watch-Event"
  action        = "lambda:InvokeFunction"
  function_name = module.harbor_lambda.name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.harbor_rds_creation_or_modification_event.arn
}
Babar Baig avatar
Babar Baig

@Steve Wade (swade1987) did you try the event pattern that I shared in the previous thread?

Matt Gowie avatar
Matt Gowie

TIL that override.tf is a special file in Terraform: https://www.terraform.io/docs/configuration/override.html

Override Files - Configuration Language - Terraform by HashiCorp

Override files allow additional settings to be merged into existing configuration objects.

RB avatar

wow that would be confusing to debug if you didn’t know this was a thing

Matt Gowie avatar
Matt Gowie

Yeah for real. I wouldn’t really want to use it, honestly. Seems like a way to put a bandaid on a larger problem.

loren avatar

imagine an environment without internet access, where module source urls need to be overridden to point at an internally accessible git remote…

Matt Gowie avatar
Matt Gowie

That’s a good example… have you used this before / needed that pattern Loren? Do you not check in your override.tf in that case?

loren avatar

i check it in my root, not in public modules

loren avatar

and yes, exactly this use case
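
A sketch of that pattern, assuming override files can replace a module’s source (the internal mirror URL is hypothetical, and terraform init must be re-run after adding the override):

# main.tf in the root configuration
module "vpc" {
  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.18.1"
}

# override.tf, checked in for the internet-restricted environment
module "vpc" {
  source = "git::https://git.internal.example.com/mirrors/terraform-aws-vpc.git?ref=tags/0.18.1"
}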

Matt Gowie avatar
Matt Gowie

Interesting

Alex Jurkiewicz avatar
Alex Jurkiewicz

Wow. Is this newish?

loren avatar

nah, been using it myself since tf 0.11

1
Garth avatar

Hi All. Question about the use of the https://github.com/cloudposse/terraform-aws-cloudformation-stack module. I’m trying to use some values in the parameters key-value map that are from local variables, e.g.

module "ecs_cloudwatch_prometheus" {
  source = "git::<https://github.com/cloudposse/terraform-aws-cloudformation-stack.git?ref=tags/0.4.1>"

  enabled            = true
  namespace          = "eg"
  stage              = var.env_name
  name               = "cloudwatch-prometheus"
  template_url       = "<https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/ecs-task-definition-templates/deployment-mode/replica-service/cwagent-prometheus/cloudformation-quickstart/cwagent-ecs-prometheus-metric-for-awsvpc.yaml>"

  parameters         = {
    ECSClusterName = "${var.env_name}-ecs-cluster"
    CreateIAMRoles = false
    ECSLaunchType = "fargate"
    SecurityGroupID = "${local.security_group_ids}"
    SubnetID = "${local.subnet_ids}"
    TaskRoleName = var.env_name == "production" ? "ecs_task_execution_role" : "${var.env_name}_ecs_task_execution_role"
    ExecutionRoleName = var.env_name == "production" ? "ecs_role" : "${var.env_name}_ecs_role"
  }

  capabilities = ["CAPABILITY_IAM"]
}

but I get the error

The given value is not suitable for child module variable "parameters" defined
at .terraform/modules/ecs_cloudwatch_prometheus/variables.tf:71,1-22: element
"SecurityGroupID": string required.

Perhaps I’m just misunderstanding how to use that key-value map. Could someone take a look at my syntax and see if there is an obvious problem? Thank you!

cloudposse/terraform-aws-cloudformation-stack

Terraform module to provision CloudFormation Stack - cloudposse/terraform-aws-cloudformation-stack

github140 avatar
github140

Hi @Garth, just by the variable name it looks like you have an array instead of a single item. What’s in local.security_group_ids?

Garth avatar

Here are my locals:

locals {
  security_group_ids = concat(module.networking.security_groups_ids, [module.rds.db_access_sg_id])
  subnet_ids = module.networking.private_subnets_ids
}
Garth avatar

do i need to format the lists as strings that the cf template is expecting?

Garth avatar

et voila! a join appears to do it.
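
The fix presumably looks like this, since CloudFormation list-typed parameters accept comma-delimited strings:

  parameters = {
    # join the lists into the comma-delimited strings CloudFormation expects
    SecurityGroupID = join(",", local.security_group_ids)
    SubnetID        = join(",", local.subnet_ids)
    # ...other parameters unchanged
  }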

Garth avatar

Thanks @github140 for the hint!

btai avatar

anyone run into cycle errors running terraform destroy with eks + the k8s/helm providers? If I first remove the helm/k8s resources by running terraform apply with them removed, I can subsequently run terraform destroy on the eks cluster/workers. I’m still running terraform 0.12.29

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sounds like you might benefit from 0.13 depends_on for modules

btai avatar

@Erik Osterman (Cloud Posse) looks like people are running into a lot of cycle issues in 0.13 too. I’ve been running this setup for a while now (deploy cluster + deploy helm/k8s resources in a single terraform apply + destroy without a problem) and it has historically worked great. Seems like a possible regression towards the end of 0.12.x that’s also leaked into 0.13?

2020-12-08

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here PSA: we’re working on some of the underlying scaffolding and tooling to better support 0.14 and future updates. 0.13 was a big pain, and we learned a lot. A few things are happening behind the scenes:

• We’re switching everything on our side over to the terraform registry notation so we can use renovatebot (which doesn’t like our ref=tags/... format).
• In order to support the registry notation, we needed to update our test-harness to support bats (this is done, but is not backwards compatible).
• We’re adding mergify to quickly merge automated PRs, but were quickly blocked by the next hurdle: mergify cannot be a CODEOWNER because it’s a GitHub App. So we need to upgrade our account. Working on that (but it’s expensive!)
• We’ve added the make targets to quickly convert a module to use the “new” (but not so new) provider and registry notation. But it turns out many have trouble running this with the build-harness natively, so we’re going to add a make docker/shell target to run it in a container and mount the cwd into the container. This will also help with make readme and other things like it.
• We’ve added the github actions to automatically rebuild the README nightly, but it’s blocked on the mergify issue above.
• We’ve added the github actions to automatically update the context.tf from the central copy in terraform-null-label.
• We’ve added the make target to the build-harness which will update the lower-bound pinning for modules pinned to >= 0.12 to be >= 0.12.26 (to support the new provider syntax).
• We’ve drafted the renovatebot configuration to automatically update module pinning and run tests, then merge when they pass.

All of this work will make all future upgrades of terraform breezy. Unfortunately, with so many changes, we ran into the inevitable rough edges.

This is all to say, we’re currently blocked on testing PRs because tests are failing due to our changes. We’re working to fix those. ETA is by end-of-the-week.

1
1
Matt Gowie avatar
Matt Gowie

Does anyone do terraform tests against their root modules? I’m assuming no, but if anyone is I’d like to hear your experience.

jose.amengual avatar
jose.amengual

yes

loren avatar

kinda? depends on what you mean by “terraform tests”…

Chris Wahl avatar
Chris Wahl

Yes, indeed.

Matt Gowie avatar
Matt Gowie

@jose.amengual @Chris Wahl do you derive value out of those tests? Are you going about it with the terratest process? Do you require passing before applying those root modules?

Matt Gowie avatar
Matt Gowie

@loren referring to terratest tests I guess, or tests from another terraform testing framework.

loren avatar

ahh, no. our roots are composed of modules, with logic tied together using locals or data sources… no actual resources. we use terratest to exercise each module independently.

Matt Gowie avatar
Matt Gowie

That’s what I expected folks to do — Test your reusable modules, but root modules don’t get tested other than actually being used.

loren avatar

because there can be issues threading modules together with that logic, we do have a “mock” account for exercising the root config… the ci pipeline runs a plan against all accounts when a pr is opened for review. when merged to the main branch, the ci runs the apply against the mock account. if that succeeds, and if the “new release” condition is present, then it tags the repo. the ci pipeline picks up the tag event and runs the apply on all accounts

loren avatar

that review pipeline only has read permissions, so it can’t inadvertently do anything

jose.amengual avatar
jose.amengual

We are using terratest and implementing aws-nuke; the idea is to build e2e integration tests

jose.amengual avatar
jose.amengual

value=the thing works and here is the report

Matt Gowie avatar
Matt Gowie

gotcha. Interesting, thanks gents.

jose.amengual avatar
jose.amengual

you can’t ask us a question without telling us why you are asking

jose.amengual avatar
jose.amengual

otherwise we charge for answers

Matt Gowie avatar
Matt Gowie

Hahah just wondering what folks in this community do. I’m trying to get my mind around this for a client.

jose.amengual avatar
jose.amengual

ahh ok

Chris Wahl avatar
Chris Wahl

Similar to @loren - our root modules are mostly just calling other modules. I use tflint to check the version being snagged (e.g. “latest” versus a branch / version) and another test just to make sure the module path is still valid (based on a previous issue where someone moved a GitLab project and broke everything).

1
Matt Gowie avatar
Matt Gowie
06:21:38 PM

From Terraform 0.14 webinar — Lightly confirming we’re getting 1.0 after 0.15?

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ohhhhhh interesting

Release notes from terraform avatar
Release notes from terraform
06:24:11 PM

v0.14.1 0.14.1 (December 08, 2020) ENHANCEMENTS: backend/remote: When using the enhanced remote backend with commands which locally modify state, verify that the local Terraform version and the configured remote workspace Terraform version are compatible. This prevents accidentally upgrading the remote state to an incompatible version. The check is skipped for commands which do not write state, and can also be disabled by the use of a new command-line flag, -ignore-remote-version. …

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
08:05:48 PM

Any github actions users starting to get a weird error? (started over the past hour or so)

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
Looks like the latest release (15 mins ago) has some issues? · Issue #75 · hashicorp/setup-terraform

Run hashicorp/setup-terraform@v1 internal/modules/cjs/loader.js:800 throw err; ^ Error: Cannot find module 'asn1.js' Require stack: - /home/runner/work/_actions/hashicorp/setup-terraform/v1…

natalie avatar
natalie

Hello, very general question ( just curious) if anyone here used/heard about Terraboard (https://github.com/camptocamp/terraboard)? any thoughts you might have?

camptocamp/terraboard

A web dashboard to inspect Terraform States - camptocamp/terraboard

Release notes from terraform avatar
Release notes from terraform
09:34:14 PM

v0.14.2 0.14.2 (December 08, 2020) BUG FIXES: backend/remote: Disable the remote backend version compatibility check for workspaces set to use the “latest” pseudo-version. (#27199) providers/terraform: Disable the remote backend version compatibility check for the terraform_remote_state data source. This check is unnecessary, because the…

alert2
Chris Wahl avatar
Chris Wahl

Busy day for TF releases

cabrinha avatar
cabrinha

hello all

cabrinha avatar
cabrinha

I’m having an issue running some TF code from this PR: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1138

feat: enable default launch template by ArchiFleKs · Pull Request #1138 · terraform-aws-modules/terraform-aws-eks

Signed-off-by: Kevin Lefevre [email protected] PR o'clock Description Enable the creation of a default launch template if needed to use with managed node pool. This enable the use of kube…

cabrinha avatar
cabrinha
Error: Invalid for_each argument

  on ../../../modules/terraform-aws-eks/modules/node_groups/launchtemplate.tf line 2, in data "template_file" "workers_userdata":
   2:   for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
cabrinha avatar
cabrinha

the other guy says he can run the same code just fine … which is crazy — i tried the same version of TF that he’s on

loren avatar

are you both starting from a blank tfstate?

cabrinha avatar
cabrinha

pretty sure he is — let me check mine

cabrinha avatar
cabrinha

my $ terraform state list is empty

loren avatar

this kind of error is more common from a blank tfstate… double check the other one

cabrinha avatar
cabrinha

how can i get rid of my blank tfstate?

cabrinha avatar
cabrinha

or … more importantly, how do i avoid the error?

loren avatar

once you apply, tfstate will not be empty

cabrinha avatar
cabrinha

apply fails too

loren avatar

the error is telling you, use -target to apply dependent resources that make up your for_each expression

cabrinha avatar
cabrinha

dependent …

cabrinha avatar
cabrinha

i’ll try that

loren avatar

basically, you have something in local.node_groups_expanded making it such that your k value is not known during the plan

loren avatar

if k is not known in the plan phase, then terraform cannot determine the resource label, and it fails with this error

cabrinha avatar
cabrinha

so terraform apply -target=module.eks.modules.node_groups ?

cabrinha avatar
cabrinha

that ran, applied nothing, still same error

loren avatar

i can’t give you the answer, i can only describe the condition under which that error occurs

cabrinha avatar
cabrinha

using more specific targets just results in help message being printed

Alex Jurkiewicz avatar
Alex Jurkiewicz

what do you mean by “more specific targets”?

cabrinha avatar
cabrinha

terraform apply -target='module.eks.modules.node_groups.aws_launch_template.workers'

cabrinha avatar
cabrinha
feat: enable default launch template by ArchiFleKs · Pull Request #1138 · terraform-aws-modules/terraform-aws-eks

Signed-off-by: Kevin Lefevre [email protected] PR o'clock Description Enable the creation of a default launch template if needed to use with managed node pool. This enable the use of kube…

cabrinha avatar
cabrinha

i only have 1 module defined in my main.tf: module.eks. that module contains another, called node_groups

cabrinha avatar
cabrinha

$ terraform apply -target='module.eks.module.node_groups' results in the same error

Alex Jurkiewicz avatar
Alex Jurkiewicz

seems weird. The value of local.node_groups_expanded seems to only depend on variables and static references

1
cabrinha avatar
cabrinha

yeah … so what the heck lol

cabrinha avatar
cabrinha

is there something wrong with this syntax? for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }

loren avatar

syntax looks fine to me

cabrinha avatar
cabrinha

node_groups_expanded also has a for k, v thing going on

cabrinha avatar
cabrinha

can anyone else try running this for me?

Alex Jurkiewicz avatar
Alex Jurkiewicz

try opening terraform console and see what value local.node_groups_expanded has

cabrinha avatar
cabrinha

once i run terraform console, then what?

loren avatar

one thing you could try is getting rid of the template_file data source… you ought to be able to use the function templatefile() directly

loren avatar
user_data = base64encode(templatefile("${path.module}/templates/userdata.sh.tpl", {
  kubelet_extra_args = each.value["kubelet_extra_args"]
}))
loren avatar

template_file is deprecated anyway…

cabrinha avatar
cabrinha

now it’s just complaining about the next block that uses that for_each

cabrinha avatar
cabrinha
Error: Invalid for_each argument

  on terraform-aws-eks/modules/node_groups/launchtemplate.tf line 8, in resource "aws_launch_template" "workers":
   8:   for_each               = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
loren avatar

progress!

1
cabrinha avatar
cabrinha

“Terraform cannot predict how many instances will be created.”

I feel like we could easily count how many will be created using a function or something

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

@loren @cabrinha get a ~room~ thread

2
cabrinha avatar
cabrinha

help me not hit this bug pls

2020-12-09

sheldonh avatar
sheldonh

Is anyone using the GitHub pull request comment feature with the terraform CLI? Digging the preview in the PR.

I do want to figure out if I could use the GitHub deployments feature in Actions to make that work smoother on the final merge and approval review.

That is, after merge to master I want the final plan to be approved. Right now I have it trigger a run to approve in Terraform Cloud, but because the call is synchronous, unless it is promptly resolved it will error with a timeout.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I’d be interested in this too. We’re building a github action for TF security review and it will be commenting on the PR too.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yesterday we released our catalog for the full suite of managed AWS Config rules (including those for CIS). https://github.com/cloudposse/terraform-aws-config https://github.com/cloudposse/terraform-aws-config/tree/master/catalog

cloudposse/terraform-aws-config

This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config


4
2
Mr.Devops avatar
Mr.Devops

Hi, I’m using terraform cloud and have published a private module for my org. Are there any tips on best practice for referencing the module from a separate repo that I use only to apply variables via the tfe_variable resources?

Mr.Devops avatar
Mr.Devops

E.g. if I commit my changes for all tfe_variable resources for my workspaces, how can I have my repo reference my private module?

Perry Luo avatar
Perry Luo

I use the AWS Redshift Terraform module, https://github.com/terraform-aws-modules/terraform-aws-redshift, and got the error below:

Error: InvalidClusterSubnetGroupStateFault: Vpc associated with db subnet group redshift-subnet-group does not exist.

Per the documentation: redshift_subnet_group_name: The name of a cluster subnet group to be associated with this cluster. If not specified, new subnet will be created. I use the module terraform-aws-modules/vpc/aws to provision a VPC with the following subnets:

  private_subnets      = var.private_subnets
  public_subnets       = var.public_subnets
  database_subnets     = var.database_subnets
  elasticache_subnets  = var.elasticache_subnets
  redshift_subnets     = var.redshift_subnets

Below is the redshift code:

module "redshift" {
  source  = "terraform-aws-modules/redshift/aws"
  version = "2.7.0"

  redshift_subnet_group_name = var.redshift_subnet_group_name
  subnets                    = data.terraform_remote_state.vpc.outputs.redshift_subnets
  cluster_identifier         = var.cluster_identifier
  cluster_database_name      = var.cluster_database_name
  encrypted                  = false
  cluster_master_password    = var.cluster_master_password
  cluster_master_username    = var.cluster_master_username
  cluster_node_type          = var.cluster_node_type
  cluster_number_of_nodes    = var.cluster_number_of_nodes
  enhanced_vpc_routing       = false
  publicly_accessible        = true
  vpc_security_group_ids     = [module.sg.this_security_group_id]
  final_snapshot_identifier  = var.final_snapshot_identifier
  skip_final_snapshot        = true
}

The error is gone if I comment out the line redshift_subnet_group_name = var.redshift_subnet_group_name. But why?

terraform-aws-modules/terraform-aws-redshift

Terraform module which creates Redshift resources on AWS - terraform-aws-modules/terraform-aws-redshift

2020-12-10

Perry Luo avatar
Perry Luo

I got errors below:

terraform validate

Error: Unsupported block type

  on .terraform/modules/elasticsearch/main.tf line 105, in resource "aws_elasticsearch_domain" "default":
 105:   advanced_security_options {

Blocks of type "advanced_security_options" are not expected here.
Error: Unsupported argument

  on .terraform/modules/elasticsearch/main.tf line 139, in resource "aws_elasticsearch_domain" "default":
 139:     warm_enabled             = var.warm_enabled

An argument named "warm_enabled" is not expected here.


Error: Unsupported argument

  on .terraform/modules/elasticsearch/main.tf line 140, in resource "aws_elasticsearch_domain" "default":
 140:     warm_count               = var.warm_enabled ? var.warm_count : null

An argument named "warm_count" is not expected here.


Error: Unsupported argument

  on .terraform/modules/elasticsearch/main.tf line 141, in resource "aws_elasticsearch_domain" "default":
 141:     warm_type                = var.warm_enabled ? var.warm_type : null

An argument named "warm_type" is not expected here.

[terragrunt] 2020/12/10 14:11:49 Hit multiple errors:

Here are the code: main.tf:

module "elasticsearch" {
  source                  = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"

  security_groups                = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  vpc_id                         = data.terraform_remote_state.vpc.outputs.vpc_id
  subnet_ids                     = data.terraform_remote_state.vpc.outputs.private_subnets
  zone_awareness_enabled         = var.zone_awareness_enabled
  elasticsearch_version          = var.elasticsearch_version
  instance_type                  = var.instance_type
  instance_count                 = var.instance_count
  encrypt_at_rest_enabled        = var.encrypt_at_rest_enabled
  dedicated_master_enabled       = var.dedicated_master_enabled
  create_iam_service_linked_role = var.create_iam_service_linked_role
  kibana_subdomain_name          = var.kibana_subdomain_name
  ebs_volume_size                = var.ebs_volume_size
  #dns_zone_id                    = var.dns_zone_id
  kibana_hostname_enabled        = var.kibana_hostname_enabled
  domain_hostname_enabled        = var.domain_hostname_enabled

  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  context = module.this.context
}

context.tf:

Alex Jurkiewicz avatar
Alex Jurkiewicz

Probably your aws provider version is too old

Perry Luo avatar
Perry Luo
provider "aws" {
  version = "2.55.0" 
  region  = var.region
}
Perry Luo avatar
Perry Luo

I changed aws provider to 3.20.0. It solves the problem.

Perry Luo avatar
Perry Luo
module "this" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.22.0"

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit

  context = var.context
}

# Copy contents of cloudposse/terraform-null-label/variables.tf here

variable "context" {
  type = object({
    enabled             = bool
    namespace           = string
    environment         = string
    stage               = string
    name                = string
    delimiter           = string
    attributes          = list(string)
    tags                = map(string)
    additional_tag_map  = map(string)
    regex_replace_chars = string
    label_order         = list(string)
    id_length_limit     = number
  })
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT
}

variable "enabled" {
  type        = bool
  default     = true
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = "dev"
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = "dev-blue"
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = "dev-blue"
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = "es-nsm-blue"
  description = "Solution name, e.g. 'app' or 'jenkins'"
}

variable "delimiter" {
  type        = string
  default     = "-"
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters.
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
}
michaelssingh avatar
michaelssingh

Is it possible to retrieve the ARN of a specific key in a secret using aws_secretsmanager_secret or the aws_secretsmanager_secret_version data source?

michaelssingh avatar
michaelssingh

AWS docs say that the ARN to a specific key can be constructed this way

"arn:aws:secretsmanager:region:aws_account_id:secret:example-secret:example-key::"
michaelssingh avatar
michaelssingh

I’m curious if it’s possible to just reference the ARN via a datasource

michaelssingh avatar
michaelssingh

Rather than constructing a string myself, eg:

data "aws_secretsmanager_secret" "this" {
  name = var.secrets_manager_secret
}
locals {
  example_service_token_secret_arn = data.aws_secretsmanager_secret.this.arn
}
valueFrom = "${local.example_service_token_secret_arn}::example_service_token::"
Milindu Kumarage avatar
Milindu Kumarage
03:36:39 AM

Hi all, I’m using cloudposse/terraform-aws-elasticache-redis and stuck with this error. I have no idea how to move forward. I tried with TF 1.35.5 and getting state snapshot was created by Terraform v0.14.0, which is newer than current v0.13.5 error. How can I resolve this issue?

jose.amengual avatar
jose.amengual

you used tf 0.14 to do init ?

jose.amengual avatar
jose.amengual

or 0.13.5?

jose.amengual avatar
jose.amengual

if you switch you need to remove the .terraform directory before you try init again

Milindu Kumarage avatar
Milindu Kumarage

I tried different ways, not knowing what to do, now not sure which version the init happened last. :sweat_smile: I’ll try deleting the .terraform and doing an init again

Milindu Kumarage avatar
Milindu Kumarage

I deleted the .terraform  and did an init again with 1.13.5 but still getting the state snapshot was created by Terraform v0.14.0, which is newer than current v0.13.5 error.

jose.amengual avatar
jose.amengual

so your state was upgraded

jose.amengual avatar
jose.amengual

because you used 0.14

jose.amengual avatar
jose.amengual

you need to go back to 0.14

jose.amengual avatar
jose.amengual

it is not 1.13.5 it is 0.13.5 FYI

1
Milindu Kumarage avatar
Milindu Kumarage
06:34:35 AM

On 0.14 I’m getting this error as I posted

Milindu Kumarage avatar
Milindu Kumarage

We are using cloudposse/terraform-aws-elasticache-redis

jose.amengual avatar
jose.amengual

so you need to push a PR to relax the provider version

jose.amengual avatar
jose.amengual
Fix aws provider version for latest terraform 0.13 by reixd · Pull Request #29 · cloudposse/terraform-aws-acm-request-certificate

what Updated the required provider versions to get this module working with the latest terraform 0.13 release why Without this patch this module does not work with terraform 0.13.4

Milindu Kumarage avatar
Milindu Kumarage

This one is supposed to relax the provider version of cloudposse/terraform-aws-elasticache-redis, right? It’s not getting merged

2020-12-11

breanngielissen avatar
breanngielissen

Hi everyone, we use cloudposse/terraform-aws-route53-cluster-zone and are seeing a race condition between the NS record created by Terraform and the NS record created by AWS. Has anyone else run into that? Is there a reason not to use the AWS-created NS records? You could achieve management over the resource by doing a data import instead of a creation.

breanngielissen avatar
breanngielissen

We added allow_overwrite to the NS record and this solved it.
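
For anyone else hitting this: allow_overwrite is an argument on aws_route53_record that lets Terraform adopt the NS record AWS auto-creates with the hosted zone instead of racing it. A minimal sketch, with hypothetical resource names:

resource "aws_route53_record" "ns" {
  zone_id         = aws_route53_zone.this.zone_id
  name            = aws_route53_zone.this.name
  type            = "NS"
  ttl             = 30
  records         = aws_route53_zone.this.name_servers
  allow_overwrite = true # take over the auto-created NS record rather than failing on conflict
}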

melissa Jenner avatar
melissa Jenner

I provisioned Elasticsearch. I got URL outputs for “domain_endpoint”, “domain_hostname”, “kibana_endpoint” and “kibana_hostname”. But I cannot hit any of these URLs; I get “This site can’t be reached”. Below is the code:

main.tf:

module "elasticsearch" {
  source                  = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
  security_groups                = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  vpc_id                         = data.terraform_remote_state.vpc.outputs.vpc_id
  zone_awareness_enabled         = var.zone_awareness_enabled
  subnet_ids                     = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
  elasticsearch_version          = var.elasticsearch_version
  instance_type                  = var.instance_type
  instance_count                 = var.instance_count
  encrypt_at_rest_enabled        = var.encrypt_at_rest_enabled
  dedicated_master_enabled       = var.dedicated_master_enabled
  create_iam_service_linked_role = var.create_iam_service_linked_role
  kibana_subdomain_name          = var.kibana_subdomain_name
  ebs_volume_size                = var.ebs_volume_size
  dns_zone_id                    = var.dns_zone_id
  kibana_hostname_enabled        = var.kibana_hostname_enabled
  domain_hostname_enabled        = var.domain_hostname_enabled
  allowed_cidr_blocks            = ["0.0.0.0/0"]
  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }
  context = module.this.context
}

terraform.tfvars:

enabled = true
region = "us-west-2"
namespace = "dev"
stage = "abcd"
name = "abcd"
instance_type = "m5.xlarge.elasticsearch"
elasticsearch_version = "7.7"
instance_count = 2
zone_awareness_enabled = true
encrypt_at_rest_enabled = false
dedicated_master_enabled = false
elasticsearch_subdomain_name = "abcd"
kibana_subdomain_name = "abcd"
ebs_volume_size = 250
create_iam_service_linked_role = false
dns_zone_id = "Z08006012JKHYUEROIPAD"
kibana_hostname_enabled = true
domain_hostname_enabled = true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is provisioned in private subnets

slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
Joe Niland avatar
Joe Niland

I believe you need to configure it without a VPC to allow public access, otherwise you need to create a reverse proxy or use a VPN.

If you do public, make sure you vary the access policy and/or IP restrictions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

private subnets can’t be accessed from the internet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, agree with @Joe Niland

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make sure you protect it (with IP restrictions or password) if you open it to the internet. The AWS IPs are constantly scanned by bots, and your cluster will be hacked in minutes

this1
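
In module terms, that usually means tightening allowed_cidr_blocks rather than leaving it open, e.g. (with a hypothetical VPN egress range):

allowed_cidr_blocks = ["203.0.113.0/24"] # office/VPN egress range instead of 0.0.0.0/0
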
melissa Jenner avatar
melissa Jenner

How about enabling NAT? Will it solve the problem?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

NAT is from the subnets to the internet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so no, you will not be able to access the cluster behind NAT

melissa Jenner avatar
melissa Jenner

So, I need to provision ES with no VPC.

melissa Jenner avatar
melissa Jenner

How about I place ES in the public subnets? It should solve the problem?

melissa Jenner avatar
melissa Jenner

What is the best practice? If you were provisioning ES, how would you do it?

Joe Niland avatar
Joe Niland

It depends on the access requirements. What are you trying to do?

melissa Jenner avatar
melissa Jenner

The ideal case is to be able to access ES behind a VPN.

melissa Jenner avatar
melissa Jenner

I login to my company via VPN.

melissa Jenner avatar
melissa Jenner

And I provision ES.

melissa Jenner avatar
melissa Jenner

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "${resource_arn}/*"
    }
  ]
}

melissa Jenner avatar
melissa Jenner

Will it solve the problem if I attach this policy?

Joe Niland avatar
Joe Niland

That’s at a different layer. First you need to get access to it at the network level. If you’re using a VPN to connect to your VPC that should be ok, although ideally you would lock down the permissions to certain roles or whatever.

melissa Jenner avatar
melissa Jenner

Yes, I am using a VPN to connect to my VPC. How do I add the access policy I posted above as Terraform code?

melissa Jenner avatar
melissa Jenner

I added the line below:

iam_role_arns = ["*"]

But I got an error:

module.elasticsearch.aws_elasticsearch_domain_policy.default[0]: Creating...

Error: InvalidTypeException: Error setting policy: [{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Resource": [ "arn:aws:es:…:12345678:domain/abcd-domain/*", "arn:aws:es:…:12345678:domain/abcd-domain" ], "Principal": { "AWS": [ "arn:aws:iam::…:role/abcd-domain-user", "*" ] } } ] }]

2020-12-12

Babar Baig avatar
Babar Baig

Hi wave I am working on a project where I need to deploy a Ruby application along with the infrastructure (created from Terraform) on ECS. I am using a CircleCI pipeline. A pipeline job creates the infra (RDS, Redis, ECR, and my ECS services and cluster) through the Terraform CLI. Now I have a requirement that whenever the application’s environment variables change, I want to deploy new infrastructure so that each application runs separately. The problem I’m facing is that the S3 backend configuration is not dynamic. If somehow I could provide the S3 key dynamically, the state file for each application could be maintained separately.

Simply put, my use case is that whenever the CircleCI pipeline is triggered, based on the environment variable file it either makes a new deployment along with the infrastructure (if the file is new) or simply updates the old infra and deployment.

terraform {
  backend "s3" {
    encrypt = true
    key     = "./tfstates/staging/${var.something_dynamic}/ecr/terraform.tfstate"
    region  = "eu-west-1"
    bucket  = "mys3bucket"
    profile = "default"
  }
}
Tom Dugan avatar
Tom Dugan

Very cool set up, I think you can solve your problem using backend partial config. https://www.terraform.io/docs/backends/config.html

Backends: Configuration - Terraform by HashiCorp

Backends are configured directly in Terraform files in the terraform section.
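
A minimal sketch of the partial-configuration approach for the use case above: leave key out of the committed backend block and supply it at init time (APP_NAME here is a hypothetical variable set by the CircleCI job):

terraform {
  backend "s3" {
    encrypt = true
    region  = "eu-west-1"
    bucket  = "mys3bucket"
    profile = "default"
  }
}

# supplied per pipeline run:
# terraform init -backend-config="key=tfstates/staging/${APP_NAME}/ecr/terraform.tfstate"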

Babar Baig avatar
Babar Baig

Exactly the thing I was looking for. I need to read documentation more and more. Thanks @Tom Dugan

michaelssingh avatar
michaelssingh

Consider the following object,

variable "data_sources" {
  type = list(object{
    environment = string
    url = string
  })

  description = "An object containing data source URLs per environment"

  default = [
    {
      beta = "jdbc:<postgresql://1.1.1.1:5432/db>"
    }
  ]
}

I am attempting to retrieve the value of the URL and assign it to a local based on a user supplied variable called environment

michaelssingh avatar
michaelssingh

Digging through the various Terraform function documentation, there don’t appear to be many functions that operate on objects.

michaelssingh avatar
michaelssingh

Any tips are welcomed.

michaelssingh avatar
michaelssingh
locals {
  data_source_url = var.environment != null ?
}

is as far as I have gotten. The idea here is: if it doesn’t match any of the values in data_sources.environment, fall back to beta, retrieve the value of url, and assign it to data_source_url

michaelssingh avatar
michaelssingh

If it does match an environment in the object, retrieve that value and assign it to data_sources.environment

michaelssingh avatar
michaelssingh

Is the object variable type the most optimal here?

michaelssingh avatar
michaelssingh

Does this require me creating a map of the data_sources.environment in order to do the comparison?

michaelssingh avatar
michaelssingh

This is what I came up with

data_source_urls        = var.data_source_urls
data_source_keys        = [for m in local.data_source_urls : lookup(m, "environment")]
data_source_values      = [for m in local.data_source_urls : lookup(m, "url")]
data_source_as_map      = zipmap(local.data_source_keys, local.data_source_values)
default_data_source_url = local.data_source_as_map["beta"]
data_source_url         = lookup(local.data_source_as_map, var.environment, local.default_data_source_url)
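
For what it’s worth, if the environment names are unique keys, a plain map(string) collapses all of the above into a single lookup; a sketch under that assumption:

variable "data_source_urls" {
  type        = map(string)
  description = "Data source URL per environment"
  default = {
    beta = "jdbc:postgresql://1.1.1.1:5432/db"
  }
}

locals {
  # falls back to the beta URL when var.environment has no entry
  data_source_url = lookup(var.data_source_urls, var.environment, var.data_source_urls["beta"])
}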

2020-12-13

2020-12-14

Mikhail Naletov avatar
Mikhail Naletov

Hi Cloudposse! Could someone explain why we have a pinned github provider version? This pinned version blocks using count for the module, and for modules using this module as a dependency (ecs-codepipeline, ecs-web-app, ecs-service-web-task, etc): https://github.com/cloudposse/terraform-github-repository-webhooks/blob/master/versions.tf#L5

cloudposse/terraform-github-repository-webhooks

Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks

Joe Niland avatar
Joe Niland
Anonymous access flag deprecation? · Issue #502 · terraform-providers/terraform-provider-github

Terraform Version Terraform version: 0.12.28 Provider version: 2.9.0 Affected Resource(s) Provider configuration Terraform Configuration Files terraform { required_version = "~> 0.12.0"…

Joe Niland avatar
Joe Niland
fix: ensure provider does not use version >= 2.9 by jhosteny · Pull Request #20 · cloudposse/terraform-github-repository-webhooks

The GitHub provider introduced a breaking change with the minor bump to version 2.9.0, by removing a number of configuration options. This included the 'anonymous' flag, which is expected t…

Mikhail Naletov avatar
Mikhail Naletov

So, if we remove this flag and change this variable in all modules using github-repository-webhooks we will be able to unpin?

Joe Niland avatar
Joe Niland

perhaps… maybe a workaround is to set anonymous if token is absent, which is what it mentions here. Best way is to create a PR and test it.

Add v3.0.0 Breaking Changes To Provider Schema by jcudit · Pull Request #521 · terraform-providers/terraform-provider-github

Ahead of our next major release, this PR modifies the provider schema in the following ways: token becomes optional, with its absence signalling anonymous mode organization is no longer deprecated…

jose.amengual avatar
jose.amengual

You can thank GitHub for changing the API again; that is why there is this breaking change and we had to pin

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks Pepe - ya I think this was our temporary workaround until we could solve it a better way. Otherwise, we’re not about strict pinning like this.

this1
Jay avatar

Hi, I am trying to mask sensitive information from plan and apply output. I tried a couple of ways:

• using the sensitive keyword, but https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.35.0 is pinned at required_version = ">= 0.12.0, < 0.14.0" and the sensitive keyword is only available from >= 0.14.0

• tfmask doesn’t seem to work with resources or values which are lists (I am trying to mask the helm values variable). Any suggestions on how to go about this?

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sounds like you need to submit a pull request to allow the module to work with 0.14


1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, I think that’s your best bet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

PRs welcome. Post it in #pr-reviews for expedited reviews

Guy Elia avatar
Guy Elia

Hey Guys, a tiny PR with the required changes: https://github.com/cloudposse/terraform-aws-rds/pull/80

Upgrade dependency modules for support tf14 by guyelia · Pull Request #80 · cloudposse/terraform-aws-rds

what Upgrading dependency modules to versions that supporting Terraform 14 why For using this module with Terraform 14

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

@Jay <https://github.com/cloudposse/terraform-aws-rds-cluster> was updated to support TF 0.14. Please, give it a shot now

Jay avatar

@Maxim Mironenko (Cloud Posse) thanks I will check.

2020-12-15

jose.amengual avatar
jose.amengual

has anyone seen this before?

120:128: syntax error: A identifier can’t go after this “"”. (-2740)
aws-vault exec cloud-native-dev -- terraform validate
Success! The configuration is valid.

Terraform plan/apply work just fine

MattyB avatar

Did you copy/paste the command from somewhere? I’m thinking hidden quotes but idk

MattyB avatar

Not sure if it’s aws-vault throwing the error or what

jose.amengual avatar
jose.amengual

I have been trying to find the quotes

jose.amengual avatar
jose.amengual

this started happening after terraform 0.13upgrade was run

Joe Niland avatar
Joe Niland

@jose.amengual I think the error comes from osascript. Do you have anything weird in ~/.aws/config?

jose.amengual avatar
jose.amengual

mmmm

jose.amengual avatar
jose.amengual

let me see

jose.amengual avatar
jose.amengual

I had some empty duplicated profiles…

jose.amengual avatar
jose.amengual

I’m checking

jose.amengual avatar
jose.amengual

same error

Joe Niland avatar
Joe Niland

yeah, empty profiles should be fine

jose.amengual avatar
jose.amengual

I do not remember seeing this before

Gareth avatar

Hello, can anyone please help me find the correct syntax of a nested “for” to get this data structure into a for_each block?

test = {
  cms = {
    random_key_name1 = "[email protected]"
  },
  site2 = {
    unknown_key_name1 = "[email protected]"
    random_name3      = "[email protected]"
  }
}

The data I wish to use in the resource block is the first key, e.g. cms; then the second key, e.g. random_key_name1; then the value associated with random_key_name1, e.g. “[email protected]”.

I’ve been able to do similar before, but I’ve always known the names of the keys at the second level; this time the keys could be named anything. I know I need to do something like this but I just can’t find the right configuration.

[for mykey in keys(var.test) : {
  for k, v in var.test[mykey] : k => v }
]

Resource block

resource "aws_ssm_parameter" "mailFromAddress" {
  for_each = { CANT GET THE CORRECT FOR LOOP }
  name = format("/%s/%s", cms, random_key_name1)
  type = "String"
  value = each.value aka "[email protected]"
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

this means the email IDs need to be unique across all sites

Alex Jurkiewicz avatar
Alex Jurkiewicz

(I’m guessing your terminology here.)

Alex Jurkiewicz avatar
Alex Jurkiewicz

If that’s true, why not change the initial data structure to:

emails = {
  random_key_name1 = {
    site = "cms"
    email = "[email protected]"
  }
}

Then it’s easier:

resource "aws_ssm_parameter" "mailFromAddress" {
  for_each = local.emails
  name = "${each.value.site}/${each.key}"
  value = each.value.email
}
Gareth avatar

Hi Alex, The CMS, site2 would be unique, as would each of the keys at the second level. So, yes you could call them email IDs.

Gareth avatar

1 minute please, just writing a more detailed reply.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Here’s a solution to your exact structure, if you can’t do that:

> local.site_emails
{
  "blog" = {
    "blog_contactus" = "[email protected]"
    "blog_nodeply" = "[email protected]"
  }
  "cms" = {
    "cms_noreply" = "[email protected]"
  }
}
> merge([for site, emails in local.site_emails : { for email_id, email_addr in emails : email_id => email_addr } ]...)
{
  "blog_contactus" = "[email protected]"
  "blog_nodeply" = "[email protected]"
  "cms_noreply" = "[email protected]"
}
1
Gareth avatar

I think that might work on this simplified structure I created to ask the question but in reality my real structure is much bigger.

variable "site_configs" {
  type = map(object({
    configuration_name = string
    brand              = string
    primary_url        = string
    subdomains         = list(string)
    domain_name        = string
    environment        = string
    ses_config = object({
      create_ses_user          = bool
      allowed_email_recipients = list(string)
      mailFromAddress_config   = map(string)
    })
    cloudfront_config = object({
      create_cloudfront = bool
    })
    firewall_config = object({
      waf_enabled       = bool
      dedicated_waf_acl = bool
    })
    iis_config = object({
      iis_site_name = string
      site_type     = string
      site_language = string
    })
    s3_config = object({
      s3_create     = bool
      s3_bucketname = string
    })
    ec2_config = object({
      ec2_description = string
    })
    ssl_config = object({
      create_ssl_cert            = bool
      primary_url                = string
      subdomains                 = list(string)
      ssl_description            = string
      ssl_root_domain_names      = map(string)
      ssl_perform_validation     = bool
      ssl_validation_method      = string
      ssl_wait_for_validation    = bool
      ssl_allow_overwrite_dns    = bool
      ssl_transparency_logging   = bool
      ssl_certificate_import     = bool
      ssl_certificate_public_pem = string
      ssl_certificate_chain_pem  = string
      ssl_private_key_key        = string
    })
  }))

}

I’ve used

mailfromAddresses = { for config_key, config in var.site_configs : config_key => {
    for mykey, myvalue in config.ses_config.mailFromAddress_config : mykey => myvalue
    } if config.ses_config.create_ses_user == true
  }

to generate the simplified output I posted for this question, which is what I was going to use as the input to the for_each, subject to suggestions from here.

Changes to Outputs:
  + test = {
      + cms = {
          + mailFrom2       = "[email protected]"
          + mailFromAddress = "[email protected]"
        },
      + mysite2 = {
          + site2email      = "[email protected]"
          + mailFromAddress = "[email protected]"
        }

    }

The data from the mailFromAddress_config section of the site_config is later used within userdata to perform build transform within other configuration files.

Alex Jurkiewicz avatar
Alex Jurkiewicz

right. that makes sense. Then you can change the key to be a concatenation of site ID and email ID with something like

> merge([for site, emails in local.site_emails : { for email_id, email_addr in emails : "${site}-${email_id}" => email_addr } ]...)
{
  "blog-blog_contactus" = "[email protected]"
  "blog-blog_nodeply" = "[email protected]"
  "cms-cms_noreply" = "[email protected]"
}

Then there’s no restriction on requiring the keys to be globally unique

Gareth avatar

okay, I had something similar in terms of the merged site name and other key in my own testing but when I then try and overlay this to the for_each I’m struggling to pull the three pieces of information out.

so in the aws_ssm_parameter resource I need the name value to be

 "/blog/blog_contactus"

and the value to be "[email protected]". Would you suggest the last structure you supplied, but then to get the key "blog" you simply split ${each.key} on the "-" to get the key name back?

Alex Jurkiewicz avatar
Alex Jurkiewicz

maybe it would be better to generate a final structure which is a list:

[
  {  site = "cms", id = "noreply", addr = "[email protected]" },
  ...
]

Then no need to worry about uniqueness constraint at all

Gareth avatar

okay, I think I’m following. I’ll run some tests on my data structure based on the above and see how far I can get.

I would never have got my head around this on my own, as I’ve not used the "..." expression yet and I wouldn’t have thought about merging the list together. So, thank you!

Gareth avatar

I can confirm

 mailfromAddresses = merge([for configs in keys(var.site_configs) :
    { for a, b in var.site_configs[configs].ses_config.mailFromAddress_config : "${configs}-${a}" => b } if var.site_configs[configs].ses_config.create_ses_user == true
  ]...)

outputs

+ test = {
      + cms-mailFrom2       = "[email protected]"
      + cms-mailFromAddress = "[email protected]"
    }

Which is great! As I think I would be able to split on the “-“ to get back to the key “cms” However, you made one further suggestion but I’m not sure how to use it. Did you mean I should pass the output from local.mailfromAddresses to another local / for loop to remap it or did you mean I should modify the original local.mailfromAddresses to match your new structure in some way. Apologies, if I’m missing the obvious.

Alex Jurkiewicz avatar
Alex Jurkiewicz

The ... operator is VERY poorly documented. In fact I just looked and couldn’t find it documented at all! Can anyone find a docs reference?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I was thinking something like this:

> flatten([for site, emails in local.site_emails : [ for email_id, email_addr in emails : {site = site, id = email_id, addr = email_addr} ] ])
[
  {
    "addr" = "[email protected]"
    "id" = "blog_contactus"
    "site" = "blog"
  },
  {
    "addr" = "[email protected]"
    "id" = "blog_nodeply"
    "site" = "blog"
  },
  {
    "addr" = "[email protected]"
    "id" = "cms_noreply"
    "site" = "cms"
  },
]
Gareth avatar

Ah, okay, and the for_each would accept that as an input? I didn’t think for_each could handle a list, but I might not be thinking this through properly; it is getting rather late in the UK.
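
Gareth’s instinct is right: for_each only accepts a map or a set of strings, so a list of objects like the one above would first be re-keyed into a map. A minimal sketch, assuming the flattened list is stored in a hypothetical local.email_list:

resource "aws_ssm_parameter" "mailFromAddress" {
  # build a uniquely-keyed map from the list of objects
  for_each = { for e in local.email_list : "${e.site}/${e.id}" => e }

  name  = "/${each.value.site}/${each.value.id}"
  type  = "String"
  value = each.value.addr
}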

loren avatar

@Alex Jurkiewicz the ... operator has near-zero documentation, and it actually means different things in different contexts (function calls vs for expressions)… here is the original hcl2 spec that describes them… https://github.com/hashicorp/hcl2/blob/master/hcl/hclsyntax/spec.md#functions-and-function-calls

hashicorp/hcl2

Former temporary home for experimental new version of HCL - hashicorp/hcl2

loren avatar

you can also find a reference in the tf docs on for expressions: https://www.terraform.io/docs/configuration/expressions/for.html
Finally, if the result type is an object (using { and } delimiters) then the value result expression can be followed by the ... symbol to group together results that have a common key:

{for s in var.list : substr(s, 0, 1) => s... if s != ""}

1
Gareth avatar

Okay, the penny has dropped fiesta_parrot and I’ve managed to test based around your examples. Can’t thank you enough for taking the time to go through that and making the suggestions. Your last example simplifies it a lot. thank you

1
Gareth avatar

Thanks Loren!

loren avatar

i can’t actually find a reference to the function call version of ... in the tf docs…

loren avatar

oh wait, there it is: https://www.terraform.io/docs/configuration/expressions/function-calls.html#expanding-function-arguments
If the arguments to pass to a function are available in a list or tuple value, that value can be expanded into separate arguments. Provide the list value as an argument and follow it with the ... symbol:

min([55, 2453, 2]...)

The expansion symbol is three periods (...), not a Unicode ellipsis character (…). Expansion is a special syntax that is only available in function calls.

Alex Jurkiewicz avatar
Alex Jurkiewicz

hashtag LetUsWriteTerraformInARealLanguage

1
loren avatar

i recently asked about exactly this operator in the hangops slack, so i feel your pain

loren avatar

but the question and the answer have aged out of their slack

Alex Jurkiewicz avatar
Alex Jurkiewicz

so many *ops slacks! Is that one any good? Can you share an invite?

loren avatar

I feel like hangops is one of the oldest, but cloudposse is one of the best. I’m only in that one cuz some of the hashi folks occasionally are active

Gareth avatar

Anybody know of a written aws lambda that can initiate an RDS MSSQL restore from an S3 backup? Looking for something to help bootstrap an MSSQL database in RDS with a baseline db.

or maybe a way of creating a gold AMI but for RDS MSSQL? or a way of running a stored procedure directly from terraform? ideally all controlled by terraform.

loren avatar

What I’ve done before is create the db one time, then snapshot it, then use the snapshot as the starting point for subsequent rds instances…

1
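
In Terraform terms, that snapshot approach is just pointing the instance at the golden snapshot; a sketch with hypothetical names:

resource "aws_db_instance" "mssql" {
  snapshot_identifier = var.golden_snapshot_id # one-time manual snapshot of the baseline DB
  engine              = "sqlserver-se"
  instance_class      = "db.m5.large"
  # ...
}
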
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Remember to join us tomorrow (12/16) at 11:25am PST to learn about TACOS - Terraform Automation and Collaboration Software

We have speakers from:

• HashiCorp Terraform Cloud

• Env0

• Scalr

• Spacelift https://cloudposse.com/office-hours

7

2020-12-16

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way to create a lambda that just gets fired once when it gets created?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i basically want to create a lambda during the bootstrapping of an AWS account and then never needs to fire again

loren avatar
plus3it/terraform-aws-org-new-account-trust-policy

A Terraform Module. Contribute to plus3it/terraform-aws-org-new-account-trust-policy development by creating an account on GitHub.


Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the issue is the account will already exist

loren avatar

bootstrapping an account implies it is a new account or an invited account

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it’s going to be when we bootstrap the account with other stuff. i basically want to write a lambda that optionally adds the account to fugue.co

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

we have two TF roots

• tf-organisation where the accounts are listed

• tf-accounts where we add baseline stuff to the account

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want to create the lambda optionally from tf-accounts when we add the baseline config

loren avatar

create an event rule for the new account, use a schedule, set the schedule to run once

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

makes sense

mfridh avatar

Does it have to be complicated? A lambda can be triggered from terraform on demand…

mfridh avatar

Guess it would need some form of persistence after it did trigger for that particular account so it only happens the first run.

mfridh avatar

It could also be handled fully in terraform with triggers
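
A sketch of that fully-in-Terraform approach, using a null_resource whose triggers provide the persistence, so the invocation effectively fires once per account (names hypothetical):

resource "null_resource" "invoke_bootstrap" {
  # re-runs only if the account or function changes
  triggers = {
    account_id = var.account_id
    function   = aws_lambda_function.bootstrap.arn
  }

  provisioner "local-exec" {
    command = "aws lambda invoke --function-name ${aws_lambda_function.bootstrap.function_name} /dev/null"
  }
}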

David Knell avatar
David Knell

I have a strange issue that I hope someone can point me in the right direction on. I inadvertently updated to tf 0.14.2 (new laptop, homebrew, and being dumb). Anyway, needless to say all my tf states are in a bad, uhhh, state, which AFAIK cannot be reverted back to 0.13.x. I am using some of the cloudposse tf modules and a lot of them understandably are not ready for 0.14.x. So what does one do in a situation such as this? Naturally, fork every cloudposse terraform repo and hack up the code until every last tf error is gone. Mission accomplished! but…. my tf plan now says that it wants to replace most of my resources because the name changed (from cloudposse/label/null). It seems that the attribute ordering is different now. i.e.

~ name   = "dt-prod-api-ecs-exec" -> "dt-prod-api-exec-ecs" # forces replacement

So I have 2 questions

  1. Does anyone know a way to revert a state back to 0.13.x?
  2. Is this attribute re-ordering situation something that anyone has encountered?
cflowe avatar
  1. i don’t recall having a reorder issue, but i also have not upgraded to 0.14

  2. if you’re using remote state with object versioning then try reverting the remote object to the previous version. you can also try starting with a fresh state, then terraform plan (do not apply) and terraform import the resource names returned by the plan. i don’t think terraform refresh can save you in this case.

barak avatar

Howdy y’all. I know there are some checkov users here. One of the recent updates added the ability to run terraform plan analysis, so now it supports both static and dynamic analysis of terraform. More about it: https://www.checkov.io/2.Concepts/Evaluate%20Terraform%20Plan.html https://bridgecrew.io/blog/terraform-plan-security-scanning-checkov/

Terraform plan analysis with Checkov and Bridgecrew | Bridgecrew Blog

Learn how to leverage Checkov and Bridgecrew to scan both raw Terraform files and Terraform plan output for security and compliance errors.

4
Mohammed Yahya avatar
Mohammed Yahya

Thanks for sharing


Alex Jurkiewicz avatar
Alex Jurkiewicz

You can’t configure lifecycle { ignore_changes } with dynamic data. I have a module I’d like to consume in multiple places, but with different ignore_changes configuration for an internal resource, depending on the consumer.

The only way I can think to do this is duplicating the resource with different lifecycle configuration and having a condition define which copy of the resource is actually created.

But this is really ugly. Anyone have a better idea?
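
For illustration, the duplicated-resource workaround might look like this (a sketch with a hypothetical ASG and a single ignorable attribute):

resource "aws_autoscaling_group" "tracked" {
  count = var.ignore_desired_capacity ? 0 : 1
  # ... shared arguments ...
}

resource "aws_autoscaling_group" "untracked" {
  count = var.ignore_desired_capacity ? 1 : 0
  # ... shared arguments ...

  lifecycle {
    ignore_changes = [desired_capacity]
  }
}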

mfridh avatar

The only way around it currently, as I understand it. I’ve had a similar situation with some upstream ALB modules and the min/max/desired…

mfridh avatar

I’m learning to accept it and just live with the fact that if it’s in a module at least it’s “hidden” so what do I actually care once the module does what it’s supposed to.

loren avatar

i’m pretty excited about this experiment… complex objects with optional attributes and default values! https://www.terraform.io/docs/configuration/functions/defaults.html

defaults - Functions - Configuration Language - Terraform by HashiCorp

The defaults function can fill in default values in place of null values.

cool-doge2
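
A sketch of the experiment as documented for 0.14 (opt-in and subject to change):

terraform {
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name = string
    size = optional(string) # may be null when omitted
  })
}

locals {
  # fill any null attributes with defaults
  settings = defaults(var.settings, {
    size = "small"
  })
}
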
Matt Gowie avatar
Matt Gowie

This will be great.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

2020-12-17

Lukasz K avatar
Lukasz K

Hi guys, has anyone managed to schedule a daily cleanup job in Terraform Cloud? I have a workspace on which I would like to execute terraform destroy as a daily job

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have a recommended example to add some xml to an existing xml via a bash script?

pjaudiomv avatar
pjaudiomv

this is probably not the best channel for this question as it’s not terraform related, but have you checked out XMLStarlet

1
pjaudiomv avatar
pjaudiomv

xsltproc could be useful too if you have an xslt

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i just want to add

<init-param>
    <param-name>DisableTaskScheduler</param-name>
    <param-value>FALSE</param-value>
  </init-param>
Alex Jurkiewicz avatar
Alex Jurkiewicz

Why bash?! Do it in a language which can parse XML and you will be a happier man

Alex Jurkiewicz avatar
Alex Jurkiewicz

I guess bash is suitable if you want to append or prepend it. But as soon as you get to the “grep for a certain string and add my snippet after that” you are writing a future bomb IMNSHO

1
Matt Gowie avatar
Matt Gowie

Anyone using the Kubernetes provider with EKS + Terraform Cloud? Any direct path to success for configuring the provider?

Matt Gowie avatar
Matt Gowie

Moving a client onto Terraform Cloud and I believe my path forward involves including the K8s client certificates as the authentication mechanism + aws-auth role for my AWS Terraform CI creds. But if anyone has a “you only need to do X, Y, and Z” approach that’d be awesome.

tim.j.birkett avatar
tim.j.birkett

Using an IAM role you can configure the kubernetes provider with something like:

data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
    load_config_file       = false
  }
}
1
1
Matt Gowie avatar
Matt Gowie

Ah good stuff @Tim Birkett — Thanks.

Matt Gowie avatar
Matt Gowie

And regarding the IAM role — You’re referring to the user / AWS creds provided to TFC having a role within the cluster, correct?

tim.j.birkett avatar
tim.j.birkett

So you’d have an IAM role that your CI (TFC?) would use, then you’d need a role mapping in the aws-auth config map as described here: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Managing users or IAM roles for your cluster - Amazon EKS

The aws-auth ConfigMap is applied as part of the guide which provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a sample Kubernetes application. It is initially created to allow your nodes to join your cluster, but you also use this ConfigMap to add RBAC access to IAM users and roles. If you have not launched nodes and applied the

tim.j.birkett avatar
tim.j.birkett

That could link to system:masters but probably shouldn’t

1
Release notes from terraform avatar
Release notes from terraform
09:44:14 PM

v0.14.3 0.14.3 (December 17, 2020) ENHANCEMENTS:

terraform output: Now supports a new “raw” mode, activated by the -raw option, for printing out the raw string representation of a particular output value. (#27212) Only primitive-typed values have a string representation, so this formatting mode is not compatible with complex types. The…

command/output: Raw output mode by apparentlymart · Pull Request #27212 · hashicorp/terraform

So far the output command has had a default output format intended for human consumption and a JSON output format intended for machine consumption. However, until Terraform v0.14 the default output…

1
Austin Loveless avatar
Austin Loveless

Hey all! I’m using the terraform-aws-rds-cluster module and am trying to setup a secondary(replica) of my primary cluster, but I’m running into an issue with the secondary cluster. 🧵

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Austin Loveless avatar
Austin Loveless

I’m getting the error

error creating RDS cluster: InvalidParameterCombination: Cannot specify user name for cross region replication cluster

I’m wondering if this is because I haven’t set the field global_cluster_identifier. Anyone had this issue before?

jose.amengual avatar
jose.amengual

are you trying to setup a global cluster ?

jose.amengual avatar
jose.amengual

or a replica cluster

jose.amengual avatar
jose.amengual

anything global_ is for global databases which are different than replica clusters using regular DB replication

Austin Loveless avatar
Austin Loveless

I want to setup a regular db replica cluster

jose.amengual avatar
jose.amengual

so you want to setup the replica_source_identifier

jose.amengual avatar
jose.amengual

no user or password

jose.amengual avatar
jose.amengual

you can leave them as = ""

Austin Loveless avatar
Austin Loveless

ahhh I didn’t leave them as empty strings

Austin Loveless avatar
Austin Loveless

maybe that’s what I’m missing

jose.amengual avatar
jose.amengual

they are required but for the replica they need to be empty
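
As a rough sketch of what jose describes (the input names replication_source_identifier, admin_user, admin_password, and the cluster_identifier output are assumptions about the module’s interface; check the module’s variables before relying on them):

module "replica_cluster" {
  source = "cloudposse/rds-cluster/aws"

  # identifier/ARN of the source cluster to replicate from
  replication_source_identifier = module.primary_cluster.cluster_identifier

  # required inputs, but left empty for a replica
  admin_user     = ""
  admin_password = ""

  # ... engine, instance_type, vpc/subnet settings, etc.
}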

Austin Loveless avatar
Austin Loveless

got it!

Austin Loveless avatar
Austin Loveless

trying it out now

Austin Loveless avatar
Austin Loveless

@jose.amengual is it possible to make this replica in the same region?

jose.amengual avatar
jose.amengual

no

jose.amengual avatar
jose.amengual

that is an AWS restriction

Austin Loveless avatar
Austin Loveless

cool that’s what I thought.

Austin Loveless avatar
Austin Loveless

Sorry, I think I was confused when I said i wanted a replica earlier. I want to establish a read replica. Does this template have the ability to do that?

jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

ohhhh read replica

Austin Loveless avatar
Austin Loveless
cloudposse/terraform-aws-rds-replica

Terraform module that provisions an RDS replica. Contribute to cloudposse/terraform-aws-rds-replica development by creating an account on GitHub.

jose.amengual avatar
jose.amengual

just change the size to 2

Austin Loveless avatar
Austin Loveless

the size of the primary?

jose.amengual avatar
jose.amengual

a read replica alongside the writer in the same region, vpc ?

Austin Loveless avatar
Austin Loveless

yes the use case is to have a primary DB that’s core to the application and another DB that is fed all the data from the primary (like a read replica would do), and then I want to hook up the read replica to a BI reporting tool like Tableau

jose.amengual avatar
jose.amengual

but you want a replica not a cluster replica

jose.amengual avatar
jose.amengual

?

Austin Loveless avatar
Austin Loveless

I believe I only need a replica. Just something that will always stay up to date with my primary DB

Austin Loveless avatar
Austin Loveless

We use aurora postgres though and it doesn’t look like that is supported

Austin Loveless avatar
Austin Loveless

Oh nevermind this cloudposse doc is old. Aurora does support it

jose.amengual avatar
jose.amengual

yes we use postgres replicas

jose.amengual avatar
jose.amengual

there is more tendency to use cluster replicas than single instances

jose.amengual avatar
jose.amengual

is there any reason why it has to be an instance?

jose.amengual avatar
jose.amengual

I do not think there is much price difference

Austin Loveless avatar
Austin Loveless

I think I’m going to just bump the number of instances in the aurora cluster up by 1

jose.amengual avatar
jose.amengual

if you need to read from it, that is the fastest way to do it

Austin Loveless avatar
Austin Loveless

Okay right on

jose.amengual avatar
jose.amengual

you can add more than one read replica

Austin Loveless avatar
Austin Loveless

Thanks this worked!

jose.amengual avatar
jose.amengual

cool

Austin Loveless avatar
Austin Loveless

@jose.amengual I have a quick question about the behavior of the reader node on aurora postgres. So after following what you said yesterday I have a reader node that’s now a part of my aurora cluster. It gives me a specific endpoint to connect to it, but doesn’t have a master password or master username. Do you know how I can connect to specifically that node, or is that even possible?

jose.amengual avatar
jose.amengual

the reader node uses the same creds as the writer

jose.amengual avatar
jose.amengual

usually I create a user with select access grants

Austin Loveless avatar
Austin Loveless

interesting when I use the same master credentials on the reader endpoint it tells me “that role doesn’t exist”

jose.amengual avatar
jose.amengual

weird

jose.amengual avatar
jose.amengual

unless it is still replicating…

Austin Loveless avatar
Austin Loveless

hmm, I thought the connection string would be the “reader_endpoint” that’s outlined in the terraform docs, but when going to the RDS console it looks different

jose.amengual avatar
jose.amengual

they should be the same

jose.amengual avatar
jose.amengual

one has ro in the name

jose.amengual avatar
jose.amengual

the other one does not

Austin Loveless avatar
Austin Loveless

that’s what I thought.

Austin Loveless avatar
Austin Loveless

but in the console the endpoint looks a bit different.

Austin Loveless avatar
Austin Loveless

so in the console if you click on the main cluster resource in rds it shows two endpoints: reader and writer, but if you click on the actual reader node the endpoint looks different.

Austin Loveless avatar
Austin Loveless

it appends a -2 to the end of the string, like “<RDS NAME>-1.<RDS ID>.<REGION>.rds.amazonaws.com”

jose.amengual avatar
jose.amengual

yes that is normal

jose.amengual avatar
jose.amengual

one is the Cluster endpoint

jose.amengual avatar
jose.amengual

the other is the Instance endpoint

jose.amengual avatar
jose.amengual

each instance has its own endpoint

jose.amengual avatar
jose.amengual

the cluster has two endpoints

jose.amengual avatar
jose.amengual

r/o and r/w endpoint

jose.amengual avatar
jose.amengual

it is always better to use the cluster endpoint

jose.amengual avatar
jose.amengual

if you need to read you use the reader endpoint

Austin Loveless avatar
Austin Loveless

okay gotcha
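
In aws_rds_cluster terms, the two cluster-level endpoints jose describes are exposed directly as attributes (per-instance endpoints come from aws_rds_cluster_instance); a minimal sketch, assuming a cluster resource named "default":

# read/write endpoint of the cluster
output "writer_endpoint" {
  value = aws_rds_cluster.default.endpoint
}

# read-only endpoint (contains "cluster-ro" in the hostname)
output "reader_endpoint" {
  value = aws_rds_cluster.default.reader_endpoint
}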

Arjun Venkatesh avatar
Arjun Venkatesh

Heads up if you are attempting to apply a terraform repo using helm https://github.com/hashicorp/terraform-provider-helm/issues/645

Any plans as of the morning of Dec 16th with helm_releases in EKS attempt to change version of helm release to 0.2.2 · Issue #645 · hashicorp/terraform-provider-helm

As strange as it sounds, across 8 clusters with 0 local or remote state changes and with no module changes (confirmed with three engineers), any attempt to create a plan that should result in nothi…

loren avatar

anyone happen to know any magic for generating a random string that can be used in a for_each expression in the same state, without using -target? i was trying to be cute with try() but no love…

locals {
  id = substr(uuid(),0,8)
}

resource null_resource id {
  triggers = {
    id = local.id
  }

  lifecycle {
    ignore_changes = [
      triggers,
    ]
  }
}

resource null_resource this {
  for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))
}
loren avatar
$ terraform apply

Error: Invalid for_each argument

  on main.tf line 18, in resource "null_resource" "this":
  18:   for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
Alex Jurkiewicz avatar
Alex Jurkiewicz

Can you create a random_string resource with for_each and then reference the results?

loren avatar

no, the random_string must be applied for its result to be known

loren avatar

only variables, data sources, and builtin functions are known before apply

loren avatar

hence the cuteness here with uuid() and a null_resource that also ignores its own triggers, to create a random value that does not change from one apply to the next

Alex Jurkiewicz avatar
Alex Jurkiewicz

Hm maybe I misunderstood the use case?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You want to create a for_each resource block where the number of resources depends on the output of another resource?

loren avatar

well yes, but i know that doesn’t work. but the use case is very targeted here. i need a random value that doesn’t change from apply to apply, and to use that value as part of the for_each expression

loren avatar

i have some modules that use for_each with a list of objects, each with a “name” attribute. pretty standard. but in my tests, i want to give each object a generated id

loren avatar

in the code above, i generate an id using uuid(), then i store it in null_resource.id and ignore changes to the trigger to make it static. this works fine.

then i try to use a for_each expression to resolve the value from either null_resource.id or local.id. the first will not work on first apply but will subsequently. the second will work on first apply but will change every subsequent apply

  for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))
loren avatar

but it seems terraform doesn’t even try to resolve the expression. it just sees the reference to null_resource.id and gives up

Alex Jurkiewicz avatar
Alex Jurkiewicz

So the use case is something like a module with an input variable number_of_rds_instances and you want to create that many aws_rds_instance resources, with a random but fixed name for each

mfridh avatar

Why the need to use the random string in the for each? Pass the random_id to the name variable to all those modules? I don’t fully understand but I’m curious

mfridh avatar

I use random_string, random_id, random_password for quite a few things. It seems very static to me…

loren avatar

indeed, we got a little sidetracked from the question. the module is what the module is. it uses for_each on the name attribute of this list of objects. if the module is invoked in a way where the name attribute is set using the output of a random_* resource, then you get the error “cannot be determined until apply”.

1
loren avatar

i am making zero claims disputing that random_* resources generate static values, just that they do not work as inputs for the for_each key…

Alex Jurkiewicz avatar
Alex Jurkiewicz

sorry. I don’t understand why this format won’t work:

variable "number_of_ec2_instances" {
  type = number
}

resource "random_id" "default" {
  count       = var.number_of_ec2_instances
  byte_length = 4 # random_id requires a byte_length
}

resource "aws_instance" "default" {
  count         = var.number_of_ec2_instances
  ami           = "ami-12345678" # placeholder
  instance_type = "t3.micro"

  tags = {
    Name = random_id.default[count.index].hex # random_id exposes hex, not result
  }
}
loren avatar

i make no claims about that construction. that is using a static var with count. i am generating the value with terraform and using for_each

loren avatar

if i expose a variable, and generate the random value outside terraform, that certainly works. that’s my backup plan. i was just trying to avoid having the variable, and keeping it all within the tf config

1
loren avatar

The problem was uuid() and my assumption that functions were resolved in the plan phase (apparently that’s not always true). Switched to a data source that has a random output and got something working. Will post code in the morning

loren avatar

here’s what i came up with… null_data_source outputs a random value, and it is resolved in the plan phase, so this works:

locals {
  random_id = substr(md5(data.null_data_source.id.random),0,8)

  id = try(null_resource.id.triggers.id, local.random_id)
}

data null_data_source id {}

resource null_resource id {
  triggers = {
    id = local.random_id
  }

  lifecycle {
    ignore_changes = [
      triggers,
    ]
  }
}

resource null_resource this {
  for_each = toset([local.id])
}
1
1
Joe Niland avatar
Joe Niland

Is there a way to create a subset map from another map based on conditions? The map has elements with different data types, but I got around that by using a type of any.

For example, with the map below, is there a way to remove the check-name key completely by checking if it equals []? I’ve tried various things with for loops, e.g.

locals {
  cleaned_pattern = {
    for label, value in var.cloudwatch_event_rule_pattern :
    label => value if coalesce(value) != null
  }
}

Output is unchanged.

      + event_pattern  = jsonencode(
            {
              + check-name  = []
              + detail      = {
                  + status = [
                      + "ERROR",
                      + "WARN",
                    ]
                }
              + detail-type = [
                  + "Trusted Advisor Check Item Refresh Notification",
                ]
              + source      = [
                  + "aws.trustedadvisor",
                ]
            }
        )
Joe Niland avatar
Joe Niland

ok very simple solution

cleaned_pattern = {
    for label, value in var.cloudwatch_event_rule_pattern :
    label => value if length(value) > 0
  }

2020-12-18

Pierre-Yves avatar
Pierre-Yves

Hello, I have recreated my private cluster, and it tries to start creating namespaces before the cluster is created, despite the depends_on on azurerm_kubernetes_cluster. Do you have any input on how to do it one by one?

resource "kubernetes_namespace" "terra_test_namespace" {
  ...
  depends_on = [azurerm_kubernetes_cluster.kube_infra, var.vnet_subnet_id]
}

I have found the same error on aws_eks with some tricks to fix it. I have tried them but it doesn’t work for now (terraform plan failed, telling me the cluster is not available). Can you give me some guidelines on how to solve this?

https://github.com/terraform-aws-modules/terraform-aws-eks/issues/943

Document how to wait for cluster availability before creating kubernetes resources · Issue #943 · terraform-aws-modules/terraform-aws-eks

I have issues I&#39;m submitting a… bug report feature request support request - read the FAQ first! kudos, thank you, warm fuzzy What is the current behavior? I was unable to figure out how to d…

Pierre-Yves avatar
Pierre-Yves

so far I have solved two issues:
• when recreating the cluster (due to changing server sizes), the terraform “kubernetes_namespace” resources were not destroyed from the tfstate, so I have removed them manually
• I am using RBAC to filter access, and the above null_resource wait_for_cluster call (wget --no-check-certificate -O - -q API_ENDPOINT) failed to authenticate

• so for now I just check that the cluster id exists before creating the namespace

Gareth avatar

Hi, anybody know if it’s possible and, probably more importantly, advisable to have the output of a lambda function as the input of a data source? Use case: I need to generate machine keys for IIS but the only way I’ve found to do this is via PowerShell. I don’t believe I can use a local PowerShell provider as not all the members of my team run on Windows machines, and it would therefore create a dependency on installing PowerShell Core etc. Same could be said for the Jenkins pipelines. So I was thinking a lambda could generate the keys and a data source could read them in.

Side note: I know I could generate and inject the machine keys in as part of the build process but for historical reasons we’ve extracted security items from the build process and re-inject them at time of build.

Joe Niland avatar
Joe Niland

Are you planning to run it using aws_lambda_invocation ?

Gareth avatar

Hello Joe, honest answer is I’ve not fully thought this through. I was starting by thinking: could I read the output from a Lambda in as a source? I hadn’t given a thought as to how that Lambda would be executed, but looking at aws_lambda_invocation it looks the most logical. Really am open to any suggestions though
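
For reference, invoking a Lambda from Terraform with that data source looks roughly like this (the function name and payload here are hypothetical):

data "aws_lambda_invocation" "machine_keys" {
  function_name = "generate-iis-machine-keys" # hypothetical function
  input = jsonencode({
    environment = "test"
  })
}

locals {
  # result is the function's JSON response as a string
  machine_keys = jsondecode(data.aws_lambda_invocation.machine_keys.result)
}

One caveat: a data source invocation runs on every refresh, so the function should be idempotent (or write-once to SSM, as discussed below).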

Gareth avatar

The other option to achieve the same thing would be to have a data source that could make a web request and use the response as the input but I’m not aware of a way to do that either.

Gareth avatar

Many thanks Joe, I’ve not come across that provider. Not really sure how I’ve overlooked it before but given what I’m trying to achieve, this might be the simplest way around my scenario. Thanks for taking the time to reply.

Joe Niland avatar
Joe Niland

np, that provider is actually pretty simple but powerful!

Joe Niland avatar
Joe Niland

so your Lambda will call WinRM on each server?

Gareth avatar

What I’ll probably do is create an endpoint that points to the lambda and then either return the data for terraform to write to the ssm parameter store for later retrieval by the user data when a machine boots, or supply something along with the web request and get the lambda to write to the ssm parameter store itself.

Currently, I tend to pull most security items from the ssm parameter store on boot via userdata, which helps keep them out of the session state and away from the admins etc. Each instance of an ASG knows which products it’s responsible for and then just reads in the required values.

Probably a lot of reasons not to do it this way that I’ve not thought about, e.g. use a chef-style agent etc, but it works for us.

That said, if somebody thinks I’m opening myself up for problems, feel free to shout. Always willing to listen to reason

Joe Niland avatar
Joe Niland

yeah calling them to generate and store then return the ssm key seems secure enough

1
Gareth avatar

Thanks again for the help

Joe Niland avatar
Joe Niland

you’re welcome

Christian avatar
Christian

Is it possible to output state from resources created by child modules that are not declared as outputs in the child module?

For example, I would like to output the DB username when using the cloudposse/rds/aws . The values are available in the state as demonstrated by terraform state show 'module.rds_instance.aws_db_instance.default[0]' However, an output rule like the following does not work…

output "aws_db_instance" {
  value       = module.rds_instance.aws_db_instance
}

results in an error

An output value with the name "aws_db_instance" has not been declared in
module.rds_instance.
Jon Bevan avatar
Jon Bevan

Hi, I’m trying to use https://github.com/cloudposse/terraform-aws-dynamodb v0.23.0 with terraform 0.13 but I’m getting this output from terraform plan:

Error: Invalid count argument
  on .terraform/modules/dynamodb_table.dynamodb_autoscaler/main.tf line 92, in resource "aws_appautoscaling_target" "read_target_index":
  92:   count              = var.enabled ? length(var.dynamodb_indexes) : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

which I kinda understand but having to run plan -target seems a bit hacky… seems to have been reported here https://github.com/cloudposse/terraform-aws-dynamodb/issues/70 too

cloudposse/terraform-aws-dynamodb

Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb

invalid count argument on read/write_target_index with multiple dynamodb tables · Issue #70 · cloudposse/terraform-aws-dynamodb

Found a bug? Maybe our Slack Community can help. Describe the Bug Error: Invalid count argument for read_target_index & write_target_index with running plan or apply. on .terraform/modules/dyna…

loren avatar

that null_resource approach doesn’t seem necessary at all, anymore. could just use a for expression

Rhys Davies avatar
Rhys Davies

Hi all - really enjoyed the recent cast on TACOS, and I’m really interested in not having to manage my own Terraform or create the governance that I want around our infra on my own. Basically (and I understand that this is a really broad question, whose answer I expect to differ between Terraform Cloud, Env0, Scalr, and Spacelift) I would like to ask how you transition your self-hosted Terraform solution to one of these SaaS providers without downtime and, maybe more importantly, how your previous small-team customers have driven buy-in from their wider org that this stuff is really important (please don’t sell me on it, I know it’s critical)

Igor avatar

I haven’t listened to it yet, but I think there was a lot of talk on this topic in the last #office-hours

Igor avatar

Nevermind, sounds like you listened to it based on your follow-up comment

Ryan Fee avatar
Ryan Fee

@Rhys Davies For Scalr:

  1. Transition (state) - First you’ll want to migrate existing state, which is straightforward and can be automated: https://docs.scalr.com/en/latest/migration.html
  2. Transition (workflow) - We would need to understand your current workflow, but whether it is CLI based or vcs based, either are straightforward. If CLI, just add a code snippet to your TF config files and it will start using Scalr to execute the runs. If VCS, just add the repository to Scalr and kick off a run or have it automatically execute on the next commit.

Neither of the above require downtime. You won’t need to reprovision infrastructure/services.

In terms of buy-in, this depends on the team you need it from, the area of most importance, or where the biggest pain point is.

  1. A few of our smaller customers have gone through SOC2 or other similar compliance reviews lately and Scalr accelerated that for them through audits, policy, etc. Leadership bought in quickly as soon as they saw they could accelerate it.
  2. Others have had major issues around an unorganized module process, which caused outages. The template, module registry, and OPA greatly improved their process and standards.
  3. The idea of a more efficient PR process or general workflow has been another big one. Many users did not want to babysit the existing DIY workflow. Really not much buy-in needed as the benefit is fairly obvious.
  4. Our larger customers get buy-in from the wider orgs through the idea of autonomy and self-service. Many of them call it an app “vending machine”. Teams sign up to use Scalr and they automatically get an environment or workspace created for them and then they are off and running on their own.
omry avatar

Hi @Rhys Davies - for env0:

  1. For migration you need to create a new template with your Terraform code at env0, connect your cloud account credentials, add all the relevant variables, and create an environment with the same workspace name. You can read more here
  2. For us the benefits we see with small teams are the following:

• Gitops for continuous deployment and plan on PRs.

• Custom flows, that allow you to run everything in your Terraform pipeline.

• Self service environment management with TTL policies and Scheduling for cost reduction.

• Creating an environment for each PR; a lot of them are also using it with Kubernetes.

• Terragrunt support.

• Actual cost over time with correlation to deployments.

• RBAC and plan before apply, which creates a workflow that is similar to a PR for infrastructure changes. Bigger teams buy in to our SAML, Policies, Self service environment management capabilities, OPA, Teams management, environment limits and budget limits. You can read more about our use cases here:

  1. IaC automation
  2. Teams and governance
  3. Managed self service

Hope it helps, and let us know if you have any questions.
Rhys Davies avatar
Rhys Davies

sorry for the late reply guys, I wrote something out and never hit enter. thank you all so much for the explanations and help, really enjoyed the podcast

1
1
Rhys Davies avatar
Rhys Davies

Even if I don’t get a reply here, fascinating stream, really enjoyed and will be tuning in for the next one

2020-12-19

Christos avatar
Christos

Hey morning. everyone! :wave:

Having a short question.

I define my local module in the modules directory. Can modules in that directory reference another module from github, for instance?

github140 avatar
github140

Hi, yes that’s possible.

Christos avatar
Christos

Alright. Cool. Got any examples of where this is implemented? Like some github repo? I am getting this warning saying that “the module cannot be found in the directory”.

github140 avatar
github140
hashicorp/terraform-aws-vault

A Terraform Module for how to run Vault on AWS using Terraform and Packer - hashicorp/terraform-aws-vault

Christos avatar
Christos

Thanks!

github140 avatar
github140

This should show it.
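
A minimal sketch of a local module sourcing a remote one (the submodule path follows the linked repo; pinning a ref is assumed good practice):

# modules/my-module/main.tf: a local module can itself source from github
module "vault_cluster" {
  source = "github.com/hashicorp/terraform-aws-vault//modules/vault-cluster"
  # pin a version with ?ref=<tag> in real use
  # ...
}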

2020-12-21

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Can anyone help with this issue please …

terraform {
  backend "s3" {
    ...
  }

  required_version = "= 0.13.4"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.9.0"
    }
    fugue = {
      source  = "fugue/fugue"
      version = "0.0.1"
    }
  }

}

provider "fugue" {
  client_id     = var.fugue_client_id
  client_secret = var.fugue_client_secret
}
Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/aws v3.9.0
- Finding fugue/fugue versions matching "0.0.1"...
- Finding latest version of hashicorp/fugue...
- Installing fugue/fugue v0.0.1...
- Installed fugue/fugue v0.0.1 (self-signed, key ID B14956EDEF9DD1A2)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
<https://www.terraform.io/docs/plugins/signing.html>

Error: Failed to install provider

Error while installing hashicorp/fugue: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/fugue

Why is it trying to find hashicorp/fugue ?

loren avatar

do you have existing tfstate? are you upgrading from tf 0.12? do you have modules using the fugue provider with incorrect provider/terraform blocks?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

this is newly added

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the module being called has the following …

provider "fugue" {
  alias = "terraform-runner"
}
loren avatar

terraform providers output?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
Providers required by configuration:
.
├── provider[registry.terraform.io/fugue/fugue] 0.0.1
├── provider[registry.terraform.io/hashicorp/aws] ~> 3.9.0
└── module.aws_account
    ├── provider[registry.terraform.io/hashicorp/aws]
    ├── provider[registry.terraform.io/hashicorp/fugue]
    ├── module.cloudtrail_bucket
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.default_account_roles
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.default_vpc_flowlog_key
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.terraform_runner
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.default_vpc_flowlogs
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.cloudtrail_key
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.default_vpc
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.s3_access_logs
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.default_vpc_flowlog_bucket
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.iam_password_policy
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.securityhub
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.awsconfig
    │   ├── provider[registry.terraform.io/hashicorp/aws]
    │   └── module.config_bucket
    │       └── provider[registry.terraform.io/hashicorp/aws]
    └── module.cloudtrail
        └── provider[registry.terraform.io/hashicorp/aws]

Providers required by state:

    provider[registry.terraform.io/hashicorp/aws]
loren avatar
module.aws_account
    ├── provider[registry.terraform.io/hashicorp/aws]
    ├── provider[registry.terraform.io/hashicorp/fugue]
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

how can i delete that?

loren avatar

delete it?

loren avatar

if that module is not using any fugue resources, then you can delete it

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is this not correct though in that module …

provider "aws" {
  alias = "terraform-runner"
}

provider "fugue" {
  alias = "terraform-runner"
}
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

as that module will optionally create resources via the fugue provider

loren avatar

how are you passing the provider from your root to aws_account?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

let me create a gist

loren avatar

ok, so you have a single aws provider, and a single fugue provider

loren avatar
  providers = {
    aws   = aws.terraform-runner
    fugue = fugue.terraform-runner
  }
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yes

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

then the module itself is pretty much all aws apart from two fugue resources

loren avatar

then in your aws_account module, you do not need the provider block at all, you can remove this:

provider "aws" {
  alias = "terraform-runner"
}
provider "fugue" {
  alias = "terraform-runner"
}
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

ok let me try that

loren avatar

in this declaration, aws and fugue are the default unaliased providers…

loren avatar
  providers = {
    aws   = aws.terraform-runner
    fugue = fugue.terraform-runner
  }
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am still getting the same error

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
➜  data-engineering-qa git:(configure-fugue-environment) ✗ terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] ~> 3.9.0
├── provider[registry.terraform.io/fugue/fugue] 0.0.1
└── module.aws_account
    ├── provider[registry.terraform.io/hashicorp/fugue]
    ├── provider[registry.terraform.io/hashicorp/aws]
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am still seeing this

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is terraform providers coming from state?

loren avatar
Terraform 0.13 can't find locally-installed provider · Issue #25485 · hashicorp/terraform

@alisdair Thanks for the information. However i did what you mentioned and it still does not work - ~/.terraform.d/plugins/kyma-project.io/kyma-incubator/terraform-provider-gardener/0.0.9/linux_amd…

loren avatar

annoying…
each module must declare its own set of provider requirements

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so i am going to have to set this inside the module itself

loren avatar

so you need to add this to the module:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
    }
    fugue = {
      source  = "fugue/fugue"
    }
  }
}
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

that’s annoying

loren avatar

not the versions, just the source location

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a convention for what to call that file inside the module itself?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i normally call it config.tf

loren avatar

well since it is often used to manage provider versions, the tf upgrade utilities generally add it as versions.tf

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

perfect thanks for that

Joan Porta avatar
Joan Porta

Hi! I have a bunch of variables (200) in AWS Parameter Store, all of them in the same path. Any way to create a kind of loop and get all of them in Terraform instead of going one by one?

1
loren avatar

i think you have to provide all the names, not just the path, but you can use for_each on the data source to loop over them all

1
Tom Dugan avatar
Tom Dugan

Do you have 200 vars stored in /path/to/200-vars/ or is it /path/to/var-1/ /path/to/var-2/?

Joan Porta avatar
Joan Porta

I think SSM doesn’t have the option to do /path/to/200-vars/; you can have /path/var1, /path/var2, no other way.

Joan Porta avatar
Joan Porta
locals {
  ssm_path = "/hopin/dev/hopin/env"
  envs_list = [ "var1", "var2"]
}

// Get all values from var1, var,2 ....
data "aws_ssm_parameter" "env_vars" {
  for_each = local.envs_list
  name  = "${local.ssm_path}/each.key"
}
Tom Dugan avatar
Tom Dugan

You could store a json object representing 200 kvs in one path, but I wouldn’t recommend it :laughing: That Terraform would be my approach. Do you not have to toset a list with a for_each anymore?

Joan Porta avatar
Joan Porta

yes you are right, I think I need toset

Tom Dugan avatar
Tom Dugan

ah ok, didn’t know if it was a 0.14 thing or not
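
Putting the two fixes together (wrapping the list in toset, and interpolating each.key, which the original string missed), the snippet above would become something like:

locals {
  ssm_path  = "/hopin/dev/hopin/env"
  envs_list = ["var1", "var2"]
}

data "aws_ssm_parameter" "env_vars" {
  for_each = toset(local.envs_list)
  name     = "${local.ssm_path}/${each.key}"
}

# values are then available as e.g.
# data.aws_ssm_parameter.env_vars["var1"].value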

loren avatar

aws ssm does have a “GetParameters” api that will return all parameters from a list in a single call, and also the “GetParametersByPath” api that works specifically for paths, but terraform does not currently offer a data source based on those APIs

https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameters.html

https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameters-by-path.html

1
Joan Porta avatar
Joan Porta

Ohhhh… :disappointed: any idea of doing an api call to AWS to get the parameters? something with provisioner "local-exec" ?

loren avatar

people have done some crazy things to get local-exec to return values they can use, but it is pretty crazy and a little hard to recommend

loren avatar
matti/terraform-shell-resource

Run (exec) a command in shell and capture the output (stdout, stderr) and status code (exit status) - matti/terraform-shell-resource

loren avatar

here’s a version that uses the external provider, but requires ruby… https://github.com/matti/terraform-shell-outputs

matti/terraform-shell-outputs

Contribute to matti/terraform-shell-outputs development by creating an account on GitHub.

loren avatar

or here’s a shell provider, probably the best option. haven’t tried this one… https://github.com/scottwinkler/terraform-provider-shell

scottwinkler/terraform-provider-shell

Terraform provider for executing shell commands and saving output to state file - scottwinkler/terraform-provider-shell

Babar Baig avatar
Babar Baig

I am using aws ssm get-parameters-by-path to get parameters under the same path, like @loren suggested. I am calling this in a CircleCI pipeline and use jq to process and set those as environment variables for later use.

2020-12-22

Laurynas avatar
Laurynas

I use terraform for a scheduled lambda function with cloudwatch. It worked perfectly 5 months ago but now I came back to the code, and when I run terraform plan with no changes I get this:

Error: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListTargetsByRuleInput.EventBusName.

What does it even mean? I don’t even use an event bus target, I use resource "aws_cloudwatch_event_target" "start_alarms" {

Laurynas avatar
Laurynas

Turns out it’s an issue with the terraform AWS provider version. After updating to the latest it works. How do you deal with versioning of the aws provider? Do you have it specified like version = "3.14.1" ?

Tom Dugan avatar
Tom Dugan

This is what we do. As you can see, if there is a version that has a bug rendering it incompatible with our TF, we just drop in a quick !=

provider "aws" {
  version = ">= 2.70.0, != 3.17.0"
}
1
loren avatar

in tf 0.12.29 or later, use the terraform block with required_providers, as version in the provider block is being deprecated…

terraform {
  required_providers {
    aws = {
      source  = "registry.terraform.io/hashicorp/aws"
      version = ">= 2.70.0, != 3.17.0"
    }
  }
}
Laurynas avatar
Laurynas

Thank you both! I’m just curious why do you both have != 3.17.0 in your providers?

loren avatar

oh, i just copied tom’s example :slightly_smiling_face: i’m currently pinning like this: "~> 3.18.0"

1
Tom Dugan avatar
Tom Dugan

Ah thanks loren for that insight! That syntax was because of a bug in that provider with gov cloud

Alex Muntean avatar
Alex Muntean

Hi! I built a module which uses the restapi provider to create kibana spaces/roles and users. The module needs two configurations of the restapi provider, and I am going to use provider aliases to pass the configuration from the root module.

provider "restapi" {
  alias = "kibana"
  uri   = "<https://X.X.X.X:5601>"

  username = "user"
  password = "pass"
  insecure = true
  headers = {
        "kbn-xsrf" = "true"
    }
  write_returns_object = true
}

provider "restapi" {
  alias = "elastic"
  uri   = "<https://X.X.X.X:9200>"

  username = "user"
  password = "pass"
  insecure = true
  write_returns_object = true
}

The module will create the space/roles and user in one instance of kibana, and I need to configure 3 instances of kibana, which means that I will need to define 6 provider configurations in the root module. What would be the best practice for this situation?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can’t loop over providers in any way with Terraform. So the users of your module will need to define all 6 providers and call your module 3 times with differing configuration
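
Concretely, that root-module wiring might look like the sketch below (aliases, URIs, and the module path are hypothetical; the module itself would declare matching restapi.kibana / restapi.elastic configurations to receive them):

provider "restapi" {
  alias = "kibana_a"
  uri   = "https://kibana-a.example.com:5601"
  # ... credentials, headers, etc.
}

provider "restapi" {
  alias = "elastic_a"
  uri   = "https://elastic-a.example.com:9200"
  # ...
}

module "kibana_a" {
  source = "./modules/kibana-setup" # hypothetical path

  providers = {
    restapi.kibana  = restapi.kibana_a
    restapi.elastic = restapi.elastic_a
  }
}

# ...repeated for instances b and c: six provider blocks, three module calls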

Alex Muntean avatar
Alex Muntean

Thanks @Alex Jurkiewicz

Prasanth Kommini avatar
Prasanth Kommini

Hi Team,

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can use threads in this Slack

Cloudposse modules accept pull requests to add 0.14 support

Prasanth Kommini avatar
Prasanth Kommini

Thank you. Will send a PR.

1
Prasanth Kommini avatar
Prasanth Kommini

@Alex Jurkiewicz I’m unable to push changes to the repository.

Prasanth Kommini avatar
Prasanth Kommini

How do I go about pushing the PR?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Fork the repo, which will create a copy you own

Alex Jurkiewicz avatar
Alex Jurkiewicz

Then you can push to your copy and create a pull request from your repo’s branch to the master

Prasanth Kommini avatar
Prasanth Kommini

gotcha.

Prasanth Kommini avatar
Prasanth Kommini

Prasanth Kommini avatar
Prasanth Kommini
Initializing modules...

Error: Unsupported Terraform Core version

  on .terraform/modules/ec2-bastion-server.dns.this/versions.tf line 2, in terraform:
   2:   required_version = ">= 0.12.0, < 0.14.0"

Module module.ec2-bastion-server.module.dns.module.this (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>)
does not support Terraform version 0.14.2. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
aaratn avatar

What’s the version of terraform you are using? It should be less than 0.14.0 and at least 0.12.0

aaratn avatar

I guess you might be running terraform 0.14

Prasanth Kommini avatar
Prasanth Kommini

I’m running 0.14.2

Prasanth Kommini avatar
Prasanth Kommini

I figured out the cause of the error.

Prasanth Kommini avatar
Prasanth Kommini

I was trying to find a fix.

Prasanth Kommini avatar
Prasanth Kommini

Surely downgrading my terraform is not a valid solution.

aaratn avatar

I guess you can fork this module and bump up the terraform version on it and see if it works as expected with 0.14.2. If you think it’s working okay, you can create a PR to upstream and someone should approve and merge it!

Prasanth Kommini avatar
Prasanth Kommini

Done.

Prasanth Kommini avatar
Prasanth Kommini

aaratn avatar

yay !

Prasanth Kommini avatar
Prasanth Kommini

I get the above error message while using this module

Prasanth Kommini avatar
Prasanth Kommini
cloudposse/terraform-aws-ec2-bastion-server

Terraform Module to define a generic Bastion host with parameterized user_data - cloudposse/terraform-aws-ec2-bastion-server

2020-12-23

joe.acurtis avatar
joe.acurtis

Hello all

joe.acurtis avatar
joe.acurtis

Have some questions on a Terraform and best practices

tim.j.birkett avatar
tim.j.birkett

Can’t answer questions that you don’t ask @joe.acurtis

1
joe.acurtis avatar
joe.acurtis

Hey wanted to see if people where here first

tim.j.birkett avatar
tim.j.birkett

TL;DR - Don’t feel the need to “be polite” by asking if someone can help you and waiting in some sort of virtual queue for help. Just ask the question.

When it comes to things like Slack, it’s best to compose a nicely formatted list of questions or message for people to digest at their leisure in whatever timezone they are in - asking permission or vague things like: “Hey, anyone about to answer a question?” will likely get no interaction.

It may seem polite, but the polite thing is to give the reader context and enough info to make these decisions:

  1. Can I help this person with the knowledge and experience that I might have?
  2. Am I interested in helping this person? I have to give coaching all the time to people I work with who message things like: “Have you got a minute?” - hmm what for? Or even worse: “Hey, I’m getting an error!” - er, what are you doing to get it? what is it? what are you expecting?
6
joe.acurtis avatar
joe.acurtis

Thanks for this it’s genuinely helpful

1
roth.andy avatar
roth.andy


“Have you got a minute?” - hmm what for? Or even worse: “Hey, I’m getting an error!”
Ugh, I feel this. My pet peeve is people who slack/skype me “Good Morning!“. They have a question for me but won’t ask it until I respond with some trite greeting.

roth.andy avatar
roth.andy

I usually just give them the wave emoji: wave

joe.acurtis avatar
joe.acurtis

Basically I’m new to Terraform and want to send people to different sites depending on whether they are on the test network behind a VPN; failing that (the resources not being there), check the public site. Can I use failover routing to solve this or is it better to use a lambda?

tim.j.birkett avatar
tim.j.birkett

Okay, a bit more information would be helpful:

• What is the site hosted on? AWS?

• How do requests get to the site? CDN? ALB? ELB? Kubernetes Ingress?

• Do you have split DNS when connected to the VPN? Is the VPN in some office, datacenter, cloud?

This isn’t so much a Terraform question as it is a general network architecture question.

joe.acurtis avatar
joe.acurtis

It is hosted on AWS, deployed using Terraform, and uses a cloudfront CDN to deliver images and pdfs. The issue is they aren’t mirrored, so for testing, when the testing CDN is hit and doesn’t have the file, it should check the production CDN

tim.j.birkett avatar
tim.j.birkett

Interesting… Is Cloudfront in front of S3 or something?

joe.acurtis avatar
joe.acurtis

The assets live in S3

tim.j.birkett avatar
tim.j.birkett

There’s probably a few things that you can try… The first thing that came to my mind was Cloudfront origin failover (See: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html)

You could set up your test Cloudfront distribution with a primary origin (test bucket) and a secondary origin which points to the production bucket for 404 responses from the primary origin?

Optimizing high availability with CloudFront origin failover - Amazon CloudFront

Learn how to increase the availability of your website, application, or content with Amazon CloudFront origin failover and other features.
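
On the Terraform side, that origin failover looks roughly like this fragment (origin IDs are hypothetical; origins, cache behaviors, and the other required distribution settings are elided):

resource "aws_cloudfront_distribution" "test" {
  # ... origins, default_cache_behavior, viewer_certificate, etc.

  origin_group {
    origin_id = "test-with-prod-fallback"

    failover_criteria {
      status_codes = [403, 404, 500, 502]
    }

    member {
      origin_id = "test-assets-bucket" # primary: test bucket origin
    }

    member {
      origin_id = "prod-assets-bucket" # secondary: production bucket origin
    }
  }
}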

joe.acurtis avatar
joe.acurtis

Outstanding will have a read now

tim.j.birkett avatar
tim.j.birkett

It might be necessary for you to make use of dynamic blocks to configure the Cloudfront Distribution based on environment (test or production).

1
joe.acurtis avatar
joe.acurtis

Thanks again

Emily Melhuish avatar
Emily Melhuish

Hiya peeps! Has anyone used the terraform-aws-elastic-beanstalk-environment and attached an RDS instance before? I have set the relevant aws:rds:dbinstance namespace values and put that in the additional_options but it doesn’t appear to be creating the Database when I look at the environment Configuration in AWS console - is there something else I need to do to get the module to create the link? (Note this is for an RDS instance attached to the environment itself - not a separate RDS instance, this is only for an internal tool, not production)

Prasanth Kommini avatar
Prasanth Kommini
Error: expected length of name to be in the range (1 - 64), got · Issue #52 · cloudposse/terraform-aws-ec2-bastion-server

Found a bug? Maybe our Slack Community can help. Describe the Bug module &quot;bastion&quot; { source = &quot;cloudposse/ec2-bastion-server/aws&quot; version = &quot;0.17.0&quot; ami = &quot;ami-03…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Try setting id_length_limit

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(it’s a feature of null label)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I tried different values for

  id_length_limit
1
Prasanth Kommini avatar
Prasanth Kommini

Thank you, I was able to resolve this issue with that exact fix.

Prasanth Kommini avatar
Prasanth Kommini
Error: expected length of name to be in the range (1 - 64), got 

  on .terraform/modules/bastion/main.tf line 9, in resource "aws_iam_role" "default":
   9:   name  = module.this.id
Prasanth Kommini avatar
Prasanth Kommini
cloudposse/terraform-aws-ec2-bastion-server

Terraform Module to define a generic Bastion host with parameterized user_data - cloudposse/terraform-aws-ec2-bastion-server

jose.amengual avatar
jose.amengual

you tried the module context variable right ?

jose.amengual avatar
jose.amengual

id_length_limit ? in your instantiation of the module?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you prob need to provide at least one of namespace, environment, stage, name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the resource names and IDs are calculated from those 4 vars

Prasanth Kommini avatar
Prasanth Kommini
module "bastion" {
  source                        = "cloudposse/ec2-bastion-server/aws"
  version                       = "0.17.0"

  ami                           = "ami-03130878b60947df3"
  instance_type                 = "t2.micro"
  id_length_limit               = 10
  enabled                       = true
  name                          = "${var.app_name}-bastion"

  vpc_id                        = aws_vpc.main.id
  associate_public_ip_address   = true
  subnets                       = aws_subnet.public.*.id
  allowed_cidr_blocks           = var.allowed_cidr_blocks

  ssh_user                      = "user"
  key_name                      = module.dispatch_key_pair.this_key_pair_key_name
  user_data                     = ["sudo amazon-linux-extras enable postgresql11"]

  tags = {
    name        = "${var.app_name}-bastion"
    description = "Used to connect to db."
    environment = var.env
  }
}
Prasanth Kommini avatar
Prasanth Kommini

Passing the name worked.

Prasanth Kommini avatar
Prasanth Kommini

Thanks a ton folks

1
Prasanth Kommini avatar
Prasanth Kommini

Would you happen to have any idea what could be the issue here?

Prasanth Kommini avatar
Prasanth Kommini

I tried different values for

  id_length_limit
Prasanth Kommini avatar
Prasanth Kommini

I tried 0, 5 and default null

Prasanth Kommini avatar
Prasanth Kommini

but always getting the same eror

Prasanth Kommini avatar
Prasanth Kommini

I seem to have fixed it/

Prasanth Kommini avatar
Prasanth Kommini

Need to pass enabled = true

jose.amengual avatar
jose.amengual

enabled? where?

jose.amengual avatar
jose.amengual

can you past a code snippet?

Joe Niland avatar
Joe Niland

That should be defaulted to true

this1
Prasanth Kommini avatar
Prasanth Kommini

along with a non-default value for id_length_limit

Prasanth Kommini avatar
Prasanth Kommini

and a name

2020-12-24

ravi avatar

Hi All

ravi avatar
ravi
12:44:14 PM

I have written Terraform modules for creating an EKS Cluster and EKS Node groups. Everything is running as expected, but the EC2 instances under the node groups do not have a name.

aaratn avatar

Maybe put a Name tag?

ravi avatar

This is what I have; it’s creating the name for the node groups but not for the EC2 instances.

resource "aws_eks_node_group" "node" {
  cluster_name    = aws_eks_cluster.aws_eks.name
  node_role_arn   = aws_iam_role.eks_nodes.arn
  instance_types  = "${var.eks_instance_type}"
  node_group_name = "${var.generictag}-${var.env}-ec2-eksng"
  tags = "${merge(var.tags,map("Name", "${var.generictag}-${var.env}-ec2-eks-nodes"))}"
  subnet_ids    = [ "${var.private_subnet_ids[0]}","${var.private_subnet_ids[1]}","${var.private_subnet_ids[2]}" ]

  remote_access {
    ec2_ssh_key     = "${aws_key_pair.eks.key_name}"
    source_security_group_ids = "${var.bastion_security_group_id}"
  }

  scaling_config {
    desired_size = "${var.eks_asg_desir}"
    max_size     = "${var.eks_asg_max}"
    min_size     = "${var.eks_asg_min}"
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
ravi avatar
ravi
12:59:00 PM
ravi avatar
ravi
12:59:46 PM
aaratn avatar
aaratn
01:02:14 PM

Okay, looks like tags won’t propagate to instances as per this https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html#tag-resources

1
ravi avatar

ok, is there any other way i can tag my EC2 instances under the node groups?

ravi avatar
ravi
01:21:48 PM
1
roth.andy avatar
roth.andy

You hit the nail on the head. Launch Templates are what should be used here.

1
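
A sketch of that launch template approach (names reuse the variables from the snippet above; note that remote_access in the node group conflicts with using a launch template, so SSH settings would move into the template):

resource "aws_launch_template" "eks_nodes" {
  name_prefix = "${var.generictag}-${var.env}-eks-"

  tag_specifications {
    resource_type = "instance"

    tags = {
      Name = "${var.generictag}-${var.env}-ec2-eks-nodes"
    }
  }
}

resource "aws_eks_node_group" "node" {
  # ... as in the snippet above, minus remote_access, plus:
  launch_template {
    id      = aws_launch_template.eks_nodes.id
    version = aws_launch_template.eks_nodes.latest_version
  }
}
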
ravi avatar
ravi
01:01:49 PM

Any help

I have written the Terraform modules for creating EKS Cluster and EKS Node groups everything is running as expected but the EC2 instances under the node groups does not have a name.

2020-12-28

Steffan avatar
Steffan

hi guys, i hope to get some advice on this. So i am trying to create an extra db on an existing aws_db_instance cluster so that my applications on fargate can connect to it. However i keep getting a connection timed out error during creation. Wondering if anyone has run into this kind of thing. How did you go about it? my config looks like this

# Create a database server
resource "aws_db_instance" "default" {
  engine         = "mysql"
  engine_version = "5.6.17"
  instance_class = "db.t1.micro"
  name           = "initial_db"
  username       = "rootuser"
  password       = "rootpasswd"

  # etc, etc; see aws_db_instance docs for more
}

# Configure the MySQL provider based on the outcome of
# creating the aws_db_instance.
provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
  username = "${aws_db_instance.default.username}"
  password = "${aws_db_instance.default.password}"
}

# Create a second database, in addition to the "initial_db" created
# by the aws_db_instance resource above.
resource "mysql_database" "app" {
  name = "another_db"
}
jose.amengual avatar
jose.amengual

unless that db instance has a public ip, you need to have a tunnel/vpn or something so that the computer running terraform can connect to the instance on 3306

1
Steffan avatar
Steffan

just thinking aloud, do you think it will work if i set it to publicly accessible? i actually wish i didn’t have to go through that to make it work. thanks for the pointer

jose.amengual avatar
jose.amengual

yes it will work, just make sure to open it to only your ip

jose.amengual avatar
jose.amengual

but then it will have to be deployed in the public subnet and your app might not be able to reach it

Steffan avatar
Steffan

my app and db both run in private subnet

Steffan avatar
Steffan

so when we need to connect to the db we use a bastion. i was wondering, isn’t tf supposed to already be in that vpc to create resources? plus i am using tf cloud to deploy (i don’t know if this counts)

jose.amengual avatar
jose.amengual

then it will be easier to set up an ssh tunnel and then set the port of the provider to the ssh-tunnel port
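
A sketch of that tunnel setup (the local port and bastion address are hypothetical):

# With a tunnel such as:
#   ssh -N -L 3307:<rds-endpoint>:3306 ec2-user@<bastion-ip>
# the provider points at the local end of the tunnel:
provider "mysql" {
  endpoint = "127.0.0.1:3307"
  username = aws_db_instance.default.username
  password = aws_db_instance.default.password
}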

Steffan avatar
Steffan

thanks, got it

2020-12-29

Matt Gowie avatar
Matt Gowie

Does anyone know of a way for Terraform Cloud to connect to internal AWS resources without using the business-tier hosted Terraform agents? I have a database root module where I manage multiple RDS DB instances and Amazon MQ vhosts. I’d like to make that an automated Terraform Cloud workspace, but right now I manage accessing those private resources via port forwarding into a bastion host on the applier’s machine, which obviously isn’t possible for the TFC workspace.

jose.amengual avatar
jose.amengual

Interesting, they don’t offer a solution for this?

Matt Gowie avatar
Matt Gowie

They offer TFC Agents, which are self hosted runners… but to use them they require their business tier.

Matt Gowie avatar
Matt Gowie

Super weak.

jose.amengual avatar
jose.amengual

I imagine you could set up a port forward and then allow TF Cloud IPs to reach it, but that sounds pretty insecure

jose.amengual avatar
jose.amengual

Or run a local-exec to call a VPN client and connect to a VPN…

Matt Gowie avatar
Matt Gowie

Yeah and I’d need to make my RDS / RabbitMQ / ElasticSearch clusters externally available… Can’t do that.

jose.amengual avatar
jose.amengual

No one can

jose.amengual avatar
jose.amengual

I’m curious about other people’s experiences with this

jose.amengual avatar
jose.amengual

If you run Atlantis then you could do this

Matt Gowie avatar
Matt Gowie

Yeah — for real. I will post on the Hashi discussion board if nobody gets back by EOD.

Matt Gowie avatar
Matt Gowie

Yeah, trying to avoid more self-hosted tooling honestly. Atlantis is awesome, but my client doesn’t need to add another self-hosted tool.

jose.amengual avatar
jose.amengual

I agree, you already have a tool to do this

Alex Jurkiewicz avatar
Alex Jurkiewicz

subscribing. I looked for a solution earlier this year and couldn’t find one

Matt Gowie avatar
Matt Gowie

Yeah… honestly, I don’t think there is a solution, but I figure I should check thoroughly before giving up on that front.

jose.amengual avatar
jose.amengual

isn’t this a very basic feature for IaC?

jose.amengual avatar
jose.amengual

I mean they do have an option but you need to pay for it

btai avatar

I chatted with a Hashicorp account manager recently and asked about this, and I believe you need to pay

btai avatar

the only workaround I can think of is managing that particular workspace with your CI provider, setting it to local run, and just using Terraform Cloud for remote state storage
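
That setup is just the remote backend with the workspace’s execution mode set to “Local”, so state lives in TFC while plans and applies run in your CI. A minimal sketch (the organization and workspace names are placeholders):

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      name = "database"
    }
  }
}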

Alex Jurkiewicz avatar
Alex Jurkiewicz

I spoke to Hashicorp AM when I was looking too. I was given the impression that Hashicorp simply isn’t interested in non-enterprise customers. If you aren’t looking to spend six figures, their product isn’t really for you.

Alex Jurkiewicz avatar
Alex Jurkiewicz

FWIW, we did try Terraform Cloud for the great module directory and remote state storage only, like you suggest @btai. But it’s not really cost effective so we didn’t commit

btai avatar

I committed a bit early when it was newly released as free and moved everything over because I really liked those two things you mentioned (state storage / module directory). I’m a little bummed w/ my decision, as their pricing model is not one my finance team appreciates (charging per successful terraform apply). For now, we’ve been able to get away with just doing local runs for workspaces that require a null resource script, or Matt’s use case.

kskewes avatar
kskewes

Likewise. We had a convo with the Hashicorp team, and the options are to pay for Business or run Terraform in GitLab CI with agents in our AWS somewhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@btai what did you think of the spacelift presentation?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie unfortunately, you need to pony up for TFC for business.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I haven’t found any workaround.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Spacelift pricing will be a lot more affordable. Pay per user. Pay per concurrency.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Scalr is also coming out with runners.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Scalr has an API similar to TFC and a terraform provider.

Matt Gowie avatar
Matt Gowie

Yeah, everybody’s saying the same thing — I figured as much when asking the question, but it’s pretty weak.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What I love about spacelift is you don’t have to move state backend.

Matt Gowie avatar
Matt Gowie

Definitely see myself recommending those other tools to clients in the future once they’re more mature.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-kubernetes-tfc-cloud-agent

Provision a Terraform Cloud Agent on an existing Kubernetes cluster. - cloudposse/terraform-kubernetes-tfc-cloud-agent

Alex Jurkiewicz avatar
Alex Jurkiewicz

Spacelift looks really cool, but seems to be a very early stage startup and no public pricing. Bit risky surely?

Matt Gowie avatar
Matt Gowie

Do you use that with any clients that have purchased the biz tier? Or was that a forward thinking thing?

Matt Gowie avatar
Matt Gowie

Yeah — that was my thoughts exactly Alex: Spacelift looks awesome, but seems too early to invest into it now.

And I’m not sure I can get behind that they built their own configuration language for their SaaS. Seems esoteric and cumbersome. But maybe I’m being short sighted and that’s a necessary thing?

btai avatar

I share similar concerns. I’m also concerned about tooling fatigue. I already have a CI provider and Terraform Cloud. For every new tool I introduce, my (very small) team needs to learn yet another thing. This concern is separate from Spacelift (or the other 3 products attempting to solve similar problems). That’s not to say that I wasn’t excited about the demos, but I’m approaching yet another tool a little more cautiously because of the mistake I made w/ moving everything to TFC so quickly.

4
btai avatar

Also, naively forgetting that while Hashicorp has provided a ton of tooling for free, they are still first and foremost a business. A part of me initially thought we’d continue to see more features added to the free version of Terraform Cloud.

Matt Gowie avatar
Matt Gowie

Yeah, that’s what I’d like to see. And honestly, I don’t need these things to be free — I’m sure companies would be happy to pay for them (I would), but not at crazy prices or prices I assume are crazy because why are you not showing them to me on your pricing page.

As Alex mentioned above: “given the impression that Hashicorp simply isn’t interested in non-enterprise customers”. I get the same sentiment and that sucks because there is no reason TFC couldn’t be leagues ahead of the up and comers, but it doesn’t seem they’re interested in doing so.

kskewes avatar
kskewes

Moving the agents down a TFC tier would be great. Make it a no-brainer.

this1
Miguel Zablah avatar
Miguel Zablah

Hi all! I’m new here, but I wanted to ask: what do you use to test your Terraform modules? I have created some, and I’m looking for options on how to set up some testing and maybe linting?

Hope all of you have a great Christmas party_parrot

jose.amengual avatar
jose.amengual

Terratest

Miguel Zablah avatar
Miguel Zablah

Nice I will try it out it looks really cool

Miguel Zablah avatar
Miguel Zablah

@jose.amengual Do you know if terratest mocks the creation of the resources?

jose.amengual avatar
jose.amengual

mmmmm I do not know

jose.amengual avatar
jose.amengual

I know it can create the resources for sure

Miguel Zablah avatar
Miguel Zablah

Interesting, I will look into it. I would like to mock at least some resources

Miguel Zablah avatar
Miguel Zablah

Nice thanks!

Hao Wang avatar
Hao Wang

a quick question, can the ALB module support Terraform 0.14?

Hao Wang avatar
Hao Wang

right now the versions file has >= 0.12:

terraform {
  required_version = ">= 0.12.0"

  required_providers {
    aws      = ">= 2.0"
    template = ">= 2.0"
    null     = ">= 2.0"
    local    = ">= 1.3"
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should support TF 0.13 and 0.14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

>= 0.12.0 means 0.12 and up

Hao Wang avatar
Hao Wang

The error message I got

Hao Wang avatar
Hao Wang
Module module.vpc (from
git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.1>) does
not support Terraform version 0.14.3. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
Hao Wang avatar
Hao Wang

it works after changing to source = "cloudposse/vpc/aws"

Hao Wang avatar
Hao Wang

it seems the docs need an update

Hao Wang avatar
Hao Wang

VPC and subnets work, but it failed at ALB

Hao Wang avatar
Hao Wang
Error: Unsupported Terraform Core version

  on .terraform/modules/alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
   2:   required_version = ">= 0.12.0, < 0.14.0"

Module module.alb.module.access_logs.module.s3_bucket.module.this (from
git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>)
does not support Terraform version 0.14.3. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
Hao Wang avatar
Hao Wang
- alb.access_logs.s3_bucket in .terraform/modules/alb.access_logs.s3_bucket
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for alb.access_logs.s3_bucket.this...
- alb.access_logs.s3_bucket.this in .terraform/modules/alb.access_logs.s3_bucket.this
Downloading git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2> for alb.access_logs.this...
Hao Wang avatar
Hao Wang

seems it downloaded an old version of the null-label module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it uses sub-modules which were not converted yet, it will not work with TF 0.14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on converting all modules

Hao Wang avatar
Hao Wang

thanks Andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the latest versions of null-label support TF 0.14

Hao Wang avatar
Hao Wang

yeah, I’m looking for which file uses the old version lol

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s the other modules that use the prev versions of null-label that will cause issues

Hao Wang avatar
Hao Wang

alb.access_logs.this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-alb

Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb

Hao Wang avatar
Hao Wang

ah

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this one for example ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Hao Wang avatar
Hao Wang

got it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on converting all modules, but to speed up the process, PRs are welcome

Hao Wang avatar
Hao Wang

cool, let me put up one if you don’t mind I steal from you lol

Hao Wang avatar
Hao Wang

0.22.1 is the newest version of null-label

Hao Wang avatar
Hao Wang

which supports 0.14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in versions.tf, we use it like this

terraform {
  required_version = ">= 0.12.26"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 1.2"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 2.0"
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

required_version w/o upper limit

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the new syntax for required_providers

Hao Wang avatar
Hao Wang

the current one is

terraform {
  required_version = ">= 0.12.0"

  required_providers {
    aws      = ">= 2.0"
    template = ">= 2.0"
    null     = ">= 2.0"
    local    = ">= 1.3"
  }
}
Hao Wang avatar
Hao Wang

I can add source to all if needed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are using this format for all sources now

module "label" {
  source  = "cloudposse/label/null"
  version = "0.22.1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(not GitHub URls)

Hao Wang avatar
Hao Wang

oh, got it, I used it this way just now

Hao Wang avatar
Hao Wang

without git::

Hao Wang avatar
Hao Wang

ok, will update this as well

Hao Wang avatar
Hao Wang

how can I test if the change works?

Hao Wang avatar
Hao Wang

I found the test folder

Hao Wang avatar
Hao Wang

and it asks me to install bats

Hao Wang avatar
Hao Wang

a bit late here, will sync up tomorrow

Hao Wang avatar
Hao Wang

thanks

Hao Wang avatar
Hao Wang

@Andriy Knysh (Cloud Posse) good morning

Hao Wang avatar
Hao Wang

how can I find the new source of a package when it’s not a git URL?

Hao Wang avatar
Hao Wang

for example, git::<https://github.com/cloudposse/terraform-aws-lb-s3-bucket.git?ref=tags/0.9.0>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
source  = "cloudposse/lb-s3-bucket/aws"
version = "0.9.0"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
The syntax for specifying a registry module is <NAMESPACE>/<NAME>/<PROVIDER>. For example: hashicorp/consul/aws
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

<NAMESPACE> is cloudposse

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

<PROVIDER> is aws

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-aws-lb-s3-bucket becomes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

cloudposse/lb-s3-bucket/aws
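
Putting it together, the same module pinned via the registry instead of a git URL (a sketch; the version is the tag from the git ref above):

module "lb_s3_bucket" {
  # old style:
  # source = "git::https://github.com/cloudposse/terraform-aws-lb-s3-bucket.git?ref=tags/0.9.0"
  source  = "cloudposse/lb-s3-bucket/aws"
  version = "0.9.0"

  # ... inputs ...
}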

Hao Wang avatar
Hao Wang

cool

Hao Wang avatar
Hao Wang

will put up a PR soon

1
Hao Wang avatar
Hao Wang
Add 0.14 support and update new syntax by snowsky · Pull Request #67 · cloudposse/terraform-aws-alb

what Add Terraform 0.14 support Use new syntax in main.tf/versions.tf why This module needs to support Terraform 0.14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Hao Wang reviewed, thanks for the PR. Looks good, just a few comments

Hao Wang avatar
Hao Wang

cool, just updated

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Release v0.25.0 · cloudposse/terraform-aws-alb

Add 0.14 support and update new syntax @snowsky (#67) what Add Terraform 0.14 support Use new syntax in main.tf/versions.tf why This module needs to support Terraform 0.14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Hao Wang

Hao Wang avatar
Hao Wang

thanks Andriy

2020-12-30

Austin Loveless avatar
Austin Loveless

I’m working with the “terraform-aws-eks-node-group” module https://github.com/cloudposse/terraform-aws-eks-node-group, and am having issues adding user data. I followed examples in the repo:

  before_cluster_joining_userdata = var.before_cluster_joining_userdata

When I run a terraform plan I’m getting

An argument named "before_cluster_joining_userdata" is not expected here.

I’m using terraform version 0.13.2.

Has anyone else had this problem?

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

Not sure what version of the module you are using, but the most recent one has

required_version = ">= 0.13.3"
1
Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)
Remove autoscaler permissions from worker role by Nuru · Pull Request #34 · cloudposse/terraform-aws-eks-node-group

Potentially breaking changes Terraform 0.13.3 or later required This release requires Terraform 0.13.3 or later because it is affected by these bugs that are fixed in 0.13.3: hashicorp/terraform#2…

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

it may not be related, but still, better to meet the requirements (0.13.2 is below the 0.13.3 minimum)

Austin Loveless avatar
Austin Loveless

Ah, good call out.

ravi avatar

I created security groups for the EKS cluster and EKS nodes, and also created an ingress rule for the EKS cluster to add an inbound rule for port 443. It works fine, but if I run plan or apply a second time, the ingress security rule that was added previously gets deleted and is not added back. When I run it a third time it creates the ingress rule again, and if I run plan/apply again it deletes the rule, and so on. Any idea why it is behaving like that?

resource "aws_security_group" "eks_cluster_sg" {
    name = "${var.generictag}-${var.env}-scg-ekscls"
    description = "The eks cluster master security group"
    vpc_id = "${var.vpc}"

    ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    description = "Allowed the inbound connection to VPC CIDR"
    security_groups = ["${aws_security_group.bastion.id}"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }

  tags = "${merge(
    var.tags,
    map(
        "Name", "${var.generictag}-${var.env}-scg-ekscls"
    )
  )}"
}

resource "aws_security_group" "eks_node_security_group" {
    name = "${var.generictag}-${var.env}-scg-eks-node"
    description = "The eks cluster master security group"
    vpc_id = "${var.vpc}"

    ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    description = "Allowed the inbound connection from bastion to eks nodes"
    security_groups = ["${aws_security_group.bastion.id}"]
    }

    ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    description = "Allowed the inbound connection from eks controle plane to eks nodes"
    security_groups = ["${aws_security_group.eks_cluster_sg.id}"]
    }

    ingress {
    from_port = 10250
    to_port = 10250
    protocol = "tcp"
    description = "Allowed the inbound from eks controle plane to eks nodes for internal connectivity"
    security_groups = ["${aws_security_group.eks_cluster_sg.id}"]
    }

    ingress {
    from_port = 1025
    to_port = 65535
    protocol = "tcp"
    description = "Allowed the inbound connection port range of eks nodes to itself"
    self        = "true"
    }


    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }

  tags = "${merge(
    var.tags,
    map(
        "Name", "${var.generictag}-${var.env}-scg-eks-node"
    )
  )}"
}

resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
  type = "ingress"
  from_port = 443
  to_port = 443
  protocol = "tcp"
  description = "Allow Inbound rule in eks cluster to eks nodes"
  security_group_id = "${aws_security_group.eks_cluster_sg.id}"
  source_security_group_id = "${aws_security_group.eks_node_security_group.id}"

  depends_on = [aws_security_group.eks_node_security_group,]

  lifecycle {
    create_before_destroy = "true"
    }
}
ravi avatar

output of plan:

# module.sg.aws_security_group.eks_cluster_sg will be updated in-place
  ~ resource "aws_security_group" "eks_cluster_sg" {
        id                     = "sg-09b067394f3f35d99"
      ~ ingress                = [
          - {
              - cidr_blocks      = []
              - description      = ""
              - from_port        = 443
              - ipv6_cidr_blocks = []
              - prefix_list_ids  = []
              - protocol         = "tcp"
              - security_groups  = [
                  - "sg-07e3a22f09247060c",
                ]
              - self             = false
              - to_port          = 443
            },
            # (1 unchanged element hidden)
        ]
        name                   = "a0266d-prd-scg-ekscls"
        tags                   = {
            "Environment" = "prd"
            "Name"        = "a0266d-prd-scg-ekscls"
            "Projectcode" = "a0266d"
            "Terraformed" = "true"
        }
        # (6 unchanged attributes hidden)
    }

  # module.sg.aws_security_group_rule.eks_cluster-to-eks_worker_node will be updated in-place
  ~ resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
      + description              = "Allow Inbound rule in eks cluster to eks nodes"
        id                       = "sgrule-1645696446"
        # (10 unchanged attributes hidden)
    }

Plan: 0 to add, 4 to change, 0 to destroy.
mfridh avatar

You cannot combine rules inside an aws_security_group resource with individual _rule resources.

mfridh avatar

They will be competing so to speak…

ravi avatar

Do I need to specify them in a different module, or what is the workaround for it?

ravi avatar

Ok, I got it, thanks. Here is how it’s done: http://cavaliercoder.com/blog/inline-vs-discrete-security-groups-in-terraform.html

Inline vs. discrete rules for AWS Security Groups in Terraform

There are two ways to configure AWS Security Groups in Terraform. You may define rules inline with an aws_security_group resource, or you may define additional discrete aws_security_group_rule resources.
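
Applied to the config above, that means picking one style; e.g., drop the inline 443 ingress from eks_cluster_sg and keep only the discrete rule (a minimal sketch based on the config above; other rules and tags omitted):

resource "aws_security_group" "eks_cluster_sg" {
  name   = "${var.generictag}-${var.env}-scg-ekscls"
  vpc_id = var.vpc
  # no inline rule blocks; every rule lives in its own
  # aws_security_group_rule resource, so the two never fight over state
}

resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  description              = "Allow Inbound rule in eks cluster to eks nodes"
  security_group_id        = aws_security_group.eks_cluster_sg.id
  source_security_group_id = aws_security_group.eks_node_security_group.id
}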

Hao Wang avatar
Hao Wang

hey, I ran into an issue and it may be an easy fix. When I use both terraform-aws-ecs-alb-service-task and rds together, they both create a security group with the same name, so I got an error message like

Error creating Security Group: InvalidGroup.Duplicate: The security group 'eg-test-test' already exists for VPC 'vpc-0a4474b6d776a7b74'
Hao Wang avatar
Hao Wang

I did some research and found that both modules call cloudposse/label/null and create the SG with the same ID

Hao Wang avatar
Hao Wang

how could I use a different name for each module?

Hao Wang avatar
Hao Wang

let me try passing attributes to rds

jose.amengual avatar
jose.amengual

you need to add something to the name, or add an attribute or something to make it different.
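
A minimal sketch of that, assuming both modules take the usual null-label inputs: adding an attribute to one of them changes the generated ID, so the two security group names no longer collide.

module "rds_instance" {
  source     = "cloudposse/rds/aws"
  namespace  = "eg"
  stage      = "test"
  name       = "test"
  attributes = ["rds"] # generated name becomes eg-test-test-rds instead of eg-test-test
  # ... other inputs ...
}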

Hao Wang avatar
Hao Wang

got it, thanks

Hao Wang avatar
Hao Wang

for the vpc module, how can I use custom security group rules?

Hao Wang avatar
Hao Wang

enable_default_security_group_with_custom_rules is a flag, and it’s enabled by default

jose.amengual avatar
jose.amengual

you want a security group for the whole VPC?

jose.amengual avatar
jose.amengual

that is usually not recommended

Hao Wang avatar
Hao Wang

I’d like to have a security group for an EC2 instance

jose.amengual avatar
jose.amengual

then you create one and attach it to the instance

Hao Wang avatar
Hao Wang

ok, thanks

jose.amengual avatar
jose.amengual
cloudposse/terraform-aws-ec2-instance

Terraform module for provisioning a general purpose EC2 host - cloudposse/terraform-aws-ec2-instance

jose.amengual avatar
jose.amengual

so you create the security group and then pass it to the instance when you are creating it

jose.amengual avatar
jose.amengual

and that SG will be for that instance only
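
A sketch of that wiring (the security_groups input name is an assumption here; double-check it against the module’s README for the version you use):

resource "aws_security_group" "instance" {
  name   = "my-instance-sg" # illustrative
  vpc_id = var.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # illustrative CIDR
  }
}

module "instance" {
  source = "cloudposse/ec2-instance/aws"

  # ... name, ami, subnet, vpc_id, etc. ...
  security_groups = [aws_security_group.instance.id] # assumed input name
}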

Hao Wang avatar
Hao Wang

thanks for the details, it works

2020-12-31

Matt Gowie avatar
Matt Gowie

This is pretty weak — Terraform Cloud does not support refresh: https://github.com/hashicorp/terraform/issues/23247

Terraform refresh is not working with Terraform Cloud remote backend · Issue #23247 · hashicorp/terraform

Terraform Version Terraform v0.12.10 provider.aws v2.33.0 Hey guys, I'm using Terraform Cloud as a remote backend. For example, I've changed output and nothing else and need it to be update…

Matt Gowie avatar
Matt Gowie

No response from Hashi or anything in that thread. And it doesn’t even seem to make that much sense… Why can’t terraform refresh work the same way against their remote backend?

1
Ryan Ryke avatar
Ryan Ryke

some basic updates to the cloudtrail s3 bucket module… also Happy NYE and NYD to everyone in here

Ryan Ryke avatar
Ryan Ryke
update-null-label by rryke · Pull Request #31 · cloudposse/terraform-aws-cloudtrail-s3-bucket

Updating this to the latest null-label module. Otherwise, it doesn't work with TF 14 what getting this error when trying to init with tf 14 Error: Unsupported Terraform Core version on .te…

jose.amengual avatar
jose.amengual

@Ryan Ryke can you follow this guide?

Ryan Ryke avatar
Ryan Ryke

feel free to add to my pr, just wanted to let you know that it wasn’t currently working

jose.amengual avatar
jose.amengual

We do not merge PRs without passing tests or without following the new standards. We are very grateful for community contributions, but there are guidelines for contributing. If you want the module to be updated in your PR, you can follow the steps; otherwise it will have to wait until we get to it. Sadly, I don’t have time to do it right now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform 0.14 upgrade by maximmi · Pull Request #32 · cloudposse/terraform-aws-cloudtrail-s3-bucket

what Upgrade to support Terraform 0.14 and bring up to current Cloud Posse standard why Support Terraform 0.14

Ryan Ryke avatar
Ryan Ryke

thanks… i saw it and closed mine

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Does anyone have experience with blue/green deployments on EC2 using launch templates and ASGs? Basically, I’m trying to update the launch template and launch the new EC2 instance before spinning down the old one. It looks like I need to duplicate the templates and ASGs and use a script to fail over, like these guys did: https://github.com/skyscrapers/terraform-bluegreen/blob/master/bluegreen.py

I was just hoping not to need to do that…

skyscrapers/terraform-bluegreen

Terraform module to setup blue / green deployments - skyscrapers/terraform-bluegreen

jose.amengual avatar
jose.amengual

You can use CodeDeploy to do that, or App Mesh

jose.amengual avatar
jose.amengual

Otherwise you need to duplicate the ASG, TG, the task def, etc.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Yeah, which I was hoping to avoid. CodeDeploy, good thinking… I was thinking about it only for ECS, but it can be used with EC2 too.

jose.amengual avatar
jose.amengual

CodeDeploy is way older than ECS; we used it quite a lot in the past

Zach avatar

CodeDeploy does not work well with ASGs, forewarning

Zach avatar

it doesn’t duplicate the entire config - they claim it does a blue/green but it’s sort of half-arsed

Zach avatar

@Yoni Leitersdorf (Indeni Cloudrail) we do this in a blue-green manner, although you can’t pause/stop halfway, by tying the name of the ASG to the name of the AMI or the launch template version etc, and using ‘create_before_destroy’

Zach avatar

terraform will then spin up a new ASG using all the rest of the infra ‘as is’ and only destroy the old one once the new ASG is stable

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

create_before_destroy on the ASG, not the template, right?

Zach avatar

yup

Zach avatar

Oh, looks like we used the creation date of the AMI actually

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Where?

Zach avatar
locals {
  namespace_timestamp = "${local.namespace}-${formatdate("YYYYMMDDhhss", data.aws_ami.this.creation_date)}"
}

resource "aws_autoscaling_group" "this" {
  name = local.namespace_timestamp
  # ... other ASG arguments (launch template, subnets, sizes, etc.) ...
  lifecycle {
    create_before_destroy = true
  }
}
1
Zach avatar

We also add a dependency on the launch_template within the ASG after we found some weird dependency looping

Zach avatar
depends_on = [aws_launch_template.this]
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Cool thanks!

loren avatar

Cool trick with the ami creation date!

jose.amengual avatar
jose.amengual

CodeDeploy still has that problem with ASGs in EC2? I thought they fixed it

Zach avatar

Nope, I even reached out to support some time ago

Zach avatar

it makes an ASG, but doesn’t copy any of the autoscaling policies

Zach avatar

and I think doesn’t register it with ALBs either

Zach avatar

… frankly I don’t understand the value of the feature

jose.amengual avatar
jose.amengual

so stupid

jose.amengual avatar
jose.amengual

in ECS it did work

jose.amengual avatar
jose.amengual

with fargate

Zach avatar

and the documentation is incredibly vague about it

jose.amengual avatar
jose.amengual

it is

Zach avatar

Here’s what they told me, verbatim
AWS CodeDeploy doesn’t make a copy of the cloudwatch alarms at the moment. Hence does the following:

~ In the first approach, AWS CodeDeploy makes a copy of an Auto Scaling group. It, in turn, provisions new Amazon EC2 instances, deploys the application to these new instances, and then redirects traffic to the newly deployed code.
~ In the second approach, you use instance tags or an Auto Scaling group to select the instances that will be used for the green environment. AWS CodeDeploy then deploys the code to the tagged instances.

Furthermore, I have created a feature request on your behalf.

Zach avatar

I can’t even really think of a situation where this feature is helpful, even for something like an ASG that just reads off a queue - w/o the alarms and scaling policies it will turn on with ‘min instances’ and then never budge from that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Using Terraform for zero downtime updates of an Auto Scaling group in AWS

A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…

Zach avatar

Yup that outlines the approach I mentioned

Zach avatar


We can force the ASG resource to be inextricably tied to the launch configuration. To do this, we reference the launch configuration name in the name of the Auto Scaling group.

Zach avatar

I’m using launch templates though, which are versioned, so we used the ami creation-date as a way of forcing the b/g replacement

Austin Loveless avatar
Austin Loveless

Anyone have experience adding before_cluster_joining_userdata to an eks_node_group https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.9.0?

I want to add user data to my worker nodes without any downtime. Is this possible to do with this?

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Austin Loveless avatar
Austin Loveless

I deployed this in my dev environment but all my pods went down. I’d like to not have that happen in my prod environment.

I was thinking about creating a new instance of the worker node module: spinning up the new worker node pool as part of one deployment and then spinning down the old worker node pool as part of a separate changeset.

Does that make sense?

mfridh avatar

Yes, it makes sense. That way you can cordon and drain the old nodes at your leisure.
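
A rough sketch of that shape (module inputs abbreviated; the attribute just keeps the two pools’ names distinct):

# Existing pool; remove this module instance in a later change
# once the new pool is healthy and the old nodes are drained
module "workers_blue" {
  source     = "cloudposse/eks-node-group/aws"
  attributes = ["blue"]
  # cluster_name, subnet_ids, instance types, sizes, ...
}

# New pool with the updated userdata, added first in its own apply
module "workers_green" {
  source     = "cloudposse/eks-node-group/aws"
  attributes = ["green"]
  # same inputs, plus before_cluster_joining_userdata
}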

Jeff Everett avatar
Jeff Everett

I think I may have hit a bug on the ACM certificate module. Any time I try and specify subject alternative names (even using the example code from the readme), I’m getting these errors. Environment and other details in thread.

Error: Invalid index

  on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
  31:   name            = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
    |----------------
    | count.index is 1
    | local.domain_validation_options_list is empty list of dynamic

The given key does not identify an element in this collection value.

https://github.com/cloudposse/terraform-aws-acm-request-certificate

cloudposse/terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate

Jeff Everett avatar
Jeff Everett

env:

Terraform v0.12.9
+ provider.aws v2.70.0
+ provider.local v1.4.0
+ provider.null v2.1.2

Your version of Terraform is out of date! The latest version
is 0.14.3. You can update by downloading from www.terraform.io/downloads.html
Jeff Everett avatar
Jeff Everett

as well as the definition I’m executing:

module "acm_request_certificate" {
  source                            = "git::https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=tags/0.10.0"
  domain_name                       = "mytableauvnext.tableaucorp.com"
  process_domain_validation_options = true
  ttl                               = "300"
  subject_alternative_names         = ["mytableauvnext.ea.tableaucorp.com"]
  zone_name                         = "mytableauvnext.ea.tableaucorp.com"
}
pjaudiomv avatar
pjaudiomv

Can you bump the aws provider to 3.x

pjaudiomv avatar
pjaudiomv

Or back the module version down

Jeff Everett avatar
Jeff Everett

I tried module versions as far back as 0.3.0, and they all seemed to have the same issue.

Jeff Everett avatar
Jeff Everett

I can also try the newer provider version.

jose.amengual avatar
jose.amengual

I have been using this module without any issues

jose.amengual avatar
jose.amengual

maybe it’s related to the route53 zone lookup?

pjaudiomv avatar
pjaudiomv

I’m guessing it’s the aws provider version

pjaudiomv avatar
pjaudiomv

It (domain_validation_options) was changed from a list to a set in 3.x
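
That change is what breaks count.index lookups like the one in the error above. Under the 3.x provider, the documented pattern iterates the set with for_each instead of indexing a list; a sketch of that shape (not the module’s actual code, and the zone lookup is hypothetical):

resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = var.zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 300
}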

jose.amengual avatar
jose.amengual

I’m using the 3.x provider

pjaudiomv avatar
pjaudiomv

actually, after looking over the module, it does not support 3.x, only 2.x

pjaudiomv avatar
pjaudiomv

looks like the ball is in the PR creator’s court

jose.amengual avatar
jose.amengual

You can create a PR if you want to upgrade it, and we can close the other one

pjaudiomv avatar
pjaudiomv
updates for tf 0.14 and aws provder 3.x by pjaudiomv · Pull Request #35 · cloudposse/terraform-aws-acm-request-certificate

what Upgrade to support Terraform 0.14 and bring up to current Cloud Posse standard why Support Terraform 0.14 Support AWS Provider >= 3.x references https://registry.terraform.io/providers

1
1